

PART II
MEASUREMENT TECHNIQUES AND TOOLS

Computer system performance measurements involve monitoring the system while it is being subjected to a particular workload. In order to perform meaningful measurements, the workload should be carefully selected. To achieve that goal, the performance analyst needs to understand the following before performing the measurements:

1.  What are the different types of workloads?
2.  Which workloads are commonly used by other analysts?
3.  How are the appropriate workload types selected?
4.  How is the measured workload data summarized?
5.  How is the system performance monitored?
6.  How can the desired workload be placed on the system in a controlled manner?
7.  How are the results of the evaluation presented?

The answers to these questions and related issues are discussed in this part.

CHAPTER 4
TYPES OF WORKLOADS

benchmark v. trans. To subject (a system) to a series of tests in order to obtain prearranged results not available on competitive systems.

— S. Kelly-Bootle
The Devil’s DP Dictionary

This chapter describes workloads that have traditionally been used to compare computer systems. This description will familiarize you with workload-related names and terms that appear in performance reports. Most of these terms were developed for comparing processors and timesharing systems. In Chapter 5, the discussion is generalized to other computing systems such as database systems, networks, and so forth.

The term test workload denotes any workload used in performance studies. A test workload can be real or synthetic. A real workload is one observed on a system being used for normal operations. It cannot be repeated and, therefore, is generally not suitable for use as a test workload. Instead, a synthetic workload, whose characteristics are similar to those of the real workload and which can be applied repeatedly in a controlled manner, is developed and used for studies. The main reason for using a synthetic workload is that it is a representation or model of the real workload. There are other reasons as well: a synthetic workload requires no real-world data files, which may be large and contain sensitive data; it can be easily modified without affecting operation; it can be easily ported to different systems due to its small size; and it may have built-in measurement capabilities.

The following types of test workloads have been used to compare computer systems:

1.  Addition instruction
2.  Instruction mixes
3.  Kernels
4.  Synthetic programs
5.  Application benchmarks

Each of these workloads is explained in this chapter, and the circumstances under which they may be appropriate are discussed.

4.1 ADDITION INSTRUCTION

Historically, when computer systems were first introduced, processors were the most expensive and most used components of the system, and the performance of the computer system was considered synonymous with that of the processor. Early computers had very few instructions, of which the most frequently used was the addition instruction. Thus, as a first approximation, the computer with the faster addition instruction was considered the better performer. The addition instruction was the sole workload used, and the addition time was the sole performance metric.

4.2 INSTRUCTION MIXES

As the number and complexity of instructions supported by the processors grew, the addition instruction was no longer sufficient, and a more detailed workload description was required. This need led several people to measure the relative frequencies of various instructions on real systems and to use these as weighting factors to get an average instruction time.

An instruction mix is a specification of various instructions coupled with their usage frequency. Given different instruction timings, it is possible to compute an average instruction time for a given mix and use the average to compare different processors. Several instruction mixes are used in the computer industry; the most commonly quoted one is the Gibson mix.
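Formally, writing p_i for the relative frequency of instruction class i in the mix and t_i for the execution time of that class on a given processor, the average instruction time is the weighted mean

    Average instruction time = p_1 t_1 + p_2 t_2 + ... + p_n t_n

where the frequencies p_i sum to 1. The processor with the smaller weighted average is judged the faster for that mix. (The symbols p_i and t_i are introduced here for exposition; they are not part of any particular published mix.)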

The Gibson mix was developed by Jack C. Gibson in 1959 for use with IBM 704 systems. At that time, processor speeds were measured by memory cycle time, addition time, or an average of addition and multiplication times. The Gibson mix extended the averaging to 13 different classes of instructions, shown in Table 4.1. The average speed of a processor can be computed from the weighted average of the execution times of instructions in the 13 classes listed in the table. The weights are based on the relative frequency of operation codes as measured on a few IBM 704 and IBM 650 systems.
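To make the computation concrete, the following sketch in Python compares two hypothetical processors using a small mix. The class names echo the style of the Gibson mix's categories, but the weights and per-class execution times are illustrative values chosen for this example, not the actual figures from Table 4.1.

    # A minimal sketch of comparing two processors with an instruction mix.
    # Weights and times below are illustrative, not the Table 4.1 values.

    # Relative frequency of each instruction class (weights sum to 1).
    mix = {
        "load_store": 0.31,
        "branch": 0.17,
        "fixed_add_sub": 0.06,
        "float_add_sub": 0.07,
        "float_multiply": 0.04,
        "other": 0.35,
    }

    # Hypothetical execution times in microseconds for two processors.
    times_a = {"load_store": 1.0, "branch": 1.2, "fixed_add_sub": 0.8,
               "float_add_sub": 2.5, "float_multiply": 4.0, "other": 1.5}
    times_b = {"load_store": 0.9, "branch": 1.5, "fixed_add_sub": 0.7,
               "float_add_sub": 2.0, "float_multiply": 3.0, "other": 1.6}

    def average_instruction_time(mix, times):
        """Weighted mean of per-class times, weighted by mix frequencies."""
        return sum(p * times[cls] for cls, p in mix.items())

    for name, times in [("Processor A", times_a), ("Processor B", times_b)]:
        print(f"{name}: {average_instruction_time(mix, times):.3f} microseconds")

For the values above, Processor B comes out slightly faster (about 1.40 versus 1.42 microseconds average instruction time) even though it is slower on branches; the mix's weights determine how much each instruction class contributes to the comparison.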

