PART I
AN OVERVIEW OF PERFORMANCE EVALUATION

Computer system users, administrators, and designers are all interested in performance evaluation, since their goal is to obtain or provide the highest performance at the lowest cost. This goal has driven the continuing evolution of higher performance and lower cost systems, leading to today’s proliferation of workstations and personal computers, many of which outperform earlier supercomputers. As the field of computer design matures and the computer industry becomes more competitive, it is more important than ever to ensure that the alternative selected provides the best cost-performance trade-off.

Performance evaluation is required at every stage in the life cycle of a computer system: design, manufacturing, sales/purchase, use, and upgrade. An evaluation is needed when a computer system designer wants to compare a number of alternative designs and find the best one, and when a system administrator wants to decide which of several systems is best for a given set of applications. Even if there are no alternatives, performance evaluation of the current system helps in determining how well it is performing and whether any improvements need to be made. Unfortunately, the applications of computers are so numerous that it is not possible to have a standard measure of performance, a standard measurement environment (application), or a standard technique for all cases. The first step in performance evaluation is therefore to select the right measures of performance, the right measurement environments, and the right techniques. This part will help in making these selections.

This part provides a general introduction to the field of performance evaluation. It consists of three chapters. Chapter 1 provides an overview of the book and discusses why performance evaluation is an art. Mistakes commonly observed in performance evaluation projects and a proper methodology to avoid them are presented in Chapter 2. Selection of performance evaluation techniques and evaluation criteria are discussed in Chapter 3.

CHAPTER 1
INTRODUCTION

I keep six honest serving men. They taught me all I knew. Their names are What and Why and When and How and Where and Who.

—Rudyard Kipling

Performance is a key criterion in the design, procurement, and use of computer systems. As such, the goal of computer systems engineers, scientists, analysts, and users is to get the highest performance for a given cost. To achieve that goal, computer systems professionals need, at least, a basic knowledge of performance evaluation terminology and techniques. Anyone associated with computer systems should be able to state the performance requirements of their systems and should be able to compare different alternatives to find the one that best meets their requirements.

1.1 OUTLINE OF TOPICS

The purpose of this book is to explain the performance evaluation terminology and techniques to computer systems designers and users. The goal is to emphasize simple techniques that help solve a majority of day-to-day problems. Examples of such problems are specifying performance requirements, evaluating design alternatives, comparing two or more systems, determining the optimal value of a parameter (system tuning), finding the performance bottleneck (bottleneck identification), characterizing the load on the system (workload characterization), determining the number and sizes of components (capacity planning), and predicting the performance at future loads (forecasting). Here, a system could be any collection of hardware, software, and firmware components. It could be a hardware component, for example, a central processing unit (CPU); a software system, such as a database system; or a network of several computers.

The following are examples of the types of problems that you should be able to solve after reading this book.

1.  Select appropriate evaluation techniques, performance metrics, and workloads for a system. Later, in Chapters 2 and 3, the terms “evaluation techniques,” “metrics,” and “workload” are explained in detail. Briefly, the techniques that may be used for performance evaluation are measurement, simulation, and analytical modeling. The term metrics refers to the criteria used to evaluate the performance of the system. For example, response time, the time to service a request, could be used as a metric to compare two timesharing systems. Similarly, two transaction processing systems may be compared on the basis of their throughputs, which may be specified in transactions per second (TPS). The requests made by the users of the system are called workloads. For example, the CPU workload would consist of the instructions it is asked to execute. The workload of a database system would consist of queries and other requests it executes for users. A short sketch of how these two metrics might be computed appears after Example 1.1.
The issues related to the selection of metrics and evaluation techniques are discussed in Chapter 3. For example, after reading Part I you should be able to answer the following question.
Example 1.1 What performance metrics should be used to compare the performance of the following systems?
(a)  Two disk drives
(b)  Two transaction processing systems
(c)  Two packet retransmission algorithms
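
To make the metrics in item 1 concrete, the following is a minimal sketch, in Python with made-up timestamps, of how response time and throughput might be computed from a log of request submission and completion times. The log values and names are illustrative, not taken from any particular system.

    # Hypothetical transaction log: (submit_time, complete_time) pairs in seconds.
    log = [(0.00, 0.12), (0.05, 0.31), (0.40, 0.55), (0.41, 0.90)]

    # Response time: the time to service each request, averaged over all requests.
    response_times = [done - submitted for submitted, done in log]
    mean_response_time = sum(response_times) / len(response_times)

    # Throughput in transactions per second (TPS): completed requests
    # divided by the length of the measurement interval.
    interval = max(done for _, done in log) - min(sub for sub, _ in log)
    throughput_tps = len(log) / interval

    print(f"Mean response time: {mean_response_time:.3f} s")
    print(f"Throughput: {throughput_tps:.2f} TPS")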
2.  Conduct performance measurements correctly. To measure the performance of a computer system, you need at least two tools: a tool to load the system (load generator) and a tool to measure the results (monitor). There are several types of load generators and monitors. For example, to emulate several users of a timesharing system, one would use a load generator called a remote terminal emulator (RTE). The issues related to the design and selection of such tools are discussed in Part II, and a small load-generation sketch appears after Example 1.2. The following is an example of a problem that you should be able to answer after that part.
Example 1.2 Which type of monitor (software or hardware) would be more suitable for measuring each of the following quantities?
(a)  Number of instructions executed by a processor
(b)  Degree of multiprogramming on a timesharing system
(c)  Response time of packets on a network
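
The following is a minimal sketch, assuming Python’s standard threading module, of the two tools named in item 2: a load generator that emulates several concurrent users (in the spirit of an RTE) and a monitor that records each response time. The serviced request here is a stand-in function with random delays; all names are illustrative.

    import random
    import threading
    import time

    results = []
    results_lock = threading.Lock()

    def service_request():
        # Stand-in for the system under test: a request that takes 10-50 ms.
        time.sleep(random.uniform(0.01, 0.05))

    def emulated_user(n_requests):
        # Each emulated user issues requests and times them.
        for _ in range(n_requests):
            start = time.perf_counter()
            service_request()
            elapsed = time.perf_counter() - start
            with results_lock:
                results.append(elapsed)  # the "monitor": record each result

    # Emulate five users, each issuing ten requests.
    users = [threading.Thread(target=emulated_user, args=(10,)) for _ in range(5)]
    for u in users:
        u.start()
    for u in users:
        u.join()

    print(f"{len(results)} requests, mean response time "
          f"{sum(results) / len(results):.4f} s")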
3.  Use proper statistical techniques to compare several alternatives. Most performance evaluation problems consist of finding the best among a number of alternatives. If a measurement or a simulation is repeated several times, the results will generally differ slightly from run to run. Simply comparing the averages of repeated trials does not lead to correct conclusions, particularly if the variability of the results is high. The statistical techniques used to compare several alternatives are discussed in Part III, and a small sketch of one such technique appears after Example 1.3. The following is an example of the type of question that you should be able to answer after reading that part.
Example 1.3 The number of packets lost on two links was measured for four file sizes as shown in Table 1.1.
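
As a taste of the statistical machinery developed in Part III, the following is a minimal sketch, with made-up sample data, of a confidence interval for the difference of two alternatives’ mean results (unpaired observations). If the interval includes zero, the measured difference may be noise rather than a real improvement. The data and t-value below are illustrative.

    import math
    import statistics

    # Hypothetical measurements, e.g., packets lost per trial on two links.
    system_a = [5.4, 5.9, 6.1, 5.2, 5.7]
    system_b = [5.0, 6.8, 5.1, 6.5, 5.6]

    n_a, n_b = len(system_a), len(system_b)
    mean_diff = statistics.mean(system_a) - statistics.mean(system_b)

    # Standard error of the difference of means (unpaired observations).
    se = math.sqrt(statistics.variance(system_a) / n_a +
                   statistics.variance(system_b) / n_b)

    t = 2.306  # t-value for 8 degrees of freedom at 95% confidence
    low, high = mean_diff - t * se, mean_diff + t * se
    print(f"Difference of means: {mean_diff:.2f}, "
          f"95% confidence interval: ({low:.2f}, {high:.2f})")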

