

Each of the organizations listed above organizes annual conferences. There are annual SIGMETRICS and CMG conferences. IFIP Working Group 7.3 sponsors conferences called “PERFORMANCE,” which are scheduled every 18 months and are held alternately in Europe and in North America. Both SIGMETRICS and PERFORMANCE conferences carry high-quality papers describing new research in performance evaluation techniques. Proceedings of SIGMETRICS conferences generally appear as special issues of Performance Evaluation Review, the quarterly journal published by ACM SIGMETRICS. Applied Computer Research, a private business organization (address: P.O. Box 82266, Phoenix, AZ 85071), organizes annual conferences on EDP Performance and Capacity Management. ACM SIGSIM and the IEEE Computer Society Technical Committee on Simulation jointly sponsor conferences on simulation. The University of Pittsburgh’s School of Engineering and IEEE sponsor the Annual Pittsburgh Conference on Modeling and Simulation.

There are a number of journals devoted exclusively to computer systems performance evaluation. The papers in these journals are related either to performance techniques in general or to their applications to computer systems. Of these, Performance Evaluation Review, CMG Transactions, Simulation, Simulation Digest, SIAM Review, and Operations Research have already been mentioned. In addition, two journals published by private organizations, Performance Evaluation and EDP Performance Review, should also be mentioned. Performance Evaluation is published twice a year by Elsevier Science Publishers B.V. (North-Holland), P.O. Box 1991, 1000 BZ Amsterdam, The Netherlands. In the United States and Canada, it is distributed by Elsevier Science Publishing Company, 52 Vanderbilt Avenue, New York, NY 10017. EDP Performance Review is published monthly by Applied Computer Research. The annual reference issue carries a survey of commercial performance-related hardware and software tools, including monitoring, simulation, accounting, and program analysis tools, among others.

The vast majority of papers on performance appear in other computer science or statistics journals. For example, more papers dealing with distributed systems performance appear in journals on distributed systems than in the performance journals. In particular, many of the seminal papers on analytical modeling and simulation techniques initially appeared in Communications of the ACM. Other journals that publish papers on computer systems performance analysis are IEEE Transactions on Software Engineering, IEEE Transactions on Computers, and ACM Transactions on Computer Systems.

Students interested in taking additional courses on performance evaluation techniques may consider courses on statistical inference, operations research, stochastic processes, decision theory, time series analysis, design of experiments, system simulation, queueing theory, and other related subjects.

1.4 PERFORMANCE PROJECTS

I hear and I forget. I see and I remember. I do and I understand.
—Chinese Proverb

The best way to learn a subject is to apply the concepts to a real system. This is especially true of computer systems performance evaluation because, even though the techniques appear simple on the surface, applying them to real systems is a different experience, since real systems do not behave in a simple manner.

It is recommended that courses on performance evaluation include at least one project where student teams are required to select a computer subsystem, for example, a network mail program, an operating system, a language compiler, a text editor, a processor, or a database. They should also be required to perform some measurements, analyze the collected data, simulate or analytically model the subsystem, predict its performance, and validate the model. Student teams are preferable to individual student projects since most real-life projects require coordination and communication with several other people.

Examples of some of the projects completed by students as part of a course on computer system performance analysis techniques based on the contents of this book are as follows:

1.  Measure and compare the performance of window systems of two AI systems.
2.  Simulate and compare the performance of two processor interconnection networks.
3.  Measure and analyze the performance of two microprocessors.
4.  Characterize the workload of a campus timesharing system.
5.  Compute the effects of various factors and their interactions on the performance of two text-formatting programs.
6.  Measure and analyze the performance of a distributed information system.
7.  Simulate the communications controllers for an intelligent terminal system.
8.  Measure and analyze the performance of a computer-aided design tool.
9.  Measure and identify the factors that affect the performance of an experimental garbage collection algorithm.
10.  Measure and compare the performance of remote procedure calls and remote pipe calls.
11.  Analyze the effect of factors that impact the performance of two Reduced Instruction Set Computer (RISC) processor architectures.
12.  Analyze the performance of a parallel compiler running on a multiprocessor system.
13.  Develop a software monitor to observe the performance of a large multiprocessor system.
14.  Analyze the performance of a distributed game program running on a network of AI systems.
15.  Compare the performance of several robot control algorithms.

In each case, the goal was to provide an insight (or information) not obvious before the project. Most projects were real problems that the students were already required to solve as part of other courses, thesis work, or a job. As the course progressed and students learned new techniques, they attempted to apply the techniques to their particular problem. At the end of the course, the students presented the results to the class and discussed their findings and frustrations. The latter was especially enlightening since many techniques that worked in theory did not produce meaningful insights in practice.

At the end of many chapters in this book, there are exercises asking the reader to choose a computer system and apply the techniques of the chapter to that system. It is recommended that the students attempt to apply the techniques to the system of their project.

EXERCISE

1.1  The measured performance of two database systems on two different workloads is shown in Table 1.6. Compare the performance of the two systems and show that
a.  System A is better
b.  System B is better
TABLE 1.6 Throughput in Queries per Second

System    Workload 1    Workload 2
A             30            10
B             10            30
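
As a hint at how both claims in the exercise can be argued, one possible approach (an illustrative assumption here, not the only valid one) is to average the throughputs after normalizing them to different base systems; whichever system is chosen as the base tends to come out looking worse. The short Python sketch below uses only the numbers from Table 1.6.

# A minimal sketch, assuming the data in Table 1.6 and simple arithmetic-mean
# comparisons; the normalization idea is an illustration, not a prescribed solution.

throughput = {
    "A": {"Workload 1": 30, "Workload 2": 10},
    "B": {"Workload 1": 10, "Workload 2": 30},
}
workloads = ("Workload 1", "Workload 2")

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Raw averages: both systems process 20 queries per second, a tie.
for system, results in throughput.items():
    print(f"System {system}: average throughput = {mean(results.values()):.1f} queries/s")

# Averaging ratios normalized to a base system favors the system that is not
# the base: with A as the base, B looks better; with B as the base, A looks better.
for base in ("A", "B"):
    for system in ("A", "B"):
        ratios = [throughput[system][w] / throughput[base][w] for w in workloads]
        print(f"Base {base}: System {system} has average ratio {mean(ratios):.2f}")

Running the sketch shows that the conclusion depends on how the numbers are summarized, which is exactly the point of the exercise.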


