Computer Architecture: A Quantitative Approach, Second Edition
Author: John L. Hennessy, David Goldberg, David A. Patterson
List Price: $85.95
Publisher: Morgan Kaufmann (01 August, 1995)
Sales Rank: 13,927
Average Customer Rating: 4 out of 5
Customer Reviews
Rating: 4 out of 5
A little outdated...but still a great book
Anyone who is interested in computer architecture or computer performance benchmarking should have a copy of this book. It is well organized, packed full of information, and has many challenging exercises at the end of each chapter that reinforce and extend the concepts outlined. Also, the inside jacket gives a list of useful formulas for quick reference. For those interested in vector processors, the authors have included an overview of these in the appendix. Due to new hardware and updated versions of operating systems, the book has of course become somewhat out-of-date since it first appeared. It takes a long time to get through the book, but the time spent is well worth it. My interest in the book was mostly in the performance aspects of computer architecture, and in how to relate the material in the book to the SPEC benchmarking studies. For this reason, and for lack of space, my comments will briefly summarize the parts of the book that I found exceptionally well written in this area.
The discussion of measuring and reporting computer performance begins early in the book, wherein the authors attempt to quantify what it means for one computer to be faster than another. They take the position that the best measure of performance is the execution time of real programs. They of course mention benchmarks as a way of doing this, and briefly discuss the SPEC92 benchmark suites; the SPEC standards have changed considerably since this book was written, however. After a discussion of methods to calculate performance, and their drawbacks, the authors discuss Amdahl's Law and how to use it correctly. This is followed by a discussion of the CPU performance equation, with several interesting examples given. There is a "fallacies and pitfalls" section at the end of chapter one, as there is at the end of every chapter, that discusses the problems with approaches taken in benchmarking performance. These arguments are especially important if one is to step away from marketing claims when developing commercial software packages, especially for scientific applications. Customer satisfaction in using these packages is dictated by actual performance, not what might be accomplished in an isolated test environment. The authors' honest approach to these issues is extremely helpful to those involved in developing these kinds of programs and applications.
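Since my comments lean on these two results, it may help to have them at hand. These are the standard statements of Amdahl's Law and the CPU performance equation as I would summarize them, not quotations from the book:

```latex
% Amdahl's Law: overall speedup when a fraction f of execution time
% is enhanced by a factor s; the remaining (1 - f) is unaffected.
\[
\mathrm{Speedup}_{\mathrm{overall}} = \frac{1}{(1 - f) + \dfrac{f}{s}}
\]

% CPU performance equation: execution time as the product of
% instruction count, average clocks per instruction, and cycle time.
\[
\mathrm{CPU\ time} = \mathrm{IC} \times \mathrm{CPI} \times T_{\mathrm{cycle}}
\]
```

Note that as s grows without bound, the overall speedup is still capped at 1/(1 - f), which is why the book keeps insisting on making the common case fast.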
One of the more common fallacies that they discuss in this regard is treating the MIPS value as indicative of performance among computers. They argue that this is not the case, since MIPS is dependent on the instruction set and on the program being run, and it can even vary inversely with performance. For the latter, they give the well-known example of machines with optional floating-point hardware. The MIPS rating can be misleading since floating-point programs using the hardware take less time but have a lower MIPS rating, whereas software floating-point routines result in a higher MIPS rating but a longer execution time. The issues with instruction sets are given a very detailed treatment by the authors, along with the role of compilers in designing an efficient instruction set. They discuss how variables are allocated and addressed and how many registers are needed to allocate the variables appropriately. They use a hypothetical load-store architecture, which they call DLX, to illustrate the points they are attempting to make. DLX is generic enough to be convincing in its didactic quality, based as it is on the computer hardware that was available at the time of writing.
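The floating-point inversion is easy to see with a back-of-the-envelope calculation. Here is a minimal sketch; the instruction counts, CPIs, and clock rate are invented for illustration and are not figures from the book:

```python
# Hypothetical machine running a floating-point program two ways.
# Hardware FP: fewer but slower instructions; software FP: many cheap
# integer instructions emulating FP. All numbers are illustrative.

CLOCK_HZ = 100e6  # assumed 100 MHz clock


def run(instruction_count, cpi):
    """Return (execution time in seconds, MIPS rating)."""
    seconds = instruction_count * cpi / CLOCK_HZ
    mips = instruction_count / (seconds * 1e6)
    return seconds, mips


hw_time, hw_mips = run(instruction_count=50e6, cpi=4.0)    # hardware FP
sw_time, sw_mips = run(instruction_count=300e6, cpi=1.2)   # software FP

print(f"hardware FP: {hw_time:.2f} s, {hw_mips:.0f} MIPS")  # 2.00 s, 25 MIPS
print(f"software FP: {sw_time:.2f} s, {sw_mips:.0f} MIPS")  # 3.60 s, 83 MIPS
```

The software-FP run posts the higher MIPS rating yet takes nearly twice as long, which is exactly the inversion the authors warn about.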
The authors give a thorough discussion of pipelining, including performance issues and potential pitfalls in using it. They also describe the use of dynamic scheduling to avoid stalling when data dependencies are present; the scoreboard and Tomasulo approaches to dynamic scheduling are both discussed. In addition, the authors spend a lot of time on cache memory design, cache optimization, and virtual memory. The chapter on storage media is excellent, and the authors employ some queuing theory to estimate the response time and throughput of an I/O system, assuming that the system is in equilibrium. The authors then discuss in detail different ways to benchmark I/O performance. This discussion is extremely important for those involved in Web server performance modeling and benchmarking. An excellent example is given dealing with the performance of a UNIX file system.
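Under that equilibrium assumption, the queuing estimates reduce to the classic M/M/1 results. A minimal sketch of the kind of calculation involved, with invented arrival and service rates rather than the book's figures:

```python
# M/M/1 estimate of I/O response time, assuming the system is in
# equilibrium (arrival rate < service rate). Rates are invented.

arrival_rate = 40.0   # I/O requests arriving per second
service_rate = 50.0   # requests per second the device can serve

utilization = arrival_rate / service_rate               # fraction of time busy
service_time = 1.0 / service_rate                       # mean time per request
queue_wait = service_time * utilization / (1.0 - utilization)
response_time = queue_wait + service_time               # waiting + service

print(f"utilization:   {utilization:.0%}")              # 80%
print(f"response time: {response_time * 1000:.0f} ms")  # 100 ms
```

As utilization approaches 100%, the queueing term blows up, which is why such examples keep I/O systems well below saturation.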
Chapter 7 is very important for those who need to study the performance of networked computers. The authors begin by considering a simple network consisting of two machines containing FIFO queues. They then design a simple protocol, similar to UDP, for transferring data between these machines, and calculate the total latency of this network. Interconnection media are considered, although the presentation is somewhat out-of-date due to improvements in technology and costs since the book was written. Performance issues with switched (ATM) versus shared-medium (Ethernet) networks are discussed. The authors also treat connectionless networks with a brief overview of the TCP/IP protocol, and mention the role of the Internet, but disappointingly do not discuss performance issues with TCP/IP over the Internet, which is a formidable mathematical problem.
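The latency calculation follows a simple additive model of message latency: sender overhead, plus time of flight, plus transmission time, plus receiver overhead. A sketch of that model follows; the overheads and link parameters below are assumptions for illustration, not the book's numbers:

```python
# Total one-way latency of a single message between two machines,
# following the additive model: sender overhead + time of flight +
# transmission time + receiver overhead. Values are illustrative.

def message_latency(size_bytes, bandwidth_bps, flight_s,
                    send_ovh_s, recv_ovh_s):
    """Total one-way latency in seconds for one message."""
    transmission = size_bytes * 8 / bandwidth_bps  # pushing bits onto the link
    return send_ovh_s + flight_s + transmission + recv_ovh_s


latency = message_latency(
    size_bytes=1024,      # 1 KB payload
    bandwidth_bps=100e6,  # assumed 100 Mbit/s link
    flight_s=5e-6,        # assumed propagation delay
    send_ovh_s=200e-6,    # assumed sender software overhead
    recv_ovh_s=300e-6,    # assumed receiver software overhead
)
print(f"total latency: {latency * 1e6:.0f} microseconds")  # ~587 us
```

For small messages the fixed software overheads dominate; raw bandwidth only matters once messages get large, a point the chapter emphasizes.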
The treatment of multiprocessor architectures is excellent and the authors discuss two application kernels that are frequently used in scientific applications: the Fast Fourier Transform and the LU factorization from linear algebra. The parallel implementation of these algorithms is extremely important in scientific programming. They consider the Barnes-Hut n-body algorithm and the Ocean application to study scaling and performance issues in parallel programs.
Some excellent appendices appear in the book, particularly the ones on vector architectures. For those interested in scientific applications, vector processing is a popular methodology for performance enhancement. But the authors point out that the popularity of vector processing seems to be coming to an end, due to advances in microprocessor technology. Scientific programmers have realized this, and have devoted much of their time to writing code that will run on these processors, which is frequently a challenging proposition.
Rating: 3 out of 5
wordy and rambling
It is true that this is _the_ reference book for computer architecture. However, that has nothing to do with it being a well-written book. Its popularity may be attributed to the lack of books on the same topic, which allowed it to become the standard textbook in many universities. This is how I came to have to suffer through it in a college graduate course.
Contrary to what some of the previous reviews described, this book is not conceptual at all. One of Patterson's main points is, to put it bluntly: why bother theorizing when you can benchmark with a set of the most heavily used real programs for the intended application? The computations involved don't go beyond what one needs to balance a checkbook. And the few "laws," such as Amdahl's Law, are such common sense that it's sad a name is attached to them. All of these are minor complaints, however, compared to the terrible writing style. I don't expect a technical writer to be polished or even engaging, but at the very least s/he must be coherent and to the point. In several chapters, especially in the second half of the book, the authors ramble on for pages without getting anywhere. Phrases or even paragraphs could have been taken out to clarify the content. It almost seemed that the authors were trying to fill enough pages just to get paid.
In short, this book does not live up to its reputation but anyone interested in computer architecture will probably have to endure it until a better book comes out.
Rating: 3 out of 5
Must have for college students, not for professionals
Hennessy and Patterson put heavy emphasis on a conceptual understanding of how modern computers work and how performance is measured by benchmarking techniques. I think this book should be a good wrap-up for college students.
The pitiful thing is the exercises at the end of each chapter, which are astronomically more difficult to grasp and comprehend. They may be too wordy and not suitable for someone who is just learning computer architecture.
The 'virtual' DLX ISA, although it functions well as a simple architecture, should be carefully revised to address the various issues in modern processors (like superscalar or VLIW execution). Students are stuck with DLX for the whole semester without the opportunity to explore other ISAs like MIPS, VAX, or Intel x86.
· Computer Architecture: A Quantitative Approach
· Computer Organization and Design, Second Edition: The Hardware/Software Interface
· Concrete Mathematics: A Foundation for Computer Science (2nd Edition)
· Modern Operating Systems (2nd Edition)