
Preface

When Computer Structures: Readings and Examples was originally published by Gordon Bell and Allen Newell in 1971, the concept of computer structures was just emerging. The book focused on the historical evolution of technology, instruction sets, and uniprocessors. Two new notations were introduced to provide more concise descriptions of instruction sets (ISP, for instruction-set processor) and uniprocessor structures (PMS, for processor-memory-switch).

In the last decade, the scene has changed dramatically. Technological advances have led to a virtual explosion in the number of computer types and installations. Minicomputers and calculators, still relatively new computer applications in 1971, are the basis of industries today. Entirely new types, such as microprocessors and maxicomputers with vector data-types, now command sizable markets of their own. Techniques such as microprogramming, networks, multiprocessors, and fault tolerance were infrequently applied in 1971; a decade later, these concepts are essential in almost all the new systems.

The 1971 edition of Computer Structures introduced the concept of a design space, with each computer structure representing a point in that space. This edition embraces and expands the computer space concept and reflects changes in several dimensions which have since either received common acceptance or been replaced by other dimensions with significantly more impact on the structure's performance.

The number of addresses per instruction is an example of a dimension where common acceptance has developed. Contemporary instruction sets are based on general-register organizations with multiple-byte or multiple-word instructions. The variable-length instruction format allows the largest possible number of op codes and address bits to be packed into a one-word instruction. In early instruction-set design, too wide an instruction wasted memory, while too short an instruction could require several instructions to perform an otherwise simple task. A good instruction-set encoding can increase program density by over 50 percent. With the creation of a large number of instruction sets, designers of new instruction sets have been able to integrate the best features of their predecessors.
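To make the density argument concrete, the short C sketch below compares a fixed 32-bit instruction word against a hypothetical variable-length encoding in which the most frequent operation types receive the shortest formats. The instruction mix and the format widths are illustrative assumptions only, not measurements from any machine described in this book.

    /* Rough sketch of the density argument: average instruction size under a
       fixed 32-bit format versus a hypothetical variable-length encoding in
       which frequent operations get short formats. The mix and widths below
       are assumptions for illustration, not data from any machine in this book. */
    #include <stdio.h>

    int main(void)
    {
        struct { double fraction; int fixed_bits; int variable_bits; } mix[] = {
            { 0.60, 32, 16 },   /* register-register operations: short format */
            { 0.30, 32, 24 },   /* register-memory with short displacement    */
            { 0.10, 32, 40 },   /* long-address or large-immediate forms      */
        };
        double fixed = 0.0, variable = 0.0;

        for (int i = 0; i < 3; i++) {
            fixed    += mix[i].fraction * mix[i].fixed_bits;
            variable += mix[i].fraction * mix[i].variable_bits;
        }
        printf("average bits per instruction: fixed %.1f, variable %.1f\n",
               fixed, variable);
        printf("program density gain: %.0f%%\n",
               100.0 * (fixed - variable) / variable);
        return 0;
    }

With this particular assumed mix, the variable-length encoding averages about 21 bits per instruction against 32 for the fixed format, a density gain of roughly 50 percent; a different mix or a different set of formats would of course shift the number.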

Networks are an example of an area where new dimensions are emerging. Variations in network performance due to instruction-set design are negligible compared with those due to operating systems, network topology, network protocols, media bandwidth, etc.

This book emphasizes computer space dimensions with numerous and quantitative subdimensions. Each alternative value for a dimension represents a design alternative. These values and their interactions with other dimensions are illustrated by real machines.

All the machines discussed in this book have actually been constructed and evaluated. The papers, wherever possible, are written by the specific machine architects or people closely associated with the architectures. Several of the machines are presented in elaborate detail, enabling the reader to appreciate the design complexities encountered and design methodologies employed by the architects. Many of these papers have been written specifically for this book. In favoring depth over breadth, the book is not able to discuss all important architectures (nor even all major manufacturers). However, the architectures that are included were carefully selected to uniformly cover the major design principles of computer structures.

The proliferation of computer structures and the emergence of computer families have provided quantitative as well as descriptive data for the book. Wherever possible, data, models, and/or trends are derived from the actual computer structures.

Three notations help to summarize information about the computer structures: ISP, PMS, and Kiviat graphs. An updated version of the original ISP language, ISPS, has been used to formally describe a growing number of major computer architectures. A simulator has been utilized for debugging (e.g., running diagnostic programs written for the hardware implementations of the machines) and data collection (e.g., implementation-independent measures of benchmarks). ISPL, a predecessor of ISPS, was used in the Army-Navy Military Computer Family (MCF) project to evaluate alternative architectures.1 Several research projects based on formal machine descriptions have also emerged, including the generation of microcode, assemblers, diagnostics, and compilers. Since a complete ISPS description of a contemporary machine can be over 50 pages long, we have chosen to provide subsets of the full ISPS descriptions for all but the very simple architectures. These ISPS descriptions are complete except that only a subset of each machine's instruction set is described. All the ISPS descriptions that appear in this book have been compiled and simulated.

The PMS notation for describing the information flow rate of computer structures has been simplified and made more readable. System performance is summarized by Kiviat graphs, which display six major system parameters.

It is hoped that this book will serve as an educational resource for three professional groups: the computer engineer, who de

1"Military Computer Architectures: A Look at the Alternatives," special issue of Computer, vol. 10, no. 10, October 1977.
