addresses-per-instruction: (0 address / stack | 1 address | 1 + index / (1 + x) | 1 + general register address / (1 + g) | 2 address | 3 address | n + 1 address | compound))
A simple processor is always associated with a memory (its primary memory), which holds the program (and usually the data) for the processor. In addition, there may be secondary memories and also other components that are controlled by the processor.
The processor often functions as the main component of an essentially isolated system (often called stand-alone); it is then a central processor, Pc. Processors also occur as more specialized components in larger systems; e.g., to manage input/output (Pio) or display (P.display), or to do a subset of data-operations efficiently (P.data, P.vector_move, P.array, or P.special_algorithm). Processors are sometimes built in a hierarchy, using one processor to perform the interpretation and operations of another. Such processors have become known as microprogram processors.
The distinguishing feature of a processor is that it determines its own next instruction. The control that does this is called the interpreter. The repertoire of operations of the processor is partly a set of data-operations performed by its own subcomponents and partly the set of operations proper to a set of transducers, memories, links, and switches external to the processor but incorporated into its operation code. The operations are largely determined by the set of data-types (see the ISP section).
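The interpreter described above can be sketched as a fetch-decode-execute loop in which the processor itself determines its next instruction. The following is a minimal illustration, not taken from the text; the mnemonics and the 1-address organization are hypothetical.

```python
# A sketch of the interpreter a processor embodies: a fetch-decode-execute
# loop for a hypothetical 1-address machine. The distinguishing feature is
# that the loop itself computes the next instruction address (pc).

def interpret(memory, pc=0, acc=0):
    """Run a tiny 1-address machine until HALT; returns the accumulator."""
    while True:
        op, addr = memory[pc]        # fetch the instruction at the instruction address
        pc += 1                      # default: take the next instruction in sequence
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "JUMP":           # the processor determines its own next instruction
            pc = addr
        elif op == "HALT":
            return acc

# The primary memory holds both the program and its data (5 in cell 4, 7 in cell 5).
m = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0), 4: 5, 5: 7}
interpret(m)   # returns 12 and leaves 12 in cell 6
```

Note that program and data share one memory, as the preceding paragraphs describe for a simple processor and its Mp.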
A processor may have considerable internal memory (called the processor state, Mps). Besides the instruction and instruction-address registers, which are necessary for interpretation, there may be various amounts of status information, accumulators, index registers, general registers, and accumulator stacks. No one system has all of these memories, since they often provide alternatives to each other (e.g., index registers and general registers).
Each of the operations has its own operation time and its own possibilities for being overlapped with other operations. Several parameters are given that summarize this array of information: the cycle-time of Mp, which in the long run limits the rate at which instructions and data can be accessed (and also determines the maximum throughput); the concurrency, which tells how many operations can be performed per cycle time (this requires an averaging of the various possibilities as given in the instruction set); and the program-switching time, which is the time required to change context from one program to another. In simple operating regimes (standard batch processing) program-switching time is not an important parameter; it becomes so when interrupts are permitted. For interrupts, the response time is critical: it is the time between when a request is made and when the request is acknowledged by P.

The instruction set is really an entry point to the ISP description of the processor. One might give here simply the number of instructions, but this can be a very misleading number, since many variations of a basic instruction can be counted, giving highly erroneous results.

The algorithm-encoding-efficiency is the ratio of i-units used for data per unit time to the number of accesses for data + instructions per unit time. This efficiency is strongly affected by the address size, which is usually the address size of the Mp but need not be if a processor uses an incremental or relative addressing system. The ratio can be measured at many levels of the ISP: instruction-by-instruction, on a subroutine, or for a whole program. In a simple computer, this ratio is near 1/2. Vector operations can allow a ratio much closer to 1.
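The encoding-efficiency ratio just defined can be computed directly from access counts. The counts below are hypothetical, chosen only to illustrate why a simple computer sits near 1/2 and a vector machine approaches 1.

```python
# A sketch of the algorithm-encoding-efficiency: the ratio of data accesses
# to total (data + instruction) accesses over some span of execution.

def encoding_efficiency(data_accesses, instruction_accesses):
    """Fraction of memory accesses that fetch data rather than instructions."""
    return data_accesses / (data_accesses + instruction_accesses)

# A simple 1-address computer fetches roughly one data word per instruction
# fetched, so the ratio is near 1/2 (hypothetical counts):
encoding_efficiency(1000, 1000)   # 0.5

# A vector operation fetches one instruction but streams many data words,
# pushing the ratio much closer to 1:
encoding_efficiency(1000, 10)
```

The same function applies at any level of the ISP (per instruction, per subroutine, or per program) by choosing the span over which the accesses are counted.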
Common measures for the instructions give the size of the operation code, the address, and the instruction. The number of addresses per instruction is one of the best parameters for indicating the overall structure of the instruction set and is called the instruction-type. It ranges from 0 addresses (systems which execute a sequence of operations) through 1, 2, and 3 addresses per instruction to a variable number of addresses. Between 1 and 2 addresses lie the index register (1 + x) and general register (1 + g) machines. In a special class is the (n + 1) organization, which involves an additional address to obtain the next instruction; it can be added to any other organization.
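The difference among instruction-types can be seen by writing the same computation, c := a + b, under three of the organizations above. The mnemonics are illustrative only, not drawn from any machine in the text; a tiny evaluator for the 0-address (stack) form is included to make the example concrete.

```python
# Hypothetical instruction sequences computing c := a + b under three
# instruction-types: 0-address (stack), 1-address (accumulator), 3-address.

zero_address = [          # stack machine: operands are implicit (top of stack)
    ("PUSH", "a"),
    ("PUSH", "b"),
    ("ADD",),             # pops two operands, pushes their sum
    ("POP", "c"),
]

one_address = [           # a single accumulator holds one implicit operand
    ("LOAD", "a"),
    ("ADD", "b"),
    ("STORE", "c"),
]

three_address = [         # both sources and the destination are explicit
    ("ADD", "a", "b", "c"),
]

def run_stack(program, mem):
    """Evaluate a 0-address program against a name -> value memory."""
    stack = []
    for ins in program:
        if ins[0] == "PUSH":
            stack.append(mem[ins[1]])
        elif ins[0] == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif ins[0] == "POP":
            mem[ins[1]] = stack.pop()
    return mem

run_stack(zero_address, {"a": 2, "b": 3})   # yields c = 5
```

More addresses per instruction mean fewer instructions but wider ones; the trade-off is exactly what the instruction-type parameter summarizes.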
EXAMPLES
Pc('DEC PDP-8; 1 address/instruction; ~2 w/instruction; 12 b/w; 1.5, 3.0, 4.5 µs/instruction)
Pio('IBM 7909; 500 kw/s; data-types: words, integer; 1 address/instruction; 36 b/w)
11.3 complex-processor := simple-processor(
Mp-concurrency: (1 P | 1 P with interrupt | 1 program with multiple concurrent subprograms | 1 Pc - n Pio | monitor + 1 user program | monitor + 1 swapped program | fixed multiprogramming | multiprogramming | segmented-programming);
multiprogramming := (no relocation | protect only | 1 segment | 2 segment / pure | impure segments | > 1 segments | paging)
segmented-programming := (fixed length page segments | multiple length page segments | variable length page segments | named segments);
P-concurrency: (serial / serial by bit | parallel / parallel by word | multiple instruction streams | multiple data streams (arrays) | pipeline processing | instruction-memory);
instruction-memory := (none | 1 instruction look-ahead | n instruction look-ahead | cache / look-aside / slave memory))
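The value of instruction look-ahead can be shown with a simple timing model. This is a sketch under idealized assumptions (every fetch and execute takes a fixed time, and look-ahead fully overlaps them); the numbers below are hypothetical.

```python
# A sketch of why 1-instruction look-ahead pays: while instruction i
# executes, instruction i+1 is already being fetched from Mp, so fetch
# time is hidden whenever execute time covers it.

def run_time(n, fetch, execute, lookahead=False):
    """Total time to run n instructions, in the same units as fetch/execute."""
    if lookahead:
        # One initial fetch, then each step costs the longer of the two
        # overlapped activities (fetch of i+1 vs. execute of i).
        return fetch + n * max(fetch, execute)
    # Strictly serial interpretation: fetch, then execute, one after another.
    return n * (fetch + execute)

run_time(100, 1.5, 1.5)                  # 300.0: serial fetch-execute
run_time(100, 1.5, 1.5, lookahead=True)  # 151.5: fetch hidden behind execute
```

The same overlap argument, extended to n instructions or to a cache/look-aside memory, underlies the remaining alternatives in the instruction-memory definition.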
A complex processor is often an extension of a simple processor along the dimension of memory mapping, since a processor is already a highly structured and "complex" component.
Note that a collection of processors does not constitute a compound processor in the way that collections of other PMS components compound; hence, we denote a general collection of processors as a computer. Thus, a complex processor can be written in terms of a simple-P with new attribute values. A central processor implemented with microprogramming contains a specialized processor as a subcomponent (P.microprogram).
Three attributes separate a simple processor from a complex processor:
Mp-concurrency, P-concurrency, and instruction-memory. In essence, the simple processor has no Mp concurrency (interpreting a single program) and serial or parallel P concurrency, with no instruction-memory (buffer-