
Section 1 · Microprogram-Based Processors 153

chines, however, usually embed the address of the next microinstruction in the current microinstruction. This increases the size of the microword but also increases performance, since fetching of the next microinstruction does not have to wait for the update of a counter.[1]
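The embedded-next-address scheme can be sketched in a few lines of Python; the field names and control-store contents below are invented for illustration, not taken from any particular machine:

```python
# Sketch: each microword carries the address of its successor, so the
# sequencer follows a pointer chain instead of incrementing a
# microprogram counter on the critical path.
from collections import namedtuple

MicroWord = namedtuple("MicroWord", ["op", "next_addr"])

# A small control store; note the words need not sit at consecutive
# addresses, since each one names its successor explicitly.
control_store = {
    0x00: MicroWord(op="FETCH",  next_addr=0x10),
    0x10: MicroWord(op="DECODE", next_addr=0x07),
    0x07: MicroWord(op="ALU",    next_addr=0x2A),
    0x2A: MicroWord(op="STORE",  next_addr=None),  # end of sequence
}

def run(start):
    """Follow the embedded next-address chain; return the ops executed."""
    trace, addr = [], start
    while addr is not None:
        word = control_store[addr]
        trace.append(word.op)
        addr = word.next_addr   # no counter update needed before the fetch
    return trace
```

The cost is visible in the control store itself: every word spends bits on `next_addr`, which is the microword-size increase the text describes.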

Microword Sequence Alteration. Fast changes in microword sequencing are an absolute necessity, since they happen so frequently. The most prevalent method is to alter the next-microinstruction address field by ORing in status bits left as the result of a previous operation. Other possibilities include adding to the address, repeating a microword until a condition is met, jumping or branching, and fetching a previously stored address (e.g., a return from a microsubroutine).
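A minimal sketch of the OR-based alteration, with invented status-bit encodings and field widths:

```python
# Hypothetical status-bit encodings (not from any real machine).
ZERO_FLAG  = 0b01
CARRY_FLAG = 0b10

def next_address(base_addr, status):
    """OR status bits from the previous operation into the low bits of
    the next-address field, giving a multiway branch in one step."""
    # The microprogrammer aligns base_addr so its low bits are zero;
    # the OR then selects one of up to four consecutive targets.
    return base_addr | (status & 0b11)
```

This is why such branch targets are laid out on aligned boundaries in the control store: the OR can only set bits, never clear them.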

Any conditional sequence change will introduce some programming complexity, as depicted in Fig. 2. In order to execute microinstructions as fast as possible, the fetch of the next microinstruction is overlapped with the execution of the current microinstruction (see Part 2, Sec. 3). Thus the condition-code-setting information from the ALU operation of microinstruction 1 is available only after the fetch of microinstruction 2 has begun, and the first point at which microinstruction sequencing can be altered is the fetch of microinstruction 3. Microprogramming could be simplified (at the cost of performance) if microinstruction 2 were a null operation. Rather than lose the performance, microprogrammers attempt to set up the branch status at least one full microinstruction before the conditional branch.
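Under the simplifying assumption of a two-stage fetch/execute overlap, the timing constraint just described can be stated as a one-line rule:

```python
def first_steerable_fetch(setting_word):
    """Index of the first microword whose *fetch* can be altered by the
    condition code set during execution of microword `setting_word`,
    assuming fetch of word i+1 overlaps execution of word i."""
    fetch_already_started = setting_word + 1  # overlapped; too late to steer
    return fetch_already_started + 1          # earliest conditional target
```

So if microinstruction 1 sets the condition code, microinstruction 3 is the earliest whose fetch can depend on it, which is exactly why the branch status is set up at least one microinstruction ahead of the conditional branch.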

Microword Constants. Another tradeoff between flexibility, speed, and microword width is the provision for constants. When emulating a target ISP, there will be key constants (e.g., the address of the program counter in a register file, masks for decoding, the number of a special memory location, and increments to a program counter) that have to be provided. These constants can be stored in a ROM addressed by a microword subfield (thereby incurring the delay of a ROM access) or by an immediate operand in a microword subfield called emit. The emit subfield is as wide as the widest desired constant and hence requires many more bits than are required to encode the number of different constants. If infrequently used, the emit field is a prime candidate for multiple-subfield definition via dynamic decoding.
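The two alternatives can be contrasted in a short sketch; the field widths and ROM contents below are invented for illustration:

```python
# Hypothetical contents of a constant ROM indexed by a 2-bit microword
# subfield: 2 bits suffice to name 4 constants, but reading costs a ROM
# access.
CONSTANT_ROM = [0x0000, 0x0001, 0xFFFF, 0x00FF]

def constant_via_rom(microword):
    rom_index = microword & 0b11          # narrow subfield: 2 bits
    return CONSTANT_ROM[rom_index]        # extra delay: one ROM access

def constant_via_emit(microword):
    # Wide immediate "emit" subfield carried in the microword itself:
    # no ROM delay, but 16 bits of microword width per constant.
    return (microword >> 2) & 0xFFFF
```

The bit counts make the tradeoff concrete: 2 bits versus 16 bits of microword width for the same four constants, which is why an infrequently used emit field invites overlaying with other subfields via dynamic decoding.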

Data-Path Concurrency. Performance can be increased via increased concurrency. In general the techniques discussed in Sec. 3, while targeted for the ISP level, can also be used at the microprogramming level. Figure 2 has already illustrated the overlap (pipelining) of microinstruction fetch and execution. Multiple operations can be triggered by the same microinstruction (e.g., an ALU function and program-counter update) if there are sufficient functional elements and data paths to support the concurrency. Overlap is also possible between the microprogrammed processor and Mp if the processor is given sufficient control over the bus (as opposed to the IBM System/360 Model 30, Chap.

[1] There are mechanisms for implementing a counter combinationally so that the extra performance degradation is only a 6- to 10-gate delay rather than the 70- to 100-gate delay of a ripple carry. It is also possible to overlap microprogram-counter update with microword fetch if the microprogram counter is double-buffered. In this case the only performance degradation is on the execution of a branch for nonsequential flow. Microcode sequences tend to be short, however; one out of every three or four microinstructions could be a branch, so performance can still be severely impacted.
 
 
