
INCREASING GENERALITY AND MODIFIABILITY

These criteria were not a part of our study. They clearly do not have much impact on a simple multiply operation; however, in more substantial systems they become important. The basic fact is that all generality costs in terms of hardware and (possibly) speed, since additional parametric inputs must be processed and either consulted during the computation or the computation adapted according to them. Thus, one rule for keeping costs down and speed up is to decrease generality. However, if generality is required, and especially if the kinds or amounts of generality are not clear at design time, then going to a software implementation provides one generally available solution.
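
To make the cost of generality concrete, here is a minimal sketch in Python, assuming a conventional shift-and-add scheme (the details of the RTM implementations are not repeated here). In software the operand width is just one more argument; a hardware version must pay for every additional bit of width with additional register and data-path components.

    def multiply(multiplier: int, multiplicand: int, width: int = 8) -> int:
        """Unsigned shift-and-add multiply of two width-bit operands."""
        product = 0
        for _ in range(width):      # one stage per multiplier bit
            if multiplier & 1:      # consult the low-order multiplier bit
                product += multiplicand
            multiplicand <<= 1      # align the next partial product
            multiplier >>= 1
        return product

Changing the generality of this routine (different widths, signed operands) changes an argument or a few lines; the corresponding hardware change replicates components.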

INCREASING RELIABILITY

Like generality and modifiability, reliability is an evaluative dimension that we did not explore. As we discussed earlier, it is not really an appropriate concern of alternative designs in most cases in this book. In general, one should use a technology that has the right reliability characteristics, so that straightforward design is appropriate. However, this is not always possible, and we list in the figure several standard strategies that are used to deal with unreliability.
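
The figure is not reproduced here, but one standard strategy of this kind, chosen purely as an illustration, is triple modular redundancy: run three copies of an unreliable module and take the majority answer. A minimal Python sketch, with a hypothetical unreliable_add standing in for any failing module:

    import random

    def unreliable_add(a: int, b: int, p_fail: float = 0.01) -> int:
        """A module whose result is occasionally corrupted."""
        result = a + b
        if random.random() < p_fail:
            result ^= 1 << random.randrange(16)   # flip one random bit
        return result

    def voted_add(a: int, b: int) -> int:
        """Run three copies and vote; a single failure is outvoted."""
        r1, r2, r3 = (unreliable_add(a, b) for _ in range(3))
        if r1 == r2 or r1 == r3:
            return r1
        return r2   # either r2 == r3, or all three disagree

If each copy fails independently with probability p, a wrong majority requires at least two identical failures, so the scheme buys roughly a p-to-3p^2 improvement at triple the hardware cost.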

COMPUTE WITH LOGIC

Since Boolean operations are just selections on the basis of whether an operand is a 0 or a 1, and since one can store the knowledge of a computation in the control path, it is possible to do all Boolean computations in the control part. In general, with a suitable functional system (e.g., our D, M, K labelling of components) there should be only a single way of doing basic operations, and this should be with components of the appropriate functional label. Indeed, Boolean operations can be done with D components. They can, of course, be done with a DMgpa; however, this is extremely expensive, since a DMgpa processes 16 bits in parallel. Thus there is a full set of combinatorial data operations, e.g., D(AND|OR|NAND|NOR). But the third way of computing, with K's, remains an alternative, which can sometimes be of use in saving either time or hardware, depending on the exact details of the design. Figure 34 illustrates how this is carried out.
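
As a rough software analogue of the idea in Figure 34 (the figure itself is not reproduced here), the sketch below contrasts the two styles: a D-style version that computes AND as a data operation, and a K-style version in which the truth table is stored entirely in the control structure, so the result is reached by branching on 0-versus-1 rather than by operating on data.

    def and_via_data(a: int, b: int) -> int:
        # D-style: a combinatorial data operation computes the result.
        return a & b

    def and_via_control(a: int, b: int) -> int:
        # K-style: the knowledge of AND lives in the branch structure;
        # nothing is computed on the data path, only selections are made.
        if a == 1:
            if b == 1:
                return 1
        return 0

Whether the control-path version saves time or hardware depends, as noted, on the exact details of the design.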

PROBLEMS

1. Reliability as another criterion. (A) Using the formulation that the probability of failure is roughly proportional to cost, derive several tables for the failures of the RTM modules. Make reliability calculations for two similar implementations of multiplication in order to see how different their reliabilities are. (B) Design a highly reliable version of multiply, assuming that you wanted to remain within RTM's.

2. The multiplication scheme using table look-up would appear to work with any sized component, e.g., 2-bit, 3-bit, etc. With each component size there is a trade-off between the size of memory for the table and the number of stages in the computation. (A) Design the multiplication system using 3-bit components. (B) If one considers N-bit numbers and K-bit components, can one decide in general which K-bit versions are plausible candidates? (A sketch of the general scheme appears after these problems.)

3. In the RP algorithm we introduced an initial selection of the smaller operand to be the multiplier in order to speed up the system somewhat. (A) Is this addition worthwhile (since it takes more hardware)? (B) How can you answer this question so that the answer is usefully available to future designers who might want to use this implementation, say by being added as a note in a designer's notebook?
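
As a starting point for Problem 2, here is a minimal Python sketch of the table look-up scheme for K-bit components (K = 2 shown); the names and structure are illustrative rather than taken from the text. The trade-off is visible directly: the table holds 2^(2K) products, while the number of look-up-and-add stages grows as (N/K)^2.

    K = 2                                    # component width in bits
    TABLE = [[x * y for y in range(1 << K)]  # all K-bit x K-bit products,
             for x in range(1 << K)]         # standing in for a ROM

    def digits(n: int, width: int) -> list:
        """Split n into width//K K-bit digits, least significant first."""
        return [(n >> (K * i)) & ((1 << K) - 1) for i in range(width // K)]

    def table_multiply(a: int, b: int, width: int = 8) -> int:
        product = 0
        for i, da in enumerate(digits(a, width)):
            for j, db in enumerate(digits(b, width)):
                # one stage: a table look-up plus a shifted add
                product += TABLE[da][db] << (K * (i + j))
        return product

    assert table_multiply(13, 11) == 143

For part (B), counting these stages against the 2^(2K)-entry table for various K is one way to see which component sizes remain plausible.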

