subroutine which implements the Russian Peasant's Algorithm will be presented here (see Figure DATA-9). Two sixteen-bit, two's complement numbers, initially held in two of the S registers, are multiplied together and the result placed in the double-length register D, formed by concatenating two S registers. This subroutine requires double-word addition and left-shift operations. The reader should note the high overhead time for setting up the multiplication. Four steps are required:
initialization of the double register D; exchange of the multiplier and multiplicand (so that |MPR| < |MPD|) to improve the performance of the algorithm; adjusting the signs (if necessary) of the multiplier and multiplicand so that the sign of the multiplier is positive; and extending the sign of the multiplicand into S(3).
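The setup steps and main loop described above can be sketched in software; the following Python function is illustrative only (the function name is hypothetical, and Python integers are unbounded, so the 16-bit register widths and the double-length register D are modeled loosely):

```python
def russian_peasant_multiply(mpr, mpd):
    """Multiply two sixteen-bit, two's complement numbers by repeated
    halving and doubling (Russian Peasant's Algorithm).  A sketch of
    the subroutine described in the text, not the Figure DATA-9 code.
    """
    # Setup: exchange multiplier and multiplicand so |MPR| < |MPD|,
    # which reduces the number of loop iterations.
    if abs(mpr) > abs(mpd):
        mpr, mpd = mpd, mpr
    # Setup: adjust signs so the multiplier is non-negative.
    # (Negating both operands leaves the product unchanged.)
    if mpr < 0:
        mpr, mpd = -mpr, -mpd
    # Main loop: whenever the halved multiplier is odd, add the
    # doubled multiplicand into the accumulating product.
    product = 0          # plays the role of the double register D
    while mpr != 0:
        if mpr & 1:
            product += mpd    # double-word addition
        mpd <<= 1             # double-word left shift
        mpr >>= 1             # halve the multiplier
    return product
```

Note that the swap and sign-adjustment steps are pure setup overhead, which is the point the text makes about the high cost of preparing the multiplication relative to the loop itself.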
SCIENTIFIC (FLOATING POINT) NOTATION
KEYWORDS: Scientific notation, floating point, exponent, fraction, mantissa
A common representation used to express numeric data is scientific (floating point) notation. Numbers in this form are written as d*2^j (for base two), where d is a binary fraction called the mantissa and j is the exponent. For example: .1011*2^6 = 101100. and .0110*2^-3 = .0000110. Floating point notation is most often used for numbers of fixed precision but widely varying magnitude.
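The two examples above can be checked numerically; in this Python sketch, each binary fraction is written as an integer numerator over the appropriate power of two:

```python
# .1011 (binary) is 11/16, and .1011 * 2^6 = 101100. (binary, i.e. 44)
assert (0b1011 / 16) * 2**6 == 0b101100

# .0110 (binary) is 6/16, and .0110 * 2^-3 = .0000110 (binary, i.e. 6/128)
assert (0b0110 / 16) * 2**-3 == 0b0000110 / 128
```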
One possible format for floating point numbers using a sixteen-bit word is shown in Figure DATA-10. The exponent, R<14:9>, is encoded in so-called excess-31 notation; that is, the true exponent of the number is 31 less than the number stored in the word: -31 is encoded as 0, -30 as 1, ..., 0 as 31, ..., and +32 as 63. This method avoids using a sign bit for the exponent. The mantissa, R<8:0>, is stored as an unsigned binary fraction with the binary point assumed to be immediately to the left of R<8>. The sign bit, R<15>, stores the sign of the mantissa. This format represents magnitudes from .000 000 001*2^-31 to .111 111 111*2^32, of either sign, whereas a sixteen-bit, two's complement notation gives the range -(2^15) to (2^15)-1.
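The field layout just described can be expressed as a few lines of packing and unpacking code. This Python sketch follows the text's bit assignments (sign in R<15>, excess-31 exponent in R<14:9>, 9-bit fraction in R<8:0>); the function names are illustrative:

```python
def pack_float16(sign, exponent, fraction):
    """Pack a number into the sixteen-bit format of Figure DATA-10.

    sign     -- 0 for positive, 1 for negative  (stored in R<15>)
    exponent -- true exponent, -31..+32         (stored excess-31 in R<14:9>)
    fraction -- 9-bit unsigned numerator, 0..511
                (the value is fraction/512, point left of R<8>)
    """
    assert -31 <= exponent <= 32 and 0 <= fraction <= 511
    return (sign << 15) | ((exponent + 31) << 9) | fraction

def unpack_float16(word):
    """Recover (sign, true exponent, fraction numerator) from a word."""
    sign = (word >> 15) & 1
    exponent = ((word >> 9) & 0x3F) - 31   # remove the excess-31 bias
    fraction = word & 0x1FF
    return sign, exponent, fraction

def value(word):
    """The real number a word represents, for checking by hand."""
    s, e, f = unpack_float16(word)
    return (-1) ** s * (f / 512) * 2 ** e
```

For example, `pack_float16(0, 6, 0b101100000)` encodes .101100000*2^6, whose value is 44, matching the first example in the text.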
In addition to the common arithmetic operations (addition, subtraction, multiplication, and division), one other operation is usually provided for floating point numbers: normalization. Normalized floating point numbers always have the left-most one-bit of the mantissa in R<8>, except of course for a zero fraction. To normalize a number, the mantissa is shifted to the left until a one-bit is in R<8>, while the exponent is decreased by one for each shift. Using the format shown in Figure DATA-10, normalized floating point numbers have magnitudes from .100 000 000*2^-31 to .111 111 111*2^32.
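The shift-and-decrement loop can be sketched directly on the (sign, exponent, fraction) fields; this Python function is an illustration under the format above, not the RTM subprocess itself:

```python
def normalize(sign, exponent, fraction):
    """Shift the 9-bit fraction left until its leading one-bit reaches
    R<8> (bit 8), decrementing the true exponent once per shift.
    Raises OverflowError on the significance error described later in
    the text (exponent driven below -31 during normalization).
    """
    if fraction == 0:
        return sign, exponent, fraction    # a zero fraction stays as-is
    while not (fraction & 0x100):          # loop until R<8> holds a one
        fraction = (fraction << 1) & 0x1FF
        exponent -= 1
        if exponent < -31:
            raise OverflowError("significance error: exponent underflow")
    return sign, exponent, fraction
```

For example, normalizing .000001011*2^0 shifts the fraction left five places, giving .101100000*2^-5.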
Implement a set of subprocesses to provide floating point arithmetic operations for RTM systems; include addition, subtraction, multiplication, division, and normalization.
There are three types of errors which can result from the floating point operations:
1. overflow - the exponent of the result of an operation is too large to be represented
2. underflow - the exponent of the result of an operation is too small to be represented
3. significance - normalization of a number causes exponent underflow.
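The three error conditions all reduce to range checks on the result exponent; a small illustrative helper (the name and return values are hypothetical, the bounds come from the excess-31 format above) might classify them as follows:

```python
EXP_MIN, EXP_MAX = -31, 32   # representable true exponents (excess-31 field)

def check_exponent(exponent, during_normalization=False):
    """Classify the result exponent of a floating point operation.

    overflow     -- exponent too large to represent
    underflow    -- exponent too small to represent
    significance -- underflow caused specifically by normalization
    """
    if exponent > EXP_MAX:
        return "overflow"
    if exponent < EXP_MIN:
        return "significance" if during_normalization else "underflow"
    return "ok"
```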