only because of external constraints of human readability. Internally, they simply add problems, rather than provide processing options.
The first alternative that comes to mind is the logarithm. If numbers were represented internally by their logarithms, then multiplication would reduce to the addition of logarithms:
log(X * Y) = log(X) + log(Y)
This yields the result in the form that is wanted, i.e., the logarithm of the product, already in the appropriate logarithmic representation. The difficulty, of course, is that if any additions or subtractions are to be done on the resulting product, the representation breaks down -- no direct algorithm exists for adding two numbers represented as logarithms (without involving multiplication). Rather, it would be necessary to convert back to the numbers (by taking the antilogarithms) and then add.
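For instance, with base-2 logarithms, log(8 * 16) = log(8) + log(16) = 3 + 4 = 7, identifying the product as 2^7 = 128; but to compute, say, 8 * 16 + 1, one must first take this antilogarithm before adding.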
Thus, one would use logarithms only when the total algorithm involved only multiplication (and division). If this were the case, the transformation to logarithms would undoubtedly be carried out at the level of the algorithm itself and would not be seen as occurring in the RT implementation.
It is worth considering briefly the feasibility of developing an implementation of multiplication that inputs and outputs numbers in binary, but operates internally in terms of logarithms. The basic scheme is shown in Figure 18. It requires a conversion from numbers to logarithms on input and from the logarithm back to numbers on output. If these conversions take much time and hardware, then there can be no advantage in the scheme. Given the small amount of hardware in the basic multiply implementation itself, it would seem that the only possibility lies in a fast, though possibly expensive, conversion scheme that would produce a fast total algorithm, thus competing with some of the parallel algorithms that take considerable hardware.
The above considerations dictate the use of table look-up, which is the only fast scheme for arbitrary functions. Let us suppose that we use sufficient memory so that each table look-up is done in the most efficient way possible, by a direct memory access with the argument as address. This is certainly possible for taking the logarithm of the inputs, where it requires a table of size 2^8 = 256. It is much more complex, however, on the output side, where the argument is a 16-bit logarithm of the product. Still, continuing with the assumption, we get the basic implementation shown in Figure 19. It can be seen that the time is about 5 multiply steps, and the cost is 256 + 16,384 read-only memory cells. Thus, it yields less than a factor of 2 in speed over the basic algorithm and provides little with which to compete against the array and pipeline parallel schemes. (Note that it is also incorrect as it stands because of the 0 factor case, since the logarithm of 0 is undefined.)
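As a concrete sketch of the scheme (hypothetical C code, not the Figure 19 hardware), the following builds the two read-only tables and performs the multiply as look-up, add, look-up. Base-2 logarithms scaled by 2^10 are assumed, so that the sum of two input logarithms stays below 16,384 and the antilog table has the cell count cited above; at this precision the result is exact for small products but can be off by a few units near the top of the range, and the 0 factor must be special-cased, as noted.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCALE 1024          /* 2^10 fractional resolution of the logs */
    #define LOG_ENTRIES 256     /* one entry per 8-bit operand            */
    #define ALOG_ENTRIES 16384  /* 14-bit sum of two scaled logarithms    */

    static uint16_t log_tab[LOG_ENTRIES];   /* scaled log2 of each value */
    static uint32_t alog_tab[ALOG_ENTRIES]; /* rounded antilog of a sum  */

    static void build_tables(void)
    {
        for (int i = 1; i < LOG_ENTRIES; i++)
            log_tab[i] = (uint16_t)lround(log2((double)i) * SCALE);
        for (int s = 0; s < ALOG_ENTRIES; s++)
            alog_tab[s] = (uint32_t)lround(pow(2.0, (double)s / SCALE));
    }

    /* Multiply two 8-bit operands by log look-up, add, antilog look-up. */
    static uint32_t log_multiply(uint8_t x, uint8_t y)
    {
        if (x == 0 || y == 0)     /* log(0) is undefined: special-case it */
            return 0;
        uint16_t s = log_tab[x] + log_tab[y];  /* at most 14 bits wide */
        return alog_tab[s];
    }

    int main(void)
    {
        build_tables();
        printf("12 * 11 = %u\n", log_multiply(12, 11));  /* exact: 132 */
        printf("251 * 249 = %u (true %u)\n",
               log_multiply(251, 249), 251u * 249u);     /* off by a few */
        return 0;
    }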
Thus, we can finally lay to rest the use of the logarithm. One could have predicted the outcome, perhaps, since multiply yields a simple algorithm and the mere introduction of a function such as the logarithm bodes inescapable complexities. We carried the analysis out a bit to indicate the regard one should have for the actual examination of alternatives, rather than their dismissal on general grounds.
There exist other representations of numbers. For instance, a number can be expressed as the product (with repetition) of its prime factors. Multiplication then reduces to the component-wise summing of exponents. The complexities here turn out to be worse than for logarithms, since conversion in and out requires factoring, and addition again has no direct algorithm.
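A minimal C sketch (assuming a small basis of the first four primes) makes the point: multiplication is indeed a mere component-wise addition of exponents, but conversion in already requires trial division, and only numbers whose factors all lie in the basis are representable at all.

    #include <stdio.h>

    /* Exponent-vector representation: a number is stored as the exponents
       of its prime factorization over a fixed (hypothetical) basis. */
    #define NPRIMES 4
    static const int basis[NPRIMES] = {2, 3, 5, 7};

    typedef struct { int e[NPRIMES]; } Factored;

    static Factored factor(int n)      /* conversion in: trial division */
    {
        Factored f = {{0}};
        for (int i = 0; i < NPRIMES; i++)
            while (n % basis[i] == 0) { n /= basis[i]; f.e[i]++; }
        return f;
    }

    static Factored fmul(Factored a, Factored b)  /* multiply = add exps */
    {
        Factored p;
        for (int i = 0; i < NPRIMES; i++)
            p.e[i] = a.e[i] + b.e[i];
        return p;
    }

    static int value(Factored f)       /* conversion out: rebuild integer */
    {
        int n = 1;
        for (int i = 0; i < NPRIMES; i++)
            for (int k = 0; k < f.e[i]; k++) n *= basis[i];
        return n;
    }

    int main(void)
    {
        Factored p = fmul(factor(12), factor(45));
        printf("12 * 45 = %d\n", value(p));   /* 540 */
        return 0;
    }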
Numbers can also be represented as the residues taken modulo a basis set of primes. The number 18, for instance, would be represented as the vector (0,0,3) where the basis is (2, 3, 5), since:
18 = 0 mod(2), 18 = 0 mod(3), 18 = 3 mod(5)
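A companion C sketch over the same basis (2, 3, 5) shows the attraction of residues: conversion in and multiplication are both simple, and multiplication proceeds component-wise with no carries between components, valid so long as the true product stays below the basis range 2 * 3 * 5 = 30.

    #include <stdio.h>

    /* Residue representation modulo a (hypothetical) basis of primes. */
    #define NB 3
    static const int basis[NB] = {2, 3, 5};

    typedef struct { int r[NB]; } Residue;

    static Residue to_residue(int n)   /* conversion in: one mod per prime */
    {
        Residue v;
        for (int i = 0; i < NB; i++) v.r[i] = n % basis[i];
        return v;
    }

    static Residue rmul(Residue a, Residue b)  /* component-wise multiply */
    {
        Residue p;
        for (int i = 0; i < NB; i++)
            p.r[i] = (a.r[i] * b.r[i]) % basis[i];
        return p;
    }

    int main(void)
    {
        Residue v = to_residue(18);
        printf("18 -> (%d,%d,%d)\n", v.r[0], v.r[1], v.r[2]);  /* (0,0,3) */
        Residue p = rmul(to_residue(4), to_residue(7));
        /* 4 * 7 = 28, and 28 mod (2,3,5) = (0,1,3), within the range 30. */
        printf("4 * 7 -> (%d,%d,%d)\n", p.r[0], p.r[1], p.r[2]);
        return 0;
    }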