Lars T. Kyllingstad wrote:
Lars T. Kyllingstad wrote:
Here's a puzzle for you floating-point wizards out there. I have to
translate the following snippet of FORTRAN code to D:
      REAL B,Q,T
C     ------------------------------
C     |*** COMPUTE MACHINE BASE ***|
C     ------------------------------
      T = 1.
   10 T = T + T
      IF ( (1.+T)-T .EQ. 1. ) GOTO 10
      B = 0.
   20 B = B + 1
      IF ( T+B .EQ. T ) GOTO 20
      IF ( T+2.*B .GT. T+B ) GOTO 30
      B = B + B
   30 Q = ALOG(B)
      Q = .5/Q
Of course I could just do a direct translation, but I have a hunch
that T, B, and Q can be expressed in terms of real.epsilon, real.min
and so forth. I have no idea how, though. Any ideas?
(I am especially puzzled by the line after label 20, the IF ( T+B .EQ. T )
test. How can that test ever be true? Is the fact that the 1 in the
statement at label 20 is an integer literal significant?)
-Lars
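
For reference, a direct, line-for-line translation into D might look roughly
like the sketch below. It assumes REAL maps to D's real, that ALOG is
FORTRAN's natural logarithm (std.math.log), and that the labelled GOTO loops
become do-while loops; the helper name computeMachineBase is just for
illustration.

import std.math : log;

// Direct translation of the FORTRAN snippet above (a sketch, not a canonical port).
void computeMachineBase(out real b, out real q, out real t)
{
    // *** COMPUTE MACHINE BASE ***
    t = 1.0;
    do                              // label 10
        t += t;
    while ((1.0 + t) - t == 1.0);

    b = 0.0;
    do                              // label 20: the 1 is an integer literal, as in the original
        b += 1;
    while (t + b == t);

    if (!(t + 2.0 * b > t + b))     // the IF ... GOTO 30 test, inverted
        b += b;

    q = log(b);                     // label 30: ALOG taken to be the natural logarithm
    q = 0.5 / q;
}

One caveat: the original runs in single precision, and a literal float port
could behave differently if the compiler evaluates intermediate results at
higher precision; using real throughout sidesteps that.
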
I finally solved the puzzle by digging through ancient scientific
papers, as well as some old FORTRAN and ALGOL code, and the solution
turned out to be an interesting piece of computer history trivia.
After the above code has finished, the variable B contains the radix (base)
of the computer's floating-point number system.
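
As a quick sanity check of that interpretation, calling the sketch from
earlier (the hypothetical computeMachineBase) should report B = 2 on any
binary machine:

import std.stdio : writefln;

void main()
{
    real b, q, t;
    computeMachineBase(b, q, t);    // the hypothetical helper sketched earlier
    writefln("B = %s  T = %s  Q = %s", b, q, t);
    // On a binary machine this should print B = 2;
    // on one of the old hexadecimal machines it would have been 16.
}
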
Perhaps the comment should have tipped me off, but I had no idea that
computers had ever been anything but binary. But apparently, back in the
50s and 60s there were computers that used the decimal and hexadecimal
systems as well. Instead of just power on/off, they had 10 or 16
separate voltage levels to differentiate between bit values.
Not quite. They just used an exponent base of 10 or 16 rather than 2. BTW,
T == 1/real.epsilon. I don't know what ALOG does, so I've no idea what Q is.
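
If that reading is right (B is the radix, and ALOG is FORTRAN's natural
logarithm, so Q = 0.5/ln(B)), then on a binary machine the three values
reduce to constants, and the whole block could plausibly be replaced by
something like the sketch below. With round-to-nearest-even the first loop
actually seems to stop one doubling past 1/real.epsilon, so the exact value
of T is worth checking on the target machine.

import std.math : LN2;

// Assuming a binary radix and round-to-nearest arithmetic (true of IEEE hardware):
enum real B = 2.0;                  // the radix the loops discover
enum real T = 2.0 / real.epsilon;   // first doubling at which (1.0 + T) - T stops being 1.0
enum real Q = 0.5 / LN2;            // 0.5 / log(B), taking ALOG to be the natural logarithm
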
I guess I can just drop this part from my code, then. ;)
-Lars