There are some important reasons for using 64-bit, as follows.

The "effective" precision of a float (32bit) is about 5 dp
The "effective" precision of a double/real*8 (64 bit) is about 10 dp

The number of digits is only approximate since the values are encoded in binary.
The largest value a float (32-bit) can hold is about 3e38; for a double (64-bit) it is about 2e308.
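
For example, a quick C sketch (assuming IEEE-754 floats, which is what essentially all current hardware uses) that prints these limits from the standard <float.h> constants:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* decimal digits guaranteed to survive a round trip, and max value */
        printf("float : %d digits, max %g\n", FLT_DIG, FLT_MAX);
        printf("double: %d digits, max %g\n", DBL_DIG, DBL_MAX);
        return 0;
    }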

Using a 64-bit number to store coordinates makes no sense of course, since experiments don't provide this precision, but when doing complex calculations such as matrix manipulation (in graphics programs) or non-linear fitting (refinement), the intermediate results must be stored as 64-bit numbers or interesting distortions will propagate into your structure. Error due to data precision propagates very fast!
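
Here is a small sketch of the effect (again assuming IEEE-754 arithmetic): sum 0.1 ten million times in float and in double; the exact answer is 1e6.

    #include <stdio.h>

    int main(void)
    {
        float  fsum = 0.0f;
        double dsum = 0.0;
        long   i;

        /* each single-precision addition rounds; the errors compound */
        for (i = 0; i < 10000000L; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }
        printf("float  sum = %f (error %g)\n", fsum, fsum - 1.0e6);
        printf("double sum = %f (error %g)\n", dsum, dsum - 1.0e6);
        return 0;
    }

The float total drifts visibly away from 1e6, while the double total stays accurate to a tiny fraction.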

In machine code a 64-bit number can be fetched with a single instruction on a 64-bit OS, whereas a 32-bit OS requires a double memory fetch. It does depend on how the hardware vendor designed the underlying microcode and drivers - sometimes these will use 64-bit firmware fetches on a 32-bit OS for forward compatibility - but in general a 64-bit OS will allow heavy computation with 64-bit numbers to run faster, as the machine code issues fewer memory fetches.

I.e. programs with complex numerical manipulation will generally run faster on a 64-bit machine.
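
You can check which model a binary was built for - the pointer and size_t widths are the giveaway. A quick sketch:

    #include <stdio.h>

    int main(void)
    {
        /* 4-byte pointers => 32-bit build; 8-byte pointers => 64-bit build */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(size_t) = %zu\n", sizeof(size_t));
        return 0;
    }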

Another issue is that the heap and stack limitations in computer languages limit the size of the contiguous region of memory that can be addressed by a single "offset" machine-code instruction. An array in machine code is accessed as a start address + offset - and sometimes this offset is only 16-bit (ah!) or 32-bit signed. So not only is the maximum memory available to a program 4GB on a 32-bit computer, the largest single contiguous memory chunk is smaller still on a 32-bit machine. This causes problems with 2D or ND arrays, and it was typical that a 4000x4000 matrix was the limit on 32-bit computers (when 16-bit offsetting was used) - certainly on all the machines I used in the past. Contiguous memory sizes are much larger on a 64-bit OS - the effective size of an array is now (almost) unlimited.
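
To make the offsets concrete, a sketch: a 4000x4000 matrix of doubles is 16,000,000 elements, i.e. 128 MB, and it must be one contiguous chunk if it is addressed as base + flat offset.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 4000;
        size_t bytes = n * n * sizeof(double);   /* 128,000,000 bytes */

        printf("4000x4000 doubles = %zu bytes\n", bytes);

        double *m = malloc(bytes);               /* one contiguous block */
        if (m == NULL) { perror("malloc"); return 1; }

        /* flat index held in size_t: 64 bits wide on a 64-bit OS, so   */
        /* matrices far larger than this can still be addressed safely, */
        /* whereas a 32-bit signed offset overflows not far beyond it   */
        m[(n - 1) * n + (n - 1)] = 1.0;
        free(m);
        return 0;
    }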

It is just easier to write more complex programs on bigger computers, so we (developers) can make these more powerful computers run slower than the old computers we had before.

Regards
Tom

On Thu, Sep 01, 2011 at 11:36:21AM -0700, Ethan Merritt wrote:
On Thursday, September 01, 2011 11:02:50 am Ed Pozharski wrote:
I am almost sure this has been addressed before, so you can go after me
for insufficient googling.  However,

1.  Is there any *significant* advantage in using 64-bit CCP4 binaries
(primarily speed)?
2.  IIUC, the standard CCP4 download results in 32-bit binaries being
run on a 64-bit system.  Works for me (except for the weird iMosflm
issue), but given that a 64-bit OS is becoming more and more common, isn't
it time for a 64-bit binary option?  The answer, of course, is no if you
answered no to 1 above.

The generic answer is that there is no intrinsic speed advantage to running
a 64-bit binary rather than a 32-bit binary.  In fact it may run slower
due to larger pointer sizes and hence poorer cache performance.
However, 32-bit binaries cannot access more than 4GB of address space.
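
As a sketch of the pointer-size cost (assuming the usual ILP32 vs LP64 data models): a pointer-heavy node doubles in size on a 64-bit build, so fewer nodes fit per cache line.

    #include <stdio.h>

    /* ~12 bytes when pointers are 4 bytes; 24 bytes (with padding) */
    /* when pointers are 8 bytes                                    */
    struct node {
        struct node *next;
        struct node *prev;
        int          value;
    };

    int main(void)
    {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }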

But the x64 architecture provides more registers and faster instructions
than x86.  So a 32-bit binary using the x64 instruction set can run faster
than a 32-bit binary using only x86 instructions.  Therefore you need to
choose the right compiler options in order to get the benefit of the faster
architecture.
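
For example, with GCC one might build the same source three ways (prog.c is just a placeholder name):

    gcc -m32 prog.c                       # 32-bit binary, legacy x87/i386 code
    gcc -m32 -msse2 -mfpmath=sse prog.c   # 32-bit binary, x64-era SSE2 math
    gcc -m64 prog.c                       # 64-bit binary; SSE2 and the extra
                                          # registers are used by default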

I do not know if there are specific CCP4 programs that fall outside of
the generic case described above.

        Ethan

--
Ethan A Merritt
Biomolecular Structure Center,  K-428 Health Sciences Bldg
University of Washington, Seattle 98195-7742
