On Sunday 23 March 2003 02:50, Steinar H. Gunderson wrote:

> But then somebody said each HT "virtual CPU" had its own part of the bus,
> so it would definitely help with I/O-bound (RAM I/O, of course, not disk
> I/O) programs as well... Could this be true, or is this just misinformation?

I think perhaps someone is getting confused; memory access performance is 
chipset-dependent, not CPU-dependent. The "Granite Bay" chipsets support 
interleaved DDR access, which doubles the effective memory bandwidth.
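
For anyone who wants the back-of-envelope numbers: a 64-bit DDR channel 
moves data on both clock edges, and interleaving two banks lets the chipset 
overlap accesses so the peak roughly doubles. A minimal sketch in Python 
(nominal peak ratings only - real sustained bandwidth is lower):

    # Nominal peak bandwidth of PC2100 DDR: one bank vs two interleaved banks.
    clock_hz = 133e6          # base clock ("266 MHz" is the double-pumped rate)
    transfers_per_clock = 2   # DDR transfers data on both clock edges
    bus_bytes = 8             # 64-bit memory channel
    one_bank = clock_hz * transfers_per_clock * bus_bytes
    print(one_bank / 1e6)     # ~2128 MB/s - hence the "PC2100" rating
    print(2 * one_bank / 1e6) # ~4256 MB/s with two interleaved banks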

On Sunday 23 March 2003 02:13, John R Pierce wrote:
>
> the newest Xeons have a 533 MHz bus, which is supported by chipsets like the
> E7501.   I started running 4 instances of mprime on a pair of dual 2.8 GHz
> Xeons, but had to wipe them a few days later and forgot to save the
> work-in-progress...  Monday I'll restart them and note how fast 1 and 2
> instances run with and without hyperthreading enabled.   IIRC, they thought
> they'd finish 18,xxx,xxx exponents in 10 days.

My 2.66 GHz P4/Asus P4G8X system (E7205 chipset) is running exponent 18600979 
(1024K FFT run length) at 0.040 sec/iter, giving a total run time of ~8.5 
days. Though it uses a "Granite Bay" chipset, this mobo supports "consumer" 
S478 P4 CPUs. I'm using a "Northwood" 2.66 GHz processor (which doesn't 
support HT, though the chipset does) because this seems to be the optimum 
grunt/$ at present.
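
As a sanity check on that figure: a Lucas-Lehmer test of exponent p takes 
roughly p iterations, so the projected run time is just exponent times 
sec/iter. A quick sketch (the real timing drifts slightly over a run):

    # Projected Lucas-Lehmer run time: ~p iterations at a fixed sec/iter.
    p = 18600979             # exponent under test
    sec_per_iter = 0.040     # measured iteration time
    print(p * sec_per_iter / 86400)  # ~8.6 days, consistent with ~8.5 above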
>
> Note re: the memory contention issue, the dual Xeon chipsets like the E7501
> have higher memory bandwidth as they use interleaved DDR (2 banks); this
> may at least partially shift the performance vis-a-vis two separate P4
> systems.

Possibly, but dual-bank DDR on a uniprocessor system is better still - it 
puts P4 DDR systems into the same league as systems supporting (expensive) 
PC1066 RDRAM, maybe even a few percent ahead, though using only PC2100 DDR 
("266 MHz", actually 133 MHz double-pumped).
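
The RDRAM comparison works out like this - a rough sketch using nominal peak 
rates only (16-bit PC1066 channels at 1066 MT/s, two channels on e.g. the 
i850E; any real-world edge for DDR would come from latency, not peak rate):

    # Nominal peak bandwidth: dual-bank PC2100 DDR vs dual-channel PC1066 RDRAM.
    ddr   = 2 * 133e6 * 2 * 8      # two 64-bit banks, 133 MHz, double-pumped
    rdram = 2 * 1066e6 * 2         # two 16-bit channels at 1066 MT/s
    print(ddr / 1e6, rdram / 1e6)  # ~4256 vs ~4264 MB/s - essentially level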

> OTOH, dual Xeon E7501 systems are not cheap.   The ones I built for work
> were $3300 each with dual 2.8 GHz and 2 GB RAM, but without hard drives.
> These are 2U rackmount servers using Intel's SE7501WV2 motherboard and an
> Intel SE2300 rack chassis.  They are also *extremely* noisy (seems to be a
> feature of all dual Xeon 2U rack servers, as they need massive cooling for
> the CPUs and 6 hotswap SCSI drives).

The availability of consumer mobos with "Granite Bay" chipsets makes 
Xeon-based systems look _very_ expensive for the CPU power you get from them. 
Effectively the only performance advantage of the Xeon is the larger L2 
cache, and memory contention issues will totally undermine this so far as 
we're concerned. The benefit of Xeon server systems is packing density - 
useful if you want to put a large bundle of CPUs in a small area. But 
shifting all that heat from a small case really does require a lot of 
airflow, hence the noise. In a 2U rackmount case there's not much height for 
a heatsink & fan, so small components have to be driven fast. Even then the 
airflow from a rackful of servers is _warm_ - sufficiently so to be useful 
as e.g. a hairdryer - and you're going to need aircon to dump the excess 
heat to the outside world.
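
To put a rough number on the airflow: the exhaust temperature rise is set by 
power over mass flow. A sketch with assumed figures (say ~300 W total 
dissipation per box; all the constants below are assumptions, not 
measurements):

    # Airflow needed to hold a ~300 W 2U server to a given temperature rise.
    power_w = 300.0   # assumed total dissipation (CPUs, drives, PSU losses)
    cp = 1005.0       # specific heat of air, J/(kg*K)
    rho = 1.2         # air density, kg/m^3
    dT = 15.0         # assumed intake-to-exhaust temperature rise, K
    m3_per_s = power_w / (cp * dT * rho)
    print(m3_per_s * 2119)  # ~35 CFM per server; a full rack needs far more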

Summary - anyone self-building systems to run Prime95/mprime at home is 
_almost certainly_ going to get far more CPU power per dollar (purchase 
price; electricity costs will be similar) from 2 x P4 systems using the 
"Granite Bay" chipset than from 1 x dual Xeon system with the same-speed 
CPUs.

Regards
Brian Beesley