64-bit servers have different performance characteristics than PC systems and tend to be used for different things. They are generally backplanes to which some number of processors, some amount of memory, and a lot of IO are attached. They aren't bought so much for processor speed (you could get a bunch of PCs for that) as for doing a lot of IO. Even a small server like the old Sun E450 (4 processors) had something like 6 or 8 PCI busses on it. Larger systems can be configured with a large number of IO cards for workloads that just need a few gigabits per second of network IO and a ton of disk space (multiple disk controllers, or FC controllers, all going full speed).
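
To put rough numbers on why the bus count matters (the bus widths and clocks here are my assumptions, not quoted specs, and real sustained throughput is well below the theoretical peak): a single 32-bit/33 MHz PCI bus tops out around 133 MB/s, while half a dozen 64-bit/33 MHz busses give you something like 1.5 GB/s of aggregate IO bandwidth to spread controllers across. A quick C sketch of the arithmetic:

    /* Back-of-the-envelope aggregate PCI bandwidth.
     * Assumed figures: 33 MHz clock, 32- or 64-bit bus width. */
    #include <stdio.h>

    static double pci_peak_mb_s(int width_bits, double clock_mhz)
    {
        /* bytes per transfer * transfers per second = MB/s */
        return (width_bits / 8.0) * clock_mhz;
    }

    int main(void)
    {
        double pc_bus     = pci_peak_mb_s(32, 33.0); /* typical PC bus */
        double server_bus = pci_peak_mb_s(64, 33.0); /* assumed server bus */
        int    n_busses   = 6;                       /* e.g. a small server */

        printf("one PC bus:        %.0f MB/s\n", pc_bus);
        printf("six server busses: %.0f MB/s aggregate\n", n_busses * server_bus);
        return 0;
    }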

You would use the memory to store temporary information as a query runs, and rely on the system's fast access to the disks to scan through the tables. You would generally attach anywhere from a few hundred gigs of disk (spread out over many smaller disks) up to many terabytes (it's been a while since I've done large-system admin work, so I have no idea what the largest systems are doing, but imagine 72" cabinets full of 72 GB or larger disks). This way, instead of getting speed from caching the data, you get speed by reading the data off the disks quickly.
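
If you want to see what sequential scan rate a given disk setup really delivers, a small timing loop is enough. This is just a sketch (the file path is a placeholder, and OS read-ahead plus the buffer cache will inflate the number on a second run):

    /* Rough sequential-read throughput test: read a large file
     * front to back in big chunks and report MB/s. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define BUF_SIZE (1 << 20)          /* 1 MB reads; adjust to taste */

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/data/bigtable.MYD"; /* placeholder */
        char *buf = malloc(BUF_SIZE);
        int fd = open(path, O_RDONLY);
        if (buf == NULL || fd < 0) {
            perror("open/malloc");
            return 1;
        }

        struct timeval start, end;
        long long total = 0;
        ssize_t n;

        gettimeofday(&start, NULL);
        while ((n = read(fd, buf, BUF_SIZE)) > 0)
            total += n;
        gettimeofday(&end, NULL);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_usec - start.tv_usec) / 1e6;
        printf("%lld bytes in %.2f s = %.1f MB/s\n", total, secs, total / secs / 1e6);

        close(fd);
        free(buf);
        return 0;
    }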

64-bit workstations usually had an advantage over PC systems in that the memory bus was not the bottleneck it can be on a PC, so cache misses cost less. That made them great as visualization workstations, where the system has to scan through a lot of memory quickly to generate an image or process scientific data.
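
The effect shows up with something as simple as streaming through a big array: on a machine with a wide, fast memory bus, the loop below stays much closer to peak memory bandwidth instead of stalling on cache misses. Again only a sketch; the 256 MB array size is an arbitrary "much bigger than any cache" choice, and the result depends heavily on the compiler and flags:

    /* Stream through a large array and report effective read bandwidth. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define N (32L * 1024 * 1024)       /* 32M doubles = 256 MB */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        if (a == NULL)
            return 1;
        for (long i = 0; i < N; i++)    /* touch every page first */
            a[i] = 1.0;

        struct timeval start, end;
        double sum = 0.0;

        gettimeofday(&start, NULL);
        for (long i = 0; i < N; i++)    /* sequential scan of the whole array */
            sum += a[i];
        gettimeofday(&end, NULL);

        double secs = (end.tv_sec - start.tv_sec)
                    + (end.tv_usec - start.tv_usec) / 1e6;
        printf("sum=%.0f, about %.0f MB/s effective read bandwidth\n",
               sum, N * sizeof(double) / secs / 1e6);

        free(a);
        return 0;
    }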

There are a lot of other factors going back to the fact that Digital, HP, Sun and IBM have always had a head start on superscalar and multi-core CPU designs, so comparing MHz between two processors was never very meaningful. On the other hand, many people never saw that advantage because they compiled with gcc, which was never the best choice for pure speed on a given processor; the vendors' own compilers usually did a better job of scheduling for their chips.

If you need a 64-bit processor for memory and file-size reasons and can sacrifice some processing speed (a penalty that often disappears because of the faster IO), there has always been a good used market, particularly for Sun equipment. I've seen dirt-cheap prices on fully loaded Sun E450 systems, which are very nice for their size. I think they hold 20 disks internally, and there are PCI slots for a lot more if you need large files.

On the other hand, I think "need 64 bit" and "affordable" rarely go together.

--
Michael Conlen

Mike Wexler wrote:

Not necessarily. People who need relatively affordable 64-bit systems may be waiting for the Opteron to stabilize. My experience is that Wintel solutions (like Opteron) tend to have at least a 2:1 price/performance advantage over Sun and DEC. Also, given that HP has basically dropped Alpha, I don't think a lot of people are likely to be deploying on that platform.


Dan Nelson wrote:


In the last episode (Jun 24), David Griffiths said:


I'm surprised there is not more interest in this; is it that not many
people work with large-ish (10+ gig) databases that need high-end
performance?


I think we have a MySQL database running on Tru64, and I'm sure it runs
great on Solaris.  My guess is that the people who needed over 2 GB of RAM
switched to 64-bit CPUs long ago.
