On Tuesday 30 October 2007 09:42, Russell Jones wrote:
> Randall R Schulz wrote:
> > On Tuesday 30 October 2007 08:58, Patrick Shanahan wrote:
> >> * BandiPat <[EMAIL PROTECTED]> [10-30-07 11:54]:
> >>  [...]
> >>
> >>> Moving to full 64bit should indeed be a better choice, you would
> >>> think, yet many of apps & plugins are still 32bit only.
> >>
> >> I believe that this is fluff and not fact.  There are a few apps
> >> which only have 32bit plug-ins for certain capabilities, ie: I
> >> still run 32bit firefox and plug-ins and 32bit java.  But
> >> *everything* else on my 10.1 system *is* 64bit.
> >
> > But why? Do you run applications that need a 64-bit address space?
> > If not, it's only more execution overhead to move nearly twice as
> > much data around to get any given task done. The fact that the main
> > system busses are 64-bits wide does not negate this overhead.
> > Almost everything a desktop computer does is RAM-limited (this is
> > even true for most CPU-intensive applications), so using a lot less
> > RAM really does help.
>
> 64-bit machines don't use much more RAM.

This is not primarily about RAM required, but about the volume of data 
that must be moved between the CPU and main store to get any given 
computational task done. 64-bit code uses 64-bit pointers and 64-bit 
long integers, and for any given hardware configuration, moving twice 
as much data takes twice as long.
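For instance (a minimal sketch of my own, not something measured for 
this thread): build the same trivial program with "gcc -m32" and 
"gcc -m64" and compare the native sizes. On Linux, pointers and longs 
double while a plain int stays at 32 bits:

  #include <stdio.h>

  int main(void)
  {
      /* On an x86-64 Linux box, -m64 reports 8/8/4 and -m32 reports 4/4/4. */
      printf("sizeof(void *) = %zu\n", sizeof(void *));
      printf("sizeof(long)   = %zu\n", sizeof(long));
      printf("sizeof(int)    = %zu\n", sizeof(int));
      return 0;
  }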

A program that uses data items of a specific size (8-bit, 32-bit, 
64-bit, etc.) and accesses them randomly (i.e., with no benefit of 
locality owing to caches) will see no difference. Any program that 
accesses native sizes (pointers and integers, basically) and does so 
sequentially and with a relatively small amount of processing within 
the CPU per item transferred will exhibit nearly a twofold reduction in 
sustained processing when switching to a 64-bit ISA. This is because 
the performance advantage gained by caches when accessing sequential 
addresses is halved by using data items of twice the size.
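To make the arithmetic concrete (an illustrative sketch with an assumed 
64-byte cache line, not a benchmark from this thread): a sequential walk 
over native-sized items gets 16 items per line fill when they are 4 
bytes wide but only 8 when they are 8 bytes wide, so a memory-bound loop 
like this one triggers roughly twice as many line fills for the same 
element count in 64-bit mode:

  #include <stdio.h>
  #include <stdlib.h>

  #define N (1 << 24)                       /* 16M native-sized items */

  int main(void)
  {
      unsigned long *a = malloc(N * sizeof *a);
      unsigned long sum = 0;

      if (!a)
          return 1;
      for (size_t i = 0; i < N; i++)        /* sequential, little CPU work per item */
          sum += (a[i] = (unsigned long)i);

      printf("items per 64-byte cache line: %zu\n",
             (size_t)64 / sizeof(unsigned long));
      printf("checksum: %lu\n", sum);
      free(a);
      return 0;
  }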

Keep in mind that the level-1 and level-2 caches are the primary reason 
we've seen such dramatic improvements in our systems' throughput. 
Otherwise, our spiffy fast processors would spend the vast majority of 
their time waiting to get or get rid of data (including executable 
instructions).

Actual results depend on the exact mix of accesses and instructions, of 
course. But a 32-bit processor (any processor operating in 32-bit mode) 
installed in a system with 64-bit data paths will always have some 
advantage over the 64-bit processor in that system. Conversely (not 
that it really matters), a 64-bit processor in a system with 32-bit 
data paths would always operate at a disadvantage w.r.t. the 32-bit 
processor.

That's why there is no point in running in 64-bit mode unless you're 
running applications that need, or are significantly advantaged by, a 
64-bit virtual address space, which in turn is an advantage only if you 
have more than 4 GB of physical memory.
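If you want to see where the 32-bit ceiling actually bites, here's a 
rough sketch (my own illustration, assuming the usual Linux defaults): 
reserve anonymous memory in 1 GB chunks until mmap() refuses. A 32-bit 
process typically stops around 3 GB of virtual address space; a 64-bit 
process sails past that:

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      const size_t chunk = 1UL << 30;       /* reserve 1 GB at a time */
      int chunks = 0;

      while (chunks < 64) {                 /* give up after 64 GB regardless */
          void *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
              break;
          chunks++;
      }
      printf("reserved %d GB of virtual address space\n", chunks);
      return 0;
  }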


> I asked Novell about this, 
> and they suggested that the 64-bit version of SLES 10 only uses about
> 1% more RAM.

That sounds a little low to me.


> There is a difference, but it's not that big. Do you run 
> 64-bit, Randall? I do at home, and I don't really have any memory
> problems (except with a certain Java application, but I think that's
> a bug). Also, if you're planning to go over 2Gb, it's the way to go,
> AFAICS. Things start to get kludgey in the OS above that limit with
> 32-bit, and it can't help performance.

I use 32-bit 10.3 on a Core 2 Duo system that (until I suffered a 
failure on one DIMM a couple of weeks ago) had 4 GB of RAM installed. 
The BIOS has an option to map the I/O space out of the first 4 GB, so 
the full 4 GB of RAM can be accessed, as long as the OS knows how to 
use PAE, which Linux certainly does.
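In case anyone wants to check their own box, something like this (a 
quick sketch of my own, not from any BIOS or kernel docs) tells you 
whether the CPU advertises PAE at all; it just scans the flags lines 
of /proc/cpuinfo:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/cpuinfo", "r");
      char line[2048];
      int pae = 0;

      if (!f) {
          perror("/proc/cpuinfo");
          return 1;
      }
      while (fgets(line, sizeof line, f))
          if (strncmp(line, "flags", 5) == 0 && strstr(line, " pae"))
              pae = 1;
      fclose(f);
      printf("PAE %s\n", pae ? "reported by the CPU" : "not reported");
      return 0;
  }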


Randall Schulz