On Wednesday 26 February 2003 01:31 pm, Tom Brinkman wrote:
> On Wednesday February 26 2003 11:23 am, Seedkum Aladeem wrote:
> > >       While there's already 64 bit production machines, that use
> > > gigs of ram, there's also already Linux performance optimized
> > > kernels for them.   IIRC, the kernels were ready before the
> > > systems were. OTOH, before you'll see 64 bit desktop systems,
> > > 512 MB of RAM will still be overkill.
> >
> > Thanx Tom,
> >
> > This suggests that the performance penalty is brought about by
> > hardware limitations (i.e. CPU architecture) and not artificially
> > introduced by sloppy memory management software. It also suggests
> > that some register somewhere in the CPU is not a full 32 bits
> > long. I thought 32 bits of address give 4G of address space and
> > not 1G.
> >
> > Maybe AMD should make 32 bit CPUs address the full 4G before going
> > to 64 bit CPUs.
> >
> >
> > Seedkum
>
>     Well, you're straining the limits of my ability to explain it
> further ... mainly cause I dunno either ;)
>
>     I will say it's not so much "hardware limitations (i.e. CPU
> architecture)", but has more to do with mathematics... in the realm
> of binary and hexadecimal numbers, two's complement, and how many
> distinct values 32 bits can represent (permutations and
> combinations).
>
>     It's only been a few years since the 'other OS' even graduated
> from 16 bit computing, and as I understand it M$ isn't fully there
> yet, mostly due to trying to support legacy applications. Linux has
> always been capable and willing to re-write software. There have
> been proprietary UN*X OSs and applications that have long been 64
> bit capable. Hardware design isn't the big problem; user resistance
> to change is probably the bigger factor.
>
>     Many might remember the 2000 hoopla that all computers would start
> messing up due to not being able to recognize the difference between
> 1900 and 2000 dates.  My brief self-taught foray into programming (C,
> C++) at least acquainted me with the fact that dates were stored as
> code numbers, even on M$/DOS OSs. Unix systems count seconds since
> 1970, and the real limitation is in 2038, when a signed 32-bit
> counter overflows and systems barf on numbers higher than 32 bits
> can hold. The 32-bit limit simply runs out of distinct values.
>
>     Same goes for memory arrays (RAM), although there are some kludges
> (e.g. Intel's PAE) that can be employed to get 32 bit kernels
> further. BUT, that's where the performance hit comes in.  So, at
> least in my understanding, it's not an OS or user-accepted hardware
> limit, as much as it's just the mathematics of binary numbers and
> two's complement with just 32 bits to work with.
>
>     BTW, I believe filesystems are also governed by the same
> mathematical laws and bits ;)  I'm mostly doing some educated
> guessing about all the above; Civileme, Juan, Warly, or Todd
> probably know.
Yep

1G requires 30 bits to address (2^30 = 1G), plus a bit left free for the 
sign. Since address arithmetic should really be unsigned, staying below the 
sign bit is the easiest way to implement address arithmetic using 32-bit 
signed arithmetic registers.

2G (2^31) is as high as one is likely to go with 32-bit addressing using 
signed arithmetic in the registers. Unsigned comes extra, unless the hardware 
also supports unsigned arithmetic (a C compiler supports unsigned arithmetic 
whether the architecture does or not, which means calling subroutines to add, 
subtract, multiply, and divide unsigned 32-bit numbers if the architecture 
does not offer such arithmetic).

C compilers have target memory models as well, so that the compiled code is 
efficient for the expected runtime environment.  This is where the sizing of 
32-bit CPU kernels comes into play, as well as the optimizations of the 
kernel itself.

Now the more recent Windows systems supposedly support a file size of 2 
terabytes.  That is 2 to the power 41 bytes, which indicates a 41-bit file 
offset if unsigned and 42 bits if signed.  64-bit Linuxes generally address 
either 8 or 16 exabytes (2 to the power 63 or 64 bytes), and file sizes can 
potentially be at least that large.

One compiler, xbasic, which works on any 386 running Windows or Linux with X, 
actually offers a 64-bit signed fixed-point data type.  Some interpreters 
offer arbitrary-size numbers in fixed point (like Python's longs, e.g. 
BIG = 1L).

Limits are limits only because there are performance hits associated with 
size, given the architectures available.  

Civileme

