On 05/24/10 23:48, Adriano Moura wrote:
This is actually normal. 64-bit systems use 64 bits per memory
address, by default.

That alone would make 64-bit systems eat twice as much memory as a
32-bit system.

Only for the memory-address part of the data (a.k.a. "pointers"). UTF-8 text will still take up the usual number of bytes for any given piece of text, and integer values will frequently take up the same amount of space. (Programmers *can*, if they're crazy, make any differences they want in their program depending on the number of bits, but typically don't.) By this logic (which is mostly correct), programs should use somewhere between 1x and 2x as much memory, depending on what fraction of their data is addresses. (Probably never as much as 2x, because malloc() keeps some bookkeeping data that probably isn't all addresses, because executable code isn't made of addresses, because any external data such as on the disk or the Web won't be made of addresses, and so on.)
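
To see this concretely, here's a minimal sketch: compile the same file with and without gcc's -m32 flag, and only the pointer line changes.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *msg = "héllo";  /* UTF-8: 6 bytes on 32-bit and 64-bit alike */

        printf("sizeof(char *) = %zu\n", sizeof(char *)); /* 8 on x86_64, 4 on i686 */
        printf("sizeof(int)    = %zu\n", sizeof(int));    /* 4 on both */
        printf("strlen(msg)    = %zu\n", strlen(msg));    /* 6 either way */
        return 0;
    }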

Of course your program can be coded to use 32-bit
variables,

not possible for memory addresses under a 64-bit binary ABI, as far as I know...

but hey, isn't the larger number representation one of the
64-bit advantages?

not really, not for integers. The advantage for integers is that operations are faster on integers that can hold values up to about 2^64; integers that only hold up to about 2^32 are the same speed. (On 32-bit machines, compilers can emulate 64-bit ints with pairs of 32-bit ones.) I don't see the point of C's variable-size integers like "long" being 32 bits sometimes and 64 bits other times (except for the purpose of being exactly the same size as an address, essentially in order to hold an address... silly programs...), because people have to write their code to be correct at all possible integer sizes, which basically means constraining legitimate values to the smaller size anyway.
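
If you want integers that don't change size across ABIs, C99's <stdint.h> already provides fixed-width types (and intptr_t for the "exactly the size of an address" case). A quick sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(long)     = %zu\n", sizeof(long));     /* 4 or 8, ABI-dependent */
        printf("sizeof(int32_t)  = %zu\n", sizeof(int32_t));  /* 4 everywhere */
        printf("sizeof(int64_t)  = %zu\n", sizeof(int64_t));  /* 8 everywhere */
        printf("sizeof(intptr_t) = %zu\n", sizeof(intptr_t)); /* wide enough for a pointer */
        return 0;
    }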

The address-space advantage of 64 bits is that your program can address more than 4 virtual GB of information at once (per process; RAM+swap used = data+code+miscellaneous). If you have less than three or four gigabytes of RAM, this 32-bit limitation is unlikely to matter. Well, it does affect 'mmap' of several-gigabyte files... (there are always obscure effects :-)
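
Here's a sketch of that mmap case (the /tmp/bigfile path is hypothetical; on a 32-bit build you'd also want -D_FILE_OFFSET_BITS=64 so open() and fstat() can even see a file that big):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/bigfile", O_RDONLY);  /* hypothetical ~5 GB file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* On i686 (SIZE_MAX = 2^32 - 1) this check fires for a 5 GB file;
           on x86_64 it passes with room to spare. */
        if ((uint64_t)st.st_size > SIZE_MAX) {
            fprintf(stderr, "file won't fit in this address space\n");
            return 1;
        }

        /* Even when the size fits, a 32-bit process can still get ENOMEM
           if there's no large enough free hole in its address space. */
        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }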

On x86 architectures, 64-bit code also has access to more CPU registers, which tends to make it run faster (although it can suffer when the bigger data exhausts your RAM or fills up the CPU caches more quickly). There are other little differences like this too.

Also, if you want a 64-bit system, you may want huge quantities of
memory. More than 3 GB, which makes most of the worrying about memory
consumption somewhat useless.

No, 3 GB doesn't make worrying about memory consumption useless. A web browser with 100 tabs eats RAM. A video-editing application eats RAM. Heck, even Amarok eats 80 MB of RAM, and uses some CPU when it's not even playing music, these days. Also check out 'df /': however many gigabytes of installed software you have, if you were using it all at once, it can make your system faster for that software to remain cached in RAM... But if your system is fast enough for you, don't waste time tweaking it, because if you do, it will *still* be fast enough for you!

Personally, I've gone back and forth between 64-bit and 32-bit systems several times on my 2 GB machine, and I don't think there's a readily detectable performance difference. Maybe 64-bit uses a bit more RAM yet uses the CPU a bit more efficiently.

On the other hand, there is a binary-compatibility effect (proprietary code and viruses might work a bit better on 32-bit x86; I dunno, I don't try them much).


2010/5/24 Dan McGee <dpmc...@gmail.com>:
On Mon, May 24, 2010 at 8:13 PM, Gary Wright <wrigg...@gmail.com> wrote:
2010/5/24 Frédéric Perrin <frederic.per...@resel.fr>:

On a 64-bit machine, in « char *p; », p will use 64 bits (8 bytes),
instead of 4 bytes on a 32-bit machine [I'm talking about p, not about
*p, which doesn't look like it exists]. Gary Wright seems to be saying
that the impact is negligible. Nicky726 seems to be saying that there
is a difference of up to 80%. I am surprised by such a claim, but there
seem to be anecdotes on Google of people seeing the same thing. As I
don't have a 64-bit machine, I can't test for myself.
--
Fred

Well, here's something vaguely empirical. I just downloaded the two
latest netinstall media and threw them on a USB stick. I ran
precisely four commands after logging in as root on each netinstall
arch:

1) mkdir /mnt/tmp
2) mount /dev/sda3 /mnt/tmp  # my home partition
3) uname -a >> /mnt/tmp/gary/memcomp
4) free -m >> /mnt/tmp/gary/memcomp

results to be seen here:
http://aur.pastebin.com/YwTJA6cR

short story:  ~29 MB more used on x86_64... or about 30 percent.

But when installing a whole system, many more variables come into
play. It might have just been my dumb luck that RAM usage ended up
within 1-2 MB of each other.

47 MB - 21 MB (for a difference of 26 MB) is what you want to be
looking at and nothing else. Throw buffers and cache out the window.
Of course, that now skews the percentage a lot higher than what you
stated, to (47 - 21) / 21 ≈ 124%. I'm not buying those numbers, though,
as you didn't capture nearly enough information and not all that much
was running.

More useful would probably be things like a pmap comparison of the same
binaries after doing as close to identical operations as possible. I'm
not sure even that would help; see the following pastebin for some
deceptive results: http://aur.pastebin.com/GzjTZYMe
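
One crude way to attempt it (a sketch, assuming a Linux /proc): build the same trivial program with gcc -m32 and gcc -m64 and have each binary report its own resident set size.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long resident = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (!f) { perror("fopen"); return 1; }

        /* statm fields (in pages): size resident shared text lib data dt */
        if (fscanf(f, "%*ld %ld", &resident) != 1) { fclose(f); return 1; }
        fclose(f);

        printf("pointer size %zu bytes, RSS %ld kB\n",
               sizeof(void *), resident * sysconf(_SC_PAGESIZE) / 1024);
        return 0;
    }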

-Dan


