Apologies again for commenting in the wrong place.

On 05/12/2019 16:38, Mark Shannon wrote:

> Memory access is usually a limiting factor in the performance of modern
> CPUs. Better packing of data structures enhances locality and reduces
> memory bandwidth, at a modest increase in ALU usage (for shifting and
> masking).

I don't think this assertion holds much water:

1. Caching makes memory access much less of a limit than you would expect.
2. Non-aligned memory accesses vary from inefficient to impossible depending on the processor.
3. Shifting and masking isn't free, and again on some processors can be very expensive (see the sketch below).
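To put some (invented) numbers on that last point, here is a minimal C sketch of the trade-off: extracting a bit-packed field costs a shift and a mask on every access, while a naturally-aligned field is a single load. The field names and widths are made up for illustration; this is not CPython code.

```c
/* Hypothetical sketch (not CPython code): the cost of bit-packing a field
 * versus keeping it byte-aligned.  Field names and widths are invented
 * purely for illustration. */
#include <stdint.h>

/* Packed layout: a 20-bit "line" and a 12-bit "column" squeezed into one
 * 32-bit word. */
static inline uint32_t packed_get_line(uint32_t word)
{
    return word & 0xFFFFFu;           /* mask */
}

static inline uint32_t packed_get_col(uint32_t word)
{
    return (word >> 20) & 0xFFFu;     /* shift + mask */
}

/* Aligned layout: the same data in two naturally-aligned fields. */
struct aligned_pos {
    uint32_t line;
    uint16_t col;
};

static inline uint32_t aligned_get_line(const struct aligned_pos *p)
{
    return p->line;                   /* single aligned load, no ALU work */
}
```

Whether the saved bandwidth pays for the extra ALU work depends entirely on the processor and the access pattern, which is rather the point.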

Mark wrote:
>>> There is also the potential for a more efficient instruction format,
>>> speeding up interpreter dispatch.
I replied:
>> This is the ARM/IBM mistake all over again.
Mark challenged:
> Could you elaborate? Please bear in mind that this is software
> dispatching and decoding, not hardware.

Hardware generally has a better excuse for instruction formats, because for example you know that an ARM only has sixteen registers, so you only need four bits for any register operand in an instruction. Except that when they realised that they needed the extra address bits in the PC after all, they had to invent a seventeenth register to hold the status bits, and had to pretend it was a co-processor to get opcodes to access it. Decades later, status manipulation on modern ARMs is, in consequence, neither efficient nor pretty.
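To make that concrete, here is a hypothetical sketch of what a fixed four-bit register field looks like in software terms. It is not the real ARM encoding, just an illustration of how a width chosen for sixteen registers leaves no room for a seventeenth.

```c
/* Hypothetical sketch of a fixed-width operand field, in the spirit of the
 * ARM example above (this is NOT the real ARM encoding).  With 4 bits per
 * register operand you can name registers 0-15; a seventeenth register
 * simply does not fit and has to be reached by some other mechanism. */
#include <assert.h>
#include <stdint.h>

#define REG_FIELD_BITS  4u
#define REG_FIELD_MASK  ((1u << REG_FIELD_BITS) - 1u)   /* 0xF */

/* Encode a destination register into bits 12-15 of an instruction word. */
static inline uint32_t encode_rd(uint32_t insn, unsigned reg)
{
    assert(reg <= REG_FIELD_MASK);    /* register 16 and up cannot be encoded */
    return insn | ((reg & REG_FIELD_MASK) << 12);
}

static inline unsigned decode_rd(uint32_t insn)
{
    return (insn >> 12) & REG_FIELD_MASK;
}
```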

You've talked some about not making the 640k mistake (and all the others we could and have pointed to) and that one million is a ridiculous limit. You don't seem to have taken on board that when those limits were set, they *were* ridiculous. I remember when we couldn't source 20MB hard discs any more, and were worried that 40MB was far too much... to share between twelve computers. More recently there were serious discussions of how to manage transferring terabyte-sized datasets (by van, it turned out).

Sizes in computing projects have a habit of going up by orders of magnitude. Text files were just a few kilobytes, so why worry about only using sixteen-bit sizes? Then flabby word processors turned that into megabytes, audio put another order of magnitude or two on that, video is up in the multiple gigabytes, and the amount of data involved in the Human Genome Project is utterly ludicrous. Have we hit the limit with Big Data? I'm not brave enough to say that, and when you start looking at the numbers involved, one million anythings doesn't look so ridiculous at all.

--
Rhodri James *-* Kynesim Ltd