On 05/06/2015 02:26 AM, Steven D'Aprano wrote:
> On Wednesday 06 May 2015 14:05, Steven D'Aprano wrote:
>
> My interpretation of this is that the difference has something to do with
> the cost of multiplications. Multiplying upwards seems to be more expensive
> than multiplying downwards, a result I never would have predicted, but
> that's what I'm seeing. I can only guess that it has something to do with
> the way multiplication is implemented, or perhaps the memory management
> involved, or something. Who the hell knows?


I had guessed that the order of multiplication would make a big difference, once the product started getting bigger than the machine word size.

The reason I thought that is that if you start multiplying at the top value (and finish by multiplying by 2), you spend more of the time multiplying big-ints.

That's why I made sure that both Cecil's and my implementations were counting up, so that wouldn't be a distinction.
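
Something like this would show the effect I had in mind (purely illustrative -- the function names and the choice of n are mine, not Steven's or Cecil's actual code):

    # Illustrative sketch only; not the code from the thread.
    from timeit import timeit

    def fact_up(n):
        # Multiply 2 * 3 * ... * n: the running product stays a
        # machine-word-sized int for longer before it spills into a big-int.
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def fact_down(n):
        # Multiply n * (n-1) * ... * 2: the running product goes big-int
        # almost immediately, so more of the loop is big-int * int.
        result = 1
        for i in range(n, 1, -1):
            result *= i
        return result

    n = 10000
    assert fact_up(n) == fact_down(n)
    print("counting up:  ", timeit(lambda: fact_up(n), number=10))
    print("counting down:", timeit(lambda: fact_down(n), number=10))

On that reasoning, counting up should come out faster simply because the running product stays small for longer.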

I'm still puzzled, as it seems your results imply that big-int*int is faster than int*int where the product is also int.

That could use some more testing, though.
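
Something along these lines would be a quick way to probe it (again just a sketch with arbitrary constants, not the benchmark from the thread):

    # Quick-and-dirty probe: int * int versus big-int * int.
    from timeit import timeit

    t_small = timeit("x * 7", setup="x = 12345", number=1000000)      # int * int, product still an int
    t_big   = timeit("x * 7", setup="x = 3 ** 5000", number=1000000)  # big-int * int
    print("int * int     :", t_small)
    print("big-int * int :", t_big)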

I still say a cutoff of about 10% is where we should draw the line in an interpreted system. Below that, you're frequently measuring noise and coincidence.

Remember the days when you knew how many cycles each assembly instruction took, and could simply add them up to compare algorithms?


--
DaveA