On 23/02/2018 12:41, Chris Angelico wrote:
> On Fri, Feb 23, 2018 at 11:17 PM, bartc <b...@freeuk.com> wrote:

>> Integer pixel values

> Maybe in 64 bits for the time being, but 32 certainly won't be enough.

Why? Will people's eyes evolve to see quintillions of colours?

> As soon as you do any sort of high DPI image manipulation, you will
> exceed 2**32 total pixels in an image (that's just 65536x65536, or
> 46341x46341 if you're using signed integers)

(Huh?)

> ; and it wouldn't surprise
> me if some image manipulation needs that many on a single side - if
> not today, then tomorrow. So 64 bits might not be enough once you
> start counting total pixels.

A four-billion x four-billion image would be quite something. I wonder how long it would take CPython to loop through all 18e18 pixels? (BTW this still fits into unsigned 64 bits.)
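The arithmetic, at least, is easy to check in Python itself. The loop rate below is only a guess at what an empty CPython loop manages, not a measurement:

# Back-of-envelope check of the figures above. The loop rate is an assumed
# (optimistic) figure for CPython, not a measured one.

side = 2**32 - 1                  # ~4.3 billion pixels along each axis
pixels = side * side              # ~1.8e19 total pixels - the "18e18" above

print(pixels)                     # 18446744065119617025
print(pixels < 2**64)             # True: still representable in an unsigned 64-bit word
print(65536 * 65536 == 2**32)     # True: 65536x65536 is exactly 2**32 pixels

assumed_rate = 10**8              # assumed empty-loop iterations per second
years = pixels / assumed_rate / (365 * 24 * 3600)
print(f"~{years:,.0f} years to visit every pixel")    # several thousand years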

These speculations are pointless. Graphics is very hardware-oriented now, and this is mainly the concern of graphics processors, which already have datapaths much wider than 64 bits - because you can, of course, use more than one 64-bit value.

>> Most calculations on these can also be done in 64 bits (except multiply and
>> power where the ceiling might be 32 bits or lower).

> Exactly. Most.

OK, good, we've agreed on something: Most. Actually most values in my programs fit into 32 bits. Most of /those/ are likely to fit into 16 bits.

In a program, most integers are going to represent small values.
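(To illustrate what I mean - the sample values below are invented, not taken from a real program, so this demonstrates the idea rather than proves anything:)

# Toy illustration: bucket some made-up values by the width needed to hold them.
from collections import Counter

def width_needed(n):
    """Smallest of 16/32/64 signed bits that can hold n, else 'bignum'."""
    for bits in (16, 32, 64):
        if -(2 ** (bits - 1)) <= n < 2 ** (bits - 1):
            return bits
    return "bignum"

sample = [0, 7, -3, 255, 1920 * 1080, 10**6, 2**40, 10**25]   # invented values
print(Counter(width_needed(n) for n in sample))
# Counter({16: 4, 32: 2, 64: 1, 'bignum': 1})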

> And then you'll run into some limitation somewhere.

> Let's say you want to know how many IP addresses, on average, are
> allocated to one person. You'll probably get a single digit number as
> your answer. Can you calculate that using 32-bit integers? What about
> 64-bit? Should you use bignums just in case? Take your pick, let me
> know, and then I'll tell you my answer - and why.

I'm not saying bignums are needed 0% of the time. But they don't need to be used for 100% of all integer values, just in case.
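(My guess at where Chris's example is heading: the answer is tiny, but the intermediate total isn't. Roughly, using a 2018-ish population figure:)

ipv4_addresses = 2**32        # 4294967296 - one past the unsigned 32-bit maximum
people = 7_600_000_000        # rough world population estimate

print(ipv4_addresses / people)        # about 0.57 addresses per person
print(ipv4_addresses > 2**31 - 1)     # True: the total itself overflows a signed 32-bit int
print(ipv4_addresses <= 2**63 - 1)    # True: a signed 64-bit int copes comfortably

ipv6_addresses = 2**128               # IPv6 blows straight past 64 bits
print(ipv6_addresses > 2**64 - 1)     # True: here you would want bignums (or floats)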


> What sort of handicap is it? Performance?

I don't know. Wasn't that brought up in the discussion? That Python's fib() routine could seamlessly accommodate bigger values as needed, while Julia's couldn't - and maybe that was one reason Julia had the edge.
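(If I've remembered the thread right, the specific failure is easy to show. This is a quick iterative sketch, not the benchmark code from the thread:)

# fib(93) is the first Fibonacci number too big for a signed 64-bit integer,
# so a fixed-width fib() wraps around there; Python's just keeps going.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(92))              # 7540113804746346429  - still fits in int64
print(fib(93))              # 12200160415121876738 - too big for int64
print(fib(93) > 2**63 - 1)  # True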

> Or maybe, in Bartville, this flexibility is
> worth paying for but large integers are useless?

In 40 years of coding I can say that I have very rarely needed arbitrary-precision integer values. (They come up in some recreational programs, or when implementing arbitrary-precision integers in the languages used to run those programs!)

But I have needed flexible strings and lists all the time.

Is my programming experience really that different from anybody else's?

--
bartc
