On 11/05/2018 01:11, Chris Angelico wrote:
> On Fri, May 11, 2018 at 8:43 AM, bartc <b...@freeuk.com> wrote:
>> This is Wrong, and would have been just as obviously wrong in 1989.

> Having spent many years programming in C and working on Unix, I
> strongly disagree.

Using C is apt to give you a rather warped view of things, such that everything in that language seems superior to any other way of doing it.

(And actually, because C didn't have binary literals for a long time (I think it still doesn't, officially), there has been a lot of discussion in comp.lang.c about how they are not really necessary:

: A real programmer can auto-convert from hex
: It's so easy to knock up some macro that will do it
: They have /never/ needed binary in decades of programming

And so on. Meanwhile my own /recent/ code includes lines like this:

    when 2x'101'11010'000 then ... # (x64/x87 disassembler)

although I think I still prefer a trailing B, with separator:

    when 101'11010'000'B then ...

Try /that/ in hex /or/ octal.)
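
(For what it's worth, Python itself has had something comparable since 3.6: 0b literals plus underscore digit separators. A minimal sketch of a similar bit pattern, with the field grouping chosen purely for illustration:

    # Python 3.6+ accepts underscores as digit separators in numeric
    # literals, so a bit pattern can be grouped by field, much like the
    # ' separators above.
    pattern = 0b101_11010_000     # three fields: 101 / 11010 / 000
    assert pattern == 0x5D0       # the same value written in hex
    print(bin(pattern))           # -> 0b10111010000 (grouping lost on output)

)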

> This was *not* obviously wrong. It's easy to say
> "but look at the real world"; but in the 80s and 90s, nobody would
> have said that it was "obviously wrong" to have the native integer
> wrap when it goes past a certain number of bits. And in fact, your
> description of the real world makes it equally obvious that numbers
> should have a fixed width:

Much of the real world /does/ use fixed widths for numbers, like that odometer for a start, or most mechanical or electronic devices that need to display numbers. And many such devices wrap as well (remember tape counters).

Even my tax return has a limit on how big a sum I can enter in the boxes on the paper form.

So the concept of a fixed upper width, sometimes with modulo wrap-around, isn't alien to the general public. But leading zeros that completely change the perceived value of a number IS.
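
(The odometer is really just modular arithmetic, which is also all a fixed-width machine integer does at 2**N. A throwaway sketch of a hypothetical five-digit counter:

    # A hypothetical five-digit mechanical counter: it wraps at 100000,
    # just as an N-bit machine integer wraps at 2**N.
    def tick(counter, digits=5):
        return (counter + 1) % 10**digits

    print(tick(99_999))   # -> 0: the counter rolls over

)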

> Octal makes a lot of sense in the right contexts. Allowing octal
> literals is a Good Thing. And sticking letters into the middle of a
> number doesn't make that much sense, so the leading-zero notation is a
> decent choice.

No it isn't. You need something that is much more explicit. I know your C loyalty is showing here, but just admit it was a terrible choice in that language, even in 1972. And just as bad in 1989.

(I've used one language where the default base (it called it the radix) was octal. But even there, the mechanism for overriding the default was more obvious than the mere presence of a leading zero.)
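
(Python's own history makes the point: Python 2 silently treated a leading zero as octal, while Python 3 demands the explicit 0o prefix and rejects the old form outright. A quick sketch, assuming a Python 3 interpreter:

    # Python 3: octal needs the explicit 0o prefix.
    print(0o12)            # -> 10

    # The old C-style leading-zero form is now a syntax error, precisely
    # because 012 silently meant ten in Python 2.
    try:
        eval("012")
    except SyntaxError as e:
        print("rejected:", e)

)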

> It's all very well to argue that it's a suboptimal
> choice; but you have to remember that you're arguing that point from
> 2018, and we have about thirty years of experience using Python. The
> choice was most definitely not fundamentally wrong. Ten years ago, the
> point was revisited, and a different choice made. That's all.

This gives a few different ways of writing hex and octal:

  https://rosettacode.org/wiki/Literals/Integer

The leading-zero method for octal seems to have permeated a few other languages. F#, among others, uses the 0o prefix instead, which in my browser's font looks like Oo12. (If, for some reason, you did want leading zeros, then the number would look like OoO12.)
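
(Python shares that hazard, for what it's worth: the prefix letter may be upper case, and zeros after the prefix are legal, so in an unhelpful font the forms below read like a row of capital Os. A small sketch:

    # All three spell the same octal value; only the first is easy to read.
    print(0o12)    # -> 10, lower-case prefix
    print(0O12)    # -> 10, an upper-case O in the prefix is also legal
    print(0O012)   # -> 10, and a leading zero after the prefix is allowed

)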

Why do language designers perpetuate bad ideas? The reason for designing a new language is just so you can get rid of some of them!

--
bartc


