On 27.01.2016 11:59, Terry Reedy wrote:
On 1/26/2016 12:35 PM, Sven R. Kunze wrote:
I completely agree with INADA.

I am not sure you do.


I am sure I am. He wants to solve a problem the way that is natural to him as a unique human being.

It's like saying, because a specific crossroad features a higher
accident rate, *people need to change their driving behavior*.
*No!* People won't change and it's not necessary either. The crossroad
needs to be changed to be safer.

Safer crossroads tend to be slower unless one switches to alternate designs that eliminate crossing streams of traffic.

So Python can be safer AND faster ( = different design) if we try hard enough.

Languages that don't have integers but use residue classes (with wraparound) or finite integer classes (with overflow) as a (faster) substitute have, in practice, lots of accidents (bugs) when used by non-experts. Guido noticed this, gave up on changing coder behavior, and put the expert behavior of checking for wraparound/overflow and switching to real integers (longs) into the language. (I forget when this was added.)
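The behavior Terry describes is easy to see directly. A minimal sketch contrasting Python's arbitrary-precision integers with C-style fixed-width wraparound (emulated here with masking, since Python itself never wraps):

```python
# Python 3 ints are arbitrary precision: no wraparound, results stay exact.
n = 2**63 - 1          # the maximum value of a signed 64-bit integer
print(n + 1)           # 9223372036854775808 -- exact, not a wrapped negative

# A C-style signed 64-bit add would wrap; we can emulate it with masking:
def wrap_add_int64(a, b):
    """Simulate signed 64-bit two's-complement addition."""
    s = (a + b) & (2**64 - 1)              # keep only the low 64 bits
    return s - 2**64 if s >= 2**63 else s  # reinterpret the top bit as sign

print(wrap_add_int64(2**63 - 1, 1))  # -9223372036854775808
```

The second print shows exactly the kind of silent accident (a large positive number suddenly becoming negative) that Python's automatic promotion to big integers spares non-expert users from.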

I am glad he did because it helps humans solve their problems in a natural way without artificial boundaries. :)

The purpose of the artificially low input to fib() is to hide and avoid the bugginess of most languages. The analogous trick with testing crossroads would be to artificially restrict the density of cars to mask the accident-proneness of a 'fast, consenting-adults' crossroads with no stop signs and no stop lights.


I am completely with you here; however, I disagree about the suspected hiding/avoiding mentality. You say:

Python -> *no problem with big integers but slow at small integers*
Other Language -> *faster but breaks at big integers*

Yes. That's it.

We haven't solved the human side, however. A human AGAIN would need to compromise on either speed or safety.


My point is: it would be insanely great if Python could be more like "*fast AND no problem with big integers*". No compromise here (at least none noticeable).

So, people could entirely *concentrate on their problem domain* without ever worrying about such tiny, nitty-gritty computer science details. I love computer science, but people from other domains have neither the time nor the knowledge to decide properly. That's the reason why they might decide by using some weird micro-benchmarks. Just humans.

Same goes for Python. If it's slow using the very same piece of code
(even superficially), you better make the language faster.
Developers won't change and they won't change their code either. Just
not necessary.

Instead of making people rewrite fib to dramatically increase speed, we added the functools.lru_cache decorator to get most of the benefit without a rewrite. But Inada rejected this Python speedup. An AST optimizer could potentially do the same speedup without the explicit decorator. (No side effects? Multiple recursive calls? Add a cache!)
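For concreteness, here is the speedup in question; a minimal sketch of the naive fib microbenchmark next to the memoized version using functools.lru_cache:

```python
from functools import lru_cache

# Naive recursive Fibonacci: exponential time, the classic microbenchmark.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# The same code with lru_cache memoizing each result: every fib(k) is
# computed once and then looked up, so the whole run is linear time.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075 -- instant; fib_naive(100) would take ages
```

The point stands: the source is unchanged except for one decorator line, which is why an optimizer that detects the pattern (pure function, repeated recursive calls) could in principle apply it automatically.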


Bingo! That's the spirit.

Why that decorator in the first place? Hey, I mean, if I ever want to write some cryptic-looking source code with three-letter abbreviations (LRU), I can use assembler again. But I discovered and love Python, and I never want to go back when my problem domain does not require me to. So, when a machine can detect such an optimization, hell, do it, please. It's more likely that I would apply it to the wrong function AND only in 10% of the correct cases: missing 90% and introducing some wild errors.

Again human stuff.

Btw. it would be a great feature for Python 3 to be faster than Python
2.

We all agree on that. One way for this to happen is to add optimizers that would make Python 'cheat' on microbenchmarks.

Then, we are all set. :)

Best,
Sven
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
