On 10/30/2012 01:17 AM, Steven D'Aprano wrote:
By the way Andrew, the timestamps on your emails appear to be off, or
possibly the time zone. Your posts are allegedly arriving before the
posts you reply to, at least according to my news client.
:D -- yes, I know about that problem. Every time I reboot, it shows up again... It's a distribution issue: my hardware clock is kept in local time, but when different scripts in my distribution read the clock, some refuse to accept that the system clock is not UTC. I'll be upgrading in a few weeks -- so I'm just limping along until then. My apologies.

Then I look forward to seeing your profiling results that show that the
overhead of subclassing list is the bottleneck in your application.

Until then, you are making the classic blunder of the premature optimizer:

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason — including
blind stupidity." — W.A. Wulf

I'm sure that's true.  Optimization, though, is a very general word.

There's a highway in my neighborhood that the government keeps trying to put more safety restrictions on, because it statistically registers as the road with the highest accident rate in the *entire* region.

Naturally, the government assumes that people in my neighborhood are worse drivers than usual and need to be policed more -- but the truth is that this highway is the *ONLY* access road for a densely populated area for dozens of miles in any direction. If there is going to be an accident, it will happen there; the extra safety precautions turn out to be unnecessary once the accident rate is looked at per capita among the drivers actually using the highway.

I haven't made *the* blunder of the premature optimizer, because I haven't implemented anything yet. Premature optimizers don't bother to hold a public conversation and take correction. OTOH, people who never optimize out of fear pay an ever-increasing bloat price over time.

I am not impressed by performance arguments when you have (apparently)
neither identified the bottlenecks in your code, nor even measured the
performance.
Someone else already ran a benchmark comparing a discrete loop against a slice operation.
The difference in speed was an order of magnitude.
I benchmarked a map operation, which was *much* better -- but still very slow by comparison.
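(The original benchmark wasn't posted here, but it was roughly this kind of comparison -- my own sketch, and the exact numbers will vary by machine and Python version:)

from timeit import timeit

setup = "data = list(range(1000))"

# Copy items 100..899 three different ways.
t_slice = timeit("data[100:900]", setup=setup, number=100000)
t_loop  = timeit("[data[i] for i in range(100, 900)]", setup=setup, number=100000)
t_map   = timeit("list(map(data.__getitem__, range(100, 900)))", setup=setup, number=100000)

print("slice: %.3f" % t_slice)
print("loop:  %.3f" % t_loop)
print("map:   %.3f" % t_map)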

Let's not confound the issue here -- I am going to implement the Python interpreter, and I am not bound by the optimization considerations of the present Python interpreter. There are things I can do as an implementer which you, as a Python programmer, can't. I have no choice but to re-implement and optimize the interpreter -- the question is merely how to go about it.

  You are essentially *guessing* where the bottlenecks are,
and *hoping* that some suggested change will be an optimization rather
than a pessimization.

Of course I may be wrong, and you have profiled your code and determined
that the overhead of inheritance is a problem. If so, that's a different
ball game. But your posts so far suggest to me that you're trying to
predict performance optimizations rather than measure them.
Not really; inheritance itself and its timing aren't my main concern. Even if the time were *0*, that wouldn't change my mind.

There are man-hours of debugging time caused by not being able to wrap around in a slice. (I am not ignoring the contrary man-hours that an API change's bugs would cost.)
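(To illustrate the kind of thing I mean -- my own toy example, not something from the thread: a computed stop index that drifts below zero silently changes meaning instead of raising.)

data = [0, 1, 2, 3, 4]
width = 2
for end in range(2, -3, -1):   # end walks down past zero
    # e.g. end=1 gives data[-1:1], which is suddenly []
    print("%d %r" % (end, data[end - width:end]))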

Human psychology is important, and it's a double-edged sword.

I would refer you to a book by Steve Maguire, Writing Solid Code -- Chapter 5, "Candy Machine Interfaces".

He uses the C function realloc() as an excellent example of a bad API, but still comments on one need that it *does* fulfill: "I've found it better to have one function that both shrinks and expands blocks so that I don't have to write *if* constructs every time I need to resize memory. True, I give up some extra argument checking, but this is offset by the *ifs* that I no longer need to write (*and possibly mess up*)."
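(A hypothetical Python analogue of Maguire's point, purely to illustrate -- one resize routine that both grows and shrinks, so the caller never writes the grow-vs-shrink *if* at all:)

def resize(buf, new_len, fill=0):
    """Grow or shrink a list in place to new_len -- the caller writes no ifs."""
    if new_len < len(buf):
        del buf[new_len:]
    else:
        buf.extend([fill] * (new_len - len(buf)))
    return buf

buf = [1, 2, 3]
resize(buf, 5)    # [1, 2, 3, 0, 0]
resize(buf, 2)    # [1, 2]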

* Extra steps that a programmer must take to achieve a task are places where bugs get introduced.

* APIs whose operation must be discovered by debugging, rather than being clear from reading the un-compiled code, are places where bugs get introduced.

These two points are not friendly with each other -- they are, in fact, generally in conflict.
Right, which means that people developing the libraries made
contradictory assumptions.
Not necessarily. Not only can monkey-patches conflict, but they can
combine in bad ways. It isn't just that Fred assumes X and Barney assumes
not-X, but also that Fred assumes X and Barney assumes Y and *nobody*
imagined that there was some interaction between X and Y.
They *STILL* made contradictory assumptions; each of them assumed the interaction mechanism would not be applied in a certain way -- and then used it based on that assumption.
Not carefully enough. He notes that he was using a provocative title and
that he doesn't actually think that Ruby is being destroyed. But the
actual harm he describes is real, e.g. bugs that take months to track
down.
Yes.  I read that *carefully*.
BTW: I'm not planning on monkey patching -- I did say your comments were well taken.

What you are talking about is namespace preservation;
I haven't mentioned namespaces. Nothing I have said has anything to do
with namespaces.
The set of keywords is effectively a namespace, and built-in names like xrange live in one too.
You don't want a pre-defined API method name to have its operation altered; you want the original names preserved. Overloading pollutes namespaces, etc.

If you want to write a language that is not Python, go right ahead.

That's not the only possibility. It will still be called Python, and for good reason.


If someone had a clear explanation of the disadvantages of allowing an
iterator, or a tuple -- in place of a slice() -- I would have no qualms
dropping the subject.  However, I am not finding that yet.  I am finding
very small optimization issues...
Python has certain public interfaces. If your language does not support
those public interfaces, then it might be an awesome language, but it is
not Python.
I didn't say I wouldn't support it.  How do you mean this?

Iterators cannot replace slices, because once you have looked up an
iterator value, that value is gone, never to be seen again.
Yes, they can replace slices -- just not in every case.
For example, let's say an iterator were allowed to replace a slice;

then a[ 1:3 ] would still be a slice, but a[ xrange(1,3) ] would not.
The second piece of code does not deny that slices exist; it just allows an iterator to be passed to __getitem__.
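(Roughly this kind of thing -- a sketch only, not an implementation, and "FlexList" is just a name I'm making up here:)

class FlexList(list):
    def __getitem__(self, key):
        # Integers and slices behave exactly as they always have.
        if isinstance(key, (int, slice)):
            return list.__getitem__(self, key)
        # Anything else is treated as an iterable of indices.
        return [list.__getitem__(self, i) for i in key]

a = FlexList([10, 20, 30, 40])
print(a[1:3])           # [20, 30]  -- ordinary slice, unchanged
print(a[xrange(1, 3)])  # [20, 30]  -- iterator of indices (range() on Python 3)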

Tuples cannot replace slices, because tuples do not have start, step and
stop attributes;
I refer you to my comments to Ian.


And none of them have a public indices method.
Hmm.. a new point.  Do you have a link?
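(Looking it up: this seems to be slice.indices(), which maps a slice onto concrete (start, stop, step) values for a given sequence length -- presumably what I'd have to mimic:)

s = slice(1, -1, 2)
print(s.indices(10))                     # (1, 9, 2)
print(slice(None, None, -1).indices(5))  # (4, -1, -1)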

The actual *need* for a slice() object still hasn't been demonstrated.
Hell, the actual *need* for slicing hasn't been demonstrated! Or Python!
Since C doesn't have slicing, nobody needs it! Am I right?
I asked about the need for slice(), not the others, in the OP.
Thanks for your comments on the names, etc. That's helpful -- and I hope I haven't driven your blood pressure up too much unintentionally.

Needed or not, that is what Python has, so if your language doesn't have
it, it isn't Python.
Fine. But if mine still has these, even if only as an *optional* wrapper function, it still *RUNS* all Python programs.

Sure. And a tuple can also be () or (1, "x", [], None, None, None, None).
Now your list.__getitem__ code has to deal with those instead.

And the present __getitem__ can be passed a perverted slice(), as Ian demonstrated and the bug list characterized as "goofy". These strange things simply cause an exception.
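(A quick check of what actually happens with one of those "goofy" slices:)

data = [1, 2, 3, 4]
weird = slice(1, "x")      # nothing stops this being constructed
try:
    data[weird]
except TypeError as e:
    print(e)  # slice indices must be integers or None or have an __index__ method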

So you want to get rid of slices because you want to save every byte of memory you can, and your way to do that is to introduce a new, *heavier* type that does more than slices? Oh man, that's funny.

I'm not introducing a new type!
Besides, tuples will likely be lighter on memory in my target, and I don't recall saying a tuple does more than slices. I did say that *iterators* were more *flexible* -- is that what you are thinking of?
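(And that's something to measure rather than argue over -- e.g., the exact numbers vary by platform and Python version:)

import sys
print(sys.getsizeof(slice(1, 3)))    # footprint of a slice object
print(sys.getsizeof((1, 3)))         # footprint of a 2-tuple
print(sys.getsizeof(xrange(1, 3)))   # footprint of an xrange (range on Python 3)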

--
http://mail.python.org/mailman/listinfo/python-list
