"David Cournapeau" <courn...@gmail.com> wrote in message
news:mailman.455.1283665528.29448.python-l...@python.org...
On Thu, Sep 2, 2010 at 7:02 PM, Michael Kreim <mich...@perfect-kreim.de>
wrote:

imax = 1000000000
a = 0
for i in xrange(imax):
   a = a + 10
print a

Unfortunately my Python Code was much slower [than Matlab] and I do not
understand why.

Getting the above kind of code fast requires the interpreter to be
clever enough so that it will use native machine operations on a int
type instead of converting back and forth between internal
representations.

You'd think that writing for i in xrange(1000000000) would give it a clue, but
it makes no difference.

Matlab, since version 6 I believe, has had a JIT to do
just that. There is no mature JIT-like implementation of Python that
will give you the same speed-up for this exact case today.

Or do I have to live with the fact that Matlab beats Python in this
example?

Yes. Without a JIT, Python cannot hope to get the same kind of speed
for this kind of example.

That being said, neither Matlab nor Python is especially good at
doing what you do in your example - for this exact operation, doing it
in C or another compiled language will be at least one order of
magnitude faster.

One order of magnitude (say 10-20x slower) wouldn't be so bad. That's what
you might expect for a dynamically typed, interpreted language.

But on my machine this code was more like 50-200x slower than C, for
unaccelerated Python.
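If you want to check the slowdown on your own machine, a minimal timing
sketch follows. The imax of 10**6 is an assumption chosen so the run
finishes quickly (the original used 10**9), and range here plays the
role of Python 2's xrange; the absolute time you get is of course
machine-dependent.

```python
import timeit

def loop(imax):
    # Same accumulation as the original example.
    a = 0
    for i in range(imax):
        a = a + 10
    return a

# Time a single run of one million iterations.
t = timeit.timeit(lambda: loop(10**6), number=1)
print(f"10**6 iterations took {t:.3f} s")
```

Scale the measured time up by a factor of 1000 to estimate the cost of
the original 10**9-iteration loop.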

Generally, you use matlab's vectorized operations,
and in that case, numpy gives you similar performances (sometimes
faster, sometimes slower, but in the same ballpark in general).
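For this particular example, a vectorized sketch might look like the
following. The imax of 10**7 is an assumption (smaller than the
original 10**9) so the intermediate array fits comfortably in memory;
numpy performs the summation loop in compiled code.

```python
import numpy as np

imax = 10**7  # assumed smaller than the original 10**9

# Pure-Python loop, as in the original example:
a = 0
for i in range(imax):
    a = a + 10

# Vectorized equivalent: build the operands as an array and let numpy
# sum them in C.
b = int(np.full(imax, 10, dtype=np.int64).sum())

assert a == b == 10 * imax
```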

That would simply relegate Python to a scripting language. It would be
nice if you could directly code low-level algorithms in it without relying
on accelerators, and not have to wait two and a half minutes (or whatever)
for a simple test to complete.

--
bartc
--
http://mail.python.org/mailman/listinfo/python-list