Filip Wasilewski wrote:
Jon Harrop wrote:
Yes. The time taken is dominated by memory accesses. The amount of
arithmetic is almost irrelevant.
I can say from my experience, that this depends much on the input size
and the CPU cache size. For arrays of about 2^18 elements or less and a 2MB cache
the
sturlamolden wrote:
Jon Harrop wrote:
Yes. The time taken is dominated by memory accesses. The amount of
arithmetic is almost irrelevant.
That is extremely interesting. It would explain why I see almost the
same performance in NumPy and Fortran 95 on this kind of task, using
array slicing
Jon Harrop wrote:
Yes. Non-sequential access is hugely expensive these days, and bounds
checking is virtually free. So that's one less reason to use C/C++... ;-)
The lifting scheme is sequential.
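A minimal NumPy sketch of one D4 decomposition level in the commonly cited Daubechies-Sweldens lifting form, to illustrate why the access pattern is sequential; the periodic boundary handling and index shifts here are assumptions, and the implementations benchmarked in this thread may differ in those details:

import numpy as np

def d4_lifting_step(x):
    # One D4 level via lifting (one common convention; boundaries are
    # handled by periodic wrap-around purely for illustration).
    s3 = np.sqrt(3.0)
    s = x[0::2].astype(float)   # even samples
    d = x[1::2].astype(float)   # odd samples
    s = s + s3 * d
    d = d - (s3 / 4.0) * s - ((s3 - 2.0) / 4.0) * np.roll(s, 1)   # uses s[n-1]
    s = s - np.roll(d, -1)                                        # uses d[n+1]
    return ((s3 - 1.0) / np.sqrt(2.0)) * s, ((s3 + 1.0) / np.sqrt(2.0)) * d

smooth, detail = d4_lifting_step(np.arange(16.0))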
Jon Harrop wrote:
Can convolution be implemented efficiently in Python?
numpy.convolve
Functional programming makes this easy. You just compose closures from
closures instead of arrays from arrays.
Indeed. But learning yet another language is too much work.
This is what I meant by
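For concreteness, a minimal sketch of the convolution-based formulation using numpy.convolve; the filter coefficients below are the standard orthonormal D4 taps, but the downsampling phase and boundary convention are assumptions and may not match the thread's benchmarks:

import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # low-pass
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])                       # high-pass

def dwt_step(x):
    # Filter, then keep every other sample (decimation by 2).
    return np.convolve(x, h)[1::2], np.convolve(x, g)[1::2]

smooth, detail = dwt_step(np.random.rand(256))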
Paul Rubin wrote:
Interesting, where do I get it, and is there source? I've never been
interested in Mono but maybe this is a reason. How does the compiled
code compare to OCaml or MLton code?
GC intensive code is 2-4x slower than OCaml or 4-8x slower than MLton.
Floating point intensive
Filip Wasilewski wrote:
Jon Harrop wrote:
Filip Wasilewski wrote:
Jon, both Python and Matlab implementations discussed here use the
lifting scheme, while yours is a classic convolution based approach.
I've done both in OCaml. The results are basically the same.
Have you tried taking
Jon Harrop wrote:
Yes. The time taken is dominated by memory accesses. The amount of
arithmetic is almost irrelevant.
That is extremely interesting. It would explain why I see almost the
same performance in NumPy and Fortran 95 on this kind of task, using
array slicing in both languages.
Jon Harrop wrote:
Filip Wasilewski wrote:
from numpy import ones, arange, reshape, int32
a = ones((2,3,4,5))
b = ones(a.shape, dtype=int32)
c = ones((3,4,5))
d = 2*a + 2.5*(3*b + 3.3)
d[0] = d[0] + 5
d[1] *= c * (2+3*1.2)
d[:,2,...] = reshape(arange(d[:,2,...].size), d[:,2,...].shape)
Jon Harrop wrote:
Filip Wasilewski wrote:
Jon Harrop wrote:
Filip Wasilewski wrote:
Jon, both Python and Matlab implementations discussed here use the
lifting scheme, while yours is a classic convolution based approach.
I've done both in OCaml. The results are basically the same.
Filip Wasilewski wrote:
Jon, both Python and Matlab implementations discussed here use the
lifting scheme, while yours is a classic convolution based approach.
I've done both in OCaml. The results are basically the same.
These are two *different* algorithms for computing wavelet transforms.
[EMAIL PROTECTED] wrote:
A concrete example of interest to me: can I get an OCaml-to-native
compiler for an IBM BlueGene? The processor is in the PowerPC family,
but it has some modifications, and the binary format is different
from standard Linux as well.
No idea. OCaml has quite a good PPC
In [EMAIL PROTECTED], Jon Harrop
wrote:
I don't think you would be much happier to see totally obfuscated golf
one-liners.
That doesn't even make sense. Squeezing code onto one line doesn't
improve byte count.
So you don't count line endings when counting bytes. ;-)
Ciao,
Marc
Marc 'BlackJack' Rintsch wrote:
So you don't count line endings when counting bytes. ;-)
You'd probably replace \n - so it wouldn't affect the byte count.
Anyway, I think I was using non-whitespace bytes, so neither \n nor
is counted.
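For what it's worth, a one-off way to count the non-whitespace bytes of a source file (the filename below is hypothetical):

import re

with open('d4_transform.py') as f:   # hypothetical filename
    print(len(re.sub(r'\s+', '', f.read())))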
Jon Harrop wrote:
Filip Wasilewski wrote:
Jon, both Python and Matlab implementations discussed here use the
lifting scheme, while yours is a classic convolution based approach.
I've done both in OCaml. The results are basically the same.
Have you tried taking advantage of the 50%
Paul Rubin wrote:
Well, work is already under way (already mentioned) to implement
Python in Python, including a reasonable compiler (Psyco).
The big deficiency of MLton from a concurrency perspective is
inability to use multiprocessors. Of course CPython has the same
deficiency. Same
On 12/12/06, Jon Harrop [EMAIL PROTECTED] wrote:
Paul Rubin wrote:
Well, work is already under way (already mentioned) to implement
Python in Python, including a reasonable compiler (Psyco).
The big deficiency of MLton from a concurrency perspective is
inability to use multiprocessors.
On Dec 11, 2006, at 14:21, Jon Harrop wrote:
F# runs under Linux with Mono.
Well, then it should also run on my Mac... Do you have any experience
with performance of numerical code under Mono, or, for that matter,
under .NET? I suspect that the JIT compilers were not written with
number
Konrad Hinsen wrote:
Well, then it should also run on my Mac... Do you have any experience
with performance of numerical code under Mono, or, for that matter,
under .NET? I suspect that the JIT compilers were not written with
number crunching in mind, but perhaps I am wrong.
Actually, F#
Jon Harrop wrote:
Filip Wasilewski wrote:
Besides that, this code is irrelevant to the original one and your
further conclusions may not be perfectly correct. Please learn first
about the topic of your benchmark and the different variants of wavelet
transform, namely the difference between
On 11.12.2006, at 14:21, Jon Harrop wrote:
It's not a matter of number, it's a matter of availability when new
processors appear on the market. How much time passes on average
between the availability of a new processor type and the availability
of a native code compiler for OCaml?
OCaml
[EMAIL PROTECTED] wrote:
On 10.12.2006, at 11:23, Jon Harrop wrote:
F# addresses this by adding operator overloading. However, you have
to add more type annotations...
That sounds interesting, but I'd have to see this in practice to form
an opinion. As long as F# is a Windows-only language,
Carl Banks wrote:
Jon Harrop wrote:
What about translating the current Python interpreter into a language
with a GC, like MLton-compiled SML? That would probably make it much
faster, more reliable and easier to develop.
I doubt it would work too well. MLton-compiled SML's semantics differ
Jon Harrop [EMAIL PROTECTED] writes:
F# runs under Linux with Mono.
Interesting, where do I get it, and is there source? I've never been
interested in Mono but maybe this is a reason. How does the compiled
code compare to OCaml or MLton code?
Jon Harrop [EMAIL PROTECTED] writes:
That's not what I meant. I was referring to translating the Python
_interpreter_ into another language, not translating Python programs into
other languages. MLton-compiled SML is especially fast at symbolic
manipulation, e.g. interpreters, so it will be
I came across SAGE (Software for Algebra and Geometry Experimentation)
http://sage.math.washington.edu/sage/ , which includes Python and
Numeric and consists of
Group theory and combinatorics -- GAP
Symbolic computation and Calculus -- Maxima
Commutative algebra -- Singular
Number theory -- PARI,
Paul Rubin wrote:
Jon Harrop [EMAIL PROTECTED] writes:
F# runs under Linux with Mono.
Interesting, where do I get it, and is there source? I've never been
interested in Mono but maybe this is a reason. How does the compiled
code compare to OCaml or MLton code?
The source is available,
The [F#] source is available, but it's under Microsoft's Shared Source
license, which isn't quite an open source license. There are some
restrictions on commercial usage.
You can call me a bigot, but it will be engraved upon my tombstone that
I never used a proprietary Microsoft language.
On Dec 5, 2006, at 16:35, Mark Morss wrote:
very well-written) _Practical OCaml_. However, I also understand that
OCaml supports only double-precision implementation of real numbers;
that its implementation of arrays is a little clunky compared to
Fortran 95 or Numpy (and I suspect not as
sturlamolden wrote:
Little is as efficient as well-written ISO C99 (not to be confused with
C++ or ANSI C).
OCaml and F# are almost as fast as C++ in this case. I suspect most other
modern languages are.
So I assume you make sure that the cache is
prefetched and exploited optimally for your
Filip Wasilewski wrote:
Besides that, this code is irrelevant to the original one and your
further conclusions may not be perfectly correct. Please learn first
about the topic of your benchmark and the different variants of wavelet
transform, namely the difference between the lifting scheme and the DWT, and
I doubt that anyone would dispute that even as boosted by Numpy/Scipy,
Python will almost certainly be notably slower than moderately
well-written code in a compiled language. The reason Numpy exists,
however, is not to deliver the best possible speed, but to deliver
enough speed to make it
Hans Langtangen, rather.
Mark Morss wrote:
I doubt that anyone would dispute that even as boosted by Numpy/Scipy,
Python will almost certainly be notably slower than moderately
well-written code in a compiled language. The reason Numpy exists,
however, is not to deliver the best possible
Carl,
I agree with practically everything you say about the choice between
Python and functional languages, but apropos of Ocaml, not these
remarks:
In the same way that a screwdriver can't prevent you from driving a
nail. Give me a break, we all know these guys (Haskell especially) are
Jon Harrop wrote:
[...]
I first wrote an OCaml translation of the Python and wrote my own
little slice implementation. I have since looked up a C++ solution and
translated that into OCaml instead:
let rec d4_aux a n =
let n2 = n lsr 1 in
let tmp = Array.make n 0. in
for i=0 to n2-2
Filip Wasilewski wrote:
So why not use C++ instead of all others if the speed really matters?
What is your point here?
If speed matters, one should consider using hand-coded assembly.
Nothing really beats that. But it's painful and not portable. So lets
forget about that for a moment.
Jon Harrop wrote:
[snip]
That's my point, using numpy encouraged the programmer to optimise in the
wrong direction in this case (to use slices instead of element-wise
operations).
Ok, I can see that. We have a sort of JIT compiler, psyco, that works
pretty well but I don't think it's
Carl Banks wrote:
Ok. Perhaps starting a Python JIT in something like MetaOCaml or Lisp/Scheme
would be a good student project?
...and finishing would be a good project for a well-funded team of
experienced engineers.
I think this is a good idea. We could use the AST from the CPython
sturlamolden wrote:
I don't agree that slicing is not the best way to approach this
problem.
Indeed, the C++ approach can be written very succinctly using slicing:
for i=0 to n/2-1 do
tmp[i] = dot a[2*i:] h;
tmp[i + n/2] = dot a[2*i + 1:] g;
where a[i:] denotes the array starting at index
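A NumPy rendering of that slice-and-dot formulation (my sketch; filter length 4 and a periodic extension at the boundary are assumptions):

import numpy as np

def dwt_slices(a, h, g):
    n = len(a)
    tmp = np.empty(n)
    ext = np.concatenate([a, a[:len(h) - 1]])      # periodic extension
    for i in range(n // 2):
        tmp[i] = np.dot(ext[2 * i : 2 * i + len(h)], h)                   # smooth
        tmp[i + n // 2] = np.dot(ext[2 * i + 1 : 2 * i + 1 + len(g)], g)  # detail
    return tmp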
sturlamolden wrote:
Carl Banks wrote:
Ok. Perhaps starting a Python JIT in something like MetaOCaml or
Lisp/Scheme
would be a good student project?
...and finishing would be a good project for a well-funded team of
experienced engineers.
I think this is a good idea. We could
Jon Harrop wrote:
So the super-fast BLAS routines are now iterating over the arrays many times
instead of once and the whole program is slower than a simple loop written
in C.
Yes. And the biggest overhead is probably Python's function calls.
Little is as efficient as well-written ISO C99
Jon Harrop wrote:
In particular, I think you are eagerly
allocating arrays when, in a functional language, you could just as
easily compose closures.
You are completely wrong.
I'll give an example. If you write the Python:
a[:] = b[:] + c[:] + d[:]
I think that is equivalent to
Carl Banks wrote:
fill a (map3 (fun b c d -> b + c + d) b c d)
which will be much faster because it doesn't generate an intermediate
array.
Ah, but, this wasn't about temporaries when you spoke of eagerly
allocating arrays, was it?
I had thought that all of the array operations were
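The idea of composing closures instead of arrays can be sketched in plain Python as well (a toy illustration, not the OCaml actually benchmarked): each operation returns a function of the index, and only the final fill loop touches memory, so no intermediate arrays are allocated.

def add(f, g):
    return lambda i: f(i) + g(i)

def fill(dst, f):
    for i in range(len(dst)):
        dst[i] = f(i)

b, c, d = [1.0] * 8, [2.0] * 8, [3.0] * 8
a = [0.0] * 8
# a[:] = b[:] + c[:] + d[:], computed element-wise in a single pass
fill(a, add(b.__getitem__, add(c.__getitem__, d.__getitem__)))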
Jon Harrop wrote:
I had thought that all of the array operations were allocating new arrays at
first but it seems that at least assignment to a slice does not.
Does:
a[:] = b[:] + c[:]
allocate a temporary for b[:] + c[:]?
Yep.
[snip]
Not only is that shorter than the Python, it is
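For the record, one way to avoid that temporary in NumPy is to write directly into the destination via the ufunc output argument (a sketch with current NumPy; whether this was available in the NumPy of the time is another matter):

import numpy as np

b = np.ones(1000)
c = np.ones(1000)
a = np.empty_like(b)

a[:] = b + c          # allocates a temporary for b + c, then copies it into a
np.add(b, c, out=a)   # computes the sum directly into a, no temporary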
Carl Banks wrote:
0.56s C++ (direct arrays)
0.61s F# (direct arrays)
0.62s OCaml (direct arrays)
1.38s OCaml (slices)
2.38s Python (slices)
10s Mathematica 5.1
[snip]
1.57s Python (in-place)
So,
optimized Python is roughly the same speed as naive Ocaml
optimized Ocaml is roughly the
Jon Harrop wrote:
Carl Banks wrote:
0.56s C++ (direct arrays)
0.61s F# (direct arrays)
0.62s OCaml (direct arrays)
1.38s OCaml (slices)
2.38s Python (slices)
10s Mathematica 5.1
[snip]
1.57s Python (in-place)
So,
optimized Python is roughly the same speed as naive Ocaml
Carl Banks wrote:
Optimized Python is 14% slower than badly written OCaml.
I'd call that roughly the same speed. Did you use any sort of
benchmark suite that minimized testing error, or did you just run it
surrounded by calls to the system timer like I did?
System timer, best of three.
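A small sketch of how the same measurement could be done with the standard timeit module, which repeats the statement and takes the best of several runs to reduce timer noise (the transform below is a stand-in, not the benchmarked D4 code):

import timeit
import numpy as np

x = np.random.rand(2 ** 18)

def transform(x):                     # stand-in for the function under test
    return np.convolve(x, np.ones(4))[1::2]

per_call = min(timeit.repeat(lambda: transform(x), repeat=3, number=10)) / 10
print(per_call, 'seconds per call')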
Jon Harrop wrote:
Ok. Perhaps starting a Python JIT in something like MetaOCaml or Lisp/Scheme
would be a good student project?
I guess for a student project it's not that important, but if you have
higher ambitions, make sure you read
Jon Harrop wrote:
So I'm keen to learn what Python programmers would want/expect from F# and
OCaml.
I think this discussion is becoming a little misguided.
The real strength of scipy is the elegant notation rather than speed.
Being raised with Matlab I find scipy nicely familiar, and its fast
Jon Harrop wrote:
I don't know Python but this benchmark caught my eye.
def D4_Transform(x, s1=None, d1=None, d2=None):
D4 Wavelet transform in NumPy
(C) Sturla Molden
C1 = 1.7320508075688772
C2 = 0.4330127018922193
C3 = -0.066987298107780702
C4 =
Carl Banks wrote:
Matlab has a few *cough* limitations when it comes to hand-optimizing.
When writing naive code, Matlab often is faster than Python with numpy
because it has many commerical man-year of optimizing behind it.
However, Matlab helps v
That should say:
However, Matlab helps
Carl Banks wrote:
No, they're never redefined (even in the recursive version). Slices of
them are reassigned. That's a big difference.
I see.
(Actually, it does create a temporary array to store the intermediate
result, but that array does not get bound to odd.)
Sure.
In particular, I
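The distinction between rebinding a name and assigning to a slice is easy to demonstrate (a small illustration of the point above):

import numpy as np

odd = np.arange(4.0)
alias = odd

odd = odd + 1         # rebinds 'odd' to a freshly allocated array
print(alias is odd)   # False: 'alias' still refers to the old data

odd[:] = odd + 1      # slice assignment: a temporary holds odd + 1, then it is
                      # copied into the existing array; no rebinding occurs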
I don't know Python but this benchmark caught my eye.
def D4_Transform(x, s1=None, d1=None, d2=None):
D4 Wavelet transform in NumPy
(C) Sturla Molden
C1 = 1.7320508075688772
C2 = 0.4330127018922193
C3 = -0.066987298107780702
C4 = 0.51763809020504137
C5 =
John Henry wrote:
Bill Gates will have you jailed! :-)
On a more serious note, is there any alternative to Simulink though?
It's called SciCos, and as far as I've seen it not only covers Simulink
but also PowerSim.
I found only one major disadvantage in SciLab, ...
... it has no ActiveX
On 16 Nov 2006 13:09:03 -0800, sturlamolden [EMAIL PROTECTED]
wrote:
...SNIP...
To compare Matlab with NumPy we can e.g. use the D4 discrete wavelet
transform. I have here coded it in Matlab and Python/NumPy using Wim
Sweldens' lifting scheme.
First the Matlab version (D4_Transform.m):
Hi all,
I'm not going to touch the big picture issues here -- you need to pick the
right tool for the job you're doing, and only you know what works best for
your task. However, since it didn't come up, I feel I need to add a piece
of info to the mix, since I spend my days getting MATLAB
Rob Purser wrote:
Anyway, I just wanted to call your attention to Data Acquisition Toolbox:
http://www.mathworks.com/products/daq/
Absolutely. If the hardware is supported by this toolbox, there is no
need to reinvent the wheel. The license is expensive, but development
time can be far more
Filip Wasilewski wrote:
As far as the speed comparison is concerned I totally agree that NumPy
can easily outperform Matlab in most cases. Of course one can use
compiled low-level extensions to speed up specific computations in
Matlab, but it's a lot easier and/or cheaper to find very good
Phil Schmidt wrote:
Well, that kind of gets right to my point. Does the added effort with
Python to interface with data acquisition hardware really result in
less productivity? I am very familiar with Matlab, Labview, and Python,
and frankly, Python is the most productive and powerful
Phil Schmidt wrote:
I'd love to use Python, but I'm not comfortable with the hardware side
of that. I'm certain that most, if not all data acquisition hardware
comes with DLL drivers, which I could interface with using ctypes. I'm
concerned though about spending more time messing around with
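As a rough sketch of what the ctypes route looks like (the DLL name, function name and signature below are made up for illustration; a real driver's documentation would define the actual interface):

import ctypes

daq = ctypes.WinDLL('vendor_daq.dll')          # hypothetical vendor driver

daq.DAQ_ReadAnalog.argtypes = [ctypes.c_int, ctypes.POINTER(ctypes.c_double)]
daq.DAQ_ReadAnalog.restype = ctypes.c_int

value = ctypes.c_double()
status = daq.DAQ_ReadAnalog(0, ctypes.byref(value))   # read channel 0
print(status, value.value)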
Phil Schmidt wrote:
Thanks for that list. I'm currently in the process of getting quotes
for a bunch of Matlab tools for hardware-in-the-loop simulation. Big
bucks.
Yup, and better spent elsewhere...
I'd love to use Python, but I'm not comfortable with the hardware side
of that. I'm
sturlamolden wrote:
Using Python just for the sake of using Python is silly.
Well, that kind of gets right to my point. Does the added effort with
Python to interface with data acquisition hardware really result in
less productivity? I am very familiar with Matlab, Labview, and Python,
and
Phil Schmidt wrote:
sturlamolden wrote:
Using Python just for the sake of using Python is silly.
Well, that kind of gets right to my point. Does the added effort with
Python to interface with data acquisition hardware really result in
less productivity? I am very familiar with Matlab,
Brian Blais wrote:
So my recommendation for a (nearly) complete Matlab replacement would be:
python
numpy
scipy
matplotlib
pyrex
Brian,
Thanks for that list. I'm currently in the process of getting quotes
for a bunch of Matlab tools for hardware-in-the-loop
sturlamolden wrote:
def D4_Transform(x, s1=None, d1=None, d2=None):
D4 Wavelet transform in NumPy
(C) Sturla Molden
C1 = 1.7320508075688772
C2 = 0.4330127018922193
C3 = -0.066987298107780702
C4 = 0.51763809020504137
C5 = 1.9318516525781364
if d1 ==
sturlamolden wrote:
[...]
Here is the correct explanation:
The factorization of the polyphase matrix is not unique. There are
several valid factorizations. Our implementations correspond to
different factorizations of the analysis and synthesis polyphase
matrices, and both are in a sense
sturlamolden wrote:
Actually, there was a typo in the original code. I used d1[l-1] where I
should have used d1[l+1]. Arrgh. Here is the corrected version, the
Matlab code must be changed similarly. It has no relevance for the
performance timings though.
def D4_Transform(x, s1=None,
"Brian" == Brian Blais [EMAIL PROTECTED] writes:
Brian> 3) 3D plotting requires yet-another library. luckily I
Brian> haven't had to use this much, but I hope that someday that
Brian> it will be part of matplotlib.
I'd rather not say anything about this since I have strong opinions
Filip Wasilewski wrote:
Actually you have not. The algorithm you presented gives completely
wrong results. Have a look at the quick&dirty(TM) implementation below.
Good grief. I followed the implementation in Ingrid Daubechies' and Wim
Sweldens' original wavelet lifting paper (J. Fourier Anal.
sturlamolden wrote:
Good grief. I followed the implementation in Ingrid Daubechies' and Wim
Sweldens' original wavelet lifting paper (J. Fourier Anal. Appl., 4:
247-269, 1998). If you look at the factorized polyphase matrix for D4
(which gives the inverse transform), their implementation of
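For reference, the C1-C5 constants quoted in the NumPy code above are simple expressions in sqrt(3), as they appear in the Daubechies-Sweldens lifting factorization (a quick check, not code from the thread):

from math import sqrt

C1 = sqrt(3)                    # 1.7320508075688772
C2 = sqrt(3) / 4                # 0.4330127018922193
C3 = (sqrt(3) - 2) / 4          # -0.066987298107780702
C4 = (sqrt(3) - 1) / sqrt(2)    # 0.51763809020504137
C5 = (sqrt(3) + 1) / sqrt(2)    # 1.9318516525781364  (note C4 * C5 = 1)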
Matimus wrote:
Boris wrote:
Hi, is there any alternative software for Matlab? Although Matlab is
powerful & popular among mathematical & engineering guys, it still
costs too much & is not publicly open. So I wonder if there's similar
software/lang that is open with comparable
sturlamolden wrote:
Boris wrote:
Hi, is there any alternative software for Matlab? Although Matlab is
powerful & popular among mathematical & engineering guys, it still
costs too much & is not publicly open. So I wonder if there's similar
software/lang that is open with comparable
On Nov 16, 10:46 pm, John Henry [EMAIL PROTECTED] wrote:
Bill Gates will have you jailed! :-)
On a more serious note, is there any alternative to Simulink though?
Ptolemy II. Java stuff in the core but components may be written in
Python
http://ptolemy.eecs.berkeley.edu/ptolemyII/
Thanks for pointing that out. I wasn't aware of this. Will take a
look.
Sébastien Boisgérault wrote:
On Nov 16, 10:46 pm, John Henry [EMAIL PROTECTED] wrote:
Bill Gates will have you jailed! :-)
On a more serious note, is there any alternative to Simulink though?
Ptolemy II. Java
sturlamolden wrote:
Sorry Mathworks, I have used your product for years, but you cannot
compete with NumPy.
Funny. I went exactly the other way. Had a full OO postprocessing library
for Python/Scipy/HDF etc which worked brilliantly. Then changed to a 64 bit
machine and spent three days trying
Boris wrote:
Hi, is there any alternative software for Matlab? Although Matlab is
powerful & popular among mathematical & engineering guys, it still
costs too much & is not publicly open. So I wonder if there's similar
software/lang that is open with comparable functionality, at least
for
Bill Gates will have you jailed! :-)
On a more serious note, is there any alternative to Simulink though?
sturlamolden wrote:
and is infinitely
more expensive.
Does anyone wonder why I am not paying for Matlab maintenance anymore?
Sorry Mathworks, I have used your product for years, but you
R (http://cran.r-project.org) might be an alternative, specially if
you do a lot of statistics and graphics. (R is probably the most
widely used language/system in statistical research).
R.
On 16 Nov 2006 13:09:03 -0800, sturlamolden [EMAIL PROTECTED] wrote:
Boris wrote:
Hi, is there any
In article [EMAIL PROTECTED],
sturlamolden [EMAIL PROTECTED] wrote:
Boris wrote:
Hi, is there any alternative software for Matlab? Although Matlab is
powerful & popular among mathematical & engineering guys, it still
costs too much & is not publicly open. So I wonder if there's similar
software/lang
Boris wrote:
Hi, is there any alternative software for Matlab? Although Matlab is
powerful & popular among mathematical & engineering guys, it still
costs too much & is not publicly open. So I wonder if there's similar
software/lang that is open with comparable functionality, at least
for
Matimus wrote:
There is also Scilab. I've only used it a tiny bit but for the task it
worked well. I do know that it has a somewhat restrictive license. It
is open source, but you aren't allowed to modify and redistribute the
source. I get the feeling that some people avoid Scilab but I'm not