Jon Harrop wrote:
> Filip Wasilewski wrote:
> > Jon, both Python and Matlab implementations discussed here use the
> > lifting scheme, while yours is a classic convolution based approach.
>
> I've done both in OCaml. The results are basically the same.
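They are not quite the same, and it is easy to check. A minimal NumPy sketch with the Haar wavelet (simpler than the db2 case discussed in this thread; the normalization used here is just one common convention, not necessarily the one in any of the benchmarked programs):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# Lifting scheme (unnormalized Haar): split into even/odd samples,
# then a predict step (detail) and an update step (approximation).
even, odd = x[0::2], x[1::2]
d = odd - even          # predict: detail coefficients
s = even + d / 2.0      # update: approximation (pairwise means)

# Convolution/subband filtering (orthonormal Haar filters),
# followed by downsampling by 2.
c = 1.0 / np.sqrt(2.0)
a = np.convolve(x, [c,  c])[1::2]   # approximation coefficients
b = np.convolve(x, [c, -c])[1::2]   # detail coefficients

print(np.allclose(a, np.sqrt(2.0) * s))   # True
print(np.allclose(b, d / np.sqrt(2.0)))   # True
```

Both pairs carry the same information, but the scaling already differs, and with longer filters the border handling and memory layout diverge as well — which is exactly why the two outputs are not interchangeable in a benchmark.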
Have you tried taking advantage of the 50% reduction in arithmetic
operations that the lifting scheme offers?

> > These are two *different* algorithms for computing wavelet transforms.
> > Although they have similar names and give similar results, it does not
> > mean you can use them interchangeably in one benchmark! It just does
> > not make sense.
>
> It makes sense because they solve the same problem. When you're comparing
> non-trivial problems between disparate languages you are not likely to use
> the same algorithms. Using built-in hash functions is an obvious example.

I don't even ask whether you have checked the output coefficients, because
I'm pretty sure you have not. The two algorithms solve a similar problem
but in different ways and give slightly different results: the layout of
the coefficients in memory differs, for example, as does the handling of
border distortion. Please don't tell me it's better to do something else
instead of presenting a relevant solution. By the way, why do you think
both implementations are included in the Matlab Wavelet Toolbox, and why
do people pay good money for it?

> > What's more, taking only very big input 'n' gives only very limited
> > information about true performance.
>
> I've presented times for 2 different "n". I agree that it would be better
> to present infinite different "n" but I only had finite time.

A simple loglog plot with 25 or so dyadic lengths would be just fine. I
don't expect you to measure times for non-dyadic lengths, because none of
the solutions presented here so far supports them.

[...]

> > This will
> > also give you some idea about call overhead and possible memory
> > bandwidth influence. For example, the 4-tap db2 lifting transform
> > should be about 50% faster than the one using subband filtering. That's
> > theory.
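To be concrete about the measurement I am suggesting, such a benchmark loop is only a few lines. A rough sketch — `transform` is merely a placeholder for whichever implementation is being timed, and the size range is kept small here:

```python
import timeit
import numpy as np

def transform(x):
    # Placeholder: one 4-tap filtering pass with downsampling.
    # Substitute the actual lifting or convolution implementation.
    return np.convolve(x, np.ones(4) / 4.0)[3::2]

times = []
for p in range(10, 17):                  # dyadic lengths 2**10 .. 2**16
    x = np.random.rand(2 ** p)
    # Best of 3 runs of 5 calls each, to damp out scheduling noise.
    t = min(timeit.repeat(lambda: transform(x), number=5, repeat=3)) / 5.0
    times.append((2 ** p, t))
    print("n = 2**%2d: %.6f s" % (p, t))
```

Plotting log2(n) against log2(time) makes exactly the effects I mentioned visible: call overhead shows up as a flat region at small n, and memory bandwidth shows up as a slope change once the arrays no longer fit in cache.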
> > In practice, due to specific memory usage, the speedup gained
> > with reduction of number of arithmetic operations is easily wasted by
> > huge cache miss overhead (C implementation, measured on x86
> > architecture) for arrays which don't fit entirely in the CPU cache. See
> > my point?
>
> No. The effects you are talking about are swamped by the interpreted vs
> compiled effect.

Have I already mentioned that this was observed with a low-level C
implementation on x86? I would really appreciate it if you took some time
to explore the unfortunate topic of this benchmark before posting another
comment like that.

> >> 1.88s  C++ (816 non-whitespace bytes)
> >> 2.00s  OCaml (741 b)
> >> 2.33s  F# (741 b)
> >> 9.83s  Your Python (1,002 b)
> >>
> >> The original python was 789 bytes.
> >
> > Is the byte count a standard measure you apply to describe code
> > quality?
>
> You can use bytes, lines, words, tokens or just look at the code. Whatever
> you do, Python loses on this benchmark in terms of brevity, clarity and
> performance.

Jon, as I have already said, you have not even shown a fully comparable
example yet. What I would really like to see is a lifting scheme
implementation that can take 1- to 10-dimensional arrays (single and
double precision floats, converting on the fly from integer types when
necessary, and handling strided, non-contiguous arrays), plus an axis and
a maximum decomposition level as input, and that returns a list of views
(or arrays if necessary) with the transform coefficients for every level
of decomposition. Can do?

> > I don't think you would be much happier to see totally
> > obfuscated golf one-liners.
>
> That doesn't even make sense. Squeezing code onto one line doesn't
> improve byte count.

Yeah, right ;-)

> >> It is probably just as easy. Instead of dynamic typing you have
> >> parametric polymorphism. If you want to make your function generic over
> >> arithmetic type then you can pass in the arithmetic operators.
> >
> > Probably?
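To give a taste of how small a slice of the interface I asked for looks in NumPy — n-dimensional input, an axis argument and a decomposition level, returning the coefficients for every level. This is only a partial sketch built on Haar lifting steps (no on-the-fly dtype conversion and no special handling of strided input, so nowhere near the full specification):

```python
import numpy as np

def haar_lifting(data, axis=-1, level=1):
    """Multilevel Haar lifting transform applied along `axis` of an
    n-dimensional array.  Returns [approx, detail_level, ..., detail_1]."""
    a = np.asarray(data, dtype=np.float64)
    axis = axis % a.ndim
    details = []
    for _ in range(level):
        even = [slice(None)] * a.ndim
        odd = [slice(None)] * a.ndim
        even[axis] = slice(0, None, 2)
        odd[axis] = slice(1, None, 2)
        e, o = a[tuple(even)], a[tuple(odd)]
        d = o - e               # predict step: detail coefficients
        a = e + d / 2.0         # update step: next approximation
        details.append(d)
    return [a] + details[::-1]

coeffs = haar_lifting(np.arange(8.0), level=2)
```

The slicing does all the axis bookkeeping, so the same dozen lines handle a 1-D signal and a 10-dimensional array alike.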
> > Using the interactive interpreter, how can I easily create
> > two n-dimensional arrays of arbitrary data type, add them and multiply
> > by 17?
>
> In F#:
>
> open Math.Matrix.Generic
> let a = init n m (fun i j -> ...)
> let b = init n m (fun i j -> ...)
> 17. $* (a + b)
>
> In OCaml you'd either use an array of arrays and define your own
> functions over them, or you'd use the built-in Bigarray.Array2 and define
> some functions over that. Either way, you have to use different operator
> names for different types. You'd end up with something like:
>
> open Array2
> let a = init n m (fun i j -> ...)
> let b = init n m (fun i j -> ...)
> 17. $* (a +: b)

Are you telling me I have to define a different operator for every
arithmetic operation and every pair of operand types? That definitely
does not look like a quick, flexible and clear solution to start with.
A working example of the following that I could type into OCamlWinPlus
would be very much appreciated. I'm fairly new to this and would really
like to see how to handle differently shaped arrays and data types in
interactive mode without much hassle:

    from numpy import ones, arange, reshape, int32
    a = ones((2,3,4,5))
    b = ones(a.shape, dtype=int32)
    c = ones((3,4,5))
    d = 2*a + 2.5*(3*b + 3.3)
    d[0] = d[0] + 5
    d[1] *= c * (2+3*1.2)
    d[:,2,...] = reshape(arange(d[:,2,...].size), d[:,2,...].shape)
    print d[...,1]

> > I just prefer fair
> > comparisons and clean use cases instead of marketing like that and I
> > don't believe anyone will buy your story about OCaml superior brevity
> > after seeing one fairly irrelevant loop with few arithmetic operations
> > as a killer argument.
>
> Sure.
> There are lots of other examples of OCaml's brevity out there:
>
> http://www.ffconsultancy.com/free/ocaml
> http://www.ffconsultancy.com/free/sudoku
> http://www.ffconsultancy.com/free/maze
> http://www.ffconsultancy.com/free/ray_tracer

The ray tracer example is really nice; the sudoku solver, however, is not
so impressive when compared to the K thingy ;-) -
http://markbyers.com/moinmoin/moin.cgi/ShortestSudokuSolver

[...]

> > I was also unable to run your OCaml example for n >= 2^21 (win32, out
> > of the box OCaml installation). Is there some 16MB memory limit imposed
> > on Array?
> >
> > # let q = Array.make (1 lsl 21) 0.0;;
> > Exception: Invalid_argument "Array.make".
>
> On 32-bit, yes. You'd either have to use a different data structure
> (Bigarrays), upgrade or switch to F#.

That renders the Array type almost useless for me. Where can I find
documentation or a tutorial on using Bigarrays?

best,
fw
--
http://mail.python.org/mailman/listinfo/python-list