On Wednesday, 30 September 2015 21:42:33 UTC+1, Steven G. Johnson wrote:
>
>
>
> On Wednesday, September 30, 2015 at 4:01:17 PM UTC-4, Christoph Ortner 
> wrote:
>>
>> What I simply dislike is that linspace does not behave as expected, and I 
>> suspect that this is the main reason for others as well. To give an extreme 
>> analogy, we don't go around defining A * B = A + B either; linspace and 
>> similar names are just so ingrained in the Matlab (and apparently also 
>> Python) community that it trips us up when they suddenly behave 
>> differently.
>>
>
> This is a bad analogy.  linspace still returns an AbstractVector with the 
> same elements.   So, it's basically doing the same thing as before, and is 
> just implemented differently.
>

My point was about "changing the expected behaviour", and I said this was 
an extreme analogy.
 

> The question is, why does this implementation detail of linspace matter to 
> you?  It still behaves the same way in nearly every context. 
>

as you say, "nearly".
 

>  The cases where it behaves differently are probably mostly bugs (overly 
> restrictive types of function parameters) that were waiting to be caught.
>

julia> x = linspace(0, 1, 1_000_000);
julia> y = collect(x);
julia> @time exp(x);
  0.021086 seconds (6 allocations: 7.630 MB)
julia> @time exp(y);
  0.012749 seconds (6 allocations: 7.630 MB)
julia> @time AppleAccelerate.exp!(y,y);
  0.001282 seconds (4 allocations: 160 bytes)
julia> @time AppleAccelerate.exp!(x,x);
ERROR: MethodError: `exp!` has no method matching exp!(::LinSpace{Float64}, 
::LinSpace{Float64})

(a) For a single call, the speed difference is probably hidden by the cost of 
the call to collect; but if I don't know about it and call several functions 
on x, then I will feel it.
(b) The error tells me what is going wrong, which is good, so now I can go 
and fix it. But it is an extra 5-10 minutes taking me out of my flow state, 
which in practice will cost me more like an hour.
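For completeness, the obvious fix is to materialise the range once up front. 
A sketch in the 0.4 syntax used in the transcript above; the AppleAccelerate 
call is left commented out, since that package is Mac-only:

```julia
# collect turns the lazy LinSpace into a plain Vector{Float64},
# paying the 7.6 MB allocation a single time up front
x = linspace(0, 1, 1_000_000)
y = collect(x)

# every later call then dispatches on Vector{Float64}:
# AppleAccelerate.exp!(y, y)   # now matches, since y is a dense Array
```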

You could now argue that when I try to optimise like that, I should know what 
I am doing. But I would equally argue that if it matters whether the result 
is a vector or a "range", then I should be able to say so explicitly, by 
calling either linspace or linrange.
 

>> Finally, I don't buy the argument that linspace should be abstracted 
>> because of memory. It always creates one-dimensional grids, and those 
>> aren't the issue. There is a much stronger argument to create an 
>> abstraction for meshgrid, and I even disliked that that one was dropped.
>>
>
> We don't need an abstraction for meshgrid, since in pretty much all 
> applications of meshgrid you can use broadcasting operations instead (far 
> more efficiently).
>
> I used to want meshgrid too, but it was only because I wasn't used to 
> broadcasting operations. Since then, I have never found a case in which 
> meshgrid would have been easier than the broadcasting operations.
>

Same point, really. Why not provide meshgrid and make it behave as expected, 
and add a note in the documentation (maybe even in the doc-string of 
meshgrid) that for performance one should use broadcasting?
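To make the comparison concrete, here is the broadcasting idiom Steven is 
referring to. A sketch, again in 0.4 syntax, with sin standing in for an 
arbitrary two-argument function evaluated on the grid:

```julia
# Matlab style: X, Y = meshgrid(x, y); Z = sin(X .+ Y)
# allocates two full 100x50 matrices just to evaluate the function.

# The broadcasting version never materialises the grids:
x = linspace(0, 2pi, 100)    # 100-point range (column-like)
y = linspace(0, pi, 50)      # 50-point range
Z = sin(x .+ y')             # y' is a 1x50 row; Z is a 100x50 matrix
```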

The whole discussion reminds me a bit of issue 
#10154, https://github.com/JuliaLang/julia/issues/10154, on whether 
floating-point indexing should be implemented. By now I am used to it, and 
I will get used to linspace behaving as it does. But with every little 
change like that, the entry barrier for Matlab / R / Python users becomes 
higher, and take-up of the language by non-experts will decrease. The 
reason I started with Julia was that it behaved as I expected. I am now 
sticking with it because I like the type system, the Python interface (and 
the speed). But if I tried it for the first time now, coming from Matlab, I 
would struggle more than I did with version 0.2.

Christoph
