Re: [julia-users] Missing newline in file output?

2014-12-08 Thread Greg Plowman
Yes, you're right John.
I'm using Windows and viewing with Notepad.
File appears OK if I use Sublime Text or Notepad++.

The file contains LF only, which apparently Notepad doesn't interpret 
properly.
Interestingly, the equivalent C code (Visual C++ for Windows) creates CR/LF 
pairs. Presumably this is Microsoft-specific behaviour on Windows?
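
For anyone who wants the file to display correctly in Notepad, a minimal sketch that writes Windows-style CR/LF line endings explicitly (plain Julia, no extra packages):

# write CR/LF explicitly so Notepad shows the line breaks
fileout = open("test.txt", "w")
print(fileout, "Hello\r\n")
print(fileout, "World\r\n")
close(fileout)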



On Monday, December 8, 2014 2:40:55 PM UTC+11, John Myles White wrote:

 What platform are you on? What's the hex dump of the file that gets 
 created? Are perhaps Unix newlines being used, but you're using something 
 like Notepad?

  -- John

 On Dec 7, 2014, at 7:37 PM, Greg Plowman greg.p...@gmail.com wrote:

 Hi

 Are newlines missing from the following output to file? Or am I missing 
 something?

 fileout = open("test.txt", "w")
 println(fileout, "Hello")
 println(fileout, "World")
 close(fileout)

 File test.txt contains:
 HelloWorld 

 and not what I expected:
 Hello
 World

 Cheers, Greg




Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-08 Thread remi . berson
Thank you for the explanation Stefan.
But isn't it possible to just consider the scopes declared inside of a 
function + the global scope while looking for a variable definition? I find 
the fact that the variable can come from the scope in which the function is 
called strange.


On Sunday, December 7, 2014 at 22:03:59 UTC+1, Stefan Karpinski wrote:

 Values that are used inside of functions come from an outer scope – if 
 they are not defined in any enclosing local scopes, then they are global. 
 In particular, function names are just global constants, so this behavior 
 is really quite important. You wouldn't want to have to declare every 
 function that you're going to call to be global before calling it.

 On Sun, Dec 7, 2014 at 3:47 PM, remi@gmail.com wrote:

 Well indeed that was the problem. Thank you very much. I wasn't aware of 
 this behavior of Julia, and I didn't even see that n wasn't in the scope of 
 the function. Somehow I believed that if it were the case an error would be 
 raised.

 Are there situations where this behavior is wanted? Because I find it 
 counterintuitive (but maybe it's just me). I would be glad to know more 
 about the decisions that led to this design choice in Julia (or in 
 other languages that have the same feature).

 Best,
 Rémi



 On Sunday, December 7, 2014 5:31:40 PM UTC+1, remi@gmail.com wrote:



 Hey guys,

 I'm currently playing with some Eratosthenes sieving in Julia, and found 
 a strange behavior of memory allocation.
 My naive sieve is as follows:

 #=
 Naive version of Erato sieve.
 * bitarray to store primes
 * only eliminate multiples of primes
 * separately eliminate even non-primes
 =#
 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 collect1!(primes_mask, primes_arr)
 end


 With the collect1! function that takes a BitArray as argument and return 
 an Array containing the primes numbers.

 function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 The code works, but is slow because of a lot of memory allocation at 
 the line:
 primes_arr[prime_index] = i

 Here is an extract of the memory allocation profile 
 (--track-allocation=user):

 - function collect1!(primes_mask::BitArray{1}, primes_arr::Array
 {Int64, 1})
 0 prime_index = 1
 -84934576 for i = 2:n
 0 if primes_mask[i]
 *184350208* primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end
 0 return primes_arr
 - end



 But if I inline the definition of collect1! into erato1, it is 
 much faster and the allocation in the loop of collect1! disappears. Here is 
 the updated code:

 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 And the memory profile seems more reasonable:

 0 n_primes = countnz(primes_mask)
  92183392 primes_arr = Array(Int64, n_primes)
 0 prime_index = 1
 0 for i = 2:n
 0 if primes_mask[i]
 0 primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end


 So I'm wondering why the simple fact of inlining the code would remove 
 the massive memory allocation when assigning to the array.

 Thank you for your help,
 Rémis
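
For reference, a possible fix following from the discussion above: inside collect1!, `n` is not an argument or a local, so it is read as an untyped global, which is what causes the allocations. A minimal sketch that keeps collect1! as a separate function but derives `n` locally from the mask:

function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64,1})
    n = length(primes_mask)      # local and type-stable, instead of an untyped global `n`
    prime_index = 1
    for i = 2:n
        if primes_mask[i]
            primes_arr[prime_index] = i
            prime_index += 1
        end
    end
    return primes_arr
end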




[julia-users] Re: Difference between {T<:AbstractType} and just x::AbstractType

2014-12-08 Thread David van Leeuwen
Good question, I would be interested in the answer myself.  If the tips 
indicate the explicit type dependency is preferred, then I guess that for 
the first form the compiler compiles a new instance of the function 
specifically for the concrete type `T`, while in the second, it presumably 
compiles to deal with a (partly) type-unstable `x` within the function 
body.  

f1{R<:Real}(x::R) = -x

f1(x::Real) = x

f1 (generic function with 2 methods)

So the methods are kept separate, but I think the first form hides access 
to the second form.

On Monday, December 8, 2014 7:12:39 AM UTC+1, Igor Demura wrote:

 What exactly is the difference between:
 function foo{T<:AbstractType}(x::T) = ...
 and 
 function foo(x::AbstractType) = ... ?
 Is there any difference at all? The tips section of the docs says the second 
 is preferred. If they are the same, why have the first syntax? I can imagine that I 
 can benefit if I have several parameters:
 function bar{T<:AbstractType}(x::Concrete{T}, y::AnotherOf{T}, z::T) = ...



Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-08 Thread elextr


On Monday, December 8, 2014 6:15:37 PM UTC+10, remi@gmail.com wrote:

 Thank you for the explanation Stefan.
 But isn't it possible to just consider the scopes declared inside of a 
 function + the global scope while looking for a variable definition? I find 
 the fact that the variable can come from the scope in which the function is 
 called strange.


It can come from the scope in which the function is defined, not the scope 
in which it is called, 
see 
http://docs.julialang.org/en/latest/manual/variables-and-scoping/#scope-of-variables

Cheers
Lex
 



 On Sunday, December 7, 2014 at 22:03:59 UTC+1, Stefan Karpinski wrote:

 Values that are used inside of functions come from an outer scope – if 
 they are not defined in any enclosing local scopes, then they are global. 
 In particular, function names are just global constants, so this behavior 
 is really quite important. You wouldn't want to have to declare every 
 function that you're going to call to be global before calling it.

 On Sun, Dec 7, 2014 at 3:47 PM, remi@gmail.com wrote:

 Well indeed that was the problem. Thank you very much. I wasn't aware of 
 this behavior of Julia, and I didn't even see that n wasn't in the scope of 
 the function. Somehow I believed that if it were the case an error would be 
 raised.

 Are there situations where this behavior is wanted? Because I find it 
 counterintuitive (but maybe it's just me). I would be glad to know more 
 about the decisions that led to this design choice in Julia (or in 
 other languages that have the same feature).

 Best,
 Rémi



 On Sunday, December 7, 2014 5:31:40 PM UTC+1, remi@gmail.com wrote:



 Hey guys,

 I'm currently playing with some Eratosthenes sieving in Julia, and 
 found a strange behavior of memory allocation.
 My naive sieve is as follows:

 #=
 Naive version of Erato sieve.
 * bitarray to store primes
 * only eliminate multiples of primes
 * separately eliminate even non-primes
 =#
 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 collect1!(primes_mask, primes_arr)
 end


 With the collect1! function that takes a BitArray as argument and 
 return an Array containing the primes numbers.

 function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 The code works, but is slow because of a lot of memory allocation at 
 the line:
 primes_arr[prime_index] = i

 Here is an extract of the memory allocation profile 
 (--track-allocation=user):

 - function collect1!(primes_mask::BitArray{1}, primes_arr::
 Array{Int64, 1})
 0 prime_index = 1
 -84934576 for i = 2:n
 0 if primes_mask[i]
 *184350208* primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end
 0 return primes_arr
 - end



 But if I inline the definition of collect1! into erato1, it is 
 much faster and the allocation in the loop of collect1! disappears. Here is 
 the updated code:

 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 And the memory profile seems more reasonable:

 0 n_primes = countnz(primes_mask)
  92183392 primes_arr = Array(Int64, n_primes)
 0 prime_index = 1
 0 for i = 2:n
 0 if primes_mask[i]
 0 primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end


 So I'm wondering why the simple fact of inlining the code would remove 
 the massive memory allocation when assigning to the array.

 Thank you for your help,
 Rémis




[julia-users] Re: Difference between {T<:AbstractType} and just x::AbstractType

2014-12-08 Thread elextr


On Monday, December 8, 2014 6:16:25 PM UTC+10, David van Leeuwen wrote:

 Good question, I would be interested in the answer myself.  If the tips 
 indicate the explicit type dependency is preferred, then I guess that for 
 the first form the compiler compiles a new instance of the function 
 specifically for the concrete type `T`, while in the second, it presumably 
 compiles to deal with a (partly) type-unstable `x` within the function 
 body.  

 f1{R<:Real}(x::R) = -x

 f1(x::Real) = x

 f1 (generic function with 2 methods)

 So the methods are kept separate, but I think the first form hides access 
 to the second form.

IIUC it doesn't really hide it; for a call like f1(1.0) the parametric method 
can be used to generate f1(x::Float64), which is more specific than 
f1(x::Real), so it is used instead.
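
A quick way to check this at the REPL (a sketch using the two definitions above, Julia 0.3 syntax):

f1{R<:Real}(x::R) = -x
f1(x::Real) = x

@which f1(1.0)   # per the explanation above, this reports the parametric method,
                 # since it specializes to f1(x::Float64) for this call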

 


 On Monday, December 8, 2014 7:12:39 AM UTC+1, Igor Demura wrote:

 What exactly is the difference between:
 function foo{T<:AbstractType}(x::T) = ...
 and 
 function foo(x::AbstractType) = ... ?
 Is there any difference at all? The tips section of the docs says the second 
 is preferred. If they are the same, why have the first syntax?


See above.
 

 I can imagine that I can benefit if I have several parameters:
 function bar{T<:AbstractType}(x::Concrete{T}, y::AnotherOf{T}, z::T) = ...



Re: [julia-users] Re: How can I sort Dict efficiently?

2014-12-08 Thread Jeff Waller
This can be done in O(N).  Avoid sorting as it will be O(NlogN)

Here's one of many questions on how: 
http://stackoverflow.com/questions/7272534/finding-the-first-n-largest-elements-in-an-array


Re: [julia-users] Missing newline in file output?

2014-12-08 Thread Pontus Stenetorp
On 8 December 2014 at 17:11, Greg Plowman greg.plow...@gmail.com wrote:

 The file contains LF only, which apparently Notepad doesn't interpret
 properly.
 Interestingly, the equivalent C code (Visual C++ for Windows) creates CR/LF
 pairs. Presumably this is Microsoft specific, specifically for Windows??

Different conventions apply for different operating systems when it
comes to newlines [1].  This is in many ways horrible, but it is something
that most editors take into account so that you don't have to worry
about it.

Pontus

[1]: https://en.wikipedia.org/wiki/Newline#Representations


[julia-users] Re: .travis.yml that tests against Julia v0.3 and v0.4

2014-12-08 Thread Ivar Nesje
I would recommend running Pkg.generate("AnyTestPackageName"), so that you 
can look at the current recommended .travis.yml layout for new packages. 

On Monday, December 8, 2014 at 08:32:31 UTC+1, John Zuhone wrote:

 I think I have answered my own question--found an example here:

 https://github.com/JuliaWeb/HttpCommon.jl/blob/master/.travis.yml

 On Monday, December 8, 2014 2:28:38 AM UTC-5, John Zuhone wrote:

 Hi all,

 Is anyone doing testing on Travis that tests against both v0.3 (the 
 current stable, e.g. v.0.3.3 at the moment) and v0.4 (the nightly)? If so, 
 can you show an example .travis.yml?

 Thanks,

 John Z



Re: [julia-users] Re: DataFrames and NamedArrays: are they suitable for heavier computations ?

2014-12-08 Thread Ján Dolinský


 For DataFrames, it depends on what you want to do. It is difficult to get 
 performance with DataArrays as columns using the current implementation. 
 With the ongoing work by John Myles White on the use of a Nullable type, 
 that should be much better. Also, you can use standard Arrays as columns of 
 a DataFrame. It's not documented well, but it can be done.

 Also, if you want to treat a DataFrame like a matrix, then generally the 
 answer is no. With some trickery, you can store a view to a matrix in a 
 DataFrame. Basically, you have to create column views into the matrix. Here 
 is an example. It might be useful if you want to treat all or part of a 
 DataFrame as a matrix.


Thanks a lot for an excellent explanation and the example. I'll try it out. 
Looks promising.

Jan 


[julia-users] Re: DataFrames and NamedArrays: are they suitable for heavier computations ?

2014-12-08 Thread Ján Dolinský


 I can only speak for NamedArrays.  On the one hand, the deployment of BLAS 
 should be transparent and the use of NamedArray vs Array should not lead to much 
 degradation in performance.  E.g., a * b with `a` and `b` a NamedArray, 
 effectively calls a.array * b.array which Base implements with 
 BLAS.gemm().  There is just a little overhead of filling in sensible names 
 in the result---so if you have small matrices in an inner loop, you're 
 going to get hurt. 

 On the other hand, I am not sure how much of the Julia BLAS cleverness is 
 retained in NamedArrays---but the intention of the package is that it is 
 completely transparent, and if you notice bad performance for a particular 
 situation then you should file an issue (or make a PR:-).  Individual 
 element indexing of a NamedArray with integers is just a little bit slower 
 than that of an Array.  Indexing by name is quite a bit slower---you may 
 try a different Associative than the standard Dict. 

 Incidentally, I've been toying with the idea of NamedArrays `*` check on 
 consistency of index and dimension names, but my guess is that people would 
 find such a thing annoying.  

 ArrayViews are currently not aware of NamedArrays.  I believe views are 
 going to be part of julia-0.4, so then it would be a task for NamedArrays to 
 implement views of NamedArrays, I gather. 

 Cheers, 

 ---david


Hi,

Thanks for the explanation. Suppose I have a named array X with 3 columns 
x1, x2 and x3, and I do prod(X, 2). Will the resulting array (a single 
column in this case) have a sensible name like x1x2x3? Or, more 
generally, how are these new names generated, and for which operations?

Thanks,
Jan 
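
A minimal sketch of the multiplication behaviour described above (assuming the NamedArrays.jl default constructor that fills in placeholder names; exact output depends on the package version):

using NamedArrays

a = NamedArray(rand(3, 4))
b = NamedArray(rand(4, 2))
c = a * b       # per the explanation above, effectively a.array * b.array via BLAS.gemm()
c.array         # the plain Float64 matrix underneath; names are filled in around it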


[julia-users] Overwriting getIndex

2014-12-08 Thread Christoph Ortner
Since julia 0.3.3 I receive the following error message

Warning: New definition 
getindex(AbstractArray{T,2},Array{Integer,1}) at 
/Users/ortner/Dropbox/Work/Projects/QM_Localisation/jqmmm/TBgeom.jl:74
is ambiguous with: 

getindex(SubArray{T,N,A<:AbstractArray{T,N},I<:(Union(Int64,Range{Int64})...,)},Union(AbstractArray{T,1},Real)...)
 at subarray.jl:335.
To fix, define 

getindex(SubArray{T,2,A<:AbstractArray{T,N},I<:(Union(Int64,Range{Int64})...,)},Array{Integer,1})
before the new definition.


The offending definitions are

getindex{T}(a::AbstractArray{T,2}, idx::Array{Integer,1}) = a[idx[1], 
idx[2]]
getindex{T}(a::AbstractArray{T,3}, idx::Array{Integer,1}) = a[idx[1], 
idx[2], idx[3]]


What changed? What should I have done? I would like to better understand the 
"To fix ..." sentence before going ahead and fixing this.

Thank you,
   Christoph



[julia-users] scoping and begin

2014-12-08 Thread Simon Byrne
According to the docs 
http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/, 
begin blocks do not introduce new scope blocks. But this block:

https://github.com/JuliaLang/julia/blob/1b9041ce2919f2976ec726372b49201c887398d7/base/string.jl#L1601-L1618

*does* seem to introduce a new scope (i.e. if I type Base.tmp at the REPL, 
I get an error). What am I missing here?

Simon


[julia-users] Re: Overwriting getIndex

2014-12-08 Thread elextr


On Monday, December 8, 2014 8:34:04 PM UTC+10, Christoph Ortner wrote:

 Since julia 0.3.3 I receive the following error message

 Warning: New definition 
 getindex(AbstractArray{T,2},Array{Integer,1}) at 
 /Users/ortner/Dropbox/Work/Projects/QM_Localisation/jqmmm/TBgeom.jl:74
 is ambiguous with: 
 
 getindex(SubArray{T,N,A<:AbstractArray{T,N},I<:(Union(Int64,Range{Int64})...,)},Union(AbstractArray{T,1},Real)...)
  at subarray.jl:335.
 To fix, define 
 
 getindex(SubArray{T,2,A<:AbstractArray{T,N},I<:(Union(Int64,Range{Int64})...,)},Array{Integer,1})
 before the new definition.


 The offending definitions are

 getindex{T}(a::AbstractArray{T,2}, idx::Array{Integer,1}) = a[idx[1], 
 idx[2]]
 getindex{T}(a::AbstractArray{T,3}, idx::Array{Integer,1}) = a[idx[1], 
 idx[2], idx[3]]


 What changed? What should I have done? I would like to better understand 
 the "To fix ..." sentence before going ahead and fixing this.


It appears that in a new version the Julia library added a definition that 
is ambiguous with a definition in your code. Unfortunately this is always 
possible with multiple dispatch systems.  The solution is as described in 
the error message, if you are sure that the two definitions are correct. 
 IIUC this definition is not added automatically because that could hide some 
types of real errors.  For more information 
see http://docs.julialang.org/en/release-0.3/manual/methods/#method-ambiguities
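
As a toy illustration of the same kind of warning and its fix (not the SubArray case, just the pattern from the manual page above):

g(x::Int, y) = 1
g(x, y::Int) = 2       # Warning: ambiguous with the first method for a call like g(1, 1)

# the fix the warning asks for: define the intersection explicitly
g(x::Int, y::Int) = 3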

Cheers
Lex


 Thank you,
Christoph



[julia-users] Overwriting getIndex

2014-12-08 Thread Toivo Henningsson
I'm not sure that those getindex methods that you added are kosher. I'm quite 
sure that there is an existing behavior that you are overriding, so they might 
very well break others' code that relies on that behavior.

Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Isaiah Norton
IIRC, the julia path is hard-coded in the profile spec under ~/.ipython

So, you could copy/rename the profile folder and change the path, for each
julia version (and then probably create a launcher script to start 'ipython
--notebook --profile=PROFILENAME' as needed)
On Dec 8, 2014 5:58 AM, Simon Byrne simonby...@gmail.com wrote:

 I have multiple versions of julia installed on my machine. Is there an
 easy way to specify which version of julia I want to use when running
 ijulia?

 Simon



Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Simon Byrne
Fantastic, thanks! For the record, in case anyone else is interested:

cd ~/.ipython
cp -R profile_julia profile_julia4

Then open ~/.ipython/profile_julia4/ipython_config.py and find the line 
starting with c.KernelManager.kernel_cmd = and edit it to point to the relevant 
julia executable and IJulia package path. Then

ipython notebook --profile=julia4

to start.

Simon

On Monday, 8 December 2014 12:06:02 UTC, Isaiah wrote:

 IIRC, the julia path is hard-coded in the profile spec under ~/.ipython

 So, you could copy/rename the profile folder and change the path, for each 
 julia version (and then probably create a launcher script to start 'ipython 
 --notebook --profile=PROFILENAME' as needed)
 On Dec 8, 2014 5:58 AM, Simon Byrne simon...@gmail.com 
 wrote:

 I have multiple versions of julia installed on my machine. Is there an 
 easy way to specify which version of julia I want to use when running 
 ijulia?

 Simon



Re: [julia-users] Re: How can I sort Dict efficiently?

2014-12-08 Thread Stefan Karpinski
We have a select function as part of Base, which can do O(n) selection of
the top n:

julia> v = randn(10^7);

julia> let w = copy(v); @time sort!(w)[1:1000]; end;
elapsed time: 0.882989281 seconds (8168 bytes allocated)

julia> let w = copy(v); @time select!(w,1:1000); end;
elapsed time: 0.054981192 seconds (8192 bytes allocated)


So for large arrays, this is substantially faster.
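
Applied to the original Dict question, a sketch (assuming select! accepts the same by/rev keyword arguments as sort!, as in Julia 0.3):

d = ["a" => 0.1, "b" => 0.9, "c" => 0.5, "d" => 0.7]   # Julia 0.3 Dict literal
pairs = collect(d)                                     # Vector of (key, value) tuples
top2 = select!(pairs, 1:2, by=p->p[2], rev=true)       # the 2 entries with the largest values, in O(n)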

On Mon, Dec 8, 2014 at 3:50 AM, Jeff Waller truth...@gmail.com wrote:

 This can be done in O(N).  Avoid sorting as it will be O(NlogN)

 Here's one of many Q on how
 http://stackoverflow.com/questions/7272534/finding-the-first-n-largest-elements-in-an-array



Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-08 Thread Stefan Karpinski
On Mon, Dec 8, 2014 at 3:24 AM, ele...@gmail.com wrote:



 On Monday, December 8, 2014 6:15:37 PM UTC+10, remi@gmail.com wrote:

 Thank you for the explanation Stefan.
 But isn't it possible to just consider the scopes declared inside of a
 function + the global scope while looking for a variable definition? I find
 the fact that the variable can come from the scope in which the function is
 called strange.


 It can come from the scope in which the function is defined, not the scope
 in which it is called, see
 http://docs.julialang.org/en/latest/manual/variables-and-scoping/#scope-of-variables

 Cheers
 Lex


Yes, variables can only come from the scope where the function is defined,
not where it is called – that is lexical scoping. Allowing variables to
come from the calling scope is dynamic scoping, which has generally fallen
out of favor in modern programming languages because it makes it impossible
to reason locally about the meaning of code.
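
A minimal illustration of the difference (hypothetical names, plain Julia):

n = 10                  # global, visible where f is defined

f() = n + 1             # lexical scoping: `n` here always refers to that global

function g()
    n = 100             # a new local variable inside g
    f()                 # f does not see g's local: returns 11, not 101
end

g()                     # => 11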


[julia-users] Re: Overwriting getIndex

2014-12-08 Thread Christoph Ortner

You are right - this behaves differently from what people would expect. But 
it is hidden in a module, and not exported, so it should be no problem. I just 
use it to write code that I can read more easily.

But for the sake of this discussion, I really just wanted to understand 
what the problem was.

Thank you both for the replies.
   Christoph



[julia-users] Re: Overwriting getIndex

2014-12-08 Thread Christoph Ortner
You are right - this behaves differently from what people would expect. But 
it is hidden in a module, and not exported, so it should be no problem. I just 
use it to write code that I can read more easily.

But for the sake of this discussion, I really just wanted to understand 
what the problem was.

Thank you both for the replies - the example in the documentation made it 
pretty clear.

   Christoph



[julia-users] Re: Force array to be 2 dimensional for type consistent matrix multiplication

2014-12-08 Thread Alan Edelman
Who would have thought what a long story this would be?:
https://github.com/JuliaLang/julia/issues/4774

Turns out the linear algebra world and the multilinear algebra world don't
play nicely together, and the dictates of performance can really split
the needs of the human and the computer.

some day I hope to write a paper on this subject




On Saturday, December 6, 2014 8:40:59 AM UTC-5, Andrew wrote:

 (Edit: I answered my question somewhat and posted it at the end)

 I recently ran into a bug in my code in which Julia gave an error when 
 computing the matrix inverse of what I thought was a 1x1 matrix. I looked 
 into it, and it looks like my sequence of matrix multiplications resulted 
 in a 5x5 matrix being converted into a 1 dimensional array. I'm not really 
 sure why this happens. Here's an example:

 A = rand(2,2)
 B = rand(2,1)
 B'*A*B

 1x1 Array{Float64,2}:
  0.836342


 This is fine. But,
 R = [0;1]
 R'*A*R

 1-element Array{Float64,1}:
  0.921516


 I think the problem is that B is explicitly defined as a 2-dimensional 
 array. On the other hand, R is a 2-element Array{Int64,1}, i.e. a 
 1-dimensional array. This is a problem because
 (R'*A*R)^(-1)

 DomainError


 It looks like, in Julia, I have to be very careful about combining 
 1-dimensional vectors and 2 dimensional matrices. Is there a better way to 
 do this? Also, when constructing a matrix(or vector) by hand, as in R = 
 [0;1], can I force it to be 2 dimensional? 


 Edit: I just sort of answered my own question, but I'll post this anyway 
 in case anyone else has this question, or if there are any comments.
 If I define R as
 R = [0 1]'

 2x1 Array{Int64,2}:
  0
  1


 then there is no issue. 
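
Another way (a sketch) to force a two-dimensional result when building by hand is reshape:

R = reshape([0; 1], 2, 1)    # 2x1 Array{Int64,2}
(R' * A * R)^(-1)            # now a 1x1 matrix, so the inverse works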



[julia-users] Re: Overwriting getIndex

2014-12-08 Thread Toivo Henningsson
Oh, but it is exported implicitly, since you are adding methods to 
Base.getindex, which is exported to everyone. Exporting is per function, not 
per method. That is the reason that you can get the ambiguity with definitions 
in Base. 
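
A small sketch of that point (hypothetical names): a method added to Base.getindex inside a module is usable everywhere, without any export statement.

module M
type Wrap                    # Julia 0.3 composite type
    data::Vector{Int}
end
Base.getindex(w::Wrap, i::Int) = w.data[i]   # extends the existing generic function
end

w = M.Wrap([10, 20, 30])
w[2]    # => 20 — indexing works outside M, because the method lives on Base.getindex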

Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Tom Short
Exciting, John! Although your documentation may be very sparse, the code
is nicely documented.

On Mon, Dec 8, 2014 at 12:35 AM, John Myles White johnmyleswh...@gmail.com
wrote:

 Over the last month or so, I've been slowly working on a new library that
 defines an abstract toolkit for writing CSV parsers. The goal is to provide
 an abstract interface that users can implement in order to provide
 functions for reading data into their preferred data structures from CSV
 files. In principle, this approach should allow us to unify the code behind
 Base's readcsv and DataFrames's readtable functions.

 The library is still very much a work-in-progress, but I wanted to let
 others see what I've done so that I can start getting feedback on the
 design.

 Because the library makes heavy use of Nullables, you can only try out the
 library on Julia 0.4. If you're interested, it's available at
 https://github.com/johnmyleswhite/CSVReaders.jl

 For now, I've intentionally given very sparse documentation to discourage
 people from seriously using the library before it's officially released.
 But there are some examples in the README that should make clear how the
 library is intended to be used.

  -- John




Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Stefan Karpinski
Doh. Obfuscate the code quick, before anyone uses it! This is very nice and
something I've always felt like we need for data formats like CSV – a way
of decoupling the parsing of the format from the populating of a data
structure with that data. It's a tough problem.

On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com wrote:

 Exciting, John! Although your documentation may be very sparse, the code
 is nicely documented.

 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White 
 johnmyleswh...@gmail.com wrote:

 Over the last month or so, I've been slowly working on a new library that
 defines an abstract toolkit for writing CSV parsers. The goal is to provide
 an abstract interface that users can implement in order to provide
 functions for reading data into their preferred data structures from CSV
 files. In principle, this approach should allow us to unify the code behind
 Base's readcsv and DataFrames's readtable functions.

 The library is still very much a work-in-progress, but I wanted to let
 others see what I've done so that I can start getting feedback on the
 design.

 Because the library makes heavy use of Nullables, you can only try out
 the library on Julia 0.4. If you're interested, it's available at
 https://github.com/johnmyleswhite/CSVReaders.jl

 For now, I've intentionally given very sparse documentation to discourage
 people from seriously using the library before it's officially released.
 But there are some examples in the README that should make clear how the
 library is intended to be used.

  -- John





[julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
That's really great! I will have to investigate how to convert between 
OpenGL datatypes and UMat.
It would be incredible if we could convert between Julia, OpenCL, OpenGL and 
OpenCV datatypes without a big overhead.
I'm pretty sure that no other language has solved this nicely! ;)

On Saturday, December 6, 2014 at 11:44:45 UTC+1, Max Suster wrote:


 Hi all, 

 A few months ago I set out to learn Julia in an attempt to find an 
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision 
 applications, I set myself to put together a simple interface for OpenCV in 
 Julia.  Coding in Julia and developing the interface between C++ and 
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer 
 vision applications (C++) directly in Julia 
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily on 
 Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia C++ 
 foreign function interface (FFI).  You can find all the information on my 
 package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is 
 extremely easy to run.  For Windows and Linux, you will need to first 
 compile the OpenCV libraries, but this is well documented and links to the 
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at this 
 point I have created custom wrappings for core, imgproc, videoio and 
 highgui modules so that these are easy to use for anyone. 

 The package also demonstrates/contains 

- preliminary interface with the Qt GUI framework (see imread() and 
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings 
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though this 
has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added 
 and I will try my best to integrate them. In the meantime, I will continue 
 to integrate more advanced algorithms for computer vision and eventually 
 extend the documentation as needed.

 Cheers,
 Max 







Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-08 Thread Andreas Noack
Ivar, I like that idea. It would make it easier to use pointers for arrays.
Maybe there are caveats I haven't seen.

Regarding AbstractArray, I think the call to elsize serves as a correctness
check for that method.

2014-12-07 17:31 GMT-05:00 Ivar Nesje iva...@gmail.com:

 Seems to me like pointer(A, 3, 6) would be nice and unambiguous for 2d
 arrays. Is there any reason why that shouldn't be implemented?

 The current implementation is a little too dangerous for AbstractArray, in
 my opinion. Can we limit it to ContiguousArray (or whatever it is called
 now), and make it somewhat safer?


 https://github.com/JuliaLang/julia/blob/4b299c2fd5464ece308a8e708789a9d2aa9e32d3/base/pointer.jl#L29

 I know these questions are a better fit for github, but I don't have time
 to create a PR right now.
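
For context, a sketch of what pointer(A, 3, 6) would shorten; today the same thing needs a manual sub2ind (hypothetical example):

A = rand(10, 10)
p = pointer(A, sub2ind(size(A), 3, 6))   # pointer to the element A[3, 6]
unsafe_load(p) == A[3, 6]                # => true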


Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Christoph Ortner
Does Jupyter not have this sort of functionality?
   Christoph


On Monday, 8 December 2014 12:20:08 UTC, Simon Byrne wrote:

 Fantastic, thanks! For the record, in case anyone else is interested:

 cd ~/.ipython
 cp -R profile_julia profile_julia4

 Then open ~/.ipython/profile_julia4/ipython_config.py and find the line 
 starting with c.KernelManager.kernel_cmd = , to point to the relevant 
 julia executable and ijulia package path. Then

 ipython notebook --profile=julia4

 to start.

 Simon

 On Monday, 8 December 2014 12:06:02 UTC, Isaiah wrote:

 IIRC, the julia path is hard-coded in the profile spec under ~/.ipython

 So, you could copy/rename the profile folder and change the path, for 
 each julia version (and then probably create a launcher script to start 
 'ipython --notebook --profile=PROFILENAME' as needed)
 On Dec 8, 2014 5:58 AM, Simon Byrne simon...@gmail.com wrote:

 I have multiple versions of julia installed on my machine. Is there an 
 easy way to specify which version of julia I want to use when running 
 ijulia?

 Simon



[julia-users] Re: Overwriting getIndex

2014-12-08 Thread Christoph Ortner
Thank you - this is good to know.
Christoph


Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Stefan Karpinski
I think they're working on it, but it's not there yet.

On Mon, Dec 8, 2014 at 9:07 AM, Christoph Ortner christophortn...@gmail.com
 wrote:

 Does Jupyter not have this sort of functionality?
Christoph


 On Monday, 8 December 2014 12:20:08 UTC, Simon Byrne wrote:

 Fantastic, thanks! For the record, in case anyone else is interested:

 cd ~/.ipython
 cp -R profile_julia profile_julia4

 Then open ~/.ipython/profile_julia4/ipython_config.py and find the line
 starting with c.KernelManager.kernel_cmd = , to point to the relevant
 julia executable and ijulia package path. Then

 ipython notebook --profile=julia4

 to start.

 Simon

 On Monday, 8 December 2014 12:06:02 UTC, Isaiah wrote:

 IIRC, the julia path is hard-coded in the profile spec under ~/.ipython

 So, you could copy/rename the profile folder and change the path, for
 each julia version (and then probably create a launcher script to start
 'ipython --notebook --profile=PROFILENAME' as needed)
 On Dec 8, 2014 5:58 AM, Simon Byrne simon...@gmail.com wrote:

 I have multiple versions of julia installed on my machine. Is there an
 easy way to specify which version of julia I want to use when running
 ijulia?

 Simon




Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-08 Thread Ivar Nesje
 I don't think

elsize{T}(::AbstractArray{T}) = sizeof(T)

provides much in the sense of protection, but the *convert(Ptr{T}, x)* will 
probably filter out AbstractArrays that aren't backed by a pointer. SubArray 
implements its own pointer method, so we will probably get a MethodError 
when we expect it.

Ivar

On Monday, December 8, 2014 at 15:04:59 UTC+1, Andreas Noack wrote:

 Ivar, I like that idea. It would make it easier to use pointers for 
 arrays. Maybe there are caveats I haven't seen.

 Regarding AbstractArray, I think the call to elsize serves as a 
 correctness check for that method.

 2014-12-07 17:31 GMT-05:00 Ivar Nesje iva...@gmail.com:

 Seems to me like pointer(A, 3, 6) would be nice and unambiguous for 2d 
 arrays. Is there any reason why that shouldn't be implemented?

 The current implementation is a little too dangerous for AbstractArray, 
 in my opinion. Can we limit it to ContiguousArray (or whatever it is called 
 now), and make it somewhat safer?


 https://github.com/JuliaLang/julia/blob/4b299c2fd5464ece308a8e708789a9d2aa9e32d3/base/pointer.jl#L29

 I know these questions is a better fit for github, but I don't have time 
 to create a PR right now.




[julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster



 http://stackoverflow.com/questions/9097756/converting-data-from-glreadpixels-to-opencvmat


Seems that the link above shows a solution to this. 

Max

On Monday, December 8, 2014 3:02:25 PM UTC+1, Simon Danisch wrote:

 That's really great! I will have to investigate, how to convert between 
 OpenGL datatypes and UMat.
 It would be incredible, if we can convert between julia, opencl, opengl 
 and opencv datatypes without a big overhead.
 I'm pretty sure, that no other language has this solved nicely! ;)

 On Saturday, December 6, 2014 at 11:44:45 UTC+1, Max Suster wrote:


 Hi all, 

 A few months ago I set out to learn Julia in an attempt to find an 
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision 
 applications, I set myself to put together a simple interface for OpenCV in 
 Julia.  Coding in Julia and developing the interface between C++ and 
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer 
 vision applications (C++) directly in Julia 
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily on 
 Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia C++ 
 foreign function interface (FFI).  You can find all the information on my 
 package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is 
 extremely easy to run.  For Windows and Linux, you will need to first 
 compile the OpenCV libraries, but this is well documented and links to the 
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at this 
 point I have created custom wrappings for core, imgproc, videoio and 
 highgui modules so that these are easy to use for anyone. 

 The package also demonstrates/contains 

- preliminary interface with the Qt GUI framework (see imread() and 
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings 
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though 
this has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added 
 and I will try my best to integrate them. In the meantime, I will continue 
 to integrate more advanced algorithms for computer vision and eventually 
 extend the documentation as needed.

 Cheers,
 Max 







Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-08 Thread Andreas Noack
Okay. I missed that definition for AbstractArray. Thanks for the
clarification.

2014-12-08 9:51 GMT-05:00 Ivar Nesje iva...@gmail.com:

 I don't think

 elsize{T}(::AbstractArray{T}) = sizeof(T)

 provides much in the sense of protection, but the *convert(Ptr{T}, x)* will
 probably filter out AbstractArrays that isn't backed by a pointer. SubArray
 implements its own pointer method, so we will probably get a MethodError
 when we expect it.

 Ivar

 On Monday, December 8, 2014 at 15:04:59 UTC+1, Andreas Noack wrote:

 Ivar, I like that idea. It would make it easier to use pointers for
 arrays. Maybe there are caveats I haven't seen.

 Regarding AbstractArray, I think the call to elsize serves as a
 correctness check for that method.

 2014-12-07 17:31 GMT-05:00 Ivar Nesje iva...@gmail.com:

 Seems to me like pointer(A, 3, 6) would be nice and unambiguous for 2d
 arrays. Is there any reason why that shouldn't be implemented?

 The current implementation is a little too dangerous for AbstractArray,
 in my opinion. Can we limit it to ContiguousArray (or whatever it is called
 now), and make it somewhat safer?

 https://github.com/JuliaLang/julia/blob/4b299c2fd5464ece308a8e708789a9
 d2aa9e32d3/base/pointer.jl#L29

 I know these questions is a better fit for github, but I don't have time
 to create a PR right now.





Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
That's the trivial solution with quite a few copies and transfers involved.
But if the Umat is used with OpenCL, the image data should reside on the
GPU already, which means you don't need to download it from the gpu and
upload it back again for displaying the data.
But that means, it must be possible to create a UMat from an OpenCL buffer,
as it must be created first via OpenGL (silly restriction for the OpenCL
OpenGL interoperability).
If that works, one can manipulate the Umat via OpenCL and the OpenGL buffer
will be the same memory and can be displayed without any copies ;)


2014-12-08 15:43 GMT+01:00 Max Suster mxsst...@gmail.com:


 http://stackoverflow.com/questions/9097756/converting-data-from-glreadpixels-to-opencvmat


 Seem that the link above shows a solution to this.

 Max

 On Monday, December 8, 2014 3:02:25 PM UTC+1, Simon Danisch wrote:

 That's really great! I will have to investigate, how to convert between
 OpenGL datatypes and UMat.
 It would be incredible, if we can convert between julia, opencl, opengl
 and opencv datatypes without a big overhead.
 I'm pretty sure, that no other language has this solved nicely! ;)

 On Saturday, December 6, 2014 at 11:44:45 UTC+1, Max Suster wrote:


 Hi all,

 A few months ago I set out to learn Julia in an attempt to find an
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision
 applications, I set myself to put together a simple interface for OpenCV in
 Julia.  Coding in Julia and developing the interface between C++ and
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer
 vision applications (C++) directly in Julia
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily
 on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia
 C++ foreign function interface (FFI).  You can find all the information on
 my package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is
 extremely easy to run.  For Windows and Linux, you will need to first
 compile the OpenCV libraries, but this is well documented and links to the
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at
 this point I have created custom wrappings for core, imgproc, videoio
 and highgui modules so that these are easy to use for anyone.

 The package also demonstrates/contains

- preliminary interface with the Qt GUI framework (see imread() and
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though
this has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added
 and I will try my best to integrate them. In the meantime, I will continue
 to integrate more advanced algorithms for computer vision and eventually
 extend the documentation as needed.

 Cheers,
 Max








Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
This would be a related (unanswered) question:
http://answers.opencv.org/question/38848/opencv-displaying-umat-efficiently/
As I don't see any related constructors for the UMat, the easiest starting
point would be to ask in the forum.
This will probably get faster results than diving into the code ourselves.


2014-12-08 15:59 GMT+01:00 Simon Danisch sdani...@gmail.com:

 That's the trivial solution with quite a few copies and transfers involved.
 But if the Umat is used with OpenCL, the image data should reside on the
 GPU already, which means you don't need to download it from the gpu and
 upload it back again for displaying the data.
 But that means, it must be possible to create a UMat from an OpenCL
 buffer, as it must be created first via OpenGL (silly restriction for the
 OpenCL OpenGL interoperability).
 If that works, one can manipulate the Umat via OpenCL and the OpenGL
 buffer will be the same memory and can be displayed without any copies ;)


 2014-12-08 15:43 GMT+01:00 Max Suster mxsst...@gmail.com:


 http://stackoverflow.com/questions/9097756/converting-data-from-glreadpixels-to-opencvmat


 Seem that the link above shows a solution to this.

 Max

 On Monday, December 8, 2014 3:02:25 PM UTC+1, Simon Danisch wrote:

 That's really great! I will have to investigate, how to convert between
 OpenGL datatypes and UMat.
 It would be incredible, if we can convert between julia, opencl, opengl
 and opencv datatypes without a big overhead.
 I'm pretty sure, that no other language has this solved nicely! ;)

 On Saturday, December 6, 2014 at 11:44:45 UTC+1, Max Suster wrote:


 Hi all,

 A few months ago I set out to learn Julia in an attempt to find an
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision
 applications, I set myself to put together a simple interface for OpenCV in
 Julia.  Coding in Julia and developing the interface between C++ and
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer
 vision applications (C++) directly in Julia
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily
 on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia
 C++ foreign function interface (FFI).  You can find all the information on
 my package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it
 is extremely easy to run.  For Windows and Linux, you will need to first
 compile the OpenCV libraries, but this is well documented and links to the
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at
 this point I have created custom wrappings for core, imgproc, videoio
 and highgui modules so that these are easy to use for anyone.

 The package also demonstrates/contains

- preliminary interface with the Qt GUI framework (see imread() and
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though
this has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see
 added and I will try my best to integrate them. In the meantime, I will
 continue to integrate more advanced algorithms for computer vision and
 eventually extend the documentation as needed.

 Cheers,
 Max









[julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
If you're interested here are some more links:
https://software.intel.com/en-us/articles/opencl-and-opengl-interoperability-tutorial
Valentine's and mine prototype for OpenGL OpenCL interoperability in Julia:
https://github.com/vchuravy/qjulia_gpu


On Saturday, December 6, 2014 at 11:44:45 UTC+1, Max Suster wrote:


 Hi all, 

 A few months ago I set out to learn Julia in an attempt to find an 
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision 
 applications, I set myself to put together a simple interface for OpenCV in 
 Julia.  Coding in Julia and developing the interface between C++ and 
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer 
 vision applications (C++) directly in Julia 
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily on 
 Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia C++ 
 foreign function interface (FFI).  You can find all the information on my 
 package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is 
 extremely easy to run.  For Windows and Linux, you will need to first 
 compile the OpenCV libraries, but this is well documented and links to the 
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at this 
 point I have created custom wrappings for core, imgproc, videoio and 
 highgui modules so that these are easy to use for anyone. 

 The package also demonstrates/contains 

- preliminary interface with the Qt GUI framework (see imread() and 
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings 
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though this 
has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added 
 and I will try my best to integrate them. In the meantime, I will continue 
 to integrate more advanced algorithms for computer vision and eventually 
 extend the documentation as needed.

 Cheers,
 Max 







Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Shashi Gowda
I got this request too. Not going to reply. I wonder who is writing it.

On Sat, Dec 6, 2014 at 4:58 AM, ele...@gmail.com wrote:

 Have had the same problem with other open source projects I participate
 in, they spam anybody prominent on the ML or github.

 The resulting books seem to contain large parts consisting of the projects
 manuals, often verbatim.

 Cheers
 Lex

 On Saturday, December 6, 2014 3:45:19 AM UTC+10, Stefan Karpinski wrote:

 Yes, as the contact has been so relentlessly spammy, I've started to
 treat it as spam.

 On Fri, Dec 5, 2014 at 12:23 PM, Iain Dunning iaind...@gmail.com wrote:

 I also refused to review for them, both in this new wave of spam and the
 previous one, on the basis that I don't think such a book should exist
 (yet). I also felt that by being a reviewer, I'm authorizing the use of my
 name for a product I have no control over (they can just ignore what you
 say). They seem like a bunch of crooks to be honest.





Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Stefan Karpinski
I meant from Jupyter UI. I think that switching kernels in notebooks is in
progress.

On Mon, Dec 8, 2014 at 10:33 AM, Shashi Gowda shashigowd...@gmail.com
wrote:

 In fact there is a way to do this.

 Just go to the REPL with the Julia version you want to switch to and run
 Pkg.build(IJulia). This replaces the julia profile files in ~/.ipython to
 use the right Julia executable.



Re: [julia-users] ijulia with multiple versions

2014-12-08 Thread Isaiah Norton

 Just go to the REPL with the Julia version you want to switch to and run
 Pkg.build("IJulia"). This replaces the julia profile files in ~/.ipython to
 use the right Julia executable.


Right, but this is kind of slow and would be annoying to do for routine use
I think.
Another option might be to change the hard-coded path to look at an
environment variable instead (or a global set from a wrapper script, etc.)

On Mon, Dec 8, 2014 at 10:33 AM, Shashi Gowda shashigowd...@gmail.com
wrote:

 In fact there is a way to do this.

 Just go to the REPL with the Julia version you want to switch to and run
 Pkg.build("IJulia"). This replaces the julia profile files in ~/.ipython to
 use the right Julia executable.



Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-08 Thread Petr Krysl
Thanks!

On Monday, December 8, 2014 2:09:27 AM UTC-8, Mauro wrote:

 There was one talk about FEM at JuliaCon: 
 https://www.youtube.com/watch?v=8wx6apk3xQs 

 On Mon, 2014-12-08 at 04:48, Petr Krysl krysl...@gmail.com 
 wrote: 
  Bit more optimization: 
  
  Amuthan's code: 29 seconds 
  J FinEALE: 54 seconds 
  Matlab FinEALE: 810 seconds 
  
  Petr 
  
  
  On Sunday, December 7, 2014 1:45:28 PM UTC-8, Petr Krysl wrote: 
  
  Sorry: minor correction.  I mistakenly timed also output to a VTK file 
 for 
  paraview postprocessing.  This being an ASCII file, it takes a few 
 seconds. 
  
  With J FinEALE the timing is now 78 seconds. 
  
  Petr 
  
  On Sunday, December 7, 2014 10:21:51 AM UTC-8, Petr Krysl wrote: 
  
  
  Amuthan's code: 29 seconds 
  
  J FinEALE: 86 seconds 
  
  FinEALE: 810 seconds 
  
  
  



Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Thanks, Tom. I wanted this to be my first package that uses the full 
functionality of the new documentation system.

 -- John

On Dec 8, 2014, at 5:08 AM, Tom Short tshort.rli...@gmail.com wrote:

 Exciting, John! Although your documentation may be very sparse, the code is 
 nicely documented.
 
 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White johnmyleswh...@gmail.com 
 wrote:
 Over the last month or so, I've been slowly working on a new library that 
 defines an abstract toolkit for writing CSV parsers. The goal is to provide 
 an abstract interface that users can implement in order to provide functions 
 for reading data into their preferred data structures from CSV files. In 
 principle, this approach should allow us to unify the code behind Base's 
 readcsv and DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let others 
 see what I've done so that I can start getting feedback on the design.
 
 Because the library makes heavy use of Nullables, you can only try out the 
 library on Julia 0.4. If you're interested, it's available at 
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to discourage 
 people from seriously using the library before it's officially released. But 
 there are some examples in the README that should make clear how the library 
 is intended to be used.
 
  -- John
 
 



Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
I believe/hope the proposed solution will work for most cases, although there's 
still a bunch of performance work left to be done. I think the decoupling 
problem isn't as hard as it might seem since there are very clearly distinct 
stages in parsing a CSV file. But we'll find out if the indirection I've 
introduced causes performance problems when things can't be inlined.

While writing this package, I found the two most challenging problems to be:

(A) The disconnect between CSV files providing one row at a time and Julia's 
usage of column-major arrays, which encourages reading one column at a time.
(B) The inability to easily `resize!` a matrix.

 -- John

On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org wrote:

 Doh. Obfuscate the code quick, before anyone uses it! This is very nice and 
 something I've always felt like we need for data formats like CSV – a way of 
 decoupling the parsing of the format from the populating of a data structure 
 with that data. It's a tough problem.
 
 On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com wrote:
 Exciting, John! Although your documentation may be very sparse, the code is 
 nicely documented.
 
 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White johnmyleswh...@gmail.com 
 wrote:
 Over the last month or so, I've been slowly working on a new library that 
 defines an abstract toolkit for writing CSV parsers. The goal is to provide 
 an abstract interface that users can implement in order to provide functions 
 for reading data into their preferred data structures from CSV files. In 
 principle, this approach should allow us to unify the code behind Base's 
 readcsv and DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let others 
 see what I've done so that I can start getting feedback on the design.
 
 Because the library makes heavy use of Nullables, you can only try out the 
 library on Julia 0.4. If you're interested, it's available at 
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to discourage 
 people from seriously using the library before it's officially released. But 
 there are some examples in the README that should make clear how the library 
 is intended to be used.
 
  -- John
 
 
 



[julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster
Thanks for the feedback.  I realize that the copying needs to be skipped if 
possible...
I have been playing a bit with the OpenCL UMat and it will indeed need some 
tweaking, because UMat is not always advantageous.
While there is a ~10x gain with cvtColor, other functions such as 
GaussianBlur are actually a little slower.

I will have a closer look at this tonight.

Max

On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:

 If you're interested here are some more links:

 https://software.intel.com/en-us/articles/opencl-and-opengl-interoperability-tutorial
 Valentine's and my prototype for OpenGL/OpenCL interoperability in Julia:
 https://github.com/vchuravy/qjulia_gpu


 Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster:


 Hi all, 

 A few months ago I set out to learn Julia in an attempt to find an 
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2 
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4 
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision 
 applications, I set myself to put together a simple interface for OpenCV in 
 Julia.  Coding in Julia and developing the interface between C++ and 
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer 
 vision applications (C++) directly in Julia 
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily on 
 Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia C++ 
 foreign function interface (FFI).  You can find all the information on my 
 package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

 Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
 using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is 
 extremely easy to run.  For Windows and Linux, you will need to first 
 compile the OpenCV libraries, but this is well documented and links to the 
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at this 
 point I have created custom wrappings for core, imgproc, videoio and 
 highgui modules so that these are easy to use for anyone. 

 The package also demonstrates/contains 

- preliminary interface with the Qt GUI framework (see imread() and 
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings 
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though 
this has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added 
 and I will try my best to integrate them. In the meantime, I will continue 
 to integrate more advanced algorithms for computer vision and eventually 
 extend the documentation as needed.

 Cheers,
 Max 







Re: [julia-users] scoping and begin

2014-12-08 Thread Isaiah Norton
`tmp` is declared local to the begin blocks.
If that sounds odd (it did to me at first), try typing `local t = 1; t` at the REPL.
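
For illustration, a minimal sketch of that behavior at the REPL (output is approximate and may vary a little between Julia versions):

julia> begin
           local tmp = 1
           tmp + 1        # tmp is visible inside the begin block
       end
2

julia> tmp                # but it never becomes a global
ERROR: UndefVarError: tmp not defined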

On Mon, Dec 8, 2014 at 6:11 AM, Simon Byrne simonby...@gmail.com wrote:

 According to the docs
 http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/,
 begin blocks do not introduce new scope blocks. But this block:


 https://github.com/JuliaLang/julia/blob/1b9041ce2919f2976ec726372b49201c887398d7/base/string.jl#L1601-L1618

 *does* seem to introduce a new scope (i.e. if I type Base.tmp at the
 REPL, I get an error). What am I missing here?

 Simon



[julia-users] Re: [WIP] CSVReaders.jl

2014-12-08 Thread Simon Byrne
Very nice. I was thinking about this recently when I came across the rust 
csv library:
http://burntsushi.net/rustdoc/csv/

It had a few neat features that I thought were useful:
* the ability to iterate by row, without saving the entire table to an 
object first (i.e. like awk)
* the option to specify the type of each column (to improve performance)

Some other things I've often wished for in CSV libraries:
* be able to specify arbitrary functions for mapping a string to a data 
type (e.g. strip out currency symbols, fix funny formatting, etc.)
* be able to specify an end-of-data rule other than end-of-file or number 
of lines (e.g. stop on an empty line -- see the sketch below)
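
Something along those lines in plain Julia, just as a sketch (this is not the CSVReaders API; the file name, delimiter and column index are made up, and parse(Float64, s) is the 0.4-and-later spelling):

function sum_second_column(path)
    total = 0.0
    open(path) do io
        readline(io)                          # skip the header row
        for line in eachline(io)
            isempty(strip(line)) && break     # crude end-of-data rule: stop on a blank line
            row = split(chomp(line), ',')
            total += parse(Float64, row[2])   # an arbitrary per-column converter would plug in here
        end
    end
    return total
end

sum_second_column("data.csv")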

s

On Monday, 8 December 2014 05:35:02 UTC, John Myles White wrote:

 Over the last month or so, I've been slowly working on a new library that 
 defines an abstract toolkit for writing CSV parsers. The goal is to provide 
 an abstract interface that users can implement in order to provide 
 functions for reading data into their preferred data structures from CSV 
 files. In principle, this approach should allow us to unify the code behind 
 Base's readcsv and DataFrames's readtable functions.

 The library is still very much a work-in-progress, but I wanted to let 
 others see what I've done so that I can start getting feedback on the 
 design.

 Because the library makes heavy use of Nullables, you can only try out the 
 library on Julia 0.4. If you're interested, it's available at 
 https://github.com/johnmyleswhite/CSVReaders.jl

 For now, I've intentionally given very sparse documentation to discourage 
 people from seriously using the library before it's officially released. 
 But there are some examples in the README that should make clear how the 
 library is intended to be used.

  -- John



Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
That's interesting, Gaussian blur should definitely be faster on the GPU!
Maybe this thread helps?
http://answers.opencv.org/question/34127/opencl-in-opencv-300/
It seems like things are a little complicated, as it isn't really clear if
the data is currently in VRAM or RAM...

2014-12-08 17:39 GMT+01:00 Max Suster mxsst...@gmail.com:

 Thanks for the feedback.  I realize that the copying needs to be skipped
 if possible...
 I have been playing a bit with the OpenCL UMat and it will indeed need
 some tweaking, because UMat is not always advantageous.
 While there is a ~10x gain with cvtColor, other functions such as
 GaussianBlur are actually a little slower.

 I will have a closer look at this tonight.

 Max

 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:

 If you're interested here are some more links:
 https://software.intel.com/en-us/articles/opencl-and-opengl-
 interoperability-tutorial
  Valentine's and my prototype for OpenGL/OpenCL interoperability in
 Julia:
 https://github.com/vchuravy/qjulia_gpu


 Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster:


 Hi all,

 A few months ago I set out to learn Julia in an attempt to find an
 alternative to MATLAB for developing computer vision applications.
 Given the interest (1
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/PjyfzxPt8Gk/SuwKtjTd9j4J
 ,2
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-users/81V5zSNJY3Q/DRUT0dR2qhQJ
 ,3
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/iUPqo8drYek/pUeHECk91AQJ
 ,4
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-users/6QunG66MfNs/C63pDfI-EMAJ
 ) and wide application of OpenCV for fast real-time computer vision
 applications, I set myself to put together a simple interface for OpenCV in
 Julia.  Coding in Julia and developing the interface between C++ and
 Julia has been a lot of fun!

 OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/ 
 computer
 vision applications (C++) directly in Julia
 http://julia.readthedocs.org/en/latest/manual/. It relies primarily
 on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia
 C++ foreign function interface (FFI).  You can find all the information on
 my package at https://github.com/maxruby/OpenCV.jl.

 You can download and run the package as follows:

  Pkg.clone("git://github.com/maxruby/OpenCV.jl.git")
  using OpenCV


 For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is
 extremely easy to run.  For Windows and Linux, you will need to first
 compile the OpenCV libraries, but this is well documented and links to the
 instructions for doing so are included in the README.md file.

 The package currently supports most of OpenCV´s C++ API; however, at
 this point I have created custom wrappings for core, imgproc, videoio
 and highgui modules so that these are easy to use for anyone.

 The package also demonstrates/contains

- preliminary interface with the Qt GUI framework (see imread() and
imwrite() functions)
- thin-wrappers for C++ objects such as std::vectors, std::strings
- conversion from Julia arrays to C++ std::vector
- conversion of Julia images (Images.jl) to Mat (OpenCV) - though
this has much room for improvement (i.e., color handling)

 Please let me know if there are any features you would like to see added
 and I will try my best to integrate them. In the meantime, I will continue
 to integrate more advanced algorithms for computer vision and eventually
 extend the documentation as needed.

 Cheers,
 Max








Re: [julia-users] scoping and begin

2014-12-08 Thread Simon Byrne
So begin blocks can introduce a scope; they just don't by default?

In your `local t = 1; t` example: what, then, is the scope of t?


On Monday, 8 December 2014 16:56:24 UTC, Isaiah wrote:

 tmp is declared local to the begin blocks
 if that sounds odd (it did to me at first), try typing `local t = 1; t`)

 On Mon, Dec 8, 2014 at 6:11 AM, Simon Byrne simon...@gmail.com 
 javascript: wrote:

 According to the docs 
 http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/, 
 begin blocks do not introduce new scope blocks. But this block:


 https://github.com/JuliaLang/julia/blob/1b9041ce2919f2976ec726372b49201c887398d7/base/string.jl#L1601-L1618

 *does* seem to introduce a new scope (i.e. if I type Base.tmp at the 
 REPL, I get an error). What am I missing here?

 Simon




Re: [julia-users] Re: [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Thanks, Simon. In response to your comments:

* This package and the current DataFrames code both support streaming CSV files 
in minibatches. It's a little hard to do this with the current DataFrames 
reader, but it is possible. It is designed to be easier with CSVReaders.

* This package and the current DataFrames code both support specifying the 
types of all columns before parsing begins. There's no fast path in CSVReaders 
that uses this information to full advantage because the functions were 
designed to never fail -- instead they always enlarge types to ensure 
successful parsing. It would be good to think about how the library needs to be 
restructured to support both use cases. I believe the DataFrames parser will 
fail if the hand-specified types are invalidated by the data.

* I'm hopeful that the String rewrite Stefan is involved with will make it 
easier to write parser functions that take in an Array{Uint8} and return values 
of type T. There's certainly no reason that CSVReaders couldn't be configured 
to use other parser functions, although it might be best not to pass parsing 
functions in as function arguments since the parsing functions might not get 
inlined. At the moment, I'd prefer to see new parsers be added to the default 
list and therefore available to everyone. This is particularly relevant to me, 
since I want to add support for reading in data from Hive tables -- which 
require parsing Array and Map objects from CSV-style files.

One thing that makes parsing tricky is that type inference requires that all 
parseable types be placed into a linear order: if parsing as Int fails, the 
parser falls back to Float64, then Bool, then UTF8String. Coming up with a 
design that handles arbitrary types in a non-linear tree, while still 
supporting automatic type inference, seems tricky.
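
Roughly, that linear fallback could look like the following sketch (not the actual CSVReaders internals; written with the newer tryparse that returns nothing, where 0.4-era code would use Nullable and isnull instead):

function infer_field(s::AbstractString)
    for T in (Int, Float64, Bool)
        v = tryparse(T, s)
        v === nothing || return v     # first type that parses wins
    end
    return s                          # otherwise fall back to the raw string
end

infer_field("42")      # => 42
infer_field("4.2")     # => 4.2
infer_field("true")    # => true
infer_field("hello")   # => "hello"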

* Does the CSV standard have anything like END-OF-DATA? It's a very cool idea, 
but it seems that you'd need to introduce an arbitrary predicate that occurs 
per-row to make things work in the absence of existing conventions.

 -- John

On Dec 8, 2014, at 8:51 AM, Simon Byrne simonby...@gmail.com wrote:

 Very nice. I was thinking about this recently when I came across the rust csv 
 library:
 http://burntsushi.net/rustdoc/csv/
 
 It had a few neat features that I thought were useful:
 * the ability to iterate by row, without saving the entire table to an object 
 first (i.e. like awk)
 * the option to specify the type of each column (to improve performance)
 
 Some other things I've often wished for in CSV libraries:
 * be able to specify arbitrary functions for mapping a string to a data type 
 (e.g. strip out currency symbols, fix funny formatting, etc.)
 * be able to specify an end-of-data rule other than end-of-file or number 
 of lines (e.g. stop on an empty line)
 
 s
 
 On Monday, 8 December 2014 05:35:02 UTC, John Myles White wrote:
 Over the last month or so, I've been slowly working on a new library that 
 defines an abstract toolkit for writing CSV parsers. The goal is to provide 
 an abstract interface that users can implement in order to provide functions 
 for reading data into their preferred data structures from CSV files. In 
 principle, this approach should allow us to unify the code behind Base's 
 readcsv and DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let others 
 see what I've done so that I can start getting feedback on the design.
 
 Because the library makes heavy use of Nullables, you can only try out the 
 library on Julia 0.4. If you're interested, it's available at 
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to discourage 
 people from seriously using the library before it's officially released. But 
 there are some examples in the README that should make clear how the library 
 is intended to be used.
 
  -- John
 



Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Tim Holy
My suspicion is you should read into a 1d vector (and use `append!`), then at 
the end do a reshape and finally a transpose. I bet that will be many times 
faster than any other alternative, because we have a really fast transpose 
now.

The only disadvantage I see is taking twice as much memory as would be 
minimally needed. (This can be fixed once we have row-major arrays.)
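
As a concrete toy illustration of that pattern (the rows here are made-up stand-ins for parsed CSV records):

nrows, ncols = 3, 2
buf = Float64[]
for row in ([1.0, 2.0], [3.0, 4.0], [5.0, 6.0])
    append!(buf, row)                        # cheap amortized growth of the flat buffer
end
A = transpose(reshape(buf, ncols, nrows))    # one reshape + transpose at the end
# A == [1.0 2.0; 3.0 4.0; 5.0 6.0]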

--Tim

On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
 I believe/hope the proposed solution will work for most cases, although
 there's still a bunch of performance work left to be done. I think the
 decoupling problem isn't as hard as it might seem since there are very
 clearly distinct stages in parsing a CSV file. But we'll find out if the
 indirection I've introduced causes performance problems when things can't
 be inlined.
 
 While writing this package, I found the two most challenging problems to be:
 
 (A) The disconnect between CSV files providing one row at a time and Julia's
 usage of column major arrays, which encourage reading one column at a time.
 (B) The inability to easily resize! a matrix.
 
  -- John
 
 On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org wrote:
  Doh. Obfuscate the code quick, before anyone uses it! This is very nice
  and something I've always felt like we need for data formats like CSV – a
  way of decoupling the parsing of the format from the populating of a data
  structure with that data. It's a tough problem.
  
  On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com wrote:
  Exciting, John! Although your documentation may be very sparse, the code
  is nicely documented.
  
  On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
  johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
  slowly working on a new library that defines an abstract toolkit for
  writing CSV parsers. The goal is to provide an abstract interface that
  users can implement in order to provide functions for reading data into
  their preferred data structures from CSV files. In principle, this
  approach should allow us to unify the code behind Base's readcsv and
  DataFrames's readtable functions.
  
  The library is still very much a work-in-progress, but I wanted to let
  others see what I've done so that I can start getting feedback on the
  design.
  
  Because the library makes heavy use of Nullables, you can only try out the
  library on Julia 0.4. If you're interested, it's available at
  https://github.com/johnmyleswhite/CSVReaders.jl
  
  For now, I've intentionally given very sparse documentation to discourage
  people from seriously using the library before it's officially released.
  But there are some examples in the README that should make clear how the
  library is intended to be used. 
   -- John



Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Yes, this is how I've been doing things so far.

 -- John

On Dec 8, 2014, at 9:12 AM, Tim Holy tim.h...@gmail.com wrote:

 My suspicion is you should read into a 1d vector (and use `append!`), then at 
 the end do a reshape and finally a transpose. I bet that will be many times 
 faster than any other alternative, because we have a really fast transpose 
 now.
 
 The only disadvantage I see is taking twice as much memory as would be 
 minimally needed. (This can be fixed once we have row-major arrays.)
 
 --Tim
 
 On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
 I believe/hope the proposed solution will work for most cases, although
 there's still a bunch of performance work left to be done. I think the
 decoupling problem isn't as hard as it might seem since there are very
 clearly distinct stages in parsing a CSV file. But we'll find out if the
 indirection I've introduced causes performance problems when things can't
 be inlined.
 
 While writing this package, I found the two most challenging problems to be:
 
 (A) The disconnect between CSV files providing one row at a time and Julia's
 usage of column major arrays, which encourage reading one column at a time.
 (B) The inability to easily resize! a matrix.
 
 -- John
 
 On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org wrote:
 Doh. Obfuscate the code quick, before anyone uses it! This is very nice
 and something I've always felt like we need for data formats like CSV – a
 way of decoupling the parsing of the format from the populating of a data
 structure with that data. It's a tough problem.
 
 On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com wrote:
 Exciting, John! Although your documentation may be very sparse, the code
 is nicely documented.
 
 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
 johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
 slowly working on a new library that defines an abstract toolkit for
 writing CSV parsers. The goal is to provide an abstract interface that
 users can implement in order to provide functions for reading data into
 their preferred data structures from CSV files. In principle, this
 approach should allow us to unify the code behind Base's readcsv and
 DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let
 others see what I've done so that I can start getting feedback on the
 design.
 
 Because the library makes heavy use of Nullables, you can only try out the
 library on Julia 0.4. If you're interested, it's available at
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to discourage
 people from seriously using the library before it's officially released.
 But there are some examples in the README that should make clear how the
 library is intended to be used. 
 -- John
 



Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster

I know this is a Julia coding forum, but if you have a chance, can you 
compare the two examples below?   
Also, what OS are you using?   Before testing a lot of Julia-wrapped C++ 
API with OpenCV/OpenCL/OpenGL, 
it would be good to know what we can expect at best... 

CPU:

#include <opencv2/opencv.hpp>   // includes and namespaces added here for completeness
using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  Mat img, gray;
  img = imread("your RGB image", -1);
  imshow("original", img);
  cvtColor(img, gray, COLOR_BGR2GRAY);           // 50 ms
  GaussianBlur(gray, gray, Size(7, 7), 1.5);     // 2 ms
  double t = (double)getTickCount();
  Canny(gray, gray, 0, 50);                      // 6 ms
  t = ((double)getTickCount() - t) / getTickFrequency();
  cout << "Times passed in seconds: " << t << endl;
  imshow("edges", gray);
  waitKey();
  return 0;
}


GPU:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

static UMat img, gray;

int main(int argc, char** argv)
{
  Mat src;
  src = imread("your RGB image", -1);
  src.copyTo(img);                               // copy from CPU to GPU buffer
  imshow("original", src);
  double t = (double)getTickCount();
  cvtColor(img, gray, COLOR_BGR2GRAY);           // 3 ms
  t = ((double)getTickCount() - t) / getTickFrequency();
  cout << "Times passed in seconds: " << t << endl;
  GaussianBlur(gray, gray, Size(7, 7), 1.5);     // 13 ms
  Canny(gray, gray, 0, 50);                      // 10 ms
  imshow("edges", gray);
  waitKey();
  return 0;
}






Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Tim Holy
I wonder if the bigger problem might be that your numbers for the grayscale 
conversion (which were very promising) might be misleading. Are you sure the 
calculation is done (and the results are available to the CPU) by the time 
it finishes? If we assume a best-case scenario of 6GB/s of data transfer to the 
GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms. That's 
many times longer than what you reported for that calculation. Or did you not 
include transfer time in your results?
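(In round numbers, with the assumed sizes above: (3 MB + 1 MB) / (6 GB/s) = 4/6144 s, i.e. roughly 0.65 ms, consistent with the ~0.7 ms figure.)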

--Tim

On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
 That's interesting, gaussian blur should definitely be faster on the gpu!
 Maybe this thread helps?
 http://answers.opencv.org/question/34127/opencl-in-opencv-300/
 It seems like things are a little complicated, as it isn't really clear if
 the data is currently in VRAM or RAM...
 
 2014-12-08 17:39 GMT+01:00 Max Suster mxsst...@gmail.com:
  Thanks for the feedback.  I realize that the copying needs to be skipped
  if possible...
  I have been playing a bit with the OpenCL UMat and it will indeed need
  some tweaking, because UMat is not always advantageous.
  While there is a ~10x gain with cvtColor, other functions such as
  GaussianBlur are actually a little slower.
  
  I will have a closer look at this tonight.
  
  Max
  
  On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
  If you're interested here are some more links:
  https://software.intel.com/en-us/articles/opencl-and-opengl-  
  interoperability-tutorial
  Valentine's and mine prototype for OpenGL OpenCL interoperability in
  Julia:
  https://github.com/vchuravy/qjulia_gpu
  
  Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster:
  Hi all,
  
  A few months ago I set out to learn Julia in an attempt to find an
  alternative to MATLAB for developing computer vision applications.
  Given the interest (1
  https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
  rs/PjyfzxPt8Gk/SuwKtjTd9j4J ,2
  https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
  rs/81V5zSNJY3Q/DRUT0dR2qhQJ ,3
  https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
  sers/iUPqo8drYek/pUeHECk91AQJ ,4
  https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
  sers/6QunG66MfNs/C63pDfI-EMAJ ) and wide application of OpenCV for fast
  real-time computer vision applications, I set myself to put together a
  simple interface for OpenCV in Julia.  Coding in Julia and developing
  the interface between C++ and Julia has been a lot of fun!
  
  OpenCV.jl aims to provide an interface for OpenCV http://opencv.org/
  computer vision applications (C++) directly in Julia
  http://julia.readthedocs.org/en/latest/manual/. It relies primarily
  on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia
  C++ foreign function interface (FFI).  You can find all the information
  on
  my package at https://github.com/maxruby/OpenCV.jl.
  
  You can download and run the package as follows:
  
  Pkg.clone(git://github.com/maxruby/OpenCV.jl.git)using OpenCV
  
  
  For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so it is
  extremely easy to run.  For Windows and Linux, you will need to first
  compile the OpenCV libraries, but this is well documented and links to
  the
  instructions for doing so are included in the README.md file.
  
  The package currently supports most of OpenCV´s C++ API; however, at
  this point I have created custom wrappings for core, imgproc, videoio
  and highgui modules so that these are easy to use for anyone.
  
  The package also demonstrates/contains
  
 - preliminary interface with the Qt GUI framework (see imread() and
 imwrite() functions)
 - thin-wrappers for C++ objects such as std::vectors, std::strings
 - conversion from Julia arrays to C++ std::vector
 - conversion of Julia images (Images.jl) to Mat (OpenCV) - though
 this has much room for improvement (i.e., color handling)
  
  Please let me know if there are any features you would like to see added
  and I will try my best to integrate them. In the meantime, I will
  continue
  to integrate more advanced algorithms for computer vision and eventually
  extend the documentation as needed.
  
  Cheers,
  Max



Re: [julia-users] Re: [WIP] CSVReaders.jl

2014-12-08 Thread Simon Byrne

On Monday, 8 December 2014 17:04:10 UTC, John Myles White wrote:

 * This package and the current DataFrames code both support specifying the 
 types of all columns before parsing begins. There's no fast path in 
 CSVReaders that uses this information to full-advantage because the 
 functions were designed to never fail -- instead they always enlarge types 
 to ensure successful parsing. It would be good to think about how the 
 library needs to be restructured to support both use cases. I believe the 
 DataFrames parser will fail if the hand-specified types are invalidated by 
 the data.


I agree that being permissive by default is probably a good idea, but 
sometimes it is nice if the parser throws an error if it finds something 
unexpected. This could also be useful for the end-of-data problem below.

* Does the CSV standard have anything like END-OF-DATA? It's a very cool 
 idea, but it seems that you'd need to introduce an arbitrary predicate that 
 occurs per-row to make things work in the absence of existing conventions.


Well, there isn't really a standard, just this RFC:
http://www.ietf.org/rfc/rfc4180.txt
which seems to assume end-of-data = end-of-file.
 
When I hit this problem the files I was reading weren't actually CSV, but 
this:
http://lsbr.niams.nih.gov/bsoft/bsoft_param.html
which have multiple tables per file, ended by a blank line. I think I ended 
up devising a hack that would count the number of lines beforehand.
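
For files like that, a rough sketch of splitting on blank lines before handing each block to a CSV parser (not any particular package's API; written in current Julia syntax):

function read_blocks(path)
    blocks = Vector{Vector{String}}()
    current = String[]
    open(path) do io
        for line in eachline(io)
            if isempty(strip(line))
                isempty(current) || push!(blocks, current)   # a blank line closes the current table
                current = String[]
            else
                push!(current, chomp(line))
            end
        end
    end
    isempty(current) || push!(blocks, current)
    return blocks
end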


Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Tim Holy
Does the reshape/transpose really take any appreciable time (compared to the 
I/O)?

--Tim

On Monday, December 08, 2014 09:14:35 AM John Myles White wrote:
 Yes, this is how I've been doing things so far.
 
  -- John
 
 On Dec 8, 2014, at 9:12 AM, Tim Holy tim.h...@gmail.com wrote:
  My suspicion is you should read into a 1d vector (and use `append!`), then
  at the end do a reshape and finally a transpose. I bet that will be many
  times faster than any other alternative, because we have a really fast
  transpose now.
  
  The only disadvantage I see is taking twice as much memory as would be
  minimally needed. (This can be fixed once we have row-major arrays.)
  
  --Tim
  
  On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
  I believe/hope the proposed solution will work for most cases, although
  there's still a bunch of performance work left to be done. I think the
  decoupling problem isn't as hard as it might seem since there are very
  clearly distinct stages in parsing a CSV file. But we'll find out if the
  indirection I've introduced causes performance problems when things can't
  be inlined.
  
  While writing this package, I found the two most challenging problems to
  be:
  
  (A) The disconnect between CSV files providing one row at a time and
  Julia's usage of column major arrays, which encourage reading one column
  at a time. (B) The inability to easily resize! a matrix.
  
  -- John
  
  On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org 
wrote:
  Doh. Obfuscate the code quick, before anyone uses it! This is very nice
  and something I've always felt like we need for data formats like CSV –
  a
  way of decoupling the parsing of the format from the populating of a
  data
  structure with that data. It's a tough problem.
  
  On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com
  wrote:
  Exciting, John! Although your documentation may be very sparse, the
  code
  is nicely documented.
  
  On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
  johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
  slowly working on a new library that defines an abstract toolkit for
  writing CSV parsers. The goal is to provide an abstract interface that
  users can implement in order to provide functions for reading data into
  their preferred data structures from CSV files. In principle, this
  approach should allow us to unify the code behind Base's readcsv and
  DataFrames's readtable functions.
  
  The library is still very much a work-in-progress, but I wanted to let
  others see what I've done so that I can start getting feedback on the
  design.
  
  Because the library makes heavy use of Nullables, you can only try out
  the
  library on Julia 0.4. If you're interested, it's available at
  https://github.com/johnmyleswhite/CSVReaders.jl
  
  For now, I've intentionally given very sparse documentation to
  discourage
  people from seriously using the library before it's officially released.
  But there are some examples in the README that should make clear how the
  library is intended to be used.
  -- John



Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Not really. It's mostly that the current interface doesn't make it easy to ask 
for a Matrix back when the intermediates are Vector objects. But I can change 
that.

 -- John

On Dec 8, 2014, at 9:25 AM, Tim Holy tim.h...@gmail.com wrote:

 Does the reshape/transpose really take any appreciable time (compared to the 
 I/O)?
 
 --Tim
 
 On Monday, December 08, 2014 09:14:35 AM John Myles White wrote:
 Yes, this is how I've been doing things so far.
 
 -- John
 
 On Dec 8, 2014, at 9:12 AM, Tim Holy tim.h...@gmail.com wrote:
 My suspicion is you should read into a 1d vector (and use `append!`), then
 at the end do a reshape and finally a transpose. I bet that will be many
 times faster than any other alternative, because we have a really fast
 transpose now.
 
 The only disadvantage I see is taking twice as much memory as would be
 minimally needed. (This can be fixed once we have row-major arrays.)
 
 --Tim
 
 On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
 I believe/hope the proposed solution will work for most cases, although
 there's still a bunch of performance work left to be done. I think the
 decoupling problem isn't as hard as it might seem since there are very
 clearly distinct stages in parsing a CSV file. But we'll find out if the
 indirection I've introduced causes performance problems when things can't
 be inlined.
 
 While writing this package, I found the two most challenging problems to
 be:
 
 (A) The disconnect between CSV files providing one row at a time and
 Julia's usage of column major arrays, which encourage reading one column
 at a time. (B) The inability to easily resize! a matrix.
 
 -- John
 
 On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org 
 wrote:
 Doh. Obfuscate the code quick, before anyone uses it! This is very nice
 and something I've always felt like we need for data formats like CSV –
 a
 way of decoupling the parsing of the format from the populating of a
 data
 structure with that data. It's a tough problem.
 
 On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com
 wrote:
 Exciting, John! Although your documentation may be very sparse, the
 code
 is nicely documented.
 
 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
 johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
 slowly working on a new library that defines an abstract toolkit for
 writing CSV parsers. The goal is to provide an abstract interface that
 users can implement in order to provide functions for reading data into
 their preferred data structures from CSV files. In principle, this
 approach should allow us to unify the code behind Base's readcsv and
 DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let
 others see what I've done so that I can start getting feedback on the
 design.
 
 Because the library makes heavy use of Nullables, you can only try out
 the
 library on Julia 0.4. If you're interested, it's available at
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to
 discourage
 people from seriously using the library before it's officially released.
 But there are some examples in the README that should make clear how the
 library is intended to be used.
 -- John
 



Re: [julia-users] Re: [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Ok, we can change things to fail on type-misspecification.

There's no real standard, but the rule "does Excel read this in a sane way?" is 
pretty effective for determining what you should try parsing and when you 
should tell people to reformat their data.

Given the current infrastructure, I think the easiest way to read that data 
would be to split it into separate files. There are other hacks that would 
work, but your problem is harder than just specifying end-of-data (which can be 
done by reading N rows) -- it's also specifying start-of-data (which can be 
done by skipping M rows at the start).

 -- John

On Dec 8, 2014, at 9:24 AM, Simon Byrne simonby...@gmail.com wrote:

 
 On Monday, 8 December 2014 17:04:10 UTC, John Myles White wrote:
 * This package and the current DataFrames code both support specifying the 
 types of all columns before parsing begins. There's no fast path in 
 CSVReaders that uses this information to full-advantage because the functions 
 were designed to never fail -- instead they always enlarge types to ensure 
 successful parsing. It would be good to think about how the library needs to 
 be restructured to support both use cases. I believe the DataFrames parser 
 will fail if the hand-specified types are invalidated by the data.
 
 I agree that being permissive by default is probably a good idea, but 
 sometimes it is nice if the parser throws an error if it finds something 
 unexpected. This could also be useful for the end-of-data problem below.
 
 * Does the CSV standard have anything like END-OF-DATA? It's a very cool 
 idea, but it seems that you'd need to introduce an arbitrary predicate that 
 occurs per-row to make things work in the absence of existing conventions.
 
 Well, there isn't really a standard, just this RFC:
 http://www.ietf.org/rfc/rfc4180.txt
 which seems to assume end-of-data = end-of-file.
  
 When I hit this problem the files I was reading weren't actually CSV, but 
 this:
 http://lsbr.niams.nih.gov/bsoft/bsoft_param.html
 which have multiple tables per file, ended by a blank line. I think I ended 
 up devising a hack that would count the number of lines beforehand.



[julia-users] Easy way to copy structs.

2014-12-08 Thread Utkarsh Upadhyay
I have just tried playing with Julia and I find myself often copying 
`immutable` objects while changing just one field:

function setX(pt::Point, x::Float64)
  # Only changing the `x` field
  return Point(x, pt.y, pt.z, pt.color, pt.collision, pt.foo, pt.bar);
end

Is there any syntactic sugar to avoid having to write out all the 
other fields which are not changing (e.g. pt.y, pt.z, etc.) while creating 
a new object?
Apart from making the function much easier to understand, this will also 
make it much less of a hassle to change fields on the type.

Thanks.
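
One generic workaround is a small reflection-based helper, sketched below (with_field is a made-up name; it assumes the type has the default positional constructor taking all fields in order, and uses current syntax -- 0.4 would spell the signature with_field{T}(obj::T, field, value)):

function with_field(obj::T, field::Symbol, value) where {T}
    args = Any[f == field ? value : getfield(obj, f) for f in fieldnames(T)]
    return T(args...)
end

# usage, for a Point with fields x, y, z, ...:
# pt2 = with_field(pt, :x, 1.0)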


Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster
It's an exact side-by-side comparison of the same code, and it has actually already been 
tested by others in the OpenCV forum.
The Mat/UMat image is available for display with imshow -- the step imshow("edges", 
gray); in both cases -- which is how the test was set up.
The main point is the time it takes to complete the entire process and the 
fact that cvtColor with OpenCL can generate an image for viewing much more 
quickly.

I never intended to be misleading, and I am sorry that you interpret it 
this way. 

Max


On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:

 I wonder if the bigger problem might be that your numbers for the 
 grayscale 
 conversion (which were very promising) might be misleading. Are you sure 
 the 
 calculation is done (and the results are available to the CPU) by the 
 time 
 it finishes? If we assume a best-case scenario of 6GB/s of data transfer 
 to the 
 GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms. 
 That's 
 many times longer than what you reported for that calculation. Or did you 
 not 
 include transfer time in your results? 

 --Tim 

 On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote: 
  That's interesting, gaussian blur should definitely be faster on the 
 gpu! 
  Maybe this thread helps? 
  http://answers.opencv.org/question/34127/opencl-in-opencv-300/ 
  It seems like things are a little complicated, as it isn't really clear 
 if 
  the data is currently in VRAM or RAM... 
  
  2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com javascript:: 

   Thanks for the feedback.  I realize that the copying needs to be 
 skipped 
   if possible . . . 
   I have been playing a bit with the OpenCL UMat and it will need indeed 
   some tweeking because UMat is not always advantageous. 
   While there 10x gain with cvtColor and other functions such as 
   GasussianBlur are actually a little slower. 
   
   I will have closer look at this tonight. 
   
   Max 
   
   On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote: 
   If you're interested here are some more links: 
   https://software.intel.com/en-us/articles/opencl-and-opengl-  
 interoperability-tutorial 
   Valentine's and mine prototype for OpenGL OpenCL interoperability in 
   Julia: 
   https://github.com/vchuravy/qjulia_gpu 
   
   Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster: 
   Hi all, 
   
   A few months ago I set out to learn Julia in an attempt to find an 
   alternative to MATLAB for developing computer vision applications. 
   Given the interest (1 
   
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use 
   rs/PjyfzxPt8Gk/SuwKtjTd9j4J ,2 
   
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use 
   rs/81V5zSNJY3Q/DRUT0dR2qhQJ ,3 
   
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u 
   sers/iUPqo8drYek/pUeHECk91AQJ ,4 
   
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u 
   sers/6QunG66MfNs/C63pDfI-EMAJ ) and wide application of OpenCV for 
 fast 
   real-time computer vision applications, I set myself to put together 
 a 
   simple interface for OpenCV in Julia.  Coding in Julia and 
 developing 
   the interface between C++ and Julia has been a lot of fun! 
   
   OpenCV.jl aims to provide an interface for OpenCV 
 http://opencv.org/ 
   computer vision applications (C++) directly in Julia 
   http://julia.readthedocs.org/en/latest/manual/. It relies 
 primarily 
   on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the 
 Julia 
   C++ foreign function interface (FFI).  You can find all the 
 information 
   on 
   my package at https://github.com/maxruby/OpenCV.jl. 
   
   You can download and run the package as follows: 
   
   Pkg.clone(git://github.com/maxruby/OpenCV.jl.git)using OpenCV 
   
   
   For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so 
 it is 
   extremely easy to run.  For Windows and Linux, you will need to 
 first 
   compile the OpenCV libraries, but this is well documented and links 
 to 
   the 
   instructions for doing so are included in the README.md file. 
   
   The package currently supports most of OpenCV´s C++ API; however, at 
   this point I have created custom wrappings for core, imgproc, 
 videoio 
   and highgui modules so that these are easy to use for anyone. 
   
   The package also demonstrates/contains 
   
  - preliminary interface with the Qt GUI framework (see imread() 
 and 
  imwrite() functions) 
  - thin-wrappers for C++ objects such as std::vectors, 
 std::strings 
  - conversion from Julia arrays to C++ std::vector 
  - conversion of Julia images (Images.jl) to Mat (OpenCV) - though 
  this has much room for improvement (i.e., color handling) 
   
   Please let me know if there are any features you would like to see 
 added 
   and I will try my best to integrate them. In the meantime, I will 
   continue 
   to integrate more advanced algorithms for 

Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
Under the premise that everything can be done on the GPU up to a certain
point, transfer times shouldn't be included in the benchmark, as it
wouldn't be necessary to have the data available to the CPU.
For all simple image transformations, OpenCL should be strictly faster
(obviously not if you have an Intel Xeon E5-2699 and a weak graphics card).
Could it be that OpenCL automatically accelerates even normal Mats with the
GPU?
This might be the reason why the initial cvtColor is so slow on the CPU, as
it includes transferring the data to the CPU.
Another reason could be that OpenCV does transfers and copies even with UMat.
I'm not sure what's going on here, but I might try to build OpenCV with
OpenCL tomorrow and see for myself.
Have you tried to explicitly turn OpenCL on and off with
OPENCV_OPENCL_RUNTIME?


2014-12-08 18:22 GMT+01:00 Tim Holy tim.h...@gmail.com:

 I wonder if the bigger problem might be that your numbers for the grayscale
 conversion (which were very promising) might be misleading. Are you sure
 the
 calculation is done (and the results are available to the CPU) by the
 time
 it finishes? If we assume a best-case scenario of 6GB/s of data transfer
 to the
 GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
 That's
 many times longer than what you reported for that calculation. Or did you
 not
 include transfer time in your results?

 --Tim

 On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
  That's interesting, gaussian blur should definitely be faster on the gpu!
  Maybe this thread helps?
  http://answers.opencv.org/question/34127/opencl-in-opencv-300/
  It seems like things are a little complicated, as it isn't really clear
 if
  the data is currently in VRAM or RAM...
 
  2014-12-08 17:39 GMT+01:00 Max Suster mxsst...@gmail.com:
   Thanks for the feedback.  I realize that the copying needs to be
 skipped
   if possible . . .
   I have been playing a bit with the OpenCL UMat and it will need indeed
   some tweeking because UMat is not always advantageous.
   While there 10x gain with cvtColor and other functions such as
   GasussianBlur are actually a little slower.
  
   I will have closer look at this tonight.
  
   Max
  
   On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
   If you're interested here are some more links:
   https://software.intel.com/en-us/articles/opencl-and-opengl- 
 interoperability-tutorial
   Valentine's and mine prototype for OpenGL OpenCL interoperability in
   Julia:
   https://github.com/vchuravy/qjulia_gpu
  
   Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster:
   Hi all,
  
   A few months ago I set out to learn Julia in an attempt to find an
   alternative to MATLAB for developing computer vision applications.
   Given the interest (1
   
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
   rs/PjyfzxPt8Gk/SuwKtjTd9j4J ,2
   
 https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
   rs/81V5zSNJY3Q/DRUT0dR2qhQJ ,3
   
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
   sers/iUPqo8drYek/pUeHECk91AQJ ,4
   
 https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
   sers/6QunG66MfNs/C63pDfI-EMAJ ) and wide application of OpenCV for
 fast
   real-time computer vision applications, I set myself to put together
 a
   simple interface for OpenCV in Julia.  Coding in Julia and developing
   the interface between C++ and Julia has been a lot of fun!
  
   OpenCV.jl aims to provide an interface for OpenCV 
 http://opencv.org/
   computer vision applications (C++) directly in Julia
   http://julia.readthedocs.org/en/latest/manual/. It relies
 primarily
   on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the Julia
   C++ foreign function interface (FFI).  You can find all the
 information
   on
   my package at https://github.com/maxruby/OpenCV.jl.
  
   You can download and run the package as follows:
  
   Pkg.clone(git://github.com/maxruby/OpenCV.jl.git)using OpenCV
  
  
   For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so
 it is
   extremely easy to run.  For Windows and Linux, you will need to first
   compile the OpenCV libraries, but this is well documented and links
 to
   the
   instructions for doing so are included in the README.md file.
  
   The package currently supports most of OpenCV´s C++ API; however, at
   this point I have created custom wrappings for core, imgproc, videoio
   and highgui modules so that these are easy to use for anyone.
  
   The package also demonstrates/contains
  
  - preliminary interface with the Qt GUI framework (see imread()
 and
  imwrite() functions)
  - thin-wrappers for C++ objects such as std::vectors, std::strings
  - conversion from Julia arrays to C++ std::vector
  - conversion of Julia images (Images.jl) to Mat (OpenCV) - though
  this has much room for improvement (i.e., color handling)
  
   Please let me know if there 

Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Tim Holy
Now that you posted the code (along with timing markers), it's much clearer. 
Your timings are just for the computation, and don't include transfer time.

My "misleading" statement was not intended as an accusation :-), merely trying 
to help explain the gaussian blur result: if you included transfer time in one 
but not the other, that might explain it. But from your posted code, it's 
clear that wasn't the problem.

--Tim

On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
 Its an exact comparison side-by-side of the same code, and actually already
 tested by others in the OpenCV forum.
 The Mat/UMat image is available for display with imshow -- the step imshow(
 edges, gray); in both cases -- which is how the test was set up. The
 main point is the time it takes to complete the entire process and the fact
 that cvtColor with OpenCL can generate an image for viewing much more
 quickly.
 
 I never intended to be misleading, and I am sorry that you interpret it
 this way.
 
 Max
 
 On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
  I wonder if the bigger problem might be that your numbers for the
  grayscale
  conversion (which were very promising) might be misleading. Are you sure
  the
  calculation is done (and the results are available to the CPU) by the
  time
  it finishes? If we assume a best-case scenario of 6GB/s of data transfer
  to the
  GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
  That's
  many times longer than what you reported for that calculation. Or did you
  not
  include transfer time in your results?
  
  --Tim
  
  On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
   That's interesting, gaussian blur should definitely be faster on the
  
  gpu!
  
   Maybe this thread helps?
   http://answers.opencv.org/question/34127/opencl-in-opencv-300/
   It seems like things are a little complicated, as it isn't really clear
  
  if
  
   the data is currently in VRAM or RAM...
   
   2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com javascript::
Thanks for the feedback.  I realize that the copying needs to be
  
  skipped
  
if possible . . .
I have been playing a bit with the OpenCL UMat and it will need indeed
some tweeking because UMat is not always advantageous.
While there 10x gain with cvtColor and other functions such as
GasussianBlur are actually a little slower.

I will have closer look at this tonight.

Max

On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
If you're interested here are some more links:
https://software.intel.com/en-us/articles/opencl-and-opengl- 
  
  interoperability-tutorial
  
Valentine's and mine prototype for OpenGL OpenCL interoperability in
Julia:
https://github.com/vchuravy/qjulia_gpu

Am Samstag, 6. Dezember 2014 11:44:45 UTC+1 schrieb Max Suster:
Hi all,

A few months ago I set out to learn Julia in an attempt to find an
alternative to MATLAB for developing computer vision applications.
Given the interest (1

  
  https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
  
rs/PjyfzxPt8Gk/SuwKtjTd9j4J ,2

  
  https://groups.google.com/forum/#!searchin/julia-users/OpenCV/julia-use
  
rs/81V5zSNJY3Q/DRUT0dR2qhQJ ,3

  
  https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
  
sers/iUPqo8drYek/pUeHECk91AQJ ,4

  
  https://groups.google.com/forum/%23!searchin/julia-users/OpenCV/julia-u
  
sers/6QunG66MfNs/C63pDfI-EMAJ ) and wide application of OpenCV for
  
  fast
  
real-time computer vision applications, I set myself to put together
  
  a
  
simple interface for OpenCV in Julia.  Coding in Julia and
  
  developing
  
the interface between C++ and Julia has been a lot of fun!

OpenCV.jl aims to provide an interface for OpenCV 
  
  http://opencv.org/
  
computer vision applications (C++) directly in Julia
http://julia.readthedocs.org/en/latest/manual/. It relies
  
  primarily
  
on Keno´s amazing Cxx.jl https://github.com/Keno/Cxx.jl, the
  
  Julia
  
C++ foreign function interface (FFI).  You can find all the
  
  information
  
on
my package at https://github.com/maxruby/OpenCV.jl.

You can download and run the package as follows:

Pkg.clone(git://github.com/maxruby/OpenCV.jl.git)using OpenCV


For MacOSX, OpenCV.jl comes with pre-compiled shared libraries, so
  
  it is
  
extremely easy to run.  For Windows and Linux, you will need to
  
  first
  
compile the OpenCV libraries, but this is well documented and links
  
  to
  
the
instructions for doing so are included in the README.md file.

The package currently supports most of OpenCV´s C++ API; however, at
this point I have created custom wrappings for core, imgproc,
  
  videoio
  
and highgui modules so that these are easy to use for anyone.

The 

Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster
Thanks Tim (I did not mean to be defensive). 
I totally agree that it is best to get to the bottom of the issue so that 
we can get it to work!

The point I was trying to make more generally is that there does seem to be 
quite a bit of variability in the gains from OpenCL acceleration, at least 
with OpenCV. It would be nice to figure out which bits can truly be 
accelerated most efficiently and which may not be worth spending a lot more 
energy on...

Max

On Monday, December 8, 2014 6:58:42 PM UTC+1, Tim Holy wrote:

 Now that you posted the code (along with timing markers), it's much 
 clearer. 
 Your timings are just for the computation, and don't include transfer 
 time. 

 My misleading statement was not intended as an accusation :-), merely 
 trying 
 to help explain the gaussian blur result: if you included transfer time in 
 one 
 but not the other, that might explain it. But from your posted code, it's 
 clear that wasn't the problem. 

 --Tim 

 On Monday, December 08, 2014 09:54:03 AM Max Suster wrote: 
  Its an exact comparison side-by-side of the same code, and actually 
 already 
  tested by others in the OpenCV forum. 
  The Mat/UMat image is available for display with imshow -- the step 
 imshow( 
  edges, gray); in both cases -- which is how the test was set up. The 
  main point is the time it takes to complete the entire process and the 
 fact 
  that cvtColor with OpenCL can generate an image for viewing much more 
  quickly. 
  
  I never intended to be misleading, and I am sorry that you interpret it 
  this way. 
  
  Max 
  
  On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote: 
   I wonder if the bigger problem might be that your numbers for the 
   grayscale 
   conversion (which were very promising) might be misleading. Are you 
 sure 
   the 
   calculation is done (and the results are available to the CPU) by 
 the 
   time 
   it finishes? If we assume a best-case scenario of 6GB/s of data 
 transfer 
   to the 
   GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms. 
   That's 
   many times longer than what you reported for that calculation. Or did 
 you 
   not 
   include transfer time in your results? 
   
   --Tim 
   
   On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote: 
That's interesting, gaussian blur should definitely be faster on the 
   
   gpu! 
   
Maybe this thread helps? 
http://answers.opencv.org/question/34127/opencl-in-opencv-300/ 
It seems like things are a little complicated, as it isn't really 
 clear 
   
   if 
   
the data is currently in VRAM or RAM... 

2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com 
 javascript:: 
 Thanks for the feedback.  I realize that the copying needs to be 
   
   skipped 
   
 if possible . . . 
 I have been playing a bit with the OpenCL UMat and it will need 
 indeed 
 some tweeking because UMat is not always advantageous. 
 While there 10x gain with cvtColor and other functions such as 
 GasussianBlur are actually a little slower. 
 
 I will have closer look at this tonight. 
 
 Max 
 
 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote: 
 If you're interested here are some more links: 
 https://software.intel.com/en-us/articles/opencl-and-opengl-  
   
   interoperability-tutorial 
   
 Valentine's and mine prototype for OpenGL OpenCL interoperability 
 in 
 Julia: 
 https://github.com/vchuravy/qjulia_gpu 
 

Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
Strictly faster is probably a bit of an exaggeration, but simple image
manipulations fit the GPU very well. This should hold for all algorithms that
can be massively parallelized and don't use scatter writes or reads.
So if you have a decent video card, the CPU should have a hard time matching
the performance.
I'm not sure about the transfers, as OpenCV might actually do transfers
even in the UMat case... It's not that obvious how they manage their memory.
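
As a rough sanity check on the transfer-time estimate quoted below, here is the
arithmetic in Julia, using the assumed 6 GB/s figure and image sizes from that
estimate (not a measurement):

bytes_up    = 3e6                      # ~3 MB image sent to the GPU
bytes_down  = 1e6                      # ~1 MB result copied back
bandwidth   = 6e9                      # assumed 6 GB/s transfer rate
transfer_ms = (bytes_up + bytes_down) / bandwidth * 1000
println(transfer_ms)                   # about 0.67 ms just for the copies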

2014-12-08 18:58 GMT+01:00 Tim Holy tim.h...@gmail.com:

 Now that you posted the code (along with timing markers), it's much
 clearer.
 Your timings are just for the computation, and don't include transfer time.

 My misleading statement was not intended as an accusation :-), merely
 trying
 to help explain the gaussian blur result: if you included transfer time in
 one
 but not the other, that might explain it. But from your posted code, it's
 clear that wasn't the problem.

 --Tim

 On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
  Its an exact comparison side-by-side of the same code, and actually
 already
  tested by others in the OpenCV forum.
  The Mat/UMat image is available for display with imshow -- the step
 imshow(
  edges, gray); in both cases -- which is how the test was set up. The
  main point is the time it takes to complete the entire process and the
 fact
  that cvtColor with OpenCL can generate an image for viewing much more
  quickly.
 
  I never intended to be misleading, and I am sorry that you interpret it
  this way.
 
  Max
 
  On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
   I wonder if the bigger problem might be that your numbers for the
   grayscale
   conversion (which were very promising) might be misleading. Are you
 sure
   the
   calculation is done (and the results are available to the CPU) by the
   time
   it finishes? If we assume a best-case scenario of 6GB/s of data
 transfer
   to the
   GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
   That's
   many times longer than what you reported for that calculation. Or did
 you
   not
   include transfer time in your results?
  
   --Tim
  
   On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
That's interesting, gaussian blur should definitely be faster on the
  
   gpu!
  
Maybe this thread helps?
http://answers.opencv.org/question/34127/opencl-in-opencv-300/
It seems like things are a little complicated, as it isn't really
 clear
  
   if
  
the data is currently in VRAM or RAM...
   
2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com
 javascript::
 Thanks for the feedback.  I realize that the copying needs to be
  
   skipped
  
 if possible . . .
 I have been playing a bit with the OpenCL UMat and it will need
 indeed
 some tweeking because UMat is not always advantageous.
 While there 10x gain with cvtColor and other functions such as
 GasussianBlur are actually a little slower.

 I will have closer look at this tonight.

 Max

 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
 If you're interested here are some more links:
 https://software.intel.com/en-us/articles/opencl-and-opengl- 
  
   interoperability-tutorial
  
 Valentine's and mine prototype for OpenGL OpenCL interoperability
 in
 Julia:
 https://github.com/vchuravy/qjulia_gpu


Re: [julia-users] Re: Difference between {T<:AbstractType} and just x::AbstractType

2014-12-08 Thread Jeff Bezanson
f{T<:Real}(x::T) and f(x::Real) have the exact same applicability. One
definition should overwrite the other, so currently this is a bug that
should be fixed as part of #8974. The way to read the first type is

Union over all T<:Real . T

Of course the biggest difference is that there will be a `T` inside
the first function set to the type of the argument. If you don't need
that, the second version is preferred since it's simpler.

In *some* cases the need to bind `T` inside the function will force
the compiler to specialize a function for more different types than it
otherwise would. This typically happens with tuple types, since there
are too many of them (O(n^m)).
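
For instance (an illustration of the binding, not from the original message):

f(x::Real) = "x is a $(typeof(x)); no static parameter is bound"
g{T<:Real}(x::T) = "T is bound to $T"

f(1//2)   # "x is a Rational{Int64}; no static parameter is bound"
g(1//2)   # "T is bound to Rational{Int64}"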


On Mon, Dec 8, 2014 at 3:36 AM,  ele...@gmail.com wrote:


 On Monday, December 8, 2014 6:16:25 PM UTC+10, David van Leeuwen wrote:

 Good question, I would be interested in the answer myself.  If the tips
 indicate the explicit type dependency is preferred, then I guess that for
 the first form the compiler compiles a new instance of the function
 specifically for the concrete type `T`, while in the second, it presumably
 compiles to deal with a (partly) type-instable `x` within the function body.

  f1{R<:Real}(x::R) = -x

 f1(x::Real) = x

 f1 (generic function with 2 methods)

 So the methods are kept separate, but I think the first form hides access
 to the second form.

 IIUC it doesn't really hide it, but for a call like f1(1.0) the generic
 can be used to generate f1(x::Float64) which is more specific than
 f1(x::Real) so it is used instead.




 On Monday, December 8, 2014 7:12:39 AM UTC+1, Igor Demura wrote:

 What exactly the difference between:
  function foo{T<:AbstractType}(x::T) = ...
 and
 function foo(x::AbstractType) = ... ?
 Is any difference at all? The tips section of the docs says the second
 is preferred. If they are the same, why first syntax?


 See above.


 I can imagine that I can benefit if I have several parameters:
  function bar{T<:AbstractType}(x::Concrete{T}, y::AnotherOf{T}, z::T) =
 ...




Re: [julia-users] scoping and begin

2014-12-08 Thread Jeff Bezanson
This is not a behavior of `begin` blocks but of top-level expressions.
Each top-level expression is wrapped in a soft scope that can have
its own locals but allows globals to leak in:

julia> a = 0
0

julia> begin
           local x = 1
           a = 2
       end
2

julia> a
2

julia> x
ERROR: x not defined
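
By contrast, inside a function body the same assignment creates a new local and
leaves the global untouched:

julia> a = 0
0

julia> function f()
           a = 2      # new local `a`; the global is not modified
           a
       end
f (generic function with 1 method)

julia> f()
2

julia> a
0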

On Mon, Dec 8, 2014 at 12:02 PM, Simon Byrne simonby...@gmail.com wrote:
 So begin blocks can introduce a scope, they just don't by default?

 In your `local t = 1; t` example: what, then, is the scope of t?


 On Monday, 8 December 2014 16:56:24 UTC, Isaiah wrote:

 tmp is declared local to the begin blocks
 if that sounds odd (it did to me at first), try typing `local t = 1; t`)

 On Mon, Dec 8, 2014 at 6:11 AM, Simon Byrne simon...@gmail.com wrote:

 According to the docs, begin blocks do not introduce new scope blocks.
 But this block:


 https://github.com/JuliaLang/julia/blob/1b9041ce2919f2976ec726372b49201c887398d7/base/string.jl#L1601-L1618

 does seem to introduce a new scope (i.e. if I type Base.tmp at the REPL,
 I get an error). What am I missing here?

 Simon





Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
To give it some more underpinning, I searched for a benchmark.
I only found one for cuda so far from 2012: http://www.timzaman.com/?p=2256
Results in a table:
https://docs.google.com/spreadsheet/pub?key=0AkfBSyx6TpqMdDFnclY5UjlyNmVsSnhGV0hscnJQcVEoutput=html
Resulting graph: [chart image omitted in the list archive]

I think the scale is the speedup from using the GPU, so the CPU is always 1.

2014-12-08 19:07 GMT+01:00 Simon Danisch sdani...@gmail.com:

 Strictly faster is probably a little bit exaggerated
 http://www.dict.cc/englisch-deutsch/exaggerated.html, but simple image
 manipulations fit the GPU very well. This should be valid for all
 algorithms which can be massively parallelized
 http://www.dict.cc/englisch-deutsch/parallelized.html  and don't use
 scatter writes or reads.
 So if you have decent video card, the cpu should have a hard time to match
 the performance.
 I'm not sure about the transfers, as OpenCV might actually do transfers
 even in the UMat case... It's not that obvious how they manage their memory.

 2014-12-08 18:58 GMT+01:00 Tim Holy tim.h...@gmail.com:

 Now that you posted the code (along with timing markers), it's much
 clearer.
 Your timings are just for the computation, and don't include transfer
 time.

 My misleading statement was not intended as an accusation :-), merely
 trying
 to help explain the gaussian blur result: if you included transfer time
 in one
 but not the other, that might explain it. But from your posted code, it's
 clear that wasn't the problem.

 --Tim

 On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
  Its an exact comparison side-by-side of the same code, and actually
 already
  tested by others in the OpenCV forum.
  The Mat/UMat image is available for display with imshow -- the step
 imshow(
  edges, gray); in both cases -- which is how the test was set up. The
  main point is the time it takes to complete the entire process and the
 fact
  that cvtColor with OpenCL can generate an image for viewing much more
  quickly.
 
  I never intended to be misleading, and I am sorry that you interpret it
  this way.
 
  Max
 
  On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
   I wonder if the bigger problem might be that your numbers for the
   grayscale
   conversion (which were very promising) might be misleading. Are you
 sure
   the
   calculation is done (and the results are available to the CPU) by
 the
   time
   it finishes? If we assume a best-case scenario of 6GB/s of data
 transfer
   to the
   GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
   That's
   many times longer than what you reported for that calculation. Or did
 you
   not
   include transfer time in your results?
  
   --Tim
  
   On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
That's interesting, gaussian blur should definitely be faster on the
  
   gpu!
  
Maybe this thread helps?
http://answers.opencv.org/question/34127/opencl-in-opencv-300/
It seems like things are a little complicated, as it isn't really
 clear
  
   if
  
the data is currently in VRAM or RAM...
   
2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com
 javascript::
 Thanks for the feedback.  I realize that the copying needs to be
  
   skipped
  
 if possible . . .
 I have been playing a bit with the OpenCL UMat and it will need
 indeed
 some tweeking because UMat is not always advantageous.
 While there 10x gain with cvtColor and other functions such as
 GasussianBlur are actually a little slower.

 I will have closer look at this tonight.

 Max

 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
 If you're interested here are some more links:
 https://software.intel.com/en-us/articles/opencl-and-opengl- 
  
   interoperability-tutorial
  
 Valentine's and mine prototype for OpenGL OpenCL
 interoperability in
 Julia:
 https://github.com/vchuravy/qjulia_gpu


Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread John Myles White
Looking at this again, the problem with doing reshape/transpose is that it's 
very awkward when trying to read data in a stream, since you need to undo the 
reshape and transpose before starting to read from the stream again. I think 
the best solution to getting a row-major matrix of data is to add a wrapper 
around the readall method from this package that handles the final reshape and 
transpose operations when you're not reading in streaming data.
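
For the non-streaming case, the append!/reshape/transpose path under discussion
looks roughly like this (toy values standing in for parsed CSV rows, not
CSVReaders.jl code):

vals  = Float64[]
ncols = 3
for row in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # pretend these came from the parser
    append!(vals, row)
end
nrows = div(length(vals), ncols)
mat   = reshape(vals, ncols, nrows)'            # transpose so CSV rows become matrix rows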

 -- John

On Dec 8, 2014, at 9:25 AM, Tim Holy tim.h...@gmail.com wrote:

 Does the reshape/transpose really take any appreciable time (compared to the 
 I/O)?
 
 --Tim
 
 On Monday, December 08, 2014 09:14:35 AM John Myles White wrote:
 Yes, this is how I've been doing things so far.
 
 -- John
 
 On Dec 8, 2014, at 9:12 AM, Tim Holy tim.h...@gmail.com wrote:
 My suspicion is you should read into a 1d vector (and use `append!`), then
 at the end do a reshape and finally a transpose. I bet that will be many
 times faster than any other alternative, because we have a really fast
 transpose now.
 
 The only disadvantage I see is taking twice as much memory as would be
 minimally needed. (This can be fixed once we have row-major arrays.)
 
 --Tim
 
 On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
 I believe/hope the proposed solution will work for most cases, although
 there's still a bunch of performance work left to be done. I think the
 decoupling problem isn't as hard as it might seem since there are very
 clearly distinct stages in parsing a CSV file. But we'll find out if the
 indirection I've introduced causes performance problems when things can't
 be inlined.
 
 While writing this package, I found the two most challenging problems to
 be:
 
 (A) The disconnect between CSV files providing one row at a time and
 Julia's usage of column major arrays, which encourage reading one column
 at a time. (B) The inability to easily resize! a matrix.
 
 -- John
 
 On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org 
 wrote:
 Doh. Obfuscate the code quick, before anyone uses it! This is very nice
 and something I've always felt like we need for data formats like CSV –
 a
 way of decoupling the parsing of the format from the populating of a
 data
 structure with that data. It's a tough problem.
 
 On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com
 wrote:
 Exciting, John! Although your documentation may be very sparse, the
 code
 is nicely documented.
 
 On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
 johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
 slowly working on a new library that defines an abstract toolkit for
 writing CSV parsers. The goal is to provide an abstract interface that
 users can implement in order to provide functions for reading data into
 their preferred data structures from CSV files. In principle, this
 approach should allow us to unify the code behind Base's readcsv and
 DataFrames's readtable functions.
 
 The library is still very much a work-in-progress, but I wanted to let
 others see what I've done so that I can start getting feedback on the
 design.
 
 Because the library makes heavy use of Nullables, you can only try out
 the
 library on Julia 0.4. If you're interested, it's available at
 https://github.com/johnmyleswhite/CSVReaders.jl
 
 For now, I've intentionally given very sparse documentation to
 discourage
 people from seriously using the library before it's officially released.
 But there are some examples in the README that should make clear how the
 library is intended to be used.
 -- John
 



Re: [julia-users] scoping and begin

2014-12-08 Thread Simon Byrne
Ah, that makes more sense now. Thanks.

s

On Monday, 8 December 2014 18:16:48 UTC, Jeff Bezanson wrote:

 This is not a behavior of `begin` blocks but of top-level expressions. 
 Each top-level expression is wrapped in a soft scope that can have 
 its own locals but allows globals to leak in: 

 julia a = 0 
 0 

 julia begin 
  local x = 1 
  a = 2 
end 
 2 

 julia a 
 2 

 julia x 
 ERROR: x not defined 

 On Mon, Dec 8, 2014 at 12:02 PM, Simon Byrne simon...@gmail.com 
 javascript: wrote: 
  So begin blocks can introduce a scope, they just don't by default? 
  
  In your `local t = 1; t` example: what, then, is the scope of t? 
  
  
  On Monday, 8 December 2014 16:56:24 UTC, Isaiah wrote: 
  
  tmp is declared local to the begin blocks 
  if that sounds odd (it did to me at first), try typing `local t = 1; 
 t`) 
  
  On Mon, Dec 8, 2014 at 6:11 AM, Simon Byrne simon...@gmail.com 
 wrote: 
  
  According to the docs, begin blocks do not introduce new scope blocks. 
  But this block: 
  
  
  
 https://github.com/JuliaLang/julia/blob/1b9041ce2919f2976ec726372b49201c887398d7/base/string.jl#L1601-L1618
  
  
  does seem to introduce a new scope (i.e. if I type Base.tmp at the 
 REPL, 
  I get an error). What am I missing here? 
  
  Simon 
  
  
  



Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Tim Holy
Certainly it's possible that the quality of the kernels varies. The blur case 
is presumably harder to write a good kernel for and more subject to the chosen 
schedule, because it has to access memory non-locally. The grayscale 
conversion is pointwise and so is fairly trivial. But I would have expected 
the blur kernel to be highly optimized; I agree it's a puzzling result.

But that table that Simon linked suggests it should be faster, so I wonder if 
there's something fishy going on.

--Tim

On Monday, December 08, 2014 10:04:32 AM Max Suster wrote:
 Thanks Tim (I did not mean to be defensive).
 I totally agree that it is best to get to the bottom of the issue so that
 we can get it work!
 
 The point I was trying to make more generally is that there does seem to be
 quite a bit of variability in the gains from
 OpenCL-acceleration at least with OpenCV. It would be nice to figure out
 how and which bits can truly be accelerated most efficiently
 and which may not be worth spending a lot more energy on. . .
 
 Max
 
 On Monday, December 8, 2014 6:58:42 PM UTC+1, Tim Holy wrote:
  Now that you posted the code (along with timing markers), it's much
  clearer.
  Your timings are just for the computation, and don't include transfer
  time.
  
  My misleading statement was not intended as an accusation :-), merely
  trying
  to help explain the gaussian blur result: if you included transfer time in
  one
  but not the other, that might explain it. But from your posted code, it's
  clear that wasn't the problem.
  
  --Tim
  
  On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
   Its an exact comparison side-by-side of the same code, and actually
  
  already
  
   tested by others in the OpenCV forum.
   The Mat/UMat image is available for display with imshow -- the step
  
  imshow(
  
   edges, gray); in both cases -- which is how the test was set up. The
   main point is the time it takes to complete the entire process and the
  
  fact
  
   that cvtColor with OpenCL can generate an image for viewing much more
   quickly.
   
   I never intended to be misleading, and I am sorry that you interpret it
   this way.
   
   Max
   
   On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
I wonder if the bigger problem might be that your numbers for the
grayscale
conversion (which were very promising) might be misleading. Are you
  
  sure
  
the
calculation is done (and the results are available to the CPU) by
  
  the
  
time
it finishes? If we assume a best-case scenario of 6GB/s of data
  
  transfer
  
to the
GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
That's
many times longer than what you reported for that calculation. Or did
  
  you
  
not
include transfer time in your results?

--Tim

On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
 That's interesting, gaussian blur should definitely be faster on the

gpu!

 Maybe this thread helps?
 http://answers.opencv.org/question/34127/opencl-in-opencv-300/
 It seems like things are a little complicated, as it isn't really
  
  clear
  
if

 the data is currently in VRAM or RAM...
 
 2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com
  
  javascript::
  Thanks for the feedback.  I realize that the copying needs to be

skipped

  if possible . . .
  I have been playing a bit with the OpenCL UMat and it will need
  
  indeed
  
  some tweeking because UMat is not always advantageous.
  While there 10x gain with cvtColor and other functions such as
  GasussianBlur are actually a little slower.
  
  I will have closer look at this tonight.
  
  Max
  
  On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch wrote:
  If you're interested here are some more links:
  https://software.intel.com/en-us/articles/opencl-and-opengl- 

interoperability-tutorial

  Valentine's and mine prototype for OpenGL OpenCL interoperability
  
  in
  
  Julia:
  https://github.com/vchuravy/qjulia_gpu
  

Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster
Thanks!  That is very useful.   

Why is cvtColor so slow in the Intel tests?
I notice they used a 4000x4000, CV_8UC4  (Uint8, 4 channels) for the 
GaussianBlur test!


On Monday, December 8, 2014 7:16:42 PM UTC+1, Simon Danisch wrote:

 To give it some more underpinning, I searched for a benchmark.
 I only found one for cuda so far from 2012: 
 http://www.timzaman.com/?p=2256
 Results in a table:

 https://docs.google.com/spreadsheet/pub?key=0AkfBSyx6TpqMdDFnclY5UjlyNmVsSnhGV0hscnJQcVEoutput=html
 Resulting graph:


 I think the scale is speedup from using the gpu, so the cpu is always one.

 2014-12-08 19:07 GMT+01:00 Simon Danisch sdan...@gmail.com javascript:
 :

 Strictly faster is probably a little bit exaggerated 
 http://www.dict.cc/englisch-deutsch/exaggerated.html, but simple image 
 manipulations fit the GPU very well. This should be valid for all 
 algorithms which can be massively parallelized 
 http://www.dict.cc/englisch-deutsch/parallelized.html  and don't use 
 scatter writes or reads.
 So if you have decent video card, the cpu should have a hard time to 
 match the performance.
 I'm not sure about the transfers, as OpenCV might actually do transfers 
 even in the UMat case... It's not that obvious how they manage their memory.

 2014-12-08 18:58 GMT+01:00 Tim Holy tim@gmail.com javascript::

 Now that you posted the code (along with timing markers), it's much 
 clearer.
 Your timings are just for the computation, and don't include transfer 
 time.

 My misleading statement was not intended as an accusation :-), merely 
 trying
 to help explain the gaussian blur result: if you included transfer time 
 in one
 but not the other, that might explain it. But from your posted code, it's
 clear that wasn't the problem.

 --Tim

 On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
  Its an exact comparison side-by-side of the same code, and actually 
 already
  tested by others in the OpenCV forum.
  The Mat/UMat image is available for display with imshow -- the step 
 imshow(
  edges, gray); in both cases -- which is how the test was set up. The
  main point is the time it takes to complete the entire process and the 
 fact
  that cvtColor with OpenCL can generate an image for viewing much more
  quickly.
 
  I never intended to be misleading, and I am sorry that you interpret it
  this way.
 
  Max
 
  On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
   I wonder if the bigger problem might be that your numbers for the
   grayscale
   conversion (which were very promising) might be misleading. Are you 
 sure
   the
   calculation is done (and the results are available to the CPU) by 
 the
   time
   it finishes? If we assume a best-case scenario of 6GB/s of data 
 transfer
   to the
   GPU, then transferring 3MB to the GPU and 1MB back takes about 0.7ms.
   That's
   many times longer than what you reported for that calculation. Or 
 did you
   not
   include transfer time in your results?
  
   --Tim
  
   On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
That's interesting, gaussian blur should definitely be faster on 
 the
  
   gpu!
  
Maybe this thread helps?
http://answers.opencv.org/question/34127/opencl-in-opencv-300/
It seems like things are a little complicated, as it isn't really 
 clear
  
   if
  
the data is currently in VRAM or RAM...
   
2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com 
 javascript::
 Thanks for the feedback.  I realize that the copying needs to be
  
   skipped
  
 if possible . . .
 I have been playing a bit with the OpenCL UMat and it will need 
 indeed
 some tweeking because UMat is not always advantageous.
 While there 10x gain with cvtColor and other functions such as
 GasussianBlur are actually a little slower.

 I will have closer look at this tonight.

 Max

 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch 
 wrote:
 If you're interested here are some more links:
 https://software.intel.com/en-us/articles/opencl-and-opengl- 
 
  
   interoperability-tutorial
  
 Valentine's and mine prototype for OpenGL OpenCL 
 interoperability in
 Julia:
 https://github.com/vchuravy/qjulia_gpu


Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Tim Holy
Right, indeed I meant to suggest making the conversion to matrix form the very 
last step of the process. But obviously you didn't need that suggestion :-).

--Tim

On Monday, December 08, 2014 10:20:00 AM John Myles White wrote:
 Looking at this again, the problem with doing reshape/transpose is that it's
 very awkward when trying to read data in a stream, since you need to undo
 the reshape and transpose before starting to read from the stream again. I
 think the best solution to getting a row-major matrix of data is to add a
 wrapper around the readall method from this package that handles the final
 reshape and transpose operations when you're not reading in streaming data.
 
  -- John
 
 On Dec 8, 2014, at 9:25 AM, Tim Holy tim.h...@gmail.com wrote:
  Does the reshape/transpose really take any appreciable time (compared to
  the I/O)?
  
  --Tim
  
  On Monday, December 08, 2014 09:14:35 AM John Myles White wrote:
  Yes, this is how I've been doing things so far.
  
  -- John
  
  On Dec 8, 2014, at 9:12 AM, Tim Holy tim.h...@gmail.com wrote:
  My suspicion is you should read into a 1d vector (and use `append!`),
  then
  at the end do a reshape and finally a transpose. I bet that will be many
  times faster than any other alternative, because we have a really fast
  transpose now.
  
  The only disadvantage I see is taking twice as much memory as would be
  minimally needed. (This can be fixed once we have row-major arrays.)
  
  --Tim
  
  On Monday, December 08, 2014 08:38:06 AM John Myles White wrote:
  I believe/hope the proposed solution will work for most cases, although
  there's still a bunch of performance work left to be done. I think the
  decoupling problem isn't as hard as it might seem since there are very
  clearly distinct stages in parsing a CSV file. But we'll find out if
  the
  indirection I've introduced causes performance problems when things
  can't
  be inlined.
  
  While writing this package, I found the two most challenging problems
  to
  be:
  
  (A) The disconnect between CSV files providing one row at a time and
  Julia's usage of column major arrays, which encourage reading one
  column
  at a time. (B) The inability to easily resize! a matrix.
  
  -- John
  
  On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org
  
  wrote:
  Doh. Obfuscate the code quick, before anyone uses it! This is very
  nice
  and something I've always felt like we need for data formats like CSV
  –
  a
  way of decoupling the parsing of the format from the populating of a
  data
  structure with that data. It's a tough problem.
  
  On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort.rli...@gmail.com
  wrote:
  Exciting, John! Although your documentation may be very sparse, the
  code
  is nicely documented.
  
  On Mon, Dec 8, 2014 at 12:35 AM, John Myles White
  johnmyleswh...@gmail.com wrote: Over the last month or so, I've been
  slowly working on a new library that defines an abstract toolkit for
  writing CSV parsers. The goal is to provide an abstract interface that
  users can implement in order to provide functions for reading data
  into
  their preferred data structures from CSV files. In principle, this
  approach should allow us to unify the code behind Base's readcsv and
  DataFrames's readtable functions.
  
  The library is still very much a work-in-progress, but I wanted to let
  others see what I've done so that I can start getting feedback on the
  design.
  
  Because the library makes heavy use of Nullables, you can only try out
  the
  library on Julia 0.4. If you're interested, it's available at
  https://github.com/johnmyleswhite/CSVReaders.jl
  
  For now, I've intentionally given very sparse documentation to
  discourage
  people from seriously using the library before it's officially
  released.
  But there are some examples in the README that should make clear how
  the
  library is intended to be used.
  -- John



Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Simon Danisch
That's a good point! Are you using CV_8UC3?
Video cards like everything that is 32-bit aligned, so if you use RGB (3*8 = 24
bits), then depending on how OpenCV optimizes things, it could mean quite a
slowdown in the UMat case.

2014-12-08 19:29 GMT+01:00 Max Suster mxsst...@gmail.com:

 Thanks!  That is very useful.

 Why is cvColor so slow in the Intel tests?
 I notice they used a 4000x4000, CV_8UC4  (Uint8, 4 channels) for the
 GaussianBlur test!


 On Monday, December 8, 2014 7:16:42 PM UTC+1, Simon Danisch wrote:

 To give it some more underpinning, I searched for a benchmark.
 I only found one for cuda so far from 2012: http://www.timzaman.com/
 ?p=2256
 Results in a table:
 https://docs.google.com/spreadsheet/pub?key=
 0AkfBSyx6TpqMdDFnclY5UjlyNmVsSnhGV0hscnJQcVEoutput=html
 Resulting graph:


 I think the scale is speedup from using the gpu, so the cpu is always one.

 2014-12-08 19:07 GMT+01:00 Simon Danisch sdan...@gmail.com:

 Strictly faster is probably a little bit exaggerated
 http://www.dict.cc/englisch-deutsch/exaggerated.html, but simple
 image manipulations fit the GPU very well. This should be valid for all
 algorithms which can be massively parallelized
 http://www.dict.cc/englisch-deutsch/parallelized.html  and don't use
 scatter writes or reads.
 So if you have decent video card, the cpu should have a hard time to
 match the performance.
 I'm not sure about the transfers, as OpenCV might actually do transfers
 even in the UMat case... It's not that obvious how they manage their memory.

 2014-12-08 18:58 GMT+01:00 Tim Holy tim@gmail.com:

 Now that you posted the code (along with timing markers), it's much
 clearer.
 Your timings are just for the computation, and don't include transfer
 time.

 My misleading statement was not intended as an accusation :-), merely
 trying
 to help explain the gaussian blur result: if you included transfer time
 in one
 but not the other, that might explain it. But from your posted code,
 it's
 clear that wasn't the problem.

 --Tim

 On Monday, December 08, 2014 09:54:03 AM Max Suster wrote:
  Its an exact comparison side-by-side of the same code, and actually
 already
  tested by others in the OpenCV forum.
  The Mat/UMat image is available for display with imshow -- the step
 imshow(
  edges, gray); in both cases -- which is how the test was set up.
 The
  main point is the time it takes to complete the entire process and
 the fact
  that cvtColor with OpenCL can generate an image for viewing much more
  quickly.
 
  I never intended to be misleading, and I am sorry that you interpret
 it
  this way.
 
  Max
 
  On Monday, December 8, 2014 6:22:39 PM UTC+1, Tim Holy wrote:
   I wonder if the bigger problem might be that your numbers for the
   grayscale
   conversion (which were very promising) might be misleading. Are you
 sure
   the
   calculation is done (and the results are available to the CPU) by
 the
   time
   it finishes? If we assume a best-case scenario of 6GB/s of data
 transfer
   to the
   GPU, then transferring 3MB to the GPU and 1MB back takes about
 0.7ms.
   That's
   many times longer than what you reported for that calculation. Or
 did you
   not
   include transfer time in your results?
  
   --Tim
  
   On Monday, December 08, 2014 05:50:32 PM Simon Danisch wrote:
That's interesting, gaussian blur should definitely be faster on
 the
  
   gpu!
  
Maybe this thread helps?
http://answers.opencv.org/question/34127/opencl-in-opencv-300/
It seems like things are a little complicated, as it isn't really
 clear
  
   if
  
the data is currently in VRAM or RAM...
   
2014-12-08 17:39 GMT+01:00 Max Suster mxss...@gmail.com
 javascript::
 Thanks for the feedback.  I realize that the copying needs to be
  
   skipped
  
 if possible . . .
 I have been playing a bit with the OpenCL UMat and it will need
 indeed
 some tweeking because UMat is not always advantageous.
 While there 10x gain with cvtColor and other functions such as
 GasussianBlur are actually a little slower.

 I will have closer look at this tonight.

 Max

 On Monday, December 8, 2014 4:15:28 PM UTC+1, Simon Danisch
 wrote:
 If you're interested here are some more links:
 https://software.intel.com/en-us/articles/opencl-and-opengl-
 
  
   interoperability-tutorial
  
 Valentine's and mine prototype for OpenGL OpenCL
 interoperability in
 Julia:
 https://github.com/vchuravy/qjulia_gpu


Re: [julia-users] Re: OpenCV.jl: A package for computer vision in Julia

2014-12-08 Thread Max Suster
Yes, I am using CV_8UC3 for my tests (and dims of 512x512 or 1000x1000).

I think the most likely explanation for my results is that, while OpenCV comes
with OpenCL enabled in the makefile

-D WITH_OPENCL=ON

the flags for including the OpenCL AMD FFT and AMD BLAS libraries are OFF by
default:

-D WITH_OPENCLAMDFFT=OFF 
-D WITH_OPENCLAMDBLAS=OFF 

I suppose the consequence is that the gains with OpenCL are inconsistent at
best on OSX.
Unfortunately, these libraries are not available for OSX right off the shelf,
and it seems that I need to build them first. 

http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-math-libraries/

They are readily available for Windows and Linux.   I found a post in which
someone managed to get them working with a hack. 
I need to investigate this further...





Re: [julia-users] [WIP] CSVReaders.jl

2014-12-08 Thread Iain Dunning
Tried it out (built Julia 0.4 just to do it!), made a CSV-to-JSON type 
thing:

https://github.com/johnmyleswhite/CSVReaders.jl/issues/1

Quite excited about this - I find myself writing code that basically 
mangles a row into a type pretty often.
In fact, 90% of my needs would be satisfied by a variant of readall that 
takes a type, reads a row, and calls a function like 

function readrow{T}(::Type{T}, values::Vector{Any})
  # ... convert the raw row values into the fields of T ...
  return T(...)
end

and returns a Vector{T}.
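
Purely as an illustration (the names here are made up and this is not the
CSVReaders.jl API), such a hook might be used like this to build a Vector{T}:

immutable Trade
    ticker::ASCIIString
    price::Float64
    volume::Int
end

readrow(::Type{Trade}, values::Vector{Any}) = Trade(values[1], values[2], values[3])

rows   = Any[Any["AAPL", 101.5, 300], Any["MSFT", 48.2, 150]]   # stand-ins for parsed rows
trades = Trade[readrow(Trade, r) for r in rows]                 # Vector{Trade}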

Not sure how that fits in with the design of this.

Cheers,
Iain


On Monday, December 8, 2014 1:29:46 PM UTC-5, Tim Holy wrote:

 Right, indeed I meant to suggest making the conversion to matrix form the 
 very 
 last step of the process. But obviously you didn't need that suggestion 
 :-). 

 --Tim 

 On Monday, December 08, 2014 10:20:00 AM John Myles White wrote: 
  Looking at this again, the problem with doing reshape/transpose is that 
 it's 
  very awkward when trying to read data in a stream, since you need to 
 undo 
  the reshape and transpose before starting to read from the stream again. 
 I 
  think the best solution to getting a row-major matrix of data is to add 
 a 
  wrapper around the readall method from this package that handles the 
 final 
  reshape and transpose operations when you're not reading in streaming 
 data. 
  
   -- John 
  
  On Dec 8, 2014, at 9:25 AM, Tim Holy tim@gmail.com javascript: 
 wrote: 
   Does the reshape/transpose really take any appreciable time (compared 
 to 
   the I/O)? 
   
   --Tim 
   
   On Monday, December 08, 2014 09:14:35 AM John Myles White wrote: 
   Yes, this is how I've been doing things so far. 
   
   -- John 
   
   On Dec 8, 2014, at 9:12 AM, Tim Holy tim@gmail.com javascript: 
 wrote: 
   My suspicion is you should read into a 1d vector (and use 
 `append!`), 
   then 
   at the end do a reshape and finally a transpose. I bet that will be 
 many 
   times faster than any other alternative, because we have a really 
 fast 
   transpose now. 
   
   The only disadvantage I see is taking twice as much memory as would 
 be 
   minimally needed. (This can be fixed once we have row-major arrays.) 
   
   --Tim 
   
   On Monday, December 08, 2014 08:38:06 AM John Myles White wrote: 
   I believe/hope the proposed solution will work for most cases, 
 although 
   there's still a bunch of performance work left to be done. I think 
 the 
   decoupling problem isn't as hard as it might seem since there are 
 very 
   clearly distinct stages in parsing a CSV file. But we'll find out 
 if 
   the 
   indirection I've introduced causes performance problems when things 
   can't 
   be inlined. 
   
   While writing this package, I found the two most challenging 
 problems 
   to 
   be: 
   
   (A) The disconnect between CSV files providing one row at a time 
 and 
   Julia's usage of column major arrays, which encourage reading one 
   column 
   at a time. (B) The inability to easily resize! a matrix. 
   
   -- John 
   
   On Dec 8, 2014, at 5:16 AM, Stefan Karpinski ste...@karpinski.org 
 javascript: 
   
   wrote: 
   Doh. Obfuscate the code quick, before anyone uses it! This is very 
   nice 
   and something I've always felt like we need for data formats like 
 CSV 
   – 
   a 
   way of decoupling the parsing of the format from the populating of 
 a 
   data 
   structure with that data. It's a tough problem. 
   
   On Mon, Dec 8, 2014 at 8:08 AM, Tom Short tshort...@gmail.com 
 javascript: 
   wrote: 
   Exciting, John! Although your documentation may be very sparse, 
 the 
   code 
   is nicely documented. 
   
   On Mon, Dec 8, 2014 at 12:35 AM, John Myles White 
   johnmyl...@gmail.com javascript: wrote: Over the last month 
 or so, I've been 
   slowly working on a new library that defines an abstract toolkit 
 for 
   writing CSV parsers. The goal is to provide an abstract interface 
 that 
   users can implement in order to provide functions for reading data 
   into 
   their preferred data structures from CSV files. In principle, this 
   approach should allow us to unify the code behind Base's readcsv 
 and 
   DataFrames's readtable functions. 
   
   The library is still very much a work-in-progress, but I wanted to 
 let 
   others see what I've done so that I can start getting feedback on 
 the 
   design. 
   
   Because the library makes heavy use of Nullables, you can only try 
 out 
   the 
   library on Julia 0.4. If you're interested, it's available at 
   https://github.com/johnmyleswhite/CSVReaders.jl 
   
   For now, I've intentionally given very sparse documentation to 
   discourage 
   people from seriously using the library before it's officially 
   released. 
   But there are some examples in the README that should make clear 
 how 
   the 
   library is intended to be used. 
   -- John 



[julia-users] Re: FFTW in parallel

2014-12-08 Thread Jim Christoff
Thanks

That resulted in a 30% improvement

On Thursday, December 4, 2014 10:15:14 AM UTC-5, Jim Christoff wrote:



 Does Julia's implementation of FFTW use multi-core, if not how would I 
 implement this. I need speed for realtime processing.



Re: [julia-users] Re: FFTW in parallel

2014-12-08 Thread Tim Holy
I've found that if you're doing FFT on the same size arrays repeatedly, it can 
be well worth it to invest some time in creating a plan with FFTW_MEASURE or 
FFTW_PATIENT.
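
A rough sketch of this, together with FFTW's multi-threading, in 0.3-era syntax
(the exact plan API differs slightly across Julia versions):

FFTW.set_num_threads(CPU_CORES)        # let FFTW split each transform across cores
A = rand(1024, 1024)
p = plan_fft(A, 1:2, FFTW.MEASURE)     # spend time up front to find a fast plan
for frame in 1:100
    B = p(A)                           # reuse the plan on same-size arrays
end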

--Tim

On Monday, December 08, 2014 11:21:09 AM Jim Christoff wrote:
 Thanks
 
 That resulted in a 30% improvement
 
 On Thursday, December 4, 2014 10:15:14 AM UTC-5, Jim Christoff wrote:
  Does Julia's implementation of FFTW use multi-core, if not how would I
  implement this. I need speed for realtime processing.



[julia-users] Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Petr Krysl
I posted a message yesterday about a simple port of fragments of a finite 
element code to solve the heat conduction equation.  The Julia port was 
compared to the original Matlab toolkit FinEALE and the Julia code 
presented last year by Amuthan:

https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl%7Csort:relevance/julia-users/3tTljDSQ6cs/-8UCPnNmzn4J

I have now speeded the code up some more, so that we have the table (on my 
laptop, i7, 16 gig of memory):

Amuthan's 29 seconds
J FinEALE 58 seconds
FinEALE 810 seconds

So, given that Amuthan reports being slower than a general-purpose C++ code 
by a factor of around 1.36, J FinEALE is presumably slower than an equivalent 
FE solver coded in C++ by a factor of about 2.7.  So far so good.

The curious thing is that @time reports huge amounts of memory allocated 
(something like 10% of GC time).
One particular source of wild memory allocation was this line (executed in 
this case 2 million times)

Fe += Ns[j] .*  (f * Jac * w[j]);

where 
Fe = 3x1 matrix
Ns[j] = 3x1 matrix
f = 1x1 matrix
Jac, w[j] = scalars

The cost of the operation that encloses this line (among many others):
19.835263928 seconds (4162094480 bytes allocated, 16.79% gc time)

Changing the one-by-one matrix f into a scalar (and replacing .* with *)

Fe += Ns[j] *  (f * Jac * w[j]);

changed the cost quite drastically:
10.105620394 seconds (2738120272 bytes allocated, 21.33% gc time)

Any ideas, you Julian wizards?

Thanks,

Petr


[julia-users] Re: Easy way to copy structs.

2014-12-08 Thread Josh Langsfeld
On Monday, December 8, 2014 12:15:23 PM UTC-5, Utkarsh Upadhyay wrote:

 I have just tried playing with Julia and I find myself often copying 
 `immutable` objects while changing just one field:

 function setX(pt::Point, x::Float64)
   # Only changing the `x` field
   return Point(x, pt.y, pt.z, pt.color, pt.collision, pt.foo, pt.bar);
 end

 Is there is any syntactical sugar to avoid having to write out all the 
 other fields which are not changing (e.g. pt.y, pt.z, etc.) while creating 
 a new object?
 Apart from making the function much easier to understand, this will also 
 make it much less of a hassle to change fields on the type.

 Thanks.


I played with it some more and this seems to work:

defn = setdiff(names(Point), [:x])
 @eval Point(x, $([:(pt.$n) for n in defn]...))

Of course, this will only work for the x parameter which comes first in the 
field list. I'm not sure how you might extend it to arguments in the middle 
of the list, but maybe you can figure it out.


[julia-users] Re: Easy way to copy structs.

2014-12-08 Thread Josh Langsfeld
There isn't any special syntax for listing some of the type's fields, but 
you might be able to pull it off with some metaprogramming.

setdiff(names(Point), [:x]) will give you the leftover fields to use as 
default. You might be able to write a macro you could insert into the 
constructor call to copy all the other fields given that list as input.

On Monday, December 8, 2014 12:15:23 PM UTC-5, Utkarsh Upadhyay wrote:

 I have just tried playing with Julia and I find myself often copying 
 `immutable` objects while changing just one field:

 function setX(pt::Point, x::Float64)
   # Only changing the `x` field
   return Point(x, pt.y, pt.z, pt.color, pt.collision, pt.foo, pt.bar);
 end

 Is there is any syntactical sugar to avoid having to write out all the 
 other fields which are not changing (e.g. pt.y, pt.z, etc.) while creating 
 a new object?
 Apart from making the function much easier to understand, this will also 
 make it much less of a hassle to change fields on the type.

 Thanks.



[julia-users] Re: Easy way to copy structs.

2014-12-08 Thread Tobias Knopp
please have a look at https://github.com/JuliaLang/julia/pull/6122 which 
looks like the way to go (once merged)

Cheers

Tobi

Am Montag, 8. Dezember 2014 20:18:18 UTC+1 schrieb Josh Langsfeld:

 On Monday, December 8, 2014 12:15:23 PM UTC-5, Utkarsh Upadhyay wrote:

 I have just tried playing with Julia and I find myself often copying 
 `immutable` objects while changing just one field:

 function setX(pt::Point, x::Float64)
   # Only changing the `x` field
   return Point(x, pt.y, pt.z, pt.color, pt.collision, pt.foo, pt.bar);
 end

 Is there is any syntactical sugar to avoid having to write out all the 
 other fields which are not changing (e.g. pt.y, pt.z, etc.) while creating 
 a new object?
 Apart from making the function much easier to understand, this will also 
 make it much less of a hassle to change fields on the type.

 Thanks.


 I played with it some more and this seems to work:

 defn = setdiff(names(Point), [:x])
  @eval Point(x, $([:(pt.$n) for n in defn]...))

 Of course, this will only work for the x parameter which comes first in 
 the field list. I'm not sure how you might extend it to arguments in the 
 middle of the list, but maybe you can figure it out.



Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-08 Thread remi . berson
Ok I got it. Sorry, I was a bit confused.
Thank you :)


On Monday, December 8, 2014 1:28:47 PM UTC+1, Stefan Karpinski wrote:

 On Mon, Dec 8, 2014 at 3:24 AM, ele...@gmail.com javascript: wrote:



 On Monday, December 8, 2014 6:15:37 PM UTC+10, remi@gmail.com wrote:

 Thank you for the explanation Stefan.
 But isn't it possible to just consider the scopes declared inside of a 
 function + the global scope while looking for a variable definition? I find 
 the fact that the variable can come from the scope in which the function is 
 called strange.


 It can come from the scope in which the function is defined, not the 
 scope in which it is called, see 
 http://docs.julialang.org/en/latest/manual/variables-and-scoping/#scope-of-variables

 Cheers
 Lex


 Yes, variables can only come from the scope where the function is defined, 
 not where it is called – that is lexical scoping. Allowing variables to 
 come from the calling scope is dynamic scoping, which has generally fallen 
 out of favor in modern programming languages because it makes it impossible 
 to reason locally about the meaning of code.



Re: [julia-users] Re: Easy way to copy structs.

2014-12-08 Thread Tom Short
You can also define your own constructor that works the same way as the one in
the issue Tobias pointed to:

immutable Point
x::Float64
y::Float64
z::Float64
color::Int
collision::Bool
end

Point(pt::Point; x = pt.x, y = pt.y, z = pt.z, color = pt.color, collision
= pt.collision) = Point(x, y, z, color, collision)

setX(pt::Point, x::Float64) = Point(pt, x = x)
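
For example, assuming the Point definition above:

pt  = Point(1.0, 2.0, 3.0, 7, false)
pt2 = Point(pt, x = 10.0)     # copy of pt with only x changed
pt3 = setX(pt, 5.0)           # the same thing via the helper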


On Mon, Dec 8, 2014 at 4:16 PM, Tobias Knopp tobias.kn...@googlemail.com
wrote:

 please have a look at https://github.com/JuliaLang/julia/pull/6122 which
 looks like the way to go (once merged)

 Cheers

 Tobi

 Am Montag, 8. Dezember 2014 20:18:18 UTC+1 schrieb Josh Langsfeld:

 On Monday, December 8, 2014 12:15:23 PM UTC-5, Utkarsh Upadhyay wrote:

 I have just tried playing with Julia and I find myself often copying
 `immutable` objects while changing just one field:

 function setX(pt::Point, x::Float64)
   # Only changing the `x` field
   return Point(x, pt.y, pt.z, pt.color, pt.collision, pt.foo, pt.bar);
 end

 Is there is any syntactical sugar to avoid having to write out all the
 other fields which are not changing (e.g. pt.y, pt.z, etc.) while creating
 a new object?
 Apart from making the function much easier to understand, this will also
 make it much less of a hassle to change fields on the type.

 Thanks.


 I played with it some more and this seems to work:

 defn = setdiff(names(Point), [:x])
  @eval Point(x, $([:(pt.$n) for n in defn]...))

 Of course, this will only work for the x parameter which comes first in
 the field list. I'm not sure how you might extend it to arguments in the
 middle of the list, but maybe you can figure it out.




Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Ivar Nesje
The book is written by Malcolm Sherrington. He has a very limited online 
presence and only two threads on julia-users, and I couldn't find his github 
username. I got the first two chapters for review, but they gave me short 
deadlines, and I felt so bad about missing their imposed 3-day deadline that I 
stopped replying. I didn't have the attention span to really sit down, tie up 
the loose ends, and give more useful comments than typos and small errors. 
Maybe it was just that a language introduction is hard to write, and that it 
will improve when the basics are known.

I'm very curious how someone can write a book about Julia, at this stage, 
without discovering anything that would be worth opening an issue on github to 
get fixed. I generally find issues and potential improvements everywhere I 
look, and frankly that's one of the most fun things about being part of the 
language development at this early stage.

Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-08 Thread Stefan Karpinski
You are quite welcome. Glad to clarify.

On Mon, Dec 8, 2014 at 4:18 PM, remi.ber...@gmail.com wrote:

 Ok I got it. Sorry, I was a bit confused.
 Thank you :)


 On Monday, December 8, 2014 1:28:47 PM UTC+1, Stefan Karpinski wrote:

 On Mon, Dec 8, 2014 at 3:24 AM, ele...@gmail.com wrote:



 On Monday, December 8, 2014 6:15:37 PM UTC+10, remi@gmail.com wrote:

 Thank you for the explanation Stefan.
 But isn't it possible to just consider the scopes declared inside of a
 function + the global scope while looking for a variable definition? I find
 the fact that the variable can come from the scope in which the function is
 called strange.


 It can come from the scope in which the function is defined, not the
 scope in which it is called, see http://docs.julialang.org/
 en/latest/manual/variables-and-scoping/#scope-of-variables

 Cheers
 Lex


 Yes, variables can only come from the scope where the function is
 defined, not where it is called – that is lexical scoping. Allowing
 variables to come from the calling scope is dynamic scoping, which has
 generally fallen out of favor in modern programming languages because it
 makes it impossible to reason locally about the meaning of code.
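
A small standalone example of the lexical rule (not from the thread): the `n`
used inside f is the one visible where f is defined, not the caller's local.

n = 10
f() = n + 1          # refers to the global n in f's defining scope

function caller()
    n = 0            # a new local inside caller; invisible to f
    f()
end

caller()             # returns 11, not 1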




[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Valentin Churavy
I would think that when f is a 1x1 matrix, Julia allocates a new 1x1 
matrix to store the result. If it is a scalar, that allocation can be 
skipped. When this part of the code sits in a hot loop, you end up allocating 
millions of very small, short-lived objects, and that taxes the GC quite a lot.
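
A self-contained sketch (made-up variable names, not Petr's actual code) of the
workaround: hoist the scalar out of the 1x1 matrix and accumulate into Fe in
place, so no small temporary arrays are created per quadrature point:

Fe  = zeros(3)                 # 3-element accumulator
Ns  = [rand(3) for j = 1:4]    # basis function values, one vector per point
w   = rand(4)                  # quadrature weights
f   = fill(2.0, 1, 1)          # the one-by-one "matrix" source term
Jac = 0.5                      # Jacobian (scalar)

for j = 1:4
    c = f[1] * Jac * w[j]      # f[1] is a Float64, so c is a plain scalar
    for k = 1:3
        Fe[k] += Ns[j][k] * c  # in-place accumulation, no temporaries
    end
end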

On Monday, 8 December 2014 21:09:19 UTC+1, Petr Krysl wrote:

 I posted a message yesterday about a simple port of fragments of a finite 
 element code to solve the heat conduction equation.  The Julia port was 
 compared to the original Matlab toolkit FinEALE and the Julia code 
 presented last year by Amuthan:


 https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl%7Csort:relevance/julia-users/3tTljDSQ6cs/-8UCPnNmzn4J

 I have now speeded the code up some more, so that we have the table (on my 
 laptop, i7, 16 gig of memory):

 Amuthan's 29 seconds
 J FinEALE 58 seconds
 FinEALE 810 seconds

 So, given that Amuthan reports being slower than a general-purpose C++ code 
 by a factor of around 1.36, J FinEALE is presumably slower with respect to 
 an equivalent FE solver coded in C++ by a factor of 2.7.  So far so good.

 The curious thing is that @time reports huge amounts of memory allocated 
 (something like 10% of GC time).
 One particular source of wild memory allocation was this line (executed in 
 this case 2 million times)

 Fe += Ns[j] .*  (f * Jac * w[j]);

 where 
 Fe = 3 x 1 matrix
 Ns[j] = 3 by one matrix
 f = one by one matrix
 Jac, w[j]= scalars

 The cost of the operation that encloses this line (among many others):
 19.835263928 seconds (4162094480 bytes allocated, 16.79% gc time

 Changing the one by one matrix f into a scalar (and replacing .*)

 Fe += Ns[j] *  (f * Jac * w[j]);

 changed the cost quite drastically:
 10.105620394 seconds (2738120272 bytes allocated, 21.33% gc time

 Any ideas, you Julian wizards?

 Thanks,

 Petr



Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Stefan Karpinski
I find this odd as well.

On Mon, Dec 8, 2014 at 4:54 PM, Ivar Nesje iva...@gmail.com wrote:

 The book is written by Malcolm Sherrington. He has a very limited online
 presence and only two threads on julia-users, and I couldn't find his
 github username. I got the two first chapters for review, but they gave me
 short deadlines and I felt so bad about missing their imposed 3 day
 deadline, that I stopped replying. I didn't have the attention span to
 really sit down to try to nest the loose ends, and give more useful
 comments than typos and small errors. Maybe it was just that a language
 introduction is hard to write, and that it will improve when the basics are
 known.

 I'm very curious how someone can write a book about Julia, at this stage,
 without discovering anything that would be worth opening an issue on github
 to get fixed. I generally find issues and potential improvements everywhere
 I look, and frankly that's one of the most fun thing about being part of
 the language development at this early stage.


Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Ivar Nesje
By the way, I think his github presence is at https://github.com/sherrinm (not 
much to see).

Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread cdm

see also:

http://datasciencelondon.org/julia-language-shooting-star-or-a-flash-in-the-pan-by-malcolm-sherrington/

https://skillsmatter.com/legacy_profile/malcolm-sherrington#overview


... and the meetup group:

http://www.meetup.com/London-Julia-User-Group

which seems to get good reviews.

cdm



On Monday, December 8, 2014 2:32:36 PM UTC-8, Ivar Nesje wrote:

 By the way, I think his github presence is at https://github.com/sherrinm 
 (not much to see).



Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread cdm

oh, and this:

http://www.meetup.com/Financial-Engineers-Quants-London/members/14596802/

for what it is worth ... I have known a few Quant types who are quite capable
of operating on an island ...

in fact, sometimes their environments require this of them.

best,

cdm




On Monday, December 8, 2014 3:25:43 PM UTC-8, cdm wrote:


 see also:


 http://datasciencelondon.org/julia-language-shooting-star-or-a-flash-in-the-pan-by-malcolm-sherrington/

 https://skillsmatter.com/legacy_profile/malcolm-sherrington#overview


 ... and the meetup group:

 http://www.meetup.com/London-Julia-User-Group

 which seems to get good reviews.

 cdm



 On Monday, December 8, 2014 2:32:36 PM UTC-8, Ivar Nesje wrote:

 By the way, I think his github presence is at https://github.com/sherrinm 
 (not much to see).



Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Ivar Nesje
Thanks cdm, those were much more useful links. I'm just accustomed to searching 
GitHub, StackOverflow, and personal homepages to get an overview of a developer, 
but as you say, I'll likely get a very wrong impression of many people that way.

Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Avik Sengupta
Yes, Malcolm runs the London Julia user group. 

On Monday, 8 December 2014 23:25:43 UTC, cdm wrote:


 see also:


 http://datasciencelondon.org/julia-language-shooting-star-or-a-flash-in-the-pan-by-malcolm-sherrington/

 https://skillsmatter.com/legacy_profile/malcolm-sherrington#overview


 ... and the meetup group:

 http://www.meetup.com/London-Julia-User-Group

 which seems to get good reviews.

 cdm



 On Monday, December 8, 2014 2:32:36 PM UTC-8, Ivar Nesje wrote:

 By the way, I think his github presence is at https://github.com/sherrinm 
 (not much to see).



Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread John Myles White
I've met Malcolm and like him quite a lot. I didn't realize he was writing this 
specific book.

 -- John

On Dec 8, 2014, at 4:42 PM, Avik Sengupta avik.sengu...@gmail.com wrote:

 Yes, Malcom runs the London Julia user group. 
 
 On Monday, 8 December 2014 23:25:43 UTC, cdm wrote:
 
 see also:
 
 http://datasciencelondon.org/julia-language-shooting-star-or-a-flash-in-the-pan-by-malcolm-sherrington/
 
 https://skillsmatter.com/legacy_profile/malcolm-sherrington#overview
 
 
 ... and the meetup group:
 
 http://www.meetup.com/London-Julia-User-Group
 
 which seems to get good reviews.
 
 cdm
 
 
 
 On Monday, December 8, 2014 2:32:36 PM UTC-8, Ivar Nesje wrote:
 By the way, I think his github presence is at https://github.com/sherrinm 
 (not much to see).



[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Robert Gates
Hi Petr,

just on a side-note: are you planning on implementing a complete FE package 
in Julia? In that case we could pool our efforts.

Best regards,

Robert

On Monday, December 8, 2014 9:09:19 PM UTC+1, Petr Krysl wrote:

 I posted a message yesterday about a simple port of fragments of a finite 
 element code to solve the heat conduction equation.  The Julia port was 
 compared to the original Matlab toolkit FinEALE and the Julia code 
 presented last year by Amuthan:


 https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl%7Csort:relevance/julia-users/3tTljDSQ6cs/-8UCPnNmzn4J

 I have now speeded the code up some more, so that we have the table (on my 
 laptop, i7, 16 gig of memory):

 Amuthan's 29 seconds
 J FinEALE 58 seconds
 FinEALE 810 seconds

 So, given that Amuthan reports being slower than a general-purpose C++ code 
 by a factor of around 1.36, J FinEALE is presumably slower with respect to 
 an equivalent FE solver coded in C++ by a factor of 2.7.  So far so good.

 The curious thing is that @time reports huge amounts of memory allocated 
 (something like 10% of GC time).
 One particular source of wild memory allocation was this line (executed in 
 this case 2 million times)

 Fe += Ns[j] .*  (f * Jac * w[j]);

 where 
 Fe = 3 x 1 matrix
 Ns[j] = 3 by one matrix
 f = one by one matrix
 Jac, w[j]= scalars

 The cost of the operation that encloses this line (among many others):
 19.835263928 seconds (4162094480 bytes allocated, 16.79% gc time

 Changing the one by one matrix f into a scalar (and replacing .*)

 Fe += Ns[j] *  (f * Jac * w[j]);

 changed the cost quite drastically:
 10.105620394 seconds (2738120272 bytes allocated, 21.33% gc time

 Any ideas, you Julian wizards?

 Thanks,

 Petr



Re: [julia-users] Initializing a SharedArray Memory Error

2014-12-08 Thread Isaiah Norton
Hopefully you will get an answer on pmap from someone more familiar with
the parallel stuff, but: have you tried splitting the init step? (see the
example in the manual for how to init an array in chunks done by different
workers). Just guessing though: I'm not sure if/how those will be
serialized if each worker is contending for the whole array.
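
For reference, a minimal sketch of the chunked-init pattern (adapted from the
manual's SharedArray example, 0.3-era syntax; the names here are made up and it
requires addprocs(...) beforehand). Each worker writes only its own slice via
localindexes, so nobody contends for the whole array:

@everywhere fill_chunk!(S) = (S[localindexes(S)] = float(myid()); S)

S = SharedArray(Float64, (4, 4, 4), init = fill_chunk!, pids = workers())
# in the real problem each chunk would be filled from the C routine instead of myid()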

On Fri, Dec 5, 2014 at 4:23 PM, benFranklin gcam...@gmail.com wrote:

 Hi all, I'm trying to figure out how to best initialize a SharedArray,
 using a C function to fill it up that computes a huge matrix in parts, and
 all comments are appreciated. To summarise: Is A, making an empty shared
 array, computing the matrix in parallel using pmap and then filling it up
 serially, better than using B, computing in parallel and storing in one
 step by using an init function in the SharedArray declaration?


 The difference tends to be that B uses a lot more memory, each process
 using the exact same amount of memory. However it is much faster than A, as
 the copy step takes longer than the computation, but in A most of the
 memory usage is in one process, using less memory overall.

 Any tips on how to do this better? Also, this pmap is how I'm handling
 more complex parallelizations in Julia. Any comments on that approach?

 Thanks a lot!

 Best,
 Ben


 CODE A:

 Is this, making an empty shared array, computing the matrix in parallel
 and then filling it up serially:

 function findZeroDividends(model::ModelPrivate)
     nW = length(model.vW)
     nZ = length(model.vZ)
     nK = length(model.vK)
     nQ = length(model.vQ)
     zeroMatrix = SharedArray(Float64,(nW,nZ,nK,nQ,nQ,nQ),pids=workers())

     input = [stateFindZeroK(w,z,k,model) for w in 1:nW, z in 1:nZ, k in 1:nK];
     results = pmap(findZeroInC,input);

     for w in 1:nW
         for z in 1:nZ
             for k in 1:nK
                 zeroMatrix[w,z,k,:,:,:] = results[w + nW*((z-1) + nZ*(k-1))]
             end
         end
     end

     return zeroMatrix
 end

 ___

 CODE B:

 Better than these two:

 function start(x::SharedArray,nW::Int64,nZ::Int64,nK::Int64,model::ModelPrivate)
     for j in myid()-1:nworkers():(nW*nZ*nK)
         inds = ind2sub((nW,nZ,nK),j)
         x[inds[1],inds[2],inds[3],:,:,:] =
             findZeroInC(stateFindZeroK(inds[1],inds[2],inds[3],model))
     end

     x
 end

 function findZeroDividendsSmart(model::ModelPrivate)
     nW = length(model.vW)
     nZ = length(model.vZ)
     nK = length(model.vK)
     nQ = length(model.vQ)

     #input = [stateFindZeroK(w,z,k,model) for w in 1:nW, z in 1:nZ, k in 1:nK];
     #results = pmap(findZeroInC,input);

     zeroMatrix = SharedArray(Float64,(nW,nZ,nK,nQ,nQ,nQ),pids=workers(),
                              init = x -> start(x,nW,nZ,nK,model))

     return zeroMatrix
 end

 

 The C function being called is inside this wrapper; it returns the pointer
 allocated by double *capitalChoices = (double *)malloc(sizeof(double)*nQ*nQ*nQ);

 function findZeroInC(state::stateFindZeroK)
     w = state.wealth
     z = state.z
     k = state.k
     model = state.model

     # C signature:
     # findZeroInC(double wealth, int z, int k, double theta, double delta, double* vK,
     #             int nK, double* vQ, int nQ, double* transition, double betaGov)

     nQ = length(model.vQ)

     t = ccall((:findZeroInC, "findP.so"), Ptr{Float64},
               (Float64,Int64,Int64,Float64,Float64,Ptr{Float64},Int64,Ptr{Float64},Int64,Ptr{Float64},Float64),
               model.vW[w],z-1,k-1,model.theta,model.delta,model.vK,length(model.vK),model.vQ,nQ,model.transition,model.betaGov)
     if t == C_NULL
         error("NULL")
     end

     return pointer_to_array(t,(nQ,nQ,nQ),true)
 end


 https://lh5.googleusercontent.com/-5rJqYh2oUqQ/VIIiFQUl2rI/AvM/gwAXG7N0Gxc/s1600/mem.png





[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Petr Krysl
Hi Robert,

At this point I am really impressed with Julia.  Consequently, I am tempted 
to rewrite my Matlab toolkit FinEALE 
(https://github.com/PetrKryslUCSD/FinEALE) in Julia and to implement 
further research ideas in this new environment. 

What did you have in mind?

Petr
pkr...@ucsd.edu

On Monday, December 8, 2014 6:41:02 PM UTC-8, Robert Gates wrote:

 Hi Petr,

 just on a side-note: are you planning on implementing a complete FE 
 package in Julia? In that case we could pool our efforts.




[julia-users] THANKS to Julia core developers!

2014-12-08 Thread Petr Krysl
I've been playing in Julia for the past week or so, but already the results 
are convincing.  This language is GREAT.   I've coded hundreds of thousands 
of lines in Fortran, C, C++, Matlab, and this is the first language that 
feels good. And it is precisely what I envision for my project.

So, THANKS! 

Petr Krysl


Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-08 Thread Stefan Karpinski
Interesting. I'll have to check out these links. Started on one of the
presentations. Glad to know that he's fairly involved in the London Julia
community. Would love to meet some time. I feel like the process of getting
reviewers might be more effective if he reached out directly. Malcolm, this
is an invite: if you read this, email me, let's chat :-)

On Mon, Dec 8, 2014 at 7:49 PM, John Myles White johnmyleswh...@gmail.com
wrote:

 I've met Malcolm and like him quite a lot. I didn't realize he was writing
 this specific book.

  -- John

 On Dec 8, 2014, at 4:42 PM, Avik Sengupta avik.sengu...@gmail.com wrote:

 Yes, Malcom runs the London Julia user group.

 On Monday, 8 December 2014 23:25:43 UTC, cdm wrote:


 see also:

 http://datasciencelondon.org/julia-language-shooting-star-or-a-flash-in-the-pan-by-malcolm-sherrington/

 https://skillsmatter.com/legacy_profile/malcolm-sherrington#overview


 ... and the meetup group:

 http://www.meetup.com/London-Julia-User-Group

 which seems to get good reviews.

 cdm



 On Monday, December 8, 2014 2:32:36 PM UTC-8, Ivar Nesje wrote:

 By the way, I think his github presence is at
 https://github.com/sherrinm (not much to see).





Re: [julia-users] THANKS to Julia core developers!

2014-12-08 Thread Stefan Karpinski
:-D

On Tue, Dec 9, 2014 at 12:02 AM, Petr Krysl krysl.p...@gmail.com wrote:

 I've been playing in Julia for the past week or so, but already the
 results are convincing.  This language is GREAT.   I've coded hundreds of
 thousands of lines in Fortran, C, C++, Matlab, and this is the first
 language that feels good. And it is precisely what I envision for my
 project.

 So, THANKS!

 Petr Krysl



[julia-users] Re: Finite element code in Julia: Curious overhead in .* product

2014-12-08 Thread Robert Gates
Hi Petr,

I wrote you an email.

Although I haven't seen your Julia implementation, I believe you could cut 
down on GC time by running the GC at regular intervals only. For example, 
when I loop over something, I usually choose to purge memory every 20Mb, 
e.g. every 1000 iterations. That way, I have experienced 10x speed-ups for 
GC heavy problems. For example: 
https://gist.github.com/rleegates/c305c0fec0b963257cba
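
In sketch form (gc_disable/gc_enable/gc are the 0.3-era names; the interval of
1000 is arbitrary, not tuned to any particular workload):

gc_disable()                 # keep the collector out of the hot loop
for i = 1:1000000
    # ... allocation-heavy inner work goes here ...
    if i % 1000 == 0
        gc_enable(); gc(); gc_disable()   # purge accumulated garbage periodically
    end
end
gc_enable()                  # restore normal collection afterwards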

Best regards,

Robert

On Monday, December 8, 2014 9:09:19 PM UTC+1, Petr Krysl wrote:

 I posted a message yesterday about a simple port of fragments of a finite 
 element code to solve the heat conduction equation.  The Julia port was 
 compared to the original Matlab toolkit FinEALE and the Julia code 
 presented last year by Amuthan:


 https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/Krysl%7Csort:relevance/julia-users/3tTljDSQ6cs/-8UCPnNmzn4J

 I have now speeded the code up some more, so that we have the table (on my 
 laptop, i7, 16 gig of memory):

 Amuthan's 29 seconds
 J FinEALE 58 seconds
 FinEALE 810 seconds

 So, given that Amuthan reports being slower than a general-purpose C++ code 
 by a factor of around 1.36, J FinEALE is presumably slower with respect to 
 an equivalent FE solver coded in C++ by a factor of 2.7.  So far so good.

 The curious thing is that @time reports huge amounts of memory allocated 
 (something like 10% of GC time).
 One particular source of wild memory allocation was this line (executed in 
 this case 2 million times)

 Fe += Ns[j] .*  (f * Jac * w[j]);

 where 
 Fe = 3 x 1 matrix
 Ns[j] = 3 by one matrix
 f = one by one matrix
 Jac, w[j]= scalars

 The cost of the operation that encloses this line (among many others):
 19.835263928 seconds (4162094480 bytes allocated, 16.79% gc time

 Changing the one by one matrix f into a scalar (and replacing .*)

 Fe += Ns[j] *  (f * Jac * w[j]);

 changed the cost quite drastically:
 10.105620394 seconds (2738120272 bytes allocated, 21.33% gc time

 Any ideas, you Julian wizards?

 Thanks,

 Petr



[julia-users] Best way to remove an observation from a TimeArray

2014-12-08 Thread colintbowers
Hi all,

The TimeArray type in the TimeSeries package is immutable. I think I can 
see why this makes sense. However, I think this means that if the array I 
have in the values field is of dimension greater than 1 (let's say it is 
a matrix), then the only way I can remove an observation from my TimeArray 
is to construct a completely new TimeArray variable with that observation 
missing. Is this right? My reasoning is that although I could remove the 
observation from the timestamp field using deleteat!, I can't do the same 
for the values field since it is a matrix, nor can I assign a new matrix 
with one less row to the values field, because the TimeArray type is 
immutable.

I just wanted to check if my reasoning was correct or if there is some 
clever way to accomplish the removal of an observation without constructing 
an entirely new TimeArray variable.
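
For concreteness, this is the kind of thing I mean (a rough sketch, assuming the
three-argument TimeArray constructor and the timestamp/values/colnames fields,
with values stored as a matrix holding one row per timestamp):

using TimeSeries

# build a new TimeArray with observation (row) i dropped
function drop_observation(ta, i::Int)
    keep = setdiff(1:length(ta.timestamp), [i])   # every row index except i
    return TimeArray(ta.timestamp[keep], ta.values[keep, :], ta.colnames)
end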

Cheers,

Colin