Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Tim Holy
http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types

Use foo{V<:VecOrMat}(X::Vector{V})

--Tim

On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote:
 Hi guys,
 
 I am trying to write a function which accepts as an input either a vector
 of vectors or a vector of matrices e.g.
 
 function foo(X::Vector{VecOrMat{Float64}})
 
 When running the function with a vector of matrices I get the following
 error  'foo' has no method matching foo(::Array{Array{Float64,2},1})
 
 Am I missing something here ?
 
 Thanks,
 Jan



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread 'Antoine Messager' via julia-users
The previous picture was only an example; by the end I need to solve 
nonlinear systems of dimension 500. I expect NLsolve to work in one dimension 
too. 

I have not figured out how to use an anonymous function within NLsolve. I don't 
understand which README you are talking about, Tim Holy. I went to the 
julia docs and to wikipedia and I found the following definitions:
   h = (x,z) -> [z^2-1+x, x^2-z^3]
or 
 function (x,z)
     [z^2-1+x, x^2-z^3]
 end
But neither seems to work. When it is not a problem of the number of 
arguments, it says that `*` has no method matching *(::Array{Float64,1}, 
::Array{Float64,1}), so I used h = (x,z) -> [z.^2-1+x, x.^2-z.^3], but it 
does not give the correct result. 

In the loop the function always has the same name, but the function is 
located in a different place:

for instance for the first iteration:

##f#32489 (generic function with 1 method)

and for the second one:

##f#32490 (generic function with 1 method)

and so on...

So it seems that new space is allocated at each iteration even though the 
function is created under the same name (myfun).


On Tuesday, 28 April 2015 at 13:27:21 UTC+1, Yuuki Soho wrote:

 Do you really need to use a different name for your function each time ? 
 you could just use the same name it seems. I'm not sure it would solve the 
 problem though.



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Yuuki Soho
I think one should go over all the names in Base and see if there are some 
rules that can be applied sanely to come up with a better naming scheme.

If you introduce factorize(MyType,...) and want to be consistent about this 
kind of thing, you might end up changing a lot of the functions in Base. 
E.g.

sqrtm -> sqrt(Matrix,...)
hist2d -> hist(Dimension{2},...)
...
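The dispatch-based scheme sketched above could look like the following (a hypothetical sketch only; `mysqrt`, `myhist` and `Dimension` are illustrative names, not Base API; Julia 0.3-era syntax):

```julia
# Hypothetical dispatch-based naming: a type argument selects the variant,
# replacing suffixed names like sqrtm and hist2d.
immutable Dimension{N} end

# was sqrtm(A): the "matrix" variant selected by a type tag
mysqrt(::Type{Matrix{Float64}}, A::Matrix{Float64}) = sqrtm(A)

# was hist2d(...): the dimensionality carried in a type parameter
myhist{N}(::Type{Dimension{N}}, v) = "would compute a $(N)-d histogram of $(length(v)) points"

mysqrt(Matrix{Float64}, [4.0 0.0; 0.0 9.0])   # dispatches on the type tag
myhist(Dimension{2}, rand(100))
```

The point is only that multiple dispatch can carry the information that currently lives in name suffixes; which spellings would be acceptable for Base is exactly what this thread is debating.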


[julia-users] Re: Code starting to be slow after 30 min of execution

2015-04-28 Thread 'Antoine Messager' via julia-users
Both ideas you have given are working. Wonderful! I just need to figure out 
which one is the fastest, the @gensym out of the creation of the function 
probably. 
Thank you very much!
Antoine.

On Monday, 27 April 2015 at 15:49:56 UTC+1, Antoine Messager wrote:

 Dear all,

 I need to create a lot of systems of equation, find some characteristics 
 of each system and store the system if of interest. Each system is created 
 under the same name. It works fine for the first 1000 systems but after the 
 program starts to be too slow. I have tried to use the garbage collector 
 each time I create a new system but it did not speed up the code. I don't 
 know what to do, I don't understand where it could come from. 

 Cheers,
 Antoine



Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Tim Holy
Intel compilers won't help, because your julia code is being compiled by LLVM.

It's still hard to tell what's up from what you've shown us. When you run 
@time, does it allocate any memory? (You still have global variables in there, 
but maybe you made them const.)

But you can save yourself two iterations through the arrays (i.e., more cache 
misses) by putting
T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1]
inside the first loop and discarding the second loop (except for cleaning up 
the edges). Fortran may be doing this automatically for you? 
http://en.wikipedia.org/wiki/Polytope_model
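The fusion described above can be sketched as a helper that does one time step in a single sweep plus an edge cleanup (a sketch only: `step_fused!` is a hypothetical name, with the arrays and constants shaped as in the code quoted later in the thread; the skew by (i-1,j-1,k-1) is safe for the k,j,i loop order with i fastest):

```julia
# One fused time step: once the stencil at (i,j,k) has been evaluated,
# cell (i-1,j-1,k-1) is never read again in this sweep, so it can be
# updated immediately, saving one full pass over the arrays.
function step_fused!(T, RHS, dt, A, dx2, dy2, dz2)
    NX, NY, NZ = size(T)
    @inbounds for k = 2:NZ-1, j = 2:NY-1, i = 2:NX-1
        RHS[i,j,k] = dt*A*(
            (T[i-1,j,k] - 2*T[i,j,k] + T[i+1,j,k])/dx2 +
            (T[i,j-1,k] - 2*T[i,j,k] + T[i,j+1,k])/dy2 +
            (T[i,j,k-1] - 2*T[i,j,k] + T[i,j,k+1])/dz2 )
        if i > 2 && j > 2 && k > 2
            T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1]
        end
    end
    # Clean up the interior cells the skewed update could not reach
    # (those on the upper faces i == NX-1, j == NY-1 or k == NZ-1).
    @inbounds for k = 2:NZ-1, j = 2:NY-1, i = 2:NX-1
        if i == NX-1 || j == NY-1 || k == NZ-1
            T[i,j,k] += RHS[i,j,k]
        end
    end
end
```

The edge pass touches far fewer cells than a full second sweep, so most of the second traversal (and its cache misses) is saved.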

--Tim

On Tuesday, April 28, 2015 07:14:40 AM Ángel de Vicente wrote:
 Hi Tim,
 
 On Tuesday, April 28, 2015 at 2:53:45 PM UTC+1, Tim Holy wrote:
  Before deciding that the compiler is the answer...profile. Where is the
  bottleneck?
 
 well, the code now runs quite fast (double the time it takes for my Fortran
 version), after following the suggestions made in this thread. Basically
 there is only one function in the code, so the bottleneck has to be there
 
 :-), but I'm not sure I can do anything else to improve its performance.
 
 The relevant part of the code is:
 
 const T = zeros(Float64,NX,NY,NZ)
 const RHS = zeros(Float64,NX,NY,NZ)
 
 [...]
 
 function main_loop()
     for n = 0:NT-1
         @inbounds for k=2:NZ-1, j=2:NY-1, i=2:NX-1
             RHS[i,j,k] = dt*A*(
                 (T[i-1,j,k]-2*T[i,j,k]+T[i+1,j,k])/dx2  +
                 (T[i,j-1,k]-2*T[i,j,k]+T[i,j+1,k])/dy2  +
                 (T[i,j,k-1]-2*T[i,j,k]+T[i,j,k+1])/dz2 )
         end
 
         @inbounds for k=2:NZ-1, j=2:NY-1, i=2:NX-1
             T[i,j,k] = T[i,j,k] + RHS[i,j,k]
         end
     end
 end
 
 Trying to get Julia compiled with the Intel compilers was just to see if I
 could squeeze a bit more performance out of it, but certainly I would also
 appreciate any suggestions on how to speed up my existing Julia code.
 
 Thanks,
 Ángel de Vicente



Re: [julia-users] Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Angel de Vicente
Hi,

Ángel de Vicente angel.vicente.garr...@gmail.com writes:
 Now I have two more questions, to see if I can get better performance:


 1) I'm just running the Julia distribution that came with my Ubuntu
 distro. I don't know how this was compiled. Is there a way to see
 which optimization level and which compiler options were used when
 compiling Julia? Would I be able to get better performance out of
 Julia if I did my own compilation from source, either using a higher
 optimization flag or perhaps even using another compiler? (I have
 access to the Intel compilers suite here.)

regarding this, I downloaded Julia source and I compiled it with the
default makefiles (gfortran and optimization -O3 as far as I can see),
and there was no important time difference.

I tried to compile with Intel compilers by creating the Make.user file
with the following content

,
| USEICC=1
| USEIFC=1
| USE_INTEL_LIBM=1
`

but it failed, with the following error:

,
| Making all in src
| /usr/include/c++/4.8/string(38): catastrophic error: cannot open source
| file "bits/c++config.h"
|   #include <bits/c++config.h>
|          ^
| 
| compilation aborted for patchelf.cc (code 4)
`

Any hints on getting it compiled with the Intel compilers? 

 2) Is it possible to give optimization flags somehow to the JIT
 compiler? In this case I know that the main_loop function is crucial,
 and it is going to be executed hundreds/thousands of times, so I
 wouldn't mind spending more time the first time it is compiled if it
 can be optimized as much as possible.

I looked around the Julia documentation, but saw nothing, so perhaps
this is not possible at all?

Thanks,
-- 
Ángel de Vicente
http://www.iac.es/galeria/angelv/  


Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Ángel de Vicente
Hi Tim,

On Tuesday, April 28, 2015 at 2:53:45 PM UTC+1, Tim Holy wrote:

 Before deciding that the compiler is the answer...profile. Where is the 
 bottleneck? 


well, the code now runs quite fast (double the time it takes for my Fortran 
version), after following the suggestions made in this thread. Basically 
there is only one function in the code, so the bottleneck has to be there 
:-), but I'm not sure I can do anything else to improve its performance. 
The relevant part of the code is:

const T = zeros(Float64,NX,NY,NZ)
const RHS = zeros(Float64,NX,NY,NZ)

[...]

function main_loop()
    for n = 0:NT-1
        @inbounds for k=2:NZ-1, j=2:NY-1, i=2:NX-1
            RHS[i,j,k] = dt*A*(
                (T[i-1,j,k]-2*T[i,j,k]+T[i+1,j,k])/dx2  +
                (T[i,j-1,k]-2*T[i,j,k]+T[i,j+1,k])/dy2  +
                (T[i,j,k-1]-2*T[i,j,k]+T[i,j,k+1])/dz2 )
        end

        @inbounds for k=2:NZ-1, j=2:NY-1, i=2:NX-1
            T[i,j,k] = T[i,j,k] + RHS[i,j,k]
        end
    end
end

Trying to get Julia compiled with the Intel compilers was just to see if I 
could squeeze a bit more performance out of it, but certainly I would also 
appreciate any suggestions on how to speed up my existing Julia code.

Thanks,
Ángel de Vicente


Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Tim Holy
On Tuesday, April 28, 2015 05:45:10 AM 'Antoine Messager' via julia-users 
wrote:
 The previous picture was only an example; by the end I need to solve
 nonlinear systems of dimension 500. I expect NLsolve to work in one dimension
 too.
 
 I have not figured out how to use an anonymous function within NLsolve. I don't
 understand which README you are talking about, Tim Holy.

Ah, I was misreading that as NLopt.jl. There's one anonymous function on 
https://github.com/EconForge/NLsolve.jl, but it isn't explicitly used. Still, 
it seems like it should work. If you can't figure it out (and the NLopt README 
might give some good examples/inspiration), I'd recommend opening an issue 
with NLsolve, or trying to fix the problem and contributing the fix to NLsolve.

Best,
--Tim

 I went to the
 julia docs and to wikipedia and I found the following definitions:
    h = (x,z) -> [z^2-1+x, x^2-z^3]
 or
  function (x,z)
      [z^2-1+x, x^2-z^3]
  end
 But neither seems to work. When it is not a problem of the number of
 arguments, it says that `*` has no method matching *(::Array{Float64,1},
 ::Array{Float64,1}), so I used h = (x,z) -> [z.^2-1+x, x.^2-z.^3], but it
 does not give the correct result.
 
 In the loop the function always has the same name, but the function is
 located in a different place:
 
 for instance for the first iteration:
 
 ##f#32489 (generic function with 1 method)
 
 and for the second one:
 
 ##f#32490 (generic function with 1 method)
 
 and so on...
 
 So it seems that new space is allocated at each iteration even though the
 function is created under the same name (myfun).
 On Tuesday, 28 April 2015 at 13:27:21 UTC+1, Yuuki Soho wrote:
  Do you really need to use a different name for your function each time ?
  you could just use the same name it seems. I'm not sure it would solve the
  problem though.



Re: [julia-users] Re: Defining a function in different modules

2015-04-28 Thread David Gold
Re: implementing such a merge function.

My first instinct would be to create a list of methods from each function, 
find the intersection, then return a function with methods determined by 
the methods from each input function, with methods in the intersection 
going to the value of conflicts_favor. My question is, would it be okay 
to create a function f within the scope of merge, add methods to f by 
iterating through a list, and then return f? In particular, if one assigns 
the value of a global constant 'connect' to the returned function of merge, 
is a copy of the returned function created and then bound to 'connect', or 
would 'connect' be bound to something else that would cause trouble down 
the line?

Thank you!
D

On Sunday, April 26, 2015 at 1:41:24 PM UTC-4, Jeff Bezanson wrote:

 I keep getting accused of insisting that every name have only one 
 meaning. Not at all. When you extend a function there are no 
 restrictions. The `connect` methods for GlobalDB and SQLDB could 
 absolutely belong to the same generic function. From there, it's 
 *nice* if they implement compatible interfaces, but nobody will force 
 them to. 

 Scott, I think you're overstating the damage done by a name collision 
 error. You can't expect to change package versions underneath your 
 code and have everything keep working. A clear, fairly early error is 
 one of the *better* outcomes in that case. 

 In your design, there are *also* situations where an update to a 
 package causes an error or warning in client code. I'll grant you that 
 those situations might be rarer, but they're also subtler. The user 
 might see 

 Warning: modules A and B conflict over method foo( #= some huge signature 
 =# ) 

 What are you supposed to do about that? 

 It's worth pointing out that merging functions is currently very 
 possible; we just don't do it automatically. You can do it manually: 

 using GlobalDB 
 using SQLDB 

 connect(c::GlobalDBMgr, args...) = GlobalDB.connect(c, args...) 
 connect(c::SQLDBMgr, args...) = SQLDB.connect(c, args...) 

 This will perform well since such small definitions will usually be 
 inlined. 

 If people want to experiment, I'd encourage somebody to implement a 
 function merger using reflection. You could write 

 const connect = merge(GlobalDB.connect, SQLDB.connect, 
 conflicts_favor=SQLDB.connect) 


 On Sun, Apr 26, 2015 at 12:10 PM, David Gold david@gmail.com wrote: 
  You, Jeff and Stefan seem to be concerned with different kinds of 
  ambiguity. Suppose I import `foo(T1, T2)` from module `A` and `foo(T2, 
  T1)` from module `B`. I take you to claim that if I call `foo(x, y)` then, 
  as long as there is no ambiguity about which method is appropriate, the 
  compiler should just choose which of `A.foo()` and `B.foo()` has the proper 
  signature. I take you to be concerned with potential ambiguity about which 
  method should apply to a given argument. 
  
  On the other hand, I take Jeff and Stefan to be concerned with the ambiguity 
  of which object the name '`foo()`' refers to (though they should chime in if 
  this claim or any of the following are wayward). Suppose we're the compiler 
  and we come across a use of the name '`foo()`' while running through the 
  code. Our first instinct when we see a name like '`foo()`' is to look up the 
  object to which the name refers. But what object does this name refer to? 
  You (Scott) seem to want the compiler to be able to say to itself, before 
  looking up the referent: "If the argument types in this use of '`foo()`' 
  match the signature of `A.foo()`, then this instance of '`foo()`' refers to 
  `A.foo()`. If the argument types match the signature of '`B.foo()`', then 
  this instance of '`foo()`' refers to '`B.foo()`'." But then '`foo()`' isn't 
  really a name, for the referent of a name cannot be a disjunction of 
  objects. Indeed, the notion of a "disjunctive name" is an oxymoron. If you 
  use my name, but your reference could be to either me or some other object 
  depending on context, then you haven't really used my name, which refers to 
  me regardless of context. 
  
  I suspect that many problems await if you try to adopt this disjunctive 
  reference scheme. In particular, you'd need to develop a general way for 
  the compiler to recognize not only that your use of '`foo()`' isn't intended 
  as a name but rather as a disjunctive reference, but also that every other 
  name-like object is actually a name. I strongly suspect that there is no 
  general way to do this. After all, what sorts of contexts could possibly 
  determine, with sufficient generality, whether a given name-like object 
  is actually a name or instead a disjunctive reference? The most obvious 
  approach seems to be to let the compiler try to determine the referent of 
  '`foo()`' and, if there is no referent, then to see whether or not there 
  exist imported functions with the same name. If such 

Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Ján Dolinský
Hi Tim,

Thanks for the tip. Very interesting. In a function definition it works. I 
read the parametric-composite-types manual. I am still puzzled, however.

Consider the example below which works as I expect:

a = rand(10)
b = rand(10,2)

julia> a :: VecOrMat{Float64}
10-element Array{Float64,1}:
...

julia> b :: VecOrMat{Float64}
10x2 Array{Float64,2}:
...


The following example does not work as I would expect:

a = Vector{Float64}[rand(10), rand(10)]
b = Matrix{Float64}[rand(10,2), rand(10,2)]

julia> a :: Vector{VecOrMat{Float64}}
ERROR: type: typeassert: expected 
Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
Array{Array{Float64,1},1}

julia> b :: Vector{VecOrMat{Float64}}
ERROR: type: typeassert: expected 
Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
Array{Array{Float64,2},1}

however, this:
julia> a :: Vector{Vector{Float64}}
2-element Array{Array{Float64,1},1}:
...
and this works:
julia> b :: Vector{Matrix{Float64}}
2-element Array{Array{Float64,2},1}:
...

Thanks,
Jan



On Tuesday, 28 April 2015 at 13:13:36 UTC+2, Tim Holy wrote:


 http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
  

 Use foo{V<:VecOrMat}(X::Vector{V}) 

 --Tim 

 On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote: 
  Hi guys, 
  
  I am trying to write a function which accepts as an input either a 
 vector 
  of vectors or a vector of matrices e.g. 
  
  function foo(X::Vector{VecOrMat{Float64}}) 
  
  When running the function with a vector of matrices I get the following 
  error  'foo' has no method matching foo(::Array{Array{Float64,2},1}) 
  
  Am I missing something here ? 
  
  Thanks, 
  Jan 



Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Ján Dolinský
Thanks for the clarification.

If my function foo has more parameters, do I just write it like this?

function foo{V<:VecOrMat}(X::Vector{V}, param1::Int, param2::String)  
... 
end

Regards,
Jan



On Tuesday, 28 April 2015 at 16:31:37 UTC+2, Tom Breloff wrote:

 The reason is a little subtle, but it's because you have an abstract type 
 inside a parametric type, which confuses Julia.  When you annotate 
 a::MyAbstractType, julia understands what to do with it (i.e. compiles 
 functions for each concrete subtype).  When you annotate 
 a::Vector{MyAbstractType}, it is expecting a concrete type 
 Vector{MyAbstractType}, but you are in fact passing it a different concrete 
 type Vector{MyConcreteType}. Use the signature that Tim suggested to get 
 around the issue.

 On Tuesday, April 28, 2015 at 10:08:57 AM UTC-4, Ján Dolinský wrote:

 Hi Tim,

 Thanks for the tip. Very interesting. In function definition it works. I 
 read the parametric-composite-types manual. I am still puzzled however.

 Consider the example below which works as I expect:

 a = rand(10)
 b = rand(10,2)

 julia> a :: VecOrMat{Float64}
 10-element Array{Float64,1}:
 ...

 julia> b :: VecOrMat{Float64}
 10x2 Array{Float64,2}:
 ...


 The following example does not work as I would expect:

 a = Vector{Float64}[rand(10), rand(10)]
 b = Matrix{Float64}[rand(10,2), rand(10,2)]

 julia> a :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,1},1}

 julia> b :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,2},1}

 however, this:
 julia> a :: Vector{Vector{Float64}}
 2-element Array{Array{Float64,1},1}:
 ...
 and this works:
 julia> b :: Vector{Matrix{Float64}}
 2-element Array{Array{Float64,2},1}:
 ...

 Thanks,
 Jan



 On Tuesday, 28 April 2015 at 13:13:36 UTC+2, Tim Holy wrote:


 http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
  

 Use foo{V<:VecOrMat}(X::Vector{V}) 

 --Tim 

 On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote: 
  Hi guys, 
  
  I am trying to write a function which accepts as an input either a 
 vector 
  of vectors or a vector of matrices e.g. 
  
  function foo(X::Vector{VecOrMat{Float64}}) 
  
  When running the function with a vector of matrices I get the 
 following 
  error  'foo' has no method matching foo(::Array{Array{Float64,2},1}) 
  
  Am I missing something here ? 
  
  Thanks, 
  Jan 



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Tom Breloff
Agreed that gensym may be unnecessary.  Do you really need access to all 
those functions?  If you redefine the same function every time, is that 
faster?

On Tuesday, April 28, 2015 at 9:17:16 AM UTC-4, Yuuki Soho wrote:

 The README.md is just the default page shown on github,

 https://github.com/EconForge/NLsolve.jl/blob/master/README.md

 but there's no example of anonymous function use there. I think you need 
 to do something of the sort:

 (x, fvec) -> begin
     fvec[1] = (x[1]+3)*(x[2]^3-7)+18
     fvec[2] = sin(x[2]*exp(x[1])-1)
 end

 Otherwise it's @gensym that is generating a new function name at each 
 iteration; I'm not sure you need it.
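Putting this together with NLsolve's entry point (assuming the `nlsolve(f!, initial_x)` API from the README, where `f!(x, fvec)` fills the residual vector in place), the anonymous function can be passed directly, so no `@gensym` or fresh function name per iteration is needed:

```julia
using NLsolve

# Anonymous in-place residual function passed straight to the solver;
# nothing here needs a generated name, even inside a loop.
result = nlsolve((x, fvec) -> begin
    fvec[1] = (x[1]+3)*(x[2]^3-7) + 18
    fvec[2] = sin(x[2]*exp(x[1]) - 1)
end, [0.1, 1.2])

result.zero   # the approximate root found by the solver
```

Because the closure is just a value, reusing it (or rebuilding it each iteration) should not leave behind a growing pile of named generic functions the way repeated `@gensym` + `@eval` definitions do.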



Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Tom Breloff
The reason is a little subtle, but it's because you have an abstract type 
inside a parametric type, which confuses Julia.  When you annotate 
a::MyAbstractType, julia understands what to do with it (i.e. compiles 
functions for each concrete subtype).  When you annotate 
a::Vector{MyAbstractType}, it is expecting a concrete type 
Vector{MyAbstractType}, but you are in fact passing it a different concrete 
type Vector{MyConcreteType}. Use the signature that Tim suggested to get 
around the issue.
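The invariance described above, and the suggested workaround, can be seen in a small sketch (hypothetical function names; Julia 0.3-era syntax):

```julia
# Vector{Matrix{Float64}} and Vector{VecOrMat{Float64}} are distinct
# concrete types (type parameters are invariant), so this signature never
# matches a vector of matrices:
foo_concrete(X::Vector{VecOrMat{Float64}}) = length(X)

# The parametric signature matches any element type V that is itself a
# Vector or Matrix:
foo{V<:VecOrMat}(X::Vector{V}) = length(X)

b = Matrix{Float64}[rand(10,2), rand(10,2)]
foo(b)            # works: dispatches with V == Matrix{Float64}
# foo_concrete(b) # would raise a no-method error
```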

On Tuesday, April 28, 2015 at 10:08:57 AM UTC-4, Ján Dolinský wrote:

 Hi Tim,

 Thanks for the tip. Very interesting. In function definition it works. I 
 read the parametric-composite-types manual. I am still puzzled however.

 Consider the example below which works as I expect:

 a = rand(10)
 b = rand(10,2)

 julia> a :: VecOrMat{Float64}
 10-element Array{Float64,1}:
 ...

 julia> b :: VecOrMat{Float64}
 10x2 Array{Float64,2}:
 ...


 The following example does not work as I would expect:

 a = Vector{Float64}[rand(10), rand(10)]
 b = Matrix{Float64}[rand(10,2), rand(10,2)]

 julia> a :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,1},1}

 julia> b :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,2},1}

 however, this:
 julia> a :: Vector{Vector{Float64}}
 2-element Array{Array{Float64,1},1}:
 ...
 and this works:
 julia> b :: Vector{Matrix{Float64}}
 2-element Array{Array{Float64,2},1}:
 ...

 Thanks,
 Jan



 On Tuesday, 28 April 2015 at 13:13:36 UTC+2, Tim Holy wrote:


 http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
  

 Use foo{V<:VecOrMat}(X::Vector{V}) 

 --Tim 

 On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote: 
  Hi guys, 
  
  I am trying to write a function which accepts as an input either a 
 vector 
  of vectors or a vector of matrices e.g. 
  
  function foo(X::Vector{VecOrMat{Float64}}) 
  
  When running the function with a vector of matrices I get the following 
  error  'foo' has no method matching foo(::Array{Array{Float64,2},1}) 
  
  Am I missing something here ? 
  
  Thanks, 
  Jan 



[julia-users] configuring style of PyPlot

2015-04-28 Thread Christian Peel
Is it possible to use matplotlib style commands in PyPlot?
From http://matplotlib.org/users/style_sheets.html I get the impression
that I can quickly switch to a 'ggplot' style interface. Translating the
commands on that page to Julia, I thought I could do something like
  PyPlot.style('ggplot')
but there is no 'style' function visible in PyPlot. I also tried import
PyPlot; const plt = PyPlot. I guess this is a trivial thing, but I'm not
certain how to do it.

I have tried Gadfly out a bit, but have not yet found a way to have the
figures show up in a separate window in my current workspace (I use
multiple Spaces on OS X) rather than in a browser window. I like the fun
files Gadfly can write; I'd just like a window such as PyPlot's, if that
is possible.

thanks
-- 
chris.p...@ieee.org
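One possible route (an assumption on my part: that PyPlot exposes the wrapped matplotlib module via PyCall, and that the installed matplotlib is 1.4+ so the style API exists) is to call matplotlib's style machinery directly:

```julia
using PyPlot

# PyPlot wraps matplotlib through PyCall; the wrapped Python module is
# reachable as PyPlot.matplotlib, and Python attributes are read with
# [:name] indexing (PyCall's 2015-era idiom).
PyPlot.matplotlib[:style][:use]("ggplot")

plot(rand(10))   # subsequent figures pick up the ggplot style
```

This is a sketch, not a tested recipe; the exact attribute path may differ across PyPlot/matplotlib versions.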


[julia-users] Re: Something wrong with Optim?

2015-04-28 Thread Pooya
Thanks, but I think "if iter > 2" (line 21) makes sure that x_previous is 
defined in the previous iteration. Just to be clear, the condition to check 
here was "g_norm > g_norm_old", but I changed it to get there as early as 
the second iteration.  

On Tuesday, April 28, 2015 at 9:13:49 PM UTC-4, Avik Sengupta wrote:

 I'm seeing the error in line 22 of your gist, where you are trying to print 
 the current value of x_previous. However, x_previous is first defined in 
 line 38 of your gist, so the error is correct and doesn't have anything 
 to do with Optim, as far as I can see. 

 On Wednesday, 29 April 2015 01:39:02 UTC+1, Pooya wrote:

 Hi all,

 I have a problem that has made me scratch my head for many hours now! It 
 might be something obvious that I am missing. I have a Newton-Raphson code 
 to solve a system of nonlinear equations. The error that I get here does 
 not have anything to do with the algorithm, but just to be clear, I need to 
 find the best possible solution if the equations are not solvable, so I am 
 trying to stop simulation when the direction found by Newton-Raphson is not 
 correct! In order to do that I put an if-loop in the beginning of the main 
 loop to take x from the previous iteration (x_previous), but I get 
 x_previous not defined! I am using the Optim package to do a line search 
 after the direction has been found by Newton-Raphson. If Optim is not used, 
 things work perfectly (I tried by commenting out those lines of code). 
 Otherwise I get the error I mentioned. My code is here: 
 https://gist.github.com/prezaei85/372bde76012472865a94, which solves a 
 simple one-variable quadratic equation. Any thoughts are very much 
 appreciated.

 Thanks,
 Pooya



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Iain Dunning


 Sorry for being a pain, but shouldn't LinAlg be LinearAlgebra? What's the 
 point of issuing a naming convention if it is not even respected by the main 
 developers? 


I'm glad you are apologizing, because I find the way you are expressing 
yourself borderline insulting to the hard work of others (whose work you 
stand to benefit from, as it seems you are just beginning to write Julia 
code, going by your ODE thread).
It's not that the style guide isn't respected by the main developers; it's 
that some things happened organically, we're all human, we all have biases, 
the style guide wasn't formalized from the start, and perfect is the enemy 
of good.

Many people in this thread could do with taking a more charitable position. 
I see some constructive suggestions that seem pretty darn sensible. Even 
better would be pull requests: anything other than simply tearing down what 
exists. There is no shortage of people ready to offer opinions, but there is 
always a lack of people willing to actually do something about it, whether 
it's improving documentation like the style guide or implementing changes.

On Tuesday, April 28, 2015 at 4:44:40 PM UTC-4, François Fayard wrote:

 Sorry for being a pain, but shouldn't LinAlg be LinearAlgebra? What's the 
 point of issuing a naming convention if it is not even respected by the main 
 developers? 

 Besides, I really find that Julia underuses multiple dispatch. It's a big 
 selling point of the language and it's not even used that much in the 
 standard library! Mathematica has a kind of multiple dispatch and it's what 
 makes the language so consistent. If people mimic Matlab, multiple dispatch 
 will be underused.



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread John Myles White
I think it's time for this thread to stop. People are already upset and 
things could easily get much worse.

Let's all go back to our core work: writing packages, building 
infrastructure and improving Base Julia's functionality. We can discuss 
naming conventions when we've got the functionality in place that Julia is 
sorely lacking right now.

 -- John

On Tuesday, April 28, 2015 at 4:47:51 PM UTC-7, François Fayard wrote:

 Iain, I am really sorry if I hurt people. I really respect what has been 
 done with Julia. I kind of like it when people push me into a corner, because 
 it helps me build better tools. That's why I might act this way, and I am 
 sorry if it hurts people. 

 I've expressed my ideas, which I would like to summarize: 
 - I think consistency in naming is really important for big languages like 
 Julia (as opposed to languages such as C) 
 - I wanted to follow a style guide, and the one I've found is not 
 respected at all. It's a fact. When I find LinAlg, sprandn and so many 
 other names whereas the style guide clearly says that one should avoid 
 abbreviations, I just don't get it. If a style guide is not enforced, it is 
 useless, because it does not pass the reality test. 



[julia-users] Is anyone, anywhere using Julia on a supercomputer/cluster

2015-04-28 Thread Lyndon White
Hi,

I have a big numerical problem that Julia is nice for,
but I really want to farm it out over a few hundred cores.

I know my local research supercomputing provider (iVec, since I am in 
Western Australia)
prefers it if you are running programs in C or Fortran.

But I know they have run things in Python and Matlab.
I know they loosely appreciate the trade-off between development time and 
CPU time, 
but I think their main hesitation is the knowledge that the CPU cycles 
Python wastes could be going to another project,
and that other project could be curing cancer etc.

Julia, on the other hand, is comparable to C or Fortran, so that objection is 
out. It is, on the other hand, immature and not exactly well known.
(I would not be surprised if I were the only user in my university.)

It would help any argument I might have,
or explanation I need to render, if I could say: "They are using it on the 
supercomputers in X."

Have you used it, or do you know of anyone who has used it, on supercomputers / 
medium-large clusters?

Did it go well?
What are the pitfalls?


[julia-users] Something wrong with Optim?

2015-04-28 Thread Pooya
Hi all,

I have a problem that has made me scratch my head for many hours now! It 
might be something obvious that I am missing. I have a Newton-Raphson code 
to solve a system of nonlinear equations. The error that I get here does 
not have anything to do with the algorithm, but just to be clear, I need to 
find the best possible solution if the equations are not solvable, so I am 
trying to stop simulation when the direction found by Newton-Raphson is not 
correct! In order to do that I put an if-loop in the beginning of the main 
loop to take x from the previous iteration (x_previous), but I get 
x_previous not defined! I am using the Optim package to do a line search 
after the direction has been found by Newton-Raphson. If Optim is not used, 
things work perfectly (I tried by commenting out those lines of code). 
Otherwise I get the error I mentioned. My code is here: 
https://gist.github.com/prezaei85/372bde76012472865a94, which solves a 
simple one-variable quadratic equation. Any thoughts are very much 
appreciated.

Thanks,
Pooya
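A common cause of this symptom, sketched hypothetically since the gist is not reproduced here, is reading a variable before its first assignment; binding `x_previous` before the loop removes that failure mode regardless of which branch runs first:

```julia
# Hypothetical skeleton: if x_previous were only assigned inside the loop
# body, the first read in the `if` branch could raise "x_previous not
# defined". Binding it before the loop makes the read always safe.
function iterate_demo()
    x = [2.0]              # illustrative initial guess
    x_previous = copy(x)   # bind BEFORE the loop, not only inside it
    for iter = 1:50
        if iter > 2
            x = x_previous          # safe: always bound by now
        end
        x_previous = copy(x)        # record the current iterate
        # ... Newton-Raphson step and Optim line search would go here ...
    end
    return x
end
```

Whether this matches the actual interaction with Optim in the gist is a guess; the sketch only illustrates the definition-before-use pattern.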


Re: [julia-users] Is anyone, anywhere using Julia on a supercomputer/cluster

2015-04-28 Thread Andreas Noack
As I'm writing this, I'm running Julia on a pretty new 90-node cluster. I
don't know if that counts as a medium-size cluster, but recently it was
reported on the mailing list that Julia was running on

http://www.top500.org/system/178451

which I think counts as a supercomputer.

2015-04-28 19:58 GMT-04:00 Lyndon White oxina...@ucc.asn.au:

 Hi,

 I have a big numerical problem that julia is nice for.
 But I really want to farm it out over a few hundren cores.

 I know my local research supercomputing provider (iVec since I am in
 Western Australia),
 prefers it if you are running programs in C or Fortran.

 But I know they have run things in Python and Matlab.
 I know they loosely appreciate the trade-off between development time and
 CPU time.
 But I think their main hesitation is the knowledge that the CPU cycles
 python wastes could be going to another project,
 and that other project could be curing cancer etc.

 Julia on the other hand is comparable to C or Fortran, so that objection
 is out.
 It is on the other hand immature and not exactly well known.
 (I would not be surprised if I was the only user in my university.)

 It would help any argument I might have,
 or explanation I need to render if I could say: "They are using it on the
 super-computers in X."

 Have you, or do you know of anyone who used it on supercomputers /
 medium-large clusters?

 Did it go well?
 What are the pitfalls?



[julia-users] Re: Something wrong with Optim?

2015-04-28 Thread Pooya
If you comment out lines 42-49, you will see that it works fine!

On Tuesday, April 28, 2015 at 9:20:49 PM UTC-4, Pooya wrote:

 Thanks, but I think if iter > 2 (line 21) makes sure that x_previous is 
 defined in the previous iteration. Just to be clear, the condition to check 
 here was g_norm > g_norm_old, but I changed it to get there as early as 
 the second iteration.  

 On Tuesday, April 28, 2015 at 9:13:49 PM UTC-4, Avik Sengupta wrote:

 I'm seeing the error in line 22 of your gist where you are trying to 
 print the current value of x_previous. However, x_previous is first 
 defined in line 38 of your gist, and so the error is correct and doesn't 
 have anything to do with Optim, as far as I can see. 

 On Wednesday, 29 April 2015 01:39:02 UTC+1, Pooya wrote:

 Hi all,

 I have a problem that has made me scratch my head for many hours now! It 
 might be something obvious that I am missing. I have a Newton-Raphson code 
 to solve a system of nonlinear equations. The error that I get here does 
 not have anything to do with the algorithm, but just to be clear, I need to 
 find the best possible solution if the equations are not solvable, so I am 
 trying to stop simulation when the direction found by Newton-Raphson is not 
 correct! In order to do that I put an if-loop in the beginning of the main 
 loop to take x from the previous iteration (x_previous), but I get 
 x_previous not defined! I am using the Optim package to do a line search 
 after the direction has been found by Newton-Raphson. If Optim is not used, 
 things work perfectly (I tried by commenting out those lines of code). 
 Otherwise I get the error I mentioned. My code is here: 
 https://gist.github.com/prezaei85/372bde76012472865a94, which solves a 
 simple one-variable quadratic equation. Any thoughts are very much 
 appreciated.

 Thanks,
 Pooya



[julia-users] Re: Something wrong with Optim?

2015-04-28 Thread Avik Sengupta
Yes, sorry I jumped the gun. Thanks for clarifying. 

But it still does not have anything to do with Optim :)

The problem is due to defining an inline function (line 43) that creates a 
closure over the x_previous variable.  To test this, just comment out that 
line (and adjust the Optim.optimize call), and the problem goes away. 

A simpler version of the code that fails is as follows: 

julia> function f()
           for i=1:10
               if i>2; println(z); end
               z=2
               g() = 2z
           end
       end
f (generic function with 1 method)

julia> f()
ERROR: z not defined
 in f at none:3

A fix to get it to work is to declare the variable at the start of your 
function. Similarly, adding a "local x_previous" at the top of your 
function makes it work correctly. Remember, variables in Julia are lexically 
scoped. 

julia> function f()
           local z
           for i=1:10
               if i>2; println(z); end
               z=2
               g()=2z
           end
       end
f (generic function with 1 method)

julia> f()
2
2
2
2
2
2
2
2


On Wednesday, 29 April 2015 02:23:28 UTC+1, Pooya wrote:

 If you comment out lines 42-49, you will see that it works fine!

 On Tuesday, April 28, 2015 at 9:20:49 PM UTC-4, Pooya wrote:

 Thanks, but I think if iter > 2 (line 21) makes sure that x_previous 
 is defined in the previous iteration. Just to be clear, the condition to 
 check here was g_norm > g_norm_old, but I changed it to get there as 
 early as the second iteration.  

 On Tuesday, April 28, 2015 at 9:13:49 PM UTC-4, Avik Sengupta wrote:

 I'm seeing the error in line 22 of your gist where you are trying to 
 print the current value of x_previous. However, x_previous is first 
 defined in line 38 of your gist, and so the error is correct and doesn't 
 have anything to do with Optim, as far as I can see. 

 On Wednesday, 29 April 2015 01:39:02 UTC+1, Pooya wrote:

 Hi all,

 I have a problem that has made me scratch my head for many hours now! 
 It might be something obvious that I am missing. I have a Newton-Raphson 
 code to solve a system of nonlinear equations. The error that I get here 
 does not have anything to do with the algorithm, but just to be clear, I 
 need to find the best possible solution if the equations are not solvable, 
 so I am trying to stop simulation when the direction found by 
 Newton-Raphson is not correct! In order to do that I put an if-loop in the 
 beginning of the main loop to take x from the previous iteration 
 (x_previous), but I get x_previous not defined! I am using the Optim 
 package to do a line search after the direction has been found by 
 Newton-Raphson. If Optim is not used, things work perfectly (I tried by 
 commenting out those lines of code). Otherwise I get the error I 
 mentioned. 
 My code is here: https://gist.github.com/prezaei85/372bde76012472865a94, 
 which solves a simple one-variable quadratic equation. Any thoughts are 
 very much appreciated.

 Thanks,
 Pooya



[julia-users] Re: configuring style of PyPlot

2015-04-28 Thread Steven G. Johnson
Yes, in general you can do anything from PyPlot that you can do from 
Matplotlib, because PyPlot is just a thin wrapper around Matplotlib using 
PyCall, and PyCall lets you call arbitrary Python code.

The pyplot.style module is not currently exported by PyPlot, but you can 
access it via plt.style:

using PyPlot
plt.style[:available]

will show available styles, and you can use the "ggplot" style (assuming it 
is available) with:

plt.style[:use]("ggplot")

(Note that foo.bar in Python becomes foo[:bar] in PyCall.)

Then subsequent plot commands will use the ggplot style.


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread François Fayard
Ian. I am really sorry if I hurt people. I really respect what has been done 
with Julia. I kind of like when people push me in the corner because it helps 
me build better tools. That's why I might act this way, and I am sorry if it 
hurts people.

I've expressed my ideas, which I would like to summarize:
- I think consistency in naming is really important for big languages like 
Julia (as opposed to languages such as C)
- I wanted to follow a style guide, and the one I've found is not respected at 
all. It's a fact. When I find LinAlg, sprandn and so many other names whereas 
the style guide clearly says that one should avoid abbreviations, I just don't 
get it. If a style guide is not enforced, it is useless because it does not 
pass the reality test.


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread François Fayard
I've also tried to help by proposing solutions to those problems, such as using 
multiple dispatch. But I understand the fact that my tone was not appropriate.

I was also puzzled to hear from Stefan that there is nothing wrong with 
sprandn, whereas the coding guideline says otherwise. It just feels like when 
there is a stop sign and nobody cares about even slowing down. Then, one of 
the main designers of the rules for driving looks at you and says: "So what?". 
It feels really weird, which might explain but does not excuse my tone.

[julia-users] [ANN] DecFP.jl - decimal floating-point math

2015-04-28 Thread Steven G. Johnson
The DecFP package

  https://github.com/stevengj/DecFP.jl

provides 32-bit, 64-bit, and 128-bit binary-encoded decimal floating-point 
types following the IEEE 754-2008 standard, implemented as a wrapper around the 
(BSD-licensed) Intel Decimal Floating-Point Math Library 
https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library.
  
Decimal floating-point types are useful in situations where you need to 
exactly represent decimal values, typically human inputs.

As software floating point, this is about 100x slower than hardware binary 
floating-point math.  On the other hand, it is significantly (10-100x) 
faster than arbitrary-precision decimal arithmetic, and is a 
memory-efficient bitstype.

The basic arithmetic functions, conversions from other numeric types, and 
numerous special functions are supported.
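
For illustration, here is a minimal hedged sketch of the use case this addresses (assuming the Dec64 type exported by DecFP; exact constructor syntax may differ by package version):

```julia
using DecFP  # Pkg.add("DecFP") first

# 0.1 and 0.2 are exactly representable in decimal floating point,
# so 0.1 + 0.2 == 0.3 holds exactly, unlike binary Float64.
x = Dec64(1)/Dec64(10) + Dec64(2)/Dec64(10)
x == Dec64(3)/Dec64(10)  # true

# Compare with binary floating point:
0.1 + 0.2 == 0.3         # false
```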


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Jiahao Chen
With all due respect, talk is cheap. If anyone really wants to help, submit
a pull request with your proposed changes.

On Tue, Apr 28, 2015, 20:03 François Fayard francois.fay...@gmail.com
wrote:

 I've also tried to help by proposing solutions to those problems, such as
 using multiple dispatch. But I understand the fact that my tone was not
 appropriate.

 I was also puzzled to hear from Stefan that there is nothing wrong with
 sprandn, whereas the coding guideline says otherwise. It just feels like
 when there is a stop sign and nobody cares about even slowing down. Then,
 one of the main designers of the rules for driving looks at you and says:
 "So what?". It feels really weird, which might explain but does not excuse
 my tone.


[julia-users] Re: Something wrong with Optim?

2015-04-28 Thread Avik Sengupta
I'm seeing the error in line 22 of your gist where you are trying to print 
the current value of x_previous. However, x_previous is first defined in 
line 38 of your gist, and so the error is correct and doesn't have anything 
to do with Optim, as far as I can see. 

On Wednesday, 29 April 2015 01:39:02 UTC+1, Pooya wrote:

 Hi all,

 I have a problem that has made me scratch my head for many hours now! It 
 might be something obvious that I am missing. I have a Newton-Raphson code 
 to solve a system of nonlinear equations. The error that I get here does 
 not have anything to do with the algorithm, but just to be clear, I need to 
 find the best possible solution if the equations are not solvable, so I am 
 trying to stop simulation when the direction found by Newton-Raphson is not 
 correct! In order to do that I put an if-loop in the beginning of the main 
 loop to take x from the previous iteration (x_previous), but I get 
 x_previous not defined! I am using the Optim package to do a line search 
 after the direction has been found by Newton-Raphson. If Optim is not used, 
 things work perfectly (I tried by commenting out those lines of code). 
 Otherwise I get the error I mentioned. My code is here: 
 https://gist.github.com/prezaei85/372bde76012472865a94, which solves a 
 simple one-variable quadratic equation. Any thoughts are very much 
 appreciated.

 Thanks,
 Pooya



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread 'Antoine Messager' via julia-users
Would it be possible to use a pointer to allocate space for the creation of 
my function every time at the same location?

Le mardi 28 avril 2015 13:45:11 UTC+1, Antoine Messager a écrit :

 The previous picture was only an example; I should by the end solve 
 nonlinear systems of dimension 500. I expect NLsolve to work with one dimension 
 too. 

 I have not figured out how to use anonymous functions within NLsolve. I 
 don't understand which README Tim Holy is talking about. I went on 
 the julia docs and on wikipedia and I have found the following definition:
 *   h = (x,z)->[z^2-1+x,x^2-z^3]*
 or 
  *function (x,z)*
  *[z^2-1+x,x^2-z^3]*
 *  end*
 But it does not seem to work. When it is not a problem of number of 
 arguments, it is said that *`*` has no method matching 
 *(::Array{Float64,1}, ::Array{Float64,1}) *so I used *h = 
 (x,z)->[z.^2-1+x,x.^2-z.^3] *but it does not give the correct result. 

 In the loop the function always has the same name, but the function is 
 located in a different place:

 for instance for the first iteration:

 *##f#32489 (generic function with 1 method) *

 and for the second one:

 *##f#32490 (generic function with 1 method)*

 and so on...

 So it seems that new space is allocated at each iteration even if the 
 function is created under the same name (myfun).


 Le mardi 28 avril 2015 13:27:21 UTC+1, Yuuki Soho a écrit :

 Do you really need to use a different name for your function each time ? 
 you could just use the same name it seems. I'm not sure it would solve the 
 problem though.



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Stefan Karpinski
You seem to be passing nlsolve a one-argument anonymous function whereas you're
generating two-argument functions above.

On Tuesday, April 28, 2015, 'Antoine Messager' via julia-users 
julia-users@googlegroups.com wrote:

 Is there any other possibility? Because, I need to use NLsolve, as it is
 the faster non linear solver I have found for my problem.

 Le mardi 28 avril 2015 10:45:59 UTC+1, Antoine Messager a écrit :

 I would love too but it seems that NLsolve does not accept anonymous
 function.


 https://lh3.googleusercontent.com/-spe5mTRqJDQ/VT9WpGNgDEI/ABg/_caaIfVwkec/s1600/Capture%2Bd%E2%80%99e%CC%81cran%2B2015-04-28%2Ba%CC%80%2B10.44.04.png


 Le lundi 27 avril 2015 18:38:30 UTC+1, Tim Holy a écrit :

 gc() doesn't clear memory from compiled functions---the overhead of
 compilation is so high that any function, once compiled, hangs around
 forever.

 The solution is to avoid creating so many compiled functions. Can you
 use
 anonymous functions?

 --Tim

 On Monday, April 27, 2015 10:22:20 AM 'Antoine Messager' via julia-users
 wrote:
  And then (approximatively):
 
  *  myfunction = eval(code_f)*
 
  Le lundi 27 avril 2015 18:21:09 UTC+1, Antoine Messager a écrit :
   I use meta programming to create my function. This is a simpler
 example.
   The parameters are generated randomly in the actual function.
  
    *  lhs = {"ode1" => :(fy[1]), "ode2" => :(fy[2])};*
    *  rhs = {"ode1" => :(y[1]*y[1]-2.0), "ode2" => :(y[2]-y[1]*y[1])};*
  
   *  function code_f(lhs::Dict, rhs::Dict)*
   *  lines = {}*
   *  for key in keys(lhs)*
   *  push!(lines, :( $(lhs[key]) = $(rhs[key])) )*
   *  end*
   *  @gensym f*
   *  quote*
   *  function $f(y, fy)*
   *  $(lines...)*
   *  end*
   *  end*
   *  end*
  
   Le lundi 27 avril 2015 18:12:24 UTC+1, Tom Breloff a écrit :
   Can you give us the definition of make_function as well?  This is
 being
   run in global scope?
  
   On Monday, April 27, 2015 at 12:37:48 PM UTC-4, Antoine Messager
 wrote:
   When I input the following code, where myfunction is only a system
 of 2
   equations with 2 unknowns, the code starts to be really slow after
   10,000
   iterations. NLsolve is a non linear solver (
   https://github.com/EconForge/NLsolve.jl).
  
   *  size=2*
   *  for k in 1:10*
   *  myfun=make_function(size);*
    *  try*
    *  res=nlsolve(myfun,rand(size))*
    *  end*
   *  end*
   *  end*
  
   Thank you for your help,
   Antoine
  
   Le lundi 27 avril 2015 16:30:19 UTC+1, Mauro a écrit :
   It is a bit hard to tell what is going wrong with essentially no
   information.  Does the memory usage of Julia go up more than you
 would
   expect from storing the results?  Any difference between 0.3 and
 0.4?
    Anyway, you should try and make a small self-contained runnable
 example
   and post it otherwise it will be hard to divine an answer.
  
   On Mon, 2015-04-27 at 16:49, 'Antoine Messager' via julia-users 
  
   julia...@googlegroups.com wrote:
Dear all,
   
I need to create a lot of systems of equation, find some
  
   characteristics of
  
each system and store the system if of interest. Each system is
  
   created
  
under the same name. It works fine for the first 1000 systems
 but
  
   after the
  
program starts to be too slow. I have tried to use the garbage
  
   collector
  
each time I create a new system but it did not speed up the
 code. I
  
   don't
  
know what to do, I don't understand where it could come from.
   
Cheers,
Antoine




Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Yuuki Soho
Do you really need to use a different name for your function each time ? 
you could just use the same name it seems. I'm not sure it would solve the 
problem though.


Re: [julia-users] the state of GUI toolkits?

2015-04-28 Thread Tim Holy
Also, on 0.3 Gtk loads just fine for me. Not sure why it's not working on 
PkgEvaluator.

--Tim

On Tuesday, April 28, 2015 12:46:52 AM Andreas Lobinger wrote:
 Hello colleagues,
 
 what is status of availability and usecases for GUI toolkits.
 
 I see Tk and Gtk on the pkg.julialang.org. Gtk has the tag 'doesn't load'
 from testing, Tk seems OK.
 In a recent discussion here, Tim Holy mentioned testing Qwt himself, and Qt
 in general seems to be a testcase for Cxx.
 
 Do i miss something here?
 
 Wishing a happy day,
  Andreas



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Tim Holy
On Tuesday, April 28, 2015 02:45:59 AM 'Antoine Messager' via julia-users 
wrote:
 I would love too but it seems that NLsolve does not accept anonymous
 function.

I'd be really surprised if this were true. Search the README for "->".

--Tim

 
 https://lh3.googleusercontent.com/-spe5mTRqJDQ/VT9WpGNgDEI/ABg/_caa
 IfVwkec/s1600/Capture%2Bd%E2%80%99e%CC%81cran%2B2015-04-28%2Ba%CC%80%2B10.44
 .04.png
 Le lundi 27 avril 2015 18:38:30 UTC+1, Tim Holy a écrit :
  gc() doesn't clear memory from compiled functions---the overhead of
  compilation is so high that any function, once compiled, hangs around
  forever.
  
  The solution is to avoid creating so many compiled functions. Can you use
  anonymous functions?
  
  --Tim
  
  On Monday, April 27, 2015 10:22:20 AM 'Antoine Messager' via julia-users
  
  wrote:
   And then (approximatively):
   
   *  myfunction = eval(code_f)*
   
   Le lundi 27 avril 2015 18:21:09 UTC+1, Antoine Messager a écrit :
I use meta programming to create my function. This is a simpler
  
  example.
  
The parameters are generated randomly in the actual function.

 *  lhs = {"ode1" => :(fy[1]), "ode2" => :(fy[2])};*
 *  rhs = {"ode1" => :(y[1]*y[1]-2.0), "ode2" => :(y[2]-y[1]*y[1])};*

*  function code_f(lhs::Dict, rhs::Dict)*
*  lines = {}*
*  for key in keys(lhs)*
*  push!(lines, :( $(lhs[key]) = $(rhs[key])) )*
*  end*
*  @gensym f*
*  quote*
*  function $f(y, fy)*
*  $(lines...)*
*  end*
*  end*
*  end*

Le lundi 27 avril 2015 18:12:24 UTC+1, Tom Breloff a écrit :
Can you give us the definition of make_function as well?  This is
  
  being
  
run in global scope?

On Monday, April 27, 2015 at 12:37:48 PM UTC-4, Antoine Messager
  
  wrote:
When I input the following code, where myfunction is only a system
  
  of 2
  
equations with 2 unknowns, the code starts to be really slow after
10,000
iterations. NLsolve is a non linear solver (
https://github.com/EconForge/NLsolve.jl).

*  size=2*
*  for k in 1:10*
*  myfun=make_function(size);*
 *  try*
 *  res=nlsolve(myfun,rand(size))*
 *  end*
*  end*
*  end*

Thank you for your help,
Antoine

Le lundi 27 avril 2015 16:30:19 UTC+1, Mauro a écrit :
It is a bit hard to tell what is going wrong with essentially no
information.  Does the memory usage of Julia go up more than you
  
  would
  
expect from storing the results?  Any difference between 0.3 and
  
  0.4?
  
Anyway, you should try and make a small self-contained runnable
  
  example
  
and post it otherwise it will be hard to divine an answer.

On Mon, 2015-04-27 at 16:49, 'Antoine Messager' via julia-users 

julia...@googlegroups.com wrote:
 Dear all,
 
 I need to create a lot of systems of equation, find some

characteristics of

 each system and store the system if of interest. Each system is

created

 under the same name. It works fine for the first 1000 systems but

after the

 program starts to be too slow. I have tried to use the garbage

collector

 each time I create a new system but it did not speed up the code.
  
  I
  
don't

 know what to do, I don't understand where it could come from.
 
 Cheers,
 Antoine



Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread Yuuki Soho
The README.md is just the default page shown on github,

https://github.com/EconForge/NLsolve.jl/blob/master/README.md

but there's no example of anonymous function use there. I think you need to 
do something of the sort:

(x, fvec) -> begin
   fvec[1] = (x[1]+3)*(x[2]^3-7)+18
   fvec[2] = sin(x[2]*exp(x[1])-1)
end

Otherwise it's *@gensym* that is generating a new function name at each 
iteration; I'm not sure you need it.
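
Putting Yuuki's suggestion together, a hedged sketch of a complete call (the two-argument (x, fvec) in-place signature and argument order are assumptions taken from this thread, not verified against the NLsolve docs):

```julia
using NLsolve

# Passing the anonymous function directly avoids eval'ing a fresh
# @gensym'd named function on every loop iteration, which is what
# keeps accumulating compiled code in Antoine's case.
res = nlsolve((x, fvec) -> begin
          fvec[1] = (x[1]+3)*(x[2]^3-7)+18
          fvec[2] = sin(x[2]*exp(x[1])-1)
      end, [0.1, 1.2])
```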


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Tom Breloff
+1 for factorize(MyType, ...), Sparse(MyDist, ...) and other similar 
examples that have been suggested.  It's only a very slight hardship for 
those copying their code directly from matlab, but for everyone else I 
think it's a big win for readability and type safety.  It's also likely 
easier to learn (assuming you don't already know matlab too well), since 
it'll be easier to guess the appropriate function name without reading 
through a long list of jumbled letters in function names.   Each method 
name will be more powerful, so I can see people having to reference docs 
less often.

On Monday, April 27, 2015 at 8:49:43 PM UTC-4, Glen H wrote:

 +1 for using multi-dispatch to separate concepts and to make things more 
 readable:

 cholfact -> factorize(Cholesky, )
 sprandn -> Sparse(Normal, m, n)

 The main benefit isn't readability but for speed and generality.  If you 
 want to use a different distribution (in sprandn) or a different 
 factorization type then that is easily parameterized and can be specialized 
 by the compiler to make the fastest code.  This desire to stick with 
 de facto standards doesn't make sense because it is coming from a 
 different language that doesn't have multidispatch and has poor types.  

 As an example, what would you call the function that factorizes 
 polynomials?  The answer for MATLAB is factor.  I would much rather it be:

 factorize(Polynomial, )

 This is self documenting and all factorization methods have the same name 
 and and can be easily found.  It would be nice to also have:

 ?factorize(Polynomial, )

 To return the help for how to factorize a polynomial and some way to find 
 out all the types that can go in the first argument of factorize().  

 If people are really set on not learning something new then there could be 
 a MATLAB compatibility package that does:

 cholfact -> factorize(Cholesky ...)

 But that leads to bad code so I would rather it just be a chapter in the 
 documentation for MATLAB users (or maybe a script to do the conversion).

 Forwards compatibility from MATLAB doesn't exist anyways so why stick to 
 it when it leads to worse code?

 I agree with François's reasons for why people use MATLAB...it isn't 
 because they came up with the best function names for all languages to 
 use...it likely just happened and people got used to it.

 Glen



Re: [julia-users] Re: Defining a function in different modules

2015-04-28 Thread Scott Jones


On Monday, April 27, 2015 at 6:40:50 PM UTC-4, ele...@gmail.com wrote:



 On Sunday, April 26, 2015 at 8:24:15 PM UTC+10, Scott Jones wrote:

 Yes, precisely... and I *do* want Julia to protect the user *in that 
 case*.
 If a module has functions that are potentially ambiguous, then 1) if the 
 module writer intends to extend something, they should do it *explicitly*, 
 exactly as now, and 2) Julia *should* warn when you have using package, 
 not just at run-time, IMO.
 I have *only* been talking about the case where you have functions that 
 the compiler can tell in advance, just by looking locally at your module, 
 by a very simple rule, that they cannot be ambiguous.


 The issue is that, in the example I gave, the compiler can't tell, just by 
 looking at your module, if that case exists.  It has to look at everything 
 else imported and defined in the user's program, and IIUC with macros, 
 staged functions and lots of other ways of defining functions that can 
 become an expensive computation, and may need to be delayed to runtime.

 Cheers
 Lex


To me, that case is not as interesting... if you, the writer of the module, 
want to use some type that is not defined in your module, then the burden 
should be on you to explicitly import from the module you wish to 
extend (Base, or whatever package/module defines the types you are using for 
that function)...

I'm only concerned about having a way that somebody can write a module, and 
guarantee (by always using a specific type from the module) that the names 
it wants to export cannot be ambiguous with other methods with the same 
name.




Re: [julia-users] the state of GUI toolkits?

2015-04-28 Thread Tom Breloff
I would check out PySide.jl.  I'm not sure of the current package status, 
but I have used Qt from both C++ and Python to do fairly intensive gui 
work, and it's a very good framework.  IMO, the only potential downside is 
the license, but you'd have to evaluate that yourself.

On Tuesday, April 28, 2015 at 7:11:17 AM UTC-4, Tim Holy wrote:

 Also, on 0.3 Gtk loads just fine for me. Not sure why it's not working on 
 PkgEvaluator. 

 --Tim 

 On Tuesday, April 28, 2015 12:46:52 AM Andreas Lobinger wrote: 
  Hello colleagues, 
  
  what is status of availability and usecases for GUI toolkits. 
  
  I see Tk and Gtk on the pkg.julialang.org. Gtk has the tag 'doesn't 
 load' 
  from testing, Tk seems OK. 
  In a recent discussion here, Tim Holy mentioned himself tesing Qwt and 
 Qt 
  in general seem to be a testcase for Cxx. 
  
  Do i miss something here? 
  
  Wishing a happy day, 
   Andreas 



[julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Ángel de Vicente
Hi,

Ángel de Vicente writes:
 Now I have two more questions, to see if I can get better performance:


 1) I'm just running the Julia distribution that came with my Ubuntu
 distro. I don't know how this was compiled. Is there a way to see
 which optimization level and which compiler options were used when
 compiling Julia? Would I be able to get better performance out of
 Julia if I do my own compilation from source? (either using a high
 optimization flag or perhaps even using another compiler (I have
 access to the Intel compilers suite here).

regarding this, I downloaded Julia source and I compiled it with the
default makefiles (gfortran and optimization -O3 as far as I can see),
and there was no important time difference.

I tried to compile with Intel compilers by creating the Make.user file
with the following content

,
| USEICC=1
| USEIFC=1
| USE_INTEL_LIBM=1
`

but it failed, with the following error:

,
| Making all in src
| /usr/include/c++/4.8/string(38): catastrophic error: cannot open source
| file "bits/c++config.h"
|   #include <bits/c++config.h>
|  ^
|
| compilation aborted for patchelf.cc (code 4)
`

Any hints on getting it compiled with the Intel compilers?

 2) Is it possible to give optimization flags somehow to the JIT
 compiler? In this case I know that the main_loop function is crucial,
 and it is going to be executed hundreds/thousands of times, so I
 wouldn't mind spending more time the first time it is compiled if it
 can be optimized as much as possible.

I looked around the Julia documentation, but saw nothing, so perhaps
this is not possible at all?

Thanks,


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Andreas Noack
I like the idea of something like factorize(MyType,...), but it is not
without problems for generic programming. Right now cholfact(Matrix) and
cholfact(SparseMatrixCSC) return different types, i.e. LinAlg.Cholesky and
SparseMatrix.CHOLMOD.Factor. The reason is that internally, they are very
different, but from the perspective of solving a positive definite system
of equations they could be considered the same, and therefore they share
the cholfact name.
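
As a purely hypothetical sketch of the proposal being discussed (illustrative names only, not an actual Base API; 0.3-era syntax):

```julia
# Dispatch on a tag type passed as the first argument, instead of
# encoding the factorization kind in the function name.
immutable CholeskyTag end
immutable LUTag end

myfactorize(::Type{CholeskyTag}, A::Matrix) = cholfact(A)
myfactorize(::Type{LUTag}, A::Matrix) = lufact(A)

# As Andreas notes, each method remains free to return a different
# concrete factorization type (e.g. a sparse method could return
# a CHOLMOD factor), since dispatch is on the input types.
myfactorize(CholeskyTag, eye(3))  # calls the Cholesky method
```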

2015-04-28 9:26 GMT-04:00 Tom Breloff t...@breloff.com:

 +1 for factorize(MyType, ...), Sparse(MyDist, ...) and other similar
 examples that have been suggested.  It's only a very slight hardship for
 those copying their code directly from matlab, but for everyone else I
 think it's a big win for readability and type safety.  It's also likely
 easier to learn (assuming you don't already know matlab too well), since
 it'll be easier to guess the appropriate function name without reading
 through a long list of jumbled letters in function names.   Each method
 name will be more powerful, so I can see people having to reference docs
 less often.


 On Monday, April 27, 2015 at 8:49:43 PM UTC-4, Glen H wrote:

 +1 for using multi-dispatch to separate concepts and to make things more
 readable:

 cholfact -> factorize(Cholesky, )
 sprandn -> Sparse(Normal, m, n)

 The main benefit isn't readability but for speed and generality.  If you
 want to use a different distribution (in sprandn) or a different
 factorization type then that is easily parameterized and can be specialized
 by the compiler to make the fastest code.  This desire to stick with
 de facto standards doesn't make sense because it is coming from a
 different language that doesn't have multidispatch and has poor types.

 As an example, what would you call the function that factorizes
 polynomials?  The answer for MATLAB is factor.  I would much rather it be:

 factorize(Polynomial, )

 This is self documenting and all factorization methods have the same name
 and and can be easily found.  It would be nice to also have:

 ?factorize(Polynomial, )

 To return the help for how to factorize a polynomial and some way to find
 out all the types that can go in the first argument of factorize().

 If people are really set on not learning something new then there could
 be a MATLAB compatibility package that does:

 cholfact -> factorize(Cholesky ...)

 But that leads to bad code so I would rather it just be a chapter in the
 documentation for MATLAB users (or maybe a script to do the conversion).

 Forwards compatibility from MATLAB doesn't exist anyways so why stick to
 it when it leads to worse code?

 I agree with François's reasons for why people use MATLAB...it isn't
 because they came up with the best function names for all languages to
 use...it likely just happened and people got used to it.

 Glen




Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Ján Dolinský
Great, thanks!

Dňa utorok, 28. apríla 2015 17:07:27 UTC+2 Tom Breloff napísal(-a):

 Yes that's fine

 On Tuesday, April 28, 2015 at 10:41:15 AM UTC-4, Ján Dolinský wrote:

 Thanks for the clarification.

 If my function foo has more parameters I just go like this ?

 function foo{V<:VecOrMat}(X::Vector{V}, param1::Int, param2::String)  
 ... 
 end

 Regards,
 Jan



 Dňa utorok, 28. apríla 2015 16:31:37 UTC+2 Tom Breloff napísal(-a):

 The reason is a little subtle, but it's because you have an abstract 
 type inside a parametric type, which confuses Julia.  When you annotate 
 a::MyAbstractType, julia understands what to do with it (i.e. compiles 
 functions for each concrete subtype).  When you annotate 
 a::Vector{MyAbstractType}, it is expecting a concrete type 
 Vector{MyAbstractType}, but you are in fact passing it a different concrete 
 type Vector{MyConcreteType}. Use the signature that Tim suggested to get 
 around the issue.
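
A minimal sketch of Tim's suggested signature in action (0.3-era parametric-method syntax; foo's body is just an illustrative placeholder):

```julia
# V binds to the concrete element type of the vector, which must be
# a subtype of VecOrMat{Float64} (i.e. Vector{Float64} or Matrix{Float64}).
# This sidesteps the invariance problem: Vector{Vector{Float64}} is not
# a subtype of Vector{VecOrMat{Float64}}, but it does match Vector{V}.
foo{V<:VecOrMat{Float64}}(X::Vector{V}) = length(X)

foo(Vector{Float64}[rand(10), rand(10)])      # V == Vector{Float64}
foo(Matrix{Float64}[rand(10,2), rand(10,2)])  # V == Matrix{Float64}
```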

 On Tuesday, April 28, 2015 at 10:08:57 AM UTC-4, Ján Dolinský wrote:

 Hi Tim,

 Thanks for the tip. Very interesting. In function definition it works. 
 I read the parametric-composite-types manual. I am still puzzled however.

 Consider the example below which works as I expect:

 a = rand(10)
 b = rand(10,2)

  julia> a :: VecOrMat{Float64}
 10-element Array{Float64,1}:
 ...

  julia> b :: VecOrMat{Float64}
 10x2 Array{Float64,2}:
 ...


 The following example does not work as I would expect:

 a = Vector{Float64}[rand(10), rand(10)]
 b = Matrix{Float64}[rand(10,2), rand(10,2)]

  julia> a :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,1},1}

  julia> b :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,2},1}

 however, this:
  julia> a :: Vector{Vector{Float64}}
 2-element Array{Array{Float64,1},1}:
 ...
 and this works:
  julia> b :: Vector{Matrix{Float64}}
 2-element Array{Array{Float64,2},1}:
 ...

 Thanks,
 Jan



 Dňa utorok, 28. apríla 2015 13:13:36 UTC+2 Tim Holy napísal(-a):


 http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
  

 Use foo{V<:VecOrMat}(X::Vector{V}) 

 --Tim 

 On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote: 
  Hi guys, 
  
  I am trying to write a function which accepts as an input either a 
 vector 
  of vectors or a vector of matrices e.g. 
  
  function foo(X::Vector{VecOrMat{Float64}}) 
  
  When running the function with a vector of matrices I get the 
 following 
  error  'foo' has no method matching 
 foo(::Array{Array{Float64,2},1}) 
  
  Am I missing something here ? 
  
  Thanks, 
  Jan 



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Jiahao Chen
 Distributions is an awesome example of a package that explains what I was
trying to say about using multi-dispatch instead of compound function names
-- a work of art.  I hope to use it in the future.  Have you had an uproar
from the community that the names don't follow the MATLAB de facto
standard?

Kindly note that the point I made about Matlab and sparse matrices was
mainly historical and partly psychological, not one about rational PL
design.

In the particular case of sparse matrices, Matlab had the first mover
advantage because they were actually the first provider of sparse matrix
functionality in a high level language. Furthermore, because most users of
sparse matrices become familiar with them from a teaching environment that
uses said language, they become accustomed to Matlab's spelling.

The example of Distributions makes a different point about rational design,
which I am not opposed to. However, the meaning of the type annotation is
ambiguous: in some cases it refers to the _input_, and in other
cases, the _output_. In some cases you may even need to annotate both the
input and output types, which then strays dangerously close to function
types and I/O monads.

Changing the spelling of writemime vs write(MIME) was discussed in Issue
#7959, but it was difficult to disambiguate write(MIME) from other write
methods IIRC.


[julia-users] Re: Help to optimize a loop through a list

2015-04-28 Thread Ronan Chagas
Thank you very much Duane! It will help me a lot.

Right now, I found something that was really slowing down my algorithm.

The original function checkDominance has the following signature:

function checkDominance(mgeoData::MGEOStructure, 
candidatePoint::ParetoPoint, paretoFrontier::Array{ParetoPoint,1})

in which

type MGEOStructure
# Number of objective functions.
nf::Integer
# Parameter to set the search determinism.
tau::Float64
# Maximum number of generations.
ngenMax::Float64
# Maximum number of independent runs (reinitializations).
runMax::Float64
# Structure that store the configuration for each design variable.
designVars::Array{DesignVariable,1}
# Epsilon to compare objective functions.
mgeoEps::Float64
end

type DesignVariable
# Number of bits.
bits::Uint32
# Minimum allowed value for the design variable.
min::Float64
# Maximum allowed value for the design variable.
max::Float64
# Full scale of the design variable.
fullScale::Uint64
# Index in the string.
index::Uint64
# Name of the variable.
name::String
end

type ParetoPoint
# Design variables.
vars::Array{Float64,1}
# Objective functions.
f::Array{Float64, 1}
end

In this case, it took almost 62s to find the Pareto Frontier for the 
problem mentioned before in which this function is called almost 128,000 
times.
After some tests, I decided to change the function signature to:

function checkDominance(nf::Int, eps::Float64, candidatePoint::ParetoPoint, 
paretoFrontier::Array{ParetoPoint,1})

Now the code takes only 2s to optimize the same problem (it is 30x 
faster). Just for comparison, the optimized (-O3) C++ version takes 0.6s.

Question: Is such a slowdown expected when a structure is passed to a 
function? I can easily provide a working example if it helps julia devs.

Thanks very much,
Ronan

P.S.: I will upload the entire algorithm this week, I promise :)



Em segunda-feira, 27 de abril de 2015 22:19:21 UTC-3, Duane Wilson escreveu:

 Hi Ronan,

 Please ignore my last post. I misunderstood (got a little mixed up when 
 you were talking about bits) what the issue was, but after reading a little 
 more carefully I see where your challenge is. Some of the points in the 
 algorithm I showed you before will still be valid but in general here are a 
 few points.

 From an algorithmic standpoint, you don't actually have to check all 
 points against all points on the frontier to determine if it is on the 
 frontier or not. The key to doing this is to make sure your current 
 frontier is sorted lexicographically (from best to worst) on 
 each objective. After that you will only need to take your current point 
 and find where it falls within the sorted frontier. Only points that are 
 above it can actually dominate the point you're looking to add. So you 
 only need to check the points above it in the current Pareto frontier. You 
 should check the points above it in reverse order as those are the points 
 that are most likely to dominate your current point first. Of course if any one 
 of those points dominates your current point you can stop, as that 
 point is not on the Pareto frontier.

 If you find that no point above the point you're looking at dominates it, 
 then you know that point is a part of the Pareto frontier. What you need to 
 do now is start to look at the points below the point you're adding to 
 see if they are dominated by your current point. You can early exit from 
 this as soon as you find a point that is not dominated by your new point, 
 because all points following it will also not be dominated by it. Then you 
 can remove all points your new point dominates from the Pareto frontier. 
 The result is a new Pareto frontier with only non-dominated points on it. 

 That is a lot to say, but the pseudo steps look a little bit like this:

 1) Find position for the new point where it would be placed if sorted into 
 the current front. (I'd use a binary search for this at first thought)
 2) For each point above it in the frontier, from last to first, check to 
 see if it is dominated. If a point dominates the new point exit.

 3) If the new point is not dominated, check to see which points below it 
 that are dominated by the new point. Remove them from the pareto frontier. 
 If you find a point that is not dominated by the new point exit.

 As an example, I updated the gist with code that does exactly the above 
 and added in your example:

 f = [ x^2; (x-2)^2 ]

 given that -10 < x < 10

  6000 points on the current frontier and determining if 16,000 new points 
 should be added to it or not. 


 That part of the code is done in cell 3.

 http://nbviewer.ipython.org/gist/dwil/5cc31d1fea141740cf96

 Hope this helps you out a bit more.

 On Friday, April 24, 2015 at 6:01:31 PM UTC-3, Duane Wilson wrote:

 Ah I see. Yeah this probably depends a lot on the way you are 
 representing the bits and how you 

Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Yuuki Soho
The code allocates only 432 bytes on my computer once I removed all global 
variables, and it's pretty fast.

Multiplying by the inverse of dx2 ... instead of dividing also makes quite a 
difference, 2-3x.

http://pastebin.com/PSZyLXJX


Re: [julia-users] Re: Code starting to be slow after 30 min of execution

2015-04-28 Thread Stefan Karpinski
NLsolve seems to work fine with anonymous functions:

julia> using NLsolve

julia> f! = function (x, fvec)
   fvec[1] = (x[1]+3)*(x[2]^3-7)+18
   fvec[2] = sin(x[2]*exp(x[1])-1)
   end
(anonymous function)

julia> g! = function (x, fjac)
   fjac[1, 1] = x[2]^3-7
   fjac[1, 2] = 3*x[2]^2*(x[1]+3)
   u = exp(x[1])*cos(x[2]*exp(x[1])-1)
   fjac[2, 1] = x[2]*u
   fjac[2, 2] = u
   end
(anonymous function)

julia> nlsolve(f!, g!, [0.1; 1.2])
Results of Nonlinear Solver Algorithm
 * Algorithm: Trust-region with dogleg and autoscaling
 * Starting Point: [0.1,1.2]
 * Zero: [-3.7818e-16,1.0]
 * Inf-norm of residuals: 0.00
 * Iterations: 4
 * Convergence: true
   * |x - x'| < 0.0e+00: false
   * |f(x)| < 1.0e-08: true
 * Function Calls (f): 5
 * Jacobian Calls (df/dx): 5


That's just the example in the NLsolve README but with anonymous functions.

On Tue, Apr 28, 2015 at 10:04 AM, 'Antoine Messager' via julia-users 
julia-users@googlegroups.com wrote:

 Both ideas you have given are working. Wonderful! I just need to figure
 out which one is the fastest, the @gensym out of the creation of the
 function probably.
 Thank you very much!
 Antoine.


 Le lundi 27 avril 2015 15:49:56 UTC+1, Antoine Messager a écrit :

 Dear all,

 I need to create a lot of systems of equation, find some characteristics
 of each system and store the system if of interest. Each system is created
 under the same name. It works fine for the first 1000 systems but after the
 program starts to be too slow. I have tried to use the garbage collector
 each time I create a new system but it did not speed up the code. I don't
 know what to do, I don't understand where it could come from.

 Cheers,
 Antoine




Re: [julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Tom Breloff
Yes that's fine

On Tuesday, April 28, 2015 at 10:41:15 AM UTC-4, Ján Dolinský wrote:

 Thanks for the clarification.

 If my function foo has more parameters I just go like this ?

  function foo{V<:VecOrMat}(X::Vector{V}, param1::Int, param2::String)  
 ... 
 end

 Regards,
 Jan



 Dňa utorok, 28. apríla 2015 16:31:37 UTC+2 Tom Breloff napísal(-a):

 The reason is a little subtle, but it's because you have an abstract type 
 inside a parametric type, which confuses Julia.  When you annotate 
 a::MyAbstractType, julia understands what to do with it (i.e. compiles 
 functions for each concrete subtype).  When you annotate 
 a::Vector{MyAbstractType}, it is expecting a concrete type 
 Vector{MyAbstractType}, but you are in fact passing it a different concrete 
 type Vector{MyConcreteType}. Use the signature that Tim suggested to get 
 around the issue.

 On Tuesday, April 28, 2015 at 10:08:57 AM UTC-4, Ján Dolinský wrote:

 Hi Tim,

 Thanks for the tip. Very interesting. In function definition it works. I 
 read the parametric-composite-types manual. I am still puzzled however.

 Consider the example below which works as I expect:

 a = rand(10)
 b = rand(10,2)

  julia> a :: VecOrMat{Float64}
 10-element Array{Float64,1}:
 ...

  julia> b :: VecOrMat{Float64}
 10x2 Array{Float64,2}:
 ...


 The following example does not work as I would expect:

 a = Vector{Float64}[rand(10), rand(10)]
 b = Matrix{Float64}[rand(10,2), rand(10,2)]

  julia> a :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,1},1}

  julia> b :: Vector{VecOrMat{Float64}}
 ERROR: type: typeassert: expected 
 Array{Union(Array{Float64,1},Array{Float64,2}),1}, got 
 Array{Array{Float64,2},1}

 however, this:
  julia> a :: Vector{Vector{Float64}}
 2-element Array{Array{Float64,1},1}:
 ...
 and this works:
  julia> b :: Vector{Matrix{Float64}}
 2-element Array{Array{Float64,2},1}:
 ...

 Thanks,
 Jan



 Dňa utorok, 28. apríla 2015 13:13:36 UTC+2 Tim Holy napísal(-a):


 http://docs.julialang.org/en/release-0.3/manual/types/#parametric-composite-types
  

  Use foo{V<:VecOrMat}(X::Vector{V}) 

 --Tim 

 On Tuesday, April 28, 2015 02:40:41 AM Ján Dolinský wrote: 
  Hi guys, 
  
  I am trying to write a function which accepts as an input either a 
 vector 
  of vectors or a vector of matrices e.g. 
  
  function foo(X::Vector{VecOrMat{Float64}}) 
  
  When running the function with a vector of matrices I get the 
 following 
  error  'foo' has no method matching 
 foo(::Array{Array{Float64,2},1}) 
  
  Am I missing something here ? 
  
  Thanks, 
  Jan 



[julia-users] Re: abs inside vector norm

2015-04-28 Thread Steven G. Johnson
See https://github.com/JuliaLang/julia/pull/11043


[julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Michael Prentiss
I implemented a program in Fortran and Julia for time comparison when 
learning the language.  
This was very helpful for finding problems in how I was learning Julia.  Maybe 
I did not read carefully enough,
but I would compile the Fortran with the Intel compilers (not MKL) instead 
of gcc as another means of 
comparing speed. The Intel compilers tend to make faster executables.


On Saturday, April 25, 2015 at 10:21:25 AM UTC-5, Ángel de Vicente wrote:

 Hi,

 a complete Julia newbie here... I spent a couple of days learning the 
 syntax and main aspects of Julia, and since I heard many good things about 
 it, I decided to try a little program to see how it compares against the 
 other ones I regularly use: Fortran and Python.

 I wrote a minimal program to solve the 3D heat equation in a cube of 
 100x100x100 points in the three languages and the time it takes to run in 
 each one is:

 Fortran: ~7s
 Python: ~33s
 Julia: ~80s

 The code runs for 1000 iterations, and I'm being nice to Julia, since the 
 programs in Fortran and Python write 100 HDF5 files with the complete 100^3 
 data (every 10 iterations).

 I attach the code (and you can also get it at: 
 http://pastebin.com/y5HnbWQ1)

 Am I doing something obviously wrong? Any suggestions on how to improve 
 its speed?

 Thanks a lot,
 Ángel de Vicente



[julia-users] Re: Something wrong with Optim?

2015-04-28 Thread Pooya
Ah! Thank you. I had not heard of closure before. Now, I have heard of it, 
but am not sure I completely understand it! I guess this might be 
worth explaining in the manual. One thing that is still kind of 
confusing: in your example, z stays defined from when the closure is 
created until the end of that iteration. So z is printed once below, but 
not a second time at the beginning of the second iteration! 

julia> for i=1:10
   if i>=2; println(z); end 
   z=2 
   g()=2z 
   println(z) 
   end 
2 
ERROR: z not defined 
 in anonymous at no file:2
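One way to see the scoping rule at work (a sketch in modern Julia, where `@isdefined` is available; it did not exist in 0.3): each `for` iteration gets a fresh local scope, so a variable assigned inside the loop body does not survive into the next iteration unless it is declared outside the loop.

```julia
function fresh()
    seen = Bool[]
    for i in 1:3
        # z is a loop-local variable: at the top of every iteration it is
        # not yet assigned, even though the previous iteration assigned it.
        push!(seen, @isdefined z)
        z = i
    end
    seen
end

fresh()   # false for every iteration
```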


On Tuesday, April 28, 2015 at 10:17:17 PM UTC-4, Avik Sengupta wrote:

 Yes, sorry I jumped the gun. Thanks for clarifying. 

 But it still does not have anything to do with Optim :)

  The problem is due to defining an inline function (line 43) that creates 
  a closure over the x_previous variable.  To test this, just comment out that 
  line (and adjust the Optim.optimize call), and the problem goes away. 

 A simpler version of the code that fails is as follows: 

 julia> function f() 
  for i=1:10
if i>2; println(z); end
z=2
g() = 2z
  end
end
 f (generic function with 1 method)

 julia> f()
 ERROR: z not defined
  in f at none:3

 A fix to get it to work is to declare the variable at the start of your 
 function. Similarly, adding a local x_previous at the top of your 
 function makes it work correctly. Remember, variables in Julia are lexical 
 in scope. 

 julia> function f()
  local z
  for i=1:10
if i>2; println(z); end
z=2
g()=2z
  end
end
 f (generic function with 1 method)

 julia> f()
 2
 2
 2
 2
 2
 2
 2
 2


 On Wednesday, 29 April 2015 02:23:28 UTC+1, Pooya wrote:

 If you comment out lines 42-49, you will see that it works fine!

 On Tuesday, April 28, 2015 at 9:20:49 PM UTC-4, Pooya wrote:

 Thanks, but I think if iter > 2 (line 21) makes sure that x_previous 
 is defined in the previous iteration. Just to be clear, the condition to 
 check here was g_norm > g_norm_old, but I changed it to get there as 
 early as the second iteration.  

 On Tuesday, April 28, 2015 at 9:13:49 PM UTC-4, Avik Sengupta wrote:

 I'm seeing the error in line 22 of your gist where you are trying to 
 print the current value of x_previous. However, x_previous is first 
 defined in line 38 of your gist, and so the error is correct and doesnt 
 have anything to do with Optim, as far as I can see. 

 On Wednesday, 29 April 2015 01:39:02 UTC+1, Pooya wrote:

 Hi all,

 I have a problem that has made me scratch my head for many hours now! 
 It might be something obvious that I am missing. I have a Newton-Raphson 
 code to solve a system of nonlinear equations. The error that I get here 
 does not have anything to do with the algorithm, but just to be clear, I 
 need to find the best possible solution if the equations are not 
 solvable, 
 so I am trying to stop simulation when the direction found by 
 Newton-Raphson is not correct! In order to do that I put an if block at the 
 beginning of the main loop to take x from the previous iteration 
 (x_previous), but I get x_previous not defined! I am using the Optim 
 package to do a line search after the direction has been found by 
 Newton-Raphson. If Optim is not used, things work perfectly (I tried by 
 commenting out those lines of code). Otherwise I get the error I 
 mentioned. 
 My code is here: 
 https://gist.github.com/prezaei85/372bde76012472865a94, which solves 
 a simple one-variable quadratic equation. Any thoughts are very much 
 appreciated.

 Thanks,
 Pooya



Re: [julia-users] Re: Defining a function in different modules

2015-04-28 Thread MA Laforge
I can see that this issue is convoluted.  There appears to be competing 
requirements, and getting things to start humming is non-trivial.

Instead of dealing with what if-s... I want to start with more concrete 
what does...

*Transgressions.sin*
First, I don't fully understand Jeff's talk about Transgressions.sin.  I 
disagree that you can't get both behaviors with map(sin, [1.0, "sloth", 
2pi, "gluttony"]).

I tried the following code in Julia, and everything works fine:
module Transgressions
Base.sin(x::String) = "Sin in progress: $x"
end

using Transgressions #Doesn't really do anything in this example...
map(sin, [1.0, "sloth", 2pi, "gluttony"])

This tells me that when one uses map on an Array{Any}, Julia dynamically 
checks the object type, and applies multi-dispatch to execute the expected 
code.

I admit that one could argue this is not how object oriented design 
usually deals with this... but that's duck typing for you!

Ok... so what is the *real* problem (as I see it)?  Well, the problem is 
that Julia essentially decides that Base owns sin... simply because it 
was defined first.

The workaround here was to extend Base.sin from module Transgressions.  
This works reasonably well when one *knows* that Base defines the sin 
family of methods... but not very good when one wants to appropriate a 
new verb (enable, trigger, paint, draw, ...).

Why should any one module own such a verb (family of methods)?  This 
makes little sense to me.

*As for Michael Turok's idea of the SuperSecretBase*
As some people have pointed out, SuperSecretBase is a relatively elegant 
way to define a common interface for multiple implementations (Ex: 
A/BConnectionManager).  However, this is not really appropriate in the case 
when modules want to use the same verb for two completely different domains 
(ex: draw(x::Canvas, ...) vs draw(x::SixShooter, ...)).

And, as others have also pointed out: the SuperSecretBase solution is not 
even that great for modules that *do* want to implement a common 
interface.  If company A needs to convince standards committee X to settle 
on an interface of accepted verbs... that will surely impede on product 
deployment.  And even then... Why should standards committee X own that 
verb in the first place???  Why not standards committee Y?

*Regarding the comment about not using using*
Well, that just seems silly to me... by not using using... you completely 
under-utilize the multi-dispatch engine & its ability to author crisp, 
succinct code.

==And I would like to point out: The reason that multi-dispatch works so 
well at the moment is because (almost) everything in Julia is owned by 
Base... so there are no problems extending methods in base Julia

*Some improvements on Transgressions.sin*
FYI: I don't really like my previous example of Transgressions.sin.  The 
reason: The implementation does not make sufficient use of what I would 
call hard types (user-defined types).  Instead, it uses soft types 
(int/char/float/string).

Hard types are very explicit, and they take advantage of multiple 
dispatch.  On the other hand, a method that takes *only* soft types is more 
likely to collide with others  fail to be resolved by multiple dispatch.

I feel the following example is a *much* better implementation to resolve 
the sin paradox:
module Religion
#Name Transgressions has a high likelihood of name collisions - don't 
export:
type Transgressions; name::String; end

#Personally, I find this Transgressions example shows that base 
should *not* own sin.
#Multi-dispatch *should* be able to deal with resolving ambiguities...
#In any case, this is my workaround for the moment:
Base.sin(x::Transgressions) = "Sin in progress: $x"

#Let's hope no other module wants to own method absolve...
absolve(x::Transgressions) = "Sin absolved: $x"

export absolve #Logically should have sin here too... but does not work 
with Julia model.
end

using Religion
Xgress = Religion.Transgressions #Shorthand... export-ing Transgressions 
susceptible to collisions.

map(sin, [1.0, Xgress("sloth"), 2pi, Xgress("gluttony")])

Initially, creating a type Transgressions seems to be overdoing things a 
bit.  However, I have not noticed a performance hit.  I also find it has 
very little impact on readability.  In fact I find it *helps* with 
readability in most cases.

Best of all: Despite requiring a little more infrastructure in the module 
definition itself, there is negligible overhead in the code that *uses* 
module Religion.


Re: [julia-users] Re: Defining a function in different modules

2015-04-28 Thread elextr
Nice summary, it shows that, in the case where the module developer knows 
about an existing module and intends to extend its functions, Julia just 
works.

But it misses the actual problem case, where two modules are developed in 
isolation and each exports an original sin (sorry couldn't resist).  In 
this case the user of both modules has to distinguish which sin they are 
committing.

Although in some situations it might be possible for Julia to determine 
that there is no overlap between the methods simply, it is my understanding 
that in general it could be an expensive whole program computation.  So at 
the moment Julia just objects to overlapping names where they are both 
original sin.

Cheers
Lex

On Wednesday, April 29, 2015 at 1:50:54 PM UTC+10, MA Laforge wrote:

 I can see that this issue is convoluted.  There appears to be competing 
 requirements, and getting things to start humming is non-trivial.

 Instead of dealing with what if-s... I want to start with more concrete 
 what does...

 *Transgressions.sin*
 First, I don't fully understand Jeff's talk about Transgressions.sin.  I 
  disagree that you can't get both behaviors with map(sin, [1.0, "sloth", 
  2pi, "gluttony"]).

 I tried the following code in Julia, and everything works fine:
 module Transgressions
  Base.sin(x::String) = "Sin in progress: $x"
 end

 using Transgressions #Doesn't really do anything in this example...
  map(sin, [1.0, "sloth", 2pi, "gluttony"])

 This tells me that when one uses map on an Array{Any}, Julia dynamically 
 checks the object type, and applies multi-dispatch to execute the expected 
 code.

 I admit that one could argue this is not how object oriented design 
 usually deals with this... but that's duck typing for you!

 Ok... so what is the *real* problem (as I see it)?  Well, the problem is 
 that Julia essentially decides that Base owns sin... simply because it 
 was defined first.

 The workaround here was to extend Base.sin from module 
 Transgressions.  This works reasonably well when one *knows* that Base 
 defines the sin family of methods... but not very good when one wants to 
 appropriate a new verb (enable, trigger, paint, draw, ...).

 Why should any one module own such a verb (family of methods)?  This 
 makes little sense to me.

 *As for Michael Turok's idea of the SuperSecretBase*
 As some people have pointed out, SuperSecretBase is a relatively elegant 
 way to define a common interface for multiple implementations (Ex: 
 A/BConnectionManager).  However, this is not really appropriate in the case 
 when modules want to use the same verb for two completely different domains 
 (ex: draw(x::Canvas, ...) vs draw(x::SixShooter, ...)).

 And, as others have also pointed out: the SuperSecretBase solution is not 
 even that great for modules that *do* want to implement a common 
 interface.  If company A needs to convince standards committee X to settle 
 on an interface of accepted verbs... that will surely impede on product 
 deployment.  And even then... Why should standards committee X own that 
 verb in the first place???  Why not standards committee Y?

 *Regarding the comment about not using using*
 Well, that just seems silly to me... by not using using... you 
 completely under-utilize the multi-dispatch engine & its ability to author 
 crisp, succinct code.

 ==And I would like to point out: The reason that multi-dispatch works so 
 well at the moment is because (almost) everything in Julia is owned by 
 Base... so there are no problems extending methods in base Julia

 *Some improvements on Transgressions.sin*
 FYI: I don't really like my previous example of Transgressions.sin.  The 
 reason: The implementation does not make sufficient use of what I would 
 call hard types (user-defined types).  Instead, it uses soft types 
 (int/char/float/string).

 Hard types are very explicit, and they take advantage of multiple 
 dispatch.  On the other hand, a method that takes *only* soft types is more 
 likely to collide with others  fail to be resolved by multiple dispatch.

 I feel the following example is a *much* better implementation to resolve 
 the sin paradox:
 module Religion
  #Name Transgressions has a high likelihood of name collisions - 
 don't export:
 type Transgressions; name::String; end

 #Personally, I find this Transgressions example shows that base 
 should *not* own sin.
 #Multi-dispatch *should* be able to deal with resolving ambiguities...
 #In any case, this is my workaround for the moment:
  Base.sin(x::Transgressions) = "Sin in progress: $x"

 #Let's hope no other module wants to own method absolve...
  absolve(x::Transgressions) = "Sin absolved: $x"

 export absolve #Logically should have sin here too... but does not 
 work with Julia model.
 end

 using Religion
 Xgress = Religion.Transgressions #Shorthand... export-ing 
 Transgressions susceptible to collisions.

  map(sin, [1.0, Xgress("sloth"), 2pi, Xgress("gluttony")])

 Initially, creating a 

[julia-users] Re: configuring style of PyPlot

2015-04-28 Thread Christian Peel
Thanks.  The main problem I had was that I was using an older version of
Matplotlib; I've upgraded it and used the ggplot style.  Thanks for your
help.



On Tue, Apr 28, 2015 at 3:44 PM, Steven G. Johnson stevenj@gmail.com
wrote:

 Yes, in general you can do anything from PyPlot that you can do from
 Matplotlib, because PyPlot is just a thin wrapper around Matplotlib using
 PyCall, and PyCall lets you call arbitrary Python code.

 The pyplot.style module is not currently exported by PyCall, you can
 access it via plt.style:

 using PyPlot
 plt.style[:available]

 will show available styles, and you can use the ggplot style (assuming it
 is available) with:

 plt.style[:use]("ggplot")

 (Note that foo.bar in Python becomes foo[:bar] in PyCall.)

 Then subsequent plot commands will use the ggplot style.




-- 
chris.p...@ieee.org


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Scott Jones
People who, after becoming acquainted with Julia, use it for general 
computing, or maybe for some light usage of the math packages, might much 
rather have understandable names available, so they don't always have to 
run to the manual...
With decent code completion in your editor, I don't think it takes any more 
typing...

Also, there would be many fewer problems of name collision (only if you 
loaded the abbreviations into your module would it become as likely as now).

Just because those really short names (which had their roots back before 
IDEs with fancy code completion, when file names were limited to 8.3 (or 14 
in Unix) characters),
became traditional, like the names of the C functions, doesn't mean that 
they are necessarily *good* names...

I would definitely complain about the cryptic names in C more, if there 
were thousands of them with short names... but as the set of functions that 
I'd use is very small, and generally fit a few simple patterns (such as 
mem*, *printf, *alloc [I'd never use the str* ones that expect \0 
terminated strings anyway]), it doesn't bother me that much.


[julia-users] Re: Help to optimize a loop through a list

2015-04-28 Thread Patrick O'Leary
On Tuesday, April 28, 2015 at 11:17:55 AM UTC-5, Ronan Chagas wrote:

 Sorry, my mistake. Every problem is gone when I change

 nf::Integer

 to

 nf::Int64

 in type MGEOStructure.

 I didn't know that such thing would affect the performance this much...

 Sorry about that,
 Ronan


No problem. For future reference (or others coming to this thread in the 
future): 
http://julia.readthedocs.org/en/latest/manual/performance-tips/#avoid-containers-with-abstract-type-parameters

It is somewhat confusing that Integer is an abstract type. If you're not 
sure, you can check with the `isleaftype()` method.

help?> isleaftype
search: isleaftype

Base.isleaftype(T)

   Determine whether T is a concrete type that can have instances,
   meaning its only subtypes are itself and None (but T itself
   is not None).

julia> isleaftype(Integer)
false

julia> isleaftype(Int)
true
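The performance trap discussed in this thread can be sketched in a few lines (illustrative only, simplified from Ronan's MGEOStructure; modern `struct` syntax, where the 0.3-era code uses `type`):

```julia
# A field annotated with an abstract type (Integer) is boxed, so every
# access dispatches dynamically; a concrete field (Int64) is stored inline
# and the compiler can generate tight code for it.
struct SlowConf
    nf::Integer   # abstract type: not a leaf/concrete type
end
struct FastConf
    nf::Int64     # concrete type
end

# Repeatedly reading the field, as checkDominance does in a hot loop:
sumn(c, k) = sum(c.nf for _ in 1:k)

sumn(SlowConf(3), 1000)   # same answer, but much slower per access
sumn(FastConf(3), 1000)
```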


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Patrick O'Leary
On Tuesday, April 28, 2015 at 11:56:29 AM UTC-5, Scott Jones wrote:

 People who, after becoming acquainted with Julia, use it for 
 general computing, or maybe for some light usage of the math packages, 
 might much rather have understandable names available, so they don't always 
 have to run to the manual...
 With decent code completion in your editor, I don't think it takes any 
 more typing...


There's a tradeoff. Reading is more common than writing--which at first 
makes long names sound appealing. But long names can also obscure 
algorithms. So you want names to be long enough to be unambiguous, but 
short enough that code can look like the algorithm you're implementing. 
Support for Unicode in identifiers is huge in that respect, and it is nice 
to write a (non-CompSci, in my case) algorithm in Julia that looks 
remarkably like what's in the textbook. And someone else working in my 
domain--the people who are reviewing and modifying my code--can very 
quickly grok that.

If long names were unambiguously better, no one would pick on Java.


[julia-users] Re: Help to optimize a loop through a list

2015-04-28 Thread Kristoffer Carlsson
Integer is an abstract type and thus kills performance if you use it in a 
type and need to access it frequently. There was discussion somewhere about 
renaming abstract types like Integer and FloatingPoint to include the name 
Abstract in them to avoid accidents like this. I guess this shows that it 
might not be a bad idea.

If you see surprising allocations in your program it is likely a type 
stability issue and if you are on 0.4 the @code_warntype macro is your 
friend.



On Tuesday, April 28, 2015 at 6:17:55 PM UTC+2, Ronan Chagas wrote:

 Sorry, my mistake. Every problem is gone when I change

 nf::Integer

 to

 nf::Int64

 in type MGEOStructure.

 I didn't know that such thing would affect the performance this much...

 Sorry about that,
 Ronan



[julia-users] Re: Help to optimize a loop through a list

2015-04-28 Thread Ronan Chagas
Sorry, my mistake. Every problem is gone when I change

nf::Integer

to

nf::Int64

in type MGEOStructure.

I didn't know that such a thing would affect the performance this much...

Sorry about that,
Ronan


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread François Fayard
Sorry for being a pain, but shouldn't LinAlg be LinearAlgebra? What's the point 
of issuing naming conventions if they are not even respected by the main developers?

Besides, I really find that Julia underuses multiple dispatch. It's a big 
selling point of the language and it's not even used that much in the standard 
library! Mathematica has a kind of multiple dispatch and it's what makes the 
language so consistent. If people mimic Matlab, multiple dispatch will be 
underused.

[julia-users] Can't publish package because: fatal: tag 'v0.0.1' already exists

2015-04-28 Thread Zenna Tavares
I am trying to publish my package, AbstractDomains 
https://github.com/zenna/AbstractDomains.jl and having some issues.  I 
already registered AbstractDomains before, but failed to tag it.  I am 
attempting to both tag it and update with recent changes.

Currently, if I run

git tag

I get back 

v0.0.1


If in Julia I run Pkg.status() I get back 

60 additional packages:
 - AbstractDomains   0.0.0- master


But if I try to tag, I get back:

julia> Pkg.tag("AbstractDomains")
INFO: Tagging AbstractDomains v0.0.1
fatal: tag 'v0.0.1' already exists
ERROR: failed process: Process(`git 
--work-tree=/home/zenna/.julia/v0.3/AbstractDomains 
--git-dir=/home/zenna/.julia/v0.3/AbstractDomains/.git tag --annotate 
--message 'AbstractDomains v0.0.1 [f7511f9061]' v0.0.1 
f7511f90619e3b64303553a5c45456822acb1afa`, ProcessExited(128)) [128]
 in pipeline_error at process.jl:502
 in run at pkg/git.jl:22
 in tag at pkg/entry.jl:568
 in tag at pkg/entry.jl:524
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in tag at pkg.jl:47

If I try to tag at v0.0.2

julia> Pkg.tag("AbstractDomains", VersionNumber(0,0,2))
ERROR: 0.0.2 is not a valid initial version (try 0.0.0, 0.0.1, 0.1 or 1.0)
 in error at error.jl:21
 in check_new_version at version.jl:177
 in tag at pkg/entry.jl:557
 in tag at pkg/entry.jl:524
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28


What is wrong? Why does Julia think I'm both at v0.0.1 and v0.0.0? More 
importantly, how can I fix it?

Thanks
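If the git tag exists locally but Pkg never recorded it, one standard-git way out (an untested sketch; it assumes the v0.0.1 tag was never pushed anywhere you care about) is to delete the stale local tag and let `Pkg.tag` recreate it:

```shell
# Delete the stale local tag from the package repository...
cd ~/.julia/v0.3/AbstractDomains
git tag -d v0.0.1

# ...then, back in Julia, retag so Pkg records it properly:
#   julia> Pkg.tag("AbstractDomains")
```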


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Josh Langsfeld
I don't know why it hasn't been mentioned (it was hinted at by Tamas) but 
it seems to me the clear solution is for most of Base to actually be moved 
into submodules like 'LinAlg'. Then to use those names, people need to call 
'using LinAlg' or 'using Sparse', etc... Somebody mentioned how 'cholfact' 
might be confusing to generic programmer, but a generic programmer should 
never even see the name unless he or she goes looking for it.

I would be highly skeptical of any attempt to make the standard library a 
single gigantic list of function names that everyone can understand the 
purpose of by glancing at.

On Tuesday, April 28, 2015 at 9:45:39 AM UTC-4, Yuuki Soho wrote:

 I think one should go over all the names in Base and see if there's some 
 rules that can be applied sanely to come up with better naming scheme.

 If you introduce factorize(MyType,...) and want to be consistent about 
 this kind of things you might end up changing a lot of the functions in 
 base. E.g.

 sqrtm -> sqrt(Matrix,...) 
 hist2d -> hist(Dimension{2},...) 
 ...



Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Ángel de Vicente
Hi,

On Tuesday, April 28, 2015 at 3:36:48 PM UTC+1, Tim Holy wrote:

 Intel compilers won't help, because your julia code is being compiled by 
 LLVM. 


But I saw a discussion about using Intel's MKL for greater performance, and 
the Make.user options to use Intel compilers are meant to be supported by 
Julia. Why, if there is no advantage in using them?
 

 It's still hard to tell what's up from what you've shown us. When you run 
 @time, does it allocate any memory? (You still have global variables in 
 there, 
 but maybe you made them const.) 


I'm posting some numbers again in reply to Yuuki's mail. 


 But you can save yourself two iterations through the arrays (i.e., more 
 cache 
 misses) by putting 
 T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1] 
 inside the first loop and discarding the second loop (except for cleaning 
 up 
 the edges). Fortran may be doing this automatically for you? 
 http://en.wikipedia.org/wiki/Polytope_model 


I'm not sure if Fortran is doing that, but I certainly would not like to 
implement those sorts of low-level details in the code itself, since it 
makes understanding the code considerably more cumbersome...

(But Yuuki's mail gave me the trick. I reply to his mail below)

Thanks a lot (starting to get the feel for Julia...),
Ángel de Vicente 


[julia-users] Using Multiple DistributedArrays with map

2015-04-28 Thread Christopher Fisher

I'm fitting a complex cognitive model to data. Because the model does not 
have a closed-form solution for the likelihood, computationally intensive 
simulation is required to generate the model predictions for fitting. My 
fitting routine involves two steps: (1) a brute-force search of the 
plausible parameter space to find a good starting point for (2) fine-tuning 
with Optim. I have been able to parallelize the second step but I am having 
trouble with the first. 
 
Here is what I did: I precomputed kernel density functions from 100,000 
parameter combinations, resulting in three files: (1) ParmList, a list of 
the 100,000 parameter combinations, (2) RangeVar, the range inputs into the 
kernel density function, UnivariateKDE(), and (3) DensityVar, the density 
inputs also for the kernel density function, UnivariateKDE(). I would like 
to use a distributed array to compute the loglikelihood of various sets of 
data across the 100,000 parameter combinations. I'm having trouble using 
map with a function that loops over the kernel densities. I'm also having 
trouble getting the data to the function. The basic code I have is 
provided below. Any help would be greatly appreciated.
 
addprocs(16)

#RangeVar and DensityVar are precomputed inputs for the kernel density 
function

#UnivariateKDE

#100,000 by 4 array

RangeVar = readcsv("RangeVar.csv")

#2048 by 100,000 array

DensityVar = readcsv("DensityVar.csv")

#List of parameters corresponding to each kernel density function

#100,000 by 4 array

ParmList = readcsv("ParmList.csv")

 

 

#Convert to Distributed Array

RangeVarD = distribute(RangeVar)

#Convert to Distributed Array

DensityVarD = distribute(DensityVar)

 

#Example data

data = [.34 .27 .32 .35 .34 .33]

 

@everywhere function LogLikelihood(DensityVar,RangeVar,data)

#Preallocate the output vector of summed log likelihoods

KDLL = zeros(size(RangeVar,1))

#Loop over the parameter combinations, grabbing the appropriate inputs to 
#reconstruct the kernel density functions

for i = 1:size(RangeVar,1)

range = FloatRange(RangeVar[i,1],RangeVar[i,2],RangeVar[i,3],RangeVar[i,4])

Dens = DensityVar[:,i]

#Reconstruct the kernel density function corresponding to the ith parameter 
#set

f = UnivariateKDE(range,Dens)

#Estimate the likelihood of the data given the parameters using 
#the kernel density function

L = pdf(f,data)

#Compute the summed log likelihood

KDLL[i] = sum(log(L))

end

return KDLL

end
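One pattern worth trying (a toy sketch, not the real kernel-density code: `score` is a hypothetical stand-in for the per-parameter-set log likelihood; on Julia 0.7+ you would need `using Distributed` first): instead of calling `map` on the DArrays directly, `pmap` over the row/column indices and let the closure carry every input, including the data, to the workers:

```julia
# Toy stand-in: any function of row i of A, column i of B, and the data
@everywhere score(i, A, B, data) = sum(A[i, :]) + sum(B[:, i]) + data

A = ones(4, 2)    # plays the role of RangeVar   (rows    = parameter sets)
B = ones(3, 4)    # plays the role of DensityVar (columns = parameter sets)

# pmap distributes the *indices*; both arrays and the data ride along in
# the closure, so each worker slices out exactly the pieces it needs.
out = pmap(i -> score(i, A, B, 0.5), 1:size(A, 1))
@assert out == [5.5, 5.5, 5.5, 5.5]
```

This sidesteps both reported problems: there is no mismatch between the two arrays' shapes (each worker indexes them itself), and the data reaches the function through the closure.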

 


Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Ángel de Vicente
Yuuki,

thanks a lot, this was what I was missing!

On Tuesday, April 28, 2015 at 3:49:58 PM UTC+1, Yuuki Soho wrote:

 The code allocates only 432 bytes on my computer once I removed all global 
 variables, and it's pretty fast.

 Multiplying by the inverse of dx2 ... instead of dividing also makes quite 
 a difference, 2-3x.

 http://pastebin.com/PSZyLXJX


Looking at your code, I realized that the trick was to annotate the 
functions. Without using the inverse of dx2 trick (since I'm not using it 
in Fortran) I get these timings:

Fortran version: 6.7s   (compiled with gfortran -O3)
Julia version: 7.013900449 seconds (3282172 bytes allocated)

Which is getting VERY close to my baseline. For reference, I created 
pastebins for both versions:

Fortran code: http://pastebin.com/nHn44fBa
Julia code:http://pastebin.com/Q8uc0maL

Thanks a lot. After this, the next goal is to approach Fortran's MPI 
parallel performance with Julia's parallel performance

Cheers,
Ángel de Vicente
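For readers landing here later, the two tricks boil down to something like this 1-D sketch (hypothetical names; the real 3-D code is in the pastebins above): put the hot loop in a function that receives everything as arguments, and precompute the reciprocal once outside the loop:

```julia
# step! receives the arrays and the precomputed 1/dx^2 as arguments, so
# there are no non-const globals and the loop compiles to typed code.
function step!(RHS, T, idx2)        # idx2 = 1/dx^2, computed once
    for i = 2:length(T)-1
        RHS[i] = (T[i-1] - 2T[i] + T[i+1]) * idx2   # multiply, don't divide
    end
    RHS
end

T   = [0.0, 1.0, 4.0, 9.0]          # toy 1-D temperature field
RHS = zeros(4)
step!(RHS, T, 1 / 0.5^2)
@assert RHS[2] == 8.0 && RHS[3] == 8.0
```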


Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Isaiah Norton

 But I saw a discussion about using Intel's MKL for greater performance and
 the Make.user options to use Intel compilers are meant to be supported by
 Julia. Why if there is no advantage in using them?


Intel MKL only helps by providing faster linear algebra than the default
OpenBLAS (in some cases), not with the runtime of pure-Julia code.

On Tue, Apr 28, 2015 at 2:31 PM, Ángel de Vicente 
angel.vicente.garr...@gmail.com wrote:

 Hi,

 On Tuesday, April 28, 2015 at 3:36:48 PM UTC+1, Tim Holy wrote:

 Intel compilers won't help, because your julia code is being compiled by
 LLVM.


 But I saw a discussion about using Intel's MKL for greater performance and
 the Make.user options to use Intel compilers are meant to be supported by
 Julia. Why if there is no advantage in using them?


 It's still hard to tell what's up from what you've shown us. When you run
 @time, does it allocate any memory? (You still have global variables in
 there,
 but maybe you made them const.)


 I'm posting some numbers again in reply to Yuuki's mail.


 But you can save yourself two iterations through the arrays (i.e., more
 cache
 misses) by putting
 T[i-1,j-1,k-1] += RHS[i-1,j-1,k-1]
 inside the first loop and discarding the second loop (except for cleaning
 up
 the edges). Fortran may be doing this automatically for you?
 http://en.wikipedia.org/wiki/Polytope_model


 I'm not sure if Fortran is doing that, but I certainly would not like to
 implement those sort of low-level details in the code itself, since it
 makes understanding the code quite more cumbersome...

 (But Yuuki's mail gave me the trick. I reply to his mail below)

 Thanks a lot (starting to get the feel for Julia...),
 Ángel de Vicente



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Glen H
Having "using LinAlg" would help to clean up the namespace of available 
functions for people who don't need it, but what I was proposing is that doing 
"using LinAlg" shouldn't change the API much (if at all).  You still 
want to factorize numbers and polynomials (which are not part of LinAlg). 
 The API shouldn't grow 10x because LinAlg adds a bunch of matrix types 
that can also be factorized.  What I was proposing is actually a much 
smaller list of functions to export (not more). 

For example

Using multi-dispatch:
factorize
factorize!

Not using multi-dispatch:
factor
factorize
factorize!
chol
cholfact
cholfact!
lu
lufact
ldltfact
qr
qrfact
qrfact!
bkfact
eigfact
hessfact
schurfact
svdfact
etc.

(I don't intend to pick on LinAlg in general...it is just the example brought 
up by this thread.  Please pick on the using-multiple-dispatch idea in general 
and not on specific errors/omissions in the above list.)

I found multiple dispatch annoying at first but came around to really 
like it.  It made my code more generic and modular, reduced the number of 
lines (if statements disappeared), ran faster, and was more extensible. 
I haven't used another programming language that gives such 
enjoyment...but if people don't adopt multiple dispatch then these benefits 
disappear.  The compound (shorthand) names will be shorter (horizontally) 
but the code will be less generic and you will have more lines to look at 
(vertically).  There are no guidelines for these compound names to follow, 
so it is confusing for everyone except the people who have memorized these 
names.
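The contrast can be sketched with a toy generic function (`describe` is hypothetical, not proposed for Base): one name whose meaning is chosen by the argument types, instead of a family of mangled names:

```julia
# One generic function; multiple dispatch picks the method from the types,
# replacing cholfact/qrfact/... style name mangling.
describe(n::Integer) = "integer factorization"
describe(A::Matrix)  = "matrix factorization"
describe(p::Vector)  = "polynomial factorization"  # pretend: coefficient vector

@assert describe(12)         == "integer factorization"
@assert describe([1 0; 0 1]) == "matrix factorization"
@assert describe([1.0, 2.0]) == "polynomial factorization"
```

Callers write one verb; new argument types extend the verb without growing the exported name list.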


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Tony Kelman
Renaming the sparse matrix functions should absolutely be done. And this is 
coming from a long-time former user of Matlab who works with sparse 
matrices much more often than I work with strings. SparseMatrixCSC is far 
from the only sparse matrix format in the world when you venture beyond 
what Matlab provides. A major shortcoming of spzeros, sprandn, etc is they 
are not at all generic or extensible in the output format. We should 
eventually use zeros(SparseMatrixCSC, ...) or something to that effect, and 
we can easily put deprecations in place for the current Matlab-heritage 
names. What will be somewhat challenging is figuring out how to coordinate 
this kind of restructuring along with the existing dense version of zeros, 
randn, etc - since these currently take an element type as an input and 
always output as a dense Array{eltype, N}. Since generally the container 
types already have their element types as a parameter, I think it would be 
more general to move towards asking for the complete container type of the 
output in functions like zeros, rather than asking for the element type.


On Monday, April 27, 2015 at 2:03:17 PM UTC-7, Jiahao Chen wrote:

 For sparse matrix functions, Matlab's nomenclature is the de facto 
 standard, since providing a high level interface to sparse matrices is a 
 Matlab innovation. Renaming these functions can be considered, but will 
 doubtless offend the sensibilities of those accustomed to other languages.



[julia-users] abs inside vector norm

2015-04-28 Thread Mauro
In ODE.jl, I would like to be able to take the norm of a Vector
containing other things than just numbers:

type Point
x::Float64
y::Float64
end
Base.norm(pt::Point, p=2) = norm([pt.x, pt.y], p)
norm(Point(3,4))  # == 5
norm([Point(3,4), Point(3,1)])  # does not work:
 # ERROR: `abs` has no method matching abs(::Point)
 # in generic_vecnormInf at linalg/generic.jl:70
 # in generic_vecnorm2 at linalg/generic.jl:92
 # in vecnorm at linalg/generic.jl:146
 # in norm at linalg/generic.jl:156

Of course defining

Base.abs(pt::Point) = norm(pt)

makes it work, however, it's not that nice as abs is reserved for
numbers.  Does that mean that I also need to define my own norm for each
Vector{...} where ... is my compound type or is there a more general
approach?  Would it not make sense to use `norm` instead of `abs` inside
vecnorm, as abs==norm for numbers?  (If so, what should p be for the
inside norms?)

References:
https://github.com/JuliaLang/julia/pull/6057
https://groups.google.com/forum/#!msg/julia-users/RJO-JCVV7rg/4qHjMbPd9YAJ


Re: [julia-users] Configuring tComment for Julia-mode in vim?

2015-04-28 Thread René Donner
I am using

  autocmd FileType julia set commentstring=#\ %s

together with https://github.com/tpope/vim-commentary (don't have experience 
with tComment)


Am 28.04.2015 um 11:14 schrieb Magnus Lie Hetland m...@idi.ntnu.no:

 
 
 I'm using vim for Julia editing, and the Julia mode is great; for some 
 reason, though, either it or tComment (or their interaction ;-) uses the 
 multiline comment style when commenting individual lines. So I end up with
 
 #= for i=1:n =#
 #= println(i) =#
 #= end =#
 
 When I really would have wanted
 
 # for i=1:n
 # println(i)
 # end
 
 Or, I guess, some version with #= at the beginning and =# at the end, for 
 that matter. I couldn't immediately find out how to configure tComment to 
 “behave,” so I thought I'd check if anyone else is using it :-)



Re: [julia-users] the state of GUI toolkits?

2015-04-28 Thread Tim Holy
Here's one vote for Gtk. Currently it might need some love to fix up for recent 
julia changes---presumably you (or someone) could fix it up in a couple of 
hours.

Once I finish with my array work in base julia, I'm hoping to get back into 
graphics this summer and switch all of my packages to Gtk.

--Tim

P.S. I was just using Qwt as an example of something I had played with a long 
time ago that provided yet more evidence that Cairo (which both Gtk.jl and 
Tk.jl rely on for drawing) is slow. But there seem to be other possible 
solutions (still in exploratory phases) for faster drawing that might work 
well with Gtk.

On Tuesday, April 28, 2015 12:46:52 AM Andreas Lobinger wrote:
 Hello colleagues, 
 
 what is the status of availability and use cases for GUI toolkits? 
 
 I see Tk and Gtk on pkg.julialang.org. Gtk has the tag 'doesn't load' 
 from testing; Tk seems OK. 
 In a recent discussion here, Tim Holy mentioned testing Qwt himself, and Qt 
 in general seems to be a testcase for Cxx. 
 
 Am I missing something here? 
 
 Wishing a happy day, 
  Andreas



[julia-users] function with input parameter Vector{VecOrMat{Float64}}

2015-04-28 Thread Ján Dolinský
Hi guys,

I am trying to write a function which accepts as an input either a vector 
of vectors or a vector of matrices e.g.

function foo(X::Vector{VecOrMat{Float64}})

When running the function with a vector of matrices I get the following 
error: `foo` has no method matching foo(::Array{Array{Float64,2},1})

Am I missing something here ?

Thanks,
Jan 
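Per Tim Holy's reply, the issue is that Julia's parametric types are invariant: Vector{Matrix{Float64}} is not a subtype of Vector{VecOrMat{Float64}}, so the original signature never matches. A method parametric in the element type (Julia 0.3/0.4 syntax sketched below; `foo` just returns the length for illustration) accepts both cases:

```julia
# Invariance: a Vector whose elements are Matrix{Float64} is NOT a
# Vector{VecOrMat{Float64}}.  Making the method parametric in V fixes it.
foo{V<:VecOrMat{Float64}}(X::Vector{V}) = length(X)

@assert foo([rand(2,2), rand(3,3)]) == 2   # vector of matrices
@assert foo([rand(2),   rand(3)])   == 2   # vector of vectors
```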


Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread 'Antoine Messager' via julia-users


I would love to but it seems that NLsolve does not accept anonymous 
functions.

https://lh3.googleusercontent.com/-spe5mTRqJDQ/VT9WpGNgDEI/ABg/_caaIfVwkec/s1600/Capture%2Bd%E2%80%99e%CC%81cran%2B2015-04-28%2Ba%CC%80%2B10.44.04.png


Le lundi 27 avril 2015 18:38:30 UTC+1, Tim Holy a écrit :

 gc() doesn't clear memory from compiled functions---the overhead of 
 compilation is so high that any function, once compiled, hangs around 
 forever. 

 The solution is to avoid creating so many compiled functions. Can you use 
 anonymous functions? 

 --Tim 

 On Monday, April 27, 2015 10:22:20 AM 'Antoine Messager' via julia-users 
 wrote: 
  And then (approximatively): 
  
  *  myfunction = eval(code_f)* 
  
  Le lundi 27 avril 2015 18:21:09 UTC+1, Antoine Messager a écrit : 
   I use meta programming to create my function. This is a simpler 
 example. 
   The parameters are generated randomly in the actual function. 
   
   *  lhs = {ode1= :(fy[1]), ode2= :(fy[2])};* 
   *  rhs = {ode1= :(y[1]*y[1]-2.0), ode2= :(y[2]-y[1]*y[1])};* 
   
   *  function code_f(lhs::Dict, rhs::Dict)* 
   *  lines = {}* 
   *  for key in keys(lhs)* 
   *  push!(lines, :( $(lhs[key]) = $(rhs[key])) )* 
   *  end* 
   *  @gensym f* 
   *  quote* 
   *  function $f(y, fy)* 
   *  $(lines...)* 
   *  end* 
   *  end* 
   *  end* 
   
   Le lundi 27 avril 2015 18:12:24 UTC+1, Tom Breloff a écrit : 
   Can you give us the definition of make_function as well?  This is 
 being 
   run in global scope? 
   
   On Monday, April 27, 2015 at 12:37:48 PM UTC-4, Antoine Messager 
 wrote: 
   When I input the following code, where myfunction is only a system 
 of 2 
   equations with 2 unknowns, the code starts to be really slow after 
   10,000 
   iterations. NLsolve is a non linear solver ( 
   https://github.com/EconForge/NLsolve.jl). 
   
   *  size=2* 
   *  for k in 1:10* 
   *  myfun=make_function(size);* 
   *  try{* 
   *  res=nlsolve(myfun,rand(size))* 
   *  }* 
   *  end* 
   *  end* 
   
   Thank you for your help, 
   Antoine 
   
   Le lundi 27 avril 2015 16:30:19 UTC+1, Mauro a écrit : 
   It is a bit hard to tell what is going wrong with essentially no 
   information.  Does the memory usage of Julia go up more than you 
 would 
   expect from storing the results?  Any difference between 0.3 and 
 0.4? 
   Anyway, you should try and make a small self-contained runable 
 example 
   and post it otherwise it will be hard to divine an answer. 
   
   On Mon, 2015-04-27 at 16:49, 'Antoine Messager' via julia-users  
   
   julia...@googlegroups.com wrote: 
Dear all, 

I need to create a lot of systems of equation, find some 
   
   characteristics of 
   
each system and store the system if of interest. Each system is 
   
   created 
   
under the same name. It works fine for the first 1000 systems but 
   
   after the 
   
program starts to be too slow. I have tried to use the garbage 
   
   collector 
   
each time I create a new system but it did not speed up the code. 
 I 
   
   don't 
   
know what to do, I don't understand where it could come from. 

Cheers, 
Antoine 
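A sketch of Tim's suggestion applied to the ode1/ode2 example above (`make_system` is hypothetical; `a` and `b` stand in for the randomly generated parameters): build one closure over fresh data instead of eval'ing a freshly gensym'd function per system, so only a single method ever gets compiled:

```julia
# One function body, compiled once; each "new system" is just new captured
# data, so memory no longer grows with the number of systems created.
function make_system(a, b)
    function f!(y, fy)
        fy[1] = y[1]*y[1] - a
        fy[2] = y[2] - b*y[1]*y[1]
    end
    return f!
end

f! = make_system(2.0, 1.0)
y, fy = [1.0, 2.0], zeros(2)
f!(y, fy)
@assert fy == [-1.0, 1.0]
```

Whether this works with NLsolve depends on whether it accepts a closure where it expects a function, which is the question raised later in this thread.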



[julia-users] Re: Configuring tComment for Julia-mode in vim?

2015-04-28 Thread Magnus Lie Hetland
I guess using

setlocal commentstring=#= %s

rather than the

setlocal commentstring=#=%s=#

from julia.vim works. Not sure if it's the right way, though.


Re: [julia-users] Code starting to be slow after 30 min of execution

2015-04-28 Thread 'Antoine Messager' via julia-users
Is there any other possibility? Because I need to use NLsolve, as it is 
the fastest nonlinear solver I have found for my problem.

Le mardi 28 avril 2015 10:45:59 UTC+1, Antoine Messager a écrit :

 I would love too but it seems that NLsolve does not accept anonymous 
 function.


 https://lh3.googleusercontent.com/-spe5mTRqJDQ/VT9WpGNgDEI/ABg/_caaIfVwkec/s1600/Capture%2Bd%E2%80%99e%CC%81cran%2B2015-04-28%2Ba%CC%80%2B10.44.04.png


 Le lundi 27 avril 2015 18:38:30 UTC+1, Tim Holy a écrit :

 gc() doesn't clear memory from compiled functions---the overhead of 
 compilation is so high that any function, once compiled, hangs around 
 forever. 

 The solution is to avoid creating so many compiled functions. Can you use 
 anonymous functions? 

 --Tim 

 On Monday, April 27, 2015 10:22:20 AM 'Antoine Messager' via julia-users 
 wrote: 
  And then (approximatively): 
  
  *  myfunction = eval(code_f)* 
  
  Le lundi 27 avril 2015 18:21:09 UTC+1, Antoine Messager a écrit : 
   I use meta programming to create my function. This is a simpler 
 example. 
   The parameters are generated randomly in the actual function. 
   
   *  lhs = {ode1= :(fy[1]), ode2= :(fy[2])};* 
   *  rhs = {ode1= :(y[1]*y[1]-2.0), ode2= :(y[2]-y[1]*y[1])};* 
   
   *  function code_f(lhs::Dict, rhs::Dict)* 
   *  lines = {}* 
   *  for key in keys(lhs)* 
   *  push!(lines, :( $(lhs[key]) = $(rhs[key])) )* 
   *  end* 
   *  @gensym f* 
   *  quote* 
   *  function $f(y, fy)* 
   *  $(lines...)* 
   *  end* 
   *  end* 
   *  end* 
   
   Le lundi 27 avril 2015 18:12:24 UTC+1, Tom Breloff a écrit : 
   Can you give us the definition of make_function as well?  This is 
 being 
   run in global scope? 
   
   On Monday, April 27, 2015 at 12:37:48 PM UTC-4, Antoine Messager 
 wrote: 
   When I input the following code, where myfunction is only a system 
 of 2 
   equations with 2 unknowns, the code starts to be really slow after 
   10,000 
   iterations. NLsolve is a non linear solver ( 
   https://github.com/EconForge/NLsolve.jl). 
   
   *  size=2* 
   *  for k in 1:10* 
   *  myfun=make_function(size);* 
   *  try{* 
   *  res=nlsolve(myfun,rand(size))* 
   *  }* 
   *  end* 
   *  end* 
   
   Thank you for your help, 
   Antoine 
   
   Le lundi 27 avril 2015 16:30:19 UTC+1, Mauro a écrit : 
   It is a bit hard to tell what is going wrong with essentially no 
   information.  Does the memory usage of Julia go up more than you 
 would 
   expect from storing the results?  Any difference between 0.3 and 
 0.4? 
   Anyway, you should try and make a small self-contained runable 
 example 
   and post it otherwise it will be hard to divine an answer. 
   
   On Mon, 2015-04-27 at 16:49, 'Antoine Messager' via julia-users  
   
   julia...@googlegroups.com wrote: 
Dear all, 

I need to create a lot of systems of equation, find some 
   
   characteristics of 
   
each system and store the system if of interest. Each system is 
   
   created 
   
under the same name. It works fine for the first 1000 systems 
 but 
   
   after the 
   
program starts to be too slow. I have tried to use the garbage 
   
   collector 
   
each time I create a new system but it did not speed up the 
 code. I 
   
   don't 
   
know what to do, I don't understand where it could come from. 

Cheers, 
Antoine 



[julia-users] Configuring tComment for Julia-mode in vim?

2015-04-28 Thread Magnus Lie Hetland


I'm using vim for Julia editing, and the Julia mode is great; for some 
reason, though, either it or tComment 
https://github.com/tomtom/tcomment_vim (or their interaction ;-) uses the 
multiline comment style when commenting individual lines. So I end up with

#= for i=1:n =#
#= println(i) =#
#= end =#

When I really would have wanted

# for i=1:n
# println(i)
# end

Or, I guess, some version with #= at the beginning and =# at the end, for 
that matter. I couldn't immediately find out how to configure tComment to 
“behave,” so I thought I'd check if anyone else is using it :-)


[julia-users] Re: Using Multiple DistributedArrays with map

2015-04-28 Thread Christopher Fisher
correction: map(LogLikelihood,RangeVar)

On Tuesday, April 28, 2015 at 3:39:34 PM UTC-4, Christopher Fisher wrote:

 I forgot to add that when I tried to set up simpler code with only one 
 array (and also no data passed through), it produced an error because it 
 read the distributed array, RangeVar, as a one dimensional array instead of 
 a two dimensional array. 

 map(LogLikelihood,DensityVar)



Re: [julia-users] Re: Newbie help... First implementation of 3D heat equation solver VERY slow in Julia

2015-04-28 Thread Patrick O'Leary
On Tuesday, April 28, 2015 at 1:38:02 PM UTC-5, Ángel de Vicente wrote:

 Fortran code: http://pastebin.com/nHn44fBa
 Julia code:http://pastebin.com/Q8uc0maL


We typically share snippets of code using https://gist.github.com/. It 
provides syntax highlighting for Julia code, integrated comments, and a 
number of other helpful features.

Patrick


[julia-users] Re: Using Multiple DistributedArrays with map

2015-04-28 Thread Christopher Fisher
I forgot to add that when I tried to set up simpler code with only one 
array (and also no data passed through), it produced an error because it 
read the distributed array, RangeVar, as a one dimensional array instead of 
a two dimensional array. 

map(LogLikelihood,DensityVar)


Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Josh Langsfeld
Turning named functions into methods of a generic function is definitely 
something Julia should take maximum advantage of. It does look like it 
should work well in the case of 'factorize'. But I think it's a bit 
unrelated to the naming convention issue. Most, if not the vast 
majority, of the standard library functions are not going to be neatly 
separable into generic concepts.

On Tuesday, April 28, 2015 at 3:00:48 PM UTC-4, Glen H wrote:

 Having "using LinAlg" would help to clean up the namespace of available 
 functions for people who don't need it, but what I was proposing is that doing 
 "using LinAlg" shouldn't change the API much (if at all).  You still 
 want to factorize numbers and polynomials (which are not part of LinAlg). 
  The API shouldn't grow 10x because LinAlg adds a bunch of matrix types 
 that can also be factorized.  What I was proposing is actually a much 
 smaller list of functions to export (not more). 

 For example

 Using multi-dispatch:
 factorize
 factorize!

 Not using multi-dispatch:
 factor
 factorize
 factorize!
 chol
 cholfact
 cholfact!
 lu
 lufact
 ldltfact
 qr
 qrfact
 qrfact!
 bkfact
 eigfact
 hessfact
 schurfact
 svdfact
 etc.

 (I don't intend to pick on LinAlg in general...it is just the example 
 brought up by this thread.  Please pick on the using-multiple-dispatch idea in 
 general and not on specific errors/omissions in the above list.)

 I found multiple dispatch annoying at first but came around to really 
 like it.  It made my code more generic and modular, reduced the number of 
 lines (if statements disappeared), ran faster, and was more extensible. 
 I haven't used another programming language that gives such 
 enjoyment...but if people don't adopt multiple dispatch then these benefits 
 disappear.  The compound (shorthand) names will be shorter (horizontally) 
 but the code will be less generic and you will have more lines to look at 
 (vertically).  There are no guidelines for these compound names to follow, 
 so it is confusing for everyone except the people who have memorized these 
 names.



[julia-users] the state of GUI toolkits?

2015-04-28 Thread Andreas Lobinger
Hello colleagues,

what is the status of availability and use cases for GUI toolkits?

I see Tk and Gtk on pkg.julialang.org. Gtk has the tag 'doesn't load' 
from testing; Tk seems OK.
In a recent discussion here, Tim Holy mentioned testing Qwt himself, and Qt 
in general seems to be a testcase for Cxx.

Am I missing something here?

Wishing a happy day,
 Andreas




[julia-users] Re: abs inside vector norm

2015-04-28 Thread Steven G. Johnson
On Tuesday, April 28, 2015 at 3:19:26 AM UTC-4, Mauro wrote: 

 approach?  Would it not make sense to use `norm` instead of `abs` inside 
 vecnorm, as abs==norm for numbers?  (If so, what should p be for the 
 inside norms?) 


Yes, abs -> norm in these functions would make sense, and shouldn't hurt 
performance for number types (where norm should inline to abs).
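Until something like that lands, a user-side sketch (Julia 0.3 `type` syntax, matching the Point example above; `vnorm` is a hypothetical helper, not in Base): map each element through `norm` first, then take the ordinary vector p-norm of the resulting numbers:

```julia
type Point
    x::Float64
    y::Float64
end
Base.norm(pt::Point, p=2) = norm([pt.x, pt.y], p)

# norm of a vector of Points: reduce each element to a number via norm
# (inner norms use their default p), then apply the vector p-norm.
vnorm(v, p=2) = norm([norm(el) for el in v], p)

@assert norm(Point(3.0, 4.0)) == 5.0
@assert vnorm([Point(3.0, 4.0)]) == 5.0
```

This avoids overloading `abs` for a non-number type while still letting a whole vector of Points be measured.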