[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Tomas Lycken
I'd love to add non-uniform interpolation schemes of higher degree than linear 
to Interpolations.jl - the two main reasons why it hasn't been done already 
are time (focus has been on reaching feature parity with Grid.jl, which the 
package started out as a replacement for) and knowledge (I don't know of 
good algorithms for it). The second is easily solved if you can link to a 
few good papers on the subject and/or liberally licensed implementations - the 
first is best fixed with a PR :) 

// T 

[julia-users] Has module pre-compilation been back-ported to Julia 0.3.11?

2016-01-30 Thread Tero Frondelius
Do you mind me asking: why would you need an old version of Julia?

Re: [julia-users] Julia deps question

2016-01-30 Thread Yichao Yu
On Sat, Jan 30, 2016 at 2:33 PM, albapompeo wrote:
> Why is it that when building Julia, it wants to compile almost all
> dependencies itself?
> I think users should either check dependencies and build them themselves or
> install them through a package manager. It doesn't seem to me to be Julia's
> job to do this.
> I know that LLVM has some optimizations that are not yet upstreamed. But
> when they are upstreamed in the future, will Julia stop compiling LLVM then?
> What about all the other deps?

Use the `USE_SYSTEM_*` options.
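
For example, a sketch of a Make.user opting out of a few of the bundled
libraries (the full list of `USE_SYSTEM_*` flags is in Make.inc; a system
LLVM is the riskiest choice, since Julia generally needs a specific patched
version):

# Make.user - use system-provided builds of selected dependencies
USE_SYSTEM_BLAS = 1
USE_SYSTEM_LAPACK = 1
USE_SYSTEM_GMP = 1
USE_SYSTEM_MPFR = 1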


[julia-users] Re: Capturing output of interactive julia

2016-01-30 Thread Josef Sachs
> On Tue, 26 Jan 2016 07:18:26 -0800 (PST), Sebastian Nowozin said:

> for a non-programmatic way to do that (i.e. to record a single
> session) you can use the "screen" utility.  In your shell,
> run screen, then press ctrl-a H, and then run Julia.  The file is
> created in the directory you started screen in, and named screenlog.0

[This is a digression that is not related to julia.]

Thanks for the suggestion.  My question was actually prompted by the
nuisance in GNU screen that the "hardcopy" command (normally C-a h)
produces unwanted newlines in the output file when there are line wraps.
The "log" command (C-a H) doesn't do that, but it includes terminal
control sequences in the output file.  I have a perl script that
removes the terminal control sequences, but it does not satisfactorily
handle situations like zsh's bck-i-search or the julia REPL's
reverse-i-search.  (If anyone can suggest a solution for that,
I would be grateful.)

This might prove to be the impetus for me to finally switch from
screen to tmux, whose capture-pane command joins the wrapped lines
when invoked with the -J argument.
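
For reference, a minimal sketch with a reasonably recent tmux (flags per its
man page: -p prints to stdout, -S - captures from the start of the history,
-J joins wrapped lines), run from inside the session:

tmux capture-pane -J -p -S - > julia-session.txt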


Re: [julia-users] Embarrassingly parallel workload

2016-01-30 Thread Christopher Alexander
I have tried the construction below with no success.  In v0.4.3, I end up 
getting a segmentation fault.  On the latest v0.5.0, the run time is 3-4x 
that of the non-parallelized version, and the array constructed is vastly 
different from the one constructed using the non-parallelized code. 
 Below is the C++ code that I am essentially trying to emulate:

void TreeLattice::stepback(Size i, const Array& values,
                           Array& newValues) const {
    #pragma omp parallel for
    for (Size j=0; j<this->impl().size(i); j++) {
        Real value = 0.0;
        for (Size l=0; l<n; l++) {  // inner bound garbled in the archive; n stands for the branch count
            value += this->impl().probability(i,j,l) *
                     values[this->impl().descendant(i,j,l)];
        }
        value *= this->impl().discount(i,j);
        newValues[j] = value;
    }
}

The calls to probability, descendant, and discount all end up accessing 
data in other objects, so I tried to prepend those function and type 
definitions with @everywhere.  However, that started me on a long chain of 
eventually having to wrap each file in my module in @everywhere, and there 
were still errors complaining about things not being defined.  At this 
point I am really confused as to how to construct what would appear to be a 
rather simple parallelized for loop that generates the same results as the 
non-parallelized code.  I've pored over both this forum and other 
resources, and nothing has really worked.
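
If I understand the docs right, the one-shot alternative to tagging each
definition would be something like this (untested sketch; the include path is
whatever file pulls in the whole module):

addprocs(3)                           # workers must exist before code is loaded
@everywhere include("QuantJulia.jl")  # load everything on every process at once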

Any help would be appreciated.

Thanks!

Chris


On Thursday, August 20, 2015 at 4:52:52 AM UTC-4, Nils Gudat wrote:
>
> Sebastian, I'm not sure I understand you correctly, but point (1) in your 
> list can usually be taken care of by wrapping all the necessary 
> usings/requires/includes and definitions in a @everywhere begin ... end 
> block.
>
> Julio, as for your original problem, I think Tim's advice about 
> SharedArrays was perfectly reasonable. Without having looked at your 
> problem in detail, I think you should be able to do something like this 
> (and I also think this gets close enough to what Sebastian was talking 
> about, and to Matlab's parfor, unless I'm completely misunderstanding your 
> problem):
>
> nprocs()==CPU_CORES || addprocs(CPU_CORES-1)
> results = SharedArray(Float64, (m,n))
>
> @sync @parallel for i = 1:n
> results[:, i] = complicatedfunction(inputs[i])
> end
>


[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2016-01-30 Thread Andrew
I just ran several of these benchmarks using the code and compilation flags 
available 
at https://github.com/jesusfv/Comparison-Programming-Languages-Economics . 
On my computer Julia is faster than C, C++, and Fortran, which I find 
surprising, unless some really dramatic optimization happened since 0.2.

My results are, on a Linux machine:

Julia 0.4.2: 1.44s
Julia 0.3.13: 1.60s
C, gcc 4.8.4: 1.65s
C++, g++: 1.64s
Fortran, gfortran 4.8.4: 1.65s
Matlab R2015b: 5.65s
Matlab R2015b, Mex inside loop: 1.83s
Python 2.7: 50.9s
Python 2.7, Numba: 1.88s (with warmup)

It's possible there's something bad about my configuration, as I don't 
normally use C and Fortran. In the paper their C/Fortran code runs in 0.7s; 
I don't think their computer is twice as fast as mine, but maybe it is. 

On Monday, June 16, 2014 at 11:52:07 AM UTC-4, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macro economics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. I was a bit puzzled by the result, 
> since in the benchmarks on http://julialang.org/, the slowest test is 
> 1.66 times C. I realize that those benchmarks can't cover all possible 
> situations. That said, I couldn't really find anything unusual in the Julia 
> code, did some profiling and removed type inference, but still that's as 
> fast as I got it. That's not to say that I'm disappointed, I still think 
> this is great. Did I miss something obvious here or is there something 
> specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>

Re: [julia-users] Embarrassingly parallel workload

2016-01-30 Thread Christopher Alexander
By "construction below", I mean this:

results = SharedArray(Float64, (m,n))

@sync @parallel for i = 1:n
results[:, i] = complicatedfunction(inputs[i])
end

On Saturday, January 30, 2016 at 2:31:40 PM UTC-5, Christopher Alexander 
wrote:
> [...]

[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Lutfullah Tomak
I did not pay attention to the stackoverflow post. There, all the code is 
wrapped in a function. However, I was talking about the examples here, as in

for banana=1:NoIter
    xprime=ones(Nalal,Naa)
    W_temp = spl(xprime[:])
end

If all the code is run as shown in the example here, then Nalal and Naa are 
going to be checked for their types and `ones` will be dispatched accordingly 
on each iteration of the loop.
Having looked at the stackoverflow post, there are some differences.
I cannot try the code right now. I wish I could say something from a practical 
perspective by actually running it.

Regards

[julia-users] Julia deps question

2016-01-30 Thread albapompeo
 

Why is it that when building Julia, it wants to compile almost all 
dependencies itself?
I think users should either check dependencies and build them themselves or 
install them through a package manager. It doesn't seem to me to be Julia's 
job to do this.
I know that LLVM has some optimizations that are not yet upstreamed. But 
when they are upstreamed in the future, will Julia stop compiling LLVM 
then? What about all the other deps? 


Re: [julia-users] Julia deps question

2016-01-30 Thread Jeff Bezanson
Yes, you can use the USE_SYSTEM_* build options for this. Otherwise,
building everything ourselves is the best way to guarantee a working
system, with compatible versions of everything. In particular, LLVM
changes very significantly from version to version, and in general we
need a specific version. (It also helps to have our local patches that
fix bugs in LLVM we happen to hit.) In some cases distro maintainers
have literally refused to provide a package for the version of LLVM we
need, instead arbitrarily picking one version that will be available
on their system.

On Sat, Jan 30, 2016 at 2:45 PM, Yichao Yu  wrote:
> On Sat, Jan 30, 2016 at 2:33 PM, albapompeo wrote:
>> Why is it that when building Julia, it wants to compile almost all
>> dependencies itself?
>> I think users should either check dependencies and build them themselves or
>> install them through a package manager. It doesn't seem to me to be Julia's
>> job to do this.
>> I know that LLVM has some optimizations that are not yet upstreamed. But
>> when they are upstreamed in the future, will Julia stop compiling LLVM then?
>> What about all the other deps?
>
> Use the `USE_SYSTEM_*` options.


Re: [julia-users] Re: Retrieving documentation for types and their constructors

2016-01-30 Thread Tim Holy
Yeah. This was on 0.4, and it sounds like the change does what I was after. 
Thanks!

--Tim

On Saturday, January 30, 2016 08:39:00 AM Michael Hatherly wrote:
> Tim, is this with 0.4 or 0.5? The behaviour of type/constructor docs was
> changed quite recently
> to be more similar to that of function/method docs. On 0.5 `help?> Foo`
> should concatenate all
> the docstrings for `Foo` and its constructors. Is that the behaviour
> you're looking for?
> 
> -- Mike
> 
> On Saturday, 30 January 2016 14:26:25 UTC+2, Tim Holy wrote:
> > [...]



Re: [julia-users] Re: Crashing while parsing large XML file

2016-01-30 Thread Brandon Booth
I'm a moron, but that's a different issue. I fixed the readline/eachline 
issue, but that didn't address the crashing problem. I did some 
experimenting though and I think I fixed the problem. 

I added free(str) at the end of each loop to free up the memory from 
parse_string. I parsed each line, and for some reason my program was hanging 
onto the results, so the memory usage was slowly creeping up until the 
program crashed. Adding free(str) kept the memory usage flat and let it run 
through the entire file.
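
For anyone who hits this later, the pattern that fixed it (assuming this is 
LightXML's parse_string - the document it returns wraps memory owned by 
libxml2, outside Julia's garbage collector):

using LightXML
for line in eachline(f)
    str = parse_string(line)   # allocates a document on the C side
    r = root(str)
    # ... copy name(r)/content(r) into the dataframe ...
    free(str)                  # release the C-side memory every iteration
end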



On Thursday, January 28, 2016 at 3:38:45 PM UTC-5, Stefan Karpinski wrote:
>
> At best, you'll only see every other line, right? At worst, eachline may 
> do some IO lookahead (i.e. read one line ahead) and this will do something 
> even more confusing.
>
> On Thu, Jan 28, 2016 at 3:35 PM, Brandon Booth wrote:
>
>> No real reason. I was going back and forth between eachline(f) and for i 
>> = 1:n to see if it worked for 1000 rows, then 10,000 rows, etc. I ended up 
>> with a hybrid of the two. Will that matter much?
>>
>>
>> On Thursday, January 28, 2016 at 1:32:09 PM UTC-5, Diego Javier Zea wrote:
>>>
>>> Hi! 
>>>
>>> Why you are using 
>>>
>>> for line in eachline(f)  l = readline(f)
>>>
>>>
>>> instead of
>>>
>>> for l in eachline(f)
>>>
>>>
>>> ?
>>>
>>> Best
>>>
>>> El jueves, 28 de enero de 2016, 12:42:35 (UTC-3), Brandon Booth escribió:

 I'm parsing an XML file that's about 30gb and wrote the loop below to 
 parse it line by line. My code cycles through each line and builds a 1x200 
 dataframe that is appended to a larger dataframe. When the larger 
 dataframe 
 gets to 1000 rows I stream it to an SQLite table. The code works for the 
 first 25 million or so lines (which equates to 125,000 or so records in 
 the 
 SQLite table) and then freezes. I've tried it without the larger dataframe 
 but that didn't help.

 Any suggestions to avoid crashing?

 Thanks.

 Brandon



 The XML structure:
 <record>
 <field1>value</field1>
 <field2>value</field2>
 ...
 </record>
 <record>
 <field1>value</field1>
 <field2>value</field2>
 ...
 </record>


 My loop:

 f = open("contracts.xml","r")readline(f)n = countlines(f)tic()for line in 
 eachline(f)  l = readline(f)  if startswith(l,">> append!(df1,df)if size(df1,1) == 1000  source = convertdf(df1) 
  Data.stream!(source,sink)  deleterows!(df1,1:1000)end  else
 str = parse_string(l)r = root(str)df[symbol(name(r))] = 
 string(content(r))  endend

 close(f)


>

[julia-users] Re: Retrieving documentation for types and their constructors

2016-01-30 Thread Michael Hatherly
Tim, is this with 0.4 or 0.5? The behaviour of type/constructor docs was 
changed quite recently
to be more similar to that of function/method docs. On 0.5 `help?> Foo` 
should concatenate all
the docstrings for `Foo` and its constructors. Is that the behaviour 
you're looking for?

-- Mike

On Saturday, 30 January 2016 14:26:25 UTC+2, Tim Holy wrote:
> [...]

[julia-users] Multiple dispatch doesn't work for named parameters?

2016-01-30 Thread Daniel Carrera
This is very weird. It looks like multiple dispatch doesn't work at least 
in some cases. Look:

julia> semimajor_axis(;M=1,P=10) = (P^2 * M)^(1/3)
semimajor_axis (generic function with 1 method)

julia> semimajor_axis(;α=25,mp=1,M=1,d=10) = α * d * M/mp / 954.9
semimajor_axis (generic function with 1 method)

julia> semimajor_axis(M=3,P=10)
ERROR: unrecognized keyword argument "P"


I do understand that it may be risky to have two functions with the same 
name and different named parameters (I really wish Julia allowed me to make 
some/all named parameters mandatory), but I was expecting that my code 
would work. I clearly defined a version of the function that has a named 
parameter called "P".

Does anyone know why this happens and what I can do to fix it? Am I 
basically required to either use different function names, or give up on 
using named parameters for everything?

Cheers,
Daniel.


[julia-users] deep learning for regression?

2016-01-30 Thread michael . creel
I'm interested in using neural networks (deep learning) for multivariate 
multiple regression, with multiple real valued inputs and multiple real 
valued outputs. At the moment, the mocha.jl package looks very promising, 
but the examples seem to be all for classification problems. Does anyone 
have examples of use of mocha (or other deep learning packages for Julia) 
for regression problems? Or any tips for deep learning and regression?


Re: [julia-users] Re: ANN: Julia "lite" branch available

2016-01-30 Thread Scott Jones
Thank you also for your interest!  Pixie also looks interesting, with its 
(limited - only the first two arguments, for math operators) polymorphic 
dispatch and JITing,
although I think my interest in new Lisps has been caught recently by: 
https://github.com/swadey/LispSyntax.jl and 
https://github.com/swadey/LispREPL.jl
This looks like it would give Gerald Sussman what he was pestering Jeff 
for after his thesis defense last year.

(There are really too darn many interesting things popping up these days - 
it's really hard to be able to sleep!)
Thanks for pointing Pixie out!

On Saturday, January 30, 2016 at 1:14:12 AM UTC-5, cdm wrote:
>
>
> intriguing ... Thank You, Scott !
>
> possibly of interest:
>
>
> https://groups.google.com/forum/#!searchin/julia-users/alternate$20lisp/julia-users/am8opcv-5Mc/UdXyususBwAJ
>
>
> enjoy !!!
>


[julia-users] Re: Multiple dispatch doesn't work for named parameters?

2016-01-30 Thread Andrew
I see in the manual 
(http://docs.julialang.org/en/latest/manual/methods/#note-on-optional-and-keyword-arguments) 
that:

"Keyword arguments behave quite differently from ordinary positional 
arguments. In particular, they do not participate in method dispatch. 
Methods are dispatched based only on positional arguments, with keyword 
arguments processed after the matching method is identified."

I don't think you defined 2 different methods. You defined one method 
taking no positional arguments and a bunch of keyword arguments. Then you 
overwrote that method.
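
A sketch of the usual workaround - put whatever should drive dispatch in 
positional arguments (signatures just illustrative, reusing your formulas):

# two genuinely different methods; dispatch happens on the positional part
semimajor_axis(M::Real, P::Real) = (P^2 * M)^(1/3)
semimajor_axis(α::Real, mp::Real, M::Real, d::Real) = α * d * M/mp / 954.9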

On Saturday, January 30, 2016 at 11:37:51 AM UTC-5, Daniel Carrera wrote:
> [...]


[julia-users] Re: SharedArray / parallel question

2016-01-30 Thread Christopher Alexander
I can confirm that I do not see the seg fault issue in 0.5 (the latest 
master), but I am getting fundamentally different results when using the 
@sync @parallel construction.  In essence, @sync @parallel is causing 
arrays of different values (compared to using a non-parallelized 
construction) to be built, which is causing an issue further along in my 
program.  It is also much slower, so I am wondering if my syntax is 
incorrect.

Any input would be appreciated.  You can see what is supposed to be 
generated by loading this script 
(https://github.com/pazzo83/QuantJulia.jl/blob/master/swaption_test_code.jl) 
and calling main().
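
In case my syntax is the problem, one alternative I may try is pmap, which 
sidesteps SharedArray entirely (a sketch with hypothetical names; f and 
inputs must be defined on every worker):

results = hcat(pmap(f, inputs)...)   # one column per input, an m-by-n Matrix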

Thanks!

On Saturday, January 30, 2016 at 4:48:51 AM UTC-5, Lutfullah Tomak wrote:
>
> There is this issue on github 
> https://github.com/JuliaLang/julia/issues/14764 . I am no expert on 
> parallel computing, but it may be related.
>
> Regards
>


[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread pokerhontas2k8
Ok. My original code certainly spends most of the time on looping the 
interpolation. In that sense, the example I post here is similar to the 
original code and highlights the problem I am facing, I think. Fwiw, I wrap 
the original code in a function. I also do it above, at least for the 
performance-relevant part I guess. If I do it for the whole code above -- 
no difference. 

The first time I translated the code from MATLAB to Julia, I actually 
computed the expectation point by point with 2 double loops (or also with 1 
loop and then reshaping) -- it was slower than vectorizing. (see also the 
stackoverflow post) . 


@Lutfullah Tomak: I don't really get that point, maybe I am 
misunderstanding something but the innermost part of the loop is wrapped in 
a function, so there should be no global variable?! 


[julia-users] Re: Escher example problem

2016-01-30 Thread Leonardo
Hi,
also with the new version of Escher (0.3.1) and the latest version of Gadfly, 
I obtain the following error running *plotting.jl*:

MethodError: `drawing` has no method matching 
drawing(::Measures.Length{:mm,Float64}, ::Measures.Length{:mm,Float64}, 
::Gadfly.Plot)
Closest candidates are:
  drawing(::Any, ::Any)
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Escher\src\cli\serve.jl:170
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:15
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in splitquery at C:\Users\Leonardo\.julia\v0.4\Mux\src\basics.jl:28
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in wcatch at 
C:\Users\Leonardo\.julia\v0.4\Mux\src\websockets_integration.jl:12
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in todict at C:\Users\Leonardo\.julia\v0.4\Mux\src\basics.jl:21
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:12 (repeats 2 
times)
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\server.jl:38
 in handle at C:\Users\Leonardo\.julia\v0.4\WebSockets\src\WebSockets.jl:382
 in on_message_complete at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\HttpServer.jl:393
 in on_message_complete at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\RequestParser.jl:104
 in http_parser_execute at 
C:\Users\Leonardo\.julia\v0.4\HttpParser\src\HttpParser.jl:92
 in process_client at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\HttpServer.jl:365
 in anonymous at task.jl:447

and also when running the example *mc.jl* I obtain an error:

MethodError: `convert` has no method matching convert(::Type{Escher.Tile}, 
::Gadfly.Plot)
This may have arisen from a call to the constructor Escher.Tile(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert(::Type{Escher.Tile}, !Matched::AbstractString)
  convert(::Type{Escher.Tile}, !Matched::Char)
  ...
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Escher\src\cli\serve.jl:170
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:15
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in splitquery at C:\Users\Leonardo\.julia\v0.4\Mux\src\basics.jl:28
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in wcatch at 
C:\Users\Leonardo\.julia\v0.4\Mux\src\websockets_integration.jl:12
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in todict at C:\Users\Leonardo\.julia\v0.4\Mux\src\basics.jl:21
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:12 (repeats 2 
times)
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
 in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\server.jl:38
 in handle at C:\Users\Leonardo\.julia\v0.4\WebSockets\src\WebSockets.jl:382
 in on_message_complete at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\HttpServer.jl:393
 in on_message_complete at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\RequestParser.jl:104
 in http_parser_execute at 
C:\Users\Leonardo\.julia\v0.4\HttpParser\src\HttpParser.jl:92
 in process_client at 
C:\Users\Leonardo\.julia\v0.4\HttpServer\src\HttpServer.jl:365
 in anonymous at task.jl:447

It seems that the examples that use Gadfly or Compose have some problems.

Any suggestion is appreciated.

Leonardo


Il giorno lunedì 11 gennaio 2016 08:39:17 UTC+1, Leonardo ha scritto:
>
> Hi, I've a problem running Escher's example plotting.jl caused by Gadfly:
>
> WARNING: using Gadfly.render in module Main conflicts with an existing 
> identifier.
> WARNING: Method definition convert(Type{Escher.Tile}, Gadfly.Plot) in 
> module Escher at C:\Users\Leonardo\.julia\v0.4\Escher\src\basics/lazyload.
> jl:73 overwritten in module Main at C:\Users\Leonardo\.julia\v0.4\Escher\
> src\basics\lazyload.jl:73.
> WARNING: Error requiring DataFrames from Main:
> LoadError: error in method definition: function Escher.render must be 
> explicitly imported to be extended
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:304
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:60
>  in err at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:47
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:59
>  in withpath at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:37
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:58
>  in listenmod at C:\Users\Leonardo\.julia\v0.4\Requires\src\require.jl:21
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:304
>  in external_setup at C:\Users\Leonardo\.julia\v0.4\Escher\src\Escher.jl:
> 53
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:304
>  in loadfile at C:\Users\Leonardo\.julia\v0.4\Escher\src\cli\serve.jl:17
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Escher\src\cli\serve.jl:164
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:15
>  in anonymous at C:\Users\Leonardo\.julia\v0.4\Mux\src\Mux.jl:8
> 

[julia-users] Re: write julia console output in a text file

2016-01-30 Thread SundaraRaman R
I'm interpreting "julia console" to mean the Julia REPL, in which case this 
recent thread seems to be asking the same question: 
https://groups.google.com/forum/#!topic/julia-users/gVyNkJT6ej0 and has two 
decent suggestions on how to do it. 

Or, depending on your exact needs, you can always use Jupyter instead of 
the command-line REPL, and save the notebook to the disk.
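
Another non-programmatic option on Unix-like systems is the script(1) 
utility: run

script julia-session.txt

then start julia inside the recorded shell and exit when done; the whole 
session ends up in julia-session.txt (including terminal escape sequences, 
the same caveat as screen's log).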

On Friday, January 29, 2016 at 9:49:20 PM UTC+5:30, Michela Di Lullo wrote:
>
> How do I write the julia console output in a txt file? 
>
> Thanks 
>
> Michela
>


[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Andrew
When I say Dierckx isn't a bottleneck for me, I mean my own code spends 
most of its time doing things other than interpolation, like solving 
non-linear equations and other calculations. All your loop does is 
interpolate, so there it must be the bottleneck. 

For the expectation, you can reuse the same vector. You could also 
devectorize it and compute the expectation point by point, though I don't 
know if this is any faster.

Maybe you have problems with unnecessary allocation and global variables in 
the original code.
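
Roughly what I mean, taking your performance2 as the starting point (untested 
sketch; evaluate is Dierckx's point-wise call, and fill! stands in for 
whatever really updates xprime each iteration):

function performance3(spl, xprime, W_temp, NoIter::Int64)
    for banana = 1:NoIter
        fill!(xprime, 1.0)                       # reuse the arrays, don't re-allocate
        for k = 1:length(xprime)
            W_temp[k] = evaluate(spl, xprime[k]) # point by point, no temporaries
        end
    end
    W_temp
end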

On Saturday, January 30, 2016 at 8:59:27 AM UTC-5, 
pokerho...@googlemail.com wrote:
> [...]

Re: [julia-users] are tasks threads in 0.4?

2016-01-30 Thread andrew cooke

i guess that makes sense.  thanks.  it's not clear to me why there's a 
deadlock but the code looks pretty ugly.  i'll try simplifying it and see 
how it goes.  andrew
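
for reference, a minimal sketch of a pure-task deadlock - no OS threads 
involved, the task just blocks on a condition nothing will ever notify:

c = Condition()
wait(c)   # parks this task forever; with nothing else runnable, julia sits idle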

On Saturday, 30 January 2016 02:05:15 UTC-3, Yichao Yu wrote:
>
> On Fri, Jan 29, 2016 at 10:53 PM, andrew cooke wrote: 
> > 
> > i've been away from julia for a while so am not up-to-date on changes, 
> and 
> > am looking at an odd problem. 
> > 
> > i have some code, which is messier and more complex than i would like, 
> which 
> > is called to print a graph of values.  the print code uses tasks.  in 
> 0.3 
> > this works, but in 0.4 the program sits, using no CPU. 
> > 
> > if i dump the stack (using gstack PID) i see: 
> > 
> > Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 1 (Thread 0x7f0044710740 (LWP 1708)): 
> > #0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0 
> > #1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364 
> > #2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286 
> > #3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23 
> > #4  0x7efe3ecdbeb7 in ?? () 
> > #5  0x7ffd3e6ad2c0 in ?? () 
> > #6  0x in ?? () 
> > 
> > which looks suspiciously like some kind of deadlock. 
> > 
> > but i am not using threads, myself.  just tasks. 
>
> Tasks are not threads. You can see the threads are started by openblas. 
>
> IIUC tasks can have dead lock too, depending on how you use it. 
>
> > 
> > hence the question.  any pointers appreciated. 
> > 
> > thanks, 
> > andrew 
> > 
>


[julia-users] Re: Multiple dispatch doesn't work for named parameters?

2016-01-30 Thread Daniel Carrera
Oh, I see. Thanks. At least now I know.

Cheers,
Daniel.

On Saturday, 30 January 2016 17:50:18 UTC+1, Yichao Yu wrote:
>
> This is documented[1]. Not sure if there's a plan to change that. 
>
> [1] 
> http://julia.readthedocs.org/en/latest/manual/methods/?highlight=keyword#note-on-optional-and-keyword-arguments
>  
>
>
On Saturday, 30 January 2016 17:54:07 UTC+1, Andrew wrote:
> [...]
>


Re: [julia-users] Multiple dispatch doesn't work for named parameters?

2016-01-30 Thread Yichao Yu
On Sat, Jan 30, 2016 at 11:37 AM, Daniel Carrera wrote:
> [...]

This is documented[1]. Not sure if there's a plan to change that.

[1] 
http://julia.readthedocs.org/en/latest/manual/methods/?highlight=keyword#note-on-optional-and-keyword-arguments



[julia-users] Re: deep learning for regression?

2016-01-30 Thread Cedric St-Jean
AFAIK deep learning in general does not have any problem with redundant 
inputs. If you have fewer nodes in your first layer than input nodes, then 
the redundant (or nearly-redundant) input nodes will be combined into one 
node (... more or less). And there are approaches that favor using 
so-called overcomplete representations, with more hidden nodes per layer than 
input nodes.

Cédric

On Saturday, January 30, 2016 at 9:46:06 AM UTC-5, michae...@gmail.com 
wrote:
> [...]
>>

Re: [julia-users] Re: ANN: Julia "lite" branch available

2016-01-30 Thread Scott Jones
Thanks for your interest!
I had to rerun everything since so much has been changing in base (yeah 
#13412!) recently, but now I've got some new numbers.
(Remember, this still isn't as "lite" as I think it can be made and still 
be a very useful system)

Julia Version 0.5.0-dev+9834
Commit 7f205e5 (2016-01-30 03:20 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin15.4.0)
  CPU: Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz

✔ /j/julia-lite [spj/lite|✔]
Built doing:
git clone https://github.com/ScottPJones/julia julia-lite ; git checkout spj/lite ; cd julia-lite ; time make
real 28m58.903s
user 26m5.807s
sys 2m11.073s
usr directory -> 167 MB
maxrss() right after start: 113.6 MB
-rw-r--r--  1 scott  staff  23541200 Jan 30 00:07 sys.o

make clean ; make ->
real 5m9.700s
user 5m3.185s
sys 0m4.809s

✔ /j/julia-full [master|✔]
Built doing:
git clone https://github.com/ScottPJones/julia julia-full ; cd julia-full ; time make
real 44m17.359s
user 72m7.969s
sys 10m52.074s
usr directory -> 272.4 MB
maxrss() right after start: 189 MB
-rw-r--r--  1 scott  staff  37088452 Jan 30 00:56 sys.o

make clean ; make ->
real 5m43.646s
user 5m39.142s
sys 0m4.167s

After a make clean, the full build was about 11% slower. Most of the time on 
the "lite" system was actually spent building LLVM, and LLVM plus all of the 
big libraries such as BLAS, LAPACK, etc. took almost all of the time for a 
normal build.




On Friday, January 29, 2016 at 11:02:33 PM UTC-5, Jeff Bezanson wrote:
>
> This is interesting, and a good starting point for refactoring our
> large Base library. Any fun statistics, e.g. build time and system
> image size for the minimal version?
>
> On Fri, Jan 29, 2016 at 3:00 PM, Scott Jones wrote:
> > I've updated the branch again (after tracking down and working around an
> > issue introduced with #13412),
> > had to get that great jb/function PR in!
> > All unit tests pass.
> >
> > On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Scott Jones wrote:
> >>
> >> This is still a WIP, and can definitely use some more work in 1) testing
> >> on other platforms 2) better disentangling of documentation 3) advice on
> >> how better to accomplish its goals. 4) testing with different subsets of
> >> functionality turned on (I've tested just with BUILD_FULL disabled ("lite"
> >> version), or enabled (same as master) so far.
> >>
> >> This branch (spj/lite in ScottPJones repository,
> >> https://github.com/ScottPJones/julia/tree/spj/lite) by default will build a
> >> "lite" version of Julia, and by putting
> >> override BUILD_xxx = 1
> >> lines in Make.user, different functionality can be built back in (such as
> >> BigInt, BigFloat, LinAlg, Float16, Mmap, Threads, ...).  See Make.inc for
> >> the full list.
> >>
> >> I've also made it so that all unit tests pass (that don't use disabled
> >> functionality).
> >> (the hard part there was that testing can be spread all over the place,
> >> esp. for BigInt, BigFloat, Complex, and Rational types).
> >>
> >> It will also not build libraries such as arpack, lapack, openblas, fftw,
> >> suitesparse, mpfr, gmp, depending on what BUILD_* options have been set.
> >>
> >> This is only a first step, the real goal is to be able to have a minimal
> >> useful core, that can have the other parts easily added, in such a way that
> >> they still appear to have been defined completely in Base.
> >> One place where I think this can be very useful is for building minimal
> >> versions of Julia to run on things like the Raspberry Pi.
> >>
> >> -Scott


[julia-users] Re: deep learning for regression?

2016-01-30 Thread Jason Eckstein
I've been using NN for regression and I've experimented with Mocha.  I 
ended up coding my own network for speed purposes but in general you simply 
leave the final output of the neural network as a linear combination 
without applying an activation function.  That way the output can represent 
a real number rather than compress it into a 0 to 1 or -1 to 1 range for 
classification.  You can leave the rest of the network unchanged.
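
Concretely, a sketch of that in Mocha (from memory, so treat the layer names 
and arguments as assumptions to check against the Mocha docs; X is 
features-by-samples, y is outputs-by-samples):

using Mocha

backend = CPUBackend()
init(backend)

data = MemoryDataLayer(name="data", data=Array[X, y], batch_size=64,
                       tops=[:data, :label])
ip1  = InnerProductLayer(name="ip1", output_dim=50, neuron=Neurons.ReLU(),
                         bottoms=[:data], tops=[:ip1])
pred = InnerProductLayer(name="pred", output_dim=size(y, 1),
                         bottoms=[:ip1], tops=[:pred])  # no neuron => linear output
loss = SquareLossLayer(name="loss", bottoms=[:pred, :label])

net = Net("regression-net", backend, [data, ip1, pred, loss])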

On Saturday, January 30, 2016 at 3:45:27 AM UTC-7, michae...@gmail.com 
wrote:
>
> I'm interested in using neural networks (deep learning) for multivariate 
> multiple regression, with multiple real valued inputs and multiple real 
> valued outputs. At the moment, the mocha.jl package looks very promising, 
> but the examples seem to be all for classification problems. Does anyone 
> have examples of use of mocha (or other deep learning packages for Julia) 
> for regression problems? Or any tips for deep learning and regression?
>


[julia-users] IBM LinuxONE ...

2016-01-30 Thread cdm

https://developer.ibm.com/linuxone/resources/


anyone running Julia there yet ... ?

an Ubuntu machine is due to
be available later in Q1 2016 ...


[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread pokerhontas2k8
@Tomas: maybe check out Numerical Recipes in C: The Art of Scientific 
Computing, 2nd edition. There is also an edition for Fortran. The code that 
I use in C is basically from there. 

@Andrew: The xprime needs to be in the loop. I just made it ones to 
simplify, but normally it changes every iteration. (In the DP problem, the 
loop is calculating an expectation and xprime is the possible future value 
of the state variable for each state of the world). Concerning the Dierckx 
package. I don't know about the general behaviour but for my particular 
problem (irregular grid + cubic spline) it is very slow. Run the following 
code:

using Dierckx

spacing=1.5
Nxx = 300
Naa = 350
Nalal = 200
sigma = 10
NoIter = 1

xx=Array(Float64,Nxx)
xmin = 0.01
xmax = 400
xx[1] = xmin
for i=2:Nxx
    xx[i] = xx[i-1] + (xmax-xx[i-1])/((Nxx-i+1)^spacing)
end

f_util(c) =  c.^(1-sigma)/(1-sigma)
W=Array(Float64,Nxx,1)
W[:,1] = f_util(xx)


spl = Spline1D(xx,W[:,1])

function performance2(NoIter::Int64)
    W_temp = Array(Float64,Nalal*Naa)
    W_temp2 = Array(Float64,Nalal,Naa)
    xprime = Array(Float64,Nalal,Naa)
    for banana = 1:NoIter
        xprime = ones(Nalal,Naa)
        W_temp = spl(xprime[:])
    end
    W_temp2 = reshape(W_temp,Nalal,Naa)
end

@time performance2(1)

30.878093 seconds (100.01 k allocations: 15.651 GB, 2.19% gc time)



That's why I went on and asked my friend to help me out in the first place. 
I  think the mnspline is really (not saying it's THE fastest) fast in doing 
the interpolation itself (magnitudes faster than MATLAB). But then I just 
don't understand how MATLAB can catch up by just looping through the same 
operation over and over. Intuitively (maybe I'm wrong) it should be 
somewhat proportional. If my code in Julia is 10 times faster within a 
loop, and then I just repeat the operation in that particular loop very 
often, how can it turn out to be only equally fast as MATLAB. Again, the 
mnspline uses all my threads maybe it has something to do with overhead, 
whatever. I don't know, hints appreciated. 



[julia-users] Retrieving documentation for types and their constructors

2016-01-30 Thread Tim Holy
Do we have an easy way to retrieve documentation for constructors separately 
from the documentation of a type? I spent a few minutes looking at the source 
and searching issues, and got to the point where I thought it was better to 
ask before, say, considering a PR.

Illustration:

"""
Foo is a type that represents some amazing stuff
"""
type Foo
end

"""
`foo = Foo(7)` creates an empty `Foo` instance with room for 7 LittleFoos 
in it.
"""
Foo(n::Integer) = nothing

"""
`foo = Foo(x, y)` puts the LittleFoos `x` and `y` into a grown-up Foo.
"""
Foo(x, y) = nothing

Now let's try it:

help?> Foo
search: Foo floor ifloor pointer_from_objref OverflowError RoundFromZero 
FileMonitor functionloc functionlocs StackOverflowError Factorization 
OutOfMemoryError

  Foo is a type that represents some amazing stuff

Now, if I know how to call the constructor I want, then it's no problem to 
retrieve the documentation:

help?> Foo(7)
  foo = Foo(7) creates an empty Foo instance with room for 7 LittleFoos in 
it.

But what if I don't know my choices? Here are two things I tried:

julia> @eval @doc $(methods(Foo))

help?> call(Type{Foo})
  call(x, args...)

  If x is not a Function, then x(args...) is equivalent to call(x, args...). 
This means that function-like behavior can be added to any type by defining new 
call methods.

Neither produced useful results. Any thoughts?

Best,
--Tim



[julia-users] Re: SharedArray / parallel question

2016-01-30 Thread Lutfullah Tomak
There is this issue on github 
https://github.com/JuliaLang/julia/issues/14764 . I am no expert on parallel 
computing, but it may be related.

Regards

[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Lutfullah Tomak
If you do not change the length of xprime or use it later for other purposes, 
then just update the existing array instead of re-allocating it each time. 
Also, using global variables in the innermost loop is very inefficient in 
Julia. It would be good to revise the code in light of the tips from the docs: 
http://docs.julialang.org/en/release-0.4/manual/performance-tips/

[julia-users] Re: deep learning for regression?

2016-01-30 Thread michael . creel
Thanks, that's pretty much my understanding. Scaling the inputs seems to be 
important, too, from what I read. I'm also interested in a framework that 
will trim off redundant inputs. 

I have run the mocha tutorial examples, and it looks very promising because 
the structure is clear, and there are C++ and cuda backends. The C++ 
backend, with openmp, gives me a good performance boost over the pure Julia 
backend. However, I'm not so sure that it will allow for trimming redundant 
inputs. Also, I have some ideas on how to restrict the net to remove 
observationally equivalent configurations, which should aid in training, 
and I don't think I could implement those ideas with mocha.

From what I see, the focus of much recent work in neural nets seems to be 
on classification and labeling of images, and regression examples using the 
modern tools seem to be scarce. I'm wondering if that's because other tools 
work better for regression, or simply because it's an old problem that is 
considered to be well studied. I would like to see some examples of 
regression nets that work well, using the modern tools, though, if there 
are any out there.

On Saturday, January 30, 2016 at 2:32:16 PM UTC+1, Jason Eckstein wrote:
> [...]
>

Re: [julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Isaiah Norton
>
> Numerical Recipes in C: The Art of Scientific Computing, 2nd edition.


Please note that code derived from this book cannot be included in BSD,
LGPL, or GPL licensed libraries (which is to say, most Julia packages).
Distribution is restricted to compiled binaries only, with commercial and
non-commercial license variants. Source redistribution is prohibited.

Numerical Recipes should not be relied on for any implementations that are
submitted to Julia core or the open source Julia package ecosystem.

On Sat, Jan 30, 2016 at 8:59 AM, pokerho...@googlemail.com wrote:

> [...]
>


Re: [julia-users] Re: deep learning for regression?

2016-01-30 Thread Christof Stocker
I am happy to see people interested in messing around with Julia for ML. 
The best way to wrap your head around the concepts is usually to try it 
out and see what happens.


My 2 cents are that I doubt that you will get competitive results with 
neural networks for your regression problems (even with the modern 
trickeries such as ReLU-variant hidden layers). Two of the strengths of 
current deep learning are convolutions and very large datasets. This 
allows them to handle high dimensional data like no other approach. I 
suspect that is why you usually observe NNs in image, text and speech 
classification tasks.


I haven't read any recent papers on the NN regression front, but from my 
limited personal experience I would say that neural networks are just 
too flexible to be useful for datasets that are smaller than a few 
thousand observations (I am sure there are some data size guidelines in 
the upcoming DL book).


Concerning DL in Julia, there is also https://github.com/dmlc/MXNet.jl. 
Although I haven't tried it for regression.


On 2016-01-30 18:13, Cedric St-Jean wrote:
[...]





[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2016-01-30 Thread Scott Jones


On Saturday, January 30, 2016 at 3:12:44 PM UTC-5, Andrew wrote:
>
> I just ran several of these benchmarks using the code and compilation 
> flags available at 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics . 
> On my computer Julia is faster than C, C++, and Fortran, which I find 
> surprising, unless some really dramatic optimization happened since 0.2.
>

In the 9 months since I learned about Julia, I have seen major 
improvements in different areas of performance (starting with v0.3.4; I'm 
now using v0.4.3 for work, and v0.5 master for fun).
Just look at the speedup from the *very* recent merge of 
https://github.com/JuliaLang/julia/pull/13412, wonderful stuff. Warts and 
rough edges are getting removed also, it just keeps getting better.


[julia-users] Julia deps question

2016-01-30 Thread Tony Kelman
Not all systems have package managers with the appropriate versions of 
everything we need packaged. Or if they do have up-to-date packages, they may 
not be built in the right configuration or include the bugfix patches that 
Julia needs. We end up needing custom configurations or specific versions for 
many of the dependencies - though perhaps not all, if you try USE_SYSTEM_* and 
can get a working build that passes tests in the end.