[julia-users] Ipopt.jl won't build

2015-03-02 Thread Gabriel Mihalache
It's trying to download a dependency, but the build fails when unpacking it.

Did anyone manage to find a way around this? Especially since I won't be 
using the AMPL stuff anyway.

Thank you!


Running script for downloading the source code for the ASL

Downloading the source code from Github...
--2015-03-02 22:28:50--  https://github.com/ampl/mp/archive/1.3.0.tar.gz
Resolving github.com... 192.30.252.128
Connecting to github.com|192.30.252.128|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/ampl/mp/tar.gz/1.3.0 [following]
--2015-03-02 22:28:51--  https://codeload.github.com/ampl/mp/tar.gz/1.3.0
Resolving codeload.github.com... 192.30.252.145
Connecting to codeload.github.com|192.30.252.145|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1322272 (1.3M) [application/x-gzip]
Saving to: “1.3.0”

100%[======================================>] 1,322,272   6.06M/s   in 0.2s

2015-03-02 22:28:51 (6.06 MB/s) - “1.3.0” saved [1322272/1322272]

Download finished.
Unpacking the source code...
gzip: 1.3.0.tar.gz: No such file or directory
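
From the log, the tarball actually downloads fine but gets saved as “1.3.0” rather than “1.3.0.tar.gz”, which is why the unpack step can't find it. A workaround sketch I'm considering (the deps path is my guess at where the build script looks) is to put the file under the expected name myself and rebuild:

# Fetch the ASL tarball under the name the unpack step expects,
# then re-run the build. The deps path is an assumption.
url  = "https://github.com/ampl/mp/archive/1.3.0.tar.gz"
dest = joinpath(Pkg.dir("Ipopt"), "deps", "1.3.0.tar.gz")
download(url, dest)
Pkg.build("Ipopt")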



Re: [julia-users] Multiple lines statement?

2015-05-30 Thread Gabriel Mihalache
Once you spend a few days tracking down a bug due to this, you never
forget. The idea would be to find a way to save people from this experience.

Some lines are naturally long because, e.g., the equation is long or because
you prefer long, informative variable names. You can always introduce
variables for parts of the expression, but that just feels like working
around poor language design.
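
For concreteness, here is the kind of silent failure I mean (a minimal sketch): Julia only keeps parsing past a newline while the expression is still syntactically incomplete, so a break in the wrong place splits one statement into two valid ones.

# Fine: the line ends with an operator, so parsing continues.
total = 2 +
        3     # total == 5

# Pitfall: `total = 2` is already complete, so the next line is just
# `+3`, evaluated on its own and discarded. No error, wrong result.
total = 2
        + 3   # total == 2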


[julia-users] C/MPI/SLURM => Julia/?/SLURM?

2014-11-23 Thread Gabriel Mihalache
Hello!
I'm looking to use Julia for an upcoming project and right now I'm trying 
to do a simple example which includes everything I need. I managed to 
port/code the application and the results are correct. Now I'm stuck trying 
to parallelize it.

My experience is with MPI in C. I use broadcast + allgather to have 
separate nodes compute and fill parts of a multi-dimensional array, with 
all nodes otherwise doing the same thing. I'm not sure how to translate 
that into Julia's way of doing things. I'm trying distributed arrays (DArray).

Another complication is that I need to use SLURM to schedule jobs. So, my 
questions are:

1) was anyone successful running DArray code with SLURM?

2) any idea why I get this output:

Iteration 1
(11,100), DArray{Float64,2,Array{Float64,2}}
(1:0,1:0)
elapsed time: 2.388602661 seconds
Err = 0.0


when I run this code

using MAT;
using Grid;
using NLopt;

# Model parameters
const alpha = 0.25;
const uBeta = 0.90;
const delta = 1.00;

# Capital grid
const kGridMin = 0.01;
const kGridMax = 0.50;
const kGridSize = 100;

# Convergence settings
const vErrTol = 1e-8;
const maxIter = 200;

# Load the income shock grid and transition matrix
file = matopen("shock.mat")
yData = read(file, "y");
yGridSize = size(yData, 1);
y = linrange(yData[1], yData[end], yGridSize);
yPi = read(file, "yPi")
close(file)

function sgmVfi()
    addprocs(24);
    kGrid = linrange(kGridMin, kGridMax, kGridSize);
    V0 = zeros(yGridSize, kGridSize);
    V1 = zeros(size(V0));
    kPr = zeros(size(V0));

    gdp(yIx, kIx) = exp(y[yIx]) * kGrid[kIx].^alpha;

    iter = 0;
    err = 1.0;
    while iter < maxIter && err > vErrTol
        tic();
        iter += 1;
        println("Iteration ", iter);
        V0 = copy(V1);
        V0f = CoordInterpGrid((y, kGrid), V0, BCnearest, InterpQuadratic);

        DV1 = distribute(V1);
        DkPr = distribute(kPr);

        println(size(DV1), ", ", typeof(DV1));

        # Each worker should update only its local block
        myIx = localindexes(DV1);
        println(myIx);
        for yIx = myIx[1]
            for kIx = myIx[2]
                println(yIx, " ", kIx, " ", myid());
                opt = Opt(:LN_BOBYQA, 1)
                lower_bounds!(opt, kGridMin)
                upper_bounds!(opt, min(kGridMax, gdp(yIx, kIx) + (1-delta) * kGrid[kIx] - 1e-5))

                # Bellman objective: current utility plus discounted expected continuation value
                myObj(kk) = log(gdp(yIx, kIx) - kk + (1-delta) * kGrid[kIx]) + uBeta * sum(
                    [yPi[yIx, yPrIx] * V0f[y[yPrIx], kk] for yPrIx = 1:yGridSize]);

                max_objective!(opt, (x::Vector, grad::Vector) -> myObj(x[1]));
                (maxVal, maxK, ret) = optimize!(opt, [kGridMin]);
                localpart(DkPr)[yIx, kIx] = maxK[1];
                localpart(DV1)[yIx, kIx] = maxVal;
            end
        end

        V1 = convert(typeof(V1), DV1);
        kPr = convert(typeof(kPr), DkPr);

        err = maximum(abs(V1[:] - V0[:]));
        toc()
        println("Err = ", err)
    end

    return kGrid, V1, kPr;
end

kGrid, V, kPr = sgmVfi()



using this submission script:

#!/bin/bash
#SBATCH -J sgmVfi

#SBATCH -e elog_sgm
#SBATCH -o olog_sgm

#SBATCH -t 1:00:00

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24

#SBATCH -p standard
#SBATCH --mail-type=fail

module load julia
srun julia -q sgm.jl


but the code runs fine (iterates 157 times and converges) when I just call 
the function from Julia and don't ask for procs (without parallelism but 
still with DArray)?

3) Finally, if I call DV1 = distribute(V1) and DkPr = distribute(kPr), is 
there a way for me to make sure that localindexes(DV1) and 
localindexes(DkPr) return the same indices?
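
(One idea I want to try, assuming the 0.3-era DArray constructor: since the chunk layout is determined by the dims and the participating processes, building both arrays explicitly with the same arguments should force matching local indices. Untested sketch:)

# Build both distributed arrays from the same dims and workers so
# their chunk layouts, and hence localindexes, coincide.
DV1  = DArray(I -> V1[I...],  size(V1),  workers())
DkPr = DArray(I -> kPr[I...], size(kPr), workers())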

Any hint/lead is very much appreciated!

Thank you for your time!
Gabriel


[julia-users] Re: C/MPI/SLURM => Julia/?/SLURM?

2014-11-23 Thread Gabriel Mihalache
I made some progress on getting the code to run correctly in parallel, 
using all the cores of one machine, by using pmap and running some prep 
code on all the workers. The next steps are to get this running on multiple 
machines (using addprocs_ssh?) and then benchmarking. Here's the state of 
my solution so far, for anyone interested in/stuck on this too:

The SLURM batch file:

#!/bin/bash
#SBATCH -J sgmVfi

#SBATCH -e elog_sgm
#SBATCH -o olog_sgm

#SBATCH -t 0:10:00

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24

#SBATCH -p standard
#SBATCH --mail-type=fail

srun julia -q -p 24 -L loader.jl sgm.jl


the "loader" file which defines constants for all workers:

using MAT;
using Grid;
using NLopt;

const alpha = 0.25;
const uBeta = 0.90;
const delta = 1.00;

const kGridMin = 0.01;
const kGridMax = 0.50;
const kGridSize = 100;
const kGrid = linrange(kGridMin, kGridMax, kGridSize);

const vErrTol = 1e-8;
const maxIter = 200;

file = matopen("shock.mat")
yData = read(file, "y");
yGridSize = size(yData, 1);
const y = linrange(yData[1], yData[end], yGridSize);
const yPi = read(file, "yPi")
close(file)

gdp(yIx, kIx) = exp(y[yIx]) * kGrid[kIx].^alpha;


and finally, the code running on the master worker/driver:

function sgmVfi()
    V0 = zeros(yGridSize, kGridSize);
    V1 = zeros(size(V0));
    kPr = zeros(size(V0));

    iter = 0;
    err = 1.0;
    while iter < maxIter && err > vErrTol
        tic();
        iter += 1;
        println("Iteration ", iter);
        V0 = copy(V1);
        V0f = CoordInterpGrid((y, kGrid), V0, BCnearest, InterpQuadratic);

        # Solve the one-dimensional savings problem at a single (y, k) grid point
        function workAt(iix)
            yIx = iix[1];
            kIx = iix[2];
            opt = Opt(:LN_BOBYQA, 1)
            lower_bounds!(opt, kGridMin)
            upper_bounds!(opt, min(kGridMax, gdp(yIx, kIx) + (1-delta) * kGrid[kIx] - 1e-5))

            myObj(kk) = log(gdp(yIx, kIx) - kk + (1-delta) * kGrid[kIx]) + uBeta * sum(
                [yPi[yIx, yPrIx] * V0f[y[yPrIx], kk] for yPrIx = 1:yGridSize]);

            max_objective!(opt, (x::Vector, grad::Vector) -> myObj(x[1]));
            (maxVal, maxK, ret) = optimize!(opt, [kGridMin]);
            return maxVal, maxK[1];
        end

        # Farm out all grid points to the workers, then unpack the results
        tmp = pmap(workAt, [[yIx, kIx] for yIx = 1:yGridSize, kIx = 1:kGridSize]);
        tmp = reshape(tmp, size(V1));
        for yIx = 1:yGridSize, kIx = 1:kGridSize
            V1[yIx, kIx] = tmp[yIx, kIx][1];
            kPr[yIx, kIx] = tmp[yIx, kIx][2];
        end

        err = maximum(abs(V1[:] - V0[:]));
        toc()
        println("Err = ", err)
    end

    return kGrid, V1, kPr;
end

kGrid, V, kPr = sgmVfi()

file = matopen("results.mat", "w")
write(file, "kPr", kPr)
write(file, "V", V)
write(file, "y", [y])
write(file, "yPi", yPi)
write(file, "kGrid", [kGrid])
close(file)






[julia-users] Re: C/MPI/SLURM => Julia/?/SLURM?

2014-11-23 Thread Gabriel Mihalache

>
> 1) It's pretty easy to use MPI from Julia. For some use cases it may make 
> more sense than Julia's built-in approach to parallelism, especially if 
> you're already comfortable with MPI. Though if you're well served by pmap, 
> that's much simpler.
>

Do you mean I should be able to broadcast and allgather memory managed by 
Julia? That sounds scary. Luckily for me these models should be amenable to 
pmap. Is there an example of MPI-from-Julia, just in case?
 

> 2) It looks like the subproblem you're solving is a 1d optimization. I 
> wouldn't be surprised if you could get a significant speedup by tweaking 
> the optimization algorithm (e.g., using a derivative-based method) and 
> making the function evaluations more efficient (e.g., avoiding temporary 
> allocations).
>

Yes, thank you! This is a toy example for which we have an analytic 
solution to check against. The project I'm working on involves a few more 
discrete and continuous controls and states. I only hope that NLopt.jl will 
run in reasonable time compared to the C version.


Thank you for your reply!
Gabriel


Re: [julia-users] Re: C/MPI/SLURM => Julia/?/SLURM?

2014-11-24 Thread Gabriel Mihalache
On Mon Nov 24 2014 at 8:37:36 AM Erik Schnetter wrote:

> To use MPI from Julia, use the MPI.jl package
>  (`Pkg.add("MPI")`). There are also
> examples in this package.
>

That's great! Somehow I missed the examples when I first saw that package.
If I can't get pmap to work nicely across machines, this should be at least
as good.
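
For reference, a minimal MPI.jl broadcast sketch along the lines of what I'd port (written against the package's documented API; exact signatures may differ between versions):

# Run with e.g.: mpiexec -n 4 julia bcast.jl
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Rank 0 fills the buffer; Bcast! copies it to every other rank in place.
buf = rank == 0 ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, comm; root=0)
println("rank $rank holds ", buf)

MPI.Finalize()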

Thank you for your work on this!


[julia-users] Multiple lines statement?

2014-11-28 Thread Gabriel Mihalache
Hello! Is there a Julia equivalent to MATLAB's ... ?
E.g.

2 ...
+3

evaluates to 5.

This is for the purpose of making a .jl file more readable, of course.

Thank you!


[julia-users] max_parallel keyword of addprocs?

2014-11-28 Thread Gabriel Mihalache
In the documentation, the signature for addprocs doesn't include it, but 
it's discussed just below.

http://julia.readthedocs.org/en/latest/stdlib/base/?highlight=max_parallel

In practice it doesn't work; it complains about an unknown keyword.

Should I just addprocs each node as many times as it has CPUs?
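
That is, something like this (hostnames made up):

# Repeat each hostname once per core, so addprocs starts that many
# workers on each node.
hosts = ["node01", "node02", "node03"]
addprocs(vcat([fill(h, 24) for h in hosts]...))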

Thank you!


Re: [julia-users] max_parallel keyword of addprocs?

2014-11-29 Thread Gabriel Mihalache
I'm on 0.3.2 so that explains it. Thank you!

On Saturday, November 29, 2014 3:29:43 AM UTC-5, Amit Murthy wrote:
>
> This is only supported on 0.4. Which version of Julia are you running?
>
> Yes, you can repeat the same host to addprocs as the number of cores.
>
>

Re: [julia-users] Multiple lines statement?

2014-11-30 Thread Gabriel Mihalache
Thank you all! That clears it up.

On Sun Nov 30 2014 at 6:55:22 AM Christoph Ortner <
christophortn...@gmail.com> wrote:

> I think that the standard in mathematical typesetting is to write
> 2
>  + 3
> rather than
>2 +
>   3
>
> so personally I find the Matlab syntax easier to read. One of the very few
> choices Julia made that I am not so sure about.
>
> Christoph
>
>


[julia-users] No performance gain from pmap

2014-12-16 Thread Gabriel Mihalache
Hello!
I would greatly appreciate any thought on these findings:

I ran my code in 3 different cases, and the performance is pretty much the 
same:
- 1 node, 2 cores on it
- 1 node, 24 cores on it
- 3 nodes, 24 cores on each

I made sure that
1. the function is indeed being run on the right host/worker (using 
println-s)
2. the function has the right values for the various variables it needs

The issue seems to be that all the julia processes other than the master 
one are effectively idle. This is the top output under various scenarios, 
on various machines (master or not): https://imgur.com/a/5gtL7

This is how I add workers:

hostname = chomp(readall(`hostname`))
if hostname == ARGS[2]
    println("$hostname master")
    tasksPerNode = parseint(ARGS[1])

    nodes = ARGS[2:end]

    if length(nodes) > 1
        for ix = 1:tasksPerNode
            addprocs(nodes; dir="/software/julia/0.3.2/bin/")
        end
    else
        addprocs(tasksPerNode)
    end

    @everywhere cd("/scratch/gmihalac/exitSwap/")
    @everywhere include("swap.jl")
    ExitSwap.compute()
else
    println("$hostname not master")
end
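
(For what it's worth, this is roughly the check I used for point 1 above — one task per worker, reporting where it ran:)

placements = pmap(x -> (myid(), gethostname()), 1:nworkers())
for p in placements
    println(p)
end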


I am attaching the swap.jl file, to show how I call/use pmap.

I'll try to put together a small example, to see if I can replicate this in 
a simpler setup.

Any comments/ideas/suggestions are greatly appreciated!

Thank you!
Gabriel




[julia-users] pmap "load balancing"?

2015-01-01 Thread Gabriel Mihalache
Hello, all!

I have a pmap over a "large" array, 25^2 elements, and each call does quite a 
bit of work, but some calls end up having to do a lot more than others. Using 
htop I notice that most CPUs on most nodes are close to idle (I'm running 
this on a SLURM cluster).

Is there a pmap (drop-in?) replacement which has a different scheduling 
strategy? Something comparable to OpenMP's dynamic scheduling? Am I 
misunderstanding the entire philosophy of pmap?
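
For context, the sample pmap implementation in the manual's parallel computing section already schedules dynamically — a feeder task per worker hands out the next index as soon as the previous call returns — so maybe I'm measuring something else. Roughly (0.3-era API):

# Dynamic scheduling by hand: one feeder task per worker, each pulling
# the next unclaimed index as soon as its worker becomes free.
function dynamic_pmap(f, lst)
    np = nprocs()
    n = length(lst)
    results = cell(n)
    i = 1
    nextidx() = (idx = i; i += 1; idx)
    @sync begin
        for p = 1:np
            if p != myid() || np == 1
                @async begin
                    while true
                        idx = nextidx()
                        idx > n && break
                        results[idx] = remotecall_fetch(p, f, lst[idx])
                    end
                end
            end
        end
    end
    results
end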

Thank you!
Gabriel


[julia-users] Re: ARIMA function not working

2015-01-01 Thread Gabriel Mihalache
AR is just OLS, so you should be able to do 

(X' * X) \ X' * Y

with the right X and Y matrices.

Note the difference between * and .*: OLS is a matrix expression, not an 
element-wise operation.

Start by making sure that the X and Y matrices look like you expect them 
to, shape-wise and content-wise.
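
For instance, a minimal AR(1)-by-OLS sketch on made-up data (my own illustration, not your series):

# AR(1) by OLS: regress y_t on an intercept and y_{t-1}.
y = cumsum(randn(500));            # hypothetical series
Y = y[2:end];                      # left-hand side: y_t
X = [ones(length(Y)) y[1:end-1]];  # regressors: intercept and lag
beta = (X' * X) \ (X' * Y);        # beta[2] is the AR(1) coefficient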

Regarding the TimeModels package, I can't test my hunch right now, but I 
suspect it's because 0 and 0.0 are different in Julia. One's an Integer and 
the other's a Float. Same with 1 and 1.0. My guess is that the TimeModels 
function needs Float arguments and you pass it Integers.

On Thursday, January 1, 2015 7:58:05 PM UTC-5, jspark wrote:
>
> Hi,
>
> Now I'm two weeks into Julia and still struggling with it.
>
> I am porting the *AR1 function* below from *cran R*, but for some reason it 
> gives me an error (SingularException(3)) when I convert the *solve() to get 
> Alpha* into *inv()*. However, it works in R and Matlab.
>
> y = float(r[:,1])  # r is the data vector from the attached DJ.csv file
>
> T = length(y)-1;
>
> Y = y[2:T]';
>
> Y1= y[1:(T-1)]';
>
> Alpha= inv(Y1'.*Y).*(Y1').*Y;
>
>
> *ERROR : SingularException(3)*
>
> As an alternative, I am trying to get the AR1 coefficient from *arima()* of 
> the *TimeModels* package, but it's not working either.
>
> using TimeModels
>
> zz=rand(500,1);   # 500x1 Array{Float64,2}
> arima(zz,1,0,0); 
>
> *ERROR : 'arima' has no method matching arima(::Array{Float64,2}, 
> ::Int64, ::Int64, ::Int64)*
>
> Can someone tell me the story behind this error?
>
> Many thanks and Happy New Year to all!!
>   
>
> AR1 <- function(x){
>     T <- length(x) - 1
>     Y <- x[2:T]
>     Y_ <- x[1:(T-1)]
>     ALPHA <- solve(t(Y_) %*% Y_) %*% t(Y_) %*% Y
>     RE <- Y - Y_ %*% ALPHA
>     SIGMAS <- sum(RE^2) / (T-1)
>     STDA <- sqrt(SIGMAS * solve(t(Y_) %*% Y_))
>     return(list(ALPHA=ALPHA, STDA=STDA))
> }
> Source: https://github.com/cran/vrtest/blob/master/R/AR1.R
>
>

[julia-users] Re: pmap "load balancing"?

2015-01-01 Thread Gabriel Mihalache
That's good news in a way. I'll work to see if I can fix it on my end then.

Any advice on minimizing network overhead? I have 5 machines with 24 
CPUs each, so I suspect some of the low CPU loads might reflect I/O.

[julia-users] WARNING: replacing module ...

2016-02-05 Thread Gabriel Mihalache
Hello, all!
I must be doing something wrong. Say I have the following code in a file 
called myMod.jl


module myMod

using NLopt

function compute()
  ...
  pmap(...)
  ...
end

end

and then in a console I do

addprocs()
@everywhere include("myMod.jl")
myMod.compute()

I get a bunch of warnings, one per worker, complaining that the module 
NLopt is being replaced. This seems harmless in practice, but it hints 
that I'm doing something wrong.

What's the proper way to do this? (Let all workers know about my module and 
the packages it uses.) I'm on 0.4.3.
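
(One pattern I've seen suggested, though I haven't verified that it silences the warning, is to load the package on every process before pulling in the module:)

addprocs()
@everywhere using NLopt          # load the dependency once per process
@everywhere include("myMod.jl")  # then define the module everywhere
myMod.compute()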

Thank you!