[julia-users] Re: CoinOptServices

2016-10-25 Thread Miles Lubin
Please open an issue on the CoinOptServices repository with as much 
information as you can to help reproduce the problem.

On Tuesday, October 25, 2016 at 11:04:56 AM UTC-4, Frank Kampas wrote:
>
> When I run Pkg.build("CoinOptServices") using Julia 0.5 and Windows 10, I 
> get load errors and a build error message.
>
> On Tuesday, October 25, 2016 at 9:01:29 AM UTC-4, Frank Kampas wrote:
>>
>> Is anybody working on fixing CoinOptServices?
>>
>

[julia-users] Re: CoinOptServices

2016-10-25 Thread Miles Lubin
I'm not aware of any broken functionality in CoinOptServices, but it hasn't 
been updated to get rid of deprecation warnings on 0.5. We'd welcome a PR.

On Tuesday, October 25, 2016 at 9:01:29 AM UTC-4, Frank Kampas wrote:
>
> Is anybody working on fixing CoinOptServices?
>


[julia-users] Re: ANN: JuMP 0.14 released

2016-08-08 Thread Miles Lubin
ForwardDiff 0.2 introduced some breaking changes, you will need to update 
your code (GradientNumber is no longer defined). See the upgrading guide 
<http://www.juliadiff.org/ForwardDiff.jl/upgrade.html>.
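
For reference, here's an untested sketch of the fix, assuming the upgrade 
guide's renaming of the old perturbation types to ForwardDiff.Dual:

# Hedged sketch: on ForwardDiff 0.2 GradientNumber is gone; dispatching on
# the (assumed) replacement type Dual restores the old behavior.
my_value(value::ForwardDiff.Dual) = ForwardDiff.value(value)
my_value(value::Real) = value
my_value(val_vector::Vector) = [my_value(value) for value in val_vector]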

On Monday, August 8, 2016 at 11:10:50 AM UTC-6, Uwe Fechner wrote:
>
> Hello,
> I updated, and now I get the following error:
> julia> include("Plotting.jl")
> INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/JuMP.ji 
> for module JuMP.
> INFO: Recompiling stale cache file 
> /home/ufechner/.julia/lib/v0.4/ReverseDiffSparse.ji for module 
> ReverseDiffSparse.
> INFO: Recompiling stale cache file 
> /home/ufechner/.julia/lib/v0.4/ForwardDiff.ji for module ForwardDiff.
> INFO: Recompiling stale cache file /home/ufechner/.julia/lib/v0.4/HDF5.ji 
> for module HDF5.
> ERROR: LoadError: LoadError: LoadError: LoadError: UndefVarError: 
> GradientNumber not defined
> while loading /home/ufechner/00PythonSoftware/FastSim/src/Projects.jl, in 
> expression starting on line 433
> while loading /home/ufechner/00PythonSoftware/FastSim/src/Model.jl, in 
> expression starting on line 19
> while loading /home/ufechner/00PythonSoftware/FastSim/src/Optimizer.jl, in 
> expression starting on line 13
> while loading /home/ufechner/00PythonSoftware/FastSim/src/Plotting.jl, in 
> expression starting on line 22
>
> The code, that fails is the following:
> """
> Helper function to convert the value of an optimization result, but also
> simple real values.
> """
> my_value(value::ForwardDiff.GradientNumber) = ForwardDiff.value(value)
> my_value(value::Real) = value
> my_value(val_vector::Vector) = [my_value(value) for value in val_vector]
>
> Any idea how to fix this?
>
> Uwe
>
> On Monday, August 8, 2016 at 4:57:16 PM UTC+2, Miles Lubin wrote:
>>
>> The JuMP team is happy to announce the release of JuMP 0.14. The release 
>> should clear most, if not all, deprecation warnings on Julia 0.5 and is 
>> compatible with ForwardDiff 0.2. The full release notes are here 
>> <https://github.com/JuliaOpt/JuMP.jl/blob/master/NEWS.md#version-0140-august-7-2016>,
>>  
>> and I'd just like to highlight a few points:
>>
>> - *All JuMP users read this*: As previously announced 
>> <https://groups.google.com/d/msg/julia-opt/vUK1NHEHqfk/WD-6lSbMCAAJ>, we 
>> will be deprecating the sum{}, prod{}, and norm{} syntax in favor of using 
>> Julia 0.5's new syntax for generator statements, e.g., sum(x[i] for i in 
>> 1:N) instead of sum{x[i], i in 1:N}. In this release, the new syntax is 
>> available for testing if using Julia 0.5. No deprecation warnings are 
>> printed yet. In JuMP 0.15, which will drop support for Julia 0.4, we will 
>> begin printing deprecation warnings for the old syntax.
>>
>> - *Advanced JuMP users read this*: We have introduced a new syntax for 
>> "anonymous" objects, which means that when declaring an optimization 
>> variable, constraint, expression, or parameter, you may omit the name of 
>> the object within the macro. The macro will instead return the object 
>> itself which you can assign to a variable if you'd like. Example:
>>
>> # instead of @variable(m, l[i] <= x[i=1:N] <= u[i]):
>> x = @variable(m, [i=1:N], lowerbound=l[i], upperbound=u[i]) 
>>
>> This syntax should be comfortable for advanced use cases of JuMP (e.g., 
>> within a library) and should obviate some confusions about JuMP's variable 
>> scoping rules.
>>
>> - We also have a new input form for nonlinear expressions that has the 
>> potential to extend JuMP's scope as an AD tool. Previously all nonlinear 
>> expressions needed to be input via macros, which isn't convenient if the 
>> expression is generated programmatically. You can now set nonlinear 
>> objectives and add nonlinear constraints by providing a Julia Expr 
>> object directly with JuMP variables spliced in. This means that you can now 
>> generate expressions via symbolic manipulation and add them directly to a 
>> JuMP model. See the example in the documentation 
>> <http://www.juliaopt.org/JuMP.jl/0.14/nlp.html#raw-expression-input>.
>>
>> Finally, I'd like to thank Joaquim Dias Garcia, Oscar Dowson, Mehdi 
>> Madani, and Jarrett Revels for contributions to this release which are 
>> cited in the release notes.
>>
>> Miles, Iain, and Joey
>>
>>
>>

[julia-users] ANN: JuMP 0.14 released

2016-08-08 Thread Miles Lubin
The JuMP team is happy to announce the release of JuMP 0.14. The release
should clear most, if not all, deprecation warnings on Julia 0.5 and is
compatible with ForwardDiff 0.2. The full release notes are here
<https://github.com/JuliaOpt/JuMP.jl/blob/master/NEWS.md#version-0140-august-7-2016>,
and I'd just like to highlight a few points:

- *All JuMP users read this*: As previously announced
<https://groups.google.com/d/msg/julia-opt/vUK1NHEHqfk/WD-6lSbMCAAJ>, we 
will be deprecating the sum{}, prod{}, and norm{} syntax in favor of using
Julia 0.5's new syntax for generator statements, e.g., sum(x[i] for i in
1:N) instead of sum{x[i], i in 1:N}. In this release, the new syntax is
available for testing if using Julia 0.5. No deprecation warnings are
printed yet. In JuMP 0.15, which will drop support for Julia 0.4, we will
begin printing deprecation warnings for the old syntax.
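
A minimal before/after sketch (m, x, and N are assumed to be an existing 
model, variable container, and index bound):

# old curly syntax, to be deprecated in JuMP 0.15:
@constraint(m, sum{x[i], i in 1:N} <= 1)
# new generator syntax, available on Julia 0.5:
@constraint(m, sum(x[i] for i in 1:N) <= 1)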

- *Advanced JuMP users read this*: We have introduced a new syntax for
"anonymous" objects, which means that when declaring an optimization
variable, constraint, expression, or parameter, you may omit the name of
the object within the macro. The macro will instead return the object
itself which you can assign to a variable if you'd like. Example:

# instead of @variable(m, l[i] <= x[i=1:N] <= u[i]):
x = @variable(m, [i=1:N], lowerbound=l[i], upperbound=u[i])

This syntax should be comfortable for advanced use cases of JuMP (e.g.,
within a library) and should obviate some confusions about JuMP's variable
scoping rules.

- We also have a new input form for nonlinear expressions that has the
potential to extend JuMP's scope as an AD tool. Previously all nonlinear
expressions needed to be input via macros, which isn't convenient if the
expression is generated programmatically. You can now set nonlinear
objectives and add nonlinear constraints by providing a Julia Expr object
directly with JuMP variables spliced in. This means that you can now
generate expressions via symbolic manipulation and add them directly to a
JuMP model. See the example in the documentation
<http://www.juliaopt.org/JuMP.jl/0.14/nlp.html#raw-expression-input>.
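
A rough, untested sketch of that input form, assuming the 
JuMP.setNLobjective/JuMP.addNLconstraint entry points shown in the linked 
documentation:

using JuMP
m = Model()
@variable(m, x)
@variable(m, y)
# splice JuMP variables into an ordinary Julia Expr via $(...)
JuMP.setNLobjective(m, :Min, :($(x)^2 + sin($(y))))
JuMP.addNLconstraint(m, :($(x) + $(y) <= 1))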

Finally, I'd like to thank Joaquim Dias Garcia, Oscar Dowson, Mehdi Madani,
and Jarrett Revels for contributions to this release which are cited in the
release notes.

Miles, Iain, and Joey


[julia-users] Re: MUMPS server workaround?

2016-08-04 Thread Miles Lubin
On recent versions of Ubuntu you should be able to just install the 
"coinor-libipopt-dev" package and Pkg.build will pick up the system-wide 
Ipopt.
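
A minimal sketch of the workflow; the shell step is shown as a comment:

# shell: sudo apt-get install coinor-libipopt-dev
Pkg.build("Ipopt")   # should detect the system Ipopt and skip the MUMPS download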

On Wednesday, August 3, 2016 at 9:52:05 PM UTC-6, Sarah Koehler wrote:
>
>  I have been trying to add Ipopt to Julia on a new computer (Ubuntu 
> 64-bit) today, and I am running into the issue of the MUMPS server being 
> down. Is it at all possible to tell Julia to download MUMPS from another 
> server such as ANL? 
> It seems the server for MUMPS has been down for at least 20 minutes now. 
> Example error message below...
>
>
> --2016-08-03 17:37:07--  (try: 4)  
> http://mumps.enseeiht.fr/MUMPS_4.10.0.tar.gz
> Connecting to mumps.enseeiht.fr (mumps.enseeiht.fr)|147.127.176.144|:80... 
> connected.
> HTTP request sent, awaiting response... No data received.
> Retrying.
>
>
>

[julia-users] Re: JuliaOpt for nonlinear constrained optimization with around 500 variables

2016-05-10 Thread Miles Lubin
Hi Jorge,

Gurobi is a state-of-the-art solver. If your problem falls into a class 
supported by Gurobi (e.g., convex quadratic), then you can rely on the 
results. Using Julia to specify the problem has a minimal effect on the 
solution time, specifically only in some preprocessing and generating data 
structures to feed to the solver. If you'd like to understand why a 
particular piece of code is running more slowly than you expect, feel free 
to post a standalone, executable sample to here or the julia-opt list.
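
For example, a rough sketch (untested; assumes a Gurobi license and the 
Gurobi.jl wrapper) that separates JuMP's model-building overhead from the 
solver's runtime:

using JuMP, Gurobi
m = Model(solver=GurobiSolver())
@variable(m, 0 <= x[1:500] <= 1)
@constraint(m, sum(x) <= 100)
@objective(m, Min, dot(x, x))      # convex quadratic objective
@time status = solve(m)            # compare with the solve time in Gurobi's log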

Miles

On Tuesday, May 10, 2016 at 10:30:00 AM UTC-4, Jorge Fernández de Cossío 
Díaz wrote:
>
> I want to use JuliaOpt to solve a constrained optimization problem with 
> around 500 variables. There are around 200 constraints; most are linear, 
> only a couple are nonlinear.
> I have Gurobi. I am looking for benchmarks of JuliaOpt/Gurobi on systems 
> of this size, including nonlinear constraints, to build confidence in the 
> results that I am obtaining. Has anyone performed such tests?
>


[julia-users] Re: ANN: JuMP 0.13 released (or, JuMP dumps camelCase)

2016-05-06 Thread Miles Lubin
Thanks for reporting, JuMP 0.13.1 was released to fix this.

On Sunday, May 1, 2016 at 7:41:00 AM UTC-4, Uwe Fechner wrote:
>
> In the announcement you wrote: "*no existing code should break after 
> updating to JuMP 0.13*".
>
> Well, it broke my code, as described in the following issue:
> https://github.com/JuliaOpt/JuMP.jl/issues/753
>
> But I found a solution:
> Replace "registerNLFunction" with "JuMP.register".
>
> It took me a while to find the correct name in the documentation.
>
> Nevertheless, keep up the good work!
>
> Uwe
>
> On Saturday, April 30, 2016 at 2:33:07 AM UTC+2, Miles Lubin wrote:
>>
>> The JuMP team is happy to announce the release of JuMP 0.13.
>>
>> This is the most visible JuMP release in quite a while since *all of 
>> JuMP's macros and most exported methods have been renamed* to avoid the 
>> camelCase convention. The original naming scheme was put in place before 
>> the 0.1 release of Julia and was never updated after the Julia community 
>> converged to its current naming conventions. We don't take this change 
>> lightly since it will require an update of all existing JuMP code, but we 
>> believe that now, before a JuMP 1.0 release, is the best time to take these 
>> steps to correct previous decisions so that we can continue to grow and 
>> remain visually appealing to new users. All of these changes are simple 
>> renamings, and it is sufficient to perform a find and replace on existing 
>> code in order to update it. We have put deprecation warnings in place so 
>> that *no existing code should break after updating to JuMP 0.13*. We 
>> expect to leave these deprecations in place for quite a while (at least 6 
>> months) to minimize the impact. For the definitive list of the new names, 
>> see our deprecation list 
>> <https://github.com/JuliaOpt/JuMP.jl/blob/1e0228abc6f9e968d5c03f21d914f713bd7d334a/src/deprecated.jl#L30>
>> .
>>
>> Here's a preview of the new names:
>>
>> m = Model()
>> @variable(m, x >= 0)
>> @variable(m, y >= 0)
>> @objective(m, Max, 3x-2y)
>> @constraint(m, x+y <= 1)
>> @constraint(m, 2x+y <= 3)
>> status = solve(m)
>> @show getvalue(x)
>>
>> Or, using the pretty block syntax:
>>
>> m = Model()
>> @variables(m, begin
>> x >= 0
>> y >= 0
>> end)
>> @objective(m, Max, 3x-2y)
>> @constraints(m, begin
>> x+y <= 1
>> 2x+y <= 3
>> end)
>> status = solve(m)
>> @show getvalue(x)
>>
>> We request the help of the community to update existing code that may be 
>> posted on the internet. If you've written a blog post, stackoverflow post, 
>> public Jupyter notebook, or book(! <http://www.chkwon.net/julia/>) 
>> containing JuMP code, please make an effort to update it to the new naming 
>> scheme to avoid confusing new users.
>>
>> I'd like to thank those who participated in the various discussions (here 
>> <https://github.com/JuliaOpt/JuMP.jl/pull/726> and here 
>> <https://github.com/JuliaOpt/JuMP.jl/pull/732>) which helped steer this 
>> change in the right direction. While this release focused on simple 
>> renaming, we may have some more interesting syntax changes or additions in 
>> the next release, so keep an eye on the JuMP repository if you are 
>> interested. For a more complete list of changes in this release, see the 
>> NEWS <https://github.com/JuliaOpt/JuMP.jl/blob/master/NEWS.md> entry.
>>
>> Miles, Iain, and Joey
>>
>>

[julia-users] Re: ANN: Optim.jl v0.4.5 released

2016-05-03 Thread Miles Lubin
Yes, the motivation was to allow passing options specific to the BFGS 
algorithm by using BFGS(foo=bar).
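
For example (sketch in the current Optim API; the exact keyword options 
accepted by BFGS() are version-dependent):

using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
result = optimize(rosenbrock, zeros(2), BFGS())   # options would go inside BFGS(...)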

On Tuesday, May 3, 2016 at 6:27:42 AM UTC-4, Kristoffer Carlsson wrote:
>
> Dispatching on Type{xx} is quite inconvenient and locks the API a lot. By 
> using an instance you still have the option to add fields to the type and 
> to use different types that dispatch to the same function. 



[julia-users] ANN: JuMP 0.13 released (or, JuMP dumps camelCase)

2016-04-29 Thread Miles Lubin
The JuMP team is happy to announce the release of JuMP 0.13.

This is the most visible JuMP release in quite a while since *all of JuMP's 
macros and most exported methods have been renamed* to avoid the camelCase 
convention. The original naming scheme was put in place before the 0.1 
release of Julia and was never updated after the Julia community converged 
to its current naming conventions. We don't take this change lightly since 
it will require an update of all existing JuMP code, but we believe that 
now, before a JuMP 1.0 release, is the best time to take these steps to 
correct previous decisions so that we can continue to grow and remain 
visually appealing to new users. All of these changes are simple renamings, 
and it is sufficient to perform a find and replace on existing code in 
order to update it. We have put deprecation warnings in place so that *no 
existing code should break after updating to JuMP 0.13*. We expect to leave 
these deprecations in place for quite a while (at least 6 months) to 
minimize the impact. For the definitive list of the new names, see our 
deprecation list 
<https://github.com/JuliaOpt/JuMP.jl/blob/1e0228abc6f9e968d5c03f21d914f713bd7d334a/src/deprecated.jl#L30>.

Here's a preview of the new names:

m = Model()
@variable(m, x >= 0)
@variable(m, y >= 0)
@objective(m, Max, 3x-2y)
@constraint(m, x+y <= 1)
@constraint(m, 2x+y <= 3)
status = solve(m)
@show getvalue(x)

Or, using the pretty block syntax:

m = Model()
@variables(m, begin
x >= 0
y >= 0
end)
@objective(m, Max, 3x-2y)
@constraints(m, begin
x+y <= 1
2x+y <= 3
end)
status = solve(m)
@show getvalue(x)

We request the help of the community to update existing code that may be 
posted on the internet. If you've written a blog post, stackoverflow post, 
public Jupyter notebook, or book(! <http://www.chkwon.net/julia/>) 
containing JuMP code, please make an effort to update it to the new naming 
scheme to avoid confusing new users.

I'd like to thank those who participated in the various discussions (here 
<https://github.com/JuliaOpt/JuMP.jl/pull/726> and here 
<https://github.com/JuliaOpt/JuMP.jl/pull/732>) which helped steer this 
change in the right direction. While this release focused on simple 
renaming, we may have some more interesting syntax changes or additions in 
the next release, so keep an eye on the JuMP repository if you are 
interested. For a more complete list of changes in this release, see the 
NEWS <https://github.com/JuliaOpt/JuMP.jl/blob/master/NEWS.md> entry.

Miles, Iain, and Joey



[julia-users] Re: Why does QR beat LU for sparse, square matrices?

2016-04-26 Thread Miles Lubin
Hey Jonas,

I don't have an answer to this, but if you're looking for state-of-the-art 
performance for sparse linear algebra, I'd recommend using a proprietary 
library like PARDISO.
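
If anyone wants to reproduce the comparison, a minimal sketch along the 
lines of the experiment described below:

A = sprand(2000, 2000, 0.1)
@time lufact(A)   # UMFPACK multifrontal LU
@time qrfact(A)   # SuiteSparseQR multifrontal QR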

Miles

On Monday, April 25, 2016 at 4:49:40 PM UTC-4, Jonas Kersulis wrote:
>
> Can someone explain why qrfact is faster and requires less memory than 
> lufact for square, sparse matrices? LU is less computationally complex than 
> QR, and with decent pivoting LU should be able to preserve sparsity better 
> than QR, so I'm confused by what I'm seeing in practice.
>
> The plots below compare computation time, memory allocation, and garbage 
> collection time for the two factorization methods. I generated them by 
> factorizing sprand sparse matrices. The top plot shows results for matrices 
> with 10% nonzeros; the bottom plot shows results for matrices with 50% 
> nonzeros. The methods seem to converge in memory performance as density 
> increases, but LU loses to QR in both cases.
>
> I have looked through the paper describing the multifrontal QR algorithm 
> Julia calls, but I don't understand it fully. Before I spend more time 
> studying it, I figured I would see if someone here knows the secret sauce 
> that helps it beat LU.
>
> [two benchmark plots were attached: 10% and 50% nonzero density]
>

[julia-users] Re: ANN: JuMP 0.12 released

2016-03-29 Thread Miles Lubin
Are you suggesting a clarification in the documentation or new 
functionality? The docs clearly explain when ForwardDiff is used and what 
the performance drawbacks are. If you know of a package in Julia which we 
could use to perform reverse mode AD on user-defined functions, we'll 
gladly accept pull requests.
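
For reference, an untested sketch of the autodiff=true path in the JuMP 
0.12-era syntax (this is the one place ForwardDiff is used; everything else 
goes through ReverseDiffSparse):

using JuMP, Ipopt
myf(a, b) = (a - 1)^2 + (b - 2)^2
registerNLFunction(:myf, 2, myf, autodiff=true)
m = Model(solver=IpoptSolver())
@defVar(m, x >= 0)
@defVar(m, y >= 0)
@setNLObjective(m, Min, myf(x, y))
solve(m)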

On Tuesday, March 29, 2016 at 12:45:02 PM UTC-4, feza wrote:
>
> I suggest clarification in the documentation regarding which mode of 
> automatic differentiation is used, since this can have a large impact on 
> computation time.
>
> It seems like this, 'ForwardDiff is only used for user-defined functions 
> with the autodiff=true option. ReverseDiffSparse is used for all other 
> derivative computations.', is not very well thought out. 
> If the input dimension is much larger than the output dimension, then 
> autodiff=true should by default use reverse-mode differentiation, and 
> otherwise forward-mode differentiation.
>
>
>
> On Wednesday, March 9, 2016 at 8:27:06 AM UTC-5, Miles Lubin wrote:
>>
>> On Wednesday, March 9, 2016 at 12:52:38 AM UTC-5, Evan Fields wrote:
>>>
>>> Great to hear. Two minor questions which aren't clear (to me) from the 
>>> documentation:
>>> - Once a user defined function has been defined and registered, can it 
>>> be incorporated into NL expressions via @defNLExpr?
>>>
>>
>> Yes.
>>  
>>
>>> - The documentation references both ForwardDiff.jl and 
>>> ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
>>> should be aware of?
>>>
>>
>> ForwardDiff is only used for user-defined functions with the 
>> autodiff=true option. ReverseDiffSparse is used for all other derivative 
>> computations.
>> Using ForwardDiff to compute a gradient of a user-defined function is not 
>> particularly efficient for functions with high-dimensional input.
>>  
>>
>>> Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve 
>>> what should have been a very simple (2 variable) nonlinear problem. When I 
>>> fed the optimal solution as the starting values for the variables, the 
>>> solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
>>> point by .0001 fixed that - solve returned a solution 
>>> instantaneously-by-human-perception. Am I doing something dumb?
>>>
>>
>> I've also observed hanging within NLopt but haven't had a chance to debug 
>> it (anyone is welcome to do so!). Hanging usually means that NLopt is 
>> iterating without converging, since NLopt has no output 
>> <https://github.com/JuliaOpt/NLopt.jl/issues/16>. Try setting an 
>> iteration limit.
>>
>

Re: [julia-users] ANN: A potential new Discourse-based Julia forum

2016-03-09 Thread Miles Lubin
I suspect julia-opt will wait and see how the julia-users transition goes. 
julia-opt has much lower volume than julia-users.

On Wednesday, March 9, 2016 at 9:12:11 AM UTC-5, Johan Sigfrids wrote:
>
> What about other lists like julia-stats and julia-opt? Would they also 
> move to Discourse or be left behind?
>
> On Wednesday, March 9, 2016 at 4:05:24 PM UTC+2, Stefan Karpinski wrote:
>>
>> It seems like we can import the email list from julia-users and 
>> julia-dev, and you can reply on Discourse by email, so people who interact 
>> with groups via email won't necessarily even notice much difference. For 
>> people who interact via the web interface, I think that once the Discourse 
>> setup is ready and populated with past discussions, we will disable posting 
>> to groups and thereby cut over to the new system. It will be briefly 
>> disruptive but shouldn't be too bad.
>>
>> On Wed, Mar 9, 2016 at 8:59 AM, Tom Breloff  wrote:
>>
>>> Is it possible to auto-forward new posts from google to discourse?  What 
>>> about an automated warning to direct users to the new site?
>>>
>>> +1 for moving
>>>
>>> On Wed, Mar 9, 2016 at 8:47 AM, Stefan Karpinski  
>>> wrote:
>>>
 Yes, I think we should give this a shot.

 On Wed, Mar 9, 2016 at 6:28 AM, Viral Shah  wrote:

> Yeah, we should qualify. The next step would be to have someone 
> (Jonathan Malmud?) come up with a migration plan, and we should also 
> formally appoint a group of list admins to carry out the day to day 
> activities, enforce community standards, etc.
>
> -viral
>
>
>
> > On 09-Mar-2016, at 4:31 PM, Hans-Peter  wrote:
> >
> > There is a new Discourse blog post where Discourse offers free 
> hosting for 'community friendly Github projects'. I think Julia would 
> qualify and this would solve the question about self-hosting and its 
> associated work.
> >
> > On the other hand, I don't recall what the current situation is regarding 
> some people strongly preferring traditional mailing lists, and Discourse's 
> support thereof.
>
>

>>>
>>

[julia-users] Re: ANN: JuMP 0.12 released

2016-03-09 Thread Miles Lubin
On Wednesday, March 9, 2016 at 12:52:38 AM UTC-5, Evan Fields wrote:
>
> Great to hear. Two minor questions which aren't clear (to me) from the 
> documentation:
> - Once a user defined function has been defined and registered, can it be 
> incorporated into NL expressions via @defNLExpr?
>

Yes.
 

> - The documentation references both ForwardDiff.jl and 
> ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
> should be aware of?
>

ForwardDiff is only used for user-defined functions with the autodiff=true 
option. ReverseDiffSparse is used for all other derivative computations.
Using ForwardDiff to compute a gradient of a user-defined function is not 
particularly efficient for functions with high-dimensional input.
 

> Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve what 
> should have been a very simple (2 variable) nonlinear problem. When I fed 
> the optimal solution as the starting values for the variables, the 
> solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
> point by .0001 fixed that - solve returned a solution 
> instantaneously-by-human-perception. Am I doing something dumb?
>

I've also observed hanging within NLopt but haven't had a chance to debug 
it (anyone is welcome to do so!). Hanging usually means that NLopt is 
iterating without converging, since NLopt has no output 
<https://github.com/JuliaOpt/NLopt.jl/issues/16>. Try setting an iteration 
limit.
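
For example, a sketch (maxeval is one of the stopping options the NLopt.jl 
wrapper forwards to the library; the algorithm choice here is arbitrary):

using JuMP, NLopt
m = Model(solver=NLoptSolver(algorithm=:LD_SLSQP, maxeval=10000))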


[julia-users] Re: GSOC2016

2016-03-08 Thread Miles Lubin
Hi Rohan,

Head over to the julia-opt list and open a discussion there.

Miles

On Tuesday, March 8, 2016 at 8:56:39 AM UTC-5, Rohan Goel wrote:
>
> Hello everyone,
> I want to contribute to the idea "Support for complex numbers within 
> Convex.jl".
> Can someone please help me get started? What should I start studying, and 
> how should I set up my machine?
> Thank you
>


[julia-users] Re: ANN: JuMP 0.12 released

2016-03-08 Thread Miles Lubin
This is a bug, I've opened an issue 
here: https://github.com/JuliaOpt/JuMP.jl/issues/695

As a workaround, if you replace sqrt(y0) with 0.0 then the NaNs go away. 
Clearly it shouldn't affect the result since y0 is a constant.
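
Concretely, the workaround applied to the model quoted below:

# was: @defNLExpr(nlp, f1, s1 /(sqrt(y0) + sqrt(y[1])))
@defNLExpr(nlp, f1, s1 / (0.0 + sqrt(y[1])))   # y0 == 0.0, so inline the constant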

On Tuesday, March 8, 2016 at 2:46:12 AM UTC-5, JP Dussault wrote:
>
> Hi,
>
> I experience NaN in second derivatives with JuMP 0.12 which were not 
> present with the previous version. I scaled down one example, a 
> Brachistochrone model (see 
> http://link.springer.com/article/10.1007%2Fs11081-013-9244-4)  to n=4 
> yielding 6 variables and the hessian clearly shows many NaN entries. I 
> double checked with an "equivalent" AMPL version and the hessian is clean 
> (no NaN).
>
> Here is the model as I ran it within the MPB Newton solver notebook.
>
> n=4
> nlp = Model(solver=NewtonSolver())
> eps = 1e-8
>
> xn = 5.0
> yn = 1.0 
> x0 = 0.0
> y0 = 0.0
>
> @defVar(nlp, x[j=1:n-1], start = xn*(j/n))
> @defVar(nlp, y[j=1:n-1], start = (j/n))
> 
> @defNLExpr(nlp, dx1,  (x[1] - x0))
> @defNLExpr(nlp, dxn,  (xn - x[n-1])) 
> @defNLExpr(nlp, dx[j=2:n-1],  (x[j] - x[j-1]))
>
> @defNLExpr(nlp, dy[j=2:n-1],  (y[j] - y[j-1]))
> @defNLExpr(nlp, dy1,  (y[1] - y0))
> @defNLExpr(nlp, dyn,  (yn - y[n-1]))
>
> @defNLExpr(nlp, s[j=2:n-1],  sqrt(dx[j]^2 + dy[j]^2))
> @defNLExpr(nlp, s1,  sqrt(dx1^2 + dy1^2))
> @defNLExpr(nlp, sn,  sqrt(dxn^2 + dyn^2))
>
> @defNLExpr(nlp, f[j=2:n-1],  s[j] /(sqrt(y[j-1])+sqrt(y[j])))
> @defNLExpr(nlp, f1,  s1 /(sqrt(y0) + sqrt(y[1])))
> @defNLExpr(nlp, fn,  sn /(sqrt(y[n-1])+sqrt(yn)))
>
> 
> @setNLObjective(
> nlp,
> Min,
>     sum{f[i], i = 2:n-1} + f1 + fn
> )
>
>
> status = solve(nlp);
>
> Thx, 
>
>
> JPD
>
>
> Le samedi 27 février 2016 23:14:12 UTC+1, Miles Lubin a écrit :
>>
>> The JuMP team is happy to announce the release of JuMP 0.12.
>>
>> This release features a complete rewrite of JuMP's automatic 
>> differentiation functionality, which is the largest change in JuMP's 
>> nonlinear optimization support since JuMP 0.5. Most of the changes are 
>> under the hood, but as previously announced 
>> <https://groups.google.com/forum/#!topic/julia-opt/wt36Y6nzysY> there 
>> are a couple of syntax changes:
>> - The first parameter to @defNLExpr *and* @defExpr should now be the 
>> model object. All linear and nonlinear subexpressions are now attached to 
>> their corresponding model.
>> - If solving a sequence of nonlinear models, you should now use nonlinear 
>> parameters instead of Julia's variable binding rules.
>>
>> Many nonlinear models should see performance improvements in JuMP 0.12, 
>> let us know if you observe otherwise.
>>
>> We also now support user-defined functions 
>> <http://jump.readthedocs.org/en/latest/nlp.html#user-defined-functions> 
>> and *automatic differentiation of user-defined functions*. This is quite 
>> a significant new feature which allows users to integrate (almost) 
>> arbitrary code as a nonlinear function within JuMP expressions, thanks to 
>> ForwardDiff.jl <https://github.com/JuliaDiff/ForwardDiff.jl>. We're 
>> looking forward to seeing how this feature can be used in practice; please 
>> give us feedback on the syntax and any rough edges you run into.
>>
>> Other changes include:
>> - Changed syntax for iterating over AffExpr objects
>> - Stopping the solver from within a callback now causes the solver to 
>> return :UserLimit instead of throwing an error.
>> - getDual() now works for conic problems (thanks to Emre Yamangil)
>>
>> Given the large number of changes, bugs are possible. Please let us know 
>> of any incorrect or confusing behavior.
>>
>> Miles, Iain, and Joey
>>
>

[julia-users] Re: How to have many arguments in a Jump.jl nonlinear model?

2016-02-29 Thread Miles Lubin
Yes, it will be supported eventually but I can't give a time frame.

On Monday, February 29, 2016 at 8:37:30 PM UTC-5, jock@gmail.com wrote:
>
> OK, thanks Miles. I'll head to Optim.jl.
> Is there an intention to implement this functionality?
> Suggestion for julia-opt is noted, thanks again.
>
>
> On Tuesday, March 1, 2016 at 12:24:05 PM UTC+11, Miles Lubin wrote:
>>
>> There's no syntax for this at the moment, it's a known issue. The problem 
>> is that JuMP's internal representation of nonlinear expressions doesn't 
>> allow vectors or matrices.
>> For the moment we're targeting the use cases where the function is low 
>> dimensional. For box-constrained nonlinear optimization you can use 
>> Optim.jl.
>>
>> (By the way, better to post questions like these to julia-opt 
>> <https://groups.google.com/forum/#!forum/julia-opt>.)
>>
>> On Monday, February 29, 2016 at 6:50:50 PM UTC-5, jock@gmail.com 
>> wrote:
>>>
>>> Hi there,
>>>
>>> I have a nonlinear varargs function f(x...) that I'd like to maximize. 
>>> That is, the function is defined as follows:
>>>
>>> function f(x...)
>>> # do stuff here
>>> result
>>> end
>>>
>>> With a small number of arguments, for example 2, I can write the 
>>> following and get the correct result:
>>>
>>> registerNLFunction(:f, 2, f, autodiff=true)
>>> m = Model()
>>> @defVar(m, x[1:2] >= 0.0)
>>> @setNLObjective(m, Max, f(x[1], x[2]))
>>>
>>> With a large number of arguments, say 100, I'd prefer not to manually 
>>> write f(x[1], ..., x[100]) in the @setNLObjective macro.
>>> I have tried the following to no avail:
>>> @setNLObjective(m, Max, f(x...))
>>> @setNLObjective(m, Max, f(tuple(x...)))
>>>
>>> Is there a way to get this going for 100 variables without having to 
>>> manually write f(x[1], ..., x[100])?
>>>
>>> Cheers,
>>> Jock
>>>
>>> p.s. Thanks for the great work on 0.12.0 - it's awesome.
>>>
>>>

[julia-users] Re: How to have many arguments in a Jump.jl nonlinear model?

2016-02-29 Thread Miles Lubin
There's no syntax for this at the moment, it's a known issue. The problem 
is that JuMP's internal representation of nonlinear expressions doesn't 
allow vectors or matrices.
For the moment we're targeting the use cases where the function is low 
dimensional. For box-constrained nonlinear optimization you can use 
Optim.jl.

(By the way, better to post questions like these to julia-opt 
<https://groups.google.com/forum/#!forum/julia-opt>.)
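
For example, a sketch of the box-constrained route in the current Optim API 
(Fminbox wraps an inner optimizer; the objective takes a plain vector, so no 
splatting is needed):

using Optim
f(x) = sum(abs2, x - ones(100))                    # 100-dimensional objective
res = optimize(f, zeros(100), fill(10.0, 100),     # lower and upper bounds
               fill(0.5, 100), Fminbox(GradientDescent()))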

On Monday, February 29, 2016 at 6:50:50 PM UTC-5, jock@gmail.com wrote:
>
> Hi there,
>
> I have a nonlinear varargs function f(x...) that I'd like to maximize. 
> That is, the function is defined as follows:
>
> function f(x...)
> # do stuff here
> result
> end
>
> With a small number of arguments, for example 2, I can write the following 
> and get the correct result:
>
> registerNLFunction(:f, 2, f, autodiff=true)
> m = Model()
> @defVar(m, x[1:2] >= 0.0)
> @setNLObjective(m, Max, f(x[1], x[2]))
>
> With a large number of arguments, say 100, I'd prefer not to manually 
> write f(x[1], ..., x[100]) in the @setNLObjective macro.
> I have tried the following to no avail:
> @setNLObjective(m, Max, f(x...))
> @setNLObjective(m, Max, f(tuple(x...)))
>
> Is there a way to get this going for 100 variables without having to 
> manually write f(x[1], ..., x[100])?
>
> Cheers,
> Jock
>
> p.s. Thanks for the great work on 0.12.0 - it's awesome.
>
>

Re: [julia-users] Re: Blog-post on how to make package documentation

2016-02-29 Thread Miles Lubin
FYI we're planning on moving JuMP away from Sphinx and ReadTheDocs for various 
reasons.

On Monday, February 29, 2016 at 2:56:23 PM UTC-5, Cedric St-Jean wrote:
>
> Thank you for this. 
>
> Has anyone evaluated using Sphinx instead? JuMP uses it, and it may fit my 
> needs better, but it's hard to tell.
>
> On Wednesday, February 17, 2016 at 5:01:51 PM UTC-5, Mauro wrote:
>>
>> Actually, Mike, is there a way to sort the API docs? Alphabetical or to 
>> some given order? 
>>
>> On Wed, 2016-02-17 at 17:44, Michael Hatherly  
>> wrote: 
>> > Thanks for writing that up Mauro and also the shout-out :) Worth noting 
>> > Mike Innes' important work in getting the docsystem integrated into 
>> `Base` 
>> > and his 
>> > heroic effort converting most of the docs from rst to markdown as well. 
>> > 
>> > On a technical note, Lexicon doesn't actually use the docsystem in 
>> `Base` 
>> > at all and still relies on Docile's own implementation currently, so 
>> it's 
>> > possible 
>> > that people may start to run into problems with this approach on 0.5 as 
>> it 
>> > further diverges from 0.3/0.4. (Do open issues if they happen to crop 
>> up 
>> > though and I'll do my best to keep things working as long as possible.) 
>> > Future work using the `Base` docsystem for documentation generation is 
>> > moving 
>> > ahead, when time allows, in a separate 
>> > package, https://github.com/MichaelHatherly/Lapidary.jl. 
>> > 
>> > -- Mike 
>> > 
>> > On Wednesday, 17 February 2016 17:13:38 UTC+2, Mauro wrote: 
>> >> 
>> >> Until a few days ago, I was a bit confused on how to go about writing 
>> >> nice package documentation, including API-docs.  I figured it out and 
>> it 
>> >> turned out to be pretty trivial, once I knew how.  I wrote it up in 
>> case 
>> >> someone else is confused (or I become confused again): 
>> >> http://maurow.bitbucket.org/notes/documenting-a-julia-package.html 
>> >> (feedback welcome) 
>> >> 
>> >>   Mauro 
>> >> 
>>
>

[julia-users] ANN: JuMP 0.12 released

2016-02-27 Thread Miles Lubin
The JuMP team is happy to announce the release of JuMP 0.12.

This release features a complete rewrite of JuMP's automatic 
differentiation functionality, which is the largest change in JuMP's 
nonlinear optimization support since JuMP 0.5. Most of the changes are 
under the hood, but as previously announced 
<https://groups.google.com/forum/#!topic/julia-opt/wt36Y6nzysY> there are a 
couple of syntax changes:
- The first parameter to @defNLExpr *and* @defExpr should now be the model 
object. All linear and nonlinear subexpressions are now attached to their 
corresponding model.
- If solving a sequence of nonlinear models, you should now use nonlinear 
parameters instead of Julia's variable binding rules.
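
A rough sketch of the parameter workflow (untested; assumes the 
@defNLParam/setValue API introduced in this release):

using JuMP, Ipopt
m = Model(solver=IpoptSolver(print_level=0))
@defVar(m, x >= 0)
@defNLParam(m, p == 1.0)
@setNLObjective(m, Min, (x - p)^2)
for v in [1.0, 2.0, 3.0]
    setValue(p, v)
    solve(m)   # re-solves without rebuilding the model
end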

Many nonlinear models should see performance improvements in JuMP 0.12, let 
us know if you observe otherwise.

We also now support user-defined functions 
<http://jump.readthedocs.org/en/latest/nlp.html#user-defined-functions> and 
*automatic 
differentiation of user-defined functions*. This is quite a significant new 
feature which allows users to integrate (almost) arbitrary code as a 
nonlinear function within JuMP expressions, thanks to ForwardDiff.jl 
<https://github.com/JuliaDiff/ForwardDiff.jl>. We're looking forward to 
seeing how this feature can be used in practice; please give us feedback on 
the syntax and any rough edges you run into.

Other changes include:
- Changed syntax for iterating over AffExpr objects
- Stopping the solver from within a callback now causes the solver to 
return :UserLimit instead of throwing an error.
- getDual() now works for conic problems (thanks to Emre Yamangil)

Given the large number of changes, bugs are possible. Please let us know of 
any incorrect or confusing behavior.

Miles, Iain, and Joey


[julia-users] Re: ERROR: AssertionError: x.head == :escape

2015-10-25 Thread Miles Lubin
Could you provide some standalone code that reproduces this error? Hard to 
say without an example.

On Sunday, October 25, 2015 at 12:24:34 PM UTC-4, Michela Di Lullo wrote:
>
> After the JuMP Package checkout
> I get this error
>
> *ERROR: AssertionError: x.head == :escape*
>
>
> while declaring some functions.
>
> What does it mean? 
>
> Michela
>
>

[julia-users] Re: Help on optimization problem

2015-10-22 Thread Miles Lubin
On Tuesday, October 20, 2015 at 11:12:44 PM UTC-4, Spencer Russell wrote:
>
>
> 1. What solver is appropriate for this sort of problem? (or should I just 
> go with a spring-force algorithm like in GraphLayout.jl?) 
>

That depends very directly on the precise formulation for the objective 
function. If you could sketch out what it might look like algebraically 
then we can give better suggestions.
 

> 2. (JuMP Specific) - Should I specify my known positions as model 
> variables with equality constraints, or just normal julia variables that 
> show up in my objective function?
>

In the case of nonlinear optimization, it may make the solver's job easier 
if you avoid equality constraints and substitute in the known values 
directly.
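
For example, a sketch in the JuMP syntax of the time (hypothetical data; 
only the unknown positions become variables):

using JuMP
m = Model()
fixed = Dict(1 => (0.0, 0.0), 2 => (3.0, 1.0))   # known node positions
@defVar(m, pos[i=3:6, d=1:2])                    # unknown nodes only
# objective terms reference fixed[i] as constants instead of adding
# equality-constrained variables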


[julia-users] Re: new function to calculate cycle basis in connected graphs

2015-10-13 Thread Miles Lubin
The Graphs.jl package is maintained by volunteers but has no active owner. 
You may want to look at contributing to 
the https://github.com/JuliaGraphs/LightGraphs.jl package instead.

On Tuesday, October 13, 2015 at 6:28:53 AM UTC-4, Michela Di Lullo wrote:
>
> Hello,
>
> I would like to share, in the Graphs.jl package, a new function to calculate 
> cycle bases in connected graphs.
>
> I looked at the source code of some functions in the package, and they 
> use a specific style that is not exhaustively explained anywhere 
> (as far as I know).
>  
> So my question is, when making a pull request, should I adapt my function 
> to the package's standards, or is that not necessary?
> If yes, where could I find something about the conventions used for the 
> functions in Graphs.jl? 
>
> Thank you very much,
>
> Michela
>


Re: [julia-users] Stateflow equivalent in Julia

2015-09-25 Thread Miles Lubin
It's not too clear to me what's trying to be accomplished here, but "gear" 
and "N" are optimization variables and can't be used inside conditional 
statements. You can use getValue() to query the value of the variables in 
an optimal solution after calling solve(). If these statements are supposed 
to be constraints in the optimization problem, you'll have to reformulate 
them into standard nonlinear programming form, possibly with the addition 
of integer variables to model logical relationships.
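
For instance, a self-contained sketch of one such reformulation: a binary 
indicator z[j] for the condition N[j] >= 1500, linked by big-M constraints 
(JuMP syntax of the time; M must bound the range of N):

using JuMP
m = Model()
n = 500
@defVar(m, 600 <= N[1:n] <= 2500)
@defVar(m, z[1:n], Bin)
M = 2500
@addConstraint(m, up[j=1:n], N[j] - 1500 <= M*z[j])
@addConstraint(m, down[j=1:n], 1500 - N[j] <= M*(1 - z[j]))
# gear transitions then become linear constraints coupling gear[j+1],
# gear[j], and z[j], instead of if/else statements on variables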

On Friday, September 25, 2015 at 10:22:15 AM UTC-4, NV wrote:
>
> Yes, I have a transmission model to implement. I used the following code. 
> But I need it to run for as long as I have initialized the time variable to.
>
> using JuMP, Ipopt
> Truck = Model(solver=IpoptSolver(print_level=0))
> #Initial Values
> gear_0 = 1
> v_0 = 20
> N_0 = 600
> #Constants:
> i_f = 3.27
> eta_f = 0.95
> eta_t = 0.95
> r_w = 0.52
> i = [11.27; 9.14; 7.17; 5.81; 4.62; 3.75; 3.01; 2.44; 1.91; 1.55; 1.23; 1]
> #Discretization
> n = 500
> @defVar(Truck,Δt≥0,start = 1/n)
> @defNLExpr(t_f,Δt*n)
> @defVar(Truck,600<=N[1:n]<=2500)
> @defVar(Truck,1<=gear[1:n]<=12)
> @defVar(Truck,i_t[1:n])
> @addConstraint(Truck,N[0]==N_0)
> @addConstraint(Truck,gear[0]==gear_0)
> @addConstraint(Truck,i_t[0]==i[1])
> @defNLExpr(N[j=0:n],(30*i_t[j]*i_f)/(pi*r_w))
> for j in 1:1:n
> if(gear[j]==1)
> if(N[j]>=1500)
> gear[j+1]= 2
> else
> gear[j+1] = 1
> end
> elseif(gear[j]==2)
> if(N[j]>=1501)
> gear[j+1] = 3
> elseif(N[j]<=950)
> gear[j+1]= 1
> else
> gear[j+1] = 2
> end
> elseif(gear[j]==3)
> if(N[j]>=1502)
> gear[j+1] = 4
> elseif(N[j]<=960)
> gear[j+1] = 2
> else
> gear[j+1] = 3
> end
> elseif(gear[j]==4)
> if(N[j]>=1503)
> gear[j+1] = 5;
> elseif(N[j]<=970)
> gear[j+1] = 3
> else
> gear[j+1] = 4
> end
> elseif(gear[j]==5)
> if(N[]>=1504)
> gear[j+1] = 6;
> elseif(N[j]<=980)
> gear[j+1] = 4
> else
> gear[j+1] = 5
> end
> elseif(gear[j]==6)
> if(N[j]>=1505)
> gear[j+1] = 7
> elseif(N[j]<=990)
> gear[j+1] = 5
> else
> gear[j+1] = 6
> end
> elseif(gear[j]==7)
> if(N[j]>=1497)
> gear[j+1] = 8
> elseif(N[j]<=1000)
> gear[j+1] = 6
> else
> gear[j+1] = 7
> end
> elseif(gear[j]==8)
> if(N[j]>=1489)
> gear[j+1] = 9
> elseif(N[j]<=1006)
> gear[j+1] = 7
> else
> gear[j+1] = 8
> end
> elseif(gear[j]==9)
> if(N[j]>=1481)
> gear[j+1] = 10
> elseif(N[j]<=1012)
> gear[j+1] = 8
> else
> gear[j+1] = 9
> end
> elseif(gear[j]==10)
> if(N[j]>=1473)
> gear[j+1] = 11
> elseif(N[j]<=1018)
> gear[j+1] = 9
> else
> gear[j+1] = 10
> end
> elseif(gear[j]==11)
> if(N[j]>=1465)
> gear[j+1] = 12
> elseif(N[j]<=1024)
> gear[j+1] = 10
> else
> gear[j+1] = 11
> end 
> elseif(gear[j]==12)
> if(N[j]<=1030)
> gear[j+1] = 11
> else
> gear[j+1] = 12
> end
> end   
> i_t[j] = i[gear[j+1]]
> end
>
> To solve for the system dynamics, I am following the example in 
> http://www.juliaopt.org/notebooks/JuMP-Rocket.html. I tried passing gear 
> position(gear), engine rpm(N) and transmission ratio(i_t) as variables and 
> have added the necessary constraints as well. However, on trying to run it, 
> I kept getting an error:
>
>
> The comparison operator == has been deprecated for constructing constraints. 
> Use the macro form @addConstraint instead.
>
>
> LoadError: TypeError: non-boolean 
> (JuMP.GenericRangeConstraint{JuMP.GenericAffExpr{Float64,JuMP.Variable}}) 
> used in boolean context
> while loading In[2], in expression starting on line 30
>
>  in anonymous at no file
>
>
> On Friday, 25 September 2015 08:18:53 UTC-5, Spencer Russell wrote:
>>
>> Welcome to Julia! 
>>
>> To get useful answers you’re going to need to provide quite a bit more 
>> detail on what problems you’re running into. What did you try? What errors 
>> are you getting? Are there specific concepts from the documentation that 
>> you’re having trouble with? 
>>
>> -s 
>>
>> > On Sep 25, 2015, at 1:18 AM, Narayani Vedam  
>> wrote: 
>> > 
>> > Hi, 
>> >I am new to Julia. I tried implementing a logic that I had in 
>> Simulink - Stateflow using Julia, but ran into trouble. Any heads-up on 
>> this? 
>> > 
>> > Thank you 
>>
>>

[julia-users] Re: ANN: JuMP 0.10 released

2015-09-03 Thread Miles Lubin
Hi Geoff,

Very interesting work. I think it's fair to say that the current approach 
for modular models in JuMP is pretty ad hoc. The goal was to develop a 
modeling system that's similar enough to what people are used to, as opposed 
to putting forward a new philosophy for writing down and composing models. 
There's no explicit concept of modularity in JuMP itself, but since we're 
embedded in a full programming language, it's easy to structure your 
models by putting different pieces of the model in different functions. 
That said, I think Julia is the best platform out there at the moment for 
experimenting with modeling ideas and releasing them for public consumption 
in a usable form. With very little work you can have full access to the 
existing solver infrastructure, currently at 11 native interfaces and many 
more through writing .osil or .nl files. If you'd like to discuss more, 
please open up a new thread on the julia-opt list.

Best,
Miles

On Thursday, September 3, 2015 at 6:03:29 PM UTC-6, Geoff Leyland wrote:
>
>
> As a way of contributing that discussion, could I point interested people 
> to
>
>
> https://github.com/geoffleyland/rima/blob/master/papers/Rima-ORSNZ-presentation-2010.pdf
>
> It's from a while ago (before my knowledge of Julia) and now I'm really 
> not up-to-date with what's happening in the modular models space, but I'd 
> be keen to know if there's interest in the Julia community for a similar 
> thing in Julia, any feedback on what's there, and any pointers on how to 
> get started on making it Julia-like.
>
> Cheers,
> Geoff
>


[julia-users] ANN: JuMP 0.10 released

2015-08-31 Thread Miles Lubin
The JuMP team is happy to announce the release of JuMP 0.10.

This is a major release with the greatest amount of new functionality since 
the addition of nonlinear modeling last year. This will likely be the last 
major release of JuMP to support Julia 0.3. Thanks to the heroic work of 
Joey Huchette, JuMP now supports *vectorized syntax* and modeling for 
*semidefinite 
programming*.

You can now write, for example:

@defVar(m, x[1:5])
@addConstraint(m, A*x .== 0)

where A is a Julia matrix (dense or sparse). Note that we require dot 
comparison operators .== (and similarly .<= and .>=) for vectorized 
constraints. The vectorized syntax extends to quadratic but not general 
nonlinear expressions.

An important new concept to keep in mind is that this vectorized syntax 
only applies to sets of variables which are one-based arrays. If you 
declare variables indexed by more complicated sets, e.g.,

@defVar(m, y[3:5])
s = [:cat, :dog, :pizza]
@defVar(m, z[s])

then dot(y,z) and rand(3,3)*z are undefined. A result of this new concept 
of one-based arrays is that x above now has the type Vector{JuMP.Variable}. 
In this case, getValue() now returns a Vector{Float64} instead of an opaque 
JuMP object. We hope users find this new distinction between one-indexed 
array variables and all other symbolically indexed variables useful and 
intuitive (if not, let us know).

For semidefinite modeling, you can declare variables as SDP matrices and 
add LMI (linear matrix inequality) constraints as illustrated in the 
examples for minimal ellipse 

 and 
max cut 
,
 
among others.

We also have a *new syntax for euclidean norms:*

@addConstraint(m, norm2{c[i]*x[i]+b[i],i=1:N} <= 10)
# or
@addConstraint(m, norm(c.*x+b) <= 10)

You may be wondering how JuMP compares with Convex.jl given these new 
additions. Not much has changed philosophically; JuMP directly translates 
SDP constraints and euclidean norms into the sparse matrix formats as 
required by conic solvers. Unlike Convex.jl, *JuMP accepts only 
standard-form SDP and second-order conic constraints and will not perform 
any automatic transformations* such as modeling nuclear norms, minimum 
eigenvalue, geometric mean, rational norms, etc. We would recommend using 
Convex.jl for easy modeling of such functions. Our focus, for now, is on 
the large-scale performance and stability of the huge amount of new syntax 
introduced in this release.

Also notable in this release:
- JuMP models now store a dictionary of attached variables, so that you can 
look up a variable from a model by name by using the new getVar() method.
- On Julia 0.4 only, you can now add a filter to variable declarations, e.g.,
@defVar(m, x[i=1:5,j=1:5; i+j >= 3])
will only create variables for the indices which satisfy the filter 
condition. (These are not one-based arrays as introduced above.)
- Dual multipliers are available for nonlinear problems from the solvers 
which provide them
- There is improved documentation for querying derivatives from a nonlinear 
JuMP model.
- *We now try to print warnings for two common performance traps*: calling 
getValue() in a tight loop and using operator overloading to construct 
large JuMP expressions. Please let us know if these are useful or annoying 
or both so that we can tune the warning thresholds.
- Thanks to Tony Kelman and Jack Dunn, you can now call a large number of 
external solvers including Bonmin and Couenne through either the .osil or 
.nl exchange formats.
- Module precompilation speeds up using JuMP considerably, for those on 
Julia 0.4

The delay since the last release of JuMP is mostly due to us trying to test 
and refine the new syntax, but inevitably some bugs have slipped through, 
so please let us know of any incorrect or confusing behavior.

Also newsworthy is our new paper  
describing the methods used in JuMP with benchmark comparisons to existing 
open-source and commercial optimization modeling software.

Miles, Iain, and Joey



Re: [julia-users] Creating a new version of a package

2015-08-22 Thread Miles Lubin
Pkg.tag("NaNMath")
Pkg.publish()

is all you should have to do.

[julia-users] Re: ANN: Major overhaul of ForwardDiff.jl

2015-08-14 Thread Miles Lubin
Thanks Jarrett!

This work is the product of a Julia Summer of Code project. Please kick the 
tires; we're planning to let the new code sit on master for a bit before 
tagging a new release. Users of Julia 0.3 will be stuck on the old API, 
since the code makes extensive use of the generated function feature of Julia 
0.4.
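
To try it out:

Pkg.checkout("ForwardDiff")   # track master; Pkg.free("ForwardDiff") returns
                              # to the tagged release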

On Friday, August 14, 2015 at 2:20:09 PM UTC-6, Jarrett Revels wrote:
>
> Hi!
>
> ForwardDiff.jl , a Julia 
> package for performing automatic differentiation, has just been updated and 
> is now much less buggy, much more performant, and much more comprehensively 
> tested. 
>
> An additional goal of the update was to provide a more friendly API, so 
> hopefully the package is now easier to use for newcomers (documentation can 
> be found in the master branch's README).
>
> *To use the updated version of the package, check out it's most recent 
> commit on the master branch*. The update is not yet tagged for release, 
> as we want to give people a chance to work with the new API some before 
> tagging a new version. Feel free to provide feedback in the form of an 
> issue or pull request to the repository!
>
> Note that this update only supports Julia v0.4.
>
> Best,
> Jarrett
>
>
>

Re: [julia-users] [ANN] SparseVectors

2015-07-29 Thread Miles Lubin
This was discussed pretty extensively in the github 
issue: https://github.com/JuliaLang/julia/issues/11324

On Wednesday, July 29, 2015 at 8:32:13 PM UTC-6, Christian Peel wrote:
>
> Nice!
>
> I saw somewhere recently that the plan was to move more out of base and 
> into packages.  I'm curious as to why this needs to go into Julia base.
>
> cheers
>
> On Wed, Jul 29, 2015 at 2:42 AM, Dahua Lin wrote:
>
>> I searched over the forum and it seems that I did not announce this 
>> package when it is initially created.
>>
>> So here it is:
>>
>> https://github.com/JuliaSparse/SparseVectors.jl
>>
>> Simply put, it provides sparse vectors (not sparse matrices with 
>> one-column) and a series of methods operating thereon. Recently, I did a 
>> major upgrade of the package and many more functions are added. The package 
>> has been systematically tested and it is ready for general use.
>>
>> A majority of the functionalities will be migrated to Julia base, after 
>> Julia v0.4 is released. If your applications involve processing sparse 
>> vectors (e.g. text analysis), you may check it out and play with it. 
>>
>> Cheers,
>> Dahua
>>  
>>
>
>
>
> -- 
> chris...@ieee.org 
>  


Re: [julia-users] Re: Errors while trying to use JuMP

2015-06-14 Thread Miles Lubin
You're using a prerelease version of 0.3 from nearly a year ago. Not sure
how this came to be installed, but you'll need to update to 0.3.0 release
or later.
On Jun 14, 2015 2:38 PM, "Kostas Tavlaridis-Gyparakis" <
kostas.tavlari...@gmail.com> wrote:

> The output of versioninfo() is the following:
>
> Julia Version 0.3.0-rc1+54
> Commit 4e92487 (2014-07-17 05:40 UTC)
> Platform Info:
>   System: Linux (x86_64-linux-gnu)
>   CPU: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
>   WORD_SIZE: 64
>   BLAS: libblas.so.3
>   LAPACK: liblapack.so.3
>   LIBM: libopenlibm
>
>
> I already deleted the Docile package from the julia directory and
> reinstalled it,
> but again with no luck and also again the old version of 0.1 was installed.
>
> On Sunday, June 14, 2015 at 2:34:30 PM UTC+2, Miles Lubin wrote:
>>
>> What is the output of versioninfo() (at the Julia command line)?
>>
>> Also, try deleting your ~/.julia directory and reinstalling the packages
>> you need.
>>
>> On Saturday, June 13, 2015 at 3:11:16 PM UTC+2, Kostas
>> Tavlaridis-Gyparakis wrote:
>>>
>>> Here is the outcome of Pkg.status():
>>>
>>> Pkg.status()
>>> 5 required packages:
>>>  - CPLEX 0.0.9
>>>  - Docile 0.1.0
>>>  - Images 0.4.37
>>>  - Jewel 1.0.4
>>>  - JuMP  0.5.8
>>> 26 additional packages:
>>>  - AnsiColor 0.0.2
>>>  - BinDeps   0.3.9
>>>  - Calculus  0.1.4
>>>  - Color 0.4.5
>>>  - Compat 0.4.4
>>>  - Compose   0.3.12
>>>  - DataStructures 0.3.9
>>>  - DualNumbers   0.1.0
>>>  - FactCheck 0.1.2
>>>  - FixedPointNumbers 0.0.7
>>>  - Graphics  0.1.0
>>>  - Graphs 0.4.3
>>>  - Iterators 0.1.7
>>>  - JSON  0.4.0
>>>  - JuliaParser   0.6.2
>>>  - LNR   0.0.1
>>>  - Lazy  0.8.4
>>>  - Markdown  0.3.0
>>>  - MathProgBase  0.3.1
>>>  - Requires  0.1.2
>>>  - ReverseDiffSparse 0.1.2
>>>  - SHA   0.0.4
>>>  - SIUnits   0.0.3
>>>  - TexExtensions 0.0.2
>>>  - URIParser 0.0.5
>>>  - Zlib  0.1.8
>>>
>>>
>>> On Saturday, June 13, 2015 at 2:50:56 PM UTC+2, Tim Holy wrote:
>>>>
>>>> Check to see what Pkg.status() says. You may have another package that
>>>> is
>>>> pinning Docile at 0.1?
>>>>
>>>> --Tim
>>>>
>>>> On Saturday, June 13, 2015 04:45:12 AM Kostas Tavlaridis-Gyparakis
>>>> wrote:
> >>>> > When I run update after I install Docile there is nothing new:
>>>> >
>>>> > Pkg.update()
>>>> > INFO: Updating METADATA...
>>>> > INFO: Computing changes...
>>>> > INFO: No packages to install, update or remove
>>>> >
>>>> > How can I update from terminal to julia 0.4 myself?
> >>>> > Also I am not quite sure how to use the clone command
> >>>> > to force Docile to the latest version (sorry I am
>>>> > a real noob).
>>>> >
>>>> > On Saturday, June 13, 2015 at 1:25:05 PM UTC+2, Svaksha wrote:
>>>> > > On Sat, Jun 13, 2015 at 10:03 AM, Kostas Tavlaridis-Gyparakis
>>>> > >
>>>> > > > wrote:
>>>> > > > Followed your recommendation but have to note the following
>>>> things:
>>>> > > >
>>>> > > > 1) I have Julia v0.3 even if I just downloaded Julia this week
>>>> > > > 2) after deleting Dolice and running update command again the
>>>> version
>>>> > >
>>>> > > 0.1
>>>> > >
>>>> > > > was installed:
> >>>> > > Run update after installing Docile. I'm on 0.4 so I don't know what
>>>> > > version of Docile the stable version installs, but try using
>>>> > > `Pkg.pin()` to force a specific version of the package.
>>>> Unregistered
>>>> > > packages can be installed via the url, so you can try
>>>> > > Pkg.clone("git://example.com/path/to/Package.jl.git")
>>>> > > SVAKSHA ॥  http://about.me/svaksha  ॥
>>>>
>>>>


Re: [julia-users] Re: Errors while trying to use JuMP

2015-06-14 Thread Miles Lubin
What is the output of versioninfo() (at the Julia command line)?

Also, try deleting your ~/.julia directory and reinstalling the packages 
you need.

On Saturday, June 13, 2015 at 3:11:16 PM UTC+2, Kostas Tavlaridis-Gyparakis 
wrote:
>
> Here is the outcome of Pkg.status():
>
> Pkg.status()
> 5 required packages:
>  - CPLEX 0.0.9
>  - Docile 0.1.0
>  - Images 0.4.37
>  - Jewel 1.0.4
>  - JuMP  0.5.8
> 26 additional packages:
>  - AnsiColor 0.0.2
>  - BinDeps   0.3.9
>  - Calculus  0.1.4
>  - Color 0.4.5
>  - Compat 0.4.4
>  - Compose   0.3.12
>  - DataStructures 0.3.9
>  - DualNumbers   0.1.0
>  - FactCheck 0.1.2
>  - FixedPointNumbers 0.0.7
>  - Graphics  0.1.0
>  - Graphs 0.4.3
>  - Iterators 0.1.7
>  - JSON  0.4.0
>  - JuliaParser   0.6.2
>  - LNR   0.0.1
>  - Lazy  0.8.4
>  - Markdown  0.3.0
>  - MathProgBase  0.3.1
>  - Requires  0.1.2
>  - ReverseDiffSparse 0.1.2
>  - SHA   0.0.4
>  - SIUnits   0.0.3
>  - TexExtensions 0.0.2
>  - URIParser 0.0.5
>  - Zlib  0.1.8
>
>
> On Saturday, June 13, 2015 at 2:50:56 PM UTC+2, Tim Holy wrote:
>>
>> Check to see what Pkg.status() says. You may have another package that is 
>> pinning Docile at 0.1? 
>>
>> --Tim 
>>
>> On Saturday, June 13, 2015 04:45:12 AM Kostas Tavlaridis-Gyparakis wrote: 
>> > When I run update after I install Docile there is nothing new: 
>> > 
>> > Pkg.update() 
>> > INFO: Updating METADATA... 
>> > INFO: Computing changes... 
>> > INFO: No packages to install, update or remove 
>> > 
>> > How can I update from terminal to julia 0.4 myself? 
>> > Also I am not quite sure how to use the clone command 
>> > to force Docile to the latest version (sorry I am 
>> > a real noob). 
>> > 
>> > On Saturday, June 13, 2015 at 1:25:05 PM UTC+2, Svaksha wrote: 
>> > > On Sat, Jun 13, 2015 at 10:03 AM, Kostas Tavlaridis-Gyparakis 
>> > > 
>> > > > wrote: 
>> > > > Followed your recommendation but have to note the following things: 
>> > > > 
>> > > > 1) I have Julia v0.3 even if I just downloaded Julia this week 
>> > > > 2) after deleting Dolice and running update command again the 
>> version 
>> > > 
>> > > 0.1 
>> > > 
>> > > > was installed: 
>> > > Run update after installing Docile. I'm on 0.4 so I don't know what 
>> > > version of Docile the stable version installs, but try using 
>> > > `Pkg.pin()` to force a specific version of the package. Unregistered 
>> > > packages can be installed via the url, so you can try 
>> > > Pkg.clone("git://example.com/path/to/Package.jl.git") 
>> > > SVAKSHA ॥  http://about.me/svaksha  ॥ 
>>
>>

Re: [julia-users] Re: Organizing code around Jump models

2015-06-11 Thread Miles Lubin
There's a lot going on here, but a few observations:

- In the (unreleased) development version of JuMP, we store a dictionary of
variables used in the model (https://github.com/JuliaOpt/JuMP.jl/pull/446),
so you won't need to manually keep track of this e.g., with the "var"
dictionary above.
- That said, there's a bit of a performance penalty for going through
dictionaries. Don't worry about this until it becomes a bottleneck, though.
- Abstract types specify what you can "do with" a certain object, but not
what fields are available in an object. So it's a bit of poor style to
access the fields of an object in generic code (in setLineCapa!, you access
m.mip, but m is not guaranteed to have this field). I admit to doing this
myself sometimes, though. The suggested workaround is to define accessor
methods like mip(m::Model) = m.mip. Then the "delegation/proxy" pattern
above is replaced by defining new methods, e.g., mip(m::Model) = m.uc.mip.
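
A self-contained sketch of that accessor pattern (hypothetical names, Julia 
0.4-era type syntax):

abstract BasicUC
type BaseModel <: BasicUC
    mip   # the JuMP model
end
mip(m::BaseModel) = m.mip      # accessor defined per concrete type
type CCWrapper <: BasicUC
    uc::BaseModel              # wrapped base model
end
mip(m::CCWrapper) = mip(m.uc)  # delegate via a method instead of copying fields
# generic code then calls mip(m) rather than reaching into m.mip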

Does this help? It's always a challenge to design good abstractions, and
we're definitely open to making it easier to build modular models like this
in JuMP. Are there Julia language features which you feel are making this
more difficult than you expect?

Best,
Miles

On Thu, Jun 4, 2015 at 10:28 PM, Francois Gilbert <
francois.gilbert...@gmail.com> wrote:

>
> Hi Miles,
>
> It would be easier for me to give some excerpts of the Julia sources I am
> working with. I think it would suffice to illustrate the  mechanic we are
> trying to implement.
>
> Say we have two abstract types:
>
> abstract BasicUC
> abstract CCModel<:BasicUC
>
> These are declared in separate modules.  BasicUC stands for basic
> unit-commitment and types that derive it have access to functionalities
> such as:
>
> function setLineCapa!(m::BasicUC)
> JuMP.@addConstraint(m.mip, _contraint1[l = keys(m.net.branch), t
> = m.timeIndex],
> m.var[:flow][l,t] <= m.net.branch[l].Fmax)
> ...
> end
>
> CCModel stands for an extended unit-commitment formulation, where robust
> reserve requirement are implemented. Details are irrelevant. A type that
> derives CCModel has access to, say:
>
> function setRobustReserve!(m::CCModel)
> ...
> end
>
> Extending the functionality specific to BasicUC - let a type “Model” such
> that
>
> type Model{T<:BasicUC} <: CCModel
> uc# Hook to BasicUC
> # Proxies to BasicUC:
> demand   # exogenous stuff
> mip  # JuMP model
> . . .
> function Model(net,scenario)
> this=new()
> this.uc=T(net,scenario)
> . . .
> # Delegation to BasicUC
> this.demand=this.uc.demand
> this.mip=this.uc.mip
> this.var=this.uc.var
> . . .
> # Specific to CCModel
> this.contingencies=scenario[:contingencies]
> JuMP.@defVar(this.mip, _post_gen[keys(this.net.gen),
> keys(this.contingencies),
> this.timeIndex])
> this.var[:post_gen]= _post_gen
> . . .
> return this
> end
> end
>
> Now, let a concrete type "BasicModel" be defined in, say module
> "BasicUnitCommitment",  and such that :
>
> BasicModel<:BasicUC
>
> Type Model{T} above can then be instantiated and tailored, with:
>
> m=Model{BasicUnitCommitment.BaseModel}(net,scenario)
> # from the base model (BasicUC)
> setBalanceEq!(m)
> setLineCapa!(m)
> setInitRamping!(m)
> # these are overloaded by CCModel
> setObjective!(m)
> setGenCapa!(m)
> setRamping!(m)
> # these are specific to CCModel
> setPostBalanceEq!(m)
> setRobustReserve!(m)
> . . .
>
> So that's the idea. Perhaps I am overdoing this, but I think having some
> kind of encapsulation is essential for developing larger applications.
> Emulating inheritance, as is done above, is rather straightforward, but
> perhaps a little laborious, and there might be a better way to go about
> it.
>
> François
>
>
>
>
>
> On Wednesday, June 3, 2015 at 16:12:21 UTC-5, Miles Lubin wrote:
>>
>> Hi Francois,
>>
>> Could you give an example of how you might organize this code in a more
>> namespace heavy language like C++ or Java? I think this is quite a general
>> question on how to structure large Julia applications, not really specific
>> to JuMP.
>>
>> Best,
>> Miles
>>
>> On Wednesday, June 3, 2015 at 9:07:03 PM UTC+2, Francois Gilbert wrote:
>>>
>>> Hi all,
>>>
>>> Is  anyone aware of any effort building large OR applications?  I am
>>

[julia-users] Re: Organizing code around Jump models

2015-06-03 Thread Miles Lubin
Hi Francois,

Could you give an example of how you might organize this code in a more 
namespace heavy language like C++ or Java? I think this is quite a general 
question on how to structure large Julia applications, not really specific 
to JuMP.

Best,
Miles

On Wednesday, June 3, 2015 at 9:07:03 PM UTC+2, Francois Gilbert wrote:
>
> Hi all,
>
> Is  anyone aware of any effort building large OR applications?  I am 
> currently working at organizing a relatively large code base making use of 
> the JuMP libraries.  The  context is unit-commitment and transmission 
> planning.  Flexibility and code re-use is among our priorities. 
>
> Here are some of the design choices we made so far:
>
> 1) JuMP models are encapsulated in types that hold: 
> a) references to the variable names, constraints (when multipliers are 
> needed) and indices.  These are otherwise difficult to extract directly 
> from the JuMP object
> b) data structure that are required for model specification (generator 
> characteristics, transmission lines, etc) 
> c) exogenous variable values (demand scenarios, initial conditions, etc)
>
> 2) Most models we use are derived from a basic unit commitment model. The 
> derivation is implemented using 
> composition: the derived type manages a reference to the base type, and 
> provides a number of proxies to some of the base type data structure. 
>
> 3) The objective and constraints  are specified  through function calls, 
> using polymorphism on the basis of an underlying abstract types hierarchy. 
>
> I also toyed for a while with submodules to incorporate functionality that 
> only make sense within the scope of a given base module. But from the 
> language point of view, I did not see much benefit in doing so. That is,  a 
> Julia submodule does not see anything of the base module, unless it is 
> explicitly use/imported, and is thus in the same position as any other 
> module defined outside the scope of the base module.  Are there any Julia 
> packages making use submodules? 
>
> Any suggestions/comments are very welcome,
>
> François
>
>

Re: [julia-users] JuliaCon registrations open

2015-05-31 Thread Miles Lubin
We now have a sheet to help JuliaCon attendees arrange for room sharing:

https://docs.google.com/spreadsheets/d/1361VHhtFM5Cnu-S_cSUKFeUhrH33FepOKLCc0YLtosg/edit?usp=sharing


On Friday, May 29, 2015 at 5:34:52 PM UTC+2, Matt Bauman wrote:
>
> Is anyone interested in splitting a room with two double beds?  (That is, 
> if they are available within the block; I haven't checked).
>
> On Tuesday, May 12, 2015 at 11:48:39 AM UTC-4, Yichao Yu wrote:
>>
>> On Tue, May 12, 2015 at 6:37 AM, Marcus Appelros 
>>  wrote: 
>> > OK, am interested in volunteering for Julia generally, have prepared a 
>> > message that fits better in the pinned post. 
>> > 
>> > Do send word if you find some JuliaCon task that can be handled from a 
>> > distance, currently doing projects in Kenya so flying to Boston with a 
>> weeks 
>> > notice would be hard. 
>>
>> Hopefully this is the right place to ask.. 
>> Does the student ticket include the Workshops ? 
>>
>

[julia-users] Re: sparse Jacobian and/or coloring

2015-05-27 Thread Miles Lubin
Hi Mauro,

Could you open an issue on ReverseDiffSparse to continue this discussion?

Miles

On Wednesday, May 27, 2015 at 3:42:36 PM UTC-4, Mauro wrote:
>
> For an implicit ode/dae solver, I want to provide automatic computation 
> of a sparse Jacobian, either through automatic differentiation or finite 
> differences.  So, for given a function 
>
> f(y, ydot) 
>
> I need to compute the Jacobian in the following form: 
>
> J = df/dy + a* df/dydot 
>
> ReverseDiffSparse seems to do most of this (although Hessians are 
> symmetric) but I struggle to make sense of it.  Given f and a sparsity 
> pattern of J, is there an incantation to get df/dy and df/dydot? 
>
> If I wanted to use the coloring of ReverseDiffSparse (or some other 
> package) for finite differencing, how would I go about it?  What 
> function calls would give me a coloring of the colums? 
>
> Thanks! Mauro 
>


[julia-users] Re: Julia Summer of Code

2015-05-23 Thread Miles Lubin
Sorry about that, here's the real 
link: http://julialang.org/blog/2015/05/jsoc-cfp/

On Sunday, May 24, 2015 at 12:42:27 AM UTC-4, Miles Lubin wrote:
>
> We've put up a blog post (http://localhost:4000/blog/2015/05/jsoc-cfp/) 
> with some more details on Julia Summer of Code. There are still a number of 
> unfilled slots open; submission deadline is June 1st!
>
> On Friday, May 15, 2015 at 1:57:24 PM UTC-4, Viral Shah wrote:
>>
>> Folks,
>>
>> The Moore Foundation is generously funding us to allow for 6-8 Julia 
>> Summer of Code projects. Details will be published soon, but if you are 
>> interested, please mark your calendars and plan your projects.
>>
>> -viral
>>
>

[julia-users] Re: Julia Summer of Code

2015-05-23 Thread Miles Lubin
We've put up a blog post (http://localhost:4000/blog/2015/05/jsoc-cfp/) 
with some more details on Julia Summer of Code. There are still a number of 
unfilled slots open; submission deadline is June 1st!

On Friday, May 15, 2015 at 1:57:24 PM UTC-4, Viral Shah wrote:
>
> Folks,
>
> The Moore Foundation is generously funding us to allow for 6-8 Julia 
> Summer of Code projects. Details will be published soon, but if you are 
> interested, please mark your calendars and plan your projects.
>
> -viral
>


[julia-users] Re: Julia Summer of Code

2015-05-21 Thread Miles Lubin
Agreed. There's a lot to be done with reverse-mode AD, though the full 
scale of the work is beyond that of a summer project. 

FYI, Theodore and I will be working with Jarrett Revels on the project we 
proposed around DualNumbers and extensions. Hoping to share the results at 
the end of the summer!

On Thursday, May 21, 2015 at 4:27:27 PM UTC-4, Zenna Tavares wrote:
>
> Echoing Miles, I vote for working to extend automatic differentiation 
> (especially reverse mode) to all of Julia.
>
> The work done in the current AD packages is great, but Julia has 
> sufficiently powerful introspection and metaprogramming capabilities that 
> we shouldn't, in principle, be limited to small subsets of Julia.
>
> I'm not sure I am the one to work on it though.
>
> Zenna
>
> On Tuesday, May 19, 2015 at 2:52:00 PM UTC-4, Jeff Waller wrote:
>>
>> Is this the part where I say Julia-Spark again?  
>>
>> I think this is pretty doable in time.  It will likely be more or less a 
>> port of PySpark since Julia and Python are similar in capability.  I 
>> think I counted about 6K lines (including comments).
>>
>> According to the PySpark presentation, they relied on a 3rd party to 
>> containerize a Python program for transmission -- I think I'm remembering 
>> this right.  That might be a problem to overcome.
>>
>

[julia-users] Re: Julia Summer of Code

2015-05-15 Thread Miles Lubin
This is both a proposal and a call for interested undergraduate and 
graduate students:

Automatic differentiation is a technique for computing exact numerical 
derivatives of user-provided code, as opposed to using finite difference 
approximations which introduce approximation errors. These techniques have 
a number of applications in statistics, machine learning, optimization, and 
other fields. Julia as a language is particularly suitable for implementing 
automatic differentiation, and the existing capabilities are already beyond 
those of Scipy and MATLAB. We propose a project with the following 
components:

1. Experiment with the new fast tuple and SIMD features of Julia 0.4 to 
develop a blazing fast stack-allocated implementation of DualNumbers with 
multiple epsilon components. Integrate with existing packages like Optim, 
JuMP, NLsolve, etc., and measure the performance gains over existing 
implementations.

2. Combine this work with the ForwardDiff package, which aims to provide a 
unified interface to different techniques for forward-mode automatic 
differentiation, including for higher-order derivatives.

3. Time permitting, take a step towards the reverse mode of automatic 
differentiation. Possible projects include developing a new implementation 
of reverse-mode AD based on the expression-graph format used by JuMP or 
contributing to existing packages such as ReverseDiffSource and 
ReverseDiffOverload.

There are quite a number of interesting projects in this area (some with 
avenues for publication), so we can adjust the work according to the 
student's interests. An ideal student should be interested in experimenting 
with state-of-the-art techniques to make code fast. No mathematical 
background beyond calculus is needed. See juliadiff.org for more info.
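
As a flavor of component 1 above, here is a toy single-epsilon
dual-number sketch (not the DualNumbers.jl implementation; 0.3-era
syntax):

immutable Dual
    val::Float64  # function value f(x)
    der::Float64  # derivative f'(x), the epsilon component
end
import Base: +, -, *
+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
-(a::Dual, b::Dual) = Dual(a.val - b.val, a.der - b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der*b.val + a.val*b.der)

f(x) = x*x*x - x       # example target function
y = f(Dual(2.0, 1.0))  # seed the input derivative with 1
# y.val == 6.0 (f(2)) and y.der == 11.0 (f'(2) = 3*2^2 - 1)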

Co-mentors: Miles Lubin and Theodore Papamarkou

If this sounds cool and interesting to you, do get in touch!


[julia-users] Re: ANN node-julia 1.1.0

2015-05-15 Thread Miles Lubin
Very cool!

On Friday, May 15, 2015 at 4:39:40 PM UTC-4, Jeff Waller wrote:
>
> This version supports 0.4.x using svec as well as previous 0.4.x and of 
> course
> 0.3.x as well. Here's the docs if interested.
>
> It's been a pretty long time + the svec change was breaking, so not all of 
> the
> feature requests made it into this, like the cool one, but that's next 
> for sure.
>
> Various bug fixes.
>
> A couple of new features:
>
> * supporting the result of julia import as an object similar to node.js 
> require Issue 6 
> * supporting struct types with Javascript accessor functions Issue 7 
> 
>
> Allows for pretty cool syntax, I think (cut-n-pasted from the docs):
>
> var julia = require('node-julia');
> var JuMP = julia.import('JuMP');
> var m = julia.eval('m = JuMP.Model()');
> var x = julia.eval('JuMP.@defVar(m,0 <= x <= 2)');
> var y = julia.eval('JuMP.@defVar(m,0 <= y <= 30)');
>
> julia.eval('JuMP.@setObjective(m,Max, 5x + 3*y)');
> julia.eval('JuMP.@addConstraint(m,1x + 5y <= 3.0)');
>
> var status = JuMP.solve(m);
>
> console.log('Objective value: ',JuMP.getObjectiveValue(m));
> console.log('X value: ',JuMP.getValue(x));
> console.log('Y value: ',JuMP.getValue(y));
>
>
>

[julia-users] Re: Error messages related to macros

2015-04-04 Thread Miles Lubin
This doesn't address all of your questions, but it's definitely a 
recognized issue that the error messages from macros leave much to be 
desired. See the discussions at:

https://github.com/JuliaLang/julia/issues/1334
https://github.com/JuliaLang/julia/pull/6910
https://github.com/JuliaLang/julia/issues/8701


On Saturday, April 4, 2015 at 11:33:08 AM UTC-4, Nehal Patel wrote:
>
> Hi -- I've only used Julia for about three months now, but I think it's 
> great;  Thanks to everyone involved in its creation!
>
> I've also accumulated a set of questions during this time that I'm finally 
> getting around to asking... 
>
> So first question:  Are there plans to improve the messages related to 
> errors that occur while using macros?   
>
> Obviously error messages in general can be a difficult engineering 
> problem, but Julia's error messages related to macros are particularly 
> problematic.  At one point I tried to catalog many different error 
> scenarios, which I collected here: Macro Errors. 
> The notebook is perhaps a little too disorganized to follow completely, 
> but essentially it explores many different cases where, for example, the 
> reported line numbers don't seem correlated to anything useful (they are 
> just plain wrong).  
>
> Here is my explanation of why this is important to me.  There are a few 
> packages that attempt to provide powerful macros that are meant to be a) 
> fairly general purpose  and b) typically used to wrap fairly large 
> expressions (as opposed to, say, @show).  Examples are @match from Match.jl 
> and maybe @as from Lazy.jl .  So for instance, when I first came across 
> Match.jl, it seemed perfect for quickly writing simple interpreters or code 
> generators  that used some subset (or all) of Julia's syntax.  But 
> basically this ends up being difficult in practice due to the error 
> messages -- tiny typos generate opaque messages that make progress 
> difficult.  
>
> I've made a (somewhat feeble) attempt to understand what the issues might 
> be.  As far as I can tell I don't believe it's related to optimizations 
> that happen during compilation.  Rather  the Julia macro system (very 
> nicely!) tries to mitigate hygiene issues by "auto-gensym'ing" (not sure 
> what else to call it) unescaped locals in the expression returned by the 
> macro.  This is not an easy analysis and it happens in the bowels of 
> femtolisp, and somewhere in this process, the line number tracking/error 
> context information is getting garbled (I could be totally wrong about 
> this...).  
>
> The combination of theory of programming languages + numerical computation 
> + systems-level core (e.g. libuv, libunwind) trifecta that is Julia makes my 
> heart sing, and making sure Julia remains much "more than a better matlab" 
> is very important to me, so, obviously I'd like to help fix things, but I'm 
> not entirely sure I know how to fix this particular issue.   I'm curious if 
> others (the authors of Match.jl or Lazy.jl?) feel the same as I do about 
> this, and would appreciate any comments on the topic. 
>
> Cheers, nehal
>


[julia-users] Re: Graphs.jl, Page Rank and Weighted Graphs

2015-04-01 Thread Miles Lubin
Hi Rita,

Take a look at the LightGraphs package 
(https://github.com/JuliaGraphs/LightGraphs.jl), it should be a bit easier 
to use and less cranky than Graphs.jl.

Miles

On Wednesday, April 1, 2015 at 4:37:32 PM UTC-4, Rita Maria del Rio Chanona 
wrote:
>
> Hi,
> I was wondering if there is yet an implementation of the PageRank 
> algorithm in the Julia's Graphs package.
> I also want to make an undirected weighted graph but I am having problems 
> with the graph function. I am getting the following error when I want to 
> add an edge:
>
> g = graph([1, 2, 3], [] , is_directed=false )
>
> add_edge!(g, 1, 2)
>
> `add_edge!` has no method matching 
> add_edge!(::GenericGraph{Int64,None,Array{Int64,1},Array{None,1},Array{Array{None,1},1}},
>  ::Int64, ::Int64, ::Edge{Int64})
> while loading In[35], in expression starting on line 1
>
>  in add_edge! at /home/rm/.julia/v0.4/Graphs/src/graph.jl:120
>
>
> I also don't know how to add the weight of the edge.
>
> Thanks in advance.
>
>

Re: [julia-users] how to model semicontinuous variables in Julia

2015-03-30 Thread Miles Lubin
No explicit loop is needed:

@defVar(m, inf[i]<= pt[i=1:hmax] <=sup[i], SemiCont)

The keyword SemiCont means that the variable can be either equal to zero 
or fall within the given bounds.
Be sure to use a solver which supports this class of variables (Gurobi or 
CPLEX). 
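
A minimal sketch of wiring that up with toy data (assuming Gurobi is
installed; inf, sup, and hmax would come from your own data):

using JuMP, Gurobi
hmax = 3
inf = [1.0, 2.0, 3.0]  # lower bounds (toy data)
sup = [4.0, 5.0, 6.0]  # upper bounds (toy data)
m = Model(solver=GurobiSolver())
@defVar(m, inf[i] <= pt[i=1:hmax] <= sup[i], SemiCont)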

On Monday, March 30, 2015 at 8:01:19 PM UTC-4, Michela Di Lullo wrote:
>
>
>
> On Monday, March 30, 2015 at 21:45:13 UTC+2, Miles Lubin wrote:
>>
>> Hi Michela, 
>>
>
>> It looks like you're referring to JuMP, in which case:
>>
>> @defVar(m, a <= x <= b, SemiCont)
>>
>> should do the trick. See also the documentation: 
>> http://jump.readthedocs.org/en/release-0.8/refvariable.html?highlight=semicontinuous
>>
>> We prefer to direct optimization-related questions to julia-opt 
>> <https://groups.google.com/forum/#!forum/julia-opt>, for future 
>> reference.
>>
>> Best,
>> Miles
>>
>> On Monday, March 30, 2015 at 3:36:15 PM UTC-4, Kevin Squire wrote:
>>>
>>> Hi Michela,
>>>
>>> I think you're going to need to provide some additional information.  
>>> Are you modeling this in JuMP by chance?  
>>>
>>> Cheers,
>>>Kevin
>>>
>>> On Mon, Mar 30, 2015 at 12:25 PM, Michela Di Lullo <
>>> michela...@uniroma1.it> wrote:
>>>
>>>> Hello everyone! 
>>>>
>>>> how do I model a variable (array of variables) that can either be zero 
>>>> or in some range not containing zero?  
>>>> e.g. x∈0∪[a,b] where a>0
>>>>
>>>> Thank you in advance for any information!
>>>>
>>>
>>>> Michela
>>>>
>>>
>>>
> Yes, exactly..
> I'm working with JuMP. 
>
> I need to model a matrix of semicontinuos power variables p[i,j] that 
> assume values in *0 union [pt_min,pt_max]*
> In other words it's: 
>
> for i=1:gmax
>
> {@defVar(m, inf[i]<= pt[1:hmax] <=sup[i], SemiCont)}
>
> end 
> but pt can also be equal to 0 !!
> How do I model it? 
>
> Thanks to anyone will answer 
>


Re: [julia-users] how to model semicontinuous variables in Julia

2015-03-30 Thread Miles Lubin
Hi Michela,

It looks like you're referring to JuMP, in which case:

@defVar(m, a <= x <= b, SemiCont)

should do the trick. See also the documentation: 
http://jump.readthedocs.org/en/release-0.8/refvariable.html?highlight=semicontinuous

We prefer to direct optimization-related questions to julia-opt 
, for future reference.

Best,
Miles

On Monday, March 30, 2015 at 3:36:15 PM UTC-4, Kevin Squire wrote:
>
> Hi Michela,
>
> I think you're going to need to provide some additional information.  Are 
> you modeling this in JuMP by chance?  
>
> Cheers,
>Kevin
>
> On Mon, Mar 30, 2015 at 12:25 PM, Michela Di Lullo  > wrote:
>
>> Hello everyone! 
>>
>> how do I model a variable (array of variables) that can either be zero or 
>> in some range not containing zero?  
>> e.g. x∈0∪[a,b] where a>0
>>
>> Thank you in advance for any information!
>>
>> Michela
>>
>
>

Re: [julia-users] How to test argument types available on a Function?

2015-03-28 Thread Miles Lubin
Take a look at applicable().
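
For example, the try/catch chain in the quoted message below could become
explicit checks:

f(x) = x
f(x, y) = x + y

applicable(f, 0)        # true: a one-argument method exists
applicable(f, 0, 0)     # true: a two-argument method exists
applicable(f, 0, 0, 0)  # false: no three-argument method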

On Saturday, March 28, 2015 at 4:30:09 PM UTC-4, Sheehan Olver wrote:
>
> It currently works like this, which does work with f(x)=x, f(x,y) = x+y: 
>
> try 
> f(0) 
> catch 
> try 
> f(0,0) 
> catch 
> f((0,0)) 
> end 
> end 
>
> This code is meant for REPL usage primarily, and so the convenience of 
> typing just Fun(f) is worth having such “questionable” code.   
>
>
> > On 29 Mar 2015, at 6:29 am, Mauro > 
> wrote: 
> > 
> >> In ApproxFun, a user supplied function is approximated.  the 
> >> approximation depends on whether the function is univariate or 
> >> bivariate 
> > 
> > How does it work, if the user defines several methods? 
> > 
> > f(x) = x 
> > f(x,y) = x+y 
> > 
> >>> The method_exists function would probably be a slightly cleaner way 
> >>> to do this. 
> > 
> > method_exists should be good for generic functions but it does not work 
> > with anonymous functions.  I think this gives you the number of 
> > arguments: 
> > 
> > length(Base.uncompressed_ast(( (x,y,z)->1 ).code.def).args[1]) 
> > 
> > something similar should let you figure out whether a one-argument 
> > signature is a tuple.  Not sure though this is the preferred approach. 
>
>

[julia-users] Re: Question: `./` has no method matching ./(::Int64, ::AdditionAtom)

2015-03-22 Thread Miles Lubin
In the language of DCP, the division operator doesn't preserve convexity, 
so I'm assuming that's why it's not implemented.
Looking at the list of available operations 
(http://convexjl.readthedocs.org/en/latest/operations.html), you may want 
to try inv_pos(x-t).
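
For instance, the original problem could be rewritten along these lines (a
sketch, using the inv_pos operation cited above -- check the linked
operations list for the exact name in your version; its domain implicitly
requires x > t):

using Convex
x = Variable(3)
t = rand(3)
s = t + rand(3)
constraints = [x >= 0, x <= s]
p = minimize(sum(inv_pos(x - t)), constraints)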

On Saturday, March 21, 2015 at 9:08:18 PM UTC-4, Dongning Guo wrote:
>
> An error occurred upon my first attempt to use Julia for solving a 
> meaningful convex optimization problem:
>
> using Convex
> x = Variable(3)
> t = rand(3)
> s = t+rand(3)
> constraints = [x>=0, x<=s]
> p = minimize(sum(1./(x-t)), constraints)
>
> `./` has no method matching ./(::Int64, ::AdditionAtom)
>
> I searched for a while for a fix, but to no avail.  Can anyone help?  Thanks 
> in advance!
>


[julia-users] Re: Using JuMP module across multiple processes

2015-03-22 Thread Miles Lubin
Cross-post 
with https://groups.google.com/forum/#!topic/julia-opt/k19ZV2YgPgs

On Sunday, March 22, 2015 at 12:15:37 PM UTC-4, Kirsten Westeinde wrote:
>
> Hello,
>
> I am trying to do some computations using JuMP and Cbc. Everything works 
> as desired when I run it with one process, but the second I try to use 
> multiple processes I am getting this:
>
> WARNING: Module JuMP not defined on process 2
>
> fatal error on 2: ERROR: UndefVarError: JuMP not defined
>
> I have tried: @everywhere using JuMP but I still get the error. I thought 
> it may be a problem with JuliaStudio so I have also tried running it from 
> the Julia command line but am still getting the same error.
>
> Thanks in advance for any guidance,
>
> Kirsten
>


[julia-users] Re: Point test convex hull

2015-03-02 Thread Miles Lubin
There are certainly more efficient specialized solutions for low dimensions 
like 3D, but you can test whether a point is contained in the convex hull 
by solving a linear programming feasibility problem:

x = [1,1,1] # test point
Z = [1 0    # points along columns
     0 1
     0 0]
using JuMP
m = Model()
@defVar(m, 0 <= lambda[j=1:size(Z,2)] <= 1)
@addConstraint(m, inhull[i=1:length(x)], x[i] == sum{Z[i,j]*lambda[j], j = 1:size(Z,2)})
@addConstraint(m, sum(lambda) == 1)
status = solve(m)
# Infeasible means not in the hull, Optimal means in the hull


On Monday, March 2, 2015 at 11:08:35 AM UTC-5, Robert Gates wrote:
>
> Dear Julia Users,
>
> does anyone know of a package capable of computing whether a point lies 
> inside or outside of a (3D) convex hull?
>
> I know that the solution to this problem is rather trivial, however, 
> before I reinvent the wheel, I figured some code might already be out 
> there. I checked the Julia interface to qhull (CHull.jl) but was unable to 
> find a quick fix in the qhull manual. I know that the GeometricalPredicates 
> package provides this functionality for triangles and tetrahedra, however, 
> I was looking for something more general.
>
> Best regards,
> Robert
>


Re: [julia-users] Re: Community Support for Julia on Travis CI

2015-02-14 Thread Miles Lubin
Would that also run coverage by default when running Pkg.test() locally? 
That's probably not the expected behavior given that it will litter 
coverage files all over the place.

On Saturday, February 14, 2015 at 7:22:24 AM UTC-5, Tony Kelman wrote:
>
> Making things too complicated will most likely lead to this continuing to 
> drag on.
>
> There's no harm in running coverage=true by default, right? Assuming it 
> doesn't slow things down appreciably... Judging by the lack of response 
> from anyone who works for Travis when we asked for an opinion on this, I 
> don't think they would want anything specifically hard-coded in the default 
> build scripts that deals with sending coverage results to coveralls. The 
> easy, simple, non-controversial thing to do would be flip the switch to 
> just generate the data by default. Deciding when to send the data, on which 
> combination of OS, Julia version, etc should probably be up to the package 
> author. Turning on coverage=true by default will just allow you to write 
> that logic in an after_success: section of .travis.yml as opposed to having 
> to also overwrite the default script: section.
>
>
> On Saturday, February 14, 2015 at 4:10:54 AM UTC-8, Tim Holy wrote:
>>
>> Best to use coverage=true only on nightly, as it gives more accurate 
>> results 
>> than release. 
>>
>> --Tim 
>>
>> On Saturday, February 14, 2015 03:35:38 AM Tony Kelman wrote: 
>> > Nothing formally supported yet. Have a look at what other packages are 
>> > doing, copy them. 
>> > 
>> > We never made much progress 
>> > on https://github.com/JuliaCI/travis-build/issues/1 
>> > or https://github.com/travis-ci/travis-ci/issues/3092 
>> > 
>> > We can at the very least turn on the coverage=true keyword argument to 
>> > Pkg.test in the default script. That would be simple as a PR to 
>> > travis-build, dunno how long it would take them to deploy it though. 
>> This 
>> > languished because I think people were wondering whether or not to do 
>> > anything fancier, and if so exactly what it should look like. 
>> > 
>> > On Friday, February 13, 2015 at 6:45:21 PM UTC-8, Jiahao Chen wrote: 
>> > > bump 
>> > > 
>> > > So what is the currently recommended Travis CI script that generates 
>> > > coverage data? 
>> > > 
>> > > A bunch of packages have been switched over to the new scripts with 
>> > > language: julia and now have 0 coverage. See, for example: 
>> > > 
>> > > https://github.com/JuliaLang/Color.jl/issues/79 
>> > > 
>> > > On Friday, December 12, 2014 at 6:13:55 PM UTC-5, Luthaf wrote: 
>> > >> Can it be possible to add some kind of support for code coverage ? 
>> > >> 
>> > >> I use this package (https://github.com/IainNZ/Coverage.jl) which 
>> seems 
>> > >> to be a standard one in Julia. 
>>
>>

[julia-users] Re: benchmark with Julia 2x faster than C

2015-02-11 Thread Miles Lubin
Hi Arch, all,

Thanks for looking into this, it's amazing to have experts here who 
understand the depths of compilers. I'm stubbornly having difficulty 
reproducing your timings, even though I see the same assembly generated for 
clang. I've tried on an i5-3320M and on an E5-2650 and on both, Julia is 
faster. How were you measuring the times in nanoseconds?

Miles

On Thursday, January 29, 2015 at 12:20:46 PM UTC-5, Arch Robison wrote:
>
I can't replicate the 2x difference.  The C is faster for me.  But I 
> have gcc 4.8.2, not gcc 4.9.1.  Nonetheless, the experiment points out 
> where Julia is missing a loop optimization that Clang and gcc get.  Here is 
> a summary of combinations that I tried on a i7-4770 @ 3.40 GHz.
>
>- Julia 0.3.5: *70*  nsec.  Inner loop is:
>
> L82:vmulsd  XMM3, XMM1, XMM2
>
> vmulsd  XMM4, XMM1, XMM1
>
> vsubsd  XMM4, XMM4, XMM0
>
> vdivsd  XMM3, XMM4, XMM3
>
> vaddsd  XMM1, XMM1, XMM3
>
> vmulsd  XMM3, XMM1, XMM1
>
> vsubsd  XMM3, XMM3, XMM0
>
> vmovq   RDX, XMM3
>
> and RDX, RAX
>
> vmovq   XMM3, RDX
>
> vucomisd XMM3, QWORD PTR [RCX]
>
> ja  L82
>
>
>- Julia trunk from around Jan 19 + LLVM 3.5: *61 *nsec.  Inner loop is:
>
> L80:vmulsd  xmm4, xmm1, xmm1
>
> vsubsd  xmm4, xmm4, xmm0
>
> vmulsd  xmm5, xmm1, xmm2
>
> vdivsd  xmm4, xmm4, xmm5
>
> vaddsd  xmm1, xmm1, xmm4
>
> vmulsd  xmm4, xmm1, xmm1
>
> vsubsd  xmm4, xmm4, xmm0
>
> vandpd  xmm4, xmm4, xmm3
>
> vucomisd xmm4, qword ptr [rax]
>
> ja  L80
>
>  
>
> The abs is done more efficiently than for Julia 0.3.5 because of PR #8364.
>  LLVM missed a CSE opportunity here because of loop rotation: the last 
> vmulsd of each iteration computes the same thing as the first vmulsd of the 
> next iteration.  
>
>
>-  C code compiled with gcc 4.8.2, using gcc -O2 -std=c99 
>-march=native -mno-fma: *46 *nsec
>
> .L11:
>
> vaddsd  %xmm1, %xmm1, %xmm3
>
> vdivsd  %xmm3, %xmm2, %xmm2
>
> vsubsd  %xmm2, %xmm1, %xmm1
>
> vmulsd  %xmm1, %xmm1, %xmm2
>
> vsubsd  %xmm0, %xmm2, %xmm2
>
> vmovapd %xmm2, %xmm3
>
> vandpd  %xmm5, %xmm3, %xmm3
>
> vucomisd%xmm4, %xmm3
>
> ja  .L11
>
>
> Multiply by 2 (5 clock latency) has been replaced by add-to-self (3 clock 
> latency).  It picked up the CSE opportunity.  Only 1 vmulsd per iteration!
>
>
>- C code compiled with clang 3.5.0, using clang -O2 -march=native: *46 
>*nsec
>
> .LBB1_3:# %while.body
>
> # =>This Inner Loop Header: Depth=1
>
> vmulsd  %xmm3, %xmm1, %xmm5
>
> vdivsd  %xmm5, %xmm2, %xmm2
>
> vaddsd  %xmm2, %xmm1, %xmm1
>
> vmulsd  %xmm1, %xmm1, %xmm2
>
> vsubsd  %xmm0, %xmm2, %xmm2
>
> vandpd  %xmm4, %xmm2, %xmm5
>
> vucomisd.LCPI1_1(%rip), %xmm5
>
> ja  .LBB1_3
>
>
> Clang picks up the CSE opportunity but misses the add-to-self opportunity 
> (xmm3=-2.0).   It's also using LLVM.  
> We should check why Julia is missing the CSE opportunity.  Maybe Clang is 
> running a pass that handles CSE for a rotated loop?  Though looking at the 
> Julia pass list, it looks like CSE runs before loop rotation.  Needs more 
> investigation.
>
>
> - Arch 
>  
>


Re: [julia-users] Announcing Convex.jl - Convex Optimization in Julia

2015-02-04 Thread Miles Lubin
I'm personally very pleased to see this. The JuMP team has worked closely 
with the Convex.jl team to make sure that we share a common infrastructure 
(through MathProgBase) to talk to solvers, and I don't think it's an 
exaggeration to say that this has resulted in an unprecedented level of 
solver interoperability. At this point I'm hard pressed to think of another 
platform besides Julia which lets you easily switch between AMPL/GAMS-style 
modeling (via JuMP) and DCP-style modeling (via Convex.jl) as you might 
experiment with different formulations of a problem.

On Wednesday, February 4, 2015 at 8:57:51 PM UTC-5, Elliot Saba wrote:
>
> This is so so cool, Madeleine.  Thank you for sharing.  I'm a huge fan of 
> DCP, ever since I took a convex optimization course here at the UW (which 
> of course featured cvx and Boyd's book) and seeing this in Julia makes me 
> smile.
> -E
>
> On Wed, Feb 4, 2015 at 5:53 PM, Madeleine Udell  > wrote:
>
>> Convex.jl  is a Julia library for 
>> mathematical programming that makes it easy to formulate and fast to solve 
>> nonlinear convex optimization problems. Convex.jl 
>>  is a member of the JuliaOpt 
>>  umbrella group and can use (nearly) any 
>> solver that complies with the MathProgBase interface, including Mosek 
>> , Gurobi 
>> , ECOS 
>> , SCS 
>> , and GLPK 
>> .
>>
>> Here's a quick example of code that solves a non-negative least-squares 
>> problem.
>>
>> using Convex
>>
>> # Generate random problem data
>> m = 4;  n = 5
>> A = randn(m, n); b = randn(m, 1)
>>
>> # Create a (column vector) variable of size n x 1.
>> x = Variable(n)
>>
>> # The problem is to minimize ||Ax - b||^2 subject to x >= 0
>> problem = minimize(sum_squares(A * x + b), [x >= 0])
>>
>> solve!(problem)
>>
>> We could instead solve a robust approximation problem by replacing 
>> sum_squares(A 
>> * x + b) by sum(norm(A * x + b, 1)) or sum(huber(A * x + b)); it's that 
>> easy.
>>
>> Convex.jl  is different from JuMP 
>>  in that it allows (and 
>> prioritizes) linear algebraic and functional constructions in objectives 
>> and constraints (like max(x,y) < A*z). Under the hood, it converts 
>> problems to a standard conic form, which requires (and certifies) that the 
>> problem is convex, and guarantees global optimality of the resulting 
>> solution.
>>
>
>

[julia-users] Re: Save state in a function

2015-02-03 Thread Miles Lubin
Just to clarify, my suggestion doesn't require any saved state between 
functions. In the context of the discussion, the state can be saved in the 
AbstractNLPEvaluator object.
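
A rough sketch of that pattern (hypothetical type and field names,
0.3-era syntax):

type MyEvaluator  # would subtype MathProgBase.AbstractNLPEvaluator in practice
    last_x::Vector{Float64}  # point of the last evaluation
    cached::Float64          # expensive shared intermediate result
end

function shared_compute!(d::MyEvaluator, x::Vector{Float64})
    if x != d.last_x          # recompute only when the point changes
        d.cached = sum(x.^2)  # stand-in for the costly computation
        copy!(d.last_x, x)
    end
    return d.cached
end

d = MyEvaluator(fill(NaN, 2), 0.0)  # NaN forces the first computation
shared_compute!(d, [1.0, 2.0])      # computes and caches 5.0
shared_compute!(d, [1.0, 2.0])      # reuses the cached value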

On Tuesday, February 3, 2015 at 1:11:36 PM UTC-5, Peter Simon wrote:
>
> I saw a suggestion from Miles Lubin in 
> https://groups.google.com/forum/#!topic/julia-opt/z8Ld4-kdvCI for 
> avoiding redundant calculations that appeared to involve saving state 
> between function calls, and I wonder what is the standard Julian way to do 
> this.  I'm asking here because this seems to be a general Julia question. 
>  In Matlab I would use a "persistent" variable.  What is the corresponding 
> idiom in Julia?  Closures?  Or something else?
>
> Thanks,
> --Peter
>


Re: [julia-users] benchmark with Julia 2x faster than C

2015-01-27 Thread Miles Lubin
That was with gcc 4.9.1 -O2 -march=native, by the way. Here's clang 3.5.0 
-O2 -march=native:

00400570 :
  400570: c5 fb 59 c8   vmulsd %xmm0,%xmm0,%xmm1
  400574: c5 f3 5c d0   vsubsd %xmm0,%xmm1,%xmm2
  400578: c5 e9 54 0d 80 04 00 vandpd 0x480(%rip),%xmm2,%xmm1# 
400a00 <_IO_stdin_used+0x60>
  40057f: 00 
  400580: c5 f9 2e 0d 28 04 00 vucomisd 0x428(%rip),%xmm1# 4009b0 
<_IO_stdin_used+0x10>
  400587: 00 
  400588: 76 3d jbe4005c7 
  40058a: c5 fb 10 1d 26 04 00 vmovsd 0x426(%rip),%xmm3# 4009b8 
<_IO_stdin_used+0x18>
  400591: 00 
  400592: c5 fb 10 25 66 04 00 vmovsd 0x466(%rip),%xmm4# 400a00 
<_IO_stdin_used+0x60>
  400599: 00 
  40059a: c5 f8 28 c8   vmovaps %xmm0,%xmm1
  40059e: 66 90 xchg   %ax,%ax
  4005a0: c5 f3 59 eb   vmulsd %xmm3,%xmm1,%xmm5
  4005a4: c5 eb 5e d5   vdivsd %xmm5,%xmm2,%xmm2
  4005a8: c5 f3 58 ca   vaddsd %xmm2,%xmm1,%xmm1
  4005ac: c5 f3 59 d1   vmulsd %xmm1,%xmm1,%xmm2
  4005b0: c5 eb 5c d0   vsubsd %xmm0,%xmm2,%xmm2
  4005b4: c5 e9 54 ec   vandpd %xmm4,%xmm2,%xmm5
  4005b8: c5 f9 2e 2d f0 03 00 vucomisd 0x3f0(%rip),%xmm5# 4009b0 
<_IO_stdin_used+0x10>
  4005bf: 00 
  4005c0: 77 de ja 4005a0 
  4005c2: c5 f8 28 c1   vmovaps %xmm1,%xmm0
  4005c6: c3   retq   
  4005c7: c3   retq   
  4005c8: 0f 1f 84 00 00 00 00 nopl   0x0(%rax,%rax,1)
  4005cf: 00 


On Tuesday, January 27, 2015 at 7:17:58 PM UTC-5, Miles Lubin wrote:
>
> Here's the Julia assembly:
>
> function squareroot(x)
> it = x
> err = it*it-x # line 5
> while abs(err) > 1e-13
> it = it - (err)/(2it) # line 7
> err = it*it-x
> end
> return it
> end
> Source line: 5
>  push RBP
>  mov RBP, RSP
> Source line: 5
>  vmulsd XMM1, XMM0, XMM0
>  vsubsd XMM2, XMM1, XMM0
> Source line: 6
>  vmovq RCX, XMM2
>  movabs RAX, 9223372036854775807
>  and RCX, RAX
>  vmovq XMM1, RCX
>  movabs RCX, 140067103257920
>  vucomisd XMM1, QWORD PTR [RCX]
>  ja 9
>  vmovaps XMM1, XMM0
>  jmpq 61
>  movabs RDX, 140067103257928
>  vmovsd XMM3, QWORD PTR [RDX]
>  vmovaps XMM1, XMM0
> Source line: 7
>  vmulsd XMM4, XMM1, XMM3
>  vdivsd XMM2, XMM2, XMM4
>  vaddsd XMM1, XMM1, XMM2
> Source line: 8
>  vmulsd XMM2, XMM1, XMM1
>  vsubsd XMM2, XMM2, XMM0
>  vmovq RDX, XMM2
>  and RDX, RAX
>  vmovq XMM4, RDX
>  vucomisd XMM4, QWORD PTR [RCX]
>  ja -43
> Source line: 10
>  vmovaps XMM0, XMM1
>  pop RBP
>  ret
>
> And for C:
> double squareroot(double x)
> {
> double it = x;
> double err = it*it-x;
> while (fabs(err) > 1e-13) {
> it = it - (err)/(2*it);
> err = it*it-x;
> }
> return it;
> }
> 004007d0 :
>   4007d0: c5 fb 59 c8   vmulsd %xmm0,%xmm0,%xmm1
>   4007d4: c5 f9 28 d0   vmovapd %xmm0,%xmm2
>   4007d8: c5 fb 10 2d e0 01 00 vmovsd 0x1e0(%rip),%xmm5# 4009c0 
> <_IO_stdin_used+0x70>
>   4007df: 00 
>   4007e0: c5 fb 10 25 a8 01 00 vmovsd 0x1a8(%rip),%xmm4# 400990 
> <_IO_stdin_used+0x40>
>   4007e7: 00 
>   4007e8: c5 f3 5c c8   vsubsd %xmm0,%xmm1,%xmm1
>   4007ec: c5 f9 28 d9   vmovapd %xmm1,%xmm3
>   4007f0: c5 e1 54 dd   vandpd %xmm5,%xmm3,%xmm3
>   4007f4: c5 f9 2e dc   vucomisd %xmm4,%xmm3
>   4007f8: 76 28 jbe400822 
>   4007fa: 66 0f 1f 44 00 00 nopw   0x0(%rax,%rax,1)
>   400800: c5 eb 58 da   vaddsd %xmm2,%xmm2,%xmm3
>   400804: c5 f3 5e cb   vdivsd %xmm3,%xmm1,%xmm1
>   400808: c5 eb 5c d1   vsubsd %xmm1,%xmm2,%xmm2
>   40080c: c5 eb 59 ca   vmulsd %xmm2,%xmm2,%xmm1
>   400810: c5 f3 5c c8   vsubsd %xmm0,%xmm1,%xmm1
>   400814: c5 f9 28 d9   vmovapd %xmm1,%xmm3
>   400818: c5 e1 54 dd   vandpd %xmm5,%xmm3,%xmm3
>   40081c: c5 f9 2e dc   vucomisd %xmm4,%xmm3
>   400820: 77 de ja 400800 
>   400822: c5 f9 28 c2   vmovapd %xmm2,%xmm0
>   400826: c3   retq   
>   400827: 66 0f 1f 84 00 00 00 nopw   0x0(%rax,%rax,1)
>   40082e: 00 00
>
> I'm definitely not trying to making the fastest possible sqrt 
> implementation. This is just for benchmarking purposes.
>
> Thanks,
> Miles
>
> On Tuesday, January 27, 2015 at 7:01:29 PM UTC-5, Erik Schnetter wrote:
>>
>> I would begin by looking at the generated assembler code. You would use 
>> "objdump -d" for C, and the "code_native" function for Julia. Can you post 
>> the results? 
>>
>> The discussion below is beside your

Re: [julia-users] benchmark with Julia 2x faster than C

2015-01-27 Thread Miles Lubin
Here's the Julia assembly:

function squareroot(x)
it = x
err = it*it-x # line 5
while abs(err) > 1e-13
it = it - (err)/(2it) # line 7
err = it*it-x
end
return it
end
Source line: 5
 push RBP
 mov RBP, RSP
Source line: 5
 vmulsd XMM1, XMM0, XMM0
 vsubsd XMM2, XMM1, XMM0
Source line: 6
 vmovq RCX, XMM2
 movabs RAX, 9223372036854775807
 and RCX, RAX
 vmovq XMM1, RCX
 movabs RCX, 140067103257920
 vucomisd XMM1, QWORD PTR [RCX]
 ja 9
 vmovaps XMM1, XMM0
 jmpq 61
 movabs RDX, 140067103257928
 vmovsd XMM3, QWORD PTR [RDX]
 vmovaps XMM1, XMM0
Source line: 7
 vmulsd XMM4, XMM1, XMM3
 vdivsd XMM2, XMM2, XMM4
 vaddsd XMM1, XMM1, XMM2
Source line: 8
 vmulsd XMM2, XMM1, XMM1
 vsubsd XMM2, XMM2, XMM0
 vmovq RDX, XMM2
 and RDX, RAX
 vmovq XMM4, RDX
 vucomisd XMM4, QWORD PTR [RCX]
 ja -43
Source line: 10
 vmovaps XMM0, XMM1
 pop RBP
 ret

And for C:
double squareroot(double x)
{
double it = x;
double err = it*it-x;
while (fabs(err) > 1e-13) {
it = it - (err)/(2*it);
err = it*it-x;
}
return it;
}
004007d0 :
  4007d0: c5 fb 59 c8   vmulsd %xmm0,%xmm0,%xmm1
  4007d4: c5 f9 28 d0   vmovapd %xmm0,%xmm2
  4007d8: c5 fb 10 2d e0 01 00 vmovsd 0x1e0(%rip),%xmm5# 4009c0 
<_IO_stdin_used+0x70>
  4007df: 00 
  4007e0: c5 fb 10 25 a8 01 00 vmovsd 0x1a8(%rip),%xmm4# 400990 
<_IO_stdin_used+0x40>
  4007e7: 00 
  4007e8: c5 f3 5c c8   vsubsd %xmm0,%xmm1,%xmm1
  4007ec: c5 f9 28 d9   vmovapd %xmm1,%xmm3
  4007f0: c5 e1 54 dd   vandpd %xmm5,%xmm3,%xmm3
  4007f4: c5 f9 2e dc   vucomisd %xmm4,%xmm3
  4007f8: 76 28 jbe400822 
  4007fa: 66 0f 1f 44 00 00 nopw   0x0(%rax,%rax,1)
  400800: c5 eb 58 da   vaddsd %xmm2,%xmm2,%xmm3
  400804: c5 f3 5e cb   vdivsd %xmm3,%xmm1,%xmm1
  400808: c5 eb 5c d1   vsubsd %xmm1,%xmm2,%xmm2
  40080c: c5 eb 59 ca   vmulsd %xmm2,%xmm2,%xmm1
  400810: c5 f3 5c c8   vsubsd %xmm0,%xmm1,%xmm1
  400814: c5 f9 28 d9   vmovapd %xmm1,%xmm3
  400818: c5 e1 54 dd   vandpd %xmm5,%xmm3,%xmm3
  40081c: c5 f9 2e dc   vucomisd %xmm4,%xmm3
  400820: 77 de ja 400800 
  400822: c5 f9 28 c2   vmovapd %xmm2,%xmm0
  400826: c3   retq   
  400827: 66 0f 1f 84 00 00 00 nopw   0x0(%rax,%rax,1)
  40082e: 00 00

I'm definitely not trying to making the fastest possible sqrt 
implementation. This is just for benchmarking purposes.

Thanks,
Miles

On Tuesday, January 27, 2015 at 7:01:29 PM UTC-5, Erik Schnetter wrote:
>
> I would begin by looking at the generated assembler code. You would use 
> "objdump -d" for C, and the "code_native" function for Julia. Can you post 
> the results? 
>
> The discussion below is beside your points, but nevertheless: 
>
> Your sqrt implementation is very slow for two reasons: 
>
> (1) You are using a while loop to check for termination. If you use 
> exponent manipulation, then your starting guess can be within sqrt(2) of 
> the result, and then you can determine the necessary number of iterations 
> ahead of time. You probably need 5 or 6 iterations for the accuracy you 
> chose. 
>
> (2) You are using a division in your algorithm, where divisions are 
> expensive. If you calculated 1/sqrt(x) instead of sqrt(x), then you don't 
> need a division. Later you can calculate sqrt(x) = x * (1/sqrt(x)). 
>
> A good sqrt routine should take less than 10^-8 seconds. 
>
> -erik 
>


Re: [julia-users] benchmark with Julia 2x faster than C

2015-01-27 Thread Miles Lubin
I tried replacing fabs with a couple of different manual implementations, to no
effect.

On Tue, Jan 27, 2015 at 6:50 PM, Stefan Karpinski <
stefan.karpin...@gmail.com> wrote:

> Is it possible that C can't/isn't inlining fabs?
>
>
> > On Jan 27, 2015, at 5:39 PM, Miles Lubin  wrote:
> >
> > I'm working on a microbenchmark and found a surprising result (maybe
> not?) that Julia is about 2x faster than the same algorithm hand-coded in
> C. I wanted to check if I'm doing anything obviously wrong here before
> reporting these results. The timings reproduce across different systems and
> compiler options (clang/gcc -O2/-O3).
> >
> > The test is just to compute square root using newton's method. The
> relevant code is in this gist:
> https://gist.github.com/mlubin/4994c65c7a2fa90a3c7e.
> >
> > On Julia 0.3.5, each function call takes 8.85*10^-8 seconds. The best
> timing I've seen from C is 1.61*10^-7 using gcc -O2 -march=native.
> >
> > I did my best to check for common mistakes:
> > - Julia and C use the exact same timing routine with 10,000 repetitions
> > - Both give the correct answer, and the important code isn't being
> optimized away.
> >
> > Any ideas as to why Julia is faster on this very simple code? I know
> that performance comparisons with runtimes on the order of nanoseconds are
> probably not too meaningful, but people still like absolute numbers, and
> it's a bit surprising that I can't match the performance of Julia from C.
> >
> > Thanks,
> > Miles
>


Re: [julia-users] benchmark with Julia 2x faster than C

2015-01-27 Thread Miles Lubin
I just played around with using time_ns() from the Julia and C code and 
didn't observe significant differences in the timing. I don't think the 
issue is just timing accuracy, because the results are very consistent 
without too much variation between runs.

On Tuesday, January 27, 2015 at 5:46:46 PM UTC-5, Jacob Quinn wrote:
>
> I'm not sure about using the `time()` function in the julia code. I 
> believe it's resolution is system dependent, but microseconds at best. I 
> think it's more common to use the `tic()` and `toq()` that use `time_ns()` 
> internally for nanosecond resolution. Though you probably did that for 
> easier C compatibility. Not sure though.
>
> On Tue, Jan 27, 2015 at 3:39 PM, Miles Lubin  > wrote:
>
>> I'm working on a microbenchmark and found a surprising result (maybe 
>> not?) that Julia is about 2x faster than the same algorithm hand-coded in 
>> C. I wanted to check if I'm doing anything obviously wrong here before 
>> reporting these results. The timings reproduce across different systems and 
>> compiler options (clang/gcc -O2/-O3).
>>
>> The test is just to compute square root using newton's method. The 
>> relevant code is in this gist: 
>> https://gist.github.com/mlubin/4994c65c7a2fa90a3c7e.
>>
>> On Julia 0.3.5, each function call takes 8.85*10^-8 seconds. The best 
>> timing I've seen from C is 1.61*10^-7 using gcc -O2 -march=native.
>>
>> I did my best to check for common mistakes:
>> - Julia and C use the exact same timing routine with 10,000 repetitions
>> - Both give the correct answer, and the important code isn't being 
>> optimized away.
>>
>> Any ideas as to why Julia is faster on this very simple code? I know that 
>> performance comparisons with runtimes on the order of nanoseconds are 
>> probably not too meaningful, but people still like absolute numbers, and 
>> it's a bit surprising that I can't match the performance of Julia from C.
>>
>> Thanks,
>> Miles
>>
>
>

[julia-users] benchmark with Julia 2x faster than C

2015-01-27 Thread Miles Lubin
I'm working on a microbenchmark and found a surprising result (maybe not?) 
that Julia is about 2x faster than the same algorithm hand-coded in C. I 
wanted to check if I'm doing anything obviously wrong here before reporting 
these results. The timings reproduce across different systems and compiler 
options (clang/gcc -O2/-O3).

The test is just to compute square root using newton's method. The relevant 
code is in this gist: https://gist.github.com/mlubin/4994c65c7a2fa90a3c7e.

On Julia 0.3.5, each function call takes 8.85*10^-8 seconds. The best 
timing I've seen from C is 1.61*10^-7 using gcc -O2 -march=native.

I did my best to check for common mistakes:
- Julia and C use the exact same timing routine with 10,000 repetitions
- Both give the correct answer, and the important code isn't being 
optimized away.

Any ideas as to why Julia is faster on this very simple code? I know that 
performance comparisons with runtimes on the order of nanoseconds are 
probably not too meaningful, but people still like absolute numbers, and 
it's a bit surprising that I can't match the performance of Julia from C.

Thanks,
Miles


Re: [julia-users] Re: flip sparse matrices slow

2015-01-07 Thread Miles Lubin
It may be more efficient (given the current implementation) to filter the 
I, J, and V arrays directly and then call sparse() again.
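
For example, extracting the submatrix A[rlo:rhi, clo:chi] by filtering the
coordinate arrays might look like this (a sketch with toy data, 0.3-era
scalar/vector arithmetic):

A = sprand(100, 100, 0.05)
rlo, rhi, clo, chi = 10, 20, 30, 40
I, J, V = findnz(A)
keep = (I .>= rlo) & (I .<= rhi) & (J .>= clo) & (J .<= chi)
B = sparse(I[keep] - rlo + 1, J[keep] - clo + 1, V[keep],
           rhi - rlo + 1, chi - clo + 1)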

On Wednesday, January 7, 2015 12:48:31 PM UTC-7, Christoph Ortner wrote:
>
>
> Thanks for your replies - I will write a short piece of code that shows 
> the performance problem, and then submit an issue.
> Christoph
>
> P.S: I do use sparse(I,J,V) to construct the matrix, but I am then unable 
> to extract a sub matrix because the slicing is so slow.
>


[julia-users] Re: pmap "load balancing"?

2015-01-01 Thread Miles Lubin
pmap already does dynamic scheduling, tasks get assigned dynamically as 
workers become available. This strategy can only do so much if the tasks 
are extremely imbalanced.
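
One partial mitigation is to submit the biggest tasks first, so that the
stragglers don't land at the end of the queue. A sketch, with a stand-in
task and a hypothetical per-task cost estimate you would supply:

@everywhere do_work(i) = sum(rand(i))  # stand-in for the real task
items = collect(1:625)                 # e.g., the 25^2 work items
est_cost(i) = i                        # problem-specific cost estimate
order = sortperm(map(est_cost, items), rev=true)
results = pmap(do_work, items[order])  # largest tasks get scheduled first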

On Thursday, January 1, 2015 9:10:51 PM UTC-5, Gabriel Mihalache wrote:
>
> Hello, all!
>
> I have a pmap over a "large" array, 25^2, and each call does quite a bit 
> of work, but some end up having to do a lot more than others. Using htop I 
> notice that most CPUs on most nodes are close to idle (I'm running this on 
> a SLURM cluster).
>
> Is there a pmap (drop-in?) replacement which has a different scheduling 
> strategy? Something comparable with OpenMP's dynamic scheduling? Am I 
> misunderstanding the entire philosophy of pmap?
>
> Thank you!
> Gabriel
>


[julia-users] Re: Why doesn't Dict() accept an output from zip?

2014-12-25 Thread Miles Lubin
This is a perfectly reasonable thing to want, and in fact has already been 
implemented in Julia 0.4-dev:

julia> Dict(zip("abc","123"))
Dict{Char,Char} with 3 entries:
  'b' => '2'
  'c' => '3'
  'a' => '1'



On Friday, December 26, 2014 12:15:07 AM UTC-5, Ronald L. Rivest wrote:
>
> Since the Dict constructor accepts an array of pairs, I expected
> it to also accept the output from zip.  So
>  Dict(zip("abc","123"))
> should produce
>  [ 'a'=>'1', 'b'=>'2', 'c'=>'3' ]
> but instead it breaks:
>  ERROR: `Dict{K,V}` has no method matching 
> Dict{K,V}(::Zip2{ASCIIString,ASCIIString})
>
> The fix I could figure out seems to be to use collect
>  Dict(collect(zip("abc","123")))
> but this feels rather inelegant...
>
> Have I missed something, or perhaps this is a feature that should be added?
>
> Cheers,
> Ron
>
>
>  

[julia-users] Re: Gadfly in userimg.jl

2014-12-16 Thread Miles Lubin
I've been able to successfully precompile JuMP. What issues were you 
running into?

On Friday, December 12, 2014 10:16:27 PM UTC-5, Matt Bauman wrote:
>
> I believe the trouble you're running into is that not all packages are 
> precompile-friendly.  While Gadfly by itself is, it dynamically tries to 
> load other packages if they exist.  I've run into this trouble when I have 
> some JuMP packages installed.  In my case, I fixed it by changing the 
> following in Compose.jl:
>
> diff --git a/src/table.jl b/src/table.jl
> index d66d88b..bbbfa31 100644
> --- a/src/table.jl
> +++ b/src/table.jl
> @@ -369,7 +369,7 @@ function realize_brute_force(tbl::Table, 
> drawctx::ParentDrawContext)
>  end
>
>
> -if Pkg.installed("JuMP") != nothing &&
> +if false && Pkg.installed("JuMP") != nothing &&
>  (Pkg.installed("GLPKMathProgInterface") != nothing ||
>   Pkg.installed("Cbc") != nothing)
>  include("table-jump.jl")
>
> On Friday, December 12, 2014 9:47:56 PM UTC-5, Joshua Job wrote:
>>
>> Hello all,
>>
>> Is there anyone else who has attempted to precompile Gadfly using 
>> userimg.jl? It keeps claiming that the STD_ variables 
>> (STDIN,STDOUT,STDERR) are not defined, and this seems to be a pretty 
>> common/general issue (it also comes up when trying to precompile HDF5, 
>> etc.). I've pulled the load time on Gadfly down to about 15 seconds by 
>> precompiling DataFrames, but it'd be really nice if I could just put "using 
>> My,Most,Used,Packages" unto my juliarc.jl file and not have a solid 30 
>> second startup time for a new instance of Julia.
>>
>> I'm wondering if a) is this an issue only I have and b) if not, is there 
>> some way to make it so the STD_ variables are all defined prior to 
>> running/compiling userimg.jl?
>>
>

[julia-users] Re: how to test NaN in an array?

2014-12-15 Thread Miles Lubin
any(isnan,arr) is probably the most compact syntax.
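
For example:

arrs = Any[[1.0, 2.0], [NaN, 3.0]]
any(isnan, arrs[2])                        # true
clean = filter(a -> !any(isnan, a), arrs)  # keeps only the NaN-free arrays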

On Monday, December 15, 2014 3:33:55 PM UTC-5, Evan Pu wrote:
>
> 1 in [1,2,3] # returns true
>
> NaN in [NaN, 1.0, 2.0] # returns false
>
> how do I test if a Float64 NaN is present in an array? I'm doing some 
> numerical computation that can produce NaN errors, and I want to drop the 
> arrays that have NaN.
>


[julia-users] Re: C/MPI/SLURM => Julia/?/SLURM?

2014-11-23 Thread Miles Lubin
Two comments: 

1) It's pretty easy to use MPI from Julia. For some use cases it may make 
more sense than Julia's built-in approach to parallelism, especially if 
you're already comfortable with MPI. Though if you're well served by pmap, 
that's much simpler.

2) It looks like the subproblem you're solving is a 1d optimization. I 
wouldn't be surprised if you could get a significant speedup by tweaking 
the optimization algorithm (e.g., using a derivative-based method) and 
making the function evaluations more efficient (e.g., avoiding temporary 
allocations).
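
On point 2, a generic sketch of a derivative-based NLopt solve (toy
objective, not your model; the :LD_* algorithms expect you to fill in the
gradient yourself):

using NLopt
opt = Opt(:LD_LBFGS, 1)
function myobj(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = -2.0 * (x[1] - 1.0)  # hand-coded gradient
    end
    return -(x[1] - 1.0)^2             # toy concave objective
end
max_objective!(opt, myobj)
(maxval, maxx, ret) = optimize(opt, [0.5])  # maxx should be close to [1.0]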

On Sunday, November 23, 2014 11:02:20 PM UTC-5, Gabriel Mihalache wrote:
>
> I made some progress on getting the code to run correctly in parallel 
> while using all the cores of one machine, by using pmap and running some 
> prep code on all the workers:
> The next steps are to get this running on multiple machines (using 
> addprocs_ssh?) and then benchmarking. Here's the state of my solution so 
> far, for anyone interested/stuck at this too:
>
> The SLURM batch file:
>
> #!/bin/bash
> #SBATCH -J sgmVfi
>
> #SBATCH -e elog_sgm
> #SBATCH -o olog_sgm
>
> #SBATCH -t 0:10:00
>
> #SBATCH --nodes=1
> #SBATCH --ntasks-per-node=1
> #SBATCH --cpus-per-task=24
>
> #SBATCH -p standard
> #SBATCH --mail-type=fail
>
> srun julia -q -p 24 -L loader.jl sgm.jl
>
>
> the "loader" file which defines constants for all workers:
>
> using MAT;
> using Grid;
> using NLopt;
>
> const alpha = 0.25;
> const uBeta = 0.90;
> const delta = 1.00;
>
> const kGridMin = 0.01;
> const kGridMax = 0.50;
> const kGridSize = 100;
> const kGrid = linrange(kGridMin, kGridMax, kGridSize);
>
> const vErrTol = 1e-8;
> const maxIter = 200;
>
> file = matopen("shock.mat")
> yData = read(file, "y");
> yGridSize = size(yData, 1);
> const y = linrange(yData[1], yData[end], yGridSize);
> const yPi = read(file, "yPi")
> close(file)
>
> gdp(yIx, kIx) = exp(y[yIx]) * kGrid[kIx].^alpha;
>
>
> and finally, the code running on the master worker/driver:
>
> function sgmVfi()
> V0 = zeros(yGridSize, kGridSize);
> V1 = zeros(size(V0));
> kPr = zeros(size(V0));
>
> iter = 0;
> err = 1.0;
> while iter < maxIter && err > vErrTol
> tic();
> iter += 1;
> println("Iteration ", iter);
> V0 = copy(V1);
> V0f = CoordInterpGrid( (y, kGrid), V0, BCnearest, InterpQuadratic);
>
> function workAt(iix)
> yIx = iix[1];
> kIx = iix[2];
> opt = Opt(:LN_BOBYQA, 1)
> lower_bounds!(opt, kGridMin)
> upper_bounds!(opt, min(kGridMax, gdp(yIx, kIx)  + (1-delta) * kGrid[kIx] - 
> 1e-5))
>
> myObj(kk) = log(gdp(yIx, kIx) - kk + (1-delta) * kGrid[kIx]) + uBeta * 
> sum( [ yPi[yIx,yPrIx] * V0f[y[yPrIx], kk] for yPrIx = 1:yGridSize ]);
>
> max_objective!(opt, (x::Vector, grad::Vector) -> myObj(x[1]) );
> (maxVal, maxK, ret) = optimize!(opt, [kGridMin] );
> return maxVal, maxK[1];
> end
>
> tmp = pmap(workAt, [[yIx, kIx] for yIx = 1:yGridSize, kIx = 1:kGridSize]);
> tmp = reshape(tmp, size(V1)); 
> for yIx = 1:yGridSize, kIx = 1:kGridSize
> V1[yIx, kIx] = tmp[yIx, kIx][1];
> kPr[yIx, kIx] = tmp[yIx, kIx][2];
> end
>
> err = maximum(abs(V1[:] - V0[:]));
> toc()
> println("Err = ", err)
> end
>
> return kGrid, V1, kPr;
> end
>
> kGrid, V, kPr = sgmVfi()
>
> file = matopen("results.mat", "w")
> write(file, "kPr", kPr)
> write(file, "V", V)
> write(file, "y", [y])
> write(file, "yPi", yPi)
> write(file, "kGrid", [kGrid])
> close(file)
>
>
>
>
>

[julia-users] Re: LIst Ipopt options when using JuMP

2014-11-23 Thread Miles Lubin
On Sunday, November 23, 2014 3:37:52 PM UTC-5, Pileas wrote:
>
>
> One last question that I have is whether we can use some kind of delimiter 
> that can give us the result in a csv file, but not in one column, but 
> rather in deifferent ones.
>

 It's better to just write out the file in the format you want by querying 
the result with getValue(). You could use the built-in writecsv().
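
For example, something along these lines (assuming a variable x indexed
1:n in your model):

vals = [getValue(x[i]) for i in 1:n]  # query the solution values
writecsv("results.csv", vals')        # transpose: one row, n columns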


[julia-users] Re: Bug in logdet?

2014-10-18 Thread Miles Lubin
In 

srand(1);a=rand(2,2);a=a*a';cholfact!(a);logdet(a)

The issue is that cholfact!(a) destroys the data in a, you're not allowed 
to re-use it afterwords. You still need to save the result of cholfact! and 
use that object inside of logdet(), e.g.,

srand(1);a=rand(2,2);a=a*a';afact = cholfact!(a);logdet(afact)




On Saturday, October 18, 2014 7:47:59 PM UTC-4, Adam Check wrote:
>
> Hi,
>
> I am need to compute the cholesky decomposition of a matrix, and then use 
> the log determinant of the decomposition. I get different answers depending 
> on if I use cholfact vs. cholfact!
>
> The code below produces the same number for the logdet (-4.479)
>
> *srand(1);a=rand(2,2);a=a*a';logdet(cholfact(a))*
>
> *srand(1);a=rand(2,2);a=a*a';logdet(cholfact!(a))*
>
>
> However, the following code results in different values being reported:
>
> *srand(1);a=rand(2,2);a=a*a';a=cholfact(a);logdet(a)*
>
> *srand(1);a=rand(2,2);a=a*a';cholfact!(a);logdet(a)*
> with the second code reporting the (incorrect) value of -2.426, even 
> though the Cholfact object "a" appears identical in each case.
>
> This error happens consistently when implementing cholfact!() as in the 
> last line of code, and can result in an error when taking the logdet(): 
> *ERROR: 
> DomainError: determinant is negative*, even when no error results from 
> using the cholfact() function.
>
> I am currently using julia version 0.3.1, but I believe the problem has 
> been present in older versions as well.
>
> - Adam
>


[julia-users] Re: ANN: FiniteDifferenceDerivatives.jl and questions about performance improvement

2014-10-16 Thread Miles Lubin
Hi Paweł,

Thanks for the clarification, I agree that this code is likely too 
specialized for Calculus.jl. I'm not very familiar with PDE solution 
techniques, but you may want to take a look at the various JuliaDiff 
(http://www.juliadiff.org/) projects. It's possible that by using one of 
the truncated taylor series approaches, you could compute exact derivatives 
without the need for finite differences. I'm also happy to add a link to 
FDD.jl when it's published in METADATA.
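
As a hedged illustration of that idea (assuming the DualNumbers.jl 
accessor names of the time; dual() builds a + b*epsilon and epsilon() 
extracts the derivative part):

using DualNumbers
f(t) = sin(t)^2 + 3t
d = f(dual(1.5, 1.0))   # seed the infinitesimal part with 1
real(d)                 # f(1.5)
epsilon(d)              # exact f'(1.5); no step size and no mesh required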

Miles

On Thursday, October 16, 2014 7:41:34 AM UTC-4, Paweł Biernat wrote:
>
> Hi Miles,
>
> FiniteDifferenceDerivatives.jl contains methods that generalize those used 
> in Calculus.jl (general k-point methods vs fixed 2-point methods).  
>
> But at the same time I don't think that Calculus.jl would benefit from 
> general k-point methods,  because the use cases of both packages differ 
> greatly.   FDD.jl is useful mostly when you have to work with a function 
> given at fixed mesh points (for example when solving PDEs with method of 
> lines), then the only way to increase the precision of the derivative is to 
> increase the order of method.  On the other hand, in Calculus.jl you can 
> evaluate the function to be differentiated wherever you want so there is no 
> fixed mesh and you can just use a lower order method with a smaller step 
> size to compensate for the low order.
>
> Also Calculus.jl uses (and implements) d-dimensional finite difference 
> schemes necessary to compute gradients, while in FDD.jl I only implemented 
> a 1-dimensional case.  So far I have not looked at general d-dimensional, 
> n-point, k-derivative methods and I don't even know whether they exist.
>
> In summary, it would be possible to use FDD.jl in Calculus.jl, although it 
> would probably be overkill, and with the current implementation of FDD.jl 
> it would decrease performance.
>
> Best,
> Paweł
>
> W dniu środa, 15 października 2014 22:06:40 UTC+2 użytkownik Miles Lubin 
> napisał:
>>
>> Hi Paweł,
>>
>> How does your approach compare with the implementation of finite 
>> differences in Calculus.jl? It would be great if you could contribute any 
>> new techniques there.
>>
>> Miles
>>
>> On Wednesday, October 15, 2014 6:48:29 AM UTC-4, Paweł Biernat wrote:
>>>
>>> Hi,
>>> I have been experimenting with various finite difference algorithms for 
>>> computing derivatives and I found and implemented a general algorithm for 
>>> computing arbitrarily high derivatives to arbitrarily high order [1].  So 
>>> for anyone interested in playing with it I wrapped it up in a small package.
>>>
>>> I would also like to ask for your advice on how to improve the 
>>> performance of the function fdd: so far, for a first derivative on a 
>>> three-point stencil (on a non-uniformly spaced mesh), it takes ten times 
>>> as long to compute the derivative as a hand-crafted implementation of the 
>>> same function.  For a comparison you can call
>>>
>>> using FiniteDifferenceDerivatives
>>> x=linspace(0,1,1000)
>>> f=sin(x)
>>> df=copy(x)
>>> @elapsed for i = 1:1000; fdd(1,3,x,f); end  # general function => ~0.5 s
>>> @elapsed for i = 1:1000; fdd13(x,f); end  # hand crafted => ~0.05 s
>>>
>>> Obviously, most of the time is spent inside fddcoeffs.  One way I could 
>>> increase the performance is to cache the results of fddcoeffs (its result 
>>> depends only on the mesh).  In my use case however, the mesh changes often 
>>> and fddcoeffs has to be called every time fdd is called.  I also tried to 
>>> replace x[i1:i1+order-1] with view(x,i1:i1+order-1) in the call to 
>>> fddcoeffs (with view from ArrayViews), but it didn't result in much of an 
>>> improvement.  I also played with swapping the indexing order of c in 
>>> fddcoeffs, but to no avail (maybe because the typical size of the matrix c 
>>> is small (3x3 or similar)).  Maybe I should try to inline the fddcoeffs 
>>> into the main body of fdd (I did this in my earlier version, and it 
>>> actually resulted in an improvement, but it looked kind of less elegant)?  
>>> Any advice (or pull requests) would be welcome.
>>>
>>> [1] https://github.com/pwl/FiniteDifferenceDerivatives.jl
>>>
>>> Cheers,
>>> Paweł
>>>
>>

[julia-users] Re: ANN: FiniteDifferenceDerivatives.jl and questions about performance improvement

2014-10-15 Thread Miles Lubin
Hi Paweł,

How does your approach compare with the implementation of finite 
differences in Calculus.jl? It would be great if you could contribute any 
new techniques there.

Miles

On Wednesday, October 15, 2014 6:48:29 AM UTC-4, Paweł Biernat wrote:
>
> Hi,
> I have been experimenting with various finite difference algorithms for 
> computing derivatives and I found and implemented a general algorithm for 
> computing arbitrarily high derivatives to arbitrarily high order [1].  So 
> for anyone interested in playing with it I wrapped it up in a small package.
>
> I would also like to ask for your advice on how to improve the 
> performance of the function fdd: so far, for a first derivative on a 
> three-point stencil (on a non-uniformly spaced mesh), it takes ten times 
> as long to compute the derivative as a hand-crafted implementation of the 
> same function.  For a comparison you can call
>
> using FiniteDifferenceDerivatives
> x=linspace(0,1,1000)
> f=sin(x)
> df=copy(x)
> @elapsed for i = 1:1000; fdd(1,3,x,f); end  # general function => ~0.5 s
> @elapsed for i = 1:1000; fdd13(x,f); end  # hand crafted => ~0.05 s
>
> Obviously, most of the time is spent inside fddcoeffs.  One way I could 
> increase the performance is to cache the results of fddcoeffs (its result 
> depends only on the mesh).  In my use case however, the mesh changes often 
> and fddcoeffs has to be called every time fdd is called.  I also tried to 
> replace x[i1:i1+order-1] with view(x,i1:i1+order-1) in the call to 
> fddcoeffs (with view from ArrayViews), but it didn't result in much of an 
> improvement.  I also played with swapping the indexing order of c in 
> fddcoeffs, but to no avail (maybe because the typical size of the matrix c 
> is small (3x3 or similar)).  Maybe I should try to inline the fddcoeffs 
> into the main body of fdd (I did this in my earlier version, and it 
> actually resulted in an improvement, but it looked kind of less elegant)?  
> Any advice (or pull requests) would be welcome.
>
> [1] https://github.com/pwl/FiniteDifferenceDerivatives.jl
>
> Cheers,
> Paweł
>


[julia-users] Re: Call for participation: High Performance Technical Computing in Dynamic Languages workshop (11/17/2014)

2014-08-22 Thread Miles Lubin
I'm not an organizer of this workshop, but just wanted to remind everyone 
of this and note that the deadline for submission is now August 25th, so 
get to writing those papers!

On Monday, March 31, 2014 10:14:21 AM UTC-6, Jiahao Chen wrote:
>
> It is my great pleasure to announce a workshop at this year's 
> Supercomputing 2014 conference entitled 
>
> High Performance Technical Computing in Dynamic Languages 
>
> which will be held on Monday, November 17, 2014 at the Colorado 
> Convention Center in New Orleans, Louisiana. 
>
> On behalf of the organizing committee, I hope you will be able to 
> attend the workshop to meet other Julia developers and enthusiasts, 
> and to find common ground with practitioners of other related 
> languages such as Python and R. 
>
> We are also soliciting papers regardless of discipline, affiliation or 
> language (within the scope of high level dynamic languages) pertaining 
> to the scalability challenges of deploying code written in high level 
> dynamic languages. 
>
> Please refer to the conference website for further details: 
>
> http://jiahao.github.io/hptcdl-sc14 
>
> Thanks, 
>
> Jiahao Chen 
> Staff Research Scientist 
> MIT Computer Science and Artificial Intelligence Laboratory 
>


Re: [julia-users] Optional import mechanism

2014-08-19 Thread Miles Lubin
This may do what you want (snippet from JuMP):

if isdir(Pkg.dir("ArrayViews"))
    eval(Expr(:import, :ArrayViews))
    const subarr = ArrayViews.view
else
    const subarr = Base.sub
end


On Tuesday, August 19, 2014 7:38:27 PM UTC-6, Júlio Hoffimann wrote:
>
> Hi Joey,
>
>> It’s a bit ugly, but this should work:
>>
>> try
>> eval(Expr(:import, :ImageView))
>> global view = ImageView.view
>> catch err
>> @show err
>> # fallback to nothing
>> global view = view(args...; kargs...) = (nothing, nothing)
>> end
>>
>> Unfortunately it doesn't work:
>
> err => UndefVarError(:ImageView)
> err => ErrorException("invalid redefinition of constant view")
>
> Júlio.
>


Re: [julia-users] Suitable way to implement DSL for linear constraints

2014-08-18 Thread Miles Lubin
You can ignore any LineNumberNodes in a block; they're used for generating 
backtraces. Also, dump() is a great tool for printing out expressions to 
inspect their structure.
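
A small illustration of both points (a sketch assuming Julia 0.3-era 
expression structure, where line markers may appear as LineNumberNode or 
Expr(:line, ...)):

ex = :(begin
    a = 1
    b = 2
end)
dump(ex)   # prints the full structure, line nodes included
stmts = filter(a -> !isa(a, LineNumberNode) &&
                    !(isa(a, Expr) && a.head == :line), ex.args)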

On Monday, August 18, 2014 4:43:16 AM UTC-6, Tomas Krehlik wrote:
>
> Thanks, sometimes even a small nudge helps. I have it almost working, but 
> have one last question. Why aren't the outputs of these two the same? The 
> macro returns Expr four times, while the operation on the expression returns 
> LineNumberNode in one case.
>
> macro test(ex)
> [println(typeof(ex.args[i])) for i=1:length(ex.args)]
> end
>
> @test begin
>betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>end
>
> test = :(begin
>   betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>   3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>   end)
>
> [println(typeof(test.args[i])) for i=1:length(test.args)]
>
> Thanks, T.
>
> On Monday, 18 August 2014 06:27:00 UTC+2, Miles Lubin wrote:
>>
>> Using eval() within a macro is almost always not what you should be 
>> doing. Macros should return code that may depend on the values of the input 
>> symbolically. In general, I would try writing down the expression that you 
>> want to generate, and then try to reproduce it in the macro, using 
>> macroexpand() for debugging.
>>
>> On Saturday, August 16, 2014 1:39:59 PM UTC-6, Tomas Krehlik wrote:
>>>
>>> The packages are certainly cool, but do not achieve what I want. I 
>>> probably explained myself poorly, so I have a minimal example, which I 
>>> would like to turn into a macro but cannot make it work. :-/
>>>
>>> Here is how it works without macro:
>>>
>>> function β(from, to, lag)
>>> # vars, lags and ntyp should be taken from the already 
>>> estimated object
>>>  col = fill(0.0, vars*lags + vars*ntyp)
>>>  col[(from + to)*lag] = 1.0
>>>  return col
>>> end
>>>
>>> function betaa(from, to, lag)
>>>  return β(from, to, lag)
>>> end
>>>
>>> test = :(begin
>>>betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>>>3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>>>end)
>>>
>>> # Hardcode so far
>>> vars = 20
>>> lags = 15
>>> ntyp = 20
>>> ind = [typeof(test.args[i])==Expr for i=2:length(test.args)]
>>> ind = ([1:length(ind)][ind])+1
>>> R = apply(hcat,[eval(test.args[i].args[1]) for i=ind])
>>> r = [eval(test.args[i].args[2]) for i=ind]
>>>
>>> Gives the proper matrices, but when I do 
>>> macro restrictions(ex)
>>> vars = 20
>>> lags = 15
>>> ntyp = 20
>>> ind = [typeof(ex.args[i])==Expr for i=2:length(ex.args)]
>>> ind = ([1:length(ind)][ind])+1
>>> R = apply(hcat,[eval(ex.args[i].args[1]) for i=ind])
>>> r = [eval(ex.args[i].args[2]) for i=ind]
>>> (R, r)
>>> end
>>>
>>> @restrictions begin
>>>betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>>>3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>>>end
>>>
>>> It does not work and I have no idea why; probably still too advanced a 
>>> concept for me. Also, the size of the column returned from betaa 
>>> depends on the size of the system, which will be stored in another 
>>> variable, so it would be cool if the betaa function could be evaluated in 
>>> the macro environment so that it would use the vars, lags and ntyp from 
>>> within the macro.
>>>
>>> Thanks for any help with this.
>>>
>>>
>>> On Saturday, 16 August 2014 17:44:26 UTC+2, Miles Lubin wrote:
>>>>
>>>> JuMP is also a good place to look. 
>>>>
>>>> On Saturday, August 16, 2014 8:11:04 AM UTC-6, Stefan Karpinski wrote:
>>>>>
>>>>> Have you looked at CVX? I feel like it may be relevant.
>>>>>
>>>>> On Aug 16, 2014, at 8:47 AM, Tomas Krehlik  
>>>>> wrote:
>>>>>
>>>>> I have a package that implements much of vector autoregression 
>>>>> estimation and I would like to develop a macro that would implement a 
>>>>> domain-specific language for imposing linear restrictions in 
>>>>> multidimensional time-series models. Of course, the user could input 
>>>>> the restrictions using some matrices, however I tend to think that they 
>>>>> will be much less readable 
>>>>> t

Re: [julia-users] Suitable way to implement DSL for linear constraints

2014-08-17 Thread Miles Lubin
Using eval() within a macro is almost always not what you should be doing. 
Macros should return code that may depend on the values of the input 
symbolically. In general, I would try writing down the expression that you 
want to generate, and then try to reproduce it in the macro, using 
macroexpand() for debugging.
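
As a hedged sketch of that symbolic approach (not code from the thread; the 
names are mine, and I assume each restriction line parses as an :(=) 
expression whose first argument is the left-hand side, as in the original 
snippet), the macro can return an expression that builds the matrices at 
runtime in the caller's scope:

macro restrictions(ex)
    rows = filter(a -> isa(a, Expr) && a.head == :(=), ex.args)
    lhs = Expr(:call, :hcat, [row.args[1] for row in rows]...)
    rhs = Expr(:call, :vcat, [row.args[2] for row in rows]...)
    # esc() so that betaa, vars, lags and ntyp resolve where the macro is used
    return esc(:( ($lhs, $rhs) ))
end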

On Saturday, August 16, 2014 1:39:59 PM UTC-6, Tomas Krehlik wrote:
>
> The packages are certainly cool, but do not achieve what I want. I 
> probably explained myself poorly, so I have a minimal example, which I 
> would like to turn into a macro but cannot make it work. :-/
>
> Here is how it works without macro:
>
> function β(from, to, lag)
> # vars, lags and ntyp should be taken from the already estimated 
> object
>  col = fill(0.0, vars*lags + vars*ntyp)
>  col[(from + to)*lag] = 1.0
>  return col
> end
>
> function betaa(from, to, lag)
>  return β(from, to, lag)
> end
>
> test = :(begin
>betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>end)
>
> # Hardcode so far
> vars = 20
> lags = 15
> ntyp = 20
> ind = [typeof(test.args[i])==Expr for i=2:length(test.args)]
> ind = ([1:length(ind)][ind])+1
> R = apply(hcat,[eval(test.args[i].args[1]) for i=ind])
> r = [eval(test.args[i].args[2]) for i=ind]
>
> Gives the proper matrices, but when I do 
> macro restrictions(ex)
> vars = 20
> lags = 15
> ntyp = 20
> ind = [typeof(ex.args[i])==Expr for i=2:length(ex.args)]
> ind = ([1:length(ind)][ind])+1
> R = apply(hcat,[eval(ex.args[i].args[1]) for i=ind])
> r = [eval(ex.args[i].args[2]) for i=ind]
> (R, r)
> end
>
> @restrictions begin
>betaa(1,2,3) + 2.0 * betaa(2,3,1)=1
>3.0 * betaa(2,1,3) + betaa(2,3,4) = 1000
>end
>
> It does not work and I have no idea why; probably still too advanced a 
> concept for me. Also, the size of the column returned from betaa depends 
> on the size of the system, which will be stored in another variable, so it 
> would be cool if the betaa function could be evaluated in the macro 
> environment so that it would use the vars, lags and ntyp from within the 
> macro.
>
> Thanks for any help with this.
>
>
> On Saturday, 16 August 2014 17:44:26 UTC+2, Miles Lubin wrote:
>>
>> JuMP is also a good place to look. 
>>
>> On Saturday, August 16, 2014 8:11:04 AM UTC-6, Stefan Karpinski wrote:
>>>
>>> Have you looked at CVX? I feel like it may be relevant.
>>>
>>> On Aug 16, 2014, at 8:47 AM, Tomas Krehlik  wrote:
>>>
>>> I have a package that implements much of vector autoregression 
>>> estimation and I would like to develop a macro that would implement a 
>>> domain-specific language for imposing linear restrictions in multidimensional 
>>> time-series models. Of course, the user could input the restrictions using 
>>> some matrices, however I tend to think that they will be much less readable 
>>> than if I implement this macro and I hope it should not be that bad. 
>>> Typical example would be:
>>>
>>> restr = @restrictions begin
>>>  2 beta[1,2,1] - beta[2,1,3] = 10
>>>  beta[1,3,1] - beta[1,4,1] = 0
>>> end
>>>
>>> and the restr would be a tuple with matrix and vector after this.
>>>
>>> The beta has the following structure beta[from, to, lag]. The macro 
>>> should take each line, parse it, and form the proper restriction matrix R 
>>> from the signs and indices of the betas, plus the vector r. So I 
>>> need to parse the signs, constants, and indices of the betas and the thing 
>>> that is after the equality.
>>>
>>> Am I right that the right direction is parsing the lines with regular 
>>> expressions to create the right matrices, or do you have suggestion for 
>>> better approach? Any further reference for something similar (or useful 
>>> resource) that has been done before is also very appreciated.
>>>
>>> Related on that note, is there any function to return all matches from 
>>> regex? That seems to be the biggest bottleneck so far.
>>>
>>> Any comments appreciated. Thanks!
>>>
>>>

Re: [julia-users] Suitable way to implement DSL for linear constraints

2014-08-16 Thread Miles Lubin
JuMP is also a good place to look. 

On Saturday, August 16, 2014 8:11:04 AM UTC-6, Stefan Karpinski wrote:
>
> Have you looked at CVX? I feel like it may be relevant.
>
> On Aug 16, 2014, at 8:47 AM, Tomas Krehlik  > wrote:
>
> I have a package that implements much of vector autoregression estimation 
> and I would like to develop a macro that would implement a domain-specific 
> language for imposing linear restrictions in multidimensional time-series 
> models. Of course, the user could input the restrictions using some 
> matrices, however I tend to think that they will be much less readable than 
> if I implement this macro and I hope it should not be that bad. Typical 
> example would be:
>
> restr = @restrictions begin
>  2 beta[1,2,1] - beta[2,1,3] = 10
>  beta[1,3,1] - beta[1,4,1] = 0
> end
>
> and the restr would be a tuple with matrix and vector after this.
>
> The beta has the following structure beta[from, to, lag]. The macro should 
> take each line, parse it, and form the proper restriction matrix R from 
> the signs and indices of the betas, plus the vector r. So I need to 
> parse the signs, constants, and indices of the betas and the thing that is 
> after the equality.
>
> Am I right that the right direction is parsing the lines with regular 
> expressions to create the right matrices, or do you have suggestion for 
> better approach? Any further reference for something similar (or useful 
> resource) that has been done before is also very appreciated.
>
> Related on that note, is there any function to return all matches from 
> regex? That seems to be the biggest bottleneck so far.
>
> Any comments appreciated. Thanks!
>
>

Re: [julia-users] Behavior of size()

2014-08-13 Thread Miles Lubin
Perhaps the motivation for this behavior is that you can use 2d indexing on 
a 1d vector:

julia> x = [1,2,3];

julia> x[1,1]
1

julia> x[1,2]
ERROR: BoundsError()

Whether it makes sense to allow this is another question.

On Wednesday, August 13, 2014 8:59:03 PM UTC-6, Isaiah wrote:
>
> size(x,n) apparently works to arbitrary n. I don't think this is 
> intentional, because the C implementation includes a dimension check (this 
> call goes through codegen even at the REPL).
>
>
> On Wed, Aug 13, 2014 at 4:54 PM, Dominique Orban  > wrote:
>
>> I'm wondering if the following behavior is intentional and what the 
>> rationale is:
>>
>> julia> x = rand(5);
>>
>> julia> size(x)
>> (5,)
>>
>> julia> n = size(x,1)
>> 5
>>
>> julia> m = size(x,2)
>> 1
>>
>> julia> (n, m) = size(x)
>> ERROR: BoundsError()
>>
>> I would expect either both size(x,2) and (n,m) = size(x) to work and be 
>> consistent, or both to return the same error...
>>
>>
>

Re: [julia-users] Package Popularity Ranking

2014-08-13 Thread Miles Lubin
It would be cool to be able to sort by GitHub stars (and later, GitHub 
clones) on pkg.julialang.org.

On Wednesday, August 13, 2014 3:03:32 PM UTC-6, Iain Dunning wrote:
>
> Hi Michael,
>
> This is a problem I think about a lot, but definitely short on data. 
> pkg.julialang.org has the GitHub stars, which is possibly a decent proxy. 
> I could add an option to sort by github stars, which might help.
>
> And yes, github clone traffic just (yesterday) became available, so I'm 
> hopeful there.
>
> On Wednesday, August 13, 2014 10:53:58 AM UTC-4, John Myles White wrote:
>>
>> You can also use GitHub stars. I'd be really interested to know how those 
>> two measures correlate.
>>
>>  -- John
>>
>> On Aug 13, 2014, at 7:51 AM, Jacob Quinn  wrote:
>>
>> GitHub just started posting `git clone` data for individual repos, so 
>> this is definitely a possibility now. I wouldn't be surprised if Iain's 
>> Package 
>> Evaluator  started 
>> hooking into this data and using it to sort the package list 
>> .
>>
>> -Jacob
>>
>>
>> On Wed, Aug 13, 2014 at 9:03 AM, Michael Smith  wrote:
>>
>>> Is there some sort of way to rank the popularity of Julia packages?
>>> Haven't found anything like that (and apologies if I have missed 
>>> anything).
>>>
>>> The reason I'm asking is that, although I am aware that relying purely
>>> on rankings has several disadvantages, I still do believe that there are
>>> some strong advantages in having a rough idea about what the really
>>> important (or widely-used) packages are.
>>>
>>> This is particularly important considering the fact that the number of
>>> Julia packages is increasing, and for newbies it will be difficult to
>>> find their way around.
>>>
>>> Thanks,
>>>
>>> M
>>>
>>
>>
>>

[julia-users] Re: Multiple dispatch issue: cat() and a custom abstract array

2014-08-04 Thread Miles Lubin
On Monday, August 4, 2014 2:10:36 PM UTC-6, Michael Grant wrote:
>
> Is there an easy resolution to this problem? One solution is for Julia, or 
> me, to define
>
> function cat( d::Integer, X::Union(Number,Base.AbstractArray)... )
> function cat( d::Integer, X::Number... )
>
> If I do it, I'd be reimplementing  the generic method, in the first case 
> at least, unless there is a way for me to link these new declarations 
> directly to the existing implementations. Is there?
>


You could call the generic implementation by using invoke, e.g.,

cat(d::Integer, X::Number...) = invoke(cat, (Integer, Any...), d, X...)

I'm not sure what the performance implications of this are, but it would be 
easy to check.


[julia-users] undergraduate research position available for JuMP (MIT only)

2014-07-25 Thread Miles Lubin
The JuMP team has undergraduate research positions (UROPs) for this fall, 
open to undergraduates at MIT. If you are interested, or know someone who 
may be interested in working on optimization software in Julia, please send 
us a note. Some background in numerical optimization would be preferred, 
but enthusiastic Julia users without this background will receive 
consideration as well.
See the full posting for 
details: http://web.mit.edu/urop/research/openings.html (posted on 7/16/14)

Apologies to the majority of julia-users subscribers to whom this does not 
apply.

Miles


[julia-users] Re: what is the easiest way to convert array type?

2014-07-24 Thread Miles Lubin
How about "convert(Vector{String},dt)"?
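
A quick sketch of that suggestion, reusing the data from the question:

dt = Any["2010/1/4T15:00:00", "2010/1/5T15:00:00", "2010/1/6T15:00:00"]
dts = convert(Vector{String}, dt)   # now an Array{String,1}, no comprehension needed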

On Thursday, July 24, 2014 8:39:02 PM UTC-6, K leo wrote:
>
> I have an Array{Any,1}, but I want to convert to Array{String,1}. The 
> only way I know is through comprehension. 
>
> julia> dt 
> 3-element Array{Any,1}: 
>   "2010/1/4T15:00:00" 
>   "2010/1/5T15:00:00" 
>   "2010/1/6T15:00:00" 
>
>
> julia> dts = [convert(String, dt[i]) for i=1:length(dt)] 
> 3-element Array{String,1}: 
>   "2010/1/4T15:00:00" 
>   "2010/1/5T15:00:00" 
>   "2010/1/6T15:00:00" 
>
>

Re: [julia-users] Re: include("otherfile.jl") inside a function - otherfile.jl doesn't seem to know about variables defined before include

2014-06-26 Thread Miles Lubin
This might be what you're looking for:

macro foo(bar)
   return :(string($bar,".baz"))
end
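
With that definition the interpolation happens at expansion time:

@foo("bar")   # expands to string("bar", ".baz"), which evaluates to "bar.baz"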

On Thursday, June 26, 2014 3:57:14 PM UTC-6, Tomas Lycken wrote:
>
> Just to be clear: I’m aware that there’s probably no way to get this 
> working with e.g. any string variable. But I don’t see a reason why it 
> shouldn’t be possible when passing a constant, or especially a literal 
> constant, to the macro, such as with @foo("bar") (rather than bar="bar"; 
> @foo(bar), which I understand might be impossible). 
>
> // T
>
> On Thursday, June 26, 2014 11:54:55 PM UTC+2, Tomas Lycken wrote:
>
> This gets me quite far - thanks a lot! However, to make it work, I have to 
>> hard-code the value of datafile, since I can’t figure out how to get the 
>> macro to create the string "$datafile.jld" correctly. This is probably 
>> OK for my use case - I only have two datafiles, so I can create two macros 
>> :P But it would be nice to learn how to do it, if it’s possible.
>>
>> Basically, is there a way to make the following work?
>>
>> @macro foo(bar)
>> esc(quote 
>> "$bar.baz"
>> end)
>> end
>>
>> Currently @foo("fizz") is just the string literal "$bar.baz".
>>
>> // T
>>
>> On Thursday, June 26, 2014 9:25:18 PM UTC+2, Mauro wrote:
>>
>> Maybe something like this should work: 
>>>
>>> julia> macro A() 
>>>esc(quote 
>>>a=5 
>>>b=7 
>>>end) 
>>>end 
>>>
>>> julia> function f(x) 
>>>@A 
>>>x+a+b 
>>>end 
>>> f (generic function with 1 method) 
>>>
>>> julia> f(5) 
>>> 17 
>>>
>>> If you don't want your variable names to be mangled then do the `esc`. 
>>>
>>> So, basically just write normal code in the `esc(quote ... end)` bit. 
>>>
>>> On Thu, 2014-06-26 at 19:21, tomas@gmail.com wrote: 
>>> > That explains it, thanks. 
>>> > 
>>> > In my actual problem, what I wanted to do in the included file was 
>>> > something like this: 
>>> > 
>>> > ``` 
>>> > sall,sloops,slost,Nbins,psibins,initialhist,finalhist,vols = 
>>> > jldopen("$datafile.jld") do f 
>>> > read(f, "sall"), 
>>> > read(f, "sloops"), 
>>> > read(f, "sloss"), 
>>> > read(f, "Nbins"), 
>>> > read(f, "psibins"), 
>>> > read(f, "initialhist"), 
>>> > read(f, "finalhist"), 
>>> > read(f, "vols") 
>>> > end 
>>> > ``` 
>>> > 
>>> > where `datafile` is the variable defined in the function. In other 
>>> words, 
>>> > including the file would define and assign to all those variables. Is 
>>> it 
>>> > maybe possible to write a macro that does this? 
>>> > 
>>> > // T 
>>> > 
>>> > On Thursday, June 26, 2014 8:17:29 PM UTC+2, Simon Kornblith wrote: 
>>> >> 
>>> >> include evaluates at top-level, so this would only work if foo were a 
>>> >> global variable. It not possible to include in a function context for 
>>> the 
>>> >> same reason it is not possible to eval in a function context. 
>>> >> 
>>> >> Simon 
>>> >> 
>>> >> On Thursday, June 26, 2014 1:03:00 PM UTC-4, Tomas Lycken wrote: 
>>> >>> 
>>> >>> I have the following two files: 
>>> >>> 
>>> >>> *includetest1.jl*: 
>>> >>> 
>>> >>> module IncludeTest 
>>> >>> 
>>> >>> function testinclude() 
>>> >>> foo = "foo" 
>>> >>> println(foo) 
>>> >>> include("includetest2.jl") 
>>> >>> end 
>>> >>> 
>>> >>> end 
>>> >>> 
>>> >>> *includetest2.jl* 
>>> >>> 
>>> >>> println(foo) 
>>> >>> 
>>> >>> If I now try to execute this the function from the REPL, I get 
>>> errors 
>>> >>> stating that foo is not defined: 
>>> >>> 
>>> >>> julia> include("includetest1.jl") 
>>> >>> 
>>> >>> julia> IncludeTest.testinclude() 
>>> >>> foo 
>>> >>> ERROR: foo not defined 
>>> >>>  in include at boot.jl:244 
>>> >>> while loading [...]/includetest2.jl, in expression starting on line 
>>> 1 
>>> >>> 
>>> >>> I thought include was supposed to just insert the contents of the 
>>> file 
>>> >>> in whatever context you’re in? If include is not the way to do this, 
>>> is 
>>> >>> there another? 
>>> >>> 
>>> >>> For completeness: 
>>> >>> 
>>> >>> 
>>> >>> julia> versioninfo() 
>>> >>> Julia Version 0.3.0-prerelease+3884 
>>> >>> Commit 3e6a6c7* (2014-06-25 10:41 UTC) 
>>> >>> Platform Info: 
>>> >>>   System: Linux (x86_64-linux-gnu) 
>>> >>>   CPU: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz 
>>> >>>   WORD_SIZE: 64 
>>> >>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY) 
>>> >>>   LAPACK: libopenblas 
>>> >>>   LIBM: libopenlibm 
>>> >>> 
>>> >>> // T 
>>> >>> ​ 
>>> >>> 
>>> >> 
>>>
>>> -- 
>>>
>>> ​
>>
> ​
>


[julia-users] ANN: JuliaDiff -- differentiation tools in Julia

2014-06-25 Thread Miles Lubin
This is still a work in progress, but ahead of JuliaCon I'd like to 
announce JuliaDiff, a github organization and website 
(http://www.juliadiff.org/) for packages related to computing derivatives. 
This includes packages based on automatic differentiation. If you've never 
heard of AD, check out the intro paragraph on the website. This is a field 
where I believe the technical features of Julia really make it easier than 
ever before to implement advanced techniques efficiently and (mostly) 
transparently to the user, see, for example, the autodiff keyword in Optim 
which enables computation of exact gradients of user-provided "black box" 
functions. I'm looking forward to continued development, collaboration, and 
contributions to JuliaDiff. Thanks to Theodore Papamarkou for the impetus 
in creating this organization.
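
As a small usage sketch of that keyword (assuming the Optim API of the 
time; the objective here is just an example):

using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, [0.0, 0.0], method = :l_bfgs, autodiff = true)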

Miles

P.S. We're accepting logo submissions.


Re: [julia-users] Optim/DualNumbers question (maybe a math question?)

2014-06-22 Thread Miles Lubin
There was a PR that was prematurely merged, it's still being discussed:
https://github.com/JuliaDiff/DualNumbers.jl/pull/11
In the meantime, a generic Cholesky factorization has been proposed (
https://github.com/JuliaLang/julia/pull/7236) which would solve the
original issue but not necessarily match the speed of Chris's specialized
approach.


On Sun, Jun 22, 2014 at 9:55 PM, John Myles White 
wrote:

> Maybe. Did someone create a pull request?
>
>  — John
>
> On Jun 22, 2014, at 5:22 PM, Thomas Covert  wrote:
>
> did this code ever find its way into DualNumbers.jl?  I do anticipate its
> going to be quite helpful.
>
> -Thom
>
>
>
>
> On Fri, Jun 6, 2014 at 10:32 AM, Thomas Covert 
> wrote:
>
>> Haven't been able to try it since I'm currently travelling.  I bet it
>> will turn out to be useful though.
>>
>>
>> On Friday, June 6, 2014, Chris Foster  wrote:
>>
>>> On Wed, Jun 4, 2014 at 7:21 AM, Thomas Covert 
>>> wrote:
>>> > Your code is about twice as fast (when N = 1000) as the code I
>>> initially
>>> > posted.  I think the speed gains come from the fact that your code
>>> does all
>>> > its work on real numbers, so it only has to do one floating point
>>> operation
>>> > per operation, while my "choldn" works directly on DualNumbers.
>>>  Still, it
>>> > would be great if there was a smart LAPACK routine to do the "Sylvester
>>> > Equation" step as fast as the other standard back substitution
>>> routines.
>>>
>>> I didn't find anything which solves the problem while fully exploiting
>>> the available structure, but I think the version I've put in the pull
>>> request here
>>>
>>> https://github.com/scidom/DualNumbers.jl/pull/11
>>>
>>> should be pretty good efficiency wise since the heavy lifting
>>> basically gets handed off to gemv calls.  It is basically a forward
>>> substitution, but exploiting the structure is important because the
>>> underlying linear system is quite sparse.
>>>
>>> Did you end up using this in the end, or did you find a better way to
>>> formulate the problem?
>>>
>>> ~Chris
>>>
>>
>
>


Re: [julia-users] Re: ANN: Gumbo.jl (HTML parsing library)

2014-06-19 Thread Miles Lubin
On Thursday, June 19, 2014 8:34:46 PM UTC-6, Jameson wrote:
>
> > poorly equipped to handle simultaneous multi-arch binary package 
> installations
>
> agreed. I can't follow entirely why the GLPK steps are complicated, but I 
> think it is because the person who packaged the binaries did not separate 
> them into a usr32 and usr64 folder as I am recommending, so that had to be 
> done after downloading.
>

The GLPK binaries are from upstream, and they can't be expected to follow 
this convention.


[julia-users] Re: Least squares curve fitting package

2014-06-11 Thread Miles Lubin
Hi Paulo,

This package seems to duplicate/complement the curve fitting functionality 
in Optim, which is capable of handling general models and providing 
statistical analysis of the fit. Please join us in the discussion 
here: https://github.com/JuliaOpt/Optim.jl/pull/56 about possibly merging 
the packages.

Thanks,
Miles

On Thursday, May 8, 2014 6:14:22 PM UTC-6, Paulo Jabardo wrote:
>
> Package CurveFit
>
> https://github.com/pjabardo/CurveFit.jl
>
> This package implements a few least squares fitting functions for some 
> common cases (for me at least) . The only "different" aspect of this 
> package is
> the nonlinear least squares fitting function.
>
> For now, I have only implemented the `fit` and `coef` functions of the 
> abstract statistical models interface suggested in package StatsBase. I 
> hope to implement the other functions later on.
>
>
> Paulo
>
>

Re: [julia-users] Optim/DualNumbers question (maybe a math question?)

2014-06-04 Thread Miles Lubin

>
> Any idea why type inference is failing on full(chol(A, :U)) ? 
>

It can't be determined from the types of the arguments what the result will 
be, because you could give different symbols to access different parts of 
the factorization. I think this was a design choice to make it easier to 
access.

If you'd like to contribute your chol() implementation to DualNumbers, we'd 
be happy to have it. It'll need to be restructured a bit to work with 
cholfact().


[julia-users] Re: Optim/DualNumbers question (maybe a math question?)

2014-06-02 Thread Miles Lubin
Hi Thomas,

This is definitely a reasonable thing to want to do, and there are a number 
of methods to differentiate through a matrix factorization. Julia doesn't 
yet have a generic chol() method 
(see https://github.com/JuliaLang/julia/issues/1629), but it does have a 
generic LU decomposition method that works with DualNumbers. So if you 
replace cholfact with lufact your code might just work. However, this won't 
use LAPACK, and off-hand I'm not sure if it's possible to 
reformulate a Cholesky with dual numbers into something LAPACK knows how to 
compute. There also might be some performance issues with your hand-written 
Cholesky factorization, try:

function choldn!{T}(A::Matrix{T})

instead of

function choldn!(A::Array)
 ...
 T = eltype(A);


Exactly how much slower is it?

On Monday, June 2, 2014 2:43:32 PM UTC-4, Thomas Covert wrote:
>
> I'm really excited to use DualNumbers and the autodiff option in Optim.jl. 
>  However, the functions I would like to optimize often make use of linear 
> algebra, including cholesky factorizations.  At present, DualNumbers.jl 
> does not implement a cholesky factorization.  My first pass at a workaround 
> for this was to just write out a standard cholesky solver  (i.e., just 
> writing out the equations listed in wikipedia).  This seems to work fine - 
> i.e., my "choldn()" function factors a DualNumber array into upper and 
> lower triangular matrices whose product is the input.  
>
> However, my "choldn" function listed below is MUCH slower than the LAPACK 
> call built into chol().  I can't say I'm surprised by this, but the speed 
> hit makes the whole enterprise of using autodiff with DualNumbers 
> considerably less useful.
>
> I was hoping to find some neat linear algebra trick that would let me 
> compute a DualNumber cholesky factorization without having to resort to 
> non-LAPACK code, but I haven't found it yet.  That is, I figured that I 
> could compute the cholesky factorization of the real part of the matrix and 
> then separately compute a matrix which represented the DualNumber part. 
>  However, I've not yet found a math text describing such a beast.
>
> Any ideas?  Am I wrong in thinking that a Cholesky decomposition of a 
> square array with DualNumbers is a well-defined mathematical object?
>
> I've included my code for "choldn()" below.
>
> Thanks in advance.
>
>
> CODE 
>
> function choldn!(A::Array)
>     N = size(A,1);
>     T = eltype(A);
>     tmp = zero(T);
>     for j = 1:N # cols
>         for i = j:N # rows
>             if i == j
>                 if i == 1
>                     A[i,j] = sqrt(A[i,j]);
>                 else
>                     tmp = zero(T);
>                     for k = 1:j-1
>                         tmp += A[j,k] ^ 2;
>                     end
>                     A[i,j] = sqrt(A[i,j] - tmp);
>                 end
>             else
>                 if j == 1
>                     A[i,j] = A[i,j] / A[j,j];
>                 else
>                     tmp = zero(T);
>                     for k = 1:j-1
>                         tmp += A[i,k] * A[j,k];
>                     end
>                     A[i,j] = (A[i,j] - tmp) / A[j,j];
>                 end
>             end
>         end
>     end
>     # then zero out the non-cholesky stuff
>     for i = 1:N
>         if i < N
>             for j = i+1:N
>                 A[i,j] = zero(T);
>             end
>         end
>     end
> end
>
>

Re: [julia-users] Optim.jl: unexpected results when using gradient-based methods

2014-05-22 Thread Miles Lubin
I can get another 50% speedup by:

- Running the optimization twice and timing the second run only; this is 
the more appropriate way to benchmark Julia because it excludes the 
function compilation time
- Setting autodiff=true
- Breaking up the long chains of sums, which apparently are slow

At this point one really needs to compare the number of function 
evaluations in each method, as John suggested.
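
As a sketch of the first point (f and x0 stand in for the actual objective 
and starting point):

Optim.optimize(f, x0)          # first run includes JIT compilation
@time Optim.optimize(f, x0)    # second run measures the actual solve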

On Thursday, May 22, 2014 9:53:36 AM UTC-4, Holger Stichnoth wrote:
>
> Thanks, it's faster now (by roughly a factor of 3 on my computer), but 
> still considerably slower than fminunc:
>
> Averages over 20 runs:
> Julia/Optim.optimize: 10.5s
> Matlab/fminunc: 2.6s
>
> Here are my Matlab settings:
> options = optimset('Display', 'iter', ...
>  'MaxIter', 2500, 'MaxFunEvals', 50, ...
>  'TolFun', 1e-6, 'TolX', 1e-6, ...
>  'GradObj', 'off', 'DerivativeCheck', 'off');
>
> startb  = ones(1,nVar)';
> [estim_clo, ll_clo]= ...
>  fminunc(@(param)clogit_ll(param,data), ...
>  startb,options);
>
> Could the speed issue be related to the following messages that I get when 
> I run the Julia code?
> C:\Users\User\Documents\References\Software\Julia\mlubin>julia main.jl
> Warning: could not import Base.foldl into NumericExtensions
> Warning: could not import Base.foldr into NumericExtensions
> Warning: could not import Base.sum! into NumericExtensions
> Warning: could not import Base.maximum! into NumericExtensions
> Warning: could not import Base.minimum! into NumericExtensions
>
>
>
>
> Am Donnerstag, 22. Mai 2014 14:18:36 UTC+1 schrieb Miles Lubin:
>>
>> I was able to get a nearly 5x speedup by avoiding the matrix allocation 
>> and making the accumulators type stable: 
>> https://gist.github.com/mlubin/055690ddf2466e98bba6
>>
>> How does this compare with Matlab now?
>>
>> On Thursday, May 22, 2014 6:38:44 AM UTC-4, Holger Stichnoth wrote:
>>>
>>> @ John: You are right, when I specify the function as 
>>> clogit_ll(beta::Vector) instead of clogit_ll(beta::Vector{Float64}), 
>>> autodiff = true works fine. Thanks for your help!
>>>
>>> @ Tim: I have set the rather strict default convergence criteria of 
>>> Optim.optimize to Matlab's default values for fminunc, but the speed 
>>> difference is still there.
>>>
>>> @ Miles/John: Getting rid of the global variables through closures and 
>>> devectorizing made the optimization _slower_ not faster in my case: 
>>> https://gist.github.com/stichnoth/7f251ded83dcaa384273. I was surprised 
>>> to see this as I expected a speed increase myself. 
>>>  
>>>
>>> Am Mittwoch, 21. Mai 2014 16:48:51 UTC+1 schrieb Miles Lubin:
>>>>
>>>> Just to extend on what John said, also think that if you can 
>>>> restructure the code to devectorize it and avoid using global variables, 
>>>> you'll see *much* better performance. 
>>>>
>>>> The way to avoid globals is by using closures, for example:
>>>> function foo(x, data)
>>>> ...
>>>> end
>>>>
>>>>
>>>> ...
>>>> data_raw = readcsv(file)
>>>> data = reshape(data_raw, nObs, nChoices*(1+nVar), T)
>>>>
>>>>
>>>>
>>>> Optim.optimize(x-> foo(x,data), ...)
>>>>
>>>>
>>>>
>>>> On Tuesday, May 20, 2014 11:47:39 AM UTC-4, John Myles White wrote:
>>>>>
>>>>> Glad that you were able to figure out the source of your problems.
>>>>>
>>>>> It would be good to get a sense of the amount of time spent inside 
>>>>> your objective function vs. the amount of time spent in the code for 
>>>>> optimize(). In general, my experience is that >>90% of the compute time 
>>>>> for 
>>>>> an optimization problem is spent in the objective function itself. If you 
>>>>> instrument your objective function to produce timing information on each 
>>>>> call, that would help a lot since you could then get a sense of how much 
>>>>> time is being spent in the code for optimize() after accounting for your 
>>>>> function itself.
>>>>>
>>>>> It’s also worth keeping in mind that your use of implicit finite 
>>>>> differencing means that your objective function is being called a lot 
>>>>> more 
>>>>> times t

Re: [julia-users] Optim.jl: unexpected results when using gradient-based methods

2014-05-22 Thread Miles Lubin
I was able to get a nearly 5x speedup by avoiding the matrix allocation and 
making the accumulators type 
stable: https://gist.github.com/mlubin/055690ddf2466e98bba6
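
To illustrate the type-stable accumulator point (a generic sketch, not code 
from the gist):

function sumsq(x::Vector{Float64})
    s = 0.0              # not s = 0: an Int start would change the type of s
    for v in x           # mid-loop and defeat the compiler's specialization
        s += v * v
    end
    return s
end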

How does this compare with Matlab now?

On Thursday, May 22, 2014 6:38:44 AM UTC-4, Holger Stichnoth wrote:
>
> @ John: You are right, when I specify the function as 
> clogit_ll(beta::Vector) instead of clogit_ll(beta::Vector{Float64}), 
> autodiff = true works fine. Thanks for your help!
>
> @ Tim: I have set the rather strict default convergence criteria of 
> Optim.optimize to Matlab's default values for fminunc, but the speed 
> difference is still there.
>
> @ Miles/John: Getting rid of the global variables through closures and 
> devectorizing made the optimization _slower_ not faster in my case: 
> https://gist.github.com/stichnoth/7f251ded83dcaa384273. I was surprised 
> to see this as I expected a speed increase myself. 
>  
>
> Am Mittwoch, 21. Mai 2014 16:48:51 UTC+1 schrieb Miles Lubin:
>>
>> Just to extend on what John said, also think that if you can restructure 
>> the code to devectorize it and avoid using global variables, you'll see 
>> *much* better performance. 
>>
>> The way to avoid globals is by using closures, for example:
>> function foo(x, data)
>> ...
>> end
>>
>>
>> ...
>> data_raw = readcsv(file)
>> data = reshape(data_raw, nObs, nChoices*(1+nVar), T)
>>
>>
>>
>> Optim.optimize(x-> foo(x,data), ...)
>>
>>
>>
>> On Tuesday, May 20, 2014 11:47:39 AM UTC-4, John Myles White wrote:
>>>
>>> Glad that you were able to figure out the source of your problems.
>>>
>>> It would be good to get a sense of the amount of time spent inside your 
>>> objective function vs. the amount of time spent in the code for optimize(). 
>>> In general, my experience is that >>90% of the compute time for an 
>>> optimization problem is spent in the objective function itself. If you 
>>> instrument your objective function to produce timing information on each 
>>> call, that would help a lot since you could then get a sense of how much 
>>> time is being spent in the code for optimize() after accounting for your 
>>> function itself.
>>>
>>> It’s also worth keeping in mind that your use of implicit finite 
>>> differencing means that your objective function is being called a lot more 
>>> times than theoretically necessary, so that any minor performance issue in 
>>> it will very substantially slow down the solver.
>>>
>>> Regarding you objective function’s code, I suspect that the combination 
>>> of global variables and memory-allocating vectorized arithmetic means that 
>>> your objective function might be a good bit slower in Julia than in Matlab. 
>>> Matlab seems to be a little better about garbage collection for vectorized 
>>> arithmetic and Julia is generally not able to optimize code involving 
>>> global variables.
>>>
>>> Hope that points you in the right direction.
>>>
>>>  — John
>>>
>>> On May 20, 2014, at 8:34 AM, Holger Stichnoth  wrote:
>>>
>>> Hi Andreas,
>>> hi John,
>>> hi Miles (via julia-opt, where I mistakenly also posted my question 
>>> yesterday),
>>>
>>> Thanks for your help. Here is the link to the Gist I created: 
>>> https://gist.github.com/anonymous/5f95ab1afd241c0a5962
>>>
>>> In the process of producing a minimal (non-)working example, I 
>>> discovered that the unexpected results are due to the truncation of the 
>>> logit choice probabilities in the objective function. Optim.optimize() is 
>>> sensitive to this when method = :l_bfgs is used. With method = 
>>> :nelder_mead, everything works fine. When I comment out the truncation, 
>>> :l_bfgs works as well. However, I need to increase the xtol from its 
>>> default of 1e-12 to at least 1e-10, otherwise I get the error that the 
>>> linesearch failed to converge.
>>>
>>> I guess I should just do without the truncation. The logit probabilities 
>>> are between 0 and 1 by construction anyway. I had just copied the 
>>> truncation code from a friend who had told me that probabilities that are 
>>> too close to 0 or 1 sometimes cause numerical problems in his Matlab code 
>>> of the same function. With Optim.optimize(), it seems to be the other way 
>>> around, i.e. moving the probabilities further away from 0 or 1 (even by 
>>> tiny amounts) means that the stability of the (gradi

Re: [julia-users] Optim.jl: unexpected results when using gradient-based methods

2014-05-21 Thread Miles Lubin
Just to extend on what John said, also think that if you can restructure 
the code to devectorize it and avoid using global variables, you'll see 
*much* better performance. 

The way to avoid globals is by using closures, for example:
function foo(x, data)
    ...
end

...
data_raw = readcsv(file)
data = reshape(data_raw, nObs, nChoices*(1+nVar), T)

Optim.optimize(x -> foo(x, data), ...)



On Tuesday, May 20, 2014 11:47:39 AM UTC-4, John Myles White wrote:
>
> Glad that you were able to figure out the source of your problems.
>
> It would be good to get a sense of the amount of time spent inside your 
> objective function vs. the amount of time spent in the code for optimize(). 
> In general, my experience is that >>90% of the compute time for an 
> optimization problem is spent in the objective function itself. If you 
> instrument your objective function to produce timing information on each 
> call, that would help a lot since you could then get a sense of how much 
> time is being spent in the code for optimize() after accounting for your 
> function itself.
>
> It’s also worth keeping in mind that your use of implicit finite 
> differencing means that your objective function is being called a lot more 
> times than theoretically necessary, so that any minor performance issue in 
> it will very substantially slow down the solver.
>
> Regarding you objective function’s code, I suspect that the combination of 
> global variables and memory-allocating vectorized arithmetic means that 
> your objective function might be a good bit slower in Julia than in Matlab. 
> Matlab seems to be a little better about garbage collection for vectorized 
> arithmetic and Julia is generally not able to optimize code involving 
> global variables.
>
> Hope that points you in the right direction.
>
>  — John
>
> On May 20, 2014, at 8:34 AM, Holger Stichnoth 
> > 
> wrote:
>
> Hi Andreas,
> hi John,
> hi Miles (via julia-opt, where I mistakenly also posted my question 
> yesterday),
>
> Thanks for your help. Here is the link to the Gist I created: 
> https://gist.github.com/anonymous/5f95ab1afd241c0a5962
>
> In the process of producing a minimal (non-)working example, I discovered 
> that the unexpected results are due to the truncation of the logit choice 
> probabilities in the objective function. Optim.optimize() is sensitive to 
> this when method = :l_bfgs is used. With method = :nelder_mead, everything 
> works fine. When I comment out the truncation, :l_bfgs works as well. 
> However, I need to increase the xtol from its default of 1e-12 to at least 
> 1e-10, otherwise I get the error that the linesearch failed to converge.
>
> I guess I should just do without the truncation. The logit probabilities 
> are between 0 and 1 by construction anyway. I had just copied the 
> truncation code from a friend who had told me that probabilities that are 
> too close to 0 or 1 sometimes cause numerical problems in his Matlab code 
> of the same function. With Optim.optimize(), it seems to be the other way 
> around, i.e. moving the probabilities further away from 0 or 1 (even by 
> tiny amounts) means that the stability of the (gradient-based) algorithm is 
> reduced.
>
> So for me, the problem is solved. The problem was not with Optim.jl, but 
> with my own code.
>
> The only other thing that I discovered when trying out Julia and Optim.jl 
> is that the optimization is currently considerably slower than Matlab's 
> fminunc. From the Gist I provided above, are there any potential 
> performance improvements that I am missing out on?
>
> Best wishes,
> Holger
>
>
> On Monday, 19 May 2014 14:51:16 UTC+1, John Myles White wrote:
>>
>> If you can, please do share an example of your code. Logit-style models 
>> are in general numerically unstable, so it would be good to see how exactly 
>> you’ve coded things up.
>>
>> One thing you may be able to do is use automatic differentiation via the 
>> autodiff = true keyword to optimize, but that assumes that your objective 
>> function is written in completely pure Julia code (which means, for 
>> example, that your code must not call any of functions not written in Julia 
>> provided by Distributions.jl).
>>
>>  — John
>>
>> On May 19, 2014, at 4:09 AM, Andreas Noack Jensen  
>> wrote:
>>
>> What is the output of versioninfo() and Pkg.installed("Optim")? Also, 
>> would it be possible to make a gist with your code?
>>
>>
>> 2014-05-19 12:44 GMT+02:00 Holger Stichnoth :
>>
>>>  Hello,
>>>
>>> I installed Julia a couple of days ago and was impressed by how easy it was 
>>> to make the switch from Matlab and to parallelize my code
>>> (something I had never done before in any language; I'm an economist 
>>> with only limited programming experience, mainly in Stata and Matlab).
>>>
>>> However, I ran into a problem when using Optim.jl for Maximum Likelihood 
>>> estimation of a conditional logit model. With the default Nelder-Mead 
>>> algorithm, optimize from the Optim.jl package gave me th

[julia-users] Re: I cannot call "using Requests" in win XP

2014-05-03 Thread Miles Lubin
Relatedly, I think it's bad practice to simply put 
include("../deps/deps.jl") in the module, because this gives 
a meaningless error message to the user when the file doesn't exist. In 
Gurobi.jl, for example, we have:

if isfile(joinpath(Pkg.dir("Gurobi"), "deps", "deps.jl"))
    include("../deps/deps.jl")
else
    error("Gurobi not properly installed. Please run Pkg.build(\"Gurobi\")")
end


On Saturday, May 3, 2014 11:06:37 AM UTC-4, Tony Kelman wrote:
>
> Do those deps.jl files exist? What happens if you do Pkg.build("GnuTLS") 
> or Pkg.build("Cairo") ?
>
>
> On Saturday, May 3, 2014 3:16:30 AM UTC-7, joanenric barcelo wrote:
>>
>> I cannot use Requests package in Win XP. I get the following message:
>>
>> OpenBLAS : Your OS does not support AVX instructions. OpenBLAS is using 
>> Nehalem
>>  kernels as a fallback, which may give poorer performance.
>> _
>> _   _ _(_)_ |  A fresh approach to technical computing
>>(_) | (_) (_)|  Documentation: http://docs.julialang.org
>> _ _   _| |_  __ _   |  Type "help()" to list help topics
>>| | | | | | |/ _` |  |
>>| | |_| | | | (_| |  |  Version 0.3.0-prerelease+2809 (2014-04-28 
>> 22:41 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Commit d1095bb* (4 days old master)
>>  |__/   |  i686-w64-mingw32
>>
>>  julia> using Requests
>>  ERROR: could not open file C:\Documents and Settings\user\.julia\v0.3\
>> GnuTLS
>>  \src\../deps/deps.jl
>>  in include at boot.jl:244
>>  in include_from_node1 at loading.jl:128
>>  in include at boot.jl:244
>>  in include_from_node1 at loading.jl:128
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:54
>>  in include at boot.jl:244
>>  in include_from_node1 at loading.jl:128
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:51
>>  while loading C:\Documents and Settings\user\.julia\v0.3\GnuTLS\src\
>> GnuTLS.j
>>  l, in expression starting on line 6
>>  while loading C:\Documents and Settings\user\.julia\v0.3\Requests\src\
>> Reques
>>  ts.jl, in expression starting on line 8
>>
>>  julia>
>>
>> I am having similar issues with other packages such as Winston, see below
>>
>> OpenBLAS : Your OS does not support AVX instructions. OpenBLAS is using 
>> Nehalem
>> kernels as a fallback, which may give poorer performance.
>>_
>>_   _ _(_)_ |  A fresh approach to technical computing
>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>_ _   _| |_  __ _   |  Type "help()" to list help topics
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.3.0-prerelease+2809 (2014-04-28 22:41 
>> UTC)
>> _/ |\__'_|_|_|\__'_|  |  Commit d1095bb* (4 days old master)
>> |__/   |  i686-w64-mingw32
>>
>> julia> using Winston
>> ERROR: could not open file C:\Documents and Settings\user\.julia\v0.3\
>> Cairo\
>> src\../deps/deps.jl
>> in include at boot.jl:244
>> in include_from_node1 at loading.jl:128
>> in include at boot.jl:244
>> in include_from_node1 at loading.jl:128
>> in reload_path at loading.jl:152
>> in _require at loading.jl:67
>> in require at loading.jl:54
>> in include at boot.jl:244
>> in include_from_node1 at loading.jl:128
>> in reload_path at loading.jl:152
>> in _require at loading.jl:67
>> in require at loading.jl:51
>> while loading C:\Documents and Settings\user\.julia\v0.3\Cairo\src\Cairo.
>> jl,
>> in expression starting on line 3
>> while loading C:\Documents and Settings\user\.julia\v0.3\Winston\src\
>> Winston
>> .jl, in expression starting on line 3
>>
>> julia>
>>
>>
>>
>> Anyone with the same problem or able to point me to a possible solution? 
>> Thanks a lot!
>>
>>
>>

[julia-users] ANN: JuMP 0.5 released with support for nonlinear programming

2014-05-02 Thread Miles Lubin
We're happy to announce the release of JuMP 0.5, which includes support for 
modeling and solving convex and nonconvex nonlinear optimization problems 
which can be modeled by closed-form algebraic expressions. This extends 
JuMP's existing support for linear, mixed-integer, and second-order conic 
optimization problems and puts JuMP in competition with commercial packages 
such as AMPL and GAMS and open-source projects like CppAD and CasADi. JuMP 
can compute exact sparse second-order derivatives using the pure-Julia 
ReverseDiffSparse  package, 
which is callable in standalone form as well. On the technical side, we use 
reverse-mode automatic differentiation and extensively exploit Julia's 
expression manipulation and JIT functionality to essentially construct and 
compile specialized functions at runtime to evaluate the derivative of a 
particular expression.

Right now all problems are solved by using the efficient Ipopt 
interior-point solver; support for more solvers will follow soon. This 
release has been tested on Linux, OS X, and Windows and is compatible with 
both Julia 0.2 and 0.3 prereleases.

For documentation and examples, see 
http://jump.readthedocs.org/en/release-0.5/nlp.html.
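
As a minimal sketch of the new nonlinear syntax (an illustration only; see 
the linked documentation for the authoritative form):

using JuMP
m = Model()
@defVar(m, x >= -10)
@defVar(m, y >= -10)
@setNLObjective(m, Min, (1 - x)^2 + 100 * (y - x^2)^2)
solve(m)
println(getValue(x), " ", getValue(y))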

Release notes: https://github.com/JuliaOpt/JuMP.jl/blob/master/NEWS.md

For Julia users interested in optimization, please direct discussion to the 
julia-opt  mailing list.

Thanks!
Miles, Iain, and Joey


Re: [julia-users] Re: How to use GLPK.exact ?

2014-05-02 Thread Miles Lubin
Looks good!

On Friday, May 2, 2014 12:55:49 PM UTC-4, Stéphane Laurent wrote:
>
> Thank you everybody, almost every point discussed here is now written on 
> my blog .
>


[julia-users] Re: A Reader for the Harwell-Boeing Format

2014-04-27 Thread Miles Lubin
On Sunday, April 27, 2014 5:34:56 PM UTC-4, Dominique Orban wrote:
>
> Thanks a lot all for the feedback. There should now be a (yet 
> unregistered) module named "rb". It may not be the best name (it stands for 
> "Rutherford-Boeing"). As this is my first module, I'd appreciate any 
> feedback. Also I'm sure there are places where the code could be more 
> efficient.
>

The Julia convention is to use more explicit names, so I'd suggest renaming 
the module to something like HarwellBoeing.


[julia-users] Re: A Reader for the Harwell-Boeing Format

2014-04-27 Thread Miles Lubin
Very useful, thanks! I agree with Tim's suggestion that this deserves to be 
registered as a package.

Note that there's already a package for MatrixMarket 
format: https://github.com/ViralBShah/MatrixMarket.jl.

On Sunday, April 27, 2014 4:11:32 AM UTC-4, Dominique Orban wrote:
>
> Here's a reader for matrices and supplementary data written in the 
> Harwell-Boeing format: https://github.com/dpo/rb.jl
>
> The Harwell-Boeing format is the predecessor of the Rutherford-Boeing 
> format, which is the format used by the University of Florida Sparse Matrix 
> Collection. There's quite a bit of juggling to process the Fortran format 
> strings correctly. A Rutherford-Boeing reader is under way.
>
> Enjoy!
>


[julia-users] Re: Trying to import a graph from NetworkX

2014-04-26 Thread Miles Lubin
Hi Ohad,

There was indeed a bug in the graph() constructor. I've filed a 
PR: https://github.com/JuliaLang/Graphs.jl/pull/83

Miles

On Saturday, April 26, 2014 12:19:41 PM UTC-4, Ohad wrote:
>
> To start playing with Julia I'm trying to import a graph from a 
> NetworkX-generated JSON to the Graphs.jl package.
>
> The full gist is here, but 
> basically I have an array of ExVertex and an array of ExEdge, and try to 
> call graph(). I get the following error:
>
> julia> g = graph(new_vs, new_es, is_directed=g_dict["directed"])
> ERROR: assertion failed: edge_index(e) == num_edges(g) + 1
>  in add_edge! at /home/olevinkr/.julia/v0.3/Graphs/src/graph.jl:83
>  in graph at /home/olevinkr/.julia/v0.3/Graphs/src/graph.jl:42
>
> Looking at graph(), 
> if I just run the first couple of lines it seems to create the graph, but 
> then it tries to add all the edges to the graph and fails.
>
> Am I using graph() incorrectly?
>
> Thanks!
>


Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-23 Thread Miles Lubin
On Wednesday, April 23, 2014 3:40:02 PM UTC-4, Stéphane Laurent wrote:
>
> If I don't call GLPKMathProgInterface, does JuMP use an internal solver?
>

If a solver isn't specified, JuMP (actually MathProgBase) will search for 
an available solver and pick one by default. JuMP does not have an internal 
solver.
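
For example, to force a particular solver rather than rely on the default 
search, something like this works (a sketch, modulo version details; 
requires the GLPK interface package):

using JuMP, GLPKMathProgInterface
m = Model(solver=GLPKSolverLP())  # explicit choice instead of the default search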

By the way, future discussions on linear programming etc. should take place 
on the julia-opt mailing 
list: https://groups.google.com/forum/#!forum/julia-opt.

Thanks,
Miles


Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-22 Thread Miles Lubin
Cool! Glad to hear you got it working. Supporting exact coefficients in 
JuMP is technically possible, and I've opened an issue for 
it: https://github.com/JuliaOpt/JuMP.jl/issues/162. This will probably 
remain on the wishlist for a while.
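
The arithmetic side of this is already free in base Julia; it's the solver 
plumbing that needs work. A quick illustration (plain Julia, nothing 
JuMP-specific):

mu = [1//7, 2//7, 4//7]     # exact rationals -- note //, not /
sum(mu) == 1//1             # true, with no rounding anywhere
1//4 + 1//4 + 1//2 == 1//1  # also exactly true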

On Tuesday, April 22, 2014 2:28:01 PM UTC-4, Stéphane Laurent wrote:
>
> Miles, I have successfully installed JuMP and GLPKMathProgInterface on 
> Windows 32-bit. 
>
> Your code works very well, this is really awesome!! However the result is 
> not as precise as the one given by *GLPK.exact*.
>
> using JuMP 
>
>  mu = [1/7, 2/7, 4/7]
>  nu = [1/4, 1/4, 1/2]
>  n = length(mu)
>  
>  m = Model()
>  @defVar(m, p[1:n,1:n] >= 0)
>  @setObjective(m, Min, sum{p[i,j], i in 1:n, j in 1:n; i != j})
>  
>  for k in 1:n
>  @addConstraint(m, sum(p[k,:]) == mu[k])
>  @addConstraint(m, sum(p[:,k]) == nu[k])
>  end
>  solve(m)
>
>
> julia> println("Optimal objective value is:", getObjectiveValue(m))
> Optimal objective value is:0.10714285714285715
>
> julia> 3/28
> 0.10714285714285714
>
>

Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-16 Thread Miles Lubin
Where's the MathProgBase interface? :)
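
(Side note for readers of the thread: the exact arithmetic itself already 
works in base Julia -- the generic linear algebra fallbacks solve rational 
systems exactly. A quick illustration, not RationalSimplex's own API:)

A = [2//1 1//1; 1//1 3//1]
b = [1//1, 2//1]
x = A \ b   # generic LU over Rational{Int}: exact, no floating point
# x == [1//5, 3//5]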

On Wed, Apr 16, 2014 at 11:07 PM, Iain Dunning wrote:
> I implemented a version of simplex method for rational numbers - so you
> solve it exactly in pure Julia.
> https://github.com/IainNZ/RationalSimplex.jl
> Not for serious work - just for fun!
>
>
> On Saturday, April 12, 2014 11:50:26 AM UTC-4, Stéphane Laurent wrote:
>>
>> Thank you everybody. I have updated my blog post, especially to include
>> Carlo's comments.
>> Unfortunately I have some problems to use JuMP (I have opened another
>> discussion about it). And installing pycddlib on Windows 64bit is a real
>> nightmare.


[julia-users] Re: Julia interface to the AMPL modeling language

2014-04-14 Thread Miles Lubin
It seems like this discussion is heading down the rabbit hole, but let me 
address the original question. 

AMPL is certainly the right choice from a maturity perspective, but AMPL 
itself being closed source and the AMPL solver library being written in 
seemingly intentionally obfuscated C leaves a lot to be desired with 
respect to extensibility. It's hard to hook in user-provided functions. 
It's hard to query Hessians of subblocks, which is essential for Schur 
complement decomposition of large-scale structured modules. The need to 
communicate through NL files often causes I/O bottenecks when running on 
high-performance clusters. The recent development of CasADi is another 
indicator that the mature tools like AMPL and ADOL-C aren't satisfactory in 
all applications.

Julia's tagline is "a fresh approach to technical computing", and in that 
vein I think that it's time to take a fresh look at AD. As Tony mentioned, 
Julia has a number of new technical features that, I believe, can really 
change the way that AD is implemented in practice. For example, 
forward-mode AD is already built into the base optimization library (Optim) 
and works when user functions are written generically; there's no need to 
modify the user code to use a particular "AD double" type. This has been 
under discussion in scipy since before Julia was publicly announced 
(https://github.com/scipy/scipy/issues/2035). With easy-to-access 
expression manipulation together with JIT, we can design and compile 
specialized functions to evaluate the derivatives of a particular model. 
This is certainly not a new idea, but now it's actually easy to implement. 
ReverseDiffSparse.jl does this in a total of about 1,300 lines of code, 300 
lines of which are for graph coloring routines to exploit sparsity. I'm not 
claiming that it's production-ready at this point, or faster than AMPL 
(yet), but it does work. AD is indeed its own field, and we do intend to 
document and publish our approaches.
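
To make the "no AD-double modification needed" point concrete, here is a 
toy from-scratch sketch of the mechanism (illustrative only -- not the 
actual Optim or DualNumbers code):

import Base: +, *, sin

immutable Dual
    val::Float64   # function value
    der::Float64   # derivative value
end

+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)

# A user function written generically, with no knowledge of Dual:
f(x) = x * x + sin(x)

# Seeding der = 1 propagates the derivative through unmodified code:
d = f(Dual(2.0, 1.0))   # d.val == f(2.0), d.der == f'(2.0) == 4 + cos(2.0)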

On Monday, April 14, 2014 7:14:00 PM UTC-4, Dominique Orban wrote:
>
> I realize I should probably have posted this on the Julia-Opt group, but I 
> didn't know about it. I have to look around JuMP more to understand its 
> design. I have 10+ years of successful research with the ampl.jl model (only 
> in Python) so I'm quite confident that it has most of what a continuous 
> optimizer needs, with the exception perhaps of cone optimization. Also I'd 
> like to ask why you would reimplement backward AD. Some people spend their 
> entire career writing reliable AD software. It's not a judgement on your 
> code; I would have interfaced ADOL-C which already provides much of what 
> optimization needs, including Jacobian-vector products and Hessian-vector 
> products (and Hessian of the Lagrangian - vector products).
>
> On Sunday, April 13, 2014 2:48:26 AM UTC-7, Miles Lubin wrote:
>>
>> Hi Dominique,
>>
>> This will definitely be very useful for accessing the large array of 
>> problem instances written in AMPL. 
>>
>> As for writing solvers in Julia around this format, I'm admittedly biased 
>> but I don't think it's an ideal approach. We already have a pure-Julia 
>> implementation of AD for exact sparse Hessians in JuMP. That said, solvers 
>> written in Julia shouldn't be tied to a particular AD implementation or 
>> modeling language; ideally they will just implement a nonlinear 
>> MathProgBase interface (which doesn't quite exist yet), on top of which 
>> they could be called from JuMP or AMPL. I agree with Tony that it could be 
>> very interesting to use this interface as an interchangeable backend for 
>> JuMP's AD.
>>
>> Also, I'm not sure if you've considered it, but by licensing this 
>> interface under GPL, all solvers that use it must be released under GPL 
>> also, I believe. 
>>
>> Miles
>>
>> On Sunday, April 13, 2014 6:14:51 AM UTC+1, Dominique Orban wrote:
>>>
>>> I just put together a few C and Julia files that let users read models 
>>> in the AMPL modeling language for optimization.
>>>
>>> https://github.com/dpo/ampl.jl
>>>
>>> It's not quite a module or a package; please bear with me as I'm still 
>>> learning Julia. This gives access to a huge collection of problems already 
>>> written in AMPL, e.g.,
>>>
>>> http://orfe.princeton.edu/~rvdb/ampl/nlmodels/index.html
>>> https://github.com/mpf/Optimization-Test-Problems (many of the same 
>>> problems, without the solve command)
>>> http://netlib.org/ampl/models/
>>> etc.
>>>
>>> AMPL computes first and second derivatives for you, so it should be easy 
>>> to pass such problems to solvers written in Julia, and to write solvers 
>>> around this model format.
>>>
>>> Cheers.
>>>
>>

[julia-users] Re: Julia interface to the AMPL modeling language

2014-04-13 Thread Miles Lubin
Hi Dominique,

This will definitely be very useful for accessing the large array of 
problem instances written in AMPL. 

As for writing solvers in Julia around this format, I'm admittedly biased 
but I don't think it's an ideal approach. We already have a pure-Julia 
implementation of AD for exact sparse Hessians in JuMP. That said, solvers 
written in Julia shouldn't be tied to a particular AD implementation or 
modeling language; ideally they will just implement a nonlinear 
MathProgBase interface (which doesn't quite exist yet), on top of which 
they could be called from JuMP or AMPL. I agree with Tony that it could be 
very interesting to use this interface as an interchangeable backend for 
JuMP's AD.

Also, I'm not sure if you've considered it, but by licensing this interface 
under GPL, all solvers that use it must be released under GPL also, I 
believe. 

Miles

On Sunday, April 13, 2014 6:14:51 AM UTC+1, Dominique Orban wrote:
>
> I just put together a few C and Julia files that let users read models in 
> the AMPL modeling language for optimization.
>
> https://github.com/dpo/ampl.jl
>
> It's not quite a module or a package; please bear with me as I'm still 
> learning Julia. This gives access to a huge collection of problems already 
> written in AMPL, e.g.,
>
> http://orfe.princeton.edu/~rvdb/ampl/nlmodels/index.html
> https://github.com/mpf/Optimization-Test-Problems (many of the same 
> problems, without the solve command)
> http://netlib.org/ampl/models/
> etc.
>
> AMPL computes first and second derivatives for you, so it should be easy 
> to pass such problems to solvers written in Julia, and to write solvers 
> around this model format.
>
> Cheers.
>


Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-10 Thread Miles Lubin


> By the way for another problem I need to get the vertices of the 
> polyhedron defined by the linear constraints, as with the cddlib library, 
> do you know how I could get that ?
>

Enumerating vertices requires a very different algorithm from optimizing 
over polyhedra. The best way to do this in Julia right now is probably to 
call pycddlib using PyCall (I've personally done this and it works). I 
think you can pass rationals to it as well.
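
Roughly, it looks like the following (a sketch from memory of the pycddlib 
API, with a recent PyCall for the property syntax; pycddlib must be 
installed on the Python side):

using PyCall
cdd = pyimport("cdd")

# Unit square 0 <= x1, x2 <= 1 in H-representation: each row [b, a1, a2]
# encodes the inequality b + a1*x1 + a2*x2 >= 0.
mat = cdd.Matrix([[0, 1, 0],    # x1 >= 0
                  [0, 0, 1],    # x2 >= 0
                  [1, -1, 0],   # x1 <= 1
                  [1, 0, -1]],  # x2 <= 1
                 number_type="fraction")   # exact rational arithmetic
mat.rep_type = cdd.RepType.INEQUALITY
poly = cdd.Polyhedron(mat)
println(poly.get_generators())  # rows [1, v1, v2] are the four vertices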


Re: [julia-users] Re: How to use GLPK.exact ?

2014-04-09 Thread Miles Lubin
When we have a simplex solver (either in Julia or external) that supports 
rational inputs, we could consider making this work with JuMP, but for now 
JuMP stores all data as floating-point as well. 

Stephane, nice work. LP definitely needs more exposure in the probability 
community. Please, please write your LPs algebraically; there's really no 
excuse not to do this in Julia when your original model is in this form.

Compare this:

using JuMP
m = Model()
@defVar(m, p[1:n,1:n] >= 0)
@setObjective(m, Min, sum{p[i,j], i in 1:n, j in 1:n; i != j})

for k in 1:n
    @addConstraint(m, sum(p[k,:]) == μ[k])
    @addConstraint(m, sum(p[:,k]) == ν[k])
end
solve(m)
println("Optimal objective value is:", getObjectiveValue(m))


with the matrix gymnastics that you had to do to use the low-level GLPK 
interface. Writing down a linear programming problem shouldn't be that 
hard! (Note: I haven't tested that JuMP code).

Miles
   


On Wednesday, April 9, 2014 11:18:26 PM UTC+1, Carlo Baldassi wrote:
>
>
>
> About GLPK.exact it is not possible to get the rational number 3/28 
>> instead of a decimal approximation ? 
>>
>
> No, unfortunately. Also, for that to happen/make sense, you'd also need to 
> be able to pass all the *inputs* as exact rational values, i.e. as "1//7" 
> instead of "1/7". This would be possible if we had a native generic Julia 
> linear programming solver, but it's not possible with GLPK, which can only 
> use exact arithmetic internally.
>  


Re: [julia-users] Re: Calculus2: A new proposed interface for Calculus

2014-03-25 Thread Miles Lubin
>
> But once you have such a unified interface, why call the result
> Derivatives? What’s left for Calculus except the content of Derivatives?
> Will Calculus just be a symbolic calculus system?
>

I don't have a strong opinion on the naming, but it seemed reasonable.
Calculus also has simplification of expressions and numerical integration.
It may or may not be convenient to keep these together with the unified
interface for differentiation.
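
For instance, the symbolic and numerical sides of Calculus currently look 
roughly like this (a sketch; exact output and API details may shift with 
the reorganization):

using Calculus
differentiate("sin(x) + x^2", :x)  # symbolic: an Expr, roughly :(cos(x) + 2 * x)
derivative(sin, 1.0)               # numerical: finite-difference approximation of cos(1.0)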

