Re: [julia-users] Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-21 Thread Arch Robison
My write-up and program are now available as attachments at
https://software.intel.com/en-us/articles/computing-delacorte-numbers-with-julia
.

- Arch


Re: [julia-users] Memory leaks in long-running NLopt.jl runs

2015-01-21 Thread Tony Kelman
Are you explicitly calling free(xdoc) anywhere?
(https://github.com/JuliaLang/LightXML.jl/blob/d6584b80d52e8e16f18dac45bdf326edf0eb6534/src/document.jl#L70)
I don't think LightXML is setting any finalizers currently.
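For readers hitting the same leak, a minimal sketch of the explicit-free pattern being suggested here (the loop body and document contents are illustrative, not Robert's actual code):

```julia
using LightXML

# Parsing builds a libxml2 tree on the C side; Julia's gc() will not reclaim
# it, so free(xdoc) must be called once the document is no longer needed.
for i in 1:1000
    xdoc = parse_string("<run id=\"$i\"><result>0.5</result></run>")
    s = string(xdoc)   # keep only the Julia string
    # ... use s ...
    free(xdoc)         # release the C-side memory explicitly
end
```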

On Wednesday, January 21, 2015 at 3:43:57 PM UTC-8, Robert Feldt wrote:
>
> No, I'm pretty sure it doesn't:
>
> https://github.com/JuliaOpt/NLopt.jl/blob/master/REQUIRE
>
> I just realized one of the opt runs is actually calling out to the 
> LightXML package. I think it is likely the mem leak is in there. We create 
> a very large number of XML objects, dump them to strings and then don't 
> hold on to them from the Julia side. I guess it might not be releasing them 
> on the C side of LightXML or in the julia binding. Just a guess though.
>
> Cheers,
>
> Robert
>
> On Thu, Jan 22, 2015 at 12:08 AM, Jameson Nash wrote:
>
>> Does NLopt use the FastAnonymous package to create function closures?
>>
>> On Wed, Jan 21, 2015 at 4:58 PM Robert Feldt wrote:
>>
>>> I am running optimizations using different algorithms within the 
>>> NLopt.jl package. Memory slowly builds until the julia process is killed. I 
>>> thought the problem might be that the NLopt opt objects leak some memory 
>>> so I tried the following code after each optimization run (there is a big 
>>> loop running multiple optimization runs after each other) to release the 
>>> NLopt::Opt object saved in my nlopt in "slot" named "opt" object:
>>>
>>>   # Overwrite the opt object to try to give back its memory. There seems 
>>> to be mem leaks
>>>   # when we have long-running opt runs:
>>>   NLopt.destroy(nlopt.opt) # Not sure what is the effect of this but we 
>>> try...
>>>   nlopt.opt = nothing
>>>   gc()
>>>
>>> but memory keeps building. I guess it could be in my (large and very 
>>> complex and thus hard to distill down to an example) code used in the 
>>> fitness function that NLopt calls out to but this is normal Julia code and 
>>> should be garbage collected. I realize it is hard to debug without more 
>>> concrete code but if anyone has ideas on why the Julia process might slowly 
>>> but continuously be increasing its mem use (I'm running on a MacBook Pro 
>>> with Yosemite) I'd appreciate any tips/pointers or how to debug further.
>>>
>>> Each NLopt run is on the order of 15 minutes with around 2000 function 
>>> evaluations.
>>>
>>> Regards,
>>>
>>> Robert Feldt 
>>>
>>>
>
>
> -- 
> Best regards,
>
> /Robert Feldt
> --
> Tech. Dr. (PhD), Professor of Software Engineering
> Blekinge Institute of Technology, Software Engineering Research Lab, and
> Chalmers, Software Engineering Dept
> Explanea.com - Igniting your Software innovation
> robert.feldt (a) bth.se or robert.feldt (a) chalmers.se or
> robert.feldt (a) gmail.com
> Mobile phone: +46 (0) 733 580 580
> http://www.robertfeldt.net 
>  


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Iain Dunning
Yes indeed Christoph, a package that doesn't work is a package that might
as well not exist. Fortunately, and fairly uniquely I think, we can
quantify to some extent how many of our packages are working, and the
degree to which they are.

In my mind the goal now is "grow fast and don't break too many things", and
I think our pace over the last month or so of around 1 package per day is
fantastic, with good stability of packages (i.e. they pass tests). I've
also noticed that packages being registered now are often of a higher
quality than they used to be, in terms of tests and documentation. I talked
about this a bit at JuliaCon, but in some sense NPM and CRAN represent
different ends of a spectrum of possibilities, and it seems like the
consensus is more towards CRAN. So, we're doing good I think.


On Wed, Jan 21, 2015 at 7:02 PM, Kevin Squire 
wrote:

> Additional references: PyPI currently lists 54212 packages (roughly half as
> many as node), but CRAN only has 6214.
>
> Cheers,
>Kevin
>
> On Wed, Jan 21, 2015 at 3:37 PM, Sean Garborg 
> wrote:
>
>> You wouldn't like node  ;)
>>
>> On Wednesday, January 21, 2015 at 4:29:53 PM UTC-7, Christoph Ortner
>> wrote:
>>>
>>> Great that so many are contributing to Julia, but I would question
>>> whether such a large number of packages will be healthy in the long run. It
>>> will make it very difficult for new users to use Julia effectively.
>>
>>
>


-- 
*Iain Dunning*
PhD Candidate 
 / MIT Operations Research Center 
http://iaindunning.com  /  http://juliaopt.org


[julia-users] set hard view limit in Gadfly plot axis

2015-01-21 Thread Ken B
I'm trying to plot some Geom.lines with Gadfly, but some results are way 
off, so I'd like to keep the y-axis of my plot between hard limits.

The current minvalue and maxvalue arguments stretch out the graph; however, 
if there are datapoints outside these min/max values, the axis will scale 
anyway.

For a simple example:

plot(x=rand(10), y=rand(10), Geom.line, 
Scale.y_continuous(minvalue=-0.10, maxvalue=0.10))

Here I would actually want the y-axis of the plot limited to the range 
[-0.1, 0.1], not stretched out to whatever values lie beyond 0.1.

I've found a related issue 
(https://github.com/dcjones/Gadfly.jl/issues/280) but I don't understand 
the solution.

Thanks,
Ken
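For anyone landing here later: the approach the linked issue points toward is Coord.cartesian, which clips the viewport to hard limits instead of merely hinting the scale. A sketch, assuming a Gadfly version that provides Coord.cartesian:

```julia
using Gadfly

# Scale.y_continuous(minvalue, maxvalue) only suggests a range; points
# outside it still stretch the axis. Coord.cartesian fixes the viewport.
plot(x=rand(10), y=2*rand(10)-1, Geom.line,
     Coord.cartesian(ymin=-0.10, ymax=0.10))
```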


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Kevin Squire
Additional references: PyPI currently lists 54212 packages (roughly half as
many as node), but CRAN only has 6214.

Cheers,
   Kevin

On Wed, Jan 21, 2015 at 3:37 PM, Sean Garborg 
wrote:

> You wouldn't like node  ;)
>
> On Wednesday, January 21, 2015 at 4:29:53 PM UTC-7, Christoph Ortner wrote:
>>
>> Great that so many are contributing to Julia, but I would question
>> whether such a large number of packages will be healthy in the long run. It
>> will make it very difficult for new users to use Julia effectively.
>
>


Re: [julia-users] Memory leaks in long-running NLopt.jl runs

2015-01-21 Thread Robert Feldt
No, I'm pretty sure it doesn't:

https://github.com/JuliaOpt/NLopt.jl/blob/master/REQUIRE

I just realized one of the opt runs is actually calling out to the LightXML
package. I think it is likely the mem leak is in there. We create a very
large number of XML objects, dump them to strings and then don't hold on to
them from the Julia side. I guess it might not be releasing them on the C
side of LightXML or in the julia binding. Just a guess though.

Cheers,

Robert

On Thu, Jan 22, 2015 at 12:08 AM, Jameson Nash  wrote:

> Does NLopt use the FastAnonymous package to create function closures?
>
> On Wed, Jan 21, 2015 at 4:58 PM Robert Feldt 
> wrote:
>
>> I am running optimizations using different algorithms within the NLopt.jl
>> package. Memory slowly builds until the julia process is killed. I thought
>> the problem might be that the NLopt opt objects leak some memory so I
>> tried the following code after each optimization run (there is a big loop
>> running multiple optimization runs after each other) to release the
>> NLopt::Opt object saved in my nlopt in "slot" named "opt" object:
>>
>>   # Overwrite the opt object to try to give back its memory. There seems
>> to be mem leaks
>>   # when we have long-running opt runs:
>>   NLopt.destroy(nlopt.opt) # Not sure what is the effect of this but we
>> try...
>>   nlopt.opt = nothing
>>   gc()
>>
>> but memory keeps building. I guess it could be in my (large and very
>> complex and thus hard to distill down to an example) code used in the
>> fitness function that NLopt calls out to but this is normal Julia code and
>> should be garbage collected. I realize it is hard to debug without more
>> concrete code but if anyone has ideas on why the Julia process might slowly
>> but continuously be increasing its mem use (I'm running on a MacBook Pro
>> with Yosemite) I'd appreciate any tips/pointers or how to debug further.
>>
>> Each NLopt run is on the order of 15 minutes with around 2000 function
>> evaluations.
>>
>> Regards,
>>
>> Robert Feldt
>>
>>


-- 
Best regards,

/Robert Feldt
--
Tech. Dr. (PhD), Professor of Software Engineering
Blekinge Institute of Technology, Software Engineering Research Lab, and
Chalmers, Software Engineering Dept
Explanea.com - Igniting your Software innovation
robert.feldt (a) bth.se or robert.feldt (a) chalmers.se or
robert.feldt (a) gmail.com
Mobile phone: +46 (0) 733 580 580
http://www.robertfeldt.net 


[julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Sean Garborg
You wouldn't like node  ;)

On Wednesday, January 21, 2015 at 4:29:53 PM UTC-7, Christoph Ortner wrote:
>
> Great that so many are contributing to Julia, but I would question whether 
> such a large number of packages will be healthy in the long run. It will 
> make it very difficult for new users to use Julia effectively.



[julia-users] Almost at 500 packages!

2015-01-21 Thread Christoph Ortner
Great that so many are contributing to Julia, but I would question whether such 
a large number of packages will be healthy in the long run. It will make it 
very difficult for new users to use Julia effectively.

Re: [julia-users] Memory leaks in long-running NLopt.jl runs

2015-01-21 Thread Jameson Nash
Does NLopt use the FastAnonymous package to create function closures?
On Wed, Jan 21, 2015 at 4:58 PM Robert Feldt  wrote:

> I am running optimizations using different algorithms within the NLopt.jl
> package. Memory slowly builds until the julia process is killed. I thought
> the problem might be that the NLopt opt objects leak some memory so I
> tried the following code after each optimization run (there is a big loop
> running multiple optimization runs after each other) to release the
> NLopt::Opt object saved in my nlopt in "slot" named "opt" object:
>
>   # Overwrite the opt object to try to give back its memory. There seems
> to be mem leaks
>   # when we have long-running opt runs:
>   NLopt.destroy(nlopt.opt) # Not sure what is the effect of this but we
> try...
>   nlopt.opt = nothing
>   gc()
>
> but memory keeps building. I guess it could be in my (large and very
> complex and thus hard to distill down to an example) code used in the
> fitness function that NLopt calls out to but this is normal Julia code and
> should be garbage collected. I realize it is hard to debug without more
> concrete code but if anyone has ideas on why the Julia process might slowly
> but continuously be increasing its mem use (I'm running on a MacBook Pro
> with Yosemite) I'd appreciate any tips/pointers or how to debug further.
>
> Each NLopt run is on the order of 15 minutes with around 2000 function
> evaluations.
>
> Regards,
>
> Robert Feldt
>
>


[julia-users] Memory leaks in long-running NLopt.jl runs

2015-01-21 Thread Robert Feldt
I am running optimizations using different algorithms within the NLopt.jl 
package. Memory slowly builds until the julia process is killed. I thought 
the problem might be that the NLopt Opt objects leak some memory, so I 
tried the following code after each optimization run (there is a big loop 
running multiple optimization runs one after another) to release the 
NLopt.Opt object saved in the "opt" slot of my nlopt object:

  # Overwrite the opt object to try to give back its memory. There seem to
  # be mem leaks when we have long-running opt runs:
  NLopt.destroy(nlopt.opt) # not sure what the effect of this is, but we try...
  nlopt.opt = nothing
  gc()

but memory keeps building. I guess it could be in my (large, very complex, 
and thus hard to distill down to an example) code in the fitness function 
that NLopt calls out to, but that is normal Julia code and should be 
garbage collected. I realize it is hard to debug without more concrete 
code, but if anyone has ideas on why the Julia process might slowly but 
continuously increase its memory use (I'm running on a MacBook Pro with 
Yosemite), I'd appreciate any tips/pointers on how to debug further.

Each NLopt run is on the order of 15 minutes with around 2000 function 
evaluations.

Regards,

Robert Feldt 



[julia-users] Re: workflow recommendation/tutorial

2015-01-21 Thread Gray Calhoun
I just want to thank everyone who replied to this thread. These
details are really helpful.

On Tuesday, January 20, 2015 at 12:25:30 PM UTC-6, Gray Calhoun wrote:
>
> Seconded. If anyone has time to just record a 5 minute screencast
> of "working productively in Julia" I think he or she would have a
> moderately sized but very appreciative audience.
>
> On Tuesday, January 20, 2015 at 4:45:13 AM UTC-6, Tamas Papp wrote:
>>
>> Hi, 
>>
>> I am wondering what the best workflow is for iterative/exploratory 
>> programming (as opposed to, say, library development).  I feel that my 
>> questions below all have solutions, it's just that I am not experienced 
>> enough in Julia to figure them out. 
>>
>> The way I have been doing it so far: 
>> 1. open a file in the editor, 
>> 2. start `using` some libraries, 
>> 3. write a few functions, load data, plot, analyze 
>> 4. rewrite functions, repeat 2-4 until satisfied. 
>>
>> I usually end up with a bunch of functions, followed by the actual 
>> runtime code. 
>>
>> However, I run into the following issues (or, rather, inconveniences) 
>> with nontrivial code: 
>>
>> a. If I redefine a function, then I have to recompile dependent 
>> functions, which is tedious and occasionally a source of bugs 
>> (cf. https://github.com/JuliaLang/julia/issues/265 ) 
>>
>> b. I can't redefine types. 
>>
>> I can solve both by restarting (`workspace()`), but then I have to 
>> reload & recompile everything. 
>>
>> I am wondering if there is a more organized way of doing this --- eg put 
>> some stuff in a module in a separate file and just keep reloading that, 
>> etc. Any advice, or pointers to tutorials would be appreciated. 
>>
>> I am using Emacs/ESS. 
>>
>> Also, is there a way to unintern symbols (a la CL) that would solve the 
>> type redefinition issue? 
>>
>> Best, 
>>
>> Tamas 
>>
>
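As a concrete version of the module-in-a-separate-file suggestion made in this thread (file and function names are illustrative):

```julia
# --- analysis.jl ---
module Analysis
export crunch
crunch(x) = 2x        # edit freely; reloading gives fresh definitions
end

# --- at the REPL / in a scratch file ---
# include("analysis.jl")   # first load, and again after edits
#                          # (or reload("analysis.jl"))
# Analysis.crunch(21)      # call through the module to pick up new code
# Note: redefining *types* inside the module still requires workspace()
# or a fresh session.
```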

[julia-users] Re: Unexpected variability in quadgk performance

2015-01-21 Thread Steven G. Johnson


On Wednesday, January 21, 2015 at 3:03:29 PM UTC-5, Alex Ames wrote:
>
>  but that fails to explain why sin(x)*cos(x) (which integrates to zero 
> between both 0--pi and 0--2pi) runs fast for 0--pi and slow for 0--2pi. 
>

(It's purely a question of roundoff: if you get lucky and the error 
estimate comes out exactly zero, then it will stop earlier.) 


[julia-users] Re: NLOpt with MLSL throws invalid_args.

2015-01-21 Thread Robert Feldt
Since I also had some trouble using MLSL_LDS, here is some example code that 
worked for me:

using NLopt

count = 0 # keep track of # function evaluations

function myfunc(x::Vector, grad::Vector)
global count
count::Int += 1

sqrt(x[2])
end

opt = Opt(:GN_MLSL_LDS, 2)
subopt = Opt(:LN_NELDERMEAD, 2)
for o in [opt, subopt]
  upper_bounds!(o, [100.0, 100.0])
  lower_bounds!(o, [0.0, 0.0])
  xtol_rel!(o, 1e-4)
  min_objective!(o, myfunc)
  maxtime!(o, 5.0)
end
local_optimizer!(opt, subopt)
(minf,minx,ret) = optimize(opt, [1.234, 5.678])

Cheers,

Robert Feldt



[julia-users] Re: Unexpected variability in quadgk performance

2015-01-21 Thread Steven G. Johnson


On Wednesday, January 21, 2015 at 3:03:29 PM UTC-5, Alex Ames wrote:
>
> quadgk integration of the sin and cos functions takes several seconds when 
> integrated between 0 and 2pi, versus fractional seconds when integrated 
> between 0 and pi. 
>

The difference is whether the integral is exactly zero; if the integral is 
zero you need to specify an abstol.  This is mentioned in the manual on 
quadgk 

:

Returns a pair (I,E) of the estimated integral I and an estimated upper 
bound on the absolute error E. If maxevals is not exceeded then E <= 
max(abstol, reltol*norm(I)) will hold. (Note that it is useful to specify a 
positive abstol in cases where norm(I) may be zero.)


In particular, reltol defaults to sqrt(eps), about 1e-8, while abstol 
defaults to zero (since abstol is dimensionful, i.e. it depends on the 
overall scale of f, there is no way to give it a reasonable nonzero 
default).   However, when the integral is zero, the condition number of the 
problem diverges and it is impossible to guarantee a finite relative error. 
  So, what it is doing is running until the error is zero to machine 
precision or until the default maxevals (10^7) is exceeded.

Moral: if you are computing an integral that may be nearly zero, you should 
specify an appropriate abstol for your problem.
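Concretely, for Alex's example (a sketch; the tolerance value is arbitrary and should be chosen for your problem's scale):

```julia
f(x) = sin(x)*cos(x)

# Over [0, 2pi] the integral is exactly zero, so the relative tolerance can
# never be met; a positive abstol lets quadgk terminate early.
I, E = quadgk(f, 0., 2pi)                  # slow: runs until maxevals or exact zero
I, E = quadgk(f, 0., 2pi, abstol=1e-10)    # fast: stops once E <= 1e-10
```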


Re: [julia-users] Re: Peculiarly slow matrix multiplication

2015-01-21 Thread Micah McClimans
Tim, you're absolutely right: it turns out that both arrays had element 
type Real, and it speeds right up when I convert them to Float64.

Thank you very much.

On Wednesday, January 21, 2015 at 7:41:15 AM UTC-5, Tim Holy wrote:
>
> A few more points: 
> - Simon's suggestion of checking the types is spot on; if one is a matrix 
> of 
> Any, for example, you're doomed to a slower code path. If the types are 
> Array{Float64,2} and Array{Float64,1}, that's not the problem. 
> - It should be slightly faster if you change your parentheses, 
> shrt*(diagm(expr)*shrt'). Jutho's scale suggestion would be even better. 
> - Try the profiler (see the docs). If you're running julia 0.4 and see 
> it's 
> spending a lot of time in generic_matmatmul, do a git pull and 
> rebuild---it 
> should fix the problem. 
>
> --Tim 
>
> On Wednesday, January 21, 2015 01:05:06 AM Simon Byrne wrote: 
> > As Jutho said, this shouldn't happen, but is difficult to diagnose 
> without 
> > further information. What are the types of shrt and expr? (these can be 
> > found using the typeof function). 
> > 
> > Simon 
> > 
> > On Wednesday, 21 January 2015 07:59:34 UTC, Jutho wrote: 
> > > Not sure what is causing the slowness, but you could avoid creating a 
> > > diagonal matrix and then doing the matrix multiplication with 
> diagm(expr) 
> > > which will be treated as a full matrix. 
> > > Instead of shrt*diagm(expr) which is interpreted as the multiplication 
> of 
> > > two full matrices, try scale(shrt,expr) . 
> > > 
> > > Op woensdag 21 januari 2015 07:56:19 UTC+1 schreef Micah McClimans: 
> > >> I'm running into trouble with a line of matrix multiplication going 
> very 
> > >> slowly in one of my programs. The line I'm looking into is: 
> > >> objectivematrix=shrt*diagm(expr)*(shrt') 
> > >> where shrt is 12,000x600 and expr is 600 long. This line takes 
> several 
> > >> HOURS to run, on a computer that can run 
> > >> 
> > >> k=rand(12000,12000) 
> > >> k3=k*k*k 
> > >> 
> > >> in under a minute. I've tried devectorizing the line into the 
> following 
> > >> loop (shrt is block-diagonal with each block ONevecs and -ONevecs 
> > >> respectively, so I split the loop in half) 
> > >> 
> > >> objectivematrix=zeros(2*size(ONevecs,1),2*size(ONevecs,1)) 
> > >> for i in 1:size(ONevecs,1) 
> > >> 
> > >> print(i) 
> > >> for j in 1:size(ONevecs,1) 
> > >> 
> > >> for k in 1:size(ONevecs,2) 
> > >> objectivematrix[i,j]+=ONevecs[i,k]*ONevecs[j,k]*expr[k] 
> > >> end 
> > >> 
> > >> end 
> > >> 
> > >> end 
> > >> for i in 1:size(ONevecs,1) 
> > >> 
> > >> print(i) 
> > >> for j in 1:size(ONevecs,1) 
> > >> 
> > >> for k in 1:size(ONevecs,2) 
> > >> 
> > >> 
> > >> objectivematrix[i+size(ONevecs,1),j+size(ONevecs,1)]+=ONevecs[i,k]*ONevecs[j,k]*expr[k+size(ONevecs,2)]
> > >> end 
> > >> 
> > >> end 
> > >> 
> > >> end 
> > >> 
> > >> and this give a print out every couple seconds- it's faster than the 
> > >> matrix multiplication version, but not enough. Why is this taking so 
> > >> long? 
> > >> This should not be a hard operation. 
>
>
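Combining the two fixes from this thread into one sketch (assuming shrt and expr as in Micah's code, on the Julia 0.3-era API where scale is available):

```julia
# 1) Abstract element types (e.g. Matrix{Real}) force a slow generic matmul;
#    convert to concrete Float64 arrays so BLAS can be used.
shrt64 = convert(Matrix{Float64}, shrt)
expr64 = convert(Vector{Float64}, expr)

# 2) scale(A, b) multiplies the columns of A by b without materializing the
#    full diagm(expr) matrix, so this equals shrt*diagm(expr)*shrt':
objectivematrix = scale(shrt64, expr64) * shrt64'
```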

[julia-users] Unexpected variability in quadgk performance

2015-01-21 Thread Alex Ames
quadgk integration of the sin and cos functions takes several seconds when 
integrated between 0 and 2pi, versus fractional seconds when integrated 
between 0 and pi. Any ideas as to why this might be? My first thought was 
that quadgk might have different behavior when the function integrates to 
zero, but that fails to explain why sin(x)*cos(x) (which integrates to zero 
between both 0--pi and 0--2pi) runs fast for 0--pi and slow for 0--2pi. 

f(x) = sin(x)*cos(x)

function quadgktest()
  println("="^40)
  @time quadgk(sin, 0., pi)
  @time quadgk(f, 0., pi)
  @time quadgk(sin, 0., 2.*pi)
  @time quadgk(f, 0., 2.*pi)
end


   elapsed time: 4.5545e-5 seconds (2312 bytes allocated)
   elapsed time: 0.000117391 seconds (84752 bytes allocated)
   elapsed time: 1.488471598 seconds (454495576 bytes allocated, 47.06% gc time)
   elapsed time: 3.658246353 seconds (984390504 bytes allocated, 51.24% gc time)


[julia-users] Re: ANN: PGF/TikZ packages

2015-01-21 Thread Mykel Kochenderfer
You need to make sure you are running texlive 2014. If you are indeed 
running texlive 2014, then I'm happy to look into this more deeply with 
you. Please file an issue  with 
the code you're trying to use to plot. If you want to try digging into it a 
little on your own, you can follow these steps:
1. run "using TikzPictures"
2. run "tikzDeleteIntermediate(false)"
3. rerun the commands you used to plot
4. go into the console and run "lualatex tikzpicture" on the 
tikzpicture.tex that is generated in the current directory
5. the errors you get in step 4 should give a hint as to what is wrong
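The steps above, sketched as a session (the plotting commands are placeholders for whatever you ran):

```julia
using TikzPictures

tikzDeleteIntermediate(false)   # keep tikzpicture.tex and friends around

# ... rerun the plotting commands that fail ...

# Then, from a shell in the same directory, compile the intermediate file
# by hand and read the errors it reports:
#   lualatex tikzpicture
```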

On Wednesday, January 21, 2015 at 5:21:46 AM UTC-8, David van Leeuwen wrote:
>
> Hello, 
>
> On Thursday, August 21, 2014 at 11:05:08 PM UTC+2, Mykel Kochenderfer 
> wrote:
>>
>> There are three new Julia packages for interfacing with PGF/TikZ 
>>  for making publication-quality graphics.
>>
>>1. TikzPictures.jl . Basic 
>>interface to PGF/TikZ. Images can be saved as PDF, SVG, and TEX. If using 
>>IJulia, it will output SVG images.
>>2. PGFPlots.jl . Plotting tool 
>>that uses the pgfplots  package (built 
>>on top of TikZ).
>>3. TikzGraphs.jl . Graph 
>>layout package using algorithms built into PGF/TikZ 3.0+.
>>
>> Documentation is provided with each package. Installation of the 
>> dependencies (e.g., pdf2svg and pgfplots) is still a bit manual, but 
>> instructions are in the documentation.
>>
>
> This looks great, thanks.   
>
> However, I run into problems with PGFPlots:
>
> Error saving as SVG 
> ERROR: The pdf generation failed.
>  Be sure your latex libraries are fully up to date!
>  You tried: `lualatex --enable-write18 --output-directory=. tikzpicture`
>
>
> The LaTeX on my system (I think a TeX Live LaTeX on Mac) is pdflatex. Does 
> anyone know how I can configure this in PGFPlots?
>
>
> Thanks
>
>
> ---david
>


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Sean Garborg
Ah, there it is! Thanks for outlining the process.

On Wednesday, January 21, 2015 at 11:11:31 AM UTC-7, Iain Dunning wrote:
>
> That is really weird, I have no explanation. If it says updated on 01-20, 
> it's for METADATA (and the last green build for Julia 0.4) as of 2AM EST on 
> the 20th. Weird... It is up now for the 21st.
>
> On Wed, Jan 21, 2015 at 12:57 PM, Sean Garborg wrote:
>
>> Packages being registered and tagged between the 0.3 run and the 0.4 run 
>> -- pretty cool!
>>
>> The package I'm thinking of (Geodesy.jl) was registered and tagged on the 
>> 19th (merged ~13:00 EST), so maybe the run marked '2015-01-20' was kicked 
>> off on the 19th at 2AM EST? I don't know if there's an ideal dating scheme, 
>> but the date of the last METADATA pull (between the 0.3 and 0.4 runs) seems 
>> like a reasonable upper bound, not that you don't have much more important 
>> things on your plate :).
>>
>> On Wednesday, January 21, 2015 at 10:41:29 AM UTC-7, Iain Dunning wrote:
>>>
>>> Only tagged packages are counted. Also, I have to manually push to the 
>>> website still, even though PackageEval hasn't had a problem in a long time. 
>>> I should probably let my baby fly and let it fully automatically run. The 
>>> date is the date I push it on - the actual run happens at around 2AM EST, 
>>> which has actually led to packages being run only on 0.4 the first time 
>>> because they weren't in METADATA when the 0.3 tests were run - a pretty 
>>> narrow window!
>>>
>>> On Wed, Jan 21, 2015 at 8:31 AM, Sean Garborg  
>>> wrote:
>>>
 I think we were over 500 in METADATA last time the pulse was updated. I 
 just know because I registered a package on the 19th and it wasn't in the 
 1/20 status changes. Curious, would that be due to METADATA being updated 
 manually, or the batch taking ~12 hours or so, or the batch needing to be 
 restarted/resumed sometimes, or the date representing more of a post date 
 than a run date?


 On Tuesday, January 20, 2015 at 2:08:48 PM UTC-7, Luthaf wrote:
>
> If you do accept unfinished and very alpha packages, I can submit one 
> right now ...
>
> It won't be very usable, but I am wondering how finished a package should 
> be when submitted to METADATA.
>
> Viral Shah wrote: 
>
> I wonder what the 500th package will be.
>
> -viral
>
> On Tuesday, January 20, 2015 at 9:02:45 PM UTC+5:30, Iain Dunning 
> wrote:
>>
>> Just noticed on http://pkg.julialang.org/pulse.html that we are at 
>> 499 registered packages with at least one version tagged that are Julia 
>> 0.4-dev compatible (493 on Julia 0.3).
>>
>> Thanks to all the package developers for their efforts in growing the 
>> Julia package ecosystem!
>>
>>  
>>>
>>>
>>> -- 
>>> *Iain Dunning*
>>> PhD Candidate 
>>>  / MIT 
>>> Operations Research Center 
>>> http://iaindunning.com  /  http://juliaopt.org
>>>  
>>
>
>
> -- 
> *Iain Dunning*
> PhD Candidate 
>  / MIT 
> Operations Research Center 
> http://iaindunning.com  /  http://juliaopt.org
>  


Re: [julia-users] Re: RFC Display of markdown sections in terminal

2015-01-21 Thread Stefan Karpinski
Agree. Left-aligning seems better.

On Wed, Jan 21, 2015 at 1:35 PM, Tony Kelman  wrote:

> +1 for left-aligning
>
>
> On Tuesday, January 20, 2015 at 11:37:43 AM UTC-8, andy hayden wrote:
>>
>> I posted a PR about this, but would like to gauge thoughts on what
>> formatting for headings (purely in the REPL e.g. in help messages
>> - elsewhere e.g. in html, they are rendered differently).
>>
>> I tentatively put:
>>
>> julia> Base.Markdown.parse("#Title")
>>Title
>>   -=-
>>
>> julia> Base.Markdown.parse("##Section")
>>   Section
>>  -–––-
>>
>> julia> Base.Markdown.parse("###Subsection")
>> Subsection
>> ––
>>
>> Though, as I comment in the PR, personally I dislike centered headings (I
>> find them difficult to read) and prefer left-aligned. Do others feel the
>> same/have other/better ideas for this?
>>
>> https://github.com/JuliaLang/julia/pull/9853
>>
>


[julia-users] Re: Delete old versions of dependencies

2015-01-21 Thread Tony Kelman
No, don't think we have this right now. Any information about how to build 
old dependencies (and what version numbers they were) gets lost in the 
sands of git history at the moment.

This is one of many reasons I prefer out-of-tree builds, which Julia's 
build system doesn't support for dependencies right now.


On Wednesday, January 21, 2015 at 9:13:27 AM UTC-8, Michele wrote:
>
> Hi,
> after 8 months of julia usage I realized that in the deps/ folder there 
> are lots of old dependencies. For example I have:
> $ ls deps/ | grep openblas
> openblas-v0.2.10/
> openblas-v0.2.10.rc1/
> openblas-v0.2.10.rc1.tar.gz
> openblas-v0.2.10.rc2/
> openblas-v0.2.10.rc2.tar.gz
> openblas-v0.2.10.tar.gz
> openblas-v0.2.12/
> openblas-v0.2.12.tar.gz
> openblas-v0.2.13/
> openblas-v0.2.13.tar.gz
> openblas-v0.2.8/
> openblas-v0.2.8.tar.gz
> openblas-v0.2.9/
> openblas-v0.2.9.tar.gz
>
> Is there a command to clean them all (leaving the current versions of the 
> dependency)?
> Thanks,
> Michele
>
>

[julia-users] Re: RFC Display of markdown sections in terminal

2015-01-21 Thread Tony Kelman
+1 for left-aligning


On Tuesday, January 20, 2015 at 11:37:43 AM UTC-8, andy hayden wrote:
>
> I posted a PR about this, but would like to gauge thoughts on what 
> formatting for headings (purely in the REPL e.g. in help messages 
> - elsewhere e.g. in html, they are rendered differently).
>
> I tentatively put:
>
> julia> Base.Markdown.parse("#Title")
>Title
>   -=-
>
> julia> Base.Markdown.parse("##Section")
>   Section
>  -–––-
>
> julia> Base.Markdown.parse("###Subsection")
> Subsection
> ––
>
> Though, as I comment in the PR, personally I dislike centered headings (I 
> find them difficult to read) and prefer left-aligned. Do others feel the 
> same/have other/better ideas for this?
>
> https://github.com/JuliaLang/julia/pull/9853
>


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Iain Dunning
That is really weird, I have no explanation. If it says updated on 01-20,
it's for METADATA (and the last green build for Julia 0.4) as of 2AM EST on
the 20th. Weird... It is up now for the 21st.

On Wed, Jan 21, 2015 at 12:57 PM, Sean Garborg 
wrote:

> Packages being registered and tagged between the 0.3 run and the 0.4 run
> -- pretty cool!
>
> The package I'm thinking of (Geodesy.jl) was registered and tagged on the
> 19th (merged ~13:00 EST), so maybe the run marked '2015-01-20' was kicked
> off on the 19th at 2AM EST? I don't know if there's an ideal dating scheme,
> but the date of the last METADATA pull (between the 0.3 and 0.4 runs) seems
> like a reasonable upper bound, not that you don't have much more important
> things on your plate :).
>
> On Wednesday, January 21, 2015 at 10:41:29 AM UTC-7, Iain Dunning wrote:
>>
>> Only tagged packages are counted. Also, I have to manually push to the
>> website still, even though PackageEval hasn't had a problem in a long time.
>> I should probably let my baby fly and let it fully automatically run. The
>> date is the date I push it on - the actual run happens at around 2AM EST,
>> which has actually led to packages being run only on 0.4 the first time
>> because they weren't in METADATA when the 0.3 tests were run - a pretty
>> narrow window!
>>
>> On Wed, Jan 21, 2015 at 8:31 AM, Sean Garborg 
>> wrote:
>>
>>> I think we were over 500 in METADATA last time the pulse was updated. I
>>> just know because I registered a package on the 19th and it wasn't in the
>>> 1/20 status changes. Curious, would that be due to METADATA being updated
>>> manually, or the batch taking ~12 hours or so, or the batch needing to be
>>> restarted/resumed sometimes, or the date representing more of a post date
>>> than a run date?
>>>
>>>
>>> On Tuesday, January 20, 2015 at 2:08:48 PM UTC-7, Luthaf wrote:

 If you do accept unfinished and very alpha packages, I can submit one
 right now ...

 It won't be very usable, but I am wondering how finished a package should
 be when submitted to METADATA.

 Viral Shah wrote:

 I wonder what the 500th package will be.

 -viral

 On Tuesday, January 20, 2015 at 9:02:45 PM UTC+5:30, Iain Dunning wrote:
>
> Just noticed on http://pkg.julialang.org/pulse.html that we are at
> 499 registered packages with at least one version tagged that are Julia
> 0.4-dev compatible (493 on Julia 0.3).
>
> Thanks to all the package developers for their efforts in growing the
> Julia package ecosystem!
>
>
>>
>>
>> --
>> *Iain Dunning*
>> PhD Candidate
>>  / MIT
>> Operations Research Center 
>> http://iaindunning.com  /  http://juliaopt.org
>>
>


-- 
*Iain Dunning*
PhD Candidate 
 / MIT Operations Research Center 
http://iaindunning.com  /  http://juliaopt.org


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Sean Garborg
Packages being registered and tagged between the 0.3 run and the 0.4 run -- 
pretty cool!

The package I'm thinking of (Geodesy.jl) was registered and tagged on the 
19th (merged ~13:00 EST), so maybe the run marked '2015-01-20' was kicked 
off on the 19th at 2AM EST? I don't know if there's an ideal dating scheme, 
but the date of the last METADATA pull (between the 0.3 and 0.4 runs) seems 
like a reasonable upper bound, not that you don't have much more important 
things on your plate :).

On Wednesday, January 21, 2015 at 10:41:29 AM UTC-7, Iain Dunning wrote:
>
> Only tagged packages are counted. Also, I have to manually push to the 
> website still, even though PackageEval hasn't had a problem in a long time. 
> I should probably let my baby fly and let it fully automatically run. The 
> date is the date I push it on - the actual run happens at around 2AM EST, 
> which has actually led to packages being run only on 0.4 the first time 
> because they weren't in METADATA when the 0.3 tests were run - a pretty 
> narrow window!
>
> On Wed, Jan 21, 2015 at 8:31 AM, Sean Garborg  > wrote:
>
>> I think we were over 500 in METADATA last time the pulse was updated. I 
>> just know because I registered a package on the 19th and it wasn't in the 
>> 1/20 status changes. Curious, would that be due to METADATA being updated 
>> manually, or the batch taking ~12 hours or so, or the batch needing to be 
>> restarted/resumed sometimes, or the date representing more of a post date 
>> than a run date?
>>
>>
>> On Tuesday, January 20, 2015 at 2:08:48 PM UTC-7, Luthaf wrote:
>>>
>> If you do accept unfinished and very alpha packages, I can submit one 
>> right now ...
>>
>> It won't be very usable, but I am wondering how finished a package 
>> should be when submitted to METADATA.
>>>
>>> Viral Shah wrote: 
>>>
>>> I wonder what the 500th package will be.
>>>
>>> -viral
>>>
>>> On Tuesday, January 20, 2015 at 9:02:45 PM UTC+5:30, Iain Dunning wrote:

 Just noticed on http://pkg.julialang.org/pulse.html that we are at 499 
 registered packages with at least one version tagged that are Julia 
 0.4-dev 
 compatible (493 on Julia 0.3).

 Thanks to all the package developers for their efforts in growing the 
 Julia package ecosystem!

  
>
>
> -- 
> *Iain Dunning*
> PhD Candidate 
>  / MIT 
> Operations Research Center 
> http://iaindunning.com  /  http://juliaopt.org
>  


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Iain Dunning
Only tagged packages are counted. Also, I have to manually push to the
website still, even though PackageEval hasn't had a problem in a long time.
I should probably let my baby fly and let it fully automatically run. The
date is the date I push it on - the actual run happens at around 2AM EST,
which has actually led to packages being run only on 0.4 the first time
because they weren't in METADATA when the 0.3 tests were run - a pretty
narrow window!

On Wed, Jan 21, 2015 at 8:31 AM, Sean Garborg 
wrote:

> I think we were over 500 in METADATA last time the pulse was updated. I
> just know because I registered a package on the 19th and it wasn't in the
> 1/20 status changes. Curious, would that be due to METADATA being updated
> manually, or the batch taking ~12 hours or so, or the batch needing to be
> restarted/resumed sometimes, or the date representing more of a post date
> than a run date?
>
>
> On Tuesday, January 20, 2015 at 2:08:48 PM UTC-7, Luthaf wrote:
>>
>> If you do accept unfinished and very alpha packages, I can submit one
>> right now ...
>>
>> It won't be very usable, but I am wondering how finished a package
>> should be when submitted to METADATA.
>>
>> Viral Shah wrote:
>>
>> I wonder what the 500th package will be.
>>
>> -viral
>>
>> On Tuesday, January 20, 2015 at 9:02:45 PM UTC+5:30, Iain Dunning wrote:
>>>
>>> Just noticed on http://pkg.julialang.org/pulse.html that we are at 499
>>> registered packages with at least one version tagged that are Julia 0.4-dev
>>> compatible (493 on Julia 0.3).
>>>
>>> Thanks to all the package developers for their efforts in growing the
>>> Julia package ecosystem!
>>>
>>>


-- 
*Iain Dunning*
PhD Candidate 
 / MIT Operations Research Center 
http://iaindunning.com  /  http://juliaopt.org


[julia-users] Re: Speed of Julia when a function is passed as an argument, and a different, but much faster coding.

2015-01-21 Thread Seth
Hi all,

I've read through all the recent threads on the inefficiencies of passing 
functions as arguments and the proposed workarounds with call() in 0.4. I'm 
not quite sure how to use call() to replicate passing a function, however, 
without creating a whole bunch of extra complexity. It seems to me that 
you'd need to create a separate *type* for each function - and then 
abstract any functions that get passed this new type, in which case perhaps 
FastAnonymous is more intuitive. For processes that allow end users to 
specify their own function to use, this becomes a bit problematic because 
instead of telling them to create a *function*, you need to step them 
through creating a *type* and then overloading call() on that type. It's 
more complex, to be sure.

Is FastAnonymous still the best way of efficiently passing functions in 
0.4, or will it be deprecated in favor of the type/call() approach? Have I 
missed a better option?
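A minimal sketch of the type-plus-call() pattern in question (all names here are hypothetical, and it is written in the functor syntax that call() overloading eventually became):

```julia
# One type per "function": instances of Square are callable objects,
# so they can be passed anywhere a function is expected.
struct Square end
(::Square)(x) = x * x

# A hypothetical higher-order routine: it accepts any callable, so it
# works with ordinary functions, functors, and FastAnonymous-style objects.
apply_all(f, xs) = [f(x) for x in xs]

apply_all(Square(), [1, 2, 3])   # -> [1, 4, 9]
```

The extra ceremony for end users is exactly the complexity described above: they must define a type and a call method instead of a plain function.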


[julia-users] Delete old versions of dependencies

2015-01-21 Thread Michele
Hi,
after 8 months of julia usage I realized that in the deps/ folder there are 
lots of old dependencies. For example I have:
$ ls deps/ | grep openblas
openblas-v0.2.10/
openblas-v0.2.10.rc1/
openblas-v0.2.10.rc1.tar.gz
openblas-v0.2.10.rc2/
openblas-v0.2.10.rc2.tar.gz
openblas-v0.2.10.tar.gz
openblas-v0.2.12/
openblas-v0.2.12.tar.gz
openblas-v0.2.13/
openblas-v0.2.13.tar.gz
openblas-v0.2.8/
openblas-v0.2.8.tar.gz
openblas-v0.2.9/
openblas-v0.2.9.tar.gz

Is there a command to clean them all (leaving the current versions of the 
dependency)?
Thanks,
Michele
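There is no built-in "clean old deps" command, so one option is a small script like the sketch below. Everything here is a hypothetical helper, not part of the Julia build system; verify which version your build actually links against before deleting anything.

```julia
# Delete deps/ entries (directories and tarballs) that start with `prefix`
# but do not match the version you want to keep. Returns what was removed.
function clean_old_deps(depsdir, prefix, keep)
    removed = String[]
    for entry in readdir(depsdir)
        if startswith(entry, prefix) && !startswith(entry, keep)
            rm(joinpath(depsdir, entry); recursive=true)
            push!(removed, entry)
        end
    end
    return removed
end

# e.g. clean_old_deps("deps", "openblas-", "openblas-v0.2.13")
```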



[julia-users] Re: 1st Julians meetup, Mexico City

2015-01-21 Thread Ismael VC
On Wednesday, January 14, 2015 at 13:11:35 (UTC-6), Ismael VC wrote:
>
> Hello everyone!
>
>
> If you live in Mexico City (D.F.) or the metropolitan area, we hope to 
> see you this Saturday at *KMMX*!
>
> Details: http://www.meetup.com/julialang-mx
>
>
> See you there! :D
>


Re: [julia-users] ANN: Docile & Lexicon update.

2015-01-21 Thread Ivan Ogasawara
great!
On 21/01/2015 14:22, "Michael Hatherly" 
wrote:

> Hi all,
>
> I’m pleased to announce the latest update to the Docile
>  and Lexicon
>  documentation packages.
>
> New features include:
>
>- Docile now supports plain strings
>,
>ie. without @doc, as docstrings. Compatibility with the Julia 0.4 doc
>system is still present.
>- Thanks to Tom Short, Lexicon can now output nicely formatted
>markdown. This can then be used to create static documentation using
>programs such as MkDocs . See the
>documentation from the following packages for examples of the results:
>Sims , Docile
>, and Lexicon
>.
>
> Any bugs or feature requests can be opened in either the Docile or Lexicon
> repos.
>
> Happy documenting!
>
> — Mike
> ​
>


Re: [julia-users] Usage of @inbounds

2015-01-21 Thread Erik Schnetter
On Jan 21, 2015, at 10:08 , Nils Gudat  wrote:
> 
> Thanks for the clarifications, although Erik's points lead me to a follow-up 
> question (two, actually): When you say "everything that follows", does this 
> extend to nested loops? I.e., do I need to write:
> 
> @inbounds for i = 1:1000
>   @inbounds for j = 1:1000
> a = x[i,j]
>   end
> end
> 
> or just
> 
> @inbounds for i = 1:1000
>   for j = 1:1000
> a = x[i,j]
>   end
> end
> 
> ?

Those two are equivalent. `@inbounds` switches Julia to a no-bounds-checking 
mode, and it switches back only when the end of the (original) `@inbounds` 
construct is reached.
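A minimal sketch of that equivalence: a single `@inbounds` on the outer loop also covers the inner loop's indexing.

```julia
# Sum a matrix with bounds checking disabled for the whole nested loop.
function sumall(x::AbstractMatrix)
    s = zero(eltype(x))
    @inbounds for i in axes(x, 1)
        for j in axes(x, 2)   # inherits the no-bounds-check mode
            s += x[i, j]
        end
    end
    return s
end

sumall(ones(3, 3))  # -> 9.0
```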

-erik

--
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/

My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from https://sks-keyservers.net.





Re: [julia-users] Re: workflow recommendation/tutorial

2015-01-21 Thread ggggg
I do basically the same thing Tim described, but I use "julia -L 
mymoduletests.jl" when I start julia.  -L executes the file, then drops to 
the REPL in the state left after execution. The slowest part of the process 
is usually loading PyPlot.

On Wednesday, January 21, 2015 at 5:29:04 AM UTC-7, Tim Holy wrote:
>
> Sure, want to add it to the FAQ? 
>
> --Tim 
>
> On Tuesday, January 20, 2015 08:27:06 PM Viral Shah wrote: 
> > Should we capture this in the documentation somewhere? This is generally 
> a 
> > useful set of hints for newcomers. 
> > 
> > -viral 
> > 
> > On Wednesday, January 21, 2015 at 5:38:29 AM UTC+5:30, Tim Holy wrote: 
> > > Agreed there are advantages in putting one's test script into a module. 
> > > There 
> > > is also at least one disadvantage: if you get an error, you don't 
> already 
> > > have 
> > > the "state" prepared to examine the variables you'll be passing as 
> > > arguments, 
> > > try the call with slightly different arguments, etc., from the REPL. 
> > > 
> > > But this is a small point, and either strategy can work fine. 
> > > 
> > > Best, 
> > > --Tim 
> > > 
> > > On Tuesday, January 20, 2015 04:01:14 PM Petr Krysl wrote: 
> > > > I think it would be worthwhile to point out  that enclosing  the 
> code of 
> > > > one's "scratch" file  in a module  has a number of advantages. 
> > > > 
> > > > 1. The  global workspace is not polluted  with too many variables 
> and 
> > > > conflicts are avoided. 
> > > > 2. The  variables  defined within that module  are accessible from 
> the 
> > > 
> > > REPL 
> > > 
> > > > as if the variables were defined at the global level. 
> > > > 
> > > > Example: 
> > > > 
> > > > module m1 
> > > > 
> > > > using JFinEALE 
> > > > 
> > > > t0 = time() 
> > > > 
> > > > rho=1.21*1e-9;# mass density 
> > > > c =345.0*1000;# millimeters per second 
> > > > bulk= c^2*rho; 
> > > > Lx=1900.0;# length of the box, millimeters 
> > > > Ly=800.0; # length of the box, millimeters 
> > > > 
> > > > fens,fes = Q4block(Lx,Ly,3,2); # Mesh 
> > > > show(fes.conn) 
> > > > 
> > > > end 
> > > > 
> > > > julia> include("./module_env.jl") 
> > > > Warning: replacing module m1 
> > > > [1 2 6 5 
> > > > 
> > > >  5 6 10 9 
> > > >  2 3 7 6 
> > > >  6 7 11 10 
> > > >  3 4 8 7 
> > > >  7 8 12 11] 
> > > > 
> > > > julia> m1. # I hit the tab key at this point, and I got this list of 
> > > > variables that I can access 
> > > > Lx   Lybulk  c eval  fens  fes   rho   t0 
> > > > 
> > > > I'm sure this is no news to the Julian  wizards, but to a newbie 
> like me 
> > > 
> > > it 
> > > 
> > > > is useful information.  (I haven't seen this in the documentation. 
> > > > Perhaps it is there,  but  if that is not the case I would be all 
> for 
> > > > adding it in.) 
> > > > 
> > > > Petr 
> > > > 
> > > > On Tuesday, January 20, 2015 at 1:45:02 PM UTC-8, Jameson wrote: 
> > > > > My workflow is very similar. I'll add that I'll make a throwaway 
> > > 
> > > module 
> > > 
> > > > > ("MyModuleTests") so that I can use "using" in the test file. 
> Doing 
> > > 
> > > this 
> > > 
> > > > > at 
> > > > > the REPL (defining a new module directly at the prompt) is also a 
> nice 
> > > 
> > > way 
> > > 
> > > > > of encapsulating a chunk of code to isolate it from existing 
> > > 
> > > definitions 
> > > 
> > > > > (including old using statements). It's also similar to how I'll 
> use a 
> > > > > large 
> > > > > begin/end block to group a large chunk of initialization code so 
> that 
> > > 
> > > I 
> > > 
> > > > > can 
> > > > > iterate and rerun it easily. 
> > > > > On Tue, Jan 20, 2015 at 4:34 PM Tim Holy  > > 
> > > > 
> > > 
> > > > > wrote: 
> > > > >> My workflow (REPL-based, Juno in particular is probably 
> different): 
> > > > >> - Open a file ("MyModule.jl") that will consist of a single 
> module, 
> > > 
> > > and 
> > > 
> > > > >> contains types & code 
> > > > >> - Open a 2nd file ("mymodule_tests.jl") that will be the tests 
> file 
> > > 
> > > for 
> > > 
> > > > >> the 
> > > > >> module. Inside of this file, say `import MyModule` rather than 
> `using 
> > > > >> MyModule`; you'll have to scope all calls, but that's a small 
> price 
> > > 
> > > to 
> > > 
> > > > >> pay for 
> > > > >> the ability to `reload("MyModule")` and re-run your tests. 
> > > > >> - Open a julia REPL 
> > > > >> - Start playing with ideas/code in the REPL. Paste the good ones 
> into 
> > > 
> > > the 
> > > 
> > > > >> files. And sometimes vice-versa, when it's easier to type 
> straight 
> > > 
> > > into 
> > > 
> > > > >> the 
> > > > >> files. 
> > > > >> - When enough code is in place, restart the repl. Cycle through 
> > > > >> 
> > > > >> reload("MyModule") 
> > > > >> include("mymodule_tests.jl") 
> > > > >>  
> > > > >> 
> > > > >> until things actually work. 
> > > > >> 
> > > > >> --Tim 
> > > > >> 
> > > > >> On Tuesday, January 20, 2015 01:09:01 PM Viral Shah wrote: 
> > > >> > This is pretty much the workflow a lot of people use, with a few julia restarts to deal with the issues a) and b).

[julia-users] ANN: Docile & Lexicon update.

2015-01-21 Thread Michael Hatherly


Hi all,

I’m pleased to announce the latest update to the Docile 
 and Lexicon 
 documentation packages.

New features include:

   - Docile now supports plain strings 
   , ie. 
   without @doc, as docstrings. Compatibility with the Julia 0.4 doc system 
   is still present. 
   - Thanks to Tom Short, Lexicon can now output nicely formatted markdown. 
   This can then be used to create static documentation using programs such as 
   MkDocs . See the documentation from the 
   following packages for examples of the results: Sims 
   , Docile 
   , and Lexicon 
   . 

Any bugs or feature requests can be opened in either the Docile or Lexicon 
repos.

Happy documenting!

— Mike


Re: [julia-users] convincing Julia that a function call (via a variable) has a stable return type

2015-01-21 Thread Keith Mason
I figured out my problem.  I was pre-creating and storing 
CFunction{Float64,Float64} objects in an Array{CFunction,1}.  This was 
causing f to be a CFunction rather than a CFunction{Float64,Float64}.  So 
the Julia compiler was a bit confused as to what my return type was going 
to be.

I've got it worked out, and it does work.  Thanks!
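The pitfall described above generalizes to any container: storing concretely-typed objects in an abstractly-typed array erases the type parameters, so downstream calls can't be inferred. A sketch using `Ref` as a stand-in for `CFunction` (the `CFunction` type itself is not reproduced here):

```julia
# Abstractly-typed container: eltype is the abstract Ref, so elements
# come back without their type parameters.
loose = Ref[]
push!(loose, Ref{Float64}(1.0))

# Concretely-typed container: eltype carries the full parameters,
# so the compiler knows what each element returns.
tight = Ref{Float64}[Ref{Float64}(1.0)]

eltype(loose)   # Ref
eltype(tight)   # Ref{Float64}
```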

On Tuesday, January 20, 2015 at 10:19:51 PM UTC-6, Jeff Bezanson wrote:
>
> That's surprising; I get the same speedup in 0.3 with 
>
> function test2() 
> f = CFunction{Float64,Float64}(foo) 
> for i=1:1 
> r = call(f, 1.0) 
> goo(r) 
> end 
> end 
>



Re: [julia-users] Usage of @inbounds

2015-01-21 Thread Nils Gudat
Thanks for the clarifications, although Erik's points lead me to a 
follow-up question (two, actually): When you say "everything that follows", 
does this extend to nested loops? I.e., do I need to write:

@inbounds for i = 1:1000
  @inbounds for j = 1:1000
a = x[i,j]
  end
end

or just

@inbounds for i = 1:1000
  for j = 1:1000
a = x[i,j]
  end
end

?


Re: [julia-users] Re: Almost at 500 packages!

2015-01-21 Thread Sean Garborg
I think we were over 500 in METADATA last time the pulse was updated. I 
just know because I registered a package on the 19th and it wasn't in the 
1/20 status changes. Curious, would that be due to METADATA being updated 
manually, or the batch taking ~12 hours or so, or the batch needing to be 
restarted/resumed sometimes, or the date representing more of a post date 
than a run date?

On Tuesday, January 20, 2015 at 2:08:48 PM UTC-7, Luthaf wrote:
>
> If you do accept unfinished and very alpha packages, I can submit one right 
> now ...
>
> It won't be very usable, but I am wondering how finished a package should 
> be when submitted to METADATA.
>
> Viral Shah wrote: 
>
> I wonder what the 500th package will be.
>
> -viral
>
> On Tuesday, January 20, 2015 at 9:02:45 PM UTC+5:30, Iain Dunning wrote:
>>
>> Just noticed on http://pkg.julialang.org/pulse.html that we are at 499 
>> registered packages with at least one version tagged that are Julia 0.4-dev 
>> compatible (493 on Julia 0.3).
>>
>> Thanks to all the package developers for their efforts in growing the 
>> Julia package ecosystem!
>>
>> 

[julia-users] Re: ANN: PGF/TikZ packages

2015-01-21 Thread David van Leeuwen
Hello, 

On Thursday, August 21, 2014 at 11:05:08 PM UTC+2, Mykel Kochenderfer wrote:
>
> There are three new Julia packages for interfacing with PGF/TikZ 
>  for making publication-quality graphics.
>
>1. TikzPictures.jl . Basic 
>interface to PGF/TikZ. Images can be saved as PDF, SVG, and TEX. If using 
>IJulia, it will output SVG images.
>2. PGFPlots.jl . Plotting tool 
>that uses the pgfplots  package (built 
>on top of TikZ).
>3. TikzGraphs.jl . Graph layout 
>package using algorithms built into PGF/TikZ 3.0+.
>
> Documentation is provided with each package. Installation of the 
> dependencies (e.g., pdf2svg and pgfplots) is still a bit manual, but 
> instructions are in the documentation.
>

This looks great, thanks.   

However, I run into problems with PGFPlots:

Error saving as SVG 
ERROR: The pdf generation failed.
 Be sure your latex libraries are fully up to date!
 You tried: `lualatex --enable-write18 --output-directory=. tikzpicture`


The LaTeX on my system (I think a TeX Live LaTeX on a Mac) is pdflatex. Does 
anyone know how I can configure PGFPlots to use it?


Thanks


---david


Re: [julia-users] Re: Peculiarly slow matrix multiplication

2015-01-21 Thread Tim Holy
A few more points:
- Simon's suggestion of checking the types is spot on; if one is a matrix of 
Any, for example, you're doomed to a slower code path. If the types are 
Array{Float64,2} and Array{Float64,1}, that's not the problem.
- It should be slightly faster if you change your parentheses, 
shrt*(diagm(expr)*shrt'). Jutho's scale suggestion would be even better.
- Try the profiler (see the docs). If you're running julia 0.4 and see it's 
spending a lot of time in generic_matmatmul, do a git pull and rebuild---it 
should fix the problem.
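For reference, the structured-multiplication advice can be sketched in modern Julia, where the 0.3-era `scale(shrt, expr)` corresponds to multiplying by a `Diagonal` (sizes shrunk here as a stand-in for the 12,000x600 case):

```julia
using LinearAlgebra

shrt = rand(120, 60)
expr = rand(60)

dense = shrt * diagm(0 => expr) * shrt'   # materializes a dense diagonal
lazy  = shrt * Diagonal(expr) * shrt'     # structured: no dense diagonal built

dense ≈ lazy   # -> true, up to floating-point error
```

`Diagonal` keeps the O(n*m) cost of scaling columns instead of paying for a full dense matrix-matrix product against `diagm(expr)`.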

--Tim

On Wednesday, January 21, 2015 01:05:06 AM Simon Byrne wrote:
> As Jutho said, this shouldn't happen, but is difficult to diagnose without
> further information. What are the types of shrt and expr? (these can be
> found using the typeof function).
> 
> Simon
> 
> On Wednesday, 21 January 2015 07:59:34 UTC, Jutho wrote:
> > Not sure what is causing the slowness, but you could avoid creating a
> > diagonal matrix and then doing the matrix multiplication with diagm(expr)
> > which will be treated as a full matrix.
> > Instead of shrt*diagm(expr) which is interpreted as the multiplication of
> > two full matrices, try scale(shrt,expr) .
> > 
> > Op woensdag 21 januari 2015 07:56:19 UTC+1 schreef Micah McClimans:
> >> I'm running into trouble with a line of matrix multiplication going very
> >> slowly in one of my programs. The line I'm looking into is:
> >> objectivematrix=shrt*diagm(expr)*(shrt')
> >> where shrt is 12,000x600 and expr is 600 long. This line takes several
> >> HOURS to run, on a computer that can run
> >> 
> >> k=rand(12000,12000)
> >> k3=k*k*k
> >> 
> >> in under a minute. I've tried devectorizing the line into the following
> >> loop (shrt is block-diagonal with each block ONevecs and -ONevecs
> >> respectively, so I split the loop in half)
> >> 
> >> objectivematrix=zeros(2*size(ONevecs,1),2*size(ONevecs,1))
> >> for i in 1:size(ONevecs,1)
> >> 
> >> print(i)
> >> for j in 1:size(ONevecs,1)
> >> 
> >> for k in 1:size(ONevecs,2)
> >> objectivematrix[i,j]+=ONevecs[i,k]*ONevecs[j,k]*expr[k]
> >> end
> >> 
> >> end
> >> 
> >> end
> >> for i in 1:size(ONevecs,1)
> >> 
> >> print(i)
> >> for j in 1:size(ONevecs,1)
> >> 
> >> for k in 1:size(ONevecs,2)
> >> 
> >> objectivematrix[i+size(ONevecs,1),j+size(ONevecs,1)]+=ONevecs[i,k]*ONevec
> >> s[j,k]*expr[k+size(ONevecs,2)]>> 
> >> end
> >> 
> >> end
> >> 
> >> end
> >> 
> >> and this gives a printout every couple of seconds - it's faster than the
> >> matrix multiplication version, but not enough. Why is this taking so
> >> long?
> >> This should not be a hard operation.



Re: [julia-users] Re: workflow recommendation/tutorial

2015-01-21 Thread Tim Holy
Sure, want to add it to the FAQ?

--Tim

On Tuesday, January 20, 2015 08:27:06 PM Viral Shah wrote:
> Should we capture this in the documentation somewhere? This is generally a
> useful set of hints for newcomers.
> 
> -viral
> 
> On Wednesday, January 21, 2015 at 5:38:29 AM UTC+5:30, Tim Holy wrote:
> > Agreed there are advantages in putting one's test script into a module.
> > There
> > is also at least one disadvantage: if you get an error, you don't already
> > have
> > the "state" prepared to examine the variables you'll be passing as
> > arguments,
> > try the call with slightly different arguments, etc., from the REPL.
> > 
> > But this is a small point, and either strategy can work fine.
> > 
> > Best,
> > --Tim
> > 
> > On Tuesday, January 20, 2015 04:01:14 PM Petr Krysl wrote:
> > > I think it would be worthwhile to point out  that enclosing  the code of
> > > one's "scratch" file  in a module  has a number of advantages.
> > > 
> > > 1. The  global workspace is not polluted  with too many variables and
> > > conflicts are avoided.
> > > 2. The  variables  defined within that module  are accessible from the
> > 
> > REPL
> > 
> > > as if the variables were defined at the global level.
> > > 
> > > Example:
> > > 
> > > module m1
> > > 
> > > using JFinEALE
> > > 
> > > t0 = time()
> > > 
> > > rho=1.21*1e-9;# mass density
> > > c =345.0*1000;# millimeters per second
> > > bulk= c^2*rho;
> > > Lx=1900.0;# length of the box, millimeters
> > > Ly=800.0; # length of the box, millimeters
> > > 
> > > fens,fes = Q4block(Lx,Ly,3,2); # Mesh
> > > show(fes.conn)
> > > 
> > > end
> > > 
> > > julia> include("./module_env.jl")
> > > Warning: replacing module m1
> > > [1 2 6 5
> > > 
> > >  5 6 10 9
> > >  2 3 7 6
> > >  6 7 11 10
> > >  3 4 8 7
> > >  7 8 12 11]
> > > 
> > > julia> m1. # I hit the tab key at this point, and I got this list of
> > > variables that I can access
> > > Lx   Lybulk  c eval  fens  fes   rho   t0
> > > 
> > > I'm sure this is no news to the Julian  wizards, but to a newbie like me
> > 
> > it
> > 
> > > is useful information.  (I haven't seen this in the documentation.
> > > Perhaps it is there,  but  if that is not the case I would be all for
> > > adding it in.)
> > > 
> > > Petr
> > > 
> > > On Tuesday, January 20, 2015 at 1:45:02 PM UTC-8, Jameson wrote:
> > > > My workflow is very similar. I'll add that I'll make a throwaway
> > 
> > module
> > 
> > > > ("MyModuleTests") so that I can use "using" in the test file. Doing
> > 
> > this
> > 
> > > > at
> > > > the REPL (defining a new module directly at the prompt) is also a nice
> > 
> > way
> > 
> > > > of encapsulating a chunk of code to isolate it from existing
> > 
> > definitions
> > 
> > > > (including old using statements). It's also similar to how I'll use a
> > > > large
> > > > begin/end block to group a large chunk of initialization code so that
> > 
> > I
> > 
> > > > can
> > > > iterate and rerun it easily.
> > > > On Tue, Jan 20, 2015 at 4:34 PM Tim Holy  > 
> > >
> > 
> > > > wrote:
> > > >> My workflow (REPL-based, Juno in particular is probably different):
> > > >> - Open a file ("MyModule.jl") that will consist of a single module,
> > 
> > and
> > 
> > > >> contains types & code
> > > >> - Open a 2nd file ("mymodule_tests.jl") that will be the tests file
> > 
> > for
> > 
> > > >> the
> > > >> module. Inside of this file, say `import MyModule` rather than `using
> > > >> MyModule`; you'll have to scope all calls, but that's a small price
> > 
> > to
> > 
> > > >> pay for
> > > >> the ability to `reload("MyModule")` and re-run your tests.
> > > >> - Open a julia REPL
> > > >> - Start playing with ideas/code in the REPL. Paste the good ones into
> > 
> > the
> > 
> > > >> files. And sometimes vice-versa, when it's easier to type straight
> > 
> > into
> > 
> > > >> the
> > > >> files.
> > > >> - When enough code is in place, restart the repl. Cycle through
> > > >> 
> > > >> reload("MyModule")
> > > >> include("mymodule_tests.jl")
> > > >> 
> > > >> 
> > > >> until things actually work.
> > > >> 
> > > >> --Tim
> > > >> 
> > > >> On Tuesday, January 20, 2015 01:09:01 PM Viral Shah wrote:
> > > >> > This is pretty much the workflow a lot of people use, with a few
> > 
> > julia
> > 
> > > >> > restarts to deal with the issues a) and b). I often maintain a
> > 
> > script
> > 
> > > >> > as
> > > >> > part of my iterative/exploratory work, so that I can easily get to
> > 
> > the
> > 
> > > >> > desired state when I have to restart.
> > > >> > 
> > > >> > -viral
> > > >> > 
> > > >> > On Tuesday, January 20, 2015 at 4:15:13 PM UTC+5:30, Tamas Papp
> > 
> > wrote:
> > > >> > > Hi,
> > > >> > > 
> > > >> > > I am wondering what the best workflow is for
> > 
> > iterative/exploratory
> > 
> > > >> > > programming (as opposed to, say, library development).  I feel
> > 
> > that
> > 
> > > >> > > my
> > > >> > > questions below all have solutions, it's just that I am not
> > > >> 
> > > >> experienced

[julia-users] Re: Peculiarly slow matrix multiplication

2015-01-21 Thread Simon Byrne
As Jutho said, this shouldn't happen, but is difficult to diagnose without 
further information. What are the types of shrt and expr? (these can be 
found using the typeof function).

Simon

On Wednesday, 21 January 2015 07:59:34 UTC, Jutho wrote:
>
> Not sure what is causing the slowness, but you could avoid creating a 
> diagonal matrix and then doing the matrix multiplication with diagm(expr) 
> which will be treated as a full matrix. 
> Instead of shrt*diagm(expr) which is interpreted as the multiplication of 
> two full matrices, try scale(shrt,expr) .
>
>
> Op woensdag 21 januari 2015 07:56:19 UTC+1 schreef Micah McClimans:
>>
>> I'm running into trouble with a line of matrix multiplication going very 
>> slowly in one of my programs. The line I'm looking into is:
>> objectivematrix=shrt*diagm(expr)*(shrt')
>> where shrt is 12,000x600 and expr is 600 long. This line takes several 
>> HOURS to run, on a computer that can run
>>
>> k=rand(12000,12000)
>> k3=k*k*k
>>
>> in under a minute. I've tried devectorizing the line into the following 
>> loop (shrt is block-diagonal with each block ONevecs and -ONevecs 
>> respectively, so I split the loop in half)
>>
>> objectivematrix=zeros(2*size(ONevecs,1),2*size(ONevecs,1))
>> for i in 1:size(ONevecs,1)
>> print(i)
>> for j in 1:size(ONevecs,1)
>> for k in 1:size(ONevecs,2)
>> objectivematrix[i,j]+=ONevecs[i,k]*ONevecs[j,k]*expr[k]
>> end
>> end
>> end
>> for i in 1:size(ONevecs,1)
>> print(i)
>> for j in 1:size(ONevecs,1)
>> for k in 1:size(ONevecs,2)
>> 
>> objectivematrix[i+size(ONevecs,1),j+size(ONevecs,1)]+=ONevecs[i,k]*ONevecs[j,k]*expr[k+size(ONevecs,2)]
>> end
>> end
>>
>> end
>>
>> and this gives a printout every couple of seconds - it's faster than the 
>> matrix multiplication version, but not enough. Why is this taking so long? 
>> This should not be a hard operation.
>>
>

[julia-users] Re: Moving a point to constraint

2015-01-21 Thread Tomas Lycken


Wow, I’m sorry this has been lying around for so long without a single 
answer. I’ll give it a shot, even though you’ve hopefully managed to get 
past these problems already.

To start with, are you using some kind of point type, or just regular Julia 
arrays, to represent the point?

If you’re using regular Julia arrays, you could probably use built-in 
functions like dot and norm to accomplish what you’re trying to do. For 
example, to project a vector a onto a vector b, you can use the formula a_2 
= dot(a,b) / norm(b)^2 * b (it, and many others, are available on Wikipedia 
).

If you’re using your own (or someone elses) point type, you could use 
exactly the same approach, if you first extend the dot and norm functions 
with methods for your point type:

immutable Point2D{T<:Real}
x::T
y::T
end

import Base: dot, norm

dot{T<:Real}(a::Point2D{T}, b::Point2D{T}) = a.x * b.x + a.y * b.y
norm{T<:Real}(a::Point2D{T}) = sqrt(dot(a,a))

If one or more of these concepts are utterly unfamiliar to you, I’d 
recommend giving the manual a read-through 
 - I haven’t used anything that 
isn’t extensively explained either there or in the Wikipedia article I 
linked to above =)
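The projection formula quoted above can be sketched with plain arrays (in Julia 0.7+ `dot` and `norm` moved into the LinearAlgebra standard library):

```julia
using LinearAlgebra

a = [3.0, 4.0]
b = [2.0, 0.0]

# Project a onto b: a2 = dot(a,b) / norm(b)^2 * b
a2 = dot(a, b) / norm(b)^2 * b
# a2 == [3.0, 0.0]
```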

Happy hacking!

// Tomas

On Wednesday, January 14, 2015 at 9:05:18 AM UTC+1, tmjohari wrote:

> Hi,
> I am very new to Julia Programming.
>
> Is there a package/easy way to project a point onto a constraint 
> line/plane,  or 
> at least calculate the distance(maybe euclidean distance) between the 
> point and constraint plane/line and move the point later on by adding the 
> distance(later changed to vector) to the point?
>


[julia-users] Re: Peculiarly slow matrix multiplication

2015-01-21 Thread Jutho
Not sure what is causing the slowness, but you could avoid creating a 
diagonal matrix and then doing the matrix multiplication with diagm(expr) 
which will be treated as a full matrix. 
Instead of shrt*diagm(expr) which is interpreted as the multiplication of 
two full matrices, try scale(shrt,expr) .


Op woensdag 21 januari 2015 07:56:19 UTC+1 schreef Micah McClimans:
>
> I'm running into trouble with a line of matrix multiplication going very 
> slowly in one of my programs. The line I'm looking into is:
> objectivematrix=shrt*diagm(expr)*(shrt')
> where shrt is 12,000x600 and expr is 600 long. This line takes several 
> HOURS to run, on a computer that can run
>
> k=rand(12000,12000)
> k3=k*k*k
>
> in under a minute. I've tried devectorizing the line into the following 
> loop (shrt is block-diagonal with each block ONevecs and -ONevecs 
> respectively, so I split the loop in half)
>
> objectivematrix=zeros(2*size(ONevecs,1),2*size(ONevecs,1))
> for i in 1:size(ONevecs,1)
> print(i)
> for j in 1:size(ONevecs,1)
> for k in 1:size(ONevecs,2)
> objectivematrix[i,j]+=ONevecs[i,k]*ONevecs[j,k]*expr[k]
> end
> end
> end
> for i in 1:size(ONevecs,1)
> print(i)
> for j in 1:size(ONevecs,1)
> for k in 1:size(ONevecs,2)
> 
> objectivematrix[i+size(ONevecs,1),j+size(ONevecs,1)]+=ONevecs[i,k]*ONevecs[j,k]*expr[k+size(ONevecs,2)]
> end
> end
>
> end
>
> and this gives a printout every couple of seconds - it's faster than the 
> matrix multiplication version, but not enough. Why is this taking so long? 
> This should not be a hard operation.
>