Re: [julia-users] How to tell if Val{T} is using a fast route

2016-11-16 Thread Chris Rackauckas
The top-level scope of a module is still global scope, just like the interactive 
REPL: the plain x = 1 below is a non-const global, so ff(Val{x}) still takes the 
slow dynamic route, while the const y version can be resolved at compile time.
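
A quick way to tell which route you're on is @code_warntype: wrap the call in a 
function and look at the inferred return type. A minimal sketch using the ff 
from the quoted module:

ff(::Type{Val{1}}) = 1
g() = ff(Val{1})   # literal: the method is selected at compile time
@code_warntype g() # return type infers as Int: the fast route
x = 1              # non-const global
h() = ff(Val{x})   # x's value is unknown to the compiler
@code_warntype h() # return type shows Any: slow dynamic dispatch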

On Wednesday, November 16, 2016 at 7:59:04 AM UTC-8, FANG Colin wrote:
>
> Typo, should be
>
> module ...
>
> ff(x::Type{Val{1}}) = 1
>
> x = 1
> a = ff(Val{x})
>
> const y = 1
> a = ff(Val{y})
>
> end
>
>>
>>

[julia-users] Re: Use of special characters in labels of plots.jl

2016-11-16 Thread Chris Rackauckas
Use LaTeXStrings.jl's L_str macro: 

L"$10^{10}"

I know this works with the PyPlot backend, and I think GR.
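
A minimal sketch, assuming the PyPlot backend is installed:

using Plots, LaTeXStrings
pyplot()
plot(rand(10), ylabel = L"$10^{10}$", title = L"e^{i\pi} + 1 = 0")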


On Wednesday, November 16, 2016 at 4:37:16 AM UTC-8, Ferran Mazzanti wrote:
>
> ...and please notice that I'm not looking for explicit LaTeX suppport. 
> Just wanted to know if there is any way to add special characters and 
> subscripts etc.
> Thanks again,
> Ferran.



[julia-users] Re: Sharing experience on packages

2016-11-15 Thread Chris Rackauckas
For now, stars are the best bad measurement we have.

On Tuesday, November 15, 2016 at 8:05:16 AM UTC-8, Jérôme Collet wrote:
>
> Hi all,
>
>
> I am new to Julia, I used to use R. And using R packages, the main 
> difficulty for me is the choice of a package for a given task. Most of the 
> time, there are many packages solving the same problem, so we have to 
> choose.
>
> - A first possibility could be the TaskViews, but it is easy to see that 
> around a third of all packages are listed in a TaskView, so being listed 
> does not say anything about quality.
> - Thanks to RStudio, it is possible to know how many times a given package 
> was downloaded. That is an indication of package quality, but it is 
> difficult to obtain and not very reliable.
> - I heard that in Matlab, it is easy to learn about the experience with a 
> package, or even a function of a package, compared to other similar 
> functions in other packages.
>
> So, are there any plans to collect, store and share the experience on 
> packages? 
>


[julia-users] Re: PSA: Documenter.jl deprecations

2016-11-03 Thread Chris Rackauckas
Once again, thank you very much for this change! It's highly appreciated by 
those of us who are unfamiliar with gem, Ruby, Travis, and the whole install 
process. As you know, I gave this new access token setup a go and had it 
working in minutes with no extra software to install. Just generate some 
keys, copy/paste them into Travis/GitHub, and done. This really helps with 
usability!

On Wednesday, November 2, 2016 at 12:55:51 PM UTC-7, Michael Hatherly wrote:
>
>
> Version 0.7 of Documenter has been tagged. Please note that this release 
> *deprecates* the current authentication methods used to deploy generated 
> documentation from Travis CI to GitHub Pages, namely GitHub personal access 
> tokens and SSH keys generated via the travis Ruby gem. The new approach is 
> described in the docs.
>  
> The deprecation warnings will remain in place until version 0.9 of 
> Documenter is released.
>
> Please direct any issues you happen to encounter with these changes while 
> upgrading to the new authentication method to the issue tracker.
>
> — Mike
>
>

[julia-users] Re: ANN: Highlights.jl

2016-11-03 Thread Chris Rackauckas
Nice work!

On Wednesday, November 2, 2016 at 1:14:06 PM UTC-7, Michael Hatherly wrote:
>
> I’m pleased to announce the initial 0.1 release of Highlights.jl — a Julia 
> package for highlighting source code, similar to the well-known Python 
> package Pygments.
>
> The documentation for the package can be found here. Currently there are 
> only a handful of supported languages and colour schemes, so if your 
> favourite language (aside from Julia) is missing then please feel free to 
> add it to the requests list or open a PR.
>
> Any bugs or feature requests can be made over on the issue tracker.
>
> — Mike
>


Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-02 Thread Chris Rackauckas
Yes, this most likely won't help for GPU arrays because you likely don't 
want to be looping through elements serially: you want to call a vectorized 
GPU function which will do the computation in parallel on the GPU. 
ArrayFire's mathematical operations are already overloaded to do this, but 
I don't think they can fuse.

On Tuesday, November 1, 2016 at 8:06:12 PM UTC-7, Sheehan Olver wrote:
>
> Ah thanks!
>
> Though I guess if I want the same code to work also on a GPU array then 
> this won't help?
>
> Sent from my iPhone
>
> On 2 Nov. 2016, at 13:51, Chris Rackauckas  > wrote:
>
> It's the other way around. .* won't fuse because it's still an operator. 
> .= will. If you want .* to fuse, you can instead do:
>
> A .= *.(A,B)
>
> since this invokes the broadcast on *, instead of invoking .*. But that's 
> just a temporary thing.
>
> On Tuesday, November 1, 2016 at 7:27:40 PM UTC-7, Tom Breloff wrote:
>>
>> As I understand it, the .* will fuse, but the .= will not (until 0.6?), 
>> so A will be rebound to a newly allocated array.  If my understanding is 
>> wrong I'd love to know.  There have been many times in the last few days 
>> that I would have used it...
>>
>> On Tue, Nov 1, 2016 at 10:06 PM, Sheehan Olver  
>> wrote:
>>
>>> Ah, good point.  Though I guess that won't work til 0.6 since .* won't 
>>> auto-fuse yet? 
>>>
>>> Sent from my iPhone
>>>
>>> On 2 Nov. 2016, at 12:55, Chris Rackauckas  wrote:
>>>
>>> This is pretty much obsolete by the . fusing changes:
>>>
>>> A .= A.*B
>>>
>>> should be an in-place update of A scaled by B (Tomas' solution).
>>>
>>> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>>>>
>>>> Should this be added to a package?  I imagine if the arrays are on the 
>>>> GPU (AFArrays) then the operation could be much faster, and having a 
>>>> consistent name would be helpful.
>>>>
>>>>
>>>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux 
>>>> wrote:
>>>>>
>>>>> Dear all,
>>>>>
>>>>> I'm looking for the fastest way to do element-wise vector 
>>>>> multiplication in Julia. The best I could have done is the following 
>>>>> implementation which still runs 1.5x slower than the dot product. I 
>>>>> assume 
>>>>> the dot product would include such an operation ... and then do a 
>>>>> cumulative sum over the element-wise product.
>>>>>
>>>>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
>>>>> does not. So my question is :
>>>>>
>>>>> 1) is there any chance I can do vector element-wise multiplication 
>>>>> faster than the actual dot product?
>>>>> 2) why is the built-in element-wise multiplication operator (.*) so 
>>>>> much slower than my own implementation for such a basic linear algebra 
>>>>> operation (in pure Julia)?
>>>>>
>>>>> Thank you,
>>>>> Lionel
>>>>>
>>>>> Best custom implementation :
>>>>>
>>>>> function xpy!{T<:Number}(A::Vector{T}, B::Vector{T})
>>>>>     n = size(A)[1]
>>>>>     if n == size(B)[1]
>>>>>         for i = 1:n
>>>>>             @inbounds A[i] *= B[i]
>>>>>         end
>>>>>     end
>>>>>     return A
>>>>> end
>>>>>
>>>>> Benchmark results (JuliaBox, A = randn(30)):
>>>>>
>>>>> function                            CPU (s)    GC (%)  ALLOCATION (bytes)  CPU (x)
>>>>> dot(A,B)                            1.58e-04   0.00    16                  1.0
>>>>> xpy!(A,B)                           2.31e-04   0.00    80                  1.5
>>>>> NumericExtensions.multiply!(P,Q)    3.60e-04   0.00    80                  2.3
>>>>> xpy!(A,B) - no @inbounds check      4.36e-04   0.00    80                  2.8
>>>>> P.*Q                                2.52e-03   50.36   2400512             16.0
>>>>>
>>>>>
>>

Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Chris Rackauckas
It's the other way around. .* won't fuse because it's still an operator. .= 
will. If you want .* to fuse, you can instead do:

A .= *.(A,B)

since this invokes the broadcast on *, instead of invoking .*. But that's 
just a temporary thing.
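
One way to check what's fusing on your own version is to warm up and compare 
allocations (a rough sketch; exact numbers depend on the Julia version):

A = rand(10^6); B = rand(10^6)
g1!(A,B) = (A .= *.(A,B)) # broadcast call: fuses with .=, updates in place
g2!(A,B) = (A .= A .* B)  # .* builds a temporary array first
g1!(A,B); g2!(A,B)        # run once so both are compiled
@allocated g1!(A,B)       # near zero if the fusion happened
@allocated g2!(A,B)       # ~8 MB: the A .* B temporary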

On Tuesday, November 1, 2016 at 7:27:40 PM UTC-7, Tom Breloff wrote:
>
> As I understand it, the .* will fuse, but the .= will not (until 0.6?), so 
> A will be rebound to a newly allocated array.  If my understanding is wrong 
> I'd love to know.  There have been many times in the last few days that I 
> would have used it...
>
> On Tue, Nov 1, 2016 at 10:06 PM, Sheehan Olver  > wrote:
>
>> Ah, good point.  Though I guess that won't work til 0.6 since .* won't 
>> auto-fuse yet? 
>>
>> Sent from my iPhone
>>
>> On 2 Nov. 2016, at 12:55, Chris Rackauckas > > wrote:
>>
>> This is pretty much obsolete by the . fusing changes:
>>
>> A .= A.*B
>>
>> should be an in-place update of A scaled by B (Tomas' solution).
>>
>> On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>>>
>>> Should this be added to a package?  I imagine if the arrays are on the 
>>> GPU (AFArrays) then the operation could be much faster, and having a 
>>> consistent name would be helpful.
>>>
>>>
>>> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux 
>>> wrote:
>>>>
>>>> Dear all,
>>>>
>>>> I'm looking for the fastest way to do element-wise vector 
>>>> multiplication in Julia. The best I could have done is the following 
>>>> implementation which still runs 1.5x slower than the dot product. I assume 
>>>> the dot product would include such an operation ... and then do a 
>>>> cumulative sum over the element-wise product.
>>>>
>>>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS 
>>>> does not. So my question is :
>>>>
>>>> 1) is there any chance I can do vector element-wise multiplication 
>>>> faster than the actual dot product?
>>>> 2) why is the built-in element-wise multiplication operator (.*) so much 
>>>> slower than my own implementation for such a basic linear algebra 
>>>> operation (in pure Julia)?
>>>>
>>>> Thank you,
>>>> Lionel
>>>>
>>>> Best custom implementation :
>>>>
>>>> function xpy!{T<:Number}(A::Vector{T}, B::Vector{T})
>>>>     n = size(A)[1]
>>>>     if n == size(B)[1]
>>>>         for i = 1:n
>>>>             @inbounds A[i] *= B[i]
>>>>         end
>>>>     end
>>>>     return A
>>>> end
>>>>
>>>> Benchmark results (JuliaBox, A = randn(30)):
>>>>
>>>> function                            CPU (s)    GC (%)  ALLOCATION (bytes)  CPU (x)
>>>> dot(A,B)                            1.58e-04   0.00    16                  1.0
>>>> xpy!(A,B)                           2.31e-04   0.00    80                  1.5
>>>> NumericExtensions.multiply!(P,Q)    3.60e-04   0.00    80                  2.3
>>>> xpy!(A,B) - no @inbounds check      4.36e-04   0.00    80                  2.8
>>>> P.*Q                                2.52e-03   50.36   2400512             16.0
>>>>
>>>>
>

[julia-users] Re: Fast vector element-wise multiplication

2016-11-01 Thread Chris Rackauckas
This is made pretty much obsolete by the . fusing changes:

A .= A.*B

should be an in-place update of A scaled by B (Tomas' solution).

On Tuesday, November 1, 2016 at 4:39:15 PM UTC-7, Sheehan Olver wrote:
>
> Should this be added to a package?  I imagine if the arrays are on the GPU 
> (AFArrays) then the operation could be much faster, and having a consistent 
> name would be helpful.
>
>
> On Wednesday, October 7, 2015 at 1:28:29 AM UTC+11, Lionel du Peloux wrote:
>>
>> Dear all,
>>
>> I'm looking for the fastest way to do element-wise vector multiplication 
>> in Julia. The best I could have done is the following implementation which 
>> still runs 1.5x slower than the dot product. I assume the dot product would 
>> include such an operation ... and then do a cumulative sum over the 
>> element-wise product.
>>
>> The MKL lib includes such an operation (v?Mul) but it seems OpenBLAS does 
>> not. So my question is :
>>
>> 1) is there any chance I can do vector element-wise multiplication faster 
>> than the actual dot product?
>> 2) why is the built-in element-wise multiplication operator (.*) so much 
>> slower than my own implementation for such a basic linear algebra 
>> operation (in pure Julia)?
>>
>> Thank you,
>> Lionel
>>
>> Best custom implementation :
>>
>> function xpy!{T<:Number}(A::Vector{T}, B::Vector{T})
>>     n = size(A)[1]
>>     if n == size(B)[1]
>>         for i = 1:n
>>             @inbounds A[i] *= B[i]
>>         end
>>     end
>>     return A
>> end
>>
>> Benchmark results (JuliaBox, A = randn(30)):
>>
>> function                            CPU (s)    GC (%)  ALLOCATION (bytes)  CPU (x)
>> dot(A,B)                            1.58e-04   0.00    16                  1.0
>> xpy!(A,B)                           2.31e-04   0.00    80                  1.5
>> NumericExtensions.multiply!(P,Q)    3.60e-04   0.00    80                  2.3
>> xpy!(A,B) - no @inbounds check      4.36e-04   0.00    80                  2.8
>> P.*Q                                2.52e-03   50.36   2400512             16.0
>> 
>>
>>

Re: [julia-users] inconsistent 'unique' in Atom

2016-10-31 Thread Chris Rackauckas
Just click on the number and it will expand it.

On Sunday, October 30, 2016 at 7:28:47 PM UTC-7, missp...@gmail.com wrote:
>
> Hi Yichao,
>
> thanks a lot, 
> it does display it correctly if I use dump, but it's annoying that Atom is 
> inconsistent while displaying the results
>
> thanks a lot,
>
> On Sunday, October 30, 2016 at 7:14:07 PM UTC-7, Yichao Yu wrote:
>>
>> On Sun, Oct 30, 2016 at 10:05 PM,   wrote: 
>> > Hi folks, 
>> > 
>> > I've noticed that in v0.5 the expression 
>> > 
>> > 
>> > unique([122 122.5 10 10.3]) 
>> > 
>> > 
>> > gives as result the following vector: 
>> > 
>> > 122 123 10 10.3 
>> > 
>> > 
>> > Any advice? Is there any maximum number of characters displayed in the 
>> > console, or something similar? 
>>
>> I'm not sure how Atom display works but maybe you can try 
>> `dump(unique([122 122.5 10 10.3]))`. Also what if you just print `[122 
>> 122.5 10 10.3]` since the unique is supposed to be a no-op here? 
>>
>> > 
>> > thanks, 
>>
>

[julia-users] Re: Changing label fontsize in Plots.jl

2016-10-31 Thread Chris Rackauckas
Wonderful to see the learning process in action haha. For future reference, 
to see which commands are available in which packages, you can check the 
supported attributes page of the Plots.jl documentation: 
https://juliaplots.github.io/supported/
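
For example, with the PyPlot backend this is a minimal working sketch:

using Plots
pyplot()
plot(rand(10), xlabel = "x", ylabel = "y",
     tickfont = font(18), guidefont = font(18))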


On Monday, October 31, 2016 at 3:05:30 PM UTC-7, Nitin Arora wrote:
>
> I opened an issue at Plots.jl github as this command is working with 
> pyplots but not pgfplots backend.
>
> On Monday, October 31, 2016 at 2:58:29 PM UTC-7, Nitin Arora wrote:
>>
>> I did try using the "guidefont" and "tickfont" commands but they seem to 
>> make no difference when implemented as below:
>>
>> scatter(Sol_U,:tof,:vinf,xlabel="Time (years)", ylabel="Arrival V(km/s)", 
>> marker = (:c, 2,stroke(0)),xlims = (6,12),tickfont=font(28),guidefont=
>> font(28))
>>
>> where Sol_U is a dataframe.
>>
>> I am using plots 0.9.4+ (master) and my Julia version is 0.5.1-pre+2
>>
>> thanks again !
>>
>> On Monday, October 31, 2016 at 2:48:40 PM UTC-7, Nitin Arora wrote:
>>>
>>> Hi,
>>>  
>>> Does anyone know how to change fontsize for xlabel, ylabel and axis 
>>> tick-labels in Plots.jl ? I am using the pgfplots backend.
>>>
>>> I dont see any examples on the Plots.jl demonstrating that. 
>>>
>>> Documentation I looked up: https://juliaplots.github.io/
>>>
>>> thanks,
>>> Nitin
>>>
>>

[julia-users] Re: What's julia's answer to tapply or accumarray?

2016-10-31 Thread Chris Rackauckas
For reference, I've been gathering these kinds of "vectorized" functions in, 
well, VectorizedRoutines.jl. I am just trying to get an implementation of 
all of those vectorized routines you know and love since, in some cases, 
they lead to slick code. You can find an accumarray there. Feel free to add 
a PR with more.
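
If you just want the idea, an accumarray is a short loop. A hand-rolled 
sketch (my_accumarray is a made-up name, not the package's API):

function my_accumarray(subs::Vector{Int}, vals::Vector)
    out = zeros(eltype(vals), maximum(subs))
    for (i, s) in enumerate(subs)
        out[s] += vals[i] # accumulate every val whose subscript is s
    end
    return out
end

my_accumarray([1, 2, 4, 2, 4], [10.0, 20.0, 30.0, 40.0, 50.0])
# -> [10.0, 60.0, 0.0, 80.0]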

On Monday, October 31, 2016 at 12:38:06 PM UTC-7, phav...@gene.com wrote:
>
> RLEVectors.jl  now has a 
> tapply function where an RLE is used as the factor.
>
>
> On Thursday, March 20, 2014 at 10:46:33 AM UTC-7, James Johndrow wrote:
>>
>> I cannot seem to find a built-in julia function that performs the 
>> function of tapply in R or accumarray in matlab. Anyone know of one?
>>
>

[julia-users] Re: Cost of @view and reshape

2016-10-30 Thread Chris Rackauckas
reshape makes a view (it shares the parent's data rather than copying), and 
views are cheap. Don't worry about this.
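
A quick sketch showing that nothing gets copied (the reshaped view aliases 
the parent):

x = collect(1.0:10.0)
y = reshape(view(x, 1:6), 3, 2)
y[1, 1] = 100.0
x[1] == 100.0 # true: y and x share the same data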

BTW, I would love to add a collocation method to JuliaDiffEq. Would you 
consider making this a package?

On Sunday, October 30, 2016 at 3:52:37 AM UTC-7, Alexey Cherkaev wrote:
>
> I'm writing RadauIIA (for now, fixed order 5 with 3 points) method for ODE 
> BVP (basically, collocation method). In the process, I construct an overall 
> non-linear equation that needs to be solved. It takes "mega-vector" x[j] as 
> an argument. However, internally it is more convenient to reshape it to 
> y[m,i,n] where m is the index of original ODE vector, i is the index of the 
> collocation point on time element (or layer) and n is time element index. 
> Also, some inputs to the method (ODE RHS function and BVP function) expect 
> z[m]-kind vector. So far I chose to pass a @view of the "mega-vector" to 
> them.
>
> The alternatives for reshaping and @view would be:
>
> - Use an inline function or a macro that maps the indices between the 
> mega-vector and the arrays (I've tried it, didn't see any difference in 
> performance or memory allocation, but @code_warntype has fewer "red" spots)
> - Copy relevant pieces of the mega-vector into preallocated arrays of the 
> desired shape. This can also be an alternative to @view.
>
> Is there some kind of rule of thumb where which one would be preferable? 
> And are there any high costs associated with @view and reshape?
>
>

[julia-users] Re: 3D plot (Plots)

2016-10-29 Thread Chris Rackauckas
Plotly/PlotlyJS can also do some 3d. So can GR. Just look at the chart of 
supported attributes: https://juliaplots.github.io/supported/
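
For a concrete starting point, a minimal sketch with the PyPlot backend 
(GR or PlotlyJS should accept the same call):

using Plots
pyplot()
x = linspace(0, 2π, 50)                       # length n
y = linspace(0, π, 40)                        # length m
z = [sin(xi) * cos(yj) for yj in y, xi in x]  # m×n matrix
surface(x, y, z)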

On Saturday, October 29, 2016 at 5:29:06 PM UTC-7, Sheehan Olver wrote:
>
> surface(x,y,z)
>
> (or maybe surface(x,y,z')).
>
> Note that if x and y are matrices (so non-rectangular grid), then I think 
> only the pyplot and glvisualize backends will work.
>
>
>
>
> On Sunday, October 30, 2016 at 10:06:04 AM UTC+11, digxx wrote:
>>
>> is it possible to somehow 3d plot the following input?
>>
>> x=array of length n
>> y=array of length m
>> z=array of size mxn
>>
>> like it can be done in matlab?
>>
>

[julia-users] Re: Defining a type that inherits methods

2016-10-29 Thread Chris Rackauckas
Functions shouldn't be written for concrete types; write them for abstract 
types instead. If you write your function for AbstractMatrix and then make 
your type <: AbstractMatrix, this will work naturally. Making the type 
declarations on a function too strict doesn't help performance anyway.
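
A rough 0.5-era sketch of what that looks like:

immutable MyMatrix{T} <: AbstractMatrix{T}
    data::Matrix{T}
end
Base.size(A::MyMatrix) = size(A.data)
Base.getindex(A::MyMatrix, i::Int, j::Int) = A.data[i, j]

M = MyMatrix(rand(3, 3))
sum(M)  # generic AbstractArray method: comes for free
M * M   # generic matmul fallback: also free
Base.trace(A::MyMatrix) = trace(A.data) # more specific methods win dispatch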

On Saturday, October 29, 2016 at 8:54:04 AM UTC-7, Jérémy Béjanin wrote:
>
> I know that it is not possible to subtype concrete types, but is it 
> possible in some other way to create a type that behave exactly like an 
> existing concrete type (in my case a Matrix) such that it would keep all 
> the methods associated with it, but would still dispatch when a more 
> specific (ie specific to that new type) method exists?
>
> I searched on the mailing list but could not find anything.
>
> Thanks,
> Jeremy
>


[julia-users] Re: so many plotting packages

2016-10-28 Thread Chris Rackauckas
Just use Plots.jl. JuliaPlots is the org and Plots.jl the metapackage that 
puts all of these together into one convenient package. It works very well 
and should be recommended as the standard plotting package to almost 
everyone.

On Friday, October 28, 2016 at 12:17:56 PM UTC-7, Ben Arthur wrote:
>
> would be nice to have an organization to contain all of these. as 
> @StefanKarpinski  pointed out (i 
> forget where exactly), it would help to encourage developers to work 
> together. it would also help new users of julia figure out which package 
> they want to use.
>
> https://github.com/johnmyleswhite/ASCIIPlots.jl
> http://github.com/baggepinnen/ExperimentalAnalysis.jl
> http://github.com/jheinen/GR.jl
> http://github.com/GiovineItalia/Gadfly.jl
> http://github.com/mbaz/Gaston.jl
> http://github.com/ma-laforge/GracePlot.jl
> http://github.com/ma-laforge/InspectDR.jl
> http://github.com/sisl/PGFPlots.jl
> http://github.com/plotly/Plotly.jl
> http://github.com/tbreloff/Plots.jl
> http://github.com/JuliaPy/PyPlot.jl
> http://github.com/ig-or/QWTwPlot.jl
> http://github.com/tbreloff/Qwt.jl
> http://github.com/rennis250/Sparrow.jl
> http://github.com/sunetos/TextPlots.jl
> http://github.com/Evizero/UnicodePlots.jl
> http://github.com/nolta/Winston.jl
>


[julia-users] Re: ANN: ParallelAccelerator v0.2 for Julia 0.5 released.

2016-10-28 Thread Chris Rackauckas
1) Won't that have bad interactions with precompilation? Since macros apply 
at parse time, the package will stay in the "state" that it precompiles in: 
if one precompiles the package and then adds ParallelAccelerator, wouldn't 
ParallelAccelerator go unused? And the other way around: if one removes 
ParallelAccelerator, won't the package be unusable without manually deleting 
the precompile cache? I think that to use this you'd have to tie the 
package's precompilation to whether ParallelAccelerator has changed.

2) Shouldn't/Won't Base auto-parallelize broadcasted calls? That seems like 
the clear next step after loop fusing is finished and threading is no 
longer experimental. Where else is the implicit parallelism hiding?

On Thursday, October 27, 2016 at 2:02:38 PM UTC-7, Todd Anderson wrote:
>
> To answer your question #1, would the following be suitable?  There may be 
> a couple details to work out but what about the general approach?
>
> if haskey(Pkg.installed(), "ParallelAccelerator")
>     println("ParallelAccelerator present")
>     using ParallelAccelerator
>
>     macro PkgCheck(ast)
>         quote
>             @acc $(esc(ast))
>         end
>     end
> else
>     println("ParallelAccelerator not present")
>
>     macro PkgCheck(ast)
>         return ast
>     end
> end
>
> @PkgCheck function f1(x)
>     x * 5
> end
>
> a = f1(10)
> println("a = ", a)
>
>
> 2) The point of ParallelAccelerator is to extract the implicit parallelism 
> automatically.  The purpose of @threads is to allow you to express 
> parallelism explicitly.  So, they both enable parallelism but the former 
> has the potential to be a lot easier to use particularly for scientific 
> programmers who are more scientist than programmer.  In general, I feel 
> there is room for all approaches to be supported across a range of 
> programming ability.  
>
> On Thursday, October 27, 2016 at 10:47:57 AM UTC-7, Chris Rackauckas wrote:
>>
>> Thank you for all of your amazing work. I will be giving v0.2 a try soon. 
>> But I have two questions:
>>
>> 1) How do you see ParallelAccelerator integrating with packages? I asked 
>> this in the chatroom, but I think having it here might be helpful for 
>> others to chime in. If I want to use ParallelAccelerator in a package, then 
>> it seems like I would have to require it (and make sure that every user I 
>> have can compile it!) and sprinkle the macros around. Is there some 
>> sensible way to be able to use ParallelAccelerator if it's available on the 
>> user's machine, but not otherwise? This might be something that requires 
>> Pkg3, but even with Pkg3 I don't see how to do this without having one 
>> version of the function with a macro, and another without it.
>>
>> 2) What do you see as the future of ParallelAccelerator going forward? It 
>> seems like Base Julia is stepping all over your domain: automated loop 
>> fusing, multithreading, etc. What exactly does ParallelAccelerator give 
>> that Base Julia does not or, in the near future, will not / can not? I am 
>> curious because with Base Julia getting so many optimizations itself, it's 
>> hard to tell whether supporting ParallelAccelerator will be a worthwhile 
>> investment in a year or two, and wanted to know what you guys think of 
>> that. I don't mean you haven't done great work: you clearly have, but it 
>> seems Julia is also doing a lot of great work!
>>
>> On Tuesday, October 25, 2016 at 9:42:44 AM UTC-7, Todd Anderson wrote:
>>>
>>> The High Performance Scripting team at Intel Labs is pleased to announce 
>>> the release of version 0.2 of ParallelAccelerator.jl, a package for 
>>> high-performance parallel computing in Julia, primarily oriented around 
>>> arrays and stencils.  In this release, we provide support for Julia 0.5 and 
>>> introduce experimental support for the Julia native threading backend.  
>>> While we still currently support Julia 0.4, such support should be 
>>> considered deprecated and we recommend everyone move to Julia 0.5 as Julia 
>>> 0.4 support may be removed in the future.
>>>
>>> The goal of ParallelAccelerator is to accelerate the computational 
>>> kernel of an application by the programmer simply annotating the kernel 
>>> function with the @acc (short for "accelerate") macro, provided by the 
>>> ParallelAccelerator package.  In version 0.2, ParallelAccelerator still 
>>> defaults to transforming the kernel to OpenMP C code that is then compiled 
>>>

[julia-users] Re: ANN: ParallelAccelerator v0.2 for Julia 0.5 released.

2016-10-27 Thread Chris Rackauckas
Thank you for all of your amazing work. I will be giving v0.2 a try soon. 
But I have two questions:

1) How do you see ParallelAccelerator integrating with packages? I asked 
this in the chatroom, but I think having it here might be helpful for 
others to chime in. If I want to use ParallelAccelerator in a package, then 
it seems like I would have to require it (and make sure that every user I 
have can compile it!) and sprinkle the macros around. Is there some 
sensible way to be able to use ParallelAccelerator if it's available on the 
user's machine, but not otherwise? This might be something that requires 
Pkg3, but even with Pkg3 I don't see how to do this without having one 
version of the function with a macro, and another without it.

2) What do you see as the future of ParallelAccelerator going forward? It 
seems like Base Julia is stepping all over your domain: automated loop 
fusing, multithreading, etc. What exactly does ParallelAccelerator give 
that Base Julia does not or, in the near future, will not / can not? I am 
curious because with Base Julia getting so many optimizations itself, it's 
hard to tell whether supporting ParallelAccelerator will be a worthwhile 
investment in a year or two, and wanted to know what you guys think of 
that. I don't mean you haven't done great work: you clearly have, but it 
seems Julia is also doing a lot of great work!

On Tuesday, October 25, 2016 at 9:42:44 AM UTC-7, Todd Anderson wrote:
>
> The High Performance Scripting team at Intel Labs is pleased to announce 
> the release of version 0.2 of ParallelAccelerator.jl, a package for 
> high-performance parallel computing in Julia, primarily oriented around 
> arrays and stencils.  In this release, we provide support for Julia 0.5 and 
> introduce experimental support for the Julia native threading backend.  
> While we still currently support Julia 0.4, such support should be 
> considered deprecated and we recommend everyone move to Julia 0.5 as Julia 
> 0.4 support may be removed in the future.
>
> The goal of ParallelAccelerator is to accelerate the computational kernel 
> of an application by the programmer simply annotating the kernel function 
> with the @acc (short for "accelerate") macro, provided by the 
> ParallelAccelerator package.  In version 0.2, ParallelAccelerator still 
> defaults to transforming the kernel to OpenMP C code that is then compiled 
> with a system C compiler (ICC or GCC) and transparently handles the 
> invocation of the C code from Julia as if the program were running normally.
>
> However, ParallelAccelerator v0.2 also introduces experimental backend 
> support for Julia's native threading (which is also experimental).  To 
> enable native threading mode, set the environment variable 
> PROSPECT_MODE=threads.  In this mode, ParallelAccelerator identifies pieces 
> of code that can be run in parallel and then runs that code as if it had 
> been annotated with Julia's @threads and goes through the standard Julia 
> compiler pipeline with LLVM.  The ParallelAccelerator C backend has the 
> limitation that the kernel functions and anything called by those cannot 
> include code that is not type-stable to a single type.  In particular, 
> variables of type Any are not supported.  In practice, this restriction was 
> a significant limitation.  For the native threading backend, no such 
> restriction is necessary and thus our backend should handle arbitrary Julia 
> code.
>
> Under the hood, ParallelAccelerator is essentially a domain-specific 
> compiler written in Julia. It performs additional analysis and optimization 
> on top of the Julia compiler. ParallelAccelerator discovers and exploits 
> the implicit parallelism in source programs that use parallel programming 
> patterns such as map, reduce, comprehension, and stencil. For example, 
> Julia array operators such as .+, .-, .*, ./ are translated by 
> ParallelAccelerator internally into data-parallel map operations over all 
> elements of input arrays. For the most part, these patterns are already 
> present in standard Julia, so programmers can use ParallelAccelerator to 
> run the same Julia program without (significantly) modifying the source 
> code.
>
> Version 0.2 should be considered an alpha release, suitable for early 
> adopters and Julia enthusiasts.  Please file bugs at 
> https://github.com/IntelLabs/ParallelAccelerator.jl/issues.
>
> See our GitHub repository at 
> https://github.com/IntelLabs/ParallelAccelerator.jl for a complete list 
> of prerequisites, supported platforms, example programs, and documentation.
>
> Thanks to our colleagues at Intel and Intel Labs, the Julia team, and the 
> broader Julia community for their support of our efforts!
>
> Best regards,
> The High Performance Scripting team
> (Parallel Computing Lab, Intel Labs)
>


[julia-users] Re: Status of Plots.jl?

2016-10-16 Thread Chris Rackauckas
using Plots
# Pkg.add("GR") # if the GR backend isn't installed yet
gr() # change the backend
plot(rand(4,4))

There's a bug with the plot pane where you might need to hit it twice. If 
that's not working, then it's not set up correctly.


On Sunday, October 16, 2016 at 9:45:07 AM UTC-7, missp...@gmail.com wrote:
>
> Hi folks,
>
> I don't seem to be able to have the display of a graph in GR. I'm calling 
> the instructions using Atom. 
>
> could someone post a Hello World example?
>
> thanks, 
>
> On Thursday, March 10, 2016 at 5:11:57 AM UTC-8, Daniel Carrera wrote:
>>
>> Hello,
>>
>> Does anyone know the status of Plots.jl? It seems to have come a long 
>> way. At least the documentation makes it look pretty complete:
>>
>> http://plots.readthedocs.org/en/latest/
>>
>> I'm looking at the backends. Does anyone know what "Gr", "Qwt", and 
>> "unicodeplots" are? Apparently support for Winston was dropped?
>>
>> https://github.com/tbreloff/Plots.jl/issues/152
>>
>> I don't use Winston, but I'm curious to know what happened. Was Winston 
>> hard to support?
>>
>> I am currently using PyPlot because I need the maturity of Matplotlib, 
>> but I am happy to see all the effort that's going into making a native 
>> plotting library for Julia.
>>
>> Cheers,
>> Daniel.
>>
>

[julia-users] Re: matrix multiplication in Matlab is much faster

2016-10-16 Thread Chris Rackauckas
Take a look at the performance tips section of the manual. The first time 
you run it, the function will compile. Then the compiled function is cached. 
On my computer I did:

a = rand(1000,1000)
y = similar(a)
@time a*a             # first call: timing includes compilation
@time a*a             # second call: compiled code is cached
@time A_mul_B!(y,a,a) # first call: compiles
@time A_mul_B!(y,a,a) # in-place multiply: no allocation for the result

Which gives output:


  0.435561 seconds (367.13 k allocations: 20.108 MB, 1.58% gc time)
  0.019922 seconds (7 allocations: 7.630 MB)
  0.027144 seconds (53 allocations: 2.875 KB)
  0.016211 seconds (4 allocations: 160 bytes)

Notice how after compiling, the allocations and the timings go way down. 
For a more in-depth look at how Julia gets its speed (and how to make the 
most of it), take a look at this blog post. Julia is a little bit more 
complex than MATLAB, but the payoffs can be huge once you take the time to 
understand it. Happy Julia-ing!


On Sunday, October 16, 2016 at 9:45:00 AM UTC-7, majid.z...@gmail.com wrote:
>
> I have run the same matrix multiplication in both Matlab and Julia, but 
> Matlab is much faster than Julia. I have used both the A_mul_B! and *() 
> functions.
> my codes are :
> in matlab : 
> tic 
> a = rand(1000,1000)
> a*a
> toc
> the output is : Elapsed time is 0.193979 seconds
>
> in Julia :
> a = rand(1000,1000)
> y=similar(a)
> @time a*a 
> @time A_mul_B!(y,a,a)
>
> the output is:
> 1.575159 seconds
> 1.497884  seconds
> Majid
>


[julia-users] Re: Root finding package

2016-10-15 Thread Chris Rackauckas
I don't know if NLsolve handles complex roots directly, but I've always 
found it to be very good. Maybe you can just treat the problem as a system 
in two real unknowns (the real and imaginary parts) and solve for where the 
norm of f(x) is zero.
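
A rough sketch of that trick for f(z) = z^2 + 1, assuming NLsolve's in-place 
f!(x, fvec) signature from this era:

using NLsolve
function f!(x, fvec)
    z = complex(x[1], x[2])
    w = z^2 + 1
    fvec[1] = real(w)
    fvec[2] = imag(w)
end
r = nlsolve(f!, [0.5, 0.5])
complex(r.zero[1], r.zero[2]) # ≈ 0.0 + 1.0im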

On Saturday, October 15, 2016 at 4:56:23 PM UTC-7, digxx wrote:
>
> So I know there is Roots but is there also one for finding complex roots?
>


Re: [julia-users] Re: Setting/getting field-like members with side effects

2016-10-09 Thread Chris Rackauckas
Yes, symbols would be preferred. Checking equality of two symbols is 
quicker, but, more importantly, symbols are natural to Julia 
metaprogramming/expressions, meaning that if you use symbols then the 
expression is easier to generate via macros.

Tom is getting shy, but really, take a look at StochasticOptimization.jl. 
Yes, it will change, but it does have a very nice design which is extensible 
and allows for the flexibility and the parameters that you need, and even 
more. Another good way to accomplish this may be call-overloaded types. 
IMHO, you may find that trying to fit an OO framework to this is more work 
than correctly designing the framework the first time. Of course, YMMV.
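
For the [] variant, here's a toy sketch of a symbol-keyed option tree with a 
side effect on assignment (hypothetical types, not the actual coolfluid API):

type OptionNode
    children::Dict{Symbol,Any}
    onchange::Function
end
Base.getindex(n::OptionNode, k::Symbol) = n.children[k]
function Base.setindex!(n::OptionNode, v, k::Symbol)
    n.children[k] = v
    n.onchange(k, v) # notify listeners of the change
end

params = OptionNode(Dict{Symbol,Any}(), (k, v) -> println("set $k = $v"))
params[:solver_type] = :GMRES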

On Sunday, October 9, 2016 at 1:07:12 PM UTC-7, Bart Janssens wrote:
>
> OK, thanks, so symbols are definitely preferred for the [] variant then?
>
> As for using dispatch, I am certainly all for designing a "proper" Julian 
> interface, but the number of parameters here can be daunting, see e.g. here 
> for a more complete example:
>
> https://github.com/barche/coolfluid3/blob/master/doc/tutorials/live_visualization/cylinder.py#L45-L57
>
> This is indeed from an object-oriented C++ framework, where we have a tree 
> of objects that each have options controlling their behavior. In the non 
> too distant future I would like for it to be possible to extend this using 
> models written in Julia (because they can be as fast as C++, but with a lot 
> less headaches for students I hope), but rewriting the entire framework is 
> just too much work right now, so I think we are stuck with this overall 
> structure for the time being.
>
> On Sun, Oct 9, 2016 at 9:13 PM Chris Rackauckas  > wrote:
>
>> Missed that one, though it should be
>>
>> mysolver[:linear_system][:parameters][:solver_type] = :GMRES
>>
>> There's no reason for an algorithm choice to be a String.
>>
>> But this entire thing seems wrong-headed. The choice of the solver method 
>> should likely be done by dispatching on the solve method somehow. This 
>> seems like trying directly match an object-oriented framework which 
>> shouldn't be recommended (look at how fast it gets messy). You may want to 
>> see if you can map a framework like StochasticOptimization.jl 
>> <https://github.com/JuliaML/StochasticOptimization.jl> to 
>> IterativeSolvers.jl <https://github.com/JuliaMath/IterativeSolvers.jl>.
>>
>>
>> On Sunday, October 9, 2016 at 11:34:46 AM UTC-7, Kristoffer Carlsson 
>> wrote:
>>>
>>> That was one of the suggestions? 
>>
>>

[julia-users] Re: Setting/getting field-like members with side effects

2016-10-09 Thread Chris Rackauckas
Missed that one, though it should be

mysolver[:linear_system][:parameters][:solver_type] = :GMRES

There's no reason for an algorithm choice to be a String.

But this entire thing seems wrong-headed. The choice of the solver method 
should likely be done by dispatching on the solve method somehow. This 
seems like trying directly match an object-oriented framework which 
shouldn't be recommended (look at how fast it gets messy). You may want to 
see if you can map a framework like StochasticOptimization.jl 
 to 
IterativeSolvers.jl .

On Sunday, October 9, 2016 at 11:34:46 AM UTC-7, Kristoffer Carlsson wrote:
>
> That was one of the suggestions? 



Re: [julia-users] Re: macro: with

2016-10-09 Thread Chris Rackauckas
It's a lot like unpacking a type, except instead of defining new variables 
for the unpacked values, your macro substitutes the field access on the type 
instance (t.a, t.b, ...) directly into the expression.

On Sunday, October 9, 2016 at 10:55:45 AM UTC-7, Tom Breloff wrote:
>
> What about it?  I don't think there's anything like this in Parameters.
>
> On Sun, Oct 9, 2016 at 1:27 PM, Chris Rackauckas  > wrote:
>
>> What about Parameters.jl <https://github.com/mauro3/Parameters.jl>?
>>
>>
>> On Wednesday, September 7, 2016 at 7:23:47 AM UTC-7, Tom Breloff wrote:
>>>
>>> Hey all... I just threw together a quick macro to save some typing when 
>>> working with the fields of an object.  Disclaimer: this should not be used 
>>> in library code, or in anything that will stick around longer than a REPL 
>>> session.  Hopefully it's self explanatory with this example.  Is there a 
>>> good home for this?  Does anyone want to use it?
>>>
>>> julia> type T; a; b; end; t = T(0,0)
>>> T(0,0)
>>>
>>> julia> macroexpand(:(
>>>@with t::T begin
>>>a = 1
>>>b = 2
>>>c = a + b - 4
>>>d = c - b
>>>a = d / c
>>>end
>>>))
>>> quote  # REPL[3], line 3:
>>> t.a = 1 # REPL[3], line 4:
>>> t.b = 2 # REPL[3], line 5:
>>> c = (t.a + t.b) - 4 # REPL[3], line 6:
>>> d = c - t.b # REPL[3], line 7:
>>> t.a = d / c
>>> end
>>>
>>> julia> @with t::T begin
>>>a = 1
>>>b = 2
>>>c = a + b - 4
>>>d = c - b
>>>a = d / c
>>>end
>>> 3.0
>>>
>>> julia> t
>>> T(3.0,2)
>>>
>>>
>

[julia-users] Re: Setting/getting field-like members with side effects

2016-10-09 Thread Chris Rackauckas
Why not use Symbols instead of strings here?

On Sunday, October 9, 2016 at 8:26:57 AM UTC-7, Bart Janssens wrote:
>
> Hi all,
>
> I'm thinking about how to translate a Python interface that makes 
> extensive use of __getattr__ and __setattr__ overloading to allow chaining 
> a series of option values like this example:
> mysolver.linear_system.parameters.solver_type = "GMRES"
>
> Each time a . is encountered, the appropriate __getattr__ is called, which 
> in turn calls a C++ function to get the correct value, which may not be 
> stored as a true data field member anywhere. In the end, the assignment 
> triggers a call to __setattr__ and may call additional functions to notify 
> functions of the change. I see 3 ways of tackling this in Julia:
>
> 1. Just use methods to get/set each field, so something like:
> setproperty(getproperty(getproperty(mysolver, "linear_system"), 
> "parameters"), "solver_type", "GMRES")
> As is obvious from the example, this is a bit cumbersome and hard to read 
> with a long chain of gets.
>
> 2. Use the [] operator, which can be overloaded, as far as I understand. 
> The question there is if we use strings or symbols as keys, to get either:
> mysolver["linear_system"]["parameters"]["solver_type"] = "GMRES"
> or:
> mysolver[:linear_system][:parameters][:solver_type] = "GMRES"
> Either solution is still not as readable or easy to type as the Python 
> original
>
> 3. Finally use a macro, such as enabled by the DotOverload.jl package (
> https://github.com/sneusse/DotOverload.jl):
> @dotted mysolver.linear_system.parameters.solver_type = "GMRES"
>
> I realise this touches on the discussion on . overloading at 
> https://github.com/JuliaLang/julia/issues/1974 but that seems to have 
> dried out a bit, and I understand that touching an operation that is 
> expected to have (almost?) no inherent cost such as field access may be 
> dangerous. The macro solution in 3 seems like the most elegant method, but 
> I worry about code readability, since the purpose of the @dotted will not 
> be immediately clear to people unfamiliar with the DotOverload package. 
> Could a macro like this be provided in Base, with proper documentation 
> (near the descriptions of types and their fields, for example), so option 3 
> can be used without concerns over code readability/understandability?
>
> Cheers,
>
> Bart
>


[julia-users] Re: macro: with

2016-10-09 Thread Chris Rackauckas
What about Parameters.jl ?

On Wednesday, September 7, 2016 at 7:23:47 AM UTC-7, Tom Breloff wrote:
>
> Hey all... I just threw together a quick macro to save some typing when 
> working with the fields of an object.  Disclaimer: this should not be used 
> in library code, or in anything that will stick around longer than a REPL 
> session.  Hopefully it's self explanatory with this example.  Is there a 
> good home for this?  Does anyone want to use it?
>
> julia> type T; a; b; end; t = T(0,0)
> T(0,0)
>
> julia> macroexpand(:(
>@with t::T begin
>a = 1
>b = 2
>c = a + b - 4
>d = c - b
>a = d / c
>end
>))
> quote  # REPL[3], line 3:
> t.a = 1 # REPL[3], line 4:
> t.b = 2 # REPL[3], line 5:
> c = (t.a + t.b) - 4 # REPL[3], line 6:
> d = c - t.b # REPL[3], line 7:
> t.a = d / c
> end
>
> julia> @with t::T begin
>a = 1
>b = 2
>c = a + b - 4
>d = c - b
>a = d / c
>end
> 3.0
>
> julia> t
> T(3.0,2)
>
>

[julia-users] Re: New SPEC - open to Julia[applications]?

2016-10-08 Thread Chris Rackauckas
From your second link:

> Submissions for the first step in the search program will be accepted by 
> SPEC beginning 11 November 2008 and ending 30 June 2010 (11:59 pm, Pacific 
> Standard Time).

On Saturday, October 8, 2016 at 12:18:53 PM UTC-7, Páll Haraldsson wrote:
>
>
> https://www.spec.org/cpuv6/
>
> It would be cool (and publicity) if Julia would make it into SPEC version 
> 6. Anyway, might be of interest to people here.
>
> SPEC used C or Fortran last I looked, I see only references to 
> "languages", "C/C++" and "portable":
>
>
> https://www.spec.org/cpuv6/
> "SPEC holds to the principle that better benchmarks can be developed from 
> actual applications. With this in mind, SPEC is once again seeking to 
> encourage those outside of SPEC to assist us in locating applications that 
> could be used in the next CPU-intensive benchmark suite, currently under 
> development within SPEC and currently designated as SPEC CPUv6.[..]
>
> Portable or can be ported to multiple hardware architectures and operating 
> systems with reasonable effort 
>
>
> For C/C++ programs:
> [..]
> for the main routine, take one of these two forms
>
>
> [..]
> the programming(s) language used in the program/application and 
> approximate lines of code, 
>
> [..]
> Step 4: Complete Code Testing and Benchmark Infrastructure ($1000 upon 
> successful completion) 
> [..]
> SPEC always prefers to use code that conforms to the relevant language 
> standards.
>
> [..]
> Step 6: Acceptance into the CPU Suite ($2500 if accepted)
>
> If the program/application is recommended to and is accepted by the Open 
> Systems Group, in its sole discretion, then the program/application is 
> included in the suite and the Submitter will receive $2500 and a license 
> for the suite when it is released."
>


[julia-users] Re: Julia and the Tower of Babel

2016-10-08 Thread Chris Rackauckas
Conventions would have to be arrived at before this is possible.

On Saturday, October 8, 2016 at 3:39:55 AM UTC-7, Traktor Toni wrote:
>
> In my opinion the solutions to this are very clear, or would be:
>
> 1. make a mandatory linter for all julia code
> 2. julia IDEs should offer good intellisense
>
> Am Freitag, 7. Oktober 2016 17:35:46 UTC+2 schrieb Gabriel Gellner:
>>
>> Something that I have been noticing, as I convert more of my research 
>> code over to Julia, is how the super easy to use package manager (which I 
>> love), coupled with the talent base of the Julia community seems to have a 
>> detrimental effect on the API consistency of the many “micro” packages that 
>> cover what I would consider the de-facto standard library.
>>
>> What I mean is that whereas a commercial package like Matlab/Mathematica 
>> etc., being written under one large umbrella, will largely (clearly not 
>> always) choose consistent names for similar API keyword arguments, and have 
>> similar calling conventions for master function like tools (`optimize` 
>> versus `lbfgs`, etc), which I am starting to realize is one of the great 
>> selling points of these packages as an end user. I can usually guess what a 
>> keyword will be in Mathematica, whereas even after a year of using Julia 
>> almost exclusively I find I have to look at the documentation (or the 
>> source code depending on the documentation ...) to figure out the keyword 
>> names in many common packages.
>>
>> Similarly, in my experience with open source tools, due to the complexity 
>> of the package management, we get large “batteries included” distributions 
>> that cover a lot of the standard stuff for doing science, like python’s 
>> numpy + scipy combination. Whereas in Julia the equivalent of scipy is 
>> split over many, separately developed packages (Base, Optim.jl, NLopt.jl, 
>> Roots.jl, NLsolve.jl, ODE.jl/DifferentialEquations.jl). Many of these 
>> packages are stupid awesome, but they can have dramatically different 
>> naming conventions and calling behavior, for essential equivalent behavior. 
>> Recently I noticed that tolerances, for example, are named as `atol/rtol` 
>> versus `abstol/reltol` versus `abs_tol/rel_tol`, which means is extremely 
>> easy to have a piece of scientific code that will need to use all three 
>> conventions across different calls to seemingly similar libraries. 
>>
>> Having brought this up I find that the community is largely sympathetic 
>> and, in general, would support a common convention, the issue I have slowly 
>> realized is that it is rarely that straightforward. In the above example 
>> the abstol/reltol versus abs_tol/rel_tol seems like an easy example of what 
>> can be tidied up, but the latter underscored name is consistent with 
>> similar naming conventions from Optim.jl for other tolerances, so that 
>> community is reluctant to change the convention. Similarly, I think there 
>> would be little interest in changing abstol/reltol to the underscored 
>> version in packages like Base, ODE.jl etc as this feels consistent with 
>> each of these code bases. Hence I have started to think that the problem is 
>> the micro-packaging. It is much easier to look for consistency within a 
>> package than across similar packages, and since Julia seems to distribute 
>> so many of the essential tools in very narrow boundaries of functionality I 
>> am not sure that this kind of naming convention will ever be able to reach 
>> something like a Scipy, or the even higher standard of commercial packages 
>> like Matlab/Mathematica. (I am sure there are many more examples like using 
>> maxiter, versus iterations for describing stopping criteria in iterative 
>> solvers ...)
>>
>> Even further I have noticed that even when packages try to find 
>> consistency across packages, for example Optim.jl <-> Roots.jl <-> 
>> NLsolve.jl, when one package changes how they do things (Optim.jl moving to 
>> delegation on types for method choice) then again the consistency fractures 
>> quickly, where we now have a common divide of using either Typed dispatch 
>> keywords versus :method symbol names across the previous packages (not to 
>> mention the whole inplace versus not-inplace for function arguments …)
>>
>> Do people, with more experience in scientific packages ecosystems, feel 
>> this is solvable? Or do micro distributions just lead to many, many varying 
>> degrees of API conventions that need to be learned by end users? Is this 
>> common in communities that use C++ etc? I ask as I wonder how much this 
>> kind of thing can be worried about when making small packages is so easy.
>>
>

[julia-users] Re: Julia and the Tower of Babel

2016-10-08 Thread Chris Rackauckas
Create a repo where we can all bikeshed different names, agree upon some, 
and then standardize. I honestly don't care which conventions are chosen 
and will just find/replace with whatever people want, but there has to be a 
"whatever people want" to do that.

On Saturday, October 8, 2016 at 1:47:07 AM UTC-7, jonatha...@alumni.epfl.ch 
wrote:
>
> Maybe an "easy" first step would be to have a page (a github repo) 
> containing domain specific naming conventions (atol/abstol) that package
> developers can look up. Even though existing packages might not adopt 
> them, at least newly created ones would have a chance
> to be more consistent. You could even do a small tool that parse your 
> files and warn you about improper naming.
>


[julia-users] New Julia-Focused Blog: Pkg Update

2016-10-06 Thread Chris Rackauckas
Hey, 
  I am happy to announce Pkg Update, a new blog focused on summarizing the 
large changes throughout the Julia package ecosystem. As I feel it's always 
easier to get people involved once you already have things up and running, 
I went ahead and created the first blog post, detailing some recent changes in 
JuliaMath, JuliaStats, JuliaDiffEq, JuliaML, etc. I hope you find this 
interesting. As I mention in the post, I am looking for others who would 
like to help contribute, especially as correspondents for a particular 
organization. I feel like this can be very helpful for Julia users to 
understand the rapidly evolving ecosystem. Hope you enjoy!


[julia-users] Re: Julia vs Seed7, why languages succeed or fail

2016-10-05 Thread Chris Rackauckas
Who's the audience for Seed7? I googled Seed7 BLAS, Seed7 Linpack, Seed7 
FFT and nothing came up. So a large portion of Julia users are not the 
Seed7 audience. To me, there is almost no similarity between Julia and 
Seed7, even if the syntax or features were similar. But for the reasons you 
say, the Seed7 audience would be part of the Julia audience. 

Chapel was made for a small niche: most people still aren't dedicating all 
of their time to parallel programming. That's fine, but it's clear why it 
isn't seen everywhere.

One thing to notice too is that one of Julia's strengths is that it allows 
itself to build off of other's strengths. It doesn't reinvent the wheel, it 
reinvents the language design and structure. It very naturally plugs into 
LLVM, FFTW, OpenBLAS, etc. so that way it's fully featured and fully 
performant, but without having to re-create every little detail. This plus 
the fact that Julia's Base is mostly written in Julia means that the basics 
are covered, and Julia can be worked on by standard Julia users. This makes 
Julia easy to maintain, easy to upgrade, and easy to see a future for.

Lastly, why do some languages succeed? Because you have to use them. Julia 
offers me many things that no other language does, so even though my 
research adviser didn't want me "trying a new language", I can't make him 
happy: switching from Julia would only lead to major 
performance/feature/syntax/maintainability problems (been there, done 
that). So no matter what, I am going to continue to use Julia. I believe 
others are the same. And that's why it's succeeding.

On Wednesday, October 5, 2016 at 2:55:28 PM UTC-7, Páll Haraldsson wrote:
>
>
> A.
> I just [re?]discovered Seed7 language, one of the few languages with 
> multiple dispatch, also extensible (though not through macros).
>
> https://groups.google.com/forum/#!topic/comp.programming/_C08U8t4dRg
> "Seed7 has more then 90 libraries now." [in 2013, after 7 years]
>
> They hit Top100 (93? top) on TIOBE, but nowhere to be found now.
>
> They seem very similar, except for the Pascal-like syntax; I guess there 
> must be more to it...
>
>
> B.
> Chapel can be faster than Go or competitive, but also much slower (it is a 
> parallel language; not sure if this case is not working or just not meant 
> to always win):
>
> http://benchmarksgame.alioth.debian.org/u64q/chapel.html
>
> fasta
>
> source    secs    KB      gz     cpu     cpu load
> Chapel    20.59   28,868  1216   20.59   100% 0% 0% 1%
> Go        1.97
>


[julia-users] Re: multiple processes question

2016-10-05 Thread Chris Rackauckas
See this blog post.

If your code is perfectly efficient, then yes, use a number of processes 
equal to the number of cores (that holds for something like BLAS, which is 
written with the most efficient threaded algorithms you could imagine). But 
for your simple homework assignment? There will be time lost due to 
inefficiencies. It ends up being much faster to overload the scheduler so 
that, while one process is being slow due to moving data or something like 
that, another one kicks in and something is always computing on each core. 
Even though this will cause some cache misses, if your program is not 
perfectly efficient, this wins the tradeoff.

So while in theory processes = cores is best (you just write perfectly 
efficient code, and there are no cache misses), that's generally not 
reality. This is the same principle as Amdahl's Law, though that's an iffy 
comparison since normally that law is stated in terms of what percentage of 
the program is serial vs. parallel. Here the efficiency loss is due to the 
higher-level programming context not being 100% bare-metal efficient, but 
it's the same idea. Note that your Monte Carlo pi calculation probably is 
100% parallel, so it would "look" like Amdahl's-law effects don't apply, but 
that's only when you abstract away all of the details of computing (caching, 
data movement, etc.).

I was taught the same thing, yet if you continuously benchmark, it's only 
true for the most performant, optimal threaded/MPI code. It shouldn't be 
taught anymore: you should just be taught to benchmark your code.
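
For example, a rough sketch (tune the worker count by timing, not by 
counting cores):

addprocs(8) # try more workers than physical cores and benchmark
@everywhere function hits(n)
    c = 0
    for i in 1:n
        c += rand()^2 + rand()^2 <= 1.0 # Bool adds as 0/1
    end
    return c
end
n = 10^8; p = nworkers()
@time pi_est = 4 * sum(pmap(hits, fill(div(n, p), p))) / n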

On Wednesday, October 5, 2016 at 6:02:41 AM UTC-7, noel ryan wrote:
>
> I am an undergraduate working on a Julia parallelism project. I have read 
> in quite a few tutorials that to get the best parallel performance I should 
> spawn a number of processes equal to the number of cores in my processor ( 
> working with 2 cores & 4 threads). However in a test to check processing 
> speeds my result ( monte carlo test for pi to 1 billion)  was that using 17 
> processes calculated the quickest. Adding extra processes above 17 didn't 
> speed up the calculation. Can anyone explain what is happening here?
>
> Any help would be great
>
> Regards,
>
> Noel
>


[julia-users] Re: Changing Repositories Without Deleting METADATA

2016-10-03 Thread Chris Rackauckas
Thanks for the suggestion! For some reason I never checked to see if there 
were other GUIs, but once you mentioned it I did quite a bit of Googling. I 
am now using GitKraken, and while it will take a bit to get used to, it's 
already improving my productivity.

On Sunday, October 2, 2016 at 9:31:18 PM UTC-7, Tony Kelman wrote:
>
> The conclusion I'd take out of that is not to use GitHub Desktop. It tries 
> to hide too many important things like this from you. As far as git GUIs 
> go, try SourceTree, it has a much more direct mapping between command line 
> operations and GUI buttons.



[julia-users] Re: View (a portion of a) vector as a matrix

2016-10-02 Thread Chris Rackauckas
Fengyang's reshape((@view x[1:6]), (3, 2)) will work well and will be 
essentially cost-free since reshape creates a view, and a view of a view is 
still just a view (no copy). Another way to write it is 
reshape(view(x,1:6), (3, 2)). For example:

function f(t,u,du)
  x = reshape(view(u, 1:6), (3, 2))
  # Use x and u[7], u[8], u[9], and u[10] to write directly to du
  nothing
end

This should be a good way to write the function for Sundials.jl, 
DifferentialEquations.jl, and ODE.jl (after the iterator PR).

On Sunday, October 2, 2016 at 5:43:01 AM UTC-7, Alexey Cherkaev wrote:
>
> I have the model where it is convenient to represent one of the variables 
> as a matrix. This variable, however, is obtained as a solution of ODEs 
> (among other variables). I'm using Sundials to solve ODEs and 
> `Sundials.cvode` requires the ODE RHS function to take a vector. So, it 
> seems logical to pack the variables into a vector to pass to `cvode` and 
> unpack them for more convenient use in my code.
>
> For example, consider `x = fill(1.0, 10)` and the first 6 entries are 
> actually a matrix of size 3x2, other 4: other variables. So, I can do `s = 
> reshape(x[1:6], (3,2))`. However, this creates a copy, which I would want 
> to avoid. And I couldn't find a way to do the view-reshaping by applying 
> `view()` function or doing `SubArray` directly. For now I settle on to 
> using indices of the form `(i-1)*M + j +1` to retrieve `s[i,j] = x[(i-1)*M 
> + j +1]`. But, (1) julia's 1-based arrays make it awkward to use this 
> notation and (2) matrix (not element-wise) operations are not available.
>


[julia-users] Re: Changing Repositories Without Deleting METADATA

2016-10-02 Thread Chris Rackauckas
Yeah, this is how I used to do it. It's really fragile though when used in 
conjunction with Github Desktop on Windows (it doesn't, at least didn't, 
like multiple remotes). I guess that's the best option unless the package 
manager allows one to choose remotes.

On Sunday, October 2, 2016 at 10:50:29 AM UTC-7, Avik Sengupta wrote:
>
> I usually do this directly in git, and not via Pkg commands. I go into the 
> package directory, and add my fork as an additonal remote. So..
>
> cd /home/chris/v0.5/Sundials
> git remote add  chris https://github.com/ChrisRackauckas/Sundials.jl.git 
> <https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FChrisRackauckas%2FSundials.jl.git&sa=D&sntz=1&usg=AFQjCNFQc3flEftrLIpqZWy6ZUJSp4A54Q>
> git checkout -b cool-feature
> julia #Develop cool feature, test from Julia
> git push chris cool-feature
>
> I'll then create a pull request from Github's ui. 
>
> On Sunday, 2 October 2016 18:01:28 UTC+1, Chris Rackauckas wrote:
>>
>> Does anyone have a good way to change repositories? A common example for 
>> me is, Sundials is in JuliaDiffEq, so I fork it to my Github account for an 
>> extended PR, but to work on it I need to remove my current Sundials install 
>> and clone from my own repository. However, METADATA does not like this at 
>> all:
>>
>> julia> Pkg.rm("Sundials") # Remove the JuliaDiffEq/Sundials version
>> WARNING: unknown DataFrames commit 84523937, metadata may be ahead of 
>> package cache
>> INFO: No packages to install, update or remove
>> INFO: Package database updated
>>
>> julia> Pkg.clone("https://github.com/ChrisRackauckas/Sundials.jl.git 
>> <https://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FChrisRackauckas%2FSundials.jl.git&sa=D&sntz=1&usg=AFQjCNFQc3flEftrLIpqZWy6ZUJSp4A54Q>")
>>  
>> # Install from my Github
>> INFO: Cloning Sundials from 
>> https://github.com/ChrisRackauckas/Sundials.jl.git
>> ERROR: Sundials already exists
>>  in clone(::String, ::SubString{String}) at .\pkg\entry.jl:193
>>  in clone(::String) at .\pkg\entry.jl:221
>>  in 
>> (::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#clone,Tuple{String}})() 
>> at .\pkg\dir.jl:31
>>  in 
>> cd(::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#clone,Tuple{String}}, 
>> ::String) at .\file.jl:58
>>  in #cd#1(::Array{Any,1}, ::Function, ::Function, ::String, 
>> ::Vararg{Any,N}) at .\pkg\dir.jl:31
>>  in clone(::String) at .\pkg\pkg.jl:151
>>
>> In the past I would just delete METADATA and let it re-create itself, and 
>> that will fix it, but then you have to re-install packages which can be a 
>> mess. Since this is becoming much more common for me, I need a better way 
>> to handle this. Does anyone have a better workflow?
>>
>

[julia-users] Changing Repositories Without Deleting METADATA

2016-10-02 Thread Chris Rackauckas
Does anyone have a good way to change repositories? A common example for me 
is, Sundials is in JuliaDiffEq, so I fork it to my Github account for an 
extended PR, but to work on it I need to remove my current Sundials install 
and clone from my own repository. However, METADATA does not like this at 
all:

julia> Pkg.rm("Sundials") # Remove the JuliaDiffEq/Sundials version
WARNING: unknown DataFrames commit 84523937, metadata may be ahead of 
package cache
INFO: No packages to install, update or remove
INFO: Package database updated

julia> Pkg.clone("https://github.com/ChrisRackauckas/Sundials.jl.git";) # 
Install from my Github
INFO: Cloning Sundials from 
https://github.com/ChrisRackauckas/Sundials.jl.git
ERROR: Sundials already exists
 in clone(::String, ::SubString{String}) at .\pkg\entry.jl:193
 in clone(::String) at .\pkg\entry.jl:221
 in 
(::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#clone,Tuple{String}})() 
at .\pkg\dir.jl:31
 in 
cd(::Base.Pkg.Dir.##2#3{Array{Any,1},Base.Pkg.Entry.#clone,Tuple{String}}, 
::String) at .\file.jl:58
 in #cd#1(::Array{Any,1}, ::Function, ::Function, ::String, 
::Vararg{Any,N}) at .\pkg\dir.jl:31
 in clone(::String) at .\pkg\pkg.jl:151

In the past I would just delete METADATA and let it re-create itself, and 
that will fix it, but then you have to re-install packages which can be a 
mess. Since this is becoming much more common for me, I need a better way 
to handle this. Does anyone have a better workflow?


[julia-users] ANN: ParallelDataTransfer.jl

2016-10-02 Thread Chris Rackauckas
ParallelDataTransfer.jl 
 is a library 
for sending and receiving data among processes defined using Julia's 
parallel computing framework. This library can be used to:


   - Send variables to worker processes
   - Directly define variables on worker processes
   - Broadcast a definition statement to all workers
   - Get variables from remote processes
   - Pass variables between processes


 This library is constructed so these operations are done safely and 
easily, with the wait/fetch/sync operations built in. This allows one to do 
parallel programming at a high-level, specifying what data is moving, and 
not the details of how it moves.
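
For a flavor of the API, here is a minimal sketch assuming the 
sendto/getfrom/passobj names from the README:

addprocs(2)            # workers 2 and 3
using ParallelDataTransfer

sendto(2, a = rand(3)) # define a on worker 2
b = getfrom(2, :a)     # fetch worker 2's a back to the master process
passobj(2, 3, :a)      # move a from worker 2 to worker 3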

Also included are examples of how to use this functionality type-safely. 
Since these functions/macros internally utilize eval statements, the 
variables they define are in the global scope. However, by passing these 
variables into functions or declaring their types, one can easily use this 
library with type-stable functions. Please see the README for details.

If you like this project, please star the repository to show your support. 
Please feel free to suggest features by opening issues (and please report 
bugs as well). Thank you for your attention.


[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Chris Rackauckas
Thanks for the update.

On Thursday, September 29, 2016 at 6:31:29 PM UTC-7, Tim Besard wrote:
>
> Hi all,
>
> CUDAdrv.jl  is a Julia wrapper for 
> the CUDA driver API -- not to be confused with its counterpart CUDArt.jl 
>  which wraps the slightly 
> higher-level CUDA runtime API.
>
> The package doesn't feature many high-level or easy-to-use wrappers, but 
> focuses on providing the necessary functionality for other packages to 
> build upon. For example, CUDArt uses CUDAdrv for launching kernels, while 
> CUDAnative (the in-development native programming interface) completely 
> relies on CUDAdrv for all GPU interactions.
>
> It features a ccall-like cudacall interface for launching kernels and 
> passing values:
> using CUDAdrv
> using Base.Test
>
> dev = CuDevice(0)
> ctx = CuContext(dev)
>
> md = CuModuleFile(joinpath(dirname(@__FILE__), "vadd.ptx"))
> vadd = CuFunction(md, "kernel_vadd")
>
> dims = (3,4)
> a = round(rand(Float32, dims) * 100)
> b = round(rand(Float32, dims) * 100)
>
> d_a = CuArray(a)
> d_b = CuArray(b)
> d_c = CuArray(Float32, dims)
>
> len = prod(dims)
> cudacall(vadd, len, 1, (DevicePtr{Cfloat},DevicePtr{Cfloat},DevicePtr{
> Cfloat}), d_a, d_b, d_c)
> c = Array(d_c)
> @test a+b ≈ c
>
> destroy(ctx)
>
> For documentation, refer to the NVIDIA docs 
> . Even though they don't 
> fully match what CUDAdrv.jl implements, the package is well tested, and 
> redocumenting the entire thing is too much work.
>
> Current master of this package only supports 0.5, but there's a tagged 
> version supporting 0.4 (as CUDArt.jl does so as well). It has been tested 
> on CUDA 5.0 up to 8.0, but there might always be issues with certain 
> versions (as the wrappers aren't auto-generated, and probably will never be 
> due to how NVIDIA has implemented cuda.h)
>
> Anybody thinking there's a lot of overlap between CUDArt and CUDAdrv is 
> completely right, but it mimics the overlap between CUDA's runtime and 
> driver APIs as in some cases we do specifically need one or the other (eg., 
> CUDAnative wouldn't work with only the runtime API). There's also some 
> legacy at the Julia side: CUDAdrv.jl is based on CUDA.jl, while CUDArt.jl 
> has been an independent effort.
>
>
> In other news, we have recently *deprecated the old CUDA.jl package*. All 
> users should now use either CUDArt.jl or CUDAdrv.jl, depending on what 
> suits them best. Neither is a drop-in replacement, but the changes should 
> be minor. At the very least, users will have to change the kernel launch 
> syntax, which should use cudacall as shown above. In the future, we might 
> re-use the CUDA.jl package name for the native programming interface 
> currently at CUDAnative.jl
>
>
> Best,
> Tim
>


[julia-users] Southern California Julia Users Kickoff Meeting - October 7th

2016-09-28 Thread Chris Rackauckas
Hello,
  We are excited to announce a kickoff meeting for the Southern California 
Julia Users on October 7th at 6PM at UCLA (Engineering 4). There will be 
presentations by local Julia users showcasing their work. If you are 
interested in presenting, please let us know through email, Gitter, or this 
thread. We hope this will spark some interest for further events like HPC 
workshops and hackathons. For more information, see the meetup page 
.


Re: [julia-users] Re: [ANN] UnicodeFun

2016-09-28 Thread Chris Rackauckas
What about the other direction? It would be slick to write symbolic code 
for SymEngine or SymPy in that unicode form, and have it converted to the 
appropriate code.

On Wednesday, September 28, 2016 at 6:43:48 AM UTC-7, Simon Danisch wrote:
>
> That's the short form which works with sub/superscript and will create 
> something like: 
> to_fraction("a-123", 392) == "ᵃ⁻¹²³⁄₃₉₂"
>
> For the newline version try:
> to_fraction_nl("α² ⋅ α²⁺³ ≡ α⁷", " ℝ: 𝐴𝐯 = λᵢ𝐯")
>
> I really need to rework the names and documentation ;)
>
> 2016-09-28 15:22 GMT+02:00 henri@gmail.com:
>
>> I probably misused it, but that's what I get (before I add it and checkout).
>>
>> Sorry
>>
>> Henri
>>
>>
>> julia> to_fraction("α² ⋅ α²⁺³ ≡ α⁷", " ℝ: 𝐴𝐯 = λᵢ𝐯")
>> ERROR: Char ² doesn't have a unicode superscript
>>  in to_superscript(::Char) at 
>> /home/pi/.julia/v0.5/UnicodeFun/src/sub_super_scripts.jl:156
>>  in to_superscript(::Base.AbstractIOBuffer{Array{UInt8,1}}, ::String) at 
>> /home/pi/.julia/v0.5/UnicodeFun/src/sub_super_scripts.jl:13
>>  in to_fraction at 
>> /home/pi/.julia/v0.5/UnicodeFun/src/sub_super_scripts.jl:178 [inlined]
>>  in 
>> (::UnicodeFun.##5#6{String,String})(::Base.AbstractIOBuffer{Array{UInt8,1}}) 
>> at /home/pi/.julia/v0.5/UnicodeFun/src/sub_super_scripts.jl:173
>>  in #sprint#304(::Void, ::Function, ::Int64, ::Function) at 
>> ./strings/io.jl:37
>>  in to_fraction(::String, ::String) at 
>> /home/pi/.julia/v0.5/UnicodeFun/src/sub_super_scripts.jl:172
>>
>> julia> 
>>
>>
>> On 28/09/2016 at 13:40, Simon Danisch wrote:
>>
>> I added the to_fraction function:
>>
>> to_fraction("α² ⋅ α²⁺³ ≡ α⁷", " ℝ: 𝐴𝐯 = λᵢ𝐯") -->
>>
>> α̲²̲ ̲⋅̲ ̲α̲²̲⁺̲³̲ ̲≡̲ ̲α̲⁷̲
>>  ℝ: 𝐴𝐯 = λᵢ𝐯
>>
>> https://github.com/SimonDanisch/UnicodeFun.jl/pull/3
>>
>> But I won't have time to add this to the *to*_*latex* function, since 
>> it's a bit more involved with the line break.
>>
>> On Wednesday, September 28, 2016 at 12:05:36 UTC+2, Simon Danisch wrote: 
>>>
>>> Good news everyone!
>>> I've written a small library that offers various transformations of text 
>>> to special Unicode characters.
>>> The most prominent one is the latex-string to latex-unicode:
>>>
>>> "\\itA \\in \\bbR^{nxn}, \\bfv \\in \\bbR^n, \\lambda_i \\in \\bbR: 
>>> \\itA\\bfv = \\lambda_i\\bfv"==> "𝐴 ∈ ℝⁿˣⁿ, 𝐯 ∈ ℝⁿ, λᵢ ∈ ℝ: 𝐴𝐯 = λᵢ𝐯"
>>>
>>>
>>> It doesn't support the whole range of latex, but I hope it will be 
>>> enough for simple formulas.
>>> I will obviously use this library for latex label support in GLVisualize 
>>> , but I hope that this 
>>> library can also be usable in other contexts! (REPL latex renderer?) 
>>> Enjoy with a simple *Pkg.add("UnicodeFun") *(since its freshly 
>>> registered, a Pkg.update() might be needed)
>>>
>>> Please feel free to report latex strings that are not working, or help 
>>> adding new transformations! :)
>>>
>>> Best,
>>> Simon
>>>
>>
>>
>

Re: [julia-users] Re: Why does Julia 0.5 keep complaining about method re-definitions?

2016-09-27 Thread Chris Rackauckas
You could've just used Suppressor.jl 
...
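
For reference, a minimal sketch (these overwrite warnings print to STDERR):

using Suppressor
@suppress_err include("foo.jl") # swallows the method re-definition warnings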

On Tuesday, September 27, 2016 at 9:55:53 PM UTC-7, K leo wrote:
>
>
> On Wednesday, September 28, 2016 at 12:53:12 PM UTC+8, K leo wrote:
>>
>> This is a very heavy install.  It's fetching tons of things that I have not 
>> used.  Not sure what they are, but seems like trashing my system.
>>
>
> julia> Pkg.clone("git://github.com/cstjean/ClobberingReload.jl.git")
> INFO: Cloning ClobberingReload from git://
> github.com/cstjean/ClobberingReload.jl.git
> INFO: Computing changes...
> INFO: Cloning cache of IJulia from 
> https://github.com/JuliaLang/IJulia.jl.git
> INFO: Cloning cache of Nettle from 
> https://github.com/staticfloat/Nettle.jl.git
> INFO: Cloning cache of ZMQ from https://github.com/JuliaLang/ZMQ.jl.git
> INFO: Installing Conda v0.3.2
> INFO: Installing IJulia v1.3.2
> INFO: Installing Nettle v0.2.4
> INFO: Installing ZMQ v0.3.4
> INFO: Building Nettle
> INFO: Building ZMQ
> INFO: Building IJulia
> INFO: Installing Jupyter via the Conda package.
> INFO: Downloading miniconda installer ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time 
>  Current
>  Dload  Upload   Total   SpentLeft 
>  Speed
> 100 25.9M  100 25.9M0 0  1104k  0  0:00:24  0:00:24 --:--:-- 
> 2297k
> INFO: Installing miniconda ...
> PREFIX=/home/xxx/.julia/v0.5/Conda/deps/usr
> installing: _cache-0.0-py27_x0 ...
> installing: python-2.7.11-0 ...
> installing: conda-env-2.4.5-py27_0 ...
> installing: openssl-1.0.2g-0 ...
> installing: pycosat-0.6.1-py27_0 ...
> installing: pyyaml-3.11-py27_1 ...
> installing: readline-6.2-2 ...
> installing: requests-2.9.1-py27_0 ...
> installing: sqlite-3.9.2-0 ...
> installing: tk-8.5.18-0 ...
> installing: yaml-0.1.6-0 ...
> installing: zlib-1.2.8-0 ...
> installing: conda-4.0.5-py27_0 ...
> installing: pycrypto-2.6.1-py27_0 ...
> installing: pip-8.1.1-py27_1 ...
> installing: wheel-0.29.0-py27_0 ...
> installing: setuptools-20.3-py27_0 ...
> Python 2.7.11 :: Continuum Analytics, Inc.
> creating default environment...
> installation finished.
> Fetching package metadata: 
> Solving package specifications: .
>
> Package plan for installation in environment 
> /home/xxx/.julia/v0.5/Conda/deps/usr:
>
> The following packages will be downloaded:
>
> package|build
> ---|-
> conda-env-2.6.0|0  502 B
> expat-2.1.0|0 365 KB
> icu-54.1   |011.3 MB
> jpeg-8d|1 806 KB
> libffi-3.2.1   |0  36 KB
> libgcc-5.2.0   |0 1.1 MB
> libsodium-1.0.10   |0 1.2 MB
> libxcb-1.12|0 1.5 MB
> sqlite-3.13.0  |0 4.0 MB
> dbus-1.10.10   |0 2.4 MB
> glib-2.43.0|1 5.4 MB
> libpng-1.6.22  |0 214 KB
> libxml2-2.9.2  |0 4.2 MB
> python-2.7.12  |112.1 MB
> zeromq-4.1.4   |0 4.1 MB
> backports-1.0  |   py27_0   1 KB
> backports_abc-0.4  |   py27_0   5 KB
> decorator-4.0.10   |   py27_0  12 KB
> enum34-1.1.6   |   py27_0  53 KB
> freetype-2.5.5 |1 2.5 MB
> functools32-3.2.3.2|   py27_0  15 KB
> gstreamer-1.8.0|0 2.6 MB
> ipython_genutils-0.1.0 |   py27_0  32 KB
> markupsafe-0.23|   py27_2  31 KB
> mistune-0.7.3  |   py27_0 560 KB
> path.py-8.2.1  |   py27_0  45 KB
> ptyprocess-0.5.1   |   py27_0  19 KB
> pygments-2.1.3 |   py27_0 1.2 MB
> pytz-2016.6.1  |   py27_0 178 KB
> pyzmq-15.4.0   |   py27_0 705 KB
> ruamel_yaml-0.11.14|   py27_0 352 KB
> simplegeneric-0.8.1|   py27_1   7 KB
> sip-4.18   |   py27_0 264 KB
> six-1.10.0 |   py27_0  16 KB
> wcwidth-0.1.7  |   py27_0  21 KB
> clyent-1.2.2   |   py27_0  15 KB
> conda-4.2.9|   py27_0 360 KB
> configparser-3.5.0 |   py27

[julia-users] Re: Why does Julia 0.5 keep complaining about method re-definitions?

2016-09-27 Thread Chris Rackauckas
Or just a standard way to suppress warnings of a given type (say, 
suppress("MethodDefinition")). For now, Suppressor.jl 
 does well.

On Tuesday, September 27, 2016 at 12:13:00 PM UTC-7, Andrew wrote:
>
> It seems like a lot of people are complaining about this. Is there some 
> way to suppress method overwritten warnings for an include() statement? 
> Perhaps a keyword like include("foo.jl", quietly = true)?
>
> On Tuesday, September 27, 2016 at 1:56:27 PM UTC-4, Daniel Carrera wrote:
>>
>> Hello,
>>
>> I'm not sure when I upgraded, but I am using Julia 0.5 and now it 
>> complains every time I redefine a method, which is basically all the time. 
>> When I'm developing ideas I usually have a file with a script that I modify 
>> and reload all the time:
>>
>> julia> include("foo.jl");
>>
>> ... see the results, edit file ...
>>
>> julia> include("foo.jl");
>>
>> ... see the results, edit file ...
>> julia> include("foo.jl");
>>
>> ... see the results, edit file ...
>>
>>
>> And so on. This is what I do most of the time. But now every time I 
>> `include("foo.jl")` I get warnings for every method that has been redefined 
>> (which is all of them):
>>
>> julia> include("foo.jl");
>>
>> WARNING: Method definition (::Type{Main.Line})(Float64, Float64) in 
>> module Main at /home/daniel/Data/Science/Thesis/SI.jl:4 overwritten at 
>> /home/daniel/Data/Science/Thesis/SI.jl:4.
>> WARNING: Method definition (::Type{Main.Line})(Any, Any) in module Main 
>> at /home/daniel/Data/Science/Thesis/SI.jl:4 overwritten at 
>> /home/daniel/Data/Science/Thesis/SI.jl:4.
>> WARNING: Method definition new_line(Any, Any, Any) in module Main at 
>> /home/daniel/Data/Science/Thesis/SI.jl:8 overwritten at 
>> /home/daniel/Data/Science/Thesis/SI.jl:8.
>>
>>
>> Is there a way that this can be fixed? How can I recover Julia's earlier 
>> behaviour? This is very irritating, and I don't think it makes sense for a 
>> functional language like Julia. If I wrote a method as a variable 
>> assignment (e.g. "foo = x -> 2*x") Julia wouldn't complain.
>>
>>
>> Thanks for the help,
>> Daniel.
>>
>

[julia-users] Re: PkgDev.tag issues

2016-09-27 Thread Chris Rackauckas
It happened to me as well when I tagged yesterday; I've since switched 
computers over to Linux. It was v0.5 on Windows 10.

On Monday, September 26, 2016 at 8:26:00 PM UTC-7, Tony Kelman wrote:
>
> The "no changes to commit" issue sounds like 
> https://github.com/JuliaLang/PkgDev.jl/issues/28 
> ,
>  
> especially since you're on Windows. I don't know what's going on there or 
> where to start debugging. I'm a little surprised you're only the second one 
> to report the issue, I'd think it would have happened to more people by 
> now. What version of Windows?
>
>
> On Saturday, September 24, 2016 at 11:16:35 PM UTC-7, Brandon Taylor wrote:
>>
>> Ok, I deleted the extraneous tags, and retagged. Same thing: messages 
>> about no changes to commit.
>>
>> So I git added the new v0.1.0 folder and then committed manually.
>>
>> Then I tried PkgDev.publish() and I got this:
>>
>> ERROR: GitError(Code:EAUTH, Class:None, No errors)
>>  in macro expansion at .\libgit2\error.jl:99 [inlined]
>>  in #push#53(::Bool, ::Base.LibGit2.PushOptions, ::Function, 
>> ::Base.LibGit2.GitRemote, ::Array{String,1}) at .\libgit2\remote.jl:84
>>  in (::Base.LibGit2.#kw##push)(::Array{Any,1}, ::Base.LibGit2.#push, 
>> ::Base.LibGit2.GitRemote, ::Array{String,1}) at .\:0
>>  in #push#94(::String, ::String, ::Array{String,1}, ::Bool, 
>> ::Nullable{Base.LibGit2.UserPasswordCredentials}, ::Function, 
>> ::Base.LibGit2.GitRepo) at .\libgit2\libgit2.jl:185
>>  in (::Base.LibGit2.#kw##push)(::Array{Any,1}, ::Base.LibGit2.#push, 
>> ::Base.LibGit2.GitRepo) at .\:0
>>  in 
>> (::PkgDev.Entry.##6#11{Dict{String,Array{String,1}}})(::Base.LibGit2.GitRepo)
>>  
>> at C:\Users\jsnot\.julia\v0.5\PkgDev\src\entry.jl:114
>>  in with(::PkgDev.Entry.##6#11{Dict{String,Array{String,1}}}, 
>> ::Base.LibGit2.GitRepo) at .\libgit2\types.jl:638
>>  in publish(::String, ::String) at 
>> C:\Users\jsnot\.julia\v0.5\PkgDev\src\entry.jl:97
>>  in publish() at C:\Users\jsnot\.julia\v0.5\PkgDev\src\PkgDev.jl:70
>>
>> So then I tried to checkout PkgDev as suggested here:
>> https://github.com/JuliaLang/PkgDev.jl/issues/69
>>
>> and now I'm getting
>>
>> INFO: Validating METADATA
>> INFO: Creating a personal access token for Julia Package Manager on 
>> GitHub.
>> You will be asked to provide credentials to your GitHub account.
>> Enter host password for user 'bramtayl':
>> INFO: Pushing ChainMap permanent tags: v0.1.0
>> INFO: Submitting METADATA changes
>> INFO: Forking JuliaLang/METADATA.jl to bramtayl
>> INFO: Pushing changes as branch pull-request/2ed12a90
>> ERROR: GitError(Code:ERROR, Class:Net, Remote error: access denied or 
>> repository not exported: /2/nw/29/05/c9/6106340/69147216.git)
>>  in macro expansion at .\libgit2\error.jl:99 [inlined]
>>  in #push#53(::Bool, ::Base.LibGit2.PushOptions, ::Function, 
>> ::Base.LibGit2.GitRemote, ::Array{String,1}) at .\libgit2\remote.jl:84
>>  in (::Base.LibGit2.#kw##push)(::Array{Any,1}, ::Base.LibGit2.#push, 
>> ::Base.LibGit2.GitRemote, ::Array{String,1}) at .\:0
>>  in #push#94(::String, ::String, ::Array{String,1}, ::Bool, 
>> ::Nullable{Base.LibGit2.UserPasswordCredentials}, ::Function, 
>> ::Base.LibGit2.GitRepo) at .\libgit2\libgit2.jl:185
>>  in (::Base.LibGit2.#kw##push)(::Array{Any,1}, ::Base.LibGit2.#push, 
>> ::Base.LibGit2.GitRepo) at .\:0
>>  in (::PkgDev.Entry.##2#3)(::Base.LibGit2.GitRepo) at 
>> C:\Users\jsnot\.julia\v0.5\PkgDev\src\entry.jl:39
>>  in with(::PkgDev.Entry.##2#3, ::Base.LibGit2.GitRepo) at 
>> .\libgit2\types.jl:638
>>  in #pull_request#1(::String, ::String, ::String, ::Function, ::String) 
>> at C:\Users\jsnot\.julia\v0.5\PkgDev\src\entry.jl:15
>>  in (::PkgDev.Entry.#kw##pull_request)(::Array{Any,1}, 
>> ::PkgDev.Entry.#pull_request, ::String) at .\:0
>>  in publish(::String, ::String) at 
>> C:\Users\jsnot\.julia\v0.5\PkgDev\src\entry.jl:121
>>  in publish() at C:\Users\jsnot\.julia\v0.5\PkgDev\src\PkgDev.jl:70
>>
>>
>> On Saturday, September 24, 2016 at 10:02:21 PM UTC-4, Tony Kelman wrote:
>>>
>>> You may have to remove the git tag from your local clone of the package 
>>> repo.
>>
>>

[julia-users] Re: Benchmarking Julia

2016-09-27 Thread Chris Rackauckas
Is your code still type-stable, and are the types inferred correctly? There 
are some type-inference issues that came up when I transitioned a lot of 
things to v0.5. After I got rid of those, the performance on v0.5 ended up 
being a bit better (most likely due to -O3 optimization). I transitioned 
before the release candidates, so the waters were murkier then, but I know 
that there are a few type-inference bugs being worked out that may be the 
cause.
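
A quick way to check is @code_warntype; a toy example of what an instability 
looks like:

f(x) = x > 0 ? 1 : 1.0 # type-unstable: returns an Int or a Float64
@code_warntype f(1)    # the Union{Float64,Int64} body type flags the problem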

On Tuesday, September 27, 2016 at 1:44:30 AM UTC-7, cormu...@mac.com wrote:
>
> I've become convinced that upgrading my Mac to the latest OS (Sierra) has 
> slowed down Julia in some areas. (One test showed a fourfold speed 
> reduction compared with the same test running on the last release.) But to 
> get some real-world numbers and eliminate some obvious explanations I'm 
> looking for something simple to install that measures general Julia 
> performance on a particular processor, compared only with itself (not 
> compared with Octave or C, for example). 
>
> Is there a ready-made suite of benchmarks that can be easily run on 0.4.7 
> and 0.5 that gives a score I can use to compare with other computers 
> running Julia?
>
>

[julia-users] Re: Is there a way to download a copy of Plots' documentation?

2016-09-25 Thread Chris Rackauckas
You have to do what Sundara describes. This is a limitation of 
Documenter.jl with the mkdocs renderer, so that would be a feature request 
for Documenter.jl . I think it's 
planned for the new native renderer, but I couldn't find the issue (and 
Plots would have to switch to the native renderer; it's not hard, but it's 
still a little bit of work).

On Sunday, September 25, 2016 at 6:10:42 AM UTC-7, SundaraRaman R wrote:
>
> I don't know if the following is the best or easiest way, but: 
>
> It appears the Plots.jl doc sources (
> https://github.com/JuliaPlots/PlotDocs.jl) are in MkDocs format, which by 
> itself only allows html output; however, the author of MkDocs has released 
> a Python package (https://github.com/jgrassler/mkdocs-pandoc) which 
> allows you to convert the mkdocs-style sources into pandoc-style sources - 
> which you can then feed to pandoc, to get PDF, EPUB, etc.
>
> On Sunday, September 25, 2016 at 6:00:44 PM UTC+5:30, K leo wrote:
>>
>> in epub or even in pdf
>>
>

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-22 Thread Chris Rackauckas
So, in the end, is `@fastmath` supposed to be adding FMA? Should I open an 
issue?

On Wednesday, September 21, 2016 at 7:11:14 PM UTC-7, Yichao Yu wrote:
>
> On Wed, Sep 21, 2016 at 9:49 PM, Erik Schnetter  > wrote: 
> > I confirm that I can't get Julia to synthesize a `vfmadd` instruction 
> > either... Sorry for sending you on a wild goose chase. 
>
> -march=haswell does the trick for C (both clang and gcc) 
> the necessary bit for the machine ir optimization (this is not a llvm 
> ir optimization pass) to do this is llc options -mcpu=haswell and 
> function attribute unsafe-fp-math=true. 
>
> > 
> > -erik 
> > 
> > On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  > wrote: 
> >> 
> >> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter  > 
> >> wrote: 
> >> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas  > 
> >> > wrote: 
> >> >> 
> >> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg 
> and 
> >> >> now 
> >> >> I get results where g and h apply muladd/fma in the native code, but 
> a 
> >> >> new 
> >> >> function k which is `@fastmath` inside of f does not apply 
> muladd/fma. 
> >> >> 
> >> >> 
> >> >> 
> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910 
> >> >> 
> >> >> Should I open an issue? 
> >> > 
> >> > 
> >> > In your case, LLVM apparently thinks that `x + x + 3` is faster to 
> >> > calculate 
> >> > than `2x+3`. If you use a less round number than `2` multiplying `x`, 
> >> > you 
> >> > might see a different behaviour. 
> >> 
> >> I've personally never seen llvm create fma from mul and add. We might 
> >> not have the llvm passes enabled if LLVM is capable of doing this at 
> >> all. 
> >> 
> >> > 
> >> > -erik 
> >> > 
> >> > 
> >> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't 
> rebuilding 
> >> >> for some reason, so I may need to just build from source. 
> >> >> 
> >> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter 
> >> >> wrote: 
> >> >>> 
> >> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <
> rack...@gmail.com> 
> >> >>> wrote: 
> >> >>>> 
> >> >>>> Hi, 
> >> >>>>   First of all, does LLVM essentially fma or muladd expressions 
> like 
> >> >>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one 
> >> >>>> explicitly use 
> >> >>>> `muladd` and `fma` on these types of instructions (is there a 
> macro 
> >> >>>> for 
> >> >>>> making this easier)? 
> >> >>> 
> >> >>> 
> >> >>> Yes, LLVM will use fma machine instructions -- but only if they 
> lead 
> >> >>> to 
> >> >>> the same round-off error as using separate multiply and add 
> >> >>> instructions. If 
> >> >>> you do not care about the details of conforming to the IEEE 
> standard, 
> >> >>> then 
> >> >>> you can use the `@fastmath` macro that enables several 
> optimizations, 
> >> >>> including this one. This is described in the manual 
> >> >>> 
> >> >>> <
> http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations>.
>  
>
> >> >>> 
> >> >>> 
> >> >>>>   Secondly, I am wondering if my setup is not applying these 
> >> >>>> operations 
> >> >>>> correctly. Here's my test code: 
> >> >>>> 
> >> >>>> f(x) = 2.0x + 3.0 
> >> >>>> g(x) = muladd(x,2.0, 3.0) 
> >> >>>> h(x) = fma(x,2.0, 3.0) 
> >> >>>> 
> >> >>>> @code_llvm f(4.0) 
> >> >>>> @code_llvm g(4.0) 
> >> >>>> @code_llvm h(4.0) 
> >> >>>> 
> >> >>>> @code_native f(4.0) 
> >> >>>> @code_native g(4.0) 
> >> >>>> @code_native h(4.0) 
> >> >>>> 
> >> >>>> Computer 1 
> >> >>>> 
> >> >

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
Still no FMA?

julia> k(x) = @fastmath 2.4x + 3.0
WARNING: Method definition k(Any) in module Main at REPL[14]:1 overwritten 
at REPL[23]:1.
k (generic function with 1 method)

julia> @code_llvm k(4.0)

; Function Attrs: uwtable
define double @julia_k_66737(double) #0 {
top:
  %1 = fmul fast double %0, 2.40e+00
  %2 = fadd fast double %1, 3.00e+00
  ret double %2
}

julia> @code_native k(4.0)
.text
Filename: REPL[23]
pushq   %rbp
movq%rsp, %rbp
movabsq $568231032, %rax# imm = 0x21DE8478
Source line: 1
vmulsd  (%rax), %xmm0, %xmm0
movabsq $568231040, %rax# imm = 0x21DE8480
vaddsd  (%rax), %xmm0, %xmm0
popq%rbp
retq
nopw%cs:(%rax,%rax)



On Wednesday, September 21, 2016 at 6:29:44 PM UTC-7, Erik Schnetter wrote:
>
> On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas  > wrote:
>
>> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now 
>> I get results where g and h apply muladd/fma in the native code, but a new 
>> function k which is `@fastmath` inside of f does not apply muladd/fma.
>>
>> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>>
>> Should I open an issue?
>>
>
> In your case, LLVM apparently thinks that `x + x + 3` is faster to 
> calculate than `2x+3`. If you use a less round number than `2` multiplying 
> `x`, you might see a different behaviour.
>
> -erik
>
>
> Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding 
>> for some reason, so I may need to just build from source.
>>
>> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter 
>> wrote:
>>>
>>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas  
>>> wrote:
>>>
>>>> Hi,
>>>>   First of all, does LLVM essentially fma or muladd expressions like 
>>>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
>>>> `muladd` and `fma` on these types of instructions (is there a macro for 
>>>> making this easier)?
>>>>
>>>
>>> Yes, LLVM will use fma machine instructions -- but only if they lead to 
>>> the same round-off error as using separate multiply and add instructions. 
>>> If you do not care about the details of conforming to the IEEE standard, 
>>> then you can use the `@fastmath` macro that enables several optimizations, 
>>> including this one. This is described in the manual <
>>> http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations
>>> >.
>>>
>>>
>>>   Secondly, I am wondering if my setup is not applying these operations 
>>>> correctly. Here's my test code:
>>>>
>>>> f(x) = 2.0x + 3.0
>>>> g(x) = muladd(x,2.0, 3.0)
>>>> h(x) = fma(x,2.0, 3.0)
>>>>
>>>> @code_llvm f(4.0)
>>>> @code_llvm g(4.0)
>>>> @code_llvm h(4.0)
>>>>
>>>> @code_native f(4.0)
>>>> @code_native g(4.0)
>>>> @code_native h(4.0)
>>>>
>>>> *Computer 1*
>>>>
>>>> Julia Version 0.5.0-rc4+0
>>>> Commit 9c76c3e* (2016-09-09 01:43 UTC)
>>>> Platform Info:
>>>>   System: Linux (x86_64-redhat-linux)
>>>>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>>>   WORD_SIZE: 64
>>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>>>   LAPACK: libopenblasp.so.0
>>>>   LIBM: libopenlibm
>>>>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>>>>
>>>
>>> This looks good, the "broadwell" architecture that LLVM uses should 
>>> imply the respective optimizations. Try with `@fastmath`.
>>>
>>> -erik
>>>
>>>
>>>
>>>  
>>>
>>>> (the COPR nightly on CentOS7) with 
>>>>
>>>> [crackauc@crackauc2 ~]$ lscpu
>>>> Architecture:  x86_64
>>>> CPU op-mode(s):32-bit, 64-bit
>>>> Byte Order:Little Endian
>>>> CPU(s):16
>>>> On-line CPU(s) list:   0-15
>>>> Thread(s) per core:1
>>>> Core(s) per socket:8
>>>> Socket(s): 2
>>>> NUMA node(s):  2
>>>> Vendor ID: GenuineIntel
>>>> CPU family:6
>>>> Model: 79
>>>> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>>> Steppi

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg and now I 
get results where g and h apply muladd/fma in the native code, but a new 
function k, which is f with `@fastmath` applied, does not use muladd/fma.

https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910

Should I open an issue?

Note that this is on v0.6 Windows. On Linux the sysimg isn't rebuilding for 
some reason, so I may need to just build from source.

On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter wrote:
>
> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas  > wrote:
>
>> Hi,
>>   First of all, does LLVM essentially fma or muladd expressions like 
>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
>> `muladd` and `fma` on these types of instructions (is there a macro for 
>> making this easier)?
>>
>
> Yes, LLVM will use fma machine instructions -- but only if they lead to 
> the same round-off error as using separate multiply and add instructions. 
> If you do not care about the details of conforming to the IEEE standard, 
> then you can use the `@fastmath` macro that enables several optimizations, 
> including this one. This is described in the manual <
> http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations
> >.
>
>
>   Secondly, I am wondering if my setup is not applying these operations 
>> correctly. Here's my test code:
>>
>> f(x) = 2.0x + 3.0
>> g(x) = muladd(x,2.0, 3.0)
>> h(x) = fma(x,2.0, 3.0)
>>
>> @code_llvm f(4.0)
>> @code_llvm g(4.0)
>> @code_llvm h(4.0)
>>
>> @code_native f(4.0)
>> @code_native g(4.0)
>> @code_native h(4.0)
>>
>> *Computer 1*
>>
>> Julia Version 0.5.0-rc4+0
>> Commit 9c76c3e* (2016-09-09 01:43 UTC)
>> Platform Info:
>>   System: Linux (x86_64-redhat-linux)
>>   CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>   LAPACK: libopenblasp.so.0
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>>
>
> This looks good, the "broadwell" architecture that LLVM uses should imply 
> the respective optimizations. Try with `@fastmath`.
>
> -erik
>
>
>
>  
>
>> (the COPR nightly on CentOS7) with 
>>
>> [crackauc@crackauc2 ~]$ lscpu
>> Architecture:  x86_64
>> CPU op-mode(s):32-bit, 64-bit
>> Byte Order:Little Endian
>> CPU(s):16
>> On-line CPU(s) list:   0-15
>> Thread(s) per core:1
>> Core(s) per socket:8
>> Socket(s): 2
>> NUMA node(s):  2
>> Vendor ID: GenuineIntel
>> CPU family:6
>> Model: 79
>> Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>> Stepping:  1
>> CPU MHz:   1200.000
>> BogoMIPS:  6392.58
>> Virtualization:VT-x
>> L1d cache: 32K
>> L1i cache: 32K
>> L2 cache:  256K
>> L3 cache:  25600K
>> NUMA node0 CPU(s): 0-7
>> NUMA node1 CPU(s): 8-15
>>
>>
>>
>> I get the output
>>
>> define double @julia_f_72025(double) #0 {
>> top:
>>   %1 = fmul double %0, 2.00e+00
>>   %2 = fadd double %1, 3.00e+00
>>   ret double %2
>> }
>>
>> define double @julia_g_72027(double) #0 {
>> top:
>>   %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00, 
>> double 3.00e+00)
>>   ret double %1
>> }
>>
>> define double @julia_h_72029(double) #0 {
>> top:
>>   %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double 
>> 3.00e+00)
>>   ret double %1
>> }
>> .text
>> Filename: fmatest.jl
>> pushq %rbp
>> movq %rsp, %rbp
>> Source line: 1
>> addsd %xmm0, %xmm0
>> movabsq $139916162906520, %rax  # imm = 0x7F40C5303998
>> addsd (%rax), %xmm0
>> popq %rbp
>> retq
>> nopl (%rax,%rax)
>> .text
>> Filename: fmatest.jl
>> pushq %rbp
>> movq %rsp, %rbp
>> Source line: 2
>> addsd %xmm0, %xmm0
>> movabsq $139916162906648, %rax  # imm = 0x7F40C5303A18
>> addsd (%rax), %xmm0
>> popq %rbp
>> retq
>> nopl (%rax,%rax)
>> .text
>> Filename: fmatest.jl
>> pushq %rbp
>> movq %rsp, %rbp
>> movabsq $139916162906776, %rax  # imm = 0x7F40C5303A98
>> Source line: 3
>> movsd (%rax), %xmm1   # xmm1 = mem[0]

Re: [julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
I see. So what I am getting is that, in my codes, 

1. I will need to add @fastmath anywhere I want these optimizations to show 
up. That should be easy since I can just add it at the beginnings of loops 
where I have @inbounds, which already covers every major inner loop I have. 
Easy find/replace fix (see the sketch after this list). 

2. For my own setup, I am going to need to build from source to get all the 
optimizations? I would've thought the point of using the Linux repositories 
instead of the generic binaries is that they would be set up to build for 
your system. That's just a non-expert's misconception I guess? I think that 
should be highlighted somewhere.
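
A sketch of the point-1 change on a made-up inner loop:

function step!(du, u, a)
    @inbounds @fastmath for i in eachindex(u)
        du[i] = a*u[i] + 3.0 # the mul+add is now eligible for fma contraction
    end
    return du
end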

On Wednesday, September 21, 2016 at 12:11:34 PM UTC-7, Milan Bouchet-Valat 
wrote:
>
> Le mercredi 21 septembre 2016 à 11:36 -0700, Chris Rackauckas a écrit : 
> > The Windows one is using the pre-built binary. The Linux one uses the 
> > COPR nightly (I assume that should build with all the goodies?) 
> The Copr RPMs are subject to the same constraint as official binaries: 
> we need them to work on most machines. So they don't enable FMA (nor 
> e.g. AVX) either. 
>
> It would be nice to find a way to ship with several pre-built sysimg 
> files and using the highest instruction set supported by your CPU. 
>
>
> Regards 
>
> > > > Hi, 
> > > >   First of all, does LLVM essentially fma or muladd expressions 
> > > > like `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one 
> > > > explicitly use `muladd` and `fma` on these types of instructions 
> > > > (is there a macro for making this easier)? 
> > > > 
> > > 
> > > You will generally need to use muladd, unless you use @fastmath. 
> > > 
> > >   
> > > >   Secondly, I am wondering if my setup is no applying these 
> > > > operations correctly. Here's my test code: 
> > > > 
> > > 
> > > If you're using the prebuilt downloads (as opposed to building from 
> > > source), you will need to rebuild the sysimg (look in 
> > > contrib/build_sysimg.jl) as we build for the lowest-common 
> > > architecture. 
> > > 
> > > -Simon 
> > > 
>


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Chris Rackauckas
Usually I'm plotting the results of really long differential equation 
solutions. The one I am mentioning is from a really long stochastic 
differential equation solution (publication coming soon). 19 lines with 
likely millions of dots, thrown together into one figure or spanning 
multiple. I can't really explain "faster" other than, when I ran the plot 
command afterwards (on smaller test cases) PyPlot would take forever but GR 
would get the plot done much quicker, so for the longer run I went with GR 
and it worked. I am not much of a plot guy so my method is, use Plots.jl, 
switch backends to find something that works, and if I can't find an easy 
solution like this, go ask Tom :). What I am saying is, if you do some 
experiments, GR will plot faster than something like Gadfly, PyPlot, 
(Plotly gave issues, this was in June so it may no longer be present) etc., 
so my hint is to give the GR backend a try if you're ever in a similar case.
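
For the record, the backend switch in Plots.jl is a one-liner:

using Plots
gr()                        # switch the backend to GR
plot(rand(10^6), alpha=0.3) # the same plotting call works on any backend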

On Wednesday, September 21, 2016 at 11:54:11 AM UTC-7, Andreas Lobinger 
wrote:
>
> Hello colleague,
>
> On Wednesday, September 21, 2016 at 8:34:21 PM UTC+2, Chris Rackauckas 
> wrote:
>>
>> I've managed to plot quite a few large datasets. GR through Plots.jl 
>> works very well for this. I tend to still prefer the defaults of PyPlot, 
>> but GR is just so much faster that I switch the backend whenever the amount 
>> of data gets unruly (larger than like 5-10GB, and it's worked to save a 
>> raster image from data larger than 40-50 GB). Plots + GR is a good combo
>>
>
> Could you explain this in more length, especially the 'faster'? It sounds 
> like your plotting a few hundred million items/lines.
>


[julia-users] Re: Is FMA/Muladd Working Here?

2016-09-21 Thread Chris Rackauckas
The Windows one is using the pre-built binary. The Linux one uses the COPR 
nightly (I assume that should build with all the goodies?)

On Wednesday, September 21, 2016 at 9:00:02 AM UTC-7, Simon Byrne wrote:
>
> On Wednesday, 21 September 2016 06:56:45 UTC+1, Chris Rackauckas wrote:
>>
>> Hi,
>>   First of all, does LLVM essentially fma or muladd expressions like 
>> `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
>> `muladd` and `fma` on these types of instructions (is there a macro for 
>> making this easier)?
>>
>
> You will generally need to use muladd, unless you use @fastmath.
>
>  
>
>>   Secondly, I am wondering if my setup is not applying these operations 
>> correctly. Here's my test code:
>>
>
> If you're using the prebuilt downloads (as opposed to building from 
> source), you will need to rebuild the sysimg (look in 
> contrib/build_sysimg.jl) as we build for the lowest-common architecture.
>
> -Simon
>


[julia-users] Re: Plotting lots of data

2016-09-21 Thread Chris Rackauckas
I've managed to plot quite a few large datasets. GR through Plots.jl works 
very well for this. I tend to still prefer the defaults of PyPlot, but GR 
is just so much faster that I switch the backend whenever the amount of 
data gets unruly (larger than like 5-10GB, and it's worked to save a raster 
image from data larger than 40-50 GB). Plots + GR is a good combo.

On Wednesday, September 21, 2016 at 4:52:43 AM UTC-7, Igor wrote:
>
> Hello!
> did you manage to plot big data sets? You can try to use my small package 
> for this (  https://github.com/ig-or/qwtwplot.jl ) - it's very 
> interesting for me how it can handle big data sets.
>
> Best regards, Igor
>
>
> On Thursday, June 16, 2016 at 0:08:42 UTC+3, CrocoDuck O'Ducks 
> wrote:
>>
>> Hi, thank you very much, really appreciated. GR seems pretty much what I 
>> need. I like that I can use Plots.jl with it. PlotlyJS.jl is very hot; I guess I 
>> will use it when I need interactivity. I will look into OpenGL related 
>> visualization tools for more advanced plots/renders.
>>
>> I just have a quick question. I just did a quick test with GR plotting 
>> two 1 second long sine waves sampled at 192 kHz, one of frequency 100 Hz 
>> and one of frequency 10 kHz. The 100 Hz looks fine but the 10 kHz plot has 
>> blank areas (see attached pictures). I guess it is due to the density of 
>> lines... probably solved by making the plot bigger?
>>
>>

[julia-users] Is FMA/Muladd Working Here?

2016-09-20 Thread Chris Rackauckas
Hi,
  First of all, does LLVM essentially fma or muladd expressions like `a1*x1 
+ a2*x2 + a3*x3 + a4*x4`? Or is it required that one explicitly use 
`muladd` and `fma` on these types of instructions (is there a macro for 
making this easier)?

  Secondly, I am wondering if my setup is not applying these operations 
correctly. Here's my test code:

f(x) = 2.0x + 3.0
g(x) = muladd(x,2.0, 3.0)
h(x) = fma(x,2.0, 3.0)

@code_llvm f(4.0)
@code_llvm g(4.0)
@code_llvm h(4.0)

@code_native f(4.0)
@code_native g(4.0)
@code_native h(4.0)

*Computer 1*

Julia Version 0.5.0-rc4+0
Commit 9c76c3e* (2016-09-09 01:43 UTC)
Platform Info:
  System: Linux (x86_64-redhat-linux)
  CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
  WORD_SIZE: 64
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblasp.so.0
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)

(the COPR nightly on CentOS7) with 

[crackauc@crackauc2 ~]$ lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):16
On-line CPU(s) list:   0-15
Thread(s) per core:1
Core(s) per socket:8
Socket(s): 2
NUMA node(s):  2
Vendor ID: GenuineIntel
CPU family:6
Model: 79
Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
Stepping:  1
CPU MHz:   1200.000
BogoMIPS:  6392.58
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  25600K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15



I get the output

define double @julia_f_72025(double) #0 {
top:
  %1 = fmul double %0, 2.00e+00
  %2 = fadd double %1, 3.00e+00
  ret double %2
}

define double @julia_g_72027(double) #0 {
top:
  %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00, double 
3.00e+00)
  ret double %1
}

define double @julia_h_72029(double) #0 {
top:
  %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double 
3.00e+00)
  ret double %1
}
.text
Filename: fmatest.jl
pushq %rbp
movq %rsp, %rbp
Source line: 1
addsd %xmm0, %xmm0
movabsq $139916162906520, %rax  # imm = 0x7F40C5303998
addsd (%rax), %xmm0
popq %rbp
retq
nopl (%rax,%rax)
.text
Filename: fmatest.jl
pushq %rbp
movq %rsp, %rbp
Source line: 2
addsd %xmm0, %xmm0
movabsq $139916162906648, %rax  # imm = 0x7F40C5303A18
addsd (%rax), %xmm0
popq %rbp
retq
nopl (%rax,%rax)
.text
Filename: fmatest.jl
pushq %rbp
movq %rsp, %rbp
movabsq $139916162906776, %rax  # imm = 0x7F40C5303A98
Source line: 3
movsd (%rax), %xmm1   # xmm1 = mem[0],zero
movabsq $139916162906784, %rax  # imm = 0x7F40C5303AA0
movsd (%rax), %xmm2   # xmm2 = mem[0],zero
movabsq $139925776008800, %rax  # imm = 0x7F43022C8660
popq %rbp
jmpq *%rax
nopl (%rax)

It looks like explicit muladd or not ends up at the same native code, but 
is that native code actually doing an fma? The fma native code is different, 
but from a discussion on Gitter it seems that might be a software FMA? This 
computer is set up with the BIOS setting as LAPACK-optimized or something 
like that, so is that messing with something?

*Computer 2*

Julia Version 0.6.0-dev.557
Commit c7a4897 (2016-09-08 17:50 UTC)
Platform Info:
  System: NT (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, haswell)


on a 4770k i7, Windows 10, I get the output

; Function Attrs: uwtable
define double @julia_f_66153(double) #0 {
top:
  %1 = fmul double %0, 2.00e+00
  %2 = fadd double %1, 3.00e+00
  ret double %2
}

; Function Attrs: uwtable
define double @julia_g_66157(double) #0 {
top:
  %1 = call double @llvm.fmuladd.f64(double %0, double 2.00e+00, double 
3.00e+00)
  ret double %1
}

; Function Attrs: uwtable
define double @julia_h_66158(double) #0 {
top:
  %1 = call double @llvm.fma.f64(double %0, double 2.00e+00, double 
3.00e+00)
  ret double %1
}
.text
Filename: console
pushq %rbp
movq %rsp, %rbp
Source line: 1
addsd %xmm0, %xmm0
movabsq $534749456, %rax# imm = 0x1FDFA110
addsd (%rax), %xmm0
popq %rbp
retq
nopl (%rax,%rax)
.text
Filename: console
pushq %rbp
movq %rsp, %rbp
Source line: 2
addsd %xmm0, %xmm0
movabsq $534749584, %rax# imm = 0x1FDFA190
addsd (%rax), %xmm0
popq %rbp
retq
nopl (%rax,%rax)
.text
Filename: console
pushq %rbp
movq %rsp, %rbp
movabsq $534749712, %rax# imm = 0x1FDFA210
Source line: 3
movsd dcabs164_(%rax), %xmm1  # xmm1 = mem[0],zero
movabsq $534749720, %rax# imm = 0x1FDFA218
movsd (%rax), %xmm2   # xmm2 = mem[0],zero
movabsq $fma, %rax
popq %rbp
jmpq *%rax
nop

This seems to be similar to the first result.



[julia-users] Re: Adding publications easier

2016-09-20 Thread Chris Rackauckas
I think he's talking about the fact that this specifically is more than 
Github: it also requires using pandoc and 
Jekyll: 
https://github.com/JuliaLang/julialang.github.com/tree/master/publications

If the repo somehow ran a build script when checking the PR so that way all 
you had to do was edit the .bib and index.md file, that probably would 
lower the barrier to entry (and could be done straight from the browser). 
That would require a smart setup like what's done for building docs, and 
probably overkill for this. It's probably easier on the maintainer side 
just to tell people to ask for help. (And it's better to keep it as a .bib 
instead of directly editing the Markdown/HTML so that way formatting is 
always the same / correct).

On Tuesday, September 20, 2016 at 12:14:21 PM UTC-7, Tony Kelman wrote:
>
> What do you propose? Github is about as simple as we can do, considering 
> also the complexity of maintaining something from thr project side. There 
> are plenty of people around the community who are happy to walk you through 
> the process of making a pull request, and if it's not explained in enough 
> detail then we can add more instructions if it would help. What have you 
> tried so far?



[julia-users] Re: Any serious quant finance package for Julia?

2016-09-20 Thread Chris Rackauckas
We're using Julia: packages correctly designed with multiple dispatch 
(DifferentialEquations.jl, the direction that JuliaML and Optim.jl are 
going with the iterator interfaces, etc.) allow for these to be extendable: 
adding a new algorithm is just making a new dispatch or a new type. So if 
your specific algorithm isn't implemented, then you can implement it into 
one of these packages without even modifying their source (though PRs would 
likely be accepted). And using specialized types while injecting code into 
callbacks or in an iterator format should allow almost anything. I'm not 
sure what's missing. But what's gained is that the algorithms would plug 
into the package and get all the goodies like convergence testing, plot 
capabilities, etc., while all you have to do is specify one part.
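
As a hypothetical sketch of what "a new algorithm is just a new dispatch" 
means (every name here is a stand-in, not any package's actual API):

abstract AbstractSDEAlgorithm # v0.5-era syntax
immutable MyScheme <: AbstractSDEAlgorithm end

# A package dispatching solve on the algorithm type picks this method up
# without its source being modified:
function solve(prob, alg::MyScheme)
    # custom stepping code goes here; returning the package's solution type
    # keeps the plotting/convergence goodies working
end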

On Monday, September 19, 2016 at 10:08:16 PM UTC-7, esproff wrote:
>
> I've only used Convex.jl extensively, and NLopt.jl a bit, so I don't 
> really know whether Julia's optimization ecosystem would cover all possible 
> eventualities, I'm just speaking from experience when I say that it's rare 
> that I'm able to rely solely on off-the-shelf packages when writing 
> numerical algorithms.  If that's not the case here than so much the better.
>
> On Monday, September 19, 2016 at 9:45:13 PM UTC-7, Chris Rackauckas wrote:
>>
>> If you implement an optimization routine for a specific type of 
>> functions, why not have that maintained with the API and structure of 
>> Optim.jl, and then use it for your specific case? What about the JuliaML 
>> interfaces / the Optim iterator interface do you find would be a limitation 
>> to a quant library?
>>
>> Optim.jl and things like Learn.jl are designed to be metapackages which 
>> contain numerous solvers, each optimized for different domains. I think any 
>> optimization algorithm would benefit from the infrastructure that is being 
>> built into these packages, instead of making one-off implementations (let 
>> alone then it's easier to use for other applications). And the iterator 
>> format is pretty general and allows a ton of flexibility. If there's something 
>> about the design that you see not working for this, it's better to help 
>> them fix their design than to attempt to double up efforts and (most 
>> likely) not get more optimized algorithms.
>>
>> On Monday, September 19, 2016 at 9:27:38 PM UTC-7, esproff wrote:
>>>
>>> Ok Chris I'll definitely check out Plots.jl.
>>>
>>> As for optimization packages, more than one will probably have to be 
>>> used depending on the problem: disciplined convex vs numerical vs global vs 
>>> etc.
>>>
>>> And, as always, some optimization algorithms will have to be custom 
>>> rolled out since established packages will never have everything you need 
>>> exactly as you need it.
>>>
>>> On Monday, September 19, 2016 at 7:29:30 PM UTC-7, Chris Rackauckas 
>>> wrote:
>>>>
>>>> I was saying that Quantlib, not Quantlib.jl, had rudimentary numerical 
>>>> methods. The main reason is probably because they only implemented a few 
>>>> here and there, instead of focusing heavily in the numerical solvers or 
>>>> using available libraries. There's no reason to do this in Julia: you have 
>>>> access to a large set of packages to provide the different aspects. 
>>>> Piecing 
>>>> together a metapackage of Julia packages in a way that curates them into a 
>>>> library specific for solving financial equations would easily give you a 
>>>> more sophisticated package than Quantlib achieves. That's why I'm asking 
>>>> what you'd like to see on the differential equations side because 
>>>> DifferentialEquations.jl already offers a bunch of methods for SDEs which 
>>>> could have simple front-ends thrown on them to become 
>>>> "GeneralizedBlackScholes" and etc. solvers. The same should be done with 
>>>> the other parts of Quantlib like optimization, and you'll easily get a 
>>>> vast 
>>>> library of routines specifically tailored to mathematical finance problems 
>>>> which will outperform what is given by Quantlib.
>>>>
>>>> As to esproff's suggestion, Plots.jl should be targeted instead of 
>>>> Gadfly for a few reasons. For one, plot recipes are a powerful way to link 
>>>> a package to plotting ability that would make most of the plotting work 
>>>> trivial. Secondly, recipes would add plotting capabilities 

[julia-users] Re: Any serious quant finance package for Julia?

2016-09-19 Thread Chris Rackauckas
If you implement an optimization routine for a specific type of functions, 
why not have that maintained with the API and structure of Optim.jl, and 
then use it for your specific case? What about the JuliaML interfaces / the 
Optim iterator interface do you find would be a limitation to a quant 
library?

Optim.jl and things like Learn.jl are designed to be metapackages which 
contain numerous solvers, each optimized for different domains. I think any 
optimization algorithm would benefit from the infrastructure that is being 
built into these packages, instead of making one-off implementations (let 
alone then it's easier to use for other applications). And the iterator 
format is pretty general and allows a ton of flexibility. If there's something 
about the design that you see not working for this, it's better to help 
them fix their design than to attempt to double up efforts and (most 
likely) not get more optimized algorithms.

On Monday, September 19, 2016 at 9:27:38 PM UTC-7, esproff wrote:
>
> Ok Chris I'll definitely check out Plots.jl.
>
> As for optimization packages, more than one will probably have to be used 
> depending on the problem: disciplined convex vs numerical vs global vs etc.
>
> And, as always, some optimization algorithms will have to be custom rolled 
> out since established packages will never have everything you need exactly 
> as you need it.
>
> On Monday, September 19, 2016 at 7:29:30 PM UTC-7, Chris Rackauckas wrote:
>>
>> I was saying that Quantlib, not Quantlib.jl, had rudimentary numerical 
>> methods. The main reason is probably because they only implemented a few 
>> here and there, instead of focusing heavily in the numerical solvers or 
>> using available libraries. There's no reason to do this in Julia: you have 
>> access to a large set of packages to provide the different aspects. Piecing 
>> together a metapackage of Julia packages in a way that curates them into a 
>> library specific for solving financial equations would easily give you a 
>> more sophisticated package than Quantlib achieves. That's why I'm asking 
>> what you'd like to see on the differential equations side because 
>> DifferentialEquations.jl already offers a bunch of methods for SDEs which 
>> could have simple front-ends thrown on them to become 
>> "GeneralizedBlackScholes" and etc. solvers. The same should be done with 
>> the other parts of Quantlib like optimization, and you'll easily get a vast 
>> library of routines specifically tailored to mathematical finance problems, 
>> which will outperform what is given by Quantlib.
>>
>> As to esproff's suggestion, Plots.jl should be targeted instead of Gadfly 
>> for a few reasons. For one, plot recipes are a powerful way to link a 
>> package to plotting ability that would make most of the plotting work 
>> trivial. Secondly, recipes would add plotting capabilities without having a 
>> large dependency like Gadfly. Thirdly, it would let you choose whatever 
>> your favorite plotting backend is. Fourthly, Gadfly doesn't support 3D 
>> plots which are one standard way of showing things like FDM results. 
>> There's no need to unnecessarily limit our plotting abilities. Lastly, the 
>> developer of Plots.jl is a financial guy himself who has already commented 
>> on this thread (Tom Breloff), which is always a bonus.
>>
>> As for targeting Convex.jl to put optimization routines over, I am not 
>> sure. I would keep up with the developments of JuliaOpt and JuliaML to see 
>> what packages seem to be growing into the "go-to which offers the 
>> functionality" (currently Optim.jl is the most, the metapackge Learn.jl may 
>> be an interesting target in the future). The "obvious" choice in some cases 
>> may be to target JuMP, but experiences from LightGraphs.jl seem to show 
>> that it doesn't play nicely with other packages as a conditional dependency 
>> (i.e. if you want to use it, you might have to force everyone to have it 
>> and it's a big install.) This is actually what has stalled a package for 
>> parameter inference for ODEs/SDEs/PDEs: it's not clear to me what to target 
>> right now if I want as much functionality as possible but want to minimize 
>> the amount of re-writing in the future (once this is together though, you 
>> could stick a front-end on this as well to do parameter inference for 
>> financial equations).
>>
>> On Monday, September 19, 2016 at 11:26:12 AM UTC-7, Christopher Alexander 
>> wrote:
>>>
>>> I had started the QuantLib.jl package, but the goal was basically a 
>>> rewrite of the C++ package in Julia.

[julia-users] Re: Any serious quant finance package for Julia?

2016-09-19 Thread Chris Rackauckas
I was saying that Quantlib, not Quantlib.jl, had rudimentary numerical 
methods. The main reason is probably because they only implemented a few 
here and there, instead of focusing heavily in the numerical solvers or 
using available libraries. There's no reason to do this in Julia: you have 
access to a large set of packages to provide the different aspects. Piecing 
together a metapackage of Julia packages in a way that curates them into a 
library specific for solving financial equations would easily give you a 
more sophisticated package than Quantlib achieves. That's why I'm asking 
what you'd like to see on the differential equations side because 
DifferentialEquations.jl already offers a bunch of methods for SDEs which 
could have simple front-ends thrown on them to become 
"GeneralizedBlackScholes" and etc. solvers. The same should be done with 
the other parts of Quantlib like optimization, and you'll easily get a vast 
library of routines specifically tailored to mathematical finance problems 
which will outperform what is given by Quantlib.

As to esproff's suggestion, Plots.jl should be targeted instead of Gadfly 
for a few reasons. For one, plot recipes are a powerful way to link a 
package to plotting ability that would make most of the plotting work 
trivial. Secondly, recipes would add plotting capabilities without having a 
large dependency like Gadfly. Thirdly, it would let you choose whatever 
your favorite plotting backend is. Fourthly, Gadfly doesn't support 3D 
plots which are one standard way of showing things like FDM results. 
There's no need to unnecessarily limit our plotting abilities. Lastly, the 
developer of Plots.jl is a financial guy himself who has already commented 
on this thread (Tom Breloff), which is always a bonus.

As for targeting Convex.jl to put optimization routines over, I am not 
sure. I would keep up with the developments of JuliaOpt and JuliaML to see 
what packages seem to be growing into the "go-to which offers the 
functionality" (currently Optim.jl is the most, the metapackge Learn.jl may 
be an interesting target in the future). The "obvious" choice in some cases 
may be to target JuMP, but experiences from LightGraphs.jl seem to show 
that it doesn't play nicely with other packages as a conditional dependency 
(i.e. if you want to use it, you might have to force everyone to have it 
and it's a big install.) This is actually what has stalled a package for 
parameter inference for ODEs/SDEs/PDEs: it's not clear to me what to target 
right now if I want as much functionality as possible but want to minimize 
the amount of re-writing in the future (once this is together though, you 
could stick a front-end on this as well to do parameter inference for 
financial equations).

On Monday, September 19, 2016 at 11:26:12 AM UTC-7, Christopher Alexander 
wrote:
>
> I had started the QuantLib.jl package, but the goal was basically a 
> rewrite of the C++ package in Julia.  I haven't given it much love lately, 
> but I hope to pick it back up sometime soon.  Anyone who wants to join in 
> is definitely welcome!
>
> Chris
>
> On Saturday, September 17, 2016 at 11:28:36 AM UTC-4, Chris Rackauckas 
> wrote:
>>
>> Thanks Femto Trader for bumping this. I took a quick look at Quantlib 
>> (and Ito) and I have to say, their numerical methods are very rudimentary 
>> (in fact, one of their methods for stochastic processes, EndPointEuler, 
>> doesn't have finite moments for its error due to KPS 1994...). For anything 
>> that isn't a Jump Process you can currently use DifferentialEquations.jl 
>> which has higher Strong order methods for solving the SDEs (with efficient 
>> adaptivity coming whenever my paper gets past peer review... short summary: 
>> mathematicians don't like computer science tools to show up in their math 
>> papers even if it makes it faster...). That's the thing though, you have to 
>> know the stochastic differential equation for the process.
>>
>> That said, it would be pretty trivial to use dispatch so that way you define 
>> a "GeneralizedBlackScholes" equation, which then uses dispatch to construct 
>> an SDE and apply an optimized SDE method to it. Since you can already do 
>> this manually, it would just take setting up an object and a dispatch for 
>> each process. Would this kind of ease-of-use layer for quants be something 
>> anyone is interested in?
>>
>> The other thing is the Forward Kolmogorov PDEs associated to the SDEs. 
>> Currently I have FEM methods for Poisson and Semilinear Heat Equations 
>> which, as with the SDEs, can define any of the processes. This has a few 
>> more fast methods than Quantlib, but it doesn't have TRBDF2 (but that would 
>> be pretty trivial to implement. If you want it let me know, it should take 
>> less than an hour to modify what I have for the trapezoid rule since it's just 
>> about defining the implicit function, NLsolve handles the solving).

[julia-users] Re: ANN: SymEngine.jl, a symbolic manipulation library

2016-09-19 Thread Chris Rackauckas
Is there documentation? I don't see a link in the README.

On Monday, September 5, 2016 at 2:49:54 AM UTC-7, Isuru Fernando wrote:
>
> We are happy to announce the first release of SymEngine.jl. GitHub repo is 
> at https://github.com/symengine/SymEngine.jl .
>
> SymEngine.jl wraps SymEngine, a symbolic manipulation library written in 
> C++ with C, Python and now Julia wrappers. SymEngine started out as a 
> rewrite of SymPy's core in C++. It provides a small set of functionality 
> from sympy.core and sympy.matrices, and gives a performance boost over 
> SymPy. Eventually, SymEngine will be able to replace the core of SymPy 
> fully (with an option to use SymEngine or pure python implementation).
>
> SymEngine.jl should work on Linux, OSX and Windows. For windows, git 
> version of Conda.jl is needed until there is a new release. The package 
> provides basic symbolic manipulation including basic arithmetic, expansion, 
> differentiation, substitution and trigonometric and other functions. 
> Comments and contributions are welcome.
>
>
> SymEngine development team
>


[julia-users] Re: Any serious quant finance package for Julia?

2016-09-17 Thread Chris Rackauckas
Thanks Femto Trader for bumping this. I took a quick look at Quantlib (and 
Ito) and I have to say, their numerical methods are very rudimentary (in 
fact, one of their methods for stochastic processes, EndPointEuler, doesn't 
have finite moments for its error due to KPS 1994...). For anything that 
isn't a Jump Process you can currently use DifferentialEquations.jl which 
has higher Strong order methods for solving the SDEs (with efficient 
adaptivity coming whenever my paper gets past peer review... short summary: 
mathematicians don't like computer science tools to show up in their math 
papers even if it makes it faster...). That's the thing though, you have to 
know the stochastic differential equation for the process.

That said, it would be pretty trivial to use dispatch so that way you define a 
"GeneralizedBlackScholes" equation, which then uses dispatch to construct an 
SDE and apply an optimized SDE method to it. Since you can already do this 
manually, it would just take setting up an object and a dispatch for each 
process. Would this kind of ease-of-use layer for quants be something 
anyone is interested in?
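
For concreteness, here is a minimal sketch of that dispatch layer. Every name 
below is hypothetical, and the crude Euler-Maruyama loop is just a stand-in for 
a real optimized SDE method; this is not an existing API:

immutable GeneralizedBlackScholes
    r::Float64   # risk-free rate
    σ::Float64   # volatility
end

# Dispatch turns the financial model into the SDE dX = r*X*dt + σ*X*dW
# and hands it to a solver (here a toy Euler-Maruyama stand-in).
function solve(m::GeneralizedBlackScholes, X0, tspan, dt)
    f(t, X) = m.r * X   # drift
    g(t, X) = m.σ * X   # diffusion
    X = X0
    for t in tspan[1]:dt:(tspan[2] - dt)
        X += f(t, X)*dt + g(t, X)*sqrt(dt)*randn()
    end
    X
end

solve(GeneralizedBlackScholes(0.05, 0.2), 100.0, (0.0, 1.0), 0.01)

Each new process would then just need a type and a drift/diffusion pair; the 
solver code itself never changes.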

The other thing is the Forward Kolmogorov PDEs associated to the SDEs. 
Currently I have FEM methods for Poisson and Semilinear Heat Equations 
which, as with the SDEs, can define any of the processes. This has a few 
more fast methods than Quantlib, but it doesn't have TRBDF2 (but that would 
be pretty trivial to implement. If you want it let me know, it should take 
less than an hour to modify what I have for the trapezoid rule since it's just 
about defining the implicit function, NLsolve handles the solving).

However, for most PDEs in finance you likely don't need the general 
boundaries that FEM provides and so FDM (finite difference methods) can 
probably be used. I haven't coded it up yet because I was looking for the 
right implementation. I am honing in on it: ImageFiltering.jl gives a good 
n-dimensional Laplacian operator (and if I can convince Tim Holy it's 
worthwhile, parallel/multithreaded), and I will be building up Grids.jl 
(memory-efficient iterators for storing the space). This should lead to 
blazing fast FDM implementations where the only actual array is the 
independent variable (the option price) 
itself, so it should also be pretty memory efficient. I'll be pairing this 
with the standard methods but also some very recent Implicit Integrating 
Factor Methods (IIF) which should give a pretty large speedup over anything 
in Quantlib for stiff equations. Would anyone be interested in a quant 
ease-of-use interface over this as well? (If you'd like to help speed this 
up, the way to do that is to help get Grids.jl implemented. The ideas are 
coming together, but someone needs to throw together some prototype (which 
shouldn't be too difficult))

Note that Jump Processes can easily be done by using callback functions 
(independent jumps can be computed in advance and then use an appropriate 
tspan, adding the jump between the intervals. Dependent jumps just need to 
use a callback within to add a jump in the appropriate intervals and maybe 
interpolate back a bit, likely better with adaptive timestepping), and I'll 
probably make an API to make this easier.

Let me know what you guys would like to see on the differential equation / 
stochastic processes side and I'll make it happen. I'm doing most of this 
stuff for SPDEs in stochastic systems biology, but the equations are 
literally the same (general SDEs and semilinear Heat equations) so I'm 
plowing through whatever I can.

On Thursday, October 1, 2015 at 7:34:32 PM UTC-7, Christopher Alexander 
wrote:
>
> I think the Ito package is a great start, and I've forked it to work on 
> adding to it other features of Quantlib (as best as I can!).  I'm glad 
> someone mentioned the InterestRates package too as I hadn't seen that.  I 
> work at major bank in risk, and my goal is to at some point sell them on 
> the power of Julia (we are currently a Python/C++ shop).
>
> - Chris
>
> On Friday, September 11, 2015 at 2:05:39 AM UTC-4, Ferenc Szalma wrote:
>>
>> Are there any quant finance packages for Julia? I see some rudimentary 
>> calendar and day-counting in Ito.js for example but not much for even a 
>> simple yield2price or price2yield or any bond objects in Julia packages on 
>> GitHub. What is the best approach, using C++ function/object from Quantlib, 
>> to finance in Julia?
>>
>

[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-16 Thread Chris Rackauckas
I think some of this I just answered 
here: https://github.com/JuliaLang/julia/issues/18389 . Here's a quote:

I think a good way to go forward is to start making some metapackages held 
> by the Julia orgs. For example, FFT could move out of Base into JuliaMath, 
> and there can be a standard Math.jl which simply re-exports the "Julia 
> standard math library" using Reexport.jl. This would have agreed on 
> packages (with version requirements?) for a specific domain. JuliaMath 
> could have Math.jl metapackage, JuliaStats could have a Stats.jl 
> metapackage, etc.
>
> I think this is better than just lumping them all together into a SciBase 
> because 
> "Science" is huge: the ML people will want the machine learning stack in 
> there, there's a whole statistics stack which replicates base R, if you 
> want to replicate MATLAB/SciPy then you'd want some differential equations 
> solvers which are large packages themselves, arguably PyCall and everything 
> associated is "standard Julia". I think using Math, Stats is both 
> succinct and informative to others if you share your script, and move the 
> discussion of "standards" to a more domain specific level.
>
> Does anyone see any design issues that may arise due to using Reexport 
> here (other than the fact that it's a personal repo)? Or does anyone else 
> have a good design?
>

Metapackages fix your first issue since they can lump a bunch of packages 
together, and import a bunch together as well. I think the Julia 
organizations (JuliaMath, JuliaStats, JuliaOpt, etc.) are good communities 
for deciding a base distribution in their own domains, which is why I 
think that maybe it should be domain specific: each main JuliaDomain org 
has a Domain.jl  package which you can add with Pkg.add("Domain"), and use 
with `using Domain`. Math, Stats, Opt would then be curated separately by 
people who are engaged in the specific communities (with overlap of course 
since it's open source). A using statement for a few of these still isn't 
too bad, and "using Math, Stats" is both succinct and solves the 
reproducibility problem noted earlier (and that line could easily be added to 
a .juliarc file; maybe a function could be added, auto_using("Math"), to 
automatically update the .juliarc? I don't like the name).
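
For what it's worth, such a helper could be tiny. This is purely hypothetical, 
auto_using doesn't exist anywhere; the name and behavior are made up for 
illustration:

# Hypothetical helper: append a `using` line to the startup file
# so the package loads in every new session.
function auto_using(pkg::AbstractString)
    open(joinpath(homedir(), ".juliarc.jl"), "a") do io
        println(io, "using ", pkg)
    end
end

auto_using("Math")  # the next REPL session starts with `using Math` already run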

For your other issues, the fixes are happening by "upgrading the 
community". The packages are updating rapidly mostly because many of the 
packages themselves are really young and haven't finished adding all of 
their "basic" features. The Orgs have a good approval process. 

As for packages created by non-computing specialists, scientific computing 
is a pretty unique domain where a non-computing specialist may be the only 
person who can implement a specific package at the highest level of 
performance / sophistication. This means it's less about finding software 
developers to do the work, rather it's about teaching and helping domain 
specialists make a package. Julia has already been doing this to some 
extent: requiring CI testing and documentation, and policing the packages a 
bit for version requirements. I think Tony Kelman has done a great job at 
being a good "gatekeeper" to METADATA (I've learned a lot from him, thanks 
Tony!). 

Thus I think the proper way to move forward there is to setup Julia 
software development guidelines, and push for people to follow those 
guidelines. I tried to help out by writing a blog post which explains most 
of the steps, 
but I think a good guideline with "the proper way to make/manage/maintain a 
Julia package" will be helpful in the long run for non-software development 
experts, and then whenever these principles aren't followed one can just be 
pointed to there with an explanation of why it's a good practice and how to 
do it.

On Thursday, September 15, 2016 at 4:45:25 AM UTC-7, Jeremy Cavanagh wrote:
>
>
> Hi Everyone,
>
> There are many comments with which I agree. Chris has put forward many 
> good ideas, I also use Java and find the install/upgrade system excellent. 
> There is no problem with having lots of APIs in the distribution since they 
> are not loaded into you code unless specifically required. I see a couple 
> of things that are a problem with Julia which I would like to see some 
> changes/improvements in. 
>
> The first is the way that packages are added is really quite tedious and 
> time consuming (I suspect this is a result of its REPL origin) which could 
> possibly be streamlined with some kind of gui based application.
>
> The second is that the group working on the Julia language (unlike Java) 
> are not also providing the bulk of the packages that help to improve the 
> functionality and usefulness of Julia. So how would you decide on a 
> suitable base distribution if the bulk are third party packages? 
>
> There is another "problem" (I can't quite think of the correct term) w

[julia-users] Re: How would you use Julia in a real world R&D Setup ?

2016-09-16 Thread Chris Rackauckas
The tooling for debugging is still growing. Gallium.jl with Juno is nice, 
but I still do a lot of println debugging. As Gallium/Juno matures I use it 
more and more often.

To make sure you're not using old versions, quit the REPL. In Juno, that's 
Ctrl+j Ctrl+k. You can hit that command almost instantly, and if you have 
the process server enabled (which I think will be introduced and be the 
default in the next version they are tagging?) then Juno already has a 
process started that is waiting for you, so there is no delay after doing 
this. Since there's really no delay, I do this after every Pkg.update(), 
most of the time when things need to re-compile (you can specifically 
highlight and evaluate a method to recompile it, but I find that quitting 
the REPL like this is so easy that I tend to overuse it), or just out of 
caution I'll use it. Another Juno command which is good to know is Ctrl+j 
Ctrl+c which will clear the console. 

On Friday, September 16, 2016 at 12:19:08 AM UTC-7, Tsur Herman wrote:
>
> Thank you for the time you took to answer.
>
> How do you go about debugging and inspecting? and making sure that changes 
> you made gets compiled
> and that you are not accidentally running a previously imported version of 
> a function? 
>
>
> On Thursday, September 15, 2016 at 10:11:21 PM UTC+3, Tsur Herman wrote:
>>
>> Hi , I am new here .
>> Just started playing around with julia, my background is in Matlab c++ 
>> and GPU computing(writing kernels)
>>
>> I am trying to figure out what will be good practice for rapid 
>> prototyping.
>> How would you use Julia and Juno IDE for a research and development 
>> project that will
>> end up having numerous files and many lines of code , and a basic gui?
>>
>> Please share your experience  , thanks.
>> Tsur
>>
>>
>>

[julia-users] Re: How would you use Julia in a real world R&D Setup ?

2016-09-15 Thread Chris Rackauckas
I don't know what you mean by "real world R&D" because that could mean 
anything. But as my main project has grown, here are a few tips I wish I 
knew when I started (some of these are scientific computing / data science 
centric):

1. If you're doing anything complicated with a piece of data, make it a 
fully featured type. If there's really one array you're accessing from it, 
give that type getindex and setindex! methods so that it acts like an 
array. This will make your code much cleaner. Also, build all of the 
"performance-enhancement tricks" you plan on doing with it into the 
functions of the type itself. Spawn it off to its own package. Document and 
test it thoroughly. Now you have a type that will cleanly give you good 
performance (and you can share it with the community). I can't stress this 
enough: building good types is a good way to make code both clean and fast, 
and testing them separately makes them easy to maintain (and makes it 
easier to find out how to change them for the next Julia version).
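
A minimal sketch of the pattern (v0.5 syntax; the type and its field are 
hypothetical):

type Solution
    u::Vector{Float64}   # the one array that's actually accessed
    # ... whatever metadata fields the problem needs ...
end

# Forward indexing so the wrapper acts like the array it holds
Base.getindex(s::Solution, i...) = s.u[i...]
Base.setindex!(s::Solution, v, i...) = (s.u[i...] = v)
Base.length(s::Solution) = length(s.u)

s = Solution(rand(5))
s[3]         # reads s.u[3]
s[3] = 1.0   # writes s.u[3]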

2. Use multiple-dispatch to make your code more readable. Don't call your 
functions plot_potato and plot_carrot, just use plot and have your types 
work the details out on what it does. Avoid giant if elseif elseif ... by 
using a function with dispatches.
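
A tiny sketch of the naming point (the vegetable types are, of course, made up):

type Potato; weight::Float64; end
type Carrot; len::Float64; end

# One generic verb; dispatch picks the method, no if/elseif chain
describe(p::Potato) = "a $(p.weight) kg potato"
describe(c::Carrot) = "a $(c.len) cm carrot"

describe(Potato(0.3))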

3. Use multiple-dispatch on short functions to give better performance. 
Types can be known because of dispatch. So for performance sensitive parts 
of code, don't use "if this type, do this, else do this", instead write a 
function which works that out by dispatch. This will lead to stricter 
typing and better performance. Then, write a good test for that small 
function and that functionality will be both easy to maintain and good for 
performance (if it's small enough, the compiler will just inline it, so 
there's no call overhead anyways).
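
And a sketch of the performance point: each method below is type-stable because 
dispatch has already resolved the type, unlike the runtime check in the comment 
(halve is a made-up example):

halve(x::Int)     = x >> 1   # the compiler knows x is an Int here
halve(x::Float64) = x / 2    # and a Float64 here

# the slower, type-unstable pattern this replaces:
# halve(x) = isa(x, Int) ? x >> 1 : x / 2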

4. Use Plots.jl's plot recipes for plotting your types. They will make it 
so that way plotting your type is just the command plot(type), but that 
plot command is also infinitely customizable. Plot recipes are really quick 
to write, and will make it so easy to plot that you'll use plots as 
feedback everywhere. You can also use the UnicodePlots backend to have your 
tests show you plots.
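
For example, a bare-bones recipe via RecipesBase could look like this (the type 
and fields are hypothetical):

using RecipesBase

type Trajectory
    t::Vector{Float64}
    u::Vector{Float64}
end

@recipe function f(tr::Trajectory)
    xlabel --> "t"      # defaults the caller can still override
    ylabel --> "u(t)"
    tr.t, tr.u          # the series to draw
end

# using Plots; plot(Trajectory(collect(0:0.1:1.0), rand(11)))  # then just works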

5. Every once in a while, go back and use ProfileView and @time to optimize 
code, cut down on allocations, and make things a little faster. I prefer to 
benchmark ideas before I implement them as well. Double check functions 
with @code_warntype or maybe peek at the @code_llvm for anything weird.

6. Use the process server option in Juno, and remember the command Ctrl+j 
Ctrl+k (kill the current process). The process server has extra Julia 
processes ready for when you kill the current process, so you can instantly 
be computing again without a delay.

7. Get on Gitter. You will invariably run into some issues/missing features 
in the package ecosystem. Asking a quick question in a chatroom can get you 
a known workaround in seconds. You can also ask for good repositories to 
learn software design from.

8. Use in-place functions whenever you can. I prefer to prototype without 
it, and then change things to in-place. This is because sometimes in-place 
functions can mean that the references of two variables are still the same, 
which can be a hard bug to find (best way to find it: print everything 
out and you'll see two variables have exactly the same values). My mantra 
usually is: get it working, and then apply all of the optimizations (and 
benchmark along the way).
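
A small sketch of that flow (the toy function is hypothetical):

f(x) = 2x                    # out-of-place prototype: allocates per call

function f!(out, x)          # in-place version: writes a preallocated buffer
    for i in eachindex(x)
        out[i] = 2x[i]
    end
    out
end

x = rand(10^6); out = similar(x)
f!(out, x)                   # no allocation in the hot loop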

9. Write lots and lots of tests. Let the tests tell you if something is 
wrong. At first this seems like it slows you down. It doesn't.

10. Start early with documenting your functions with docstrings. 
Documenter.jl will essentially build your documentation from your 
docstrings, so if they're complete then making your documentation will be 
simple.
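
For instance (names hypothetical; Documenter picks these up through its @docs 
blocks):

"""
    mysolve(prob, tspan)

Solve the problem `prob` over the time span `tspan`; returns a `Solution`.
"""
mysolve(prob, tspan) = nothing   # stub body, just for the sketch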

11. Use Github. Even if you don't register your package / have it private, 
use continuous integration testing. Use branches to try out new ideas. 
Write your own issues to keep track of things you want to do and bugs 
you've encountered (maybe use the new project stuff?). Julia is designed to 
be used in conjunction with Github.

12. Save your "end results" by using an IJulia notebook. It will store the 
code and the output. It's a good way to "document" as you go along without 
actually taking the time to document. Of course, it's better to write fully 
featured documentation, but when you're prototyping, it's a good way to 
show someone "here's how to use it and here's what it does" (while at the 
same time testing your code).

And lastly,

13. Don't be afraid to break the rules in parts of the code where 
performance isn't crucial. As long as your hot inner loops are wrapped 
inside a function call with strict typing (and these loops take the vast 
majority of your time), you can do pretty much whatever you want above it 
without worrying about performance. Write a larger function which fixes 
types, sets up plo

[julia-users] Re: Fused broadcasting?

2016-09-15 Thread Chris Rackauckas
Ahh, last time I checked I must've accidentally been on the v0.4 docs. Here 
it is: http://docs.julialang.org/en/latest/manual/functions/?highlight=fused

On Thursday, September 15, 2016 at 2:24:35 PM UTC-7, Steven G. Johnson 
wrote:
>
> It's in the manual. Search the 0.5 manual for "vectorized" or "fused" ...



[julia-users] Re: Fused broadcasting?

2016-09-15 Thread Chris Rackauckas
https://github.com/JuliaLang/julia/blob/master/NEWS.md v0.5 section.

With the issues:

https://github.com/JuliaLang/julia/pull/15032
https://github.com/JuliaLang/julia/pull/17300
https://github.com/JuliaLang/julia/pull/17510

It probably needs to be documented.
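
A tiny illustration of the syntax those PRs added (run on v0.5):

x = rand(10)
y = similar(x)

sin.(x)             # f.(x): elementwise application (#15032)
y .= sin.(cos.(x))  # nested dot calls fuse into one in-place loop (#17300, #17510)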

On Thursday, September 15, 2016 at 1:29:24 PM UTC-7, daycaster wrote:
>
> Hey folks. Where in the docs is the description of "fused broadcasting" 
> and the "." operator. My searches are drawing a blank...



Re: [julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-14 Thread Chris Rackauckas
If you use Reexport.jl's @reexport at the top level of a module for such a 
metapackage it will actually export all of the functions from inside:

module MyMetapackage
  using Reexport
  @reexport using DifferentialEquations
  @reexport using Plots
end



using MyMetapackage # Imports both packages
plot # 3 methods
solve # 7 methods


On Wednesday, September 14, 2016 at 3:01:34 PM UTC-7, Mosè Giordano wrote:
>
> 2016-09-14 23:46 GMT+02:00 Chris Rackauckas: 
> > I too am wary about how different distributions would play together. I'd 
> > assume you'd just install with one and that would be it. 
>
> There are many people working in many fields that may want to install 
> different distributions.  I don't think that preventing them to 
> install all of them, or forcing them to choose only one, is a good 
> idea.  Creating many cross-field distributions is not viable.  Some 
> people may discard the idea of installing a distribution just for 
> these reasons. 
>
> > And yes, not 
> > putting "using" in code that you share can lead to some issues, so maybe 
> > "using MyMetapackage" and having the repo online so people can see all 
> of 
> > your packages (at that date/time) would be helpful? 
>
> I think this works better with a meta-package, doesn't it?  The 
> problem is that functions are not exported. 
>
> Bye, 
> Mosè 
>


[julia-users] Re: Idea: Julia Standard Libraries and Distributions

2016-09-14 Thread Chris Rackauckas
I didn't think about using metapackages instead to do "distributions" (with 
Reexport.jl to make using Metapackage import all of the packages). That 
seems like an interesting idea.

I too am wary about how different distributions would play together. I'd 
assume you'd just install with one and that would be it. And yes, not 
putting "using" in code that you share can lead to some issues, so maybe 
"using MyMetapackage" and having the repo online so people can see all of 
your packages (at that date/time) would be helpful?

It really is an interplay between: things keep moving out of Base (like the 
most recent talk is for LinSpace), so the list of "how many usings I need 
for what I am doing" grows. This is both a good thing because Base is 
leaner, but a bad thing because needing to be too explicit about every 
function you want to use is a PITA. Maybe "distributions" is overkill and 
metapackages + a standard library would be better. g's suggestion of 
ways to exclude the StandardLibrary would be helpful, though I don't know 
if an exclude command is possible (though if it was setup to import some 
standard library, julia --barebones seems possible. I don't know if exclude 
is possible because I don't think there's an easy way to remove a package 
after it's been imported).

On Wednesday, September 14, 2016 at 2:03:17 PM UTC-7, Mosè Giordano wrote:
>
> Hi Chris,
>
> what would be the difference between a distribution and a meta-package 
> like those used, for example, in Debian and derivatives: a package without 
> code, that only requires other packages?  In this sense you can create 
> right now a meta-package: just create a repository for a Julia package and 
> populate the REQUIRE file as you wish.  You can specify a Julia version and 
> provide a build script.
>
> I'm above all worried about the interplay between different 
> distributions.  There would be clashing .juliarc's?  Why providing a 
> .juliarc at all?  Automatically running at startup some "using"s & co. is 
> definitely useful, but can lead in the long run to bad habits, like 
> forgetting to put "using" commands in code shared with people not using the 
> same distribution.
>
> Bye,
> Mosè
>
> I think there is one major point of contention when talking about what should 
>> be included in Base, due to competing factors:
>>
>>
>>1. Some people would like a "lean Base" for things like embedded 
>>installs or other low memory applications
>>2. Some people want a MATLAB-like "bells and whistles" approach. This 
>>way all the functions they use are just there: no extra packages to 
>>find/import.
>>3. Some people like having things in Base because it "standardizes" 
>>things. 
>>4. Putting things in Base constrains their release schedule. 
>>5. Putting things in packages outside of JuliaLang helps free up 
>>Travis.
>>
>>
>> The last two concerns have been why things like JuliaMath have sprung up 
>> to move things out of Base. However, I think there is some credibility to 
>> having some form of standardization. I think this can be achieved through 
>> some kind of standard library. This would entail a set of packages which 
>> are installed when Julia is installed, and a set of packages which add 
>> their using statement to the .juliarc. To most users this would be 
>> seamless: they would install automatically, and every time you open Julia, 
>> they would import automatically. There are a few issues there:
>>
>>
>>1.  This wouldn't work with building from source. This idea works 
>>better for binaries (this is no biggie since these users are likely more 
>>experienced anyways)
>>2. Julia would have to pick winners and losers.
>>
>> That second part is big: what goes into the standard library? Would all 
>> of the MATLAB functions like linspace, find, etc. go there? Would the 
>> sparse matrix library be included?
>>
>> I think one way to circumvent the second issue would be to allow for 
>> Julia Distributions. A distribution would be defined by:
>>
>>
>>1. A Julia version
>>2. A List of packages to install (with versions?)
>>3. A build script
>>4. A .juliarc
>>
>> The ideal would be for one to be able to make an executable from those 
>> parts which would install the Julia version with the specified packages, 
>> build the packages (and maybe modify some environment variables / 
>> defaults), and add a .juliarc that would automatically import some packages 
>> / maybe define some constants or checkout branches. JuliaLang could then 
>> provide a lean distribution and a "standard distribution" where the 
>> standard distribution is a more curated library which people can fight 
>> about, but it's not as big of a deal if anyone can make their own. This has 
>> many upsides:
>>
>>
>>1. Julia wouldn't have to come with what you don't want.
>>2. Other than some edge cases where the advantages of Base come into 
>>    play (I don't know of a good example, but I know there are some things 
>>    which can't be defined outside of Base really well, like BigFloats? I'm not 
>>    the expert on this.), most things could spawn out to packages without the 
>>    standard user ever noticing.

[julia-users] Re: Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-14 Thread Chris Rackauckas
+1000 for the REQUIRE hack. Never knew about that. Be careful to save the 
packages you've been working on (or just commit and push somewhere) if you 
do this though.

On Wednesday, September 14, 2016 at 10:02:22 AM UTC-7, David P. Sanders 
wrote:
>
>
> I am a fan of deleting the entire .julia directory in your home directory 
> and reinstalling your packages.
>
> You can also just keep the REQUIRE file from .julia/v0.4 somewhere, do 
> Pkg.init(), then copy the REQUIRE file back and do
> Pkg.resolve()  to reinstall everything you previously had installed.
>
> El martes, 13 de septiembre de 2016, 10:54:43 (UTC-4), Rahul Mourya 
> escribió:
>>
>> Hi,
>> I'm using Julia-0.4.6. My machine is behind a firewall, thus configured 
>> git to use https: git config --global url."https://".insteadOf git://.
>> Under this setting, I'm able to install packages using Pkg.add(), 
>> however, when I use Pkg.update(), I get following error:
>>
>> INFO: Updating METADATA...
>> Cannot pull with rebase: You have unstaged changes.
>> Please commit or stash them.
>> ERROR: failed process: Process(`git pull --rebase -q`, ProcessExited(1)) 
>> [1]
>>  in pipeline_error at process.jl:555
>>  in run at process.jl:531
>>  in anonymous at pkg/entry.jl:283
>>  in withenv at env.jl:160
>>  in anonymous at pkg/entry.jl:282
>>  in cd at ./file.jl:22
>>  in update at ./pkg/entry.jl:272
>>  in anonymous at pkg/dir.jl:31
>>  in cd at file.jl:22
>>  in cd at pkg/dir.jl:31
>>  in update at ./pkg.jl:45
>>
>> what could be the reason? Any workaround this?
>>
>> Thanks!
>>
>

Re: [julia-users] Re: Curious parsing behavior

2016-09-14 Thread Chris Rackauckas
Some are unavoidable: [1 -2] vs [1 - 2] (though I think there should be an 
explicit row-concatenation operator, the way ; does vertical concatenation. 
That would stop this problem).
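
A quick illustration (whitespace decides the parse):

[1 -2]    # two elements: hcat(1, -2), a 1x2 row
[1 - 2]   # one element: the subtraction, i.e. [-1]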



On Wednesday, September 14, 2016 at 10:01:07 AM UTC-7, Erik Schnetter wrote:
>
> There was a talk at JuliaCon suggesting that parsing ambiguities are often 
> best resolved by throwing an error: "Fortress: Features and Lessons 
> Learned".
>
> -erik
>
> On Wed, Sep 14, 2016 at 12:01 PM, David P. Sanders  > wrote:
>
>>
>>
>> El miércoles, 14 de septiembre de 2016, 11:12:52 (UTC-4), David Gleich 
>> escribió:
>>>
>>> Ahah! That explains it.
>>>
>>> Is there a better way to create floating point literals that avoid this?
>>>
>>
>> I think using 1782.0 instead of 1782. (without the 0) will solve this?
>> I seem to remember there was an issue to deprecate the style without the 
>> 0.
>>  
>>
>>>
>>> David
>>>
>>> On Wednesday, September 14, 2016 at 9:26:42 AM UTC-4, Steven G. Johnson 
>>> wrote:



 On Wednesday, September 14, 2016 at 9:18:11 AM UTC-4, David Gleich 
 wrote:
>
> Can anyone give me a quick explanation for why these statements seem 
> to parse differently?
>
> julia> 1782.^12. + 1841.^12.
>

 .^ and .+ are (elementwise/broadcasting) operators in Julia, and there 
 is a parsing ambiguity here because it is not clear whether the . goes 
 with 
 the operator or the number.

 See also the discussion at

  https://github.com/JuliaLang/julia/issues/15731
  https://github.com/JuliaLang/julia/pull/11529

 for possible ways that this might be made less surprising in the future.

>>>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-14 Thread Chris Rackauckas
If git stash doesn't work, you can always go nuclear and delete your 
METADATA folder, along with META_BRANCH and REQUIRE. Then when you 
Pkg.update() it will install a new METADATA and basically try again, but I 
think this will delete all of your installed packages so it really is a 
heavy-handed fix. Maybe Tony has a more refined solution.

On Wednesday, September 14, 2016 at 2:04:05 AM UTC-7, Rahul Mourya wrote:
>
> Well I didn't make any intentional change my own. I guess it's the Pkg 
> manager, which was not successful ! I did git stash, however it didn't 
> solve the problem, I'm still getting same error msg when using Pkg.update() 
> .
>
> On Tuesday, 13 September 2016 16:54:43 UTC+2, Rahul Mourya wrote:
>>
>> Hi,
>> I'm using Julia-0.4.6. My machine is behind a firewall, thus configured 
>> git to use https: git config --global url."https://".insteadOf git://.
>> Under this setting, I'm able to install packages using Pkg.add(), 
>> however, when I use Pkg.update(), I get following error:
>>
>> INFO: Updating METADATA...
>> Cannot pull with rebase: You have unstaged changes.
>> Please commit or stash them.
>> ERROR: failed process: Process(`git pull --rebase -q`, ProcessExited(1)) 
>> [1]
>>  in pipeline_error at process.jl:555
>>  in run at process.jl:531
>>  in anonymous at pkg/entry.jl:283
>>  in withenv at env.jl:160
>>  in anonymous at pkg/entry.jl:282
>>  in cd at ./file.jl:22
>>  in update at ./pkg/entry.jl:272
>>  in anonymous at pkg/dir.jl:31
>>  in cd at file.jl:22
>>  in cd at pkg/dir.jl:31
>>  in update at ./pkg.jl:45
>>
>> what could be the reason? Any workaround this?
>>
>> Thanks!
>>
>

Re: [julia-users] ls()?

2016-09-14 Thread Chris Rackauckas
Here it is: https://github.com/JuliaLang/julia/issues/3376. Would changing 
to ls be back on the table?
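
For reference, the current spelling:

readdir()        # like `ls`: an array of the filenames in pwd()
readdir("/tmp")  # or in any given directory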

On Wednesday, September 14, 2016 at 9:41:55 AM UTC-7, Stefan Karpinski 
wrote:
>
> There are POSIX standards for new programming language function names? But 
> yes, ls() could be a better name.
>
> On Wed, Sep 14, 2016 at 12:04 PM, Adrian Lewis  > wrote:
>
>> > You can find a thread/issue where this is discussed. Some group decided 
>> to call it readdir() and like it more. I just got used to it. I think it's 
>> silly, but it's just syntax.
>>
>> I thought it might be an idea to stick with POSIX standards. 
>>
>> On Wednesday, September 14, 2016 at 4:40:03 PM UTC+1, Chris Rackauckas 
>> wrote:
>>>
>>>
>>>
>>> On Wednesday, September 14, 2016 at 7:36:18 AM UTC-7, Jacob Quinn wrote:
>>>>
>>>> readdir()
>>>>
>>>> On Wed, Sep 14, 2016 at 8:34 AM, Adrian Lewis  
>>>> wrote:
>>>>
>>>>> In the filesystem package, if we have pwd() and cd(), why do we not 
>>>>> have ls()?
>>>>>
>>>>> Aidy
>>>>>
>>>>
>>>>
>

[julia-users] Re: [ANN]New package RFlavor.jl provides R-like functions

2016-09-14 Thread Chris Rackauckas
We may want to combine efforts. I have VectorizedRoutines.jl 
 for this with a 
slightly larger scope (I don't know if it's the right name though).

On Wednesday, September 14, 2016 at 8:12:21 AM UTC-7, Lanfeng Pan wrote:
>
> Hi all,
>
> To help R users get used to Julia soon, this package provide some handy 
> functions from R in Julia, such as 
>
> list
> matrix
> rep, seq
> table
> expand_grid, outer
> sweep
> duplicated
> findinterval
> ...
>
> More to come and all contributions and comments are welcome.
> The package address is https://github.com/panlanfeng/RFlavor.jl
> See our work list at https://github.com/panlanfeng/RFlavor.jl/issues/2
>
>
> Best,
> Lanfeng
>


Re: [julia-users] ls()?

2016-09-14 Thread Chris Rackauckas
You can find a thread/issue where this is discussed. Some group decided to 
call it readdir() and like it more. I just got used to it. I think it's 
silly, but it's just syntax.

On Wednesday, September 14, 2016 at 7:36:18 AM UTC-7, Jacob Quinn wrote:
>
> readdir()
>
> On Wed, Sep 14, 2016 at 8:34 AM, Adrian Lewis  > wrote:
>
>> In the filesystem package, if we have pwd() and cd(), why do we not have 
>> ls()?
>>
>> Aidy
>>
>
>

Re: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
Thanks for weighing in. I'll stick to your lingo: solid and usable, but not 
with a guarantee of long term support until v1.0. That's what I've 
meant by alpha (I mean I use Julia every day because it's solid and 
usable), but I won't continue to use that term because it seems to carry a 
worse connotation than what I meant it to have.

Though I will say that when I talk of "Julia", I also am referring to the 
package ecosystem (like how Python means Python + NumPy etc.). FWIW, most 
of the issues I run into tend to be due to packages rather than base. 

On Tuesday, September 13, 2016 at 1:00:10 PM UTC-7, Stefan Karpinski wrote:
>
> +1 to everything David said. And yes, we should put out an official 
> statement on this subject. Julia has absolutely been released on four and 
> very nearly five occasions: 0.1, 0.2, 0.3, 0.4 and any day now, 0.5. These 
> are not alpha releases, nor are they unstable. They are very solid and 
> usable. The pre-1.0 numbering of these releases (see, that's the word) are 
> simply an indication that we *will* change APIs and break code in the 
> relatively near future – before 1.0 is out. We could have numbered these 
> releases 1.0, 2.0, 3.0 and 4.0, but (to me at least), there's an 
> expectation that a 1.0 release will be supported for *years* into the 
> future, which would leave us supporting four-five releases concurrently, 
> which simply isn't realistic at this point in time. Once Julia 1.0 is 
> released we *will* support it for the long term, while we continue work 
> towards 2.0 and beyond. But at that point the Julia community will be 
> larger, we will have more resources, and releases will be further apart.
>
> Aside: Swift seems to have taken the "let's release major versions very 
> often" approach, since they are already on a prerelease of major version 3.0 (and 
> maybe that's better marketing), but to me having three major releases in 
> two years seems a bit crazy. Of course, Swift has a very large, effectively 
> captive audience. Julia is likely to follow the Rust approach, albeit with 
> a *lot* less breakage between pre-1.0 versions.
>
> Issues you may have had with Juno don't really have anything to do with 
> Julia, although there's an argument to be made that pre-1.0 numbering of 
> the language is a good indicator that the ecosystem may be a little rough 
> around the edges. Once Julia 0.5 is out, there will be a stable version of 
> Juno released to go with it.
>
> On Tue, Sep 13, 2016 at 1:19 PM, Chris Rackauckas  > wrote:
>
>> I agree that there are some qualifiers that have to go with it in order 
>> to get the connotation right, but the term "unstable but usable 
>> pre-release" is just a phrase for alpha. It's not misleading to 
>> characterize Julia as not having been released since the core dev group is 
>> calling v1.0 the release, and so saying pre-1.0 is just a synonym for 
>> "pre-release". Being alpha doesn't mean it's buggy, it just means that it's 
>> alpha, as in not the final version and can change. You can rename it to 
>> another synonym, but it's still true.
>>
>> Whether it's in a state to be used in production, that's a call for an 
>> experienced Julia user who knows the specifics of the application they are 
>> looking to build. But throwing it in with the more stable languages so that 
>> way someone in a meeting uses it like a stand-in ("hey guys, we can try Julia. 
>> It should be faster than those other options, and shouldn't be hard to learn, 
>> what do you think?"), that has a decent chance of leading to trouble.
>>
>> And I think not being honest with people is a good way to put a bad taste 
>> in people's mouths. Even if lots of Julia were stable, the package 
>> ecosystem isn't. Just the other day I was doing damage control since there 
>> was a version of Juno that was tagged which broke "most users'" systems, by 
>> most users I mean there was an "obvious" fix: look at the error message, 
>> manually clone this random new repository, maybe check out master on a few 
>> packages 
>> <http://stackoverflow.com/questions/39419502/cannot-start-julia-in-atom-loaderror-argumenterror-juno-not-found-in-path/39420874#39420874>.
>>  
>> You want to use the plot pane? Oh just checkout master, no biggie. 
>> <http://discuss.junolab.org/t/plots-jl-in-atom/714/11> If someone is 
>> trying to use Julia without knowing any Git/Github, it's possible, but 
>> we're still at a point where it could lead to some trouble, or it's 
>> confusing since some basic features are see

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
A range is a type that essentially acts as an array, but really is only 
three numbers: start, end, and the step length. I.e. a=0:2^N would make a 
range where a[1]=0, a[i]==i-1, and a[end]=2^N. I haven't looked at the 
whole code, but if you're using rand([0...2^N]), then each time you do that 
it has to make the array, whereas rand(1:2^N) or things like that using 
ranges won't. So if you find yourself making arrays like [0...2^N], they 
should probably be ranges.
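
For instance, to fill an array with small-width values without ever building 
the big temporary:

rand(0:2^3-1)          # one draw from {0,...,7}; no temporary array made
rand(0:2^3-1, 10^6)    # an Array{Int64} of a million values in 0:7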

On Tuesday, September 13, 2016 at 10:43:39 AM UTC-7, Neal Becker wrote:
>
> I'm not following you here. IIUC a range is a single scalar value?  Are 
> you 
> suggesting I want an Array{range}? 
>
> Chris Rackauckas wrote: 
>
> > Do you need to use an array? That sounds better suited for a range. 
> > 
> > On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote: 
> >> 
> >> Steven G. Johnson wrote: 
> >> 
> >> > 
> >> > 
> >> > 
> >> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> >> 
> >> >> PnSeq.jl calls rand() to get a Int64, caching the result and then 
> >> >> providing 
> >> >> N bits at a time to fill an Array.  It's supposed to be a fast way 
> to 
> >> get 
> >> >> an 
> >> >> Array of small-width random integers. 
> >> >> 
> >> > 
> >> > rand(T, n) already does this for small integer types T.  (In fact, it 
> >> > generates 128 random bits at a time.)  See base/random.jl 
> >> > 
> >> < 
> >> 
>
> https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>  
>
> >> 
> >> > for how it does it. 
> >> > 
> >> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> >> > pnseq(16)(10^6, UInt16). 
> >> 
> >> Thanks for the ideas.  Here, though, the generated values need to be 
> >> Uniform([0...2^N]), where N could be any number.  For example 
> [0...2^3]. 
> >> So the output array itself would be Array{Int64} for example, but the 
> >> values 
> >> in the array are [0 ... 7].  Do you know a better way to do this? 
> >> 
> >> > 
> >> > (In a performance-critical situation where you are calling this lots 
> of 
> >> > times to generate random arrays, I would pre-allocate the array A and 
> >> call 
> >> > rand!(A) instead to fill it with random numbers in-place.) 
> >> 
> >> 
> >> 
>
>
>

[julia-users] Re: 1st try julia, 2/3 speed of python/c++

2016-09-13 Thread Chris Rackauckas
Do you need to use an array? That sounds better suited for a range.

On Tuesday, September 13, 2016 at 10:24:15 AM UTC-7, Neal Becker wrote:
>
> Steven G. Johnson wrote: 
>
> > 
> > 
> > 
> > On Monday, September 12, 2016 at 7:32:48 AM UTC-4, Neal Becker wrote: 
> >> 
> >> PnSeq.jl calls rand() to get a Int64, caching the result and then 
> >> providing 
> >> N bits at a time to fill an Array.  It's supposed to be a fast way to 
> get 
> >> an 
> >> Array of small-width random integers. 
> >> 
> > 
> > rand(T, n) already does this for small integer types T.  (In fact, it 
> > generates 128 random bits at a time.)  See base/random.jl 
> > 
> <
> https://github.com/JuliaLang/julia/blob/d0e7684dd0ce867e1add2b88bb91f1c4574100e0/base/random.jl#L507-L515>
>  
>
> > for how it does it. 
> > 
> > In a quick test, rand(UInt16, 10^6) was more than 6x faster than 
> > pnseq(16)(10^6, UInt16). 
>
> Thanks for the ideas.  Here, though, the generated values need to be 
> Uniform([0...2^N]), where N could be any number.  For example [0...2^3]. 
> So the output array itself would be Array{Int64} for example, but the 
> values 
> in the array are [0 ... 7].  Do you know a better way to do this? 
>
> > 
> > (In a performance-critical situation where you are calling this lots of 
> > times to generate random arrays, I would pre-allocate the array A and 
> call 
> > rand!(A) instead to fill it with random numbers in-place.) 
>
>
>

Re: [julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
> There is a large group of julia users out there that use julia for “real 
> world” work. It is really not helpful for us if julia gets an undeserved 
> reputation of being a pre-release, buggy thing that shouldn’t be used for 
> “real work”. Such a reputation would question the validity of our results, 
> whereas a reputation as “hasn’t reached a stable API” is completely 
> harmless.
>
>  
>
> Also, keep in mind that there is julia computing out there, which is 
> feeding the core dev group. They have customers that pay them (I hope) for 
> supported versions of julia, so it seems highly misleading to characterize 
> julia as not released and not ready for production. Heck, you can buy a 
> support contract for the current released version, so in my mind that seems 
> very much released!
>
>  
>
> I think it would be a good idea if the core julia group would actually put 
> a definitive statement out on the website for this topic. There are a 
> couple of devs that at least from the outside seem close to the core group 
> that have made statements like the one below, that to any sloppy reader 
> will just sound like “stay away from julia if you don’t want a bug riddled 
> system”, and I think that really doesn’t square with the message that e.g. 
> julia computing needs to put out there or with the state of the language. I 
> think a good official position would be: “Current julia releases are of 
> high quality and are ready to be used for ‘real world’ work. Pre-1.0 
> releases will introduce breaking API changes between 0.x versions, which 
> might require extra work on the users part when updating to new julia 
> versions.” Or something like that.
>
>  
>
> Cheers,
>
> David
>
>  
>
> --
>
> David Anthoff
>
> University of California, Berkeley
>
>  
>
> http://www.david-anthoff.com
>
>  
>
>  
>
> *From:* julia...@googlegroups.com  [mailto:
> julia...@googlegroups.com ] *On Behalf Of *Chris Rackauckas
> *Sent:* Tuesday, September 13, 2016 9:05 AM
> *To:* julia-users >
> *Subject:* [julia-users] Re: Julia Users and First Customers
>
>  
>
> 1. Jeff Bezanson and Stefan Karpinski. I kid (though that's true). It's 
> the group of MIT students who made it. You can track the early issues and 
> kind of see who's involved. 
> <https://github.com/JuliaLang/julia/issues?page=348&q=is%3Aissue+is%3Aclosed> 
> Very 
> early on that's probably a good indicator of who's joining in when, but 
> that only makes sense for very early Julia when using means also 
> contributing to some extent.
>
>  
>
> 2. Julia hasn't been released. Putting it in large scale production and 
> treating it like it has been released is a bad idea.
>
>  
>
> 3. The results advertise for itself. You go learn Julia and come back to 
> your lab with faster code that was quicker to write than their 
> MATLAB/Python/R code, and then everyone wants you to teach a workshop. Also 
> packages seem to play a big role: a lot of people come to these forums for 
> the first time to use things like JuMP.
>
>  
>
> 4. Julia hasn't had its first release. 
>
>  
>
> Keep in mind Julia is still in its alpha. It doesn't even have a beta for 
> v1.0 yet. That doesn't mean it's not generally usable yet, quite the 
> contrary: any hacker willing to play with it will find that you can get 
> some extraordinary productivity and performance gains at this point. But 
> just because Julia is doing well does not mean that it has suddenly been 
> released. This misconception can lead to issues like this blog post 
> <https://blog.staffjoy.com/retro-on-the-julia-programming-language-7655121ea341#.w1bggaw78>.
>  
> Of course they had issues with Julia updating and breaking syntax: it's 
> explicitly stated that Base syntax may change and many things may break 
> because Julia is still in its alpha, and so there's no reason to slow down 
> / stifle development for the few who jumped the gun and wanted a stable 
> release yesterday. It always ends up like people complaining that the 
> alpha/beta for a video game is buggy: of course it is, that's what you 
> signed up for. Remember that?
>
>  
>
> Making sure people are aware of this fact is a good thing for Julia. If 
> someone's not really a programmer and doesn't want to read Github issues / 
> deal with changing documentation (a lot of mathematicians / scientists), 
> there's still no reason to push Julia onto them because Julia, as an 
> unreleased alpha program, will change and so you will need to keep 
> up-to-date with the changes. Disregarding that fact will only lead to 
> misconceptions about Julia when people inevitably run into problems here. 
>
>
> On Tuesday, September 13, 2016 at 7:55:51 AM UTC-7, Adeel Malik wrote:
>
> I would Like to ask few questions about Julia that I could not find it on 
> the internet. 
>
>  
>
> 1) Who were the very first few users of Julia ?
>
>  
>
> 2) Who were the industrial customers of Julia when it was first released? 
> Who are those industrial customers now?
>
>  
>
> 3) How Julia found more users?
>
>  
>
> 4) How Julia survived against Python and R at first release?
>
>  
>
> It's not a homework. Anyone can answer these questions or put me at the 
> right direction that would be perfect.
>
>  
>
> Thanks in advance
>
>  
>
> Regards,
>
> Adeel
>
>

[julia-users] Re: Pkg.add() works fine while Pkg.update() doesn't over https instead of git

2016-09-13 Thread Chris Rackauckas
You modified your METADATA. Did you make a package or something? The 
easiest thing to do would be to go to the v0.4/METADATA folder and use `git 
stash`. However, this will stash any of the changes you made. Did you make 
these changes for a reason, like you want to publish a new release for a 
package? Then you should commit and push that to your metadata-v2 branch on 
your fork. See http://docs.julialang.org/en/release-0.4/manual/packages/
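
If you'd rather do that from the REPL, something like this should work 
(assuming git is on your PATH):

cd(Pkg.dir("METADATA")) do
    run(`git stash`)   # stashes the unstaged METADATA changes
end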

On Tuesday, September 13, 2016 at 7:54:43 AM UTC-7, Rahul Mourya wrote:
>
> Hi,
> I'm using Julia-0.4.6. My machine is behind a firewall, thus configured 
> git to use https: git config --global url."https://".insteadOf git://.
> Under this setting, I'm able to install packages using Pkg.add(), however, 
> when I use Pkg.update(), I get following error:
>
> INFO: Updating METADATA...
> Cannot pull with rebase: You have unstaged changes.
> Please commit or stash them.
> ERROR: failed process: Process(`git pull --rebase -q`, ProcessExited(1)) 
> [1]
>  in pipeline_error at process.jl:555
>  in run at process.jl:531
>  in anonymous at pkg/entry.jl:283
>  in withenv at env.jl:160
>  in anonymous at pkg/entry.jl:282
>  in cd at ./file.jl:22
>  in update at ./pkg/entry.jl:272
>  in anonymous at pkg/dir.jl:31
>  in cd at file.jl:22
>  in cd at pkg/dir.jl:31
>  in update at ./pkg.jl:45
>
> what could be the reason? Any workaround this?
>
> Thanks!
>


[julia-users] Re: Julia Users and First Customers

2016-09-13 Thread Chris Rackauckas
1. Jeff Bezanson and Stefan Karpinski. I kid (though that's true). It's the 
group of MIT students who made it. You can track the early issues and kind 
of see who's involved. 
 
Very 
early on that's probably a good indicator of who's joining in when, but 
that only makes sense for very early Julia when using means also 
contributing to some extent.

2. Julia hasn't been released. Putting it in large scale production and 
treating it like it has been released is a bad idea.

3. The results advertise for itself. You go learn Julia and come back to 
your lab with faster code that was quicker to write than their 
MATLAB/Python/R code, and then everyone wants you to teach a workshop. Also 
packages seem to play a big role: a lot of people come to these forums for 
the first time to use things like JuMP.

4. Julia hasn't had its first release. 

Keep in mind Julia is still in its alpha. It doesn't even have a beta for 
v1.0 yet. That doesn't mean it's not generally usable yet, quite the 
contrary: any hacker willing to play with it will find that you can get 
some extraordinary productivity and performance gains at this point. But 
just because Julia is doing well does not mean that it has suddenly been 
released. This misconception can lead to issues like this blog post 
.
 
Of course they had issues with Julia updating and breaking syntax: it's 
explicitly stated that Base syntax may change and many things may break 
because Julia is still in its alpha, and so there's no reason to slow down 
/ stifle development for the few who jumped the gun and wanted a stable 
release yesterday. It always ends up like people complaining that the 
alpha/beta for a video game is buggy: of course it is, that's what you 
signed up for. Remember that?

Making sure people are aware of this fact is a good thing for Julia. If 
someone's not really a programmer and doesn't want to read Github issues / 
deal with changing documentation (a lot of mathematicians / scientists), 
there's still no reason to push Julia onto them because Julia, as an 
unreleased alpha program, will change and so you will need to keep 
up-to-date with the changes. Disregarding that fact will only lead to 
misconceptions about Julia when people inevitably run into problems here. 

On Tuesday, September 13, 2016 at 7:55:51 AM UTC-7, Adeel Malik wrote:
>
> I would Like to ask few questions about Julia that I could not find it on 
> the internet. 
>
> 1) Who were the very first few users of Julia ?
>
> 2) Who were the industrial customers of Julia when it was first released? 
> Who are those industrial customers now?
>
> 3) How Julia found more users?
>
> 4) How Julia survived against Python and R at first release?
>
> It's not a homework. Anyone can answer these questions or put me at the 
> right direction that would be perfect.
>
> Thanks in advance
>
> Regards,
> Adeel
>


[julia-users] Re: Vector Field operators (gradient, divergence, curl) in Julia

2016-09-13 Thread Chris Rackauckas
For gradients, check out ForwardDiff. It'll give you really fast 
calculations.
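
For example, a quick sketch:

using ForwardDiff
f(x) = sum(sin, x) + prod(x)       # any scalar-valued function of a vector
ForwardDiff.gradient(f, rand(3))   # the gradient via dual numbers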

On Tuesday, September 13, 2016 at 4:29:59 AM UTC-7, MLicer wrote:
>
> Dear all,
>
> i am wondering if there exists Julia N-dimensional equivalents to Numpy 
> vector field operators like gradient, divergence and curl, for example:
>
> np.gradient(x)
>
> Thanks so much,
>
> Cheers!
>


[julia-users] Idea: Julia Standard Libraries and Distributions

2016-09-13 Thread Chris Rackauckas
I think there is one major point of contention when talking about what should 
be included in Base, due to competing factors:


   1. Some people would like a "lean Base" for things like embedded 
   installs or other low memory applications
   2. Some people want a MATLAB-like "bells and whistles" approach. This 
   way all the functions they use are just there: no extra packages to 
   find/import.
   3. Some people like having things in Base because it "standardizes" 
   things. 
   4. Putting things in Base constrains their release schedule. 
   5. Putting things in packages outside of JuliaLang helps free up Travis.


The last two concerns are why organizations like JuliaMath have sprung up to 
move things out of Base. However, I think there is real merit to having some 
form of standardization, and it could be achieved through some kind of 
standard library. This would entail a set of packages which are installed 
when Julia is installed, plus a set of packages whose using statements are 
added to the .juliarc. To most users this would be seamless: they would 
install automatically, and every time you open Julia they would import 
automatically. There are a few issues there:


   1.  This wouldn't work with building from source. This idea works better 
   for binaries (this is no biggie since these users are likely more 
   experienced anyways)
   2. Julia would have to pick winners and losers.

That second part is big: what goes into the standard library? Would all of 
the MATLAB functions like linspace, find, etc. go there? Would the sparse 
matrix library be included?

I think one way to circumvent the second issue would be to allow for Julia 
Distributions. A distribution would be defined by:


   1. A Julia version
   2. A List of packages to install (with versions?)
   3. A build script
   4. A .juliarc
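
For concreteness, a purely hypothetical sketch of such a spec as data plus an 
installer (neither JuliaDistribution nor install exists anywhere; this just 
shows how the four parts above could be encoded):

immutable JuliaDistribution
    julia_version::VersionNumber
    packages::Vector{Tuple{String,VersionNumber}}  # (name, version) pairs
    build::Function                                # post-install hook
    juliarc::String                                # appended to ~/.juliarc.jl
end

function install(d::JuliaDistribution)
    for (pkg, ver) in d.packages
        Pkg.add(pkg)
        Pkg.pin(pkg, ver)     # lock each package to the distribution's version
    end
    d.build()                 # run the distribution's build script
    open(joinpath(homedir(), ".juliarc.jl"), "a") do io
        write(io, d.juliarc)  # append the distribution's startup code
    end
end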

The ideal would be for one to be able to make an executable from those 
parts which would install the Julia version with the specified packages, 
build the packages (and maybe modify some environment variables / 
defaults), and add a .juliarc that would automatically import some packages 
/ maybe define some constants or checkout branches. JuliaLang could then 
provide a lean distribution and a "standard distribution" where the 
standard distribution is a more curated library which people can fight 
about, but it's not as big of a deal if anyone can make their own. This has 
many upsides:


   1. Julia wouldn't have to come with what you don't want.
   2. Other than some edge cases where the advantages of Base come into 
   play (I don't know of a good example, but I know there are some things 
   which can't be defined well outside of Base, like BigFloats? I'm not the 
   expert on this), most things could spin out to packages without the 
   standard user ever noticing.
   3. There would still be a large set of standard functions you can assume 
   most people will have.
   4. You can share Julia setups: for example, with my lab I would share a 
   distribution that would have all of the JuliaDiffEq packages installed, 
   along with Plots.jl and some backends, so that it would be an "out of the 
   box: solve differential equations and plot" setup like what MATLAB provides. I 
   could pick packages/versions that I know work well together, 
   and guarantee their install will work. 
   5. You could write tutorials / run workshops which use a distribution, 
   knowing that a given set of packages will be available.
   6. Anyone could make their setup match yours by looking at the 
   distribution setup scripts (maybe just make a base function which runs that 
   install since it would all be in Julia). This would be nice for some work 
   in progress projects which require checking out master on 3 different 
   packages, and getting some weird branch for another 5. It would give you a 
   succinct and standardized way to specify an install to get there.


Side notes:

[An interesting distribution would be that JuliaGPU could provide a full 
distribution for which CUDAnative works (since it requires a different 
Julia install)]

[A "Data Science Distribution" would be a cool idea: you'd never want to 
include all of the plotting and statistical things inside of Base, but 
someone pointing out what all of the "good" packages are that play nice 
with each other would be very helpful.]

[What if the build script could specify a library path, so that way it can 
install a setup which doesn't interfere with a standard Julia install?]

This is not without downsides. Indeed, one place to look is Python. Python 
has distributions, but one problem with them is that packages don't tend to 
play nicely with all distributions. This leads to fragmentation of the 
package sphere. Also, since it's not explicit where the packages come from, 
it may be harder to find documentation (though maybe Documenter.jl 
automatically adding links to the documentation in docstrings could fix 
this?). Also, t

[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Chris Rackauckas
I can confirm that works. Wow, never knew that was there. It should be 
added to the menu. Maybe it's still considered experimental.

On Monday, September 12, 2016 at 4:27:52 PM UTC-7, Uwe Fechner wrote:
>
> It works for me:
> Try to open the command palette (Cmd-Shift-P on mac, I guess Ctrl-Shift-P 
> on linux and windows), and type 'julia open workspace'. It opens a window 
> showing all variables and functions in scope.
>
> On Monday, September 12, 2016 at 11:46:31 PM UTC+2, Patrick Belliveau 
> wrote:
>>
>> Hi all,
>>   In his JuliaCon 2016 talk 
>>  on Juno's new graphical 
>> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
>> that displays currently defined variable values from an interactive Julia 
>> session. My impression from the video is that this feature should be 
>> available in the latest version of Juno but I can't get it to show up. As 
>> far as I can tell, the feature is not included in my version of Juno. Am I 
>> missing something or has this functionality not been released yet? I'm on 
>> linux, running 
>>
>> Julia 0.5.0-rc4+0
>> atom 1.9.9
>> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
>> ink 0.5.1
>> julia-client 0.5.2
>> language-julia 0.6
>> uber-juno 0.1.1
>>
>> Thanks, Patrick
>>
>> P.S. I've just started using Juno and in general I'm really liking it, 
>> especially the debugging gui features. Great work Juno team!
>>
>

[julia-users] JuliaDiffEq Logo Poll

2016-09-12 Thread Chris Rackauckas
Sometime last week I threw up a logo idea, and a ton of other really cool 
ideas followed. Now that we have so many awesome choices due to a previous 
thread , it's hard to 
pick. Help us choose the JuliaDiffEq logo by going to this issue 
 and voting for your 
favorite(s). Or if you're interested, add your own entry. 

[If you're new to Github, this is a good time to make an account and 
"contribute to the Julia community"! :)]

P.S. Is there a less hacky way to do polls on Github?


[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Chris Rackauckas
I don't think it's available yet. This might be something you want to file a 
feature request for by opening an issue.

On Monday, September 12, 2016 at 2:46:31 PM UTC-7, Patrick Belliveau wrote:
>
> Hi all,
>   In his JuliaCon 2016 talk 
>  on Juno's new graphical 
> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
> that displays currently defined variable values from an interactive Julia 
> session. My impression from the video is that this feature should be 
> available in the latest version of Juno but I can't get it to show up. As 
> far as I can tell, the feature is not included in my version of Juno. Am I 
> missing something or has this functionality not been released yet? I'm on 
> linux, running 
>
> Julia 0.5.0-rc4+0
> atom 1.9.9
> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
> ink 0.5.1
> julia-client 0.5.2
> language-julia 0.6
> uber-juno 0.1.1
>
> Thanks, Patrick
>
> P.S. I've just started using Juno and in general I'm really liking it, 
> especially the debugging gui features. Great work Juno team!
>


Re: [julia-users] code design question – best ideomatic way to define nested types?

2016-09-12 Thread Chris Rackauckas
Ahh, that makes a lot of sense as well. I can see how that would make 
everything a lot harder to optimize. Thanks for the explanation!

On Monday, September 12, 2016 at 2:44:22 PM UTC-7, Stefan Karpinski wrote:
>
> The biggest practical issue is that if you can subtype a concrete type 
> then you can't store values inline in an array, even if the values are 
> immutable – since a subtype can be bigger than the supertype. This leads to 
> having things like "final" classes, etc. Fundamentally, this is really an 
> issue of failing to separate the concrete type – which is complete and can 
> be instantiated – from the abstract type, which is incomplete and can be 
> subtyped.
>
> On Mon, Sep 12, 2016 at 3:17 PM, Chris Rackauckas  > wrote:
>
>> https://en.wikipedia.org/wiki/Composition_over_inheritance
>>
>>
>> http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-composition-over-inheritance
>>
>>
>> https://www.thoughtworks.com/insights/blog/composition-vs-inheritance-how-choose
>>
>> That's just the start. Over time, people realized inheritance can be quite 
>> fragile, so many style guidelines simply forbid you from doing it.
>>
>> On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:
>>>
>>> Looking at this example, it seems mighty tempting to have the ability to 
>>> subtype a concrete type. Are the exact problems with that documented 
>>> somewhere? I am aware of the following section in the docs:
>>>
>>> "One particularly distinctive feature of Julia’s type system is that 
>>> concrete types may not subtype each other: all concrete types are final and 
>>> may only have abstract types as their supertypes. While this might at first 
>>> seem unduly restrictive, it has many beneficial consequences with 
>>> surprisingly few drawbacks. It turns out that being able to inherit 
>>> behavior is much more important than being able to inherit structure, and 
>>> inheriting both causes significant difficulties in traditional 
>>> object-oriented languages."
>>>
>>> I'm just wondering what the "significant difficulties" are, not 
>>> advocating changing this behaviour.
>>>
>>> On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski  
>>> wrote:
>>>
>>>> I would probably go with approach #2 myself and only refer to the .bar 
>>>> and .baz fields in all of the generic AbstractFoo methods.
>>>>
>>>> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard <
>>>> mkborr...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am defining a set of types to hold scientific data, and trying to 
>>>>> get the best out of Julia's type system. The types in my example are 
>>>>> 'nested' in the sense that each type will hold progressively more 
>>>>> information and thus allow the user to do progressively more. Like this:
>>>>>
>>>>> type Foo
>>>>>   bar
>>>>>   baz
>>>>> end
>>>>>
>>>>> type Foobar
>>>>>   bar  # this
>>>>>   baz  # and this are identical with Foo
>>>>>   barbaz
>>>>>   bazbaz
>>>>> end
>>>>>
>>>>>
>>>>
>

[julia-users] Re: can someone help me read julia's memory footprint on this cluster? [SGE]

2016-09-12 Thread Chris Rackauckas
For SGE, a lot of systems let you ssh into the node and use htop. That will 
show you a lot of information about the node, and can help you find out 
which process is using up what amount of memory (note: this check only works 
in real time, so your computation has to run long enough for you to catch it).

On Monday, September 12, 2016 at 1:24:20 PM UTC-7, Florian Oswald wrote:
>
> hi all,
>
> i get the following output from the SGE command `qstat -j jobnumber` of a 
> julia job that uses 30 workers. I am confused by the mem column. am I using 
> more memory than what I asked for? I asked for max 4G on each processor.
>
>
> job-array tasks:1-30:1
>
> usage1: cpu=00:00:08, mem=20.61903 GBs, io=0.08026, 
> vmem=2.684G, maxvmem=2.684G
>
> usage2: cpu=00:00:13, mem=35.36547 GBs, io=0.14832, 
> vmem=2.754G, maxvmem=2.754G
>
> usage3: cpu=00:00:16, mem=47.97179 GBs, io=0.10563, 
> vmem=3.084G, maxvmem=3.084G
>
> usage4: cpu=00:00:17, mem=52.39960 GBs, io=0.14685, 
> vmem=3.146G, maxvmem=3.146G
>
> usage5: cpu=00:00:13, mem=38.00948 GBs, io=0.06336, 
> vmem=3.208G, maxvmem=3.208G
>
> usage6: cpu=00:00:14, mem=41.84277 GBs, io=0.08085, 
> vmem=3.208G, maxvmem=3.208G
>
> usage7: cpu=00:00:16, mem=49.34722 GBs, io=0.10563, 
> vmem=3.208G, maxvmem=3.208G
>
> usage8: cpu=00:00:18, mem=56.29933 GBs, io=0.14685, 
> vmem=3.208G, maxvmem=3.208G
>
> usage9: cpu=00:00:21, mem=61.30837 GBs, io=0.13234, 
> vmem=3.145G, maxvmem=3.145G
>
> usage   10: cpu=00:00:18, mem=53.78262 GBs, io=0.10650, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   11: cpu=00:00:19, mem=58.20804 GBs, io=0.16047, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   12: cpu=00:00:19, mem=58.90526 GBs, io=0.15296, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   13: cpu=00:00:13, mem=37.73257 GBs, io=0.06336, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   14: cpu=00:00:15, mem=43.44044 GBs, io=0.08085, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   15: cpu=00:00:19, mem=58.27114 GBs, io=0.13234, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   16: cpu=00:00:17, mem=51.33971 GBs, io=0.10650, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   17: cpu=00:00:19, mem=56.00911 GBs, io=0.16047, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   18: cpu=00:00:19, mem=57.45101 GBs, io=0.15301, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   19: cpu=00:00:19, mem=56.42524 GBs, io=0.13240, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   20: cpu=00:00:18, mem=52.25189 GBs, io=0.10650, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   21: cpu=00:00:20, mem=60.76601 GBs, io=0.12465, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   22: cpu=00:00:22, mem=65.11690 GBs, io=0.14843, 
> vmem=3.207G, maxvmem=3.207G
>
> usage   23: cpu=00:00:18, mem=52.75353 GBs, io=0.11566, 
> vmem=3.146G, maxvmem=3.146G
>
> usage   24: cpu=00:00:15, mem=44.21442 GBs, io=0.04204, 
> vmem=3.207G, maxvmem=3.207G
>
> usage   25: cpu=00:00:20, mem=58.85802 GBs, io=0.14714, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   26: cpu=00:00:18, mem=53.52543 GBs, io=0.12236, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   27: cpu=00:00:20, mem=59.24938 GBs, io=0.13234, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   28: cpu=00:00:20, mem=59.86234 GBs, io=0.12465, 
> vmem=3.208G, maxvmem=3.208G
>
> usage   29: cpu=00:00:18, mem=53.94314 GBs, io=0.11566, 
> vmem=3.209G, maxvmem=3.209G
>
> usage   30: cpu=00:00:16, mem=48.74432 GBs, io=0.10222, 
> vmem=3.208G, maxvmem=3.208G
>


Re: [julia-users] code design question – best ideomatic way to define nested types?

2016-09-12 Thread Chris Rackauckas
https://en.wikipedia.org/wiki/Composition_over_inheritance

http://programmers.stackexchange.com/questions/134097/why-should-i-prefer-composition-over-inheritance

https://www.thoughtworks.com/insights/blog/composition-vs-inheritance-how-choose

That's just the start. Over time, people realized inheritance can be quite 
fragile, so many style guidelines simply forbid you from doing it.
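
To make the alternative concrete, here's a small sketch of composition plus 
method forwarding for the Foo/Foobar example below (v0.5 syntax; the accessor 
names are just illustrative):

abstract AbstractFoo

type Foo <: AbstractFoo
    bar
    baz
end

type Foobar <: AbstractFoo
    foo::Foo       # composition: Foobar contains a Foo rather than subtyping it
    barbaz
    bazbaz
end

getbar(x::Foo) = x.bar
getbar(x::Foobar) = x.foo.bar                     # forward to the wrapped Foo

describe(x::AbstractFoo) = "bar = $(getbar(x))"   # generic code targets the abstract type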

On Monday, September 12, 2016 at 11:45:40 AM UTC-7, Bart Janssens wrote:
>
> Looking at this example, it seems mighty tempting to have the ability to 
> subtype a concrete type. Are the exact problems with that documented 
> somewhere? I am aware of the following section in the docs:
>
> "One particularly distinctive feature of Julia’s type system is that 
> concrete types may not subtype each other: all concrete types are final and 
> may only have abstract types as their supertypes. While this might at first 
> seem unduly restrictive, it has many beneficial consequences with 
> surprisingly few drawbacks. It turns out that being able to inherit 
> behavior is much more important than being able to inherit structure, and 
> inheriting both causes significant difficulties in traditional 
> object-oriented languages."
>
> I'm just wondering what the "significant difficulties" are, not advocating 
> changing this behaviour.
>
> On Mon, Sep 12, 2016 at 5:28 PM Stefan Karpinski  > wrote:
>
>> I would probably go with approach #2 myself and only refer to the .bar 
>> and .baz fields in all of the generic AbstractFoo methods.
>>
>> On Mon, Sep 12, 2016 at 10:10 AM, Michael Borregaard > > wrote:
>>
>>> Hi,
>>>
>>> I am defining a set of types to hold scientific data, and trying to get 
>>> the best out of Julia's type system. The types in my example are 'nested' 
>>> in the sense that each type will hold progressively more information and 
>>> thus allow the user to do progressively more. Like this:
>>>
>>> type Foo
>>>   bar
>>>   baz
>>> end
>>>
>>> type Foobar
>>>   bar  # this
>>>   baz  # and this are identical with Foo
>>>   barbaz
>>>   bazbaz
>>> end
>>>
>>>
>>

[julia-users] Re: Suggestion regarding valuable Youtube videos related to Julia learning

2016-09-12 Thread Chris Rackauckas
I think we should set up something for video tutorials, like the 
JuliaBlogger community except for YouTube (is there such a thing as a YouTube 
aggregator?). I plan on doing some tutorials on "Plotting with Plots.jl in 
Juno", "Solving ODEs with DifferentialEquations.jl", "Using Julia's Pkg 
with Github to Navigate Changing Packages", "Using CUDA.jl", etc., and 
other topics which involve a mix of visuals, switching programs/windows, 
and some math all at once (i.e. hard to capture in full with a blog post or 
straight text). 

I was just waiting on a few more features which help with the visuals: Juno 
plot pane, Juno time estimates from the progressbar, DifferentialEquations 
dense output, etc. Those are all pretty much in place now, so I'll probably 
be doing this to set up for an October workshop. 

I am pretty sure once a few start doing it, others will join. Videos have a 
far lower barrier to entry than a detailed blog post, so it would be 
helpful to new users.

On Monday, September 12, 2016 at 10:17:26 AM UTC-7, Colin Beckingham wrote:
>
> The various Youtube videos recorded at Julia conferences look very good. 
> It's great to have explanations given by the experts at the top of the 
> Julia tree, no names mentioned, you know who you are. Thanks for this 
> resource.
> From the consumer side, the packages are kinda long. I imagine that many, 
> with the exception of the cohort that wants to see every second live, would 
> benefit from shorter edited versions that present concisely what the 
> speaker wants to say. Not a problem, you say, we just need someone to find 
> the time to sit down and pull out the *obiter dicta* to leave the 
> kernels. But where do we find this time? It can be challenging to decide 
> where to cut.
> It is a well-known practice in the political arena to give presentations 
> that make the editing process easy so that the press gets the right 
> message, the principal points in a short bite with easily identifiable 
> chunks to illustrate the points made in further detail, each in its own 
> right a standalone element.
> The speakers are excellent and experienced presenters. They know that they 
> must tailor the presentation to the audience; my suggestion is that when 
> they look out on the hundreds of people in their immediate room, they keep 
> in mind the tens of thousands who will tune in later.
> Does the video manager have any details on the number of learners 
> accessing these videos and the amount of time they remain glued to the 
> screen?
>


Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread Chris Rackauckas
I would say start with the package ecosystem. Almost nothing in Julia Base 
is really first-class or special, so almost anything can contribute to 
Julia via packages. For example, things like Traits   
and VML bindings 
 basically add to Julia's core 
features or improve the performance, and packages like CUDA.jl 
 allow you to use GPUs. So I think the 
easiest way to get started is to contribute to (or make) packages which are 
in your expertise / that you're interested in. Usually there's a lot less code 
so it's easier to get started, and you can usually find some of the 
developers in a Gitter channel to chat with and have them help you along.

Looking at the package sphere, you can find ways to contribute to anything. 
If you're interested in web development, you may want to check out Genie.jl 
. It's a web framework built in 
Julia that is looking really nice (one example 
), though he still has quite the TODO 
list. If you're interested in numerical differential equations, I could 
help you find a project to get started.  Etc.

Or Tamas Papp's suggestion of the intro issues also gives a lot of good 
problems to work on. Just find what's suitable to you.

On Monday, September 12, 2016 at 7:56:47 AM UTC-7, rishu...@gmail.com wrote:
>
> Thanks for the help. Can you suggest me what should I learn to work in 
> Julia?
>


[julia-users] Re: Help on building Julia with Intel MKL on Windows?

2016-09-12 Thread Chris Rackauckas
You just do what it says here: https://github.com/JuliaLang/julia. Then you 
can replace a lot of the functions using VML.jl 


On Sunday, September 11, 2016 at 10:35:35 PM UTC-7, Zhong Pan wrote:
>
> Anybody knows how to build Julia with Intel MKL on Windows? 
>
> I found the article below but it is for Linux.
>
> https://software.intel.com/en-us/articles/julia-with-intel-mkl-for-improved-performance
>
> Thanks!
>
>

Re: [julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-11 Thread Chris Rackauckas
I'm not on a Mac so it's hard for me to know, but maybe the Homebrew GTK is 
missing theme files? Or maybe the theme files are supposed to be present in 
the OS, but can't be found in MacOSX?

The next post is showing that there's a clash: you have two libgio's (one 
provided by Julia, one not), so it is arbitrarily choosing which one to 
use. That may be a source of trouble.

On Sunday, September 11, 2016 at 4:41:49 AM UTC-7, Christoph Ortner wrote:
>
> I rebuilt both and neither gave errors. (though actually building Cairo 
> did give an error, it then switched to source and built ok. This is OS X 
> with Homebrew)
>
> But now it opens a small window and the error message changes. 
>
> julia> @profile test(1_000_000_00);
> julia> ProfileView.view()
> Gtk.GtkWindowLeaf(name="", parent, width-request=-1, height-request=-1, 
> visible=TRUE, sensitive=TRUE, app-paintable=FALSE, can-focus=FALSE, has-
> focus=FALSE, is-focus=FALSE, focus-on-click=TRUE, can-default=FALSE, has-
> default=FALSE, receives-default=FALSE, composite-child=FALSE, style, 
> events=0, no-show-all=FALSE, has-tooltip=FALSE, tooltip-markup=NULL, 
> tooltip-text=NULL, window, opacity=1.00, double-buffered, halign=
> GTK_ALIGN_FILL, valign=GTK_ALIGN_FILL, margin-left, margin-right, margin-
> start=0, margin-end=0, margin-top=0, margin-bottom=0, margin=0, hexpand=
> FALSE, vexpand=FALSE, hexpand-set=FALSE, vexpand-set=FALSE, expand=FALSE, 
> scale-factor=2, border-width=0, resize-mode, child, type=
> GTK_WINDOW_TOPLEVEL, title="Profile", role=NULL, resizable=TRUE, modal=
> FALSE, window-position=GTK_WIN_POS_NONE, default-width=-1, default-height
> =-1, destroy-with-parent=FALSE, hide-titlebar-when-maximized=FALSE, icon, 
> icon-name=NULL, screen, type-hint=GDK_WINDOW_TYPE_HINT_NORMAL, skip-
> taskbar-hint=FALSE, skip-pager-hint=FALSE, urgency-hint=FALSE, accept-
> focus=TRUE, focus-on-map=TRUE, decorated=TRUE, deletable=TRUE, gravity=
> GDK_GRAVITY_NORTH_WEST, transient-for, attached-to, has-resize-grip, 
> resize-grip-visible, application, is-active=FALSE, has-toplevel-focus=
> FALSE, startup-id, mnemonics-visible=FALSE, focus-visible=FALSE, is-
> maximized=FALSE)
>
>
> julia>
> (:53358): Gtk-WARNING **: Error loading theme icon 
> 'document-open' for stock: Icon 'document-open' not present in theme 
> Adwaita
>
>
> (:53358): Gtk-WARNING **: Error loading theme icon 
> 'document-save-as' for stock: Icon 'document-save-as' not present in 
> theme Await
>
>
> # and many more lines like that.
>
>
>
>
> On Sunday, 11 September 2016 09:12:35 UTC+1, Chris Rackauckas wrote:
>>
>> Did GTK and Cairo build correctly?
>>
>> On Sunday, September 11, 2016 at 12:10:54 AM UTC-7, Christoph Ortner 
>> wrote:
>>>
>>> yes and yes (that was the previous error message I posted)
>>>
>>> maybe it is just some dependencies that are not correctly installed? But 
>>> then why does it both fail in REPL and NB  ?
>>>
>>> On Sunday, 11 September 2016 02:08:53 UTC+1, Chris Rackauckas wrote:
>>>>
>>>> Is this after updating? Do you also have an error when you do it from 
>>>> the REPL?
>>>>
>>>> On Saturday, September 10, 2016 at 2:34:03 PM UTC-7, Christoph Ortner 
>>>> wrote:
>>>>>
>>>>> And here the error message I get in a notebook:
>>>>>
>>>>> type Array has no field func
>>>>>
>>>>>  in 
>>>>> (::ProfileView.#printrec#26{Dict{UInt64,Array{StackFrame,1}}})(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}},
>>>>>  ::Int64, ::Float64, ::Float64, ::Float64, ::ProfileView.TagData, 
>>>>> ::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}) at 
>>>>> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:213
>>>>>  in show(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}}, 
>>>>> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
>>>>> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:255
>>>>>  in verbose_show(::Base.AbstractIOBuffer{Array{UInt8,1}}, 
>>>>> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
>>>>> ./multimedia.jl:50
>>>>>  in #sprint#304(::Void, ::Function, ::Int64, ::Function, 
>>>>> ::MIME{Symbol("image/svg+xml")}, ::Vararg{Any,N}) at ./strings/io.jl:37
>>>>>  in display_dict(::ProfileView.ProfileData) at 
>>>>> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:28
>>>>>  in execute_request(::ZMQ.Socket, ::IJulia.Msg) at 
>>>>> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:195
>>>>>  in eventloop(::ZMQ.Socket) at 
>>>>> /Users/ortner/.julia/v0.5/IJulia/src/IJulia.jl:138
>>>>>  in (::IJulia.##25#31)() at ./task.jl:360
>>>>>
>>>>>
>>>>> In [ ]:
>>>>>
>>>>

Re: [julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-11 Thread Chris Rackauckas
Did GTK and Cairo build correctly?
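
(A quick way to check is to rebuild them and watch the output:

Pkg.build("Gtk")
Pkg.build("Cairo")

Any build failures will show up there.)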

On Sunday, September 11, 2016 at 12:10:54 AM UTC-7, Christoph Ortner wrote:
>
> yes and yes (that was the previous error message I posted)
>
> maybe it is just some dependencies that are not correctly installed? But 
> then why does it both fail in REPL and NB  ?
>
> On Sunday, 11 September 2016 02:08:53 UTC+1, Chris Rackauckas wrote:
>>
>> Is this after updating? Do you also have an error when you do it from the 
>> REPL?
>>
>> On Saturday, September 10, 2016 at 2:34:03 PM UTC-7, Christoph Ortner 
>> wrote:
>>>
>>> And here the error message I get in a notebook:
>>>
>>> type Array has no field func
>>>
>>>  in 
>>> (::ProfileView.#printrec#26{Dict{UInt64,Array{StackFrame,1}}})(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}},
>>>  ::Int64, ::Float64, ::Float64, ::Float64, ::ProfileView.TagData, 
>>> ::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}) at 
>>> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:213
>>>  in show(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}}, 
>>> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
>>> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:255
>>>  in verbose_show(::Base.AbstractIOBuffer{Array{UInt8,1}}, 
>>> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
>>> ./multimedia.jl:50
>>>  in #sprint#304(::Void, ::Function, ::Int64, ::Function, 
>>> ::MIME{Symbol("image/svg+xml")}, ::Vararg{Any,N}) at ./strings/io.jl:37
>>>  in display_dict(::ProfileView.ProfileData) at 
>>> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:28
>>>  in execute_request(::ZMQ.Socket, ::IJulia.Msg) at 
>>> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:195
>>>  in eventloop(::ZMQ.Socket) at 
>>> /Users/ortner/.julia/v0.5/IJulia/src/IJulia.jl:138
>>>  in (::IJulia.##25#31)() at ./task.jl:360
>>>
>>>
>>> In [ ]:
>>>
>>

Re: [julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-10 Thread Chris Rackauckas
Is this after updating? Do you also have an error when you do it from the 
REPL?

On Saturday, September 10, 2016 at 2:34:03 PM UTC-7, Christoph Ortner wrote:
>
> And here the error message I get in a notebook:
>
> type Array has no field func
>
>  in 
> (::ProfileView.#printrec#26{Dict{UInt64,Array{StackFrame,1}}})(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}},
>  ::Int64, ::Float64, ::Float64, ::Float64, ::ProfileView.TagData, 
> ::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}) at 
> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:213
>  in show(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}}, 
> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
> /Users/ortner/.julia/v0.5/ProfileView/src/ProfileView.jl:255
>  in verbose_show(::Base.AbstractIOBuffer{Array{UInt8,1}}, 
> ::MIME{Symbol("image/svg+xml")}, ::ProfileView.ProfileData) at 
> ./multimedia.jl:50
>  in #sprint#304(::Void, ::Function, ::Int64, ::Function, 
> ::MIME{Symbol("image/svg+xml")}, ::Vararg{Any,N}) at ./strings/io.jl:37
>  in display_dict(::ProfileView.ProfileData) at 
> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:28
>  in execute_request(::ZMQ.Socket, ::IJulia.Msg) at 
> /Users/ortner/.julia/v0.5/IJulia/src/execute_request.jl:195
>  in eventloop(::ZMQ.Socket) at 
> /Users/ortner/.julia/v0.5/IJulia/src/IJulia.jl:138
>  in (::IJulia.##25#31)() at ./task.jl:360
>
>
> In [ ]:
>


[julia-users] Re: Cannot start Julia in Atom

2016-09-10 Thread Chris Rackauckas
Yeah it should be all good now. The latest release was untagged.

On Saturday, September 10, 2016 at 10:15:59 AM UTC-7, Uwe Fechner wrote:
>
> I uninstalled julia-client and reinstalled it from within Atom as 
> suggested by rogerwhitney.
> (
> http://discuss.junolab.org/t/start-julia-error-loading-atom-jl-package/853/2) 
> This fixed it for me.
>
> On Saturday, September 10, 2016 at 9:39:39 AM UTC+2, Chris Rackauckas 
> wrote:
>>
>> The context is important: 
>> http://stackoverflow.com/questions/39419502/cannot-start-julia-in-atom-loaderror-argumenterror-juno-not-found-in-path/39420874#39420874
>>
>> Or the bottom of this: 
>> https://gitter.im/JunoLab/atom-ink/archives/2016/09/09
>>
>> It's known. It's a bad tag. Either just clone Juno.jl or wait for the tag.
>>
>> On Friday, September 9, 2016 at 8:13:05 PM UTC-7, Yuanchu Dang wrote:
>>>
>>> ERROR: LoadError: ArgumentError: Juno not found in path
>>>  in require at loading.jl:249
>>>  in include at boot.jl:261
>>>  in include_from_node1 at loading.jl:320
>>>  in process_options at client.jl:280
>>>  in _start at client.jl:378
>>> while loading C:\Users\think\.atom\packages\julia-client\script\boot.jl, 
>>> in expression starting on line 36
>>>
>>

[julia-users] Re: Cannot start Julia in Atom

2016-09-10 Thread Chris Rackauckas
The context is 
important: 
http://stackoverflow.com/questions/39419502/cannot-start-julia-in-atom-loaderror-argumenterror-juno-not-found-in-path/39420874#39420874

Or the bottom of 
this: https://gitter.im/JunoLab/atom-ink/archives/2016/09/09

It's known. It's a bad tag. Either just clone Juno.jl or wait for the tag.
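
(For the clone route, something like

Pkg.clone("https://github.com/JunoLab/Juno.jl")

should work until the fixed tag is registered.)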

On Friday, September 9, 2016 at 8:13:05 PM UTC-7, Yuanchu Dang wrote:
>
> ERROR: LoadError: ArgumentError: Juno not found in path
>  in require at loading.jl:249
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:320
>  in process_options at client.jl:280
>  in _start at client.jl:378
> while loading C:\Users\think\.atom\packages\julia-client\script\boot.jl, 
> in expression starting on line 36
>


[julia-users] Re: Slow Performance Compared to MATLAB

2016-09-10 Thread Chris Rackauckas
I can't get this to run because of a dimension mismatch error. Was this 
written for v0.4? I am using v0.5, and there is a difference in how slicing 
drops extra dimensions, which may be the cause.

On Friday, September 9, 2016 at 8:13:00 PM UTC-7, Zhilong Liu wrote:
>
> Hello All,
>
> I am trying to convert a MATLAB script into Julia, hoping that it will run 
> faster. But after the conversion, it runs about 3 to 4 times slower than 
> MATLAB.
>
> I attached the relevant files here. The problem seems to be from line 297 
> in OUM.jl from the profiler information. But I have no idea how to make it 
> run faster. Anybody can help?
>
>
> Thanks!
>
> Zhilong Liu
>
>
>

[julia-users] Re: Installation error

2016-09-10 Thread Chris Rackauckas
Run Pkg.update()? For some reason Atom isn't in your METADATA, so it's 
either old or corrupted.

On Friday, September 9, 2016 at 8:13:05 PM UTC-7, Yuanchu Dang wrote:
>
> Was told:
>
> Error installing Atom.jl package
> Go to the Packages → Julia → Open Terminal menu and
> run `Pkg.add("Atom")` in Julia, then try again.
> If you still see an issue, please report it to:
> julia...@googlegroups.com 
>
> But doesn't work:
>
>
> 
>
>

[julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-09 Thread Chris Rackauckas
Yes, you can check out a branch of the repository via 
Pkg.checkout("Package","branch"). Note that the default is for the master 
branch, i.e. Pkg.checkout("ProfileView") will checkout master. This will 
put you on the "current up-to-date" branch, which could be different than 
the most recent tagged version (which is what Julia gives you from Pkg.add 
by default). Many times the tags/releases will be behind on the bug fixes / 
version compatibility, so checking out master will usually fix this. 

I know that ProfileView works on master in Julia v0.5 and v0.6.

For completeness, you can always go back to the latest tag via 
Pkg.free("Package"). If you stay on master of any package, be prepared 
for "bleeding edge development issues" to crop up, since you're asking for 
the current source and not the latest release.
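
In short, the workflow looks like this ("mybranch" is just a placeholder):

Pkg.add("ProfileView")                     # latest tagged release
Pkg.checkout("ProfileView")                # master
Pkg.checkout("ProfileView", "mybranch")    # any other branch
Pkg.free("ProfileView")                    # back to the latest tag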

On Friday, September 9, 2016 at 3:47:47 PM UTC-7, Neal Becker wrote:
>
> Chris Rackauckas wrote: 
>
> > Did you checkout master? 
> > 
>
> No I just did Pkg.add("ProfileView").  I don't actually know how to do 
> otherwise - is that a Pkg option? 
>
> > On Friday, September 9, 2016 at 2:55:21 PM UTC-7, Neal Becker wrote: 
> >> 
> >> using ProfileView 
> >> INFO: Precompiling module ProfileView. 
> >> WARNING: Module Compat with uuid 314389968181888 is missing from the 
> >> cache. 
> >> This may mean module Compat does not support precompilation but is 
> >> imported 
> >> by a module that does. 
> >> ERROR: LoadError: Declaring __precompile__(false) is not allowed in 
> files 
> >> that are being precompiled. 
> >>  in macro expansion; at ./none:2 [inlined] 
> >>  in anonymous at ./:? 
> >> while loading /home/nbecker/.julia/v0.5/ProfileView/src/ProfileView.jl, 
> >> in expression starting on line 5 
> >> ERROR: Failed to precompile ProfileView to 
> >> /home/nbecker/.julia/lib/v0.5/ProfileView.ji. 
> >>  in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64 
> >>  in macro expansion at ./REPL.jl:95 [inlined] 
> >>  in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68 
> >> 
> >> 
>
>
>

[julia-users] Re: Running octave scrips from julia

2016-09-09 Thread Chris Rackauckas
You accidentally revived a >1 year old thread.

On Friday, September 9, 2016 at 4:32:56 AM UTC-7, Steven G. Johnson wrote:
>
>
>
> On Friday, February 6, 2015 at 2:03:23 PM UTC-5, astromono wrote:
>>
>>   in pyinitialize at /home/rober/.julia/v0.4/PyCall/src/pyinit.jl:245
>>
>
>  I think you must have pinned PyCall at some ancient version, because the 
> current pyinit.jl file only has 115 lines.
>


[julia-users] Re: Running octave scrips from julia

2016-09-09 Thread Chris Rackauckas
I guess not >1 year, but getting close to a year.

On Friday, September 9, 2016 at 4:32:56 AM UTC-7, Steven G. Johnson wrote:
>
>
>
> On Friday, February 6, 2015 at 2:03:23 PM UTC-5, astromono wrote:
>>
>>   in pyinitialize at /home/rober/.julia/v0.4/PyCall/src/pyinit.jl:245
>>
>
>  I think you must have pinned PyCall at some ancient version, because the 
> current pyinit.jl file only has 115 lines.
>


[julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-09 Thread Chris Rackauckas
Did you checkout master?

On Friday, September 9, 2016 at 2:55:21 PM UTC-7, Neal Becker wrote:
>
> using ProfileView 
> INFO: Precompiling module ProfileView. 
> WARNING: Module Compat with uuid 314389968181888 is missing from the 
> cache. 
> This may mean module Compat does not support precompilation but is 
> imported 
> by a module that does. 
> ERROR: LoadError: Declaring __precompile__(false) is not allowed in files 
> that are being precompiled. 
>  in macro expansion; at ./none:2 [inlined] 
>  in anonymous at ./:? 
> while loading /home/nbecker/.julia/v0.5/ProfileView/src/ProfileView.jl, in 
> expression starting on line 5 
> ERROR: Failed to precompile ProfileView to 
> /home/nbecker/.julia/lib/v0.5/ProfileView.ji. 
>  in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64 
>  in macro expansion at ./REPL.jl:95 [inlined] 
>  in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68 
>
>

[julia-users] Re: whole arrays pass by reference while matrix columns by value?

2016-09-07 Thread Chris Rackauckas
You allocate memory for the entire array that you pass back. So if A is 
nxn and, instead of modifying it in place, your function takes in A, creates 
an Anew, and returns that, you will have allocated a whole nxn array. This 
happens on every function call and adds up fast. That's why the MATLAB 
"vectorized" way of doing this isn't very efficient compared to fast loops 
which modify things in place.
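
For example, a tiny sketch of the two styles (the names are just illustrative):

function addone!(A)          # in-place: mutates A, allocates nothing
    for i in eachindex(A)
        A[i] += 1
    end
    A
end

addone(A) = A + 1            # allocating: builds and returns a new array every call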

That doesn't mean never allocate; it means write in-place updating functions 
when you can. It takes a little more forethought since you are changing 
what you already have: if you use that same array later without realizing 
you changed the values, you will break code. However, Julia always lets you 
do B = copy(A) when you need to.

Generally, this means: worry about performance in the "inner loops" that 
take up most of the runtime, and write the easiest-to-maintain code for the 
parts which don't.

And expanding on what John said in example form: A = B saves the reference 
of B to A, so A is essentially a view of B (modifying A modifies B; they 
are the same location in memory). A[:,i] = B[:,i] slices on the right 
side of the = sign, so it makes a copy, and modifying A[:,i] afterwards 
doesn't modify B[:,i]. Note that A[:] on the left side is a view, so 
A[:] = ones(N) is an in-place update (Julia is smart enough not to allocate 
here, at least last time I checked). These same rules apply when passing to 
a function. And instead of slicing/copying with B[:,i], you can use 
view(B,:,i) to avoid the copy and get something that points directly to the 
memory of B.
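
A quick sketch of those rules in action (v0.5, where view is in Base):

A = rand(3,3)
B = A                  # same array: mutating B mutates A
C = A[:,1]             # right-hand slice: C is a copy, independent of A
V = view(A, :, 1)      # no copy: V points into A's memory
sort!(V)               # sorts the first column of A in place
A[:] = ones(9)         # left-hand A[:] writes into A's existing memory

This also answers the original question: sort!(view(a, :, 1)) modifies the 
first column of a, while sort!(a[:,1]) only sorts a temporary copy.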

And while we're at it, in the latest versions (v0.6; some of this hasn't 
been implemented yet), you can broadcast everything, so

A .= B .+ C .* sin.(E)

would make a single fused loop that assigns into A in place (that's the .=, 
which changes it to A[i] = ... just like how the other dotted operators 
broadcast).
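
In effect, that fused broadcast behaves like this explicit loop, with no 
temporaries for B .+ C .* sin.(E):

for i in eachindex(A)
    A[i] = B[i] + C[i]*sin(E[i])
end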

While this may seem like a lot at first, once you get used to it, these 
tools help you write much faster code (since you can essentially eliminate 
temporaries but still have elegant-looking code!).

On Wednesday, September 7, 2016 at 5:00:00 PM UTC-7, Alexandros Fakos wrote:
>
> Hi,
>
> a=rand(10,2)
> b=rand(10)
> sort!(b) modifies b
> but sort!(a[:,1]) does not modify the first column of matrix a 
>
> why is that? Does this mean that i cannot write functions that modify 
> their arguments and apply it to columns of matrices?
>
> Is trying to modify columns of matrices bad programming style? Is there a 
> better alternative?
>  How much more memory am I allocating if my functions return output 
> instead of modifying their arguments (matlab background - I am completely 
> unaware of memory allocation issues)? 
>
> Thanks,
> Alex
>

