[julia-users] Re: InexactError due to negative hexadecimal number

2015-01-05 Thread Chi-wei Wang
Thanks! Forgot I'm on a 64-bit environment.


On Tuesday, January 6, 2015 at 12:31:05 PM UTC+8, Tony Kelman wrote:
>
> Yes, actually. If you're on a 64-bit machine, then the integer literal "0" 
> is an Int64, so int64(0) - 0x12345678 promotes and does the subtraction in 
> Int64. By contrast, -0x12345678 negates the UInt32 literal directly and 
> wraps around to the unsigned integer 0xedcba988, which is greater than 
> typemax(Int32). So the value is too large to represent as a signed Int32.
>
>
> On Monday, January 5, 2015 5:43:20 PM UTC-8, Chi-wei Wang wrote:
>>
>>
>>
>> julia version 0.4.0-dev+2496
>>
>> Program:
>> println(-0x12345678)
>> println(0-0x12345678)
>> println(int32(0-0x12345678))
>> println(int32(-0x12345678))
>>
>>
>> Output:
>> 3989547400
>> -305419896
>> -305419896
>> ERROR: InexactError()
>>  in include at ./boot.jl:248
>>  in include_from_node1 at loading.jl:128
>>  in process_options at ./client.jl:312
>>  in _start at ./client.jl:393
>> while loading /home/jack/julia/jia32/test.jl, in expression starting on 
>> line 4
>>
>>
>> Can this be right?
>>
>

[julia-users] Re: Package name for embedding R within Julia

2015-01-05 Thread Randy Lai


Hi all,


Sorry for joining the game late; I have been very busy over the past few 
days. 



First of all, I am very excited to learn that RCall.jl is now alive at 
JuliaStats. To resolve the confusion between the name of my original 
package and that of the one hosted at JuliaStats, I have renamed my 
original package; it is now called "RCalling.jl". 


https://github.com/randy3k/RCalling.jl



I had a brief look at what Douglas has been doing in the new repo of 
RCall.jl. 

His approach of porting the R API functions from C to Julia may make 
further development of the R/Julia interface easier.

In contrast, in RCalling.jl I have been using the Julia API (in C) and the 
R API (of course, also in C) intensively, which makes the code difficult 
to write and hard to maintain.


It is good to have more people playing the game.

Please let me know how I can help with the development of the package.


Best


Randy

On Friday, January 2, 2015 11:59:04 AM UTC-8, Douglas Bates wrote:
>
> For many statistics-oriented Julia users there is a great advantage in 
> being able to piggy-back on R development and to use at least the data sets 
> from R packages.  Hence the RDatasets package and the read_rda function in 
> the DataFrames package for reading saved R data.
>
> Over the last couple of days I have been experimenting with running an 
> embedded R within Julia and calling R functions from Julia. This is similar 
> in scope to the Rif package except that this code is written in Julia and 
> not as a set of wrapper functions written in C. The R API is a C API and, 
> in some ways, very simple. Everything in R is represented as a "symbolic 
> expression" or SEXPREC and passed around as pointers to such expressions 
> (called an SEXP type).  Most functions take one or more SEXP values as 
> arguments and return an SEXP.
>
> I have avoided reading the code for Rif for two reasons:
>  1. It is GPL3 licensed
>  2. I already know a fair bit of the R API and where to find API function 
> signatures.
>
> Here's a simple example
> julia> initR()
> 1
>
> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
> Ptr{Void} @0x08c1c388
>
> julia> formaldehyde = tryEval(install(:Formaldehyde))
> Ptr{Void} @0x08fd1d18
>
> julia> inherits(formaldehyde,"data.frame")
> true
>
> julia> printValue(formaldehyde)
>   carb optden
> 1  0.1  0.086
> 2  0.3  0.269
> 3  0.5  0.446
> 4  0.6  0.538
> 5  0.7  0.626
> 6  0.9  0.782
>
> julia> length(formaldehyde)
> 2
>
> julia> names(formaldehyde)
> 2-element Array{ASCIIString,1}:
>  "carb"  
>  "optden"
>
> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,0)
> Ptr{Void} @0x0a5baf58
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
> 14
>
> julia> carb = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
> 6-element Array{Float64,1}:
>  0.1
>  0.3
>  0.5
>  0.6
>  0.7
>  0.9
>
> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,1)
> Ptr{Void} @0x0a5baef0
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
> 14
>
> julia> optden = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
> 6-element Array{Float64,1}:
>  0.086
>  0.269
>  0.446
>  0.538
>  0.626
>  0.782
>
>
> A call to printValue uses the R printing mechanism.
>
> Questions:
>  - What would be a good name for such a package?  In the spirit of PyCall 
> it could be RCall or Rcall perhaps.
>
>  - Right now I am defining several functions that emulate the names of 
> functions in R itself or in the R API.  What is a good balance?  Obviously 
> it would not be a good idea to bring in all the names in the R base 
> namespace.  On the other hand, those who know names like "inherits" and 
> what it means in R will find it convenient to have such names in such a 
> package.
>
> - Should I move the discussion to the julia-stats list?
>
>

Re: [julia-users] [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-05 Thread Shashi Gowda
I love the display system in Blink. It makes much more sense than the
current display-stack mechanism. First, it lets you show things on
different (even multiple) displays. Second, the classification of output
as Tabular, Graphical, and Textual (and finer classifications) is really
useful, something the current display-stack-based mechanism lacks.

Very much for moving it into Base!
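
For readers following along, the division of labor discussed in this thread can be sketched roughly as below, using the 0.3/0.4-era `writemime` API (in later Julia versions this role is played by 3-argument `show`). The `Point` type and its methods are purely illustrative, not code from Blink or Base:

```julia
type Point          # a toy user type (0.3-era syntax)
    x::Int
    y::Int
end

# writemime provides the representation of a value for a given MIME type...
import Base: writemime
writemime(io::IO, ::MIME"text/plain", p::Point) = print(io, "Point($(p.x), $(p.y))")
writemime(io::IO, ::MIME"text/html",  p::Point) =
    print(io, "<b>Point</b>($(p.x), $(p.y))")

# ...while display(x) walks the display stack, picks a suitable device
# (terminal, Blink window, ...), and that device calls writemime with a
# MIME type it can render.
display(Point(1, 2))
```

This is why the two pieces are orthogonal: routing decisions live entirely in `display`, while `writemime` methods only describe how a value looks in each format.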

On Tue, Jan 6, 2015 at 2:00 AM, Mike Innes  wrote:

> The writemime methods and the display system are largely orthogonal – the
> display system concerns itself with routing output to a suitable display
> device (terminal, Blink window, whatever) while writemime simply provides
> the implementation. In other words, I'm only really focused on the
> `display` function, and that work looks completely compatible with my
> proposed changes.
>
> On 5 January 2015 at 19:41, Ivar Nesje  wrote:
>
>> Have you seen https://github.com/JuliaLang/julia/pull/8987?
>>
>>
>


[julia-users] Re: building ipopt documentation

2015-01-05 Thread Tony Kelman
I think I see what's happening here. Ipopt.jl's documentation makefile 
(https://github.com/JuliaOpt/Ipopt.jl/blob/master/doc/Makefile) looks like 
it was copied from an old version of base Julia's 
(https://github.com/JuliaLang/julia/blob/ee461cc2829b9dab3e25dd574c9057d5e998b28b/doc/Makefile),
 
so there are some targets there that don't make sense for a package - like 
helpdb.jl, which refers to stdlib/*.rst which Ipopt.jl doesn't have. I 
think Miles or Iain would know more about whether that doc Makefile is even 
capable of running locally, they might only ever run it on ReadTheDocs. 
They'll probably both see the issue you opened 
- https://github.com/JuliaOpt/Ipopt.jl/issues/21 since they're watching the 
Ipopt.jl repository. Are you unable to access the online version 
at http://ipoptjl.readthedocs.org/en/latest/ipopt.html? I think it's just 
the one page for Ipopt.jl.

(Also hi Clas, a little surprised to see you here. Consider myself reminded 
that I need to respond to your email.)

-Tony


On Monday, January 5, 2015 6:09:27 PM UTC-8, Clas Jacobson wrote:
>
> I installed Ipopt and it seems to run fine. I am trying to build the 
> documentation on a Mac OS X and get the following:
>
> make: *** No rule to make target `stdlib/*.rst', needed by `helpdb.jl'.  
> Stop.
>
>
> Any clues here? I have sphinx on my machine.
>
>
>
>

[julia-users] Tips and tricks for figuring out where allocation occurs

2015-01-05 Thread Petr Krysl
Hi guys,

How does one figure out where allocation of memory occurs?  When I use 
the @time macro it tells me there's a lot of memory allocation and 
deallocation going on, but just looking at the code I'm at a loss: I can't 
see the reason for it there.

So, what are the tips and tricks for the curious?  How do I debug this 
memory allocation issue?  I looked at Lint, type checking, and 
code_typed().  Perhaps I don't know where to look, but these didn't seem 
to be of much help.

Thanks a bunch,

Petr


[julia-users] Re: InexactError due to negative hexadecimal number

2015-01-05 Thread Tony Kelman
Yes, actually. If you're on a 64-bit machine, then the integer literal "0" 
is an Int64, so int64(0) - 0x12345678 promotes and does the subtraction in 
Int64. By contrast, -0x12345678 negates the UInt32 literal directly and 
wraps around to the unsigned integer 0xedcba988, which is greater than 
typemax(Int32). So the value is too large to represent as a signed Int32.
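
The promotion and wraparound can be reproduced step by step at the REPL (shown here with the `Int32(...)` constructor of current Julia in place of the thread's 0.3-era `int32(...)`; the values are the same):

```julia
x = 0x12345678                 # 8 hex digits => a UInt32 literal

# Negating an unsigned value stays unsigned and wraps modulo 2^32.
neg_wrapped = -x               # 0xedcba988, i.e. 3989547400

# Subtracting from an Int (Int64 on a 64-bit machine) promotes to Int64,
# so the result is an ordinary negative number that fits in Int32.
diff = 0 - x                   # -305419896
ok = Int32(diff)               # fine: within typemin(Int32)..typemax(Int32)

# But 0xedcba988 > typemax(Int32), so converting the wrapped UInt32 throws.
try
    Int32(neg_wrapped)
catch err
    println(err)               # an InexactError
end
```

This matches the output in the original post: printing `-0x12345678` shows 3989547400, while converting it to Int32 raises InexactError.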


On Monday, January 5, 2015 5:43:20 PM UTC-8, Chi-wei Wang wrote:
>
>
>
> julia version 0.4.0-dev+2496
>
> Program:
> println(-0x12345678)
> println(0-0x12345678)
> println(int32(0-0x12345678))
> println(int32(-0x12345678))
>
>
> Output:
> 3989547400
> -305419896
> -305419896
> ERROR: InexactError()
>  in include at ./boot.jl:248
>  in include_from_node1 at loading.jl:128
>  in process_options at ./client.jl:312
>  in _start at ./client.jl:393
> while loading /home/jack/julia/jia32/test.jl, in expression starting on 
> line 4
>
>
> Can this be right?
>


Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Viral Shah
This is similar to the FFTW situation, where the license is held by MIT.

-viral

> On 06-Jan-2015, at 8:14 am, Viral Shah  wrote:
> 
> I believe that it is University of Florida that owns the copyright and they 
> would lose licencing revenue. I would love it too if we could have these 
> under the MIT licence, but it may not be a realistic expectation.
> 
> Looking at the paper is the best way to go. Jiahao has already produced the 
> pseudo code in the issue, and we do similar things in our dense \.
> 
> -viral
> 
> On 6 Jan 2015 07:31, "Kevin Squire"  wrote:
> Since Tim wrote the code (presumably?), couldn't he give permission to 
> license it under MIT?  (Assuming he was okay with that, of course!).
> 
> Cheers,
>Kevin
> 
> On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski  wrote:
> A word of legal caution: Tim, I believe some (all?) of your SuiteSparse code 
> is GPL and since Julia is MIT (although not all libraries are), we can look 
> at pseudocode but not copy GPL code while legally keeping the MIT license on 
> Julia's standard library.
> 
> Also, thanks so much for helping with this.
> 
> 
> On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari  wrote:
> Following your advice, I tried the code again, this time I also used MUMPS 
> solver from https://github.com/lruthotto/MUMPS.jl 
> I used a 42x43x44 grid. These are the results:
> 
> MUMPS: elapsed time: 2.09091471 seconds
> lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
> backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated, 
> 0.45% gc time)
> 
> and in Matlab:
> Elapsed time is 5.423656 seconds.
> 
> Thanks a lot Tim and Viral for your quick and helpful comments.
> 
> Kind regards,
> Ehsan
> 
> 
> On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
> Thanks, that is great. I was wondering about the symmetry checker - we have 
> the naive one currently, but I can just use the CHOLMOD one now. 
> 
> -viral 
> 
> 
> 
> > On 06-Jan-2015, at 2:22 am, Tim Davis  wrote: 
> > 
> > oops.  Yes, your factorize function is broken.  You might try mine instead, 
> > in my 
> > factorize package. 
> > 
> > I have a symmetry-checker in CHOLMOD.  It checks if the matrix is symmetric 
> > and 
> > with positive diagonals.  I think I have a MATLAB interface for it too.  
> > The code is efficient, 
> > since it doesn't form A transpose, and it quits early as soon as asymmetry 
> > is detected. 
> > 
> > It does rely on the fact that MATLAB requires its sparse matrices to have 
> > sorted row indices 
> > in each column, however. 
> > 
> > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote: 
> > Tim - thanks for the reference. The paper will come in handy. This is a 
> > longstanding issue, that we just haven’t got around to addressing yet, but 
> > perhaps now is a good time. 
> > 
> > https://github.com/JuliaLang/julia/issues/3295 
> > 
> > We have a very simplistic factorize() for sparse matrices that must have 
> > been implemented as a stopgap. This is what it currently does and that 
> > explains everything. 
> > 
> > # placing factorize here for now. Maybe add a new file 
> > function factorize(A::SparseMatrixCSC)
> >     m, n = size(A)
> >     if m == n
> >         Ac = cholfact(A)
> >         Ac.c.minor == m && ishermitian(A) && return Ac
> >     end
> >     return lufact(A)
> > end
> > 
> > -viral 
> > 
> > 
> > 
> > > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote: 
> > > 
> > > That does sound like a glitch in the "\" algorithm, rather than in 
> > > UMFPACK.  The OpenBLAS is pretty good. 
> > > 
> > > This is very nice in Julia: 
> > > 
> > > F = lufact (d["M"]) ; F \ d 
> > > 
> > > That's a great idea to have a factorization object like that.  I have a 
> > > MATLAB toolbox that does 
> > > the same thing, but it's not a built-in function inside MATLAB.  It's 
> > > written in M, so it can be slow for 
> > > small matrices.   With it, however, I can do: 
> > > 
> > > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.  
> > > Uses my polyalgorithm for "\". 
> > > x = F\b ; 
> > > 
> > > I can do S = inverse(A); which returns a factorization, not an inverse, 
> > > but with a flag 
> > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a copy 
> > > of A inside it, as well). 
> > > 
> > > You can also specify the factorization, such as 
> > > 
> > >  F=factorize(A, 'lu') 
> > > F=factorize(A,'svd') ; etc. 
> > > 
> > > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've 
> > > suggested the same 
> > > feature to The MathWorks. 
> > > 
> > > My factorize function includes a backslash polyalgorithm, if you're 
> > > interested in taking a look. 
> > > 
> > > Algorithm 930: FACTORIZE: an object-oriented linear system solver for 
> > > MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39, 
> > > Issue 4, pp. 28:1 - 28:18, 2013. 
> > > http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system

Re: [julia-users] Package name for embedding R within Julia

2015-01-05 Thread Viral Shah
Does this mirror the functionality in Rif.jl? I don’t know enough about the 
subtleties of the two packages, but it seems like there are multiple efforts 
towards calling R, and it would be great to combine them somehow.

https://github.com/lgautier/Rif.jl
https://github.com/lgautier/Rif.jl/issues/41

-viral



> On 06-Jan-2015, at 3:20 am, Douglas Bates  wrote:
> 
> The (unregistered) [RCall](https://github.com/JuliaStats/RCall.jl) package is 
> an initial cut at the interface.  I am not happy with all the names that I 
> chose and welcome suggestions of improvements.  For some reason I was unable 
> to create an R module within the RCall module, as Stefan suggested.  Again, I 
> welcome suggestions of how to accomplish this.  My particular difficulty is 
> that if I create an R module within the RCall module I don't see the names 
> from RCall.
> 
> 
> On Saturday, January 3, 2015 12:56:48 PM UTC-6, lgautier wrote:
> I agree.
> RCall does provide consistency, although at the possible slight cost of 
> boring conformity, and seems a better choice than RStats.
> 
> On Saturday, January 3, 2015 8:31:42 AM UTC-5, Viral Shah wrote:
> I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. Also, 
> once in JuliaStats, it will probably also be well advertised and documented - 
> so finding it should not be a challenge, IMO.
> 
> -viral
> 
> On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:
> +1 for RStats.jl, I also think it's more search-friendly but not only for 
> people coming from R.
> 
> On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun  wrote:
> That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may be 
> slightly more search-friendly for people coming from R, since they may not 
> know about PyCall.
> 
> 
> On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:
> For many statistics-oriented Julia users there is a great advantage in being 
> able to piggy-back on R development and to use at least the data sets from R 
> packages.  Hence the RDatasets package and the read_rda function in the 
> DataFrames package for reading saved R data.
> 
> Over the last couple of days I have been experimenting with running an 
> embedded R within Julia and calling R functions from Julia. This is similar 
> in scope to the Rif package except that this code is written in Julia and not 
> as a set of wrapper functions written in C. The R API is a C API and, in some 
> ways, very simple. Everything in R is represented as a "symbolic expression" 
> or SEXPREC and passed around as pointers to such expressions (called an SEXP 
> type).  Most functions take one or more SEXP values as arguments and return 
> an SEXP.
> 
> I have avoided reading the code for Rif for two reasons:
>  1. It is GPL3 licensed
>  2. I already know a fair bit of the R API and where to find API function 
> signatures.
> 
> Here's a simple example
> julia> initR()
> 1
> 
> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
> Ptr{Void} @0x08c1c388
> 
> julia> formaldehyde = tryEval(install(:Formaldehyde))
> Ptr{Void} @0x08fd1d18
> 
> julia> inherits(formaldehyde,"data.frame")
> true
> 
> julia> printValue(formaldehyde)
>   carb optden
> 1  0.1  0.086
> 2  0.3  0.269
> 3  0.5  0.446
> 4  0.6  0.538
> 5  0.7  0.626
> 6  0.9  0.782
> 
> julia> length(formaldehyde)
> 2
> 
> julia> names(formaldehyde)
> 2-element Array{ASCIIString,1}:
>  "carb"  
>  "optden"
> 
> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,0)
> Ptr{Void} @0x0a5baf58
> 
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
> 14
> 
> julia> carb = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
> 6-element Array{Float64,1}:
>  0.1
>  0.3
>  0.5
>  0.6
>  0.7
>  0.9
> 
> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,(SEXP,Cint),formaldehyde,1)
> Ptr{Void} @0x0a5baef0
> 
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
> 14
> 
> julia> optden = 
> copy(pointer_to_array(ccall((:REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
> 6-element Array{Float64,1}:
>  0.086
>  0.269
>  0.446
>  0.538
>  0.626
>  0.782
> 
> 
> A call to printValue uses the R printing mechanism.
> 
> Questions:
>  - What would be a good name for such a package?  In the spirit of PyCall it 
> could be RCall or Rcall perhaps.
> 
>  - Right now I am defining several functions that emulate the names of 
> functions in R itself or in the R API.  What is a good balance?  Obviously it 
> would not be a good idea to bring in all the names in the R base namespace.  
> On the other hand, those who know names like "inherits" and what it means in 
> R will find it convenient to have such names in such a package.
> 
> - Should I move the discussion to the julia-stats list?
> 
> 



Re: [julia-users] Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-05 Thread Viral Shah
I wonder if the threading branch is already useful for such stuff.

-viral

On Tuesday, January 6, 2015 9:43:27 AM UTC+5:30, Arch Robison wrote:
>
> Yes, multithreading would have been helpful.  I had a table that was up to 
> about 2 MByte per thread, and 8 hardware threads on the machine.  With only 
> 8 MByte of outer-level cache, running all 8 hardware threads with separate 
> processes meant having 8 copies of the table, and thrashing cache badly.  
> It would have been nice to share the table among the threads.
>
> On Mon, Jan 5, 2015 at 5:27 PM, Stefan Karpinski  
> wrote:
>
>> Very nice. Would this problem benefit even more from multithreading?
>>
>> On Sun, Jan 4, 2015 at 12:35 PM, Arch Robison  
>> wrote:
>>
>>> FYI, I won 2nd place in the recent Al Zimmerman programming contest 
>>> "Delacorte 
>>> Numbers ", using 
>>> only Julia and a quad-core MonkeyStation Pro 
>>> .   Julia worked out 
>>> well because it had:
>>>
>>>- interactivity to study the problem
>>>- quick prototyping to try ideas
>>>- fast scalar code
>>>- fast SIMD loops 
>>>
>>> I've been working on a paper that will describe the experience in more detail.
>>>
>>> - Arch
>>>
>>>
>>
>

Re: [julia-users] building ipopt documentation

2015-01-05 Thread Elliot Saba
What is the command you're running to get this output?  Also, what does
this have to do with ipopt?  Can you go through your steps out loud for us,
so we can follow along?
-E

On Mon, Jan 5, 2015 at 6:09 PM, Clas Jacobson 
wrote:

> I installed Ipopt and it seems to run fine. I am trying to build the
> documentation on a Mac OS X and get the following:
>
> make: *** No rule to make target `stdlib/*.rst', needed by `helpdb.jl'.
> Stop.
>
>
> Any clues here? I have sphinx on my machine.
>
>
>
>


[julia-users] Re: Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-05 Thread Eric Forgy
Go Illini! :)

On Monday, January 5, 2015 1:35:46 AM UTC+8, Arch Robison wrote:
>
> FYI, I won 2nd place in the recent Al Zimmerman programming contest 
> "Delacorte 
> Numbers ", using only 
> Julia and a quad-core MonkeyStation Pro 
> .   Julia worked out well 
> because it had:
>
>- interactivity to study the problem
>- quick prototyping to try ideas
>- fast scalar code
>- fast SIMD loops 
>
> I've been working on a paper that will describe the experience in more detail.
>
> - Arch
>
>

Re: [julia-users] Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-05 Thread Arch Robison
Yes, multithreading would have been helpful.  I had a table that was up to
about 2 MByte per thread, and 8 hardware threads on the machine.  With only
8 MByte of outer-level cache, running all 8 hardware threads with separate
processes meant having 8 copies of the table, and thrashing cache badly.
It would have been nice to share the table among the threads.

On Mon, Jan 5, 2015 at 5:27 PM, Stefan Karpinski 
wrote:

> Very nice. Would this problem benefit even more from multithreading?
>
> On Sun, Jan 4, 2015 at 12:35 PM, Arch Robison 
> wrote:
>
>> FYI, I won 2nd place in the recent Al Zimmerman programming contest 
>> "Delacorte
>> Numbers ", using
>> only Julia and a quad-core MonkeyStation Pro
>> .   Julia worked out
>> well because it had:
>>
>>- interactivity to study the problem
>>- quick prototyping to try ideas
>>- fast scalar code
>>- fast SIMD loops
>>
>> I've been working on a paper that will describe the experience in more detail.
>>
>> - Arch
>>
>>
>


Re: [julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Viral Shah
I do think that even just getting the API right will take a while, and writing 
Julian wrappers around scikits will be useful.

[julia-users] Re: Nullable use cases / expected behavior?

2015-01-05 Thread elextr
My reasoning for Nullable{T} is that it is type-stable.  Taking your 
example, None and Int would be different types, introducing a type 
instability and a potential performance penalty.  But Nullable{T} is 
always of type Nullable{T}, and get(Nullable{T}) is always of type T.  
Allowing Nullable{T} to decay into T would reintroduce the type 
instability.
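
A minimal sketch of that point, using the 0.4-era `Nullable` API discussed in this thread (the function names here are made up for illustration):

```julia
# Type-unstable: the result is sometimes an Int and sometimes nothing,
# so callers (and the compiler) must handle both possibilities.
maybe_bare(flag) = flag ? 3 : nothing

# Type-stable: the result is always a Nullable{Int}; whether a value is
# present is data inside the container, not part of the result's type.
maybe_boxed(flag) = flag ? Nullable(3) : Nullable{Int}()

# Consumers unwrap explicitly, and get(...) is always an Int:
y = maybe_boxed(true)
isnull(y) || println(get(y) + 1)   # 4 under the 0.4 API
```

If `Nullable{Int}` were allowed to decay implicitly into `Int`, an expression like `x + y` would again have a type that depends on whether `y` holds a value, reintroducing the instability.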

Cheers
Lex

On Tuesday, January 6, 2015 12:03:24 PM UTC+10, Seth wrote:
>
> I'm trying to figure out how (and under what circumstances) one would use 
> Nullable. That is, it seems that it might be valuable when you don't know 
> whether the value/object exists (sort of like Python's None, I guess), but 
> then something like "Nullable(3) == 3" returns false, and that sort of 
> messes up how I'm thinking about it.
>
> The code I'd imagine would be useful would be something like
>
> function foo(x::Int, y=Nullable{Int}())  # y defaults to Python's "None"
>                                          # but is restricted to Int
>     if !isnull(y)
>         return x+y  # x + get(y) works, but why must we invoke another
>                     # method to get the value?
>     else
>         return 2x
>     end
> end
>
> I'm left wondering why it wasn't reasonable to allow y to return get(y) if 
> not null, else raise a NullException, and the conclusion I'm coming to is 
> that I don't understand the concept of Nullable yet. Any pointers?
>


Re: [julia-users] sum of 1-element array of composite type returns reference

2015-01-05 Thread Greg Plowman

> The only reason I can think of is that a copy may be costly for certain
> types, and it's not needed in most cases since the summation will create
> a new value in the general case. But as you noted this is not true when
> the array only contains one element. So it looks like the most efficient
> fix would be to copy only when n == 1 in _mapreduce().
I must admit I don't really understand the code; however, it doesn't look 
like evaluation would be affected for n >= 2. 
The extra cost would only be for 1-element arrays:

Apparently, for 1-element arrays, zero(::MyType) needs to be defined.
For 0-element arrays, both zero(::MyType) and zero(::Type{MyType}) need to 
be defined.
(Strangely, for 0-element arrays, mr_empty() calls r_promote(::AddFun, 
zero(T)), which effectively calls zero(T) + zero(zero(T)), so both forms 
of zero() need to be defined.)

In any case, at the moment I guess I have 2 workarounds:

I could define MyType as a subtype of Number and provide zero() methods. 
However, I'm not sure what the side effects of subtyping Number are, or 
whether this is advisable.

type MyType <: Number
    x::Int
end

Base.zero(::Type{MyType}) = MyType(0)  # required for sum over 0-element arrays
Base.zero(::MyType) = MyType(0)        # required for sum over 0- and 1-element arrays

+(a::MyType, b::MyType) = MyType(a.x + b.x)


Alternatively, I could define my own sum() methods, but reproducing the 
general functionality of all the variants of sum() seems non-trivial.
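
For illustration, the first workaround restated as a self-contained snippet in current Julia syntax (`struct` instead of the thread's `type`; the name `MyVal` is a stand-in, not code from this thread):

```julia
# A toy Number subtype with just enough methods for sum() to work on
# empty, 1-element, and longer arrays.
struct MyVal <: Number
    x::Int
end

Base.zero(::Type{MyVal}) = MyVal(0)      # needed for sum over an empty array
Base.zero(::MyVal)       = MyVal(0)
Base.:+(a::MyVal, b::MyVal) = MyVal(a.x + b.x)

sum([MyVal(2), MyVal(3)])   # MyVal(5), via +
sum([MyVal(7)])             # MyVal(7)
sum(MyVal[])                # MyVal(0), via zero(::Type{MyVal})
```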






Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Viral Shah
I believe that it is University of Florida that owns the copyright and they
would lose licencing revenue. I would love it too if we could have these
under the MIT licence, but it may not be a realistic expectation.

Looking at the paper is the best way to go. Jiahao has already produced the
pseudo code in the issue, and we do similar things in our dense \.

-viral
On 6 Jan 2015 07:31, "Kevin Squire"  wrote:

> Since Tim wrote the code (presumably?), couldn't he give permission to
> license it under MIT?  (Assuming he was okay with that, of course!).
>
> Cheers,
>Kevin
>
> On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski 
> wrote:
>
>> A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
>> code is GPL and since Julia is MIT (although not all libraries are), we can
>> look at pseudocode but not copy GPL code while legally keeping the MIT
>> license on Julia's standard library.
>>
>> Also, thanks so much for helping with this.
>>
>>
>> On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari 
>> wrote:
>>
>>> Following your advice, I tried the code again, this time I also used
>>> MUMPS solver from https://github.com/lruthotto/MUMPS.jl
>>> I used a 42x43x44 grid. These are the results:
>>>
>>> MUMPS: elapsed time: 2.09091471 seconds
>>> lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
>>> backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated,
>>> 0.45% gc time)
>>>
>>> and in Matlab:
>>> Elapsed time is 5.423656 seconds.
>>>
>>> Thanks a lot Tim and Viral for your quick and helpful comments.
>>>
>>> Kind regards,
>>> Ehsan
>>>
>>>
>>> On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:

 Thanks, that is great. I was wondering about the symmetry checker - we
 have the naive one currently, but I can just use the CHOLMOD one now.

 -viral



 > On 06-Jan-2015, at 2:22 am, Tim Davis  wrote:
 >
 > oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
 > factorize package.
 >
 > I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
 symmetric and
 > with positive diagonals.  I think I have a MATLAB interface for it
 too.  The code is efficient,
 > since it doesn't form A transpose, and it quits early as soon as
 asymmetry is detected.
 >
 > It does rely on the fact that MATLAB requires its sparse matrices to
 have sorted row indices
 > in each column, however.
 >
 > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote:
 > Tim - thanks for the reference. The paper will come in handy. This is
 a longstanding issue, that we just haven’t got around to addressing yet,
 but perhaps now is a good time.
 >
 > https://github.com/JuliaLang/julia/issues/3295
 >
 > We have a very simplistic factorize() for sparse matrices that must
 have been implemented as a stopgap. This is what it currently does and that
 explains everything.
 >
 > # placing factorize here for now. Maybe add a new file
 > function factorize(A::SparseMatrixCSC)
 >     m, n = size(A)
 >     if m == n
 >         Ac = cholfact(A)
 >         Ac.c.minor == m && ishermitian(A) && return Ac
 >     end
 >     return lufact(A)
 > end
 >
 > -viral
 >
 >
 >
 > > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
 > >
 > > That does sound like a glitch in the "\" algorithm, rather than in
 UMFPACK.  The OpenBLAS is pretty good.
 > >
 > > This is very nice in Julia:
 > >
 > > F = lufact (d["M"]) ; F \ d
 > >
 > > That's a great idea to have a factorization object like that.  I
 have a MATLAB toolbox that does
 > > the same thing, but it's not a built-in function inside MATLAB.
 It's written in M, so it can be slow for
 > > small matrices.   With it, however, I can do:
 > >
 > > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or
 whatever.  Uses my polyalgorithm for "\".
 > > x = F\b ;
 > >
 > > I can do S = inverse(A); which returns a factorization, not an
 inverse, but with a flag
 > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps
 a copy of A inside it, as well).
 > >
 > > You can also specify the factorization, such as
 > >
 > >  F=factorize(A, 'lu')
 > > F=factorize(A,'svd') ; etc.
 > >
 > > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.
 I've suggested the same
 > > feature to The MathWorks.
 > >
 > > My factorize function includes a backslash polyalgorithm, if you're
 interested in taking a look.
 > >
 > > Algorithm 930: FACTORIZE: an object-oriented linear system solver
 for MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
 Issue 4, pp. 28:1 - 28:18, 2013.
 > > http://faculty.cse.tamu.edu/davis/publications_files/
 Factorize_an_object_oriented_l

[julia-users] building ipopt documentation

2015-01-05 Thread Clas Jacobson
I installed Ipopt and it seems to run fine. I am trying to build the 
documentation on a Mac OS X and get the following:

make: *** No rule to make target `stdlib/*.rst', needed by `helpdb.jl'.  
Stop.


Any clues here? I have sphinx on my machine.





Re: [julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Tom Fawcett
Thanks, Jacob, this is what I was looking for.
-Tom

On Mon, Jan 5, 2015 at 5:48 PM, Jacob Quinn  wrote:

> I know there's been a lot of discussion [here](
> https://github.com/JuliaStats/Roadmap.jl/issues/11) in the past, though
> not very recently. I would imagine there would be even more willing to
> participate in pushing things forward at this point (myself included). I'd
> say chiming in there would most likely get some great response.
>
> -Jacob
>
> On Mon, Jan 5, 2015 at 6:28 PM, Tom Fawcett  wrote:
>
>> True, but yes, not very satisfying.
>>
>> It seems like there's a good intersection of Julia people with machine
>> learning people.  I was thinking there might already be an effort underway
>> to develop a native ML framework for Julia.  Since I'm an ML person I'd
>> like to get involved.  But I'm new to Julia so I probably wouldn't be the
>> best person to lead such an effort.
>>
>> Regards,
>> -Tom
>>
>> On Mon, Jan 5, 2015 at 2:42 PM, Stefan Karpinski 
>> wrote:
>>
>>> You can always call scikit learn from Julia using PyCall. Not sure how
>>> satisfying that would be for what you had in mind though.
>>>
>>> On Mon, Jan 5, 2015 at 3:22 PM, Tom Fawcett 
>>> wrote:
>>>
 Fellow humans,

 I realize there are various machine learning algorithms implemented in
 Julia.  Is there anything like a machine learning framework, similar to
 scikit-learn, under development?

 Of course, Julia already has many of the capabilities of Numpy & Scipy
 so that's most of the way.  I'm imagining a package (or meta-package) to
 provide a common processing framework (comprising IO, pre-processing, core
 ML algs, evaluation, visualization, etc.) with a set of APIs.  It would
 provide a standard way to string together components so anyone can set up
 an ML processing stream or contribute a new module.

 Is anything in the works?  I did a brief search and didn't find
 anything.

 Thanks,
 -Tom


>>>
>>
>


[julia-users] Nullable use cases / expected behavior?

2015-01-05 Thread Seth
I'm trying to figure out how (and under what circumstances) one would use 
Nullable. That is, it seems that it might be valuable when you don't know 
whether the value/object exists (sort of like Python's None, I guess), but 
then something like "Nullable(3) == 3" returns false, and that sort of 
messes up how I'm thinking about it.

The code I'd imagine would be useful would be something like

function foo(x::Int, y=Nullable{Int}())  # that is, y defaults to python's 
"None" but is restricted to Int
if !isnull(y)
return x+y  # x + get(y) works, but why must we invoke another 
method to get the value?
else
return 2x
end
end

I'm left wondering why it wasn't reasonable to allow y to return get(y) if 
not null, else raise a NullException, and the conclusion I'm coming to is 
that I don't understand the concept of Nullable yet. Any pointers?
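For reference, a hedged sketch of the pattern the Nullable design intends (Julia 0.4-era API; `get` makes the unwrap explicit, which keeps the function's return type a plain Int rather than sometimes-Int, sometimes-null):

```julia
# Sketch of the intended Nullable pattern (Julia 0.4-era API).
function foo(x::Int, y::Nullable{Int}=Nullable{Int}())
    isnull(y) ? 2x : x + get(y)   # get() is the explicit, checked unwrap
end

foo(3)                 # 6
foo(3, Nullable(4))    # 7
get(Nullable(3)) == 3  # true -- compare the *unwrapped* value, not the box
```

The last line is why `Nullable(3) == 3` reads as false above: the container is not its contents, and the unwrap is deliberately explicit.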


Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Kevin Squire
Since Tim wrote the code (presumably?), couldn't he give permission to
license it under MIT?  (Assuming he was okay with that, of course!).

Cheers,
   Kevin

On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski 
wrote:

> A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
> code is GPL and since Julia is MIT (although not all libraries are), we can
> look at pseudocode but not copy GPL code while legally keeping the MIT
> license on Julia's standard library.
>
> Also, thanks so much for helping with this.
>
>
> On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari 
> wrote:
>
>> Following your advice, I tried the code again, this time I also used
>> MUMPS solver from https://github.com/lruthotto/MUMPS.jl
>> I used a 42x43x44 grid. These are the results:
>>
>> MUMPS: elapsed time: 2.09091471 seconds
>> lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
>> backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated,
>> 0.45% gc time)
>>
>> and in Matlab:
>> Elapsed time is 5.423656 seconds.
>>
>> Thanks a lot Tim and Viral for your quick and helpful comments.
>>
>> Kind regards,
>> Ehsan
>>
>>
>> On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
>>>
>>> Thanks, that is great. I was wondering about the symmetry checker - we
>>> have the naive one currently, but I can just use the CHOLMOD one now.
>>>
>>> -viral
>>>
>>>
>>>
>>> > On 06-Jan-2015, at 2:22 am, Tim Davis  wrote:
>>> >
>>> > oops.  Yes, your factorize function is broken.  You might try mine
>>> instead, in my
>>> > factorize package.
>>> >
>>> > I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
>>> symmetric and
>>> > with positive diagonals.  I think I have a MATLAB interface for it
>>> too.  The code is efficient,
>>> > since it doesn't form A transpose, and it quits early as soon as
>>> asymmetry is detected.
>>> >
>>> > It does rely on the fact that MATLAB requires its sparse matrices to
>>> have sorted row indices
>>> > in each column, however.
>>> >
>>> > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote:
>>> > Tim - thanks for the reference. The paper will come in handy. This is
>>> a longstanding issue, that we just haven’t got around to addressing yet,
>>> but perhaps now is a good time.
>>> >
>>> > https://github.com/JuliaLang/julia/issues/3295
>>> >
>>> > We have a very simplistic factorize() for sparse matrices that must
>>> have been implemented as a stopgap. This is what it currently does and that
>>> explains everything.
>>> >
>>> > # placing factorize here for now. Maybe add a new file
>>> > function factorize(A::SparseMatrixCSC)
>>> > m, n = size(A)
>>> > if m == n
>>> > Ac = cholfact(A)
>>> > Ac.c.minor == m && ishermitian(A) && return Ac
>>> > end
>>> > return lufact(A)
>>> > end
>>> >
>>> > -viral
>>> >
>>> >
>>> >
>>> > > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
>>> > >
>>> > > That does sound like a glitch in the "\" algorithm, rather than in
>>> UMFPACK.  The OpenBLAS is pretty good.
>>> > >
>>> > > This is very nice in Julia:
>>> > >
>>> > > F = lufact (d["M"]) ; F \ d
>>> > >
>>> > > That's a great idea to have a factorization object like that.  I
>>> have a MATLAB toolbox that does
>>> > > the same thing, but it's not a built-in function inside MATLAB.
>>> It's written in M, so it can be slow for
>>> > > small matrices.   With it, however, I can do:
>>> > >
>>> > > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or
>>> whatever.  Uses my polyalgorithm for "\".
>>> > > x = F\b ;
>>> > >
>>> > > I can do S = inverse(A); which returns a factorization, not an
>>> inverse, but with a flag
>>> > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
>>> copy of A inside it, as well).
>>> > >
>>> > > You can also specify the factorization, such as
>>> > >
>>> > >  F=factorize(A, 'lu')
>>> > > F=factorize(A,'svd') ; etc.
>>> > >
>>> > > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.
>>> I've suggested the same
>>> > > feature to The MathWorks.
>>> > >
>>> > > My factorize function includes a backslash polyalgorithm, if you're
>>> interested in taking a look.
>>> > >
>>> > > Algorithm 930: FACTORIZE: an object-oriented linear system solver
>>> for MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
>>> Issue 4, pp. 28:1 - 28:18, 2013.
>>> > > http://faculty.cse.tamu.edu/davis/publications_files/
>>> Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
>>> > >
>>> > > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:
>>> > > The BLAS will certainly make a difference, but OpenBLAS is
>>> reasonably good.
>>> > >
>>> > > I also wonder what is happening in our \ polyalgorithm. The profile
>>> suggests the code is trying Cholesky decomposition, but it really shouldn't
>>> since the matrix is not symmetric. If I just do the lufact(), which
>>> essentially calls Umfpack, I can match Matlab timing:
>>> > >
>>> > > @time F = lufact(d["M"]); F \ d["RHS"];
>>> > >

Re: [julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Jacob Quinn
I know there's been a lot of discussion [here](
https://github.com/JuliaStats/Roadmap.jl/issues/11) in the past, though not
very recently. I would imagine there would be even more willing to
participate in pushing things forward at this point (myself included). I'd
say chiming in there would most likely get some great response.

-Jacob

On Mon, Jan 5, 2015 at 6:28 PM, Tom Fawcett  wrote:

> True, but yes, not very satisfying.
>
> It seems like there's a good intersection of Julia people with machine
> learning people.  I was thinking there might already be an effort underway
> to develop a native ML framework for Julia.  Since I'm an ML person I'd
> like to get involved.  But I'm new to Julia so I probably wouldn't be the
> best person to lead such an effort.
>
> Regards,
> -Tom
>
> On Mon, Jan 5, 2015 at 2:42 PM, Stefan Karpinski 
> wrote:
>
>> You can always call scikit learn from Julia using PyCall. Not sure how
>> satisfying that would be for what you had in mind though.
>>
>> On Mon, Jan 5, 2015 at 3:22 PM, Tom Fawcett 
>> wrote:
>>
>>> Fellow humans,
>>>
>>> I realize there are various machine learning algorithms implemented in
>>> Julia.  Is there anything like a machine learning framework, similar to
>>> scikit-learn, under development?
>>>
>>> Of course, Julia already has many of the capabilities of Numpy & Scipy
>>> so that's most of the way.  I'm imagining a package (or meta-package) to
>>> provide a common processing framework (comprising IO, pre-processing, core
>>> ML algs, evaluation, visualization, etc.) with a set of APIs.  It would
>>> provide a standard way to string together components so anyone can set up
>>> an ML processing stream or contribute a new module.
>>>
>>> Is anything in the works?  I did a brief search and didn't find
>>> anything.
>>>
>>> Thanks,
>>> -Tom
>>>
>>>
>>
>


[julia-users] InexactError due to negative hexadecimal number

2015-01-05 Thread Chi-wei Wang


julia version 0.4.0-dev+2496

Program:
println(-0x12345678)
println(0-0x12345678)
println(int32(0-0x12345678))
println(int32(-0x12345678))


Output:
3989547400
-305419896
-305419896
ERROR: InexactError()
 in include at ./boot.jl:248
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:312
 in _start at ./client.jl:393
while loading /home/jack/julia/jia32/test.jl, in expression starting on 
line 4


Can this be right?
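For reference, the behavior is expected on a 64-bit machine: hexadecimal literals are unsigned, so the four lines take different paths. A sketch annotating each step (assumes 64-bit Int):

```julia
# 0x12345678 is a UInt32; negating an unsigned value wraps mod 2^32:
-0x12345678            # 0xedcba988 == 3989547400

# Mixing with the Int64 literal 0 promotes, so subtraction is signed:
0 - 0x12345678         # -305419896

# That value fits in Int32, so the conversion succeeds:
Int32(0 - 0x12345678)  # -305419896

# But 0xedcba988 > typemax(Int32) == 2147483647, hence InexactError:
# Int32(-0x12345678)
```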


Re: [julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Tom Fawcett
True, but yes, not very satisfying.

It seems like there's a good intersection of Julia people with machine
learning people.  I was thinking there might already be an effort underway
to develop a native ML framework for Julia.  Since I'm an ML person I'd
like to get involved.  But I'm new to Julia so I probably wouldn't be the
best person to lead such an effort.

Regards,
-Tom

On Mon, Jan 5, 2015 at 2:42 PM, Stefan Karpinski 
wrote:

> You can always call scikit learn from Julia using PyCall. Not sure how
> satisfying that would be for what you had in mind though.
>
> On Mon, Jan 5, 2015 at 3:22 PM, Tom Fawcett  wrote:
>
>> Fellow humans,
>>
>> I realize there are various machine learning algorithms implemented in
>> Julia.  Is there anything like a machine learning framework, similar to
>> scikit-learn, under development?
>>
>> Of course, Julia already has many of the capabilities of Numpy & Scipy so
>> that's most of the way.  I'm imagining a package (or meta-package) to
>> provide a common processing framework (comprising IO, pre-processing, core
>> ML algs, evaluation, visualization, etc.) with a set of APIs.  It would
>> provide a standard way to string together components so anyone can set up
>> an ML processing stream or contribute a new module.
>>
>> Is anything in the works?  I did a brief search and didn't find anything.
>>
>>
>> Thanks,
>> -Tom
>>
>>
>


[julia-users] allocate array of a custom type

2015-01-05 Thread Steven G. Johnson
You can use Array(MyType, n) to allocate a length-n uninitialized array of your 
type.
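A short hedged sketch of that pattern (0.4-era syntax; `MyType` and its field are illustrative):

```julia
type MyType           # 0.4 syntax; later Julia spells this `mutable struct`
    x::Float64
end

a = Array(MyType, 10)            # length-10 array, slots start as #undef
for i in 1:length(a)
    a[i] = MyType(rand())        # fill before reading, or expect UndefRefError
end

b = [MyType(rand()) for i in 1:10]   # comprehension: allocated and filled
```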

[julia-users] Julia REPL segfaults non-deterministically...

2015-01-05 Thread Tomas Lycken


I just got the following in the REPL:

julia> module Foo

   type Bar{T} end

   end

signal (11): Segmentation fault
unknown function (ip: -716631494)
jl_get_binding at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
jl_get_global at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
jl_module_run_initializer at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown 
line)
unknown function (ip: -716770303)
unknown function (ip: -716771435)
jl_toplevel_eval_in at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
eval_user_input at REPL.jl:54
jlcall_eval_user_input_42363 at  (unknown line)
jl_apply_generic at /opt/julia-0.4/usr/bin/../lib/libjulia.so (unknown line)
anonymous at task.jl:96
unknown function (ip: -716815279)
unknown function (ip: 0)

I’ve seen segfaults a couple of times since I last built Julia from source, 
but I was always too concentrated on what I was doing to try to pin it down 
- but this time I wasn’t =) I hadn’t done anything in that session except a 
few Pkg commands, but I couldn’t reproduce the problem either in a fresh 
REPL or in one where I ran exactly the same Pkg commands in the same 
sequence before doing that. Is this a known problem, or should I report it 
on github?

julia> versioninfo()
Julia Version 0.4.0-dev+2416
Commit db3fa60 (2015-01-02 18:23 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

// T
​


Re: [julia-users] Re: Package name for embedding R within Julia

2015-01-05 Thread Tomas Lycken
Nvm, I forgot to export bar from the outer module; doing that, using works 
just as well as import, and you don't need to fully qualify (exported) 
names in the outer module, just as usual.

// T

On Tuesday, January 6, 2015 1:31:12 AM UTC+1, Tomas Lycken wrote:
>
> The following works for me:
>
> module RCall
>
> module R
>
> import ..RCall
>
> export foo
>
> function foo()
> RCall.bar()
> end
>
> end # R
>
> export R
>
> function bar()
> println("hello, handsome!")
> end
>
> end
>
> Now, I can do
>
> using RCall
> R.foo()
> "hello, handsome!"
>
> I didn’t manage to get it working with using instead of import in the 
> inner module, though, but that only affects your own usage of your own 
> code, so I think that’s a minor issue.
>
> // T
>
> On Monday, January 5, 2015 10:50:04 PM UTC+1, Douglas Bates wrote:
>
> The (unregistered) [RCall](https://github.com/JuliaStats/RCall.jl) 
>> package is an initial cut at the interface.  I am not happy with all the 
>> names that I chose and welcome suggestions of improvements.  For some 
>> reason I was unable to create an R module within the RCall module, as 
>> Stefan suggested.  Again, I welcome suggestions of how to accomplish this. 
>>  My particular difficulty is that if I create an R module within the RCall 
>> module I don't see the names from RCall.
>>
>>
>> On Saturday, January 3, 2015 12:56:48 PM UTC-6, lgautier wrote:
>>>
>>> I agree.
>>> RCall does provide consistency, although at the possible slight cost of 
>>> boring conformity, and seems a better choice than RStats.
>>>
>>> On Saturday, January 3, 2015 8:31:42 AM UTC-5, Viral Shah wrote:

 I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. 
 Also, once in JuliaStats, it will probably also be well advertised and 
 documented - so finding it should not be a challenge, IMO.

 -viral

 On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:
>
> +1 for RStats.jl, I also think it's more search-friendly but not only 
> for people coming from R.
>
> On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun  
> wrote:
>
>> That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may 
>> be slightly more search-friendly for people coming from R, since they 
>> may 
>> not know about PyCall.
>>
>>
>> On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:
>>>
>>> For many statistics-oriented Julia users there is a great advantage 
>>> in being able to piggy-back on R development and to use at least the 
>>> data 
>>> sets from R packages.  Hence the RDatasets package and the read_rda 
>>> function in the DataFrames package for reading saved R data.
>>>
>>> Over the last couple of days I have been experimenting with running 
>>> an embedded R within Julia and calling R functions from Julia. This is 
>>> similar in scope to the Rif package except that this code is written in 
>>> Julia and not as a set of wrapper functions written in C. The R API is 
>>> a C 
>>> API and, in some ways, very simple. Everything in R is represented as a 
>>> "symbolic expression" or SEXPREC and passed around as pointers to such 
>>> expressions (called an SEXP type).  Most functions take one or more 
>>> SEXP 
>>> values as arguments and return an SEXP.
>>>
>>> I have avoided reading the code for Rif for two reasons:
>>>  1. It is GPL3 licensed
>>>  2. I already know a fair bit of the R API and where to find API 
>>> function signatures.
>>>
>>> Here's a simple example
>>> julia> initR()
>>> 1
>>>
>>> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
>>> Ptr{Void} @0x08c1c388
>>>
>>> julia> formaldehyde = tryEval(install(:Formaldehyde))
>>> Ptr{Void} @0x08fd1d18
>>>
>>> julia> inherits(formaldehyde,"data.frame")
>>> true
>>>
>>> julia> printValue(formaldehyde)
>>>   carb optden
>>> 1  0.1  0.086
>>> 2  0.3  0.269
>>> 3  0.5  0.446
>>> 4  0.6  0.538
>>> 5  0.7  0.626
>>> 6  0.9  0.782
>>>
>>> julia> length(formaldehyde)
>>> 2
>>>
>>> julia> names(formaldehyde)
>>> 2-element Array{ASCIIString,1}:
>>>  "carb"  
>>>  "optden"
>>>
>>> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,
>>> (SEXP,Cint),formaldehyde,0)
>>> Ptr{Void} @0x0a5baf58
>>>
>>> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
>>> 14
>>>
>>> julia> carb = copy(pointer_to_array(ccall((:
>>> REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
>>> 6-element Array{Float64,1}:
>>>  0.1
>>>  0.3
>>>  0.5
>>>  0.6
>>>  0.7
>>>  0.9
>>>
>>> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,
>>> (SEXP,Cint),formaldehyde,1)
>>> Ptr{Void} @0x0a5baef0
>>>
>>> julia> ccall((:TYPEOF,libR),Cint,

Re: [julia-users] Re: Package name for embedding R within Julia

2015-01-05 Thread Tomas Lycken


The following works for me:

module RCall

module R

import ..RCall

export foo

function foo()
RCall.bar()
end

end # R

export R

function bar()
println("hello, handsome!")
end

end

Now, I can do

using RCall
R.foo()
"hello, handsome!"

I didn’t manage to get it working with using instead of import in the inner 
module, though, but that only affects your own usage of your own code, so I 
think that’s a minor issue.

// T

On Monday, January 5, 2015 10:50:04 PM UTC+1, Douglas Bates wrote:

The (unregistered) [RCall](https://github.com/JuliaStats/RCall.jl) package 
> is an initial cut at the interface.  I am not happy with all the names that 
> I chose and welcome suggestions of improvements.  For some reason I was 
> unable to create an R module within the RCall module, as Stefan suggested. 
>  Again, I welcome suggestions of how to accomplish this.  My particular 
> difficulty is that if I create an R module within the RCall module I don't 
> see the names from RCall.
>
>
> On Saturday, January 3, 2015 12:56:48 PM UTC-6, lgautier wrote:
>>
>> I agree.
>> RCall does provide consistency, although at the possible slight cost of 
>> boring conformity, and seems a better choice than RStats.
>>
>> On Saturday, January 3, 2015 8:31:42 AM UTC-5, Viral Shah wrote:
>>>
>>> I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. 
>>> Also, once in JuliaStats, it will probably also be well advertised and 
>>> documented - so finding it should not be a challenge, IMO.
>>>
>>> -viral
>>>
>>> On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:

 +1 for RStats.jl, I also think it's more search-friendly but not only 
 for people coming from R.

 On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun  
 wrote:

> That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may 
> be slightly more search-friendly for people coming from R, since they may 
> not know about PyCall.
>
>
> On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:
>>
>> For many statistics-oriented Julia users there is a great advantage 
>> in being able to piggy-back on R development and to use at least the 
>> data 
>> sets from R packages.  Hence the RDatasets package and the read_rda 
>> function in the DataFrames package for reading saved R data.
>>
>> Over the last couple of days I have been experimenting with running 
>> an embedded R within Julia and calling R functions from Julia. This is 
>> similar in scope to the Rif package except that this code is written in 
>> Julia and not as a set of wrapper functions written in C. The R API is a 
>> C 
>> API and, in some ways, very simple. Everything in R is represented as a 
>> "symbolic expression" or SEXPREC and passed around as pointers to such 
>> expressions (called an SEXP type).  Most functions take one or more SEXP 
>> values as arguments and return an SEXP.
>>
>> I have avoided reading the code for Rif for two reasons:
>>  1. It is GPL3 licensed
>>  2. I already know a fair bit of the R API and where to find API 
>> function signatures.
>>
>> Here's a simple example
>> julia> initR()
>> 1
>>
>> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
>> Ptr{Void} @0x08c1c388
>>
>> julia> formaldehyde = tryEval(install(:Formaldehyde))
>> Ptr{Void} @0x08fd1d18
>>
>> julia> inherits(formaldehyde,"data.frame")
>> true
>>
>> julia> printValue(formaldehyde)
>>   carb optden
>> 1  0.1  0.086
>> 2  0.3  0.269
>> 3  0.5  0.446
>> 4  0.6  0.538
>> 5  0.7  0.626
>> 6  0.9  0.782
>>
>> julia> length(formaldehyde)
>> 2
>>
>> julia> names(formaldehyde)
>> 2-element Array{ASCIIString,1}:
>>  "carb"  
>>  "optden"
>>
>> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,
>> (SEXP,Cint),formaldehyde,0)
>> Ptr{Void} @0x0a5baf58
>>
>> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
>> 14
>>
>> julia> carb = copy(pointer_to_array(ccall((:
>> REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
>> 6-element Array{Float64,1}:
>>  0.1
>>  0.3
>>  0.5
>>  0.6
>>  0.7
>>  0.9
>>
>> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,
>> (SEXP,Cint),formaldehyde,1)
>> Ptr{Void} @0x0a5baef0
>>
>> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
>> 14
>>
>> julia> optden = copy(pointer_to_array(ccall((:
>> REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
>> 6-element Array{Float64,1}:
>>  0.086
>>  0.269
>>  0.446
>>  0.538
>>  0.626
>>  0.782
>>
>>
>> A call to printValue uses the R printing mechanism.
>>
>> Questions:
>>  - What would be a good name for such a package?  In the spirit of 
>> PyCall

[julia-users] allocate array of a custom type

2015-01-05 Thread Kuba Roth
Heh, I think I have found the answer. It turned out that the list 
comprehensions were doing a pretty good job. The slowdown I was getting was due 
to my incorrect use of the rand function.


Re: [julia-users] Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-05 Thread Stefan Karpinski
Very nice. Would this problem benefit even more from multithreading?

On Sun, Jan 4, 2015 at 12:35 PM, Arch Robison 
wrote:

> FYI, I won 2nd place in the recent Al Zimmerman programming contest "Delacorte
> Numbers ", using only
> Julia and a quad-core MonkeyStation Pro
> .   Julia worked out well
> because it had:
>
>- interactivity to study the problem
>- quick prototyping to try ideas
>- fast scalar code
>- fast SIMD loops
>
> I've working on a paper that will describe the experience in more detail.
>
> - Arch
>
>


[julia-users] allocate array of a custom type

2015-01-05 Thread Kuba Roth
Oops, my bad - the conversion error makes sense - the 10 is an Int. 

Would it be better then to use zeros() or a list comprehension here?
zeros() defaults to Float64, so going through an intermediate type may be 
expensive for large arrays...

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Stefan Karpinski
A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
code is GPL and since Julia is MIT (although not all libraries are), we can
look at pseudocode but not copy GPL code while legally keeping the MIT
license on Julia's standard library.

Also, thanks so much for helping with this.

On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari 
wrote:

> Following your advice, I tried the code again, this time I also used MUMPS
> solver from https://github.com/lruthotto/MUMPS.jl
> I used a 42x43x44 grid. These are the results:
>
> MUMPS: elapsed time: 2.09091471 seconds
> lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
> backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated,
> 0.45% gc time)
>
> and in Matlab:
> Elapsed time is 5.423656 seconds.
>
> Thanks a lot Tim and Viral for your quick and helpful comments.
>
> Kind regards,
> Ehsan
>
>
> On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
>>
>> Thanks, that is great. I was wondering about the symmetry checker - we
>> have the naive one currently, but I can just use the CHOLMOD one now.
>>
>> -viral
>>
>>
>>
>> > On 06-Jan-2015, at 2:22 am, Tim Davis  wrote:
>> >
>> > oops.  Yes, your factorize function is broken.  You might try mine
>> instead, in my
>> > factorize package.
>> >
>> > I have a symmetry-checker in CHOLMOD.  It checks if the matrix is
>> symmetric and
>> > with positive diagonals.  I think I have a MATLAB interface for it
>> too.  The code is efficient,
>> > since it doesn't form A transpose, and it quits early as soon as
>> asymmetry is detected.
>> >
>> > It does rely on the fact that MATLAB requires its sparse matrices to
>> have sorted row indices
>> > in each column, however.
>> >
>> > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote:
>> > Tim - thanks for the reference. The paper will come in handy. This is a
>> longstanding issue, that we just haven’t got around to addressing yet, but
>> perhaps now is a good time.
>> >
>> > https://github.com/JuliaLang/julia/issues/3295
>> >
>> > We have a very simplistic factorize() for sparse matrices that must
>> have been implemented as a stopgap. This is what it currently does and that
>> explains everything.
>> >
>> > # placing factorize here for now. Maybe add a new file
>> > function factorize(A::SparseMatrixCSC)
>> > m, n = size(A)
>> > if m == n
>> > Ac = cholfact(A)
>> > Ac.c.minor == m && ishermitian(A) && return Ac
>> > end
>> > return lufact(A)
>> > end
>> >
>> > -viral
>> >
>> >
>> >
>> > > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
>> > >
>> > > That does sound like a glitch in the "\" algorithm, rather than in
>> UMFPACK.  The OpenBLAS is pretty good.
>> > >
>> > > This is very nice in Julia:
>> > >
>> > > F = lufact (d["M"]) ; F \ d
>> > >
>> > > That's a great idea to have a factorization object like that.  I have
>> a MATLAB toolbox that does
>> > > the same thing, but it's not a built-in function inside MATLAB.  It's
>> written in M, so it can be slow for
>> > > small matrices.   With it, however, I can do:
>> > >
>> > > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.
>> Uses my polyalgorithm for "\".
>> > > x = F\b ;
>> > >
>> > > I can do S = inverse(A); which returns a factorization, not an
>> inverse, but with a flag
>> > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
>> copy of A inside it, as well).
>> > >
>> > > You can also specify the factorization, such as
>> > >
>> > >  F=factorize(A, 'lu')
>> > > F=factorize(A,'svd') ; etc.
>> > >
>> > > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.
>> I've suggested the same
>> > > feature to The MathWorks.
>> > >
>> > > My factorize function includes a backslash polyalgorithm, if you're
>> interested in taking a look.
>> > >
>> > > Algorithm 930: FACTORIZE: an object-oriented linear system solver for
>> MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
>> Issue 4, pp. 28:1 - 28:18, 2013.
>> > > http://faculty.cse.tamu.edu/davis/publications_files/
>> Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
>> > >
>> > > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:
>> > > The BLAS will certainly make a difference, but OpenBLAS is reasonably
>> good.
>> > >
>> > > I also wonder what is happening in our \ polyalgorithm. The profile
>> suggests the code is trying Cholesky decomposition, but it really shouldn't
>> since the matrix is not symmetric. If I just do the lufact(), which
>> essentially calls Umfpack, I can match Matlab timing:
>> > >
>> > > @time F = lufact(d["M"]); F \ d["RHS"];
>> > >
>> > > -viral
>> > >
>> > >
>> > > On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
>> > > The difference could be the BLAS.  MATLAB comes with its own BLAS
>> library, and the performance
>> > > of the BLAS has a huge impact on the performance of UMFPACK,
>> particularly for 3D discretizations.
>> > >
>> > > On Mon, Jan 5, 2015 at 6:21 A

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Hans W Borchers
Oh, right. I now see the same behavior in Matlab. Never went into this trap
-- probably because I was always using commas in array constructions.
Unfortunately (for me), there is a difference with these two spellings in 
Julia.


On Monday, January 5, 2015 11:47:25 PM UTC+1, Stefan Karpinski wrote:
>
> Splitting expressions inside of array syntax is space sensitive as well:
>
> julia> [1 + 2]
> 1-element Array{Int64,1}:
>  3
>
> julia> [1 +2]
> 1x2 Array{Int64,2}:
>  1  2
>
>
> Of course this bothers me too, but it's another example of 
> space-sensitivity..
>
> On Mon, Jan 5, 2015 at 5:42 PM, Hans W Borchers  > wrote:
>
>> Style guides are not syntax rules. Everybody writes n+1 at times.
>> Is there any other place in Julia where putting spaces (or not putting 
>> spaces) 
>> around arithmetical operators makes a difference?
>> Would this be allowed by the general Julia philosophy?
>> Will it not lead to errors very difficult to track down?
>>
>>
>

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Bezanson
Yes, we should try to restrict parsing of hex float literals more if
we're going to keep them. The `p` should only be special after
0xd.d...

In fact, I didn't notice before that this is actually a simple bug in
the parser. `2p+1` gives an internal error. That needs to be fixed in
any case.

On Mon, Jan 5, 2015 at 5:46 PM, Stefan Karpinski  wrote:
> Splitting expressions inside of array syntax is space sensitive as well:
>
> julia> [1 + 2]
> 1-element Array{Int64,1}:
>  3
>
> julia> [1 +2]
> 1x2 Array{Int64,2}:
>  1  2
>
>
> Of course this bothers me too, but it's another example of
> space-sensitivity..
>
> On Mon, Jan 5, 2015 at 5:42 PM, Hans W Borchers 
> wrote:
>>
>> Style guides are not syntax rules. Everybody writes n+1 at times.
>> Is there any other place in Julia where putting spaces (or not putting
>> spaces)
>> around arithmetical operators makes a difference?
>> Would this be allowed by the general Julia philosophy?
>> Will it not lead to errors very difficult to track down?
>>
>>
>>
>> On Monday, January 5, 2015 9:33:41 PM UTC+1, Jeff Waller wrote:
>>>
>>> The cause for this thread is mainly a lexical analyzer bug for hex
>>> notation. Except for the error in #9617, I'm fine with the current behavior
>>> and syntax even with the semi e-ambiguity if you want the scientific
>>> notation literal, use no spaces.  This is only ambiguous because Julia
>>> permits a number literal N to precede an identifier I as a shortcut for N*I,
>>> which is different than many languages and part of Julia's charm.  I'd be
>>> sorry to see it go.
>>>
>>> [0-9]+(\.[0-9]+)?e(\+|-)?[0-9]+   <- scientific notation literal
>>>
>>> 2e+1 is 2x10^1
>>> 2e + 1   is 2*e + 1
>>> 2e+ 1    is a syntax error because to the lexical analyzer 2e+ is an
>>> error without at least 1 trailing digit (no spaces)
>>>
>>> typing 2e+1 (without the space) and expecting it to mean 2*e + 1 is way
>>> over emphasizing the need to not type a space.  All of the other language
>>> style guides are consistent about this being bad style.
>>>
>>> Finally consider this
>>>
>>> julia> 2e-1e
>>> 0.5436563656918091
>>>
>>>
>>> This is parsed as (2*10^-1)e  = .2e which I assert is the right thing to
>>> do.
>
>

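For reference, a sketch of the notations that collide here (assuming the parser bug is fixed so `2p+1` parses as juxtaposition; `e` is Euler's constant in 0.4-era Base):

```julia
p = 1
2p + 1       # numeric-literal juxtaposition: 2 * p + 1 == 3

2e+1         # scientific notation: 20.0
2e + 1       # with spaces: 2 * e + 1 (e = 2.71828... in Julia 0.4)

0x1p+1       # hex float literal: 1.0 * 2^1 == 2.0 -- the source of `p`'s
             # special meaning, which should apply only after 0x...
```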

[julia-users] allocate array of a custom type

2015-01-05 Thread Kuba Roth
Hi there,
What is the recommended way to allocate an array of a custom type with a given
size?

a = MyType[10] gives me a conversion error. What am I missing here?

Also, does push!() come with a significant performance penalty compared to the
[] operator?

Thank you,
Kuba
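
A minimal sketch of the options, assuming a made-up `MyType` (Julia 0.3-era
syntax, where `sizehint` had not yet been renamed `sizehint!`):

```julia
type MyType          # hypothetical element type, for illustration only
    x::Int
end

# MyType[10] is an array *literal*: it tries to convert the integer 10
# into a MyType element, which is what raises the conversion error.
# To allocate an uninitialized 10-element vector, pass the length to Array:
a = Array(MyType, 10)        # 10-element Array{MyType,1}

# Or start empty and grow; sizehint reserves capacity up front so that
# repeated push! calls rarely have to reallocate.
b = MyType[]
sizehint(b, 10)
for i in 1:10
    push!(b, MyType(i))
end
```

push! is amortized O(1), so its cost versus preallocating and assigning with
`a[i] = ...` is usually small; preallocation mainly pays off in tight inner
loops.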

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Stefan Karpinski
Splitting expressions inside of array syntax is space sensitive as well:

julia> [1 + 2]
1-element Array{Int64,1}:
 3

julia> [1 +2]
1x2 Array{Int64,2}:
 1  2


Of course this bothers me too, but it's another example of
space-sensitivity..

On Mon, Jan 5, 2015 at 5:42 PM, Hans W Borchers 
wrote:

> Style guides are not syntax rules. Everybody writes n+1 at times.
> Is there any other place in Julia where putting spaces (or not putting
> spaces)
> around arithmetical operators makes a difference?
> Would this be allowed by the general Julia philosophy?
> Will it not lead to errors very difficult to track down?
>
>
>
> On Monday, January 5, 2015 9:33:41 PM UTC+1, Jeff Waller wrote:
>>
>> The cause for this thread is mainly a lexical analyzer bug for hex
>> notation. Except for the error in #9617, I'm fine with the current behavior
>> and syntax even with the semi e-ambiguity: if you want the scientific
>> notation literal, use no spaces.  This is only ambiguous because Julia
>> permits a number literal N to precede an identifier I as a shortcut for
>> N*I, which is different from many languages and part of Julia's charm.  I'd
>> be sorry to see it go.
>>
>> [0-9]+(.[0-9]+)?e(+|-)?[0-9]+    <- scientific notation literal
>>
>> 2e+1     is 2x10^1
>> 2e + 1   is 2*e + 1
>> 2e+ 1    is a syntax error because, to the lexical analyzer, 2e+ is an
>> error without at least 1 trailing digit (no spaces)
>>
>> typing 2e+1 (without the space) and expecting it to mean 2*e + 1 is way
>> overemphasizing the need to not type a space.  All of the other language
>> style guides are consistent about this being bad style.
>>
>> Finally consider this
>>
>> julia> 2e-1e
>> 0.5436563656918091
>>
>> This is parsed as (2*10^-1)e  = .2e which I assert is the right thing to
>> do.
>>
>


Re: [julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Stefan Karpinski
You can always call scikit-learn from Julia using PyCall. Not sure how
satisfying that would be for what you had in mind, though.

On Mon, Jan 5, 2015 at 3:22 PM, Tom Fawcett  wrote:

> Fellow humans,
>
> I realize there are various machine learning algorithms implemented in
> Julia.  Is there anything like a machine learning framework, similar to
> scikit-learn, under development?
>
> Of course, Julia already has many of the capabilities of Numpy & Scipy so
> that's most of the way.  I'm imagining a package (or meta-package) to
> provide a common processing framework (comprising IO, pre-processing, core
> ML algs, evaluation, visualization, etc.) with a set of APIs.  It would
> provide a standard way to string together components so anyone can set up
> an ML processing stream or contribute a new module.
>
> Is anything in the works?  I did a brief search and didn't find anything.
>
> Thanks,
> -Tom
>
>
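
Stefan's suggestion can be sketched roughly as follows, using PyCall's macro
API of the time; the toy regression data is made up, and this assumes
scikit-learn is installed in the Python that PyCall is built against:

```julia
using PyCall
@pyimport sklearn.linear_model as lm

# made-up data: two features, four samples
X = [1.0 1.0; 1.0 2.0; 2.0 2.0; 2.0 3.0]
y = [6.0, 8.0, 9.0, 11.0]

model = lm.LinearRegression()
model[:fit](X, y)            # Python object methods are accessed with symbols
println(model[:predict](X))
```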


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Hans W Borchers
Style guides are not syntax rules. Everybody writes n+1 at times.
Is there any other place in Julia where putting spaces (or not putting 
spaces) 
around arithmetical operators makes a difference?
Would this be allowed by the general Julia philosophy?
Will it not lead to errors very difficult to track down?


On Monday, January 5, 2015 9:33:41 PM UTC+1, Jeff Waller wrote:
>
> The cause for this thread is mainly a lexical analyzer bug for hex 
> notation. Except for the error in #9617, I'm fine with the current behavior 
> and syntax even with the semi e-ambiguity: if you want the scientific
> notation literal, use no spaces.  This is only ambiguous because Julia
> permits a number literal N to precede an identifier I as a shortcut for
> N*I, which is different from many languages and part of Julia's charm.  I'd
> be sorry to see it go.
>
> [0-9]+(.[0-9]+)?e(+|-)?[0-9]+    <- scientific notation literal
>
> 2e+1     is 2x10^1
> 2e + 1   is 2*e + 1
> 2e+ 1    is a syntax error because, to the lexical analyzer, 2e+ is an
> error without at least 1 trailing digit (no spaces)
>
> typing 2e+1 (without the space) and expecting it to mean 2*e + 1 is way
> overemphasizing the need to not type a space.  All of the other language
> style guides are consistent about this being bad style.
>
> Finally consider this
>
> julia> 2e-1e
> 0.5436563656918091
>
> This is parsed as (2*10^-1)e  = .2e which I assert is the right thing to 
> do.
>


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Stefan Karpinski
I would also be sad to see hex literals go, and I've found the binary
literals quite nice. Octal literals can go jump in a lake (the only real
use case is for specifying permissions).

On Mon, Jan 5, 2015 at 5:34 PM, Simon Byrne  wrote:

> On Monday, 5 January 2015 20:13:51 UTC, Jeff Bezanson wrote:
>>
>> Hex float literals are a different story. It's an understatement to
>> say they are VERY rarely used, and that most programmers don't need
>> them and have never heard of them. They also don't currently work
>> under MSVC (issue #6349). We have not had them very long. I say we
>> remove them. They can be replaced with a custom string literal.
>>
>
> While I can see that they do make things complicated, I for one would be
> sad to see them go: they still are the easiest way to ensure you have the
> exact constant you want, and don't get bitten by some obscure rounding
> error. I don't really lisp enough to understand the parser, but would it be
> possible to make the parsing of "p" conditional on the prefix?
>


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Simon Byrne
On Monday, 5 January 2015 20:13:51 UTC, Jeff Bezanson wrote:
>
> Hex float literals are a different story. It's an understatement to 
> say they are VERY rarely used, and that most programmers don't need 
> them and have never heard of them. They also don't currently work 
> under MSVC (issue #6349). We have not had them very long. I say we 
> remove them. They can be replaced with a custom string literal. 
>

While I can see that they do make things complicated, I for one would be 
sad to see them go: they still are the easiest way to ensure you have the 
exact constant you want, and don't get bitten by some obscure rounding 
error. I don't really lisp enough to understand the parser, but would it be 
possible to make the parsing of "p" conditional on the prefix?
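
For reference, hexadecimal float literals use a binary exponent marked by `p`,
which is exactly why the parsing of `p` matters here; a few examples of the
standard syntax:

```julia
0x1p0    # 1.0   (1 * 2^0)
0x1p-1   # 0.5
0x1.8p3  # 12.0  (1.5 * 2^3)
0x1p-52  # eps(Float64), exactly
```

Because the exponent scales by a power of two, every such literal denotes an
exact Float64 value, with no decimal-to-binary rounding involved.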


Re: [julia-users] Re: Package name for embedding R within Julia

2015-01-05 Thread Douglas Bates
The (unregistered) [RCall](https://github.com/JuliaStats/RCall.jl) package 
is an initial cut at the interface.  I am not happy with all the names that 
I chose and welcome suggestions of improvements.  For some reason I was 
unable to create an R module within the RCall module, as Stefan suggested. 
 Again, I welcome suggestions of how to accomplish this.  My particular 
difficulty is that if I create an R module within the RCall module I don't 
see the names from RCall.


On Saturday, January 3, 2015 12:56:48 PM UTC-6, lgautier wrote:
>
> I agree.
> RCall does provide consistency, although at the possible slight cost of 
> boring conformity, and seems a better choice than RStats.
>
> On Saturday, January 3, 2015 8:31:42 AM UTC-5, Viral Shah wrote:
>>
>> I prefer Rcall.jl, for consistency with ccall, PyCall, JavaCall, etc. 
>> Also, once in JuliaStats, it will probably also be well advertised and 
>> documented - so finding it should not be a challenge, IMO.
>>
>> -viral
>>
>> On Saturday, January 3, 2015 10:16:51 AM UTC+5:30, Ismael VC wrote:
>>>
>>> +1 for RStats.jl, I also think it's more search-friendly but not only 
>>> for people coming from R.
>>>
>>> On Fri, Jan 2, 2015 at 9:50 PM, Gray Calhoun  
>>> wrote:
>>>
 That sounds great! Rcall.jl or RCall.jl are fine names; RStats.jl may 
 be slightly more search-friendly for people coming from R, since they may 
 not know about PyCall.


 On Friday, January 2, 2015 1:59:04 PM UTC-6, Douglas Bates wrote:
>
> For many statistics-oriented Julia users there is a great advantage in 
> being able to piggy-back on R development and to use at least the data 
> sets 
> from R packages.  Hence the RDatasets package and the read_rda function 
> in 
> the DataFrames package for reading saved R data.
>
> Over the last couple of days I have been experimenting with running an 
> embedded R within Julia and calling R functions from Julia. This is 
> similar 
> in scope to the Rif package except that this code is written in Julia and 
> not as a set of wrapper functions written in C. The R API is a C API and, 
> in some ways, very simple. Everything in R is represented as a "symbolic 
> expression" or SEXPREC and passed around as pointers to such expressions 
> (called an SEXP type).  Most functions take one or more SEXP values as 
> arguments and return an SEXP.
>
> I have avoided reading the code for Rif for two reasons:
>  1. It is GPL3 licensed
>  2. I already know a fair bit of the R API and where to find API 
> function signatures.
>
> Here's a simple example
> julia> initR()
> 1
>
> julia> globalEnv = unsafe_load(cglobal((:R_GlobalEnv,libR),SEXP),1)
> Ptr{Void} @0x08c1c388
>
> julia> formaldehyde = tryEval(install(:Formaldehyde))
> Ptr{Void} @0x08fd1d18
>
> julia> inherits(formaldehyde,"data.frame")
> true
>
> julia> printValue(formaldehyde)
>   carb optden
> 1  0.1  0.086
> 2  0.3  0.269
> 3  0.5  0.446
> 4  0.6  0.538
> 5  0.7  0.626
> 6  0.9  0.782
>
> julia> length(formaldehyde)
> 2
>
> julia> names(formaldehyde)
> 2-element Array{ASCIIString,1}:
>  "carb"  
>  "optden"
>
> julia> form1 = ccall((:VECTOR_ELT,libR),SEXP,
> (SEXP,Cint),formaldehyde,0)
> Ptr{Void} @0x0a5baf58
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form1)
> 14
>
> julia> carb = copy(pointer_to_array(ccall((:
> REAL,libR),Ptr{Cdouble},(SEXP,),form1),length(form1)))
> 6-element Array{Float64,1}:
>  0.1
>  0.3
>  0.5
>  0.6
>  0.7
>  0.9
>
> julia> form2 = ccall((:VECTOR_ELT,libR),SEXP,
> (SEXP,Cint),formaldehyde,1)
> Ptr{Void} @0x0a5baef0
>
> julia> ccall((:TYPEOF,libR),Cint,(SEXP,),form2)
> 14
>
> julia> optden = copy(pointer_to_array(ccall((:
> REAL,libR),Ptr{Cdouble},(SEXP,),form2),length(form2)))
> 6-element Array{Float64,1}:
>  0.086
>  0.269
>  0.446
>  0.538
>  0.626
>  0.782
>
>
> A call to printValue uses the R printing mechanism.
>
> Questions:
>  - What would be a good name for such a package?  In the spirit of 
> PyCall it could be RCall or Rcall perhaps.
>
>  - Right now I am defining several functions that emulate the names of 
> functions in R itself or in the R API.  What is a good balance?  
> Obviously 
> it would not be a good idea to bring in all the names in the R base 
> namespace.  On the other hand, those who know names like "inherits" and 
> what it means in R will find it convenient to have such names in such a 
> package.
>
> - Should I move the discussion to the julia-stats list?
>
>
>>>

Re: [julia-users] How to overwrite to an existing file, only range of data? HDF5 can do this ?

2015-01-05 Thread Tim Holy
On Monday, January 05, 2015 09:52:28 PM Paul Analyst wrote:
> dset = d_create( "F", datatype(Float64), dataspace(10,10))

You're missing the `fid`. You have to tell it where (in which file or group)
you want to create that dataset.

--Tim
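
A sketch of the corrected call, following the HDF5.jl API of the time (the
file name and dataset name are illustrative):

```julia
using HDF5

fid = h5open("data.h5", "w")   # the parent: a file (or a group inside it)
dset = d_create(fid, "F", datatype(Float64), dataspace(10, 10))
dset[2:4, 2:4] = rand(3, 3)    # overwrite only a range of the dataset
close(fid)
```

Creating the dataset explicitly like this is what makes range writes possible:
you can update a sub-block in place without reading or rewriting the whole
dataset, which was the original question.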



Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Bezanson
Already there: #6770

On Mon, Jan 5, 2015 at 4:27 PM, Tim Holy  wrote:
> On Monday, January 05, 2015 08:15:03 AM Hans W Borchers wrote:
>> By the way, has the bug x = 10; x.1 returning 1.0 been handled in 0.4? It's
>> still there in 0.3.
>
> Nope. If you haven't filed an issue already, please do.
>
> --Tim
>
>>
>> On Monday, January 5, 2015 2:32:00 PM UTC+1, Simon Byrne wrote:
>> > *  julia> 3e+1*
>> >
>> >> *  30.0*
>> >>
>> >>   *julia> 3e + 1*
>> >>
>> >> *  9.154845485377136*
>> >
>> > Perhaps this is a good reason to change behaviour such that e is no longer
>> > a constant: it has always seemed a bit odd to use a valuable Latin singleton
>> > in this way. We could use a unicode script e (U+212F) instead, as
>> > suggested
>> > by wikipedia:
>> >
>> > http://en.wikipedia.org/wiki/Numerals_in_Unicode#Characters_for_mathematic
>> > al_constants
>> >
>> > s
>


Re: [julia-users] Re: Suggestion for "tuple types" explanation in manual

2015-01-05 Thread ivo welch
On Mon, Jan 5, 2015 at 1:26 PM, Tim Holy  wrote:
> In the absence of being able to actually construct the change, I like the idea
> of tagging things that need fixing. You can do that just by editing the
> document to say "This needs to be clarified in the following way:". It's not a
> real pull-request and wouldn't be merged, but it's an effective reminder.


perfect.  I will do this later in the week.


Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Tim Holy
On Monday, January 05, 2015 08:15:03 AM Hans W Borchers wrote:
> By the way, has the bug x = 10; x.1 returning 1.0 been handled in 0.4? It's
> still there in 0.3.

Nope. If you haven't filed an issue already, please do.

--Tim

> 
> On Monday, January 5, 2015 2:32:00 PM UTC+1, Simon Byrne wrote:
> > *  julia> 3e+1*
> > 
> >> *  30.0*
> >> 
> >>   *julia> 3e + 1*
> >> 
> >> *  9.154845485377136*
> > 
> > Perhaps this is a good reason to change behaviour such that e is no longer
> > a constant: it has always seemed a bit odd to use a valuable Latin singleton
> > in this way. We could use a unicode script e (U+212F) instead, as
> > suggested
> > by wikipedia:
> > 
> > http://en.wikipedia.org/wiki/Numerals_in_Unicode#Characters_for_mathematic
> > al_constants
> > 
> > s



Re: [julia-users] Re: Suggestion for "tuple types" explanation in manual

2015-01-05 Thread Tim Holy
First, keep in mind that nothing gets committed without review, unless you 
already have push privileges.

In the absence of being able to actually construct the change, I like the idea 
of tagging things that need fixing. You can do that just by editing the 
document to say "This needs to be clarified in the following way:". It's not a 
real pull-request and wouldn't be merged, but it's an effective reminder.

--Tim

On Monday, January 05, 2015 11:32:11 AM ivo welch wrote:
> documentation is a tricky thing.  I am pretty sure you do *not* want
> me to make doc changes.
> 
> I am very qualified to state exactly where I am getting confused and
> where it could be better.  alas, if I tried to write it, I would write
> incorrect explanation, which would probably be worse.  so, this needs
> one person who is learning it (the consumer) and one person who is
> teaching it (the producer).
> 
> what we really need is the ability for readers to attach questions and
> notes to specific spots, that someone with knowledge can address later
> on at his/her convenience.  the more specific the insertion point for
> comments is, the easier it will be for the writer of the documentation
> to fix this later.
> 
> fwiw, I have written a textbook in corporate finance.  I needed
> student reviewers that told me where they got confused.  the material
> was obvious to me.  I don't think there is any other way to make
> writing clear.
> 
> in the absence of ability to change github to allow specific insertion
> points, I wonder if we want to branch the docs and request that
> comments be left in a particular color (say, red).
> 
> regards,
> 
> /iaw
> 
> 
> Ivo Welch (ivo.we...@gmail.com)
> http://www.ivo-welch.info/
> J. Fred Weston Distinguished Professor of Finance
> Anderson School at UCLA, C519
> Director, UCLA Anderson Fink Center for Finance and Investments
> Free Finance Textbook, http://book.ivo-welch.info/
> Exec Editor, Critical Finance Review,
> http://www.critical-finance-review.org/ Editor and Publisher, FAMe,
> http://www.fame-jagazine.com/
> 
> On Mon, Jan 5, 2015 at 2:35 AM, Sean Marshallsay  wrote:
> > Hi Ivo
> > 
> > You're more than welcome to contribute to the documentation yourself to
> > help clarify anything you found confusing.
> > 
> > Regarding your second point, open() does not return a named type; it
> > returns a tuple containing some kind of stream and some kind of process:
> > Pipe is some kind of stream and Process is some kind of process. Hopefully
> > the following code snippet will help clear things up.
> > 
> > julia> x = open(`less`)
> > (Pipe(closed, 0 bytes waiting),Process(`less`, ProcessExited(0)))
> > 
> > julia> y = typeof(x)
> > (Pipe,Process)
> > 
> > julia> typeof(y)
> > (DataType,DataType)
> > 
> > help?> issubtype
> > INFO: Loading help data...
> > Base.issubtype(type1, type2)
> > 
> >True if and only if all values of "type1" are also of "type2".
> >Can also be written using the "<:" infix operator as "type1 <:
> >type2".
> > 
> > julia> issubtype((Base.Pipe, Base.Process), (Base.AsyncStream,
> > Base.Process))
> > true
> > 
> > help?> super
> > Base.super(T::DataType)
> > 
> >Return the supertype of DataType T
> > 
> > julia> super(Base.Pipe)
> > AsyncStream
> > 
> > julia> super(Base.Process)
> > Any
> > 
> > So what we can see is that open() does return a (stream, process) tuple
> > but
> > stream should actually be called AsyncStream and process should actually
> > be
> > called Process.
> > 
> > Hope this helps
> > Sean
> > 
> > On Monday, 5 January 2015 06:59:31 UTC, ivo welch wrote:
> >> I am reading again about the type system, esp in
> >> http://julia.readthedocs.org/en/latest/manual/types/ .  I am a good
> >> guinea
> >> pig for a manual, because I don't know too much.
> >> 
> >> a tuple is like function arguments without the functions.  so,
> >> 
> >> mytuple=(1,"ab",(3,4),"5")
> >> 
> >> is a tuple.  good.
> >> 
>> what can I do with a tuple?  the manual tells me right upfront that I can
> >> do a typeof(mytuple) function call to see its types.  good.
> >> 
> >> alas, then it goes into intricacies of how types "sort-of" inherit.  I
> >> need a few more basics first.
> >> 
> >> I would suggest adding to the docs right after the typeof function that,
> >> e.g., mytuple[2] shows the contents of the second parameter.  the julia
> >> cli
> >> prints the contents.  the examples would be a little clearer, perhaps, if
> >> one used a nested tuple, like (1,2,("foo",3),"bar").
> >> 
> >> before getting into type relations, I would also add how one creates a
> >> named tuple.  since open() does exactly this.  well, maybe I am wrong. 
> >> the
> >> docs say it returns a (stream,process), but typeof( open(`gzcat
> >> d.csv.gz`)
> >> tells me I have a (Pipe,Process).
> >> 
> >> I know how to extract the n-th component of the open() returned tuple
> >> (with the [] index operator), but I don't know how to get its name. 
> >> x.Pipe
> >> does not work for op

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Ehsan Eftekhari
Following your advice, I tried the code again; this time I also used the MUMPS 
solver from https://github.com/lruthotto/MUMPS.jl 
I used a 42x43x44 grid. These are the results:

MUMPS: elapsed time: 2.09091471 seconds
lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated, 
0.45% gc time)

and in Matlab:
Elapsed time is 5.423656 seconds.

Thanks a lot Tim and Viral for your quick and helpful comments.

Kind regards,
Ehsan


On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
>
> Thanks, that is great. I was wondering about the symmetry checker - we 
> have the naive one currently, but I can just use the CHOLMOD one now. 
>
> -viral 
>
>
>
> > On 06-Jan-2015, at 2:22 am, Tim Davis > 
> wrote: 
> > 
> > oops.  Yes, your factorize function is broken.  You might try mine 
> instead, in my 
> > factorize package. 
> > 
> > I have a symmetry-checker in CHOLMOD.  It checks if the matrix is 
> symmetric and 
> > with positive diagonals.  I think I have a MATLAB interface for it too. 
>  The code is efficient, 
> > since it doesn't form A transpose, and it quits early as soon as 
> asymmetry is detected. 
> > 
> > It does rely on the fact that MATLAB requires its sparse matrices to 
> have sorted row indices 
> > in each column, however. 
> > 
> > On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  > wrote: 
> > Tim - thanks for the reference. The paper will come in handy. This is a 
> longstanding issue, that we just haven’t got around to addressing yet, but 
> perhaps now is a good time. 
> > 
> > https://github.com/JuliaLang/julia/issues/3295 
> > 
> > We have a very simplistic factorize() for sparse matrices that must have 
> been implemented as a stopgap. This is what it currently does and that 
> explains everything. 
> > 
> > # placing factorize here for now. Maybe add a new file 
> > function factorize(A::SparseMatrixCSC) 
> > m, n = size(A) 
> > if m == n 
> > Ac = cholfact(A) 
> > Ac.c.minor == m && ishermitian(A) && return Ac 
> > end 
> > return lufact(A) 
> > end 
> > 
> > -viral 
> > 
> > 
> > 
> > > On 06-Jan-2015, at 1:57 am, Tim Davis > 
> wrote: 
> > > 
> > > That does sound like a glitch in the "\" algorithm, rather than in 
> UMFPACK.  The OpenBLAS is pretty good. 
> > > 
> > > This is very nice in Julia: 
> > > 
> > > F = lufact (d["M"]) ; F \ d 
> > > 
> > > That's a great idea to have a factorization object like that.  I have 
> a MATLAB toolbox that does 
> > > the same thing, but it's not a built-in function inside MATLAB.  It's 
> written in M, so it can be slow for 
> > > small matrices.   With it, however, I can do: 
> > > 
> > > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever. 
>  Uses my polyalgorithm for "\". 
> > > x = F\b ; 
> > > 
> > > I can do S = inverse(A); which returns a factorization, not an 
> inverse, but with a flag 
> > > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a 
> copy of A inside it, as well). 
> > > 
> > > You can also specify the factorization, such as 
> > > 
> > >  F=factorize(A, 'lu') 
> > > F=factorize(A,'svd') ; etc. 
> > > 
> > > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested. 
>  I've suggested the same 
> > > feature to The MathWorks. 
> > > 
> > > My factorize function includes a backslash polyalgorithm, if you're 
> interested in taking a look. 
> > > 
> > > Algorithm 930: FACTORIZE: an object-oriented linear system solver for 
> MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39, 
> Issue 4, pp. 28:1 - 28:18, 2013. 
> > > 
> http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
>  
> > > 
> > > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  > wrote: 
> > > The BLAS will certainly make a difference, but OpenBLAS is reasonably 
> good. 
> > > 
> > > I also wonder what is happening in our \ polyalgorithm. The profile 
> suggests the code is trying Cholesky decomposition, but it really shouldn't 
> since the matrix is not symmetric. If I just do the lufact(), which 
> essentially calls Umfpack, I can match Matlab timing: 
> > > 
> > > @time F = lufact(d["M"]); F \ d["RHS"]; 
> > > 
> > > -viral 
> > > 
> > > 
> > > On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote: 
> > > The difference could be the BLAS.  MATLAB comes with its own BLAS 
> library, and the performance 
> > > of the BLAS has a huge impact on the performance of UMFPACK, 
> particularly for 3D discretizations. 
> > > 
> > > On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari  > wrote: 
> > > I'm solving diffusion equation in Matlab on a 3D uniform grid 
> (31x32x33) and Julia. I use the "\" to solve the linear system of 
> equations. Here is the performance of the linear solver in Julia: 
> > > elapsed time: 2.743971424 seconds (35236720 bytes allocated) 
> > > 
> > > and Matlab (I used spparms('spumoni',1) to see what "\" does

Re: [julia-users] Julia vs C++-11 for random walks

2015-01-05 Thread lapeyre . math122a
Good. This is fine with me. Thanks.
--John


On Monday, January 5, 2015 9:52:58 PM UTC+1, Viral Shah wrote:
>
> I believe the labels can only be attached by those who have read/write 
> access. 
>
> -viral 
>
>
>
> > On 06-Jan-2015, at 2:20 am, lapeyre@gmail.com  wrote: 
> > 
> > Ok. It's done.  Just to be sure I understood what I read on a github 
> forum; there is no way for me to attach a label to the PR. So the labels 
> are always added by someone else ? 
> > 
> > --John 
> > 
> > On Monday, January 5, 2015 9:25:18 PM UTC+1, Viral Shah wrote: 
> > Thanks. Do create a PR. 
> > 
> > -viral 
> > 
> > 
> > 
> > > On 06-Jan-2015, at 1:53 am, lapeyre@gmail.com wrote: 
> > > 
> > > I meant randbool() in v0.3, where it was a more direct call, not 
> randbool() in v0.4. 
> > > Anyway, I just found the problem and patched it. Adding one '@inline' 
> now makes rand(Bool) in v0.4 
> > > about as fast as randbool() in v0.3. 
> > > 
> > > Should I open an issue (bug report), or just make a PR ? 
> > > 
> > > On Monday, January 5, 2015 8:59:02 PM UTC+1, Viral Shah wrote: 
> > > I doubt that rand(Bool) is any slower, since randbool() calls 
> rand(Bool). It is worth filing this as a performance regression. 
> > > 
> > > -viral 
> > > 
> > > On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com 
> wrote: 
> > >  It may be in part the implementation of the RNG. I think it is also 
> in part whether the abstraction is optimized away. 
> > > Notice that Julia v0.3 is faster than v0.4. This is probably 
> randbool() vs. rand(Bool). 
> > > 
> > > On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote: 
> > > Very neat. Just in case this gets posted to the interwebz, it is worth 
> pointing out that the performance advantage for Julia can probably be 
> explained by differences in the underlying RNG. We use dsFMT, which is 
> known to be one of (if not the?) fastest MT libraries around. I could not 
> find any published comparisons in a quick google, but based on this test 
> harness [1], dsFMT may be significantly faster than std::mt19937: 
> > > 
> > > ``` 
> > > ihnorton@julia:~/tmp/cpp-random-test$ ./random-real 
> > > C++11 : 2.34846 
> > > Boost : 0.371674 
> > > dSFMT : 0.281255 
> > > GSL   : 0.649981 
> > > ``` 
> > > 
> > > [1] https://github.com/yomichi/cpp-random-test 
> > > 
> > > 
> > > On Mon, Jan 5, 2015 at 10:12 AM,  wrote: 
> > > Oh, and, (I forgot to mention!)  the Julia code runs much faster. 
> > > 
> > > 
> > > On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com 
> wrote: 
> > > Hi, here is a comparison of Julia and C++ for simulating a random 
> walk. 
> > > 
> > > It is the first Julia program I wrote. I just pushed it to github. 
> > > 
> > > --John 
> > > 
> > > 
> > 
>
>

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Viral Shah
Thanks, that is great. I was wondering about the symmetry checker - we have the 
naive one currently, but I can just use the CHOLMOD one now. 

-viral



> On 06-Jan-2015, at 2:22 am, Tim Davis  wrote:
> 
> oops.  Yes, your factorize function is broken.  You might try mine instead, 
> in my
> factorize package.
> 
> I have a symmetry-checker in CHOLMOD.  It checks if the matrix is symmetric 
> and
> with positive diagonals.  I think I have a MATLAB interface for it too.  The 
> code is efficient,
> since it doesn't form A transpose, and it quits early as soon as asymmetry is 
> detected.
> 
> It does rely on the fact that MATLAB requires its sparse matrices to have 
> sorted row indices
> in each column, however.
> 
> On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote:
> Tim - thanks for the reference. The paper will come in handy. This is a 
> longstanding issue, that we just haven’t got around to addressing yet, but 
> perhaps now is a good time.
> 
> https://github.com/JuliaLang/julia/issues/3295
> 
> We have a very simplistic factorize() for sparse matrices that must have been 
> implemented as a stopgap. This is what it currently does and that explains 
> everything.
> 
> # placing factorize here for now. Maybe add a new file
> function factorize(A::SparseMatrixCSC)
> m, n = size(A)
> if m == n
> Ac = cholfact(A)
> Ac.c.minor == m && ishermitian(A) && return Ac
> end
> return lufact(A)
> end
> 
> -viral
> 
> 
> 
> > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
> >
> > That does sound like a glitch in the "\" algorithm, rather than in UMFPACK. 
> >  The OpenBLAS is pretty good.
> >
> > This is very nice in Julia:
> >
> > F = lufact (d["M"]) ; F \ d
> >
> > That's a great idea to have a factorization object like that.  I have a 
> > MATLAB toolbox that does
> > the same thing, but it's not a built-in function inside MATLAB.  It's 
> > written in M, so it can be slow for
> > small matrices.   With it, however, I can do:
> >
> > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.  Uses 
> > my polyalgorithm for "\".
> > x = F\b ;
> >
> > I can do S = inverse(A); which returns a factorization, not an inverse, but 
> > with a flag
> > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a copy 
> > of A inside it, as well).
> >
> > You can also specify the factorization, such as
> >
> >  F=factorize(A, 'lu')
> > F=factorize(A,'svd') ; etc.
> >
> > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've 
> > suggested the same
> > feature to The MathWorks.
> >
> > My factorize function includes a backslash polyalgorithm, if you're 
> > interested in taking a look.
> >
> > Algorithm 930: FACTORIZE: an object-oriented linear system solver for 
> > MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39, 
> > Issue 4, pp. 28:1 - 28:18, 2013.
> > http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
> >
> > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:
> > The BLAS will certainly make a difference, but OpenBLAS is reasonably good.
> >
> > I also wonder what is happening in our \ polyalgorithm. The profile 
> > suggests the code is trying Cholesky decomposition, but it really shouldn't 
> > since the matrix is not symmetric. If I just do the lufact(), which 
> > essentially calls Umfpack, I can match Matlab timing:
> >
> > @time F = lufact(d["M"]); F \ d["RHS"];
> >
> > -viral
> >
> >
> > On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
> > The difference could be the BLAS.  MATLAB comes with its own BLAS library, 
> > and the performance
> > of the BLAS has a huge impact on the performance of UMFPACK, particularly 
> > for 3D discretizations.
> >
> > On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari  
> > wrote:
> > I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33) 
> > and Julia. I use the "\" to solve the linear system of equations. Here is 
> > the performance of the linear solver in Julia:
> > elapsed time: 2.743971424 seconds (35236720 bytes allocated)
> >
> > and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
> > sp\: bandwidth = 1056+1+1056.
> > sp\: is A diagonal? no.
> > sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
> > sp\: is A triangular? no.
> > sp\: is A morally triangular? no.
> > sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? no.
> > sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
> > sp\: UMFPACK's factorization was successful.
> > sp\: UMFPACK's solve was successful.
> > Elapsed time is 0.819120 seconds.
> >
> > I have uploaded the sparse matrix (M) and the right-hand side (RHS) vectors 
> > in a mat file here:
> > https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
> >
> > I read in the documents that Julia uses Umfpack for sparse matrices. My 
> > question is why umfpack is faster wh

Re: [julia-users] Julia vs C++-11 for random walks

2015-01-05 Thread Viral Shah
I believe the labels can only be attached by those who have read/write access.

-viral



> On 06-Jan-2015, at 2:20 am, lapeyre.math1...@gmail.com wrote:
> 
> Ok. It's done.  Just to be sure I understood what I read on a github forum; 
> there is no way for me to attach a label to the PR. So the labels are always 
> added by someone else ?
> 
> --John
> 
> On Monday, January 5, 2015 9:25:18 PM UTC+1, Viral Shah wrote:
> Thanks. Do create a PR. 
> 
> -viral 
> 
> 
> 
> > On 06-Jan-2015, at 1:53 am, lapeyre@gmail.com wrote: 
> > 
> > I meant randbool() in v0.3, where it was a more direct call, not randbool() 
> > in v0.4. 
> > Anyway, I just found the problem and patched it. Adding one '@inline' now 
> > makes rand(Bool) in v0.4 
> > about as fast as randbool() in v0.3. 
> > 
> > Should I open an issue (bug report), or just make a PR ? 
> > 
> > On Monday, January 5, 2015 8:59:02 PM UTC+1, Viral Shah wrote: 
> > I doubt that rand(Bool) is any slower, since randbool() calls rand(Bool). 
> > It is worth filing this as a performance regression. 
> > 
> > -viral 
> > 
> > On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com 
> > wrote: 
> >  It may be in part the implementation of the RNG. I think it is also in 
> > part whether the abstraction is optimized away. 
> > Notice that Julia v0.3 is faster than v0.4. This is probably randbool() vs. 
> > rand(Bool). 
> > 
> > On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote: 
> > Very neat. Just in case this gets posted to the interwebz, it is worth 
> > pointing out that the performance advantage for Julia can probably be 
> > explained by differences in the underlying RNG. We use dsFMT, which is 
> > known to be one of (if not the?) fastest MT libraries around. I could not 
> > find any published comparisons in a quick google, but based on this test 
> > harness [1], dsFMT may be significantly faster than std::mt19937: 
> > 
> > ``` 
> > ihnorton@julia:~/tmp/cpp-random-test$ ./random-real 
> > C++11 : 2.34846 
> > Boost : 0.371674 
> > dSFMT : 0.281255 
> > GSL   : 0.649981 
> > ``` 
> > 
> > [1] https://github.com/yomichi/cpp-random-test 
> > 
> > 
> > On Mon, Jan 5, 2015 at 10:12 AM,  wrote: 
> > Oh, and, (I forgot to mention!)  the Julia code runs much faster. 
> > 
> > 
> > On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com wrote: 
> > Hi, here is a comparison of Julia and C++ for simulating a random walk. 
> > 
> > It is the first Julia program I wrote. I just pushed it to github. 
> > 
> > --John 
> > 
> > 
> 



Re: [julia-users] How to overwrite to an existing file, only range of data? HDF5 can do this ?

2015-01-05 Thread Paul Analyst
Tim, thx for the hints, but it does not work without this line:
#fid["mygroup/A"]=rand(2)
because I don't have "g", so there is nothing to pass to this line /dset = 
d_create(g, "F", datatype(Float64), dataspace(10,10))/

/dset = d_create( "F", datatype(Float64), dataspace(10,10))/
does not work ...

Paul




using HDF5
hfi=h5open("test.h5","w");close(hfi)
fid = h5open("test.h5","r+")
#fid["mygroup/A"]=rand(2)
g = fid["mygroup"]
dset = d_create(g, "F", datatype(Float64), dataspace(10,10))
dset[:,1] = rand(10)
h5read("test.h5","mygroup/F",(:,1))
close(fid)
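For reference, a minimal sketch of the fix: create the group explicitly with g_create instead of relying on the fid["mygroup/A"] assignment to create it as a side effect. This assumes the HDF5.jl API of that era (g_create was renamed create_group in later versions):

```julia
using HDF5

fid = h5open("test.h5", "w")
g = g_create(fid, "mygroup")   # create the group explicitly; no dummy dataset needed
dset = d_create(g, "F", datatype(Float64), dataspace(10, 10))
dset[:, 1] = rand(10)
close(fid)

h5read("test.h5", "mygroup/F", (:, 1))
```

d_create needs a parent object (a file handle or a group) as its first argument, which is why the bare d_create("F", ...) call fails.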


On 2015-01-05 at 15:39, Tim Holy wrote:

On Monday, January 05, 2015 03:17:12 PM Paul Analyst wrote:

Thx, Tim,
I have solution but is 2 questions:
1. Why must this line be there: /fid["mygroup/A"]=rand(2)/ ?

That line just means 'create a variable called "A" inside a group called
"mygroup", and assign it a value of rand(2)'. If you don't need that variable,
you don't need that line. You also don't have to create a group called
"mygroup", if you prefer you can store everything in the top level of the file.


2. If the sum of k and l > 9, Julia can't work. Is the size too big for hdf5 or for
the system (Win7 64 Home Premium)?

Not sure. It works for me (Kubuntu Linux 14.04).

--Tim




using HDF5
hfi=h5open("bigfile.h5","w")
close(hfi)

k,l=6,3;
fid = h5open("bigfile.h5","r+")
fid["mygroup/A"]=rand(2)
g = fid["mygroup"]
dset = d_create(g, "F", datatype(Float64), dataspace(10^k,10^l))
dset[:,1] = rand(10^k)
h5read("bigfile.h5","mygroup/F",(:,1))
close(fid)
h5read("bigfile.h5","mygroup/F",(:,1:2))

Is OK
but if
k,l=6,4;

julia> close(fid)
HDF5-DIAG: Error detected in HDF5 (1.8.13) thread 0:
#000: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 2070 in
H5Fclose(): decrementing file ID failed
  major: Object atom
  minor: Unable to close file
#001: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5I.c line 1464 in
H5I_dec_app_ref(): can't decrement ID ref c
ount
  major: Object atom
  minor: Unable to decrement reference count
#002: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 1847 in
H5F_close(): can't close file
  major: File accessibilty
  minor: Unable to close file
#003: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 2009 in
H5F_try_close(): problems closing file
  major: File accessibilty
  minor: Unable to close file
#004: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 1161 in
H5F_dest(): low level truncate failed
  major: File accessibilty
  minor: Write failed
#005: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5FD.c line 1895 in
H5FD_truncate(): driver truncate request f
ailed
  major: Virtual File Layer
  minor: Can't update object
#006: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5FDsec2.c line 900
in H5FD_sec2_truncate(): unable to extend
file properly
  major: Low-level I/O
  minor: Seek failed
ERROR: Error closing file
   in h5f_close at C:\Users\SAMSUNG2\.julia\v0.3\HDF5\src\plain.jl:1924


julia> h5read("bigfile.h5","mygroup/F",(:,1:2))
HDF5-DIAG: Error detected in HDF5 (1.8.13) thread 0:
#000: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 1594 in
H5Fopen(): unable to open file
  major: File accessibilty
  minor: Unable to open file
#001: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5F.c line 1385 in
H5F_open(): unable to read superblock
  major: File accessibilty
  minor: Read failed
#002: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5Fsuper.c line 353
in H5F_super_read(): unable to load superb
lock
  major: Object cache
  minor: Unable to protect metadata
#003: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5AC.c line 1323 in
H5AC_protect(): H5C_protect() failed.
  major: Object cache
  minor: Unable to protect metadata
#004: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5C.c line 3574 in
H5C_protect(): can't load entry
  major: Object cache
  minor: Unable to load metadata into cache
#005: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5C.c line 7954 in
H5C_load_entry(): unable to load entry
  major: Object cache
  minor: Unable to load metadata into cache
#006: /home/abuild/rpmbuild/BUILD/hdf5-1.8.13/src/H5Fsuper_cache.c
line 471 in H5F_sblock_load(): truncated file

: eof = 8002864, sblock->base_addr = 0, stored_eoa = 8002864

  major: File accessibilty
  minor: File has been truncated
ERROR: Error opening file bigfile.h5
   in h5f_open at C:\Users\SAMSUNG2\.julia\v0.3\HDF5\src\plain.jl:2023
   in h5open at C:\Users\SAMSUNG2\.julia\v0.3\HDF5\src\plain.jl:554



Paul

On 2015-01-04 at 18:37, Tim Holy wrote:

Do note there are two additional pages of documentation in the doc/
folder.

--Tim

On Sunday, January 04, 2015 06:59:53 AM paul analyst wrote:

Of course, first I read it :)
It is about reading an array range. I need to save a range, in analogy
to:

A = reshape(1:120, 15, 8)
h5write("/tmp/test2.h5", "mygroup2/A", A)
data = h5read("/tmp/test2.h5", "mygr

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
oops.  Yes, your factorize function is broken.  You might try mine instead,
in my
factorize package.

I have a symmetry-checker in CHOLMOD.  It checks if the matrix is symmetric
and
with positive diagonals.  I think I have a MATLAB interface for it too.
The code is efficient,
since it doesn't form A transpose, and it quits early as soon as asymmetry
is detected.

It does rely on the fact that MATLAB requires its sparse matrices to have
sorted row indices
in each column, however.
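A hedged sketch of such an early-exit check on a CSC matrix (not Tim's code; simplified in that it looks up the transposed entry with an indexed read rather than a merge over the sorted row indices, and a missing diagonal entry is not caught):

```julia
using SparseArrays  # a stdlib in current Julia; part of Base in the 0.3 era

# Return true iff A is square, symmetric, and has positive diagonal entries,
# quitting at the first violation found.
function is_sym_posdiag(A::SparseMatrixCSC)
    size(A, 1) == size(A, 2) || return false
    for j in 1:size(A, 2)
        for p in A.colptr[j]:(A.colptr[j+1] - 1)
            i = A.rowval[p]
            i == j && A.nzval[p] <= 0 && return false  # diagonal must be positive
            A.nzval[p] == A[j, i] || return false      # quit early on asymmetry
        end
    end
    return true
end
```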

On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah  wrote:

> Tim - thanks for the reference. The paper will come in handy. This is a
> longstanding issue, that we just haven’t got around to addressing yet, but
> perhaps now is a good time.
>
> https://github.com/JuliaLang/julia/issues/3295
>
> We have a very simplistic factorize() for sparse matrices that must have
> been implemented as a stopgap. This is what it currently does and that
> explains everything.
>
> # placing factorize here for now. Maybe add a new file
> function factorize(A::SparseMatrixCSC)
>     m, n = size(A)
>     if m == n
>         Ac = cholfact(A)
>         Ac.c.minor == m && ishermitian(A) && return Ac
>     end
>     return lufact(A)
> end
>
> -viral
>
>
>
> > On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
> >
> > That does sound like a glitch in the "\" algorithm, rather than in
> UMFPACK.  The OpenBLAS is pretty good.
> >
> > This is very nice in Julia:
> >
> > F = lufact (d["M"]) ; F \ d
> >
> > That's a great idea to have a factorization object like that.  I have a
> MATLAB toolbox that does
> > the same thing, but it's not a built-in function inside MATLAB.  It's
> written in M, so it can be slow for
> > small matrices.   With it, however, I can do:
> >
> > F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.
> Uses my polyalgorithm for "\".
> > x = F\b ;
> >
> > I can do S = inverse(A); which returns a factorization, not an inverse,
> but with a flag
> > set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
> copy of A inside it, as well).
> >
> > You can also specify the factorization, such as
> >
> >  F=factorize(A, 'lu')
> > F=factorize(A,'svd') ; etc.
> >
> > It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've
> suggested the same
> > feature to The MathWorks.
> >
> > My factorize function includes a backslash polyalgorithm, if you're
> interested in taking a look.
> >
> > Algorithm 930: FACTORIZE: an object-oriented linear system solver for
> MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
> Issue 4, pp. 28:1 - 28:18, 2013.
> >
> http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
> >
> > On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:
> > The BLAS will certainly make a difference, but OpenBLAS is reasonably
> good.
> >
> > I also wonder what is happening in our \ polyalgorithm. The profile
> suggests the code is trying Cholesky decomposition, but it really shouldn't
> since the matrix is not symmetric. If I just do the lufact(), which
> essentially calls Umfpack, I can match Matlab timing:
> >
> > @time F = lufact(d["M"]); F \ d["RHS"];
> >
> > -viral
> >
> >
> > On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
> > The difference could be the BLAS.  MATLAB comes with its own BLAS
> library, and the performance
> > of the BLAS has a huge impact on the performance of UMFPACK,
> particularly for 3D discretizations.
> >
> > On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari 
> wrote:
> > I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33)
> and Julia. I use the "\" to solve the linear system of equations. Here is
> the performance of the linear solver in Julia:
> > elapsed time: 2.743971424 seconds (35236720 bytes allocated)
> >
> > and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
> > sp\: bandwidth = 1056+1+1056.
> > sp\: is A diagonal? no.
> > sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
> > sp\: is A triangular? no.
> > sp\: is A morally triangular? no.
> > sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)?
> no.
> > sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
> > sp\: UMFPACK's factorization was successful.
> > sp\: UMFPACK's solve was successful.
> > Elapsed time is 0.819120 seconds.
> >
> > I have uploaded the sparse matrix (M) and the right-hand side (RHS)
> vectors in a mat file here:
> > https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
> >
> > I read in the documents that Julia uses Umfpack for sparse matrices. My
> question is why umfpack is faster when it is called from matlab?
> >
> > The matlab and julia codes are here:
> > https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
> > https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
> >
> > and the FVM codes are here:
> > https://github.com/simulkade/FVTool
> > 

Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread lapeyre . math122a
Ok. It's done.  Just to be sure I understood what I read on a github forum; 
there is no way for me to attach a label to the PR. So the labels are 
always added by someone else ?

--John

On Monday, January 5, 2015 9:25:18 PM UTC+1, Viral Shah wrote:
>
> Thanks. Do create a PR. 
>
> -viral 
>
>
>
> > On 06-Jan-2015, at 1:53 am, lapeyre@gmail.com  wrote: 
> > 
> > I meant randbool() in v0.3, where it was a more direct call, not 
> randbool() in v0.4. 
> > Anyway, I just found the problem and patched it. Adding one '@inline' 
> now makes rand(Bool) in v0.4 
> > about as fast as randbool() in v0.3. 
> > 
> > Should I open an issue (bug report), or just make a PR ? 
> > 
> > On Monday, January 5, 2015 8:59:02 PM UTC+1, Viral Shah wrote: 
> > I doubt that rand(Bool) is any slower, since randbool() calls 
> rand(Bool). It is worth filing this as a performance regression. 
> > 
> > -viral 
> > 
> > On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com 
> wrote: 
> >  It may be in part the implementation of the RNG. I think it is also in 
> part whether the abstraction is optimized away. 
> > Notice that Julia v0.3 is faster than v0.4. This is probably randbool() 
> vs. rand(Bool). 
> > 
> > On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote: 
> > Very neat. Just in case this gets posted to the interwebz, it is worth 
> pointing out that the performance advantage for Julia can probably be 
> explained by differences in the underlying RNG. We use dsFMT, which is 
> known to be one of (if not the?) fastest MT libraries around. I could not 
> find any published comparisons in a quick google, but based on this test 
> harness [1], dsFMT may be significantly faster than std::mt19937: 
> > 
> > ``` 
> > ihnorton@julia:~/tmp/cpp-random-test$ ./random-real 
> > C++11 : 2.34846 
> > Boost : 0.371674 
> > dSFMT : 0.281255 
> > GSL   : 0.649981 
> > ``` 
> > 
> > [1] https://github.com/yomichi/cpp-random-test 
> > 
> > 
> > On Mon, Jan 5, 2015 at 10:12 AM,  wrote: 
> > Oh, and, (I forgot to mention!)  the Julia code runs much faster. 
> > 
> > 
> > On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com 
> wrote: 
> > Hi, here is a comparison of Julia and C++ for simulating a random walk. 
> > 
> > It is the first Julia program I wrote. I just pushed it to github. 
> > 
> > --John 
> > 
> > 
>
>

[julia-users] Re: Test-scoped requirements

2015-01-05 Thread Andrei Zh
Thanks, works like a charm. 

On Sunday, January 4, 2015 10:39:27 PM UTC+3, Michael Hatherly wrote:
>
> You can put a REQUIRE file in the test directory that should do what you 
> want.
>
> — Mike
> ​
>
>
> On Sunday, 4 January 2015 21:33:37 UTC+2, Andrei Zh wrote:
>>
>> I have a package A that has file-based integration with package B, i.e. A 
>> writes files that B can read, but neither A depends on B, nor vise versa. I 
>> want to write a test for this integration, but don't want to include B in 
>> REQUIRE, since A may be used completely without it. 
>>
>> I'm wondering if there's a way to specify requirement for test time only. 
>> If not, what are general recommendations in this case? 
>>
>
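In other words, the layout looks like this (hypothetical package names A and B):

```
A/
├── REQUIRE        # runtime dependencies only; B is not listed here
├── src/A.jl
└── test/
    ├── REQUIRE    # test-only dependencies: a single line containing `B`
    └── runtests.jl
```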

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Viral Shah
Tim - thanks for the reference. The paper will come in handy. This is a 
longstanding issue, that we just haven’t got around to addressing yet, but 
perhaps now is a good time.

https://github.com/JuliaLang/julia/issues/3295

We have a very simplistic factorize() for sparse matrices that must have been 
implemented as a stopgap. This is what it currently does and that explains 
everything.

# placing factorize here for now. Maybe add a new file
function factorize(A::SparseMatrixCSC)
    m, n = size(A)
    if m == n
        Ac = cholfact(A)
        Ac.c.minor == m && ishermitian(A) && return Ac
    end
    return lufact(A)
end

-viral
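The difference is easy to see by timing "\" against an explicit LU factorization on a nonsymmetric sparse system. A hedged sketch, written with current stdlib names (the 0.3-era spellings were speye and lufact):

```julia
using SparseArrays, LinearAlgebra  # stdlibs in current Julia; in the 0.3 era this was all in Base

A = sprand(2000, 2000, 0.005) + 2000 * sparse(1.0I, 2000, 2000)  # nonsymmetric, diagonally dominant
b = rand(2000)

@time x1 = A \ b      # routed through the simplistic factorize() shown above
@time F  = lu(A)      # factor once with UMFPACK (lufact in 0.3)
@time x2 = F \ b      # reuse the factorization for the solve
```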



> On 06-Jan-2015, at 1:57 am, Tim Davis  wrote:
> 
> That does sound like a glitch in the "\" algorithm, rather than in UMFPACK.  
> The OpenBLAS is pretty good.
> 
> This is very nice in Julia:
> 
> F = lufact (d["M"]) ; F \ d
> 
> That's a great idea to have a factorization object like that.  I have a 
> MATLAB toolbox that does
> the same thing, but it's not a built-in function inside MATLAB.  It's written 
> in M, so it can be slow for
> small matrices.   With it, however, I can do:
> 
> F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.  Uses my 
> polyalgorithm for "\".
> x = F\b ;
> 
> I can do S = inverse(A); which returns a factorization, not an inverse, but 
> with a flag
> set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a copy of 
> A inside it, as well).
> 
> You can also specify the factorization, such as
> 
>  F=factorize(A, 'lu')
> F=factorize(A,'svd') ; etc.
> 
> It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've 
> suggested the same
> feature to The MathWorks.
> 
> My factorize function includes a backslash polyalgorithm, if you're 
> interested in taking a look.
> 
> Algorithm 930: FACTORIZE: an object-oriented linear system solver for MATLAB 
> T. A. Davis, ACM Transactions on Mathematical Software, Vol 39, Issue 4, pp. 
> 28:1 - 28:18, 2013. 
> http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
> 
> On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:
> The BLAS will certainly make a difference, but OpenBLAS is reasonably good. 
> 
> I also wonder what is happening in our \ polyalgorithm. The profile suggests 
> the code is trying Cholesky decomposition, but it really shouldn't since the 
> matrix is not symmetric. If I just do the lufact(), which essentially calls 
> Umfpack, I can match Matlab timing:
> 
> @time F = lufact(d["M"]); F \ d["RHS"];
> 
> -viral
> 
> 
> On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
> The difference could be the BLAS.  MATLAB comes with its own BLAS library, 
> and the performance
> of the BLAS has a huge impact on the performance of UMFPACK, particularly for 
> 3D discretizations.
> 
> On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari  wrote:
> I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33) and 
> Julia. I use the "\" to solve the linear system of equations. Here is the 
> performance of the linear solver in Julia:
> elapsed time: 2.743971424 seconds (35236720 bytes allocated)
> 
> and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
> sp\: bandwidth = 1056+1+1056.
> sp\: is A diagonal? no.
> sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
> sp\: is A triangular? no.
> sp\: is A morally triangular? no.
> sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? no.
> sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
> sp\: UMFPACK's factorization was successful.
> sp\: UMFPACK's solve was successful.
> Elapsed time is 0.819120 seconds.
> 
> I have uploaded the sparse matrix (M) and the right-hand side (RHS) vectors 
> in a mat file here:
> https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
> 
> I read in the documents that Julia uses Umfpack for sparse matrices. My 
> question is why umfpack is faster when it is called from matlab?
> 
> The matlab and julia codes are here:
> https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
> https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
> 
> and the FVM codes are here:
> https://github.com/simulkade/FVTool
> https://github.com/simulkade/JFVM
> 
> Thanks a lot in advance,
> 
> Ehsan
> 
> 
> On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:
> I guess it is the last 20 years of sparse solver work packed into one 
> function. Not many fields can boast of providing this level of usability out 
> of their work. :-)
> 
> There are a class of people who believe that things like \ encourage blackbox 
> usage, with people doing stuff they do not understand, and there are others 
> who believe in standing on the shoulders of giants.
> 
> I find that we have taken a good approach in Julia, where we have \ and it 
> will have the perfect polyalgorithm at some point. But, you als

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Waller
The cause for this thread is mainly a lexical analyzer bug for hex 
notation. Except for the error in #9617, I'm fine with the current behavior 
and syntax, even with the semi e-ambiguity: if you want the scientific 
notation literal, use no spaces.  This is only ambiguous because Julia 
permits a number literal N to precede an identifier I as a shortcut for 
N*I, which is different from many languages and part of Julia's charm.  I'd 
be sorry to see it go.

[0-9]+(.[0-9]+)?e(+|-)?[0-9]+   <- scientific notation literal

2e+1     is 2x10^1
2e + 1   is 2*e + 1
2e+ 1    is a syntax error, because to the lexical analyzer 2e+ is an error without at least 1 trailing digit (no spaces)

typing 2e+1 (without the space) and expecting it to mean 2*e + 1 is way 
over emphasizing the need to not type a space.  All of the other language 
style guides are consistent about this being bad style.

Finally, consider this:

julia> 2e-1e
0.5436563656918091

This is parsed as (2*10^-1)*e = 0.2e, which I assert is the right thing to do.
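The rules above can be checked directly:

```julia
# juxtaposition vs. scientific-notation literals, as runnable checks
p = 1
@assert 2p + 1 == 3      # literal juxtaposed with an identifier: 2*p + 1
@assert 2e+1 == 20.0     # no spaces: the scientific-notation literal 2×10^1
@assert 2e4 == 20000.0   # likewise a literal, never 2 * e4
```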


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Bezanson
Yes, sometimes saving one character is a big deal. It also allows
editors to color them as numbers. But this is a minor point. For
something as marginal as hex float literals, custom string literals
are fine.

On Mon, Jan 5, 2015 at 3:21 PM, Jason Merrill  wrote:
> On Monday, January 5, 2015 12:13:51 PM UTC-8, Jeff Bezanson wrote:
>>
>> > We might be able to find a more scalable syntax for different types of
>> numbers. For example the syntax x@000 is available; `@ then digit` is
>> always a syntax error currently.
>
>
> Compared to a custom string literal (e.g. hex"1a3b7"), is the advantage that
> you don't have to use a closing quote?


Re: [julia-users] [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-05 Thread Mike Innes
The writemime methods and the display system are largely orthogonal – the
display system concerns itself with routing output to a suitable display
device (terminal, Blink window, whatever) while writemime simply provides
the implementation. In other words, I'm only really focused on the
`display` function, and that work looks completely compatible with my
proposed changes.

On 5 January 2015 at 19:41, Ivar Nesje  wrote:

> Have you seen https://github.com/JuliaLang/julia/pull/8987?
>
>


Re: [julia-users] Re: Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
That does sound like a glitch in the "\" algorithm, rather than in
UMFPACK.  The OpenBLAS is pretty good.

This is very nice in Julia:

F = lufact (d["M"]) ; F \ d

That's a great idea to have a factorization object like that.  I have a
MATLAB toolbox that does
the same thing, but it's not a built-in function inside MATLAB.  It's
written in M, so it can be slow for
small matrices.   With it, however, I can do:

F = factorize (A) ;% does an LU, Cholesky, QR, SVD, or whatever.  Uses
my polyalgorithm for "\".
x = F\b ;

I can do S = inverse(A); which returns a factorization, not an inverse, but
with a flag
set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a copy
of A inside it, as well).

You can also specify the factorization, such as

 F=factorize(A, 'lu')
F=factorize(A,'svd') ; etc.

It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've
suggested the same
feature to The MathWorks.

My factorize function includes a backslash polyalgorithm, if you're
interested in taking a look.

Algorithm 930: FACTORIZE: an object-oriented linear system solver for
MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
Issue 4, pp. 28:1 - 28:18, 2013.
http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf

On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah  wrote:

> The BLAS will certainly make a difference, but OpenBLAS is reasonably
> good.
>
> I also wonder what is happening in our \ polyalgorithm. The profile
> suggests the code is trying Cholesky decomposition, but it really shouldn't
> since the matrix is not symmetric. If I just do the lufact(), which
> essentially calls Umfpack, I can match Matlab timing:
>
> @time F = lufact(d["M"]); F \ d["RHS"];
>
> -viral
>
>
> On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
>>
>> The difference could be the BLAS.  MATLAB comes with its own BLAS
>> library, and the performance
>> of the BLAS has a huge impact on the performance of UMFPACK, particularly
>> for 3D discretizations.
>>
>> On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari 
>> wrote:
>>
>>> I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33)
>>> and Julia. I use the "\" to solve the linear system of equations. Here is
>>> the performance of the linear solver in Julia:
>>> elapsed time: 2.743971424 seconds (35236720 bytes allocated)
>>>
>>> and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
>>> sp\: bandwidth = 1056+1+1056.
>>> sp\: is A diagonal? no.
>>> sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
>>> sp\: is A triangular? no.
>>> sp\: is A morally triangular? no.
>>> sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)?
>>> no.
>>> sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
>>> sp\: UMFPACK's factorization was successful.
>>> sp\: UMFPACK's solve was successful.
>>> Elapsed time is 0.819120 seconds.
>>>
>>> I have uploaded the sparse matrix (M) and the right-hand side (RHS)
>>> vectors in a mat file here:
>>> https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
>>>
>>> I read in the documents that Julia uses Umfpack for sparse matrices. My
>>> question is why umfpack is faster when it is called from matlab?
>>>
>>> The matlab and julia codes are here:
>>> https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
>>> https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
>>>
>>> and the FVM codes are here:
>>> https://github.com/simulkade/FVTool
>>> https://github.com/simulkade/JFVM
>>>
>>> Thanks a lot in advance,
>>>
>>> Ehsan
>>>
>>>
>>> On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:

 I guess it is the last 20 years of sparse solver work packed into one
 function. Not many fields can boast of providing this level of usability
 out of their work. :-)

 There are a class of people who believe that things like \ encourage
 blackbox usage, with people doing stuff they do not understand, and there
 are others who believe in standing on the shoulders of giants.

 I find that we have taken a good approach in Julia, where we have \ and
 it will have the perfect polyalgorithm at some point. But, you also have
 the option of digging deeper with interfaces such as lufact(), cholfact(),
 qrfact(), and finally, even if that does not work out for you, call the
 LAPACK and SuiteSparse functions directly.

 -viral

 On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:
>
> Goodness. This is why there needs to be a polyalgorithm – no mortal
> user could know all of this stuff!
>
>
> On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah  wrote:
>
>> Doug,
>>
>> Ideally, the backslash needs to look for diagonal matrices,
>> triangular matrices and permutations thereof, banded matrices and the 
>> least

Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread Viral Shah
Thanks. Do create a PR.

-viral



> On 06-Jan-2015, at 1:53 am, lapeyre.math1...@gmail.com wrote:
> 
> I meant randbool() in v0.3, where it was a more direct call, not randbool() 
> in v0.4.
> Anyway, I just found the problem and patched it. Adding one '@inline' now 
> makes rand(Bool) in v0.4
> about as fast as randbool() in v0.3.
> 
> Should I open an issue (bug report), or just make a PR ? 
> 
> On Monday, January 5, 2015 8:59:02 PM UTC+1, Viral Shah wrote:
> I doubt that rand(Bool) is any slower, since randbool() calls rand(Bool). It 
> is worth filing this as a performance regression.
> 
> -viral
> 
> On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com wrote:
>  It may be in part the implementation of the RNG. I think it is also in part 
> whether the abstraction is optimized away.
> Notice that Julia v0.3 is faster than v0.4. This is probably randbool() vs. 
> rand(Bool).
> 
> On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote:
> Very neat. Just in case this gets posted to the interwebz, it is worth 
> pointing out that the performance advantage for Julia can probably be 
> explained by differences in the underlying RNG. We use dsFMT, which is known 
> to be one of (if not the?) fastest MT libraries around. I could not find any 
> published comparisons in a quick google, but based on this test harness [1], 
> dsFMT may be significantly faster than std::mt19937:
> 
> ```
> ihnorton@julia:~/tmp/cpp-random-test$ ./random-real
> C++11 : 2.34846
> Boost : 0.371674
> dSFMT : 0.281255
> GSL   : 0.649981
> ```
> 
> [1] https://github.com/yomichi/cpp-random-test
> 
> 
> On Mon, Jan 5, 2015 at 10:12 AM,  wrote:
> Oh, and, (I forgot to mention!)  the Julia code runs much faster.
> 
> 
> On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com wrote:
> Hi, here is a comparison of Julia and C++ for simulating a random walk.
> 
> It is the first Julia program I wrote. I just pushed it to github.
> 
> --John
> 
> 



Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread lapeyre . math122a
I meant randbool() in v0.3, where it was a more direct call, not randbool() 
in v0.4.
Anyway, I just found the problem and patched it. Adding one '@inline' now 
makes rand(Bool) in v0.4
about as fast as randbool() in v0.3.

Should I open an issue (bug report), or just make a PR ? 
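For anyone reproducing this, a hedged micro-benchmark that exercises the rand(Bool) path (timings are machine- and version-dependent):

```julia
# count random heads; the inner loop is dominated by rand(Bool)
function count_heads(n)
    c = 0
    for i in 1:n
        c += rand(Bool)
    end
    c
end

count_heads(10)          # warm up the JIT before timing
@time count_heads(10^8)
```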

On Monday, January 5, 2015 8:59:02 PM UTC+1, Viral Shah wrote:
>
> I doubt that rand(Bool) is any slower, since randbool() calls rand(Bool). 
> It is worth filing this as a performance regression.
>
> -viral
>
> On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com 
> wrote:
>>
>>  It may be in part the implementation of the RNG. I think it is also in 
>> part whether the abstraction is optimized away.
>> Notice that Julia v0.3 is faster than v0.4. This is probably randbool() 
>> vs. rand(Bool).
>>
>> On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote:
>>>
>>> Very neat. Just in case this gets posted to the interwebz, it is worth 
>>> pointing out that the performance advantage for Julia can probably be 
>>> explained by differences in the underlying RNG. We use dsFMT, which is 
>>> known to be one of (if not the?) fastest MT libraries around. I could not 
>>> find any published comparisons in a quick google, but based on this test 
>>> harness [1], dsFMT may be significantly faster than std::mt19937:
>>>
>>> ```
>>> ihnorton@julia:~/tmp/cpp-random-test$ ./random-real
>>> C++11 : 2.34846
>>> Boost : 0.371674
>>> dSFMT : 0.281255
>>> GSL   : 0.649981
>>> ```
>>>
>>> [1] https://github.com/yomichi/cpp-random-test
>>>
>>>
>>> On Mon, Jan 5, 2015 at 10:12 AM,  wrote:
>>>
 Oh, and, (I forgot to mention!)  the Julia code runs much faster.


 On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com 
 wrote:
>
> Hi, here is a comparison of Julia and C++ for simulating a random walk.
>
> It is the first Julia program I wrote. I just pushed it to github.
>
> --John
>
>
>>>

[julia-users] Julia framework similar to scikit-learn?

2015-01-05 Thread Tom Fawcett
Fellow humans,

I realize there are various machine learning algorithms implemented in 
Julia.  Is there anything like a machine learning framework, similar to 
scikit-learn, under development?  

Of course, Julia already has many of the capabilities of Numpy & Scipy so 
that's most of the way.  I'm imagining a package (or meta-package) to 
provide a common processing framework (comprising IO, pre-processing, core 
ML algs, evaluation, visualization, etc.) with a set of APIs.  It would 
provide a standard way to string together components so anyone can set up 
an ML processing stream or contribute a new module.

Is anything in the works?  I did a brief search and didn't find anything.  

Thanks,
-Tom



Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jason Merrill
On Monday, January 5, 2015 12:13:51 PM UTC-8, Jeff Bezanson wrote:

> > We might be able to find a more scalable syntax for different types of 
> numbers. For example the syntax x@000 is available; `@ then digit` is 
> always a syntax error currently.
>

Compared to a custom string literal (e.g. hex"1a3b7"), is the advantage 
that you don't have to use a closing quote?
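
A custom string literal along those lines can be sketched as follows. The macro name `@hex_str` is what enables the `hex"…"` syntax; this is an illustration, not an existing Base feature.

```julia
# Sketch of a hex"..." custom string literal: the body is parsed at macro
# expansion time as an unsigned hexadecimal integer. Illustrative only.
macro hex_str(s)
    parse(UInt64, "0x" * s)   # the "0x" prefix selects base 16
end

hex"1a3b7" == 0x1a3b7   # true; note the closing quote is still required
```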


Re: [julia-users] Re: How quickly subtract the two large arrays ?

2015-01-05 Thread Viral Shah
As of now, you can use SharedArray. Eventually, once we have a good 
threading model, we want to multi-thread the entire array library, but that 
is quite some ways away.
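
A sketch of the SharedArray suggestion, written with current-Julia names (`SharedArrays`, `@distributed` from `Distributed`, `Statistics.mean`); in the 0.3/0.4 Julia of this thread the equivalents were `SharedArray(Float64, dims)` and `@parallel`. Sizes are illustrative.

```julia
using Distributed, SharedArrays, Statistics

# A SharedArray lets every worker process read the matrix without copying it.
# addprocs(4)   # uncomment (or start julia with -p 4) to actually parallelize
S = SharedArray{Float64}(1000, 8)
S .= rand(1000, 8)

# The (vcat) reducer collects one column mean per iteration, in column order.
colmeans = @distributed (vcat) for j in 1:size(S, 2)
    mean(S[:, j])
end
```

With workers added, each one handles a contiguous block of columns; with a single process the loop simply runs serially.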

-viral

On Sunday, January 4, 2015 3:22:29 PM UTC+5:30, paul analyst wrote:
>
>  Suppose the array is located in memory. There are a lot of columns to 
> compute over, e.g. averages. I would like to process them in parallel, 
> because right now 7 of 8 processors are doing nothing.
> Paul
>
> On 2015-01-03 at 23:27, ele...@gmail.com wrote:
>  
>
>
> On Sunday, January 4, 2015 4:28:06 AM UTC+10, paul analyst wrote: 
>>
>> Thanks!
>> I do not :/ but I can make it in parts!
>>  
>
>  If the arrays won't fit in memory it probably doesn't matter what Julia 
> does, the IO or paging time will dominate.
>
>  Cheers
> Lex
>
>   
>
>>
>> How can I simply run this in parallel? I have 8 processors, but only 1 is working.
>> Paul
>>
>>
>>
>>
>> On Friday, August 15, 2014 at 11:53:54 UTC+2, Billou Bielour wrote: 
>>>
>>> This might be a bit faster:
>>>
>>> function sub!(A,B,C)
>>> for j=1:size(A,2)
>>> for i=1:size(A,1)
>>> @inbounds C[i,j] = A[i,j] - B[i,j]
>>> end
>>> end
>>> end
>>>
>>> C = zeros(size(A));
>>> sub!(A,B,C)
>>>
>>> Do you have enough RAM to store these matrices though ? 10^5 * 10^5 
>>> Float64 seems rather large.
>>>
>>>
> 
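
For reference, the in-place `sub!` quoted above can be exercised like this (small sizes so the result is easy to check); the point is that it writes into a preallocated `C` instead of allocating a temporary array.

```julia
# In-place elementwise subtraction: the inner loop over rows matches Julia's
# column-major storage, and @inbounds skips the per-access bounds checks.
function sub!(A, B, C)
    for j = 1:size(A, 2)
        for i = 1:size(A, 1)
            @inbounds C[i, j] = A[i, j] - B[i, j]
        end
    end
    C
end

A = [1.0 2.0; 3.0 4.0]
B = [0.5 0.5; 0.5 0.5]
C = zeros(size(A))
sub!(A, B, C)   # C now equals A - B, with no temporary allocated
```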

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Bezanson
My take on this is that 3e+1 has become a standard notation for
numbers in programming languages. I'm comfortable having that as an
exception to the concise multiplication syntax.

Hex float literals are a different story. It's an understatement to
say they are VERY rarely used, and that most programmers don't need
them and have never heard of them. They also don't currently work
under MSVC (issue #6349). We have not had them very long. I say we
remove them. They can be replaced with a custom string literal.

We might be able to find a more scalable syntax for different types of
numbers. For example the syntax x@000 is available; `@ then digit` is
always a syntax error currently.

On Mon, Jan 5, 2015 at 2:40 PM, Viral Shah  wrote:
> There is also an issue filed:
>
> https://github.com/JuliaLang/julia/issues/9617
>
> -viral
>
>
> On Tuesday, January 6, 2015 12:29:33 AM UTC+5:30, Peter Mancini wrote:
>>
>> No. I'm tongue in cheek pointing out the absurdity of the situation.
>>
>> On Monday, January 5, 2015 12:57:45 PM UTC-6, Hans W Borchers wrote:
>>>
>>> Does this mean you suggest disallowing variable names 'e', 'f', 'p' (and
>>> possibly
>>> others) in a programming environment for scientific computing? Hard to
>>> believe.
>>>
>>>
>>> On Monday, January 5, 2015 7:41:49 PM UTC+1, Peter Mancini wrote:

 That is a case of e being overloaded. It helps with the OP's issue
 though. For the scientific notation issue I would suggest choosing which is
 more useful, natural e or using e for a base ten exponent.

 On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:
>
> On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini 
> wrote:
>>
>> Usually a language handles this problem by making the constants such
>> as p and e as reserved. Thus you can't create a new variable with those
>> names and since they are constant you can't assign to them without 
>> raising
>> an error.
>
>
> That doesn't help here since `2e+1` would still mean something
> different than `2e + 1`.


[julia-users] Re: Julia takes 2nd place in "Delacorte Numbers" competition

2015-01-05 Thread Viral Shah
This is really cool. Looking forward to the paper!

-viral

On Sunday, January 4, 2015 11:05:46 PM UTC+5:30, Arch Robison wrote:
>
> FYI, I won 2nd place in the recent Al Zimmerman programming contest 
> "Delacorte Numbers", using only Julia and a quad-core MonkeyStation Pro. 
> Julia worked out well because it had:
>
>- interactivity to study the problem
>- quick prototyping to try ideas
>- fast scalar code
>- fast SIMD loops 
>
> I've working on a paper that will describe the experience in more detail.
>
> - Arch
>
>

Re: [julia-users] Re: Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Viral Shah
The BLAS will certainly make a difference, but OpenBLAS is reasonably good. 

I also wonder what is happening in our \ polyalgorithm. The profile 
suggests the code is trying Cholesky decomposition, but it really shouldn't 
since the matrix is not symmetric. If I just do the lufact(), which 
essentially calls Umfpack, I can match Matlab timing:

@time F = lufact(d["M"]); F \ d["RHS"];
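
The factor-once pattern above, as a self-contained sketch. Note it uses current-Julia names (`lu` here corresponds to the `lufact` of the thread's 0.3/0.4 era); the matrix and sizes are made up for illustration.

```julia
using SparseArrays, LinearAlgebra

# Build a well-conditioned sparse system, factor it once with UMFPACK's LU,
# then reuse the factorization for several right-hand sides.
A = sprand(200, 200, 0.02) + 10I
b1, b2 = rand(200), rand(200)
F = lu(A)      # the expensive step, done once
x1 = F \ b1    # each solve is just cheap triangular substitutions
x2 = F \ b2
```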

-viral

On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
>
> The difference could be the BLAS.  MATLAB comes with its own BLAS library, 
> and the performance
> of the BLAS has a huge impact on the performance of UMFPACK, particularly 
> for 3D discretizations.
>
> On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari  
> wrote:
>
>> I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33) 
>> and Julia. I use the "\" to solve the linear system of equations. Here is 
>> the performance of the linear solver in Julia:
>> elapsed time: 2.743971424 seconds (35236720 bytes allocated)
>>
>> and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
>> sp\: bandwidth = 1056+1+1056.
>> sp\: is A diagonal? no.
>> sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
>> sp\: is A triangular? no.
>> sp\: is A morally triangular? no.
>> sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? 
>> no.
>> sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
>> sp\: UMFPACK's factorization was successful.
>> sp\: UMFPACK's solve was successful.
>> Elapsed time is 0.819120 seconds.
>>
>> I have uploaded the sparse matrix (M) and the right-hand side (RHS) 
>> vectors in a mat file here:
>> https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
>>
>> I read in the documents that Julia uses Umfpack for sparse matrices. My 
>> question is why umfpack is faster when it is called from matlab?
>>
>> The matlab and julia codes are here:
>> https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
>> https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
>>
>> and the FVM codes are here:
>> https://github.com/simulkade/FVTool
>> https://github.com/simulkade/JFVM
>>
>> Thanks a lot in advance,
>>
>> Ehsan
>>
>>
>> On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:
>>>
>>> I guess it is the last 20 years of sparse solver work packed into one 
>>> function. Not many fields can boast of providing this level of usability 
>>> out of their work. :-)
>>>
>>> There is a class of people who believe that things like \ encourage 
>>> blackbox usage, with people doing stuff they do not understand, and there 
>>> are others who believe in standing on the shoulders of giants.
>>>
>>> I find that we have taken a good approach in Julia, where we have \ and 
>>> it will have the perfect polyalgorithm at some point. But, you also have 
>>> the option of digging deeper with interfaces such as lufact(), cholfact(), 
>>> qrfact(), and finally, even if that does not work out for you, call the 
>>> LAPACK and SuiteSparse functions directly.
>>>
>>> -viral
>>>
>>> On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:

 Goodness. This is why there needs to be a polyalgorithm – no mortal 
 user could know all of this stuff!


 On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah  wrote:

> Doug,
>
> Ideally, the backslash needs to look for diagonal matrices, triangular 
> matrices and permutations thereof, banded matrices and the least squares 
> problems (non-square). In case it is square, symmetric and hermitian, 
> with 
> a heavy diagonal(?), cholesky can be attempted, with a fallback to LU. I 
> believe we do some of this in the dense \ polyalgorithm, but I am not 
> sure 
> if we look for the banded cases yet.
>
> This is what Octave does:
> http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#Sparse-Linear-Algebra
>
> This is Tim's Factorize for solving linear and least squares systems:
> http://www.cise.ufl.edu/research/sparse/SuiteSparse/current/SuiteSparse/MATLAB_Tools/Factorize/Doc/factorize_demo.html
>
> -viral
>
>
> On Tuesday, June 4, 2013 8:18:39 PM UTC+5:30, Douglas Bates wrote:
>>
>> On Thursday, May 30, 2013 10:10:59 PM UTC-5, Mingming Wang wrote:
>>
>>> Hi, 
>>>
>>> I am trying to port my MATLAB program to Julia. The for loop is 
>>> about 25% faster. But the backslash is about 10 times slower. It seems 
>>> in 
>>> MATLAB, the backslash is parallelized automatically. Is there any plan 
>>> in 
>>> Julia to do this? BTW, the matrix I am solving is sparse and symmetric.
>>>
>>
>> For a sparse symmetric matrix try
>>
>> cholfact(A)\b
>>
>> The simple
>>
>> A\b
>>
>> call will always use an LU decomposition from UMFPACK.
>>  
>>
>

>

Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread Viral Shah
I doubt that rand(Bool) is any slower, since randbool() calls rand(Bool). 
It is worth filing this as a performance regression.

-viral

On Monday, January 5, 2015 9:41:45 PM UTC+5:30, lapeyre@gmail.com wrote:
>
>  It may be in part the implementation of the RNG. I think it is also in 
> part whether the abstraction is optimized away.
> Notice that Julia v0.3 is faster than v0.4. This is probably randbool() 
> vs. rand(Bool).
>
> On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote:
>>
>> Very neat. Just in case this gets posted to the interwebz, it is worth 
>> pointing out that the performance advantage for Julia can probably be 
>> explained by differences in the underlying RNG. We use dsFMT, which is 
>> known to be one of (if not the?) fastest MT libraries around. I could not 
>> find any published comparisons in a quick google, but based on this test 
>> harness [1], dsFMT may be significantly faster than std::mt19937:
>>
>> ```
>> ihnorton@julia:~/tmp/cpp-random-test$ ./random-real
>> C++11 : 2.34846
>> Boost : 0.371674
>> dSFMT : 0.281255
>> GSL   : 0.649981
>> ```
>>
>> [1] https://github.com/yomichi/cpp-random-test
>>
>>
>> On Mon, Jan 5, 2015 at 10:12 AM,  wrote:
>>
>>> Oh, and, (I forgot to mention!)  the Julia code runs much faster.
>>>
>>>
>>> On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com 
>>> wrote:

 Hi, here is a comparison of Julia and C++ for simulating a random walk.

 It is the first Julia program I wrote. I just pushed it to github.

 --John


>>

Re: [julia-users] [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-05 Thread Ivar Nesje
Have you seen https://github.com/JuliaLang/julia/pull/8987?



Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Viral Shah
There is also an issue filed:

https://github.com/JuliaLang/julia/issues/9617

-viral

On Tuesday, January 6, 2015 12:29:33 AM UTC+5:30, Peter Mancini wrote:
>
> No. I'm tongue in cheek pointing out the absurdity of the situation.
>
> On Monday, January 5, 2015 12:57:45 PM UTC-6, Hans W Borchers wrote:
>>
>> Does this mean you suggest disallowing variable names 'e', 'f', 'p' (and 
>> possibly 
>> others) in a programming environment for scientific computing? Hard to 
>> believe.
>>
>>
>> On Monday, January 5, 2015 7:41:49 PM UTC+1, Peter Mancini wrote:
>>>
>>> That is a case of e being overloaded. It helps with the OP's issue 
>>> though. For the scientific notation issue I would suggest choosing which is 
>>> more useful, natural e or using e for a base ten exponent. 
>>>
>>> On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:

 On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  
 wrote:

> Usually a language handles this problem by making the constants such 
> as p and e as reserved. Thus you can't create a new variable with those 
> names and since they are constant you can't assign to them without 
> raising 
> an error.
>

 That doesn't help here since `2e+1` would still mean something 
 different than `2e + 1`.

>>>

Re: [julia-users] reading compressed csv file?

2015-01-05 Thread Jameson Nash
It seems perhaps that each Process instance should remember its IO streams,
so that it could be used directly as an IO object.
On Mon, Jan 5, 2015 at 2:32 PM Steven G. Johnson 
wrote:

> On Monday, January 5, 2015 9:09:41 AM UTC-5, Kevin Squire wrote:
>>
>> FWIW, I believe that there was concern that the behavior of open(process)
>> might cause confusion when it was defined in this way. (A quick search
>> didn't locate the issue.)
>
>
> See the discussion at https://github.com/JuliaLang/julia/pull/6948
>


Re: [julia-users] Re: Suggestion for "tuple types" explanation in manual

2015-01-05 Thread Jiahao Chen
> I am very qualified to state exactly where I am getting confused and
where it could be better.  alas, if I tried to write it, I would write
incorrect explanation, which would probably be worse.  so, this needs
one person who is learning it (the consumer) and one person who is
teaching it (the producer).

Well, I'd still recommend you try anyway. In the open source model, there
is much less of a distinction between producer and consumer. Submitting a
pull request to change the documentation is a good way of getting other
people to tell you what is wrong with the current version. It usually only
takes a few iterations to get it right (or at least to the point where
there are no obvious flaws).


Re: [julia-users] reading compressed csv file?

2015-01-05 Thread Steven G. Johnson
On Monday, January 5, 2015 9:09:41 AM UTC-5, Kevin Squire wrote:
>
> FWIW, I believe that there was concern that the behavior of open(process) 
> might cause confusion when it was defined in this way. (A quick search 
> didn't locate the issue.)


See the discussion at https://github.com/JuliaLang/julia/pull/6948 


Re: [julia-users] Re: Suggestion for "tuple types" explanation in manual

2015-01-05 Thread ivo welch
documentation is a tricky thing.  I am pretty sure you do *not* want
me to make doc changes.

I am very qualified to state exactly where I am getting confused and
where it could be better.  alas, if I tried to write it, I would write
incorrect explanation, which would probably be worse.  so, this needs
one person who is learning it (the consumer) and one person who is
teaching it (the producer).

what we really need is the ability for readers to attach questions and
notes to specific spots, that someone with knowledge can address later
on at his/her convenience.  the more specific the insertion point for
comments is, the easier it will be for the writer of the documentation
to fix this later.

fwiw, I have written a textbook in corporate finance.  I needed
student reviewers that told me where they got confused.  the material
was obvious to me.  I don't think there is any other way to make
writing clear.

in the absence of ability to change github to allow specific insertion
points, I wonder if we want to branch the docs and request that
comments be left in a particular color (say, red).

regards,

/iaw


Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Distinguished Professor of Finance
Anderson School at UCLA, C519
Director, UCLA Anderson Fink Center for Finance and Investments
Free Finance Textbook, http://book.ivo-welch.info/
Exec Editor, Critical Finance Review, http://www.critical-finance-review.org/
Editor and Publisher, FAMe, http://www.fame-jagazine.com/


On Mon, Jan 5, 2015 at 2:35 AM, Sean Marshallsay  wrote:
> Hi Ivo
>
> You're more than welcome to contribute to the documentation yourself to help
> clarify anything you found confusing.
>
> Regarding your second point, open() does not return a named type it returns
> a tuple containing some kind of stream and some kind of process, Pipe is
> some kind of stream and Process is some kind of process. Hopefully the
> following code snippet will help clear things up.
>
> julia> x = open(`less`)
> (Pipe(closed, 0 bytes waiting),Process(`less`, ProcessExited(0)))
>
> julia> y = typeof(x)
> (Pipe,Process)
>
> julia> typeof(y)
> (DataType,DataType)
>
> help?> issubtype
> INFO: Loading help data...
> Base.issubtype(type1, type2)
>
>True if and only if all values of "type1" are also of "type2".
>Can also be written using the "<:" infix operator as "type1 <:
>type2".
>
> julia> issubtype((Base.Pipe, Base.Process), (Base.AsyncStream,
> Base.Process))
> true
>
> help?> super
> Base.super(T::DataType)
>
>Return the supertype of DataType T
>
> julia> super(Base.Pipe)
> AsyncStream
>
> julia> super(Base.Process)
> Any
>
> So what we can see is that open() does return a (stream, process) tuple but
> stream should actually be called AsyncStream and process should actually be
> called Process.
>
> Hope this helps
> Sean
>
>
> On Monday, 5 January 2015 06:59:31 UTC, ivo welch wrote:
>>
>>
>> I am reading again about the type system, esp in
>> http://julia.readthedocs.org/en/latest/manual/types/ .  I am a good guinea
>> pig for a manual, because I don't know too much.
>>
>> a tuple is like function arguments without the functions.  so,
>>
>> mytuple=(1,"ab",(3,4),"5")
>>
>> is a tuple.  good.
>>
>> what can I do with a tuple?  the manual tells me right upfront that I can
>> do a typeof(mytuple) function call to see its types.  good.
>>
>> alas, then it goes into intricacies of how types "sort-of" inherit.  I
>> need a few more basics first.
>>
>> I would suggest adding to the docs right after the typeof function that,
>> e.g., mytuple[2] shows the contents of the second parameter.  the julia cli
>> prints the contents.  the examples would be a little clearer, perhaps, if
>> one used a nested tuple, like (1,2,("foo",3),"bar").
>>
>> before getting into type relations, I would also add how one creates a
>> named tuple.  since open() does exactly this.  well, maybe I am wrong.  the
>> docs say it returns a (stream,process), but typeof(open(`gzcat d.csv.gz`))
>> tells me I have a (Pipe,Process).
>>
>> I know how to extract the n-th component of the open() returned tuple
>> (with the [] index operator), but I don't know how to get its name.  x.Pipe
>> does not work for open().
>>
>> well, my point is that it would be useful to add a few more examples and
>> explanations here.
>>
>> regards,
>>
>> /iaw
>>
>


Re: [julia-users] reading compressed csv file?

2015-01-05 Thread ivo welch
hi kevin---I would be happy to open an issue, but I would prefer if
the "honor" was left to someone (you?) who can articulate it better.
I am a true novice here.

if I understand it right, the fix is easy.  is a "Handle" change
complex and/or needed?  just overload all functions that expect a Pipe
to work also with the (Pipe,Process) tuple.   otoh, maybe doing this
with a Handle simply automates this everywhere?!  not sure.  I can't
weigh in on a discussion.  I just don't know enough.

regards,

/iaw


Ivo Welch (ivo.we...@gmail.com)
http://www.ivo-welch.info/
J. Fred Weston Distinguished Professor of Finance
Anderson School at UCLA, C519
Director, UCLA Anderson Fink Center for Finance and Investments
Free Finance Textbook, http://book.ivo-welch.info/
Exec Editor, Critical Finance Review, http://www.critical-finance-review.org/
Editor and Publisher, FAMe, http://www.fame-jagazine.com/


On Mon, Jan 5, 2015 at 6:09 AM, Kevin Squire  wrote:
>
>
>>> another strange definition from a novice perspective:  close(x1) is
>>> not defined.  close(x1[1]) is.
>>
>>
>> close() is defined for a stream, not a tuple (stream, process).
>>
>>>
>>> julia is the first language I have
>>> seen where a close(open("file")) is wrong.
>
>
> FWIW, I believe that there was concern that the behavior of open(process)
> might cause confusion when it was defined in this way. (A quick search
> didn't locate the issue.)
>
> The goal was to minimize the number of methods, but it might be worth
> exploring an alternative interface. A simple change would be to create and
> return a typed object (say, Handle), instead of a tuple, which would both
> allow easy closing directly and give access to the opened process.
>
> Ivo, would you be willing to open an issue regarding your confusion here
> (and point back to this thread)?
>
> Cheers,
>Kevin
>
>
>>
>> close(open("filenamestring")) is fine, close(open(command)) is not because
>> open(command) returns a tuple of two things, not just the stream.  This is
>> Julia's primary paradigm: multiple dispatch means that the same named function
>> can have several methods that do different things depending on the *type* of
>> the arguments to the call, string or command.
>
>
>
>
>>
>>
>>>
>>>  this is esp surprising
>>> because julia has the dispatch ability to understand what it could do
>>> with a close(Pipe,Process) tuple.
>>
>>
>> But only if such a close() method is defined, which it is not.  Maybe it
>> should be, but open(command) is significantly less used than open(file).
>>
>> Cheers
>> Lex
>>
>>
>>>
>>>  the same holds true for other
>>> functions that expect a part of open.  julia should be smart enough to
>>> know this.
>>>
>>> regards,
>>>
>>> /iaw
>>>
>>> 
>>> Ivo Welch (ivo@gmail.com)
>>> http://www.ivo-welch.info/
>>> J. Fred Weston Distinguished Professor of Finance
>>> Anderson School at UCLA, C519
>>> Director, UCLA Anderson Fink Center for Finance and Investments
>>> Free Finance Textbook, http://book.ivo-welch.info/
>>> Exec Editor, Critical Finance Review,
>>> http://www.critical-finance-review.org/
>>> Editor and Publisher, FAMe, http://www.fame-jagazine.com/
>>>
>>>
>>> On Sun, Jan 4, 2015 at 6:29 PM, Todd Leo  wrote:
>>> > An intuitive thought is, uncompress your csv file via bash utility
>>> > zcat,
>>> > pipe it to STDIN and use readline(STDIN) in julia.
>>> >
>>> >
>>> >
>>> > On Monday, January 5, 2015 7:51:18 AM UTC+8, ivo welch wrote:
>>> >>
>>> >>
>>> >> dear julia users:  beginner's question (apologies, more will be
>>> >> coming).
>>> >> it's probably obvious.
>>> >>
>>> >> I am storing files in compressed csv form.  I want to use the built-in
>>> >> julia readcsv() function.  but I also need to pipe through a
>>> >> decompressor
>>> >> first.  so, I tried a variety of forms, like
>>> >>
>>> >>d= readcsv("/usr/bin/gzcat ./myfile.csv.gz |")
>>> >>d= readcsv("`/usr/bin/gzcat ./myfile.csv.gz`")
>>> >>
>>> >> I can type the file with run(`/usr/bin/gzcat ./crsp90.csv.gz"), but
>>> >> wrapping a readcsv around it does not capture it.  how does one do
>>> >> this?
>>> >>
>>> >> regards,
>>> >>
>>> >> /iaw
>>> >>
>>> >
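
The pattern the thread converges on, as a self-contained sketch in current-Julia terms (today `open(cmd)` returns a readable `Process` object rather than the `(stream, process)` tuple discussed above). It assumes the `gzip` command-line tool is available; the tiny file is created only for the demo.

```julia
using DelimitedFiles

# Write a small CSV, compress it, then read it back through a decompressor
# pipe without ever materializing the uncompressed file on disk.
csv = tempname() * ".csv"
write(csv, "1,2\n3,4\n")
run(`gzip $csv`)                     # leaves csv * ".gz" on disk

io = open(`gzip -dc $(csv).gz`)      # decompressor's stdout as a stream
d = readdlm(io, ',')                 # parsed into a numeric matrix
close(io)
```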


Re: [julia-users] Re: Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
The difference could be the BLAS.  MATLAB comes with its own BLAS library,
and the performance
of the BLAS has a huge impact on the performance of UMFPACK, particularly
for 3D discretizations.

On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari 
wrote:

> I'm solving diffusion equation in Matlab on a 3D uniform grid (31x32x33)
> and Julia. I use the "\" to solve the linear system of equations. Here is
> the performance of the linear solver in Julia:
> elapsed time: 2.743971424 seconds (35236720 bytes allocated)
>
> and Matlab (I used spparms('spumoni',1) to see what "\" does in Matlab):
> sp\: bandwidth = 1056+1+1056.
> sp\: is A diagonal? no.
> sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
> sp\: is A triangular? no.
> sp\: is A morally triangular? no.
> sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? no.
> sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
> sp\: UMFPACK's factorization was successful.
> sp\: UMFPACK's solve was successful.
> Elapsed time is 0.819120 seconds.
>
> I have uploaded the sparse matrix (M) and the right-hand side (RHS)
> vectors in a mat file here:
> https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
>
> I read in the documents that Julia uses Umfpack for sparse matrices. My
> question is why umfpack is faster when it is called from matlab?
>
> The matlab and julia codes are here:
> https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
> https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
>
> and the FVM codes are here:
> https://github.com/simulkade/FVTool
> https://github.com/simulkade/JFVM
>
> Thanks a lot in advance,
>
> Ehsan
>
>
> On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:
>>
>> I guess it is the last 20 years of sparse solver work packed into one
>> function. Not many fields can boast of providing this level of usability
>> out of their work. :-)
>>
>> There is a class of people who believe that things like \ encourage
>> blackbox usage, with people doing stuff they do not understand, and there
>> are others who believe in standing on the shoulders of giants.
>>
>> I find that we have taken a good approach in Julia, where we have \ and
>> it will have the perfect polyalgorithm at some point. But, you also have
>> the option of digging deeper with interfaces such as lufact(), cholfact(),
>> qrfact(), and finally, even if that does not work out for you, call the
>> LAPACK and SuiteSparse functions directly.
>>
>> -viral
>>
>> On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:
>>>
>>> Goodness. This is why there needs to be a polyalgorithm – no mortal user
>>> could know all of this stuff!
>>>
>>>
>>> On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah  wrote:
>>>
 Doug,

 Ideally, the backslash needs to look for diagonal matrices, triangular
 matrices and permutations thereof, banded matrices and the least squares
 problems (non-square). In case it is square, symmetric and hermitian, with
 a heavy diagonal(?), cholesky can be attempted, with a fallback to LU. I
 believe we do some of this in the dense \ polyalgorithm, but I am not sure
 if we look for the banded cases yet.

 This is what Octave does:
 http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#Sparse-Linear-Algebra

 This is Tim's Factorize for solving linear and least squares systems:
 http://www.cise.ufl.edu/research/sparse/SuiteSparse/current/SuiteSparse/MATLAB_Tools/Factorize/Doc/factorize_demo.html

 -viral


 On Tuesday, June 4, 2013 8:18:39 PM UTC+5:30, Douglas Bates wrote:
>
> On Thursday, May 30, 2013 10:10:59 PM UTC-5, Mingming Wang wrote:
>
>> Hi,
>>
>> I am trying to port my MATLAB program to Julia. The for loop is about
>> 25% faster. But the backslash is about 10 times slower. It seems in 
>> MATLAB,
>> the backslash is parallelized automatically. Is there any plan in Julia 
>> to
>> do this? BTW, the matrix I am solving is sparse and symmetric.
>>
>
> For a sparse symmetric matrix try
>
> cholfact(A)\b
>
> The simple
>
> A\b
>
> call will always use an LU decomposition from UMFPACK.
>
>

>>>


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Peter Mancini
No. I'm tongue in cheek pointing out the absurdity of the situation.

On Monday, January 5, 2015 12:57:45 PM UTC-6, Hans W Borchers wrote:
>
> Does this mean you suggest disallowing variable names 'e', 'f', 'p' (and 
> possibly 
> others) in a programming environment for scientific computing? Hard to 
> believe.
>
>
> On Monday, January 5, 2015 7:41:49 PM UTC+1, Peter Mancini wrote:
>>
>> That is a case of e being overloaded. It helps with the OP's issue 
>> though. For the scientific notation issue I would suggest choosing which is 
>> more useful, natural e or using e for a base ten exponent. 
>>
>> On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:
>>>
>>> On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  
>>> wrote:
>>>
 Usually a language handles this problem by making the constants such as 
 p and e as reserved. Thus you can't create a new variable with those names 
 and since they are constant you can't assign to them without raising an 
 error.

>>>
>>> That doesn't help here since `2e+1` would still mean something different 
>>> than `2e + 1`.
>>>
>>

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Hans W Borchers
Does this mean you suggest disallowing variable names 'e', 'f', 'p' (and 
possibly 
others) in a programming environment for scientific computing? Hard to 
believe.


On Monday, January 5, 2015 7:41:49 PM UTC+1, Peter Mancini wrote:
>
> That is a case of e being overloaded. It helps with the OP's issue though. 
> For the scientific notation issue I would suggest choosing which is more 
> useful, natural e or using e for a base ten exponent. 
>
> On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:
>>
>> On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  wrote:
>>
>>> Usually a language handles this problem by making the constants such as 
>>> p and e as reserved. Thus you can't create a new variable with those names 
>>> and since they are constant you can't assign to them without raising an 
>>> error.
>>>
>>
>> That doesn't help here since `2e+1` would still mean something different 
>> than `2e + 1`.
>>
>

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Stefan Karpinski
To eliminate the ambiguity, one would have to disallow all variable names
that start with the letter "e". At which point, one might as well go all
the way and just disallow using "e" altogether and rename the language to
Gadsby.
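
For concreteness, the original puzzle and the exception side by side (current-Julia behavior; the thread's exact error was specific to the 0.4-dev parser):

```julia
# Juxtaposition multiplication vs. numeric-literal suffixes.
p = 1
2p + 1       # juxtaposition: parsed as 2*p + 1, so this is 3
2e+1         # NOT 2*e + 1: this is the single float literal 20.0 (2×10^1)
```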

On Mon, Jan 5, 2015 at 1:41 PM, Peter Mancini  wrote:

> That is a case of e being overloaded. It helps with the OP's issue though.
> For the scientific notation issue I would suggest choosing which is more
> useful, natural e or using e for a base ten exponent.
>
> On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:
>>
>> On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  wrote:
>>
>>> Usually a language handles this problem by making the constants such as
>>> p and e as reserved. Thus you can't create a new variable with those names
>>> and since they are constant you can't assign to them without raising an
>>> error.
>>>
>>
>> That doesn't help here since `2e+1` would still mean something different
>> than `2e + 1`.
>>
>


Re: [julia-users] sub type definitions

2015-01-05 Thread Mike Innes
Yup, that's the one, my bad

On 5 January 2015 at 18:35, Samuel Colvin  wrote:

> Great thanks a lot, now working.
>
>
> --
>
> Samuel Colvin
> s...@muelcolvin.com,
> 07801160713
>
> On 5 January 2015 at 18:32, Jacob Quinn  wrote:
>
>> I believe it should actually be:
>>
>> func{T<:Foo}(bar::Type{T}) ...
>>
>> On Mon, Jan 5, 2015 at 11:30 AM, Mike Innes 
>> wrote:
>>
>>> func{T<:Foo}(bar::T)
>>>
>>> should do what you want, I think
>>>
>>> On 5 January 2015 at 18:28, Samuel Colvin  wrote:
>>>
 julia> abstract Foo

 julia> type SubFoo <: Foo
x
end

 julia> func(bar::Type{Foo}) = println(bar)
 func (generic function with 1 method)

 julia> f=SubFoo(1)
 SubFoo(1)

 julia> func(typeof(f))
 ERROR: `func` has no method matching func(::Type{SubFoo})



 What do I need to replace "Type{Foo}" with to get this working?

 I tried "Type{T<:Foo}" and "Type{<:Foo}" but neither worked, I felt I
 was close though?

>>>
>>>
>>
>


Re: [julia-users] K-means Clustering in matlab

2015-01-05 Thread Stefan Karpinski
This is a little confusing since it seems to be a question about Matlab.
Did you mean to ask about how to do k-means clustering in Julia?

On Mon, Jan 5, 2015 at 6:47 AM, Eng Noor  wrote:

>
>
> How to perform a k-means clustering in Matlab and use it to select
> a cluster head?
>


Re: [julia-users] printf float64 as hex/raw

2015-01-05 Thread Jameson Nash
https://developer.gnome.org/gdk3/stable/gdk3-Events.html#gdk-event-request-motions

I've generally found the manual to have better information than the
tutorials. Specifically, the tutorials generally don't seem to get updated
as Gtk evolves (esp. those not hosted on the gnome site).

That's probably a better question for SO or the Gtk dev lists.

On Mon, Jan 5, 2015 at 7:22 AM Andreas Lobinger  wrote:

> Hello colleague,
>
>
> On Sunday, January 4, 2015 6:33:24 PM UTC+1, Jameson wrote:
>>
>> Gtk3 hasn't changed that much in this area -- it is still incorrect to
>> call gdk_window_get_device_position & friends from event handlers.
>>
>
> Could you please provide a reference for this?
>
> I was reading the Gtk2 tutorial (unfortunately there is no Gtk3 tutorial)
> and in the demo program scribble (both in code and text):
> https://developer.gnome.org/gtk-tutorial/stable/x2431.html
> you'll find:
>
>> When we specify GDK_POINTER_MOTION_HINT_MASK, the server sends us a
>> motion event the first time the pointer moves after entering our window, or
>> after a button press or release event. Subsequent motion events will be
>> suppressed until we explicitly ask for the position of the pointer using
>> the function: gdk_window_get_pointer.
>>
> For the question about the integer coordinates: I always expected the
> event.x/y coordinates to be truncated when I use a mouse pointer, and in
> Gtk2 this is true in all my programs (C and Python based, some Julia). So I
> wondered why in Gtk3 they suddenly don't.
>
>
>


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Peter Mancini
That is a case of e being overloaded. It helps with the OP's issue though. 
For the scientific notation issue I would suggest choosing which is more 
useful, natural e or using e for a base ten exponent. 

On Monday, January 5, 2015 12:22:11 PM UTC-6, Stefan Karpinski wrote:
>
> On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  > wrote:
>
>> Usually a language handles this problem by making the constants such as p 
>> and e as reserved. Thus you can't create a new variable with those names 
>> and since they are constant you can't assign to them without raising an 
>> error.
>>
>
> That doesn't help here since `2e+1` would still mean something different 
> than `2e + 1`.
>


Re: [julia-users] sub type definitions

2015-01-05 Thread Jacob Quinn
I believe it should actually be:

func{T<:Foo}(bar::Type{T}) ...

On Mon, Jan 5, 2015 at 11:30 AM, Mike Innes  wrote:

> func{T<:Foo}(bar::T)
>
> should do what you want, I think
>
> On 5 January 2015 at 18:28, Samuel Colvin  wrote:
>
>> julia> abstract Foo
>>
>> julia> type SubFoo <: Foo
>>x
>>end
>>
>> julia> func(bar::Type{Foo}) = println(bar)
>> func (generic function with 1 method)
>>
>> julia> f=SubFoo(1)
>> SubFoo(1)
>>
>> julia> func(typeof(f))
>> ERROR: `func` has no method matching func(::Type{SubFoo})
>>
>>
>>
>> What do I need to replace "Type{Foo}" with to get this working?
>>
>> I tried "Type{T<:Foo}" and "Type{<:Foo}" but neither worked, I felt I was
>> close though?
>>
>
>
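
The distinction Jacob points out, dispatching on the type object of any subtype rather than on `Type{Foo}` exactly, can be sketched as follows (shown in current `where` syntax; the `func{T<:Foo}(bar::Type{T})` spelling in this thread is the pre-0.6 equivalent):

```julia
abstract type Foo end      # written `abstract Foo` in 0.3-era Julia

struct SubFoo <: Foo       # written `type SubFoo ... end` in 0.3
    x
end

# `bar::Type{Foo}` matches only the exact type object Foo. To accept the
# type object of any subtype, parameterize over T<:Foo and take Type{T}:
func(::Type{T}) where {T<:Foo} = T

f = SubFoo(1)
@assert func(typeof(f)) === SubFoo

# Mike's first suggestion, `bar::T`, matches *instances* of subtypes instead:
func2(bar::T) where {T<:Foo} = bar.x
@assert func2(f) == 1
```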


Re: [julia-users] sub type definitions

2015-01-05 Thread Samuel Colvin
Great thanks a lot, now working.


--

Samuel Colvin
s...@muelcolvin.com,
07801160713

On 5 January 2015 at 18:32, Jacob Quinn  wrote:

> I believe it should actually be:
>
> func{T<:Foo}(bar::Type{T}) ...
>
> On Mon, Jan 5, 2015 at 11:30 AM, Mike Innes 
> wrote:
>
>> func{T<:Foo}(bar::T)
>>
>> should do what you want, I think
>>
>> On 5 January 2015 at 18:28, Samuel Colvin  wrote:
>>
>>> julia> abstract Foo
>>>
>>> julia> type SubFoo <: Foo
>>>x
>>>end
>>>
>>> julia> func(bar::Type{Foo}) = println(bar)
>>> func (generic function with 1 method)
>>>
>>> julia> f=SubFoo(1)
>>> SubFoo(1)
>>>
>>> julia> func(typeof(f))
>>> ERROR: `func` has no method matching func(::Type{SubFoo})
>>>
>>>
>>>
>>> What do I need to replace "Type{Foo}" with to get this working?
>>>
>>> I tried "Type{T<:Foo}" and "Type{<:Foo}" but neither worked, I felt I
>>> was close though?
>>>
>>
>>
>


Re: [julia-users] sub type definitions

2015-01-05 Thread Mike Innes
func{T<:Foo}(bar::T)

should do what you want, I think

On 5 January 2015 at 18:28, Samuel Colvin  wrote:

> julia> abstract Foo
>
> julia> type SubFoo <: Foo
>x
>end
>
> julia> func(bar::Type{Foo}) = println(bar)
> func (generic function with 1 method)
>
> julia> f=SubFoo(1)
> SubFoo(1)
>
> julia> func(typeof(f))
> ERROR: `func` has no method matching func(::Type{SubFoo})
>
>
>
> What do I need to replace "Type{Foo}" with to get this working?
>
> I tried "Type{T<:Foo}" and "Type{<:Foo}" but neither worked, I felt I was
> close though?
>


[julia-users] sub type definitions

2015-01-05 Thread Samuel Colvin


julia> abstract Foo

julia> type SubFoo <: Foo
   x
   end

julia> func(bar::Type{Foo}) = println(bar)
func (generic function with 1 method)

julia> f=SubFoo(1)
SubFoo(1)

julia> func(typeof(f))
ERROR: `func` has no method matching func(::Type{SubFoo})



What do I need to replace "Type{Foo}" with to get this working?

I tried "Type{T<:Foo}" and "Type{<:Foo}" but neither worked, I felt I was 
close though?


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Stefan Karpinski
On Mon, Jan 5, 2015 at 12:55 PM, Peter Mancini  wrote:

> Usually a language handles this problem by making the constants such as p
> and e as reserved. Thus you can't create a new variable with those names
> and since they are constant you can't assign to them without raising an
> error.
>

That doesn't help here since `2e+1` would still mean something different
than `2e + 1`.


[julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Peter Mancini
Usually a language handles this problem by making the constants such as p 
and e as reserved. Thus you can't create a new variable with those names 
and since they are constant you can't assign to them without raising an 
error.

--Pete


Re: [julia-users] Quirks with float16 promotion

2015-01-05 Thread Jiahao Chen
Related issue: #5942

https://github.com/JuliaLang/julia/issues/5942

On Mon Jan 05 2015 at 10:06:54 AM Mark B <2460...@gmail.com> wrote:

> I'm trying to figure out promotions and noticed a few possible quirks -
> perhaps these are bugs, as I can't figure out the logic. I realize float16
> is a work in progress but I really like the data type as my datasets are
> large.
>
> julia> a=rand(Float16,1)  # define a float16 variable
> 1-element Array{Float16,1}:
>
> julia> a+1.0   # adding 1 gives a float16
> 1-element Array{Float16,1}:
>
> julia> a+1im   # but adding 1im gives a float32
> 1-element Array{Complex{Float32},1}:
>
> julia> typeof(1im)# even tho 1im is a float64
> Complex{Int64} (constructor with 1 method)
>
> julia> a+float16(1im)# and 1im could be represented in
> float16
> 1-element Array{Complex{Float16},1}:
>
> julia> sparse(a) # define a sparse float16
> matrix
> 1x1 sparse matrix with 1 Float16 entries:
>
> julia> sparse(a+1im)  # adding 1im causes
> promotion to float32
> 1x1 sparse matrix with 1 Complex{Float32} entries:
>
> julia> sparse(a)*1im # promotion to float32
> 1x1 sparse matrix with 1 Complex{Float32} entries:
> [1, 1]  =  0.0+0.335938im
>
> julia> fft(a) # fft promotes to float64
> 1-element Array{Complex{Float64},1}:
>
> For the last one, it probably need only promote to float32 to use the
> single precision fftw functions. It would be nice if julia could silently
> convert back to float16 when the in-place transform is requested:
>
> julia> g=plan_fft!(a)
> ERROR: `plan_fft!` has no method matching plan_fft!(::Array{Float16,2},
> ::UnitRange{Int64}, ::Uint32, ::Float64) in plan_fft! at fftw.jl:492
>


[julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Hans W Borchers
It's the same with 'f', i.e. 1f+1 gives 10 and 1f + 1 an error (if f is not 
defined, else a different result again).
And if someone introduces 'g' for "engineering notation", there will be an 
exception for this letter, too.

By the way, has the bug x = 10; x.1 returning 1.0 been handled in 0.4? It's 
still there in 0.3.


On Monday, January 5, 2015 2:32:00 PM UTC+1, Simon Byrne wrote:
>
> *  julia> 3e+1*
>> *  30.0*
>>
>>   *julia> 3e + 1*
>>
>> *  9.154845485377136*
>>
>
> Perhaps this is a good reason to change behaviour such that e is no longer 
> a constant: it has always seemed bit odd to use a valuable latin singleton 
> in this way. We could use a unicode script e (U+212F) instead, as suggested 
> by wikipedia:
>
> http://en.wikipedia.org/wiki/Numerals_in_Unicode#Characters_for_mathematical_constants
>
> s
>


Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread lapeyre . math122a
It may be due in part to the implementation of the RNG, and in part to
whether the abstraction is optimized away.
Notice that Julia v0.3 is faster than v0.4. This is probably randbool() vs. 
rand(Bool).

On Monday, January 5, 2015 4:50:56 PM UTC+1, Isaiah wrote:
>
> Very neat. Just in case this gets posted to the interwebz, it is worth 
> pointing out that the performance advantage for Julia can probably be 
> explained by differences in the underlying RNG. We use dsFMT, which is 
> known to be one of (if not the?) fastest MT libraries around. I could not 
> find any published comparisons in a quick google, but based on this test 
> harness [1], dsFMT may be significantly faster than std::mt19937:
>
> ```
> ihnorton@julia:~/tmp/cpp-random-test$ ./random-real
> C++11 : 2.34846
> Boost : 0.371674
> dSFMT : 0.281255
> GSL   : 0.649981
> ```
>
> [1] https://github.com/yomichi/cpp-random-test
>
>
> On Mon, Jan 5, 2015 at 10:12 AM, > 
> wrote:
>
>> Oh, and, (I forgot to mention!)  the Julia code runs much faster.
>>
>>
>> On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com wrote:
>>>
>>> Hi, here is a comparison of Julia and C++ for simulating a random walk 
>>> .
>>>
>>> It is the first Julia program I wrote. I just pushed it to github.
>>>
>>> --John
>>>
>>>
>

Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Christoph Ortner
Just noticed that Tamas already recommended that above. Just to reiterate, I 
think this is the better way to resolve this particular issue.
   Christoph

On Monday, 5 January 2015 15:04:27 UTC, Eric Forgy wrote:
>
> Maybe its not so bad if you just always include * where it should be, i.e. 
> p = 1; 2*p+1 works fine.
>
> On Mon, Jan 5, 2015 at 10:55 PM, Christoph Ortner  > wrote:
>
>> For what it's worth, it always struck me as odd that dropping the * 
>> for multiplication is allowed. Is it worth dropping this instead of the p, 
>> e notation?
>> Christoph
>>
>>
>

[julia-users] Re: Warning: imported binding for transpose overwritten in module __anon__

2015-01-05 Thread John Zuhone
Simon,

Thanks for looking into that!

Steven, what are your plans for bumping PyCall to the next version number?

Best,

John

On Sunday, January 4, 2015 6:39:33 PM UTC-5, Simon Kornblith wrote:
>
> https://github.com/stevengj/PyCall.jl/pull/110
>
> On Sunday, January 4, 2015 9:34:14 AM UTC-5, John Zuhone wrote:
>>
>> Steven,
>>
>> How difficult would it be to work a way to suppress this warning message? 
>> I general I would argue that it's best to avoid printing warnings to the 
>> screen unless there is something going on to be genuinely warned about, so 
>> as not to confuse the end-user. Since my package (
>> http://github.com/jzuhone/YT.jl) depends on SymPy, this warning is shown 
>> every time one does "using YT" or "import YT". It's a cosmetic issue, but 
>> it would still be nice to get rid of it. 
>>
>> If suppressing it is doable, I'd be happy to investigate it myself and 
>> submit a PR. I'm not sure if this should be done in PyCall or in Julia 
>> itself somehow.
>>
>> Best,
>>
>> John Z
>>
>> On Saturday, January 3, 2015 9:37:23 AM UTC-5, Steven G. Johnson wrote:
>>>
>>> You can safely ignore it.  @pyimport creates an module __anon__ (which 
>>> is assigned to plt in this case) that has definitions for the Python 
>>> functions in the Python module.   The warning is telling you that this 
>>> module creates its own "transpose" function instead of extending 
>>> Base.transpose.  (It is a warning because in many cases a module author 
>>> would have intended to add a new method to Base.transpose instead.)
>>>
>>> This is fine.  transpose in other modules still refers to 
>>> Base.transpose, and plt.transpose refers to the pylab one (== numpy 
>>> transpose).
>>>
>>> --SGJ
>>>
>>> PS. By the way, I would normally import just pyplot and not pylab.  The 
>>> pylab module is useful in Python because it imports numpy too, and without 
>>> that you wouldn't have a lot of basic array functionality.  But in Julia 
>>> you already have the equivalent of numpy built in to Julia Base.   Also, I 
>>> would tend to recommend the Julia PyPlot module over manually importing 
>>> pyplot.  The PyPlot module adds some niceties like IJulia inline plots and 
>>> interactive GUI plots, whereas pylab is imported by default in 
>>> non-interactive mode.
>>>
>>

Re: [julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread Isaiah Norton
Very neat. Just in case this gets posted to the interwebz, it is worth
pointing out that the performance advantage for Julia can probably be
explained by differences in the underlying RNG. We use dsFMT, which is
known to be one of (if not the?) fastest MT libraries around. I could not
find any published comparisons in a quick google, but based on this test
harness [1], dsFMT may be significantly faster than std::mt19937:

```
ihnorton@julia:~/tmp/cpp-random-test$ ./random-real
C++11 : 2.34846
Boost : 0.371674
dSFMT : 0.281255
GSL   : 0.649981
```

[1] https://github.com/yomichi/cpp-random-test


On Mon, Jan 5, 2015 at 10:12 AM,  wrote:

> Oh, and, (I forgot to mention!)  the Julia code runs much faster.
>
>
> On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com wrote:
>>
>> Hi, here is a comparison of Julia and C++ for simulating a random walk
>> .
>>
>> It is the first Julia program I wrote. I just pushed it to github.
>>
>> --John
>>
>>


[julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-05 Thread Tracy Wadleigh
Nice work!

I was just thinking about julia integration with atom-shell this morning. 
I'm excited to see where this goes.


Re: [julia-users] [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-05 Thread Mike Innes
Whoops, slight bug in the build script. A Pkg.update() (possibly followed
by Pkg.build("Blink"), if it doesn't happen automatically) should fix you
up.

On 5 January 2015 at 15:02, Rob J. Goedman  wrote:

> Hi Mike,
>
> Tried it a couple of times, but run into below error, in REPL.
>
> Regards,
> Rob J. Goedman
> goed...@mac.com
>
>
> julia> Pkg.add("Blink")
> INFO: Upgrading Blink: v0.1.2 => v0.1.3
> INFO: Building Blink
>   % Total% Received % Xferd  Average Speed   TimeTime Time
> Current
>  Dload  Upload   Total   SpentLeft
> Speed
> 100  100k  100  100k0 0   280k  0 --:--:-- --:--:-- --:--:--
> 280k
>   % Total% Received % Xferd  Average Speed   TimeTime Time
> Current
>  Dload  Upload   Total   SpentLeft
> Speed
> 100 35.8M  100 35.8M0 0  3217k  0  0:00:11  0:00:11 --:--:--
> 4128k
> =[ ERROR: Blink
> ]=
>
> x not defined
> while loading /Users/rob/.julia/v0.3/Blink/deps/build.jl, in expression
> starting on line 13
>
>
> ==
>
> =[ BUILD ERRORS
> ]=
>
> WARNING: Blink had build errors.
>
>  - packages with build errors remain installed in /Users/rob/.julia/v0.3
>  - build the package(s) and all dependencies with `Pkg.build("Blink")`
>  - build a single package by running its `deps/build.jl` script
>
>
> ==
> INFO: Package database updated
>
> julia> versioninfo()
> Julia Version 0.3.4
> Commit 3392026* (2014-12-26 10:42 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.4.0)
>   CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia>
>
>
>
> On Jan 5, 2015, at 6:29 AM, Mike Innes  wrote:
>
> Hello Julians,
>
> I have a shiny late Christmas present for you, complete with Julia-themed
> wrapping.
>
> Blink.jl  wraps Chrome to
> enable web-based GUIs. It's very primitive at the moment, but as a proof of
> concept it includes BlinkDisplay, which will display graphics like Gadfly
> plots in a convenient popup window (matplotlib style).
>
> Shashi has some great ideas for ways to control HTML from Julia, and
> hopefully in future we'll have more nice things like matrix/data frame
> explorers and other graphical tools.
>
> (Incidentally, I'd also appreciate any feedback on the display system I've
> made to enable this, since I'm hoping to propose it to replace Base's
> current one in future)
>
> Anyway, let me know if this is useful to you and/or there are any problems.
>
> – Mike
>
>
>


[julia-users] Re: Julia vs C++-11 for random walks

2015-01-05 Thread lapeyre . math122a
Oh, and, (I forgot to mention!)  the Julia code runs much faster.

On Monday, January 5, 2015 3:56:07 PM UTC+1, lapeyre@gmail.com wrote:
>
> Hi, here is a comparison of Julia and C++ for simulating a random walk 
> .
>
> It is the first Julia program I wrote. I just pushed it to github.
>
> --John
>
>
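
For context, a minimal 1-D walk of the kind being benchmarked might look like the sketch below. This is not the code from the linked repository; `rand(Bool)` is the spelling that replaced `randbool()` after 0.3:

```julia
# Minimal 1-D random walk: nsteps coin flips, each moving the walker by ±1.
function walk(nsteps::Int)
    pos = 0
    for _ in 1:nsteps
        pos += rand(Bool) ? 1 : -1
    end
    return pos
end

p = walk(10_000)
@assert abs(p) <= 10_000   # the walker cannot move farther than nsteps
@assert iseven(p)          # an even number of ±1 steps lands on an even site
```

The inner branch on `rand(Bool)` is exactly the kind of abstraction whose cost depends on whether the compiler optimizes it away, as noted above.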

[julia-users] Quirks with float16 promotion

2015-01-05 Thread Mark B
I'm trying to figure out promotions and noticed a few possible quirks - 
perhaps these are bugs, as I can't figure out the logic. I realize float16 
is a work in progress but I really like the data type as my datasets are 
large.

julia> a=rand(Float16,1)  # define a float16 variable
1-element Array{Float16,1}: 

julia> a+1.0   # adding 1 gives a float16
1-element Array{Float16,1}:

julia> a+1im   # but adding 1im gives a float32
1-element Array{Complex{Float32},1}:

julia> typeof(1im)# even tho 1im is a float64
Complex{Int64} (constructor with 1 method)

julia> a+float16(1im)# and 1im could be represented in 
float16
1-element Array{Complex{Float16},1}:

julia> sparse(a) # define a sparse float16 
matrix
1x1 sparse matrix with 1 Float16 entries:

julia> sparse(a+1im)  # adding 1im causes promotion 
to float32
1x1 sparse matrix with 1 Complex{Float32} entries:

julia> sparse(a)*1im # promotion to float32
1x1 sparse matrix with 1 Complex{Float32} entries:
[1, 1]  =  0.0+0.335938im

julia> fft(a) # fft promotes to float64
1-element Array{Complex{Float64},1}:

For the last one, it probably need only promote to float32 to use the 
single precision fftw functions. It would be nice if julia could silently 
convert back to float16 when the in-place transform is requested:

julia> g=plan_fft!(a)
ERROR: `plan_fft!` has no method matching plan_fft!(::Array{Float16,2}, 
::UnitRange{Int64}, ::Uint32, ::Float64) in plan_fft! at fftw.jl:492
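
The quirks above reduce to what `promote_type` reports for each pair of types. The results below are for a current Julia, where Float16 promotion was later made consistent; the 0.3-era array behavior shown in this thread differed:

```julia
# Inspect the promotion rules directly rather than via array arithmetic.
@assert promote_type(Float16, Int) == Float16
@assert promote_type(Float16, Float32) == Float32
@assert promote_type(Float16, Complex{Int}) == Complex{Float16}
```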


Re: [julia-users] Re: ANN: JLTest (An xUnit like testing framework)

2015-01-05 Thread andy hayden
This looks great!

Is it possible to further improve the do notation with something like:

using JLTest
testcase("Mytest Tests") do  # testcase name is optional
#Some code
x = 0

#Function to be called before each test (optional)
setUp() do
x += 1
end

#Function to be called after each test (optional)
tearDown() do
x=0
end

test("A Simple Test") do  # optional test name
@assertEqual(x,1)
end

#more tests or code...
end # end of test case


perhaps you could avoid the namespace (test function) issue by looking for 
prefixed test_ function names (like in Python's unittest)? i.e.

test_simple_test() do
@assertEqual(x,1)
end


This seems like the syntax you'd want (if it were possible)...
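
The function-plus-`do` pattern being discussed works because `f(args...) do x ... end` is sugar for `f(x -> ..., args...)`: the block becomes the function's first argument. A small sketch with a hypothetical `testcase` (not JLTest's actual API):

```julia
# A plain function can host the block that the macro versions take, since
# `do` passes it as the first positional argument (`body` here).
function testcase(body::Function, casename::AbstractString = "unnamed")
    results = String[]
    body(results)              # run the user's block, letting it record results
    return casename, results
end

casename, results = testcase("Mytest Tests") do r
    push!(r, "A Simple Test passed")
end

@assert casename == "Mytest Tests"
@assert results == ["A Simple Test passed"]
```

This keeps setup state in ordinary local variables and closures, at the cost of the dispatch limitation Jameson raises above (only one such block per call).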

On Saturday, 1 November 2014 18:39:32 UTC, Sal Mangano wrote:
>
> Thanks all for taking a look at the implementation and suggesting 
> alternative. It did dawn on me that it did not have to be entirely macro 
> based but I thought using macros consistently through out would be easier 
> on users as the would not have to remember which parts where functional and 
> which were macro based. Thoughts on this desire for consistency?
>
> On Saturday, November 1, 2014 1:58:56 PM UTC-4, Jameson wrote:
>>
>> In your example modification, the caller function wouldn't be able to 
>> actually access any of those internal closures, and the callee wouldn't be 
>> able to define more than one test() method.
>>
>> Instead, you could do something like the following, although I don't 
>> think it's necessarily any better than the macro version, it is perhaps a 
>> bit more flexible since you could reuse and curry function more easily, 
>> whereas the macros mostly force you to write out the explicit syntax. (note 
>> that ()->begin;end and function();end are just different ways of writing 
>> exactly equivalent expressions)
>>
>> testcase() do T
>> T.casename = "Mytest Tests"
>>
>> #Some code
>> x = 0
>>
>> #Function to be called before each test (optional)
>> T.setUp = function()
>> x += 1
>> end
>>
>> #Function to be called after each test (optional)
>> T.tearDown = function()
>> x=0
>> end
>>
>> push!(T.tests, function()
>> testname("A Simple Test") #Name of test (optional)
>> @assertEqual(x,1)
>> end)
>>
>> #more tests or code...
>>
>> end # end of test case
>>
>>
>>
>> On Sat, Nov 1, 2014 at 1:24 PM, Jason Merrill  wrote:
>>
>>> On Saturday, November 1, 2014 10:01:48 AM UTC-7, Jason Merrill wrote:


 On Thursday, October 30, 2014 8:08:23 PM UTC-4, Sal Mangano wrote:

 I wanted a unittest framework that worked more like unittest in 
 python (or xUnit in in other languages) so I wrote 
 g...@github.com:smangano/JLTest.git. 
 If you find it useful great. I am still new to Julia so if you find 
 anything showing poor taste I am happy to get critique and/or 
 suggestions 
 for improvement. I'll try to add better docs but I think it is pretty 
 self 
 explanatory if you look at test/runtests.jl



 Re: macros, in places where you have macros that should take a 
 function, or that will usually be called with a begin ... end block, you 
 can often replace them with a plain old function and simplify the 
 implementation. Julia's `do` syntax makes this especially easy. Same goes 
 for macros that take a string and don't use it to generate code.

 Borrowing the example from the readme:

 using JLTest

 @testcase begin
 @casename "Mytest Tests"

 #Some code
 x = 0

 #Function to be called before each test (optional)
 @setUp () -> (x+=1)

 #Function to be called after each test (optional)
 @tearDown () -> (x=0)

 @test begin
 @testname "A Simple Test" #Name of test (optional)
 @assertEqual(x,1)
 end

 #more tests or code...

 end # end of test case


 Could become

 using JLTest

 testcase() do
 casename("Mytest Tests")

 #Some code
 x = 0

 #Function to be called before each test (optional)
 setUp() do
 x += 1
 end

 #Function to be called after each test (optional)
 tearDown() do
 x=0
 end

 test() do
 testname("A Simple Test") #Name of test (optional)
 @assertEqual(x,1)
 end

 #more tests or code...

 end # end of test case


 So concretely, I'm suggesting testcase, casename, setup, teardown, 
 test, and testname could (should?) maybe all be functions instead of 
 macros.

 The assertions are probably best left as macros, since they actually 
 get something out of delayed evaluation (i
