[julia-users] Is there a way to convert function name to a string?

2016-04-08 Thread K leo
Say a function is named FuncA. I would like to get this name into a string,
i.e. "FuncA". Is there a way to do that?


Re: [julia-users] Re: custom ctags for Julia

2016-04-08 Thread El suisse
Sorry for the noise: what is the right way to generate the tags file?
In .julia/ ?
with the command:

`ctags -R .`

2016-03-16 10:38 GMT-03:00 Christof Stocker :

> Makes sense. I don't use C-], I use the CtrlP plugin to jump around in a
> project. I included the first parameter, so that CtrlP has an easier time
> finding the unique match
>
>
> On 2016-03-15 20:41, Daniel Arndt wrote:
>
> I gave this a go, and my experience was:
>
> TagBar worked and looked great!
>
> jump to tag (C-] in vim) did not work. This seemed to be a result of
> including the parameters in the tag. If I manually removed them from the
> tags file, tag jumping would start working again. I won't pretend to know
> the internals of what VIM and TagBar are doing differently here. I haven't
> had time to dig any deeper.
>
> Cheers,
> Dan
>
> On Monday, 14 March 2016 15:34:16 UTC-3, Daniel Arndt wrote:
>>
>> Thanks Christof,
>>
>> I've been meaning to get around to doing this myself, so you've saved me
>> some time. I'm testing it out right now.
>>
>> Cheers,
>> Dan
>>
>> On Sunday, 13 March 2016 13:50:00 UTC-3, Christof Stocker wrote:
>>>
>>> I made myself a custom one based on the one from the julia repo.
>>> Basically I separated the big one into multiple categories and also allow
>>> functions to be marked inline.
>>>
>>> I posted a gist of it for those interested
>>>
>>> https://gist.github.com/Evizero/e1595c35611c15ebf8f9
>>>
>>> Am Freitag, 12. Februar 2016 14:38:42 UTC+1 schrieb Christof Stocker:

 Hi!

 Are there any VIM users here who have a nice
 [tagbar](https://github.com/majutsushi/tagbar) going?

 For the tagbar to work properly one needs to have a Julia language
 definition for [ctags](http://ctags.sourceforge.net/). I have found
 one
 [here](https://github.com/JuliaLang/julia/blob/master/contrib/ctags)
 that nicely lists all the functions, which is great, but I wonder if
 anyone has already put in the additional effort and created a custom
 one
 that also lists types, and macros etc. Would be much appreciated.

>>>
>


RE: [julia-users] JuliaCon 2016: Keynote speakers

2016-04-08 Thread David Anthoff
Wow, that is a FANTASTIC lineup, well done!! Cheers, David

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Andreas Noack
Sent: Friday, April 8, 2016 6:43 PM
To: julia-users 
Subject: [julia-users] JuliaCon 2016: Keynote speakers

 

JuliaCon 2016
June 21 - 25, 2016
Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.
http://juliacon.org/

On behalf of the JuliaCon 2016 committee, I am happy to announce the following 
keynote speakers:

Timothy E. Holy, Thomas J. Sargent, and Guy L. Steele Jr.

Timothy E. Holy is Associate Professor of Neuroscience at Washington University 
in St. Louis. In 2009 he received the NIH Director’s Pioneer award for 
innovations in optics and microscopy. His research interests include imaging of 
neuronal activity and his lab was probably one of the first to adopt Julia for 
scientific research. He is a long time Julia contributor and a lead developer 
of Julia’s multidimensional array capabilities as well as the author of far too 
many Julia packages.

Thomas J. Sargent is Professor of Economics at New York University and Senior 
Fellow at the Hoover Institution. In 2011 the Royal Swedish Academy of Sciences 
awarded him the Nobel Memorial Prize in Economic Sciences for his work on 
macroeconomics. Together with John Stachurski he founded quant-econ.net, a 
Julia and Python based learning platform for quantitative economics focusing on 
algorithms and numerical methods for studying economic problems as well as 
coding skills.

Guy L. Steele Jr. is a Software Architect for Oracle Labs and Principal 
Investigator of the Programming Language Research project. The Association for 
Computing Machinery awarded him the 1988 Grace Murray Hopper award. He has 
co-designed the programming language Scheme, which has greatly influenced the 
design of Julia, as well as languages such as Fortress and Java.

Andreas Noack
JuliaCon 2016 Local Chair
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology



[julia-users] JuliaCon 2016: Keynote speakers

2016-04-08 Thread Andreas Noack
JuliaCon 2016
June 21 - 25, 2016
Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.
http://juliacon.org/

On behalf of the JuliaCon 2016 committee, I am happy to announce the 
following keynote speakers:

*Timothy E. Holy*, *Thomas J. Sargent*, and *Guy L. Steele Jr.*

*Timothy E. Holy* is Associate Professor of Neuroscience at Washington 
University in St. Louis. In 2009 he received the NIH Director’s Pioneer 
award for innovations in optics and microscopy. His research interests 
include imaging of neuronal activity and his lab was probably one of the 
first to adopt Julia for scientific research. He is a long time Julia 
contributor and a lead developer of Julia’s multidimensional array 
capabilities as well as the author of far too many Julia packages.

*Thomas J. Sargent* is Professor of Economics at New York University and 
Senior Fellow at the Hoover Institution. In 2011 the Royal Swedish Academy 
of Sciences awarded him the Nobel Memorial Prize in Economic Sciences for 
his work on macroeconomics. Together with John Stachurski he founded 
quant-econ.net, a Julia and Python based learning platform for quantitative 
economics focusing on algorithms and numerical methods for studying 
economic problems as well as coding skills.

*Guy L. Steele Jr.* is a Software Architect for Oracle Labs and Principal 
Investigator of the Programming Language Research project. The Association 
for Computing Machinery awarded him the 1988 Grace Murray Hopper award. He 
has co-designed the programming language Scheme, which has greatly 
influenced the design of Julia, as well as languages such as Fortress and 
Java.

Andreas Noack
JuliaCon 2016 Local Chair
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
Ok. Thanks for the explanation.

Cheers,
Daniel.

On 9 April 2016 at 02:30, Tom Breloff  wrote:

> I understand that to you this seems intuitive, but to me it is completely
>> counter-intuitive. The function that I'm calling is not changing any of its
>> inputs. Telling me that behind the scenes it calls "plots!" is describing
>> an implementation detail that can change and should have no bearing on the
>> exposed API.
>>
>
> It's not behind the scenes or an implementation detail. The "proper,
> exposed" API looks like:
>
> plt = plot(rand(10))
> plot!(plt, xlim = (0,20))
>
> Here plt is a julia object.  It's constructed during the first call, and
> changed in the second call.  This is the only form that should be used for
> "serious" work, as you shouldn't depend on global state in general.  For
> your original example, you were on the right track:
>
> plt = plot(title = "...", xaxis = (...), ...)
> for dir in directories
> ... do some work ...
> plot!(plt, result, ...)
> end
>
> You can set up the plot attributes in the first call, and then add series
> (data) with your results.
>
> In your original code, what if there's something in that "do some work"
> that calls a plotting command?  (answer... your code breaks)  So the
> mutating plot call without an AbstractPlot as the first argument is really
> only meant for quick, one-off, analytic work.  It's not meant to be the
> primary usage pattern.
>
> I would argue that "plots(plt, ...)" is like "write(stream, ...)"
>>
>
> In this example, plt is a julia object that is mutated, whereas stream is
> effectively unchanged (it still points to the same file descriptor, or
> whatever it wraps).
>
> The better analogies (IMO) are:
>
> plt = plot()
> x = rand(10)
>
> # these are similar
> push!(x, rand())
> plot!(plt, x)
>
> # these are similar
> show(io, x)
> display(plt)
>
>
>


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
Looks like I'm outnumbered.

On 9 April 2016 at 02:20, ben  wrote:

> I also like the ! in Plots a lot.
>
> Ben!


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Tom Breloff

>
> I understand that to you this seems intuitive, but to me it is completely 
> counter-intuitive. The function that I'm calling is not changing any of its 
> inputs. Telling me that behind the scenes it calls "plots!" is describing 
> an implementation detail that can change and should have no bearing on the 
> exposed API.
>

It's not behind the scenes or an implementation detail. The "proper, 
exposed" API looks like:

plt = plot(rand(10))
plot!(plt, xlim = (0,20))

Here plt is a julia object.  It's constructed during the first call, and 
changed in the second call.  This is the only form that should be used for 
"serious" work, as you shouldn't depend on global state in general.  For 
your original example, you were on the right track:

plt = plot(title = "...", xaxis = (...), ...)
for dir in directories
... do some work ...
plot!(plt, result, ...)
end

You can set up the plot attributes in the first call, and then add series 
(data) with your results.

In your original code, what if there's something in that "do some work" 
that calls a plotting command?  (answer... your code breaks)  So the 
mutating plot call without an AbstractPlot as the first argument is really 
only meant for quick, one-off, analytic work.  It's not meant to be the 
primary usage pattern.

I would argue that "plots(plt, ...)" is like "write(stream, ...)"
>

In this example, plt is a julia object that is mutated, whereas stream is 
effectively unchanged (it still points to the same file descriptor, or 
whatever it wraps).  

The better analogies (IMO) are:

plt = plot()
x = rand(10)

# these are similar
push!(x, rand())
plot!(plt, x)

# these are similar
show(io, x)
display(plt)




Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread ben
I also like the ! in Plots a lot.

Ben!

Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Kristoffer Carlsson
I like the ! in Plots. 

Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
On 9 April 2016 at 00:09, Tom Breloff  wrote:
>
> The `plot!` command  is primarily `plot!(plt::AbstractPlot, args...;
> kw...)`.  In this case it holds to convention.
>
> I have a convenience `current()` which stores the most recently updated
> AbstractPlot object in a global, so that any plotting command without a
> leading AbstractPlot object gets it added implicitly.  (i.e. a call to
> `plot!(...)` is really a call to `plot!(current(), ...)`).
>
> I think this strategy is better than alternatives, and isn't too far from
> accepted conventions.
>


I understand that to you this seems intuitive, but to me it is completely
counter-intuitive. The function that I'm calling is not changing any of its
inputs. Telling me that behind the scenes it calls "plots!" is describing
an implementation detail that can change and should have no bearing on the
exposed API.

Another pertinent example is the write function:


write(stream, foo)


Using your line of reasoning you'd say that "write()" is modifying "stream"
and therefore should have an exclamation mark. But this is clearly not how
the Julia convention works. The fact that "stream" points to something that
was modified is not enough to merit the exclamation mark.

I would argue that "plots(plt, ...)" is like "write(stream, ...)"


Cheers,
Daniel.


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Tom Breloff
>
>
>>
> Actually, it doesn't seem entirely consistent with Julia conventions.
>
> The standard Julia convention (borrowed from Lisp/Scheme) is that a !
> suffix means that a function modifies *one of its arguments*.
>

The `plot!` command  is primarily `plot!(plt::AbstractPlot, args...;
kw...)`.  In this case it holds to convention.

I have a convenience `current()` which stores the most recently updated
AbstractPlot object in a global, so that any plotting command without a
leading AbstractPlot object gets it added implicitly.  (i.e. a call to
`plot!(...)` is really a call to `plot!(current(), ...)`).

I think this strategy is better than alternatives, and isn't too far from
accepted conventions.


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Steven G. Johnson

On Friday, April 8, 2016 at 4:00:47 AM UTC-4, Tamas Papp wrote:
>
> IMO an extra ! is a small price to pay for consistency --- you are after 
> all modifying an existing object. Avoiding globals is also a good 
> strategy. 
>

Actually, it doesn't seem entirely consistent with Julia conventions. 

The standard Julia convention (borrowed from Lisp/Scheme) is that a ! 
suffix means that a function modifies *one of its arguments*.

Here, however, one of the arguments is not being modified, but rather some 
global object (a plot) is being changed.  Modifying global state is a 
different sort of side effect from modifying an argument.   And, in Julia, 
side-effects in general don't lead to a ! suffix.   (e.g. any I/O is a side 
effect, but functions like println and write in the Julia standard library 
don't have ! suffixes.)

Steven
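
A tiny sketch illustrating the convention described above (an illustrative addition, not from the original message):

```julia
v = Int[]
push!(v, 1)          # mutates its argument v, hence the `!`
println(STDOUT, v)   # a side effect (I/O) on a stream, but no `!` by convention
```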


[julia-users] Re: Getting the name of the running program

2016-04-08 Thread Steven G. Johnson
In addition to getting the current file via @__FILE__, in Julia 0.5 you can 
get the name of the currently running script (e.g. "foo.jl" when you run 
"julia foo.jl") via 
PROGRAM_FILE: https://github.com/JuliaLang/julia/pull/14114
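
A small sketch combining the two (PROGRAM_FILE requires Julia 0.5 or later, per the pull request above):

```julia
println("this file:   ", @__FILE__)      # the file containing this line
println("script name: ", PROGRAM_FILE)   # e.g. "foo.jl" when run as `julia foo.jl`
```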


[julia-users] eigenvalues for Matrix-Free problem

2016-04-08 Thread Steven G. Johnson
You need to define a type to hold your data, and then define a * operator to 
multiply your type by a vector. 
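
A rough sketch of that approach (illustrative only: the operator type and the 1-D Laplacian stencil are made up for this example, and the exact set of methods `eigs` needs beyond `*` can vary between Julia versions):

```julia
import Base: *, size, eltype

immutable LaplaceOp      # holds whatever data defines the operator; here just a size
    n::Int
end

size(L::LaplaceOp) = (L.n, L.n)
eltype(::LaplaceOp) = Float64

# apply the operator to a vector without ever forming a matrix
*(L::LaplaceOp, x::AbstractVector) = Float64[
    2x[i] - (i > 1 ? x[i-1] : 0.0) - (i < L.n ? x[i+1] : 0.0) for i in 1:L.n]

op = LaplaceOp(100)
y  = op * rand(100)       # matrix-free matrix-vector product
# λ, ϕ = eigs(op; nev=4)  # sketch: eigs may require additional methods
```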

Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
Hello,

Here is an example use case for me. I have several directories with data. I
want to step through each directory, do some work, and plot some result;
all in the same figure:

for dir in directories
... do some work ...
plot(result)
end


Clearly it would be inconvenient if, every time I do this (which is
all the time), I had to make sure that the first set of results runs
``plot()`` and the others run ``plot!()``. How would you deal with this
using Plots.jl? The solution I can come up with is:

plot()  # Plot nothing.
for dir in directories
... do some work ...
plot!(result)
end

I guess that would be the best option.

Cheers,
Daniel.



On 8 April 2016 at 15:07, Tom Breloff  wrote:

>
>
> On Fri, Apr 8, 2016 at 3:44 AM, Daniel Carrera  wrote:
>
>> Hello,
>>
>> I was looking through the API for Plots.jl
>>
>> http://plots.readthedocs.org/en/latest/#api
>>
>
> If you look just above that, note that I put a big warning that this
> section of the docs needs updating.  Personally, I hardly ever use those
> mutating methods; but some people prefer that style, so I make it available.
>
>
>> Maybe I'm the only one, but I think all those exclamation marks are a bit
>> extraneous and feel like syntactic noise.
>>
>
> It modifies a plot, and so follows Julia convention.  Anything else is
> likely to induce confusion.
>
>
>> I have been following Plots.jl because I'm interested in plotting. My use
>> of Julia comes down to either making plots, or post-processing data so I
>> can make plots. I get the idea from Plots.jl that functions that end in
>> an exclamation mark are supposed to modify an existing plot. So you get
>> things like:
>>
>> plot!(...)  # Add another plot to an existing one.
>> title!(...)
>> xaxis!("mylabel", :log, :flip)
>> xlims!(...)
>> xticks!(...)
>>
>
> If you don't want to plot like this, then don't!  There's a million ways
> to produce the same plot.  If you want to put all your commands in one
> line, this will work: plot(rand(10), title="TITLE", xaxis = ("mylabel",
> :log, :flip, (0, 20), linspace(1, 10, 20)))
>
> [image: Inline image 1]
> Plots figures out that log is the scale, (0,20) is the axis limits, and
> linspace(1,10,20) are the tick marks, which reduces a ton of clutter (not
> to mention you don't need to remember a complicated API).  One of the key
> goals of Plots is that you can use whatever style suits you.  (and feel
> free to open issues if there's something that you think can be more
> intuitive)
>
>
>>  In PyPlot, all commands edit the current plot unless you explicitly call
>> `figure()` to create a new plot. You can also use clf() to clear the
>> current plot. I think this is something that PyPlot / Matplotlib get right.
>>
>
> We can agree to disagree on this point.  It's clunky and error-prone.
>
> Quick tip: you can choose to reuse PyPlot windows by default if you want:
>
> # these will effectively call clf() before each command
>>
> using Plots
>
> pyplot(reuse = true)
>> plot(rand(10))
>> plot(rand(10))
>> plot(rand(10))
>
>
>
> - Tom
>


Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Laurent Bartholdi
I understand my sketch is very specific... if memory efficiency is at
issue, it's important to treat pointers as 61-bit and to accept working
with 63-bit signed ints if discriminated unions are to exist at no memory cost.
(In the computer algebra applications I have in mind, losing 50% speed is
fine, losing 50% memory is not)

On Fri, Apr 8, 2016, 17:47 Stefan Karpinski  wrote:

> This could help if both parts of the union were plain old data like Int64
> and Float64, but in the case of Int and BigInt, one of them is a more
> complex type, which would currently force the whole thing to be boxed and
> live in the heap anyway. What's needed here is the ability to
> stack-allocate objects that refer to the heap. We cannot do that now, but
> it is an important planned GC improvement.
>
> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones 
> wrote:
>
>> I like very much the idea of a discriminated union - that would help
>> enormously with some of the stuff I work on.
>>
>>
>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>>
>>> Laurent
>>>
>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>>>
>>> I think what you want is an extension of `Nullable`: A nullable type
>>> represents an object of a particular type that might or might not be
>>> there; there's hope that this would be done efficiently, e.g. via an
>>> additional bit. What you want is an efficient representation of a
>>> discriminated union -- a type that can represent either of two types,
>>> but without the overhead from boxing the types as is currently done in
>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
>>> would make sense to have a feature request for this.
>>>
>>> This might look like this:
>>> ```Julia
>>> immutable FastInt
>>> val::DiscriminatedUnion{Int, BigInt}
>>> end
>>> function (+)(a::FastInt, b::FastInt)
>>> if typeindex(a.val) == 1 & typeindex(b.val) == 1
>>> ov,res = add_with_overflow(a.val[1], b.val[1])
>>> ov && return FastInt(BigInt(res))
>>> return FastInt(res)
>>> end
>>> # TODO: handle mixed case
>>> FastInt(a.val[2] + b.val[2])
>>> end
>>> ```
>>>
>>> This would be the same idea as yours, but the `reinterpret` occurs
>>> within Julia, in a type-safe and type-stable manner, in a way such
>>> that the compiler can optimize better.
>>>
>>> You could try defining a type that contains two fields, both an `Int`
>>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
>>> is never used in a more efficient manner that doesn't require
>>> allocating an object.
>>>
>>> -erik
>>>
>>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
>>>  wrote:
>>> > Dear all,
>>> > How hard would it be to code arbitrary-precision integers in Julia
>>> with at
>>> > worst 2x performance loss over native Ints?
>>> >
>>> > This is what I have in mind: have a bitstype structure, say 64 bits,
>>> which
>>> > is either the address of a BigInt (if even), or an Int64 (if odd).
>>> Addition
>>> > would be something like:
>>> >
>>> > function +(a::FastInt,b::FastInt)
>>> > if a&1
>>> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
>>> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
>>> > elseif a&1
>>> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
>>> > elseif b&1
>>> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
>>> > else
>>> > reinterpret(FastInt,reinterpret(BigInt,a) +
>>> reinterpret(BigInt,b))
>>> > end
>>> > end
>>> >
>>> > (code not meant to be run, just a skeleton)
>>> >
>>> > This would be very useful in the development of computer algebra
>>> systems, in
>>> > which BigInts are too slow and eat up too much memory, but one is
>>> ready to
>>> > pay a small price to guard against arithmetic overflows.
>>> >
>>> > If this is too complicated, then perhaps at least a type of integers
>>> that
>>> > would raise an error in case of over/underflows? Those could be caught
>>> in
>>> > throw/catch enclosures, so the user could restart the operation with
>>> > BigInts.
>>> >
>>> > TIA, Laurent
>>>
>>>
>>>
>>> --
>>> Erik Schnetter 
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>
> --
Laurent Bartholdi
DMA, École Normale Supérieure, 45 rue d'Ulm, 75005 Paris. +33 14432 2060
Mathematisches Institut, Universität Göttingen, Bunsenstrasse 3-5, D-37073
Göttingen. +49 551 39 7826


Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Stefan Karpinski
This could help if both parts of the union were plain old data like Int64
and Float64, but in the case of Int and BigInt, one of them is a more
complex type, which would currently force the whole thing to be boxed and
live in the heap anyway. What's needed here is the ability to
stack-allocate objects that refer to the heap. We cannot do that now, but
it is an important planned GC improvement.

On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones 
wrote:

> I like very much the idea of a discriminated union - that would help
> enormously with some of the stuff I work on.
>
>
> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>
>> Laurent
>>
>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>>
>> I think what you want is an extension of `Nullable`: A nullable type
>> represents an object of a particular type that might or might not be
>> there; there's hope that this would be done efficiently, e.g. via an
>> additional bit. What you want is an efficient representation of a
>> discriminated union -- a type that can represent either of two types,
>> but without the overhead from boxing the types as is currently done in
>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
>> would make sense to have a feature request for this.
>>
>> This might look like this:
>> ```Julia
>> immutable FastInt
>> val::DiscriminatedUnion{Int, BigInt}
>> end
>> function (+)(a::FastInt, b::FastInt)
>> if typeindex(a.val) == 1 & typeindex(b.val) == 1
>> ov,res = add_with_overflow(a.val[1], b.val[1])
>> ov && return FastInt(BigInt(res))
>> return FastInt(res)
>> end
>> # TODO: handle mixed case
>> FastInt(a.val[2] + b.val[2])
>> end
>> ```
>>
>> This would be the same idea as yours, but the `reinterpret` occurs
>> within Julia, in a type-safe and type-stable manner, in a way such
>> that the compiler can optimize better.
>>
>> You could try defining a type that contains two fields, both an `Int`
>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
>> is never used in a more efficient manner that doesn't require
>> allocating an object.
>>
>> -erik
>>
>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
>>  wrote:
>> > Dear all,
>> > How hard would it be to code arbitrary-precision integers in Julia with
>> at
>> > worst 2x performance loss over native Ints?
>> >
>> > This is what I have in mind: have a bitstype structure, say 64 bits,
>> which
>> > is either the address of a BigInt (if even), or an Int64 (if odd).
>> Addition
>> > would be something like:
>> >
>> > function +(a::FastInt,b::FastInt)
>> > if a&1
>> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
>> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
>> > elseif a&1
>> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
>> > elseif b&1
>> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
>> > else
>> > reinterpret(FastInt,reinterpret(BigInt,a) +
>> reinterpret(BigInt,b))
>> > end
>> > end
>> >
>> > (code not meant to be run, just a skeleton)
>> >
>> > This would be very useful in the development of computer algebra
>> systems, in
>> > which BigInts are too slow and eat up too much memory, but one is ready
>> to
>> > pay a small price to guard against arithmetic overflows.
>> >
>> > If this is too complicated, then perhaps at least a type of integers
>> that
>> > would raise an error in case of over/underflows? Those could be caught
>> in
>> > throw/catch enclosures, so the user could restart the operation with
>> > BigInts.
>> >
>> > TIA, Laurent
>>
>>
>>
>> --
>> Erik Schnetter 
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>


Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Scott Jones
I like very much the idea of a discriminated union - that would help 
enormously with some of the stuff I work on.

On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>
> Laurent 
>
> I'm afraid you can't use `reinterpret` with a type such as `BigInt`. 
>
> I think what you want is an extension of `Nullable`: A nullable type 
> represents an object of a particular type that might or might not be 
> there; there's hope that this would be done efficiently, e.g. via an 
> additional bit. What you want is an efficient representation of a 
> discriminated union -- a type that can represent either of two types, 
> but without the overhead from boxing the types as is currently done in 
> a `Union`. Unfortunately, Julia currently doesn't provide this, but it 
> would make sense to have a feature request for this. 
>
> This might look like this: 
> ```Julia 
> immutable FastInt 
> val::DiscriminatedUnion{Int, BigInt} 
> end 
> function (+)(a::FastInt, b::FastInt) 
> if typeindex(a.val) == 1 & typeindex(b.val) == 1 
> ov,res = add_with_overflow(a.val[1], b.val[1]) 
> ov && return FastInt(BigInt(res)) 
> return FastInt(res) 
> end 
> # TODO: handle mixed case 
> FastInt(a.val[2] + b.val[2]) 
> end 
> ``` 
>
> This would be the same idea as yours, but the `reinterpret` occurs 
> within Julia, in a type-safe and type-stable manner, in a way such 
> that the compiler can optimize better. 
>
> You could try defining a type that contains two fields, both an `Int` 
> and a `BigInt` -- maybe `BigInt` will handle the case of a value that 
> is never used in a more efficient manner that doesn't require 
> allocating an object. 
>
> -erik 
>
> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi 
>  wrote: 
> > Dear all, 
> > How hard would it be to code arbitrary-precision integers in Julia with 
> at 
> > worst 2x performance loss over native Ints? 
> > 
> > This is what I have in mind: have a bitstype structure, say 64 bits, 
> which 
> > is either the address of a BigInt (if even), or an Int64 (if odd). 
> Addition 
> > would be something like: 
> > 
> > function +(a::FastInt,b::FastInt) 
> > if a&1 
> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1) 
> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result 
> > elseif a&1 
> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b)) 
> > elseif b&1 
> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1) 
> > else 
> > reinterpret(FastInt,reinterpret(BigInt,a) + 
> reinterpret(BigInt,b)) 
> > end 
> > end 
> > 
> > (code not meant to be run, just a skeleton) 
> > 
> > This would be very useful in the development of computer algebra 
> systems, in 
> > which BigInts are too slow and eat up too much memory, but one is ready 
> to 
> > pay a small price to guard against arithmetic overflows.
> > 
> > If this is too complicated, then perhaps at least a type of integers 
> that 
> > would raise an error in case of over/underflows? Those could be caught 
> in 
> > throw/catch enclosures, so the user could restart the operation with 
> > BigInts. 
> > 
> > TIA, Laurent 
>
>
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


[julia-users] Re: Error installing hydrogen for Atom

2016-04-08 Thread Achu
That's unfortunate. I was excited about hydrogen.

On Friday, 8 April 2016 00:14:02 UTC-5, Jeffrey Sarnoff wrote:
>
> I have experienced this too.  Hydrogen has never been a load-and-go 
> proposition for Julia on Windows.  Before this, there were other hang-ups 
> and some have had their roots in something an earlier release of Atom on 
> Win was doing or not doing.  I have downloaded a version of Juno/LT that 
> works on Win7.  It is less of an advanced editor today, but it coordinates 
> easily with Julia for type/eval/edit/eval/save.
>
> On Thursday, April 7, 2016 at 9:58:20 PM UTC-4, Achu wrote:
>>
>> Was trying to install hydrogen for Atom when I got this error. Tried it 
>> with every available version of ZMQ and no luck. Anyone know why?
>>
>> Installing hydrogen to C:\Users\achug\.atom\packages failed 
>>
>> > zmq@2.14.0 install 
>> C:\Users\achug\AppData\Local\Temp\apm-install-dir-11637-16296-1xp74zg\node_modules\Hydrogen\node_modules\jmp\node_modules\zmq
>> > node-gyp rebuild
>>
>>
>> C:\Users\achug\AppData\Local\Temp\apm-install-dir-11637-16296-1xp74zg\node_modules\Hydrogen\node_modules\jmp\node_modules\zmq>if
>>  
>> not defined npm_config_node_gyp (node 
>> "C:\Users\achug\AppData\Local\atom\app-1.6.2\resources\app\apm\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js"
>>  
>> rebuild )  else (node  rebuild )
>>
>> gypnpm ERR! Windows_NT 6.2.9200
>> npm ERR! argv 
>> "C:\\Users\\achug\\AppData\\Local\\atom\\app-1.6.2\\resources\\app\\apm\\bin\\node.exe"
>>  
>> "C:\\Users\\achug\\AppData\\Local\\atom\\app-1.6.2\\resources\\app\\apm\\node_modules\\npm\\bin\\npm-cli.js"
>>  
>> "--globalconfig" "C:\\Users\\achug\\.atom\\.apm\\.apmrc" "--userconfig" 
>> "C:\\Users\\achug\\.atom\\.apmrc" "install" 
>> "C:\\Users\\achug\\AppData\\Local\\Temp\\d-11637-16296-10fuf5t\\package.tgz" 
>> "--target=0.34.5" "--arch=ia32"
>> npm ERR! node v0.10.40
>> npm ERR! npm  v2.13.3
>> npm ERR! code ELIFECYCLE
>>
>> npm ERR! zmq@2.14.0 install: `node-gyp rebuild`
>> npm ERR! Exit status 1
>> npm ERR!
>> npm ERR! Failed at the zmq@2.14.0 install script 'node-gyp rebuild'.
>> npm ERR! This is most likely a problem with the zmq package,
>> npm ERR! not with npm itself.
>> npm ERR! Tell the author that this fails on your system:
>> npm ERR! node-gyp rebuild
>> npm ERR! You can get their info via:
>> npm ERR! npm owner ls zmq
>> npm ERR! There is likely additional logging output above.
>>
>

Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
Hello,

On 8 April 2016 at 15:07, Tom Breloff  wrote:

>
> It modifies a plot, and so follows Julia convention.  Anything else is
> likely to induce confusion.
>

I do see your point of view, and of course it's your library. I also don't
want to diss your work. I think the convention is about modifying inputs,
rather than modifying "something". Functions that modify files don't get
exclamation marks. On the other hand, Gtk.jl is a blend, with functions
like push!() and delete!() but also destroy(), and signal_connect(). The
authors just picked what made sense to them.



>  In PyPlot, all commands edit the current plot unless you explicitly call
>> `figure()` to create a new plot. You can also use clf() to clear the
>> current plot. I think this is something that PyPlot / Matplotlib get right.
>>
>
> We can agree to disagree on this point.  It's clunky and error-prone.
>


Agree to disagree. This is just an opinion.



> Quick tip: you can choose to reuse PyPlot windows by default if you want:
>
> # these will effectively call clf() before each command
>>
> using Plots
>
> pyplot(reuse = true)
>> plot(rand(10))
>> plot(rand(10))
>> plot(rand(10))
>
>
Thanks.

Cheers,
Daniel.


Re: [julia-users] Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread CrocoDuck O'Ducks
I think so. The MATLAB xcorr implementation accepts a maxlag argument,
which is probably handy for many people.

I don't think I will write my own xcorr version. I will probably take a dirty
shortcut: calculate the cross-correlation only on windowed segments of the
signals. It should give the correct lag anyway.
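
For what it's worth, a rough sketch of the limited-lag / windowed idea (a hypothetical helper, not Base's xcorr; sign and normalization conventions may differ):

```julia
function xcorr_maxlag(x::AbstractVector, y::AbstractVector, maxlag::Int)
    n = min(length(x), length(y))
    lags = -maxlag:maxlag
    T = promote_type(eltype(x), eltype(y))
    r = zeros(T, length(lags))
    for (k, lag) in enumerate(lags)
        s = zero(T)
        for i in max(1, 1 - lag):min(n, n - lag)
            s += x[i + lag] * y[i]    # correlate x shifted by `lag` against y
        end
        r[k] = s
    end
    return lags, r
end

# lags, r = xcorr_maxlag(sig_a, sig_b, 48000)   # e.g. limit to ±1 s at 48 kHz
```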

On Friday, 8 April 2016 14:49:59 UTC+1, Stefan Karpinski wrote:
>
> Would it make sense to have a maxlag keyword option on xcorr to limit how 
> big the lags it considers are?
>
> On Friday, April 8, 2016, DNF  wrote:
>
>> The xcorr function calls the conv function, which again uses fft. If you 
>> know the general structure and length of your signals ahead of time, you 
>> can probably gain some performance by planning the ffts beforehand. I don't 
>> know why it doesn't work for you, but you could have a look at conv in 
>> dsp.jl.
>>
>> If you *really* want to speed things up, though, you might implement 
>> your own xcorr.  xcorr dominates the runtime in your function, and if you 
>> know an upper bound on the signal lags, you can implement xcorr with a 
>> limited number of lags. By default xcorr calculates for all lags (in your 
>> case that's 2*48000*60-1 ~ 6 million lags). If you know that the max lag is 
>> 1 second, you can save ~98% of the runtime of xcorr.
>>
>> A couple of other remarks:
>> * There's no need to put type annotations next to the function outputs, 
>> it's just visual noise
>> * Use ct_idx = cld(lₛ, 2) and forget about the mod.
>>
>

Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Stefan Karpinski
This is possible but requires significant cooperation from the garbage
collector, which is not possible without a lot of work. This is work that
is planned, however: efficient bigints are definitely in our sights.

On Friday, April 8, 2016, Erik Schnetter  wrote:

> Laurent
>
> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>
> I think what you want is an extension of `Nullable`: A nullable type
> represents an object of a particular type that might or might not be
> there; there's hope that this would be done efficiently, e.g. via an
> additional bit. What you want is an efficient representation of a
> discriminated union -- a type that can represent either of two types,
> but without the overhead from boxing the types as is currently done in
> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
> would make sense to have a feature request for this.
>
> This might look like this:
> ```Julia
> immutable FastInt
> val::DiscriminatedUnion{Int, BigInt}
> end
> function (+)(a::FastInt, b::FastInt)
> if typeindex(a.val) == 1 & typeindex(b.val) == 1
> ov,res = add_with_overflow(a.val[1], b.val[1])
> ov && return FastInt(BigInt(res))
> return FastInt(res)
> end
> # TODO: handle mixed case
> FastInt(a.val[2] + b.val[2])
> end
> ```
>
> This would be the same idea as yours, but the `reinterpret` occurs
> within Julia, in a type-safe and type-stable manner, in a way such
> that the compiler can optimize better.
>
> You could try defining a type that contains two fields, both an `Int`
> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
> is never used in a more efficient manner that doesn't require
> allocating an object.
>
> -erik
>
> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
> > wrote:
> > Dear all,
> > How hard would it be to code arbitrary-precision integers in Julia with
> at
> > worst 2x performance loss over native Ints?
> >
> > This is what I have in mind: have a bitstype structure, say 64 bits,
> which
> > is either the address of a BigInt (if even), or an Int64 (if odd).
> Addition
> > would be something like:
> >
> > function +(a::FastInt,b::FastInt)
> > if a&1
> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
> > elseif a&1
> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
> > elseif b&1
> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
> > else
> > reinterpret(FastInt,reinterpret(BigInt,a) +
> reinterpret(BigInt,b))
> > end
> > end
> >
> > (code not meant to be run, just a skeleton)
> >
> > This would be very useful in the development of computer algebra
> systems, in
> > which BigInts are too slow and eat up too much memory, but one is ready
> to
> > pay a small price to guard against arithmetic overflows.
> >
> > If this is too complicated, then perhaps at least a type of integers that
> > would raise an error in case of over/underflows? Those could be caught in
> > throw/catch enclosures, so the user could restart the operation with
> > BigInts.
> >
> > TIA, Laurent
>
>
>
> --
> Erik Schnetter >
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread Stefan Karpinski
Would it make sense to have a maxlag keyword option on xcorr to limit how
big the lags it considers are?

On Friday, April 8, 2016, DNF  wrote:

> The xcorr function calls the conv function, which again uses fft. If you
> know the general structure and length of your signals ahead of time, you
> can probably gain some performance by planning the ffts beforehand. I don't
> know why it doesn't work for you, but you could have a look at conv in
> dsp.jl.
>
> If you *really* want to speed things up, though, you might implement your
> own xcorr.  xcorr dominates the runtime in your function, and if you know
> an upper bound on the signal lags, you can implement xcorr with a limited
> number of lags. By default xcorr calculates for all lags (in your case
> that's 2*48000*60-1 ~ 6million lags). If you know that the max lag is 1
> second, you can save ~98% percent of the runtime of xcorr.
>
> A couple of other remarks:
> * There's no need to put type annotations next to the function outputs,
> it's just visual noise
> * Use ct_idx = cld(lₛ, 2) and forget about the mod.
>


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Tom Breloff
On Fri, Apr 8, 2016 at 3:44 AM, Daniel Carrera  wrote:

> Hello,
>
> I was looking through the API for Plots.jl
>
> http://plots.readthedocs.org/en/latest/#api
>

If you look just above that, note that I put a big warning that this
section of the docs needs updating.  Personally, I hardly ever use those
mutating methods; but some people prefer that style, so I make it available.


> Maybe I'm the only one, but I think all those exclamation marks are a bit
> extraneous and feel like syntactic noise.
>

It modifies a plot, and so follows Julia convention.  Anything else is
likely to induce confusion.


> I have been following Plots.jl because I'm interested in plotting. My use
> of Julia comes down to either making plots, or post-processing data so I
> can make plots. I get the idea from Plots.jl that functions that end in
> an exclamation mark are supposed to modify an existing plot. So you get
> things like:
>
> plot!(...)  # Add another plot to an existing one.
> title!(...)
> xaxis!("mylabel", :log, :flip)
> xlims!(...)
> xticks!(...)
>

If you don't want to plot like this, then don't!  There's a million ways to
produce the same plot.  If you want to put all your commands in one line,
this will work: plot(rand(10), title="TITLE", xaxis = ("mylabel", :log,
:flip, (0, 20), linspace(1, 10, 20)))

[image: Inline image 1]
Plots figures out that log is the scale, (0,20) is the axis limits, and
linspace(1,10,20) are the tick marks, which reduces a ton of clutter (not
to mention you don't need to remember a complicated API).  One of the key
goals of Plots is that you can use whatever style suits you.  (and feel
free to open issues if there's something that you think can be more
intuitive)


>  In PyPlot, all commands edit the current plot unless you explicitly call
> `figure()` to create a new plot. You can also use clf() to clear the
> current plot. I think this is something that PyPlot / Matplotlib get right.
>

We can agree to disagree on this point.  It's clunky and error-prone.

Quick tip: you can choose to reuse PyPlot windows by default if you want:

# these will effectively call clf() before each command
>
using Plots

pyplot(reuse = true)
> plot(rand(10))
> plot(rand(10))
> plot(rand(10))



- Tom


Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Erik Schnetter
Laurent

I'm afraid you can't use `reinterpret` with a type such as `BigInt`.

I think what you want is an extension of `Nullable`: A nullable type
represents an object of a particular type that might or might not be
there; there's hope that this would be done efficiently, e.g. via an
additional bit. What you want is an efficient representation of a
discriminated union -- a type that can represent either of two types,
but without the overhead from boxing the types as is currently done in
a `Union`. Unfortunately, Julia currently doesn't provide this, but it
would make sense to have a feature request for this.

This might look like this:
```Julia
immutable FastInt
val::DiscriminatedUnion{Int, BigInt}
end
function (+)(a::FastInt, b::FastInt)
if typeindex(a.val) == 1 & typeindex(b.val) == 1
ov,res = add_with_overflow(a.val[1], b.val[1])
ov && return FastInt(BigInt(res))
return FastInt(res)
end
# TODO: handle mixed case
FastInt(a.val[2] + b.val[2])
end
```

This would be the same idea as yours, but the `reinterpret` occurs
within Julia, in a type-safe and type-stable manner, in a way such
that the compiler can optimize better.

You could try defining a type that contains two fields, both an `Int`
and a `BigInt` -- maybe `BigInt` will handle the case of a value that
is never used in a more efficient manner that doesn't require
allocating an object.

-erik

On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
 wrote:
> Dear all,
> How hard would it be to code arbitrary-precision integers in Julia with at
> worst 2x performance loss over native Ints?
>
> This is what I have in mind: have a bitstype structure, say 64 bits, which
> is either the address of a BigInt (if even), or an Int64 (if odd). Addition
> would be something like:
>
> function +(a::FastInt,b::FastInt)
> if a&1
> (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
> obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
> elseif a&1
> reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
> elseif b&1
> reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
> else
> reinterpret(FastInt,reinterpret(BigInt,a) + reinterpret(BigInt,b))
> end
> end
>
> (code not meant to be run, just a skeleton)
>
> This would be very useful in the development of computer algebra systems, in
> which BigInts are too slow and eat up too much memory, but one is ready to
> pay a small price to guard against arithmetic overflows.
>
> If this is too complicated, then perhaps at least a type of integers that
> would raise an error in case of over/underflows? Those could be caught in
> throw/catch enclosures, so the user could restart the operation with
> BigInts.
>
> TIA, Laurent



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Creating a subtype of IO that wraps an IOBuffer

2016-04-08 Thread Daniel Arndt
Hi Kevin,

That's definitely a useful reference, they are doing a lot of similar 
things. Thanks!

Cheers,
Dan

On Wednesday, 6 April 2016 12:44:30 UTC-3, Kevin Squire wrote:
>
> Have you seen https://github.com/BioJulia/BufferedStreams.jl?  Is that 
> close to what you're trying to do?
>
> Cheers,
>Kevin
>
> On Wed, Apr 6, 2016 at 7:23 AM, Daniel Arndt  > wrote:
>
>> Great! That *seems* to work in my extremely limited testing so far, 
>> thanks Milan!
>>
>> Please note that I have since identified the error in my constructor 
>> (entirely unrelated to any issues)... it should read:
>>
>> NewIOType() = new(IOBuffer())
>>
>>
>>
>> On Wednesday, 6 April 2016 11:06:17 UTC-3, Milan Bouchet-Valat wrote:
>>>
>>> Le mercredi 06 avril 2016 à 06:48 -0700, Daniel Arndt a écrit : 
>>> > This is shorter than it looks, it's mostly code / output. Feel free 
>>> > to skip to the tl;dr. 
>>> > 
>>> > I'm playing around with an idea where I've created a new type that 
>>> > wraps an IOBuffer. 
>>> > 
>>> > This new type would hold some other information as well, but for its 
>>> > read/write operations, I wanted to just pass the calls on to the 
>>> > encapsulated IOBuffer. I thought this would be fairly simple (as any 
>>> > good programming story begins): 
>>> > 
>>> > import Base: write 
>>> > 
>>> > 
>>> > type NewIOType <: IO 
>>> > buffer::IOBuffer 
>>> > some_other_stuff::Int 
>>> >  
>>> > new() = new(IOBuffer()) 
>>> > end 
>>> > 
>>> > 
>>> > write(io::NewIOType, x...) = write(io.buffer, x...) 
>>> > 
>>> > However, write seems to conflict with multiple other definitions: 
>>> > 
>>> > WARNING: New definition  
>>> > write(Main.NewIOType, Any...) at In[1]:10 
>>> > is ambiguous with:  
>>> > write(Base.IO, Base.Complex) at complex.jl:78. 
>>> > To fix, define  
>>> > write(Main.NewIOType, Base.Complex) 
>>> > before the new definition. 
>>> > WARNING: New definition  
>>> > write(Main.NewIOType, Any...) at In[1]:10 
>>> > is ambiguous with:  
>>> > write(Base.IO, Base.Rational) at rational.jl:66. 
>>> > ... 
>>> > and on and on 
>>> > ... 
>>> > 
>>> > I understand the problem: Although my first parameter is more 
>>> > specific, my second is not. In an exploratory move, I tried: 
>>> > 
>>> > write{T}(io::NewIOType, x::T) = write(io.buffer, x) 
>>> > 
>>> > Thinking that this would create a new write function for every type T 
>>> > and therefore be more specific (I could use some clarity here, as 
>>> > obviously my understanding is incorrect). I get this: 
>>> > 
>>> > WARNING: New definition  
>>> > write(Main.NewIOType, #T<:Any) at In[1]:10 
>>> > is ambiguous with:  
>>> > write(Base.IO, Base.Complex) at complex.jl:78. 
>>> > To fix, define  
>>> > write(Main.NewIOType, _<:Base.Complex) 
>>> > before the new definition. 
>>> > WARNING: New definition  
>>> > write(Main.NewIOType, #T<:Any) at In[1]:10 
>>> > is ambiguous with:  
>>> > write(Base.IO, Base.Rational) at rational.jl:66. 
>>> > To fix, define  
>>> > write(Main.NewIOType, _<:Base.Rational) 
>>> > 
>>> > tl;dr Can I wrap an IO instance, and pass all calls to 'write' to the 
>>> > wrapped instance's version? 
>>> > 
>>> > I am entirely capable and aware of other approaches, so while I do 
>>> > appreciate suggestions of alternative approaches, I am specifically 
>>> > wondering if there is some mechanism that I'm missing that easily 
>>> > overcomes this. 
>>> I think you only need to provide write(s::NewIOType, x::UInt8). All 
>>> higher-level write() methods will automatically use it. 
>>>
>>>
>>> Regards 
>>>
>>
>
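
Putting Milan's suggestion together, a minimal sketch (the `some_other_stuff` initializer is an assumption added here):

```julia
import Base: write

type NewIOType <: IO
    buffer::IOBuffer
    some_other_stuff::Int
    NewIOType() = new(IOBuffer(), 0)
end

# Forward only the low-level byte write; Base's higher-level write/print
# methods are built on top of it.
write(io::NewIOType, x::UInt8) = write(io.buffer, x)

io = NewIOType()
print(io, "hello")            # ends up going byte-by-byte through write(io, ::UInt8)
takebuf_string(io.buffer)     # => "hello"
```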

[julia-users] Re: Fast multiprecision integers?

2016-04-08 Thread Jeffrey Sarnoff
Julia has functions for checked integer arithmetic on the base integer 
types which raise an OverflowError().

   using Base.Checked  # now these functions are available for use:
   # checked_neg, checked_abs, checked_add, checked_sub, checked_mul,
   # checked_div, checked_rem, checked_fld, checked_mod, checked_cld
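
For illustration, a small sketch of the overflow-then-promote pattern being discussed (a hypothetical helper; as noted just below, try/catch is not cheap, so this is about correctness rather than speed):

```julia
using Base.Checked

function add_or_big(a::Int, b::Int)
    try
        return checked_add(a, b)       # stays an Int when no overflow occurs
    catch err
        isa(err, OverflowError) || rethrow(err)
        return big(a) + big(b)         # promote to BigInt on overflow
    end
end

add_or_big(1, 2)              # => 3
add_or_big(typemax(Int), 1)   # => 9223372036854775808 (a BigInt)
```
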
As I recall,  the current try .. catch mechanism is not optimized for that 
sort of heavy use within conditional logic.
So the approach you suggest is reasonable to me.  Fredrik Johansson does 
something similar in the way his Arb software,
written in C, interacts internally with what would be Julia's BigFloat. 
 The package Nemo.jl gives access to some of Arb
and FLINT (the Fast Library for Number Theory) within Julia.

See the Arb doc and the Nemo doc for details.



On Friday, April 8, 2016 at 2:07:30 AM UTC-4, Laurent Bartholdi wrote:
>
> Dear all,
> How hard would it be to code arbitrary-precision integers in Julia with at 
> worst 2x performance loss over native Ints?
>
> This is what I have in mind: have a bitstype structure, say 64 bits, which 
> is either the address of a BigInt (if even), or an Int64 (if odd). Addition 
> would be something like:
>
> function +(a::FastInt,b::FastInt)
> if a&1
> (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
> obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
> elseif a&1
> reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
> elseif b&1
> reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
> else
> reinterpret(FastInt,reinterpret(BigInt,a) + reinterpret(BigInt,b))
> end
> end
>
> (code not meant to be run, just a skeleton)
>
> This would be very useful in the development of computer algebra systems, 
> in which BigInts are too slow and eat up too much memory, but one is ready 
> to pay a small price to guard against arithmetic overflows.
>
> If this is too complicated, then perhaps at least a type of integers that 
> would raise an error in case of over/underflows? Those could be caught in 
> throw/catch enclosures, so the user could restart the operation with 
> BigInts.
>
> TIA, Laurent
>


[julia-users] Re: How to initialize a Matrix{T}?

2016-04-08 Thread 'Greg Plowman' via julia-users
Maybe something like:

x = Array{Int}(3,4,0)

x = vec(x)
append!(x, 1:12)
x = reshape(x,3,4,1)

x = vec(x)
append!(x, 13:24)
x = reshape(x,3,4,2)

Of course you could wrap it into a more convenient function.


On Friday, April 8, 2016 at 7:31:30 PM UTC+10, Sisyphuss wrote:

> I have a related question:
>
> How to initialize an array of fixed shape except one dimension, such as `m 
> * n * 0`. 
> And I would like to dynamically increase the last dimension later.
>


Re: [julia-users] state of Winston

2016-04-08 Thread Tom Breloff
Plots.jl is by far the best option ;)  I'm happy to help get you started if
you run into any trouble.


[julia-users] Re: Getting the name of the running program

2016-04-08 Thread Andrea Pagnani
If by "name of the program" you mean the name of the file where the 
function is defined, then

 println(@__FILE__)

does the job.

On Friday, April 8, 2016 at 10:53:14 AM UTC+2, Ferran Mazzanti wrote:
>
> Hi folks,
>
> probably a most stupid question here :) Is there a way to get a string 
> with the name of the Julia program one is running? 
> I ask because I usually write output files starting with a header stating 
> something like "# Results generated by the program " followed by the 
> program name. So how could I get the name of the program I'm running?
>
> Best regards and thanks,
>
> Ferran.
>


[julia-users] Re: Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread DNF


On Friday, April 8, 2016 at 1:31:54 PM UTC+2, DNF wrote:
>
> FFT is O(N*logN), while time domain is O(L*logN) where L is the number of 
> lags.
>

I meant, of course, that the time domain xcorr is O(L*N). 


[julia-users] Re: Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread DNF
Hmm. Thinking a bit more about it, it might be hard to beat the FFT
when your signals are *that* long.

FFT is O(N*logN), while time domain is O(L*logN) where L is the number of 
lags. log2(60*48000) is less than 22(!) so the FFT will be *very* beneficial. 
Here you can have a look at a discussion of conv speed in Julia:
https://github.com/JuliaLang/julia/issues/13996 , including a heuristic for 
choosing between FFT and time domain processing.
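
For concreteness, the numbers in this thread work out as follows (a quick check added here, not part of the original post):

```julia
N = 60 * 48000   # samples in 60 s of audio at 48 kHz: 2_880_000
L = 2N - 1       # lags a full xcorr computes: 5_759_999, i.e. ~6 million
log2(N)          # ≈ 21.46, i.e. "less than 22"
```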


[julia-users] Re: diagm for an (n x 1) Array{T, 2}

2016-04-08 Thread Sisyphuss
IMHO, it would also make sense to extend `diagm` to (n x 1 x 1), (n x 1 x 1 
x 1), (1 x n x 1), ... arrays. 

But if we check the dimension in the function body, it may decrease 
performance (IMHO).

Note that you can always write `diagm(vec(a))`: `vec` first turns the (n x 1) array into a vector.
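
A quick illustration of that workaround (with a made-up 3x1 array):

```julia
a = reshape([1, 2, 3], 3, 1)   # an (n x 1) Array{Int,2}
diagm(vec(a))                  # vec drops the trailing dimension; gives a 3x3 diagonal matrix
```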




On Friday, April 8, 2016 at 6:59:37 AM UTC+2, SrAceves wrote:
>
> Wouldn't it make sense to extend diagm to accept (n x 1) Array{T, 2}?
>


[julia-users] Re: How to initialize a Matrix{T}?

2016-04-08 Thread Sisyphuss
I have a related question:

How do I initialize an array whose shape is fixed except for one dimension, 
such as `m * n * 0`?
I would like to dynamically increase the last dimension later.


Re: [julia-users] state of Winston

2016-04-08 Thread jonathan . bieler
Gadfly is probably the best solution for a native implementation, but it 
has serious performance issues, especially if you want interactivity (which 
you can get with https://github.com/JuliaGraphics/Immerse.jl).


[julia-users] Getting the name of the running program

2016-04-08 Thread Ferran Mazzanti
Hi folks,

probably a most stupid question here :) Is there a way to get a string with 
the name of the Julia program one is running? 
I ask because I usually write output files starting with a header stating 
something like "# Results generated by the program " followed by the 
program name. So how could I get the name of the program I'm running?

Best regards and thanks,

Ferran.


[julia-users] Re: MXNet setting workspace, Convolutional layers, kernels etc

2016-04-08 Thread kleinsplash
Getting `ERROR: LoadError: UndefVarError: @mxcall not defined`, which is 
odd...

On Friday, 8 April 2016 10:18:03 UTC+2, kleinsplash wrote:
>
> Hi,
>
> I reduced the workspace to 1024 and batch_size to 1, and I get the same 
> memory error as before. I went back to using the whole batch and it's running 
> again. 
>
> Thanks for the other stuff; I will look into that now. 
>
>
>
> On Friday, 8 April 2016 09:53:18 UTC+2, Valentin Churavy wrote:
>>
>> Also take a look at https://github.com/dmlc/MXNet.jl/pull/73 where I 
>> implemented debug_str in Julia so that you can test your network on its 
>> space requirements.
>>
>> On Friday, 8 April 2016 15:54:06 UTC+9, Valentin Churavy wrote:
>>>
>>> What happens if you set the batch_size to 1? Also take a look at 
>>> https://github.com/dmlc/mxnet/tree/master/example/memcost
>>>
>>> Also workspace is per convolution and you should keep it small.  
>>>
>>> On Thursday, 7 April 2016 19:13:36 UTC+9, kleinsplash wrote:

 Hi,

 I have a memory error using a Quadro K5000M, which has 4 GB of global memory. 
 I was wondering if there is some guide as to how to set my workspace and 
 convolutional layers.

 My current settings: 

 training_data =  128x128x1x800
 batch_size = 128x128x1x8
 workspace = 2048 (I think this can go up to 4096 because of the 
 .deviceQuery)

 This is my net (still to be designed, so just basic):

 # first conv
 conv1 = @mx.chain mx.Convolution(data=data, kernel=(5,5), num_filter=20, workspace=workspace) =>
         mx.Activation(act_type=:relu) =>
         mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
 # second conv
 conv2 = @mx.chain mx.Convolution(data=conv1, kernel=(5,5), num_filter=50, workspace=workspace) =>
         mx.Activation(act_type=:relu) =>
         mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
 # first fully-connected
 fc1   = @mx.chain mx.Flatten(data=conv2) =>
         mx.FullyConnected(num_hidden=500) =>
         mx.Activation(act_type=:relu)
 # second fully-connected
 fc2   = mx.FullyConnected(data=fc1, num_hidden=10)
 # third fully-connected
 fc3   = mx.FullyConnected(data=fc2, num_hidden=2)
 # softmax loss
 net   = mx.SoftmaxOutput(data=fc3, name=:softmax)

 So far, if I reduce my image to 28x28 it all works, but I need to up 
 the resolution to pick out features. Anyone have any ideas on rough 
 initial values that at least get past the memory issues, so I can move on to 
 the design of the net?


 Thx



Re: [julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-08 Thread Eric Forgy
julia> foo{T}(A::Vector{Matrix{T}}) = 1

This says: foo is a function with parameter "T" that takes an argument of 
type "Vector{Matrix{T}}".

No problem.

julia> bar(A::Vector{Matrix}) = 1


This says: bar is a function with no parameters that takes an argument of 
type "Vector{Matrix}". 

However, "Vector" is a typealias for "Array{T,1}" for some type "T". So 
whether or not you explicitly state the parameter, "Vector" is still a 
parameterized type. 

No problem.

baz(A::Vector{Matrix{T}}) = 1

This says: baz is a function with no method parameters, yet its argument type 
"Vector{Matrix{T}}" refers to a parameter "T" that was never declared.

See the problem? 

To use a parameter, you need to specify that the function has a parameter.
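
A minimal sketch of the fix just described:

```julia
baz{T}(A::Vector{Matrix{T}}) = 1   # the parameter T is now declared on the method

baz([rand(2,2), rand(3,3)])        # matches: the argument is a Vector{Matrix{Float64}}
```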

This section of the documentation might be helpful: TypeVars.

Hope this helps.

Best regards,
Eric


Re: [julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Tamas Papp
Hi,

IMO an extra ! is a small price to pay for consistency --- you are after
all modifying an existing object. Avoiding globals is also a good
strategy.

The lure of terser syntax for common use cases is always there, but I
like to remind myself that I will be the person reading this code in 6
months or a year, and I will be grateful for all the clarity I can
get. That said, maybe my use case differs from yours: I like to write
functions that make plots, so my data analysis programs are more
structured (eg standardize axes, titles, colors, plot types, etc).

Another, more functional-style API could be what ggplot does in R, using
the + operator to create a new plot with the extras.
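
For concreteness, a rough sketch of what such a non-mutating, +-based API
could look like (every name here is made up for illustration; nothing below
is part of Plots.jl):

import Base: +

immutable PlotSpec          # hypothetical immutable plot description
    layers::Vector{Any}
end
PlotSpec() = PlotSpec(Any[])

# "+" returns a new spec instead of mutating the old one
+(p::PlotSpec, layer) = PlotSpec(vcat(p.layers, Any[layer]))

spec = PlotSpec() + (:line, rand(10)) + (:title, "example") + (:xscale, :log10)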

Best,

Tamas

On Fri, Apr 08 2016, Daniel Carrera wrote:

> Hello,
>
> I was looking through the API for Plots.jl
>
> http://plots.readthedocs.org/en/latest/#api
>
>
> Maybe I'm the only one, but I think all those exclamation marks are a bit
> extraneous and feel like syntactic noise. I have been following Plots.jl
> because I'm interested in plotting. My use of Julia comes down to either
> making plots, or post-processing data so I can make plots. I get the idea
> from Plots.jl that functions that end in an exclamation mark are supposed
> to modify an existing plot. So you get things like:
>
> plot!(...)  # Add another plot to an existing one.
> title!(...)
> xaxis!("mylabel", :log, :flip)
> xlims!(...)
> xticks!(...)
>
> and so on...
>
> This means that in actual usage, almost every line I write needs to have an
> extra `!`. To me this means that the `!` is not adding real information and
> is just syntactic noise. I currently use PyPlot, so I use that as a point
> of comparison. In PyPlot, all commands edit the current plot unless you
> explicitly call `figure()` to create a new plot. You can also use clf() to
> clear the current plot. I think this is something that PyPlot / Matplotlib
> get right. The special syntax lines up with the less common action.
>
> I don't know if anyone agrees with me. I still think Plots.jl is a step in
> the right direction and I'll keep cheering from the stands. I just wanted
> to share my thoughts.
>
> Cheers,
> Daniel.


[julia-users] Re: Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread CrocoDuck O'Ducks
Oh cool, thanks for the tips. I'll dive in and be back!

On Friday, 8 April 2016 08:18:17 UTC+1, DNF wrote:
>
> The xcorr function calls the conv function, which again uses fft. If you 
> know the general structure and length of your signals ahead of time, you 
> can probably gain some performance by planning the ffts beforehand. I don't 
> know why it doesn't work for you, but you could have a look at conv in 
> dsp.jl.
>
> If you *really* want to speed things up, though, you might implement your 
> own xcorr.  xcorr dominates the runtime in your function, and if you know 
> an upper bound on the signal lags, you can implement xcorr with a limited 
> number of lags. By default xcorr calculates for all lags (in your case 
> that's 2*48000*60-1 ~ 6 million lags). If you know that the max lag is 1 
> second, you can save ~98% of the runtime of xcorr.
>
> A couple of other remarks:
> * There's no need to put type annotations next to the function outputs, 
> it's just visual noise
> * Use ct_idx = cld(lₛ, 2) and forget about the mod.
>


[julia-users] Re: MXNet setting workspace, Convolutional layers, kernels etc

2016-04-08 Thread Valentin Churavy
Also take a look at https://github.com/dmlc/MXNet.jl/pull/73 where I 
implemented debug_str in Julia so that you can test your network on its 
space requirements.

On Friday, 8 April 2016 15:54:06 UTC+9, Valentin Churavy wrote:
>
> What happens if you set the batch_size to 1? Also take a look at 
> https://github.com/dmlc/mxnet/tree/master/example/memcost
>
> Also workspace is per convolution and you should keep it small.  
>
> On Thursday, 7 April 2016 19:13:36 UTC+9, kleinsplash wrote:
>>
>> Hi,
>>
>> I have a memory error using Quadro K5000M which has 4GB global memory. I 
>> was wondering if there is some guide as to how to set my workspace and 
>> Convolutional layers.
>>
>> My current settings: 
>>
>> training_data =  128x128x1x800
>> batch_size = 128x128x1x8
>> workspace = 2048 (I think this can go up to 4096 because of the 
>> .deviceQuery)
>>
>> This is my net (still to be designed so just basic ):
>>
>> # first conv
>> conv1 = @mx.chain mx.Convolution(data=data, kernel=(5,5), num_filter=20, 
>> workspace=workspace)  =>
>>   mx.Activation(act_type=:relu) =>
>>   mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
>> # second conv
>> conv2 = @mx.chain mx.Convolution(data=conv1, kernel=(5,5), num_filter=50, 
>> workspace=workspace) =>
>>   mx.Activation(act_type=:relu) =>
>>   mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
>>   # first fully-connected
>> fc1   = @mx.chain mx.Flatten(data=conv2) =>
>>   mx.FullyConnected(num_hidden=500) =>
>>   mx.Activation(act_type=:relu)
>> # second fully-connected
>> fc2   = mx.FullyConnected(data=fc1, num_hidden=10) 
>> # third fully-connected
>> fc3   = mx.FullyConnected(data=fc2, num_hidden=2) 
>> # softmax loss
>> net = mx.SoftmaxOutput(data=fc3, name=:softmax)
>>
>> So far if I reduce my image to 28x28 it all works - but I need to up the 
>> resolution to pick out features. Anyone have any ideas on thumb sucking 
>> initial values for at least getting past memory issues to the design of the 
>> net? 
>>
>>
>> Thx
>>
>>

Re: [julia-users] How to initialize a Matrix{T}?

2016-04-08 Thread Mauro
On Thu, 2016-04-07 at 21:27, Lucas de Almeida Carotta  
wrote:
> How do I make this code work?
>
> function read_matrix( data::DataType=Any, spacing=" " )
> local line::Vector{ data } = Array( data, 1 )
> local matrix::Matix{ data } = Array( Array{ data, 1 }, 1 )
>
> while !eof( STDIN )
> line = [ parse( data, i ) for i in split( chomp( readline( STDIN )
> ), spacing ) ]
> push!( matrix, line )
> end
>
> return matrix
> end

Note that the `local` is not necessary (it does nothing) as assignment
creates a local variable.  Also, the type declarations are unnecessary.
Assuming it really is a matrix, i.e. all lines have the same length,
something like this should work (untested):

function read_matrix( T::DataType=Any, spacing=" " )
    out = T[]  # empty flat element vector, filled row by row

    nlines = 0
    while !eof( STDIN )
        # note you can only push!/append! to 1-D arrays
        append!( out, [ parse( T, i ) for i in split( chomp( readline( STDIN ) ), spacing ) ] )
        nlines += 1
    end

    # out holds the rows back to back, but Julia arrays are column-major,
    # so reshape to (ncols, nlines) and then transpose
    return permutedims( reshape( out, div( length( out ), nlines ), nlines ), (2, 1) )
end
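
A hypothetical usage sketch (the file name readmat.jl and the piped input are
my own example, not from the thread):

# run as:  printf "1 2 3\n4 5 6\n" | julia readmat.jl
# with readmat.jl containing the function above plus:
M = read_matrix(Int)   # gives the 2x3 matrix [1 2 3; 4 5 6]
println(M)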


[julia-users] Opinion on Plots.jl -- exclamation marks

2016-04-08 Thread Daniel Carrera
Hello,

I was looking through the API for Plots.jl

http://plots.readthedocs.org/en/latest/#api


Maybe I'm the only one, but I think all those exclamation marks are a bit 
extraneous and feel like syntactic noise. I have been following Plots.jl 
because I'm interested in plotting. My use of Julia comes down to either 
making plots, or post-processing data so I can make plots. I get the idea 
from Plots.jl that functions that end in an exclamation mark are supposed 
to modify an existing plot. So you get things like:

plot!(...)  # Add another plot to an existing one.
title!(...)
xaxis!("mylabel", :log, :flip)
xlims!(...)
xticks!(...)

and so on...

This means that in actual usage, almost every line I write needs to have an 
extra `!`. To me this means that the `!` is not adding real information and 
is just syntactic noise. I currently use PyPlot, so I use that as a point 
of comparison. In PyPlot, all commands edit the current plot unless you 
explicitly call `figure()` to create a new plot. You can also use clf() to 
clear the current plot. I think this is something that PyPlot / Matplotlib 
get right. The special syntax lines up with the less common action.

I don't know if anyone agrees with me. I still think Plots.jl is a step in 
the right direction and I'll keep cheering from the stands. I just wanted 
to share my thoughts.

Cheers,
Daniel.


Re: [julia-users] state of Winston

2016-04-08 Thread harven


On Thursday, 7 April 2016 21:37:47 UTC+2, Mauro wrote:
>
> https://github.com/tbreloff/Plots.jl wraps many of the plotting packages 
> and thus allows to use all of them with a single syntax.  Maybe you 
> should give that a spin? 
>
>
I will have a look. Thanks. 


[julia-users] Re: Cross-correlation: rfft() VS fft() VS xcorr() performances

2016-04-08 Thread DNF
The xcorr function calls the conv function, which again uses fft. If you 
know the general structure and length of your signals ahead of time, you 
can probably gain some performance by planning the ffts beforehand. I don't 
know why it doesn't work for you, but you could have a look at conv in 
dsp.jl.

If you *really* want to speed things up, though, you might implement your 
own xcorr.  xcorr dominates the runtime in your function, and if you know 
an upper bound on the signal lags, you can implement xcorr with a limited 
number of lags. By default xcorr calculates for all lags (in your case 
that's 2*48000*60-1 ~ 6 million lags). If you know that the max lag is 1 
second, you can save ~98% of the runtime of xcorr.

A couple of other remarks:
* There's no need to put type annotations next to the function outputs, 
it's just visual noise
* Use ct_idx = cld(lₛ, 2) and forget about the mod.
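
To make the plan-the-ffts idea concrete, a minimal sketch (xcorr_planned, the
padded length and the lag bookkeeping are my own choices, not something from
DSP.jl; check the output ordering against xcorr before relying on it):

n = nextpow2(2*48000*60 - 1)        # padded length for a full linear correlation
P = plan_rfft(zeros(n))             # plan once, reuse for every pair of signals

function xcorr_planned(u, v, P, n)
    upad = zeros(n); upad[1:length(u)] = u
    vpad = zeros(n); vpad[1:length(v)] = v
    # circular cross-correlation via the planned rfft; with this much zero
    # padding it equals the linear cross-correlation, but the lags come out
    # in wrapped order (lag 0 first, negative lags at the end)
    return irfft(conj(P * upad) .* (P * vpad), n)
end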


[julia-users] Re: MXNet setting workspace, Convolutional layers, kernels etc

2016-04-08 Thread Valentin Churavy
What happens if you set the batch_size to 1? Also take a look 
at https://github.com/dmlc/mxnet/tree/master/example/memcost

Also workspace is per convolution and you should keep it small.  
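
For a rough sense of scale (assuming, as I read the MXNet Convolution docs,
that workspace is given in MB):

workspace_mb = 2048          # requested per convolution layer in the posted net
n_conv       = 2             # conv1 and conv2 both pass workspace=workspace
workspace_mb * n_conv        # = 4096 MB of scratch space alone, the whole 4 GB
                             # K5000M, before activations, weights and gradients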

On Thursday, 7 April 2016 19:13:36 UTC+9, kleinsplash wrote:
>
> Hi,
>
> I have a memory error using Quadro K5000M which has 4GB global memory. I 
> was wondering if there is some guide as to how to set my workspace and 
> Convolutional layers.
>
> My current settings: 
>
> training_data =  128x128x1x800
> batch_size = 128x128x1x8
> workspace = 2048 (I think this can go up to 4096 because of the 
> .deviceQuery)
>
> This is my net (still to be designed so just basic ):
>
> # first conv
> conv1 = @mx.chain mx.Convolution(data=data, kernel=(5,5), num_filter=20, 
> workspace=workspace)  =>
>   mx.Activation(act_type=:relu) =>
>   mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
> # second conv
> conv2 = @mx.chain mx.Convolution(data=conv1, kernel=(5,5), num_filter=50, 
> workspace=workspace) =>
>   mx.Activation(act_type=:relu) =>
>   mx.Pooling(pool_type=:max, kernel=(2,2), stride=(2,2))
>   # first fully-connected
> fc1   = @mx.chain mx.Flatten(data=conv2) =>
>   mx.FullyConnected(num_hidden=500) =>
>   mx.Activation(act_type=:relu)
> # second fully-connected
> fc2   = mx.FullyConnected(data=fc1, num_hidden=10) 
> # third fully-connected
> fc3   = mx.FullyConnected(data=fc2, num_hidden=2) 
> # softmax loss
> net = mx.SoftmaxOutput(data=fc3, name=:softmax)
>
> So far if I reduce my image to 28x28 it all works - but I need to up the 
> resolution to pick out features. Anyone have any ideas on thumb sucking 
> initial values for at least getting past memory issues to the design of the 
> net? 
>
>
> Thx
>
>

Re: [julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-08 Thread Po Choi
Thanks.

One point I don't understand is the type alias `Matrix{T}` with parametric 
type `T`.

julia> foo{T}(A::Vector{Matrix{T}}) = 1

foo (generic function with 1 method)


julia> methods(foo)

# 1 method for generic function "foo":

foo{T}(A::Array{Array{T,2},1}) at none:1


julia> bar(A::Vector{Matrix}) = 1

bar (generic function with 1 method)


julia> methods(bar)

# 1 method for generic function "bar":

bar(A::Array{Array{T,2},1}) at none:1


julia> baz(A::Vector{Matrix{T}}) = 1

ERROR: UndefVarError: T not defined


I can define `foo{T}(A::Array{Array{T,2},1})`.
I can define `bar(A::Array{Array{T,2},1})` by writing the type alias `Matrix` 
without `{T}`.
But I cannot define `baz` with `Matrix{T}` unless I also declare `{T}` on the function.
Is that expected?


On Wednesday, April 6, 2016 at 6:24:34 PM UTC-7, Andy Ferris wrote:
>
> T is meant to be a parametric type, defined in this case in the definition 
> of Matrix (as a type alias) and also Array (as a type parameter with the 
> same name). In typeof(AAA) it's pulling T out of that definition of the 
> typealias. You could have written AAA = Matrix{Float64}[randn(3,3) for k in 
> 1:4] to define T.
>
> Further, types in Julia are not covariant which means even if A <: B, we 
> do NOT have Type{A} <: Type{B}. In your case that reads Matrix{Float64} <: 
> Matrix, but not Vector{Matrix{Float64}} <: Vector{Matrix}.
>
> To be generic, your function definitions could take a type parameter, like:
>
> hello{T}(A::Vector{Matrix{T}}) = 2
>
> Here we have introduced a new "parameteric type" variable "T". It could 
> have been named anything. It works since there is some T (==Float64) where 
> it will find a match. 
>
> Or, to be specific about the input type, just define:
>
> hello(A::Vector{Matrix{Float64}}) = 2
>
> You don't have to be afraid of using Matrix and Vector, but you do have to 
> think about how that might interact with the non-covariant type system. In 
> cases like these AFAIK the only way to make a generic function is to 
> introduce type parameters (using either Array{T,2} or Matrix{T} should be 
> fully equivalent).
>
> Does that help?
> Andy
>
>
> On Thursday, April 7, 2016 at 9:58:50 AM UTC+10, Po Choi wrote:
>>
>>
>> Does it make sense to declare a variable with the type `Matrix`?
>>
>> julia> methods(hello)
>> # 2 methods for generic function "hello":
>> hello(A::Array{T,2}) at none:1
>> hello(A::Array{Array{T,2},1}) at none:1
>>
>> julia> AA = [randn(3,3) for k in 1:4];
>>
>> julia> AAA = Matrix[randn(3,3) for k in 1:4];
>>
>> julia> hello(AA)
>> ERROR: MethodError: `hello` has no method matching 
>> hello(::Array{Array{Float64,2},1})
>>
>> julia> hello(AAA)
>> 2
>>
>>
>> julia> typeof(AA)
>> Array{Array{Float64,2},1}
>>
>> julia> typeof(AAA)
>> Array{Array{T,2},1}
>>
>> julia> Array{T,2}
>> ERROR: UndefVarError: T not defined
>>
>>
>> I am a little bit confused about the `T`. Why can `T` appear inside `AAA` 
>> without being declared?
>>
>>
>> On Wednesday, April 6, 2016 at 1:44:38 PM UTC-7, Yichao Yu wrote:
>>>
>>> On Wed, Apr 6, 2016 at 4:23 PM, Po Choi  wrote: 
>>> > 
>>> > hello(A::Matrix) = 1 
>>> > hello(A::Vector{Matrix}) = 2 
>>>
>>>
>>> http://julia.readthedocs.org/en/latest/manual/types/#parametric-composite-types
>>>  
>>>
>>> Vector{Matrix{Float64}} is not a subtype of Vector{Matrix} 
>>>
>>> > A = randn(3,3); 
>>> > AA = [randn(3,3) for k in 1:4]; 
>>> > hello(A) 
>>> > hello(AA) 
>>> > 
>>> > The output has method error. 
>>> > julia> hello(A) 
>>> > 1 
>>> > 
>>> > julia> hello(AA) 
>>> > ERROR: MethodError: `hello` has no method matching 
>>> > hello(::Array{Array{Float64,2},1}) 
>>> > 
>>> > 
>>> > If I write down the types explicitly, 
>>> > hi(A::Array{Float64,2}) = 1 
>>> > hi(A::Array{Array{Float64,2},1}) = 2 
>>> > A = randn(3,3); 
>>> > AA = [randn(3,3) for k in 1:4]; 
>>> > hi(A) 
>>> > hi(AA) 
>>> > The output is what I expect. 
>>> > julia> hi(A) 
>>> > 1 
>>> > 
>>> > julia> hi(AA) 
>>> > 2 
>>> > 
>>> > Am I using Vector and Matrix in a wrong way? 
>>>
>>

[julia-users] Fast multiprecision integers?

2016-04-08 Thread Laurent Bartholdi
Dear all,
How hard would it be to code arbitrary-precision integers in Julia with at 
worst 2x performance loss over native Ints?

This is what I have in mind: have a bitstype structure, say 64 bits, which 
is either the address of a BigInt (if even), or an Int64 (if odd). Addition 
would be something like:

function +(a::FastInt,b::FastInt)
    if a&1 == 1 && b&1 == 1          # both operands are tagged Int64s
        (result,obit) = @llvm.sadd.with.overflow.i64(a, b&~1)
        obit ? reinterpret(FastInt, BigInt(a>>1) + (b>>1)) : result
    elseif a&1 == 1                  # a is a tagged Int64, b points to a BigInt
        reinterpret(FastInt, (a>>1) + reinterpret(BigInt,b))
    elseif b&1 == 1                  # a points to a BigInt, b is a tagged Int64
        reinterpret(FastInt, reinterpret(BigInt,a) + (b>>1))
    else                             # both point to BigInts
        reinterpret(FastInt, reinterpret(BigInt,a) + reinterpret(BigInt,b))
    end
end

(code not meant to be run, just a skeleton)

This would be very useful in the development of computer algebra systems, 
in which BigInts are too slow and eat up too much memory, but one is ready 
to pay a small price to guard against arithmetic overflows.

If this is too complicated, then perhaps at least a type of integers that 
would raise an error in case of over/underflows? Those could be caught in 
throw/catch enclosures, so the user could restart the operation with 
BigInts.
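
For that simpler variant, a minimal sketch with Base.checked_add, which
already throws an OverflowError on overflow (safeadd is only an illustrative
name, not an existing function):

function safeadd(a::Int64, b::Int64)
    try
        return Base.checked_add(a, b)   # fast path: stays in machine integers
    catch err
        isa(err, OverflowError) || rethrow(err)
        return big(a) + big(b)          # fall back to BigInt on overflow
    end
end

safeadd(2^62, 2^62)   # == big(2)^63 instead of silently wrapping around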

TIA, Laurent