[julia-users] Zeal docset

2014-11-04 Thread Yakir Gagnon
Has anyone managed to get/generate a docset for Zeal? 


[julia-users] Animation with Gadfly and IJulia?

2014-11-04 Thread Sheehan Olver


I'm wondering whether there's an example of doing animation directly in 
IJulia with Gadfly.  By animation I mean plotting a sequence of 
functions, let's say where each frame is calculated from the previous frame 
and should be plotted as soon as it is calculated.

It's clearly possible, since it's possible with Interact.jl: the code below 
does work, but it is not elegant and seems to run into problems if the 
calculation is slow.  There is also the extra, unneeded slider bar for k.  I 
can't seem to figure out how to get rid of the @manipulate.

@manipulate for k=1:1, t_dt=timestamp(fps(30.))
# calculate plot
end
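
For what it's worth, here is a rough alternative sketch without @manipulate: redraw each frame in place using IJulia's output clearing. This assumes IJulia exposes a clear_output function (it forwards IPython's clear-output message), and the frame update below is a hypothetical stand-in for the real per-frame calculation:

```julia
using Gadfly, IJulia

frame = [0.0]                        # hypothetical initial state
for i in 1:100
    frame = [frame; sin(i / 10)]     # stand-in: compute next frame from previous
    IJulia.clear_output(true)        # assumption: replaces the previous output
    display(plot(x=1:length(frame), y=frame, Geom.line))
    sleep(1/30)                      # cap the loop at roughly 30 fps
end
```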




[julia-users] Gadfly: Type command before close browser in REPL

2014-11-04 Thread xiongjieyi
I like to use Gadfly in the REPL rather than IJulia (because I cannot Ctrl-C 
to break a running script in IJulia). The browser (Firefox) with the figure 
pops up through X11 on my screen. However, I cannot continue entering 
commands in the REPL until I close the browser. Is there any way to return 
to a command-ready state immediately after the browser pops up?
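
One possible workaround, just a sketch and not something from the thread: render the figure to a file and launch the viewer as a background process with spawn, so the REPL is not blocked (the filename and viewer choice are arbitrary):

```julia
using Gadfly

p = plot(x=1:10, y=rand(10))
draw(SVG("figure.svg", 6inch, 4inch), p)  # write the plot to a file
spawn(`firefox figure.svg`)               # open the viewer without blocking the REPL
```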


[julia-users] some problems updating to latest git

2014-11-04 Thread Neal Becker
After playing with Julia a bit some weeks ago, I attempted to update to the 
latest git, but ran into some problems:

1. On startup:
julia> ERROR: String not defined

2. Now let's update:

julia> Pkg.update()
INFO: Updating METADATA...
INFO: Updating cache of SHA...
INFO: Updating cache of ZMQ...
INFO: Updating cache of Nettle...
INFO: Updating cache of PyCall...
INFO: Updating cache of JSON...
INFO: Updating cache of BinDeps...
INFO: Updating cache of PyPlot...
INFO: Updating cache of URIParser...
INFO: Updating cache of FixedPointNumbers...
INFO: Updating cache of IJulia...
INFO: Updating cache of Color...
INFO: Updating cache of ArrayViews...
INFO: Updating Cxx...
INFO: Computing changes...
INFO: Cloning cache of Compat from git://github.com/JuliaLang/Compat.jl.git
INFO: Cloning cache of LaTeXStrings from 
git://github.com/stevengj/LaTeXStrings.jl.git
INFO: Upgrading ArrayViews: v0.4.6 => v0.4.8
INFO: Upgrading BinDeps: v0.3.3 => v0.3.6
INFO: Upgrading Color: v0.3.4 => v0.3.10
INFO: Installing Compat v0.1.0
INFO: Upgrading FixedPointNumbers: v0.0.2 => v0.0.4
INFO: Upgrading IJulia: v0.1.14 => v0.1.15
INFO: Upgrading JSON: v0.3.7 => v0.3.9
INFO: Installing LaTeXStrings v0.1.0
INFO: Upgrading Nettle: v0.1.4 => v0.1.6
INFO: Upgrading PyCall: v0.4.8 => v0.4.10
INFO: Upgrading PyPlot: v1.3.0 => v1.4.0
INFO: Upgrading SHA: v0.0.2 => v0.0.3
INFO: Upgrading URIParser: v0.0.2 => v0.0.3
INFO: Upgrading ZMQ: v0.1.13 => v0.1.14
INFO: Building Nettle

WARNING: deprecated syntax "{}" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:103.
Use "[]" instead.

WARNING: deprecated syntax "{}" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:104.
Use "[]" instead.

WARNING: deprecated syntax "(String=>String)[]" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:146.
Use "Dict{String,String}()" instead.

WARNING: deprecated syntax "(String=>String)[]" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:147.
Use "Dict{String,String}()" instead.

WARNING: deprecated syntax "(String=>String)[]" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:148.
Use "Dict{String,String}()" instead.

WARNING: deprecated syntax "(String=>String)[]" at 
/home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:149.
Use "Dict{String,String}()" instead.
===[ ERROR: Nettle ]

String not defined
while loading /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl, in expression 
starting on line 45
while loading /home/nbecker/.julia/v0.4/Nettle/deps/build.jl, in expression 
starting on line 1


INFO: Building ZMQ
=[ ERROR: ZMQ ]=

@setup not defined
while loading /home/nbecker/.julia/v0.4/ZMQ/deps/build.jl, in expression 
starting on line 4


INFO: Building IJulia
Found IPython version 2.1.0 ... ok.
Creating julia profile in IPython...
===[ ERROR: IJulia ]

String not defined
while loading /home/nbecker/.julia/v0.4/IJulia/deps/build.jl, in expression 
starting on line 28



[ BUILD ERRORS ]

WARNING: IJulia, Nettle and ZMQ had build errors.

 - packages with build errors remain installed in /home/nbecker/.julia/v0.4
 - build a package and all its dependencies with `Pkg.build(pkg)`
 - build a single package by running its `deps/build.jl` script



julia> 

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [julia-users] some problems updating to latest git

2014-11-04 Thread Isaiah Norton
Master has some major breaking changes right now, and it will take some time
for things to settle. People should use 'git checkout release-0.3' for the
stable branch unless they are working on/with master.
On Nov 4, 2014 8:10 AM, "Neal Becker"  wrote:

> [original message with full build log quoted in full; snipped, see above]


[julia-users] Re: some problems updating to latest git

2014-11-04 Thread Neal Becker
I just updated to release-0.3 and ran make clean, but now I don't get
far at all:

make
 /bin/sh ./config.status
config.status: creating libuv.pc
config.status: creating Makefile
config.status: executing depfiles commands
config.status: executing libtool commands
  GEN  include/uv-dtrace.h
/usr/bin/dtrace invalid option -xnolibs


Isaiah Norton wrote:

> Master has some major breaking changes right now and it will take some time
> to settle things out. People should use 'git checkout release-0.3' for the
> stable branch unless they are working on/with master.
> On Nov 4, 2014 8:10 AM, "Neal Becker"
>  wrote:
> 
>> [earlier messages with full build log quoted in full; snipped, see above]
-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [julia-users] Re: Julia looking for old gfortran after upgrade

2014-11-04 Thread Sean Garborg
Thanks, you two. 

On Monday, November 3, 2014 1:42:50 PM UTC-7, Elliot Saba wrote:
>
> Specifically, you need to clean out arpack, suite-sparse, and openblas.  
> These guys use gfortran, which embeds absolute paths to libgfortran.  You 
> need to do this every time the gcc version changes.
> -E
>
> On Mon, Nov 3, 2014 at 11:23 AM, James Kyle  > wrote:
>
>> Sometimes you have to recompile linked deps when the lib path changes in 
>> an upgrade. For example:
>>
>> % brew reinstall qrupdate
>>
>> the stack error should provide hints on which one.
>>
>> On Saturday, November 1, 2014 5:06:12 PM UTC-7, Sean Garborg wrote:
>>>
>>> I upgraded OSX from Mavericks to Yosemite and ran 'brew upgrade' which 
>>> brought a new version of gcc and friends. I'm not sure which action was to 
>>> blame, but Julia kept looking for '/usr/local/lib/gcc/x86_64-
>>> apple-darwin13.x.x/4.8.x/libgfortran.3.dylib' (old versions of Darwin 
>>> and gcc). 'make cleanall' didn't help.
>>>
>>> I'm fine after cloning Julia anew, but that's slow. For future 
>>> reference, is there a quicker way?
>>>
>>
>
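
A sketch of the cleanup Elliot describes, run from the Julia source tree. The distclean target names are an assumption based on the deps/Makefile naming pattern, so check your checkout for the exact ones:

```shell
cd julia
# Force the Fortran-linked deps to rebuild after a gcc/gfortran upgrade:
make -C deps distclean-arpack distclean-suitesparse distclean-openblas
make
```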

[julia-users] Downgrade to a previous version of Julia

2014-11-04 Thread Charles Novaes de Santana
Dear list,

I am trying to downgrade my julia installation to the version
0.4.0-dev+734. Currently I have the nightly Julia Version 0.4.0-dev+1408.

I tried the following command:

git checkout 0.4.0-dev+734

But I got the following error message: "error: pathspec '0.4.0-dev+734' did
not match any file(s) known to git."

I could successfully downgrade to version 0.3 by running "git checkout
release-0.3". Why is it different for a previous version of 0.4.0-dev?

Sorry if this question is more about Git/GitHub than about Julia. And
thanks in advance for any help.

Best,

Charles

-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles


Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Andreas Noack
In your case, I think the right solution is to invert it by
inv(cholfact(pd)). By calling cholfact, you are telling Julia that your
matrix is positive definite and Julia will exploit that to give you a fast
inverse which is also positive definite.

inv(factorize(Hermitian(pd))) is slow because it uses a factorization that
only exploits symmetry (Bunch-Kaufman), but not positive definiteness
(Cholesky). However, the Bunch-Kaufman factorization preserves symmetry, and
hence the result is positive definite. In contrast, when doing inv(pd),
Julia has no idea that pd is positive definite or even symmetric, so it
defaults to the LU factorization, which won't preserve symmetry; therefore
isposdef will return false.

Hope it made sense. I'll probably have to write a section in the
documentation about this soon.
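
Concretely, a small sketch of the suggested approach (the matrix here is built just for illustration):

```julia
A = randn(500, 500)
pd = A'A + 500I                      # symmetric positive definite by construction

S1 = inv(cholfact(pd))               # fast: exploits positive definiteness
S2 = inv(factorize(Hermitian(pd)))   # slower: Bunch-Kaufman, symmetry only

isposdef(S1)                         # definiteness is preserved
```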

2014-11-03 18:53 GMT-05:00 David van Leeuwen :

> Hello,
>
> I am struggling with the fact that covariance matrices computed from a
> precision matrix aren't positive definite, according to `isposdef()` (they
> should be according to the maths).
>
> It looks like the culprit is `inv(pd::Matrix)` which does not always
> result in a positive definite matrix if `pd` is one.  This is probably
> because `inv()` is agnostic of the fact that the argument is positive
> definite, and numerical details.
>
> Now I've tried to understand the support for special matrices, and I
> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do this.
> Indeed the resulting matrix is positive definite.  However, this
> computation takes a lot longer than inv(), about 5--6 times as slow.  I
> would have expected that the extra symmetry would lead to a more efficient
> matrix inversion.
>
> Is there something I'm doing wrong?
>
> Cheers,
>
> ---david
>


[julia-users] Trigonometric functions at infinity

2014-11-04 Thread isahin
I have recently become aware of Julia and have been impressed with its ease 
of use and speed.  While I was converting my previous code to Julia, I 
noticed that trigonometric functions at infinity raise a DomainError and 
abort the program.  Try sin(Inf), sin(-Inf), cos(Inf), tan(Inf), etc.  I 
checked the behavior of NumPy, and it returns a NaN along with an 
invalid-value warning.  I remember Matlab yields a NaN too.  But neither of 
them aborts the program.

Returning a NaN instead of aborting might be useful when subsequent 
computations don't depend solely on the NaN result.  For example, consider 
finding the smallest element of x = cos([1.0, 2.0, Inf]), which would give 
the number I am interested in if cos(Inf) returned NaN.  Here cos(2.0) < NaN 
would be false, but findmin nevertheless finds the correct answer 2.0 (as 
does NumPy).  Note that Inf is usually the result of an intermediate step of 
an algorithm.

If subsequent computations involve NaN and raise an error, then checking 
that the elements of x are not NaN is necessary.  But since this occurs 
rarely, avoiding the check might be useful for speed.

What would be your thoughts?  Thanks.
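
Until the behavior changes (if it ever does), a small guard wrapper gives the NaN-propagating behavior described above; safecos is just an illustrative name, not an official API:

```julia
# Return NaN at non-finite arguments instead of throwing a DomainError.
safecos(x) = isfinite(x) ? cos(x) : NaN

xs = [1.0, 2.0, Inf]
findmin(map(safecos, xs))   # no DomainError; see the findmin observation above
```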



[julia-users] Re: Downgrade to a previous version of Julia

2014-11-04 Thread Ivar Nesje
You need to do 

git checkout c1fd3ab4edefcd7194


On Tuesday, 4 November 2014 at 16:24:54 UTC+1, Charles Santana wrote:
>
> Dear list,
>
> I am trying to downgrade my julia installation to the version 
> 0.4.0-dev+734. Currently I have the nightly Julia Version 0.4.0-dev+1408. 
>
> I tried the following command:
>
> git checkout 0.4.0-dev+734
>
> But I got the following error message: "error: pathspec '0.4.0-dev+734' 
> did not match any file(s) known to git."
>
> I successfully could do a downgrade to version 0.3 by running "git 
> checkout release-0.3". Why is it different for a previous version of 
> 0.4.0-dev?
>
> Sorry if this is a question regarding to Github more than Julia scope. And 
> thanks in advance for any help.
>
> Best,
>
> Charles
>
> -- 
> Um axé! :)
>
> --
> Charles Novaes de Santana, PhD
> http://www.imedea.uib-csic.es/~charles
>  


Re: [julia-users] Trigonometric functions at infinity

2014-11-04 Thread John Myles White
My personal preference is for code to never raise warnings if you might ever 
use it in a system that has more than 10 lines of code. So I'm personally a 
believer in either returning NaN without a warning (which seems a little risky) 
or maintaining the current behavior, which seems wisest to me.

 -- John

On Nov 4, 2014, at 9:22 AM, isa...@gmail.com wrote:

> I have recently become aware of Julia and have been impressed with its ease 
> of use and speed.  While I was converting my previous code to Julia, I 
> noticed that trigonometric functions at infinity yield DomainError and abort 
> the program. Try sin(Inf), sin(-Inf), cos(Inf), tan(Inf), etc.  I checked the 
> behavior of Numpy and it returns a NaN and a warning of invalid value to the 
> function. I remember Matlab was yielding a NaN too. But they both wouldn't 
> abort the program.
> 
> Returning a NaN instead of aborting the program might be useful when the 
> following computations don't depend on only this result of NaN.  For example, 
> consider finding the smallest element of x =cos( [1.0, 2.0, Inf]) which would 
> have given the number I am interested in if cos(Inf) gives a NaN.  Here I 
> cos(2.0) < NaN would be false nevertheless findmin cleverly finds the correct 
> answer 2.0 ( so does Numpy). Note that Inf is usually a result of an 
> intermediate step of an algorithm.
> 
> If the following computations involve NaN and yield compilation error than 
> checking elements of x being not NaN is necessary. But since it does occur 
> rarely, avoiding this check might be useful for speed.
> 
> What would be your thoughts?  Thanks.
> 



[julia-users] Re: Downgrade to a previous version of Julia

2014-11-04 Thread Ivar Nesje
Git works with a tree of commits; you need to use the SHA of the commit 
rather than the version number. `release-0.3` and `v0.3.X` work because we 
have a branch named `release-0.3` and tags named `v0.3.0`, `v0.3.1` and 
`v0.3.2` that point to the correct commits.

There are numerous Git tutorials online, and if you want to be involved in 
any collaborative coding I'd strongly recommend spending some time learning 
to be friends with git rather than trying to fight it. Your time will be 
well spent.
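
In other words, any branch name, tag name, or (abbreviated) commit SHA is a valid checkout target, while a Julia version string like 0.4.0-dev+734 is not a ref that git knows about. A generic sketch, to be run inside a clone of the Julia repository (the SHA is the one Ivar gave above):

```shell
cd julia                            # an existing clone of the Julia repository
git checkout release-0.3            # a branch
git checkout v0.3.2                 # a tag
git checkout c1fd3ab4edefcd7194     # an abbreviated commit SHA
```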

On Tuesday, 4 November 2014 at 18:57:23 UTC+1, Ivar Nesje wrote:
>
> You need to do 
>
> git checkout c1fd3ab4edefcd7194
>
>
> On Tuesday, 4 November 2014 at 16:24:54 UTC+1, Charles Santana wrote:
>>
>> Dear list,
>>
>> I am trying to downgrade my julia installation to the version 
>> 0.4.0-dev+734. Currently I have the nightly Julia Version 0.4.0-dev+1408. 
>>
>> I tried the following command:
>>
>> git checkout 0.4.0-dev+734
>>
>> But I got the following error message: "error: pathspec '0.4.0-dev+734' 
>> did not match any file(s) known to git."
>>
>> I successfully could do a downgrade to version 0.3 by running "git 
>> checkout release-0.3". Why is it different for a previous version of 
>> 0.4.0-dev?
>>
>> Sorry if this is a question regarding to Github more than Julia scope. 
>> And thanks in advance for any help.
>>
>> Best,
>>
>> Charles
>>
>> -- 
>> Um axé! :)
>>
>> --
>> Charles Novaes de Santana, PhD
>> http://www.imedea.uib-csic.es/~charles
>>  
>

Re: [julia-users] type confusions in list comprehensions (and how to work around it?)

2014-11-04 Thread Evan Pu
It does indeed happen inside a function if you pass the function as an 
argument to it (rather than referring to f implicitly in the function body, 
you explicitly pass in f as an extra argument);
see below:

julia> f(x) = x + 1
f (generic function with 1 method)

julia> g(f, xs) = [f(x) for x in xs]
g (generic function with 1 method)

julia> xs = [1,2,3]
3-element Array{Int64,1}:
 1
 2
 3

julia> g(f,xs)
3-element Array{Any,1}:
 2
 3
 4
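
As with the typed-comprehension workaround mentioned earlier in the thread, annotating the element type restores a concrete array even when the function arrives as an argument:

```julia
f(x) = x + 1
g2(f, xs) = Float64[f(x) for x in xs]   # explicit element type on the comprehension

g2(f, [1, 2, 3])   # 3-element Array{Float64,1}: [2.0, 3.0, 4.0]
```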


On Tuesday, November 4, 2014 2:22:24 AM UTC-5, Jutho wrote:
>
> This only happens in global scope, not inside a function? If you define
> f(list) = return [g(x) for x in list]
>
> then f(xs) will return an Array{Float64,1}. 
>
> Op dinsdag 4 november 2014 03:23:36 UTC+1 schreef K leo:
>>
>> I found that I often have to force this conversion, which is not too 
>> difficult.  The question why comprehension has to build with type Any? 
>>
>>
>> > On 2014-11-04 07:06, Miguel Bazdresch wrote: 
>> > > How could I force the type of gxs1 to be of an array of Float64? 
>> > 
>> > The simplest way is: 
>> > 
>> > gxs1 = Float64[g(x) for x in xs] 
>> > 
>> > -- mb 
>> > 
>> > On Mon, Nov 3, 2014 at 6:01 PM, Evan Pu > > > wrote: 
>> > 
>> > Consider the following interaction: 
>> > 
>> > julia> g(x) = 1 / (1 + x) 
>> > g (generic function with 1 method) 
>> > 
>> > julia> typeof(g(1.0)) 
>> > Float64 
>> > 
>> > julia> xs = [1.0, 2.0, 3.0, 4.0] 
>> > 4-element Array{Float64,1}: 
>> >  1.0 
>> >  2.0 
>> >  3.0 
>> >  4.0 
>> > 
>> > julia> gxs1 = [g(x) for x in xs] 
>> > 4-element Array{Any,1}: 
>> >  0.5 
>> >  0.33 
>> >  0.25 
>> >  0.2 
>> > 
>> > Why isn't gxs1 type of Array{Float64,1}? 
>> > How could I force the type of gxs1 to be of an array of Float64? 
>> > 
>> > julia> gxs2 = [convert(Float64,g(x)) for x in xs] 
>> > 4-element Array{Any,1}: 
>> >  0.5 
>> >  0.33 
>> >  0.25 
>> >  0.2 
>> > 
>> > somehow this doesn't seem to work... 
>> > 
>> > 
>> > 
>> > 
>>
>>

[julia-users] Is there an implicit "apply" method for a type?

2014-11-04 Thread Evan Pu
Hello,
I want to create a polynomial type, parameterized by its coefficients, that 
is able to perform polynomial addition and the like.
But I would also like to use it like a function call, since a polynomial 
should behave just like a function.
Something like the following would be nice:

p = Poly([1,2,3]) # creating a polynomial p(x) = 1 + 2x + 3x^2
q = Poly([1,2]) # creating a polynomial 1 + 2x
r = p + q # using dynamic dispatch, creating the polynomial 2 + 4x + 3x^2

r(1) # this should give 9, by evaluating polynomial r at x = 1

Is there something in Julia that would allow me to create what I have in 
mind?
thanks!!


Re: [julia-users] Trigonometric functions at infinity

2014-11-04 Thread Milan Bouchet-Valat
On Tuesday, 4 November 2014 at 09:22 -0800, isa...@gmail.com wrote:
> I have recently become aware of Julia and have been impressed with its
> ease of use and speed.  While I was converting my previous code to
> Julia, I noticed that trigonometric functions at infinity yield
> DomainError and abort the program. Try sin(Inf), sin(-Inf), cos(Inf),
> tan(Inf), etc.  I checked the behavior of Numpy and it returns a NaN
> and a warning of invalid value to the function. I remember Matlab was
> yielding a NaN too. But they both wouldn't abort the program.
> 
> Returning a NaN instead of aborting the program might be useful when
> the following computations don't depend on only this result of NaN.
> For example, consider finding the smallest element of x =cos( [1.0,
> 2.0, Inf]) which would have given the number I am interested in if
> cos(Inf) gives a NaN.  Here I cos(2.0) < NaN would be false
> nevertheless findmin cleverly finds the correct answer 2.0 ( so does
> Numpy). Note that Inf is usually a result of an intermediate step of
> an algorithm.
You'll probably be interested in this discussion:
https://github.com/JuliaLang/julia/issues/7866

> If the following computations involve NaN and yield compilation error
> than checking elements of x being not NaN is necessary. But since it
> does occur rarely, avoiding this check might be useful for speed.
> 
> What would be your thoughts?  Thanks.
> 
> 



[julia-users] Re: Is there an implicit "apply" method for a type?

2014-11-04 Thread Ivar Nesje
It will be in Julia 0.4, and you can already overload `call` if you use the 
nightly releases.

In Julia 0.3 you will have to define a function and write call(r,1) in 
order for it to work.

Regards
Ivar
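
A rough sketch of such a type under those constraints; evaluate is a hypothetical helper name, and the commented-out Base.call line applies only on 0.4 nightlies:

```julia
import Base: +

immutable Poly
    coeffs::Vector{Float64}          # coeffs[i] multiplies x^(i-1)
end

# Add two polynomials by zero-padding the shorter coefficient vector.
function +(p::Poly, q::Poly)
    n = max(length(p.coeffs), length(q.coeffs))
    pad(v) = [v; zeros(n - length(v))]
    Poly(pad(p.coeffs) + pad(q.coeffs))
end

# Horner-style evaluation.  On 0.4 nightlies you could additionally define
# Base.call(p::Poly, x) = evaluate(p, x) so that r(1) works directly.
evaluate(p::Poly, x) = foldr((c, acc) -> c + x * acc, 0.0, p.coeffs)

r = Poly([1.0, 2.0, 3.0]) + Poly([1.0, 2.0])
evaluate(r, 1)   # 9.0, matching the example above
```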

On Tuesday, 4 November 2014 at 19:50:31 UTC+1, Evan Pu wrote:
>
> Hello,
> I want to create a polynomial type, parametrized by its coefficients, able 
> to perform polynomial additions and such.
> but I would also like to use it like a function call, since a polynomial 
> should be just like a function
> Something of the following would be nice:
>
> p = Poly([1,2,3]) # creating a polynomial p(x) = 1 + 2x + 3x^2
> q = Poly([1,2]) # creating a polynomial 1 + 2x
> r = p + q # using dynamic dispatch, creating the polynomial 2 + 4x + 3x^2
>
> r(1) # this should give 9, by evaluating polynomial r at x = 1
>
> Is there something in Julia that would allow me to create what I have in 
> mind?
> thanks!!
>


[julia-users] Re: Is there an implicit "apply" method for a type?

2014-11-04 Thread Evan Pu
thanks!

On Tuesday, November 4, 2014 2:00:54 PM UTC-5, Ivar Nesje wrote:
>
> It will be in Julia 0.4, and you can already overload `call` if you use 
> the nightly releases.
>
> In Julia 0.3 you will have to define a function and write call(r,1) in 
> order for it to work.
>
> Regards
> Ivar
>
> On Tuesday, 4 November 2014 at 19:50:31 UTC+1, Evan Pu wrote:
>>
>> Hello,
>> I want to create a polynomial type, parametrized by its coefficients, 
>> able to perform polynomial additions and such.
>> but I would also like to use it like a function call, since a polynomial 
>> should be just like a function
>> Something of the following would be nice:
>>
>> p = Poly([1,2,3]) # creating a polynomial p(x) = 1 + 2x + 3x^2
>> q = Poly([1,2]) # creating a polynomial 1 + 2x
>> r = p + q # using dynamic dispatch, creating the polynomial 2 + 4x + 3x^2
>>
>> r(1) # this should give 9, by evaluating polynomial r at x = 1
>>
>> Is there something in Julia that would allow me to create what I have in 
>> mind?
>> thanks!!
>>
>

[julia-users] How Julia do math operations

2014-11-04 Thread Neil Devadasan
julia> f(x::Float64, y::Float64) = 2x + y;

julia> f(10.97,23.9985)
45.9385005

The above call to the function f returns an answer that I cannot 
understand.  Can someone clarify?

Thank you.


Re: [julia-users] How Julia do math operations

2014-11-04 Thread John Myles White
Hi Neil,

Julia does math the same way that all computers do math. You're probably coming 
from another language where a lot of effort is invested in pretending that 
computers offer a closer approximation to abstract mathematics than they 
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan  wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
> 
> julia> f(10.97,23.9985)
> 45.9385005
> 
> The above method execution of function f returns an answer that I cannot 
> understand.  Can someone clarify?
> 
> Thank you.



[julia-users] Re: How Julia do math operations

2014-11-04 Thread Karel Zapfe
Roundoff error. It's good that you pointed this out. 
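
Concretely, a minimal illustration (not from the thread):

```julia
x = 2 * 10.97 + 23.9985
# 10.97 and 23.9985 have no exact binary (IEEE-754) representation, so x is
# the nearest representable Float64, not exactly 45.9385.
@printf("%.17f\n", x)   # show the full stored value
round(x, 4)             # 45.9385: round when displaying, if that is what you need
```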


[julia-users] Re: Zeal docset

2014-11-04 Thread Patrick O'Leary
ReadTheDocs used to create Dash docsets automatically as part of the 
service, but they stopped doing so at some point; there are some GitHub 
issues for RTD along those lines.

If you build the docs yourself, doc2dash 
(https://pypi.python.org/pypi/doc2dash) should work, but I haven't tried it.
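
For the build-it-yourself route, the invocation is roughly the following; the doc path is hypothetical and depends on where the Sphinx HTML output lands:

```shell
pip install doc2dash
# Build Julia's Sphinx docs first (e.g. make -C doc html), then:
doc2dash --name Julia doc/_build/html
```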

On Tuesday, November 4, 2014 2:38:45 AM UTC-6, Yakir Gagnon wrote:
>
> Anyone managed to get/generate a docset for zeal ? 
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Neil Devadasan
Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White wrote:
>
> Hi Neil, 
>
> Julie does math the same way that all computers do math. You're probably 
> coming from another language where a lot of effort is invested into 
> pretending that computers offer a closer approximation to abstract 
> mathematics than they actually do. Those systems have been lying to you. 
>
> Put another way: you just took the red pill by using Julia. 
>
>  -- John 
>
> On Nov 4, 2014, at 11:06 AM, Neil Devadasan  > wrote: 
>
> > julia> f(x::Float64, y::Float64) = 2x + y; 
> > 
> > julia> f(10.97,23.9985) 
> > 45.9385005 
> > 
> > The above method execution of function f returns an answer that I cannot 
> understand.  Can someone clarify? 
> > 
> > Thank you. 
>
>

Re: [julia-users] Re: Downgrade to a previous version of Julia

2014-11-04 Thread Charles Novaes de Santana
Great! Thank you very much for your response, Ivar! I will definitely take
some time to study Git. I have only used GitHub to organize my own projects,
so I have never gone into the details of it. Thanks for your advice!

Best,

Charles

On Tue, Nov 4, 2014 at 7:14 PM, Ivar Nesje  wrote:

> Git work with a tree of commits, you need to write down the SHA of the
> commit rather than the number. `release-0.3` and `v0.3.X` works because we
> have a branch named `release-0.3` and tags named `v0.3.0`, `v0.3.1` and
> `v0.3.2` that points to the correct commits.
>
> There are numerous Git tutorials online, and if you want to be involved in
> any collaborative coding I'd strongly recommend that you spend some time to
> learn to be friends with git, rather than trying to fight it. Your time
> will be well spent.
>
> On Tuesday, 4 November 2014 at 18:57:23 UTC+1, Ivar Nesje wrote:
>
>> You need to do
>>
>> git checkout c1fd3ab4edefcd7194
>>
>>
>> On Tuesday 4 November 2014 at 16:24:54 UTC+1, Charles Santana
>> wrote:
>>>
>>> Dear list,
>>>
>>> I am trying to downgrade my julia installation to the version
>>> 0.4.0-dev+734. Currently I have the nightly Julia Version 0.4.0-dev+1408.
>>>
>>> I tried the following command:
>>>
>>> git checkout 0.4.0-dev+734
>>>
>>> But I got the following error message: "error: pathspec '0.4.0-dev+734'
>>> did not match any file(s) known to git."
>>>
>>> I successfully could do a downgrade to version 0.3 by running "git
>>> checkout release-0.3". Why is it different for a previous version of
>>> 0.4.0-dev?
>>>
>>> Sorry if this is a question regarding to Github more than Julia scope.
>>> And thanks in advance for any help.
>>>
>>> Best,
>>>
>>> Charles
>>>
>>> --
>>> Um axé! :)
>>>
>>> --
>>> Charles Novaes de Santana, PhD
>>> http://www.imedea.uib-csic.es/~charles
>>>
>>


-- 
Um axé! :)

--
Charles Novaes de Santana, PhD
http://www.imedea.uib-csic.es/~charles


[julia-users] Re: Zeal docset

2014-11-04 Thread Yakir Gagnon
I tried installing the whole doc2dash thing 
(http://sveme.org/dash-zeal-and-julia.html), but I'm on a crunchbang system 
and it became pretty involved pretty quick, so I kind of gave up. 

OFF-TOPIC: Aside from Zeal, is there really no other **simple** way of 
having selected API documentation offline (other than a bunch of PDFs)?

On Wednesday, November 5, 2014 5:21:45 AM UTC+10, Patrick O'Leary wrote:
>
> ReadTheDocs used to create Dash docsets automatically as a part of the 
> service, but they stopped doing so at some point--there's some GitHub 
> issues for RTD along those lines.
>
> If you build the docs yourself, doc2dash (
> https://pypi.python.org/pypi/doc2dash) should work, but I haven't tried 
> it.
>
> On Tuesday, November 4, 2014 2:38:45 AM UTC-6, Yakir Gagnon wrote:
>>
>> Anyone managed to get/generate a docset for zeal ? 
>>
>

[julia-users] linreg(X,Y) not working.

2014-11-04 Thread Karel Zapfe
Hi fellas:

I am starting the Julia Tutorial at:
http://forio.com/labs/julia-studio/tutorials

There is an example for linear regression in Julia. According to the 
tutorial, linear regresion via the function

linreg(x,y) 

should just work. But it doesn't. Instead, it gives me a very cryptic error 
message:

 ** On entry to DTZRZF parameter number  7 had an illegal value

My code is very simple:

x = float([1:12])
y = [5.5; 6.3; 7.6; 8.8; 10.9; 11.79; 13.48; 15.02; 17.77; 20.81; 22.0; 
22.99]
a, b = linreg(x,y)

My Julia is 0.3. 

I have tried it with bigger matrices (the ones in the tutorial) and they give 
me the same error.

What does it mean?




Re: [julia-users] linreg(X,Y) not working.

2014-11-04 Thread Milan Bouchet-Valat
On Tuesday 4 November 2014 at 14:12 -0800, Karel Zapfe wrote:
> Hi fellas:
> 
> 
> I am starting the Julia Tutorial at:
> http://forio.com/labs/julia-studio/tutorials
> 
> 
> 
> There is an example for linear regression in Julia. According to the
> tutorial, linear regresion via the function
> 
> 
> linreg(x,y) 
> 
> 
> should just work. But it doesn't. Instead, it gives me a very very
> very criptic error message:
> 
> 
>  ** On entry to DTZRZF parameter number  7 had an illegal value
> 
> 
> 
> My code is very simple:
> 
> 
> x = float([1:12])
> y = [5.5; 6.3; 7.6; 8.8; 10.9; 11.79; 13.48; 15.02; 17.77; 20.81;
> 22.0; 22.99]
> a, b = linreg(x,y)
> 
> 
> My Julia is 0.3. 
> 
> 
> I have tried it with bigger matrices (the ones in the tutorial) and
> give me the same error.
> 
> 
> What does it mean?
What's your OS, and where did you get Julia from? Could you post the
output of versioninfo()?


Regards



Re: [julia-users] type confusions in list comprehensions (and how to work around it?)

2014-11-04 Thread elextr


On Wednesday, November 5, 2014 5:43:39 AM UTC+11, Evan Pu wrote:
>
> It does indeed happens inside the function, if you pass a function as an 
> argument to it (rather than refering to f implicitly in the function body, 
> you explicitly pass in f as an extra argument)
> see below:
>
> julia> f(x) = x + 1
> f (generic function with 1 method)
>
> julia> g(f, xs) = [f(x) for x in xs]
> g (generic function with 1 method)
>

When this is being compiled, Julia has no way of knowing what type f() 
returns, since it's a runtime parameter, so it has to use Any.
 

>
> julia> xs = [1,2,3]
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
> julia> g(f,xs)
> 3-element Array{Any,1}:
>  2
>  3
>  4
>
>
> On Tuesday, November 4, 2014 2:22:24 AM UTC-5, Jutho wrote:
>>
>> This only happens in global scope, not inside a function? If you define
>> f(list) = return [g(x) for x in list]
>>
>> then f(xs) will return an Array{Float64,1}. 
>>
>> On Tuesday 4 November 2014 at 03:23:36 UTC+1, K leo wrote:
>>>
>>> I found that I often have to force this conversion, which is not too 
>>> difficult.  The question why comprehension has to build with type Any? 
>>>
>>>
>>> On 2014-11-04 07:06, Miguel Bazdresch wrote: 
>>> > > How could I force the type of gxs1 to be of an array of Float64? 
>>> > 
>>> > The simplest way is: 
>>> > 
>>> > gxs1 = Float64[g(x) for x in xs] 
>>> > 
>>> > -- mb 
>>> > 
>>> > On Mon, Nov 3, 2014 at 6:01 PM, Evan Pu wrote: 
>>> > 
>>> > Consider the following interaction: 
>>> > 
>>> > julia> g(x) = 1 / (1 + x) 
>>> > g (generic function with 1 method) 
>>> > 
>>> > julia> typeof(g(1.0)) 
>>> > Float64 
>>> > 
>>> > julia> xs = [1.0, 2.0, 3.0, 4.0] 
>>> > 4-element Array{Float64,1}: 
>>> >  1.0 
>>> >  2.0 
>>> >  3.0 
>>> >  4.0 
>>> > 
>>> > julia> gxs1 = [g(x) for x in xs] 
>>> > 4-element Array{Any,1}: 
>>> >  0.5 
>>> >  0.33 
>>> >  0.25 
>>> >  0.2 
>>> > 
>>> > Why isn't gxs1 type of Array{Float64,1}? 
>>> > How could I force the type of gxs1 to be of an array of Float64? 
>>> > 
>>> > julia> gxs2 = [convert(Float64,g(x)) for x in xs] 
>>> > 4-element Array{Any,1}: 
>>> >  0.5 
>>> >  0.33 
>>> >  0.25 
>>> >  0.2 
>>> > 
>>> > somehow this doesn't seem to work... 
>>> > 
>>> > 
>>> > 
>>> > 
>>>
>>>
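
The workarounds discussed in this thread (map, or a typed comprehension) can be 
put side by side in a small sketch. `g_any`, `g_map`, and `g_typed` are 
illustrative names, not from the thread:

```julia
f(x) = x + 1

g_any(f, xs)   = [f(x) for x in xs]     # f's return type unknown at compile time -> Array{Any,1}
g_map(f, xs)   = map(f, xs)             # map determines the element type at runtime
g_typed(f, xs) = Int[f(x) for x in xs]  # the annotation forces the element type

xs = [1, 2, 3]
g_map(f, xs)    # 3-element Array{Int64,1}: 2, 3, 4
g_typed(f, xs)  # same values, Int64 element type
```

Note that the annotated form will error if `f` returns something that cannot be 
converted to the stated element type.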

Re: [julia-users] linreg(X,Y) not working.

2014-11-04 Thread Karel Zapfe

Hi Milan Bouchet-Valat:


Thanks for the fast reply. 
My OS is Gentoo GNU/Linux, and I got Julia from the "portage tree", the 
usual from-source repositories that Gentoo maintains. The output of the
command you requested is as follows:
 
julia> versioninfo()
Julia Version 0.3.0
Platform Info:
  System: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libblas
  LAPACK: liblapack
  LIBM: libm
  LLVM: libLLVM-3.3



[julia-users] Re: type confusions in list comprehensions (and how to work around it?)

2014-11-04 Thread yfractal
But map works fine...

julia> g(x) = 1 / (1 + x)
g (generic function with 1 method)

julia> xs = [1.0, 2.0, 3.0, 4.0]
4-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0

julia> gxs1 = map(g, xs)
4-element Array{Float64,1}:
 0.5 
 0.33
 0.25
 0.2 



On Tuesday 4 November 2014 at 7:01:37 AM UTC+8, Evan Pu wrote:
>
> Consider the following interaction:
>
> julia> g(x) = 1 / (1 + x)
> g (generic function with 1 method)
>
> julia> typeof(g(1.0))
> Float64
>
> julia> xs = [1.0, 2.0, 3.0, 4.0]
> 4-element Array{Float64,1}:
>  1.0
>  2.0
>  3.0
>  4.0
>
> julia> gxs1 = [g(x) for x in xs]
> 4-element Array{Any,1}:
>  0.5 
>  0.33
>  0.25
>  0.2 
>
> Why isn't gxs1 type of Array{Float64,1}?
> How could I force the type of gxs1 to be of an array of Float64?
>
> julia> gxs2 = [convert(Float64,g(x)) for x in xs]
> 4-element Array{Any,1}:
>  0.5 
>  0.33
>  0.25
>  0.2   
>
> somehow this doesn't seem to work...
>
>
>
>

[julia-users] base case for the reduce operator?

2014-11-04 Thread Evan Pu
Hi, I'm writing a simple polynomial module which requires addition of 
polynomials.

I have defined the addition by overloading the function + with an 
additional method:
+(p1::Poly, p2::Poly) = ...# code for the addition

I would like to use + now in a reduce call, imagine I have a list of 
polynomials [p1, p2, p3],
calling reduce(+, [p1, p2, p3]) behaves as expected, giving me a polynomial 
that's the sum of the 3

However, I would also like to cover the edge cases where there's only a 
single polynomial or no polynomial at all.
I would like the following behaviours:

# reducing with a single polynomial list gives just the polynomial back
reduce(+, [p1]) 
> p1

# reducing with an empty list gives back the 0 polynomial
reduce(+, [])
> ZeroPoly

How might I add such functionality?
For the empty list case, how might I annotate the type so Julia is aware 
that [] means an empty list of polynomials rather than an empty list of Any?
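
One possible shape of an answer (a sketch only: the `Poly` type, its `+` 
method, and `ZeroPoly` below are illustrative stand-ins for the poster's own 
definitions, written against the 0.3-era `reduce(op, v0, itr)` form; later 
Julia spells the seed as `reduce(op, itr; init=v0)`):

```julia
immutable Poly                 # 0.3-era syntax; `struct` in later Julia
    coeffs::Vector{Float64}
end

const ZeroPoly = Poly(Float64[])   # additive identity

import Base: +
function +(p::Poly, q::Poly)
    n = max(length(p.coeffs), length(q.coeffs))
    c = zeros(n)
    c[1:length(p.coeffs)] += p.coeffs   # pad shorter polynomial with zeros
    c[1:length(q.coeffs)] += q.coeffs
    Poly(c)
end

# Poly[] is an empty *typed* array: empty-of-Poly, not empty-of-Any
reduce(+, ZeroPoly, Poly[])                           # -> ZeroPoly (the seed)
reduce(+, ZeroPoly, [Poly([1.0, 2.0])])               # -> that single polynomial
reduce(+, ZeroPoly, [Poly([1.0]), Poly([0.0, 3.0])])  # -> Poly([1.0, 3.0])
```

Supplying the identity as the seed covers both edge cases at once, and the 
`Poly[]` annotation answers the empty-list typing question.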


[julia-users] Re: Issue with Pkg.add

2014-11-04 Thread Dejan Miljkovic
Same here but with different package
I am using 0.2.2

Dejan

*julia> **Pkg.add("HDFS")*

*ERROR: `convert` has no method matching convert(::Type{UTF8String}, 
::ASCIIString)*

* in wait at 
/Applications/Julia-0.3.2.app/Contents/Resources/julia/lib/julia/sys.dylib 
(repeats 2 times)*

* in wait at task.jl:48*

* in sync_end at 
/Applications/Julia-0.3.2.app/Contents/Resources/julia/lib/julia/sys.dylib*

* in add at pkg/entry.jl:319*

* in add at pkg/entry.jl:71*

* in anonymous at pkg/dir.jl:28*

* in cd at 
/Applications/Julia-0.3.2.app/Contents/Resources/julia/lib/julia/sys.dylib*

* in __cd#227__ at 
/Applications/Julia-0.3.2.app/Contents/Resources/julia/lib/julia/sys.dylib*

* in add at pkg.jl:20*

On Saturday, October 11, 2014 10:45:10 AM UTC-7, Bruno Gomes wrote:
>
> Hi, I have a problem. When I call Pkg.add I get the following
>
> Pkg.add("RDatasets")
> error: `convert` has no method matching convert(::Type{UTF8String}, ::
> ASCIIString)
>
> I tried
>
> Pkg.add(ascii("RDatasets"))
>
> but the same occurs.
>
> Thanks in advance.
>
>

Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Stefan Karpinski
I know that factorize checks a bunch of properties of the matrix to be
factorized – is positive definiteness not something that it checks? Should
it?

On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack 
wrote:

> In your case, I think the right solution is to invert it by
> inv(cholfact(pd)). By calling cholfact, you are telling Julia that your
> matrix is positive definite and Julia will exploit that to give you a fast
> inverse which is also positive definite.
>
> inv(factorize(Hermitian(pd))) is slow because it uses a factorization that
> only exploits symmetry (Bunch-Kaufman), but not positive definiteness
> (Cholesky). However, the Bunch-Kaufman factorization preserves symmetry and
> hence the result is positive definite. In contrast, when doing inv(pd)
> Julia has no idea that pd is positive definite or even symmetric and hence
> it defaults to use the LU factorization which won't preserve symmetry and
> therefore isposdef will return false.
>
> Hope it made sense. I'll probably have to write a section in the
> documentation about this soon.
>
> 2014-11-03 18:53 GMT-05:00 David van Leeuwen :
>
> Hello,
>>
>> I am struggling with the fact that covariance matrices computed from a
>> precision matrix aren't positive definite, according to `isposdef()` (they
>> should be according to the maths).
>>
>> It looks like the culprit is `inv(pd::Matrix)` which does not always
>> result in a positive definite matrix if `pd` is one.  This is probably
>> because `inv()` is agnostic of the fact that the argument is positive
>> definite, and numerical details.
>>
>> Now I've tried to understand the support for special matrices, and I
>> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do this.
>> Indeed the resulting matrix is positive definite.  However, this
>> computation takes a lot longer than inv(), about 5--6 times as slow.  I
>> would have expected that the extra symmetry would lead to a more efficient
>> matrix inversion.
>>
>> Is there something I'm doing wrong?
>>
>> Cheers,
>>
>> ---david
>>
>
>


[julia-users] Re: linreg(X,Y) not working.

2014-11-04 Thread Arch Call
It works fine for me.  I am on Julia 0.3.2 using Windows 7 with 64-bit 
hardware.

On Tuesday, November 4, 2014 5:12:52 PM UTC-5, Karel Zapfe wrote:
>
> Hi fellas:
>
> I am starting the Julia Tutorial at:
> http://forio.com/labs/julia-studio/tutorials
>
> There is an example for linear regression in Julia. According to the 
> tutorial, linear regresion via the function
>
> linreg(x,y) 
>
> should just work. But it doesn't. Instead, it gives me a very very very 
> criptic error message:
>
>  ** On entry to DTZRZF parameter number  7 had an illegal value
>
> My code is very simple:
>
> x = float([1:12])
> y = [5.5; 6.3; 7.6; 8.8; 10.9; 11.79; 13.48; 15.02; 17.77; 20.81; 22.0; 
> 22.99]
> a, b = linreg(x,y)
>
> My Julia is 0.3. 
>
> I have tried it with bigger matrices (the ones in the tutorial) and give 
> me the same error.
>
> What does it mean?
>
>
>

[julia-users] Re: type confusions in list comprehensions (and how to work around it?)

2014-11-04 Thread elextr


On Wednesday, November 5, 2014 10:01:10 AM UTC+11, yfra...@gmail.com wrote:
>
> Bug map works fine...
>
> julia> g(x) = 1 / (1 + x)
> g (generic function with 1 method)
>
> julia> xs = [1.0, 2.0, 3.0, 4.0]
> 4-element Array{Float64,1}:
>  1.0
>  2.0
>  3.0
>  4.0
>
> julia> gxs1 = map(g, xs)
> 4-element Array{Float64,1}:
>  0.5 
>  0.33
>  0.25
>  0.2 
>
>
Map determines the type dynamically at runtime.  Comprehensions infer the 
type at compile time, but the type of g(x) is dependent on the type of x, 
so it's not known at compile time.  

If you define g(x) = (1/(1+x))::Float64 so the type of g(x) is known at 
compile time then you get:

julia> gxs = [g(x) for x in xs]
4-element Array{Float64,1}:
 0.5 
 0.33
 0.25
 0.2 

[...]

>
>>
>>

Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Tim Holy
On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
> I know that factorize checks a bunch of properties of the matrix to be
> factorized – is positive definiteness not something that it checks? Should
> it?

Positive definiteness is not a quick check. For example, the matrix `ones(2,2)` 
is symmetric and has all positive entries but is not positive-definite. You 
have to finish computing the Cholesky factorization before you can be sure it's 
positive definite, at which point you should of course just keep that 
factorization.

--Tim

> 
> On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack 
> wrote:
> > In your case, I think the right solution is to invert it by
> > inv(cholfact(pd)). By calling cholfact, you are telling Julia that your
> > matrix is positive definite and Julia will exploit that to give you a fast
> > inverse which is also positive definite.
> > 
> > inv(factorize(Hermitian(pd))) is slow because it uses a factorization that
> > only exploits symmetry (Bunch-Kaufman), but not positive definiteness
> > (Cholesky). However, the Bunch-Kaufman factorization preserves symmetry
> > and
> > hence the result is positive definite. In contrast, when doing inv(pd)
> > Julia has no idea that pd is positive definite or even symmetric and hence
> > it defaults to use the LU factorization which won't preserve symmetry and
> > therefore isposdef will return false.
> > 
> > Hope it made sense. I'll probably have to write a section in the
> > documentation about this soon.
> > 
> > 2014-11-03 18:53 GMT-05:00 David van Leeuwen :
> > 
> > Hello,
> > 
> >> I am struggling with the fact that covariance matrices computed from a
> >> precision matrix aren't positive definite, according to `isposdef()`
> >> (they
> >> should be according to the maths).
> >> 
> >> It looks like the culprit is `inv(pd::Matrix)` which does not always
> >> result in a positive definite matrix if `pd` is one.  This is probably
> >> because `inv()` is agnostic of the fact that the argument is positive
> >> definite, and numerical details.
> >> 
> >> Now I've tried to understand the support for special matrices, and I
> >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do
> >> this.
> >> Indeed the resulting matrix is positive definite.  However, this
> >> computation takes a lot longer than inv(), about 5--6 times as slow.  I
> >> would have expected that the extra symmetry would lead to a more
> >> efficient
> >> matrix inversion.
> >> 
> >> Is there something I'm doing wrong?
> >> 
> >> Cheers,
> >> 
> >> ---david



Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stefan Karpinski
Some systems round their answers as John said but it's easy to check that
it's a lie:

R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does this
too.

On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan  wrote:

> Thanks
>
> On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White wrote:
>>
>> Hi Neil,
>>
>> Julie does math the same way that all computers do math. You're probably
>> coming from another language where a lot of effort is invested into
>> pretending that computers offer a closer approximation to abstract
>> mathematics than they actually do. Those systems have been lying to you.
>>
>> Put another way: you just took the red pill by using Julia.
>>
>>  -- John
>>
>> On Nov 4, 2014, at 11:06 AM, Neil Devadasan  wrote:
>>
>> > julia> f(x::Float64, y::Float64) = 2x + y;
>> >
>> > julia> f(10.97,23.9985)
>> > 45.9385005
>> >
>> > The above method execution of function f returns an answer that I
>> cannot understand.  Can someone clarify?
>> >
>> > Thank you.
>>
>>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Miguel Bazdresch
On Matlab R2013b:

>> 2*10.97 + 23.9985
ans =
   45.9385
>> 2*10.97 + 23.9985 == 45.9385
ans =
 0
>>

-- mb

On Tue, Nov 4, 2014 at 7:48 PM, Stefan Karpinski 
wrote:

> Some systems round their answers as John said but it's easy to check that
> it's a lie:
>
> R version 3.1.0 (2014-04-10) -- "Spring Dance"
> > 2*10.97 + 23.9985
> [1] 45.9385
> > 2*10.97 + 23.9985 == 45.9385
> [1] FALSE
>
> This is perl 5, version 16, subversion 2 (v5.16.2)
>   DB<1> x 2*10.97 + 23.9985
> 0  45.9385
>   DB<2> x 2*10.97 + 23.9985 == 45.9385
> 0  ''
>
>
> I don't have a working copy of Matlab right now, but I think it does this
> too.
>
> On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan  wrote:
>
>> Thanks
>>
>> On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White wrote:
>>>
>>> Hi Neil,
>>>
>>> Julie does math the same way that all computers do math. You're probably
>>> coming from another language where a lot of effort is invested into
>>> pretending that computers offer a closer approximation to abstract
>>> mathematics than they actually do. Those systems have been lying to you.
>>>
>>> Put another way: you just took the red pill by using Julia.
>>>
>>>  -- John
>>>
>>> On Nov 4, 2014, at 11:06 AM, Neil Devadasan  wrote:
>>>
>>> > julia> f(x::Float64, y::Float64) = 2x + y;
>>> >
>>> > julia> f(10.97,23.9985)
>>> > 45.9385005
>>> >
>>> > The above method execution of function f returns an answer that I
>>> cannot understand.  Can someone clarify?
>>> >
>>> > Thank you.
>>>
>>>
>


Re: [julia-users] Re: linreg(X,Y) not working.

2014-11-04 Thread Andreas Noack
I also cannot reproduce this. Is your input exactly as you have posted it?

2014-11-04 19:27 GMT-05:00 Arch Call :

> It works fine for me.  I am on Julia 3.2 using Windows 7 with 64 bit
> hardware.
>
>
> On Tuesday, November 4, 2014 5:12:52 PM UTC-5, Karel Zapfe wrote:
>>
>> Hi fellas:
>>
>> I am starting the Julia Tutorial at:
>> http://forio.com/labs/julia-studio/tutorials
>>
>> There is an example for linear regression in Julia. According to the
>> tutorial, linear regresion via the function
>>
>> linreg(x,y)
>>
>> should just work. But it doesn't. Instead, it gives me a very very very
>> criptic error message:
>>
>>  ** On entry to DTZRZF parameter number  7 had an illegal value
>>
>> My code is very simple:
>>
>> x = float([1:12])
>> y = [5.5; 6.3; 7.6; 8.8; 10.9; 11.79; 13.48; 15.02; 17.77; 20.81; 22.0;
>> 22.99]
>> a, b = linreg(x,y)
>>
>> My Julia is 0.3.
>>
>> I have tried it with bigger matrices (the ones in the tutorial) and give
>> me the same error.
>>
>> What does it mean?
>>
>>
>>


Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Andreas Noack
factorize(Matrix) is a full-service check, but if you specify structure as
David did with Hermitian, factorize dispatches to an appropriate method. In
this case it is Bunch-Kaufman. E.g.

julia> A = randn(3,3);A = A'A;

julia> typeof(factorize(A))
Cholesky{Float64} (constructor with 1 method)

julia> typeof(factorize(Hermitian(A)))
BunchKaufman{Float64} (constructor with 1 method)

2014-11-04 19:31 GMT-05:00 Tim Holy :

> On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
> > I know that factorize checks a bunch of properties of the matrix to be
> > factorized – is positive definiteness not something that it checks?
> Should
> > it?
>
> Positive definiteness is not a quick check. For example, the matrix
> `ones(2,2)`
> is symmetric and has all positive entries but is not positive-definite. You
> have to finish computing the Cholesky factorization before you can be sure
> it's
> positive definite, at which point you should of course just keep that
> factorization.
>
> --Tim
>
> >
> > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <
> andreasnoackjen...@gmail.com>
> > wrote:
> > > In your case, I think the right solution is to invert it by
> > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that your
> > > matrix is positive definite and Julia will exploit that to give you a
> fast
> > > inverse which is also positive definite.
> > >
> > > inv(factorize(Hermitian(pd))) is slow because it uses a factorization
> that
> > > only exploits symmetry (Bunch-Kaufman), but not positive definiteness
> > > (Cholesky). However, the Bunch-Kaufman factorization preserves symmetry
> > > and
> > > hence the result is positive definite. In contrast, when doing inv(pd)
> > > Julia has no idea that pd is positive definite or even symmetric and
> hence
> > > it defaults to use the LU factorization which won't preserve symmetry
> and
> > > therefore isposdef will return false.
> > >
> > > Hope it made sense. I'll probably have to write a section in the
> > > documentation about this soon.
> > >
> > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <
> david.vanleeu...@gmail.com>:
> > >
> > > Hello,
> > >
> > >> I am struggling with the fact that covariance matrices computed from a
> > >> precision matrix aren't positive definite, according to `isposdef()`
> > >> (they
> > >> should be according to the maths).
> > >>
> > >> It looks like the culprit is `inv(pd::Matrix)` which does not always
> > >> result in a positive definite matrix if `pd` is one.  This is probably
> > >> because `inv()` is agnostic of the fact that the argument is positive
> > >> definite, and numerical details.
> > >>
> > >> Now I've tried to understand the support for special matrices, and I
> > >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do
> > >> this.
> > >> Indeed the resulting matrix is positive definite.  However, this
> > >> computation takes a lot longer than inv(), about 5--6 times as slow.
> I
> > >> would have expected that the extra symmetry would lead to a more
> > >> efficient
> > >> matrix inversion.
> > >>
> > >> Is there something I'm doing wrong?
> > >>
> > >> Cheers,
> > >>
> > >> ---david
>
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread K Leo

julia> 2*10.97 + 23.9985
45.9385005

julia> 2*10.97 + 23.9985 == 45.9385005
true

Amazing.  I never expected this.  Is floating point comparison going to 
be guaranteed?


On 2014-11-05 08:48, Stefan Karpinski wrote:
Some systems round their answers as John said but it's easy to check 
that it's a lie:


R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does 
this too.


On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan wrote:


Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
wrote:

Hi Neil,

Julie does math the same way that all computers do math.
You're probably coming from another language where a lot of
effort is invested into pretending that computers offer a
closer approximation to abstract mathematics than they
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan
 wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
>
> julia> f(10.97,23.9985)
> 45.9385005
>
> The above method execution of function f returns an answer
that I cannot understand.  Can someone clarify?
>
> Thank you.






[julia-users] Help Keeping Up With Changes to Julia

2014-11-04 Thread zach
Hi,

I had written a utility in Julia when I was taking a first stab at learning 
the language.  After updating Julia recently, my utility stopped working.  I 
can't seem to diagnose what the issue might be, and so I was seeking out 
others' expertise to help me identify where the problem might reside. 
 Aside from a number of deprecated syntax warnings related to ArgParse, the 
traceback I see is:

ERROR: `convert` has no method matching convert(::Type{Dict{Symbol,Int64}}, 
::(Symbol,Symbol), ::(Int64,Int64))

 

 in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:45 

 in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:68 

 in parse_commandline at /Users/zdavis/bin/atmos:570 

 in main at /Users/zdavis/bin/atmos:82 

 in include at ./boot.jl:242 

 in include_from_node1 at loading.jl:128 

 in process_options at ./client.jl:293 

 in _start at ./client.jl:375 

 in _start at /Users/zdavis/Applications/Julia/usr/lib/julia/sys.dylib 

while loading /Users/zdavis/bin/atmos, in expression starting on line 122



I've attached the utility for anyone helpful enough to review.  Any help 
would be greatly appreciated.

Thanks!




atmos.gz
Description: Binary data


Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Stefan Karpinski
So this works as desired if you just do inv(factorize(X))?

On Wed, Nov 5, 2014 at 1:59 AM, Andreas Noack 
wrote:

> factorize(Matrix) is a full service check, but if you specify structure as
> David did with Hermitian, factorize dispatches to an appropriate method. In
> this case it is Bunch-Kaufman. E.g.
>
> julia> A = randn(3,3);A = A'A;
>
> julia> typeof(factorize(A))
> Cholesky{Float64} (constructor with 1 method)
>
> julia> typeof(factorize(Hermitian(A)))
> BunchKaufman{Float64} (constructor with 1 method)
>
> 2014-11-04 19:31 GMT-05:00 Tim Holy :
>
> On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
>> > I know that factorize checks a bunch of properties of the matrix to be
>> > factorized – is positive definiteness not something that it checks?
>> Should
>> > it?
>>
>> Positive definiteness is not a quick check. For example, the matrix
>> `ones(2,2)`
>> is symmetric and has all positive entries but is not positive-definite.
>> You
>> have to finish computing the Cholesky factorization before you can be
>> sure it's
>> positive definite, at which point you should of course just keep that
>> factorization.
>>
>> --Tim
>>
>> >
>> > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <
>> andreasnoackjen...@gmail.com>
>> > wrote:
>> > > In your case, I think the right solution is to invert it by
>> > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that
>> your
>> > > matrix is positive definite and Julia will exploit that to give you a
>> fast
>> > > inverse which is also positive definite.
>> > >
>> > > inv(factorize(Hermitian(pd))) is slow because it uses a factorization
>> that
>> > > only exploits symmetry (Bunch-Kaufman), but not positive definiteness
>> > > (Cholesky). However, the Bunch-Kaufman factorization preserves
>> symmetry
>> > > and
>> > > hence the result is positive definite. In contrast, when doing inv(pd)
>> > > Julia has no idea that pd is positive definite or even symmetric and
>> hence
>> > > it defaults to use the LU factorization which won't preserve symmetry
>> and
>> > > therefore isposdef will return false.
>> > >
>> > > Hope it made sense. I'll probably have to write a section in the
>> > > documentation about this soon.
>> > >
>> > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <
>> david.vanleeu...@gmail.com>:
>> > >
>> > > Hello,
>> > >
>> > >> I am struggling with the fact that covariance matrices computed from
>> a
>> > >> precision matrix aren't positive definite, according to `isposdef()`
>> > >> (they
>> > >> should be according to the maths).
>> > >>
>> > >> It looks like the culprit is `inv(pd::Matrix)` which does not always
>> > >> result in a positive definite matrix if `pd` is one.  This is
>> probably
>> > >> because `inv()` is agnostic of the fact that the argument is positive
>> > >> definite, and numerical details.
>> > >>
>> > >> Now I've tried to understand the support for special matrices, and I
>> > >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do
>> > >> this.
>> > >> Indeed the resulting matrix is positive definite.  However, this
>> > >> computation takes a lot longer than inv(), about 5--6 times as
>> slow.  I
>> > >> would have expected that the extra symmetry would lead to a more
>> > >> efficient
>> > >> matrix inversion.
>> > >>
>> > >> Is there something I'm doing wrong?
>> > >>
>> > >> Cheers,
>> > >>
>> > >> ---david
>>
>>
>


[julia-users] Re: Help Keeping Up With Changes to Julia

2014-11-04 Thread James Porter
In general it's not recommended to use the latest nightly (version 0.4) right 
now, as it is under rapid development and likely to break things quite often. 
I would recommend switching to the latest 0.3 release (0.3.2, I believe), 
which, if you have a git clone of the Julia repo, you can get by 
doing `git checkout release-0.3` and then `make`. Once you've done this, I 
would also update your packages by doing `Pkg.update()` at the Julia REPL, 
as packages often have many changes to keep up with new versions.

On Tuesday, November 4, 2014 7:06:10 PM UTC-6, za...@rescale.com wrote:
>
> Hi,
>
> I had written a utility in Julia when I was taking a first stab at 
> learning the Julia.  After updating Julia recently, my utility stopped 
> working.  I can't seem to diagnose what the issue might be, and so I was 
> seeking out others' expertise to help me identify where the problem might 
> reside.  Aside from a number of deprecated syntax warnings related to 
> ArgParse, the traceback I see is:
>
> ERROR: `convert` has no method matching convert(::Type{Dict{Symbol,Int64
> }}, ::(Symbol,Symbol), ::(Int64,Int64))
>
>  
>
>  in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:45 
>
>  in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:68 
>
>  in parse_commandline at /Users/zdavis/bin/atmos:570 
>
>  in main at /Users/zdavis/bin/atmos:82 
>
>  in include at ./boot.jl:242 
>
>  in include_from_node1 at loading.jl:128 
>
>  in process_options at ./client.jl:293 
>
>  in _start at ./client.jl:375 
>
>  in _start at /Users/zdavis/Applications/Julia/usr/lib/julia/sys.dylib 
>
> while loading /Users/zdavis/bin/atmos, in expression starting on line 122
>
>
>
> I've attached the utility for anyone helpful enough to review.  Any help 
> would be greatly appreciated.
>
> Thanks!
>
>
>

Re: [julia-users] Help Keeping Up With Changes to Julia

2014-11-04 Thread Stefan Karpinski
While learning the language, I would advise staying on the release-0.3
branch (or stable binaries from that series). The 0.4 branch is undergoing
a substantial amount of changes these days.

On Wed, Nov 5, 2014 at 2:06 AM,  wrote:

> Hi,
>
> I had written a utility in Julia when I was taking a first stab at
> learning Julia.  After updating Julia recently, my utility stopped
> working.  I can't seem to diagnose what the issue might be, and so I was
> seeking out others' expertise to help me identify where the problem might
> reside.  Aside from a number of deprecated syntax warnings related to
> ArgParse, the traceback I see is:
>
> ERROR: `convert` has no method matching convert(::Type{Dict{Symbol,Int64
> }}, ::(Symbol,Symbol), ::(Int64,Int64))
>
>
>
>  in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:45
>
>  in call at /Users/zdavis/.julia/v0.4/Options/src/Options.jl:68
>
>  in parse_commandline at /Users/zdavis/bin/atmos:570
>
>  in main at /Users/zdavis/bin/atmos:82
>
>  in include at ./boot.jl:242
>
>  in include_from_node1 at loading.jl:128
>
>  in process_options at ./client.jl:293
>
>  in _start at ./client.jl:375
>
>  in _start at /Users/zdavis/Applications/Julia/usr/lib/julia/sys.dylib
>
> while loading /Users/zdavis/bin/atmos, in expression starting on line 122
>
>
>
> I've attached the utility for anyone helpful enough to review.  Any help
> would be greatly appreciated.
>
> Thanks!
>
>
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stefan Karpinski
On Wed, Nov 5, 2014 at 2:06 AM, K Leo  wrote:

> julia> 2*10.97 + 23.9985
> 45.9385005
>
> julia> 2*10.97 + 23.9985 == 45.9385005
> true
>
> Amazing.  I never expected this.  Is floating point comparison going to be
> guaranteed?


What's shocking about this? What do you mean by floating point comparison
being guaranteed? We always print individual floating-point numbers with
enough digits to reconstruct their exact value (moreover, they are always
printed with the minimal number of digits necessary to do so).
Floating-point arrays are printed truncated.


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stuart Brorson

Don't know what you mean by "guaranteeing a floating point comparison".

In any event, you should never check equality when comparing floating
point numbers (except perhaps in special cases).  Instead, use a
tolerance:

tol = 1e-12;
if (abs(a-b) < tol)
  # Close enough
else
  # not equal
end
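As a hedged aside: Julia's Base also ships a helper for exactly this pattern, `isapprox`, which combines an absolute and a relative tolerance (the keyword form shown is from later releases; the exact signature may differ on 0.3):

```julia
a = 2*10.97 + 23.9985
b = 45.9385

a == b                        # false: two distinct Float64 values
abs(a - b) < 1e-9             # true: the hand-rolled tolerance check
isapprox(a, b; atol = 1e-9)   # true: the built-in equivalent
```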

The problems you see in other languages are due to the fact that they round
the values they display (but the underlying values are not rounded).
The value you got back from Julia was not rounded for display.

Stuart


On Wed, 5 Nov 2014, K Leo wrote:


julia> 2*10.97 + 23.9985
45.9385005

julia> 2*10.97 + 23.9985 == 45.9385005
true

Amazing.  I never expected this.  Is floating point comparison going to be 
guaranteed?


On 2014年11月05日 08:48, Stefan Karpinski wrote:
Some systems round their answers as John said but it's easy to check that 
it's a lie:


R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does this 
too.


On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan > wrote:


Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
wrote:

Hi Neil,

Julia does math the same way that all computers do math.
You're probably coming from another language where a lot of
effort is invested into pretending that computers offer a
closer approximation to abstract mathematics than they
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan
 wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
>
> julia> f(10.97,23.9985)
> 45.9385005
>
> The above method execution of function f returns an answer
that I cannot understand.  Can someone clarify?
>
> Thank you.







Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Andreas Noack
Yes and no. The problem is that small floating point noise destroys
symmetry/Hermitianity and therefore factorize concludes that the matrix is
not positive definite (I know about the other definition). If you construct
your positive definite matrix by A'A, then Julia makes it exactly
symmetric/Hermitian, but if you do e.g. A'*Diagonal(rand(size(A,1)))*A then
your matrix is still positive definite in infinite precision, but it is
almost never exactly symmetric/Hermitian in finite precision.

Hence it is a good idea to use inv(cholfact(X)) whenever you know that your
matrix should be considered positive definite. This is also much faster as
it bypasses all the checks in factorize.
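The recommendation above, as a minimal sketch (function names such as `cholfact` follow the 0.3-era standard library; later Julia versions renamed some of these):

```julia
A  = randn(100, 100)
pd = A'A                 # exactly symmetric, positive definite by construction

F     = cholfact(pd)     # tell Julia up front that pd is positive definite
pdinv = inv(F)           # inverse via the Cholesky factor: fast, and the
                         # result preserves symmetry/positive definiteness
```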

2014-11-04 20:22 GMT-05:00 Stefan Karpinski :

> So this works as desired if you just do inv(factorize(X))?
>
> On Wed, Nov 5, 2014 at 1:59 AM, Andreas Noack <
> andreasnoackjen...@gmail.com> wrote:
>
>> factorize(Matrix) is a full service check, but if you specify structure
>> as David did with Hermitian, factorize dispatches to an appropriate method.
>> In this case it is Bunch-Kaufman. E.g.
>>
>> julia> A = randn(3,3);A = A'A;
>>
>> julia> typeof(factorize(A))
>> Cholesky{Float64} (constructor with 1 method)
>>
>> julia> typeof(factorize(Hermitian(A)))
>> BunchKaufman{Float64} (constructor with 1 method)
>>
>> 2014-11-04 19:31 GMT-05:00 Tim Holy :
>>
>> On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
>>> > I know that factorize checks a bunch of properties of the matrix to be
>>> > factorized – is positive definiteness not something that it checks?
>>> Should
>>> > it?
>>>
>>> Positive definiteness is not a quick check. For example, the matrix
>>> `ones(2,2)`
>>> is symmetric and has all positive entries but is not positive-definite.
>>> You
>>> have to finish computing the Cholesky factorization before you can be
>>> sure it's
>>> positive definite, at which point you should of course just keep that
>>> factorization.
>>>
>>> --Tim
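Tim's quoted example is quick to verify at the REPL (`issym` is the 0.3-era spelling, renamed `issymmetric` in later versions):

```julia
A = ones(2, 2)   # symmetric, all entries positive, eigenvalues 2 and 0

issym(A)         # true
isposdef(A)      # false: the Cholesky factorization hits a zero pivot
```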
>>>
>>> >
>>> > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <
>>> andreasnoackjen...@gmail.com>
>>> > wrote:
>>> > > In your case, I think the right solution is to invert it by
>>> > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that
>>> your
>>> > > matrix is positive definite and Julia will exploit that to give you
>>> a fast
>>> > > inverse which is also positive definite.
>>> > >
>>> > > inv(factorize(Hermitian(pd))) is slow because it uses a
>>> factorization that
>>> > > only exploits symmetry (Bunch-Kaufman), but not positive definiteness
>>> > > (Cholesky). However, the Bunch-Kaufman factorization preserves
>>> symmetry
>>> > > and
>>> > > hence the result is positive definite. In contrast, when doing
>>> inv(pd)
>>> > > Julia has no idea that pd is positive definite or even symmetric and
>>> hence
>>> > > it defaults to use the LU factorization which won't preserve
>>> symmetry and
>>> > > therefore isposdef will return false.
>>> > >
>>> > > Hope it made sense. I'll probably have to write a section in the
>>> > > documentation about this soon.
>>> > >
>>> > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <
>>> david.vanleeu...@gmail.com>:
>>> > >
>>> > > Hello,
>>> > >
>>> > >> I am struggling with the fact that covariance matrices computed
>>> from a
>>> > >> precision matrix aren't positive definite, according to `isposdef()`
>>> > >> (they
>>> > >> should be according to the maths).
>>> > >>
>>> > >> It looks like the culprit is `inv(pd::Matrix)` which does not always
>>> > >> result in a positive definite matrix if `pd` is one.  This is
>>> probably
>>> > >> because `inv()` is agnostic of the fact that the argument is
>>> positive
>>> > >> definite, and numerical details.
>>> > >>
>>> > >> Now I've tried to understand the support for special matrices, and I
>>> > >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to do
>>> > >> this.
>>> > >> Indeed the resulting matrix is positive definite.  However, this
>>> > >> computation takes a lot longer than inv(), about 5--6 times as
>>> slow.  I
>>> > >> would have expected that the extra symmetry would lead to a more
>>> > >> efficient
>>> > >> matrix inversion.
>>> > >>
>>> > >> Is there something I'm doing wrong?
>>> > >>
>>> > >> Cheers,
>>> > >>
>>> > >> ---david
>>>
>>>
>>
>


[julia-users] Re: base case for the reduce operator?

2014-11-04 Thread James Porter
To get an empty list of polynomials, you can type `Poly[]`. I wouldn't 
recommend changing the behavior of the built in reduce function though, 
that would definitely be confusing for anyone else who later wants to work 
with your module (and who will reasonably expect Base.reduce to return a 
list). I would just handle the transformation of the empty list into 
ZeroPoly elsewhere in your code. A nice thing to do might be to define your 
own reduce function in your module (so Polynomials.reduce, or 
Polynomials.polyreduce or something as opposed to Base.reduce) that wraps 
Base.reduce and provides this behavior.
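A sketch of that suggestion follows; the `Poly` type and `ZeroPoly` below are hypothetical stand-ins for the poster's module, not its real definitions (0.3-era syntax):

```julia
immutable Poly
    coeffs::Vector{Float64}
end

const ZeroPoly = Poly(Float64[])

import Base: +
function +(p1::Poly, p2::Poly)
    # Add coefficient vectors, padding the shorter one with zeros.
    n = max(length(p1.coeffs), length(p2.coeffs))
    c = zeros(n)
    c[1:length(p1.coeffs)] += p1.coeffs
    c[1:length(p2.coeffs)] += p2.coeffs
    Poly(c)
end

# A module-local reduce with the desired base case, leaving Base.reduce alone:
polysum(ps::Vector{Poly}) = isempty(ps) ? ZeroPoly : reduce(+, ps)

polysum(Poly[])                           # ZeroPoly
polysum([Poly([1.0, 2.0]), Poly([3.0])])  # Poly([4.0, 2.0])
```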


On Tuesday, November 4, 2014 5:20:53 PM UTC-6, Evan Pu wrote:
>
> Hi I'm writing a simple polynomial module which requires addition of 
> polynomials.
>
> I have defined the addition by overloading the function + with an 
> additional method:
> +(p1::Poly, p2::Poly) = ...# code for the addition
>
> I would like to use + now in a reduce call, imagine I have a list of 
> polynomials [p1, p2, p3],
> calling reduce(+, [p1, p2, p3]) behaves as expected, giving me a 
> polynomial that's the sum of the 3
>
> however, I would also like to cover the edge cases where there's only a 
> single polynomial or no polynomial.
> I would like the following behaviours, :
>
> # reducing with a single polynomial list gives just the polynomial back
> reduce(+, [p1]) 
> > p1
>
> # reducing with an empty list gives back the 0 polynomial
> reduce(+, [])
> > ZeroPoly
>
> How might I add such functionality?
> For the empty list case, how might I annotate the type so Julia is aware 
> that [] means an empty list of polynomials rather than an empty list of Any?
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stuart Brorson

Just to follow up on this topic, here's what Matlab does:


>> 2*10.97 + 23.9985

ans =

   45.9385

>> 2*10.97 + 23.9985 == 45.9385

ans =

     0

>> format long
>> 2*10.97 + 23.9985

ans =

  45.9385005

>> 2*10.97 + 23.9985 == 45.9385005

ans =

     1

Note that Matlab's display value is 45.9385, but the actual result is
45.9385005.  You only see the full value if you set the
display to show all digits ("format long").

Stuart



On Wed, 5 Nov 2014, K Leo wrote:


julia> 2*10.97 + 23.9985
45.9385005

julia> 2*10.97 + 23.9985 == 45.9385005
true

Amazing.  I never expected this.  Is floating point comparison going to be 
guaranteed?


On 2014年11月05日 08:48, Stefan Karpinski wrote:
Some systems round their answers as John said but it's easy to check that 
it's a lie:


R version 3.1.0 (2014-04-10) -- "Spring Dance"
> 2*10.97 + 23.9985
[1] 45.9385
> 2*10.97 + 23.9985 == 45.9385
[1] FALSE

This is perl 5, version 16, subversion 2 (v5.16.2)
  DB<1> x 2*10.97 + 23.9985
0  45.9385
  DB<2> x 2*10.97 + 23.9985 == 45.9385
0  ''


I don't have a working copy of Matlab right now, but I think it does this 
too.


On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan > wrote:


Thanks

On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
wrote:

Hi Neil,

Julia does math the same way that all computers do math.
You're probably coming from another language where a lot of
effort is invested into pretending that computers offer a
closer approximation to abstract mathematics than they
actually do. Those systems have been lying to you.

Put another way: you just took the red pill by using Julia.

 -- John

On Nov 4, 2014, at 11:06 AM, Neil Devadasan
 wrote:

> julia> f(x::Float64, y::Float64) = 2x + y;
>
> julia> f(10.97,23.9985)
> 45.9385005
>
> The above method execution of function f returns an answer
that I cannot understand.  Can someone clarify?
>
> Thank you.







Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Stefan Karpinski
Ah, good to know. There's so much depth here. This could be a chapter or
two of a book on effective numerical analysis in Julia :-)

On Wed, Nov 5, 2014 at 2:33 AM, Andreas Noack 
wrote:

> Yes and no. The problem is that small floating point noise destroys
> symmetry/Hermitianity and therefore factorize concludes that the matrix is
> not positive definite (I know about the other definition). If you construct
> your positive definite matrix by A'A, then Julia makes it exactly
> symmetric/Hermitian, but if you do e.g. A'*Diagonal(rand(size(A,1)))*A then
> your matrix is still positive definite in infinite precision, but it is
> almost never exactly symmetric/Hermitian in finite precision.
>
> Hence it is a good idea to use inv(cholfact(X)) whenever you know that
> your matrix should be considered positive definite. This is also much
> faster as it bypasses all the checks in factorize.
>
> 2014-11-04 20:22 GMT-05:00 Stefan Karpinski :
>
> So this works as desired if you just do inv(factorize(X))?
>>
>> On Wed, Nov 5, 2014 at 1:59 AM, Andreas Noack <
>> andreasnoackjen...@gmail.com> wrote:
>>
>>> factorize(Matrix) is a full service check, but if you specify structure
>>> as David did with Hermitian, factorize dispatches to an appropriate method.
>>> In this case it is Bunch-Kaufman. E.g.
>>>
>>> julia> A = randn(3,3);A = A'A;
>>>
>>> julia> typeof(factorize(A))
>>> Cholesky{Float64} (constructor with 1 method)
>>>
>>> julia> typeof(factorize(Hermitian(A)))
>>> BunchKaufman{Float64} (constructor with 1 method)
>>>
>>> 2014-11-04 19:31 GMT-05:00 Tim Holy :
>>>
>>> On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
 > I know that factorize checks a bunch of properties of the matrix to be
 > factorized – is positive definiteness not something that it checks?
 Should
 > it?

 Positive definiteness is not a quick check. For example, the matrix
 `ones(2,2)`
 is symmetric and has all positive entries but is not positive-definite.
 You
 have to finish computing the Cholesky factorization before you can be
 sure it's
 positive definite, at which point you should of course just keep that
 factorization.

 --Tim

 >
 > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <
 andreasnoackjen...@gmail.com>
 > wrote:
 > > In your case, I think the right solution is to invert it by
 > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that
 your
 > > matrix is positive definite and Julia will exploit that to give you
 a fast
 > > inverse which is also positive definite.
 > >
 > > inv(factorize(Hermitian(pd))) is slow because it uses a
 factorization that
 > > only exploits symmetry (Bunch-Kaufman), but not positive
 definiteness
 > > (Cholesky). However, the Bunch-Kaufman factorization preserves
 symmetry
 > > and
 > > hence the result is positive definite. In contrast, when doing
 inv(pd)
 > > Julia has no idea that pd is positive definite or even symmetric
 and hence
 > > it defaults to use the LU factorization which won't preserve
 symmetry and
 > > therefore isposdef will return false.
 > >
 > > Hope it made sense. I'll probably have to write a section in the
 > > documentation about this soon.
 > >
 > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <
 david.vanleeu...@gmail.com>:
 > >
 > > Hello,
 > >
 > >> I am struggling with the fact that covariance matrices computed
 from a
 > >> precision matrix aren't positive definite, according to
 `isposdef()`
 > >> (they
 > >> should be according to the maths).
 > >>
 > >> It looks like the culprit is `inv(pd::Matrix)` which does not
 always
 > >> result in a positive definite matrix if `pd` is one.  This is
 probably
 > >> because `inv()` is agnostic of the fact that the argument is
 positive
 > >> definite, and numerical details.
 > >>
 > >> Now I've tried to understand the support for special matrices, and
 I
 > >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to
 do
 > >> this.
 > >> Indeed the resulting matrix is positive definite.  However, this
 > >> computation takes a lot longer than inv(), about 5--6 times as
 slow.  I
 > >> would have expected that the extra symmetry would lead to a more
 > >> efficient
 > >> matrix inversion.
 > >>
 > >> Is there something I'm doing wrong?
 > >>
 > >> Cheers,
 > >>
 > >> ---david


>>>
>>
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread K Leo
I meant this: to check whether two floating-point variables are equal, I had 
always had to do something like abs(x-y)<1.e-5 (never simply x==y) in 
other languages.  I wonder whether checking x==y would be sufficient in 
Julia?



On 2014年11月05日 09:30, Stefan Karpinski wrote:
On Wed, Nov 5, 2014 at 2:06 AM, K Leo > wrote:


julia> 2*10.97 + 23.9985
45.9385005

julia> 2*10.97 + 23.9985 == 45.9385005
true

Amazing.  I never expected this.  Is floating point comparison
going to be guaranteed?


What's shocking about this? What do you mean by floating point 
comparison being guaranteed? We always print individual floating-point 
numbers with enough digits to reconstruct their exact value (moreover, 
they are always printed with the minimal number of digits necessary to 
do so). Floating-point arrays are printed truncated.




Re: [julia-users] How Julia do math operations

2014-11-04 Thread Stefan Karpinski
The == check works exactly the same way in Julia as it does in other
languages. It's the printing of the numbers that's more precise.

On Wed, Nov 5, 2014 at 2:41 AM, K Leo  wrote:

> I meant this: to check whether two floating-point variables are equal, I had
> always had to do something like abs(x-y)<1.e-5 (never simply x==y) in other
> languages.  I wonder whether checking x==y would be sufficient in Julia?
>
>
> On 2014年11月05日 09:30, Stefan Karpinski wrote:
>
>  On Wed, Nov 5, 2014 at 2:06 AM, K Leo > cnbiz...@gmail.com>> wrote:
>>
>> julia> 2*10.97 + 23.9985
>> 45.9385005
>>
>> julia> 2*10.97 + 23.9985 == 45.9385005
>> true
>>
>> Amazing.  I never expected this.  Is floating point comparison
>> going to be guaranteed?
>>
>>
>> What's shocking about this? What do you mean by floating point comparison
>> being guaranteed? We always print individual floating-point numbers with
>> enough digits to reconstruct their exact value (moreover, they are always
>> printed with the minimal number of digits necessary to do so).
>> Floating-point arrays are printed truncated.
>>
>
>


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Jameson Nash
fun fact: even with "format long", matlab doesn't always print enough
digits to reconstruct the number

On Tue, Nov 4, 2014 at 8:35 PM, Stuart Brorson  wrote:

> Just to follow up on this topic, here's what Matlab does:
>
>  2*10.97 + 23.9985
>>>
>>
> ans =
>
>45.9385
>
>  2*10.97 + 23.9985 == 45.9385
>>>
>>
> ans =
>
>  0
>
>  format long
>>> 2*10.97 + 23.9985
>>>
>>
> ans =
>
>   45.9385005
>
>  2*10.97 + 23.9985 == 45.9385005
>>>
>>
> ans =
>
>  1
>
> Note that Matlab's display value is 45.9385, but the actual result is
> 45.9385005.  You only see the display value if you set the
> display to show all digits ("format long")
>
> Stuart
>
>
>
> On Wed, 5 Nov 2014, K Leo wrote:
>
>  julia> 2*10.97 + 23.9985
>> 45.9385005
>>
>> julia> 2*10.97 + 23.9985 == 45.9385005
>> true
>>
>> Amazing.  I never expected this.  Is floating point comparison going to
>> be guaranteed?
>>
>> On 2014年11月05日 08:48, Stefan Karpinski wrote:
>>
>>> Some systems round their answers as John said but it's easy to check
>>> that it's a lie:
>>>
>>> R version 3.1.0 (2014-04-10) -- "Spring Dance"
>>> > 2*10.97 + 23.9985
>>> [1] 45.9385
>>> > 2*10.97 + 23.9985 == 45.9385
>>> [1] FALSE
>>>
>>> This is perl 5, version 16, subversion 2 (v5.16.2)
>>>   DB<1> x 2*10.97 + 23.9985
>>> 0  45.9385
>>>   DB<2> x 2*10.97 + 23.9985 == 45.9385
>>> 0  ''
>>>
>>>
>>> I don't have a working copy of Matlab right now, but I think it does
>>> this too.
>>>
>>> On Tue, Nov 4, 2014 at 8:31 PM, Neil Devadasan >> > wrote:
>>>
>>> Thanks
>>>
>>> On Tuesday, November 4, 2014 2:13:37 PM UTC-5, John Myles White
>>> wrote:
>>>
>>> Hi Neil,
>>>
>>> Julia does math the same way that all computers do math.
>>> You're probably coming from another language where a lot of
>>> effort is invested into pretending that computers offer a
>>> closer approximation to abstract mathematics than they
>>> actually do. Those systems have been lying to you.
>>>
>>> Put another way: you just took the red pill by using Julia.
>>>
>>>  -- John
>>>
>>> On Nov 4, 2014, at 11:06 AM, Neil Devadasan
>>>  wrote:
>>>
>>> > julia> f(x::Float64, y::Float64) = 2x + y;
>>> >
>>> > julia> f(10.97,23.9985)
>>> > 45.9385005
>>> >
>>> > The above method execution of function f returns an answer
>>> that I cannot understand.  Can someone clarify?
>>> >
>>> > Thank you.
>>>
>>>
>>>
>>
>>


Re: [julia-users] Re: some problems updating to latest git

2014-11-04 Thread Isaiah Norton
Please see https://github.com/joyent/libuv/issues/1478
On Nov 4, 2014 9:26 AM, "Neal Becker"  wrote:

> Just updated to release-0.3, and did make clean.  But now I don't get
> far at all:
>
> make
>  /bin/sh ./config.status
> config.status: creating libuv.pc
> config.status: creating Makefile
> config.status: executing depfiles commands
> config.status: executing libtool commands
>   GEN  include/uv-dtrace.h
> /usr/bin/dtrace invalid option -xnolibs
>
>
> Isaiah Norton wrote:
>
> > Master has some major breaking changes right now and it will take some
> time
> > to settle things out. People should use 'git checkout release-0.3' for
> the
> > stable branch unless they are working on/with master.
> > On Nov 4, 2014 8:10 AM, "Neal Becker"
> >  wrote:
> >
> >> After playing with julia a bit some weeks ago, I attempted to update to
> the
> >> latest git, but have some problems:
> >>
> >> 1. On startup:
> >> julia> ERROR: String not defined
> >>
> >> 2. Now let's update:
> >>
> >> julia> Pkg.update()
> >> INFO: Updating METADATA...
> >> INFO: Updating cache of SHA...
> >> INFO: Updating cache of ZMQ...
> >> INFO: Updating cache of Nettle...
> >> INFO: Updating cache of PyCall...
> >> INFO: Updating cache of JSON...
> >> INFO: Updating cache of BinDeps...
> >> INFO: Updating cache of PyPlot...
> >> INFO: Updating cache of URIParser...
> >> INFO: Updating cache of FixedPointNumbers...
> >> INFO: Updating cache of IJulia...
> >> INFO: Updating cache of Color...
> >> INFO: Updating cache of ArrayViews...
> >> INFO: Updating Cxx...
> >> INFO: Computing changes...
> >> INFO: Cloning cache of Compat from git://
> >> github.com/JuliaLang/Compat.jl.git
> >> INFO: Cloning cache of LaTeXStrings from
> >> git://github.com/stevengj/LaTeXStrings.jl.git
> >> INFO: Upgrading ArrayViews: v0.4.6 => v0.4.8
> >> INFO: Upgrading BinDeps: v0.3.3 => v0.3.6
> >> INFO: Upgrading Color: v0.3.4 => v0.3.10
> >> INFO: Installing Compat v0.1.0
> >> INFO: Upgrading FixedPointNumbers: v0.0.2 => v0.0.4
> >> INFO: Upgrading IJulia: v0.1.14 => v0.1.15
> >> INFO: Upgrading JSON: v0.3.7 => v0.3.9
> >> INFO: Installing LaTeXStrings v0.1.0
> >> INFO: Upgrading Nettle: v0.1.4 => v0.1.6
> >> INFO: Upgrading PyCall: v0.4.8 => v0.4.10
> >> INFO: Upgrading PyPlot: v1.3.0 => v1.4.0
> >> INFO: Upgrading SHA: v0.0.2 => v0.0.3
> >> INFO: Upgrading URIParser: v0.0.2 => v0.0.3
> >> INFO: Upgrading ZMQ: v0.1.13 => v0.1.14
> >> INFO: Building Nettle
> >>
> >> WARNING: deprecated syntax "{}" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:103.
> >> Use "[]" instead.
> >>
> >> WARNING: deprecated syntax "{}" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:104.
> >> Use "[]" instead.
> >>
> >> WARNING: deprecated syntax "(String=>String)[]" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:146.
> >> Use "Dict{String,String}()" instead.
> >>
> >> WARNING: deprecated syntax "(String=>String)[]" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:147.
> >> Use "Dict{String,String}()" instead.
> >>
> >> WARNING: deprecated syntax "(String=>String)[]" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:148.
> >> Use "Dict{String,String}()" instead.
> >>
> >> WARNING: deprecated syntax "(String=>String)[]" at
> >> /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl:149.
> >> Use "Dict{String,String}()" instead.
> >> ===[ ERROR: Nettle
> >> ]
> >>
> >> String not defined
> >> while loading /home/nbecker/.julia/v0.4/BinDeps/src/BinDeps.jl, in
> >> expression
> >> starting on line 45
> >> while loading /home/nbecker/.julia/v0.4/Nettle/deps/build.jl, in
> expression
> >> starting on line 1
> >>
> >>
> >>
>
> 
> >> INFO: Building ZMQ
> >> =[ ERROR: ZMQ
> >> ]=
> >>
> >> @setup not defined
> >> while loading /home/nbecker/.julia/v0.4/ZMQ/deps/build.jl, in expression
> >> starting on line 4
> >>
> >>
> >>
>
> 
> >> INFO: Building IJulia
> >> Found IPython version 2.1.0 ... ok.
> >> Creating julia profile in IPython...
> >> ===[ ERROR: IJulia
> >> ]
> >>
> >> String not defined
> >> while loading /home/nbecker/.julia/v0.4/IJulia/deps/build.jl, in
> expression
> >> starting on line 28
> >>
> >>
> >>
>
> 
> >>
> >> [ BUILD ERRORS
> >> ]
> >>
> >> WARNING: IJulia, Nettle and ZMQ had build errors.
> >>
> >>  - packages with build errors remain installed in
> /home/nbecker/.julia/v0.4
> >>  - build a package and all its dependencies with `Pkg.build(pkg)`
> >>  - build a single package by running its `deps/build.jl` script
> >>
> >>
> >>
>
> ==

Re: [julia-users] inv(::Symmetric), slow

2014-11-04 Thread Tony Kelman
There are several chapters of Trefethen, or Demmel, on these exact topics. 
Just have to translate their pseudocode or Matlab addenda into Julia.


On Tuesday, November 4, 2014 5:41:55 PM UTC-8, Stefan Karpinski wrote:
>
> Ah, good to know. There's so much depth here. This could be a chapter or 
> two of a book on effective numerical analysis in Julia :-)
>
> On Wed, Nov 5, 2014 at 2:33 AM, Andreas Noack  > wrote:
>
>> Yes and no. The problem is that small floating point noise destroys 
>> symmetry/Hermitianity and therefore factorize concludes that the matrix is 
>> not positive definite (I know about the other definition). If you construct 
>> your positive definite matrix by A'A, then Julia makes it exactly 
>> symmetric/Hermitian, but if you do e.g. A'*Diagonal(rand(size(A,1)))*A then 
>> your matrix is still positive definite in infinite precision, but it is 
>> almost never exactly symmetric/Hermitian in finite precision. 
>>
>> Hence it is a good idea to use inv(cholfact(X)) whenever you know that 
>> your matrix should be considered positive definite. This is also much 
>> faster as it bypasses all the checks in factorize.
>>
>> 2014-11-04 20:22 GMT-05:00 Stefan Karpinski > >:
>>
>> So this works as desired if you just do inv(factorize(X))?
>>>
>>> On Wed, Nov 5, 2014 at 1:59 AM, Andreas Noack >> > wrote:
>>>
 factorize(Matrix) is a full service check, but if you specify structure 
 as David did with Hermitian, factorize dispatches to an appropriate 
 method. 
 In this case it is Bunch-Kaufman. E.g.

 julia> A = randn(3,3);A = A'A;

 julia> typeof(factorize(A))
 Cholesky{Float64} (constructor with 1 method)

 julia> typeof(factorize(Hermitian(A)))
 BunchKaufman{Float64} (constructor with 1 method)

 2014-11-04 19:31 GMT-05:00 Tim Holy >:

 On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
> > I know that factorize checks a bunch of properties of the matrix to 
> be
> > factorized – is positive definiteness not something that it checks? 
> Should
> > it?
>
> Positive definiteness is not a quick check. For example, the matrix 
> `ones(2,2)`
> is symmetric and has all positive entries but is not 
> positive-definite. You
> have to finish computing the Cholesky factorization before you can be 
> sure it's
> positive definite, at which point you should of course just keep that
> factorization.
>
> --Tim
>
> >
> > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <
> andreasno...@gmail.com >
> > wrote:
> > > In your case, I think the right solution is to invert it by
> > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that 
> your
> > > matrix is positive definite and Julia will exploit that to give 
> you a fast
> > > inverse which is also positive definite.
> > >
> > > inv(factorize(Hermitian(pd))) is slow because it uses a 
> factorization that
> > > only exploits symmetry (Bunch-Kaufman), but not positive 
> definiteness
> > > (Cholesky). However, the Bunch-Kaufman factorization preserves 
> symmetry
> > > and
> > > hence the result is positive definite. In contrast, when doing 
> inv(pd)
> > > Julia has no idea that pd is positive definite or even symmetric 
> and hence
> > > it defaults to use the LU factorization which won't preserve 
> symmetry and
> > > therefore isposdef will return false.
> > >
> > > Hope it made sense. I'll probably have to write a section in the
> > > documentation about this soon.
> > >
> > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <
> david.va...@gmail.com >:
> > >
> > > Hello,
> > >
> > >> I am struggling with the fact that covariance matrices computed 
> from a
> > >> precision matrix aren't positive definite, according to 
> `isposdef()`
> > >> (they
> > >> should be according to the maths).
> > >>
> > >> It looks like the culprit is `inv(pd::Matrix)` which does not 
> always
> > >> result in a positive definite matrix if `pd` is one.  This is 
> probably
> > >> because `inv()` is agnostic of the fact that the argument is 
> positive
> > >> definite, and numerical details.
> > >>
> > >> Now I've tried to understand the support for special matrices, 
> and I
> > >> believe that `inv(factorize(Hermitian(pd)))` is the proper way to 
> do
> > >> this.
> > >> Indeed the resulting matrix is positive definite.  However, this
> > >> computation takes a lot longer than inv(), about 5--6 times as 
> slow.  I
> > >> would have expected that the extra symmetry would lead to a more
> > >> efficient
> > >> matrix inversion.
> > >>
> > >> Is there something I'm doing wrong?
> > >>
> > >> Cheers,
> > >>
> > >> ---david
>
>

>>>
>>
>

Re: [julia-users] When or how a distributed array is deallocated?

2014-11-04 Thread Seth Yuan
I did a test on deallocations, and the behavior is not what I expected.

The code I ran is:

@everywhere using Ops

function test_allocation()
  a = drand(1, 1)
  b = 2a
  readline()
end

test_allocation()
@everywhere gc()
readline()

The Ops module defines the '*' operator (which creates a new DArray). I can 
see that the master's memory went down after the gc() call, but the workers' 
memory remained unchanged.


So the question is: "how to really free a DArray? Both master and workers!"


On Thursday, October 16, 2014 10:18:42 PM UTC+8, Amit Murthy wrote:
>
> meant to say  "all references  from all processes are removed"
>
> On Thu, Oct 16, 2014 at 7:47 PM, Amit Murthy  > wrote:
>
>> Yes, it is gc'ed just like any other object, when all references to it 
>> from any process is removed. 
>>
>> On Thu, Oct 16, 2014 at 2:59 PM, Seth Yuan > > wrote:
>>
>>> Now, I know by reading the documentation how a DArray is created, but 
>>> I'm not sure when a DArray is deallocated.
>>>
>>> Is it GC able when the master looses all references of it? Does somebody 
>>> know the answer to this question?
>>>
>>
>>
>

Re: [julia-users] How Julia do math operations

2014-11-04 Thread Steven G. Johnson
On Tuesday, November 4, 2014 8:42:02 PM UTC-5, K leo wrote:
>
> I meant this: to check whether two floating-point variables are equal, I had 
> always had to do something like abs(x-y)<1.e-5 (never simply x==y) in 
> other languages.  I wonder whether checking x==y would be sufficient in 
> Julia?


It depends on what you want.   A quick floating-point tutorial seems to be 
in order here:

Each floating-point value is not a real number with "uncertainty" or 
"random fuzz" or an "interval".   It is just *a rational number.*  The 
floating-point numbers F are a specific subset of the rationals (plus 
oddballs like ∞ and NaN).   *==* simply compares whether two floating-point 
values are the same rational number.  If that is what you want, great!

However, because F is only a *subset* of the rationals, various operations 
incur a roundoff error.   This includes conversion from a string to a 
floating-point value (for binary floating point, F does not include decimal 
fractions like 0.1, so they are rounded to the nearest value in F, in this 
case to 0.1000000000000000055511151231257827021181583404541015625).  It also includes operations like 
x + y: if the result is in F, it is *exact*, but otherwise the results of 
binary operations +/–/*/÷ are rounded to the nearest element of F (this is 
*exact 
rounding*).  And when you do many such operations (e.g. factorizing a 
matrix), the roundoff errors typically *accumulate*, so the final result 
can be very far from the exactly rounded answer.

(A disturbing number of people think that x * 0 != 0 in floating point 
[ignoring Inf/NaN], or that 1.0 + 1.0 != 2.0 in floating point.  Since 
these quantities are represented exactly in F, the computations are exact.)

As a result, if you are comparing two floating point numbers that are the 
result of long calculations, which would be exactly equal in exact 
arithmetic, you typically check whether the relative difference is small: 
|x–y| < ε|x|, where ε is some estimate of the relative accuracy you hope to 
attain.

--SGJ
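The points above can be checked directly at the REPL. A short sketch (the 1e-12 tolerance in the last comparison is an arbitrary choice for illustration, not a universal constant):

```julia
# Decimal literals are rounded to the nearest element of F,
# and the two sides here round to different rationals:
println(0.1 + 0.2 == 0.3)          # false

# Quantities represented exactly in F compute exactly:
println(1.0 + 1.0 == 2.0)          # true
println(1.5 * 0.0 == 0.0)          # true

# For results of longer calculations, compare the relative difference:
x, y = 0.1 + 0.2, 0.3
println(abs(x - y) < 1e-12 * abs(x))   # true: equal up to roundoff
```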


Re: [julia-users] How Julia do math operations

2014-11-04 Thread Steven G. Johnson


On Tuesday, November 4, 2014 8:44:55 PM UTC-5, Stefan Karpinski wrote:
>
> The == check works exactly the same way in Julia as it does in other 
> languages. It's the printing of the numbers that's more precise.
>

Maybe K. Leo was asking whether Julia always prints out enough decimal 
digits from x::Float64 to guarantee that the original floating-point value x 
will be reconstructed from this decimal literal.   I think the answer is 
yes?


Re: [julia-users] How to remove warnings in PyPlot

2014-11-04 Thread Steven G. Johnson


On Saturday, November 1, 2014 2:18:17 PM UTC-4, Daniel Carrera wrote:
>
> AFAICT, this does not happen with matplotlib on its own:
>

It could be that matplotlib is using a different (non-Gtk) backend in 
Python... 


[julia-users] Multiple Plots with Winston

2014-11-04 Thread yaoismyhero
Just getting into plotting data in Julia today. Gravitating towards Winston 
because of the similarity of its syntax to that of Matplotlib.
Anyhow, I did have a question about how to do multiple plots (on separate 
figures) for Winston.

In Matplotlib, I can use 

plt.figure(1)
> ...
> plt.figure(2) 
>

In Winston, I first just tried 

> plot(x-array,y-array)
> plot(x2-array,y2-array)
>

though then the first plot just got overwritten. 

Thanks.  


Re: [julia-users] Multiple Plots with Winston

2014-11-04 Thread K Leo

pw=plot()
oplot(x1, copy(z1), "r")
oplot(x1, copy(z2), "g")
display(pw)
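If the goal is separate figures rather than several curves on one figure, another sketch (assuming Winston's FramedPlot/Curve API; variable names are mine, and this is untested here) builds independent plot objects and displays each:

```julia
using Winston

x = linspace(0, 2pi, 100)

# Each FramedPlot is an independent figure object.
p1 = FramedPlot(title="Figure 1")
add(p1, Curve(x, sin(x), color="red"))

p2 = FramedPlot(title="Figure 2")
add(p2, Curve(x, cos(x), color="green"))

display(p1)   # draws the first window
display(p2)   # draws the second without overwriting the first
```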


On 2014-11-05 13:34, yaoismyh...@gmail.com wrote:
Just getting into plotting data in Julia today. Gravitating towards 
Winston because of the similarity of its syntax to that of Matplotlib.
Anyhow, I did have a question about how to do multiple plots (on 
separate figures) for Winston.


In Matplotlib, I can use

plt.figure(1)
...
plt.figure(2)


In Winston, I first just tried

plot(x-array,y-array)
plot(x2-array,y2-array)


though then the first plot just got overwritten.

Thanks.




Re: [julia-users] Multiple Plots with Winston

2014-11-04 Thread yaoismyhero
Thanks!

On Wednesday, November 5, 2014 1:03:01 AM UTC-5, K leo wrote:
>
> pw=plot() 
> oplot(x1, copy(z1), "r") 
> oplot(x1, copy(z2), "g") 
> display(pw) 
>
>
> On 2014-11-05 13:34, yaois...@gmail.com  wrote: 
> > Just getting into plotting data in Julia today. Gravitating towards 
> > Winston because of the similarity of its syntax to that of Matplotlib. 
> > Anyhow, I did have a question about how to do multiple plots (on 
> > separate figures) for Winston. 
> > 
> > In Matplotlib, I can use 
> > 
> > plt.figure(1) 
> > ... 
> > plt.figure(2) 
> > 
> > 
> > In Winston, I first just tried 
> > 
> > plot(x-array,y-array) 
> > plot(x2-array,y2-array) 
> > 
> > 
> > though then the first plot just got overwritten. 
> > 
> > Thanks. 
>
>

Re: [julia-users] When or how a distributed array is deallocated?

2014-11-04 Thread Amit Murthy
Distributed objects are currently gc'ed asynchronously.

You may force this using an internal function flush_gc_msgs like this:

@everywhere Base.flush_gc_msgs()
@everywhere gc()


The above is an internal function and is not meant for general use. It would
be great if you could open an issue on GitHub requesting a feature to force
gc of distributed objects.
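Putting it together, one sketch of explicitly releasing a DArray on both master and workers (untested; flush_gc_msgs is internal, and drand/DArray here refer to the 0.3-era Base API):

```julia
# Create a distributed array spread across the available workers.
d = drand(1000, 1000)

# ... use d ...

# Drop the last reference on the master, push the pending
# "release reference" messages out to the workers, and then
# trigger a collection on every process.
d = nothing
@everywhere Base.flush_gc_msgs()
@everywhere gc()
```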



On Wed, Nov 5, 2014 at 9:01 AM, Seth Yuan  wrote:

> I did a test on deallocations, and the behavior is not what I expected.
>
> The code I ran is:
>
> @everywhere using Ops
>
> function test_allocation()
>   a = drand(1, 1)
>   b = 2a
>   readline()
> end
>
> test_allocation()
> @everywhere gc()
> readline()
>
> Ops module defined the '*' operator (where a new DArray is created). I can
> see that master's memory went down after the gc() call, but the workers'
> memory remained unchanged.
>
>
> 
> So the question is: "how to really free a DArray? Both master and workers!"
>
>
> On Thursday, October 16, 2014 10:18:42 PM UTC+8, Amit Murthy wrote:
>>
>> meant to say  "all references  from all processes are removed"
>>
>> On Thu, Oct 16, 2014 at 7:47 PM, Amit Murthy  wrote:
>>
>>> Yes, it is gc'ed just like any other object, when all references to it
>>> from any process is removed.
>>>
>>> On Thu, Oct 16, 2014 at 2:59 PM, Seth Yuan  wrote:
>>>
 Now, I know by reading the documentation how a DArray is created, but
 I'm not sure when a DArray is deallocated.

 Is it GC'able when the master loses all references to it? Does
 somebody know the answer to this question?

>>>
>>>
>>


Re: [julia-users] How to remove warnings in PyPlot

2014-11-04 Thread Daniel Carrera
Thanks. Is there a way I could determine if the backend is different?

I just tested a simple plot in Python and Julia. The windows do look very
slightly different. It's minor, but on Python the buttons in the plot
window have a thin frame around them (so they look more like buttons) and
in Julia they do not. That could indicate a different backend. But it would
be nice to confirm.

While doing this, I was reminded of a weird quirk of PyPlot that I was
hoping you might fix:

julia> using PyPlot
INFO: Loading help data...

julia> x = [1 2 3];

julia> plot(x,x)

This gives an empty plot because "x" has the wrong shape. I spent a while
debugging before I remembered. Would it be difficult to change the plot
function so that it transposes arrays when needed?
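In the meantime, a workaround sketch is to hand plot() 1-d vectors, either built with commas or flattened with vec():

```julia
using PyPlot

x = [1 2 3]            # 1x3 row matrix: each column plots as a one-point series
plot(vec(x), vec(x))   # vec() flattens to a length-3 Vector, which plots as a line

y = [1, 2, 3]          # commas construct a Vector directly
plot(y, y)
```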

Cheers,
Daniel.



On 5 November 2014 05:41, Steven G. Johnson  wrote:

>
>
> On Saturday, November 1, 2014 2:18:17 PM UTC-4, Daniel Carrera wrote:
>>
>> AFAICT, this does not happen with matplotlib on its own:
>>
>
> It could be that matplotlib is using a different (non-Gtk) backend in
> Python...
>



-- 
When an engineer says that something can't be done, it's a code phrase that
means it's not fun to do.


Re: [julia-users] Re: linreg(X,Y) not working.

2014-11-04 Thread Ivar Nesje
It looks like you have a broken dependency, and that the problem is in how 
Julia is packaged in the "portage tree".

The quick solution is probably to compile Julia yourself. That will also give you 
the opportunity to get the latest bugfixes on the release-0.3 branch, or the 
latest bugfix release, v0.3.2, which contains backwards-compatible bugfixes.

It would of course be better to contact the "portage tree" maintainers and 
get the Julia package fixed, and make them package the bugfix releases.

Ivar
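
Until the package is fixed, the same fit can also be computed with an ordinary least-squares solve via the backslash operator (a sketch, not the linreg implementation; the variable names are mine):

```julia
# Workaround sketch: fit y ≈ a + b*x by least squares instead of linreg()
x = collect(1.0:12)   # same data as in the original post
y = [5.5, 6.3, 7.6, 8.8, 10.9, 11.79, 13.48, 15.02, 17.77, 20.81, 22.0, 22.99]

A = [ones(length(x)) x]   # design matrix [1 x]
a, b = A \ y              # least-squares solution: intercept a, slope b

println("y ≈ $a + $b * x")   # a ≈ 2.56, b ≈ 1.70 for this data
```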

At 01:56:14 UTC+1 on Wednesday, 5 November 2014, Andreas Noack wrote:
>
> I also cannot reproduce this. Is your input exactly as your have posted it?
>
> 2014-11-04 19:27 GMT-05:00 Arch Call:
>
>> It works fine for me.  I am on Julia 3.2 using Windows 7 with 64 bit 
>> hardware.
>>
>>
>> On Tuesday, November 4, 2014 5:12:52 PM UTC-5, Karel Zapfe wrote:
>>>
>>> Hi fellas:
>>>
>>> I am starting the Julia Tutorial at:
>>> http://forio.com/labs/julia-studio/tutorials
>>>
>>> There is an example for linear regression in Julia. According to the 
>>> tutorial, linear regresion via the function
>>>
>>> linreg(x,y) 
>>>
>>> should just work. But it doesn't. Instead, it gives me a very very very 
>>> cryptic error message:
>>>
>>>  ** On entry to DTZRZF parameter number  7 had an illegal value
>>>
>>> My code is very simple:
>>>
>>> x = float([1:12])
>>> y = [5.5; 6.3; 7.6; 8.8; 10.9; 11.79; 13.48; 15.02; 17.77; 20.81; 22.0; 
>>> 22.99]
>>> a, b = linreg(x,y)
>>>
>>> My Julia is 0.3. 
>>>
>>> I have tried it with bigger matrices (the ones in the tutorial) and give 
>>> me the same error.
>>>
>>> What does it mean?
>>>
>>>
>>>
>