[julia-users] Upgrading/building Images problem

2014-09-25 Thread Adrian Cuthbertson
I had upgraded to 0.4 and then decided to revert to 0.3, with:
git checkout release-0.3
make cleanall
make

I then did a Pkg.update(), but the Images update failed.
Pkg.build("Images") also.
Here's the info...

julia> BinDeps.debug("Images")
INFO: Reading build script...
ERROR: __init__ not defined
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in debug_context at /Users/adrian/.julia/BinDeps/src/debug.jl:54
 in debug at /Users/adrian/.julia/BinDeps/src/debug.jl:59
 in debug at /Users/adrian/.julia/BinDeps/src/debug.jl:65
while loading /Users/adrian/.julia/Images/deps/build.jl, in expression
starting on line 77

julia> versioninfo()
Julia Version 0.3.2-pre+23
Commit db57e41* (2014-09-25 20:26 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin12.5.0)
  CPU: Intel(R) Core(TM)2 Duo CPU T7700  @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Core2)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

Any help appreciated,
Adrian.


Re: [julia-users] Re: PSA: Choosing between Julia 0.3 vs Julia 0.4

2014-09-25 Thread Steve Kelly
Case in point:
https://github.com/JuliaLang/julia/commit/2ef8d31b6b05ed0a8934c7a13f6490939a30b24b

:)

On Thu, Sep 25, 2014 at 11:46 PM, Isaiah Norton 
wrote:

> Checking out the release branch is fine; the 0.3.1 tag is on that branch.
>
> On Thu, Sep 25, 2014 at 11:12 PM, John Myles White <
> johnmyleswh...@gmail.com> wrote:
>
>> I think it's more correct to check out tags since there seems to be work
>> being done progressively on that branch to keep up with backports.
>>
>> Not totally sure, though.
>>
>>  -- John
>>
>> On Sep 25, 2014, at 7:58 PM, David P. Sanders 
>> wrote:
>>
>>
>>
>> On Thursday, 25 September 2014 at 19:59:41 UTC-5, John Myles White
>> wrote:
>>>
>>> I just wanted to suggest that almost everyone on this mailing list
>>> should be using Julia 0.3, not Julia 0.4. Julia 0.4 changes dramatically
>>> from day to day and is probably not safe for most use cases.
>>>
>>> I'd suggest the following criterion: "are you reading the comment
>>> threads for the majority of issues being filed on the Julia GitHub repo?"
>>> If the answer is no, you probably should use Julia 0.3.
>>>
>>
>> Thanks for the nice, clear statement, John!
>>
>> Currently I have been using
>>
>> git checkout release-0.3
>>
>> and compiling from there.
>>
>> Is this the "correct" thing to do?  I notice there is now a v0.3.1 tag.
>>
>> David.
>>
>>>
>>>  -- John
>>>
>>>
>>
>


Re: [julia-users] Re: pycall to use sklearn

2014-09-25 Thread Arshak Navruzyan
Didn't seem to work either

PyError (PyObject_Call) 
AttributeError("'float' object has no attribute 'shape'",)
  File "/Users/arshakn/anaconda/lib/python2.7/site-packages/sklearn/hmm.py",
line 419, in fit
self._init(obs, self.init_params)
  File "/Users/arshakn/anaconda/lib/python2.7/site-packages/sklearn/hmm.py",
line 756, in _init
self.n_features = obs[0].shape[1]

while loading In[295], in expression starting on line 3

 in pyerr_check at /Users/arshakn/.julia/v0.3/PyCall/src/exception.jl:58
 in pycall at /Users/arshakn/.julia/v0.3/PyCall/src/PyCall.jl:85



On Thu, Sep 25, 2014 at 8:37 PM, Steven G. Johnson 
wrote:

>
>
> On Thursday, September 25, 2014 11:10:22 PM UTC-4, Arshak Navruzyan wrote:
>>
>> Jake,
>>
>> Thanks for the suggestion.  When I do that, I get back (anonymous
>> function). What I would like to get back is the actual model (with the new
>> parameters) to be able to do things like this
>>
>
> If you want the return value to be the raw PyObject, do
>
>  pycall(hmmodel["fit"], PyObject, df[:abc])
>
> (This will no longer be necessary in Julia 0.4 once function-calling is
> overloadable.)
>


Re: [julia-users] Re: PSA: Choosing between Julia 0.3 vs Julia 0.4

2014-09-25 Thread Isaiah Norton
Checking out the release branch is fine; the 0.3.1 tag is on that branch.

On Thu, Sep 25, 2014 at 11:12 PM, John Myles White  wrote:

> I think it's more correct to check out tags since there seems to be work
> being done progressively on that branch to keep up with backports.
>
> Not totally sure, though.
>
>  -- John
>
> On Sep 25, 2014, at 7:58 PM, David P. Sanders  wrote:
>
>
>
> On Thursday, 25 September 2014 at 19:59:41 UTC-5, John Myles White
> wrote:
>>
>> I just wanted to suggest that almost everyone on this mailing list should
>> be using Julia 0.3, not Julia 0.4. Julia 0.4 changes dramatically from day
>> to day and is probably not safe for most use cases.
>>
>> I'd suggest the following criterion: "are you reading the comment threads
>> for the majority of issues being filed on the Julia GitHub repo?" If the
>> answer is no, you probably should use Julia 0.3.
>>
>
> Thanks for the nice, clear statement, John!
>
> Currently I have been using
>
> git checkout release-0.3
>
> and compiling from there.
>
> Is this the "correct" thing to do?  I notice there is now a v0.3.1 tag.
>
> David.
>
>>
>>  -- John
>>
>>
>


Re: [julia-users] Re: pycall to use sklearn

2014-09-25 Thread Steven G. Johnson


On Thursday, September 25, 2014 11:10:22 PM UTC-4, Arshak Navruzyan wrote:
>
> Jake,
>
> Thanks for the suggestion.  When I do that, I get back (anonymous 
> function). What I would like to get back is the actual model (with the new 
> parameters) to be able to do things like this
>

If you want the return value to be the raw PyObject, do

 pycall(hmmodel["fit"], PyObject, df[:abc])

(This will no longer be necessary in Julia 0.4 once function-calling is 
overloadable.)
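
A minimal sketch of that call (not from the thread; it assumes PyCall and a
scikit-learn build that still ships sklearn.hmm, and `obs` is a made-up
observation matrix):

using PyCall
@pyimport sklearn.hmm as hmm

hmmodel = hmm.GaussianHMM(3, "full")
obs = randn(100, 1)                               # hypothetical 2-D observations
fitted = pycall(hmmodel["fit"], PyObject, [obs])  # fit() expects a list of 2-D arrays
fitted[:means_]                                   # attributes of the returned PyObject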


Re: [julia-users] Re: PSA: Choosing between Julia 0.3 vs Julia 0.4

2014-09-25 Thread John Myles White
I think it's more correct to check out tags since there seems to be work being 
done progressively on that branch to keep up with backports.

Not totally sure, though.

 -- John

On Sep 25, 2014, at 7:58 PM, David P. Sanders  wrote:

> 
> 
> On Thursday, 25 September 2014 at 19:59:41 UTC-5, John Myles White wrote:
> I just wanted to suggest that almost everyone on this mailing list should be 
> using Julia 0.3, not Julia 0.4. Julia 0.4 changes dramatically from day to 
> day and is probably not safe for most use cases. 
> 
> I'd suggest the following criterion: "are you reading the comment threads for 
> the majority of issues being filed on the Julia GitHub repo?" If the answer 
> is no, you probably should use Julia 0.3. 
> 
> Thanks for the nice, clear statement, John!
> 
> Currently I have been using
> 
> git checkout release-0.3
> 
> and compiling from there.
> 
> Is this the "correct" thing to do?  I notice there is now a v0.3.1 tag.
>  
> David.
> 
>  -- John 
> 



Re: [julia-users] Re: pycall to use sklearn

2014-09-25 Thread Arshak Navruzyan
Jake,

Thanks for the suggestion.  When I do that, I get back (anonymous
function). What I would like to get back is the actual model (with the new
parameters) to be able to do things like this

hmmmodel.means_
hmmmodel.predict([[1 2 3]])

Thanks,

Arshak

On Thu, Sep 25, 2014 at 12:37 PM, Jake Bolewski 
wrote:

> Julia doesn't allow overloading field access for types, so you have to use
> this workaround in pycall -> hmmmodel[:fit](df[:abc])
>
> On Thursday, September 25, 2014 3:22:16 PM UTC-4, Arshak Navruzyan wrote:
>>
>> I am trying to use a sklearn model in Julia.  The first part works ok and
>> I get back the model object but when I try to fit the model, I get an error
>>
>> @pyimport sklearn.hmm as hmm
>>
>> hmmmodel = hmm.GaussianHMM(3, "full")
>>
>> PyObject GaussianHMM(algorithm='viterbi', covariance_type='full', 
>> covars_prior=0.01,
>>   covars_weight=1,
>>   init_params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
>>   means_prior=None, means_weight=0, n_components=3, n_iter=10,
>>   params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
>>   random_state=None, startprob=None, startprob_prior=None, thresh=0.01,
>>   transmat=None, transmat_prior=None)
>>
>>
>> hmmmodel.fit(df[:abc])
>>
>>
>> type PyObject has no field fit
>> while loading In[275], in expression starting on line 1
>>
>>


[julia-users] Re: PSA: Choosing between Julia 0.3 vs Julia 0.4

2014-09-25 Thread David P. Sanders


On Thursday, 25 September 2014 at 19:59:41 UTC-5, John Myles White wrote:
>
> I just wanted to suggest that almost everyone on this mailing list should 
> be using Julia 0.3, not Julia 0.4. Julia 0.4 changes dramatically from day 
> to day and is probably not safe for most use cases. 
>
> I'd suggest the following criterion: "are you reading the comment threads 
> for the majority of issues being filed on the Julia GitHub repo?" If the 
> answer is no, you probably should use Julia 0.3. 
>

Thanks for the nice, clear statement, John!

Currently I have been using

git checkout release-0.3

and compiling from there.

Is this the "correct" thing to do?  I notice there is now a v0.3.1 tag.
 
David.

>
>  -- John 
>
>

[julia-users] Re: Backslash/factorisation with rectangular sparse matrix?

2014-09-25 Thread Peter Simon
Hi, Jona.

I had a very similar question recently, and got some excellent advice on 
this news group. 
 See https://groups.google.com/d/msg/julia-users/--RaT-2QDSI/sOpsPEiQ4F4J

--Peter

On Thursday, September 25, 2014 4:38:02 PM UTC-7, Jona Sassenhagen wrote:
>
> Hey,
> in the context of linear regression/OLS, using Julia 0.4 on Mac OSX 10.9,
> x\y
>
> returns
> ERROR: argument matrix must be square
>  in lufact at linalg/umfpack.jl:110
>  in factorize at linalg/cholmod.jl:1047
>
> Indeed, x is a sparse, rectangular matrix, approx. 10x1000. y is a 
> dense matrix of 10x30, although I would be satisfied solving only one 
> line at a time. I have used similar (albeit larger and even sparser) data 
> in MATLAB and the solution is very fast, on the order of seconds.
>
> Can I use the backslash with sparse rectangular matrices?
>
> Also, the same x will probably be used in more equations in the future, 
> so I am considering storing a factorization. However, chol(x) very quickly 
> fills up all available memory (12GB). If it matters, x is highly collinear.
>
> (If it isn't obvious, I do not have much experience with linear algebra, 
> all of this is very new to me.)
> Thanks,
> Jona
>


[julia-users] PSA: Choosing between Julia 0.3 vs Julia 0.4

2014-09-25 Thread John Myles White
I just wanted to suggest that almost everyone on this mailing list should be 
using Julia 0.3, not Julia 0.4. Julia 0.4 changes dramatically from day to day 
and is probably not safe for most use cases.

I'd suggest the following criterion: "are you reading the comment threads for 
the majority of issues being filed on the Julia GitHub repo?" If the answer is 
no, you probably should use Julia 0.3.

 -- John



[julia-users] Backslash/factorisation with rectangular sparse matrix?

2014-09-25 Thread Jona Sassenhagen
Hey,
in the context of linear regression/OLS, using Julia 0.4 on Mac OSX 10.9,
x\y

returns
ERROR: argument matrix must be square
 in lufact at linalg/umfpack.jl:110
 in factorize at linalg/cholmod.jl:1047

Indeed, x is a sparse, rectangular matrix, approx. 10x1000. y is a 
dense matrix of 10x30, although I would be satisfied solving only one 
line at a time. I have used similar (albeit larger and even sparser) data 
in MATLAB and the solution is very fast, on the order of seconds.

Can I use the backslash with sparse rectangular matrices?

Also, the same x will probably be used in more equations in the future, so 
I am considering storing a factorization. However, chol(x) very quickly 
fills up all available memory (12GB). If it matters, x is highly collinear.

(If it isn't obvious, I do not have much experience with linear algebra, 
all of this is very new to me.)
Thanks,
Jona
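
A hedged sketch of one common workaround (not from the thread): form the
normal equations, which keep everything sparse. The dimensions below are
made up, and this squares the condition number, so it may fail or be
inaccurate when x is highly collinear as described above.

x = sprand(100000, 1000, 0.001)   # hypothetical sparse design matrix
y = rand(100000, 30)              # hypothetical dense right-hand sides
beta = (x' * x) \ (x' * y)        # square sparse solve for all columns of y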


[julia-users] Re: Nested parallelism possible?

2014-09-25 Thread Clemens Heitzinger
Thanks, Viral.  I guess I'll go with the second option for now.

Cheers,
Clemens


On Thursday, September 25, 2014 7:26:37 AM UTC-7, Viral Shah wrote:
>
> The parallelism is flat. You can certainly launch one julia per node, and 
> then use a multi-threaded solver on each node and such. If you want n 
> julias per node, just repeat that node n times.
>
> -viral
>
> On Thursday, September 25, 2014 11:52:23 AM UTC+5:30, Clemens Heitzinger 
> wrote:
>>
>> Hi all:
>>
>> I have a question regarding nested @parallel macros.  Let's say I have a 
>> cluster and I use ssh or Condor to log into the nodes.  Each node has 
>> multiple cores, of course, so on each node I would like to run a few 
>> simulations in parallel or run a parallel solver.
>>
>> I can add workers by using addprocs(["machine1", "machine2"]) or the HTC 
>> cluster manager.  The outer @parallel macro works and I can evaluate a 
>> function on each worker.  This part works fine:
>>
>> @everywhere function locally()
>> ## addprocs(2)
>> result = @parallel (*) for i in [1:2]
>> "[" * readall(`hostname`) * ":" * string(myid()) * "]"
>> end
>> readall(`hostname`) * " :: " * string(myid()) * " :: " * result
>> end
>>
>> @parallel vcat for i in [1:nworkers()]
>> locally()
>> end
>>
>> julia> include("test.jl")
>> 3-element Array{ASCIIString,1}:
>>  "mathe1\n :: 2 :: [mathe1\n:2][mathe3\n:3]"
>>  "mathe3\n :: 3 :: [mathe3\n:3][mathe1\n:2]"
>>  "mathe5\n :: 4 :: [mathe5\n:4][mathe1\n:2]"
>>
>> Then I tried to add workers, hopefully locally, by using addprocs() again 
>> on each worker. But after uncommenting the line addprocs(2), I get an error:
>>
>> julia> include("test.jl")
>> exception on 4: ERROR: assertion failed
>>  in add_workers at multi.jl:243
>>  in addprocs at multi.jl:1237
>>  in locally at /home/math/test.jl:4
>>  in anonymous at no file:12
>>  in anonymous at multi.jl:1279
>>  in anonymous at multi.jl:848
>>  in run_work_thunk at multi.jl:621
>>  in run_work_thunk at multi.jl:630
>>  in anonymous at task.jl:6
>> exception on 3: ERROR: assertion failed
>>  in add_workers at multi.jl:243
>>  in addprocs at multi.jl:1237
>>  in locally at /home/math/test.jl:4
>>  in anonymous at no file:12
>>  in anonymous at multi.jl:1279
>>  in anonymous at multi.jl:848
>>  in run_work_thunk at multi.jl:621
>>  in run_work_thunk at multi.jl:630
>>  in anonymous at task.jl:6
>> exception on 2: ERROR: assertion failed
>>  in add_workers at multi.jl:243
>>  in addprocs at multi.jl:1237
>>  in locally at /home/math/test.jl:4
>>  in anonymous at no file:12
>>  in anonymous at multi.jl:1279
>>  in anonymous at multi.jl:848
>>  in run_work_thunk at multi.jl:621
>>  in run_work_thunk at multi.jl:630
>>  in anonymous at task.jl:6
>> 3-element Array{ErrorException,1}:
>>  ErrorException("assertion failed")
>>  ErrorException("assertion failed")
>>  ErrorException("assertion failed")
>>
>>
>> So my questions are:  Has anybody tried this before?  Is it possible out 
>> of the box?
>>
>> And a related question is: Let's say each node has eight cores.  How can 
>> I tell Julia to use at most 8 workers on each node?
>>
>> Cheers,
>> Clemens
>>
>>

Re: [julia-users] how to create a lagged variable in a DataFrame

2014-09-25 Thread Thibaut Lamadon
Hi all, 

I wrote a bit of code to do something of the same sort. I wrote it to be 
able to get the wage of a given individual in a panel in the same quarter 
in the year before.

With data.table I would setkey(data, individual, year, quarter) and then 
use the amazing J(individual, year-1, quarter).

I wanted something similar, and as fast as possible, here is what I came up 
with:

using DataFrames

I = (1:100) - 1
d = DataFrame(
  t = map(x -> rem(x,10), I),
  n = map(x -> div(x,10), I),
  x = rand(1:100, 100))


# create an index for location
cur = map(hash, zip(d[:n], d[:t]));

# create a hash table (one entry per row)
hh = Dict(cur, 1:length(cur));

# create an index for target
trg = map(hash, zip(d[:n], d[:t]-1));

# map the evaluation
val = d[:x];
d[:tlag] = map(x -> val[get(hh,x,1)], trg);

There are a bunch of shortcuts. For instance, if I don't find the key I fall 
back to the first value, but I should really put an NA. Also, the dictionary 
would ideally be created within n to make it faster.

But this works reasonably well for my needs,

any comments are welcome!

cheers,

t.
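
A hedged sketch of the NA variant mentioned above (not from the post); it
reuses d, hh, trg and val from the code above, and assumes the DataArray(T, n)
constructor that creates an all-NA array:

using DataFrames, DataArrays

lagged = DataArray(Int, nrow(d))   # starts out all-NA
for (i, key) in enumerate(trg)
    j = get(hh, key, 0)            # 0 is a sentinel for "key not found"
    if j != 0
        lagged[i] = val[j]
    end
end
d[:tlag] = lagged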




On Wednesday, 28 May 2014 10:04:25 UTC-5, Milan Bouchet-Valat wrote:
>
> Le mercredi 28 mai 2014 à 15:49 +0100, Florian Oswald a écrit : 
> > oh sorry i forgot to mention that it's a panel, so the lag must be by 
> > "id" as in the example. it's not a simple time series, but the first 
> > lagged entry for each "id" must be NA. 
> I think the easiest way is to write a loop, and since you're using Julia 
> it may well be faster than convoluted vectorized expressions if written 
> carefully. Something like (untested): 
>
> df[1, :ylag] = NA 
> for i in 2:size(df, 1) 
>     if df[i, :id] == df[i - 1, :id] 
>         df[i, :ylag] = df[i - 1, :y] 
>     else 
>         df[i, :ylag] = NA 
>     end 
> end 
>
>
> Regards 
>
> > 
> > On 28 May 2014 15:45, Florian Oswald > 
> wrote: 
> > Hi 
> > 
> > 
> > I'm looking for the easiest way to create a lagged variable in 
> > a dataframe. I'm almost there with this: 
> > 
> > 
> > df = 
> > 
> DataFrame(id=repeat([1:3],inner=[3],outer=[1]),time=repeat([1:3],inner=[1],outer=[3]),y=rand(9))
>  
>
> > by(df2, :id , d -> 
> > DataFrame(time=d[:,:time],y=d[:,:y],Ly=[0,d[1:end-1,:y]])) 
> > 
> > 
> > but of course instead of the 0.0 for the first entry for each 
> > id I would like to have an NA: 
> > 
> > 
> > by(df, :id , d -> 
> > DataFrame(time=d[:,:time],y=d[:,:y],Ly=[NA,d[1:end-1,:y]])) 
> > 
> > 
> > 
> > but I cant figure out how to pass a valid data type here. thsi 
> > says "ERROR: no method convert(Type{Float64}, NAtype)" 
> > 
> > 
> > i tried also this 
> > 
> > 
> > by(df, :id , d -> 
> > 
> DataFrame(time=d[:,:time],y=d[:,:y],Ly=@data([NA,d[1:end-1,:y]]))) 
> > 
> > 
> > 
> > but that gives "ERROR: no method 
> > DataArray{T,N}(DataArray{Float64,1}, Array{Bool,1})". 
> > 
> > 
> > 
> > 
> > 
>
>

[julia-users] Re: problem with HttpServer

2014-09-25 Thread Keith Campbell
https://github.com/JuliaWeb/HttpServer.jl/blob/master/test/runtests.jl

contains an example that may be helpful.  See lines 27-37.
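
For reference, a hedged sketch of the basic HttpServer.jl pattern (taken from
the package README, not necessarily identical to the linked test lines):

using HttpServer

http = HttpHandler() do req::Request, res::Response
    Response("Hello from HttpServer.jl")    # every request gets this body
end
server = Server(http)
run(server, 8000)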



[julia-users] Running julia with HTCondor

2014-09-25 Thread Roshan Chaudhari
I have one central manager and three other nodes. I installed HTCondor and 
Julia on all these machines, and when I check condor_status it lists all 
the nodes. Now I am trying to use Julia with HTCondor, so I ran the 
commands below:

export HOSTNAME=`hostname`

julia> Pkg.add("ClusterManagers")

julia> using ClusterManagers

addprocs(3, cman=HTCManager())
Submitting job(s)...
Waiting for 3 workers: 

but as you can see, the last command just waits for the workers. What am I 
missing here?



Thanks,
Roshan


[julia-users] Re: pycall to use sklearn

2014-09-25 Thread Jake Bolewski
Julia doesn't allow overloading field access for types, so in PyCall you have 
to use the indexing workaround instead: hmmmodel[:fit](df[:abc])

On Thursday, September 25, 2014 3:22:16 PM UTC-4, Arshak Navruzyan wrote:
>
> I am trying to use a sklearn model in Julia.  The first part works ok and 
> I get back the model object but when I try to fit the model, I get an error
>
> @pyimport sklearn.hmm as hmm
>
> hmmmodel = hmm.GaussianHMM(3, "full")
>
> PyObject GaussianHMM(algorithm='viterbi', covariance_type='full', 
> covars_prior=0.01,
>   covars_weight=1,
>   init_params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
>   means_prior=None, means_weight=0, n_components=3, n_iter=10,
>   params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
>   random_state=None, startprob=None, startprob_prior=None, thresh=0.01,
>   transmat=None, transmat_prior=None)
>
>
> hmmmodel.fit(df[:abc])
>
>
> type PyObject has no field fit
> while loading In[275], in expression starting on line 1
>
>
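
A minimal sketch of the workaround in action (not from the thread; it assumes
PyCall and a scikit-learn build that still ships sklearn.hmm, and `obs` is a
made-up observation matrix):

using PyCall
@pyimport sklearn.hmm as hmm

m = hmm.GaussianHMM(3, "full")
obs = randn(50, 1)        # hypothetical 2-D observation matrix
# m.fit(obs)              # fails on Julia 0.3: "type PyObject has no field fit"
m[:fit]([obs])            # indexing with a Symbol (or string) returns a callable
m[:means_]                # fitted attributes are read the same way
m[:predict](obs)          # most likely state sequence for obs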

[julia-users] pycall to use sklearn

2014-09-25 Thread Arshak Navruzyan
I am trying to use a sklearn model in Julia.  The first part works ok and I 
get back the model object but when I try to fit the model, I get an error

@pyimport sklearn.hmm as hmm

hmmmodel = hmm.GaussianHMM(3, "full")

PyObject GaussianHMM(algorithm='viterbi', covariance_type='full', 
covars_prior=0.01,
  covars_weight=1,
  init_params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
  means_prior=None, means_weight=0, n_components=3, n_iter=10,
  params='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ',
  random_state=None, startprob=None, startprob_prior=None, thresh=0.01,
  transmat=None, transmat_prior=None)


hmmmodel.fit(df[:abc])


type PyObject has no field fit
while loading In[275], in expression starting on line 1



[julia-users] Re: Why lufact() is faster than my code?

2014-09-25 Thread Jason Riedy
And Staro Pickle writes:
> I am doing some matrix factorizations like LU, LDL'. I find that the
> julia's lufact() is much faster than my code. I wonder why my code is
> slow and why lufact is fast(is this one question or two?). I use
> vectorization(is that vectorization?). 

Others have given some reasons...  If you want to see contortions
that were (perhaps still are) necessary to be competitive, see:
  https://groups.google.com/forum/#!msg/julia-users/DJRxApmpJmU/NqepNaX5ARIJ

> n=5, a=10
> Matrix to be decomposed:
> 1 0 0 0 a 
> 0 1 0 0 a
> 0 0 1 0 a
> 0 0 0 1 a
> a a a a 1

However, if your matrices always have this form, I would highly
recommend solving a few by hand to notice the pattern.
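
A hedged sketch of that by-hand solution (not from the thread), written in the
Julia 0.3 syntax used elsewhere in this thread: for this arrowhead matrix,
eliminating the last unknown gives a closed-form O(n) solve instead of an
O(n^3) factorization.

function arrowhead_solve(a, r)
    n = length(r)
    # last row: a*sum(z[1:n-1]) + z[n] = r[n], with z[i] = r[i] - a*z[n]
    zn = (r[n] - a * sum(r[1:end-1])) / (1 - a^2 * (n - 1))
    return [r[1:end-1] - a * zn; zn]
end

n = 1000; a = 10.0
B = [eye(n-1) a*ones(n-1); a*ones(1, n-1) 1.0]   # the matrix from the question
r = rand(n)
norm(arrowhead_solve(a, r) - B \ r)              # agreement up to rounding error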



[julia-users] problem with HttpServer

2014-09-25 Thread sagar ram
Hi,

I am running a performance test on Morsel using HttpServer.

Under load I get the message below.

ERROR: connect: address not available (EADDRNOTAVAIL)
 in connect! at socket.jl:582
 in connect at socket.jl:603
 in open_stream at /root/.julia/v0.3/Requests/src/Requests.jl:210
 in open_stream at /root/.julia/v0.3/Requests/src/Requests.jl:203
 in do_request at /root/.julia/v0.3/Requests/src/Requests.jl:564
 in get at /root/.julia/v0.3/Requests/src/Requests.jl:578
 in get at /root/.julia/v0.3/Requests/src/Requests.jl:577
 in anonymous at /root/.julia/v0.3/Morsel/testMorsel.jl:10
 in anonymous at no file:197
 in handle at /root/.julia/v0.3/Meddle/src/Meddle.jl:175
 in anonymous at /root/.julia/v0.3/Morsel/src/Morsel.jl:203
 in on_message_complete at 
/root/.julia/v0.3/HttpServer/src/HttpServer.jl:274
 in on_message_complete at 
/root/.julia/v0.3/HttpServer/src/RequestParser.jl:107


Here is my code, which invokes a GET call. I think I need to close the HTTP 
connection; can you please let me know how and where to do that?

using Morsel
using Requests
using HttpServer

app = Morsel.app()

get(app, "/getKeyID/select") do req, res
#println(req.state)
keyVal =  req.state[:url_params]["q"]

string(get("http://10.101.41.246/solr/PhysicalDeviceInfo/select?q="*keyVal*"&wt=json";).data)
end

start(app, 50300)


[julia-users] Re: Problems with IJulia on Windows 7

2014-09-25 Thread Bill Hart
I've sorted this out. The Windows filesystem is case insensitive, and there 
was a file called C:\users\User\nemo. Apparently the IJulia notebook was 
using C:\users\User as its starting directory, and Julia looks in the pwd 
for any file named Nemo before trying to load the module Nemo from the 
package Nemo.jl.

Unfortunately I still have other issues, related to it not being able to 
find libflint.dll from the IJulia notebook when it does find it from the 
console. I presume that IJulia also plays with LD_LOAD_PATH, or IPython 
notebook uses its own mingw or something like that. I guess I'll figure 
that out in the end. We do modify the LD_LOAD_PATH in Nemo, which we know 
is not the right way to do things.

Bill.

On Thursday, 25 September 2014 17:47:10 UTC+2, Bill Hart wrote:
>
> I have quite a number of difficulties with IJulia on Windows 7.
>
> It complained about WinRPM, LibCurl, Nettle. I managed to resolve them all 
> by following advice on various tickets.
>
> But the following issue still eludes me. I'm loading Nemo in the usual way:
>
> using Nemo.Rings
>
> inside IJulia. It complains:
>
> syntax: extra token "home" after end of expression
>
> while loading C:\Users\user\Nemo, in expression starting on line 3
> while loading In[1], in expression starting on line 1
>
>  in include at boot.jl:245
>  in include_from_node1 at loading.jl:128
>
>
> That file "C:\Users\user\Nemo" doesn't exist.
>
>
> Moreover, the word "home" does not appear anywhere in the Nemo repository (at 
> least not in Julia code). However, Nemo works absolutely fine from the Julia 
> console.
>
>
> After a search, I found that other people had hit similar issues with other 
> packages, but they just said "do Pkg.update()" to fix this, without a hint as 
> to what causes it.
>
>
> It looks like some pattern matching issue. But I can't even figure out where 
> to look for the source of the issue. Can anyone help?
>
>
> Bill.
>
>

[julia-users] Why lufact() is faster than my code?

2014-09-25 Thread Douglas Bates
The short answer is the LINPACK benchmark. For years computers, especially 
supercomputers, have been benchmarked on how fast they perform this particular 
calculation. As a result, the code actually used in the various accelerated 
BLAS implementations is highly tuned.


Re: [julia-users] Why lufact() is faster than my code?

2014-09-25 Thread Andreas Noack
I think there are two major reasons. First of all, the slicing makes copies
of the arrays, so you end up with a lot of temporary arrays. Try @time
lufact!(B1) vs. @time crout(B1) and compare the allocation counts.

The second reason is that lufact calls LAPACK which uses a blocked
algorithm that can benefit from BLAS 3 calls. Your code is doing
matrix-vector multiplications where LAPACK tries to make as many
matrix-matrix multiplications as possible.


Best regards

Andreas Noack

2014-09-25 12:52 GMT-04:00 Staro Pickle :

> Hi.
>
> I am doing some matrix factorizations like LU, LDL'. I find that the
> julia's lufact() is much faster than my code. I wonder why my code is slow
> and why lufact is fast(is this one question or two?). I use
> vectorization(is that vectorization?).
>
> I have to mention that I use *Crout* method which is a little different
> from lufact(). I do no pivoting because the leading principal minors of my
> matrix here are nonsingular(see below). And my U is a *unit upper*
> triangular while lufact() makes L a *unit **lower *triangular. But I
> don't think these differences are responsible for the huge disparity
> between their speed.
>
> Also, I overwrite A to save space. Will this save time or take more?
>
> n=5, a=10
> Matrix to be decomposed:
> 1 0 0 0 a
> 0 1 0 0 a
> 0 0 1 0 a
> 0 0 0 1 a
> a a a a 1
>
> My code:
>
> function create_B1(n,a)
> B = hcat(eye(n-1), a*ones(n-1,1))
> B = vcat(B, hcat(a*ones(1,n-1), 1))
> return(B::Array{Float64,2})
> end
>
> function crout(A::Array{Float64,2})
> n = size(A,1)
> A[1,2:n] = A[1,2:n] / A[1,1]
> for j = 2:n
> A[j:n,j] -= A[j:n,1:j-1] * A[1:j-1,j]
> A[j,j+1:n] = (A[j,j+1:n] - A[j,1:j-1]*A[1:j-1,j+1:n]) / A[j,j]
> end
> return(A::Array{Float64,2})
> end
>
> B1 = create_B1(1000,10)
>
> @printf "My Crout: %0.5fs\n" @elapsed(crout(B1))
> @printf "lufact(): %0.5fs\n" @elapsed(lufact(B1))
>
>
> The result is:
>
> My Crout: 2.63706s
> lufact(): 0.08283s
>
>
>
> This is the environment:
>
> julia> versioninfo()
> Julia Version 0.3.0
> Commit 7681878* (2014-08-20 20:43 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.3.0)
>   CPU: Intel(R) Core(TM) i5-4288U CPU @ 2.60GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
>
>
> Can anyone help me with all above? Thanks a lot!
>
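
A hedged sketch of that timing comparison (not from the thread), run on fresh
copies since both routines overwrite their argument; create_B1 and crout are
as defined in the quoted message:

B1 = create_B1(1000, 10)
@time lufact!(copy(B1))   # in-place LAPACK factorization, few temporaries
@time crout(copy(B1))     # the slicing version allocates many temporary arrays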


[julia-users] Why lufact() is faster than my code?

2014-09-25 Thread Staro Pickle
Hi.

I am doing some matrix factorizations like LU and LDL'. I find that Julia's 
lufact() is much faster than my code. I wonder why my code is slow and why 
lufact is fast (is this one question or two?). I use vectorization (is that 
vectorization?). 

I have to mention that I use the *Crout* method, which is a little different 
from lufact(). I do no pivoting because the leading principal minors of my 
matrix here are nonsingular (see below). And my U is *unit upper* triangular 
while lufact() makes L *unit lower* triangular. But I don't think these 
differences are responsible for the huge disparity between their speeds.

Also, I overwrite A to save space. Will this save time or cost more?

n=5, a=10
Matrix to be decomposed:
1 0 0 0 a 
0 1 0 0 a
0 0 1 0 a
0 0 0 1 a
a a a a 1

My code:

function create_B1(n,a)
    B = hcat(eye(n-1), a*ones(n-1,1))
    B = vcat(B, hcat(a*ones(1,n-1), 1))
    return(B::Array{Float64,2})
end

function crout(A::Array{Float64,2})
    n = size(A,1)
    A[1,2:n] = A[1,2:n] / A[1,1]
    for j = 2:n
        A[j:n,j] -= A[j:n,1:j-1] * A[1:j-1,j]
        A[j,j+1:n] = (A[j,j+1:n] - A[j,1:j-1]*A[1:j-1,j+1:n]) / A[j,j]
    end
    return(A::Array{Float64,2})
end

B1 = create_B1(1000,10)

@printf "My Crout: %0.5fs\n" @elapsed(crout(B1))
@printf "lufact(): %0.5fs\n" @elapsed(lufact(B1))

 
The result is:

My Crout: 2.63706s
lufact(): 0.08283s 



This is the environment:

julia> versioninfo()
Julia Version 0.3.0
Commit 7681878* (2014-08-20 20:43 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.3.0)
  CPU: Intel(R) Core(TM) i5-4288U CPU @ 2.60GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3 


 
Can anyone help me with all above? Thanks a lot!


[julia-users] workers disappear

2014-09-25 Thread Travis Porco
Hi -- I'm new to parallel Julia. I ran a process with -p 31 and it normally 
generates the requisite number of workers. I've noticed that sometimes the 
worker processes are terminated for no obvious reason and with no indication 
as to why; I don't know enough to tell. Can a worker undergo its own SEGV 
and get killed without bringing the whole job down? Wouldn't we see an 
indication of this? All I get is things like this:
Worker 12 terminated.
I realize this is a slightly vague question. I should say this is Version 
0.3.1-pre+405 running on x86_64-linux-gnu.


[julia-users] Problems with IJulia on Windows 7

2014-09-25 Thread Bill Hart
I have quite a number of difficulties with IJulia on Windows 7.

It complained about WinRPM, LibCurl, Nettle. I managed to resolve them all
by following advice on various tickets.

But the following issue still eludes me. I'm loading Nemo in the usual way:

using Nemo.Rings

inside IJulia. It complains:

syntax: extra token "home" after end of expression

while loading C:\Users\user\Nemo, in expression starting on line 3
while loading In[1], in expression starting on line 1

 in include at boot.jl:245
 in include_from_node1 at loading.jl:128


That file "C:\Users\user\Nemo" doesn't exist.


Moreover, the word "home" does not appear anywhere in the Nemo
repository (at least not in Julia code). However, Nemo works
absolutely fine from the Julia console.


After a search, I found that other people had hit similar issues with
other packages, but they just said "do Pkg.update()" to fix this,
without a hint as to what causes it.


It looks like some pattern matching issue. But I can't even figure out
where to look for the source of the issue. Can anyone help?


Bill.


Re: [julia-users] Re: DataArray Concatenation

2014-09-25 Thread John Myles White
Thanks, but these days I'd say Simon Kornblith deserves the credit for what's 
good about DataArrays.

 -- John

On Sep 25, 2014, at 8:11 AM, Li Zhang  wrote:

> I would very much like to contribute, but i am not sure if i had time, some 
> projects are keeping me busy. would to look at it when i have spare time. 
> 
> By the way, john, great work for julia statistics:)
> 
> On Tuesday, September 23, 2014 9:11:20 PM UTC-4, Li Zhang wrote:
> 
> 
> a=@data([NA,3,5,7,NA,3,7])
> i want to do this:
> b=[NA,a[1:end-1]]
> 
> but julia says no convert methods.
> 
> is there anyone know some other ways?



[julia-users] Re: DataArray Concatenation

2014-09-25 Thread Li Zhang
I would very much like to contribute, but I am not sure I have the time; some 
projects are keeping me busy. I would like to look at it when I have spare time.

By the way, John, great work on Julia statistics :)

On Tuesday, September 23, 2014 9:11:20 PM UTC-4, Li Zhang wrote:
>
>
>
> a=@data([NA,3,5,7,NA,3,7])
> i want to do this:
> b=[NA,a[1:end-1]]
>
> but julia says no convert methods.
>
> is there anyone know some other ways?
>
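
A minimal sketch of one way to do it (not from the thread): concatenate
DataArrays, so NA can be written in afterwards without needing a convert
method for plain array literals.

using DataArrays

a = @data([NA, 3, 5, 7, NA, 3, 7])
b = vcat(a[1:1], a[1:end-1])   # right length and element type; b[1] is a placeholder
b[1] = NA                      # DataArrays accept NA assignment directly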


Re: [julia-users] Matlab bench in Julia

2014-09-25 Thread Andreas Noack
It appears that you are not using a fast BLAS. The BLAS and LAPACK entries
in versioninfo() should say libopenblas instead of libblas and liblapack.
You should use

https://launchpad.net/~staticfloat/+archive/ubuntu/juliareleases

as your repo for julia. That should give you Julia with fast linear algebra.

Best regards

Andreas Noack

2014-09-25 10:36 GMT-04:00 Ján Dolinský :

> Hello,
>
> Yes, Andreas point makes sense. Sometimes you may not want threaded linear
> algebra routines.
>
> My current installation reports this:
> versioninfo()
> Julia Version 0.3.0
> Commit 7681878 (2014-08-20 20:43 UTC)
> Platform Info:
>   System: Linux (x86_64-linux-gnu)
>   CPU: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
>   WORD_SIZE: 64
>   BLAS: libblas.so.3
>   LAPACK: liblapack.so.3
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> Am I using the right library ? How do I plug-in the OpenBLAS ? I am under
> Ubuntu 14.4.01.
>
> Thanks,
> Jan
>
> On Thursday, 25 September 2014 at 14:47:12 UTC+2, Andreas Noack wrote:
>>
>> OpenBLAS uses threads by default, but Milan reported that Fedora's
>> maintainer had them disabled. Hence, unless you are using Fedora, you
>> should have threaded OpenBLAS.
>>
>> What is the best setup for fast linear algebra operations ?
>>
>>
>> That question doesn't have a single answer. Often when people want to
>> show performance of linear algebra libraries they run a single routine on a
>> big matrix. In that case you'll often benefit from many threads. However,
>> in many applications you solve smaller problems many times. In this case,
>> many threads can actually be a problem and you could be better off with
>> turning off OpenBLAS threading. So it depends on your problem.
>>
>> Best regards
>>
>> Andreas Noack
>>
>> 2014-09-25 5:52 GMT-04:00 Ján Dolinský:
>>
>>> Hello,
>>>
>>> How do I make Julia to use threaded version of OpenBLAS ? Do I have to
>>> compile using some special option or there is a config file ?
>>> What is the best setup for fast linear algebra operations ?
>>>
>>> Best Regards,
>>> Jan
>>>
>>> On Sunday, 21 September 2014 at 9:50:52 UTC+2, Stephan Buchert wrote:
>>>
 Wow, I have now LU a little bit faster on the latest julia Fedora
 package than on my locally compiled julia:

 julia> versioninfo()
 Julia Version 0.3.0
 Platform Info:
   System: Linux (x86_64-redhat-linux)
   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
   LAPACK: libopenblasp.so.0
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> include("code/julia/bench.jl")
 LU decomposition, elapsed time: 0.07222901 seconds, was 0.123 seconds
 with my julia
 FFT , elapsed time: 0.248571629 seconds

 Thanks for making and  improving the Fedora package

>>>
>>


Re: [julia-users] Matlab bench in Julia

2014-09-25 Thread Ján Dolinský
Hello,

Yes, Andreas' point makes sense. Sometimes you may not want threaded linear 
algebra routines. 

My current installation reports this:
versioninfo()
Julia Version 0.3.0
Commit 7681878 (2014-08-20 20:43 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz
  WORD_SIZE: 64
  BLAS: libblas.so.3
  LAPACK: liblapack.so.3
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

Am I using the right library? How do I plug in OpenBLAS? I am on 
Ubuntu 14.04.1.

Thanks,
Jan

On Thursday, 25 September 2014 at 14:47:12 UTC+2, Andreas Noack wrote:
>
> OpenBLAS uses threads by default, but Milan reported that Fedora's 
> maintainer had them disabled. Hence, unless you are using Fedora, you 
> should have threaded OpenBLAS.
>
> What is the best setup for fast linear algebra operations ?
>
>
> That question doesn't have a single answer. Often when people want to show 
> performance of linear algebra libraries they run a single routine on a big 
> matrix. In that case you'll often benefit from many threads. However, in 
> many applications you solve smaller problems many times. In this case, many 
> threads can actually be a problem and you could be better off with turning 
> off OpenBLAS threading. So it depends on your problem.
>
> Best regards
>
> Andreas Noack
>
> 2014-09-25 5:52 GMT-04:00 Ján Dolinský:
>
>> Hello,
>>
>> How do I make Julia to use threaded version of OpenBLAS ? Do I have to 
>> compile using some special option or there is a config file ?
>> What is the best setup for fast linear algebra operations ?
>>
>> Best Regards,
>> Jan
>>
>> On Sunday, 21 September 2014 at 9:50:52 UTC+2, Stephan Buchert wrote:
>>
>>> Wow, I have now LU a little bit faster on the latest julia Fedora 
>>> package than on my locally compiled julia:
>>>
>>> julia> versioninfo()
>>> Julia Version 0.3.0
>>> Platform Info:
>>>   System: Linux (x86_64-redhat-linux)
>>>   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>>   LAPACK: libopenblasp.so.0
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> include("code/julia/bench.jl")
>>> LU decomposition, elapsed time: 0.07222901 seconds, was 0.123 seconds 
>>> with my julia
>>> FFT , elapsed time: 0.248571629 seconds
>>>
>>> Thanks for making and  improving the Fedora package
>>>
>>
>

[julia-users] Re: Nested parallelism possible?

2014-09-25 Thread Viral Shah
The parallelism is flat. You can certainly launch one julia per node, and 
then use a multi-threaded solver on each node and such. If you want n 
julias per node, just repeat that node n times.

-viral
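
A hedged sketch of "repeat that node n times" (not from the thread): listing
a hostname k times in addprocs starts k workers on that node, assuming
passwordless ssh, which gives per-node parallelism without nested addprocs.

addprocs(["mathe1", "mathe1", "mathe1", "mathe3", "mathe3", "mathe3"])
nworkers()   # 6 workers, three on each node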

On Thursday, September 25, 2014 11:52:23 AM UTC+5:30, Clemens Heitzinger 
wrote:
>
> Hi all:
>
> I have a question regarding nested @parallel macros.  Let's say I have a 
> cluster and I use ssh or Condor to log into the nodes.  Each node has 
> multiple cores, of course, so on each node I would like to run a few 
> simulations in parallel or run a parallel solver.
>
> I can add workers by using addprocs(["machine1", "machine2"]) or the HTC 
> cluster manager.  The outer @parallel macro works and I can evaluate a 
> function on each worker.  This part works fine:
>
> @everywhere function locally()
> ## addprocs(2)
> result = @parallel (*) for i in [1:2]
> "[" * readall(`hostname`) * ":" * string(myid()) * "]"
> end
> readall(`hostname`) * " :: " * string(myid()) * " :: " * result
> end
>
> @parallel vcat for i in [1:nworkers()]
> locally()
> end
>
> julia> include("test.jl")
> 3-element Array{ASCIIString,1}:
>  "mathe1\n :: 2 :: [mathe1\n:2][mathe3\n:3]"
>  "mathe3\n :: 3 :: [mathe3\n:3][mathe1\n:2]"
>  "mathe5\n :: 4 :: [mathe5\n:4][mathe1\n:2]"
>
> Then I tried to add workers, hopefully locally, by using addprocs() again 
> on each worker. But after uncommenting the line addprocs(2), I get an error:
>
> julia> include("test.jl")
> exception on 4: ERROR: assertion failed
>  in add_workers at multi.jl:243
>  in addprocs at multi.jl:1237
>  in locally at /home/math/test.jl:4
>  in anonymous at no file:12
>  in anonymous at multi.jl:1279
>  in anonymous at multi.jl:848
>  in run_work_thunk at multi.jl:621
>  in run_work_thunk at multi.jl:630
>  in anonymous at task.jl:6
> exception on 3: ERROR: assertion failed
>  in add_workers at multi.jl:243
>  in addprocs at multi.jl:1237
>  in locally at /home/math/test.jl:4
>  in anonymous at no file:12
>  in anonymous at multi.jl:1279
>  in anonymous at multi.jl:848
>  in run_work_thunk at multi.jl:621
>  in run_work_thunk at multi.jl:630
>  in anonymous at task.jl:6
> exception on 2: ERROR: assertion failed
>  in add_workers at multi.jl:243
>  in addprocs at multi.jl:1237
>  in locally at /home/math/test.jl:4
>  in anonymous at no file:12
>  in anonymous at multi.jl:1279
>  in anonymous at multi.jl:848
>  in run_work_thunk at multi.jl:621
>  in run_work_thunk at multi.jl:630
>  in anonymous at task.jl:6
> 3-element Array{ErrorException,1}:
>  ErrorException("assertion failed")
>  ErrorException("assertion failed")
>  ErrorException("assertion failed")
>
>
> So my questions are:  Has anybody tried this before?  Is it possible out 
> of the box?
>
> And a related question is: Let's say each node has eight cores.  How can I 
> tell Julia to use at most 8 workers on each node?
>
> Cheers,
> Clemens
>
>

Re: [julia-users] least squares algorithm

2014-09-25 Thread Andreas Noack
That is right but it can be slightly difficult to find the actual
algorithm/LAPACK function due to the initial promotion and allocation steps.

Best regards

Andreas Noack

2014-09-25 9:35 GMT-04:00 Tim Holy :

> Just FYI: you can easily find out the answer for yourself like this,
>
> julia> A = rand(5,4)
> 5x4 Array{Float64,2}:
>  0.248302   0.330028  0.893083  0.390297
>  0.0306052  0.298042  0.343798  0.569406
>  0.935467   0.384105  0.972919  0.716717
>  0.455494   0.351314  0.443435  0.848758
>  0.752286   0.827971  0.590855  0.582407
>
> julia> b = rand(5)
> 5-element Array{Float64,1}:
>  0.0326256
>  0.546855
>  0.425118
>  0.0974509
>  0.535496
>
> julia> @which A\b
> \(A::Union(DenseArray{T,2},SubArray{T,2,A<:DenseArray{T,N},I<:
>
> (Union(Int64,Range{Int64})...,)}),B::Union(SubArray{T,1,A<:DenseArray{T,N},I<:
>
> (Union(Int64,Range{Int64})...,)},DenseArray{T,2},SubArray{T,2,A<:DenseArray{T,N},I<:
> (Union(Int64,Range{Int64})...,)},DenseArray{T,1})) at linalg/dense.jl:409
>
> julia> edit("linalg/dense.jl", 409)
>
> and then look at the function definition.
>
> --Tim
>
> On Thursday, September 25, 2014 06:12:39 AM Davide Lasagna wrote:
> > Thank you Andreas.
> >
> > Sooner or later one needs to have a precise idea of what is going on behind
> > the scenes. Having a reference to the relevant LAPACK function is fine.
> >
> > Davide
>
>


Re: [julia-users] least squares algorithm

2014-09-25 Thread Tim Holy
Just FYI: you can easily find out the answer for yourself like this,

julia> A = rand(5,4)
5x4 Array{Float64,2}:
 0.248302   0.330028  0.893083  0.390297
 0.0306052  0.298042  0.343798  0.569406
 0.935467   0.384105  0.972919  0.716717
 0.455494   0.351314  0.443435  0.848758
 0.752286   0.827971  0.590855  0.582407

julia> b = rand(5)
5-element Array{Float64,1}:
 0.0326256
 0.546855 
 0.425118 
 0.0974509
 0.535496 

julia> @which A\b
\(A::Union(DenseArray{T,2},SubArray{T,2,A<:DenseArray{T,N},I<:
(Union(Int64,Range{Int64})...,)}),B::Union(SubArray{T,1,A<:DenseArray{T,N},I<:
(Union(Int64,Range{Int64})...,)},DenseArray{T,2},SubArray{T,2,A<:DenseArray{T,N},I<:
(Union(Int64,Range{Int64})...,)},DenseArray{T,1})) at linalg/dense.jl:409

julia> edit("linalg/dense.jl", 409)

and then look at the function definition.

--Tim

On Thursday, September 25, 2014 06:12:39 AM Davide Lasagna wrote:
> Thank you Andreas.
> 
> Sooner or later one needs to have a precise idea of what is going on behind
> the scenes. Having a reference to the relevant LAPACK function is fine.
> 
> Davide



Re: [julia-users] least squares algorithm

2014-09-25 Thread Davide Lasagna
Thank you Andreas. 

Sooner or later one needs to have a precise idea of what is going on behind the 
scenes. Having a reference to the relevant LAPACK function is fine.

Davide



Re: [julia-users] Matlab bench in Julia

2014-09-25 Thread Milan Bouchet-Valat
Le jeudi 25 septembre 2014 à 08:47 -0400, Andreas Noack a écrit :
> OpenBLAS uses threads by default, but Milan reported that Fedora's
> maintainer had them disabled. Hence, unless you are using Fedora, you
> should have threaded OpenBLAS.
> 
No, actually as I said above it was my mistake, and it's fixed for a few
days in the Copr packages. Now you get threads by default with RPM
packages on Fedora.


Regards



Re: [julia-users] least squares algorithm

2014-09-25 Thread Andreas Noack
Hi Davide

Unfortunately the documentation is not correct. A\b for least squares
problems uses a pivoted QR factorization, identical to the algorithm you can
find in LAPACK's xgelsy. I'll update the documentation.

Best regards

Andreas Noack

2014-09-25 4:05 GMT-04:00 Davide Lasagna :

> Hi,
>
> Is there a reference to the algorithm used for solution of least-squares
> problems like A\b, with A \in R^{m \times n} and b \in R^m ? Documentation
> says it uses a decomposition to bidiagonal form, but it would be nice to
> have a more precise reference to that.
>
> Thanks,
>
> Davide
>
>
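
A hedged sketch checking that statement (not from the thread; the pivot
keyword follows the Julia 0.3 syntax of the era):

A = rand(100, 5)
b = rand(100)
x1 = A \ b                        # least-squares solution via backslash
x2 = qrfact(A, pivot=true) \ b    # explicit pivoted QR, as in LAPACK's xgelsy
norm(x1 - x2)                     # should be on the order of machine epsilon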


Re: [julia-users] Matlab bench in Julia

2014-09-25 Thread Andreas Noack
OpenBLAS uses threads by default, but Milan reported that Fedora's
maintainer had them disabled. Hence, unless you are using Fedora, you
should have threaded OpenBLAS.

What is the best setup for fast linear algebra operations ?


That question doesn't have a single answer. Often when people want to show
performance of linear algebra libraries they run a single routine on a big
matrix. In that case you'll often benefit from many threads. However, in
many applications you solve smaller problems many times. In this case, many
threads can actually be a problem and you could be better off with turning
off OpenBLAS threading. So it depends on your problem.

Best regards

Andreas Noack

2014-09-25 5:52 GMT-04:00 Ján Dolinský :

> Hello,
>
> How do I make Julia to use threaded version of OpenBLAS ? Do I have to
> compile using some special option or there is a config file ?
> What is the best setup for fast linear algebra operations ?
>
> Best Regards,
> Jan
>
> On Sunday, 21 September 2014 at 9:50:52 UTC+2, Stephan Buchert wrote:
>
>> Wow, I have now LU a little bit faster on the latest julia Fedora package
>> than on my locally compiled julia:
>>
>> julia> versioninfo()
>> Julia Version 0.3.0
>> Platform Info:
>>   System: Linux (x86_64-redhat-linux)
>>   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>>   LAPACK: libopenblasp.so.0
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> include("code/julia/bench.jl")
>> LU decomposition, elapsed time: 0.07222901 seconds, was 0.123 seconds
>> with my julia
>> FFT , elapsed time: 0.248571629 seconds
>>
>> Thanks for making and  improving the Fedora package
>>
>
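
A hedged sketch of acting on that advice from within Julia 0.3 (not from the
thread; blas_set_num_threads and CPU_CORES are the Base names of that era,
later moved to BLAS.set_num_threads):

blas_set_num_threads(1)           # serial BLAS while solving many small problems
# ... many small solves ...
blas_set_num_threads(CPU_CORES)   # threaded BLAS again for one large factorization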


Re: [julia-users] Strange vector operation

2014-09-25 Thread Andreas Noack
>
> come true, that is  1/x  operates as element wise division.


No! If you want elementwise operations and broadcasting then you should
use the dot version. It is much clearer and it is only one keystroke more.
Division and multiplication by a scalar are not ambiguous, but division
by a vector is, as your own example shows. Addition and subtraction were
allowed to behave inconsistently because there is no ambiguity in those
operations.

Best regards

Andreas Noack

2014-09-25 8:10 GMT-04:00 Ivar Nesje :

> I think there was a problem with a 0.4-prerelease version that got pushed
> out to juliareleases. Seems like it has been fixed now, but we are probably
> going to get more reports on the topic when time passes, and people give up
> trying to find out why their packages is missing and Gadfly is broken.
>
> On Thursday, 25 September 2014 at 13:09:32 UTC+2, Kaj Wiik wrote:
>
>> It looks like reinstalling julia gives 0.3.1 back:
>>
>> $ sudo apt-get --purge remove julia
>> $ sudo apt-get install julia
>>
>> $ julia
>>_
>>_   _ _(_)_ |  A fresh approach to technical computing
>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>_ _   _| |_  __ _   |  Type "help()" for help.
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
>> |__/   |  x86_64-linux-gnu
>>
>>
>> Kaj
>>
>>
>>
>>
>>
>>
>> On Thursday, September 25, 2014 2:00:59 PM UTC+3, Kaj Wiik wrote:
>>>
>>> I downloaded the package from releases PPA, extracted it and executed
>>> julia binary:
>>>
>>> /tmp/foo/usr/bin$ ./julia
>>>
>>>_
>>>_   _ _(_)_ |  A fresh approach to technical computing
>>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>>_ _   _| |_  __ _   |  Type "help()" for help.
>>>   | | | | | | |/ _` |  |
>>>   | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
>>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
>>> |__/   |  x86_64-linux-gnu
>>>
>>>
>>> I.e. the deb package should be OK.
>>>
>>> Hmm...?
>>>
>>> Kaj
>>>
>>> On Thursday, September 25, 2014 1:13:36 PM UTC+3, Tim Holy wrote:

 Sounds like a bug. @staticfloat?

 --Tim

 On Thursday, September 25, 2014 03:01:58 AM Kaj Wiik wrote:
 > I was surprised like Hans (Ubuntu 14.04):
 >
 > dpkg -l julia
 > i  julia   0.3.1~trusty amd64high-performance
 programming
 > langua
 >
 >
 > /etc/apt/sources.list
 > deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty
 main
 > deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu
 trusty
 > main
 >
 > Still
 > julia> versioninfo()
 > Julia Version 0.4.0-dev+543
 > Commit c79e349 (2014-09-11 13:47 UTC)
 >
 > In fact I noticed this only after reading Hans' email
 >
 > Kaj
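
A minimal sketch of the distinction being discussed (not from the thread):

x = [1.0, 2.0, 4.0]
2 * x      # scalar times vector: unambiguous, no dot needed
x / 2      # scalar denominator: elementwise, gives [0.5, 1.0, 2.0]
1 ./ x     # vector denominator: use the dot, gives [1.0, 0.5, 0.25]
x ./ x     # elementwise division of two vectors also needs the dot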




Re: [julia-users] Strange vector operation

2014-09-25 Thread Ivar Nesje
I think there was a problem with a 0.4-prerelease version that got pushed 
out to juliareleases. Seems like it has been fixed now, but we are probably 
going to get more reports on the topic as time passes and people give up 
trying to find out why their packages are missing and Gadfly is broken. 

On Thursday, 25 September 2014 at 13:09:32 UTC+2, Kaj Wiik wrote:
>
> It looks like reinstalling julia gives 0.3.1 back:
>
> $ sudo apt-get --purge remove julia
> $ sudo apt-get install julia
>
> $ julia
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
> |__/   |  x86_64-linux-gnu
>
>
> Kaj
>
>
>
>
>
>
> On Thursday, September 25, 2014 2:00:59 PM UTC+3, Kaj Wiik wrote:
>>
>> I downloaded the package from releases PPA, extracted it and executed 
>> julia binary:
>>
>> /tmp/foo/usr/bin$ ./julia
>>
>>_
>>_   _ _(_)_ |  A fresh approach to technical computing
>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>_ _   _| |_  __ _   |  Type "help()" for help.
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
>> |__/   |  x86_64-linux-gnu
>>
>>
>> I.e. the deb package should be OK. 
>>
>> Hmm...?
>>
>> Kaj
>>
>> On Thursday, September 25, 2014 1:13:36 PM UTC+3, Tim Holy wrote:
>>>
>>> Sounds like a bug. @staticfloat? 
>>>
>>> --Tim 
>>>
>>> On Thursday, September 25, 2014 03:01:58 AM Kaj Wiik wrote: 
>>> > I was surprised like Hans (Ubuntu 14.04): 
>>> > 
>>> > dpkg -l julia 
>>> > i  julia   0.3.1~trusty amd64high-performance 
>>> programming 
>>> > langua 
>>> > 
>>> > 
>>> > /etc/apt/sources.list 
>>> > deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty 
>>> main 
>>> > deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu 
>>> trusty 
>>> > main 
>>> > 
>>> > Still 
>>> > julia> versioninfo() 
>>> > Julia Version 0.4.0-dev+543 
>>> > Commit c79e349 (2014-09-11 13:47 UTC) 
>>> > 
>>> > In fact I noticed this only after reading Hans' email 
>>> > 
>>> > Kaj 
>>>
>>>

[julia-users] Re: @grisu_ccall not defined when using Gadfly

2014-09-25 Thread Ivar Nesje
Just copying @grisu_ccall is a good temporary fix, but as Julia is no 
longer building and distributing the double-conversion library, it will not 
work everywhere. It will work in lots of places though, and working in lots 
of places is better than not working anywhere.

Ivar

On Thursday, 25 September 2014 at 09:41:29 UTC+2, xiong...@gmail.com wrote:
>
> On the Gadfly website it also says the build failed on Julia 0.4.0:
> http://pkg.julialang.org/?pkg=Gadfly&ver=nightly
>
> I finally fixed this problem by brutally copying the @grisu_ccall macro from 
> Julia 0.3 into the Gadfly file format.jl.
>
>
> On Wednesday, September 24, 2014 9:58:43 AM UTC+2, xiong...@gmail.com 
> wrote:
>>
>> I updated Julia to the newest development version yesterday.
>>
>> Julia Version 0.4.0-dev+728
>> Commit f7172d3* (2014-09-22 12:08 UTC)
>> Platform Info:
>>   System: Linux (x86_64-redhat-linux)
>>   CPU: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> However, when I using Gadfly then, a error is showed:
>>
>>
>> using Gadfly
>>
>> @grisu_ccall not defined
>> while loading /home/JXiong/.julia/v0.4/Gadfly/src/format.jl, in expression 
>> starting on line 56
>> while loading /home/JXiong/.julia/v0.4/Gadfly/src/Gadfly.jl, in expression 
>> starting on line 50
>> while loading In[1], in expression starting on line 1
>>
>>  in include at ./boot.jl:246
>>  in include_from_node1 at ./loading.jl:128
>>  in include at ./boot.jl:246
>>  in include_from_node1 at ./loading.jl:128
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:51
>>
>>
>> I tried Pkg.update(), but the problem is the same. Gadfly worked fine 
>> with the first version of Julia 0.4.0 released last month. Was 
>> @grisu_ccall removed in this update?
>>
>

Re: [julia-users] Strange vector operation

2014-09-25 Thread Hans W Borchers
Instead of changing the text in the manual, wouldn't it be reasonable 
to make the line (emphasizing *one*)

  "Some operators without dots operate elementwise anyway when one argument 
   is a scalar. These operators are *, /, \, and the bitwise operators."

come true, that is, have 1/x operate as elementwise division, and let [1]/x 
compute the matrix division:

julia> x = [1,2,3]

julia> 1/x  # *should* work like 1./x
3-element Array{Float64,1}:
 1.0 
 0.5 
 0.33

julia> [1]/x# works that way right now
1x3 Array{Float64,2}:
 0.0714286  0.142857  0.214286


On Thursday, September 25, 2014 1:36:44 AM UTC+2, Andreas Noack wrote:
>
> The manual could possibly be a bit more clear on this, but I don't think 
> much has changed since 0.3.
>
> The forward slash in your first example gets computed as x1/x2=(x2'\x1')' 
> which Julia interprets as an underdetermined system with three right hand 
> sides.
>
> The description in the manual about defaulting to element wise operations 
> is only valid when the denominator argument is the scalar.
>
> I can't comment on the last parsing issue except that I agree that it 
> seems confusing.
>
> Best regards
>
> Andreas Noack
>
> 2014-09-24 19:02 GMT-04:00 Hans W Borchers  >:
>
>> Yes, I know. But the manual says:
>>
>> "Some operators without dots operate elementwise anyway when one argument 
>> is a scalar.
>> These operators are *, /, \, and the bitwise operators."
>>
>>
>> On Thursday, September 25, 2014 12:55:16 AM UTC+2, Leah Hanson wrote:
>>>
>>> No comment on the rest, but the element-wise division operator is `./`, 
>>> which does work:
>>>
>>> ~~~
>>> julia> x1 = [1.0, 1, 1];
>>>
>>> julia> 1.0 ./ x1
>>> 3-element Array{Float64,1}:
>>>  1.0
>>>  1.0
>>>  1.0
>>> ~~~
>>>
>>> -- Leah
>>>
>>>
>

Re: [julia-users] Strange vector operation

2014-09-25 Thread Kaj Wiik
It looks like reinstalling julia gives 0.3.1 back:

$ sudo apt-get --purge remove julia
$ sudo apt-get install julia

$ julia
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
|__/   |  x86_64-linux-gnu


Kaj






On Thursday, September 25, 2014 2:00:59 PM UTC+3, Kaj Wiik wrote:
>
> I downloaded the package from releases PPA, extracted it and executed 
> julia binary:
>
> /tmp/foo/usr/bin$ ./julia
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
> |__/   |  x86_64-linux-gnu
>
>
> I.e. the deb package should be OK. 
>
> Hmm...?
>
> Kaj
>
> On Thursday, September 25, 2014 1:13:36 PM UTC+3, Tim Holy wrote:
>>
>> Sounds like a bug. @staticfloat? 
>>
>> --Tim 
>>
>> On Thursday, September 25, 2014 03:01:58 AM Kaj Wiik wrote: 
>> > I was surprised like Hans (Ubuntu 14.04): 
>> > 
>> > dpkg -l julia 
>> > ii  julia   0.3.1~trusty   amd64   high-performance programming langua 
>> > 
>> > 
>> > /etc/apt/sources.list 
>> > deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 
>> > deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 
>> > 
>> > Still 
>> > julia> versioninfo() 
>> > Julia Version 0.4.0-dev+543 
>> > Commit c79e349 (2014-09-11 13:47 UTC) 
>> > 
>> > In fact I noticed this only after reading Hans' email 
>> > 
>> > Kaj 
>>
>>

Re: [julia-users] Strange vector operation

2014-09-25 Thread Kaj Wiik
I downloaded the package from the releases PPA, extracted it, and executed 
the julia binary:

/tmp/foo/usr/bin$ ./julia

   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
|__/   |  x86_64-linux-gnu


I.e. the deb package should be OK. 

Hmm...?

Kaj

On Thursday, September 25, 2014 1:13:36 PM UTC+3, Tim Holy wrote:
>
> Sounds like a bug. @staticfloat? 
>
> --Tim 
>
> On Thursday, September 25, 2014 03:01:58 AM Kaj Wiik wrote: 
> > I was surprised like Hans (Ubuntu 14.04): 
> > 
> > dpkg -l julia 
> > ii  julia   0.3.1~trusty   amd64   high-performance programming langua 
> > 
> > 
> > /etc/apt/sources.list 
> > deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 
> > deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 
> > 
> > Still 
> > julia> versioninfo() 
> > Julia Version 0.4.0-dev+543 
> > Commit c79e349 (2014-09-11 13:47 UTC) 
> > 
> > In fact I noticed this only after reading Hans' email 
> > 
> > Kaj 
>
>

Re: [julia-users] Re: a good IDE for Julia ? (Julia Studio does not work with Julia v 0.3.0)

2014-09-25 Thread 'Stéphane Laurent' via julia-users
Also Liclipse : 
https://groups.google.com/forum/#!searchin/julia-users/liclipse/julia-users/cw0vLsHTUJk/Y5HO29VjpMQJ


Re: [julia-users] Strange vector operation

2014-09-25 Thread Tim Holy
Sounds like a bug. @staticfloat?

--Tim

On Thursday, September 25, 2014 03:01:58 AM Kaj Wiik wrote:
> I was surprised like Hans (Ubuntu 14.04):
> 
> dpkg -l julia
> ii  julia   0.3.1~trusty   amd64   high-performance programming langua
> 
> 
> /etc/apt/sources.list
> deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main
> deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main
> 
> Still
> julia> versioninfo()
> Julia Version 0.4.0-dev+543
> Commit c79e349 (2014-09-11 13:47 UTC)
> 
> In fact I noticed this only after reading Hans' email
> 
> Kaj



Re: [julia-users] Strange vector operation

2014-09-25 Thread Kaj Wiik
I was surprised like Hans (Ubuntu 14.04):

dpkg -l julia
ii  julia   0.3.1~trusty   amd64   high-performance programming langua 


/etc/apt/sources.list 
deb http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 
deb-src http://ppa.launchpad.net/staticfloat/juliareleases/ubuntu trusty main 

Still
julia> versioninfo()
Julia Version 0.4.0-dev+543
Commit c79e349 (2014-09-11 13:47 UTC)

In fact I noticed this only after reading Hans' email

Kaj



Re: [julia-users] ArrayViews and "end" keyword

2014-09-25 Thread Ján Dolinský
Yes, I was told that in 0.4, [] is planned to return a view. At the moment 
in 0.3 it returns a copy, which is obviously undesirable for larger arrays. 
Thumbs up for 0.4 :).
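
A quick way to check what actually comes back (a sketch along the lines 
Mauro suggests below; whether indexing a view copies depends on ArrayViews' 
getindex, so the typeof call is the test):

using ArrayViews

A = rand(4, 10)
V = view(A, :, 2:size(A,2))   # no copy, just a view into A
W = V[:, end-3:end]           # `end` works here, since V has its own size
typeof(W)                     # Array means a copy was made; a view type means no copy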

On Wednesday, September 24, 2014 15:29:36 UTC+2, Mauro wrote:
>
> > OK, thanks for this info. I wonder whether e.g. view(A, :, 
> > 2:size(A,2))[end-10:end] will create a copy of the data or only a 
> > reference. 
>
> That depends on the implementation of getindex for ArrayView which could 
> do either.  Easy to check though: 
> typeof(view(A, :, 2:size(A,2))[end-10:end]) 
>
> BTW, I think it's planned to make [] return a view by default sometime 
> in the future. 
>
> M 
>
> > In Julia v0.3 bracket expressions like A[:,2:end] will actually 
> > create a copy, which imposes quite an overhead if A is large. 
> > 
> > Jan 
> > 
> > On Wednesday, September 24, 2014 12:21:26 UTC+2, Mauro wrote: 
> >> 
> >> Yes, I think this is a limitation of Julia.  `end` is really just 
> >> syntactic sugar: expressions like `A[1,end]` will be replaced by 
> >> `A[1,size(A,2)]` before being further parsed.  (at least that is my 
> >> understanding.)  If there is no `[]` indexing syntax then `end` cannot 
> >> work`.  Thus to construct the view you cannot use `end` but once you 
> >> have it it works, e.g. `view(A, :, 2:end)[1,end]` 
> >> 
> >> On Wed, 2014-09-24 at 05:42, Ján Dolinský  >> > wrote: 
> >> > Hello, 
> >> > 
> >> > I am using ArrayViews but it seems that statements with "end" keyword 
> >> are 
> >> > not supported e.g. view(A, :, 2:end) throws an error and I have to 
> use 
> >> > view(A, :, 2:size(A,2)). Is this the way it is meant to be ? No "end" 
> >> > keyword with ArrayViews ? 
> >> > 
> >> > Thanks, 
> >> > Jan 
> >> 
> >> 
>
> -- 
>


Re: [julia-users] Matlab bench in Julia

2014-09-25 Thread Ján Dolinský
Hello,

How do I make Julia use the threaded version of OpenBLAS? Do I have to 
compile with a special option, or is there a config file?
What is the best setup for fast linear algebra operations?

Best Regards,
Jan
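
(For reference: with the bundled OpenBLAS no special build option should be 
needed, and the thread count can be set from the environment or at runtime. 
A sketch, assuming Julia 0.3, where blas_set_num_threads should be available 
in Base:)

# alternatively, set OPENBLAS_NUM_THREADS in the shell before starting Julia
blas_set_num_threads(4)       # use 4 OpenBLAS threads for BLAS/LAPACK calls

n = 2000
A = rand(n, n); B = rand(n, n)
@time A * B                   # wall time should drop as threads are added
@time lufact(A)               # LU factorization, as in the benchmark above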

On Sunday, September 21, 2014 9:50:52 UTC+2, Stephan Buchert wrote:
>
> Wow, I have now LU a little bit faster on the latest julia Fedora package 
> than on my locally compiled julia:
>
> julia> versioninfo()
> Julia Version 0.3.0
> Platform Info:
>   System: Linux (x86_64-redhat-linux)
>   CPU: Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblasp.so.0
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia> include("code/julia/bench.jl")
> LU decomposition, elapsed time: 0.07222901 seconds, was 0.123 seconds with 
> my julia
> FFT , elapsed time: 0.248571629 seconds
>
> Thanks for making and  improving the Fedora package
>


Re: [julia-users] Re: a good IDE for Julia ? (Julia Studio does not work with Julia v 0.3.0)

2014-09-25 Thread Ján Dolinský
Very good. Thumbs up!

On Wednesday, September 24, 2014 16:43:36 UTC+2, Stefan Karpinski wrote:
>
> Yes, having a debugger is important and it's being actively worked on.
>
> On Tue, Sep 23, 2014 at 4:03 AM, Ján Dolinský  > wrote:
>
>> Hello,
>>
>> Thanks for another tip. I read a recent post by Stefan Karpinski 
>> concerning a debugger and IDE, and I assume none of these options (LightTable, 
>> SublimeText) has debugging capability, right?
>>
>> Debugging is an important feature :).
>>
>> Thanks,
>> Jan
>>
>>
>>
>> On Saturday, September 20, 2014 9:33:10 UTC+2, colint...@gmail.com 
>> wrote:
>>
>>> My favourite of the IDE options is the Sublime-IJulia package: 
>>> https://github.com/quinnj/Sublime-IJulia
>>>
>>> Cheers,
>>>
>>> Colin
>>>
>>>
>>> On Friday, September 19, 2014 9:25:14 PM UTC+10, Ján Dolinský wrote:

 Thanks a lot for the tip. I'll compile from the source then.

 Regards,
 Jan

 On Friday, September 19, 2014 11:57:47 UTC+2, Uwe Fechner wrote:
>
> I think that this branch is already merged into the master branch:
> https://github.com/forio/julia-studio/tree/julia-0.3-compatibility
>
> On Friday, September 19, 2014 11:54:41 AM UTC+2, Uwe Fechner wrote:
>>
>> If you compile Julia Studio from source it should work with Julia 
>> 0.3. See:
>> https://github.com/forio/julia-studio/issues/241
>>
>> Regards:
>>
>> Uwe
>>
>> On Friday, September 19, 2014 10:58:26 AM UTC+2, Ján Dolinský wrote:
>>>
>>> Hello guys,
>>>
>>> After upgrading to Julia 0.3.0, Julia Studio stopped working (I 
>>> changed the symbolic links in the Julia Studio directory, but nevertheless 
>>> ...). 
>>> Can somebody suggest a workaround? Is it true that Julia Studio will 
>>> not support newer versions of Julia?
>>> What are you guys using now? 
>>>
>>> Thanks a lot,
>>> Jan 
>>>
>>
>

Re: [julia-users] Strange vector operation

2014-09-25 Thread Tim Holy
I don't see the statement you're referring to, or there's been a 
misunderstanding. Regardless: you're probably using "julianightlies," but you 
want "juliareleases."

Here's how I think about it:
"juliareleases" = for users
"julianightlies" = for testing the current development version

Both are used for automated package testing on Travis, so (1) developers who 
use master can find out if they're breaking packages by using 0.4-only 
features, and (2) developers who use 0.3-release can find out if changes in 0.4 
have broken their package. Both are extremely useful.

--Tim

On Wednesday, September 24, 2014 11:31:05 PM Hans W Borchers wrote:
> I wanted to stick with 0.3. I was under the impression (from thread "Re:
> Announcing Julia 0.3.0 final.") that the PPA would change to 0.4 only when
> a kind of prerelease is nearing.
> 
> In a phase of disruptive changes, it's difficult to tell whether the manual
> is not quite accurate or the behavior is new and intended. This will make
> clarifying the manual infeasible for outsiders.
> 
> Will the NEWS.md file immediately document the (disruptive or
> non-disruptive) changes? That would be very helpful, even if the change is
> withdrawn later on.
> Also, every NEWS entry could include a date to make it easier to follow
> the development.
> 
> On Thursday, September 25, 2014 2:44:17 AM UTC+2, Tim Holy wrote:
> > Rather than complain, please help clarify the manual.
> > 
> > Like Andreas said, I doubt much about those operations has changed between
> > 0.3
> > and 0.4. But if you don't want to be surprised by changes in behavior, you
> > might want to stick with 0.3; more disruptive changes are in the works.
> > 
> > --Tim



Re: [julia-users] Package installation directory: dealing with multiple Julia version

2014-09-25 Thread Tim Holy
I do this by defining JULIA_PKGDIR to be the central repository, and then 
defining an alias "myjulia" that starts up as
JULIA_PKGDIR=/my/path julia

--Tim

On Wednesday, September 24, 2014 08:41:33 PM Giulio Valentino Dalla Riva 
wrote:
> The issue concerns Julia's package install directory.
> 
> We successfully installed Julia on the department Linux server ("central
> Julia") and I have a personal installation on a local area ("my Julia").
> At the moment my install directory is ~/.julia/version/ for both mine and
> central Julia.
> Any other user (who may only access central Julia) has an equivalent
> ~/.julia/version/, with a different home.
> 
> I'd like to change the behavior of central Julia so that all packages will
> be installed in a central shared directory (namely,
> "/usr/local/julia/packages/v030").
> But I do not want to change the behavior of my Julia.
> 
> Hence, changing the env JULIA_PKGDIR is not a good idea.
> 
> Do you think it may be possible?



Re: [julia-users] congratulations to Indian dost

2014-09-25 Thread Francesco Bonazzi


On Wednesday, September 24, 2014 5:44:42 PM UTC+2, Isaiah wrote:
>
> Since we're off the ranch anyway, the brochure is a neat read:
>
> http://www.isro.gov.in/pslv-c25/pdf/pslv-c25-brochure.pdf
>
> (the MAR1750 processor is a 16-bit ISA from 1980, implemented in a 
> radiation-hardened package and widely used for spacecraft control)
>


Consider that probes outside of the Earth's magnetic field are heavily 
exposed to cosmic rays, much more than low-orbit satellites. I guess they 
need to build custom hardware to resist that, and the programming is probably 
low level.


[julia-users] least squares algorithm

2014-09-25 Thread Davide Lasagna
Hi, 

Is there a reference to the algorithm used for the solution of least-squares 
problems like A\b, with A \in R^{m \times n} and b \in R^m? The documentation 
says it uses a decomposition to bidiagonal form, but it would be nice to have 
a more precise reference for that.

Thanks, 

Davide
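
(Not a reference, but a sketch of how to compare A\b against explicit 
factorizations; for a full-rank, well-conditioned A all three solutions 
below should agree to rounding error:)

m, n = 10, 3
A = randn(m, n); b = randn(m)

x1 = A \ b             # the least-squares solve in question
x2 = qrfact(A) \ b     # QR-based least squares, done explicitly
x3 = pinv(A) * b       # SVD-based pseudoinverse solution

norm(x1 - x2), norm(x1 - x3)   # both should be tiny, ~1e-15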



[julia-users] Re: @grisu_ccall not defined when using Gadfly

2014-09-25 Thread xiongjieyi
On the Gadfly website it also says the build failed on Julia 0.4.0:
http://pkg.julialang.org/?pkg=Gadfly&ver=nightly

I finally fixed this problem by brutally copying the @grisu_ccall macro from 
Julia 0.3 into the Gadfly file format.jl.


On Wednesday, September 24, 2014 9:58:43 AM UTC+2, xiong...@gmail.com wrote:
>
> I updated julia to the newest development version yesterday.
>
> Julia Version 0.4.0-dev+728
> Commit f7172d3* (2014-09-22 12:08 UTC)
> Platform Info:
>   System: Linux (x86_64-redhat-linux)
>   CPU: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> However, when I then use Gadfly, an error is shown:
>
>
> using Gadfly
>
> @grisu_ccall not defined
> while loading /home/JXiong/.julia/v0.4/Gadfly/src/format.jl, in expression 
> starting on line 56
> while loading /home/JXiong/.julia/v0.4/Gadfly/src/Gadfly.jl, in expression 
> starting on line 50
> while loading In[1], in expression starting on line 1
>
>  in include at ./boot.jl:246
>  in include_from_node1 at ./loading.jl:128
>  in include at ./boot.jl:246
>  in include_from_node1 at ./loading.jl:128
>  in reload_path at loading.jl:152
>  in _require at loading.jl:67
>  in require at loading.jl:51
>
>
> I tried Pkg.update(), but the problem is the same. Gadfly works fine 
> in the first version of Julia 0.4.0 released last month. Was 
> @grisu_ccall removed in this update?
>