[julia-users] Re: Arithmetic with TypePar

2016-06-01 Thread Robert DJ
The problem with not specifying D is type inference. I actually have more 
entries in Foo and would like

type Foo{D}
baz::Array{D}
bar::Array{D+1}
end


I want to use Foo in calculations and for D = 1 I am doing something like

baz + bar[:,1]
baz + bar[:,2]


For D = 2:

baz + bar[:,:,1]
baz + bar[:,:,2]


I *could* instead (for D = 2) move the third dimension to extra columns...
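
For reference, one workaround that fits this (a sketch, untested, assuming 
Float64 entries): carry both dimensions as type parameters and tie them 
together in an inner constructor.

```julia
type Foo{D,N}
    baz::Array{Float64,D}
    bar::Array{Float64,N}
    function Foo(baz, bar)
        N == D + 1 || throw(ArgumentError("expected ndims(bar) == ndims(baz) + 1"))
        new(baz, bar)
    end
end
# Outer constructor infers D and N from the arguments.
Foo{D,N}(baz::Array{Float64,D}, bar::Array{Float64,N}) = Foo{D,N}(baz, bar)

f = Foo(zeros(2), zeros(2, 3))   # D = 1, N = 2; mismatched dims would throw
# Generic slicing along the last dimension of bar, for any D:
# f.baz + f.bar[ntuple(d -> Colon(), D)..., 1]
```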


On Wednesday, June 1, 2016 at 8:56:28 PM UTC+2, Robert DJ wrote:
>
> I have a custom type with a TypePar denoting a dimension and would like to 
> define the following:
>
> type Foo{D}
> bar::Array{D+1}
> end
>
> However, this does not work. As D is only 1 or 2, it would be OK with
>
> type Foo{1}
> bar::Matrix
> end
>
> type Foo{2}
> bar::Array{3}
> end
>
> but unfortunately this isn't working, either. 
>
> Can this problem be solved?
>
> Thanks!
>


Re: [julia-users] how to set the maximal waiting time for ClusterManagers.addprocs_sge()?

2016-06-01 Thread Florian Oswald
Yes, this seems to be the same issue. When my cluster is heavily loaded it
sometimes is so slow that I hit the 60sec limit anyway.

Why don't we just grab the list of nodes from the SGE env variable
PE_HOSTFILE as soon as this becomes available and put that list into a
simple addprocs( machines ) call?
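
A rough sketch of that (untested; assumes the common PE_HOSTFILE layout of
"host nslots queue processor-range" per line):

```julia
machines = ASCIIString[]
for line in open(readlines, ENV["PE_HOSTFILE"])
    fields = split(line)
    host   = ascii(fields[1])
    nslots = parse(Int, fields[2])
    append!(machines, fill(host, nslots))   # one entry per slot on that host
end
addprocs(machines)
```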

On Thursday, 2 June 2016, David van Leeuwen 
wrote:

> Could this be related to this
> 
> ?
>
> On Wednesday, June 1, 2016 at 7:26:50 PM UTC+2, Florian Oswald wrote:
>>
>> i'm having a problem on a cluster where setting up the connections via
>> ClusterManagers.addprocs_sge() takes longer than the 60 second limit. how
>> can I extend that limit? thanks!
>>
>


[julia-users] ANN: DC.jl - automagical linked plots in your IJulia Notebook

2016-06-01 Thread Tim Wheeler
Hello Julia Users,

We are happy to announce DC.jl - a 
package which gives you the power of DC.js 
in your IJulia notebook. The premise is simple: put in a DataFrame and you 
get out a nice set of charts that you can interactively filter. Whenever 
you do so they automatically crossfilter.





The package is up and running. We (three students) put it together for our 
data visualization course. We hope you like it and welcome comments / 
suggestions.





[julia-users] Re: Using pmap in julia

2016-06-01 Thread 'Greg Plowman' via julia-users
You say you get a large number of workers.
Without delving too deep, this seems pretty weird, regardless of other code.
Have you checked the number of workers (using nworkers()) after call to 
addprocs()? 
If you are getting errors and re-run the script, is addprocs() just 
accumulating more workers?
If so, perhaps try rmprocs(workers()) before addprocs()
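
In code form, that check might look like (sketch):

```julia
println(nworkers())    # how many workers are actually attached?
rmprocs(workers())     # drop any workers accumulated from earlier runs
addprocs(3)            # start fresh
```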



[julia-users] Re: Importing Functions on Different Workers

2016-06-01 Thread ABB
That does work.  Thank you very much.

Is it possible that a difference is made by using addprocs() after I have 
the REPL running vs. starting the program as "julia -p N" for some N?  I 
ask because I am (nearly) sure my module was defined the way you suggested 
earlier today and it was giving me the error.
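
For what it's worth, the ordering the module approach relies on would be 
(sketch; the module must be loaded after the workers exist):

```julia
addprocs(3)           # workers first
using DataFrames      # needed on the master
using ProjectModule   # loading it now also makes it known to the workers
```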

On Wednesday, June 1, 2016 at 8:02:26 PM UTC-5, Greg Plowman wrote:
>
> I find that putting everything required on workers into a module is the 
> way to go.
> Then just use using Module (after adding workers)
>
> This works for me (v0.4.5):
>
> ProjectModule.jl:
> module ProjectModule
> using DataFrames
> include("function1.jl")
> export function1
> end
>
>
> function1.jl:
> function function1(input::DataFrame)
> # do something to input
> println("function1 here")
> end
>
> Now test:
> addprocs()
> using DataFrames
> using ProjectModule
> df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
> pmap(function1, fill(df,10))
>
>
>
> On Thursday, June 2, 2016 at 8:24:52 AM UTC+10, ABB wrote:
>
>> Hello - 
>>
>> I have the following problem: I would like to find a good way to import a 
>> collection of user-defined functions across several workers.  Some of my 
>> functions are defined on DataFrames, but "using DataFrames" is not getting 
>> me anywhere on the other workers.
>>
>> I think the problem I am running into may result from some combination of 
>> the rules of scope and the command "using".  
>>
>> Alternatively, given what I want to do, maybe running this with "julia -p 
>> N" is not the best way to make use of N workers in the way I want to.  
>>
>> I open several workers: "julia -p 3"
>>
>> "using DataFrames" - this brings DataFrames into the 'main' worker's 
>> scope (I think), but not into the scope of subordinate workers.
>>
>> "using ProjectModule" - I am trying to load a module across all workers 
>> which contains several functions I have written (maybe this is not the best 
>> way to accomplish this task?)
>>
>> This error is returned:
>>
>> LoadError: LoadError: UndefVarError: DataFrame not defined
>>
>> ProjectModule looks something like
>>
>> module ProjectModule
>>include("function1.jl")
>>
>>export function1
>> end
>>
>> where function1 is defined as
>>
>> function1(input::DataFrame)
>>#do something to input
>> end 
>>
>> I have tried a few things:
>>
>> - Running "@everywhere using DataFrames" from within the main worker 
>> (this has worked once or twice - that is, I can then use function1 on a 
>> different worker - but it isn't consistent)
>>
>> - Opening the workers at the outset using julia -p N -L 
>> ProjectModule.jl... (repeated N times)  I get: "LoadError: UndefVarError: 
>> DataFrames not defined"
>>
>> - I also put "using DataFrames" into the ProjectModule.jl file.  The 
>> program definitely hated that.  (Specifically: I was warned that I was 
>> "overwriting DataFrames".)
>>
>> Is there a better way to load both the DataFrames package and the 
>> functions I have written across a couple of workers?  
>>
>> Thanks!
>>
>> ABB
>>
>

[julia-users] Re: Importing Functions on Different Workers

2016-06-01 Thread 'Greg Plowman' via julia-users
I find that putting everything required on workers into a module is the way 
to go.
Then just use using Module (after adding workers)

This works for me (v0.4.5):

ProjectModule.jl:
module ProjectModule
using DataFrames
include("function1.jl")
export function1
end


function1.jl:
function function1(input::DataFrame)
# do something to input
println("function1 here")
end

Now test:
addprocs()
using DataFrames
using ProjectModule
df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
pmap(function1, fill(df,10))



On Thursday, June 2, 2016 at 8:24:52 AM UTC+10, ABB wrote:

> Hello - 
>
> I have the following problem: I would like to find a good way to import a 
> collection of user-defined functions across several workers.  Some of my 
> functions are defined on DataFrames, but "using DataFrames" is not getting 
> me anywhere on the other workers.
>
> I think the problem I am running into may result from some combination of 
> the rules of scope and the command "using".  
>
> Alternatively, given what I want to do, maybe running this with "julia -p 
> N" is not the best way to make use of N workers in the way I want to.  
>
> I open several workers: "julia -p 3"
>
> "using DataFrames" - this brings DataFrames into the 'main' worker's scope 
> (I think), but not into the scope of subordinate workers.
>
> "using ProjectModule" - I am trying to load a module across all workers 
> which contains several functions I have written (maybe this is not the best 
> way to accomplish this task?)
>
> This error is returned:
>
> LoadError: LoadError: UndefVarError: DataFrame not defined
>
> ProjectModule looks something like
>
> module ProjectModule
>include("function1.jl")
>
>export function1
> end
>
> where function1 is defined as
>
> function1(input::DataFrame)
>#do something to input
> end 
>
> I have tried a few things:
>
> - Running "@everywhere using DataFrames" from within the main worker (this 
> has worked once or twice - that is, I can then use function1 on a different 
> worker - but it isn't consistent)
>
> - Opening the workers at the outset using julia -p N -L 
> ProjectModule.jl... (repeated N times)  I get: "LoadError: UndefVarError: 
> DataFrames not defined"
>
> - I also put "using DataFrames" into the ProjectModule.jl file.  The 
> program definitely hated that.  (Specifically: I was warned that I was 
> "overwriting DataFrames".)
>
> Is there a better way to load both the DataFrames package and the 
> functions I have written across a couple of workers?  
>
> Thanks!
>
> ABB
>


Re: [julia-users] Using pmap in julia

2016-06-01 Thread Martha White
Thank you for the prompt reply!

No, I am not using either open or mmap.

On Wednesday, June 1, 2016 at 12:20:19 PM UTC-4, Stefan Karpinski wrote:
>
> Are you opening files via open or mmap in any of the functions 
> that learningExperimentRun calls?
>
> On Wed, Jun 1, 2016 at 11:42 AM, Martha White  > wrote:
>
>> I am having difficulty understanding how to use pmap in Julia. I am a 
>> reasonably experienced matlab and c programmer. However, I am new to Julia 
>> and to using parallel functions. I am running an experiment with nested for 
>> loops, benchmarking different algorithms. In the inner loop, I am running 
>> the algorithms across multiple trials. I would like to parallelize this 
>> inner loop (as the outer iteration I can easily run as multiple jobs on a 
>> cluster). The code looks like:
>>
>> effNumCores = 3
>> procids = addprocs(effNumCores)
>>
>> # This has to be added so that each run has access to these function 
>> definitions
>> @everywhere include("experimentUtils.jl")
>>
>> # Initialize array of RMSE
>> fill!(runErrors, 0.0);
>>
>> # Split up runs across number of cores
>> outerloop = floor(Int, numRuns / effNumCores)+1
>> r = 1
>> rend = effNumCores
>> for i = 1:outerloop
>> rend = min(r+effNumCores-1, numRuns)
>>
>> # Empty RMSE passed (as Array{Float64}(0,0)), since it is created and 
>> # returned in pmap_errors
>> pmap_errors = pmap(r -> learningExperimentRun(mdp,hordeOfD, stepData, 
>> alpha,lambda,beta, numAgents, numSteps, Array{Float64}(0,0), r), r:rend)
>> for j=1:(rend-r+1)
>> runErrors[:,:,MEAN_IND] += pmap_errors[j]
>> runErrors[:,:,VAR_IND] += pmap_errors[j].^2
>> end
>> r += effNumCores
>> end
>> rmprocs(procids)
>>
>> The function called above is defined in a separate file called 
>> experimentUtils.jl, as
>>
>> function learningExperimentRun(mdp::MDP, hordeOfD::horde, 
>> stepData::transData, alpha::Float64,lambda::Float64, beta::Float64, 
>> numAgents::Int64, numSteps::Int64, RMSE::Array{Float64, 2}, runNum::Int64)
>>   # if runErrors is empty, then initialize; this is empty for parallel 
>> version
>>   if (isempty(RMSE))
>>  RMSE = zeros(Float64,numAgents, numSteps)
>>   else
>> fill!(RMSE, 0.0)
>>   end
>>
>>  srand(runNum)
>>
>>  agentInit(hordeOfD, mdp, alpha, beta,lambda,BETA_ETD)
>>  getLearnerErrors(hordeOfD,mdp, RMSE,1)
>>  mdpStart(mdp,stepData)
>>  for i=2:numSteps
>>mdpStep(mdp,stepData)
>>updateLearners(stepData, mdp, hordeOfD)
>>getLearnerErrors(hordeOfD,mdp, RMSE,i)
>>  end
>>
>>  return RMSE
>> end
>>
>> When I try to run this, I get a large number of workers and get errors 
>> that state that I have too many files open. I believe I must be doing 
>> something seriously wrong. If anyone could help to parallelize this code in 
>> julia, that would be fantastic. I am not tied to pmap, but after reading a 
>> bit, it seemed to be the right function to use.
>>
>>
>> I should further add that I have an additional loop splitting runs over 
>> cores, even though pmap could do that for me. I did this because pmap_errors 
>> then becomes an array of numRuns (which could be 100s). By splitting it up 
>> into loops, the returned pmap_errors has size that is at most the number of 
>> cores. I am hoping that this memory then gets re-used when starting the 
>> next loop over cores.
>>
>> I tried at first avoiding this by using a distributed array for 
>> runErrors. But, this was not clearly documented and so I abandoned that 
>> approach.
>>
>
>

Re: [julia-users] Reading from a TCP socket with quasi-continuous data transfer

2016-06-01 Thread Joshua Jones
On Wednesday, June 1, 2016 at 5:36:44 AM UTC-7, Isaiah wrote:
>
>
> Yes: using multiprocess parallelism instead of tasks. '@everywhere' 
> executes code on other processes that do not share the same address space 
> as the head process and cannot access the same socket. See the section on 
> Tasks in the manual.
>
> (though it probably shouldn't segfault, please file a bug about that)
>

My apologies for not understanding before. I'm certain this is basic 
knowledge for someone familiar with TCP sockets, but TCP and 
parallelization are the two areas of scientific computing with which I'm 
least familiar. I couldn't glean from the documentation how the address 
space of TCP connections was handled in Julia, though the Tasks section did 
help with a REPL solution.

On that note, I think what you're driving at for the REPL is something like 
this...? 
taskHdl(conn) = Task(() -> readbuf(conn.buffer))
function readbuf(buf)
  produce(read(buf, UInt8, nb_available(buf)))
end

julia> @async (eof(conn); println("Connection ready."))
Task (waiting) @0x7f760ee48c90

julia> Connection ready.

julia> append!(rbuf, consume(taskHdl(conn)));
Above, I append to a UInt8 array because raw data must be processed in 
520-byte packets (8-byte char ID + 512-byte packet of encoded data); 
servers *usually* send whole packets, but not *always*. 

That begs the question of how to do this in a function. In 0.5, it seems 
easy: create a RemoteChannel, open the connection on a remote process, dump 
packets to the RemoteChannel via a "while" loop, and return the RemoteChannel. 
Is it really that simple?
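
Something along those lines, perhaps (a 0.5 sketch, untested; the names are 
illustrative, not an established API):

```julia
function packet_channel(host, port, pid = workers()[1])
    chan = RemoteChannel(() -> Channel{Vector{UInt8}}(128))
    @spawnat pid begin
        conn = connect(host, port)
        while isopen(conn)
            put!(chan, read(conn, UInt8, 520))   # one 8 + 512 byte packet
        end
    end
    return chan
end

# consumer: pkt = take!(chan)
```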

That begs the follow-up question: can anything like that be done in 0.4?


Re: [julia-users] Plots.jl bar width and color

2016-06-01 Thread Tom Breloff
I think I implemented 'bar_width' recently. Might depend on backend? You
can always look at the src/args.jl code to get ideas of what is implemented
(until good docs are ready of course)
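
For example, something like this (a sketch; attribute name per the 
src/args.jl pointer above, and backend support may vary):

```julia
using Plots
bar([1, 2, 3], [4, 6, 3], bar_width = 0.5)
```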

On Wednesday, June 1, 2016, Andre Bieler  wrote:

> Great,
>
> what about the width of the bars?
>
>


Re: [julia-users] Plots.jl bar width and color

2016-06-01 Thread Andre Bieler
Great,

what about the width of the bars?



[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread Cedric St-Jean
Apparently, ITA Software (Orbitz) was written nearly entirely in Lisp, with 
0 heap-allocation during runtime to have performance guarantees. It's 
pretty inspiring, in a 
I-crossed-the-Himalayas-barefoot kind of way.

On Wednesday, June 1, 2016 at 5:59:15 PM UTC-4, Páll Haraldsson wrote:
>
> On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>>
>> So for now the best is to build a toy that is equivalent in processing 
>> time to the original and see by myself what I'm able to get.
>> We have many ideas, many theories due to the nature of the GC so the best 
>> is to try.
>>
>> Páll -> Thanks for the links
>>
>
> No problem.
>
> While I did say it would be cool to know of Julia in space, I would hate 
> for the project to fail because of Julia (because of my advice).
>
> I endorse Julia for all kinds of uses, hard real-time (and building 
> operating systems) are where I have doubts.
>
> A. I thought a little more about making a macro @nogc to mark functions, 
> and it's probably not possible. You could I guess for one function, as the 
> macro has access to the AST of it. But what you really want to disallow, is 
> that function calling functions that are not similarly marked. I do not 
> know about metadata on functions and if a nogc-bit could be put in, but 
> even then, in theory couldn't that function be changed at runtime..?
>
> What you would want is that this nogc property is statically checked as I 
> guess D does, but Julia isn't separately compiled by default. Note there is 
> Julia2C, and see
>
> http://juliacomputing.com/blog/2016/02/09/static-julia.html
>
> for gory details on compiling Julia.
>
> I haven't looked, I guess Julia2C does not generate malloc and free, only 
> some malloc substitute in libjulia runtime. That substitute will allocate 
> and run the GC when needed. These are the calls you want to avoid in your 
> code and could maybe grep for.. There is a Lint.jl tool, but as memory 
> allocation isn't an error it would not flag it, maybe it could be an 
> option..
>
> B. One idea I just had (in the shower..), if @nogc is used or just on 
> "gc_disable" (note it is deprecated*), it would disallow allocations (with 
> an exception if tried), not just postpone them, it would be much easier to 
> test if your code uses allocations or calls code that would. Still, you 
> would have to check all code-paths..
>
> C. Ada, or the SPARK subset, might be the go-to language for hard 
> real-time. Rust seems also good, just not as tried. D could also be an 
> option with @nogc. And then there is C and especially C++ that I try to 
> avoid recommending.
>
> D. Do tell if you only need soft real-time, it makes the matter so much 
> simpler.. not just programming language choice..
>
> *
> help?> gc_enable
> search: gc_enable
>
>   gc_enable(on::Bool)
>
>   Control whether garbage collection is enabled using a boolean argument 
> (true for enabled, false for disabled). Returns previous GC state. Disabling
>   garbage collection should be used only with extreme caution, as it can 
> cause memory use to grow without bound.
>
>
>  
>
>>
>> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>>>
>>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:

 If you are prepared to make your code not perform any heap 
 allocations, I don't see a reason why there should be any issue. When I 
 once worked on a very first multi-threading version of Julia I wrote 
 exactly such functions that won't trigger gc since the latter was not 
 thread 
 safe. This can be hard work but I would assume that it's at least not more 
 work than implementing the application in C/C++ (assuming that you have 
 some Julia experience)

>>>
>>> I would really like to know why the work is hard, is it getting rid of 
>>> the allocations, or being sure there are no more hidden in your code? I 
>>> would also like to know then if you can do the same as in D language:
>>>
>>> http://wiki.dlang.org/Memory_Management 
>>>
>>> The most reliable way to guarantee latency is to preallocate all data 
>>> that will be needed by the time critical portion. If no calls to allocate 
>>> memory are done, the GC will not run and so will not cause the maximum 
>>> latency to be exceeded.
>>>
>>> It is possible to create a real-time thread by detaching it from the 
>>> runtime, marking the thread function @nogc, and ensuring the real-time 
>>> thread does not hold any GC roots. GC objects can still be used in the 
>>> real-time thread, but they must be referenced from other threads to prevent 
>>> them from being collected."
>>>
>>> that is would it be possible to make a macro @nogc and mark functions in 
>>> a similar way? I'm not aware that such a macro is available, to disallow. 
>>> There is a macro, e.g. @time, that is not sufficient, that shows GC 
>>> activity, but knowing there was none could have been an accident; if you 
>>> run your code again and memory fills up you see a different result.

[julia-users] Re: random number generation

2016-06-01 Thread David P. Sanders


On Wednesday, June 1, 2016 at 14:24:01 (UTC-4), Michela Di Lullo 
wrote:
>
> How can I do to generate 6 *different* integer random numbers between 1 
> and 14?
>

This is known as "sampling without replacement" and is implemented in 
StatsBase.jl; documentation here:

http://statsbasejl.readthedocs.io/en/latest/sampling.html

Code:

julia> using StatsBase

julia> sample(1:14, 6, replace=false)
6-element Array{Int64,1}:
 10
  1
  8
  5
  6
 13



[julia-users] Importing Functions on Different Workers

2016-06-01 Thread ABB
Hello - 

I have the following problem: I would like to find a good way to import a 
collection of user-defined functions across several workers.  Some of my 
functions are defined on DataFrames, but "using DataFrames" is not getting 
me anywhere on the other workers.

I think the problem I am running into may result from some combination of 
the rules of scope and the command "using".  

Alternatively, given what I want to do, maybe running this with "julia -p 
N" is not the best way to make use of N workers in the way I want to.  

I open several workers: "julia -p 3"

"using DataFrames" - this brings DataFrames into the 'main' worker's scope 
(I think), but not into the scope of subordinate workers.

"using ProjectModule" - I am trying to load a module across all workers 
which contains several functions I have written (maybe this is not the best 
way to accomplish this task?)

This error is returned:

LoadError: LoadError: UndefVarError: DataFrame not defined

ProjectModule looks something like

module ProjectModule
   include("function1.jl")

   export function1
end

where function1 is defined as

function1(input::DataFrame)
   #do something to input
end 

I have tried a few things:

- Running "@everywhere using DataFrames" from within the main worker (this 
has worked once or twice - that is, I can then use function1 on a different 
worker - but it isn't consistent)

- Opening the workers at the outset using julia -p N -L ProjectModule.jl... 
(repeated N times)  I get: "LoadError: UndefVarError: DataFrames not 
defined"

- I also put "using DataFrames" into the ProjectModule.jl file.  The 
program definitely hated that.  (Specifically: I was warned that I was 
"overwriting DataFrames".)

Is there a better way to load both the DataFrames package and the functions 
I have written across a couple of workers?  

Thanks!

ABB


[julia-users] Re: Arithmetic with TypePar

2016-06-01 Thread Cedric St-Jean
Actually, forget what I said about unification, it's not true. 

I too am curious about why the field types are not evaluated once the type 
parameters are known. Maybe it makes inference about the abstract Foo type 
more difficult?

On Wednesday, June 1, 2016 at 6:04:51 PM UTC-4, Cedric St-Jean wrote:
>
> I really doubt that it can be expressed this way, because Julia will do 
> pattern matching/unification on the type of `bar`, and it would have to 
> know that -1 is the inverse of +1 to unify D+1 with the type of the input 
> array. Can you give more context about what you're trying to do? Why can't 
> you have `bar::Array{Any, D}`?
>
> You can also put D inside the constructor
>
> type Foo{E}
> bar::Array{Any, E}
> end
> Foo(D::Int) = Foo(Array{Any, D+1}())
>
> Foo(1)
>
> or use typealias
>
> On Wednesday, June 1, 2016 at 2:56:28 PM UTC-4, Robert DJ wrote:
>>
>> I have a custom type with a TypePar denoting a dimension and would like 
>> to define the following:
>>
>> type Foo{D}
>> bar::Array{D+1}
>> end
>>
> However, this does not work. As D is only 1 or 2, it would be OK with
>>
>> type Foo{1}
>> bar::Matrix
>> end
>>
>> type Foo{2}
>> bar::Array{3}
>> end
>>
>> but unfortunately this isn't working, either. 
>>
>> Can this problem be solved?
>>
>> Thanks!
>>
>

[julia-users] Re: Arithmetic with TypePar

2016-06-01 Thread Cedric St-Jean
I really doubt that it can be expressed this way, because Julia will do 
pattern matching/unification on the type of `bar`, and it would have to 
know that -1 is the inverse of +1 to unify D+1 with the type of the input 
array. Can you give more context about what you're trying to do? Why can't 
you have `bar::Array{Any, D}`?

You can also put D inside the constructor

type Foo{E}
bar::Array{Any, E}
end
Foo(D::Int) = Foo(Array{Any, D+1}())

Foo(1)

or use typealias

On Wednesday, June 1, 2016 at 2:56:28 PM UTC-4, Robert DJ wrote:
>
> I have a custom type with a TypePar denoting a dimension and would like to 
> define the following:
>
> type Foo{D}
> bar::Array{D+1}
> end
>
> However, this does not work. As D is only 1 or 2, it would be OK with
>
> type Foo{1}
> bar::Matrix
> end
>
> type Foo{2}
> bar::Array{3}
> end
>
> but unfortunately this isn't working, either. 
>
> Can this problem be solved?
>
> Thanks!
>


[julia-users] Re: how to set the maximal waiting time for ClusterManagers.addprocs_sge()?

2016-06-01 Thread David van Leeuwen
Could this be related to this 

?

On Wednesday, June 1, 2016 at 7:26:50 PM UTC+2, Florian Oswald wrote:
>
> i'm having a problem on a cluster where setting up the connections via 
> ClusterManagers.addprocs_sge() takes longer than the 60 second limit. how 
> can I extend that limit? thanks!
>


[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread Páll Haraldsson
On Wednesday, June 1, 2016 at 9:40:54 AM UTC, John leger wrote:
>
> So for now the best is to build a toy that is equivalent in processing 
> time to the original and see by myself what I'm able to get.
> We have many ideas, many theories due to the nature of the GC so the best 
> is to try.
>
> Páll -> Thanks for the links
>

No problem.

While I did say it would be cool to know of Julia in space, I would hate for 
the project to fail because of Julia (because of my advice).

I endorse Julia for all kinds of uses, hard real-time (and building 
operating systems) are where I have doubts.

A. I thought a little more about making a macro @nogc to mark functions, 
and it's probably not possible. You could I guess for one function, as the 
macro has access to the AST of it. But what you really want to disallow is 
that function calling functions that are not similarly marked. I do not 
know about metadata on functions and if a nogc-bit could be put in, but 
even then, in theory couldn't that function be changed at runtime..?

What you would want is that this nogc property is statically checked as I 
guess D does, but Julia isn't separately compiled by default. Note there is 
Julia2C, and see

http://juliacomputing.com/blog/2016/02/09/static-julia.html

for gory details on compiling Julia.

I haven't looked, I guess Julia2C does not generate malloc and free, only 
some malloc substitute in libjulia runtime. That substitute will allocate 
and run the GC when needed. These are the calls you want to avoid in your 
code and could maybe grep for.. There is a Lint.jl tool, but as memory 
allocation isn't an error it would not flag it, maybe it could be an 
option..

B. One idea I just had (in the shower..), if @nogc is used or just on 
"gc_disable" (note it is deprecated*), it would disallow allocations (with 
an exception if tried), not just postpone them, it would be much easier to 
test if your code uses allocations or calls code that would. Still, you 
would have to check all code-paths..

C. Ada, or the SPARK subset, might be the go-to language for hard 
real-time. Rust seems also good, just not as tried. D could also be an 
option with @nogc. And then there is C and especially C++ that I try to 
avoid recommending.

D. Do tell if you only need soft real-time, it makes the matter so much 
simpler.. not just programming language choice..

*
help?> gc_enable
search: gc_enable

  gc_enable(on::Bool)

  Control whether garbage collection is enabled using a boolean argument 
(true for enabled, false for disabled). Returns previous GC state. Disabling
  garbage collection should be used only with extreme caution, as it can 
cause memory use to grow without bound.
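
As a concrete illustration of the pattern described above (a sketch only: 
preallocate, then keep the GC off for the critical section):

```julia
buf = zeros(Float64, 10_000)   # preallocate everything up front
gc_enable(false)               # returns the previous GC state
try
    for i in eachindex(buf)    # time-critical loop: in-place, no allocations
        buf[i] = 2.0 * buf[i]
    end
finally
    gc_enable(true)            # always re-enable afterwards
end
```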


 

>
> On Tuesday, May 31, 2016 at 18:44:17 UTC+2, Páll Haraldsson wrote:
>>
>> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>>
>>> If you are prepared to make your code not perform any heap 
>>> allocations, I don't see a reason why there should be any issue. When I 
>>> once worked on a very first multi-threading version of Julia I wrote 
>>> exactly such functions that won't trigger gc since the latter was not thread 
>>> safe. This can be hard work but I would assume that it's at least not more 
>>> work than implementing the application in C/C++ (assuming that you have 
>>> some Julia experience)
>>>
>>
>> I would really like to know why the work is hard, is it getting rid of 
>> the allocations, or being sure there are no more hidden in your code? I 
>> would also like to know then if you can do the same as in D language:
>>
>> http://wiki.dlang.org/Memory_Management 
>>
>> The most reliable way to guarantee latency is to preallocate all data 
>> that will be needed by the time critical portion. If no calls to allocate 
>> memory are done, the GC will not run and so will not cause the maximum 
>> latency to be exceeded.
>>
>> It is possible to create a real-time thread by detaching it from the 
>> runtime, marking the thread function @nogc, and ensuring the real-time 
>> thread does not hold any GC roots. GC objects can still be used in the 
>> real-time thread, but they must be referenced from other threads to prevent 
>> them from being collected."
>>
>> that is would it be possible to make a macro @nogc and mark functions in 
>> a similar way? I'm not aware that such a macro is available, to disallow. 
>> There is a macro, e.g. @time, that is not sufficient, that shows GC 
>> activity, but knowing there was none could have been an accident; if you 
>> run your code again and memory fills up you see a different result.
>>
>> As with D, the GC in Julia is optional. The above @nogc is really the 
>> only thing different that I can think of that is better with their 
>> optional memory management. But I'm no expert on D, and I may not have 
>> looked too closely:
>>
>> https://dlang.org/spec/garbage.html
>>
>>
>>> Tobi
>>>
>>> On Monday, May 30, 2016 at 12:00:13 UTC+2, John leger wrote:

 Hi everyone,

 I am working in ast

[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Ford Ox
You have created a Vector. I want to create a 2D array.

IMO the use of space as hcat is very error prone.
Why don't we use, for example, "," as hcat? After all you don't need "," when 
you have ";". Why should we have two ways of doing exactly the same thing?
Same goes for "for i in 1:10" and "for i = 1:10". Both of these have the exact 
same speed and memory usage, so why do we need both?

On Wednesday, June 1, 2016 at 8:56:06 PM UTC+2, Andreas Lobinger wrote:
>
>
>
> On Wednesday, June 1, 2016 at 8:49:44 PM UTC+2, Ford Ox wrote:
>>
>> When we are on this topic
>>
>> Why does
>> [1 + 2]
>> result in
>> [3]
>> instead of
>> [1 + 2]
>>
>>
> well, in v0.5 with the correct element delimiter
> _
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.5.0-dev+4330 (2016-05-26 09:11 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit 493157e* (6 days old master)
> |__/   |  x86_64-linux-gnu
>
> julia> b = [4,+,3]
> 3-element Array{Any,1}:
>  4 
>   +
>  3 
>
> julia> 
>
>
>
>

Re: [julia-users] Plots.jl bar width and color

2016-06-01 Thread Tom Breloff
You can use any Colors.Colorant, or arrays of them: c = [:blue, RGB(1,0,0),
RGBA(1,0,0,0.2)]
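
Folding in the hex-color question from earlier (a sketch; colorant"..." is 
the string macro from Colors.jl):

```julia
using Plots, Colors
bar([1, 2, 3], [4, 6, 3], c = [:blue, RGB(1, 0, 0), colorant"#BFBFBF"])
```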

On Wed, Jun 1, 2016 at 2:23 PM, Andre Bieler 
wrote:

> How can I set the width of bars in a bar plot?
>
> say for
>
> ```julia
> bar([1,2,3], [4,6,3])
> ```
>
> Is it possible to pick colors that are not defined as
> keywords like :black, :grey, etc.?
>
> Something like #BFBFBF
>
> Cheers,
> Andre
>


Re: [julia-users] random number generation

2016-06-01 Thread cdm

this method does not exclude the possibility that the same integer turns up 
more than once among the six ...

julia> 
rand(1:14, 6)   
  
6-element Array{Int64,1}:
 9
 2
 7
 6
 6
 3
 

On Wednesday, June 1, 2016 at 11:32:34 AM UTC-7, El suisse wrote:
>
> I think like this:
>
> `rand(1:14, 6)`
>
> 2016-06-01 15:24 GMT-03:00 Michela Di Lullo  >:
>
>> How can I do to generate 6 *different* integer random numbers between 1 
>> and 14?
>>
>> Thanks in advance to who will answer, 
>>
>> M.
>>
>>
>
>

Re: [julia-users] random number generation

2016-06-01 Thread cdm

a solution, among many:

sub(randperm(14), 1:6)


enjoy !!!
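
For a plain Vector instead of a SubArray view, the same idea can be written as:

```julia
randperm(14)[1:6]
```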


Re: [julia-users] filter() function edge case, possible bug.

2016-06-01 Thread Anonymous
ok I just filed one.

On Wednesday, June 1, 2016 at 5:16:04 AM UTC-7, Mauro wrote:
>
> Yes, this is a bug: ranges should behave as vectors in read-only 
> situations.  I suspect that this is on the radar of the devs already as 
> there is quite a bit of work happening in this area.  But still filing a 
> bug report, if one does not exist yet, is probably the right thing to do. 
>
> On Wed, 2016-06-01 at 03:26, Anonymous > 
> wrote: 
> > Consider the code: 
> > 
> > filter(n -> true, collect(1:0)) 
> > 
> > this returns a 0-element array.  However if I don't collect the iterator 
> > into an array: 
> > 
> > filter(n -> true, 1:0) 
> > 
> > I get the error 
> > 
> > Error: TypeError: typeassert: expected AbstractArray{Bool, N}, got 
> > Array{Int64, 1} in filter at array.jl:923 
> > 
> > It seems like this should also return a 0-element array, since if the 
> > iterator is non empty, e.g. 
> > 
> > filter(n -> true, 1:2) 
> > 
> > I get a two the element array [1,2]. 
>


Re: [julia-users] random number generation

2016-06-01 Thread Michele Zaffalon
shuffle(collect(1:14))[1:6] but I remember seeing a method that picks n
random elements from an array.

By the way, shuffle(1:14) does not seem to work.

On Wed, Jun 1, 2016 at 8:32 PM, El suisse  wrote:

> I think like this:
>
> `rand(1:14, 6)`
>
> 2016-06-01 15:24 GMT-03:00 Michela Di Lullo :
>
>> How can I do to generate 6 *different* integer random numbers between 1
>> and 14?
>>
>> Thanks in advance to who will answer,
>>
>> M.
>>
>>
>
>


[julia-users] Arithmetic with TypePar

2016-06-01 Thread Robert DJ
I have a custom type with a TypePar denoting a dimension and would like to 
define the following:

type Foo{D}
bar::Array{D+1}
end

However, this does not work. As D is only 1 or 2, it would be OK with

type Foo{1}
bar::Matrix
end

type Foo{2}
bar::Array{3}
end

but unfortunately this isn't working, either. 

Can this problem be solved?

Thanks!


[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Andreas Lobinger


On Wednesday, June 1, 2016 at 8:49:44 PM UTC+2, Ford Ox wrote:
>
> When we are on this topic
>
> Why does
> [1 + 2]
> result in
> [3]
> instead of
> [1 + 2]
>
>
well, in v0.5 with the correct element delimiter
_
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+4330 (2016-05-26 09:11 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 493157e* (6 days old master)
|__/   |  x86_64-linux-gnu

julia> b = [4,+,3]
3-element Array{Any,1}:
 4 
  +
 3 

julia> 





[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Ford Ox
When we are on this topic

Why does
[1 + 2]
result in
[3]
instead of
[1 + 2]

like any other function would do:
foo() = nothing
[1 foo 2]

> [1 foo 2]
>

I don't think it should be considered a function call.

What if you want to do something like this:
[
+ -  ;
* /  ;
^ log;
]
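
For reference, how the parser reads these variants (illustrative):

```julia
[1 + 2]     # spaces on both sides: parsed as the expression 1 + 2, giving [3]
[1 +2]      # unary plus: hcat(1, 2), a 1x2 matrix
[1, +, 2]   # commas keep + as a value: 3-element Array{Any,1}
```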



On Wednesday, June 1, 2016 at 11:21:59 AM UTC+2, Andreas Lobinger wrote:
>
> Hello colleagues,
>
> i actually was a little bit puzzled by this:
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
> |__/   |  x86_64-linux-gnu
>
> julia> a = [8,"9",10]
> 3-element Array{Any,1}:
>   8
>"9"
>  10
>
> julia> b = ["8",9,10,[12,"c"]]
> WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at ./abstractarray.jl:29
>  in vect at abstractarray.jl:38
> while loading no file, in expression starting on line 0
> 5-element Array{Any,1}:
>"8"
>   9
>  10
>  12
>"c"
>
> julia> c = [12,"c"]
> 2-element Array{Any,1}:
>  12
>"c"
>
> julia> b_prime = ["8",9,10,c]
> WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at ./abstractarray.jl:29
>  in vect at abstractarray.jl:38
> while loading no file, in expression starting on line 0
> 5-element Array{Any,1}:
>"8"
>   9
>  10
>  12
>"c"
>
>
> How do i put a list into a list? (bonus points for literals!)
>
>
>
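
One answer to that last question, for v0.4 (a sketch): a typed array literal 
does not concatenate, so Any[...] preserves the nesting.

```julia
c = [12, "c"]
b = Any["8", 9, 10, c]               # 4 elements; c stays nested
b2 = Any["8", 9, 10, Any[12, "c"]]   # the literal form
```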

Re: [julia-users] random number generation

2016-06-01 Thread El suisse
I think like this:

`rand(1:14, 6)`

2016-06-01 15:24 GMT-03:00 Michela Di Lullo :

> How can I do to generate 6 *different* integer random numbers between 1
> and 14?
>
> Thanks in advance to who will answer,
>
> M.
>
>


[julia-users] random number generation

2016-06-01 Thread Michela Di Lullo
How can I do to generate 6 *different* integer random numbers between 1 and 
14?

Thanks in advance to who will answer, 

M.



[julia-users] Plots.jl bar width and color

2016-06-01 Thread Andre Bieler
How can I set the width of bars in a bar plot?

say for

```julia
bar([1,2,3], [4,6,3])
```

Is it possible to pick colors that are not defined as
keywords like :black, :grey, etc.?

Something like #BFBFBF

Cheers,
Andre


[julia-users] Multiple subplots in Gadfly

2016-06-01 Thread Alain Cuvillier
Hi, I have some plots of functions in Gadfly, for example: 

plot(sin, 0, 4)
plot(cos, 0, 4)
...

How can I make a plot containing these functions as subplots in a grid? 
I tried with Geom.subplot_grid but all the examples I found are based on 
DataFrame, not functions.
Thank you!
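
If the goal is simply a grid of separate function plots, stacking may be 
enough (a sketch; Geom.subplot_grid is not required for this):

```julia
using Gadfly
p1 = plot(sin, 0, 4)
p2 = plot(cos, 0, 4)
# side by side; vstack composes rows the same way
draw(SVG("grid.svg", 8inch, 4inch), hstack(p1, p2))
```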


[julia-users] Regression of promote with 6 or more arguments in Julia 0.5

2016-06-01 Thread Mosè Giordano
Hi all,

I'm noticing a large (factor of >100) performance regression in the promote 
function when fed with more than 5 arguments ("julia" is Julia 0.4.5; 
"./julia" is the official current nightly build, version 0.5.0-dev+4438, 
commit aa1ce87):

% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5))'
 Benchmark Results 
 Time per evaluation: 2.00 ns [1.96 ns, 2.04 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
   Number of samples: 4101
   Number of evaluations: 99601
 R² of OLS model: 0.952
 Time spent benchmarking: 1.20 s
% ./julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5))'
 Benchmark Results 
 Time per evaluation: 2.11 ns [2.06 ns, 2.16 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
   Number of samples: 3801
   Number of evaluations: 74701
 R² of OLS model: 0.950
 Time spent benchmarking: 2.35 s
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6))'  
 Benchmark Results 
 Time per evaluation: 2.38 ns [2.34 ns, 2.42 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
   Number of samples: 6301
   Number of evaluations: 811601
 R² of OLS model: 0.956
 Time spent benchmarking: 1.75 s
% ./julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6))'
 Benchmark Results 
 Time per evaluation: 306.79 ns [300.44 ns, 313.14 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 144.00 bytes
   Number of allocations: 3 allocations
   Number of samples: 4001
   Number of evaluations: 90501
 R² of OLS model: 0.955
 Time spent benchmarking: 2.64 s

I get the same results also with a custom build of the same commit (I 
thought I had a problem with my build and then downloaded the nightly build 
from the site).  Actually, something similar happens also for Julia 0.4 but 
starting from 9 arguments:

% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6,7,8))'  
 Benchmark Results 
 Time per evaluation: 3.71 ns [3.63 ns, 3.79 ns]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 0.00 bytes
   Number of allocations: 0 allocations
   Number of samples: 3801
   Number of evaluations: 74701
 R² of OLS model: 0.955
 Time spent benchmarking: 1.13 s
% julia -e 'using Benchmarks;print(@benchmark promote(1,2,3,4,5,6,7,8,9))'
 Benchmark Results 
 Time per evaluation: 6.27 μs [4.53 μs, 8.02 μs]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 928.00 bytes
   Number of allocations: 29 allocations
   Number of samples: 100
   Number of evaluations: 100
 Time spent benchmarking: 0.18 s
% julia -e 'using Benchmarks;print(@benchmark 
promote(1,2,3,4,5,6,7,8,9,10))'
 Benchmark Results 
 Time per evaluation: 12.85 μs [11.13 μs, 14.56 μs]
Proportion of time in GC: 0.00% [0.00%, 0.00%]
Memory allocated: 1.11 kb
   Number of allocations: 33 allocations
   Number of samples: 100
   Number of evaluations: 100
 Time spent benchmarking: 0.18 s

Is this normal?

Bye,
Mosè


[julia-users] how to set the maximal waiting time for ClusterManagers.addprocs_sge()?

2016-06-01 Thread Florian Oswald
i'm having a problem on a cluster where setting up the connections via 
ClusterManagers.addprocs_sge() takes longer than the 60 second limit. how 
can I extend that limit? thanks!


Re: [julia-users] Using pmap in julia

2016-06-01 Thread Stefan Karpinski
Are you opening files via open or mmap in any of the functions
that learningExperimentRun calls?

On Wed, Jun 1, 2016 at 11:42 AM, Martha White 
wrote:

> I am having difficulty understanding how to use pmap in Julia. I am a
> reasonably experienced matlab and c programmer. However, I am new to Julia
> and to using parallel functions. I am running an experiment with nested for
> loops, benchmarking different algorithms. In the inner loop, I am running
> the algorithms across multiple trials. I would like to parallelize this
> inner loop (as the outer iteration I can easily run as multiple jobs on a
> cluster). The code looks like:
>
> effNumCores = 3
> procids = addprocs(effNumCores)
>
> # This has to be added so that each run has access to these function 
> definitions
> @everywhere include("experimentUtils.jl")
>
> # Initialize array of RMSE
> fill!(runErrors, 0.0);
>
> # Split up runs across number of cores
> outerloop = floor(Int, numRuns / effNumCores)+1
> r = 1
> rend = effNumCores
> for i = 1:outerloop
> rend = min(r+effNumCores-1, numRuns)
>
> # Empty RMSE passed (as Array{Float64}(0,0)), since it is created and 
> # returned in pmap_errors
> pmap_errors = pmap(r -> learningExperimentRun(mdp,hordeOfD, stepData, 
> alpha,lambda,beta, numAgents, numSteps, Array{Float64}(0,0), r), r:rend)
> for j=1:(rend-r+1)
> runErrors[:,:,MEAN_IND] += pmap_errors[j]
> runErrors[:,:,VAR_IND] += pmap_errors[j].^2
> end
> r += effNumCores
> end
> rmprocs(procids)
>
> The function called above is defined in a separate file called
> experimentUtils.jl, as
>
> function learningExperimentRun(mdp::MDP, hordeOfD::horde, 
> stepData::transData, alpha::Float64,lambda::Float64, beta::Float64, 
> numAgents::Int64, numSteps::Int64, RMSE::Array{Float64, 2}, runNum::Int64)
>   # if runErrors is empty, then initialize; this is empty for parallel version
>   if (isempty(RMSE))
>  RMSE = zeros(Float64,numAgents, numSteps)
>   else
> fill!(RMSE, 0.0)
>   end
>
>  srand(runNum)
>
>  agentInit(hordeOfD, mdp, alpha, beta,lambda,BETA_ETD)
>  getLearnerErrors(hordeOfD,mdp, RMSE,1)
>  mdpStart(mdp,stepData)
>  for i=2:numSteps
>mdpStep(mdp,stepData)
>updateLearners(stepData, mdp, hordeOfD)
>getLearnerErrors(hordeOfD,mdp, RMSE,i)
>  end
>
>  return RMSE
> end
>
> When I try to run this, I get a large number of workers and get errors
> that state that I have too many files open. I believe I must be doing
> something seriously wrong. If anyone could help to parallelize this code in
> julia, that would be fantastic. I am not tied to pmap, but after reading a
> bit, it seemed to be the right function to use.
>
>
> I should further add that I have an additional loop splitting runs over
> cores, even though pmap could do that for me. I did this because pmap_errors
> then becomes an array of numRuns (which could be 100s). By splitting it up
> into loops, the returned pmap_errors has size that is at most the number of
> cores. I am hoping that this memory then gets re-used when starting the
> next loop over cores.
>
> I tried at first avoiding this by using a distributed array for runErrors.
> But, this was not clearly documented and so I abandoned that approach.
>


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Lutfullah Tomak
Thanks for the tip. It makes sense. I did not pay attention that the given 
function does not return anything, but I had always assumed that the timing of 
the for-loop should not change even if the function does not return bar. Also, 
on julia 0.4, it is strange that after the first run my last timing triples 
(maybe due to the GC interrupting more). It stays about the same on julia 0.5 though.

[julia-users] Using pmap in julia

2016-06-01 Thread Martha White


I am having difficulty understanding how to use pmap in Julia. I am a 
reasonably experienced matlab and c programmer. However, I am new to Julia 
and to using parallel functions. I am running an experiment with nested for 
loops, benchmarking different algorithms. In the inner loop, I am running 
the algorithms across multiple trials. I would like to parallelize this 
inner loop (as the outer iteration I can easily run as multiple jobs on a 
cluster). The code looks like:

effNumCores = 3
procids = addprocs(effNumCores)

# This has to be added so that each run has access to these function definitions
@everywhere include("experimentUtils.jl")

# Initialize array of RMSE
fill!(runErrors, 0.0);

# Split up runs across number of cores
outerloop = floor(Int, numRuns / effNumCores)+1
r = 1
rend = effNumCores
for i = 1:outerloop
rend = min(r+effNumCores-1, numRuns)

# Empty RMSE passed (as Array{Float64}(0,0)), since it is created and 
# returned in pmap_errors
pmap_errors = pmap(r -> learningExperimentRun(mdp,hordeOfD, stepData, 
alpha,lambda,beta, numAgents, numSteps, Array{Float64}(0,0), r), r:rend)
for j=1:(rend-r+1)
runErrors[:,:,MEAN_IND] += pmap_errors[j]
runErrors[:,:,VAR_IND] += pmap_errors[j].^2
end
r += effNumCores
end
rmprocs(procids)

The function called above is defined in a separate file called 
experimentUtils.jl, as

function learningExperimentRun(mdp::MDP, hordeOfD::horde, stepData::transData, 
alpha::Float64,lambda::Float64, beta::Float64, numAgents::Int64, 
numSteps::Int64, RMSE::Array{Float64, 2}, runNum::Int64)
  # if runErrors is empty, then initialize; this is empty for parallel version
  if (isempty(RMSE))
 RMSE = zeros(Float64,numAgents, numSteps)
  else
fill!(RMSE, 0.0)
  end

 srand(runNum)

 agentInit(hordeOfD, mdp, alpha, beta,lambda,BETA_ETD)
 getLearnerErrors(hordeOfD,mdp, RMSE,1)
 mdpStart(mdp,stepData)
 for i=2:numSteps
   mdpStep(mdp,stepData)
   updateLearners(stepData, mdp, hordeOfD)
   getLearnerErrors(hordeOfD,mdp, RMSE,i)
 end

 return RMSE
end

When I try to run this, I get a large number of workers and get errors that 
state that I have too many files open. I believe I must be doing something 
seriously wrong. If anyone could help to parallelize this code in julia, 
that would be fantastic. I am not tied to pmap, but after reading a bit, it 
seemed to be the right function to use.


I should further add that I have an additional loop splitting runs over 
cores, even though pmap could do that for me. I did this because pmap_errors 
then becomes an array of numRuns (which could be 100s). By splitting it up 
into loops, the returned pmap_errors has size that is at most the number of 
cores. I am hoping that this memory then gets re-used when starting the 
next loop over cores.

I tried at first avoiding this by using a distributed array for runErrors. 
But, this was not clearly documented and so I abandoned that approach.


Re: [julia-users] Re: Double free or corruption (out)

2016-06-01 Thread 'Bill Hart' via julia-users
I've checked that the problem we were having doesn't happen with Julia
0.4.5 on Travis. In fact, it also doesn't happen on another one of our
systems with Julia 0.4.5, so at this stage we have no idea what the problem
is. It may be totally unrelated to the problem you are having.

Bill.

On 31 May 2016 at 13:25, Bill Hart  wrote:

> We are also suddenly getting crashes with 0.4.5 when running our (Nemo)
> test suite. It says that some memory allocation is failing due to invalid
> next size. I suspect there is a bug that wasn't there until the last few
> days, since we were passing just fine on Travis. Though at this stage, I
> haven't checked whether we are still passing on Travis.
>
> Bill.
>
> On 31 May 2016 at 12:52, Nils Gudat  wrote:
>
>> Resurrecting this very old thread - after having been able to solve the
>> model with no seg faults over the last couple of months, they have now
>> returned and occur much faster (usually within 2 hours of running the code).
>> I have refactored the code a little so that it hopefully will be possible
>> for others to run it. Cloning the entire repo at
>> http://github.com/nilshg/LearningModels, it should run when altering the
>> path in
>> https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_maximize.jl
>> to whatever path it has been cloned to.
>>
>> I'm running this code on a 16-core Ubuntu 14.04 machine with Julia 0.4.5
>> installed an all packages on the latest tagged versions.
>>
>> On Tuesday, September 29, 2015 at 1:43:31 PM UTC+1, Nils Gudat wrote:
>>>
>>> The code usually segfaults after 2-5 hours, and is available at
>>> http://github.com/nilshg/LearningModels, however I haven't written it
>>> up in a way that is easy to run (right now it depends on some data not
>>> included in the repo), so I'll have to restructure a bit before you can run
>>> it. I'll try to do so today if I find the time.
>>>
>>
>


Re: [julia-users] Context manager and Julia (with...)

2016-06-01 Thread Stefan Karpinski
See also https://github.com/JuliaLang/julia/issues/7721

On Wed, Jun 1, 2016 at 10:26 AM, Erik Schnetter  wrote:

> Julia has a similar feature that is built on closures. See the "do
> notation" in the manual. For example:
>
> ```Julia
> open("file", "r") do f
> @show readline(f)
> end
> ```
>
> -erik
>
>
> On Wed, Jun 1, 2016 at 10:18 AM, Femto Trader 
> wrote:
>
>> Hello,
>>
>> Python has a nice concept called "context manager" (see the "with" statement)
>> http://book.pythontips.com/en/latest/context_managers.html
>> With this concept it's quite easy to open a file without having to close
>> it.
>> Yattag, for example, uses this concept to create XML files.
>> You just need to use context managers to create opening tags... closing
>> tags
>> are automatically created.
>> So is there a similar concept in Julia?
>> If yes, where can I find some doc about it?
>> If not, is there any plan to provide such a feature?
>>
>> Kind regards
>>
>
>
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] Context manager and Julia (with...)

2016-06-01 Thread Erik Schnetter
Julia has a similar feature that is built on closures. See the "do
notation" in the manual. For example:

```Julia
open("file", "r") do f
@show readline(f)
end
```

-erik
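
The same pattern extends to user code: any function taking a callback as its 
first argument composes with do-blocks. A sketch (with_tag is a made-up name, 
in the spirit of the Yattag example from the question):

```Julia
function with_tag(f, io::IO, tag::AbstractString)
    print(io, "<", tag, ">")
    try
        f(io)                      # run the body with the "resource" open
    finally
        print(io, "</", tag, ">")  # the closing tag is written even on error
    end
end

with_tag(STDOUT, "p") do io
    print(io, "hello")
end                                # prints <p>hello</p>
```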


On Wed, Jun 1, 2016 at 10:18 AM, Femto Trader 
wrote:

> Hello,
>
> Python has a nice concept called "context manager" (see the "with" statement)
> http://book.pythontips.com/en/latest/context_managers.html
> With this concept it's quite easy to open a file without having to close
> it.
> Yattag, for example, uses this concept to create XML files.
> You just need to use context managers to create opening tags... closing
> tags
> are automatically created.
> So is there a similar concept in Julia?
> If yes, where can I find some doc about it?
> If not, is there any plan to provide such a feature?
>
> Kind regards
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Context manager and Julia (with...)

2016-06-01 Thread Femto Trader
Hello,

Python has a nice concept called "context manager" (see the "with" statement)
http://book.pythontips.com/en/latest/context_managers.html
With this concept it's quite easy to open a file without having to close it.
Yattag, for example, uses this concept to create XML files.
You just need to use context managers to create opening tags... closing tags
are automatically created.
So is there a similar concept in Julia?
If yes, where can I find some doc about it?
If not, is there any plan to provide such a feature?

Kind regards


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Scott Jones
Good catch!  I'd already added returning bar to my parameterized Julia 
version; comparing the two now on my machine, they are essentially 
identical in performance.
Still, writing 1/3 the code (and having it much more readable), having 
generic code that would also work for complex, integer, decimal floats, or 
whatever, not just `double`/`Float64`, and getting the exact same speed as 
Fortran,
long the scientific speed champ, is absolutely excellent!

On Wednesday, June 1, 2016 at 9:42:29 AM UTC-4, Kristoffer Carlsson wrote:
>
> Remember to actually return bar when you benchmark the julia code. Also, 
> you are not running the fortran code with optimizations on:
>
> ➜  Documents gfortran test.f -O2; ./a.out
>   0.01 seconds
>
>
> Making sure that bar is actually used:
>
> ➜  Documents gfortran test.f -O2; ./a.out
>   0.047754 seconds
>
>
>
>
>
> On Wednesday, June 1, 2016 at 3:10:26 PM UTC+2, Mosè Giordano wrote:
>>
>> Oh my bad, this was really easy to fix!  I usually do inspect the code 
>> with @code_warntype but missed to do the same this time.  Lesson 
>> learned. 
>>
>> Now the same loop is about ~30 times faster in Julia than in Fortran, 
>> really impressive. 
>>
>> Thank you all for the prompt comments! 
>>
>> Bye, 
>> Mosè 
>>
>>
>> 2016-06-01 13:27 GMT+02:00 Lutfullah Tomak: 
>> > First thing I caught in your code 
>> > 
>> > 
>> http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-changing-the-type-of-a-variable
>>  
>> > 
>> > make bar non-type-changing 
>> > function foo() 
>> > array1 = rand(70, 1000) 
>> > array2 = rand(70, 1000) 
>> > array3 = rand(2, 70, 20, 20) 
>> > bar = 0.0 
>> > @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70 
>> > bar = bar + 
>> > (array1[i, l] - array3[1, i, j, k])^2 + 
>> > (array2[i, l] - array3[2, i, j, k])^2 
>> > end 
>> > end 
>> > 
>> > Second, Julia checks array bounds so @inbounds macro before for-loop 
>> should 
>> > help 
>> > improve performance. In some situation @simd may emit vector 
>> instructions 
>> > thus faster 
>> > code. 
>>
>

Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Kristoffer Carlsson
Remember to actually return bar when you benchmark the julia code. Also, 
you are not running the fortran code with optimizations on:

➜  Documents gfortran test.f -O2; ./a.out
  0.01 seconds


Making sure that bar is actually used:

➜  Documents gfortran test.f -O2; ./a.out
  0.047754 seconds





On Wednesday, June 1, 2016 at 3:10:26 PM UTC+2, Mosè Giordano wrote:
>
> Oh my bad, this was really easy to fix!  I usually do inspect the code 
> with @code_warntype but missed to do the same this time.  Lesson 
> learned. 
>
> Now the same loop is about ~30 times faster in Julia than in Fortran, 
> really impressive. 
>
> Thank you all for the prompt comments! 
>
> Bye, 
> Mosè 
>
>
> 2016-06-01 13:27 GMT+02:00 Lutfullah Tomak: 
> > First thing I caught in your code 
> > 
> > 
> http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-changing-the-type-of-a-variable
>  
> > 
> > make bar non-type-changing 
> > function foo() 
> > array1 = rand(70, 1000) 
> > array2 = rand(70, 1000) 
> > array3 = rand(2, 70, 20, 20) 
> > bar = 0.0 
> > @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70 
> > bar = bar + 
> > (array1[i, l] - array3[1, i, j, k])^2 + 
> > (array2[i, l] - array3[2, i, j, k])^2 
> > end 
> > end 
> > 
> > Second, Julia checks array bounds so @inbounds macro before for-loop 
> should 
> > help 
> > improve performance. In some situation @simd may emit vector 
> instructions 
> > thus faster 
> > code. 
>


Re: [julia-users] Plots gadfly

2016-06-01 Thread Tom Breloff
This might have been a deprecation warning from Plots that Gadfly isn't
supported with the development branches of Plots. I don't have the capacity
to update the backend code to handle the big internal rebuild that I'm
finishing, so for now Gadfly gets a warning message.

I'll post a more complete write up and announcement within a week or 2. In
the meantime pyplot, GR, and plotly/plotlyjs backends are all nice to use.
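
For anyone trying this, switching backends in Plots is a one-liner. A minimal
sketch (assuming Plots.jl and the GR backend package are installed; any of the
backends above can be substituted):

```julia
using Plots            # assumes Plots.jl plus at least one backend is installed

gr()                   # select the GR backend; pyplot() or plotlyjs() work the same way
plot(rand(20), title = "demo", label = "series 1")
```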

On Wednesday, June 1, 2016, Samuele Carcagno  wrote:

> On 01/06/16 09:44, Henri Girard wrote:
>
>> Gadfly is deprecated: does it mean we shouldn't use it anymore?
>> I am trying to use it but I get a long list of recompilation messages.
>> What can I use instead? I don't want to use plotly because I don't want
>> to have a password (which doesn't work properly either!) each time I
>> connect... This interface is very awful.
>>
>
> if you use the `PlotlyJS.jl` interface to plotly you won't need a Plotly
> account or an internet connection, see here:
>
> http://spencerlyon.com/PlotlyJS.jl/
>
> Sam
>


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Yichao Yu
On Wed, Jun 1, 2016 at 8:37 AM, DNF  wrote:
> Out of curiosity: My understanding has been that type instabilities are
> related to the compiler not being able to predict the types of variables
> based on input types, not variables simply (and predictably) changing types
> during program execution.

For this case, the variable is **NOT** predictably changing type
unless the compiler knows that the loop will run at least once, and FWIW it
can have different types in the loop.

The so-called `henchmen unrolling` (name invented by Stefan or Jeff)
can help with this to a certain degree by unrolling the first iteration of
the loop.
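
A minimal sketch of that idea (the function and names are mine, just for
illustration): peel off the first iteration so the accumulator takes its type
from the data rather than from an integer literal.

```julia
# Hypothetical first-iteration unrolling: `bar` starts life as an element
# of `xs`, so it never changes type inside the loop.
function sumsq(xs)
    isempty(xs) && return zero(eltype(xs))  # handle the zero-iteration case
    bar = xs[1]^2                           # type fixed by the first element
    for i in 2:length(xs)
        bar += xs[i]^2
    end
    return bar
end
```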

>
> Should the compiler ideally be able to catch the case we're seeing here, and
> predict that bar should be a float? Or is this case particularly hard for
> some reason?


[julia-users] Re: Getting sequence of time and subset it

2016-06-01 Thread Evan Fields
Of course there's a way :)

You can use the isna function to check if a value is NA or not. There's 
also the dropna function which takes a DataArray as input and returns a 
regular Vector with the NA elements removed.

You could try something like the following:

firstbreakcol = 4
lastbreakcol = 5
for i in 1:nrow(sdt2), t in 
sdt2[:StartTime][i]:Dates.Minute(30):sdt2[:EndTime][i]
shouldprint = true
for j in firstbreakcol:lastbreakcol
if !isna(sdt2[i,j]) && sdt2[i,j] == t
# t is in a break column
shouldprint = false
break
end
end
if shouldprint
println(t)
end
end

It's a little verbose, true. I'm sure there's something cleaner with 
dropna, filter, etc.
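
For instance, a sketch of one cleaner route, under the same assumptions as
above (break times sit in columns 4:5 and may be NA):

```julia
for i in 1:nrow(sdt2)
    # collect the non-NA break times for this row
    breakvals = filter(v -> !isna(v), [sdt2[i, j] for j in 4:5])
    grid = collect(sdt2[:StartTime][i]:Dates.Minute(30):sdt2[:EndTime][i])
    # keep only the grid points that are not break times
    for t in filter(t -> !(t in breakvals), grid)
        println(t)
    end
end
```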


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Mosè Giordano
Oh my bad, this was really easy to fix!  I usually inspect the code
with @code_warntype but neglected to do so this time.  Lesson
learned.

Now the same loop is about ~30 times faster in Julia than in Fortran,
really impressive.

Thank you all for the prompt comments!

Bye,
Mosè


2016-06-01 13:27 GMT+02:00 Lutfullah Tomak:
> First thing I caught in your code
>
> http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-changing-the-type-of-a-variable
>
> make bar non-type-changing
> function foo()
> array1 = rand(70, 1000)
> array2 = rand(70, 1000)
> array3 = rand(2, 70, 20, 20)
> bar = 0.0
> @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
> end
> end
>
> Second, Julia checks array bounds so @inbounds macro before for-loop should
> help
> improve performance. In some situation @simd may emit vector instructions
> thus faster
> code.


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Mauro
On Wed, 2016-06-01 at 14:37, DNF  wrote:
> Out of curiosity: My understanding has been that type instabilities are
> related to the compiler not being able to predict the types of variables
> based on input types, not variables simply (and predictably) changing types
> during program execution.
>
> Should the compiler ideally be able to catch the case we're seeing here,
> and predict that bar should be a float? Or is this case particularly hard
> for some reason?

I think this was discussed yesterday or the day before in this thread:
https://groups.google.com/d/msg/julia-users/gKQnJyUhc5Y/KC64yfDhAQAJ


Re: [julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Erik Schnetter
You can use `@simd` with the innermost loop. (It really should be extended
to work with loop nests as well, but that hasn't been done yet.) Here is
how `@simd` would look in your case:

```Julia
@inbounds for l = 1:1000, k = 1:20, j = 1:20
@simd for i = 1:70
bar = bar + (array1[i, l] - array3[1, i, j, k])^2 + (array2[i, l] -
array3[2, i, j, k])^2
end
end
```

That is, you split off the innermost loop, and attach the `@simd` there.
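
For completeness, the same pattern as a self-contained function (the zero
initialization and the return are my additions, not part of the original
code):

```julia
function foo_simd(array1, array2, array3)
    bar = zero(eltype(array1))
    @inbounds for l = 1:1000, k = 1:20, j = 1:20
        @simd for i = 1:70   # @simd accepts this reduction into `bar`
            bar += (array1[i, l] - array3[1, i, j, k])^2 +
                   (array2[i, l] - array3[2, i, j, k])^2
        end
    end
    return bar
end
```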

-erik


On Wed, Jun 1, 2016 at 7:51 AM, Lutfullah Tomak 
wrote:

> OK, I revise what I said about @simd: according to the documentation, it only
> works for one-dimensional ranges, so it does not allow this type of for-loop
> and throws an error.
>
> Here is timings for me
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   1.215454 seconds (84.00 M allocations: 1.252 GB, 4.16% gc time)
>
>
> julia> foo()
>   1.308979 seconds (84.00 M allocations: 1.252 GB, 3.92% gc time)
>
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.114811 seconds
>
>
> julia> foo()
>   0.150542 seconds
>
>
> julia> function foo()  array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>
>end
>
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.004927 seconds
>
>
>
>


-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread DNF
Out of curiosity: My understanding has been that type instabilities are 
related to the compiler not being able to predict the types of variables 
based on input types, not variables simply (and predictably) changing types 
during program execution.

Should the compiler ideally be able to catch the case we're seeing here, 
and predict that bar should be a float? Or is this case particularly hard 
for some reason?


Re: [julia-users] Reading from a TCP socket with quasi-continuous data transfer

2016-06-01 Thread Isaiah Norton
On Wednesday, June 1, 2016, Joshua Jones <
highly.creative.pseudo...@gmail.com> wrote:

>
>>>1. I've read (here and elsewhere) that Julia does an implicit block
>>>while waiting for data. Is there a workaround?
>>>
>> Do the blocking read asynchronously in a task. For network I/O (but not
>> file), `read` will only block at the task level. To end the read, `close`
>> the socket from another task.
>>
>
> Only a call to @async eof(conn) works as an asynchronous task; I'm not
> sure how that's going to help, except in the REPL. Any other attempt to
> read remotely is a segmentation fault that destroys the worker.
>
> *Example (test function)*
> @everywhere readSL(conn) = begin
>   while true
> eof(conn)
> tmp = read(conn.buffer, UInt8, 520)
>   end
> end
>
>
>
> Test call
> julia> remotecall(6, readSL, conn);
>
> julia>
> signal (11): Segmentation fault
> uv_read_start at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown
> line)
> start_reading at stream.jl:850
> wait_readnb at stream.jl:373
> eof at stream.jl:96
> readSL at none:4
> jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so 
> (unknown
> line)
> jl_f_apply at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown
> line)
> anonymous at multi.jl:920
> run_work_thunk at multi.jl:661
> run_work_thunk at multi.jl:670
> jlcall_run_work_thunk_21322 at  (unknown line)
> jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so 
> (unknown
> line)
> anonymous at task.jl:58
> unknown function (ip: 0x7f1d162898cc)
> unknown function (ip: (nil))
> Worker 6 terminated.
> ERROR (unhandled task failure): EOFError: read end of file
>  in read at stream.jl:929
>  in message_handler_loop at multi.jl:878
>  in process_tcp_streams at multi.jl:867
>  in anonymous at task.jl:63
>
> Am I doing something fundamentally wrong here?
>

Yes: using multiprocess parallelism instead of tasks. '@everywhere'
executes code on other processes that do not share the same address space
as the head process and cannot access the same socket. See the section on
Tasks in the manual.

(though it probably shouldn't segfault, please file a bug about that)
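
A minimal sketch of the task-based version (assuming `conn` is an
already-connected TCPSocket in the same process; the processing step is a
placeholder):

```julia
# Hypothetical single-process reader task: only this task blocks while
# waiting for bytes; `close(conn)` from another task ends the loop.
reader = @async begin
    try
        while !eof(conn)                  # blocks the task, not the process
            chunk = read(conn, UInt8, 520)
            # ... process chunk ...
        end
    catch err
        isa(err, EOFError) || rethrow()   # closing the socket surfaces as EOF
    end
end
```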


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Kristoffer Carlsson
The previous timings are too fast because the functions do not return bar, 
which lets the compiler remove a bunch of the work.

On Wednesday, June 1, 2016 at 2:14:48 PM UTC+2, Scott Jones wrote:
>
> Is that last timing correct?  That's 10x faster than I get.
>
> My version of the Julia code (I parameterized it, so that it doesn't have 
> to be Float64, could be complex, integer, rational, decimal float, whatever 
> floats your boat!)
>
> function foo{T<:Number}(array1::Matrix{T}, array2::Matrix{T}, array3::
> Array{T,4})
> bar = zero(T)
> @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
> end
> bar
> end
>
> function testfoo()
> arr1, arr2, arr3 = rand(70, 1000), rand(70, 1000), rand(2, 70, 20, 20)
> @time foo(arr1, arr2, arr3)
> end
>
>
> and the timings:
>
> 08:04 $ gfortran /j/test.f&&./a.out
>>   0.135017 seconds
>> *julia> *
>> *testfoo() ;*  0.042858 seconds
>> *julia> *
>> *testfoo() ;*  0.047505 seconds
>> *julia> *
>> *testfoo() ;*  0.046309 seconds
>> *julia> *
>> *testfoo() ;*  0.045292 seconds
>
>
> So, Julia about 3x faster with 1/3 code, seems pretty impressive to me 
> (esp. given the reputation of Fortran for numeric speed [maybe no longer 
> all that true, compared to Julia, C, C++?])
>
>
> On Wednesday, June 1, 2016 at 7:51:45 AM UTC-4, Lutfullah Tomak wrote:
>>
>> OK, I revise what I said about @simd: according to the documentation, it 
>> only works for one-dimensional ranges, so it does not allow this type of 
>> for-loop and throws an error.
>>
>> Here is timings for me
>>
>> julia> function foo()
>>   array1 = rand(70, 1000)
>>   array2 = rand(70, 1000)
>>   array3 = rand(2, 70, 20, 20)
>>   bar = 0
>>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
>> bar = bar +
>> (array1[i, l] - array3[1, i, j, k])^2 +
>> (array2[i, l] - array3[2, i, j, k])^2
>>   end
>>end
>> foo (generic function with 1 method)
>>
>>
>> julia> foo()
>>   1.215454 seconds (84.00 M allocations: 1.252 GB, 4.16% gc time)
>>
>>
>> julia> foo()
>>   1.308979 seconds (84.00 M allocations: 1.252 GB, 3.92% gc time)
>>
>>
>> julia> function foo()
>>   array1 = rand(70, 1000)
>>   array2 = rand(70, 1000)
>>   array3 = rand(2, 70, 20, 20)
>>   bar = 0.0
>>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
>> bar = bar +
>> (array1[i, l] - array3[1, i, j, k])^2 +
>> (array2[i, l] - array3[2, i, j, k])^2
>>   end
>>end
>> foo (generic function with 1 method)
>>
>>
>> julia> foo()
>>   0.114811 seconds
>>
>>
>> julia> foo()
>>   0.150542 seconds
>>
>>
>> julia> function foo()  array1 = rand(70, 1000)
>>   array2 = rand(70, 1000)
>>   array3 = rand(2, 70, 20, 20)
>>   bar = 0.0
>>   @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
>> bar = bar +
>> (array1[i, l] - array3[1, i, j, k])^2 +
>> (array2[i, l] - array3[2, i, j, k])^2
>>   end
>>
>>end
>>
>> foo (generic function with 1 method)
>>
>>
>> julia> foo()
>>   0.004927 seconds
>>
>>
>>
>>

Re: [julia-users] filter() function edge case, possible bug.

2016-06-01 Thread Mauro
Yes, this is a bug: ranges should behave as vectors in read-only
situations.  I suspect that this is on the radar of the devs already as
there is quite a bit of work happening in this area.  But still filing a
bug report, if one does not exist yet, is probably the right thing to do.
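
Until that is fixed, two workarounds that should behave as expected (a sketch;
the trivially-true predicate is just for demonstration):

```julia
filter(n -> true, collect(1:0))  # materialize the range first: 0-element Array{Int64,1}
[n for n in 1:0]                 # a comprehension over the empty range is also empty
```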

On Wed, 2016-06-01 at 03:26, Anonymous  wrote:
> Consider the code:
>
> filter(n -> true, collect(1:0))
>
> this returns a 0-element array.  However if I don't collect the iterator
> into an array:
>
> filter(n -> true, 1:0)
>
> I get the error
>
> Error: TypeError: typeassert: expected AbstractArray{Bool, N}, got
> Array{Int64, 1} in filter at array.jl:923
>
> It seems like this should also return a 0-element array, since if the
> iterator is non empty, e.g.
>
> filter(n -> true, 1:2)
>
> I get a two the element array [1,2].


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Scott Jones
Is that last timing correct?  That's 10x faster than I get.

My version of the Julia code (I parameterized it, so that it doesn't have 
to be Float64, could be complex, integer, rational, decimal float, whatever 
floats your boat!)

function foo{T<:Number}(array1::Matrix{T}, array2::Matrix{T}, array3::Array{
T,4})
bar = zero(T)
@inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
end
bar
end

function testfoo()
arr1, arr2, arr3 = rand(70, 1000), rand(70, 1000), rand(2, 70, 20, 20)
@time foo(arr1, arr2, arr3)
end


and the timings:

08:04 $ gfortran /j/test.f&&./a.out
>   0.135017 seconds
> *julia> *
> *testfoo() ;*  0.042858 seconds
> *julia> *
> *testfoo() ;*  0.047505 seconds
> *julia> *
> *testfoo() ;*  0.046309 seconds
> *julia> *
> *testfoo() ;*  0.045292 seconds


So, Julia is about 3x faster with 1/3 the code; that seems pretty impressive to 
me (esp. given the reputation of Fortran for numeric speed [maybe no longer 
all that true compared to Julia, C, C++?])


On Wednesday, June 1, 2016 at 7:51:45 AM UTC-4, Lutfullah Tomak wrote:
>
> OK, I revise what I said about @simd: according to the documentation, it only 
> works for one-dimensional ranges, so it does not allow this type of for-loop 
> and throws an error.
>
> Here is timings for me
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   1.215454 seconds (84.00 M allocations: 1.252 GB, 4.16% gc time)
>
>
> julia> foo()
>   1.308979 seconds (84.00 M allocations: 1.252 GB, 3.92% gc time)
>
>
> julia> function foo()
>   array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>end
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.114811 seconds
>
>
> julia> foo()
>   0.150542 seconds
>
>
> julia> function foo()  array1 = rand(70, 1000)
>   array2 = rand(70, 1000)
>   array3 = rand(2, 70, 20, 20)
>   bar = 0.0
>   @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
>   end
>
>end
>
> foo (generic function with 1 method)
>
>
> julia> foo()
>   0.004927 seconds
>
>
>
>

[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Lutfullah Tomak
OK, I revise what I said about @simd: according to the documentation, it only 
works for one-dimensional ranges, so it does not allow this type of for-loop 
and throws an error.

Here is timings for me

julia> function foo()
  array1 = rand(70, 1000)
  array2 = rand(70, 1000)
  array3 = rand(2, 70, 20, 20)
  bar = 0
  @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
  end
   end
foo (generic function with 1 method)


julia> foo()
  1.215454 seconds (84.00 M allocations: 1.252 GB, 4.16% gc time)


julia> foo()
  1.308979 seconds (84.00 M allocations: 1.252 GB, 3.92% gc time)


julia> function foo()
  array1 = rand(70, 1000)
  array2 = rand(70, 1000)
  array3 = rand(2, 70, 20, 20)
  bar = 0.0
  @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
  end
   end
foo (generic function with 1 method)


julia> foo()
  0.114811 seconds


julia> foo()
  0.150542 seconds


julia> function foo()  array1 = rand(70, 1000)
  array2 = rand(70, 1000)
  array3 = rand(2, 70, 20, 20)
  bar = 0.0
  @time @inbounds for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
  end

   end

foo (generic function with 1 method)


julia> foo()
  0.004927 seconds





[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Lutfullah Tomak
Since the @time is inside the function, it is not timing compile time but only 
the for-loop. If it were @time foo() it would also time compilation.


On Wednesday, June 1, 2016 at 2:30:16 PM UTC+3, Andreas Lobinger wrote:
>
> It's valid and interesting to measure the full roundtrip including compile 
> time like you do; however, in examples like this, the Julia compilation 
> overhead dominates your measurement.
> You could put your code into a package and pre-compile. 
>
> In any case you're not measuring only the time it takes to run the code.
>
> On Wednesday, June 1, 2016 at 1:15:54 PM UTC+2, Mosè Giordano wrote:
>>
>> Hi all,
>>
>> I'm working on a Fortran77 code, but I'd much prefer to translate it to 
>> Julia.  However, I've the impression that when working with large arrays 
>> (of the order of tens of thousands elements) Julia is way slower than the 
>> equivalent Fortran code.  See the following examples:
>>
>> Julia:
>>
>> function foo()
>> array1 = rand(70, 1000)
>> array2 = rand(70, 1000)
>> array3 = rand(2, 70, 20, 20)
>> bar = 0
>> @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
>> bar = bar +
>> (array1[i, l] - array3[1, i, j, k])^2 +
>> (array2[i, l] - array3[2, i, j, k])^2
>> end
>> end
>> foo()
>>
>> Fortran77 (it uses some GNU extensions, so gfortran is required):
>>
>>   program main
>>   implicit none
>>   double precision array1(70, 1000), array2(70, 1000),
>>  &array3(2, 70, 20, 20), bar, start, finish
>>   integer i, j, k, l
>>   call srand(time())
>> c Initialize "array1" and "array2"
>>   do j = 1, 1000
>> do i = 1, 70
>>   array1(i, j) = rand()
>>   array2(i, j) = rand()
>> enddo
>>   enddo
>> c Initialize "array3"
>>   do l = 1, 2
>> do k = 1, 70
>>   do j = 1, 20
>> do i = 1, 20
>>   array3(i, j, k, l) = rand()
>> enddo
>>   enddo
>> enddo
>>   enddo
>> c Do the calculations
>>   bar = 0
>>   call cpu_time(start)
>>   do l = 1, 1000
>> do k = 1, 20
>>   do j = 1, 20
>> do i = 1, 70
>>   bar = bar +
>>  &(array1(i, l) - array3(1, i, j, k))**2 +
>>  &(array2(i, l) - array3(2, i, j, k))**2
>> enddo
>>   enddo
>> enddo
>>   enddo
>>   call cpu_time(finish)
>>   print "(f10.6, a)", finish - start, " seconds"
>>   end program main
>>
>> This is the result of running the two programs on my computer:
>>
>> % julia --version
>> julia version 0.4.5
>> % gfortran --version
>> GNU Fortran (Debian 5.3.1-20) 5.3.1 20160519
>> Copyright (C) 2015 Free Software Foundation, Inc.
>>
>> GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
>> You may redistribute copies of GNU Fortran
>> under the terms of the GNU General Public License.
>> For more information about these matters, see the file named COPYING
>>
>> % julia -f test.jl
>>   1.099910 seconds (84.00 M allocations: 1.252 GB, 7.14% gc time)
>> % gfortran test.f&&./a.out
>>   0.132000 seconds
>>
>> While Julia code is 3 times shorter than the Fortran77 code (and this one 
>> of the many reasons why I like Julia very much), it's also more than 8 
>> times slower and my build of Julia 0.5 (updated to aa1ce87) performs even 
>> worse, it takes about 1.4 seconds.  If I remove access to the arrays (just 
>> put "bar = bar" as loop body) then Julia is infinitely faster than Fortran 
>> in the sense that Julia takes 0.00 seconds, Fortran 0.056000.
>>
>> Is this difference to be expected or are there tricks to fasten the Julia 
>> code?  I should have gotten the order of nested loops right.  @inbounds 
>> doesn't help that much.  Parallelization is probably not an option because 
>> the code above is a small part of a larger loop that does data movement 
>> (can DistributedArrays help?).
>>
>> Bye,
>> Mosè
>>
>

[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Andreas Lobinger
It's valid and interesting to measure the full roundtrip including compile time 
like you do; however, in examples like this, the Julia compilation overhead 
dominates your measurement.
You could put your code into a package and pre-compile. 

In any case you're not measuring only the time it takes to run the code.

On Wednesday, June 1, 2016 at 1:15:54 PM UTC+2, Mosè Giordano wrote:
>
> Hi all,
>
> I'm working on a Fortran77 code, but I'd much prefer to translate it to 
> Julia.  However, I've the impression that when working with large arrays 
> (of the order of tens of thousands elements) Julia is way slower than the 
> equivalent Fortran code.  See the following examples:
>
> Julia:
>
> function foo()
> array1 = rand(70, 1000)
> array2 = rand(70, 1000)
> array3 = rand(2, 70, 20, 20)
> bar = 0
> @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
> end
> end
> foo()
>
> Fortran77 (it uses some GNU extensions, so gfortran is required):
>
>   program main
>   implicit none
>   double precision array1(70, 1000), array2(70, 1000),
>  &array3(2, 70, 20, 20), bar, start, finish
>   integer i, j, k, l
>   call srand(time())
> c Initialize "array1" and "array2"
>   do j = 1, 1000
> do i = 1, 70
>   array1(i, j) = rand()
>   array2(i, j) = rand()
> enddo
>   enddo
> c Initialize "array3"
>   do l = 1, 2
> do k = 1, 70
>   do j = 1, 20
> do i = 1, 20
>   array3(i, j, k, l) = rand()
> enddo
>   enddo
> enddo
>   enddo
> c Do the calculations
>   bar = 0
>   call cpu_time(start)
>   do l = 1, 1000
> do k = 1, 20
>   do j = 1, 20
> do i = 1, 70
>   bar = bar +
>  &(array1(i, l) - array3(1, i, j, k))**2 +
>  &(array2(i, l) - array3(2, i, j, k))**2
> enddo
>   enddo
> enddo
>   enddo
>   call cpu_time(finish)
>   print "(f10.6, a)", finish - start, " seconds"
>   end program main
>
> This is the result of running the two programs on my computer:
>
> % julia --version
> julia version 0.4.5
> % gfortran --version
> GNU Fortran (Debian 5.3.1-20) 5.3.1 20160519
> Copyright (C) 2015 Free Software Foundation, Inc.
>
> GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
> You may redistribute copies of GNU Fortran
> under the terms of the GNU General Public License.
> For more information about these matters, see the file named COPYING
>
> % julia -f test.jl
>   1.099910 seconds (84.00 M allocations: 1.252 GB, 7.14% gc time)
> % gfortran test.f&&./a.out
>   0.132000 seconds
>
> While Julia code is 3 times shorter than the Fortran77 code (and this one 
> of the many reasons why I like Julia very much), it's also more than 8 
> times slower and my build of Julia 0.5 (updated to aa1ce87) performs even 
> worse, it takes about 1.4 seconds.  If I remove access to the arrays (just 
> put "bar = bar" as loop body) then Julia is infinitely faster than Fortran 
> in the sense that Julia takes 0.00 seconds, Fortran 0.056000.
>
> Is this difference to be expected or are there tricks to fasten the Julia 
> code?  I should have gotten the order of nested loops right.  @inbounds 
> doesn't help that much.  Parallelization is probably not an option because 
> the code above is a small part of a larger loop that does data movement 
> (can DistributedArrays help?).
>
> Bye,
> Mosè
>


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Andras Niedermayer
The code becomes about 10 times faster on my computer if I replace "bar=0" 
with "bar=0.0", 
see 
http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-changing-the-type-of-a-variable

(Also running "foo()" twice avoids measuring JIT compilation.)
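
E.g., a sketch of the usual warm-up pattern (with @time wrapping the call
rather than sitting inside the function):

```julia
foo()        # first call pays the one-time JIT compilation cost
@time foo()  # second call measures the run time alone
```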

On Wednesday, June 1, 2016 at 1:15:54 PM UTC+2, Mosè Giordano wrote:
>
> Hi all,
>
> I'm working on a Fortran77 code, but I'd much prefer to translate it to 
> Julia.  However, I've the impression that when working with large arrays 
> (of the order of tens of thousands elements) Julia is way slower than the 
> equivalent Fortran code.  See the following examples:
>
> Julia:
>
> function foo()
> array1 = rand(70, 1000)
> array2 = rand(70, 1000)
> array3 = rand(2, 70, 20, 20)
> bar = 0
> @time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
> bar = bar +
> (array1[i, l] - array3[1, i, j, k])^2 +
> (array2[i, l] - array3[2, i, j, k])^2
> end
> end
> foo()
>
> Fortran77 (it uses some GNU extensions, so gfortran is required):
>
>   program main
>   implicit none
>   double precision array1(70, 1000), array2(70, 1000),
>  &array3(2, 70, 20, 20), bar, start, finish
>   integer i, j, k, l
>   call srand(time())
> c Initialize "array1" and "array2"
>   do j = 1, 1000
> do i = 1, 70
>   array1(i, j) = rand()
>   array2(i, j) = rand()
> enddo
>   enddo
> c Initialize "array3"
>   do l = 1, 2
> do k = 1, 70
>   do j = 1, 20
> do i = 1, 20
>   array3(i, j, k, l) = rand()
> enddo
>   enddo
> enddo
>   enddo
> c Do the calculations
>   bar = 0
>   call cpu_time(start)
>   do l = 1, 1000
> do k = 1, 20
>   do j = 1, 20
> do i = 1, 70
>   bar = bar +
>  &(array1(i, l) - array3(1, i, j, k))**2 +
>  &(array2(i, l) - array3(2, i, j, k))**2
> enddo
>   enddo
> enddo
>   enddo
>   call cpu_time(finish)
>   print "(f10.6, a)", finish - start, " seconds"
>   end program main
>
> This is the result of running the two programs on my computer:
>
> % julia --version
> julia version 0.4.5
> % gfortran --version
> GNU Fortran (Debian 5.3.1-20) 5.3.1 20160519
> Copyright (C) 2015 Free Software Foundation, Inc.
>
> GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
> You may redistribute copies of GNU Fortran
> under the terms of the GNU General Public License.
> For more information about these matters, see the file named COPYING
>
> % julia -f test.jl
>   1.099910 seconds (84.00 M allocations: 1.252 GB, 7.14% gc time)
> % gfortran test.f&&./a.out
>   0.132000 seconds
>
> While Julia code is 3 times shorter than the Fortran77 code (and this one 
> of the many reasons why I like Julia very much), it's also more than 8 
> times slower and my build of Julia 0.5 (updated to aa1ce87) performs even 
> worse, it takes about 1.4 seconds.  If I remove access to the arrays (just 
> put "bar = bar" as loop body) then Julia is infinitely faster than Fortran 
> in the sense that Julia takes 0.00 seconds, Fortran 0.056000.
>
> Is this difference to be expected or are there tricks to fasten the Julia 
> code?  I should have gotten the order of nested loops right.  @inbounds 
> doesn't help that much.  Parallelization is probably not an option because 
> the code above is a small part of a larger loop that does data movement 
> (can DistributedArrays help?).
>
> Bye,
> Mosè
>


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Lutfullah Tomak
The first thing I caught in your code:

http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-changing-the-type-of-a-variable

Make bar non-type-changing:
function foo()
array1 = rand(70, 1000)
array2 = rand(70, 1000)
array3 = rand(2, 70, 20, 20)
bar = 0.0
@time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
end
end

Second, Julia checks array bounds, so the @inbounds macro before the for-loop 
should help improve performance. In some situations @simd may emit vector 
instructions and thus produce faster code.


[julia-users] Re: Is Julia slow with large arrays?

2016-06-01 Thread Mr.Ed.
using @code_warntype suggests that you should change bar=0 to bar=0.0 (or 
bar::Float64 = 0)

i think... (newbie here)
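
For reference, a sketch of how to run that check on the thread's example (the
exact printout varies by Julia version):

```julia
@code_warntype foo()   # look for `bar::Union{Float64,Int64}` in the output;
                       # a Union type on a variable flags the instability
```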




[julia-users] Is Julia slow with large arrays?

2016-06-01 Thread Mosè Giordano
Hi all,

I'm working on a Fortran77 code, but I'd much prefer to translate it to 
Julia.  However, I've the impression that when working with large arrays 
(of the order of tens of thousands elements) Julia is way slower than the 
equivalent Fortran code.  See the following examples:

Julia:

function foo()
array1 = rand(70, 1000)
array2 = rand(70, 1000)
array3 = rand(2, 70, 20, 20)
bar = 0
@time for l = 1:1000, k = 1:20, j = 1:20, i = 1:70
bar = bar +
(array1[i, l] - array3[1, i, j, k])^2 +
(array2[i, l] - array3[2, i, j, k])^2
end
end
foo()

Fortran77 (it uses some GNU extensions, so gfortran is required):

  program main
  implicit none
  double precision array1(70, 1000), array2(70, 1000),
 &array3(2, 70, 20, 20), bar, start, finish
  integer i, j, k, l
  call srand(time())
c Initialize "array1" and "array2"
  do j = 1, 1000
do i = 1, 70
  array1(i, j) = rand()
  array2(i, j) = rand()
enddo
  enddo
c Initialize "array3"
  do l = 1, 2
do k = 1, 70
  do j = 1, 20
do i = 1, 20
  array3(i, j, k, l) = rand()
enddo
  enddo
enddo
  enddo
c Do the calculations
  bar = 0
  call cpu_time(start)
  do l = 1, 1000
do k = 1, 20
  do j = 1, 20
do i = 1, 70
  bar = bar +
 &(array1(i, l) - array3(1, i, j, k))**2 +
 &(array2(i, l) - array3(2, i, j, k))**2
enddo
  enddo
enddo
  enddo
  call cpu_time(finish)
  print "(f10.6, a)", finish - start, " seconds"
  end program main

This is the result of running the two programs on my computer:

% julia --version
julia version 0.4.5
% gfortran --version
GNU Fortran (Debian 5.3.1-20) 5.3.1 20160519
Copyright (C) 2015 Free Software Foundation, Inc.

GNU Fortran comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of GNU Fortran
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING

% julia -f test.jl
  1.099910 seconds (84.00 M allocations: 1.252 GB, 7.14% gc time)
% gfortran test.f&&./a.out
  0.132000 seconds

While Julia code is 3 times shorter than the Fortran77 code (and this one 
of the many reasons why I like Julia very much), it's also more than 8 
times slower and my build of Julia 0.5 (updated to aa1ce87) performs even 
worse, it takes about 1.4 seconds.  If I remove access to the arrays (just 
put "bar = bar" as loop body) then Julia is infinitely faster than Fortran 
in the sense that Julia takes 0.00 seconds, Fortran 0.056000.

Is this difference to be expected or are there tricks to speed up the Julia 
code?  I should have gotten the order of nested loops right.  @inbounds 
doesn't help that much.  Parallelization is probably not an option because 
the code above is a small part of a larger loop that does data movement 
(can DistributedArrays help?).

Bye,
Mosè


Re: [julia-users] Plots gadfly

2016-06-01 Thread Samuele Carcagno

On 01/06/16 09:44, Henri Girard wrote:

Gadfly is deprecated: does it mean we shouldn't use it anymore?
I am trying to use it but I get a long list of recompilation messages.
What can I use instead? I don't want to use plotly because I don't want
to have a password (which doesn't work properly either!) each time I
connect... This interface is very awful.


if you use the `PlotlyJS.jl` interface to plotly you won't need a Plotly 
account or an internet connection, see here:


http://spencerlyon.com/PlotlyJS.jl/
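
For example, a minimal offline sketch (assuming PlotlyJS.jl is already 
installed; the trace values are arbitrary):

```julia
using PlotlyJS               # no Plotly account or network connection needed

plot(scatter(y = rand(10)))  # renders locally (Electron window / notebook cell)
```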

Sam


Re: [julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Kristoffer Carlsson
Original poster (the person who made the thread).

On Wednesday, June 1, 2016 at 12:22:46 PM UTC+2, Michele Zaffalon wrote:
>
> What is OP?
>
> On Wed, Jun 1, 2016 at 12:14 PM, Milan Bouchet-Valat  > wrote:
>
>> Le mercredi 01 juin 2016 à 02:35 -0700, Lutfullah Tomak a écrit :
>> > julia> b_prime = ["8",9,10,c]
>> >
>> > This works with Any.
>> >
>> > julia> Any["3", 4, 14, c]
>> > 4-element Array{Any,1}:
>> >"3"
>> >   4   
>> >  14   
>> >Any[10,"c"]
>> >
>> Yes, for now you need to use that syntax. In 0.5, the OP's code works
>> fine too since the deprecation path has been removed.
>>
>>
>> Regards
>>
>
>

Re: [julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Michele Zaffalon
What is OP?

On Wed, Jun 1, 2016 at 12:14 PM, Milan Bouchet-Valat 
wrote:

> Le mercredi 01 juin 2016 à 02:35 -0700, Lutfullah Tomak a écrit :
> > julia> b_prime = ["8",9,10,c]
> >
> > This works with Any.
> >
> > julia> Any["3", 4, 14, c]
> > 4-element Array{Any,1}:
> >"3"
> >   4
> >  14
> >Any[10,"c"]
> >
> Yes, for now you need to use that syntax. In 0.5, the OP's code works
> fine too since the deprecation path has been removed.
>
>
> Regards
>


Re: [julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Milan Bouchet-Valat
Le mercredi 01 juin 2016 à 02:35 -0700, Lutfullah Tomak a écrit :
> julia> b_prime = ["8",9,10,c]
> 
> This works with Any.
> 
> julia> Any["3", 4, 14, c]
> 4-element Array{Any,1}:
>    "3"        
>   4           
>  14           
>    Any[10,"c"]
>
Yes, for now you need to use that syntax. In 0.5, the OP's code works
fine too since the deprecation path has been removed.


Regards


[julia-users] Re: Using Julia for real time astronomy

2016-06-01 Thread John leger
So for now the best approach is to build a toy system that is equivalent in 
processing time to the original and see for myself what I'm able to get.
We have many ideas and many theories about the behaviour of the GC, so the 
best thing is to try.

Páll -> Thanks for the links 
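
For what it's worth, a sketch of the preallocate-everything style we have in
mind (all names and sizes are hypothetical stand-ins, Julia 0.4 syntax):

```julia
# Buffers are created once, before the loop starts; the hot path below
# only writes into them, so it allocates nothing and never triggers GC.
const frame   = rand(64, 64)                    # stand-in camera image buffer
const work    = Array(Float64, length(frame))   # scratch vector
const R       = rand(97, length(frame))         # stand-in reconstruction matrix
const command = Array(Float64, 97)              # deformable-mirror command vector

function step!(command, work, frame, R)
    @inbounds for i in eachindex(frame)
        work[i] = frame[i] * (1/255)    # in-place preprocessing, no temporaries
    end
    A_mul_B!(command, R, work)          # in-place matrix-vector product
    return command
end
```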

Le mardi 31 mai 2016 18:44:17 UTC+2, Páll Haraldsson a écrit :
>
> On Monday, May 30, 2016 at 8:19:34 PM UTC, Tobias Knopp wrote:
>>
>> If you are prepared to make your code to not perform any heap 
>> allocations, I don't see a reason why there should be any issue. When I 
>> once worked on a very first multi-threading version of Julia I wrote 
>> exactly such functions that won't trigger gc since the later was not thread 
>> safe. This can be hard work but I would assume that its at least not more 
>> work than implementing the application in C/C++ (assuming that you have 
>> some Julia experience)
>>
>
> I would really like to know why the work is hard, is it getting rid of the 
> allocations, or being sure there are no more hidden in your code? I would 
> also like to know then if you can do the same as in D language:
>
> http://wiki.dlang.org/Memory_Management 
>
> The most reliable way to guarantee latency is to preallocate all data that 
> will be needed by the time critical portion. If no calls to allocate memory 
> are done, the GC will not run and so will not cause the maximum latency to 
> be exceeded.
>
> It is possible to create a real-time thread by detaching it from the 
> runtime, marking the thread function @nogc, and ensuring the real-time 
> thread does not hold any GC roots. GC objects can still be used in the 
> real-time thread, but they must be referenced from other threads to prevent 
> them from being collected."
>
> that is would it be possible to make a macro @nogc and mark functions in a 
> similar way? I'm not aware that such a macro is available, to disallow. 
> There is a macro, e.g. @time, that is not sufficient, that shows GC 
> actitivy, but knowing there was none could have been an accident; if you 
> run your code again and memory fills up you see different result.
>
> As with D, the GC in Julia is optional. The above @nogc is really the 
> only different thing I can think of that is better with their 
> optional memory management. But I'm no expert on D, and I may not have 
> looked too closely:
>
> https://dlang.org/spec/garbage.html
>
>
>> Tobi
>>
>> Am Montag, 30. Mai 2016 12:00:13 UTC+2 schrieb John leger:
>>>
>>> Hi everyone,
>>>
>>> I am working in astronomy and we are thinking of using Julia for a real 
>>> time, high performance adaptive optics system on a solar telescope.
>>>
>>> This is how the system is supposed to work: 
>>>1) the image is read from the camera
>>>2) some correction are applied
>>>3) the atmospheric turbulence is numerically estimated in order to 
>>> calculate the command to be sent to the deformable mirror
>>>
>>> The overall process should be executed in less than 1ms so that it can 
>>> be integrated to the chain (closed loop).
>>>
>>> Do you think it is possible to do all the computation in Julia or would 
>>> it be better to code some part in C/C++. What I fear the most is the GC but 
>>> in our case we can pre-allocate everything, so once we launch the system 
>>> there will not be any memory allocated during the experiment and it will 
>>> run for days.
>>>
>>> So, what do you think? Considering the current state of Julia will I be 
>>> able to get the performances I need. Will the garbage collector be an 
>>> hindrance ?
>>>
>>> Thank you.
>>>
>>

[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Lutfullah Tomak
julia> b_prime = ["8",9,10,c]

This works with Any.

julia> Any["3", 4, 14, c]
4-element Array{Any,1}:
   "3"
  4   
 14   
   Any[10,"c"]



[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Andreas Lobinger


On Wednesday, June 1, 2016 at 11:30:01 AM UTC+2, Lutfullah Tomak wrote:
>
> Do what the warning says. Replace ',' with ';', meaning ["8"; 9; 10; c] or 
> ["8"; 9;10; [12; "c"]] 
>

 julia> b = ["8";9;10;[12;"c"]]
5-element Array{Any,1}:
   "8"
  9
 10
 12
   "c"


I'd like to have b[4] = [12;"c"], i.e. keep the nested list rather than 
concatenating the list(s).
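
As shown earlier in the thread, spelling out the element type gets exactly
that (a sketch; display spacing may differ):

```julia
julia> b = Any["8", 9, 10, Any[12, "c"]]   # explicit Any[...] literals don't concatenate
4-element Array{Any,1}:
   "8"
  9
 10
   Any[12,"c"]

julia> b[4]
2-element Array{Any,1}:
 12
   "c"
```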



[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Nils Gudat
A roundabout way:

lol = ["a",1,"b",(2,3)]
lol[4] = collect(lol[4])

Output:

4-element Array{Any,1}:
  "a"
  1
  "b"
  [2,3]



[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Lutfullah Tomak
Sorry for the noise, it obviously doesn't work.


[julia-users] Re: julia equivalent of python [] (Part II)

2016-06-01 Thread Lutfullah Tomak
Do what the warning says. Replace ',' with ';', meaning ["8"; 9; 10; c] or [
"8"; 9;10; [12; "c"]]

On Wednesday, June 1, 2016 at 12:21:59 PM UTC+3, Andreas Lobinger wrote:
>
> Hello colleagues,
>
> i actually was a little bit puzzled by this:
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
> |__/   |  x86_64-linux-gnu
>
> julia> a = [8,"9",10]
> 3-element Array{Any,1}:
>   8
>"9"
>  10
>
> julia> b = ["8",9,10,[12,"c"]]
> WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at ./abstractarray.jl:29
>  in vect at abstractarray.jl:38
> while loading no file, in expression starting on line 0
> 5-element Array{Any,1}:
>"8"
>   9
>  10
>  12
>"c"
>
> julia> c = [12,"c"]
> 2-element Array{Any,1}:
>  12
>"c"
>
> julia> b_prime = ["8",9,10,c]
> WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
>  in depwarn at deprecated.jl:73
>  in oldstyle_vcat_warning at ./abstractarray.jl:29
>  in vect at abstractarray.jl:38
> while loading no file, in expression starting on line 0
> 5-element Array{Any,1}:
>"8"
>   9
>  10
>  12
>"c"
>
>
> How do i put a list into a list? (bonus points for literals!)
>
>
>

[julia-users] julia equivalent of python [] (Part II)

2016-06-01 Thread Andreas Lobinger
Hello colleagues,

i actually was a little bit puzzled by this:

   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.5 (2016-03-18 00:58 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
|__/   |  x86_64-linux-gnu

julia> a = [8,"9",10]
3-element Array{Any,1}:
  8
   "9"
 10

julia> b = ["8",9,10,[12,"c"]]
WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
 in depwarn at deprecated.jl:73
 in oldstyle_vcat_warning at ./abstractarray.jl:29
 in vect at abstractarray.jl:38
while loading no file, in expression starting on line 0
5-element Array{Any,1}:
   "8"
  9
 10
 12
   "c"

julia> c = [12,"c"]
2-element Array{Any,1}:
 12
   "c"

julia> b_prime = ["8",9,10,c]
WARNING: [a,b,...] concatenation is deprecated; use [a;b;...] instead
 in depwarn at deprecated.jl:73
 in oldstyle_vcat_warning at ./abstractarray.jl:29
 in vect at abstractarray.jl:38
while loading no file, in expression starting on line 0
5-element Array{Any,1}:
   "8"
  9
 10
 12
   "c"


How do i put a list into a list? (bonus points for literals!)




Re: [julia-users] Bizarre Segfault during ccall on OSX

2016-06-01 Thread Helge Eichhorn
That was actually not the root cause of the problem. I refactored my 
library significantly, and now the segfault is randomly triggered 
at different points in the test suite on *0.5*. This is the backtrace I get 
on a debug build of Julia:

signal (11): Segmentation fault: 11
while loading /Users/helge/.julia/v0.5/Dopri/test/lowlevel.jl, in 
expression starting on line 42
jl_object_id_ at /Users/helge/projects/julia/src/builtins.c:1023
jl_object_id at /Users/helge/projects/julia/src/builtins.c:1060
hash_svec at /Users/helge/projects/julia/src/builtins.c:1003
jl_object_id_ at /Users/helge/projects/julia/src/builtins.c:1025
jl_object_id at /Users/helge/projects/julia/src/builtins.c:1060
typekey_compare at /Users/helge/projects/julia/src/jltypes.c:1894
lookup_type_idx at /Users/helge/projects/julia/src/jltypes.c:1931
lookup_type at /Users/helge/projects/julia/src/jltypes.c:1959
inst_datatype at /Users/helge/projects/julia/src/jltypes.c:2118
jl_inst_concrete_tupletype_v at 
/Users/helge/projects/julia/src/jltypes.c:2278
arg_type_tuple at /Users/helge/projects/julia/src/gf.c:997
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1570
is_allocation at ./inference.jl:3245
unknown function (ip: 0x10bce7ae0)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
alloc_elim_pass! at ./inference.jl:3315
finish at ./inference.jl:1918
typeinf_frame at ./inference.jl:1823
typeinf_loop at ./inference.jl:1570
unknown function (ip: 0x10bccf906)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_edge at ./inference.jl:1522
unknown function (ip: 0x10bcdcfbc)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_edge at ./inference.jl:1527
unknown function (ip: 0x10bcdb038)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_call_gf_by_type at ./inference.jl:852
unknown function (ip: 0x10bcda534)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_call at ./inference.jl:1023
unknown function (ip: 0x10bcd5bcc)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_eval_call at ./inference.jl:1053
abstract_eval at ./inference.jl:1076
unknown function (ip: 0x10bcd31a4)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_eval_call at ./inference.jl:1027
abstract_eval at ./inference.jl:1076
unknown function (ip: 0x10bcd31a4)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_interpret at ./inference.jl:1201
unknown function (ip: 0x10bcd0314)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_frame at ./inference.jl:1645
typeinf_loop at ./inference.jl:1570
unknown function (ip: 0x10bccf906)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_edge at ./inference.jl:1522
unknown function (ip: 0x10bcdcfbc)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_edge at ./inference.jl:1527
unknown function (ip: 0x10bcdb038)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_call_gf_by_type at ./inference.jl:852
unknown function (ip: 0x10bcda534)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_call at ./inference.jl:1023
unknown function (ip: 0x10bcd5bcc)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
abstract_eval_call at ./inference.jl:1053
abstract_eval at ./inference.jl:1076
unknown function (ip: 0x10bcd31a4)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_frame at ./inference.jl:1722
typeinf_loop at ./inference.jl:1588
unknown function (ip: 0x10bccf906)
jl_call_method_internal at 
/Users/helge/projects/julia/src/./julia_internal.h:88
jl_apply_generic at /Users/helge/projects/julia/src/gf.c:1595
typeinf_edge at ./inference.jl:1522
jl_call_method_inter

[julia-users] Re: Plots gadfly

2016-06-01 Thread Andreas Lobinger


On Wednesday, June 1, 2016 at 10:44:42 AM UTC+2, Henri Girard wrote:
>
> Gadfly is deprecated : Does it mean we shouldn't use it anymore ?
>

Where did you get the message that Gadfly is deprecated? 
If you mean that you get a lot of deprecation warnings when trying to run 
Gadfly on julia v0.5dev, you are right.
In general Gadfly and its infrastructure are a little bit unmaintained at 
the moment.
 

> I am trying to use it but I got a long list of recompiling ?
>
 
Recompiling in itself is nothing to worry about; that Gadfly triggers (via 
dependencies) a really long recompilation (a few minutes on my box) isn't 
nice, but it's not a showstopper.

> What can I use instead? I don't want to use plotly because I don't want to 
> have a password (which doesn't work properly either!) each time I 
> connect... This interface is very awful.
>

Tom Breloff started to generalise a julia plotting API in Plots.jl, which is 
used by many (I don't know exact numbers) and targets a list of backends, 
so you can decide. Tom has commented in the Plots.jl development that he is 
positive about replacing Gadfly with his development.

 


Re: [julia-users] Plots gadfly

2016-06-01 Thread Tamas Papp
I was not aware of Gadfly being deprecated. Where was that announced?

In any case, try Plots.jl, where you can use various backends.

On Wed, Jun 01 2016, Henri Girard wrote:

> Gadfly is deprecated: does it mean we shouldn't use it anymore?
> I am trying to use it but I get a long list of recompilation messages.
> What can I use instead? I don't want to use plotly because I don't want to 
> have a password (which doesn't work properly either!) each time I 
> connect... This interface is very awful.



[julia-users] Plots gadfly

2016-06-01 Thread Henri Girard
Gadfly is deprecated: does it mean we shouldn't use it anymore?
I am trying to use it but I get a long list of recompilation messages.
What can I use instead? I don't want to use plotly because I don't want to 
have a password (which doesn't work properly either!) each time I 
connect... This interface is very awful.


Re: [julia-users] Reading from a TCP socket with quasi-continuous data transfer

2016-06-01 Thread Joshua Jones

>
>
>>1. I've read (here and elsewhere) that Julia does an implicit block 
>>while waiting for data. Is there a workaround?
>>
> Do the blocking read asynchronously in a task. For network I/O (but not 
> file), `read` will only block at the task level. To end the read, `close` 
> the socket from another task.
>

Only a call to @async eof(conn) works as an asynchronous task; I'm not sure 
how that's going to help, except in the REPL. Any other attempt to read 
remotely is a segmentation fault that destroys the worker.

*Example (test function)*
@everywhere readSL(conn) = begin
  while true
eof(conn)
tmp = read(conn.buffer, UInt8, 520)
  end
end



Test call
julia> remotecall(6, readSL, conn);

julia> 
signal (11): Segmentation fault
uv_read_start at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
start_reading at stream.jl:850
wait_readnb at stream.jl:373
eof at stream.jl:96
readSL at none:4
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
jl_f_apply at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
anonymous at multi.jl:920
run_work_thunk at multi.jl:661
run_work_thunk at multi.jl:670
jlcall_run_work_thunk_21322 at  (unknown line)
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
anonymous at task.jl:58
unknown function (ip: 0x7f1d162898cc)
unknown function (ip: (nil))
Worker 6 terminated.
ERROR (unhandled task failure): EOFError: read end of file
 in read at stream.jl:929
 in message_handler_loop at multi.jl:878
 in process_tcp_streams at multi.jl:867
 in anonymous at task.jl:63

Am I doing something fundamentally wrong here?