[julia-users] parallel installation of 0.3 and 0.4?

2014-12-07 Thread Andreas Lobinger
Hello colleagues,

sorry if I'm asking something that's already in the docs, but somehow
I missed it:

How can I use two different versions of Julia in parallel? Obviously I can
have two directories with a local build and the executable, and two
package repositories in .julia/v0.3 and .julia/v0.4. The command-line
history seems to be the only file that is shared. Anything else?

Wishing a happy day,
Andreas



Re: [julia-users] Re: julia vs cython benchmark

2014-12-07 Thread Andre Bieler
Yes I am very pleased with the result too!
Really impressed with both the julia language and community.
Keep up the good work!

On Saturday, December 6, 2014 6:17:40 PM UTC-5, Stefan Karpinski wrote:

 Great – thanks for reporting back. It's nice that you could get that kind
 of good performance here without too many shenanigans.

 On Sat, Dec 6, 2014 at 5:50 PM, Andre Bieler andre.b...@gmail.com wrote:

 For completeness:

 with the inner loops now going through the first index, as suggested by
 Jeff, there was another increase in speed. So now I stand at *16.8 s* on
 average with Julia.

 The same thing in Python/NumPy takes roughly *6800 s* to run
 (however, not vectorized in NumPy; using for loops as in the examples
 above).




[julia-users] Re: julia vs cython benchmark

2014-12-07 Thread Andre Bieler
For vectorized numpy I don't know the result... :)
But I'm not sure it would be a good approach anyway,
as the outer loop can end after just one iteration.


[julia-users] @nloops Cartesian

2014-12-07 Thread gexarcha
Hi, I would like to write a script that loops over an arbitrary number of 
dimensions using Cartesian. Do you know if that is possible?

E.g. : 


julia> using Cartesian

julia> it = [-1., 0., 1.]
3-element Array{Float64,1}:
 -1.0
  0.0
  1.0




julia> @nloops 3 i d->it begin
           tp = @ntuple 3 i
           println(tp)
       end
(-1.0,-1.0,-1.0)
(0.0,-1.0,-1.0)
(1.0,-1.0,-1.0)
(-1.0,0.0,-1.0)
(0.0,0.0,-1.0)
(1.0,0.0,-1.0)
(-1.0,1.0,-1.0)
(0.0,1.0,-1.0)
(1.0,1.0,-1.0)
(-1.0,-1.0,0.0)
(0.0,-1.0,0.0)
(1.0,-1.0,0.0)
(-1.0,0.0,0.0)
(0.0,0.0,0.0)
(1.0,0.0,0.0)
(-1.0,1.0,0.0)
(0.0,1.0,0.0)
(1.0,1.0,0.0)
(-1.0,-1.0,1.0)
(0.0,-1.0,1.0)
(1.0,-1.0,1.0)
(-1.0,0.0,1.0)
(0.0,0.0,1.0)
(1.0,0.0,1.0)
(-1.0,1.0,1.0)
(0.0,1.0,1.0)
(1.0,1.0,1.0)






However

julia> N = 3
3

julia> @nloops N i d->it begin
           tp = @ntuple N i
           println(tp)
       end
ERROR: `_nloops` has no method matching _nloops(::Symbol, ::Symbol, ::Expr,
::Expr)


Is it possible to modify nloops so that it takes a variable for the number 
of loops?

Thanks and best wishes,
Georgios


Re: [julia-users] @nloops Cartesian

2014-12-07 Thread Tim Holy
On Sunday, December 07, 2014 04:59:36 AM gexarcha wrote:
 Is it possible to modify nloops so that it takes a variable for the number 
 of loops?

No, because macros work at parsing time, and the value of a variable is not 
known at parsing time.

However, you have several options:

- Use @ngenerate 
http://docs.julialang.org/en/latest/devdocs/cartesian/#supplying-the-number-of-expressions

- Directly generate your different function variants:

for N = 1:4
    @eval begin
        function myfunction{T}(A::Array{T,$N})
            # body goes here
        end
    end
end

- Use julia 0.4 and use
for I in eachindex(A)
(no macros required)

- Use julia 0.4 and use stagedfunctions (which work similarly to the @eval 
version above).
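For what it's worth, here is a runnable sketch of the @eval option, with a trivial body so it is self-contained (`ndims_of` is an illustrative name, not from the thread):

```julia
# Generate one method per dimensionality N at top level; $N is
# spliced into each definition as a literal integer.
for N = 1:4
    @eval begin
        function ndims_of(A::Array{Float64,$N})
            return $N
        end
    end
end

ndims_of(zeros(2, 3))   # dispatches to the N = 2 method
```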

--Tim



Re: [julia-users] Re: How can I sort Dict efficiently?

2014-12-07 Thread Michiaki ARIGA
I'm sorry that my example didn't explain well what I want to do.

I'm trying to count words and get the top N most frequent ones, and I referred
to the following example:

https://github.com/JuliaLang/julia/blob/master/examples/wordcount.jl

I don't think it should return a Dict for the top N words, so I thought a
DataFrame would be good for getting the top N via head(). But I wonder whether
a DataFrame is unsuitable, because a DataFrame converted from a Dict cannot
easily be sorted by the Dict's values.

On Sat Dec 06 2014 at 11:24:02 Steven G. Johnson stevenj@gmail.com
wrote:



 On Friday, December 5, 2014 9:57:28 AM UTC-5, Michiaki Ariga wrote:

 I found there is no method such as sort_by() after v0.3,
 but I want to count word frequencies with a Dict() and sort by value to
 find the most frequent words.
 So, how can I sort a Dict efficiently?


  You may want to use a different data structure.  For example, you can
 store word frequencies in a PriorityQueue and then pull out the most
 frequent word with peek or dequeue.  See:

 http://julia.readthedocs.org/en/latest/stdlib/collections/

 (A PriorityQueue lets you quickly fetch the smallest value, whereas you
 want the largest frequency, but you can work around this by just storing
 frequency * -1.)

 If you need all of the values in order, you can instead use an OrderedDict
 from https://github.com/JuliaLang/DataStructures.jl
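For the original top-N question, a minimal sketch of the sort-the-pairs approach (this is not the wordcount.jl code; `top_words` is a made-up name): count into a Dict, then sort the collected key/value pairs by value.

```julia
# Count word frequencies, then sort the (word => count) pairs by
# count in descending order and keep the first n.
function top_words(text::AbstractString, n::Integer)
    counts = Dict{String,Int}()
    for w in split(text)
        counts[w] = get(counts, w, 0) + 1
    end
    ranked = sort(collect(counts), by = p -> p[2], rev = true)
    return ranked[1:min(n, length(ranked))]
end
```

For a large vocabulary where only the top few are needed, a PriorityQueue as suggested above avoids sorting the whole collection.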



Re: [julia-users] parallel installation of 0.3 and 0.4?

2014-12-07 Thread Pontus Stenetorp
On 7 December 2014 at 19:57, Andreas Lobinger lobing...@gmail.com wrote:

 How can i use two different versions of julia in parallel? Obviously i can
 have two directories with a local build and the executable, and two
 repositories of packets in .julia/v0.3 and .julia/v0.4. The command-line
 history seems to be the only file that is shared. Anything else?

That sounds exactly like my set-up, and it has been working just lovely
since the v0.3 release.  The only things to add are symlinks `julia0.3` and
`julia0.4`, plus another symlink `julia` pointing to `julia0.3`, since that
is the latest release.
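As a shell sketch of that layout (the build paths are illustrative, and a temp directory stands in for e.g. ~/bin so the example is self-contained):

```shell
# Two local builds side by side, exposed through version-suffixed
# symlinks, with a plain `julia` link pointing at the release.
BIN=$(mktemp -d)                                 # stand-in for ~/bin
ln -s /opt/julia-0.3/bin/julia "$BIN/julia0.3"   # assumed build path
ln -s /opt/julia-0.4/bin/julia "$BIN/julia0.4"   # assumed build path
ln -s "$BIN/julia0.3" "$BIN/julia"               # default to the release
readlink "$BIN/julia"
```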

Pontus


[julia-users] Re: Set of Rational{Int} raises InexactError()

2014-12-07 Thread remi . berson
Here is the issue I opened on github: 
https://github.com/JuliaLang/julia/issues/9264

On Saturday, December 6, 2014 10:34:34 PM UTC+1, remi@gmail.com wrote:

 Hi guys,

 While trying to insert elements of type Rational{Int} into a Set, I ran 
 into an issue with an InexactError exception.
 It happens with some elements. For example: 1//6

 s = Set{Rational{Int}}()
 push!(s, 1//6)

 ERROR: InexactError()
  in hash_integer at hashing2.jl:4
  in hash at hashing2.jl:148
  in ht_keyindex2 at dict.jl:527
  in setindex! at dict.jl:580
  in push! at set.jl:27

 I don't really understand why it behaves like that. Is this a bug? Or a 
 problem when the fraction is converted to float for hashing?

 Thank you for your help,
 Rémi



Re: [julia-users] @nloops Cartesian

2014-12-07 Thread gexarcha
Oh, sorry! I can't believe I missed @ngenerate.
This helped, thanks a lot!
Best,
Georgios

On Sunday, December 7, 2014 2:30:59 PM UTC+1, Tim Holy wrote:

 On Sunday, December 07, 2014 04:59:36 AM gexarcha wrote: 
  Is it possible to modify nloops so that it takes a variable for the 
 number 
  of loops? 




[julia-users] Lot of allocations in Array assignement

2014-12-07 Thread remi . berson


Hey guys,

I'm currently playing with some Eratosthenes sieving in Julia, and found a 
strange behavior of memory allocation.
My naive sieve is as follows:

#=
Naive version of Erato sieve.
* bitarray to store primes
* only eliminate multiples of primes
* separately eliminate even non-primes
=#
function erato1(n::Int)
    # Create bitarray to store primes
    primes_mask = trues(n)
    primes_mask[1] = false

    # Eliminate even numbers first
    for i = 4:2:n
        primes_mask[i] = false
    end

    # Eliminate odd non-primes numbers
    for i = 3:2:n
        if primes_mask[i]
            for j = (i + i):i:n
                primes_mask[j] = false
            end
        end
    end

    # Collect every primes in an array
    n_primes = countnz(primes_mask)
    primes_arr = Array(Int64, n_primes)
    collect1!(primes_mask, primes_arr)
end


with the collect1! function taking a BitArray as argument and returning an
Array containing the prime numbers:

function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
    prime_index = 1
    for i = 2:n
        if primes_mask[i]
            primes_arr[prime_index] = i
            prime_index += 1
        end
    end
    return primes_arr
end

The code works, but is slow because of a lot of memory allocation at the 
line:
primes_arr[prime_index] = i

Here is an extract of the memory allocation profile 
(--track-allocation=user):

        - function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
        0     prime_index = 1
-84934576     for i = 2:n
        0         if primes_mask[i]
184350208             primes_arr[prime_index] = i
        0             prime_index += 1
        -         end
        -     end
        0     return primes_arr
        - end



But if I inline the definition of collect1! into erato1, it is much faster
and the allocation in the collect loop disappears. Here is the updated
code:

function erato1(n::Int)
    # Create bitarray to store primes
    primes_mask = trues(n)
    primes_mask[1] = false

    # Eliminate even numbers first
    for i = 4:2:n
        primes_mask[i] = false
    end

    # Eliminate odd non-primes numbers
    for i = 3:2:n
        if primes_mask[i]
            for j = (i + i):i:n
                primes_mask[j] = false
            end
        end
    end

    # Collect every primes in an array
    n_primes = countnz(primes_mask)
    primes_arr = Array(Int64, n_primes)
    prime_index = 1
    for i = 2:n
        if primes_mask[i]
            primes_arr[prime_index] = i
            prime_index += 1
        end
    end
    return primes_arr
end

And the memory profile seems more reasonable:

       0     n_primes = countnz(primes_mask)
92183392     primes_arr = Array(Int64, n_primes)
       0     prime_index = 1
       0     for i = 2:n
       0         if primes_mask[i]
       0             primes_arr[prime_index] = i
       0             prime_index += 1
       -         end
       -     end


So I'm wondering why the simple fact of inlining the code would remove the 
massive memory allocation when assigning to the array.

Thank you for your help,
Rémi


Re: [julia-users] Struggling with generic functions.

2014-12-07 Thread Rob J. Goedman
Hi John,

Thanks, yes, I should have studied StatsBase.jl in more depth. I have been more 
focused on Distributions.jl.

It seems from my point of view, as a user, there are 3 or 4 levels:

1. BayesianModels, an abstract type that can contain specializations for a 
Bugs/Jags model, a Mamba model , Stan model, MCMC/Lora model, etc.
2. Types to hold a posterior set of samples, like Mamba’s Chains
3. Use of StatsBase’s StatisticalModel and RegressionModel to extract and 
compare summary stats
4. Additional (Bayesian specific?) posterior tools, like Mamba’s diagnostic and 
plotting capabilities.

If we continue this discussion, maybe we should switch to JuliaStats or even 
tmpMCMC.jl?

Regards,
Rob J. Goedman
goed...@mac.com





 On Dec 5, 2014, at 5:04 PM, John Myles White johnmyleswh...@gmail.com wrote:
 
 StatsBase is meant to occupy that sort of role, but there's enough 
 disagreement that we haven't moved as far forward as I'd like. Have you read 
 through the StatsBase codebase?
 
 -- John
 
 On Dec 2, 2014, at 8:19 PM, Rob J. Goedman goed...@icloud.com wrote:
 
 Thanks John, I’d come to a similar conclusion about there not being a 
 straightforward solution. Nice to get that confirmed from your side. Cutting 
 back on exporting names whenever possible also makes a lot of sense. 
 
 Is StatsBase intended to play a similar role for types? Or is that more the 
 domain of PGM.jl?
 
 As an example,  Model is used in several packages (MCMC, Mamba). This makes 
 using both packages simultaneously harder as the end user needs to figure 
 out which names to qualify. Other packages have taken a different approach. 
 MixedModels uses MixedModel and LinearMixedModel, GLM uses LmMod and GlmMod, 
 Stan and Jags use Stanmodel and Jagsmodel (I could rename them to StanModel 
 and JagsModel). Is it reasonable to make Model an abstract type?
 
 Rob
 
 
 On Dec 2, 2014, at 4:37 PM, John Myles White johnmyleswh...@gmail.com 
 wrote:
 
 There's no clean solution to this. In general, I'd argue that we should 
 stop exporting so many names and encourage people to use qualified names 
 much more often than we do right now.
 
 But for important abstractions, we can put them into StatsBase, which all 
 stats packages should be derived from.
 
 -- John
 
 On Dec 2, 2014, at 4:34 PM, Rob J. Goedman goed...@icloud.com wrote:
 
 I’ll try to give an example of my problem based on how I’ve seen it occur 
 in Stan.jl and Jags.jl.
 
 Both DataFrames.jl and Mamba.jl export describe(). Stan.jl relies on 
 Mamba, but neither Stan or Mamba need DataFrames. So DataFrames is not 
 imported by default.
 
 Recently someone used Stan and wanted to read in a .csv file and added 
 DataFrames to the using clause in the script, i.e.
 
 ```
 using Gadfly, Stan, Mamba, DataFrames
 ```
 
 After running a simulation, Mamba’s describe(::Mamba.Chains) could no 
 longer be found.
 
 I wonder if someone can point me in the right direction how best to solve 
 these kind of problems (for end users):
 
 1. One way around it is to always qualify describe(), e.g. 
 Mamba.describe().
 2. Use isdefined(Main, :DataFrames) to upfront test for such a collision.
 3. Suggest to end users to import DataFrames and qualify e.g. 
 DataFrames.readtable().
 4. ?
 
 Thanks and regards,
 Rob J. Goedman
 goed...@mac.com
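A minimal, self-contained sketch of this kind of export collision, with toy modules standing in for Mamba and DataFrames (all names here are made up; on 0.3 the leading dots in `using` are not needed):

```julia
module ToyMamba
export describe
describe(chains::Vector) = "chain summary"   # stand-in for Mamba.describe
end

module ToyDataFrames
export describe
describe(df::Dict) = "frame summary"         # stand-in for DataFrames.describe
end

using .ToyMamba, .ToyDataFrames
# Both modules export `describe`, so the unqualified name is now
# ambiguous; qualifying the call (option 1 above) resolves it.
ToyMamba.describe([1.0, 2.0])
```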
 
 
 
 
 
 
 
 



Re: [julia-users] Lot of allocations in Array assignement

2014-12-07 Thread Milan Bouchet-Valat
Le dimanche 07 décembre 2014 à 08:31 -0800, remi.ber...@gmail.com a
écrit :
 
 
 [...]
 So I'm wondering why the simple fact of inlining the code would remove
 the massive memory allocation when assigning to the array.
I think the difference may come from the fact that n is not a parameter
of collect1!(). As a general rule, avoid using global variables (or at
least mark them as const). Anyway it's much clearer when a function only
depends on its arguments, else it's very hard to follow what it does.

In the specific case of your example, Julia looks for n in the global
scope, not even in the parent function erato1(). So with a fresh
session, I get:
julia> erato1(100)
ERROR: n not defined
 in collect1! at none:3
 in erato1 at none:23


I'm not clear on how variable scoping works with function calls: the
manual says variables are inherited by inner scopes. But I guess it does
not apply to calling a function, only to defining it. Is that right? I
could add a word to the manual if it's deemed useful.

http://docs.julialang.org/en/latest/manual/variables-and-scoping/
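A corrected sketch of collect1! along those lines, deriving n from the mask itself so that nothing is read from global scope (collect2! is just a name to distinguish it):

```julia
# With n taken from length(primes_mask), the compiler can infer the
# type of i, and the per-iteration heap allocations go away.
function collect2!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
    n = length(primes_mask)
    prime_index = 1
    for i = 2:n
        if primes_mask[i]
            primes_arr[prime_index] = i
            prime_index += 1
        end
    end
    return primes_arr
end
```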


Regards



Re: [julia-users] Lot of allocations in Array assignement

2014-12-07 Thread John Myles White
It might be useful to put a bit about Julia being lexically scoped into the 
manual and refer to the Wikipedia article on scope.

 — John

 On Dec 7, 2014, at 9:05 AM, Milan Bouchet-Valat nalimi...@club.fr wrote:
 
 Le dimanche 07 décembre 2014 à 08:31 -0800, remi.ber...@gmail.com a
 écrit :
 
 
  [...]
 I'm not clear on how variable scoping works with function calls: the
 manual says variables are inherited by inner scopes. But I guess it does
 not apply to calling a function, only to defining it. Is that right? I
 could add a word to the manual if it's deemed useful.
 
 http://docs.julialang.org/en/latest/manual/variables-and-scoping/
 
 
 Regards
 



Re: [julia-users] Struggling with generic functions.

2014-12-07 Thread John Myles White
Yes, we should open an issue in tmpMCMC.jl.

 — John

 On Dec 7, 2014, at 8:42 AM, Rob J. Goedman goed...@icloud.com wrote:
 
  [...]
 
 If we continue this discussion, maybe we should switch to JuliaStats or even 
 tmpMCMC.jl?
 
 Regards,
 Rob J. Goedman
 goed...@mac.com
 
 
 
 
 



[julia-users] error initializing module LinAlg

2014-12-07 Thread Luthaf

Hello !

I get a strange error on the latest Julia master. After running the 
make install command, I get a warning on startup:


$ julia
Warning: error initializing module LinAlg:
ErrorException(symbol could not be found openblas_get_config64_: 
dlsym(0x7f9ae3769890, openblas_get_config64_): symbol not found

)
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.0-dev+1971 (2014-12-06 22:54 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit b83e4bb (0 days old master)
|__/                   |  x86_64-apple-darwin13.4.0

julia> versioninfo()
Julia Version 0.4.0-dev+1971
Commit b83e4bb (2014-12-06 22:54 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
  WORD_SIZE: 64
ERROR: symbol could not be found openblas_get_config64_: 
dlsym(0x7f9ae3769890, openblas_get_config64_): symbol not found


 in openblas_get_config at ./util.jl:120
 in versioninfo at interactiveutil.jl:176
 in versioninfo at interactiveutil.jl:146

My Make.user file only contains PREFIX=/usr/local/. This warning/error 
goes away if I start julia directly from the build folder:


julia> versioninfo()
Julia Version 0.4.0-dev+1971
Commit b83e4bb (2014-12-06 22:54 UTC)
DEBUG build
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

Should I file an issue about this, or have I messed something up while 
building Julia?


Regards,
Guillaume







[julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread adokano
I am writing a Julia function which implements some of the functionality of 
the LAPACK function TPQRT, but my version stops before computing the block 
reflectors. In my function, I make several calls to gemv! and ger! from 
Base.LinAlg.BLAS. Each of these calls seems to generate extra overhead of 
440 bytes according to the output of --track-allocation, and since each of 
those functions gets called several times within a function that also gets 
called many times, the extra allocations blow up and I end up spending a 
large chunk of time in garbage collection. Is it normal for each function 
call to dynamically allocate so much space? Here is the function in 
question:

function tpqrf!( A::StridedMatrix{Float64}, B::StridedMatrix{Float64}, 
 tau::StridedVector{Float64}, work::Vector{Float64} )
  m,n = size(B)
  for j = 1:n
# Construct Householder reflector
A[j,j], tau[j] = larfg!(A[j,j], sub(B, :, j), tau[j])
# Apply reflector to remainder
for i = j+1:n
  work[i] = A[j,i]
end
Base.LinAlg.BLAS.gemv!('T', 1.0, sub(B,:,j+1:n), sub(B,:,j), 1.0, 
sub(work,j+1:n))
for k = j+1:n
  A[j,k] = A[j,k] - tau[j]*work[k]
end
Base.LinAlg.BLAS.ger!(-tau[j], sub(B,:,j), sub(work,j+1:n), 
sub(B,:,j+1:n))
  end
end

Thanks,
Aaron





Re: [julia-users] Lot of allocations in Array assignement

2014-12-07 Thread Stefan Karpinski
Indeed, this is the case. That n is coming from global scope and isn't a
constant when the code is generated, so the generated code has to be able
to deal with the type of n being anything at all. The type of n affects the
type of i, so the compiler also can't know the type of i and must, just to
be safe, heap allocate each value of i.

On Sun, Dec 7, 2014 at 12:05 PM, Milan Bouchet-Valat nalimi...@club.fr
wrote:

 Le dimanche 07 décembre 2014 à 08:31 -0800, remi.ber...@gmail.com a
 écrit :
 
 
   [...]
  So I'm wondering why the simple fact of inlining the code would remove
  the massive memory allocation when assigning to the array.
 I think the difference may come from the fact that n is not a parameter
 of collect1!(). As a general rule, avoid using global variables (or at
 least mark them as const). Anyway it's much clearer when a function only
 depends on its arguments, else it's very hard to follow what it does.





Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread Andreas Noack
The allocation is from gemv!. It is likely to be sub that allocates the
memory. See


julia let
   A = randn(5,5);
   x = randn(5);
   b = Array(Float64, 5)
   @time for i = 1:10;BLAS.gemv!('N',1.0,A,x,0.0,b);end
   end
elapsed time: 8.0933e-5 seconds (0 bytes allocated)


versus

julia let
   x = randn(5);
   @time for i = 1:10;sub(x, 1:5);end
   end
elapsed time: 1.258e-6 seconds (1440 bytes allocated)


I've been told that it is because of the garbage collector and
unfortunately I'm not sure if it is possible to change. It can in some
cases be extensive and make it difficult to compete with Fortran
implementations of linear algebra algorithms.

2014-12-07 12:41 GMT-05:00 adok...@ucdavis.edu:

 I am writing a Julia function which implements some of the functionality
 of the LAPACK function TPQRT, but my version stops before computing the
 block reflectors. In my function, I make several calls to gemv! and ger!
 from Base.LinAlg.BLAS. Each of these calls seems to generate extra overhead
 of 440 bytes according to the output of --track-allocation, and since each
 of those functions gets called several times within a function that also
 gets called many times, the extra allocations blow up and I end up spending
 a large chunk of time in garbage collection. Is it normal for each function
 call to dynamically allocate so much space? Here is the function in
 question:

 function tpqrf!( A::StridedMatrix{Float64}, B::StridedMatrix{Float64},
  tau::StridedVector{Float64}, work::Vector{Float64} )
   m,n = size(B)
   for j = 1:n
 # Construct Householder reflector
 A[j,j], tau[j] = larfg!(A[j,j], sub(B, :, j), tau[j])
 # Apply reflector to remainder
 for i = j+1:n
   work[i] = A[j,i]
 end
 Base.LinAlg.BLAS.gemv!('T', 1.0, sub(B,:,j+1:n), sub(B,:,j), 1.0,
 sub(work,j+1:n))
 for k = j+1:n
   A[j,k] = A[j,k] - tau[j]*work[k]
 end
 Base.LinAlg.BLAS.ger!(-tau[j], sub(B,:,j), sub(work,j+1:n),
 sub(B,:,j+1:n))
   end
 end

 Thanks,
 Aaron






Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread adokano
Thank you! I had suspected it may have been related to sub. Perhaps I will 
just call GEMV directly and pass it pointers to the correct locations.

On Sunday, December 7, 2014 9:59:47 AM UTC-8, Andreas Noack wrote:

 The allocation is from gemv!. It is likely to be sub that allocates the 
 memory. See


 julia let
A = randn(5,5);
x = randn(5);
b = Array(Float64, 5)
@time for i = 1:10;BLAS.gemv!('N',1.0,A,x,0.0,b);end
end
 elapsed time: 8.0933e-5 seconds (0 bytes allocated)


 versus

 julia let
x = randn(5);
@time for i = 1:10;sub(x, 1:5);end
end
 elapsed time: 1.258e-6 seconds (1440 bytes allocated)


 I've been told that it is because of the garbage collector and 
 unfortunately I'm not sure if it is possible to change. It can in some 
 cases be extensive and make it difficult to compete with Fortran 
 implementations of linear algebra algorithms.

 2014-12-07 12:41 GMT-05:00 ado...@ucdavis.edu:

 I am writing a Julia function which implements some of the functionality 
 of the LAPACK function TPQRT, but my version stops before computing the 
 block reflectors. In my function, I make several calls to gemv! and ger! 
 from Base.LinAlg.BLAS. Each of these calls seems to generate extra overhead 
 of 440 bytes according to the output of --track-allocation, and since each 
 of those functions gets called several times within a function that also 
 gets called many times, the extra allocations blow up and I end up spending 
 a large chunk of time in garbage collection. Is it normal for each function 
 call to dynamically allocate so much space? Here is the function in 
 question:

 function tpqrf!( A::StridedMatrix{Float64}, B::StridedMatrix{Float64}, 
  tau::StridedVector{Float64}, work::Vector{Float64} )
   m,n = size(B)
   for j = 1:n
 # Construct Householder reflector
 A[j,j], tau[j] = larfg!(A[j,j], sub(B, :, j), tau[j])
 # Apply reflector to remainder
 for i = j+1:n
   work[i] = A[j,i]
 end
 Base.LinAlg.BLAS.gemv!('T', 1.0, sub(B,:,j+1:n), sub(B,:,j), 1.0, 
 sub(work,j+1:n))
 for k = j+1:n
   A[j,k] = A[j,k] - tau[j]*work[k]
 end
 Base.LinAlg.BLAS.ger!(-tau[j], sub(B,:,j), sub(work,j+1:n), 
 sub(B,:,j+1:n))
   end
 end

 Thanks,
 Aaron






Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread Andreas Noack

 Perhaps I will just call GEMV directly and pass it pointers to the correct
 locations.


Yes, that would solve the allocation problem. Some of the BLAS wrappers
support pointers as input, but I think only the BLAS1 routines do.
Hence, you'll have to make your own ccall to BLAS. Maybe we should consider
supporting pointer versions of the BLAS2 and BLAS3 wrappers.
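One way to sketch the pointer-based idea without writing the ccall by hand: wrap the raw pointer back into a plain dense array and hand that to the existing wrapper. The helper name gemv_cols! is hypothetical, and unsafe_wrap is the current API (Julia 0.3 spelled it pointer_to_array); this is a sketch, not the wrapper discussed above.

```julia
using LinearAlgebra

# Run gemv! over columns jrange of a dense Matrix via a raw pointer,
# instead of allocating a SubArray with sub()/view().
function gemv_cols!(B::Matrix{Float64}, jrange::UnitRange{Int},
                    x::Vector{Float64}, y::Vector{Float64})
    m = size(B, 1)
    GC.@preserve B begin
        p = pointer(B, (first(jrange) - 1) * m + 1)       # start of first column
        Bv = unsafe_wrap(Array, p, (m, length(jrange)))   # no-copy matrix over B's memory
        BLAS.gemv!('T', 1.0, Bv, x, 0.0, y)               # y = Bv' * x
    end
    return y
end

B = [1.0 2.0 3.0;
     4.0 5.0 6.0]
x = [1.0, 1.0]
y = zeros(2)
gemv_cols!(B, 2:3, x, y)   # y = B[:, 2:3]' * x
```

Note the GC.@preserve: the wrapped array does not keep B alive on its own, so B must be rooted for the duration of the call.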

2014-12-07 13:05 GMT-05:00 adok...@ucdavis.edu:

 Thank you! I had suspected it may have been related to sub. Perhaps I will
 just call GEMV directly and pass it pointers to the correct locations.

 On Sunday, December 7, 2014 9:59:47 AM UTC-8, Andreas Noack wrote:

 The allocation is from gemv!. It is likely to be sub that allocates the
 memory. See


 julia let
A = randn(5,5);
x = randn(5);
b = Array(Float64, 5)
@time for i = 1:10;BLAS.gemv!('N',1.0,A,x,0.0,b);end
end
 elapsed time: 8.0933e-5 seconds (0 bytes allocated)


 versus

 julia let
x = randn(5);
@time for i = 1:10;sub(x, 1:5);end
end
 elapsed time: 1.258e-6 seconds (1440 bytes allocated)


 I've been told that it is because of the garbage collector and
 unfortunately I'm not sure if it is possible to change. It can in some
 cases be extensive and make it difficult to compete with Fortran
 implementations of linear algebra algorithms.

 2014-12-07 12:41 GMT-05:00 ado...@ucdavis.edu:

 I am writing a Julia function which implements some of the functionality
 of the LAPACK function TPQRT, but my version stops before computing the
 block reflectors. In my function, I make several calls to gemv! and ger!
 from Base.LinAlg.BLAS. Each of these calls seems to generate extra overhead
 of 440 bytes according to the output of --track-allocation, and since each
 of those functions gets called several times within a function that also
 gets called many times, the extra allocations blow up and I end up spending
 a large chunk of time in garbage collection. Is it normal for each function
 call to dynamically allocate so much space? Here is the function in
 question:

 function tpqrf!( A::StridedMatrix{Float64}, B::StridedMatrix{Float64},
  tau::StridedVector{Float64}, work::Vector{Float64} )
   m,n = size(B)
   for j = 1:n
 # Construct Householder reflector
 A[j,j], tau[j] = larfg!(A[j,j], sub(B, :, j), tau[j])
 # Apply reflector to remainder
 for i = j+1:n
   work[i] = A[j,i]
 end
 Base.LinAlg.BLAS.gemv!('T', 1.0, sub(B,:,j+1:n), sub(B,:,j), 1.0,
 sub(work,j+1:n))
 for k = j+1:n
   A[j,k] = A[j,k] - tau[j]*work[k]
 end
 Base.LinAlg.BLAS.ger!(-tau[j], sub(B,:,j), sub(work,j+1:n),
 sub(B,:,j+1:n))
   end
 end

 Thanks,
 Aaron







Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread Stefan Karpinski
On Sun, Dec 7, 2014 at 1:15 PM, Andreas Noack andreasnoackjen...@gmail.com
wrote:

 Maybe we should consider to support pointer version of the BLAS2 and 3
 wrappers.


Seems like a good idea.


Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl


Hello everybody,


I found Amuthan's blog a while back, but only about two weeks ago I found 
the time to look seriously at Julia. What I found was very encouraging.


For a variety of teaching and research purposes I maintain a Matlab FEA 
toolkit called FinEALE. It is about 80,000 lines of code with all the 
examples and tutorials. In the past week I rewrote the bits and pieces that 
allow me to run a comparison with Amuthan's code. Here are the results:


For 1000 x 1000 grid (2 million triangles):


Amuthan's code: 29 seconds


J FinEALE: 86 seconds


FinEALE: 810 seconds


Mind you, we are not doing the same thing in these codes. FinEALE and J 
FinEALE implement code to solve the heat conduction problem with 
arbitrarily anisotropic materials. The calculation of the FE space is also 
not vectorized as in Amuthan's code. The code is written to be legible and 
general: the same code that calculates the matrices and vectors for a 
triangle mesh would also work for quadrilaterals, linear and quadratic, 
both in the pure 2-D and the axially symmetric set up, and tetrahedral and 
hexahedral elements in 3-D. There is obviously a price to pay for all this 
generality.


Concerning Amuthan's comparison with the two compiled FEA codes: it really 
depends how the problem is set up for those codes. I believe that FEniCS 
has a form compiler which can spit out optimized code that in this case 
would be entirely equivalent to the simplified calculation (isotropic 
material with conductivity equal to 1.0), and linear triangles. I'm not 
sure about FreeFem++, but since it has a domain-specific language, it can 
also presumably optimize its operations. So in my opinion it is rather 
impressive that Amuthan's code in Julia can do so well.


Petr
 


Re: [julia-users] Lot of allocations in Array assignement

2014-12-07 Thread Milan Bouchet-Valat
Le dimanche 07 décembre 2014 à 09:12 -0800, John Myles White a écrit :
 It might be useful to put a bit about Julia being lexically scoped
 into the manual and refer to the Wikipedia article on scope.
Ah, that's the term I needed. Opened a pull request here:
https://github.com/JuliaLang/julia/pull/9267



Regards

  — John
 
  On Dec 7, 2014, at 9:05 AM, Milan Bouchet-Valat nalimi...@club.fr wrote:
  
  Le dimanche 07 décembre 2014 à 08:31 -0800, remi.ber...@gmail.com a
  écrit :
  
  
  Hey guys,
  
  I'm currently playing with some Eratosthenes sieving in Julia, and
  found a strange behavior of memory allocation.
  My naive sieve is as follows:
  
  #=
  Naive version of Erato sieve.
 * bitarray to store primes
 * only eliminate multiples of primes
 * separately eliminate even non-primes
  =#
  function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false
  
 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end
  
 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end
  
 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 collect1!(primes_mask, primes_arr)
  end
  
  
  
  With the collect1! function that takes a BitArray as argument and
  return an Array containing the primes numbers.
  
  function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64,
  1})
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
  end
  
  
  The codes works, but is slow because of a lot of memory allocation at
  the line:
  primes_arr[prime_index] = i
  
  
  Here is an extract of the memory allocation profile
  (--track-allocation=user):
  
 - function collect1!(primes_mask::BitArray{1},
  primes_arr::Array{Int64, 1})
 0 prime_index = 1
  -84934576 for i = 2:n
 0 if primes_mask[i]
  184350208 primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end
 0 return primes_arr
 - end
  
  
  
  
  But, if I inline the definition of collect1! into the erato1, this is
  much faster and the allocation in the loop of collect disappears. Here
  is the code updated:
  
  function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false
  
 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end
  
 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end
  
 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
  end
  
  
  And the memory profile seems more reasonable:
  
 0 n_primes = countnz(primes_mask)
  92183392 primes_arr = Array(Int64, n_primes)
 0 prime_index = 1
 0 for i = 2:n
 0 if primes_mask[i]
 0 primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end
  
  
  
  So I'm wondering why the simple fact of inlining the code would remove
  the massive memory allocation when assigning to the array.
  I think the difference may come from the fact that n is not a parameter
  of collect1!(). As a general rule, avoid using global variables (or at
  least mark them as const). Anyway it's much clearer when a function only
  depends on its arguments, else it's very hard to follow what it does.
  
  In the specific case of your example, Julia looks for n in the global
  scope, not even in the parent function erato1(). So with a fresh
  session, I get:
  julia erato1(100)
  ERROR: n not defined
  in collect1! at none:3
  in erato1 at none:23
  
  
  I'm not clear on how variable scoping works with function calls: the
  manual says variables are inherited by inner scopes. But I guess it does
  not apply to calling a function, only to defining it. Is that right? I
  could add a word to the manual if it's deemed useful.
  
  http://docs.julialang.org/en/latest/manual/variables-and-scoping/
  
  
  Regards
  



[julia-users] error initializing module LinAlg

2014-12-07 Thread Ivar Nesje
Yes, I think you should report this on GitHub. I have never used `make install`, 
because julia runs fine from the build folder, and I have multiple folders 
where I compile different versions. Then I have aliases in my .profile so that 
I don't have to type the full path. It should work, though.

Re: [julia-users] Newbie question about Tasks

2014-12-07 Thread Bill Allen
Great! That's what I thought, but it can't hurt to ask.

On Saturday, December 6, 2014 4:58:59 PM UTC-5, Steven G. Johnson wrote:



 On Saturday, December 6, 2014 4:20:17 PM UTC-5, Bill Allen wrote:

 With the sum function, and by extension any function that deals with 
 iterables, is it safe to assume they work without allocating storage for 
 the results of the iterable?


 Yes, they just loop over the iterable and accumulate as they go. 



Re: [julia-users] Re: Lot of allocations in Array assignement

2014-12-07 Thread Stefan Karpinski
Values that are used inside of functions come from an outer scope – if they
are not defined in any enclosing local scopes, then they are global. In
particular, function names are just global constants, so this behavior is
really quite important. You wouldn't want to have to declare every function
that you're going to call to be global before calling it.
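A tiny illustration of that lookup rule (the names f and g are hypothetical):

```julia
# g is resolved in global scope when f is *called*, not when it is defined:
f() = g() + 1      # fine even though g does not exist yet
g() = 41
result = f()       # g is found as a global binding at call time
```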

On Sun, Dec 7, 2014 at 3:47 PM, remi.ber...@gmail.com wrote:

 Well indeed that was the problem. Thank you very much. I wasn't aware of
 this behavior of Julia, and I didn't even see that n wasn't in the scope of
 the function. Somehow I believed that if it were the case an error would be
 raised.

 Are there situations where this behavior is wanted? Because I find it
 counterintuitive (but maybe it's just me). I would be glad to know more
 about the decisions that led to this design choice in Julia (or in
 other languages that have the same feature).

 Best,
 Rémi



 On Sunday, December 7, 2014 5:31:40 PM UTC+1, remi@gmail.com wrote:



 Hey guys,

 I'm currently playing with some Eratosthenes sieving in Julia, and found
 a strange behavior of memory allocation.
 My naive sieve is as follows:

 #=
 Naive version of Erato sieve.
 * bitarray to store primes
 * only eliminate multiples of primes
 * separately eliminate even non-primes
 =#
 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 collect1!(primes_mask, primes_arr)
 end


 With the collect1! function that takes a BitArray as argument and return
 an Array containing the primes numbers.

 function collect1!(primes_mask::BitArray{1}, primes_arr::Array{Int64, 1})
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 The codes works, but is slow because of a lot of memory allocation at the
 line:
 primes_arr[prime_index] = i

 Here is an extract of the memory allocation profile
 (--track-allocation=user):

 - function collect1!(primes_mask::BitArray{1}, primes_arr::Array{
 Int64, 1})
 0 prime_index = 1
 -84934576 for i = 2:n
 0 if primes_mask[i]
 *184350208* primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end
 0 return primes_arr
 - end



 But, if I inline the definition of collect1! into the erato1, this is
 much faster and the allocation in the loop of collect disappears. Here is
 the code updated:

 function erato1(n::Int)
 # Create bitarray to store primes
 primes_mask = trues(n)
 primes_mask[1] = false

 # Eliminate even numbers first
 for i = 4:2:n
 primes_mask[i] = false
 end

 # Eliminate odd non-primes numbers
 for i = 3:2:n
 if primes_mask[i]
 for j = (i + i):i:n
 primes_mask[j] = false
 end
 end
 end

 # Collect every primes in an array
 n_primes = countnz(primes_mask)
 primes_arr = Array(Int64, n_primes)
 prime_index = 1
 for i = 2:n
 if primes_mask[i]
 primes_arr[prime_index] = i
 prime_index += 1
 end
 end
 return primes_arr
 end

 And the memory profile seems more reasonable:

 0 n_primes = countnz(primes_mask)
  92183392 primes_arr = Array(Int64, n_primes)
 0 prime_index = 1
 0 for i = 2:n
 0 if primes_mask[i]
 0 primes_arr[prime_index] = i
 0 prime_index += 1
 - end
 - end


 So I'm wondering why the simple fact of inlining the code would remove
 the massive memory allocation when assigning to the array.

 Thank you for your help,
 Rémis




Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Rob J. Goedman
Petr,

Are you referring to 
http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia ?

Or is this from another blog?

Rob J. Goedman
goed...@mac.com





 On Dec 7, 2014, at 10:21 AM, Petr Krysl krysl.p...@gmail.com wrote:
 
 Hello everybody,
 
 
 
 I found Amuthan 's blog a while back, but only about two weeks ago I found 
 the time to look seriously at Julia. What I found was very encouraging.
 
 
 
 For a variety of teaching and research purposes I maintain a Matlab FEA 
 toolkit called FinEALE. It is about 80,000 lines of code with all the 
 examples and tutorials. In the past week I rewrote the bits and pieces that 
 allow me to run a comparison with Amuthan 's code. Here are the results:
 
 
 
 For 1000 x 1000 grid (2 million triangles):
 
 
 
 Amuthan's code: 29 seconds
 
 
 
 J FinEALE: 86 seconds
 
 
 
 FinEALE: 810 seconds
 
 
 
 Mind you, we are not doing the same thing in these codes. FinEALE and J 
 FinEALE implement code to solve the heat conduction problem with arbitrarily 
 anisotropic materials. The calculation of the FE space is also not vectorized 
 as in Amuthan's code. The code is written to be legible and general: the same 
 code that calculates the matrices and vectors for a triangle mesh would also 
 work for quadrilaterals, linear and quadratic, both in the pure 2-D and the 
 axially symmetric set up, and tetrahedral and hexahedral elements in 3-D. 
 There is obviously a price to pay for all this generality.
 
 
 
 Concerning Amuthan 's comparison with the two compiled FEA codes: it really 
 depends how the problem is set up for those codes. I believe that Fenics has 
 a form compiler which can spit out an optimized code that in this case would 
 be entirely equivalent to the simplified calculation (isotropic material with 
 conductivity equal to 1.0), and linear triangles. I'm not sure about 
 freefem++, but since it has a domain-specific language, it can also 
 presumably optimize its operations. So in my opinion it is rather impressive 
 that Amuthan 's code in Julia can do so well.
 
 
 
 Petr
 



Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread adokano
As a side question, what's the best way to get a pointer to an element of 
an array, e.g. if I want a pointer to the memory location which holds 
A[3,6]?

On Sunday, December 7, 2014 10:21:17 AM UTC-8, Stefan Karpinski wrote:

 On Sun, Dec 7, 2014 at 1:15 PM, Andreas Noack andreasno...@gmail.com wrote:

 Maybe we should consider to support pointer version of the BLAS2 and 3 
 wrappers.


 Seems like a good idea.



Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread Andreas Noack
For matrices, I think it is

pointer(A, stride(A, 2)*(n - 1) + stride(A, 1)*(m - 1) + 1)

in general, but if you are only considering Array{T,2} then

pointer(A, size(A, 1)*(n - 1) + m)

should be fine.
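A quick sanity check of the dense-matrix formula (the column stride of an Array is size(A, 1), the row count):

```julia
A = reshape(collect(1.0:12.0), 3, 4)        # column-major 3×4 matrix
m, n = 3, 2
GC.@preserve A begin
    p = pointer(A, size(A, 1)*(n - 1) + m)  # address of A[m, n]
    @assert unsafe_load(p) == A[m, n]       # both are 6.0
end
```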

2014-12-07 16:14 GMT-05:00 adok...@ucdavis.edu:

 As a side question, what's the best way to get a pointer to an element of
 an array, e.g. if I want a pointer to the memory location which holds
 A[3,6]?

 On Sunday, December 7, 2014 10:21:17 AM UTC-8, Stefan Karpinski wrote:

 On Sun, Dec 7, 2014 at 1:15 PM, Andreas Noack andreasno...@gmail.com
 wrote:

 Maybe we should consider to support pointer version of the BLAS2 and 3
 wrappers.


 Seems like a good idea.




Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl
Correct. That is the blog cited by the OP in this thread.

On Sunday, December 7, 2014 1:09:36 PM UTC-8, Rob J Goedman wrote:

 Petr,

 Are you referring to 
 http://www.codeproject.com/Articles/579983/Finite-Element-programming-in-Julia
  ?

 Or is this from another blog?

 Rob J. Goedman
 goe...@mac.com




  
 On Dec 7, 2014, at 10:21 AM, Petr Krysl krysl...@gmail.com wrote:

 Hello everybody,


 I found Amuthan 's blog a while back, but only about two weeks ago I found 
 the time to look seriously at Julia. What I found was very encouraging.


 For a variety of teaching and research purposes I maintain a Matlab FEA 
 toolkit called FinEALE. It is about 80,000 lines of code with all the 
 examples and tutorials. In the past week I rewrote the bits and pieces that 
 allow me to run a comparison with Amuthan 's code. Here are the results:


 For 1000 x 1000 grid (2 million triangles):


 Amuthan's code: 29 seconds


 J FinEALE: 86 seconds


 FinEALE: 810 seconds


 Mind you, we are not doing the same thing in these codes. FinEALE and J 
 FinEALE implement code to solve the heat conduction problem with 
 arbitrarily anisotropic materials. The calculation of the FE space is also 
 not vectorized as in Amuthan's code. The code is written to be legible and 
 general: the same code that calculates the matrices and vectors for a 
 triangle mesh would also work for quadrilaterals, linear and quadratic, 
 both in the pure 2-D and the axially symmetric set up, and tetrahedral and 
 hexahedral elements in 3-D. There is obviously a price to pay for all this 
 generality.


 Concerning Amuthan 's comparison with the two compiled FEA codes: it 
 really depends how the problem is set up for those codes. I believe that 
 Fenics has a form compiler which can spit out an optimized code that in 
 this case would be entirely equivalent to the simplified calculation 
 (isotropic material with conductivity equal to 1.0), and linear triangles. 
 I'm not sure about freefem++, but since it has a domain-specific language, 
 it can also presumably optimize its operations. So in my opinion it is 
 rather impressive that Amuthan 's code in Julia can do so well.


 Petr
  




Re: [julia-users] Memory allocation in BLAS wrappers

2014-12-07 Thread Ivar Nesje
Seems to me like pointer(A, 3, 6) would be nice and unambiguous for 2d arrays. 
Is there any reason why that shouldn't be implemented?

The current implementation is a little too dangerous for AbstractArray, in my 
opinion. Can we limit it to ContiguousArray (or whatever it is called now), and 
make it somewhat safer?

https://github.com/JuliaLang/julia/blob/4b299c2fd5464ece308a8e708789a9d2aa9e32d3/base/pointer.jl#L29

I know these questions are a better fit for GitHub, but I don't have time to 
create a PR right now.
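A sketch of the two-index form as a free function (the name elptr is hypothetical, chosen to avoid adding a method to Base.pointer), restricted to dense column-major matrices:

```julia
# Element pointer for a dense column-major matrix; no bounds checking shown.
elptr(A::Matrix, i::Integer, j::Integer) = pointer(A, (j - 1)*size(A, 1) + i)

A = reshape(collect(1.0:12.0), 3, 4)
p = elptr(A, 3, 2)
```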

[julia-users] Re: How to find index in Array{AbstractString,1}

2014-12-07 Thread David van Leeuwen
Hi, 

On Thursday, December 4, 2014 8:53:00 PM UTC+1, Steven G. Johnson wrote:

 help(findin) will tell you that the 2nd argument of findin should be a 
 collection, not a single element.  So you want findin(suo2, [a])


It is strange that for an array of Int Paul's approach works fine:

julia findin([1, 4, 6], 4)
1-element Array{Int64,1}: 
2

julia findin([1, 4, 6], 4) == findin([1, 4, 6], [4]) 
true


I think the current interpretation for an ASCIIString in terms of the 2nd 
argument in findin() is the collection of characters
julia findin(['a', 'b', 'c'], "bc")
2-element Array{Int64,1}:
2
3

which may not be the intention of findin().  The reason, I believe, is the 
way the 2nd argument is cast into a set:
...
   bset = union!(Set(), b)
...

which seems to put most single element types in a Set, but for a string 
type this makes a set of the characters in the string. 
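The distinction is easy to demonstrate (shown with findall(in(b), a), the current spelling of findin(a, b)):

```julia
# A string as the collection argument behaves as a set of characters...
@assert findall(in("bc"), ['a', 'b', 'c']) == [2, 3]
# ...while wrapping the string in a vector matches it as a whole element:
@assert findall(in(["bc"]), ["ab", "bc", "cd"]) == [2]
```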



[julia-users] Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Phil Tomson
julia typeof(-0b111)
Uint64

julia typeof(-7)
Int64

julia typeof(-0x7)
Uint64

julia typeof(-7)
Int64

I find this a bit surprising. Why does the base of the number determine 
signed or unsigned-ness? Is this intentional or possibly a bug?


[julia-users] Is there a null IOStream?

2014-12-07 Thread K Leo
At times I don't want to output anything, so I pass a null IOStream to a 
function that requires an IOStream.  How to do that?


[julia-users] Re: Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread elextr


On Monday, December 8, 2014 10:21:52 AM UTC+10, Phil Tomson wrote:

 julia typeof(-0b111)
 Uint64

 julia typeof(-7)
 Int64

 julia typeof(-0x7)
 Uint64

 julia typeof(-7)
 Int64

 I find this a bit surprising. Why does the base of the number determine 
 signed or unsigned-ness? Is this intentional or possibly a bug?


This is documented behaviour 
http://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#integers
 
based on the heuristic that using hex is mostly in situations where you 
need unsigned behaviour anyway.
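The rule is easy to check; note that in current Julia the width of an unsigned literal depends on its digit count, so 0x7 is a UInt8 rather than a UInt64:

```julia
@assert typeof(-0x7) <: Unsigned   # negating a hex literal stays unsigned (wraps around)
@assert typeof(-7) == Int          # decimal literal is signed
@assert -Int(0x7) == -7            # convert first if a signed result is wanted
```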

Cheers
Lex   


Re: [julia-users] Avoiding allocation when writing to arrays of immutables

2014-12-07 Thread Will Dobbie
I tested it on 0.4-dev+1310 and it fixed the allocations in the example 
above. But my real program was still allocating at the same spot. It seems 
to be a different issue unrelated to immutables. I've boiled it down to the 
example below. It's a little contrived as my real aim is to write to memory 
not allocated by Julia. It seems to be something to do with accessing 
arrays directly versus through a field? I've tried splitting it up into 
separate type-annotated functions but there was no change.

module immtest

type Container
array::Array{Float32}
size::Uint32
end

function runtest(reps)
dst = resize!(Float32[], 100_000)
src = resize!(Float32[], 10)
container = Container(dst, 1000)

@time begin
for i=1:reps
# This does not cause allocation
# copy!(dst, container.size+1, src, 1)

# This does
copy!(container.array, container.size+1, src, 1)
end
end
end

runtest(10_000_000)

end
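One likely contributor (an assumption worth testing, not a confirmed diagnosis): the field is declared `array::Array{Float32}`, which omits the dimension and is therefore an abstract type, so `container.array` cannot be inferred concretely. Declaring the concrete `Vector{Float32}` avoids that. A sketch in current syntax (`mutable struct` and `copyto!`; Julia 0.3 spelled these `type` and `copy!`):

```julia
mutable struct Container2
    array::Vector{Float32}   # Array{Float32,1}: fully concrete field type
    size::UInt32
end

function runtest2(reps)
    dst = zeros(Float32, 100_000)
    src = Float32.(1:10)
    c = Container2(dst, UInt32(1000))
    for i = 1:reps
        # c.array now infers as Vector{Float32}, so the access needs no boxing
        copyto!(c.array, c.size + 1, src, 1, length(src))
    end
    return c
end

c = runtest2(100)
```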




On Sunday, December 7, 2014 3:06:52 AM UTC+11, Stefan Karpinski wrote:

 I can reproduce this on 0.3. Looking into it.

 On Sat, Dec 6, 2014 at 7:04 AM, Tim Holy tim@gmail.com wrote:

 Curiously, I don't even see it on my copy of julia 0.3.

 Will, one tip: julia optimizes within functions. Anytime you see something
 weird like this, try to separate allocation and operation into two 
 separate
 functions. That way, the function that's performing lots of computation 
 will
 receive a concrete type as an input, and be well-optimized.

 --Tim

 On Friday, December 05, 2014 06:38:28 PM John Myles White wrote:
  I think this might be a problem with Julia 0.3. I see it on Julia 0.3, 
 but
  not on the development branch for Julia 0.4.
 
   — John
 
  On Dec 5, 2014, at 6:27 PM, Will Dobbie wdo...@gmail.com wrote:
   Hi,
  
   I have a program which copies elements between two arrays of 
 immutables in
   a tight loop. The sizes of the arrays never change. I've been 
 struggling
   to get it to avoid spending a large chunk of its time in the garbage
   collector. I have an example of what I mean below.
  
   With arrays of Int64 I get:
   elapsed time: 0.164429425 seconds (0 bytes allocated)
  
   With arrays of an immutable the same size as Int64 I get:
   elapsed time: 1.421834146 seconds (32000 bytes allocated, 15.97% 
 gc
   time)
  
   My understanding was arrays of immutables should behave like arrays of
   structs in C and not require heap allocation. Is there a way I can
   achieve that? I'm using Julia 0.3.3.
  
   Thanks,
   Will
  
  
  
   module immtest
  
   immutable Vec2
  
   x::Float32
   y::Float32
  
   end
  
   # typealias element_type Vec2   # Results in allocation 
 in the loop
 below
   typealias element_type Int64# Does not cause 
 allocation
  
   function runtest(reps)
  
   dst = resize!(element_type[], 100_000)
   src = resize!(element_type[], 10)
  
   @time begin
  
   for i=1:reps
  
   copy!(dst, 1000, src, 1, length(src))
   # dst[1000:1009] = src  # same 
 performance as above
  
   end
  
   end
  
   end
  
   runtest(10_000_000)
  
   end




[julia-users] set LC_ALL=C using run()

2014-12-07 Thread David Koslicki
Hello,

I am trying to sort a large (4G) file consisting only of strings of length 
50 on the alphabet {A,C,T,G}. While the built in Julia sort() works, it 
uses quite a bit of memory. I have had good success using the linux command 
LC_ALL=C sort larg_file.txt -o out.txt, but get the following error when I 
try to evaluate the command run(`LC_ALL=C sort larg_file.txt -o out.txt`) 
in Julia:

ERROR: could not spawn `LC_ALL=C sort large_file.txt -o out.txt`: no such 
file or directory (ENOENT)
 in _jl_spawn at process.jl:217
 in spawn at process.jl:348
 in spawn at process.jl:389
 in run at process.jl:470

Running the command
export LC_ALL=C in the shell before running Julia works, but I would like 
to be able to do this directly from Julia. Unfortunately, the command 
run(`export LC_ALL=C`) also returns the error:

ERROR: could not spawn `export LC_ALL=C`: no such file or directory (ENOENT)
 in _jl_spawn at process.jl:217
 in spawn at process.jl:348
 in spawn at process.jl:389
 in run at process.jl:470

Does anyone know how to set LC_ALL=C in julia using run()?

Thanks,

~David


[julia-users] Re: Is there a null IOStream?

2014-12-07 Thread Matt Bauman
It'd be nice if the DevNull object 
(http://docs.julialang.org/en/latest/stdlib/base/#Base.DevNull) would work 
for this, but it seems like it only works for command redirection right now:

julia run(`echo Hello` | DevNull)

julia print(DevNull, "Hello")
ERROR: type DevNullStream has no field status
 in isopen at stream.jl:286
 in check_open at stream.jl:293
 in write at stream.jl:730
 in print at ascii.jl:93



On Sunday, December 7, 2014 7:24:53 PM UTC-5, K leo wrote:

 At times I don't want to output anything, so I pass a null IOStream to a 
 function that requires an IOStream.  How to do that? 
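Until that is fixed, a throwaway null sink is only a few lines (a sketch in current syntax; in recent Julia versions `print(devnull, x)` simply works):

```julia
# A minimal IO that discards everything written to it.
struct NullIO <: IO end
Base.write(::NullIO, b::UInt8) = 1   # report one byte written, store nothing

io = NullIO()
print(io, "Hello")      # silently discarded
n = write(io, 0x41)
```

Base's generic unsafe_write falls back to byte-wise write, so the single UInt8 method is enough for print and friends.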



Re: [julia-users] set LC_ALL=C using run()

2014-12-07 Thread Isaiah Norton
You could use setenv:
http://julia.readthedocs.org/en/latest/stdlib/base/#Base.setenv

`run` in Julia executes calls directly (setting up the environment,
interpolating, and spawning subprocess) - not via the shell. So, built-ins
like `export` don't work.

See this blog post for some more information:
http://julialang.org/blog/2012/03/shelling-out-sucks/

(I do think it would be convenient to allow single-shot environment changes
as keyword arguments to `run`. not sure if there is an open issue for this
already, but it feels like something that has come up before)

On Sun, Dec 7, 2014 at 9:48 PM, David Koslicki dmkosli...@gmail.com wrote:

 Hello,

 I am trying to sort a large (4G) file consisting only of strings of length
 50 on the alphabet {A,C,T,G}. While the built-in Julia sort() works, it
 uses quite a bit of memory. I have had good success using the Linux command
 LC_ALL=C sort large_file.txt -o out.txt, but get the following error when I
 try to evaluate the command run(`LC_ALL=C sort large_file.txt -o out.txt`)
 in Julia:

 ERROR: could not spawn `LC_ALL=C sort large_file.txt -o out.txt`: no such
 file or directory (ENOENT)
  in _jl_spawn at process.jl:217
  in spawn at process.jl:348
  in spawn at process.jl:389
  in run at process.jl:470

 Running the command
 export LC_ALL=C in the shell before running Julia works, but I would like
 to be able to do this directly from Julia. Unfortunately, the command
 run(`export LC_ALL=C`) also returns the error:

 ERROR: could not spawn `export LC_ALL=C`: no such file or directory
 (ENOENT)
  in _jl_spawn at process.jl:217
  in spawn at process.jl:348
  in spawn at process.jl:389
  in run at process.jl:470

 Does anyone know how to set LC_ALL=C in julia using run()?

 Thanks,

 ~David
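
For reference, Isaiah's setenv suggestion can be sketched as follows (a
minimal sketch, not a tested recipe: note that setenv replaces the child's
entire environment, so the current ENV is copied first, and the file names
are placeholders from the question):

```julia
# Build the child environment as "VAR=val" strings: the current ENV with
# LC_ALL overridden. setenv replaces the subprocess environment wholesale,
# so we must carry the existing variables over ourselves.
env = [string(k, "=", v) for (k, v) in ENV if k != "LC_ALL"]
push!(env, "LC_ALL=C")

# Run sort with the modified environment (file names are placeholders).
run(setenv(`sort large_file.txt -o out.txt`, env))
```

This leaves the parent Julia process's ENV untouched, which is usually what
you want for a one-shot locale override.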



Re: [julia-users] Re: Is there a null IOStream?

2014-12-07 Thread K Leo

Even being able to check something like the following would be better:

isopen(DevNull)

On 2014年12月08日 11:08, Matt Bauman wrote:
It'd be nice if the DevNull object 
(http://docs.julialang.org/en/latest/stdlib/base/#Base.DevNull) would 
work for this, but it seems like it only works for command redirection 
right now:


|
julia> run(`echo Hello` |> DevNull)

julia> print(DevNull, "Hello")
ERROR: type DevNullStream has no field status
 in isopen at stream.jl:286
 in check_open at stream.jl:293
 in write at stream.jl:730
 in print at ascii.jl:93
|



On Sunday, December 7, 2014 7:24:53 PM UTC-5, K leo wrote:

At times I don't want to output anything, so I pass a null
IOStream to a
function that requires an IOStream.  How to do that?





[julia-users] Missing newline in file output?

2014-12-07 Thread Greg Plowman
Hi

Are newlines missing from the following output to file? Or am I missing 
something?

fileout = open("test.txt", "w")
println(fileout, "Hello")
println(fileout, "World")
close(fileout)

File test.txt contains:
HelloWorld 

and not what I expected:
Hello
World

Cheers, Greg



[julia-users] Re: Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Phil Tomson


On Sunday, December 7, 2014 5:08:45 PM UTC-8, ele...@gmail.com wrote:



 On Monday, December 8, 2014 10:21:52 AM UTC+10, Phil Tomson wrote:

 julia> typeof(-0b111)
 Uint64

 julia> typeof(-7)
 Int64

 julia> typeof(-0x7)
 Uint64

 julia> typeof(-7)
 Int64

 I find this a bit surprising. Why does the base of the number determine 
 signed or unsigned-ness? Is this intentional or possibly a bug?


 This is documented behaviour 
 http://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#integers
  
 based on the heuristic that using hex is mostly in situations where you 
 need unsigned behaviour anyway.


The doc says: 

 This behavior is based on the observation that when one uses *unsigned 
 hex literals* for integer values, one typically is using them to 
 represent a fixed numeric byte sequence, rather than just an integer value.
  


Hmm. In the above cases they were signed hex literals. 


Re: [julia-users] Missing newline in file output?

2014-12-07 Thread K Leo

In my case it works perfectly.
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type help() for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.3 (2014-10-21 20:18 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
|__/   |  x86_64-linux-gnu


On 2014年12月08日 11:37, Greg Plowman wrote:

Hi

Are newlines missing from the following output to file? Or am I 
missing something?


fileout = open("test.txt", "w")
println(fileout, "Hello")
println(fileout, "World")
close(fileout)

File test.txt contains:
HelloWorld

and not what I expected:
Hello
World

Cheers, Greg





Re: [julia-users] Missing newline in file output?

2014-12-07 Thread John Myles White
What platform are you on? What's the hex dump of the file that gets created? 
Are perhaps Unix newlines being used, but you're using something like Notepad?

 -- John

On Dec 7, 2014, at 7:37 PM, Greg Plowman greg.plow...@gmail.com wrote:

 Hi
 
 Are newlines missing from the following output to file? Or am I missing 
 something?
 
 fileout = open("test.txt", "w")
 println(fileout, "Hello")
 println(fileout, "World")
 close(fileout)
 
 File test.txt contains:
 HelloWorld 
 
 and not what I expected:
 Hello
 World
 
 Cheers, Greg
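
One quick way to check John's hypothesis (assuming a Unix shell and the
standard od tool) is to dump the file's raw bytes; a \n after each word means
the newlines are in the file and the viewer is simply not rendering them:

```shell
# Recreate the expected file contents, then dump the raw bytes.
printf 'Hello\nWorld\n' > test.txt
od -c test.txt
# od -c renders each newline byte as \n, so Unix line endings are visible
# even when an editor such as Notepad would show "HelloWorld".
```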
 



Re: [julia-users] Re: Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Elliot Saba
What you're getting confused by is that your literals above are still
unsigned hex literals; but they are being applied to the negation operator,
which doesn't do what you want.  In essence, when you type -0x7, it's
getting parsed as -(0x7):

julia> 0x7
0x07

julia> -0x7
0xfff9

julia> -(int(0x7))
-7

Therefore I'd suggest writing anything you want to express as a signed
integer in decimal.  Note that -0x7 == -int(0x7) == int(-0x7), so
it's not like any information is lost here, it's just the interpretation of
the bits that is different.

On Sun, Dec 7, 2014 at 7:38 PM, Phil Tomson philtom...@gmail.com wrote:



 On Sunday, December 7, 2014 5:08:45 PM UTC-8, ele...@gmail.com wrote:



 On Monday, December 8, 2014 10:21:52 AM UTC+10, Phil Tomson wrote:

 julia typeof(-0b111)
 Uint64

 julia typeof(-7)
 Int64

 julia typeof(-0x7)
 Uint64

 julia typeof(-7)
 Int64

 I find this a bit surprising. Why does the base of the number determine
 signed or unsigned-ness? Is this intentional or possibly a bug?


 This is documented behaviour http://docs.julialang.org/en/
 latest/manual/integers-and-floating-point-numbers/#integers based on the
 heuristic that using hex is mostly in situations where you need unsigned
 behaviour anyway.


 The doc says:

 This behavior is based on the observation that when one uses *unsigned
 hex literals* for integer values, one typically is using them to
 represent a fixed numeric byte sequence, rather than just an integer value.



 Hmm In the above cases they were signed hex literals.



Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Petr Krysl
A bit more optimization:

Amuthan's code: 29 seconds
J FinEALE: 54 seconds
Matlab FinEALE: 810 seconds

Petr


On Sunday, December 7, 2014 1:45:28 PM UTC-8, Petr Krysl wrote:

 Sorry: minor correction.  I mistakenly timed also output to a VTK file for 
 paraview postprocessing.  This being an ASCII file, it takes a few seconds.

 With J FinEALE the timing is now 78 seconds.

 Petr

 On Sunday, December 7, 2014 10:21:51 AM UTC-8, Petr Krysl wrote:


 Amuthan's code: 29 seconds

 J FinEALE: 86 seconds

 FinEALE: 810 seconds




Re: [julia-users] Re: Article on finite element programming in Julia

2014-12-07 Thread Amuthan
Hi Petr: thanks for the note. It is definitely true that making the code
more general to accommodate a larger class of elements and/or PDEs would
add some computational overhead. I wrote that code primarily to assess the
ease with which one could implement a simple FE solver in Julia; the code
is admittedly restrictive in its scope and not very extensible either since
a lot of the details are hard-coded, but that was a deliberate choice. I
had some plans of developing a full fledged FE solver in Julia last year
but dropped the idea since I shifted to atomistic simulation and no longer
work with finite elements. I would still be interested in hearing about how
a general FE solver in Julia performs in comparison to an equivalent FE
software. If you have some pointers on that please drop in a note. Thanks!
Best regards,
Amuthan
On Dec 7, 2014 11:51 PM, Petr Krysl krysl.p...@gmail.com wrote:

 Hello everybody,


 I found Amuthan 's blog a while back, but only about two weeks ago I found
 the time to look seriously at Julia. What I found was very encouraging.


 For a variety of teaching and research purposes I maintain a Matlab FEA
 toolkit called FinEALE. It is about 80,000 lines of code with all the
 examples and tutorials. In the past week I rewrote the bits and pieces that
 allow me to run a comparison with Amuthan 's code. Here are the results:


 For 1000 x 1000 grid (2 million triangles):


 Amuthan's code: 29 seconds


 J FinEALE: 86 seconds


 FinEALE: 810 seconds


 Mind you, we are not doing the same thing in these codes. FinEALE and J
 FinEALE implement code to solve the heat conduction problem with
 arbitrarily anisotropic materials. The calculation of the FE space is also
 not vectorized as in Amuthan's code. The code is written to be legible and
 general: the same code that calculates the matrices and vectors for a
 triangle mesh would also work for quadrilaterals, linear and quadratic,
 both in the pure 2-D and the axially symmetric set up, and tetrahedral and
 hexahedral elements in 3-D. There is obviously a price to pay for all this
 generality.


 Concerning Amuthan 's comparison with the two compiled FEA codes: it
 really depends how the problem is set up for those codes. I believe that
 Fenics has a form compiler which can spit out an optimized code that in
 this case would be entirely equivalent to the simplified calculation
 (isotropic material with conductivity equal to 1.0), and linear triangles.
 I'm not sure about freefem++, but since it has a domain-specific language,
 it can also presumably optimize its operations. So in my opinion it is
 rather impressive that Amuthan 's code in Julia can do so well.


 Petr




Re: [julia-users] Avoiding allocation when writing to arrays of immutables

2014-12-07 Thread Stefan Karpinski
I did spend some time poking at this but I couldn't figure out what the
problem is on 0.3. You may want to file an issue and you may get the
attention of some other people who may figure it out faster than me.

On Sun, Dec 7, 2014 at 9:47 PM, Will Dobbie wdob...@gmail.com wrote:

 I tested it on 0.4-dev+1310 and it fixed the allocations in the example
 above. But my real program was still allocating at the same spot. It seems
 to be a different issue unrelated to immutables. I've boiled it down to the
 example below. It's a little contrived as my real aim is to write to memory
 not allocated by Julia. It seems to be something to do with accessing
 arrays directly versus through a field? I've tried splitting it up into
 separate type-annotated functions but there was no change.

 module immtest

 type Container
 array::Array{Float32}
 size::Uint32
 end

 function runtest(reps)
 dst = resize!(Float32[], 100_000)
 src = resize!(Float32[], 10)
 container = Container(dst, 1000)

 @time begin
 for i=1:reps
 # This does not cause allocation
 # copy!(dst, container.size+1, src, 1)

 # This does
 copy!(container.array, container.size+1, src, 1)
 end
 end
 end

 runtest(10_000_000)

 end




 On Sunday, December 7, 2014 3:06:52 AM UTC+11, Stefan Karpinski wrote:

 I can reproduce this on 0.3. Looking into it.

 On Sat, Dec 6, 2014 at 7:04 AM, Tim Holy tim@gmail.com wrote:

 Curiously, I don't even see it on my copy of julia 0.3.

 Will, one tip: julia optimizes within functions. Anytime you see
 something
 weird like this, try to separate allocation and operation into two
 separate
 functions. That way, the function that's performing lots of computation
 will
 receive a concrete type as an input, and be well-optimized.

 --Tim

 On Friday, December 05, 2014 06:38:28 PM John Myles White wrote:
  I think this might be a problem with Julia 0.3. I see it on Julia 0.3,
 but
  not on the development branch for Julia 0.4.
 
   — John
 
  On Dec 5, 2014, at 6:27 PM, Will Dobbie wdo...@gmail.com wrote:
   Hi,
  
   I have a program which copies elements between two arrays of
 immutables in
   a tight loop. The sizes of the arrays never change. I've been
 struggling
   to get it to avoid spending a large chunk of its time in the garbage
   collector. I have an example of what I mean below.
  
   With arrays of Int64 I get:
   elapsed time: 0.164429425 seconds (0 bytes allocated)
  
   With arrays of an immutable the same size as Int64 I get:
   elapsed time: 1.421834146 seconds (32000 bytes allocated, 15.97%
 gc
   time)
  
   My understanding was arrays of immutables should behave like arrays
 of
   structs in C and not require heap allocation. Is there a way I can
   achieve that? I'm using Julia 0.3.3.
  
   Thanks,
   Will
  
  
  
    module immtest

    immutable Vec2
        x::Float32
        y::Float32
    end

    # typealias element_type Vec2   # Results in allocation in the loop below
    typealias element_type Int64    # Does not cause allocation

    function runtest(reps)
        dst = resize!(element_type[], 100_000)
        src = resize!(element_type[], 10)

        @time begin
            for i=1:reps
                copy!(dst, 1000, src, 1, length(src))
                # dst[1000:1009] = src  # same performance as above
            end
        end
    end

    runtest(10_000_000)

    end





Re: [julia-users] Re: Is there a null IOStream?

2014-12-07 Thread Stefan Karpinski
cc:ing Keno and Jameson as the authors of DevNull.

On Sun, Dec 7, 2014 at 10:28 PM, K Leo cnbiz...@gmail.com wrote:

 Even if I could check something like the following is better:

 isopen(DevNull)


 On 2014年12月08日 11:08, Matt Bauman wrote:

 It'd be nice if the DevNull object (http://docs.julialang.org/en/
 latest/stdlib/base/#Base.DevNull) would work for this, but it seems like
 it only works for command redirection right now:

 |
  julia> run(`echo Hello` |> DevNull)

  julia> print(DevNull, "Hello")
 ERROR: type DevNullStream has no field status
  in isopen at stream.jl:286
  in check_open at stream.jl:293
  in write at stream.jl:730
  in print at ascii.jl:93
 |



 On Sunday, December 7, 2014 7:24:53 PM UTC-5, K leo wrote:

 At times I don't want to output anything, so I pass a null
 IOStream to a
 function that requires an IOStream.  How to do that?





Re: [julia-users] Re: Is there a null IOStream?

2014-12-07 Thread Keno Fischer
I don't see a good reason for DevNull not to behave like this.

On Sun, Dec 7, 2014 at 11:39 PM, Stefan Karpinski ste...@karpinski.org
wrote:

 cc:ing Keno and Jameson as the authors of DevNull.

 On Sun, Dec 7, 2014 at 10:28 PM, K Leo cnbiz...@gmail.com wrote:

 Even if I could check something like the following is better:

 isopen(DevNull)


 On 2014年12月08日 11:08, Matt Bauman wrote:

 It'd be nice if the DevNull object (http://docs.julialang.org/en/
 latest/stdlib/base/#Base.DevNull) would work for this, but it seems
 like it only works for command redirection right now:

 |
  julia> run(`echo Hello` |> DevNull)

  julia> print(DevNull, "Hello")
 ERROR: type DevNullStream has no field status
  in isopen at stream.jl:286
  in check_open at stream.jl:293
  in write at stream.jl:730
  in print at ascii.jl:93
 |



 On Sunday, December 7, 2014 7:24:53 PM UTC-5, K leo wrote:

 At times I don't want to output anything, so I pass a null
 IOStream to a
 function that requires an IOStream.  How to do that?
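
Until DevNull behaves like a writable stream, one workaround is a tiny
custom sink. This is a hypothetical sketch in Julia 0.3 syntax (NullIO is
not part of Base): an IO subtype whose byte-level write discards everything,
relying on the generic print/println fallbacks to route through it.

```julia
# Hypothetical null sink: discard every byte, reporting one byte "written".
type NullIO <: IO end
Base.write(io::NullIO, b::Uint8) = 1

const nullio = NullIO()

println(nullio, "this output is discarded")   # no error, nothing printed
```

A function taking an IO argument can then be passed nullio to suppress its
output without any special-casing inside the function.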






Re: [julia-users] Re: Why is typeof hex or binary number Uint64, while typeof decimal number is Int64?

2014-12-07 Thread Stefan Karpinski
http://stackoverflow.com/questions/27349517/why-is-typeof-hex-or-binary-number-uint64-while-type-of-decimal-number-is-int64

Although I recall now that there was already a rationale for this behavior
in the manual where it is described.

On Sun, Dec 7, 2014 at 10:44 PM, Elliot Saba staticfl...@gmail.com wrote:

 What you're getting confused by is that your literals above are still
 unsigned hex literals; but they are being applied to the negation operator,
 which doesn't do what you want.  In essence, when you type -0x7, it's
 getting parsed as -(0x7):

  julia> 0x7
  0x07

  julia> -0x7
  0xfff9

  julia> -(int(0x7))
  -7

 Therefore I'd suggest explicitly making anything you want to express as a
 signed integer as decimal.  Note that -0x7 == -int(0x7) == int(-0x7), so
 it's not like any information is lost here, it's just the interpretation of
 the bits that is different.

 On Sun, Dec 7, 2014 at 7:38 PM, Phil Tomson philtom...@gmail.com wrote:



 On Sunday, December 7, 2014 5:08:45 PM UTC-8, ele...@gmail.com wrote:



 On Monday, December 8, 2014 10:21:52 AM UTC+10, Phil Tomson wrote:

 julia typeof(-0b111)
 Uint64

 julia typeof(-7)
 Int64

 julia typeof(-0x7)
 Uint64

 julia typeof(-7)
 Int64

 I find this a bit surprising. Why does the base of the number determine
 signed or unsigned-ness? Is this intentional or possibly a bug?


 This is documented behaviour http://docs.julialang.org/en/
 latest/manual/integers-and-floating-point-numbers/#integers based on
 the heuristic that using hex is mostly in situations where you need
 unsigned behaviour anyway.


 The doc says:

 This behavior is based on the observation that when one uses *unsigned
 hex literals* for integer values, one typically is using them to
 represent a fixed numeric byte sequence, rather than just an integer value.



 Hmm In the above cases they were signed hex literals.





Re: [julia-users] Avoiding allocation when writing to arrays of immutables

2014-12-07 Thread Will Dobbie
Thanks for taking a look! I opened an issue 
here https://github.com/JuliaLang/julia/issues/9272

On Monday, December 8, 2014 3:36:05 PM UTC+11, Stefan Karpinski wrote:

 I did spend some time poking at this but I couldn't figure out what the 
 problem is on 0.3. You may want to file an issue and you may get the 
 attention of some other people who may figure it out faster than me.

 On Sun, Dec 7, 2014 at 9:47 PM, Will Dobbie wdo...@gmail.com 
 javascript: wrote:

 I tested it on 0.4-dev+1310 and it fixed the allocations in the example 
 above. But my real program was still allocating at the same spot. It seems 
 to be a different issue unrelated to immutables. I've boiled it down to the 
 example below. It's a little contrived as my real aim is to write to memory 
 not allocated by Julia. It seems to be something to do with accessing 
 arrays directly versus through a field? I've tried splitting it up into 
 separate type-annotated functions but there was no change.

 module immtest

 type Container
 array::Array{Float32}
 size::Uint32
 end

 function runtest(reps)
 dst = resize!(Float32[], 100_000)
 src = resize!(Float32[], 10)
 container = Container(dst, 1000)

 @time begin
 for i=1:reps
 # This does not cause allocation
 # copy!(dst, container.size+1, src, 1)

 # This does
 copy!(container.array, container.size+1, src, 1)
 end
 end
 end

 runtest(10_000_000)

 end




 On Sunday, December 7, 2014 3:06:52 AM UTC+11, Stefan Karpinski wrote:

 I can reproduce this on 0.3. Looking into it.

 On Sat, Dec 6, 2014 at 7:04 AM, Tim Holy tim@gmail.com wrote:

 Curiously, I don't even see it on my copy of julia 0.3.

 Will, one tip: julia optimizes within functions. Anytime you see 
 something
 weird like this, try to separate allocation and operation into two 
 separate
 functions. That way, the function that's performing lots of computation 
 will
 receive a concrete type as an input, and be well-optimized.

 --Tim

 On Friday, December 05, 2014 06:38:28 PM John Myles White wrote:
  I think this might be a problem with Julia 0.3. I see it on Julia 
 0.3, but
  not on the development branch for Julia 0.4.
 
   — John
 
  On Dec 5, 2014, at 6:27 PM, Will Dobbie wdo...@gmail.com wrote:
   Hi,
  
   I have a program which copies elements between two arrays of 
 immutables in
   a tight loop. The sizes of the arrays never change. I've been 
 struggling
   to get it to avoid spending a large chunk of its time in the garbage
   collector. I have an example of what I mean below.
  
   With arrays of Int64 I get:
   elapsed time: 0.164429425 seconds (0 bytes allocated)
  
   With arrays of an immutable the same size as Int64 I get:
   elapsed time: 1.421834146 seconds (32000 bytes allocated, 
 15.97% gc
   time)
  
   My understanding was arrays of immutables should behave like arrays 
 of
   structs in C and not require heap allocation. Is there a way I can
   achieve that? I'm using Julia 0.3.3.
  
   Thanks,
   Will
  
  
  
    module immtest

    immutable Vec2
        x::Float32
        y::Float32
    end

    # typealias element_type Vec2   # Results in allocation in the loop below
    typealias element_type Int64    # Does not cause allocation

    function runtest(reps)
        dst = resize!(element_type[], 100_000)
        src = resize!(element_type[], 10)

        @time begin
            for i=1:reps
                copy!(dst, 1000, src, 1, length(src))
                # dst[1000:1009] = src  # same performance as above
            end
        end
    end

    runtest(10_000_000)

    end
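
For readers following along, two things are worth noting about Will's repro
(hedged observations, not a confirmed diagnosis): the array field is declared
with the abstract type Array{Float32} (no dimensionality), so container.array
cannot be inferred concretely, and hoisting the field accesses into locals
before the hot loop often sidesteps this kind of allocation. A hypothetical
sketch in Julia 0.3 syntax:

```julia
# Same shape as the Container in the thread, but with a concrete field type
# (Array{Float32,1} instead of the abstract Array{Float32}).
type Container2
    array::Array{Float32,1}
    size::Uint32
end

function runtest2(container::Container2, src::Array{Float32,1}, reps)
    arr = container.array      # hoist field lookups out of the hot loop
    off = container.size + 1
    @time for i = 1:reps
        copy!(arr, off, src, 1, length(src))
    end
end

runtest2(Container2(zeros(Float32, 100_000), uint32(1000)),
         zeros(Float32, 10), 10_000_000)
```

With both the concrete field type and the hoisted locals, copy! sees
concretely typed arguments, which is what Tim's "separate functions" tip
achieves as well.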





[julia-users] [WIP] CSVReaders.jl

2014-12-07 Thread John Myles White
Over the last month or so, I've been slowly working on a new library that 
defines an abstract toolkit for writing CSV parsers. The goal is to provide an 
abstract interface that users can implement in order to provide functions for 
reading data into their preferred data structures from CSV files. In principle, 
this approach should allow us to unify the code behind Base's readcsv and 
DataFrames's readtable functions.

The library is still very much a work-in-progress, but I wanted to let others 
see what I've done so that I can start getting feedback on the design.

Because the library makes heavy use of Nullables, you can only try out the 
library on Julia 0.4. If you're interested, it's available at 
https://github.com/johnmyleswhite/CSVReaders.jl

For now, I've intentionally given very sparse documentation to discourage 
people from seriously using the library before it's officially released. But 
there are some examples in the README that should make clear how the library is 
intended to be used.

 -- John



[julia-users] Difference between {T:AbstractType} and just x::AbstractType

2014-12-07 Thread Igor Demura
What exactly is the difference between:
foo{T<:AbstractType}(x::T) = ...
and 
foo(x::AbstractType) = ... ?
Is there any difference at all? The tips section of the docs says the second is 
preferred. If they are the same, why does the first syntax exist? I can imagine 
that it helps if I have several parameters:
bar{T<:AbstractType}(x::Concrete{T}, y::AnotherOf{T}, z::T) = ...
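
For a single argument there is no performance difference: Julia specializes
on the concrete runtime type either way. The parametric form matters when
the type variable must be shared between arguments or used in the body.
A hedged sketch (Julia 0.3 method syntax; Number stands in for any abstract
type):

```julia
# Single argument: the two spellings accept exactly the same calls.
f1(x::Number) = x + one(x)
f2{T<:Number}(x::T) = x + one(T)   # T is usable in the body

# Shared type variable: only the parametric form can demand that both
# arguments have the SAME concrete type.
g{T<:Number}(x::T, y::T) = x + y

g(1, 2)       # ok: both arguments are Int
# g(1, 2.0)   # MethodError: no method matching g(::Int64, ::Float64)
```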



[julia-users] .travis.yml that tests against Julia v0.3 and v0.4

2014-12-07 Thread John Zuhone
Hi all,

Is anyone doing testing on Travis that tests against both v0.3 (the current 
stable, e.g. v0.3.3 at the moment) and v0.4 (the nightly)? If so, can you 
show an example .travis.yml?

Thanks,

John Z


[julia-users] Re: .travis.yml that tests against Julia v0.3 and v0.4

2014-12-07 Thread John Zuhone
I think I have answered my own question--found an example here:

https://github.com/JuliaWeb/HttpCommon.jl/blob/master/.travis.yml

On Monday, December 8, 2014 2:28:38 AM UTC-5, John Zuhone wrote:

 Hi all,

 Is anyone doing testing on Travis that tests against both v0.3 (the 
 current stable, e.g. v.0.3.3 at the moment) and v0.4 (the nightly)? If so, 
 can you show an example .travis.yml?

 Thanks,

 John Z
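
For archive readers: the linked file follows the standard Travis recipe of
that era, building Julia from the staticfloat PPAs with an env matrix that
covers releases (v0.3.x) and nightlies (v0.4-dev). A hedged sketch (the
package name MyPackage is a placeholder; PPA names are as used by the linked
example):

```yaml
language: cpp
compiler: clang
env:
  matrix:
    - JULIAVERSION="juliareleases"    # current stable, v0.3.x
    - JULIAVERSION="julianightlies"   # v0.4-dev
notifications:
  email: false
before_install:
  - sudo add-apt-repository ppa:staticfloat/julia-deps -y
  - sudo add-apt-repository ppa:staticfloat/${JULIAVERSION} -y
  - sudo apt-get update -qq -y
  - sudo apt-get install julia -y
script:
  - julia -e 'Pkg.init(); Pkg.clone(pwd()); Pkg.test("MyPackage")'
```

Each entry in the env matrix triggers a separate build, so one push tests
the package against both Julia versions.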