[julia-users] Re: @inbounds macro scope, and @simd

2016-08-27 Thread Blake Johnson
Yes, if you change the first line to

@inbounds for site in 1:nsites

Then that will declare everything inside that outer loop as being in 
bounds. If you'd like to make a more restricted declaration, then you 
should put it on specific lines inside the loop.
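
For example (a sketch using the loop from the question below; iscompatible
and the arrays are assumed to be defined as in that code):

@inbounds for site in 1:nsites       # everything inside skips bounds checks,
    votes[1] = votes[2] = votes[3] = votes[4] = votes[5] = 0
    for seq in 1:neqs                # ...including this inner loop
        nuc = mat[seq, site]
        votes[1] += iscompatible(nuc, DNA_A)
        # etc.
    end
end

versus the restricted form, where only the marked statement is unchecked:

for site in 1:nsites
    votes[1] = votes[2] = votes[3] = votes[4] = votes[5] = 0
    for seq in 1:neqs
        @inbounds nuc = mat[seq, site]   # only this access skips the check
        # etc.
    end
end

As for the @simd question: that annotation goes on the inner loop
(@inbounds @simd for seq in 1:neqs); whether it actually vectorizes is up to
the compiler, so benchmark both ways.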

On Friday, August 26, 2016 at 1:43:08 PM UTC-4, Ben Ward wrote:
>
> Hi, just wondering: I have the following double loop:
>
> for site in 1:nsites
>     votes[1] = votes[2] = votes[3] = votes[4] = votes[5] = 0
>     for seq in 1:neqs
>         nuc = mat[seq, site]
>         votes[1] += iscompatible(nuc, DNA_A)
>         votes[2] += iscompatible(nuc, DNA_C)
>         votes[3] += iscompatible(nuc, DNA_G)
>         votes[4] += iscompatible(nuc, DNA_T)
>         votes[5] += iscompatible(nuc, DNA_Gap)
>     end
> end
>
> Say I add an @inbounds macro to the outer loop to eliminate bounds checks; 
> will its effects extend to the indexing statements in the inner loop? 
> Inspecting the macro expansion, I believe it does, as an inbounds flag is 
> set to true and then popped after the loop nest. But I'm not 100% sure 
> that is indeed how it works:
>
> quote  # REPL[63], line 2:
>     begin
>         $(Expr(:inbounds, true))
>         for site = 1:nsites # REPL[63], line 3:
>             votes[1] = (votes[2] = (votes[3] = (votes[4] = (votes[5] = 0)))) # REPL[63], line 4:
>             for seq = 1:neqs # REPL[63], line 5:
>                 nuc = mat[seq,site] # REPL[63], line 6:
>                 votes[1] += iscompatible(nuc,DNA_A) # REPL[63], line 7:
>                 votes[2] += iscompatible(nuc,DNA_C) # REPL[63], line 8:
>                 votes[3] += iscompatible(nuc,DNA_G) # REPL[63], line 9:
>                 votes[4] += iscompatible(nuc,DNA_T) # REPL[63], line 10:
>                 votes[5] += iscompatible(nuc,DNA_Gap)
>             end
>         end
>         $(Expr(:inbounds, :pop))
>     end
> end
>
> I'd also like someone's opinion. Will I benefit from @simd on the inner 
> loop?
>
> The function `iscompatible` is annotated with @inline, and has no 
> branching:
>
> @inline function iscompatible{T<:Nucleotide}(x::T, y::T)
>     return compatbits(x) & compatbits(y) != 0
> end
>
> # Return the compatibility bits of `nt`.
> @inline function compatbits(nt::Nucleotide)
>     return reinterpret(UInt8, nt)
> end
>
> As per the assumptions of an @simd loop I read in the docs, each iteration 
> is independent and order does not matter.
> I'd just like some advice, as if I'm right this will be the first time I 
> use an @simd loop to speed up my code. 
>
> Thanks,
> Ben.
>


[julia-users] Re: 0.5: is @inbounds now overridable?

2016-08-16 Thread Blake Johnson
Yes, for details see:

http://docs.julialang.org/en/latest/devdocs/boundscheck/

For many custom array-like types, it will be sufficient to define 
indices(A::MyArray) = ...
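
If you want bounds-check elision for your own getindex, the pattern from
those docs looks roughly like this (a minimal sketch in 0.5-era syntax;
MyArray and its data field are illustrative):

immutable MyArray{T} <: AbstractArray{T,1}
    data::Vector{T}
end
Base.size(A::MyArray) = size(A.data)

# The check is wrapped in @boundscheck so that a caller's @inbounds
# (one level of inlining deep) can remove it.
@inline function Base.getindex(A::MyArray, i::Int)
    @boundscheck checkbounds(A, i)
    @inbounds return A.data[i]
end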

--Blake

On Tuesday, August 16, 2016 at 9:45:39 PM UTC-4, Sheehan Olver wrote:
>
>
> Is it possible now to override @inbounds for custom AbstractArray subtypes?
>


[julia-users] Re: What do @_inline_meta and @_propagate_inbounds_meta do?

2016-05-31 Thread Blake Johnson
Those macros are used in abstractarray.jl because at the point in the 
bootstrapping process where that file is loaded, we do not yet have access 
to @inbounds and @propagate_inbounds.

A meta node is a way to pass information about a method to the compiler.
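
Concretely, @_inline_meta splices in the same meta expression that @inline
would otherwise produce, roughly this (a sketch based on Base internals of
the time, so treat the details as an assumption):

function f(x)
    Base.@_inline_meta    # inserts Expr(:meta, :inline) into the method body
    return x + 1
end

which has the same effect as annotating the definition with @inline.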

--Blake

On Tuesday, May 31, 2016 at 5:42:28 PM UTC-4, Davide Lasagna wrote:
>
> Thanks Kristoffer. AFAIU these are not supposed to be used in user code. 
> However, the follow up question is what is a meta node?
>
> On Monday, May 30, 2016 at 5:21:02 PM UTC+1, Kristoffer Carlsson wrote:
>>
>> I believe they insert the corresponding Meta node into the AST before the 
>> actual macros (like @inline) have been defined.
>
>

[julia-users] git protocol packages

2016-03-28 Thread Blake Johnson
Is there a way to still support git protocol (as opposed to https) packages 
with the new libgit2 based package system? I have a fair number of private 
packages on a local server, and it sure would be nice to be able to fetch 
those with SSH key authentication.


Re: [julia-users] What's the "correct" way to handle optional bounds-checking?

2016-01-25 Thread Blake Johnson
In 0.5 you can write this as:

function getindex(A::MyCustomArray, x)
    ix = round(Int, x)
    @boundscheck ix = clamp(ix, 1, length(A))
    ...
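
At the call site, that gives you (a sketch):

v = A[x]              # ix is clamped into range as usual
@inbounds v = A[x]    # the @boundscheck block, and hence the clamp, is
                      # elided (provided getindex is inlined into the caller)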

On Monday, January 25, 2016 at 2:13:41 PM UTC-5, Tomas Lycken wrote:
>
> Interesting read, thanks!
>
> I guess I'll hold off implementing this until 0.5 lands, then.
>
> // T
>
> On Monday, January 25, 2016 at 7:56:15 PM UTC+1, Yichao Yu wrote:
>>
>> On Mon, Jan 25, 2016 at 1:49 PM, Tomas Lycken  
>> wrote: 
>> > On e.g. Arrays, I can index with @inbounds to avoid bounds checking. In my 
>> > own custom type, which also implements getindex, what is the correct way of 
>> > leveraging inbounds? 
>> > 
>> > For example, I have code now that looks something like this: 
>> > 
>> > function getindex(A::MyCustomArray, x) 
>> >     ix = clamp(round(Int, x), 1, length(A)) # yes, clamp makes sense in this case 
>> >     ... 
>> > 
>> > but if the caller has specified @inbounds, I want to avoid the clamp and 
>> > just set ix = round(Int, x). What would be the correct way to express that? 
>>
>> No automatic way to do this on 0.4 AFAIK. For 0.5-dev see 
>> https://github.com/JuliaLang/julia/pull/14474 
>>
>> > 
>> > // T 
>>
>

[julia-users] Re: A question of Style: Iterators into regular Arrays

2015-10-21 Thread Blake Johnson
It doesn't directly answer your question, but one thought is to not force 
many of these ranges to become vectors. For instance, the line
t = t0 + [0:Ns-1;]*Ts;

could also have been written
t = t0 + (0:Ns-1)*Ts;

and that would still be valid input to functions like sin, cos, exp, etc...
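
In other words, keep the lazy range through the computation and collect only
at the boundary where an actual Array is required, e.g. (a sketch):

t = t0 + (0:Ns-1)*Ts   # stays a range; no vector is allocated
y = cos(wd*t)          # 0.4-era vectorized math accepts ranges
v = collect(t)         # materialize only where an Array is truly needed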


On Wednesday, October 21, 2015 at 3:11:44 AM UTC-4, Gabriel Gellner wrote:
>
> I find the way that you need to use `linspace` and `range` objects a bit 
> jarring when you want to write vectorized code, or when I want to pass 
> an array to a function that requires an Array. I get how nice the iterators 
> are when writing loops, and that you can use `collect(iter)` to get an array 
> (and that it is possible to write polymorphic code that takes LinSpace 
> types and uses them like Arrays … but this hurts my small brain). But I 
> find that I often want to write code that uses an actual array, and having 
> to use `collect` all the time seems like a serious wart for an otherwise 
> stunning language for science. 
> (https://github.com/JuliaLang/julia/issues/9637 gives the evolution of 
> thinking on making these iterators.)
>
>  
>
> For example recently the following code was posted/refined on this mailing 
> list:
>
>  
>
> function Jakes_Flat( fd, Ts, Ns, t0 = 0, E0 = 1, phi_N = 0 )
>   # Inputs:
>   #
>   # Outputs:
>   N0  = 8;              # As suggested by Jakes
>   N   = 4*N0+2;         # An accurate approximation
>   wd  = 2*pi*fd;        # Maximum Doppler frequency
>   t   = t0 + [0:Ns-1;]*Ts;
>   tf  = t[end] + Ts;
>   coswt = [ sqrt(2)*cos(wd*t'); 2*cos(wd*cos(2*pi/N*[1:N0;])*t') ]
>   temp = zeros(1,N0+1)
>   temp[1,2:end] = pi/(N0+1)*[1:N0;]'
>   temp[1,1] = phi_N
>   h = E0/sqrt(2*N0+1)*exp(im*temp) * coswt
>   return h, tf;
> end
>
>  
>
> From  
>
>  
>
> Notice all the horrible [;] notations used to make these arrays … and it 
> seems like the devs want to get rid of this notation as well (which they 
> should; it is way too subtle in my opinion). So imagine the above code with 
> `collect` statements. Is this the way people work? I find the `collect` 
> statements in mathematical expressions really break me out of the 
> abstraction (that I am just writing math).
>
>  
>
> I get that this could be written as an explicit loop, and this would 
> likely make it faster as well (man, I love looping in Julia). That being 
> said, in this case I don't find the vectorized version a performance issue; 
> rather, I prefer how it reads, as it feels closer to the math to me. 
>
>  
>
> So my question: what is the Julian way of making explicit arrays using 
> either `range (:)` or `linspace`? Is it to pollute everything with 
> `collect`? Would it be worth having versions of linspace that return an 
> actual array? (something like alinspace or whatnot)
>
>
> Thanks for any tips, comments etc
>


[julia-users] Re: ANN: NBInclude.jl --- include() for IJulia notebooks

2015-10-12 Thread Blake Johnson
That sounds incredibly useful. Thanks for doing it!

On Monday, October 12, 2015 at 9:29:19 PM UTC-4, Steven G. Johnson wrote:
>
> NBInclude is a new registered package (
> https://github.com/stevengj/NBInclude.jl).  It is a drop-in replacement 
> for include(path) that allows you to execute IJulia notebook files rather 
> than .jl files.  That is, just do:
>
> using NBInclude
> nbinclude("mynotebook.ipynb")
>
> and it will be just as if you had called include("myfile.jl") on a Julia 
> file consisting of the code in that notebook.
>
> This allows you to develop code in a notebook, interspersed with formatted 
> comments, and re-use it easily, including within modules (where it works 
> with precompilation and relative paths, just like include).
>
> --SGJ
>


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-11 Thread Blake Johnson
Thanks for posting these, Pablo. For my most frequent use case I care about 
n = 3, but I suppose the O(n) algorithms would be more appropriate in Base.
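
For concreteness, an O(n) version via cycle decomposition might look like
this (an illustrative sketch, not Pablo's code; it returns 0 for
non-permutations):

function levicivita_sketch(perm::Vector{Int})
    n = length(perm)
    visited = falses(n)
    sign = 1
    for i in 1:n
        visited[i] && continue
        # Walk the cycle containing i; an even-length cycle is an odd
        # permutation, so it flips the sign.
        len, j = 0, i
        while !visited[j]
            visited[j] = true
            p = perm[j]
            (1 <= p <= n) || return 0   # out-of-range entry: not a permutation
            j = p
            len += 1
        end
        j == i || return 0              # repeated entry: not a permutation
        iseven(len) && (sign = -sign)
    end
    return sign
end

levicivita_sketch([1, 2, 3])  #  1 (identity is even)
levicivita_sketch([2, 1, 3])  # -1 (one transposition is odd)
levicivita_sketch([1, 1, 3])  #  0 (not a permutation)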

You are also correct that sign(::AbstractVector) currently does an 
element-wise sign(). I didn't realize before writing my post that 
combinatorics.jl defines Permutations and Combinations types, but not the 
singular equivalents. So, that still leaves the naming issue unresolved.

Pablo, would you mind opening an issue or pull request to continue the 
discussion?

--Blake

On Wednesday, February 11, 2015 at 3:11:27 PM UTC-5, Pablo Zubieta wrote:
>
> Hi again,
>
> There were some bugs in my implementations. I updated the gist 
> with the corrected versions and added a simpler-looking function (but of 
> O(n²) running time).
>
> I did some tests and found (with my slow processor) that for permutations 
> of length <= 5 the quadratic implementation (levicivita_simple) performs 
> as fast as levicivita_inplace_check. For lengths from 5 to 15, 
> levicivita_inplace_check is the fastest, followed by levicivita_simple. 
> For lengths from 15 to 25, levicivita_simple and levicivita perform the 
> same (but slower than levicivita_inplace_check). For more than 25 elements, 
> levicivita_inplace_check is always the fastest, 2x faster than levicivita 
> and n times faster than levicivita_simple.
>
> For people wanting the 3D Levi-Civita tensor, levicivita_simple and 
> levicivita_inplace_check should be the same. For people wanting the parity 
> of a long permutation, levicivita_inplace_check should work the best.
>
> Greetings!
>


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-09 Thread Blake Johnson
Some possibilities: levicivita, parity, sign, signature, or sgn.

"sign" is already an exported method, but sign(::Permutation) is not 
defined, yet.

On Monday, February 9, 2015 at 6:18:42 PM UTC-5, Stefan Karpinski wrote:
>
> I agree that it's useful but what should it be called?
>
> On Mon, Feb 9, 2015 at 2:45 PM, Jiahao Chen > 
> wrote:
>
>> > But, if it is there, I haven't found it yet
>>
>> Me neither. I think Levi-Civita would be useful to have in Base if you'd 
>> like to write an implementation.
>>
>
>

[julia-users] Levi-Civita symbol/tensor

2015-02-09 Thread Blake Johnson
I keep writing code that needs the Levi-Civita symbol, i.e.:

\epsilon_{ijk} = { 1 if (i,j,k) is an even permutation, -1 if (i,j,k) is an 
odd permutation, otherwise 0 }

It is used frequently in physics, so I keep expecting it to be in Julia's 
stdlib, perhaps under a different name (in Mathematica, it is called 
Signature[]). But, if it is there, I haven't found it yet. Is it there and 
I just can't find it?


Re: [julia-users] Re: Why was REPL cursor moved to the beginning of the line?

2014-12-16 Thread Blake Johnson
As I was the person that broke it, I figured I should fix it.

For those that have previously edited their default keymap to restore the 
0.3 behavior, please give the new defaults a try. I think we have arrived 
at a behavior that should appease most people.

--Blake

On Tuesday, December 16, 2014 7:16:33 AM UTC-5, Mike Innes wrote:
>
> You're right, that's a huge improvement. Kudos to Blake Johnson.
>
> On 16 December 2014 at 11:32, Harold Cavendish  > wrote:
>>
>> It was commit b5f94b6 (5 days old master) and you're right, the newest 
>> version is different, and as far as I'm concerned splendid!
>>
>> Thanks for the help.
>>
>

Re: [julia-users] Broadcasting variables

2014-11-24 Thread Blake Johnson
I use this macro to send variables to remote processes:

macro sendvar(proc, x)
    quote
        rr = RemoteRef()
        put!(rr, $x)
        remotecall($proc, (rr)->begin
            global $(esc(x))
            $(esc(x)) = fetch(rr)
        end, rr)
    end
end
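
Hypothetical usage (assuming a worker process with id 2 exists; RemoteRef
and this remotecall signature are 0.3/0.4-era):

x = rand(10)
@sendvar 2 x    # worker 2 ends up with a global x holding a copy of the data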

Though the solution above looks a little simpler.

--Blake

On Sunday, November 23, 2014 1:30:49 AM UTC-5, Amit Murthy wrote:
>
> From the description of Base.localize_vars - 'wrap an expression in "let 
> a=a,b=b,..." for each var it references'
>
> Though that does not seem to be the only(?) issue here 
>
> On Sun, Nov 23, 2014 at 11:52 AM, Madeleine Udell  > wrote:
>
>> Thanks! This is extremely helpful. 
>>
>> Can you tell me more about what localize_vars does?
>>
>> On Sat, Nov 22, 2014 at 9:11 PM, Amit Murthy > > wrote:
>>
>>> This works:
>>>
>>> function doparallelstuff(m = 10, n = 20)
>>>     # initialize variables
>>>     localX = Base.shmem_rand(m; pids=procs())
>>>     localY = Base.shmem_rand(n; pids=procs())
>>>     localf = [x->i+sum(x) for i=1:m]
>>>     localg = [x->i+sum(x) for i=1:n]
>>>
>>>     # broadcast variables to all worker processes
>>>     @sync begin
>>>         for i in procs(localX)
>>>             remotecall(i, x->(global X; X=x; nothing), localX)
>>>             remotecall(i, x->(global Y; Y=x; nothing), localY)
>>>             remotecall(i, x->(global f; f=x; nothing), localf)
>>>             remotecall(i, x->(global g; g=x; nothing), localg)
>>>         end
>>>     end
>>>
>>>     # compute
>>>     for iteration=1:1
>>>         @everywhere for i=localindexes(X)
>>>             X[i] = f[i](Y)
>>>         end
>>>         @everywhere for j=localindexes(Y)
>>>             Y[j] = g[j](X)
>>>         end
>>>     end
>>> end
>>>
>>> doparallelstuff()
>>>
>>> Though I would have expected broadcast of variables to be possible with just
>>> @everywhere X = localX
>>> and so on.
>>>
>>> Looks like @everywhere does not call localize_vars. I don't know if 
>>> this is by design or just an oversight. I would have expected it to do so. 
>>> Will file an issue on github.
>>>
>>>
>>>
>>> On Sun, Nov 23, 2014 at 8:24 AM, Madeleine Udell >> > wrote:
>>>
 The code block I posted before works, but throws an error when embedded 
 in a function: "ERROR: X not defined" (in first line of @parallel). Why am 
 I getting this error when I'm *assigning to* X?

 function doparallelstuff(m = 10, n = 20)
     # initialize variables
     localX = Base.shmem_rand(m)
     localY = Base.shmem_rand(n)
     localf = [x->i+sum(x) for i=1:m]
     localg = [x->i+sum(x) for i=1:n]

     # broadcast variables to all worker processes
     @parallel for i=workers()
         global X = localX
         global Y = localY
         global f = localf
         global g = localg
     end
     # give variables same name on master
     X,Y,f,g = localX,localY,localf,localg

     # compute
     for iteration=1:1
         @everywhere for i=localindexes(X)
             X[i] = f[i](Y)
         end
         @everywhere for j=localindexes(Y)
             Y[j] = g[j](X)
         end
     end
 end

 doparallelstuff()

 On Fri, Nov 21, 2014 at 5:13 PM, Madeleine Udell >>> > wrote:

> My experiments with parallelism also occur in focused blocks; I think 
> that's a sign that it's not yet as user friendly as it could be.
>
> Here's a solution to the problem I posed that's simple to use: 
> @parallel + global can be used to broadcast a variable, while @everywhere 
> can be used to do a computation on local data (ie, without resending the 
> data). I'm not sure how to do the variable renaming programmatically, 
> though.
>
> # initialize variables
> m,n = 10,20
> localX = Base.shmem_rand(m)
> localY = Base.shmem_rand(n)
> localf = [x->i+sum(x) for i=1:m]
> localg = [x->i+sum(x) for i=1:n]
>
> # broadcast variables to all worker processes
> @parallel for i=workers()
>     global X = localX
>     global Y = localY
>     global f = localf
>     global g = localg
> end
> # give variables same name on master
> X,Y,f,g = localX,localY,localf,localg
>
> # compute
> for iteration=1:10
>     @everywhere for i=localindexes(X)
>         X[i] = f[i](Y)
>     end
>     @everywhere for j=localindexes(Y)
>         Y[j] = g[j](X)
>     end
> end
>
> On Fri, Nov 21, 2014 at 11:14 AM, Tim Holy  > wrote:
>
>> My experiments with parallelism tend to occur in focused blocks, and I
>> haven't done it in quite a while. So I doubt I can help much. But in
>> general I suspect you're encountering these problems because much of the
>> IPC goes through thunks, and so a lot of stuff gets reclaimed when
>> execution is done.
>>
>> If I were experimenting,

[julia-users] Re: LsqFit having trouble with a particular model

2014-10-04 Thread Blake Johnson
You simply have problems with your model. What LsqFit returns has a 
dramatically smaller chi^2 than the value you post above from Mathematica. 
The problem is entirely with those points for which xdata .< 1, where your 
model diverges to infinity. Try restricting the data set to x-points 
greater than 1, e.g.:

sel = xdata .> 1
fit = curve_fit(model, xdata[sel], ydata[sel], [0.5, 0.5])

--Blake

On Thursday, October 2, 2014 12:47:11 PM UTC-4, Helgi Freyr wrote:
>
> Hello,
>
> I have been trying Julia out today. In particular, I have been playing 
> around with fitting some data with LsqFit.
>
> However, it's not working as well as I would have hoped.
>
> Here is the code; the two data files and the output figure are attached:
>
> using LsqFit
> using PyPlot
>
> model(x, p) = p[1]./x + p[2]./x.^2
>
> xdata = readdlm("gridx.dat")
> ydata = readdlm("F1.dat")
>
> fit = curve_fit(model, xdata[1:end], ydata[1:end], [0.5, 0.5])
> println(fit.param)
> errors = estimate_errors(fit, 0.95)
> println(errors)
>
>
> plot(xdata,ydata,color="blue")
> xlim(0,120)
> plot(xdata,model(xdata, fit.param),color="green")
> savefig("lsq.png")
>
>
> Playing around with the model itself does do something. For example, as 
> this code is basically just the one from the readme file of LsqFit, I tried:
>
> model(x, p) = p[1]*exp(-x.*p[2])
>
> which is used there. It does fit somewhat better but not quite. I ran the 
> same thing in Mathematica and obtained:
>
> p[1]: 0.229263
> p[2]: -0.0949777
>
> which gets most of the curve right. Am I missing something in trying to 
> fit this curve to the data or are there any particular tricks to use with 
> curve_fit that I should know about?
>
> Best regards,
> Helgi
>


[julia-users] Re: Strassen algorithm in Julia

2014-07-18 Thread Blake Johnson
I think what you want is a view rather than a copy. This will probably come 
to base Julia in the next release cycle, but for now you might check out 
the ArrayViews.jl package:

https://github.com/lindahua/ArrayViews.jl
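
So instead of copying, something like this (a sketch using ArrayViews'
view function; double-check the package README in case the API differs):

using ArrayViews   # Pkg.add("ArrayViews")

a = rand(100, 100)
mt, kt = 50, 50
a11 = view(a, 1:mt, 1:kt)   # a lightweight view; no data is copied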

On Friday, July 18, 2014 12:00:05 PM UTC-4, Yimin Zhong wrote:
>
> I tried to implement the Strassen algorithm for matrix multiplication in 
> Julia. Here is the gist, a naive implementation; a sample benchmark is 
> included at the beginning of the file.
>
> https://gist.github.com/GaZ3ll3/87df748f76b119199fed
>
> It can beat Julia's A_mul_B!(), running around 2~8% faster depending on the 
> size of the matrix and the mindim threshold.
>
> However, *my problem* is that Julia spends like 2~5% of the running time 
> allocating *lots* of memory, like storing 3 extra matrices.
>
> Now I am using 
>
> a11 = a[1:mt, 1:kt] 
>
> to get a submatrix. I assume this copies the values rather than passing a 
> reference or using a pointer. I think this is the problem, but I do not 
> know how to avoid it and use pure Julia pointers for all the computation 
> (in Julia).
>
> Later I found out the "sub" function can give a SubArray, which uses a 
> reference. But when I changed the code to be sub-based, it took much longer 
> to run. Not better; it also consumed lots of memory. I do not know why.
>
> Anyway, I can write this in C using CBLAS with pure pointer operations and 
> then call the C library (in fact, I already wrote it); I just want to see 
> how the Strassen algorithm performs in Julia.
>
>
>

[julia-users] [PSA]: curve_fit from Optim.jl is moving to LsqFit.jl

2014-07-14 Thread Blake Johnson
The curve fitting functionality in Optim.jl is being moved into its own 
package: LsqFit.jl:

https://github.com/JuliaOpt/LsqFit.jl

Installable via Pkg.add("LsqFit").


[julia-users] Re: efficient iterative updating for dataframe subsets

2014-07-10 Thread Blake Johnson
The Devectorize package should help you here in keeping your code clean 
while getting explicit-loop-like performance.

On Wednesday, July 9, 2014 4:05:23 PM UTC-4, Steven G. Johnson wrote:
>
>
>
> On Wednesday, July 9, 2014 1:59:46 PM UTC-4, Steve Bellan wrote:
>>
>> Hi Josh, thanks for the response. I've managed to get a working version 
>> up with an Array. It's about twice as fast as R and I'm wondering if there 
>> are still ways I can speed it up. Here's the Julia version:
>>
>> (s__, mb_a1, mb_a2, mb_, f_ba1, f_ba2, f_b, hb1b2, hb2b1) = 1:9
>> function pre_coupleMat(serostates, sexually_active) 
>>     temp = serostates[sexually_active,:]
>>     serostates[sexually_active,s__] = temp[:,s__] .* (1-p_m_bef) .* (1-p_f_bef)
>>
>
> This will be way faster if you just write out a loop to update the 
> serostates array.
>
> The problem with your current code is that it allocates zillions of little 
> temporary arrays, which is always a slow thing to do in a 
> performance-critical function. Not only do you have temp, but every .* 
> operation allocates a temporary array for its result.
>
> That will make the code longer and a bit uglier, unfortunately, but 
> basically you need C-like inner-loop code for C-like performance.
>
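
For illustration, the devectorized update might look roughly like this (a
sketch; it assumes p_m_bef and p_f_bef are scalars in scope and
sexually_active is a Bool vector, as the snippet above suggests):

for i in find(sexually_active)
    # update the s__ column in place; no temporary arrays are allocated
    serostates[i, s__] *= (1 - p_m_bef) * (1 - p_f_bef)
end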


Re: [julia-users] Julia performance on large-scale text processing?

2014-06-13 Thread Blake Johnson
I mean, after constructing data_list try:
println(typeof(data_list))

To see what actual type you are getting. Sometimes list comprehensions 
can't figure out a tight type, and you'll get an Array{Any,1}.
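
For example (a sketch; UTF8String is just a 0.3-era guess at the right
element type, so use whatever your values actually are):

data_list = [get(parm_hash, key, "") for key in prop_list]
println(typeof(data_list))   # Array{Any,1} if inference gave up

# A typed comprehension pins the element type down:
data_list = UTF8String[get(parm_hash, key, "") for key in prop_list]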

--Blake

On Friday, June 13, 2014 11:06:40 AM UTC-4, Rich Morin wrote:
>
> On Jun 13, 2014, at 06:31, Blake Johnson wrote: 
> > What's the type of data_list in the list comprehension version? 
>
> It's basically an array of strings, but I'm not doing anything 
> explicit to specify that. 
>
> -r 
>
>  -- 
> http://www.cfcl.com/rdm   Rich Morin   r...@cfcl.com 
>  
> http://www.cfcl.com/rdm/resumeSan Bruno, CA, USA   +1 650-873-7841 
>
> Software system design, development, and documentation 
>
>
>

Re: [julia-users] Julia performance on large-scale text processing?

2014-06-13 Thread Blake Johnson
What's the type of data_list in the list comprehension version?

--Blake

On Thursday, June 12, 2014 8:13:18 PM UTC-4, John Myles White wrote:
>
> That’s a shame. Maybe the list comprehension has some typing issues.
>
>  — John
>
> On Jun 12, 2014, at 5:06 PM, Rich Morin > 
> wrote:
>
> On Thursday, June 12, 2014 4:15:44 PM UTC-7, John Myles White wrote:
>>
>> This line
>>
>> data_list = map(key -> get(parm_hash, key, ""), prop_list)
>>
>> could probably be sped up by replacing map with a list comprehension,
>> which would also remove the anonymous function. Both map and anonymous
>> functions are slower than you might hope, so removing them can offer
>> meaningful speedups.
>>
>
> I just tried this; sadly, the performance got a lot *worse*:
>
> 4836   data_list = map(key -> get(parm_hash, key, ""), prop_list)
>
> 10953  data_list = [ get(parm_hash, key, "") for key in prop_list ]
>
>
>

[julia-users] ANN: Cliffords.jl

2014-06-10 Thread Blake Johnson
BBN Technologies has released v0.1 of Cliffords.jl, a package for efficient 
calculation of Clifford circuits using the so-called *tableau* 
representation:

https://github.com/BBN-Q/Cliffords.jl

Some background for non-experts: the Pauli operators form a basis for the 
state-space of N two-level quantum systems (qubits). The operators that 
transform Pauli eigenstates to Pauli eigenstates are the Clifford group. 
While insufficient to describe universal quantum computation, the Clifford 
operators are relevant to a large class of quantum error correction codes. 
It turns out that one can simulate the action of an arbitrary circuit 
composed only of Clifford operations in polynomial space and time, a result 
known as the Gottesman-Knill theorem.

The Cliffords.jl package provides a convenient syntax for simulating such 
Clifford circuits. The functionality of this package compared to other 
alternatives is a bit limited at the moment. For instance, we currently do 
not support circuits with measurements. However, Julia allows us to get 
good performance out of a rather high-level description of the 
transformation relations.


[julia-users] Re: dot function problem

2014-05-20 Thread Blake Johnson
In Julia, [1.0 1.0] is a 1x2 Array. If you insert commas you get a 
2-element vector and then dot works, i.e.
dot([1.0, 1.0], [1.0, 1.0])
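
And if you already have row matrices (a quick sketch):

A = [1.0 1.0]          # 1x2 Array{Float64,2}
dot(vec(A), vec(A))    # vec gives a 2-element Vector => 2.0
(A * A')[1]            # or stay with matrices and extract the 1x1 result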

On Tuesday, May 20, 2014 8:25:01 PM UTC-4, Altieres Del-Sent wrote:
>
>  I am used to writing dot([1 1], [1 1]) in MATLAB. I know I can use 
> [1 1]'*[1 1] to calc the dot product, but I use that way because I think it 
> is faster without asking for a transpose. I tried to do the same thing with 
> Julia and got dot([1.0 1.0],[1.0 1.0]) 
>
> MethodError(dot,(
> 1x2 Array{Float64,2}:
>  1.0  1.0,
>
> 1x2 Array{Float64,2}:
>  1.0  1.0))
>
> why?
>
>

Re: [julia-users] New Year's resolutions for DataArrays, DataFrames and other packages

2014-01-22 Thread Blake Johnson
Sure, but the resulting expression is *much* more verbose. I just noticed 
that all expression-based indexing was on the chopping block. What is left 
after all this?

I can see how axing these features would make DataFrames.jl easier to 
maintain, but I found the expression stuff to present a rather nice 
interface.

--Blake

On Tuesday, January 21, 2014 11:51:03 AM UTC-5, John Myles White wrote:
>
> Can you do something like df[“ColA”] = f(df)?
>
>  — John
>
> On Jan 21, 2014, at 8:48 AM, Blake Johnson 
> > 
> wrote:
>
> I use within! pretty frequently. What should I be using instead if that is 
> on the chopping block?
>
> --Blake
>
> On Tuesday, January 21, 2014 7:42:39 AM UTC-5, tshort wrote:
>>
>> I also agree with your approach, John. Based on your criteria, here 
>> are some other things to consider for the chopping block. 
>>
>> - expression-based indexing 
>> - NamedArray (you already have an issue on this) 
>> - with, within, based_on and variants 
>> - @transform, @DataFrame 
>> - select, filter 
>> - DataStream 
>>
>> Many of these were attempts to ease syntax via delayed evaluation. We 
>> can either do without or try to implement something like LINQ. 
>>
>>
>>
>> On Mon, Jan 20, 2014 at 7:02 PM, Kevin Squire  
>> wrote: 
>> > Hi John, 
>> > 
>> > I agree with pretty much everything you have written here, and really 
>> > appreciate that you've taken the lead in cleaning things up and getting 
>> > us on track. 
>> > 
>> > Cheers! 
>> > Kevin 
>> > 
>> > 
>> > On Mon, Jan 20, 2014 at 1:57 PM, John Myles White wrote: 
>> >> 
>> >> As I said in another thread recently, I am currently the lead maintainer 
>> >> of more packages than I can keep up with. I think it’s been useful for 
>> >> me to start so many different projects, but I can’t keep maintaining 
>> >> most of my packages given my current work schedule. 
>> >> 
>> >> Without Simon Kornblith, Kevin Squire, Sean Garborg and several others 
>> >> doing amazing work to keep DataArrays and DataFrames going, much of our 
>> >> basic data infrastructure would have already become completely unusable. 
>> >> But even with the great work that’s been done on those packages recently, 
>> >> there’s still a lot of additional design work required. I’d like to free 
>> >> up some of my time to do that work. 
>> >> 
>> >> To keep things moving forward, I’d like to propose a couple of radical 
>> >> New Year’s resolutions for the packages I work on. 
>> >> 
>> >> (1) We need to stop adding functionality and focus entirely on improving 
>> >> the quality and documentation of our existing functionality. We have way 
>> >> too much prototype code in DataFrames that I can’t keep up with. I’m 
>> >> about to make a pull request for DataFrames that will remove everything 
>> >> related to column groupings, database-style indexing and Blocks.jl 
>> >> support. I absolutely want to see us push all of those ideas forward in 
>> >> the future, but they need to happen in unmerged forks or separate 
>> >> packages until we have the resources needed to support them. Right now, 
>> >> they make an overwhelming maintenance challenge even more onerous. 
>> >> 
>> >> (2) We can’t support anything other than the master branch of most 
>> >> JuliaStats packages except possibly for Distributions. I personally 
>> >> don’t have the time to simultaneously keep stuff working with Julia 0.2 
>> >> and Julia 0.3. Moreover, many of our basic packages aren’t mature enough 
>> >> to justify supporting older versions. We should do a better job of 
>> >> supporting our master releases and not invest precious time trying to 
>> >> support older releases. 
>> >> 
>> >> (3) We need to make more of DataArrays and DataFrames reflect the Julian 
>> >> worldview. Lots of our code uses an interface that is incongruous with 
>> >> the interfaces found in Base. Even worse, a large chunk of code has 
>> >> type-stability problems that makes it very slow, when comparable code 
>> >> that uses normal Arrays is 100x fa

Re: [julia-users] New Year's resolutions for DataArrays, DataFrames and other packages

2014-01-21 Thread Blake Johnson
I use within! pretty frequently. What should I be using instead if that is 
on the chopping block?

--Blake

On Tuesday, January 21, 2014 7:42:39 AM UTC-5, tshort wrote:
>
> I also agree with your approach, John. Based on your criteria, here 
> are some other things to consider for the chopping block. 
>
> - expression-based indexing 
> - NamedArray (you already have an issue on this) 
> - with, within, based_on and variants 
> - @transform, @DataFrame 
> - select, filter 
> - DataStream 
>
> Many of these were attempts to ease syntax via delayed evaluation. We 
> can either do without or try to implement something like LINQ. 
>
>
>
> On Mon, Jan 20, 2014 at 7:02 PM, Kevin Squire 
> > 
> wrote: 
> > Hi John, 
> > 
> > I agree with pretty much everything you have written here, and really 
> > appreciate that you've taken the lead in cleaning things up and getting 
> > us on track. 
> > 
> > Cheers! 
> > Kevin 
> > 
> > 
> > On Mon, Jan 20, 2014 at 1:57 PM, John Myles White wrote: 
> >> 
> >> As I said in another thread recently, I am currently the lead maintainer 
> >> of more packages than I can keep up with. I think it’s been useful for 
> >> me to start so many different projects, but I can’t keep maintaining 
> >> most of my packages given my current work schedule. 
> >> 
> >> Without Simon Kornblith, Kevin Squire, Sean Garborg and several others 
> >> doing amazing work to keep DataArrays and DataFrames going, much of our 
> >> basic data infrastructure would have already become completely unusable. 
> >> But even with the great work that’s been done on those packages recently, 
> >> there’s still a lot of additional design work required. I’d like to free 
> >> up some of my time to do that work. 
> >> 
> >> To keep things moving forward, I’d like to propose a couple of radical 
> >> New Year’s resolutions for the packages I work on. 
> >> 
> >> (1) We need to stop adding functionality and focus entirely on improving 
> >> the quality and documentation of our existing functionality. We have way 
> >> too much prototype code in DataFrames that I can’t keep up with. I’m 
> >> about to make a pull request for DataFrames that will remove everything 
> >> related to column groupings, database-style indexing and Blocks.jl 
> >> support. I absolutely want to see us push all of those ideas forward in 
> >> the future, but they need to happen in unmerged forks or separate 
> >> packages until we have the resources needed to support them. Right now, 
> >> they make an overwhelming maintenance challenge even more onerous. 
> >> 
> >> (2) We can’t support anything other than the master branch of most 
> >> JuliaStats packages except possibly for Distributions. I personally 
> >> don’t have the time to simultaneously keep stuff working with Julia 0.2 
> >> and Julia 0.3. Moreover, many of our basic packages aren’t mature enough 
> >> to justify supporting older versions. We should do a better job of 
> >> supporting our master releases and not invest precious time trying to 
> >> support older releases. 
> >> 
> >> (3) We need to make more of DataArrays and DataFrames reflect the Julian 
> >> worldview. Lots of our code uses an interface that is incongruous with 
> >> the interfaces found in Base. Even worse, a large chunk of code has 
> >> type-stability problems that makes it very slow, when comparable code 
> >> that uses normal Arrays is 100x faster. We need to develop new idioms 
> >> and new strategies for making code that interacts with 
> >> type-destabilizing NA’s faster. More generally, we need to make 
> >> DataArrays and DataFrames fit in better with Julia when Julia and R 
> >> disagree. Following R’s lead has often led us astray because R doesn’t 
> >> share Julia’s strengths or weaknesses. 
> >> 
> >> (4) Going forward, there should be exactly one way to do most things. 
> >> The worst part of our current codebase is that there are multiple ways 
> >> to express the same computation, but (a) some of them are unusably slow 
> >> and (b) some of them don’t ever get tested or maintained properly. This 
> >> is closely linked to the excess proliferation of functionality described 
> >> in Resolution 1 above. We need to start removing stuff from our packages 
> >> and making the parts we keep both reliable and fast. 
> >> 
> >> I think we can push DataArrays and DataFrames to 1.0 status by the end 
> >> of this year. But I think we need to adopt a new approach if we’re going 
> >> to get there. Lots of stuff needs to get deprecated and what remains 
> >> needs a lot more testing, benchmarking and documentation. 
> >> 
> >>  — John 
> >> 
> > 
>


[julia-users] Re: Some problem with the interpreter

2013-12-20 Thread Blake Johnson
Eduardo, do you have a Haswell processor? If so, this error was probably the 
same as issue #5132, which was fixed yesterday.

--Blake

On Thursday, December 19, 2013 1:58:01 AM UTC-5, Eduardo Mendes wrote:
>
> Hi, 
>
> I'm having some problems with the interpreter. I tried a clean install and 
> that did not fix it.
>
> Essentially, if I omit the semicolon after an array expression I get a 
> BoundsError(). I am not sure whether anyone else is having a similar problem 
> or if it is something particular to my computer.
>
> Any ideas on how can I fix it?
>
> Thanks
> D.
>
> $ julia
>                _
>    _       _ _(_)_     |  A fresh approach to technical computing
>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>    _ _   _| |_  __ _   |  Type "help()" to list help topics
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.0-prerelease+490 (2013-12-15 07:16 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit f8f3190* (0 days old master)
> |__/                   |  x86_64-linux-gnu
>
> julia> [1]
> 1-element Array{Int64,1}:
> Evaluation succeeded, but an error occurred while showing value of type 
> Array{Int64,1}:
> ERROR: BoundsError()
>  in parseint_nocheck at string.jl:1472
>  in parseint_nocheck at string.jl:1508
>  in parseint at string.jl:1511
>  in writemime at repl.jl:21
>  in display at multimedia.jl:117
>  in display at multimedia.jl:119
>  in display at multimedia.jl:151
>
>
>
>