The problem occurs when a function with a threaded loop calls another function
that itself contains a threaded loop.
Here is a code sample computing the mean of a random matrix;
the expected result is 0.5.
Here is the code:
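The original code did not survive in the archive, so here is a minimal sketch of the nested pattern being described, with hypothetical names of my own (`fill_row!`, `matrix_mean`). Note that this thread dates from 2016, when Julia's threading was experimental and threaded loops did not compose; in current Julia, nested `@threads` loops are legal.

```julia
using Base.Threads

# Hypothetical reconstruction: a helper that itself contains a threaded loop...
function fill_row!(A::Matrix{Float64}, i::Int)
    @threads for j in 1:size(A, 2)   # inner threaded loop
        A[i, j] = rand()
    end
end

# ...called from an outer threaded loop -- the nesting the post is about.
function matrix_mean(n::Int)
    A = Matrix{Float64}(undef, n, n)
    @threads for i in 1:n            # outer threaded loop
        fill_row!(A, i)
    end
    sum(A) / length(A)               # should come out close to 0.5
end
```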
It seems to me that your code is correct, BUT:
allocating a SharedArray is a bit expensive and should be done once.
The following modification runs OK:
function chisq(A::SharedArray{Float64})
    n = length(A)
    @sync @parallel for i in 1:n
        A[i] = (rand() - rand())^2
    end
    sumsq = sum(A)
end
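In current Julia the same idea would use the Distributed and SharedArrays standard libraries (`@parallel` was renamed `@distributed` in 1.0). A sketch, under that assumption, of allocating the array once up front and reusing it across calls:

```julia
using Distributed, SharedArrays

# Allocate the SharedArray once, outside the hot path, and reuse it on every
# call; add worker processes with addprocs() beforehand to actually parallelize.
const A = SharedArray{Float64}(100_000)

function chisq!(A::SharedArray{Float64})
    @sync @distributed for i in eachindex(A)
        A[i] = (rand() - rand())^2
    end
    sum(A)
end
```

Since E[(U - V)^2] = 1/6 for independent uniforms U, V on [0, 1], `chisq!(A) / length(A)` should come out near 0.1667.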
>> > have not … that yet.
>> >
>> > On Monday, June 13, 2016 at 13:43:00 UTC+2, Kristoffer Carlsson wrote:
>> >>
>> >> It seems weird to me that you guys want to call Jaccard distance with
>> >> float arrays. AFAIK Jaccard distance measures the dissimilarity between sets.
> Is there some more general formulation of it that extends to vectors in a
> continuous vector space?
>
> And, to note, Jaccard is type stable for inputs of Vector{Bool} in
> Distances.jl.
>
> On Monday, June 13, 2016 at 3:53:14 AM UTC+2, jean-pierre both wrote:
>>
In my application I compared Distances.Jaccard with Distances.Euclidean,
and Jaccard was very slow.
For example, with two Float64 vectors of size 11520 I get the following:
julia> D = Euclidean()
Distances.Euclidean()

julia> @time for i in 1:500
           evaluate(D, v1, v2)
       end
  0.00255
using Distances

v1 = rand(11520)
v2 = rand(11520)

function testDistances(v1::Array{Float64,1}, v2::Array{Float64,1}, D::SemiMetric)
    for i in 1:5000
        evaluate(D, v1, v2)
    end
end
@time testDistances(v1,v2,Jaccard())
18.351446 seconds (350.02 M allocations: 5.961 GB, 8.
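Regarding the earlier question about a formulation of Jaccard that extends to vectors in a continuous space: the usual generalization (sometimes called weighted Jaccard, or the Ruzicka distance) is 1 - sum(min(x_i, y_i)) / sum(max(x_i, y_i)), which reduces to the classic set-based definition on Bool vectors. A sketch with a name of my own, `jaccard_dist`, not the Distances.jl API:

```julia
# Weighted ("min/max") Jaccard distance: 1 - sum(min)/sum(max).
# On Bool vectors this coincides with the set-based Jaccard distance.
function jaccard_dist(x::AbstractVector, y::AbstractVector)
    num = 0.0
    den = 0.0
    @inbounds for (a, b) in zip(x, y)
        num += min(a, b)
        den += max(a, b)
    end
    den == 0 ? 0.0 : 1 - num / den
end
```

Because this is a single non-allocating loop, timing it on 11520-element Float64 vectors should land in the same ballpark as Euclidean, which suggests the slowdown reported above came from allocation overhead in the implementation of the time, not from anything inherent to the metric.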