I posted this because I also find the results... astonishingly surprising. 
However, the timings are apparently real: the first run took more than 
1.5 minutes by my wristwatch, and the second calculation finished instantly.
And no, no function wrapping whatsoever...
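
A likely explanation (an assumption on my part, but consistent with how 
@parallel is documented to behave in this era of Julia): without a reduction 
operator, @parallel spawns the loop on the workers and returns immediately, 
so toc() measures only the launch, not the work. A minimal sketch of how one 
could check, using @sync to wait for completion:

A = [1.0 1.0001; 1.0002 1.0003]
z = A
tic()
@sync @parallel for i in 1:1000000000
    z *= A   # rebinds a worker-local copy of z; the master's z is never updated
end
toc()        # with @sync in front, this should report the true elapsed time
z            # still equal to A on the master process

Note also that both snippets in the quoted post end by evaluating A, not z, 
and A is never modified, so the printed matrix is bound to look identical in 
both runs.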

On Thursday, July 21, 2016 at 6:22:50 PM UTC+2, Chris Rackauckas wrote:
>
> I wouldn't expect that much of a change unless you have a whole lot of 
> cores (even then, wouldn't expect this much of a change).
>
> Is this wrapped in a function when you're timing it?
>
> On Thursday, July 21, 2016 at 9:00:47 AM UTC-7, Ferran Mazzanti wrote:
>>
>> Hi,
>>
>> mostly showing my astonishment, but I can't even understand the figures in 
>> this stupid parallelization code.
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:1000000000
>>     z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0     1.0001
>>  1.0002  1.0003
>>
>>
>>
>> But then add @parallel to the for loop
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:1000000000
>>     z *= A
>> end
>> toc()
>> A
>>
>> and get 
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0     1.0001
>>  1.0002  1.0003
>>
>>
>> look at the elapsed time difference! And I'm running this on my Xeon 
>> desktop, not even a cluster.
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>>  0.0  0.0
>>  0.0  0.0
>>
>>
>> So is this what one should expect from this kind of simple 
>> parallelization? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.
>>
>>
>>
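
For completeness, a minimal sketch of the function wrapping Chris asks about 
(my own illustration, not code from the thread): putting the loop inside a 
function makes z and A local variables, so the compiler can infer their 
types; loops over non-const globals are typically far slower in Julia.

function matpow_loop(A, n)
    z = A
    for i in 1:n
        z *= A   # repeated matrix multiply, all locals, so types are inferred
    end
    return z
end

A = [1.0 1.0001; 1.0002 1.0003]
matpow_loop(A, 1)   # warm-up call so compilation time is not measured
tic()
z = matpow_loop(A, 1000000000)
toc()

For n this large the entries overflow to Inf; the point is only the timing, 
which gives a fairer serial baseline than looping over globals at top level.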
