I looked a bit deeper (i.e. found a machine where I have access to an Intel
compiler, albeit not up to date - my shop is cursed by budget cuts). ICC
breaks up a loop like
for (i=0; i
> That's interesting. I generally don't test with gcc and my experiments
> with ICC/C have shown something like
1) Won't that have bad interactions with precompilation? Since macros
apply at parse time, the package will stay in the "state" that it was
precompiled in: so if one precompiles the package and then adds
ParallelAccelerator, wouldn't ParallelAccelerator go unused? And the
other way around, if one removes ParallelAccelerator
Looks like version 0.2.1 has been merged now.
On Wednesday, October 26, 2016 at 5:13:38 PM UTC-7, Todd Anderson wrote:
>
> Okay, METADATA with ParallelAccelerator version 0.2 has been merged so if
> you do a standard Pkg.add() or update() you should get the latest version.
>
> For native threads,
To answer your question #1, would the following be suitable? There may be
a couple details to work out but what about the general approach?
if haskey(Pkg.installed(), "ParallelAccelerator")
    println("ParallelAccelerator present")
    using ParallelAccelerator
    macro PkgCheck(ast)
        return :(@acc $(esc(ast)))  # @acc is ParallelAccelerator's entry point
    end
end
Not speaking on behalf of the ParallelAccelerator team - but the long term
future of ParallelAccelerator in my opinion is to do exactly that - keep
pushing on new things and get them (code or ideas) merged into Base as they
stabilize. Without the ParallelAccelerator team pushing us, multi-threading
Thank you for all of your amazing work. I will be giving v0.2 a try soon.
But I have two questions:
1) How do you see ParallelAccelerator integrating with packages? I asked
this in the chatroom, but I think having it here might be helpful for
others to chime in. If I want to use ParallelAccelerator
That's interesting. I generally don't test with gcc and my experiments
with ICC/C have shown something like 20% slower for LLVM/native threads for
some class of benchmarks (like blackscholes) but 2-4x slower for some other
benchmarks (like laplace-3d). The 20% may be attributable to ICC being
With appreciation for Intel Labs' commitment, our thanks to the people who
landed v0.2 of the ParallelAccelerator project.
On Wednesday, October 26, 2016 at 8:13:38 PM UTC-4, Todd Anderson wrote:
>
> Okay, METADATA with ParallelAccelerator version 0.2 has been merged so if
> you do a standard Pkg.add() or update() you should get the latest version.
This is great stuff. Initial observations (under Linux/GCC) are that
native threads are about 20% faster than OpenMP, so I surmise you are
feeding LLVM some very tasty
code. (I tested long loops with straightforward memory access.)
On the other hand, some of the earlier posts make me think that
Okay, METADATA with ParallelAccelerator version 0.2 has been merged so if
you do a standard Pkg.add() or update() you should get the latest version.
For native threads, please note that we've identified some issues with
reductions and stencils that have been fixed and we will shortly be
releasing
Actually, it seems that the pull request into Julia metadata to set the
default ParallelAccelerator version has not yet been merged. So, if you
want the simplest package update or to get the correct version with a
simple Pkg.add then hold off until the merge happens. I'll post here again
when