Hi everyone,

In case anyone is interested, I just set up a Google group to discuss 
GPU-based simulation for our Python neural simulator Brian:
http://groups.google.fr/group/brian-on-gpu
Our simulator relies heavily on NumPy. I would be very happy if the GPU 
experts here would like to share their expertise.

Best,
Romain

Romain Brette wrote:
> Sturla Molden wrote:
>> Thus, here is my plan:
>>
>> 1. a special context-manager class
>> 2. immutable arrays inside with statement
>> 3. lazy evaluation: expressions build up a parse tree
>> 4. dynamic code generation
>> 5. evaluation on exit
>>
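A minimal sketch of how steps 1-5 could fit together, in plain NumPy/Python
(the names LazyArray, lazy and assign are invented here for illustration;
real dynamic code generation would presumably target C or a GPU backend
rather than exec):

import numpy as np

def _to_code(x):
    # a tree node renders itself; plain scalars are inlined as literals
    return x.code() if isinstance(x, Expr) else repr(x)

class Expr(object):
    """Node of a lazy expression tree: operators record work instead of doing it."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def __add__(self, other): return Expr('+', self, other)
    def __sub__(self, other): return Expr('-', self, other)
    def __mul__(self, other): return Expr('*', self, other)
    def code(self):
        return '(%s %s %s)' % (_to_code(self.left), self.op, _to_code(self.right))

class LazyArray(Expr):
    """Wraps a NumPy array; it is never written to inside the with-block (step 2)."""
    def __init__(self, name, data):
        self.name, self.data = name, data
    def code(self):
        return self.name

class lazy(object):
    """Context manager (steps 1 and 5): collects assignments, then generates
    and executes the code for all of them on exit."""
    def __init__(self, **arrays):
        self.arrays = dict((k, LazyArray(k, v)) for k, v in arrays.items())
        self.todo = []                     # recorded (target, expression-tree) pairs
    def __enter__(self):
        return self
    def assign(self, target, expr):
        self.todo.append((target, expr))   # step 3: nothing is evaluated yet
    def __exit__(self, *exc_info):
        ns = dict((k, a.data) for k, a in self.arrays.items())
        src = '\n'.join('%s = %s' % (t, e.code()) for t, e in self.todo)
        exec(compile(src, '<generated>', 'exec'), {}, ns)   # steps 4 and 5
        self.result = ns

# usage: nothing is computed inside the block; the generated code runs on exit
v, ge = np.zeros(10000), np.random.rand(10000)
with lazy(v=v, ge=ge) as ctx:
    a = ctx.arrays
    ctx.assign('v', a['v'] + a['ge'] * 0.1)
print(ctx.result['v'][:3])
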
> 
> There seems to be some similarity with what we want to do to accelerate 
> our neural simulations (briansimulator.org), as described here:
> http://brian.svn.sourceforge.net/viewvc/brian/trunk/dev/BEPs/BEP-9-Automatic%20code%20generation.txt?view=markup
> (by the way BEP is "Brian Enhancement Proposal")
> The speed-up factor we got in our experimental code with the GPU is very 
> substantial when there are many neurons (i.e. large vectors, e.g. 10 000 
> elements), even when the operations are simple.
> 
> Romain
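
To make the kind of operation concrete: the per-neuron state update is a
simple elementwise expression over large state vectors, which is exactly the
case where the GPU pays off. A minimal sketch using PyCUDA's ElementwiseKernel
(illustrative only, not our actual code; the model and variable names are
made up):

import numpy as np
import pycuda.autoinit                      # initialises a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# one Euler step of a toy leaky-integrator model, compiled from CUDA C source
state_update = ElementwiseKernel(
    "float *v, float *ge, float dt, float tau",
    "v[i] += dt * (ge[i] - v[i]) / tau",
    "state_update")

N = 10000                                   # "many neurons" = large vectors
v = gpuarray.to_gpu(np.zeros(N, dtype=np.float32))
ge = gpuarray.to_gpu(np.random.rand(N).astype(np.float32))

for _ in range(1000):                       # 1000 time steps, all on the GPU
    state_update(v, ge, np.float32(0.1e-3), np.float32(10e-3))

print(v.get()[:5])                          # copy a few values back to check

On the CPU the same update is a single NumPy statement,
v += dt * (ge - v) / tau, which is why generating the kernel source
automatically from such an expression string looks feasible.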
