Hi,

How does one determine beforehand whether a given CUDA architecture is
sufficient to execute a project/program?
 
I am looking to do a topological sort on the GPU. I think this Tesla
architecture has 4 GPUs (128 cores each). How do I decide whether that is
sufficient to process a big graph?
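For a rough feel of the numbers, here is a back-of-the-envelope sketch (plain Python, no GPU needed). The grid and block limits are illustrative assumptions for an older Tesla-class device, not values queried from actual hardware:

```python
def passes_needed(num_vertices, threads_per_block=256, max_blocks=65535):
    # Threads available in one maximal 1-D grid launch (assumed limits).
    threads_per_launch = threads_per_block * max_blocks
    # One vertex per thread per pass; ceiling division for the remainder.
    return -(-num_vertices // threads_per_launch)

# Even a 10-million-vertex graph fits in a single 1-D launch here:
print(passes_needed(10_000_000))
```

The point of the sketch is that raw thread count is rarely the limit; a single launch already covers millions of work items. Device memory for the graph and the serial depth of the algorithm are usually the real constraints.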

If there are not enough threads, can we assign several chunks of the
computation to one thread (instead of one chunk per thread)? This sounds
straightforward, but it may need some work, or maybe it can't be done.
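Assigning multiple chunks to one thread is a standard CUDA idiom, usually called a grid-stride loop: each thread starts at its global index and strides by the total thread count, so a fixed number of threads covers any number of elements. In a kernel this looks like `for (i = blockIdx.x*blockDim.x + threadIdx.x; i < n; i += blockDim.x*gridDim.x)`. A CPU-side Python sketch of the same index pattern (names are illustrative):

```python
def grid_stride_indices(thread_id, total_threads, n_elements):
    # Elements one thread would process under a grid-stride loop:
    # start at its own index, then step by the total thread count.
    return list(range(thread_id, n_elements, total_threads))

# With 4 threads over 10 elements, thread 1 handles 1, 5, 9.
print(grid_stride_indices(1, 4, 10))
```

Together the threads cover every element exactly once, regardless of how much larger the problem is than the thread count.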

Any ideas on how to determine beforehand whether a big graph can be
processed on this hardware?

I am also looking into this myself, and if I find anything relevant I will
post it.

--
View this message in context: 
http://pycuda.2962900.n2.nabble.com/how-to-decide-if-a-certain-CUDA-archi-is-enough-to-execute-project-code-tp7574704.html
Sent from the PyCuda mailing list archive at Nabble.com.

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
