If it helps, I see similar problems on a Tesla C2050 with the latest PyCUDA against CUDA 3.1. Here is my test output:
garr...@alienbox:~/src/PyCUDA/pycuda/test$ python test_gpuarray.py
============================= test session starts ==============================
platform linux2 -- Python 2.6.5 -- pytest-1.2.1
test object 1: test_gpuarray.py

test_gpuarray.py ...F......F.....F................F.

=================================== FAILURES ===================================
___________________________ TestGPUArray.test_minmax ___________________________

    def f(*args, **kwargs):
        import pycuda.driver
        # appears to be idempotent, i.e. no harm in calling it more than once
        pycuda.driver.init()
        ctx = make_default_context()
        try:
            assert isinstance(ctx.get_device().name(), str)
            assert isinstance(ctx.get_device().compute_capability(), tuple)
            assert isinstance(ctx.get_device().get_attributes(), dict)
>           inner_f(*args, **kwargs)

/usr/local/lib/python2.6/dist-packages/pycuda-0.94.1-py2.6-linux-x86_64.egg/pycuda/tools.py:503:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_gpuarray.TestGPUArray instance at 0x1e65c20>

    @mark_cuda_test
    def test_minmax(self):
        from pycuda.curandom import rand as curand

        if has_double_support():
            dtypes = [numpy.float64, numpy.float32, numpy.int32]
        else:
            dtypes = [numpy.float32, numpy.int32]

        for what in ["min", "max"]:
            for dtype in dtypes:
                a_gpu = curand((200000,), dtype)
                a = a_gpu.get()

                op_a = getattr(numpy, what)(a)
                op_a_gpu = getattr(gpuarray, what)(a_gpu).get()

>               assert op_a_gpu == op_a, (op_a_gpu, op_a, dtype, what)
E               AssertionError: (array(5.4287724196910858e-05, dtype=float32), 2.5250483e-06, <type 'numpy.float32'>, 'min')

test_gpuarray.py:450: AssertionError
_______________________ TestGPUArray.test_subset_minmax ________________________

    def f(*args, **kwargs):
        import pycuda.driver
        # appears to be idempotent, i.e. no harm in calling it more than once
        pycuda.driver.init()
        ctx = make_default_context()
        try:
            assert isinstance(ctx.get_device().name(), str)
            assert isinstance(ctx.get_device().compute_capability(), tuple)
            assert isinstance(ctx.get_device().get_attributes(), dict)
>           inner_f(*args, **kwargs)

/usr/local/lib/python2.6/dist-packages/pycuda-0.94.1-py2.6-linux-x86_64.egg/pycuda/tools.py:503:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_gpuarray.TestGPUArray instance at 0x1f9c878>

    @mark_cuda_test
    def test_subset_minmax(self):
        from pycuda.curandom import rand as curand

        l_a = 200000
        gran = 5
        l_m = l_a - l_a // gran + 1

        if has_double_support():
            dtypes = [numpy.float64, numpy.float32, numpy.int32]
        else:
            dtypes = [numpy.float32, numpy.int32]

        for dtype in dtypes:
            a_gpu = curand((l_a,), dtype)
            a = a_gpu.get()

            meaningful_indices_gpu = gpuarray.zeros(l_m, dtype=numpy.int32)
            meaningful_indices = meaningful_indices_gpu.get()
            j = 0
            for i in range(len(meaningful_indices)):
                meaningful_indices[i] = j
                j = j + 1
                if j % gran == 0:
                    j = j + 1

            meaningful_indices_gpu = gpuarray.to_gpu(meaningful_indices)
            b = a[meaningful_indices]
            min_a = numpy.min(b)
            min_a_gpu = gpuarray.subset_min(meaningful_indices_gpu, a_gpu).get()

>           assert min_a_gpu == min_a
E           assert array(0.00024149427190423012, dtype=float32) == 5.105976e-06

test_gpuarray.py:484: AssertionError
____________________________ TestGPUArray.test_sum _____________________________

    def f(*args, **kwargs):
        import pycuda.driver
        # appears to be idempotent, i.e. no harm in calling it more than once
        pycuda.driver.init()
        ctx = make_default_context()
        try:
            assert isinstance(ctx.get_device().name(), str)
            assert isinstance(ctx.get_device().compute_capability(), tuple)
            assert isinstance(ctx.get_device().get_attributes(), dict)
>           inner_f(*args, **kwargs)

/usr/local/lib/python2.6/dist-packages/pycuda-0.94.1-py2.6-linux-x86_64.egg/pycuda/tools.py:503:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_gpuarray.TestGPUArray instance at 0x1fa9dd0>

    @mark_cuda_test
    def test_sum(self):
        from pycuda.curandom import rand as curand
        a_gpu = curand((200000,))
        a = a_gpu.get()

        sum_a = numpy.sum(a)

        from pycuda.reduction import get_sum_kernel
        sum_a_gpu = gpuarray.sum(a_gpu).get()

>       assert abs(sum_a_gpu-sum_a)/abs(sum_a) < 1e-4
E       assert (abs((array(1562.4339599609375, dtype=float32) - 100069.64)) / abs(100069.64)) < 0.0001

test_gpuarray.py:431: AssertionError
____________________________ TestGPUArray.test_dot _____________________________

    def f(*args, **kwargs):
        import pycuda.driver
        # appears to be idempotent, i.e. no harm in calling it more than once
        pycuda.driver.init()
        ctx = make_default_context()
        try:
            assert isinstance(ctx.get_device().name(), str)
            assert isinstance(ctx.get_device().compute_capability(), tuple)
            assert isinstance(ctx.get_device().get_attributes(), dict)
>           inner_f(*args, **kwargs)

/usr/local/lib/python2.6/dist-packages/pycuda-0.94.1-py2.6-linux-x86_64.egg/pycuda/tools.py:503:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <test_gpuarray.TestGPUArray instance at 0x1fb7d88>

    @mark_cuda_test
    def test_dot(self):
        from pycuda.curandom import rand as curand
        a_gpu = curand((200000,))
        a = a_gpu.get()
        b_gpu = curand((200000,))
        b = b_gpu.get()

        dot_ab = numpy.dot(a, b)

        dot_ab_gpu = gpuarray.dot(a_gpu, b_gpu).get()

>       assert abs(dot_ab_gpu-dot_ab)/abs(dot_ab) < 1e-4
E       assert (abs((array(777.169189453125, dtype=float32) - 50140.887)) / abs(50140.887)) < 0.0001

test_gpuarray.py:498: AssertionError
===================== 4 failed, 31 passed in 17.31 seconds =====================

cumath and driver pass every time I have tried. The Tesla is compute capability 2.0; I also have two compute capability 1.3 cards in the same machine...

@Andreas, is there a way to run your tests on a specific GPU instead of GPU 0 (as decided by CUDA)?

On Mon, Sep 27, 2010 at 3:00 PM, <pycuda-requ...@tiker.net> wrote:

> Send PyCUDA mailing list submissions to
>         pycuda@tiker.net
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.tiker.net/listinfo/pycuda
> or, via email, send a message with subject or body 'help' to
>         pycuda-requ...@tiker.net
>
> You can reach the person managing the list at
>         pycuda-ow...@tiker.net
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of PyCUDA digest..."
>
>
> Today's Topics:
>
>    1. failed test_gpuarray on GTX480 (jmcarval)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 27 Sep 2010 04:42:00 -0700 (PDT)
> From: jmcarval <jmcar...@fe.up.pt>
> To: pycuda@tiker.net
> Subject: [PyCUDA] failed test_gpuarray on GTX480
> Message-ID: <1285587720039-5574551.p...@n2.nabble.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hi.
> Installed PyCUDA 0.94.1 on several Linux boxes.
> All have Ubuntu 10.4 with CUDA 3.1 (drv 256.40) and Python 2.6.5.
>
> Boxes with 1.1-capability GPUs like the 8600GT, 9400GT or FX850 are OK, and
> some users are already trying them.
> Boxes with 1.3 (GTX280) and 2.0 (GTX480) have difficulties just running the
> supplied tests. On these:
>
> test_cumath.py passes all tests but is 5 times slower on the GTX280 and
> 40(!) times slower on the GTX480.
> As far as I can see, this test never uses float64.
>
> test_driver.py passes all tests but is 4 times slower on the GTX280 and 9(!)
> times slower on the GTX480.
> As far as I can see, this test also doesn't use native float64. Testing
> fp_textures seems to be done with some sort of emulation, but I may be wrong
> here.
>
> test_gpuarray.py is 2 times slower on the GTX280 and fails the dot, sum,
> minmax and subset_minmax tests on the GTX480.
>
> Can anybody please point me to some mail thread, FAQ or installation guide
> that helps me correct whatever I have done wrong?
>
> Thanks
>
> Joao
>
> --
> View this message in context:
> http://pycuda.2962900.n2.nabble.com/failed-test-gpuarray-on-GTX480-tp5574551p5574551.html
> Sent from the PyCuda mailing list archive at Nabble.com.
>
>
> ------------------------------
>
> _______________________________________________
> PyCUDA mailing list
> PyCUDA@tiker.net
> http://lists.tiker.net/listinfo/pycuda
>
>
> End of PyCUDA Digest, Vol 27, Issue 27
> **************************************
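P.S. One data point on the failures themselves: errors of this size cannot be ordinary float32 round-off. Here is a quick CPU-only numpy sketch (my own check, not part of the test suite) that sums 200000 uniform float32 values and compares against a float64 reference:

```python
import numpy as np

# Mimic the test's input: 200000 uniform values in float32.
rng = np.random.RandomState(0)
a = rng.rand(200000).astype(np.float32)

ref = float(np.sum(a.astype(np.float64)))  # high-precision reference, ~100000
s32 = float(np.sum(a))                     # the same reduction done in float32
rel_err = abs(s32 - ref) / abs(ref)

# float32 rounding alone stays far inside the test's 1e-4 tolerance
assert rel_err < 1e-4
```

Since pure float32 rounding stays orders of magnitude inside the 1e-4 tolerance, a result like 1562 vs. 100069 points at the reduction kernel itself on compute capability 2.0, not at precision.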
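P.P.S. Partially answering my own device-selection question: the test harness builds its context through make_default_context(), and my reading of pycuda/tools.py is that it consults the CUDA_DEVICE environment variable, so something like `CUDA_DEVICE=1 python test_gpuarray.py` should pin the run to device 1 (please verify against your install). A minimal sketch of that lookup, with a hypothetical helper name of my own:

```python
import os

def pick_device_index(default=0):
    # Hypothetical helper mirroring the CUDA_DEVICE lookup that (I believe)
    # pycuda.tools.make_default_context performs; the name is mine, not pycuda's.
    return int(os.environ.get("CUDA_DEVICE", default))

print(pick_device_index())  # device index the tests would land on
```

Outside the test harness one can also select a card explicitly with pycuda.driver.Device(n).make_context(), but since the tests go through make_default_context(), the environment variable looks like the less invasive route.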