Hi,

Note below what happens when I run fill_gpu_with_nans.py as a script:

c:\>python fill_gpu_with_nans.py
filled 237043712 out of 239140864 bytes with NaNs

However, if I start an interpreter:

c:\>python
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import fill_gpu_with_nans
filled 237043712 out of 239140864 bytes with NaNs
>>> reload(fill_gpu_with_nans)
filled 983040 out of 1114112 bytes with NaNs
<module 'fill_gpu_with_nans' from 'fill_gpu_with_nans.pyc'>
>>> reload(fill_gpu_with_nans)
filled 1110016 out of 1114112 bytes with NaNs
<module 'fill_gpu_with_nans' from 'fill_gpu_with_nans.pyc'>

Thus, when run as a script, I get:

filled 237043712 out of 239140864 bytes with NaNs

but from inside the interpreter, I get:

filled 237043712 out of 239140864 bytes with NaNs
filled 983040 out of 1114112 bytes with NaNs
filled 1110016 out of 1114112 bytes with NaNs

What is happening here? Is there some kind of caching going on?
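For what it's worth, here is a rough, GPU-free sketch of the kind of behavior reload() can produce: reload re-executes the module's top-level code while objects created by the previous import are still alive, so a module that grabs a resource at import time sees less of that resource available the second time around. The names alloc_demo and _demo_pool below are made up purely for this illustration:

```python
# Sketch: reload() re-runs module-level code while the first import's
# allocations are still held, so each run adds to what is already in use.
import importlib
import os
import sys
import tempfile

# A tiny throwaway module that "allocates" at import time by appending
# to a pool that survives reloads (stashed on sys for the demo).
code = (
    "import sys\n"
    "pool = getattr(sys, '_demo_pool', [])\n"
    "sys._demo_pool = pool\n"
    "pool.append(bytearray(16))   # allocation made at import time\n"
)
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "alloc_demo.py"), "w") as f:
    f.write(code)
sys.path.insert(0, tmpdir)

import alloc_demo
print(len(sys._demo_pool))       # -> 1: one chunk held after first import
importlib.reload(alloc_demo)
print(len(sys._demo_pool))       # -> 2: the first chunk was never released
```

If the pool here were a fixed-size GPU, the second run could only allocate whatever the still-live first allocation left over.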

Thanks in advance.

On 9/15/2010 10:38 AM, Andreas Kloeckner wrote:
On Wed, 15 Sep 2010 10:27:39 -0700, reckoner <recko...@gmail.com> wrote:
Hi,

According to dump_properties.py (shown below), I have

    Total Memory: 261824 KB

How much of this can I allocate using gpuarray as in the following:

http://documen.tician.de/pycuda/driver.html#pycuda.driver.mem_get_info
tells you how much of that is actually free (i.e. not taken up by frame
buffers etc.). I'd expect that you can get nearly all of that via any of
the allocation methods in PyCUDA.

Check out examples/fill_gpu_with_nans.py -- it tries to allocate the
biggest chunk it can get and fills it with NaNs as a debugging aid.
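One way to find the biggest allocatable chunk (the actual script may do it differently) is to binary-search the largest request the allocator accepts. The sketch below uses a stand-in allocator so it runs without a GPU; in real PyCUDA the allocator would be pycuda.driver.mem_alloc, and FREE_BYTES would come from pycuda.driver.mem_get_info():

```python
# Hedged sketch of the "grab the biggest chunk that fits" strategy,
# using a mock allocator in place of pycuda.driver.mem_alloc.

FREE_BYTES = 237000000        # pretend mem_get_info() reported this much free


def mock_alloc(nbytes):
    """Stand-in for mem_alloc: fail if the request exceeds free memory."""
    if nbytes > FREE_BYTES:
        raise MemoryError(nbytes)
    return bytearray(0)       # placeholder for a device pointer


def largest_allocation(alloc, hi):
    """Binary-search the largest size (in bytes) that alloc() accepts."""
    lo = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            alloc(mid)
            lo = mid          # mid fits: search upward
        except MemoryError:
            hi = mid - 1      # mid too big: search downward
    return lo


print(largest_allocation(mock_alloc, 2 ** 32))  # -> 237000000
```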

Andreas


_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
