Thanks Pascal.
I tried using gpuarray.preallocate values of 0.01 and 0.1. The run with 0.1,
for instance, starts like this:
$ python scripts/run_rl_mj.py --env_name CartPole-v0 --log trpo_logs/CartPole-v0
Using cuDNN version 5105 on context None
Preallocating 1218/12186 Mb (0.10) on cuda
Mapped nam
What happens if you set gpuarray.preallocate to something much smaller, or
even to -1?
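For reference, a sketch of how such a value could be passed on the command
line via the standard THEANO_FLAGS environment variable (the invocation below
is illustrative, reusing the script path from the run above):

```shell
# Hypothetical invocation: -1 disables Theano's GPU memory cache entirely,
# rather than preallocating a fraction of device memory.
THEANO_FLAGS="device=cuda,gpuarray.preallocate=-1" \
    python scripts/run_rl_mj.py --env_name CartPole-v0 --log trpo_logs/CartPole-v0
```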
Also, I see the script uses multiprocessing. Weird things happen if new
Python processes are forked after the GPU has been initialized; I believe
this is a limitation of how CUDA handles GPU contexts.
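A minimal sketch of the usual workaround: start workers with the 'spawn'
start method so each child begins from a fresh interpreter instead of
inheriting a forked copy of an already-initialized CUDA context, and do any
GPU initialization inside the worker. The init_worker and square functions
here are hypothetical stand-ins, not part of the script above.

```python
import multiprocessing as mp

def init_worker():
    # In a real run, initialize the GPU context here (e.g. import theano
    # with device=cuda in THEANO_FLAGS) so each worker gets its own
    # CUDA context rather than a forked copy of the parent's.
    pass

def square(x):
    # Placeholder for the actual per-worker computation.
    return x * x

if __name__ == "__main__":
    # 'spawn' starts children from a fresh Python interpreter, avoiding
    # the fork-after-GPU-init problem described above.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2, initializer=init_worker) as pool:
        print(pool.map(square, [1, 2, 3]))  # -> [1, 4, 9]
```

The key point is ordering: either spawn the pool before anything touches the
GPU, or keep all GPU initialization inside the workers themselves.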
The so