Hi Christopher,

Are you by any chance logged in with ssh X11 forwarding (ssh -X ...)? The error you report often comes up in that context, and X forwarding is not the right way to run ParaView on your cluster.

Depending on how your cluster is set up, you may need to start the X server before launching ParaView, and make sure to shut it down after ParaView exits. In that scenario your xorg.conf would specify the nvidia driver and a screen for each GPU, which you would then reference through the DISPLAY variable in the shell used to start ParaView. If you already have X11 running and screens configured, then it's just a matter of setting DISPLAY correctly. When there are multiple GPUs per node, you'd set the display using the MPI rank modulo the number of GPUs per node; a sketch of a wrapper script doing this follows.
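
Here's a minimal sketch of such a wrapper, not a tested recipe: it assumes OpenMPI (which exports OMPI_COMM_WORLD_LOCAL_RANK to every launched process) and an X server already running on display :0 with one screen configured per GPU. The script name and the GPUS_PER_NODE value are placeholders to adapt to your cluster:

    #!/bin/bash
    # pvbatch-wrapper.sh -- illustrative sketch; adapt to your cluster.
    # Assumes OpenMPI, which exports OMPI_COMM_WORLD_LOCAL_RANK to every
    # process, and an X server already up on :0 with one screen per GPU.
    GPUS_PER_NODE=2                              # hypothetical; match your hardware
    LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}  # this process's rank on its node
    SCREEN=$(( LOCAL_RANK % GPUS_PER_NODE ))     # rank modulo GPU count picks a screen
    export DISPLAY=:0.${SCREEN}
    exec "$@"                                    # run the real command (pvbatch ...)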

I'm not sure it matters that much, but I don't think you want the --use-offscreen-rendering option.
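
With a wrapper like the sketch above, the launch would then look something like this (no --use-offscreen-rendering; the wrapper name is the hypothetical one from my sketch):

    mpirun -n 12 ./pvbatch-wrapper.sh pvbatch parallelSphere.py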

Burlen

On 10/26/2014 10:23 PM, R C Bording wrote:
Hi,
Managed to get a "working" version of ParaView 4.2.0.1 on our GPU cluster, but when I try to run the parallelSphere.py script on more than one node it just hangs. It works like it is supposed to up to 12 cores on a single node. I am still trying to work out if I am running on the GPU (Tesla C2070).

Here is the list of CMake configuration options:

IBS_TOOL_CONFIGURE='-DCMAKE_BUILD_TYPE=Release \
-DParaView_FROM_GIT=OFF \
-DParaView_URL=$MYGROUP/vis/src/ParaView-v4.2.0-source.tar.gz \
-DENABLE_boost=ON \
-DENABLE_cgns=OFF \
-DENABLE_ffmpeg=ON \
-DENABLE_fontconfig=ON \
-DENABLE_freetype=ON \
-DENABLE_hdf5=ON \
-DENABLE_libxml2=ON \
-DENABLE_matplotlib=ON \
-DENABLE_mesa=OFF \
-DENABLE_mpi=ON \
-DENABLE_numpy=ON \
-DENABLE_osmesa=OFF \
-DENABLE_paraview=ON \
-DENABLE_png=ON \
-DENABLE_python=ON \
-DENABLE_qhull=ON \
-DENABLE_qt=ON \
-DENABLE_silo=ON \
-DENABLE_szip=ON \
-DENABLE_visitbridge=ON \
-DMPI_CXX_LIBRARIES:STRING="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_C_LIBRARIES:STRING="$MPI_HOME/lib/libmpi.so" \
-DMPI_LIBRARY:FILEPATH="$MPI_HOME/lib/libmpi_cxx.so" \
-DMPI_CXX_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DMPI_C_INCLUDE_PATH:STRING="$MPI_HOME/include" \
-DUSE_SYSTEM_mpi=ON \
-DUSE_SYSTEM_python=OFF \
-DUSE_SYSTEM_qt=OFF \
-DUSE_SYSTEM_zlib=OFF '

The goal is to be able to support batch rendering on the whole cluster (~96 nodes).

Also, do I need to set another environment variable in my ParaView module to make the Xlib warning go away?

[cbording@f100 Paraview]$ mpirun -n 12 pvbatch --use-offscreen-rendering parallelSphere.py
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".
Xlib:  extension "NV-GLX" missing on display "localhost:50.0".

Is this related to my not being able to run across multiple nodes?

R. Christopher Bording
Supercomputing Team-iVEC@UWA
E: cbord...@ivec.org
T: +61 8 6488 6905

26 Dick Perry Avenue,
Technology Park
Kensington, Western Australia 6151
_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at 
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at: 
http://paraview.org/Wiki/ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview
