Hi Ganesh,
Signal 9 (SIGKILL) usually means your job was killed by the system. Is it
possible that you have exhausted the available RAM or hit some limit
imposed by your batch system?
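One quick way to check: print the limits from inside the job itself and,
if your scheduler supports it, request more memory explicitly. This is
only a rough sketch assuming a PBS-style scheduler; the directive and the
numbers are placeholders for whatever your site's batch system expects:

    #!/bin/sh
    # placeholder PBS directive asking for 16 GB; adjust to your scheduler
    #PBS -l select=1:ncpus=1:mem=16gb

    ulimit -a      # print the limits the batch system applied to the job
    free -m        # print how much RAM the node actually has
    mpirun -np 1 pvbatch --use-offscreen-rendering saveImage.py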
Burlen
On 08/08/2012 01:15 PM, Ganesh Vijayakumar wrote:
Hi!
I recently installed ParaView 3.12 with offscreen Mesa on an SGI
cluster with Intel compilers and SGI MPT. Using the same version of
ParaView on my local computer, I recorded a script in the Qt version
using Python trace. I was able to execute that script just fine on the
cluster, on a similar but larger case, using pvpython. However, I'm
unable to use pvbatch. First off, the cluster forces me to launch
pvbatch with mpirun, saying that all MPI applications must be started
with mpirun. Even when I do this:
mpirun -np 1 pvbatch --use-offscreen-rendering saveImage.py
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 9
a terminated job is all I get. Is something wrong with the way I'm using
pvbatch? I'll be working on datasets of over 20 million cells, and I
hope to use multiple processors if that will help speed up
visualization. Please note that I'm not running a separate server; I
just intend to submit the visualization as a job on the same cluster
where I run the simulation.
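For reference, this is roughly how I intend to submit it once it works.
This is only a sketch assuming a PBS-style scheduler; the resource
request, walltime, and rank count are placeholders:

    #!/bin/sh
    #PBS -l select=1:ncpus=8
    #PBS -l walltime=01:00:00

    cd $PBS_O_WORKDIR
    # run pvbatch in parallel; every MPI rank renders offscreen
    mpirun -np 8 pvbatch --use-offscreen-rendering saveImage.py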
ganesh
_______________________________________________
Powered by www.kitware.com
Visit other Kitware open-source projects at
http://www.kitware.com/opensource/opensource.html
Please keep messages on-topic and check the ParaView Wiki at:
http://paraview.org/Wiki/ParaView
Follow this link to subscribe/unsubscribe:
http://www.paraview.org/mailman/listinfo/paraview