Just out of curiosity, can you compile ParaView without MPI or use the ParaView binaries from paraview.org? I am asking this because we have been tracking down a nasty rendering performance issue that only shows up when ParaView is linked against MPI.
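For reference, a minimal configure sketch for a no-MPI build of a 3.x-era ParaView tree (the `PARAVIEW_USE_MPI` and `PARAVIEW_BUILD_QT_GUI` option names are taken from the 3.x CMake build system; the source path is a placeholder, and exact options may differ slightly between versions):

```shell
# Configure ParaView with MPI disabled (placeholder source path ../ParaView):
cmake -DPARAVIEW_USE_MPI=OFF \
      -DPARAVIEW_BUILD_QT_GUI=ON \
      ../ParaView
make -j4
```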
-berk

On Tue, Apr 10, 2012 at 11:32 AM, Albina, Frank <frank.alb...@sauber-motorsport.com> wrote:

> Dear ParaView users & developers,
>
> I am trying to reduce the time spent performing generic 3D surface
> rendering. A lot has been achieved already by driving ParaView with
> Python scripting, but I would like to achieve more. At the moment, all
> rendering is performed on the CPU with the Mesa libraries, and I have
> started investigating whether it would be worth performing the
> rendering on GPUs instead of on N CPUs in parallel.
>
> To compare GPU performance against CPUs, I have devised a simple
> benchmark: a Python script that drives the surface rendering, generates
> 60 images, and dumps them to disk in JPEG format.
> Running this task with different versions of ParaView on an "old"
> workstation, which is my guinea pig for this benchmark, I measured the
> following rendering times for the generation of the aforementioned 60
> images:
>
> ParaView 3.8.1:  138.12 s
> ParaView 3.10.1: 591.06 s
> ParaView 3.12.0: 592.53 s
> ParaView 3.14.1: 594.10 s
>
> What is striking is that the rendering time is four times lower(!) with
> PV 3.8.1 than with all subsequent versions. I had already noticed
> something similar when running Mesa on dissimilar architectures, but I
> assumed that the culprit was the Mesa libraries used. Here, the hardware
> and libraries are the same, so I am inclined to believe that I am
> missing something in the general rendering settings which does not
> affect PV 3.8.1 but induces a big performance hit for all PV versions
> from 3.10 on.
> Is anybody aware of rendering settings which could induce such a
> performance difference?
>
> BTW, for each rendering a window opens with the OpenGL tag in the
> window title bar, so I am quite sure that I am not using any software
> rendering, all the more so as all the PV versions I compiled have
> VTK_OPENGL_HAS_OSMESA set to OFF.
>
> A few more details concerning the test I have been running:
>
> - Workstation: Linux workstation running SuSE SLED 10
>   2 x Intel Xeon dual-core 5160 @ 3.00 GHz
>   2 x NVIDIA Quadro FX 3500 (NV71GL chipset)
>
> - ParaView versions 3.8.1, 3.10.1, 3.12.0 and 3.14.1 were compiled with
>   OpenGL support, Qt 4.6.x, Python 2.7 and OpenMPI 1.4.x using the GCC
>   compiler 4.5.x.
>
> - The script is run using pvpython (and not pvbatch) in order to force
>   the assignment of the graphics card:
>
>   pvserver -display localhost:0.0
>
>   The rendering script is then run with pvpython from the command line.
>   Within the script, a Connect("localhost", 11111) forces the
>   connection to the pvserver running on localhost.
>
> Any suggestions welcome.
>
> Best regards,
>
> Frank Albina
>
> _______________________________________________
> Powered by www.kitware.com
>
> Visit other Kitware open-source projects at
> http://www.kitware.com/opensource/opensource.html
>
> Please keep messages on-topic and check the ParaView Wiki at:
> http://paraview.org/Wiki/ParaView
>
> Follow this link to subscribe/unsubscribe:
> http://www.paraview.org/mailman/listinfo/paraview
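For comparing rendering times across versions, the timing loop of such a benchmark can be isolated into a small harness. A minimal, self-contained sketch, with assumptions labeled: `render_frame` and `n_frames` are placeholder names, and the stub callable stands in for the real per-image work, which in the actual pvpython script would be a paraview.simple render-and-save call issued after the Connect("localhost", 11111) described above. `time.time()` is used because it also works on the Python 2.7 builds mentioned in the thread.

```python
import time

def time_renders(render_frame, n_frames=60):
    """Call render_frame(i) once per frame and return the elapsed seconds.

    render_frame is a placeholder for whatever renders and saves one
    image; in a pvpython session it would wrap the ParaView call that
    writes each JPEG to disk.
    """
    start = time.time()
    for i in range(n_frames):
        render_frame(i)
    return time.time() - start

# Stub render call, so the harness runs outside of ParaView:
frames = []
elapsed = time_renders(lambda i: frames.append(i), n_frames=60)
print("rendered %d frames in %.2f s" % (len(frames), elapsed))
```

Printing one total per ParaView version, as in the table above, keeps the comparison to a single number per run.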