Re: [Paraview] Performance of the CGNS Reader

2013-08-30 Thread Richard GRENON

Thank you for this answer, Mickael.

My 1.36 GB CGNS dataset is built from structured meshes, so ParaView 
should not 'eat' too much memory. I have checked that enabling 
multi-core in ParaView does not change anything: PV still needs about 
15 minutes to load my dataset, the same loading time as without multi-core.


I think that the loading time ratio of 15 min to 1 min for PV against Tecplot 
remains too high, even if PV parses the file twice. Even if Tecplot takes 
advantage of multi-core (I don't know whether it does), the loading time ratio 
between PV and Tecplot should not exceed 8 when using 4 CPUs: two parsing 
passes times four cores accounts for a factor of 2 x 4 = 8 at most. A ratio of 
15, which means loading times of 15 minutes or more for larger datasets, is 
unacceptable for interactive work. So PV is unusable for large CGNS 
datasets unless it is run in batch mode. I think that an effort to redesign 
the CGNS reader would be welcome.


Best regards.

Richard

On 29/08/2013 20:55, Mickael Philit wrote:


Hello,

First, the CGNS reader coming through the VisItBridge does not work 
in parallel; it is a plain serial reader.
Second, there are limitations in the way the current CGNS reader works:
 - At the beginning it parses the whole file (this takes a lot of 
time) to get variable names, blocks and so on, before actually reading 
the data. [I think that Tecplot is cleaner because it seems to read 
the whole CGNS file in one pass.]
 - Meshes are read into a temporary array and then converted to a VTK 
vector of coordinates (hence extra memory manipulation).
 - For unstructured meshes, converting the cell connectivity from 
'integer' to 'long' eats memory.
The CGNS reader can be improved, but at the cost of redesigning some parts 
to fit better into ParaView and to support parallel reading.
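
To illustrate the coordinate-copy point above, here is a rough sketch (not 
the actual reader code, and the zone dimensions are made up) of what a 
structured-zone reader has to do, in Python/VTK terms: CGNS stores X, Y and Z 
as separate arrays, so they are first read into temporary buffers and then 
interleaved into a single VTK points array, which is the extra memory 
manipulation mentioned above.

import numpy as np
import vtk
from vtk.util import numpy_support

# Pretend these were just read from the CGNS file (CoordinateX/Y/Z are
# stored as separate arrays in a CGNS zone); the dimensions are made up.
ni, nj, nk = 128, 64, 32
npts = ni * nj * nk
x = np.random.rand(npts)   # temporary buffer for X
y = np.random.rand(npts)   # temporary buffer for Y
z = np.random.rand(npts)   # temporary buffer for Z

# The reader then has to interleave them into one (N, 3) array for vtkPoints:
# a second pass over all the coordinates, i.e. the extra memory traffic.
xyz = np.empty((npts, 3))
xyz[:, 0], xyz[:, 1], xyz[:, 2] = x, y, z

points = vtk.vtkPoints()
points.SetData(numpy_support.numpy_to_vtk(xyz, deep=True))

grid = vtk.vtkStructuredGrid()
grid.SetDimensions(ni, nj, nk)
grid.SetPoints(points)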


Mickael


On 29/08/2013 16:50, Angelini, Richard C (Rick) CIV USARMY ARL (US) 
wrote:
As a follow-up that may be related: does the CGNS reader 
through the VisItBridge work in parallel? I've loaded a couple 
of different CGNS datasets and then applied the ProcessIdScalars 
filter, and it doesn't appear to be distributing the data, even 
for multi-block CGNS files.
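
For reference, a minimal pvpython sketch along these lines should show 
whether pieces actually end up on different ranks (the file path is a 
placeholder; it assumes the client is connected to a parallel pvserver):

from paraview.simple import *

# Placeholder path -- substitute a real CGNS file.
reader = OpenDataFile('/path/to/dataset.cgns')

# Tag every point with the MPI rank that owns it.
pid = ProcessIdScalars(Input=reader)
pid.UpdatePipeline()

# If the data is distributed, the range spans 0 .. (number of ranks - 1);
# if everything sits on rank 0, the range stays at (0, 0).
print('ProcessId range:', pid.PointData['ProcessId'].GetRange())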




Rick Angelini

USArmy Research Laboratory
CISD/HPC Architectures Team
Building 120 Cube 315
Phone:  410-278-6266


From: paraview-boun...@paraview.org [paraview-boun...@paraview.org] 
on behalf of Richard GRENON [richard.gre...@onera.fr]

Sent: Thursday, August 29, 2013 10:38 AM
To: paraview@paraview.org
Subject: [Paraview]  Performance of the CGNS Reader

Hello.

I am testing the CGNS reader of ParaView 4.0.1 (64-bit) running on a
Linux workstation with 4 CPUs and 5.8 GB of memory. ParaView was
installed from the binaries available on the download page.

I am trying to load a 1.36 GB CGNS file that is accessed over the network.

While loading this file, the ParaView window is frozen and cannot be
refreshed, and I must check with the ps command in a terminal window,
or with a system monitor, whether PV is still running or really
frozen. A progress bar for all readers would be a welcome addition in a
future release.


Finally, the file does load, but it always takes about 15 minutes (plus or
minus 1 minute depending on the network load), while Tecplot always loads
the same file in less than 1 minute!

How do you explain this poor performance of the CGNS reader? Can it be
improved, or am I missing something? Is there a ParaView option that
could reduce the loading time of large files?
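
For what it is worth, the load time can also be measured outside the GUI with 
a small pvpython script like the sketch below (the file name is a placeholder); 
running it once against the networked file and once against a local copy would 
also show how much of the 15 minutes is network latency rather than reader time:

import time
from paraview.simple import *

# Placeholder path -- point this at the CGNS file (try a local copy too).
reader = OpenDataFile('/path/to/dataset.cgns')

start = time.time()
reader.UpdatePipeline()   # forces the actual read
print('Load time: %.1f s' % (time.time() - start))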

Best regards

--
   Richard GRENON
   ONERA
   Departement d'Aerodynamique Appliquee - DAAP/ACI
   8 rue des Vertugadins
   92190 MEUDON - FRANCE
   phone : +33 1 46 73 42 17
   fax   : +33 1 46 73 41 46
   mailto:richard.gre...@onera.fr
   http://www.onera.fr







--
 Richard GRENON
 ONERA
 Departement d'Aerodynamique Appliquee - DAAP/ACI
 8 rue des Vertugadins
 92190 MEUDON - FRANCE
 phone : +33 1 46 73 42 17
 fax   : +33 1 46 73 41 46
 mailto:richard.gre...@onera.fr
 http://www.onera.fr


[Paraview] pvserver with POE

2013-08-30 Thread Marc Rugeri
Hi,

I am having some difficulty using pvserver under IBM's native
Parallel Operating Environment (POE).
It looks like pvserver fails to open the server socket to which
ParaView clients are expected to connect.

$ poe pvserver -procs 2

produces the following error message:
Waiting for client...
Connection URL: cs://ada258:1
Waiting for client...
Connection URL: cs://ada258:1
Accepting connection(s): ada258:1
ERROR: In /smplocal/src/pub/Paraview/3.9.8-par/src/VTK/Common/System/vtkSocket.cxx, line 206
vtkServerSocket (0x127b170): Socket error in call to bind. Address already in use.
ERROR: In /smplocal/src/pub/Paraview/3.9.8-par/src/ParaViewCore/ClientServerCore/Core/vtkTCPNetworkAccessManager.cxx, line 354
vtkTCPNetworkAccessManager (0x11eba20): Failed to set up server socket.

Exiting...

It works fine with mpirun (Intel MPI Library):
 
$ mpirun -np 2 pvserver 
Waiting for client...
Connection URL: cs://ada337:1
Accepting connection(s): ada337:1

Any ideas?
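
As a quick sanity check (not a fix), one can verify from Python whether 
anything is already listening on the pvserver port before launching; 11111 is 
ParaView's default server port, so adjust it if pvserver is started with a 
different --server-port. If the port turns out to be free beforehand, the 
"Address already in use" error would suggest that each POE-launched process is 
trying to bind the port itself, i.e. the processes are not seeing each other 
as one MPI job.

import socket

# ParaView's default server port is 11111; change this if pvserver is
# started with --server-port=<something else>.
PORT = 11111

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(('', PORT))
    print('Port %d is free' % PORT)
except socket.error as err:
    print('Port %d is already in use: %s' % (PORT, err))
finally:
    s.close()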


Re: [Paraview] Building on Titan using ParaViewSuperbuild

2013-08-30 Thread Benson Muite
Hi,

Can you let me know whether it was on BG/P or BG/Q that Catalyst was built?

Thanks,
Benson

On 30/08/2013 11:24, paraview-requ...@paraview.org wrote:
1. Re: Building on Titan using ParaViewSuperbuild (David E DeMarle)


 --

 Message: 1
 Date: Thu, 29 Aug 2013 16:08:13 -0400
 From: David E DeMarle dave.dema...@kitware.com
 Subject: Re: [Paraview] Building on Titan using ParaViewSuperbuild
 To: Vanmoer, Mark W mvanm...@illinois.edu
 Cc: paraview@paraview.org paraview@paraview.org
 Message-ID:
   canjzai-1k+bflglevpeqdau+wpt-aqku0azddce4kcd-rot...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 On Thu, Aug 29, 2013 at 3:51 PM, Vanmoer, Mark W mvanm...@illinois.edu wrote:

  So coprocessing will not be built using the below instructions? I would
 have mentioned that, but coprocessing appears to still be part of a regular,
 non-cross-compile build, so I figured it was part of ENABLE_paraview.

 The coprocessing plugin, which adds things to the GUI to make it easy to
 record coprocessing pipeline setups, doesn't need to be turned on since that
 lives in the client only. (It is like Python trace or state recording, but
 tailored to recording in-situ setups.)

 Catalyst (the stripped-down version of the ParaView server that a simulation
 code can link to and use to run those recorded pipelines quickly) is not
 yet an option in ParaViewSuperbuild. To cross compile Catalyst, a bit more
 work will be required. It will follow the same plan as how the ParaView
 server is compiled, but I just haven't tried it. When I cross compiled
 Catalyst last year at this time, I did the same steps that
 ParaViewSuperbuild's TOOLS and CROSS build passes do, just by hand.

 Also, for the below configcross.sh, do we need to pass in a CMake variable
 telling it where the tools build dir is located?


 That should be an option that you can easily set, but it isn't, sorry.

  CMake/CrossCompilationMacros.cmake assumes it can find it one directory up
 and over like so:
 macro(find_hosttools)
   set(PARAVIEW_HOSTTOOLS_DIR
     ${CMAKE_BINARY_DIR}/../tools/paraview/src/paraview-build/ CACHE PATH
     "Location of host built paraview compile tools directory")
   set(PYTHON_HOST_EXE
     ${CMAKE_BINARY_DIR}/../tools/install/bin/python CACHE PATH
     "Location of host built python executable")
   set(PYTHON_HOST_LIBDIR
     ${CMAKE_BINARY_DIR}/../tools/install/lib CACHE PATH
     "Location of host built python libraries")
   set(BOOST_HOST_INCLUDEDIR
     ${CMAKE_BINARY_DIR}/../tools/install/include CACHE PATH
     "Location of host built boost headers")
 endmacro()

 You could predefine all four of those if you like.


 Thanks,

 Mark

 From: David E DeMarle [mailto:dave.dema...@kitware.com]
 Sent: Thursday, August 29, 2013 1:41 PM
 To: Hong Yi
 Cc: Vanmoer, Mark W; paraview@paraview.org
 Subject: Re: [Paraview] Building on Titan using ParaViewSuperbuild


 On Thu, Aug 29, 2013 at 2:13 PM, Hong Yi hon...@renci.org wrote:

 Hi David,

 I just started to try superbuild on Titan also. I don't see you set
 ENABLE_MPI to be true in your configure script. Could you confirm whether
 ENABLE_MPI needs to be set to TRUE in order for ParaView to run on Titan in
 parallel? Since my purpose is to link our


 The ENABLE_MPI flag at the Superbuild level is unrelated. It has a purpose
 only when CROSS_BUILD_STAGE=HOST, that is, when making ParaView binary
 installers for desktops from the Superbuild.


 You shouldn't turn it on in the TOOLS or CROSS stages. Instead, let the
 CROSS stage use the system-installed MPI. It does that by setting
 PARAVIEW_USE_MPI=ON when it configures the ParaView sub-build. Look at
 CMake/crosscompile/xk7_gnu to see where it does that and what other
 flags it uses.

  

  simulation code (already built statically with CMake on Titan) to
 ParaView CoProcessing libraries (I am using version 3.98.1) for in-situ
 visualization on Titan, so in this case, do I have to set ENABLE_paraview
 to true and do I need to enable OSMesa for ParaView to resort to off-screen
 rendering for in-situ visualization?  


 The CROSS stage turns on Python, Mesa and ParaView. Titan's accelerators
 don't really run X11, so Mesa is the only option for rendering there.


  Although I can build ParaView from source on Titan login nodes, I am not
 able to run it on compute nodes, so I am starting to try superbuild hoping
 to be able to cross build ParaView libraries to run in-situ visualization
 on Titan.

  I've cross compiled Catalyst itself before on a BlueGene. I did it
 manually before SuperBuild existed. I will see if I can dig up my config
 scripts. Cross compiling Catalyst should be more or less the same thing as
 cross compiling ParaView, but a bit faster and easier because there is less
 code involved.

  Thanks,
 Hong
  --

 

[Paraview] Volume Rendering Crash

2013-08-30 Thread Greer, Cody
Dear Paraview Community,

I am volume rendering a 2560 x 2160 x 200 image stack that totals 2 GB in size. 
The stack loads fine, but ParaView consistently crashes when I try to render the 
whole volume. The data is represented as a uniform grid. I don't think this 
is a memory issue: I have monitored memory usage during crashes, and the system 
always has at least 29 GB of free memory when it crashes. The behavior is the 
same regardless of which rendering algorithm I choose (smart, ray cast, texture 
mapping, GPU). I have tried loading the data in both RAW and NRRD format with 
no luck. I am using the 64-bit Linux binary installation from the ParaView 
website. My desktop is running CentOS.

I'd appreciate any advice you might have for me.
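
In case it helps to reproduce this outside the GUI, here is a minimal pvpython 
sketch of the same setup; the file prefix, scalar type and byte order are 
assumptions that would need to match the actual RAW stack:

from paraview.simple import *

# Assumed RAW stack parameters -- adjust the prefix, scalar type and byte
# order to match the real data.
reader = ImageReader(FilePrefix='/path/to/stack.raw')
reader.DataExtent = [0, 2559, 0, 2159, 0, 199]
reader.DataScalarType = 'unsigned char'
reader.DataByteOrder = 'LittleEndian'

view = CreateRenderView()
display = Show(reader, view)
display.Representation = 'Volume'   # switch from surface/outline to volume rendering
Render(view)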

Thanks,
Cody


Re: [Paraview] Building on Titan using ParaViewSuperbuild

2013-08-30 Thread Hong Yi
I tried to follow the instructions and the configuration scripts to build 
ParaView for the compute nodes on Titan. It built successfully without issues 
for the TOOLS stage, but when doing the final linking for paraview in the 
CROSS stage, I got numerous similar errors on different lines, such as the 
following:


/../ParaView/ParaViewSuperbuild/cross/paraview/src/paraview/Utilities/mpi4py/src/mpi4py.MPI.c:
 In function '__pyx_pf_6mpi4py_3MPI_4File_54Sync':
/../ParaView/ParaViewSuperbuild/cross/paraview/src/paraview/Utilities/mpi4py/src/mpi4py.MPI.c:89682:7:
 error: '_save' undeclared (first use in this function)


The same error message was raised from the same file, mpi4py.MPI.c, at 
different lines.

I am using CMake version 2.8.10.2, which is provided by Titan, and the 
ParaView 3.98 source tree with one additional filter I have developed.

Any idea what could cause these errors?

Thanks,
Hong



Re: [Paraview] Building on Titan using ParaViewSuperbuild

2013-08-30 Thread Vanmoer, Mark W
Hi Hong,
I was able to get David's instructions to work using CMake 2.8.11.2 and 
ParaView 4.0.1. The build process seems to be sensitive to versions.
Mark


Re: [Paraview] Building on Titan using ParaViewSuperbuild

2013-08-30 Thread Vanmoer, Mark W
Looking more closely, it seems like ParaViewSuperbuild does build Catalyst; at 
least the libs are getting built:

vanmoer@titan-ext3:~/builds/superbuild/cross/paraview/src/paraview-build/lib 
ls *Catalyst*
libvtkPVCatalystCS-pv4.0.a libvtkPVCatalystPython-pv4.0.a 
libvtkPVPythonCatalystPython-pv4.0.a
libvtkPVCatalyst-pv4.0.a   libvtkPVPythonCatalyst-pv4.0.a
libvtkPVCatalystPython27D-pv4.0.a  libvtkPVPythonCatalystPythonD-pv4.0.a

I was able to compile by hunting down all the headers, but not link.

I tried adding -DPARAVIEW_INSTALL_DEVELOPMENT_FILES:BOOL=TRUE to configcross.sh 
but I get

CMake Warning:
  Manually-specified variables were not used by the project:

PARAVIEW_INSTALL_DEVELOPMENT_FILES

Is this because there's no paraviewsdk.cmake in ParaViewSuperbuild/Projects?



Re: [Paraview] Building on Titan using ParaViewSuperbuild

2013-08-30 Thread Hong Yi
Thanks for the info, Mark. It looks like it makes sense for me to try the 
newer CMake 2.8.11.2 and see how it goes.

On a somewhat related question: I am trying to pass in some CMake flags to make 
it build my new filter plugin as well as the FortranAdaptor for coprocessing 
(yes, I also discovered that coprocessing/Catalyst is built by default by the 
superbuild, but the FortranAdaptor is turned off by default). I tried to pass 
them in by adding the corresponding -D options to configuretools, but got a 
CMake warning as well, indicating that those manually-specified variables were 
not used. I am wondering whether I can do it by directly changing 
CMakeCache.txt under paraview/src/paraview-build to force the corresponding 
flags on...

Thanks,
Hong


[Paraview] viewing Gadget2 simulations

2013-08-30 Thread Tim Haines

Greetings, all.

I am a PhD student in astrophysics at the University of 
Wisconsin-Madison working on dynamics simulations with Gadget2. To test 
my setup, I ran the galaxy collision simulation included with the 
Gadget2 distribution and generated a sequence of snapshots. I am trying 
to view these snapshots in ParaView (3.98.0-enhanced, 64-bit), but I 
cannot see the position vector data. The object inspector only shows the 
ghost, mass, tag, and velocity fields. Using the snapshot extraction 
codes provided with the Gadget2 distribution, I was able to verify that the 
position data is indeed present in the snapshot files.


Has anyone had this problem before?

I have searched through the mailing list archives, read through the 
user's manual, and read over the 2011 ApJ paper, all to no avail.
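
One hedged guess, in case it applies: in VTK-based readers the particle 
positions usually become the point coordinates themselves rather than a named 
point-data array, so they would not appear in the field list alongside mass, 
tag and velocity. If that is what is happening here, a pvpython sketch like 
the following (the snapshot path is a placeholder) exposes the coordinates as 
an ordinary vector array via the Calculator filter:

from paraview.simple import *

# Placeholder -- open a Gadget2 snapshot however it is normally loaded.
snapshot = OpenDataFile('/path/to/snapshot_000')

# Expose the point coordinates as a vector array named 'position'.
calc = Calculator(Input=snapshot)
calc.ResultArrayName = 'position'
calc.Function = 'coordsX*iHat + coordsY*jHat + coordsZ*kHat'
calc.UpdatePipeline()

# Quick check: range of the X component of the new array.
print(calc.PointData['position'].GetRange(0))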


Many thanks for your help in advance.

- Tim