I'm currently using Mesa 10.5.4, as I'd seen that mentioned somewhere on the web 
as a working config, so I will try something newer. Any reason to avoid the 10.6 
series?

I guess I should also try moving to Paraview 4.4 now :-)

--
David Trudgian Ph.D.
Computational Scientist, BioHPC
UT Southwestern Medical Center
Dallas, TX 75390-9039
Tel: (214) 648-4833

From: Ken Martin [mailto:ken.mar...@kitware.com]
Sent: Monday, September 14, 2015 2:29 PM
To: paraview@paraview.org
Subject: Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume

I am not sure about the rest of the email, but the error you included is 
typically caused by using too old a version of Mesa. I'm not sure about the 
specific driver, etc., but I usually suggest trying Mesa 10.5.5 or later to see 
if that solves the issue.

Thanks
Ken

Ken Martin PhD
Chairman & CFO
Kitware Inc.
28 Corporate Drive
Clifton Park NY 12065
ken.mar...@kitware.com
919 869-8871 (w)


This communication, including all attachments, contains confidential and 
legally privileged information, and it is intended only for the use of the 
addressee.  Access to this email by anyone else is unauthorized. If you are not 
the intended recipient, any disclosure, copying, distribution or any action 
taken in reliance on it is prohibited and may be unlawful. If you received this 
communication in error please notify us immediately and destroy the original 
message.  Thank you.

From: David Trudgian 
<david.trudg...@utsouthwestern.edu>
Date: Mon, Sep 14, 2015 at 1:35 PM
Subject: RE: [Paraview] Volume Rendering 17GB 8.5 billion cell volume
To: Aashish Chaudhary <aashish.chaudh...@kitware.com>
Cc: Berk Geveci <berk.gev...@kitware.com>, 
ParaView list <paraview@paraview.org>


I've now had the chance to build ParaView 4.3.1 with the OpenGL2 backend.

I've not been able to test on GPU-accelerated nodes (the cluster is too busy to 
get enough right now), but I have also built an OSMESA with OpenGL2 version. 
Unfortunately the symptoms are the same. With the largest 16GB VTI I see 
wireframe etc., but switching to volume rendering results in nothing visible in 
the client. There are no messages from the server or client, and memory usage 
is well below the RAM in the systems. Running across eight 256GB, 24-core nodes 
with 8 MPI tasks per node.
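(For concreteness, the launch described above looks roughly like this. This is 
a sketch only: the install path is a placeholder, and the exact mpirun options 
depend on the OpenMPI version and scheduler in use.)

```shell
# Hypothetical launch of the run described above:
# 8 nodes x 8 MPI tasks = 64 pvserver ranks, rendering offscreen via OSMESA.
mpirun -np 64 --npernode 8 \
    /path/to/paraview-osmesa/bin/pvserver --use-offscreen-rendering
```

The client then connects to rank 0 over the usual client/server connection 
(port 11111 by default).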

As with the OpenGL backend, if I drop down to our 9GB or 4GB downsampled VTIs, 
things work as expected, with great rendering performance running across the 
8-node allocation with OpenMPI and OSMESA.

I also came across another issue. After switching back to surface view on the 
16GB data, I had a crash shortly after I started manipulating the view with the 
mouse. I couldn't replicate this crash with the smaller datasets, though.

ERROR: In 
/home2/dtrudgian/paraview/ParaView-v4.3.1-source/VTK/Rendering/OpenGL2/vtkShaderProgram.cxx,
 line 292
vtkShaderProgram (0x6488a60): 0:22(12): warning: extension `GL_EXT_gpu_shader4' 
unsupported in fragment shader
0:130(12): error: `gl_PrimitiveID' undeclared
0:130(12): error: operands to arithmetic operators must be numeric
0:130(12): error: operands to arithmetic operators must be numeric
0:131(28): error: operator '%' is reserved in GLSL 1.10 (GLSL 1.30 or GLSL ES 
3.00 required)
0:131(22): error: cannot construct `float' from a non-numeric data type
0:131(22): error: operands to arithmetic operators must be numeric
0:131(17): error: cannot construct `vec4' from a non-numeric data type

Cheers,

--
David Trudgian Ph.D.
Computational Scientist, BioHPC
UT Southwestern Medical Center
Dallas, TX 75390-9039
Tel: (214) 648-4833
-----Original Message-----
From: David Trudgian
Sent: Thursday, September 10, 2015 10:13 AM
To: Aashish Chaudhary <aashish.chaudh...@kitware.com>
Cc: Berk Geveci <berk.gev...@kitware.com>; 
ParaView list <paraview@paraview.org>
Subject: RE: [Paraview] Volume Rendering 17GB 8.5 billion cell volume

Aashish,

(sorry - didn't hit reply-all first time)

> Would it be possible for you to try OpenGL2 backend?

Yes, I can try this, but probably next week. Do I just change 
VTK_RENDERING_BACKEND? Do you know if OSMESA has to be built with any 
particular flags itself?
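(For anyone following along: a sketch of the kind of configuration that was 
typical for Mesa ~10.5 / ParaView 4.3-era OSMESA builds. The prefixes and paths 
are placeholders, and the exact flags vary by Mesa and ParaView version, so 
treat this as a starting point rather than a recipe.)

```shell
# 1) Build Mesa with OSMESA using the Gallium software rasterizer.
./configure --prefix=/opt/mesa \
    --enable-osmesa \
    --with-gallium-drivers=swrast \
    --disable-dri --disable-egl --disable-glx
make -j8 && make install

# 2) Configure ParaView against OSMESA with the OpenGL2 backend.
cmake ../ParaView-v4.3.1-source \
    -DVTK_RENDERING_BACKEND=OpenGL2 \
    -DPARAVIEW_BUILD_QT_GUI=OFF \
    -DVTK_USE_X=OFF \
    -DVTK_OPENGL_HAS_OSMESA=ON \
    -DOSMESA_INCLUDE_DIR=/opt/mesa/include \
    -DOSMESA_LIBRARY=/opt/mesa/lib/libOSMesa.so
```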

Thanks,

DT


________________________________________
From: Aashish Chaudhary [aashish.chaudh...@kitware.com]
Sent: Thursday, September 10, 2015 9:59 AM
To: David Trudgian
Cc: Berk Geveci; ParaView list
Subject: Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume

Thanks Dave. I haven't looked at your email in detail (will do in a moment), 
but another thought is that we may be hitting some limit on the indices 
(MAX_INT or MAX_<TYPE>) used when dealing with a very large dataset such as 
yours.
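(As a back-of-envelope check of that hypothesis, using the stack dimensions 
quoted later in this thread: the cell count does indeed overflow a 32-bit 
signed index.)

```shell
# A 5368 x 10695 x 150 point grid has one fewer cell than points along
# each axis; compare the cell count to the 32-bit signed integer maximum.
cells=$(( 5367 * 10694 * 149 ))
max_int=$(( 2**31 - 1 ))
echo "cells=$cells, max_int=$max_int"
if [ "$cells" -gt "$max_int" ]; then
    echo "cell count exceeds 32-bit signed index range"
fi
```

That comes to 8,551,810,002 cells, the "8.5 billion" figure in the subject 
line, roughly 4x what a 32-bit signed index can address.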

Would it be possible for you to try OpenGL2 backend?

- Aashish

On Thu, Sep 10, 2015 at 10:55 AM, David Trudgian 
<david.trudg...@utsouthwestern.edu> wrote:
Berk (and others), thanks for your replies!

> This is pretty awesome. I am assuming that this has something to do
> with things not fitting on the GPU memory or exceeding some texture
> memory limitation. Can you provide some more details?

Sure - thanks for your help.

> * Which version of ParaView are you using?

This is with Paraview 4.3.1

> * It sounds like you have multiple GPUs and multiple nodes. What is
> the setup? Are you running in parallel with MPI?

I have tried two setups, both using MPI (OpenMPI 1.8.3 on an InfiniBand FDR
network):

Setup 1) ParaView 4.3.1 pvserver running with MPI across multiple cluster 
nodes, up to 4 in total, each with a single Tesla K20 GPU (5GB). I have used 
various numbers of MPI tasks. The machines have 16 physical cores (32 logical 
cores with hyper-threading on) and 256GB RAM.

... when this didn't work we did suspect we were running out of GPU memory. 
Since we have a limited number of GPU nodes, we decided to try the CPU 
approach...

Setup 2) ParaView 4.3.1 rebuilt with OSMESA support, to run pvserver on a 
larger number of cluster nodes without any GPUs. These are 16- or 24-core 
machines with 128/256/384GB RAM. I tried various numbers of nodes (up to 16) 
and MPI tasks per node, allowing for OSMESA threading per the docs/graphs on 
the ParaView wiki page.

Watching the pvserver processes when running across 16 nodes, I wasn't seeing 
more than ~2GB of RAM usage per process. With 8 tasks per node at ~2GB each, 
that is well under the minimum of 128GB of RAM per node.

> * If you are running parallel with MPI and you have multiple GPUs per
> node, did you setup the DISPLAYs to leverage the GPUs?

As above, only one GPU per node, or none once we switched to the OSMESA 
approach to try across more nodes.

As mentioned before, we can view a smaller version of the data without issue on 
both GPU and OSMESA setups. I just opened a 4GB version (approx 25% of full 
size) using the OSMESA setup on a single node (8 MPI tasks) without issue. The 
responsiveness is really great - but the 16GB file is a no-go even scaling up 
across 16 nodes. The VTI itself seems fine, as slices and surface look as 
expected.

Thanks again for any and all suggestions!

DT

> On Wed, Sep 9, 2015 at 5:00 PM, David Trudgian <
> david.trudg...@utsouthwestern.edu> wrote:
>
>
> > Hi,
> >
> > We have been experimenting with using ParaView to display very large
> > volumes from TIFF stacks generated by whole-brain microscopy
> > equipment. The test stack has dimensions of 5,368x10,695x150. The
> > stack is assembled in ImageJ from individual TIFFs, exported as RAW,
> > loaded into ParaView, and saved as a .vti for convenience. We can view
> > slices fine in the standalone ParaView client on a 256GB machine.
> >
> > When we attempt volume rendering on this data across multiple nodes
> > with MPI nothing appears in the client. Surface view works as
> > expected. On switching to volume rendering the client's display will
> > show nothing. There are no messages from the client or servers - no
> > output.
> >
> > This is happening when running pvserver across GPU nodes with NVIDIA
> > Tesla cards, or using CPU only with OSMESA. pvserver memory usage is
> > well below what we have on the nodes - no memory warnings/errors.
> >
> > Data is about 17GB, 8 billion cells. If we downsize to ~4GB or ~9GB
> > then we can get working volume rendering. The 17GB never works
> > regardless of scaling nodes/mpi processes. The 4/9GB will work on 1
> > or 2 nodes.
> >
> > I am confused by the lack of rendering, as we don't have memory
> > issues or any other messages at all. I am wondering if there is some
> > inherent limitation, or if I'm missing something stupid.
> >
> > Thanks,
> >
> > Dave Trudgian
> >
> >
> > _______________________________________________
> > Powered by www.kitware.com
> >
> > Visit other Kitware open-source projects at
> > http://www.kitware.com/opensource/opensource.html
> >
> > Please keep messages on-topic and check the ParaView Wiki at:
> > http://paraview.org/Wiki/ParaView
> >
> > Search the list archives at: http://markmail.org/search/?q=ParaView
> >
> > Follow this link to subscribe/unsubscribe:
> > http://public.kitware.com/mailman/listinfo/paraview
> >

--
David Trudgian Ph.D.
Computational Scientist, BioHPC
UT Southwestern Medical Center
Dallas, TX 75390-9039
Tel: (214) 648-4833





--
| Aashish Chaudhary
| Technical Leader
| Kitware Inc.
| http://www.kitware.com/company/team/chaudhary.html

________________________________

UT Southwestern Medical Center
The future of medicine, today.

