Yeah, it is not that easy. If you compile Mesa with llvmpipe, the limit is in
something like gallium/drivers/llvmpipe/lp_limits.h; otherwise changing
config.h has no effect. However, I got crashes when I increased that limit to
something like 8 GB and then volume rendered something large. It also looks
like other drivers have their own limits, which are pretty small. I will talk
to some folks doing work on Mesa about this. Hopefully we can address it in
the upcoming OpenSWR driver. We'll have to support streaming for the other
drivers, though...
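For scale, the texture sizes being discussed can be sanity-checked with plain arithmetic. The sketch below is illustrative Python, not Mesa or ParaView code; `texture_mbytes` and `fits` are hypothetical helper names, and the 1024 MB default cap is the MAX_TEXTURE_MBYTES value discussed later in the thread.

```python
# Rough sanity check (not Mesa code): estimate the footprint of a dense 3D
# texture and compare it against a driver's texture-memory cap. The 1024
# default mirrors Mesa's MAX_TEXTURE_MBYTES; helper names are hypothetical.

def texture_mbytes(dims, bytes_per_voxel):
    """Size in MBytes (2**20 bytes) of a dense 3D texture."""
    nx, ny, nz = dims
    return nx * ny * nz * bytes_per_voxel / 2**20

def fits(dims, bytes_per_voxel, cap_mbytes=1024):
    """Does the texture stay under the driver's cap?"""
    return texture_mbytes(dims, bytes_per_voxel) <= cap_mbytes

# 1400^3 floats is ~10.5 GB: it fits on a 12 GB Tesla but is far over a
# 1024 MB software-rendering default, while 1000^3 bytes squeaks under it.
print(round(texture_mbytes((1400, 1400, 1400), 4)))  # 10468
print(fits((1400, 1400, 1400), 4))                   # False
print(fits((1000, 1000, 1000), 1))                   # True (~954 MB)
```

This is also why simply raising the cap is not a complete fix: the texture must still fit in whatever memory the driver can actually allocate.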
-berk

On Tue, Oct 20, 2015 at 5:55 PM, Aashish Chaudhary <aashish.chaudh...@kitware.com> wrote:

> Here it is. It would be great if someone else tried it as well:
>
> ----Steps----
> 1. In src/mesa/main/config.h there should be a MAX_TEXTURE_MBYTES define. I
>    believe by default it is 1024 MBytes. Please change it to 4096 or
>    something higher.
>
> 2. Then compile and install Mesa again (do not forget to set
>    MESA_GL_VERSION_OVERRIDE to 3.2).
>
> 3. Compile ParaView (server) again.
>
> On Tue, Oct 20, 2015 at 5:48 PM, Aashish Chaudhary <aashish.chaudh...@kitware.com> wrote:
>
>> Berk,
>>
>> On Tue, Oct 20, 2015 at 4:00 PM, Berk Geveci <berk.gev...@kitware.com> wrote:
>>
>>> Hi folks,
>>>
>>> I wanted to close the loop on this. Here are my findings:
>>>
>>> * ParaView master (4.4 should also do) + OpenGL2 + NVIDIA Tesla with 12 GB
>>> memory: I verified that I can volume render data up to the capacity of the
>>> card. I could volume render a 1400x1400x1400 volume of floats.
>>>
>>> * ParaView master (4.4 should also do) + OpenGL2 + Mesa (OSMesa 11,
>>> llvmpipe, swrast): Mesa has some fairly small limits on 3D texture size,
>>> which is what we use for volume rendering. So ~1000x1000x1000 will be the
>>> upper end of what can be done for now. In time, we will implement multiple
>>> textures / streaming to enable rendering of larger volumes.
>>
>> Did you see my other email? You can change the default for OSMesa. I sent
>> it last week.
>>
>> - Aashish
>>
>>> Best,
>>> -berk
>>>
>>> On Mon, Sep 28, 2015 at 11:00 AM, David Trudgian <david.trudg...@utsouthwestern.edu> wrote:
>>>
>>>> Berk,
>>>>
>>>> Thanks very much for looking into this. Looking forward to trying things
>>>> out whenever they're ready.
>>>>
>>>> DT
>>>>
>>>> --
>>>> David Trudgian Ph.D.
>>>> Computational Scientist, BioHPC
>>>> UT Southwestern Medical Center
>>>> Dallas, TX 75390-9039
>>>> Tel: (214) 648-4833
>>>>
>>>> *From:* Berk Geveci [mailto:berk.gev...@kitware.com]
>>>> *Sent:* Monday, September 28, 2015 9:58 AM
>>>> *To:* David Trudgian <david.trudg...@utsouthwestern.edu>
>>>> *Cc:* ParaView Mailing List <paraview@paraview.org>
>>>> *Subject:* Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume
>>>>
>>>> Hi David,
>>>>
>>>> I have been trying to find some cycles to check this out myself with
>>>> ParaView 4.4. Thanks to hardware issues (i.e. my big workstation's disk
>>>> dying), I haven't been able to. The good news is that I found issues with
>>>> OSMesa + OpenGL2 that we are working through. Give me another 1-1.5 weeks.
>>>>
>>>> Best,
>>>> -berk
>>>>
>>>> On Mon, Sep 28, 2015 at 10:46 AM, David Trudgian <david.trudg...@utsouthwestern.edu> wrote:
>>>>
>>>> Hi Berk,
>>>>
>>>> I finally managed to grab an allocation of some Tesla K40 nodes on our
>>>> cluster to check GPU rendering of the full 17GB file with 2 x 12GB GPUs.
>>>> I see the same thing as I did with OSMesa rendering.
>>>>
>>>> The 9GB downsampled version works great across 2 nodes, each with a
>>>> single K40. Go up to the 17GB original file and nothing is rendered, with
>>>> no errors. Same behavior with the OPENGL or OPENGL2 backends.
>>>>
>>>> This is all on ParaView 4.3.1 still - I need to find time to build
>>>> OSMesa / MPI versions of 4.4 here. But does 4.4 have any fixes that would
>>>> be expected to affect this?
>>>>
>>>> Thanks,
>>>>
>>>> --
>>>> David Trudgian Ph.D.
>>>> Computational Scientist, BioHPC
>>>> UT Southwestern Medical Center
>>>> Dallas, TX 75390-9039
>>>> Tel: (214) 648-4833
>>>>
>>>> *From:* Berk Geveci [mailto:berk.gev...@kitware.com]
>>>> *Sent:* Tuesday, September 15, 2015 2:43 PM
>>>> *To:* David Trudgian <david.trudg...@utsouthwestern.edu>
>>>> *Cc:* ParaView Mailing List <paraview@paraview.org>
>>>> *Subject:* Re: [Paraview] Volume Rendering 17GB 8.5 billion cell volume
>>>>
>>>> Hey David,
>>>>
>>>> I am hoping to have some time to play around with volume rendering and
>>>> hopefully track down this issue. One thing that I wanted to clarify: it
>>>> sounds from your description that you have a short (2 byte) value type.
>>>> Is that correct?
>>>>
>>>> Thanks,
>>>> -berk
>>>>
>>>> On Wed, Sep 9, 2015 at 5:00 PM, David Trudgian <david.trudg...@utsouthwestern.edu> wrote:
>>>>
>>>> Hi,
>>>>
>>>> We have been experimenting with using ParaView to display very large
>>>> volumes from very large TIFF stacks generated by whole-brain microscopy
>>>> equipment. The test stack has dimensions of 5,368x10,695x150. The stack
>>>> is assembled in ImageJ from individual TIFFs, exported as a RAW, and
>>>> loaded into ParaView, then saved as a .vti for convenience. We can view
>>>> slices fine in the standalone ParaView client on a 256GB machine.
>>>>
>>>> When we attempt volume rendering on this data across multiple nodes
>>>> with MPI, nothing appears in the client. Surface view works as expected.
>>>> On switching to volume rendering, the client's display shows nothing.
>>>> There are no messages from the client or servers - no output.
>>>>
>>>> This happens both when running pvserver across GPU nodes with NVIDIA
>>>> Tesla cards and when using CPU-only rendering with OSMesa. pvserver
>>>> memory usage is well below what we have on the nodes - no memory
>>>> warnings/errors.
>>>>
>>>> Data is about 17GB, 8 billion cells.
>>>> If we downsize to ~4GB or ~9GB, then we can get working volume
>>>> rendering. The 17GB file never works, regardless of how we scale
>>>> nodes/MPI processes. The 4GB/9GB versions will work on 1 or 2 nodes.
>>>>
>>>> I am confused by the lack of rendering, as we don't have memory issues
>>>> or any other messages at all. I am wondering if there is an inherent
>>>> limitation, or if I'm missing something stupid.
>>>>
>>>> Thanks,
>>>>
>>>> Dave Trudgian
>>
>> --
>> *| Aashish Chaudhary | Technical Leader | Kitware Inc. *
>> *| http://www.kitware.com/company/team/chaudhary.html*
>
> --
> *| Aashish Chaudhary | Technical Leader | Kitware Inc. *
> *| http://www.kitware.com/company/team/chaudhary.html*
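The sizes quoted in this thread can be sanity-checked with quick arithmetic (plain Python, not ParaView code): the 5,368 x 10,695 x 150 stack of 2-byte shorts works out to roughly 17 GB and 8.5 billion cells, too large for a single 12 GB GPU or for a 3D texture capped at a few GB.

```python
# Sanity-check the sizes discussed in the thread. For an image volume,
# "cells" sit between points, hence the (n - 1) factors on each axis.

dims = (5368, 10695, 150)  # point dimensions of the TIFF stack
bytes_per_voxel = 2        # short (2 bytes), as confirmed in the thread

points = dims[0] * dims[1] * dims[2]
cells = (dims[0] - 1) * (dims[1] - 1) * (dims[2] - 1)
size_gb = points * bytes_per_voxel / 1e9

print(points)             # 8611614000 -> ~8.6 billion points
print(cells)              # 8551810002 -> the "8.5 billion cells" in the subject
print(round(size_gb, 1))  # 17.2       -> the "about 17GB" figure
```

So the numbers in the subject line and body are mutually consistent; the failure is about per-texture limits, not total node memory.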
_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at:
http://paraview.org/Wiki/ParaView

Search the list archives at: http://markmail.org/search/?q=ParaView

Follow this link to subscribe/unsubscribe:
http://public.kitware.com/mailman/listinfo/paraview
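The "multiple textures / streaming" fix mentioned earlier in the thread could look something like the sketch below: split the volume into bricks that each fit under the driver's texture cap, then upload and composite them one at a time. This is an illustrative sketch only (splitting along z for simplicity), not ParaView's actual implementation; `brick_counts` is a hypothetical helper.

```python
import math

def brick_counts(dims, bytes_per_voxel, cap_bytes):
    """Number of z-bricks needed so each brick's 3D texture stays
    under cap_bytes. Splits along z only, for simplicity."""
    nx, ny, nz = dims
    slab_bytes = nx * ny * bytes_per_voxel        # one z-slice of the volume
    max_slices = max(1, cap_bytes // slab_bytes)  # slices that fit per brick
    return math.ceil(nz / max_slices)

# The 17 GB stack under a 1 GB cap: each 5368x10695 slice of shorts is
# ~115 MB, so a brick holds 9 slices and 150 slices need 17 bricks.
print(brick_counts((5368, 10695, 150), 2, 1024 * 2**20))  # 17
```

Each brick would then be rendered back-to-front (or front-to-back) and blended, which is why streaming support has to be wired into the mapper rather than just the driver limit.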