Re: [Paraview] Saving a slice of data for later visualization
Thanks guys,

As always, it was a nice combination of user error and unexpected behavior. My slice origin was [1.0, 1.0, 0.0] and when I used the CSV writer, it happily wrote out data files for me, so I assumed everything worked fine. But when I used any other writer, the slice didn't actually exist (that's why there was the error about no output port). When I moved my slice to [1.0, 1.0, 0.1], I could get the other writers to work.

I went with the XdmfWriter just because we're used to dealing with Xdmf files anyway. SaveData() also works (once my slice is in the right place) on later versions, but doesn't exist in 4.1. I should upgrade, but it's such a pain to build on clusters that I like to avoid it as long as possible!

Thanks again,

Tim

- Original Message -
From: "Ganesh Vijayakumar"
Cc: "ParaView list"
Sent: Tuesday, October 27, 2015 3:11:54 PM
Subject: Re: [Paraview] Saving a slice of data for later visualization

I use this. Has worked for me fairly well.

SaveData('fileName.vtm', proxy=Clip1, Writealltimestepsasfileseries=0, DataMode='Binary', HeaderType='UInt64', EncodeAppendedData=0, CompressorType='None')

On Tue, Oct 27, 2015 at 7:55 AM Andy Bauer < andy.ba...@kitware.com > wrote:

Hi Tim,

I believe that the writer you want is the XML multiblock data writer -- XMLMultiBlockDataWriter(). The extension for that is .vtm. The reason for this is that a slice through a multiblock data set outputs a multiblock of polydata. You can use the Merge Blocks filter to reduce it to an unstructured grid.

Cheers,
Andy

On Mon, Oct 26, 2015 at 8:21 PM, Tim Gallagher < tim.gallag...@gatech.edu > wrote:

Hi,

I'm struggling to write a script for ParaView that will let me take a slice through my vtkMultiBlockDataSet and save just the slice (so all of the data on the slice and all of the points that make up the slice) in a format that I can look at later. I can get it to dump all of the data to a set of CSV files, but I can't look at those again in ParaView.

My function is very simple (see below). I have tried to use CreateWriter directly with the .vtk file extension as shown on http://www.paraview.org/Wiki/ParaView/Python_Scripting#Writing_Data_Files_.28ParaView_3.9_or_later.29 but that says the vtk file format is unknown and so it doesn't work.

I have tried virtually every writer that would make sense in that writer line and none of them work properly. As it is, the one that is there now says:

vtkCompositeDataPipeline (0x9ac9380): Can not execute simple alorithm without output ports

and I don't know what that means or why it fails to write. (Side note -- "algorithm" is spelled wrong in that error message; it comes from vtkCompositeDataPipeline.cxx line 168.)

Anybody have any suggestions or advice on how to save the dataset that results from a slice so I can look at just that slice later?
Thanks,

Tim

def run(out_dir, file_num, spreadsheet_name, slice_origin, slice_normal, triangulate=False):
    restart_file = XDMFReader(FileName=out_dir+'/RESTS/rest_%05i.xmf' % file_num)
    restart_file_dr = Show()

    if triangulate:
        tri = 1
    else:
        tri = 0

    my_slice = Slice(SliceOffsetValues=[0.0], Triangulatetheslice=tri, SliceType="Plane")
    my_slice.SliceType.Origin = slice_origin
    my_slice.SliceType.Normal = slice_normal

    slice_dr = Show()

    writer = XMLUnstructuredGridWriter(Input=my_slice)
    writer.FileName = out_dir+"/post/"+"%s_data_%05i_.vtu" % (spreadsheet_name, file_num)
    writer.UpdatePipeline()
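For reference, the working approach Tim describes at the top of this message (moving the slice origin off the block boundary and using SaveData() on a newer ParaView) looks roughly like the sketch below. This is a minimal sketch rather than the actual script: it assumes a ParaView version (4.3 or later) where SaveData() is available, and the file names and slice coordinates are purely illustrative.

from paraview.simple import *

# Illustrative file name; any multiblock source works the same way.
reader = XDMFReader(FileName='RESTS/rest_00010.xmf')

my_slice = Slice(Input=reader, SliceType='Plane')
# Nudge the origin off the block boundary so the slice actually produces geometry.
my_slice.SliceType.Origin = [1.0, 1.0, 0.1]
my_slice.SliceType.Normal = [0.0, 0.0, 1.0]

# Writes a multiblock (.vtm) file that can be reopened in ParaView later.
SaveData('slice_00010.vtm', proxy=my_slice)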
Re: [Paraview] Saving a slice of data for later visualization
I use this. Has worked for me fairly well. SaveData('fileName.vtm', proxy=Clip1, Writealltimestepsasfileseries=0, DataMode='Binary', HeaderType='UInt64', EncodeAppendedData=0, CompressorType='None') On Tue, Oct 27, 2015 at 7:55 AM Andy Bauer wrote: > Hi Tim, > > I believe that the writer you want is the XML multiblock data writer -- > XMLMultiBlockDataWriter(). The extension for that is .vtm. The reason for > this is that a slice through a multiblock data set outputs a multiblock of > polydata. You can use the Merge Blocks filter to reduce it to an > unstructured grid. > > Cheers, > Andy > > On Mon, Oct 26, 2015 at 8:21 PM, Tim Gallagher > wrote: > >> Hi, >> >> I'm struggling to write a script for Paraview that will let me take a >> slice through my vtkMultiblockDataSet and save just the slice (so all of >> the data on the slice and all of the points that make up the slice) in a >> format that I can look at later. I can get it to dump all of the data to a >> set of CSV files, but I can't look at those again in paraview. >> >> My function is very simple (see below). I have tried to use CreateWriter >> directly with the .vtk file extension like is shown on >> http://www.paraview.org/Wiki/ParaView/Python_Scripting#Writing_Data_Files_.28ParaView_3.9_or_later.29 >> but that says the vtk file format is unknown and so it doesn't work. >> >> I have tried virtually every writer that would make sense in that writer >> line and none of them work properly. As it is, the one that is there now >> says: >> >> vtkCompositeDataPipeline (0x9ac9380): Can not execute simple alorithm >> without output ports >> >> and I don't know what that means or why it fails to write. (Side note -- >> algorithm is spelled wrong in that error message, comes from >> vtkCompositeDataPipeline.cxx line 168). >> >> Anybody have any suggestions or advice on how to save the datasets that >> results from a slice so I can look at just that slice later? 
>>
>> Thanks,
>>
>> Tim
>>
>> def run(out_dir, file_num, spreadsheet_name, slice_origin, slice_normal, triangulate=False):
>>     restart_file = XDMFReader(FileName=out_dir+'/RESTS/rest_%05i.xmf' % file_num)
>>     restart_file_dr = Show()
>>
>>     if triangulate:
>>         tri = 1
>>     else:
>>         tri = 0
>>
>>     my_slice = Slice(SliceOffsetValues=[0.0], Triangulatetheslice=tri, SliceType="Plane")
>>     my_slice.SliceType.Origin = slice_origin
>>     my_slice.SliceType.Normal = slice_normal
>>
>>     slice_dr = Show()
>>
>>     writer = XMLUnstructuredGridWriter(Input=my_slice)
>>     writer.FileName = out_dir+"/post/"+"%s_data_%05i_.vtu" % (spreadsheet_name, file_num)
>>     writer.UpdatePipeline()
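A minimal sketch of the merge-then-write variant Andy suggests above is shown here. The names and file paths are illustrative, and it assumes the MergeBlocks filter and XMLUnstructuredGridWriter as exposed by paraview.simple, so treat it as a starting point rather than a drop-in script.

from paraview.simple import *

# Slice a multiblock dataset, merge the resulting blocks into a single
# unstructured grid, then write that grid to a .vtu file.
reader = XDMFReader(FileName='RESTS/rest_00010.xmf')

my_slice = Slice(Input=reader, SliceType='Plane')
my_slice.SliceType.Origin = [1.0, 1.0, 0.1]
my_slice.SliceType.Normal = [0.0, 0.0, 1.0]

merged = MergeBlocks(Input=my_slice)

writer = XMLUnstructuredGridWriter(Input=merged, FileName='slice_00010.vtu')
writer.UpdatePipeline()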
Re: [Paraview] Catalyst and adaptor: attaching new fields
Hi Michel, Yep, I saw your email that you figured out the issue. The Fortran API for Catalyst is probably a little bit clunky but it's difficult to tell who's using what for that so I didn't want to change anything there. Feel free to push some of that code back if you want to. The gitlab tools are much nicer than gerrit so hopefully it's easier to make changes. As for the IsFieldNeeded() function, it's already been tested for a simulation code and works quite nicely. That code has a lot of derived variables that it can compute but doesn't store explicitly. We hope to make it easy to take advantage of that when generating scripts but there's quite a bit of work to do in order to get it working. It's probably too much to try to completely automate it by going through the pipeline (impossible to automate for data extract output since it's not known what fields the user wants written out). If you have a specific script that you want modified for that though, if you share it I can help modify it to get that behavior. Hopefully the changes are just a couple of lines so that in the future you can follow along and maybe make the changes yourself if you're in a rush. Cheers, Andy ps. If you're going to be at SC15, we can meet up there and talk about requesting specific fields if you have the time. On Tue, Oct 27, 2015 at 1:45 PM, Michel Rasquin wrote: > Hi Andy, > > Thank you for your quick answer. > > You probably already read my previous answer about this issue. > The problem was located in NeedToCreateGrid(...), which also clears any > field data associated with an existing grid. > The solution consists in calling simply this function only once for every > new time step. > > That said, you raised a good point about IsFieldNeeded(…), which was also > on my radar. > > I already observed that RequestDataDescription(datadescription) in the > python script sets AllFields in the data description object to On. As a > consequence, all the fields specified in the adaptor are passed to Catalyst > since IsFieldNeeded() always returns true, whether the corresponding field > is used in the Catalyst pipeline or not. Since we have improved our adaptor > (we should commit it back now) and increased the number of fields we are > potentially interest in for coprocesing purpose, this can indeed leads to > additional memory usage and cpu time. > > I definitely agree it would therefore be quite useful to have the > possibility to request only the desired field variables through the > IsFieldNeeded() function in the adaptor. > If you have any advice regarding this feature, I would be very interested > in trying that out. > > Thank you for your help! > > Cheers, > > Michel > > > > > On Oct 26, 2015, at 5:57 PM, Andy Bauer wrote: > > Hi Michel, > > You should be able to pass a single field at a time to Catalyst. I'm not > sure where the problem is but my first guess is that maybe you're giving > the same name to all of the fields. What does the code that's calling > addfield() look like? > > Note that the Catalyst API uses things like idd->IsFieldNeeded("pressure") > to check if a field is needed by the pipeline. This has been in the API > since nearly the beginning but we've never had a chance to generate Python > scripts which can take advantage of loading only desired fields. This can > potentially save on both execution time and memory usage. This is on my > radar again but I'm not sure when it will get done. 
You can modify the > Python scripts though to just request the desired field variables in the > RequestDataDescription() method and everything should work as desired. Let > us know if you want to try that out and need help with it. > > Cheers, > Andy > > On Mon, Oct 26, 2015 at 11:22 AM, Michel Rasquin < > michel.rasq...@colorado.edu> wrote: > >> Hi everyone, >> >> I am trying to add some fields to a vtkCPAdaptorAPI object for >> coprocessing with Catalyst. >> I rely for that purpose on the successful implementation of the Phasta >> adaptor provided along with ParaView. >> See >> ParaView-v4.4.0-source/CoProcessing/Adaptors/PhastaAdaptor/PhastaAdaptor.cxx. >> After the initialization of the coprocessing objects and the generation >> of the grid, the current implementation to add fields in the phasta adaptor >> relies on the following function: >> >> void addfields(… double* dofArray, double* vortArray, double * >> otherFieldOfInterest … ) >> { >> vtkCPInputDataDescription* idd = >> vtkCPAdaptorAPI::GetCoProcessorData()->GetInputDescriptionByName("input”); >> vtkUnstructuredGrid* UnstructuredGrid = >> vtkUnstructuredGrid::SafeDownCast(idd->GetGrid()); >> if(!UnstructuredGrid) { >> vtkGenericWarningMacro("No unstructured grid to attach field data >> to."); >> return; >> } >> >> // now add numerical field data >> //velocity >> vtkIdType NumberOfNodes = UnstructuredGrid->GetNumberOfPoints(); >> if(idd->IsFieldNeeded("velocity")) >> { >> vtkDoubleArray* velocity = vtkDoubleArray::New(); >>
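The script-side change Andy describes (requesting only the needed arrays in RequestDataDescription()) would look roughly like the sketch below. This is a hedged sketch: it assumes the vtkCPInputDataDescription Python bindings expose AllFieldsOff() and an AddField() call alongside the IsFieldNeeded() check discussed above, and the field names are purely illustrative.

def RequestDataDescription(datadescription):
    # Tell the adaptor which grid and which arrays the pipeline needs this time step.
    inputdescription = datadescription.GetInputDescriptionByName('input')
    inputdescription.GenerateMeshOn()
    # Instead of AllFieldsOn(), request only the arrays the pipeline actually
    # uses, so the adaptor's IsFieldNeeded() checks can skip everything else.
    inputdescription.AllFieldsOff()
    inputdescription.AddField('velocity')
    inputdescription.AddField('pressure')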
Re: [Paraview] Catalyst and adaptor: attaching new fields
Hi Andy,

Thank you for your quick answer.

You probably already read my previous answer about this issue. The problem was located in NeedToCreateGrid(...), which also clears any field data associated with an existing grid. The solution consists in simply calling this function only once for every new time step.

That said, you raised a good point about IsFieldNeeded(...), which was also on my radar.

I already observed that RequestDataDescription(datadescription) in the Python script sets AllFields in the data description object to On. As a consequence, all the fields specified in the adaptor are passed to Catalyst since IsFieldNeeded() always returns true, whether the corresponding field is used in the Catalyst pipeline or not. Since we have improved our adaptor (we should commit it back now) and increased the number of fields we are potentially interested in for coprocessing purposes, this can indeed lead to additional memory usage and CPU time.

I definitely agree it would therefore be quite useful to have the possibility to request only the desired field variables through the IsFieldNeeded() function in the adaptor. If you have any advice regarding this feature, I would be very interested in trying that out.

Thank you for your help!

Cheers,

Michel

On Oct 26, 2015, at 5:57 PM, Andy Bauer <andy.ba...@kitware.com> wrote:

Hi Michel,

You should be able to pass a single field at a time to Catalyst. I'm not sure where the problem is but my first guess is that maybe you're giving the same name to all of the fields. What does the code that's calling addfield() look like?

Note that the Catalyst API uses things like idd->IsFieldNeeded("pressure") to check if a field is needed by the pipeline. This has been in the API since nearly the beginning but we've never had a chance to generate Python scripts which can take advantage of loading only desired fields. This can potentially save on both execution time and memory usage. This is on my radar again but I'm not sure when it will get done. You can modify the Python scripts though to just request the desired field variables in the RequestDataDescription() method and everything should work as desired. Let us know if you want to try that out and need help with it.

Cheers,
Andy

On Mon, Oct 26, 2015 at 11:22 AM, Michel Rasquin <michel.rasq...@colorado.edu> wrote:

Hi everyone,

I am trying to add some fields to a vtkCPAdaptorAPI object for coprocessing with Catalyst. I rely for that purpose on the successful implementation of the Phasta adaptor provided along with ParaView. See ParaView-v4.4.0-source/CoProcessing/Adaptors/PhastaAdaptor/PhastaAdaptor.cxx.
After the initialization of the coprocessing objects and the generation of the grid, the current implementation to add fields in the Phasta adaptor relies on the following function:

void addfields(... double* dofArray, double* vortArray, double* otherFieldOfInterest ...)
{
  vtkCPInputDataDescription* idd =
    vtkCPAdaptorAPI::GetCoProcessorData()->GetInputDescriptionByName("input");
  vtkUnstructuredGrid* UnstructuredGrid =
    vtkUnstructuredGrid::SafeDownCast(idd->GetGrid());
  if(!UnstructuredGrid) {
    vtkGenericWarningMacro("No unstructured grid to attach field data to.");
    return;
  }

  // now add numerical field data
  // velocity
  vtkIdType NumberOfNodes = UnstructuredGrid->GetNumberOfPoints();
  if(idd->IsFieldNeeded("velocity"))
  {
    vtkDoubleArray* velocity = vtkDoubleArray::New();
    velocity->SetName("velocity");
    velocity->SetNumberOfComponents(3);
    velocity->SetNumberOfTuples(NumberOfNodes);
    for (vtkIdType idx=0; idx<NumberOfNodes; idx++) {
      velocity->SetTuple3(idx, dofArray[idx], dofArray[idx + *nshg], dofArray[idx + *nshg*2]);
    }
    UnstructuredGrid->GetPointData()->AddArray(velocity);
    velocity->Delete();
  }

  if(idd->IsFieldNeeded("vorticity"))
  {
    vtkDoubleArray* vorticity = vtkDoubleArray::New();
    vorticity->SetName("vorticity");
    vorticity->SetNumberOfComponents(3);
    vorticity->SetNumberOfTuples(NumberOfNodes);
    for (vtkIdType idx=0; idx<NumberOfNodes; idx++) {
      vorticity->SetTuple3(idx, vortArray[idx], vortArray[idx + *nshg], vortArray[idx + *nshg*2]);
    }
    UnstructuredGrid->GetPointData()->AddArray(vorticity);
    vorticity->Delete();
  }

  // etc. for any other fields of interest for Catalyst
}

Currently, all the fields requested for coprocessing need to be attached in this function at the same time, using the same pointer to vtkUnstructuredGrid resulting from the SafeDownCast mentioned above. However, I need a more flexible implementation so that I can call addfield (with no "s") as many times as needed and attach a single field to the vtkCPAdaptorAPI object each time this function is called. Concretely, my first implementation is simply the following:

void addfield(std::string fieldName, int* NumberOfComp, double* fieldArray)
{
  vtkCPInputDataDescription* idd =
Re: [Paraview] Python array data into Paraview with simultaneous manipulation by Programmable Filter
> Is there a way to conduct this procedure all in one fell swoop in the
> Python Shell?

Yes. You may want to write your Python script in a text file that you can run in ParaView's Python Shell via the "Run Script" button.

> The main things I am curious about are 1) whether I can load data from a
> Python array into Paraview as x, y, and z coordinates and

Yes. Do you have your data in numpy arrays? If so, see this series of blog posts on Numpy and ParaView for more info: http://www.kitware.com/blog/home/post/709

> 2) whether there's a way to control the creation of a Programmable Filter
> from the Python shell.

Yes.

programmableSource1 = ProgrammableSource()
programmableSource1.Script = 'import vtk'
programmableSource1.ScriptRequestInformation = ''
programmableSource1.PythonPath = ''

Note that this requires you to write the Python script for the Programmable Source as a string in your main Python script. Of course, you can also load it from a file or some other source you may have.

HTH,
Cory

> --
> Eli Medvescek
> Duke University '17 | Biomedical Engineering
> 520.780.6888
> eli.medves...@duke.edu

--
Cory Quammen
R&D Engineer
Kitware, Inc.
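Putting the two answers together, a rough sketch of driving a Programmable Source from the Python Shell with numpy-generated coordinates might look like the following. This is only an illustration, not Cory's exact recipe: the array values are made up, and it assumes the source's output type is left at the default vtkPolyData.

from paraview.simple import *

# Script that runs inside the Programmable Source. It turns three numpy
# arrays into a vtkPolyData point cloud (with vertex cells so the points render).
source_script = """
import numpy as np
import vtk

# Illustrative coordinate arrays; replace with your own data.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(6.28 * x)
z = np.zeros_like(x)

pdo = self.GetPolyDataOutput()
points = vtk.vtkPoints()
verts = vtk.vtkCellArray()
for i, (xi, yi, zi) in enumerate(zip(x, y, z)):
    points.InsertNextPoint(float(xi), float(yi), float(zi))
    verts.InsertNextCell(1)
    verts.InsertCellPoint(i)
pdo.SetPoints(points)
pdo.SetVerts(verts)
"""

programmableSource1 = ProgrammableSource()
programmableSource1.Script = source_script
Show(programmableSource1)
Render()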
Re: [Paraview] segmentation fault with netCDF when loading state file
David, Thanks for your investigation. I will be able to try this tomorrow. The original issue was with Paraview 4.3.1. We recently built the latest master, so I will be able to confirm whether it was fixed in between. Best, Ryan On Tue, Oct 27, 2015 at 10:23 AM, David Lonie wrote: > Hi Ryan, > > I'm looking into this issue. > > I download the sample data you provided, and the state file loads and > renders fine for me with the current (as of this morning) master branch of > ParaView. What version of ParaView are you using? Can you test the latest > master branch? It appears that whatever the bug was, it has been fixed. > > Dave > > On Wed, Oct 21, 2015 at 9:50 AM, Aashish Chaudhary < > aashish.chaudh...@kitware.com> wrote: > >> Ryan, >> >> would it be possible for you to share a sample dataset? The NetCDF reader >> has fails to read data in certain corner cases but without having the data >> to reproduce the issue it is kind of little tricky to get to the problem. >> >> - Aashish >> >> On Wed, Oct 21, 2015 at 9:44 AM, Ryan Abernathey < >> ryan.abernat...@gmail.com> wrote: >> >>> Does anyone have any feedback on this issue? >>> >>> I have no idea how to debug or continue. I will have to abandon paraview >>> for my project unless I can get some help somehow. >>> >>> Thanks, >>> >>> Ryan Abernathey >>> Assistant Professor >>> Columbia University, Department of Earth & Environmental Sciences >>> Lamont-Doherty Earth Observatory, Division of Ocean & Climate Physics >>> 205 C Oceanography >>> 61 Route 9W - PO Box 1000 >>> Palisades, NY 10964-8000 >>> http://rabernat.github.io >>> r...@ldeo.columbia.edu >>> >>> >>> On Mon, Oct 19, 2015 at 4:59 PM, Ryan Abernathey < >>> ryan.abernat...@gmail.com> wrote: >>> Hello, I am extremely frustrated and stuck on this problem. Any advice would be appreciated. I am working with a dataset of 2400 sequentially numbered netCDF files (generic & CF conventions). In a fresh pipeline, Paraview is able to recognizes these files as a single source and load them correctly. I then create a standard pipeline involving various filters (contour, etc., nothing funny) and save a state file. When I attempt to load the state file, I get the following segmentation fault. Further down you can see a ncdump of the offending file. Would appreciate any advice on how to debug / overcome this issue. Is it a bug? It sure feels like one... 
Best,
Ryan Abernathey


ERROR: In /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/IO/NetCDF/vtkNetCDFReader.cxx, line 822
vtkNetCDFCFReader (0x55c0620): netCDF Error: NetCDF: Index exceeds dimension bound

ERROR: In /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/Common/ExecutionModel/vtkExecutive.cxx, line 784
vtkPVCompositeDataPipeline (0x5be23d0): Algorithm vtkFileSeriesReader(0x4c0c3e0) returned failure for request: vtkInformation (0x8141700)
  Debug: Off
  Modified Time: 05
  Reference Count: 1
  Registered Events: (none)
  Request: REQUEST_DATA
  FORWARD_DIRECTION: 0
  FROM_OUTPUT_PORT: 0
  ALGORITHM_AFTER_FORWARD: 1

ERROR: In /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/Common/ExecutionModel/vtkExecutive.cxx, line 784
vtkCompositeDataPipeline (0x744fde0): Algorithm vtkPVGeometryFilter(0x7436500) returned failure for request: vtkInformation (0x5737250)
  Debug: Off
  Modified Time: 43
  Reference Count: 1
  Registered Events: (none)
  Request: REQUEST_DATA_OBJECT
  FORWARD_DIRECTION: 0
  FROM_OUTPUT_PORT: 0
  ALGORITHM_AFTER_FORWARD: 1

Segmentation fault (core dumped)

$ ncdump -h rce_rrtm303x512dx3f20_512x512x64_3km_12s_128_000300.nc
netcdf rce_rrtm303x512dx3f20_512x512x64_3km_12s_128_000300 {
dimensions:
        x = 512 ;
        y = 512 ;
        z = 64 ;
        time = UNLIMITED ; // (1 currently)
variables:
        float x(x) ;
                x:units = "m" ;
        float y(y) ;
                y:units = "m" ;
        float z(z) ;
                z:units = "m" ;
                z:long_name = "height" ;
        float time(time) ;
                time:units = "d" ;
                time:long_name = "time" ;
        float p(z) ;
                p:units = "mb" ;
                p:long_name = "pressure" ;
        float zi(z) ;
                zi:units = "m" ;
                zi:long_name = "intfc_ht" ;
        float U(time, z, y, x) ;
                U:long_name = "X Wind Component" ;
                U:units = "m/s " ;
        floa
Re: [Paraview] segmentation fault with netCDF when loading state file
Hi Ryan, I'm looking into this issue. I download the sample data you provided, and the state file loads and renders fine for me with the current (as of this morning) master branch of ParaView. What version of ParaView are you using? Can you test the latest master branch? It appears that whatever the bug was, it has been fixed. Dave On Wed, Oct 21, 2015 at 9:50 AM, Aashish Chaudhary < aashish.chaudh...@kitware.com> wrote: > Ryan, > > would it be possible for you to share a sample dataset? The NetCDF reader > has fails to read data in certain corner cases but without having the data > to reproduce the issue it is kind of little tricky to get to the problem. > > - Aashish > > On Wed, Oct 21, 2015 at 9:44 AM, Ryan Abernathey < > ryan.abernat...@gmail.com> wrote: > >> Does anyone have any feedback on this issue? >> >> I have no idea how to debug or continue. I will have to abandon paraview >> for my project unless I can get some help somehow. >> >> Thanks, >> >> Ryan Abernathey >> Assistant Professor >> Columbia University, Department of Earth & Environmental Sciences >> Lamont-Doherty Earth Observatory, Division of Ocean & Climate Physics >> 205 C Oceanography >> 61 Route 9W - PO Box 1000 >> Palisades, NY 10964-8000 >> http://rabernat.github.io >> r...@ldeo.columbia.edu >> >> >> On Mon, Oct 19, 2015 at 4:59 PM, Ryan Abernathey < >> ryan.abernat...@gmail.com> wrote: >> >>> Hello, >>> >>> I am extremely frustrated and stuck on this problem. Any advice would be >>> appreciated. >>> >>> I am working with a dataset of 2400 sequentially numbered netCDF files >>> (generic & CF conventions). >>> >>> In a fresh pipeline, Paraview is able to recognizes these files as a >>> single source and load them correctly. I then create a standard pipeline >>> involving various filters (contour, etc., nothing funny) and save a state >>> file. >>> >>> When I attempt to load the state file, I get the following segmentation >>> fault. Further down you can see a ncdump of the offending file. >>> >>> Would appreciate any advice on how to debug / overcome this issue. Is it >>> a bug? It sure feels like one... 
>>> >>> Best, >>> Ryan Abernathey >>> >>> >>> ERROR: In >>> /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/IO/NetCDF/vtkNetCDFReader.cxx, >>> line 822 >>> vtkNetCDFCFReader (0x55c0620): netCDF Error: NetCDF: Index exceeds >>> dimension bound >>> >>> >>> ERROR: In >>> /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/Common/ExecutionModel/vtkExecutive.cxx, >>> line 784 >>> vtkPVCompositeDataPipeline (0x5be23d0): Algorithm >>> vtkFileSeriesReader(0x4c0c3e0) returned failure for request: vtkInformation >>> (0x8141700) >>> Debug: Off >>> Modified Time: 05 >>> Reference Count: 1 >>> Registered Events: (none) >>> Request: REQUEST_DATA >>> FORWARD_DIRECTION: 0 >>> FROM_OUTPUT_PORT: 0 >>> ALGORITHM_AFTER_FORWARD: 1 >>> >>> >>> >>> >>> ERROR: In >>> /home/kitware/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/VTK/Common/ExecutionModel/vtkExecutive.cxx, >>> line 784 >>> vtkCompositeDataPipeline (0x744fde0): Algorithm >>> vtkPVGeometryFilter(0x7436500) returned failure for request: vtkInformation >>> (0x5737250) >>> Debug: Off >>> Modified Time: 43 >>> Reference Count: 1 >>> Registered Events: (none) >>> Request: REQUEST_DATA_OBJECT >>> FORWARD_DIRECTION: 0 >>> FROM_OUTPUT_PORT: 0 >>> ALGORITHM_AFTER_FORWARD: 1 >>> >>> >>> >>> >>> Segmentation fault (core dumped) >>> >>> >>> >>> >>> >>> >>> $ ncdump -h rce_rrtm303x512dx3f20_512x512x64_3km_12s_128_000300.nc >>> netcdf rce_rrtm303x512dx3f20_512x512x64_3km_12s_128_000300 { >>> dimensions: >>> x = 512 ; >>> y = 512 ; >>> z = 64 ; >>> time = UNLIMITED ; // (1 currently) >>> variables: >>> float x(x) ; >>> x:units = "m" ; >>> float y(y) ; >>> y:units = "m" ; >>> float z(z) ; >>> z:units = "m" ; >>> z:long_name = "height" ; >>> float time(time) ; >>> time:units = "d" ; >>> time:long_name = "time" ; >>> float p(z) ; >>> p:units = "mb" ; >>> p:long_name = "pressure" ; >>> float zi(z) ; >>> zi:units = "m" ; >>> zi:long_name = "intfc_ht" ; >>> float U(time, z, y, x) ; >>> U:long_name = "X Wind >>> Component" ; >>> U:units = "m/s " ; >>> float V(time, z, y, x) ; >>> V:long_name = "Y Wind >>> Component" ; >>> V:units = "m/s " ; >>> float W(time, z, y, x) ; >>> W:long_name = "Z Wind >>> Component" ; >>> W:units = "m/s " ; >>> float PP(time, z, y, x) ; >>> PP:long_name = "Pressure >>> Perturbation
Re: [Paraview] [vtk-developers] [vtk-users] OpenGL2 - GPU Volume Rendering performance
Hi Simon, This is helpful but just missing few more bits: 1) Did you try without the shading and see how the performance compares? 2) ParaView 4.4.0-193-gec96423 --> Where did you get this one from (ParaView download page or did you built yourself?) Also, so on your system the old mapper is running 30FPS and the new one at 15-20 FPS as per your summary. Thanks, - Aashish On Tue, Oct 27, 2015 at 9:43 AM, Simon ESNEAULT wrote: > Hello Aashish, > > Sorry for the late answer, I was busy this morning. > Thanks for testing with the DataSet. > I agree the performance is still quite good with the new backend, and I > also get something like 15/20 fps on windows on an HD screen. But when > compared to the old one, and in some condition (when zoomed especially), it > looks really slower to me > The two tested version are : > - ParaView 4.4.0 64 bits final version for the old backend > - ParaView 4.4.0-193-gec96423 64 bits, for the OpenGL2 backend. > on a windows 7 box, Xeon E3-1220 v3 CPU, 16GB ram and Nvidia Quadro K420 > > To highlight the difference, here is what I do : > - Launch both version on the same computer at the same time > - Load the above dataset on each > - Select volume rendering > - Adjust the transfer function data range to [100-750] (the default "Cool > to Warm" is fine) > - Set the view direction to +Y > - Adjust the Y of the camera position to -300 > > And start interacting ... > Dunno if there is an easy way to print out the Frame Rate in Paraview, but > the new version seems really twice slower in these conditions... We can see > it does not scale in the same way, the old backend seems more aggressive on > the image sample reduction, hence the interactivity is better. > Shading enable or not does not change much > > I'm aware of the DesiredUpdateRate thing, we use to play with this with > the old backend to fine tune the interactivity, although what's really > inside was never clear to me > > I hope that there is enough information for you to reproduce this, do not > hesitate to ask for some more information. > > Thanks a lot for your help > Simon > > > 2015-10-27 14:10 GMT+01:00 Aashish Chaudhary < > aashish.chaudh...@kitware.com>: > >> Dear Simon, >> >> Checking again. Wondering if you can provide some more detail on the >> binary you are using and whether or not without shading the rendering >> performance comparable to older version. >> >> Thanks, >> >> >> On Mon, Oct 26, 2015 at 3:12 PM, Aashish Chaudhary < >> aashish.chaudh...@kitware.com> wrote: >> >>> Simon, >>> >>> I used your dataset on paraview master as of today on my Linux box >>> running Ubuntu 14.04 and NVIDA Quadro card and I am getting about 15-20 FPS >>> with shading on with 1920x1080 resolution. >>> >>> Are you on the proper 4.4 or using RC1/RC2? I checked the shading >>> performance fix was in 4.4 but not in RC's. I don't have access to Windows >>> box right away but I will try there too. >>> >>> NOTE: You might get multiple emails because of the attachment size >>> issue. Sorry about that. >>> >>> Thanks, >>> >>> On Mon, Oct 26, 2015 at 2:45 PM, Aashish Chaudhary < >>> aashish.chaudh...@kitware.com> wrote: >>> On Mon, Oct 26, 2015 at 2:13 PM, Simon ESNEAULT < simon.esnea...@gmail.com> wrote: > Hello Aashish, > > Thanks for the quick answer > We are using a vtkImageData, 512x512x591 with short element (you can > find the dataset here : > https://www.dropbox.com/s/ptqwi0ebv75kt35/volume.zip). So I think > it's all about GPU volume raycast mapper. 
> The new mapper does bring low resolution, but when compared to the old > one, it seems less "low resolution" during interaction than the old one > Right, so that's why its not a exact comparison. What happens is that depending on what is interactive, (you can set the desired update rate in VTK, not exposed in ParaView I believe), it will do interactive but with higher resolution (smaller sample distance). If they both have the same sample distance, then the new mapper should out perform the old one, however, there is another thing we need to consider here which is shading. > Shading is enabled, gradient opacity disabled > Can you disable the shading and see if now they both (opengl1 and 2) equally better? We already pushed a fix for it but not sure if that you have in your build. > > Don't know if you need a minimal example, but I believe the > GPURenderDemo used with this dataset is enough to highlight the slow down. > Yes, I will use this dataset. Thanks. > > Thanks > Simon > > > 2015-10-26 18:57 GMT+01:00 Aashish Chaudhary < > aashish.chaudh...@kitware.com>: > >> Also, >> >> Do you have shading enabled? We fixed a bug with shading that was >> causing the slow performance a while back. I don't remember if that was >> included in 4.4 or not ( I ca
Re: [Paraview] [vtk-developers] [vtk-users] OpenGL2 - GPU Volume Rendering performance
Hello Aashish, Sorry for the late answer, I was busy this morning. Thanks for testing with the DataSet. I agree the performance is still quite good with the new backend, and I also get something like 15/20 fps on windows on an HD screen. But when compared to the old one, and in some condition (when zoomed especially), it looks really slower to me The two tested version are : - ParaView 4.4.0 64 bits final version for the old backend - ParaView 4.4.0-193-gec96423 64 bits, for the OpenGL2 backend. on a windows 7 box, Xeon E3-1220 v3 CPU, 16GB ram and Nvidia Quadro K420 To highlight the difference, here is what I do : - Launch both version on the same computer at the same time - Load the above dataset on each - Select volume rendering - Adjust the transfer function data range to [100-750] (the default "Cool to Warm" is fine) - Set the view direction to +Y - Adjust the Y of the camera position to -300 And start interacting ... Dunno if there is an easy way to print out the Frame Rate in Paraview, but the new version seems really twice slower in these conditions... We can see it does not scale in the same way, the old backend seems more aggressive on the image sample reduction, hence the interactivity is better. Shading enable or not does not change much I'm aware of the DesiredUpdateRate thing, we use to play with this with the old backend to fine tune the interactivity, although what's really inside was never clear to me I hope that there is enough information for you to reproduce this, do not hesitate to ask for some more information. Thanks a lot for your help Simon 2015-10-27 14:10 GMT+01:00 Aashish Chaudhary : > Dear Simon, > > Checking again. Wondering if you can provide some more detail on the > binary you are using and whether or not without shading the rendering > performance comparable to older version. > > Thanks, > > > On Mon, Oct 26, 2015 at 3:12 PM, Aashish Chaudhary < > aashish.chaudh...@kitware.com> wrote: > >> Simon, >> >> I used your dataset on paraview master as of today on my Linux box >> running Ubuntu 14.04 and NVIDA Quadro card and I am getting about 15-20 FPS >> with shading on with 1920x1080 resolution. >> >> Are you on the proper 4.4 or using RC1/RC2? I checked the shading >> performance fix was in 4.4 but not in RC's. I don't have access to Windows >> box right away but I will try there too. >> >> NOTE: You might get multiple emails because of the attachment size issue. >> Sorry about that. >> >> Thanks, >> >> On Mon, Oct 26, 2015 at 2:45 PM, Aashish Chaudhary < >> aashish.chaudh...@kitware.com> wrote: >> >>> >>> >>> On Mon, Oct 26, 2015 at 2:13 PM, Simon ESNEAULT < >>> simon.esnea...@gmail.com> wrote: >>> Hello Aashish, Thanks for the quick answer We are using a vtkImageData, 512x512x591 with short element (you can find the dataset here : https://www.dropbox.com/s/ptqwi0ebv75kt35/volume.zip). So I think it's all about GPU volume raycast mapper. The new mapper does bring low resolution, but when compared to the old one, it seems less "low resolution" during interaction than the old one >>> >>> Right, so that's why its not a exact comparison. What happens is that >>> depending on what is interactive, (you can set the desired update rate in >>> VTK, not exposed in ParaView I believe), it will do interactive but with >>> higher resolution (smaller sample distance). If they both have the same >>> sample distance, then the new mapper should out perform the old one, >>> however, there is another thing we need to consider here which is shading. 
>>> >>> Shading is enabled, gradient opacity disabled >>> >>> Can you disable the shading and see if now they both (opengl1 and 2) >>> equally better? We already pushed a fix for it but not sure if that you >>> have in your build. >>> Don't know if you need a minimal example, but I believe the GPURenderDemo used with this dataset is enough to highlight the slow down. >>> >>> Yes, I will use this dataset. Thanks. >>> Thanks Simon 2015-10-26 18:57 GMT+01:00 Aashish Chaudhary < aashish.chaudh...@kitware.com>: > Also, > > Do you have shading enabled? We fixed a bug with shading that was > causing the slow performance a while back. I don't remember if that was > included in 4.4 or not ( I can check ). > > - Aashish > > On Mon, Oct 26, 2015 at 1:53 PM, Aashish Chaudhary < > aashish.chaudh...@kitware.com> wrote: > >> Simon, >> >> What kind of dataset you are using? Depending on the data type you >> might be using >> the GPU one or the unstructured renderer. The performance we measured >> is related to the GPU ray cast mapper >> and will apply only to the vtkImageData inputs. >> >> Also, helpful would be is if you can tell if the new mapper is >> bringing low resolution when you interact with the volume (and whether or >> not it happens with
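For completeness, the desired-update-rate machinery mentioned above can be exercised directly in a small VTK Python script. The sketch below is only an illustration under assumptions: the file name and numeric values are invented, and it merely shows where vtkRenderWindowInteractor's DesiredUpdateRate/StillUpdateRate and the GPU mapper's automatic sample-distance adjustment are set.

import vtk

# Illustrative volume-rendering setup; the file name is hypothetical.
reader = vtk.vtkMetaImageReader()
reader.SetFileName('volume.mhd')

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())
# Let the mapper coarsen its sample distance when an interactive frame rate is requested.
mapper.SetAutoAdjustSampleDistances(1)

# Simple transfer functions over the 100-750 data range used in the thread.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(100.0, 0.0, 0.0, 1.0)
color.AddRGBPoint(750.0, 1.0, 0.0, 0.0)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(100.0, 0.0)
opacity.AddPoint(750.0, 0.5)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

# Higher desired rate during interaction -> coarser, faster frames;
# a very low still rate -> one full-quality frame once interaction stops.
interactor.SetDesiredUpdateRate(15.0)
interactor.SetStillUpdateRate(0.01)

interactor.Initialize()
window.Render()
interactor.Start()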
Re: [Paraview] [vtk-developers] [vtk-users] OpenGL2 - GPU Volume Rendering performance
Dear Simon, Checking again. Wondering if you can provide some more detail on the binary you are using and whether or not without shading the rendering performance comparable to older version. Thanks, On Mon, Oct 26, 2015 at 3:12 PM, Aashish Chaudhary < aashish.chaudh...@kitware.com> wrote: > Simon, > > I used your dataset on paraview master as of today on my Linux box running > Ubuntu 14.04 and NVIDA Quadro card and I am getting about 15-20 FPS with > shading on with 1920x1080 resolution. > > Are you on the proper 4.4 or using RC1/RC2? I checked the shading > performance fix was in 4.4 but not in RC's. I don't have access to Windows > box right away but I will try there too. > > NOTE: You might get multiple emails because of the attachment size issue. > Sorry about that. > > Thanks, > > On Mon, Oct 26, 2015 at 2:45 PM, Aashish Chaudhary < > aashish.chaudh...@kitware.com> wrote: > >> >> >> On Mon, Oct 26, 2015 at 2:13 PM, Simon ESNEAULT > > wrote: >> >>> Hello Aashish, >>> >>> Thanks for the quick answer >>> We are using a vtkImageData, 512x512x591 with short element (you can >>> find the dataset here : >>> https://www.dropbox.com/s/ptqwi0ebv75kt35/volume.zip). So I think it's >>> all about GPU volume raycast mapper. >>> The new mapper does bring low resolution, but when compared to the old >>> one, it seems less "low resolution" during interaction than the old one >>> >> >> Right, so that's why its not a exact comparison. What happens is that >> depending on what is interactive, (you can set the desired update rate in >> VTK, not exposed in ParaView I believe), it will do interactive but with >> higher resolution (smaller sample distance). If they both have the same >> sample distance, then the new mapper should out perform the old one, >> however, there is another thing we need to consider here which is shading. >> >> >>> Shading is enabled, gradient opacity disabled >>> >> >> Can you disable the shading and see if now they both (opengl1 and 2) >> equally better? We already pushed a fix for it but not sure if that you >> have in your build. >> >>> >>> Don't know if you need a minimal example, but I believe the >>> GPURenderDemo used with this dataset is enough to highlight the slow down. >>> >> >> Yes, I will use this dataset. Thanks. >> >>> >>> Thanks >>> Simon >>> >>> >>> 2015-10-26 18:57 GMT+01:00 Aashish Chaudhary < >>> aashish.chaudh...@kitware.com>: >>> Also, Do you have shading enabled? We fixed a bug with shading that was causing the slow performance a while back. I don't remember if that was included in 4.4 or not ( I can check ). - Aashish On Mon, Oct 26, 2015 at 1:53 PM, Aashish Chaudhary < aashish.chaudh...@kitware.com> wrote: > Simon, > > What kind of dataset you are using? Depending on the data type you > might be using > the GPU one or the unstructured renderer. The performance we measured > is related to the GPU ray cast mapper > and will apply only to the vtkImageData inputs. > > Also, helpful would be is if you can tell if the new mapper is > bringing low resolution when you interact with the volume (and whether or > not it happens with old mapper). > > Thanks, > > > On Mon, Oct 26, 2015 at 1:47 PM, Simon ESNEAULT < > simon.esnea...@gmail.com> wrote: > >> Hi All, >> >> We are trying to make the switch to the new OpenGL2 backend for our >> application, and although the switch was easy (thanks for not breaking >> the >> API ;) ), we can see a significant slowdown on the GPU volume rendering >> part, especially during interaction. 
Typically we dropped from 15/20 fps >> to >> 7/8 fps, on the same machine (Win32, Nvidia Quadro K420), with the same >> code around. >> >> This slow down can be seen in ParaView, if you compare the latest 4.4 >> OpenGL2 build with the classic 4.4 build while volume rendering a big >> enough volume (512^3) >> >> The blog post here >> http://www.kitware.com/blog/home/post/976 >> claims that the new GPU volume rendering implementation should be >> faster than the old one, is there some more detailed explanation >> somewhere >> ? Are there some important parameters that can make the difference ? >> >> Thanks, >> >> Simon >> >> PS : The polygonal rendering seems a lot faster with the new backend ! >> >> -- >> -- >> Simon Esneault >> Rennes, France >> -- >> >> ___ >> Powered by www.kitware.com >> >> Visit other Kitware open-source projects at >> http://www.kitware.com/opensource/opensource.html >> >> Search the list archives at: >> http://markmail.org/search/?q=vtk-developers >> >> Follow this
Re: [Paraview] Saving a slice of data for later visualization
Hi Tim,

I believe that the writer you want is the XML multiblock data writer -- XMLMultiBlockDataWriter(). The extension for that is .vtm. The reason for this is that a slice through a multiblock data set outputs a multiblock of polydata. You can use the Merge Blocks filter to reduce it to an unstructured grid.

Cheers,
Andy

On Mon, Oct 26, 2015 at 8:21 PM, Tim Gallagher wrote:
> Hi,
>
> I'm struggling to write a script for ParaView that will let me take a slice through my vtkMultiBlockDataSet and save just the slice (so all of the data on the slice and all of the points that make up the slice) in a format that I can look at later. I can get it to dump all of the data to a set of CSV files, but I can't look at those again in ParaView.
>
> My function is very simple (see below). I have tried to use CreateWriter directly with the .vtk file extension as shown on http://www.paraview.org/Wiki/ParaView/Python_Scripting#Writing_Data_Files_.28ParaView_3.9_or_later.29 but that says the vtk file format is unknown and so it doesn't work.
>
> I have tried virtually every writer that would make sense in that writer line and none of them work properly. As it is, the one that is there now says:
>
> vtkCompositeDataPipeline (0x9ac9380): Can not execute simple alorithm without output ports
>
> and I don't know what that means or why it fails to write. (Side note -- "algorithm" is spelled wrong in that error message; it comes from vtkCompositeDataPipeline.cxx line 168.)
>
> Anybody have any suggestions or advice on how to save the dataset that results from a slice so I can look at just that slice later?
>
> Thanks,
>
> Tim
>
> def run(out_dir, file_num, spreadsheet_name, slice_origin, slice_normal, triangulate=False):
>     restart_file = XDMFReader(FileName=out_dir+'/RESTS/rest_%05i.xmf' % file_num)
>     restart_file_dr = Show()
>
>     if triangulate:
>         tri = 1
>     else:
>         tri = 0
>
>     my_slice = Slice(SliceOffsetValues=[0.0], Triangulatetheslice=tri, SliceType="Plane")
>     my_slice.SliceType.Origin = slice_origin
>     my_slice.SliceType.Normal = slice_normal
>
>     slice_dr = Show()
>
>     writer = XMLUnstructuredGridWriter(Input=my_slice)
>     writer.FileName = out_dir+"/post/"+"%s_data_%05i_.vtu" % (spreadsheet_name, file_num)
>     writer.UpdatePipeline()