Hi Ufuk,

It looks to me like the issue is somewhere in the adaptor. I tried running your scripts (modified slightly) with the CxxFullExample (./CxxFullExample script_1.py script_2.py) and was able to get image_s1* and image_s2* files out of it, and I did not see the warning you mention. I also modified that example to use atm_input2d instead of input for the input/grid identifier, to make sure that wasn't the problem. I've attached this example with the scripts modified to print out some debug information.
Some questions (these are pretty basic things that you've probably already gotten working properly, but maybe some small thing was forgotten):

- Does your g_coprocessor only get created in your initialization routine and deleted in the finalization step? It should not be created and deleted during each in situ processing step.
- Do the vtkCPPythonScriptPipelines only get added during initialization? They should not be added to and removed from g_coprocessor every time step. (There's a short sketch of the initialization pattern I have in mind at the bottom of this message, below the quoted thread.)

Also, could you try running with the scripts that I included in the cat.tgz tarball? The print statements there will print out the time and time steps during the simulation run to help me diagnose (both need to be increasing between calls to Catalyst). And if you share your adaptor code with me, I can take a quick look.

Cheers,
Andy

On Fri, Jul 7, 2017 at 7:30 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turunco...@be.itu.edu.tr> wrote:

> Hi Andy,
>
> Strange! To test the idea and eliminate other problems, I am using the same script twice with small modifications (I just changed the name of the output PNG file). So, if I pass script_1.py and script_2.py to the model, it gives the warning as before and creates output only for the second script (script_2.py). If I pass only one of them, the code works without any problem and produces the desired output.
>
> Thanks,
> Regards,
>
> --ufuk
>
> On 06/07/2017 18:09, Andy Bauer wrote:
>
> Hi Ufuk,
>
> I'm guessing the issue is that the calls to Catalyst are not consistent. Could you share your Python scripts? Also, did you modify them manually?
>
> I tried with PV 5.3 with the ../ParaView-v5.3.0/Examples/Catalyst/CxxFullExample/ example with the attached scripts simultaneously, by running "./CxxFullExample doubleoutputs.py output3.py image11.py", and got the correct output and no warnings.
>
> Cheers,
> Andy
>
> On Tue, Jul 4, 2017 at 7:04 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turunco...@be.itu.edu.tr> wrote:
>
>> Hi Andy,
>>
>> I tested your suggestion about using multiple scripts in co-processing. In this case, I used the following code on the adaptor side to add multiple pipelines:
>>
>> for (int i = 0; i < *nscript; i++) {
>>   pipeline->Initialize(pythonScriptNames[i]);
>>   g_coprocessor->AddPipeline(pipeline);
>> }
>>
>> When I run the simulation, I get the following warning:
>>
>> Warning: In /okyanus/users/uturuncoglu/progs/paraview-5.3.0/src/ParaViewCore/VTKExtensions/Core/vtkPVTrivialProducer.cxx, line 66
>> vtkPVTrivialProducer (0x13816760): New time step is not after last time step.
>>
>> The output also does not look correct; it is a zoomed-out version of the second pipeline (PNG file). The first pipeline is not even triggered. Am I missing something here? BTW, I am using PV 5.3.
>>
>> Thanks,
>>
>> --ufuk
>>
>> On 16/05/2017 16:08, Andy Bauer wrote:
>>
>> Hi Ufuk,
>>
>> If you create a vtkCPPythonScriptPipeline, when you initialize it with the script file name (which has to be done on each process), everything will be taken care of with respect to broadcasting the file contents from process 0 to the others. We aren't sophisticated enough to parse the Python script to see if it imports other scripts that are not part of ParaView (e.g. paraview.simple) or Python (e.g. sys). That is why I recommended the first approach as opposed to the second approach above.
>> Depending on the compute platform and how many MPI processes are in the run, the difference may be negligible, but having 100K or more processes trying to access the same file can seriously slow down an HPC machine.
>>
>> Cheers,
>> Andy
>>
>> On Tue, May 16, 2017 at 8:24 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turunco...@be.itu.edu.tr> wrote:
>>
>>> Thanks, Andy. That is exactly what I am looking for. The broadcasting mechanism is not clear to me yet. Do I need to broadcast only the file names? Anyway, I will try to implement it and see what is going on there.
>>>
>>> Thanks again,
>>> Regards,
>>>
>>> --ufuk
>>>
>>> On 16/05/2017 14:58, Andy Bauer wrote:
>>>
>>> Hi Ufuk,
>>>
>>> Unless I'm not understanding your question correctly, I think you can get what you want by adding multiple vtkCPPythonScriptPipelines to the vtkCPProcessor object in your adaptor. Alternatively, if you want to have a single, master Catalyst script handling other Catalyst scripts, you can do something like the following:
>>> ================
>>> import script_a
>>> import script_b
>>> import script_c
>>>
>>> def RequestDataDescription(datadescription):
>>>     script_a.RequestDataDescription(datadescription)
>>>     script_b.RequestDataDescription(datadescription)
>>>     script_c.RequestDataDescription(datadescription)
>>>
>>> def DoCoProcessing(datadescription):
>>>     script_a.DoCoProcessing(datadescription)
>>>     script_b.DoCoProcessing(datadescription)
>>>     script_c.DoCoProcessing(datadescription)
>>> ===================
>>>
>>> The first way is the recommended one, though, as it should be more efficient: process 0 reads the scripts and broadcasts the script contents to the other processes. The second method will only do that for the master script.
>>>
>>> Please let me know if this doesn't answer your question.
>>>
>>> Cheers,
>>> Andy
>>>
>>> On Tue, May 16, 2017 at 5:46 AM, Ufuk Utku Turuncoglu (BE) <u.utku.turunco...@be.itu.edu.tr> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I just wonder whether it is possible to trigger multiple visualization pipelines at the same time with co-processing. The co-processing script generator plugin mainly outputs a single pipeline at a time, and that is fine, but what about combining multiple Python scripts (generated by the plugin) with a higher-level Python script to trigger multiple pipelines? I think this would be a much more efficient way to look at different parts of the data without writing to disk. I am not sure, but somebody else might have done this before.
>>>>
>>>> Regards,
>>>>
>>>> --ufuk
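P.S. In case it helps, here is a rough sketch of the initialization pattern I have in mind for multiple scripts. It is only a sketch, not a drop-in replacement: the names (g_coprocessor, nscript, pythonScriptNames, CatalystInitialize, CatalystFinalize) are guesses based on your snippet, and error checking is omitted. One thing worth double-checking in your loop above: if pipeline is a single object created outside the loop, each iteration re-initializes that same pipeline, so I would expect only the last script to take effect, which would match the symptom you describe.

// Sketch of adaptor initialization/finalization for multiple Catalyst scripts.
// Assumes the names from your snippet (g_coprocessor, nscript, pythonScriptNames);
// the function names here are just placeholders.
#include <vtkCPProcessor.h>
#include <vtkCPPythonScriptPipeline.h>
#include <vtkNew.h>

vtkCPProcessor* g_coprocessor = nullptr;

void CatalystInitialize(int nscript, char* pythonScriptNames[])
{
  if (g_coprocessor == nullptr)
  {
    // Create and initialize the processor exactly once, in the
    // initialization routine (not every in situ processing step).
    g_coprocessor = vtkCPProcessor::New();
    g_coprocessor->Initialize();
  }
  for (int i = 0; i < nscript; i++)
  {
    // Create a *new* pipeline object for each script; re-initializing a
    // single pipeline would leave only the last script active.
    vtkNew<vtkCPPythonScriptPipeline> pipeline;
    pipeline->Initialize(pythonScriptNames[i]);
    g_coprocessor->AddPipeline(pipeline.GetPointer());
  }
}

void CatalystFinalize()
{
  // Delete the processor only during finalization, not per time step.
  if (g_coprocessor)
  {
    g_coprocessor->Delete();
    g_coprocessor = nullptr;
  }
}

The key points are that the processor is created once, a fresh vtkCPPythonScriptPipeline is created and initialized per script, and nothing is added or removed per time step.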
Attachment: cat.tgz (GNU Zip compressed data)