Hi David and list. I'm still struggling with distributing my dataset (bmag) across nodes. My Python Programmable Source script below produces correct output when run on a single node.

import vtk
from vtk.util import numpy_support as ns

contr = vtk.vtkMultiProcessController.GetGlobalController()
nranks = contr.GetNumberOfProcesses()
rank = contr.GetLocalProcessId()

# read in bmag data (a numpy array) and process -- omitted here

# copy the requested update extent onto the output image data
executive = self.GetExecutive()
outInfo = executive.GetOutputInformation(0)
updateExtent = [executive.UPDATE_EXTENT().Get(outInfo, i) for i in xrange(6)]
imageData = self.GetOutput()
imageData.SetExtent(updateExtent)

# wrap the flattened numpy array as a VTK float array (deep copy)
myout = ns.numpy_to_vtk(bmag.ravel(), 1, vtk.VTK_FLOAT)
myout.SetName("B field magnitude")
imageData.GetPointData().SetScalars(myout)


I then used the Filters/Parallel/Testing/Python/testTransmit.py test code as a basis for extending the script to run on multiple nodes. Following the last line above, I added:

# broadcast rank 0's extent so every rank agrees on the whole extent
da = vtk.vtkIntArray()
da.SetNumberOfTuples(6)
if rank == 0:
    ext = imageData.GetExtent()
    for i in range(6):
        da.SetValue(i, ext[i])
contr.Broadcast(da, 0)

ext = tuple(da.GetValue(i) for i in range(6))

# feed the image data into a pipeline with the agreed whole extent
tp = vtk.vtkTrivialProducer()
tp.SetOutput(imageData)
tp.SetWholeExtent(ext)

# request only this rank's piece of the data
xmit = vtk.vtkTransmitImageDataPiece()
xmit.SetInputConnection(tp.GetOutputPort())
xmit.UpdateInformation()
xmit.SetUpdateExtent(rank, nranks, 0)
xmit.Update()

However, when I run this on two nodes, it looks like both of them get the same half of the data, rather than the whole dataset getting split between the two. Can anyone see what might be wrong? Thanks.
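For context, here is a rough pure-Python sketch of the kind of even slab split I expect the pieces to follow, so it's easy to see what extent each rank *should* end up with. The helper name split_extent and the choice of splitting along the z axis are my own illustration, not the actual VTK extent-translator code:

```python
def split_extent(whole_extent, piece, num_pieces):
    # Hypothetical helper: split a VTK-style extent (x0, x1, y0, y1, z0, z1)
    # into num_pieces contiguous z slabs and return the slab for 'piece'.
    # This is a sketch of the expected behavior, not VTK's implementation.
    x0, x1, y0, y1, z0, z1 = whole_extent
    nz = z1 - z0 + 1                    # number of z samples
    base, rem = divmod(nz, num_pieces)  # spread any remainder over low ranks
    start = z0 + piece * base + min(piece, rem)
    count = base + (1 if piece < rem else 0)
    return (x0, x1, y0, y1, start, start + count - 1)

# With two ranks and a 0..9 z range, each rank should get a different slab:
print(split_extent((0, 63, 0, 63, 0, 9), 0, 2))  # (0, 63, 0, 63, 0, 4)
print(split_extent((0, 63, 0, 63, 0, 9), 1, 2))  # (0, 63, 0, 63, 5, 9)
```

So after xmit.Update(), printing xmit.GetOutput().GetExtent() on each rank should show two different slabs like these; in my run, both ranks report the same one.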

-jeff

On 07/23/2015 11:18 AM, David E DeMarle wrote:
Excellent.

No advice yet. Anyone have a worked-out example ready? If so, please post it to the wiki.


David E DeMarle
Kitware, Inc.
R&D Engineer
21 Corporate Drive
Clifton Park, NY 12065-8662
Phone: 518-881-4909

On Thu, Jul 23, 2015 at 2:14 PM, Jeff Becker <jeffrey.c.bec...@nasa.gov> wrote:

    Hi David,

    On 07/23/2015 10:57 AM, David E DeMarle wrote:
    The Python shell runs on the client side.

    Try doing that within the Python Programmable Filter, which runs
    on the server side.

    Yes, that works. Thanks. Now I will try to use that and follow

    http://www.paraview.org/pipermail/paraview/2011-August/022421.html

    to get my existing Python Programmable Source, which produces Image
    Data, to distribute its data across the servers. Any additional
    advice? Thanks again.

    -jeff




    On Thu, Jul 23, 2015 at 1:53 PM, Jeff Becker
    <jeffrey.c.bec...@nasa.gov> wrote:

        Hi. I run "mpirun -np 4 pvserver --client-host=xxx
        --use-offscreen-rendering" and connect a ParaView client
        viewer. I can see 4 nodes in the Memory Inspector, but when I
        start a Python shell in ParaView and do:

        from mpi4py import MPI
        print MPI.COMM_WORLD.Get_size()

        I get the answer 1. Shouldn't it be 4? Thanks.

        -jeff


        _______________________________________________
        Powered by www.kitware.com

        Visit other Kitware open-source projects at
        http://www.kitware.com/opensource/opensource.html

        Please keep messages on-topic and check the ParaView Wiki at:
        http://paraview.org/Wiki/ParaView

        Search the list archives at:
        http://markmail.org/search/?q=ParaView

        Follow this link to subscribe/unsubscribe:
        http://public.kitware.com/mailman/listinfo/paraview





