Hi,

I use NumPy in a lot of the programmable filters that I write, and I've run into 
a difference in how its masking feature behaves in serial versus parallel. 
Masking lets one filter out the portions of an array that do not pass some 
condition.
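
For instance, on a plain NumPy array outside of ParaView, this is roughly the 
behaviour I expect (a minimal standalone sketch):
---

import numpy

a = numpy.array([0.5, 1.0, 2.0, 1.0, 3.0])
# mark every entry equal to 1.0 as masked (i.e. filtered out)
m = numpy.ma.masked_equal(a, 1.0)
# compressed() returns only the unmasked values: [0.5, 2.0, 3.0]
print(m.compressed())

---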


As an example, I've created a stock ParaView Wavelet source and saved it as a 
.pvd file.  I then load it back in and run this inside a Programmable Filter:
---

import numpy

data = inputs[0].PointData['RTData']
# create a mask that tells us which points are equal to one
mask = numpy.ma.masked_equal(data, 1)
# filter data array by the mask conditions (so that other points are excluded)
maskedPnts = numpy.extract(mask, data)

print len(maskedPnts)

---
In serial mode, I get 9261 points (21^3, which matches the default Wavelet 
extent of -10 to 10).  With two processes, I get 4851 points per process, 
i.e. 9702 in total.  So running in parallel always leaves me with more points 
after masking.

Any ideas as to why that is?  Is there anything I can do or print out to see why 
masking doesn't behave the same way in parallel?
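For instance, I could add something like this to the filter to print per-process 
counts (a rough sketch; I'm not sure whether these are the right numbers to look 
at, or whether each process even sees a disjoint set of points):
---

import numpy

data = inputs[0].PointData['RTData']
mask = numpy.ma.masked_equal(data, 1)

# how many points this process sees at all
print(len(data))
# how many of them were actually masked (i.e. equal to 1)
print(numpy.ma.getmaskarray(mask).sum())

---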

Thanks, Sohail