On Thu, Oct 16, 2008 at 9:31 AM, David Fuentes [EMAIL PROTECTED] wrote:
On Thu, 16 Oct 2008, Berk Geveci wrote:
You can't connect to the paraview gui. You can only connect to
instances of pvserver. Currently, those accept one connection only. We
will be working on collaboration support that
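For reference, a typical setup looks something like this (assuming the default port 11111 and a plain MPI launch; the exact commands depend on your site):

    # on the cluster node
    mpirun -np 4 ./pvserver --server-port=11111

    # on the workstation: start the ParaView client and use
    # File -> Connect to add a server entry pointing at <node>:11111

Only one client can attach to that pvserver at a time, as noted above.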
Hi,
I export VTK multiblock files. I have the following questions:
* How can I assign names to the datasets? I googled and found a hint
to use a name attribute in the pvd file, but it is ignored.
* Is it possible to write the blocks to a single file?
* ParaView crashes when I load the pvd file:
VTKFile
Hi,
if I run mpirun -np 4 ./pvserver on our cluster node and connect from
my client, pvserver always shows 100% CPU usage - even if I do
nothing at the client.
It seems as if there is a loop waiting for the client to ask for
action - but this loop never calls a wait/sleep function.
Hi Jens,
Your pvserver is probably waiting on an MPI_Recv and your MPI
implementation is spinning.
You will note that process 0 probably isn't doing this; it is the other
nodes that are waiting on process 0 to send.
I have chased this problem all the way to the MPI developers, as
it's easy to
Hi John,
thanks for your answer. That makes sense. Normal MPI apps are probably
not written to wait for more things to do - they are simply always busy.
It is just a pity that the cluster has to run at 100%, producing a lot of
heat for nothing.
So the MPI lib will probably not change this behavior
Hi John,
I thought about this problem again...
A solution could be:
a) to use MPI_Irecv/MPI_Wait instead of MPI_Recv.
If that results in the same 100% CPU for the MPI_Wait call,
b) it could be a solution to add a wait()/sleep() just between MPI_Irecv
and MPI_Wait.
How long this wait/sleep would
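For what it's worth, a minimal sketch of option b) could look like the code below. The helper name, arguments and the 10 ms interval are placeholders for illustration, not anything taken from the ParaView sources:

    /* Option b): non-blocking receive plus a test/sleep loop instead of a
     * blocking MPI_Recv, so an idle rank does not spin at 100% CPU. */
    #include <mpi.h>
    #include <unistd.h>   /* usleep() */

    static void recv_with_sleep(void *buf, int count, MPI_Datatype type,
                                int src, int tag, MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Status status;
        int done = 0;

        MPI_Irecv(buf, count, type, src, tag, comm, &req);
        while (!done) {
            MPI_Test(&req, &done, &status);   /* returns immediately */
            if (!done)
                usleep(10000);                /* 10 ms: idle CPU vs. latency trade-off */
        }
    }

The sleep length is the tuning knob: a longer sleep cuts the idle CPU further but adds latency once a message finally arrives.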
Hi Jens,
If I recall correctly, the explanation for your observation lies in the
type of hardware you are using. I think some hardware allows a developer
to leverage an interrupt while some requires polling for a received
message. Design requirements for MPI to be fast with low latency
usually (I
Hi Jens,
I would think that each pvserver process would have to be able to
detect that it was in a lengthy wait state because it sure would suck
to have an MPI_Wait in the middle of each send/recv pair during
compositing ... And it would probably be difficult to handle the
tiled
Hi all,
It just depends on how it is implemented with MPI. There will be lower
latency if it spins in a loop waiting for a message. Here is a link to the
FAQ which shows how you can stop this for OpenMPI:
http://www.open-mpi.org/faq/?category=running#oversubscribing
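If I remember the parameter name correctly, the knob from that FAQ is the yield-when-idle MCA setting, which makes idle ranks give up the processor instead of spinning aggressively, e.g.:

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./pvserver

Note that top may still show the process as busy; the yield mainly keeps it from starving other work on the node.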
Just a quick note - I have noticed the same behavior with LAM-MPI, but not
with MPICH.
Jacques
I am guessing this is a 2D contour, correct? If yes, this is a feature
I am supposed to work on for the next release. Can you send me an
example file?
-berk
On Thu, Nov 20, 2008 at 11:16 AM, Sergio Di Bari [EMAIL PROTECTED] wrote:
Hi everybody,
I'm having a problem with ParaView.
I have a
Chris,
If you put a feature request at http://paraview.org/Bug, maybe I will
get to provide an example python script to accomplish this. Send me an
e-mail after submitting the bug. Thanks.
-berk
On Thu, Nov 27, 2008 at 3:16 AM, Christoph Held [EMAIL PROTECTED] wrote:
Hi there,
I'm trying to
If the geometry size is below the remote render threshold, geometry is
transferred and rendered on the client. If the geometry size is above
the threshold, rendering happens on the server and the resulting image
is transferred. No OpenGL commands travel over the network.
On Fri, Dec 5, 2008 at