Hi Jeff, Ralph,
first of all: thanks for your work on this!
On 3 July 2013 21:09, Jeff Squyres (jsquyres) wrote:
> 1. The root cause of the issue is that you are assigning a
> non-existent IP address to a name. I.e., the hostname maps to
> 127.0.1.1, but that IP address does not exist.
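For anyone hitting the same symptom, a quick diagnostic sketch (assuming a typical Linux box with `getent` and `iproute2`; commands are my suggestion, not from the thread):

```shell
# What does the resolver return for this host's name?
getent hosts "$(hostname)" || echo "hostname does not resolve"

# Which IPv4 addresses are actually configured on the interfaces?
# If the name maps to 127.0.1.1 but that address is not listed here,
# you are in the situation described above.
ip -4 addr show 2>/dev/null | grep -w inet || true
```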
Hi,
sorry for the delay in replying -- pretty busy week :-(
On 28 June 2013 21:54, Jeff Squyres (jsquyres) wrote:
> Here's what we think we know (I'm using the name "foo" instead of
> your actual hostname because it's easier to type):
>
> 1. When you run "hostname", you get
Hello,
On 26 June 2013 03:11, Ralph Castain wrote:
> I've been reviewing the code, and I think I'm getting a handle on
> the issue.
>
> Just to be clear - your hostname resolves to the 127 address? And you are on
> a Linux (not one of the BSD flavors out there)?
Yes (but
On 20 June 2013 11:29, Riccardo Murri <riccardo.mu...@uzh.ch> wrote:
> However, I cannot reproduce the issue now
Just to be clear: the "issue" in that mail refers to the OpenMPI SGE
ras plugin not working with our version of SGE.
The issue with the 127.0.1.1 address is a separate one.
On 20 June 2013 06:33, Ralph Castain wrote:
> Been trying to decipher this problem, and think maybe I'm beginning to
> understand it. Just to clarify:
>
> * when you execute "hostname", you get the .local response?
Yes:
[rmurri@nh64-2-11 ~]$ hostname
nh64-2-11.local
On 19 June 2013 16:01, Ralph Castain wrote:
> How is OMPI picking up this hostfile? It isn't being specified on the cmd
> line - are you running under some resource manager?
Via the environment variable `OMPI_MCA_orte_default_hostfile`.
We're running under SGE; the machines file it generates contains
unqualified host names:
$ cat $TMPDIR/machines
nh64-1-17
nh64-1-17
No problem if we modify the setup script to create the hostfile using
FQDNs instead. (`uname -n` returns the FQDN, not the unqualified host name.)
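For the record, the relevant bits of the setup look roughly like this (the paths and the rewriting step are reconstructed for illustration, not copied from our actual setup script):

```shell
# SGE sets a per-job TMPDIR; /tmp is a stand-in so this runs anywhere.
TMPDIR="${TMPDIR:-/tmp}"

# The hostfile, rewritten by the setup script to use FQDNs
# (hostnames are examples):
cat > "$TMPDIR/machines" <<'EOF'
nh64-1-17.local
nh64-1-17.local
EOF

# Make it Open MPI's default hostfile for every mpirun in this job,
# without passing -hostfile on the command line:
export OMPI_MCA_orte_default_hostfile="$TMPDIR/machines"
echo "default hostfile: $OMPI_MCA_orte_default_hostfile"
```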
Thanks,
Riccardo
--
Riccardo Murri
http://www.gc3.uzh.ch/people/rm
Grid Computing
On Mon, Dec 13, 2010 at 4:57 PM, Kechagias Apostolos
wrote:
> I have the code that is in the attachment.
> Can anybody explain how to use scatter function?
MPI_Scatter receives the data in the initial segment of the given
buffer. (The receiving buffer needs to be 1/Nth the size of the
root's send buffer.)
Hi,
On Fri, Dec 10, 2010 at 2:51 AM, Santosh Ansumali wrote:
>> - the "static" data member is shared between all instances of the
>> class, so it cannot be part of the MPI datatype (it will likely be
>> at a fixed memory location);
>
> Yes! I agree that i is global as far
On Wed, Dec 8, 2010 at 10:04 PM, Santosh Ansumali wrote:
> I am confused with the use of MPI derived datatype for classes with
> static member. How to create derived datatype for something like
> class test{
> static const int i=5;
> double data[5];
> }
>
This looks like
Hi Jeff,
thanks for the explanation - I should have read the MPI standard more carefully.
In the end, I traced the bug down to using standard send instead of
synchronous send,
so it had nothing to do with the receiving side at all.
Best regards,
Riccardo
Hello,
I'm trying to debug a segfaulting application; the segfault does not
happen consistently, however, so my guess is that it is due to some
memory corruption problem which I'm trying to find.
I'm using code like this:
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if(flag)
>> Other than not having the obvious "OMPI is MPI-2.2 compliant" checkmark
>> for marketing reasons, is there anyone who *needs* the functionality
>> represented by those still-open tickets?
>
> I have been writing some code that would have benefited greatly from the fix
> to #2219 (MPI datatypes for C99 types and MPI integer typedefs).
+1
--
Riccardo Murri, Hadlaubstr. 150, 8006 Zürich (CH)
On Tue, Aug 10, 2010 at 9:49 PM, Alexandru Blidaru wrote:
> Are the Boost.MPI send and recv functions as fast as the standard ones when
> using Open-MPI?
Boost.MPI is layered on top of plain MPI; it basically provides a
mapping from complex and user-defined C++ data types to
Hi Alexandru,
you can read all about Boost.MPI at:
http://www.boost.org/doc/libs/1_43_0/doc/html/mpi.html
On Mon, Aug 9, 2010 at 10:27 PM, Alexandru Blidaru wrote:
> I basically have to implement a 4D vector. An additional goal of my project
> is to support char, int,
Hello Alexandru,
On Mon, Aug 9, 2010 at 6:05 PM, Alexandru Blidaru wrote:
> I have to send some vectors from node to node, and the vectors are built
> using a template. The datatypes used in the template will be long, int,
> double, and char. How may I send those vectors
Hi Jack,
On Wed, Aug 4, 2010 at 6:25 AM, Jack Bryan wrote:
> I need to transfer some data, which is C++ class with some vector
> member data.
> I want to use MPI_Bcast(buffer, count, datatype, root, comm);
> May I use MPI_Datatype to define customized data structure that
Hello,
The FAQ states: "Support for MPI_THREAD_MULTIPLE [...] has been
designed into Open MPI from its first planning meetings. Support for
MPI_THREAD_MULTIPLE is included in the first version of Open MPI, but
it is only lightly tested and likely still has some bugs."
The man page of "mpirun"
Hello,
I just re-compiled OMPI, and noticed this in the
"ompi_info --all" output:
Open MPI: 1.4.3a1r23323
...
Thread support: posix (mpi: yes, progress: no)
...
What is this "progress thread support"? Is it the "asynchronous
progress
Sorry, just found out about the "--debug-daemons" option, which
allowed me to google a meaningful error message and find the solution
in the archives of this list.
For the record, the problem was that the "orted" being launched on the
remote node is the one from the system-wide MPI install, not
Hello,
On Tue, Jun 22, 2010 at 8:05 AM, Ralph Castain wrote:
> Sorry for the problem - the issue is a bug in the handling of the
>pernode option in 1.4.2. This has been fixed and awaits release in
>1.4.3.
>
Thank you for pointing this out. Unfortunately, I am still not able
Hello,
I'm using OpenMPI 1.4.2 on a Rocks 5.2 cluster. I compiled it on my
own to have a thread-enabled MPI (the OMPI coming with Rocks 5.2
apparently only supports MPI_THREAD_SINGLE), and installed into ~/sw.
To test the newly installed library I compiled a simple "hello world"
that comes with