Re: [OMPI devel] One sided tests
Thanks Gilles. It looks like more than a tentative fix ;) I commented on the patch, but so far it fixes all the issues related to one-sided.

Thanks,
George.

On Wed, Jan 21, 2015 at 11:51 PM, Gilles Gouaillardet <gilles.gouaillar...@iferc.org> wrote:

> George,
>
> a tentative fix is available at https://github.com/open-mpi/ompi/pull/355
>
> i asked Nathan to review it before it lands into the master
>
> Cheers,
>
> Gilles
>
> On 2015/01/22 7:08, George Bosilca wrote:
>> The current trunk, compiled with any compiler (gcc or icc), fails the one-sided
>> tests from mpi_test_suite. It deadlocks in a fetch.
>>
>> George.
>>
>> ___
>> devel mailing list
>> de...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
>> Link to this post: http://www.open-mpi.org/community/lists/devel/2015/01/16813.php
>
> ___
> devel mailing list
> de...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> Link to this post: http://www.open-mpi.org/community/lists/devel/2015/01/16814.php
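For readers following along: the report is of a deadlock in a fetch during one-sided (RMA) tests. A minimal sketch of the kind of passive-target fetch pattern involved is below. This is an illustrative program, not the actual mpi_test_suite test; the counter layout and ranks are assumptions. It needs an MPI installation (compile with mpicc, launch with mpirun).

```c
/* Minimal one-sided fetch sketch: every rank atomically fetches and
 * increments a counter exposed in rank 0's window. A broken osc
 * component can hang in MPI_Win_unlock, which must complete the
 * outstanding fetch. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int counter = 0;                 /* window memory on every rank */
    MPI_Win win;
    MPI_Win_create(&counter, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int one = 1, fetched = -1;
    /* Passive-target epoch: lock rank 0's window, fetch-and-add,
     * unlock. The unlock is where a fetch that never completes
     * would deadlock. */
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Fetch_and_op(&one, &fetched, MPI_INT, 0, 0, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    printf("rank %d fetched %d\n", rank, fetched);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Run with e.g. `mpirun -np 4 ./fetch_test`; on a healthy build each rank prints a distinct fetched value between 0 and 3.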
[OMPI devel] open mpi error
Hello everyone,

I have set up a cluster with two nodes. One node has two network interfaces, both up. I put the IP address of one of them (interface eno1) in /etc/hosts, and when I run a send/recv program with mpirun it works well.

When I bring eno1 down, it also works well using the other IP address (interface eno2). But when I bring eno1 back up and try to run the program again, it hangs at the connection point.

Here is the output where I get stuck; please help me.

Output:

hello U are in MPI_init. U are leaving MPI_init now
msg from sender
Hello world: processor 0 of 2 of n0cc29
[n0cc29:24114] btl: tcp: attempting to connect() to [[14813,1],1] address 172.16.15.73 on port 1024

After this, nothing happens.

With regards,
khushi.
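On a multi-homed node, the TCP BTL may try to connect over an interface the peer cannot reach. A common workaround is to tell Open MPI explicitly which interfaces to use with the `btl_tcp_if_include` / `btl_tcp_if_exclude` MCA parameters. A sketch (the interface names eno1/eno2 come from the report above; the program name and hostfile are placeholders for your own):

```shell
# Restrict the TCP BTL to a single known-good interface (eno2 here):
mpirun --mca btl_tcp_if_include eno2 -np 2 --hostfile hosts ./send_recv

# Or, equivalently, exclude the problematic interface (and loopback):
mpirun --mca btl_tcp_if_exclude eno1,lo -np 2 --hostfile hosts ./send_recv
```

Note that include and exclude are mutually exclusive: set one or the other, not both.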
Re: [OMPI devel] open mpi error
Can you send all the information listed here:

    http://www.open-mpi.org/community/help/

> On Jan 22, 2015, at 2:05 AM, khushi popat wrote:
>
> hello everyone,
>
> I have set up one cluster with two nodes. One node is having two network
> interfaces, both are up. I have defined one of their IP addresses with
> interface eno1 in /etc/hosts file and when I try to run a send recv program
> with mpirun it works well.
>
> But when I turn off the eno1 interface it works well with the other IP address
> with interface eno2. But when I turn eno1 on again and try to run the program
> it will get hung at the connecting point.
>
> Here I am putting the output where I am stuck. Please help me.
> Output:
>
> hello U are in MPI_init. U are leaving MPI_init now
> msg from sender
> Hello world: processor 0 of 2 of n0cc29
> [n0cc29:24114] btl: tcp: attempting to connect() to [[14813,1],1] address
> 172.16.15.73 on port 1024
>
> after this nothing is happening...
>
> with regards,
> khushi.
>
> ___
> devel mailing list
> de...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/devel
> Link to this post: http://www.open-mpi.org/community/lists/devel/2015/01/16816.php

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/