Re: [OMPI users] Cannot catch std::bad_alloc?

2019-04-03 Thread Zhen Wang
You will likely find StackOverflow to be a more effective way to get support on C++ issues. Jeff. On Wed, Apr 3, 2019 at 8:47 AM Zhen Wang wrote: Joseph, Thanks for your response. I'm no expert on Linux so please bear with me. If I un

Re: [OMPI users] Cannot catch std::bad_alloc?

2019-04-03 Thread Zhen Wang
the vector to its initial value. There is no way you can catch that. You might want to try to disable overcommit in the kernel and see if std::vector::resize throws an exception because malloc fails. HTH, Joseph. [1] https://www.kernel.org/doc/Documentation/vm/ov
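Joseph's suggestion above can be tried as follows (a sketch, assuming a Linux system with root access; the `vm.overcommit_memory` values are the standard kernel modes documented at the link above):

```shell
# Check the current overcommit policy
# (0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting)
cat /proc/sys/vm/overcommit_memory

# Disable overcommit so that malloc fails up front (and operator new can
# throw std::bad_alloc) instead of the OOM killer striking later:
sudo sysctl vm.overcommit_memory=2
```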

[OMPI users] Cannot catch std::bad_alloc?

2019-04-03 Thread Zhen Wang
Hi, I have difficulty catching std::bad_alloc in an MPI environment. The code is attached. I'm using gcc 6.3 on SUSE Linux Enterprise Server 11 (x86_64). Open MPI is built from source. The commands are as follows: *Build* g++ -I -L -lmpi memory.cpp *Run* -n 2 a.out *Output* 0 0 1 1

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff, Thanks for the explanation. It's very clear. Best regards, Zhen. On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote: On May 9, 2016, at 8:23 AM, Zhen Wang <tod...@gmail.com> wrote: I have another question. I thoug

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
ou want. Pass "--mca btl_tcp_progress_thread 1" on the mpirun command line to enable the TCP progress thread to try it. On May 4, 2016, at 7:40 PM, Zhen Wang <tod...@gmail.com> wrote: Hi, I'm having a problem with I
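Putting the suggestion above together with the run command from the original post, the invocation would look like this (a sketch; `-n 2 a.out` is taken from the thread, and the MCA parameter applies to the Open MPI 1.10.x TCP BTL discussed here):

```shell
# Enable Open MPI's TCP progress thread so message fragments advance in the
# background, without the application having to call MPI_Test repeatedly:
mpirun --mca btl_tcp_progress_thread 1 -n 2 a.out
```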

Re: [OMPI users] Isend, Recv and Test

2016-05-06 Thread Zhen Wang
com> wrote: On May 5, 2016, at 10:09 PM, Zhen Wang <tod...@gmail.com> wrote: It's taking so long because you are sleeping for .1 second between calling MPI_Test(). The TCP transport is only sending a few fragments of your message during

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
"--mca btl_tcp_progress_thread 1" on the mpirun command line to enable the TCP progress thread to try it. Does this mean there's an additional thread to transfer data in the background? On May 4, 2016, at 7:40 PM, Zhen Wang <tod...@gmail.com> wrote:

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
86] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages, and see if one performs better than the other? Cheers, Gilles. On Thursday, May 5, 2016, Zhen Wang <tod...@gmail.com> wrote: Gilles,

Re: [OMPI users] Isend, Recv and Test

2016-05-05 Thread Zhen Wang
4 at 08:39:06. MPI 0: MPI_Test of 4 at 08:39:06. MPI 0: MPI_Test of 4 at 08:39:06. MPI 1: Recv of 4 finished at 08:39:06. MPI 0: MPI_Test of 4 at 08:39:06. MPI 0: Isend of 4 finished at 08:39:06. Cheers, Gilles. On Thursday, May 5, 2016, Zhen Wang <tod...@gma

[OMPI users] Isend, Recv and Test

2016-05-04 Thread Zhen Wang
Hi, I'm having a problem with Isend, Recv and Test in Linux Mint 16 Petra. The source is attached. Open MPI 1.10.2 is configured with ./configure --enable-debug --prefix=/home//Tool/openmpi-1.10.2-debug The source is built with ~/Tool/openmpi-1.10.2-debug/bin/mpiCC a5.cpp and run on one node
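The attached a5.cpp is not in the archive, but from the log lines later in the thread ("MPI 0: MPI_Test of 4...", "MPI 1: Recv of 4 finished...") the pattern can be sketched roughly as below. This is a hypothetical reconstruction, not the author's code; the message size and tag are invented. The key point from Jeff's replies is that with the single-threaded TCP BTL, each MPI_Test call is what drives fragments onto the wire, so sleeping between calls stalls the transfer:

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<int> buf(1 << 20);  // ~4 MiB payload (illustrative size)
    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(buf.data(), (int)buf.size(), MPI_INT, 1, 0,
                  MPI_COMM_WORLD, &req);
        int done = 0;
        while (!done) {
            // Each MPI_Test call also progresses the TCP transport;
            // inserting a sleep here (as in the original report) makes
            // the send take far longer unless btl_tcp_progress_thread
            // is enabled.
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }
        std::printf("MPI 0: Isend finished\n");
    } else if (rank == 1) {
        MPI_Recv(buf.data(), (int)buf.size(), MPI_INT, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("MPI 1: Recv finished\n");
    }
    MPI_Finalize();
    return 0;
}
```

Built with mpiCC and launched with `mpirun -n 2`, as in the original post.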