Re: [OMPI devel] ompi development call

2025-01-28 Thread Christoph Niethammer
Hello Tommy, I'd appreciate it if we could have a quick look at PR 13039 [1] from my student here at HLRS. He has implemented message aggregation for partitioned P2P in a new module, based on our work presented at EuroMPI/Australia last year. Feedback welcome. :) Best Christoph [1] https://github.com/open
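
A minimal sketch of the MPI 4.0 partitioned point-to-point API that such a module sits beneath (assuming an MPI 4.0 implementation such as Open MPI 5.x); the aggregation logic of PR 13039 itself is not shown, and partition counts are illustrative:

    #include <mpi.h>

    /* Sketch: the sender marks partitions ready one by one; the library
     * may transfer them eagerly or, as in the PR's topic, aggregate them. */
    int main(int argc, char **argv)
    {
        int rank;
        enum { PARTS = 8, PER_PART = 1024 };   /* illustrative sizes */
        static double buf[PARTS * PER_PART];
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Psend_init(buf, PARTS, PER_PART, MPI_DOUBLE, 1, 0,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &req);
            MPI_Start(&req);
            for (int p = 0; p < PARTS; p++) {
                /* ... fill partition p here ... */
                MPI_Pready(p, req);      /* partition p may now be sent */
            }
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_Request_free(&req);
        } else if (rank == 1) {
            MPI_Precv_init(buf, PARTS, PER_PART, MPI_DOUBLE, 0, 0,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &req);
            MPI_Start(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_Request_free(&req);
        }
        MPI_Finalize();
        return 0;
    }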

Re: [OMPI devel] [Open MPI Announce] Open MPI v5.0.0rc13 is available for testing

2023-10-04 Thread Christoph Niethammer via devel
Hello Austen, Unfortunately I could not attend the last telco, so I do not know if this was discussed. I'd therefore like to draw attention to https://github.com/mpi-forum/mpi-issues/issues/765 It has not been voted on yet but seems to have support to go in. As partitioned communication is int

[OMPI devel] Open MPI + UCX parameter tuning

2020-02-06 Thread Christoph Niethammer via devel
The Open MPI documentation could include some more information and maybe best practices about this. Best Christoph Niethammer -- Christoph Niethammer High Performance Computing Center Stuttgart (HLRS) Nobelstrasse 19 70569 Stuttgart Tel: ++49(0)711-685-87203 email: nietham...@hlrs.de http://www.hlrs

Re: [OMPI devel] MTT Perl client

2018-09-14 Thread Christoph Niethammer
Works for the installation at HLRS. Short note/question: I am using the mtt-relay script, which is written in Perl. Is there a Python-based replacement? Best Christoph Niethammer - Original Message - From: "Open MPI Developers" To: "Open MPI Developers" CC:

Re: [OMPI devel] Upcoming nightly tarball URL changes

2018-03-08 Thread Christoph Niethammer
/md5sums.txt: 2018-03-07 23:16:19 ERROR 403: Forbidden. ... Fetching the tarballs themselves works fine. Is there anything else I have to change in the setup? Best Christoph Niethammer - Original Message - From: "Open MPI Developers" To: "Open MPI Developers" Cc: "Barrett, Brian

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-23 Thread Christoph Niethammer
left over temporary I/O files in /tmp Hi Chris, Please go ahead and open a PR for master and I'll open corresponding ones for the release branches. Howard Christoph Niethammer <nietham...@hlrs.de> wrote on Thu, 22 June 2017 at 01:10: Hi

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-22 Thread Christoph Niethammer
<hpprit...@gmail.com>: Hi Chris, Thanks very much for the patch! Howard 2017-06-21 9:43 GMT-06:00 Christoph Niethammer <nietham...@hlrs.de>: Hello Ralph, Thanks for the update on this issue. I used the latest master (c38866eb392

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-21 Thread Christoph Niethammer
me before - anything we make should be under the session directory, not directly in /tmp. > On May 9, 2017, at 2:10 AM, Christoph Niethammer wrote: > > Hi, > > I am using Open MPI 2.1.0. > > Best > Christoph > > - Original Message - > From: "R

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-05-09 Thread Christoph Niethammer
are you using? > On May 8, 2017, at 8:56 AM, Christoph Niethammer wrote: > > Hello > > According to the manpage "...orte-clean attempts to clean up any processes > and files left over from Open MPI jobs that were run in the past as well as > any currently running jobs

[OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-05-08 Thread Christoph Niethammer
the job, and any temporary files...". If I now have a program which calls MPI_File_open, MPI_File_write and MPI_Abort() in that order, I get left-over files /tmp/OMPI_*.sm. Running orte-clean does not remove them. Is this a bug or a feature? Best Christoph Niethammer
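
A minimal reproducer for the behaviour described above (file name illustrative); after running it under mpirun, check /tmp for leftover OMPI_*.sm files:

    #include <mpi.h>

    /* Open a file, write to it, then abort without closing: per the
     * report above this leaves /tmp/OMPI_*.sm files behind that
     * orte-clean does not remove. */
    int main(int argc, char **argv)
    {
        MPI_File fh;
        const char text[] = "hello\n";

        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "testfile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write(fh, text, (int)sizeof(text) - 1, MPI_CHAR,
                       MPI_STATUS_IGNORE);
        MPI_Abort(MPI_COMM_WORLD, 1);   /* no MPI_File_close, no Finalize */
        return 0;
    }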

Re: [OMPI devel] v2.1.0rc1 has been released

2017-02-28 Thread Christoph Niethammer
Cheers, Gilles On Tuesday, February 28, 2017, Christoph Niethammer <nietham...@hlrs.de> wrote: Hi Gilles, Is this the same issue I reported on 4/29/2014: 'Wrong Endianness in Open MPI for external32 representation'? https://www.mail-archive.c

Re: [OMPI devel] v2.1.0rc1 has been released

2017-02-28 Thread Christoph Niethammer
Hi Gilles, Is this the same issue I reported 4/29/2014: 'Wrong Endianness in Open MPI for external32 representation'? https://www.mail-archive.com/devel@lists.open-mpi.org/msg14698.html Best Christoph - Original Message - From: "Gilles Gouaillardet" To: "Open MPI Developers" Sent: Tu

[OMPI devel] Current progress threads status in Open MPI

2016-11-17 Thread Christoph Niethammer
hat about "OMPI progress"? ompi_info --all --all | grep -i "Thread support" Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes, OMPI progress: no, ORTE progress: yes, Event lib: yes) Best regards Christoph Niethammer __

Re: [OMPI devel] Performance analysis proposal

2016-09-05 Thread Christoph Niethammer
this data and keep the history of all measurements. > > > > > > > > Is there any chance that we will not come up with a well-defined set of > > > > tests and drop the ball here? > > > > > > > > Friday

Re: [OMPI devel] Performance analysis proposal

2016-08-25 Thread Christoph Niethammer
Please check https://github.com/open-mpi/ompi/wiki/Request-refactoring-test for the testing methodology and https://github.com/open-mpi/2016-summer-perf-testing for examples and launch scripts. 2016-08-23 21:20 GMT+07:00 Christoph Niethammer <nietham...@hlrs.de>: Hello, I just came o

Re: [OMPI devel] Performance analysis proposal

2016-08-23 Thread Christoph Niethammer
Hello, I just came across this and would like to contribute some results from our system(s). Are there any specific configure options you want to see enabled besides --enable-mpi-thread-multiple? How do I submit results? Best Christoph Niethammer - Original Message - From:

Re: [OMPI devel] sm BTL performance of the openmpi-2.0.0

2016-08-10 Thread Christoph Niethammer
Hello, I can confirm that it works for me, too. Thanks! Best Christoph Niethammer - Original Message - From: tmish...@jcity.maeda.co.jp To: "Open MPI Developers" Sent: Wednesday, August 10, 2016 5:58:50 AM Subject: Re: [OMPI devel] sm BTL performance of the openmpi-2.0.0

Re: [OMPI devel] sm BTL performance of the openmpi-2.0.0

2016-08-08 Thread Christoph Niethammer
e part yet, but maybe there is a problem where interfaces are not skipped "as intended", confusing things later on? Results below. Best regards Christoph Niethammer

Re: [OMPI devel] sm BTL performance of the openmpi-2.0.0

2016-08-05 Thread Christoph Niethammer
see below. Best regards Christoph Niethammer mpirun -np 2 --mca btl self,vader osu_bw # OSU MPI

Re: [OMPI devel] sm BTL performance of the openmpi-2.0.0

2016-08-05 Thread Christoph Niethammer
there is a problem where interfaces are not skipped "as intended", confusing things later on? Results below. Best regards Christoph Niethammer

Re: [OMPI devel] stdout, stderr reporting different values for isatty

2015-07-30 Thread Christoph Niethammer
I do not think that would be hard to change. Is this a source of problems for your applications? Note this kind of behavior can be caused by the batch manager: if you use SLURM and srun instead of mpirun, I am not even sure stdout is a tty. Cheers, Gilles On Monday, July 27, 2015, Christoph

[OMPI devel] C standard compatibility

2015-07-30 Thread Christoph Niethammer
Hello, What is the C standard version to be used for the Open MPI code base? Most of the code seems to be pre-C99. The C99 features I have seen so far appear mostly in newer components: * restrict keyword * variable declarations inside for-loop heads Regards Christoph Niethammer
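
For reference, the two C99 features mentioned look like this (a minimal illustration, not Open MPI code):

    #include <stddef.h>

    /* C99: restrict-qualified pointers and a variable declaration in
     * the for-loop head. */
    static void scale(size_t n, double *restrict dst,
                      const double *restrict src)
    {
        for (size_t i = 0; i < n; i++) {
            dst[i] = 2.0 * src[i];
        }
    }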

[OMPI devel] stdout, stderr reporting different values for isatty

2015-07-27 Thread Christoph Niethammer
I see... Redirecting stdout or stderr to files does not change anything in the Open MPI case. Best regards Christoph Niethammer PS: MPICH reports 0 for isatty() on both stdout and stderr in all cases.
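
A minimal check of the behaviour discussed in this thread; run it under mpirun to compare implementations:

    #include <stdio.h>
    #include <unistd.h>

    /* Report what isatty() says for stdout and stderr; under mpirun the
     * two values can differ, which is the inconsistency reported here. */
    int main(void)
    {
        fprintf(stderr, "isatty(stdout)=%d isatty(stderr)=%d\n",
                isatty(fileno(stdout)), isatty(fileno(stderr)));
        return 0;
    }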

[OMPI devel] Missing f08 binding for Win_allocate

2014-10-15 Thread Christoph Niethammer
this into master after the svn->git transition? * Open a bug first, * fork + pull request, or * email a patch from git format-patch to the devel list? Best regards Christoph Niethammer

[OMPI devel] PML-bfo deadlocks for message size > eager limit after connection loss

2014-07-24 Thread Christoph Niethammer
--mca pml_v_verbose 100 --mca orte_base_help_aggregate 0 Some log output is attached below. I would appreciate any feedback on the current status of the bfo PML, as well as ideas on how to debug it and where to search for the problem inside the Open MPI code base. Best regards Christoph Niethammer

[OMPI devel] Wrong Endianness in Open MPI for external32 representation

2014-04-29 Thread Christoph Niethammer
Regards Christoph Niethammer #include <stdio.h> #include <rpc/xdr.h> int main(int argc, char* argv[]) { FILE *fp; XDR xdr_o
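
The preview above shows the start of the attached test program; a completed sketch along the same lines (file name illustrative; on glibc-based systems linking against libtirpc may be needed) writes one int via XDR, whose big-endian on-disk format is what external32 output should match byte for byte:

    #include <stdio.h>
    #include <rpc/xdr.h>

    /* Write the int 1 in XDR encoding; the file must contain the bytes
     * 00 00 00 01, the same bytes external32 is required to produce. */
    int main(void)
    {
        FILE *fp = fopen("xdr.out", "w");
        XDR xdrs;
        int value = 1;

        xdrstdio_create(&xdrs, fp, XDR_ENCODE);
        xdr_int(&xdrs, &value);
        xdr_destroy(&xdrs);
        fclose(fp);
        return 0;
    }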

Re: [OMPI devel] 1-question developer poll

2014-04-17 Thread Christoph Niethammer
git (Github mirror, git-svn, git patches) -- Christoph Niethammer - Original Message - From: "Jeff Sq

Re: [OMPI devel] Reviewing MPI_Dims_create

2014-02-11 Thread Christoph Niethammer
primes: 0.048702 sec optimized-factorization: 0.13 sec Regards Christoph - Original Message -

Re: [OMPI devel] Reviewing MPI_Dims_create

2014-02-11 Thread Christoph Niethammer
trunk. Best regards Christoph - Original Message - From: "Andreas Schäfer"

Re: [OMPI devel] Reviewing MPI_Dims_create

2014-02-10 Thread Christoph Niethammer
MPI Developers" Gesendet: Dienstag, 11. Februar 2014 01:32:53 Betreff: Re: [OMPI devel] Reviewing MPI_Dims_create On Feb 10, 2014, at 7:22 PM, Christoph Niethammer wrote: > 2.) Interesting idea: Using the approximation from the cited paper we should > only need around 400 MB to store all

Re: [OMPI devel] Reviewing MPI_Dims_create

2014-02-10 Thread Christoph Niethammer
prime. No complication with counts IMHO. I leave this without a patch as it is already 2:30 in the morning. :P Regards Christoph

Re: [OMPI devel] Reviewing MPI_Dims_create

2014-02-10 Thread Christoph Niethammer
Interesting idea: Using the approximation from the cited paper we should only need around 400 MB to store all primes in the int32 range. Potential for applying compression techniques is still there. ^^ Regards Christoph
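
For scale, the estimate is easy to check: the prime-counting function gives pi(2^31) ≈ 1.05 × 10^8 primes in the int32 range, and at 4 bytes per prime that is roughly 4.2 × 10^8 bytes ≈ 420 MB, in line with the ~400 MB figure.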

[OMPI devel] Reviewing MPI_Dims_create

2014-02-10 Thread Christoph Niethammer
would like to commit this to trunk for further testing (+cmr for 1.7.5?) at the end of this week. Best regards Christoph [1] http://www.ams.org/journals/mcom/1999-68-225/S0025-5718-99-01037-6/home.html

[OMPI devel] mca_bml_r2_del_btl incorrect memory size reallocation?

2014-01-23 Thread Christoph Niethammer
Hello, I think I found a minor memory bug in the bml_r2 code, in the function mca_bml_r2_del_btl, but I could not figure out when this function ever gets called. How can I test this function in a proper way? Here is the diff showing the issue: @@ -699,11 +699,11 @@ static int mca_bml_r2_del_btl(mca_btl_
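
The diff itself is truncated above; as a generic illustration of the class of bug such a fix targets (types and names hypothetical, not the actual bml_r2 code), a size miscalculation when shrinking a module array looks like this:

    #include <stdlib.h>

    struct module { void *handle; };   /* hypothetical element type */

    /* Shrink an array to new_count elements. */
    static struct module *shrink(struct module *arr, size_t new_count)
    {
        /* Buggy variant: sizeof(arr) is the size of the pointer, not of
         * the element, so too little memory is kept:
         *     return realloc(arr, sizeof(arr) * new_count);
         */
        return realloc(arr, new_count * sizeof(*arr));   /* correct */
    }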

Re: [OMPI devel] Missing --bycore option in Open MPI 1.7.?

2014-01-08 Thread Christoph Niethammer
Hi, Just found the following ticket, which answers my question: https://svn.open-mpi.org/trac/ompi/ticket/4044 Sorry for the spam. :/ Regards Christoph - Original Message - From: "Christoph Niethammer" To: "Open MPI Developers" Sent: Wednesday, 8 January 2

[OMPI devel] Missing --bycore option in Open MPI 1.7.?

2014-01-08 Thread Christoph Niethammer
Hello, Using Open MPI 1.7.3 I got the following error message when executing mpirun -np 16 --bycore /bin/hostname mpirun: Error: unknown option "--bycore" The option is documented in the man pages, and with Open MPI 1.6.5 everything works fine. For --bysocket I get the same error, but --bynode see
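
For anyone hitting the same error: the 1.7 series replaced these shortcuts with the --map-by syntax, so the equivalent invocation should be mpirun -np 16 --map-by core /bin/hostname (likewise --map-by socket and --map-by node); see the ticket referenced in the reply above.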

Re: [OMPI devel] Annual OMPI membership review: SVN accounts

2013-07-09 Thread Christoph Niethammer
You can remove Shiqing. Rainer should be listed in the future under hft-stuttgart.de. ;) Regards Christoph > hlrs.de > === > shiqing: Shiqing Fan > hpcchris: Christoph Niethammer > rusraink: Rainer Keller **NO COMMITS IN LAST YEAR**

[OMPI devel] Datasize confusion in MPI_Write can lead to data loss!

2008-02-08 Thread Christoph Niethammer
write function in the "contiguous" way on the data and should take care of the gaps. Regards Christoph Niethammer
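
A sketch of the MPI-IO semantics at issue (file name illustrative): the write consumes the user buffer contiguously, while the filetype installed by the file view places the data in the file and skips the gaps.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Datatype filetype;
        int buf[8] = {0, 1, 2, 3, 4, 5, 6, 7};

        MPI_Init(&argc, &argv);
        /* 4 blocks of 2 ints with stride 4: each block in the file is
         * followed by a 2-int hole. */
        MPI_Type_vector(4, 2, 4, MPI_INT, &filetype);
        MPI_Type_commit(&filetype);

        MPI_File_open(MPI_COMM_SELF, "view.out",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);
        /* The buffer is read contiguously; the view handles the gaps. */
        MPI_File_write(fh, buf, 8, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Type_free(&filetype);
        MPI_Finalize();
        return 0;
    }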

[OMPI devel] patch for btl_sm.c fixing segmentation fault

2007-07-11 Thread Christoph Niethammer
Hello, For some time now I have been testing Open MPI at HLRS. My main topic there is Open MPI's thread support. Some time ago I found a segmentation fault when running the SVN trunk version. With Sven's help I have now located it in the shared memory BTL. (ompi/mca/btl/sm/btl_