Hello Tommy,
I'd appreciate it if you could have a quick look at PR 13039 [1] from my student
here at HLRS.
He implemented message aggregation for partitioned P2P in a new module, based
on our work presented at EuroMPI/Australia last year.
Feedback welcome. :)
Best
Christoph
[1] https://github.com/open
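For list readers who have not used partitioned point-to-point yet, here is a
minimal sketch of the MPI-4 API the PR builds on. The buffer sizes are made up
and this is not code from the PR itself:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Request req;
    enum { PARTS = 8, PER_PART = 1024 };   /* made-up sizes */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double *buf = malloc(PARTS * PER_PART * sizeof(double));
    if (rank == 0) {
        MPI_Psend_init(buf, PARTS, PER_PART, MPI_DOUBLE, 1, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        for (int p = 0; p < PARTS; p++) {
            /* ... fill partition p of buf ... */
            MPI_Pready(p, req);   /* partition p may be sent (or aggregated) now */
        }
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    } else if (rank == 1) {
        MPI_Precv_init(buf, PARTS, PER_PART, MPI_DOUBLE, 0, 0,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);
        MPI_Start(&req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Request_free(&req);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Aggregating several small partitions marked via MPI_Pready into one transfer is
exactly the kind of optimization such a module can apply underneath this
interface.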
Hello Austen,
Unfortunately I could not attend the last telco, so I do not know whether this
was discussed.
I'd therefore like to draw attention to
https://github.com/mpi-forum/mpi-issues/issues/765
It has not been voted on yet but seems to have enough support to go in.
As Partitioned communication is [...], the Open MPI documentation could
include some more information and maybe best practices about this.
Best
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
Works for the installation at HLRS.
A short note/question: I am using the mtt-relay script, which is written in Perl.
Is there a Python-based replacement?
Best
Christoph Niethammer
- Original Message -
From: "Open MPI Developers"
To: "Open MPI Developers"
Cc: "
/md5sums.txt:
2018-03-07 23:16:19 ERROR 403: Forbidden.
...
Fetching the tarballs themselves works fine.
Is there anything else I have to change in the setup?
Best
Christoph Niethammer
- Original Message -
From: "Open MPI Developers"
To: "Open MPI Developers"
Cc: "Barrett, Brian
left over temporary I/O files in /tmp
Hi Chris
Please go ahead and open a PR for master and I'll open corresponding ones for
the release branches.
Howard
Christoph Niethammer <nietham...@hlrs.de> wrote on Thu, 22 June 2017 at 01:10:
Hi
<hpprit...@gmail.com>:
Hi Chris,
Thanks very much for the patch!
Howard
2017-06-21 9:43 GMT-06:00 Christoph Niethammer <nietham...@hlrs.de>:
Hello Ralph,
Thanks for the update on this issue.
I used the latest master (c38866eb392
me before - anything we make should be
under the session directory, not directly in /tmp.
> On May 9, 2017, at 2:10 AM, Christoph Niethammer wrote:
>
> Hi,
>
> I am using Open MPI 2.1.0.
>
> Best
> Christoph
>
> - Original Message -
> From: "R
What version of Open MPI are you using?
> On May 8, 2017, at 8:56 AM, Christoph Niethammer wrote:
>
> Hello
>
> According to the manpage "...orte-clean attempts to clean up any processes
> and files left over from Open MPI jobs that were run in the past as well as
> any currently running jobs
[...] the job, and any temporary files...".
If I now have a program which calls MPI_File_open, MPI_File_write, and
MPI_Abort() in that order, I get leftover files /tmp/OMPI_*.sm.
Running orte-clean does not remove them.
Is this a bug or a feature?
Best
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
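A minimal reproducer for the behavior described above might look like this.
The file name and payload are made up; the point is only the missing
MPI_File_close before the abort:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_File fh;
    int data = 42;                       /* made-up payload */
    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "test.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, &data, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_Abort(MPI_COMM_WORLD, 1);        /* no MPI_File_close: shared-memory
                                            backing files in /tmp can remain */
    return 0;
}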
Cheers,
Gilles
On Tuesday, February 28, 2017, Christoph Niethammer <nietham...@hlrs.de> wrote:
Hi Gilles,
Is this the same issue I reported 4/29/2014: 'Wrong Endianness in Open MPI for
external32 representation'?
https://www.mail-archive.com/devel@lists.open-mpi.org/msg14698.html
Best
Christoph
- Original Message -
From: "Gilles Gouaillardet"
To: "Open MPI Developers"
Sent: Tu
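For reference, the external32 representation discussed in that thread is
selected through the datarep argument of MPI_File_set_view. A minimal sketch
(file name made up) that should produce big-endian data on disk:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_File fh;
    int v = 1;
    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_SELF, "ext32.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    /* external32 is defined as a canonical, big-endian representation */
    MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "external32", MPI_INFO_NULL);
    MPI_File_write(fh, &v, 1, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}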
What about "OMPI progress"?
ompi_info --all --all | grep -i "Thread support"
Thread support: posix (MPI_THREAD_MULTIPLE: no, OPAL support: yes, OMPI
progress: no, ORTE progress: yes, Event lib: yes)
Best regards
Christoph Niethammer
this data and keep the history of all measurements.
> > > >
> > > > Is there any chance that we will not come up with a well-defined set of
> > > > tests and drop the ball here?
> > > >
> > > > Fri
Please, check https://github.com/open-mpi/ompi/wiki/Request-refactoring-test
for the testing methodology and
https://github.com/open-mpi/2016-summer-perf-testing
for examples and launch scripts.
2016-08-23 21:20 GMT+07:00 Christoph Niethammer <nietham...@hlrs.de>:
Hello,
I just came across this and would like to contribute some results from our
system(s).
Are there any specific configure options you want enabled besides
--enable-mpi-thread-multiple?
How do I submit results?
Best
Christoph Niethammer
- Original Message -
From: &quo
Hello,
I can confirm that it works for me, too.
Thanks!
Best
Christoph Niethammer
- Original Message -
From: tmish...@jcity.maeda.co.jp
To: "Open MPI Developers"
Sent: Wednesday, August 10, 2016 5:58:50 AM
Subject: Re: [OMPI devel] sm BTL performace of the openmpi-2.0.0
e part yet, but maybe there is a problem where confusing interfaces are not
skipped "as intended" later on?
Results see below.
Best regards
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
mpirun -np 2 --mca btl self,vader osu_bw
# OSU MPI
I do not think that would be hard to change.
Is this a source of problems for your applications?
Note that this kind of behavior can be caused by the batch manager.
If you use Slurm and srun instead of mpirun, I am not even sure stdout is a
tty.
Cheers,
Gilles
On Monday, July 27, 2015, Christoph
Hello,
What is the C standard version to be used for the Open MPI code base?
Most of the code seems to be pre-C99.
C99 features I have seen so far, mostly in newer components (see the sketch
after this message):
* the restrict keyword
* variable declarations inside for loop heads
Regards
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
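For readers who want the concrete picture, a minimal sketch of the two C99
features in question (function and names are made up for illustration):

#include <stddef.h>

/* C99: restrict-qualified pointers and a declaration in the for head. */
void axpy(size_t n, double a, const double *restrict x, double *restrict y) {
    for (size_t i = 0; i < n; i++)   /* C99: loop variable declared here */
        y[i] += a * x[i];            /* restrict: x and y must not alias */
}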
I see...
Redirecting stdout or stderr to files does not change anything in the Open MPI
case.
Best regards
Christoph Niethammer
PS: MPICH reports in all cases 0 for isatty() on stdout and stderr.
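A standalone check along these lines, independent of any MPI launcher, can be
used to compare the behavior under mpirun, srun, and plain shell redirection:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* isatty() returns 1 if the stream is attached to a terminal, 0 otherwise */
    printf("isatty(stdout) = %d\n", isatty(fileno(stdout)));
    fprintf(stderr, "isatty(stderr) = %d\n", isatty(fileno(stderr)));
    return 0;
}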
this into master after the svn->git transition? Which is preferred:
* open a bug first,
* fork + pull request, or
* email a patch from git format-patch to the devel list?
Best regards
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++4
--mca pml_v_verbose 100 --mca
orte_base_help_aggregate 0
Some log output is attached below.
I would appreciate any feedback concerning the current status of the bfo PML,
as well as ideas on how to debug this and where to search for the problem
inside the Open MPI code base.
Best regards
Christoph Niethammer
--
Christoph Niethammer
Regards
Christoph Niethammer
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
#include <stdio.h>     /* header names were cut off in the archive; completed from context */
#include <rpc/xdr.h>
int main(int argc, char* argv[]) {
    FILE *fp = fopen("data.xdr", "w");           /* file name is a guess */
    XDR xdr_out;                                 /* truncated "xdr_o" completed as xdr_out */
    xdrstdio_create(&xdr_out, fp, XDR_ENCODE);   /* the rest of the original snippet was cut off */
    xdr_destroy(&xdr_out);
    fclose(fp);
    return 0;
}
git (Github mirror, git-svn, git patches)
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
- Original Message -
From: "Jeff Sq
primes: 0.048702 sec
optimized-factorization: 0.13 sec
Regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
- Original Message -
trunk.
Best regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
- Original Message -
From: "Andreas Schäfer"
To: "Open MPI Developers"
Sent: Tuesday, February 11, 2014 01:32:53
Subject: Re: [OMPI devel] Reviewing MPI_Dims_create
On Feb 10, 2014, at 7:22 PM, Christoph Niethammer wrote:
> 2.) Interesting idea: Using the approximation from the cited paper we should
> only need around 400 MB to store all
prime. No complication with counts IMHO. I leave this without a patch as it is
already 2:30 in the morning. :P
Regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart
Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer
Interesting idea: Using the approximation from the cited paper we should
only need around 400 MB to store all primes in the int32 range. Potential for
applying compression techniques still present. ^^
Regards
Christoph
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
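As background for the timings above, the core of the problem is the prime
factorization MPI_Dims_create needs. A minimal trial-division sketch,
illustrative only and not the code under review:

#include <stdio.h>

/* Print the prime factors of n by trial division. */
static void factor(int n) {
    for (int p = 2; (long long)p * p <= n; p++)
        while (n % p == 0) {
            printf("%d ", p);
            n /= p;
        }
    if (n > 1)
        printf("%d", n);   /* whatever remains is prime */
    printf("\n");
}

int main(void) {
    factor(360);   /* prints: 2 2 2 3 3 5 */
    return 0;
}

Precomputing a prime table (the ~400 MB estimate discussed above) trades
memory for avoiding this loop on every call.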
I would like to commit this to trunk for further testing (+cmr for 1.7.5?) at
the end of this week.
Best regards
Christoph
[1] http://www.ams.org/journals/mcom/1999-68-225/S0025-5718-99-01037-6/home.html
--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 St
Hello
I think I found a minor memory bug in the bml_r2 code, in the function
mca_bml_r2_del_btl, but I could not figure out when this function ever gets
called.
How can I test this function properly?
Here is the diff showing the issue:
@@ -699,11 +699,11 @@ static int mca_bml_r2_del_btl(mca_btl_
Hi,
I just found the following ticket, which answers my question:
https://svn.open-mpi.org/trac/ompi/ticket/4044
Sorry for the spam. :/
Regards
Christoph
- Original Message -
From: "Christoph Niethammer"
To: "Open MPI Developers"
Sent: Wednesday, January 8, 2
Hello
Using Open MPI 1.7.3 I got the following error message when executing
mpirun -np 16 --bycore /bin/hostname
mpirun: Error: unknown option "--bycore"
The option is documented in the man pages, and with Open MPI 1.6.5 everything
works fine.
For --bysocket I get the same error, but --bynode see
You can remove Shiqing.
Rainer should in future be listed under hft-stuttgart.de. ;)
Regards
Christoph
> hlrs.de
> ===
> shiqing: Shiqing Fan
> hpcchris: Christoph Niethammer
> rusraink: Rainer Keller **NO COMMITS IN LAST YEAR**
write function in the "continuous" way on the data and should take care of the
gaps.
Regards
Christoph Niethammer
Hello,
For some time now I have been testing Open MPI at HLRS. My main topic is the
thread support of Open MPI.
Some time ago I found a segmentation fault when running the svn-trunk version.
Thanks to Sven's help I have now located it in the shared memory
btl. (ompi/mca/btl/sm/btl_