Looks okay - good to go
On Aug 22, 2014, at 12:09 PM, Jeff Squyres (jsquyres) wrote:
> No -- most of these were not user-visible, or they were fixes from fixes
> post-1.8.1.
>
> I think the relevant ones were put in NEWS already. I'm recording a podcast
> right now -- can you double check?
I think these are fixed now - at least, your test cases all pass for me
On Aug 22, 2014, at 9:12 AM, Ralph Castain wrote:
>
> On Aug 22, 2014, at 9:06 AM, Gilles Gouaillardet wrote:
>
>> Ralph,
>>
>> Will do on Monday
>>
>> About the first test, in my case echo $? returns 0
>
> My "sho
No -- most of these were not user-visible, or they were fixes from fixes
post-1.8.1.
I think the relevant ones were put in NEWS already. I'm recording a podcast
right now -- can you double check?
On Aug 22, 2014, at 2:42 PM, Ralph Castain wrote:
> Did you update the NEWS with these?
>
> On Aug 22, 2014, at 11:33 AM, Jeff Squyres (jsquyres) wrote:
Did you update the NEWS with these?
On Aug 22, 2014, at 11:33 AM, Jeff Squyres (jsquyres) wrote:
> In the usual location:
>
> http://www.open-mpi.org/software/ompi/v1.8/
>
> Changes since rc4:
>
> - Add missing atomics stuff into the tarball
> - fortran: add missing bindings for WIN_SYNC, WIN_LOCK_ALL, WIN_UNLOCK_ALL
In the usual location:
http://www.open-mpi.org/software/ompi/v1.8/
Changes since rc4:
- Add missing atomics stuff into the tarball
- fortran: add missing bindings for WIN_SYNC, WIN_LOCK_ALL, WIN_UNLOCK_ALL
- README updates
- usnic: ensure safe destruction of an opal_list_item_t
- rem
On Aug 22, 2014, at 9:06 AM, Gilles Gouaillardet wrote:
> Ralph,
>
> Will do on Monday
>
> About the first test, in my case echo $? returns 0
My "showcode" is just an alias for the echo
> I noticed this confusing message in your output:
> mpirun noticed that process rank 0 with PID 24382 on node bend002 exited on
> signal 0 (Unknown signal 0).
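For context: a process that returns from main() or calls exit() is reported by waitpid() as a normal exit, and a signal number is only meaningful when the child was actually killed by a signal. A minimal sketch (plain POSIX C, not Open MPI's actual code) of how a launcher is expected to classify a child's termination:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch only -- not Open MPI code: classify how a child terminated. */
    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            _exit(3);               /* child exits normally with status 3 */
        }

        int status;
        waitpid(pid, &status, 0);

        if (WIFEXITED(status)) {
            /* Normal exit: report the exit code; no signal is involved. */
            printf("exited with status %d\n", WEXITSTATUS(status));
        } else if (WIFSIGNALED(status)) {
            /* Only here does a signal number make sense to print. */
            printf("killed by signal %d\n", WTERMSIG(status));
        }
        return 0;
    }

Reporting "signal 0 (Unknown signal 0)" for a process that exited normally suggests the signal-reporting branch was taken when it should not have been, which is presumably why the message reads so oddly.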
Ralph,
Will do on Monday
About the first test, in my case echo $? returns 0
I noticed this confusing message in your output:
mpirun noticed that process rank 0 with PID 24382 on node bend002 exited on
signal 0 (Unknown signal 0).
About the second test, please note my test program returns 3;
whe
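The test source itself is not included in these excerpts; a minimal sketch of a program along the lines described (an MPI program whose main simply returns 3; the file name here is illustrative) would be:

    #include <mpi.h>

    /* Sketch only: an MPI program that exits with status 3.
     * If mpirun propagates the exit status, running
     *     mpirun -np 1 ./ret3 ; echo $?
     * should print 3; printing 0 is the behaviour under discussion. */
    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        MPI_Finalize();
        return 3;
    }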
You might want to try again with the current head of trunk, as something seems off
in what you are seeing - more below
On Aug 22, 2014, at 3:12 AM, Gilles Gouaillardet wrote:
> Ralph,
>
> I tried again after the merge and found the same behaviour, though the
> internals are very different.
>
> I run without any batch manager
Hi again,
I generated a video that demonstrates the problem; for brevity I did
not run a full process, but I'm providing the timing below. If you'd
like me to record a full process, just let me know -- but as I said in
my previous email, 32 procs drop to 1 after about a minute and the
computation
Ralph,
I tried again after the merge and found the same behaviour, though the
internals are very different.
I run without any batch manager.
From node0:
mpirun -np 1 --mca btl tcp,self -host node1 ./abort
exits with exit code zero :-(
Short story: I applied pmix.2.patch and that fixed my problem.
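The ./abort source is not shown in the thread either; presumably it is something along these lines (a sketch, not the actual test): call MPI_Abort with a non-zero error code, after which mpirun itself should exit non-zero.

    #include <mpi.h>

    /* Sketch only: abort the job with a non-zero error code.
     * mpirun is expected to exit with a non-zero status, not zero. */
    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        MPI_Abort(MPI_COMM_WORLD, 2);
        MPI_Finalize();   /* not reached */
        return 0;
    }

Run it with the mpirun command above and check the status with echo $?.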
Hi Ralph, Chris,
You guys are both correct:
(1) The output that I passed along /is/ indicative of only 32 processors
running (provided htop reports things correctly). The job I
submitted is the exact same process called 48 times (well, np
times), so all procs should take about the same