Re: [OMPI users] Mixed Mellanox and Qlogic problems

2011-07-27 Thread David Warren
OK, I was finally able to get on and run some OFED tests. It looks to me 
like I must have something configured wrong with the QLogic cards, 
but I have no idea what.
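
(For reference, these pingpong tests run pairwise: ibv_rc_pingpong started with 
no arguments acts as the server on one node, and ibv_rc_pingpong <server_host> 
run from the other node is the client; the uc and srq variants work the same 
way. Only the client side is shown below.)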


Mellanox to QLogic:
 ibv_rc_pingpong n15
  local address:  LID 0x0006, QPN 0x240049, PSN 0x87f83a, GID ::
  remote address: LID 0x000d, QPN 0x00b7cb, PSN 0xcc9dee, GID ::
8192000 bytes in 0.01 seconds = 4565.38 Mbit/sec
1000 iters in 0.01 seconds = 14.35 usec/iter

ibv_srq_pingpong n15
  local address:  LID 0x0006, QPN 0x280049, PSN 0xf83e06, GID ::
 ...
8192000 bytes in 0.01 seconds = 9829.91 Mbit/sec
1000 iters in 0.01 seconds = 6.67 usec/iter

ibv_uc_pingpong n15
  local address:  LID 0x0006, QPN 0x680049, PSN 0x7b33d2, GID ::
  remote address: LID 0x000d, QPN 0x00b7ed, PSN 0x7fafaa, GID ::
8192000 bytes in 0.02 seconds = 4080.19 Mbit/sec
1000 iters in 0.02 seconds = 16.06 usec/iter

QLogic to QLogic:

ibv_rc_pingpong n15
  local address:  LID 0x000b, QPN 0x00afb7, PSN 0x3f08df, GID ::
  remote address: LID 0x000d, QPN 0x00b7ef, PSN 0xd15096, GID ::
8192000 bytes in 0.02 seconds = 3223.13 Mbit/sec
1000 iters in 0.02 seconds = 20.33 usec/iter

ibv_srq_pingpong n15
  local address:  LID 0x000b, QPN 0x00afb9, PSN 0x9cdde3, GID ::
 ...
8192000 bytes in 0.01 seconds = 9018.30 Mbit/sec
1000 iters in 0.01 seconds = 7.27 usec/iter

ibv_uc_pingpong n15
  local address:  LID 0x000b, QPN 0x00afd9, PSN 0x98cfa0, GID ::
  remote address: LID 0x000d, QPN 0x00b811, PSN 0x0a0d6e, GID ::
8192000 bytes in 0.02 seconds = 3318.28 Mbit/sec
1000 iters in 0.02 seconds = 19.75 usec/iter

Mellanox to Mellanox:

ibv_rc_pingpong n5
  local address:  LID 0x0009, QPN 0x240049, PSN 0xd72119, GID ::
  remote address: LID 0x0006, QPN 0x6c0049, PSN 0xc1909e, GID ::
8192000 bytes in 0.01 seconds = 7121.93 Mbit/sec
1000 iters in 0.01 seconds = 9.20 usec/iter

ibv_srq_pingpong n5
  local address:  LID 0x0009, QPN 0x280049, PSN 0x78f4f7, GID ::
...
8192000 bytes in 0.00 seconds = 24619.08 Mbit/sec
1000 iters in 0.00 seconds = 2.66 usec/iter

ibv_uc_pingpong n5
  local address:  LID 0x0009, QPN 0x680049, PSN 0x4002ea, GID ::
  remote address: LID 0x0006, QPN 0x300049, PSN 0x29abf0, GID ::
8192000 bytes in 0.01 seconds = 7176.52 Mbit/sec
1000 iters in 0.01 seconds = 9.13 usec/iter


On 07/17/11 05:49, Jeff Squyres wrote:

Interesting.

Try with the native OFED benchmarks -- i.e., get MPI out of the way and see if 
the raw/native performance of the network between the devices reflects the same 
dichotomy.

(e.g., ibv_rc_pingpong)


On Jul 15, 2011, at 7:58 PM, David Warren wrote:

   

All OFED 1.4 and kernel 2.6.32 (that's what I can get to today)
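(The numbers below are from the OSU micro-benchmarks v3.3, run pairwise with 
something along the lines of mpirun -np 2 -host nodeA,nodeB osu_latency; the 
hostnames here are placeholders.)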
qib to qib:

# OSU MPI Latency Test v3.3
# Size       Latency (us)
0            0.29
1            0.32
2            0.31
4            0.32
8            0.32
16           0.35
32           0.35
64           0.47
128          0.47
256          0.50
512          0.53
1024         0.66
2048         0.88
4096         1.24
8192         1.89
16384        3.94
32768        5.94
65536        9.79
131072       18.93
262144       37.36
524288       71.90
1048576      189.62
2097152      478.55
4194304      1148.80

# OSU MPI Bandwidth Test v3.3
# Size       Bandwidth (MB/s)
1            2.48
2            5.00
4            10.04
8            20.02
16           33.22
32           67.32
64           134.65
128          260.30
256          486.44
512          860.77
1024         1385.54
2048         1940.68
4096         2231.20
8192         2343.30
16384        2944.99
32768        3213.77
65536        3174.85
131072       3220.07
262144       3259.48
524288       3277.05
1048576      3283.97
2097152      3288.91
4194304      3291.84

# OSU MPI Bi-Directional Bandwidth Test v3.3
# Size       Bi-Bandwidth (MB/s)
1            3.10
2            6.21
4            13.08
8            26.91
16           41.00
32           78.17
64           161.13
128          312.08
256          588.18
512          968.32
1024         1683.42
2048         2513.86
4096         2948.11
8192         2918.39
16384        3370.28
32768        3543.99
65536        4159.99
131072       4709.73
262144       4733.31
524288       4795.44
1048576      4753.69
2097152      4786.11
4194304

Re: [OMPI users] Seg fault with PBS Pro 10.4

2011-07-27 Thread Ralph Castain
Great - thanks!

On Jul 27, 2011, at 12:16 PM, Justin Wood wrote:

> I heard back from my Altair contact this morning.  He told me that they did 
> in fact make a change in some version of 10.x that broke this.  They don't 
> have a workaround for v10, but he said it was fixed in v11.x.
> 
> I built OpenMPI 1.5.3 this morning with PBSPro v11.0, and it works fine.  I 
> don't get any segfaults.
> 
> -Justin.
> 
> On 07/26/2011 05:49 PM, Ralph Castain wrote:
>> I don't believe we ever got anywhere with this due to lack of response. If 
>> you get some info on what happened to tm_init, please pass it along.
>> 
>> Best guess: something changed in a recent PBS Pro release. Since none of us 
>> have access to it, we don't know what's going on. :-(
>> 
>> 
>> On Jul 26, 2011, at 10:10 AM, Wood, Justin Contractor, SAIC wrote:
>> 
>>> I'm having a problem using OpenMPI under PBS Pro 10.4.  I tried both 1.4.3 
>>> and 1.5.3, both behave the same.  I'm able to run just fine if I don't use 
>>> PBS and go direct to the nodes.  Also, if I run under PBS and use only 1 
>>> node, it works fine, but as soon as I span nodes, I get the following:
>>> 
>>> [a4ou-n501:07366] *** Process received signal ***
>>> [a4ou-n501:07366] Signal: Segmentation fault (11)
>>> [a4ou-n501:07366] Signal code: Address not mapped (1)
>>> [a4ou-n501:07366] Failing at address: 0x3f
>>> [a4ou-n501:07366] [ 0] /lib64/libpthread.so.0 [0x3f2b20eb10]
>>> [a4ou-n501:07366] [ 1] 
>>> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(discui_+0x84) [0x2affa453765c]
>>> [a4ou-n501:07366] [ 2] 
>>> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(diswsi+0xc3) [0x2affa4534c6f]
>>> [a4ou-n501:07366] [ 3] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
>>> [0x2affa453290c]
>>> [a4ou-n501:07366] [ 4] 
>>> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(tm_init+0x1fe) [0x2affa4532bf8]
>>> [a4ou-n501:07366] [ 5] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
>>> [0x2affa452691c]
>>> [a4ou-n501:07366] [ 6] mpirun [0x404c17]
>>> [a4ou-n501:07366] [ 7] mpirun [0x403e28]
>>> [a4ou-n501:07366] [ 8] /lib64/libc.so.6(__libc_start_main+0xf4) 
>>> [0x3f2a61d994]
>>> [a4ou-n501:07366] [ 9] mpirun [0x403d59]
>>> [a4ou-n501:07366] *** End of error message ***
>>> Segmentation fault
>>> 
>>> I searched the archives and found a similar issue from last year:
>>> 
>>> http://www.open-mpi.org/community/lists/users/2010/02/12084.php
>>> 
>>> The last update I saw was that someone was going to contact Altair and have 
>>> them look at why it was failing to do the tm_init.  Does anyone have an 
>>> update to this, and has anyone been able to run successfully using recent 
>>> versions of PBSPro?  I've also contacted our rep at Altair, but he hasn't 
>>> responded yet.
>>> 
>>> Thanks, Justin.
>>> 
>>> Justin Wood
>>> Systems Engineer
>>> FNMOC | SAIC
>>> 7 Grace Hopper, Stop 1
>>> Monterey, CA
>>> justin.g.wood@navy.mil
>>> justin.g.w...@saic.com
>>> office: 831.656.4671
>>> mobile: 831.869.1576
>>> 
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> -- 
> Justin Wood
> Systems Engineer
> FNMOC | SAIC
> 7 Grace Hopper, Stop 1
> Monterey, CA
> justin.g.wood@navy.mil
> justin.g.w...@saic.com
> office: 831.656.4671
> mobile: 831.869.1576
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Seg fault with PBS Pro 10.4

2011-07-27 Thread Justin Wood
I heard back from my Altair contact this morning.  He told me that they 
did in fact make a change in some version of 10.x that broke this.  They 
don't have a workaround for v10, but he said it was fixed in v11.x.


I built OpenMPI 1.5.3 this morning with PBSPro v11.0, and it works fine. 
I don't get any segfaults.
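
(For reference, a typical build against PBS Pro's TM library looks something like

   ./configure --prefix=/opt/ompi/1.5.3 --with-tm=/opt/pbs/default
   make all install

where the prefix and the PBS installation path are site-specific placeholders; 
--with-tm simply points the build at the directory containing the TM headers 
and library that tm_init() comes from.)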


-Justin.

On 07/26/2011 05:49 PM, Ralph Castain wrote:

I don't believe we ever got anywhere with this due to lack of response. If you 
get some info on what happened to tm_init, please pass it along.

Best guess: something changed in a recent PBS Pro release. Since none of us 
have access to it, we don't know what's going on. :-(


On Jul 26, 2011, at 10:10 AM, Wood, Justin Contractor, SAIC wrote:


I'm having a problem using OpenMPI under PBS Pro 10.4.  I tried both 1.4.3 and 
1.5.3, both behave the same.  I'm able to run just fine if I don't use PBS and 
go direct to the nodes.  Also, if I run under PBS and use only 1 node, it works 
fine, but as soon as I span nodes, I get the following:

[a4ou-n501:07366] *** Process received signal ***
[a4ou-n501:07366] Signal: Segmentation fault (11)
[a4ou-n501:07366] Signal code: Address not mapped (1)
[a4ou-n501:07366] Failing at address: 0x3f
[a4ou-n501:07366] [ 0] /lib64/libpthread.so.0 [0x3f2b20eb10]
[a4ou-n501:07366] [ 1] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(discui_+0x84) 
[0x2affa453765c]
[a4ou-n501:07366] [ 2] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(diswsi+0xc3) 
[0x2affa4534c6f]
[a4ou-n501:07366] [ 3] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
[0x2affa453290c]
[a4ou-n501:07366] [ 4] 
/opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(tm_init+0x1fe) [0x2affa4532bf8]
[a4ou-n501:07366] [ 5] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
[0x2affa452691c]
[a4ou-n501:07366] [ 6] mpirun [0x404c17]
[a4ou-n501:07366] [ 7] mpirun [0x403e28]
[a4ou-n501:07366] [ 8] /lib64/libc.so.6(__libc_start_main+0xf4) [0x3f2a61d994]
[a4ou-n501:07366] [ 9] mpirun [0x403d59]
[a4ou-n501:07366] *** End of error message ***
Segmentation fault

I searched the archives and found a similar issue from last year:

http://www.open-mpi.org/community/lists/users/2010/02/12084.php

The last update I saw was that someone was going to contact Altair and have 
them look at why it was failing to do the tm_init.  Does anyone have an update 
to this, and has anyone been able to run successfully using recent versions of 
PBSPro?  I've also contacted our rep at Altair, but he hasn't responded yet.

Thanks, Justin.

Justin Wood
Systems Engineer
FNMOC | SAIC
7 Grace Hopper, Stop 1
Monterey, CA
justin.g.wood@navy.mil
justin.g.w...@saic.com
office: 831.656.4671
mobile: 831.869.1576


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Justin Wood
Systems Engineer
FNMOC | SAIC
7 Grace Hopper, Stop 1
Monterey, CA
justin.g.wood@navy.mil
justin.g.w...@saic.com
office: 831.656.4671
mobile: 831.869.1576


Re: [OMPI users] Can run OpenMPI testcode on 86 or fewer slots in cluster, but nothing more than that

2011-07-27 Thread Lane, William
Thank you for your help Ralph and Reuti,

The problem turned out to be an insufficient number of file descriptors.

The reason given by a sysadmin was that, since SGE isn't a user, it wasn't 
initially picking up the new upper bound on the number of file descriptors.
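
(For anyone hitting the same symptom, a quick sanity check is something like

   ulimit -n                    # per-process limit in the current shell
   cat /proc/sys/fs/file-max    # system-wide limit

and, because daemons such as sge_execd inherit their limits when they start, 
they generally need to be restarted after the limits are raised before jobs 
see the new values.)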

-Bill Lane





From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of 
Ralph Castain [r...@open-mpi.org]
Sent: Tuesday, July 26, 2011 1:22 PM
To: Open MPI Users
Subject: Re: [OMPI users] Can run OpenMPI testcode on 86 or fewer slots in  
cluster, but nothing more than that

On Jul 26, 2011, at 1:58 PM, Reuti wrote:
 allocation_rule    $fill_up
>>>
>>> Here you specify to fill one machine completely before gathering slots 
>>> from the next machine. You can change this to $round_robin to get one 
>>> slot from each node before taking a second slot from any particular 
>>> machine. If you prefer a fixed allocation, you could also put an integer 
>>> here.
>>
>> Remember, OMPI only uses SGE to launch one daemon per node. The placement of MPI 
>> procs is totally up to mpirun itself, which doesn't look at any SGE envar.
>
> I thought this was the purpose of using --with-sge during configure: you don't 
> have to provide any hostlist at all, and Open MPI will honor the allocation by 
> reading SGE envars to get the granted slots?
>

We use the envars to determine how many slots were allocated, but not the 
placement. So we'll look at the envar to get the number of slots allocated on 
each node, but we then determine the layout of processes against the slots. To 
the point, we don't look at an sge envar to determine how that layout is to be 
done.

I was only trying to point out the difference. I admit it can be confusing when 
using sge, especially since sge doesn't actually have visibility into the MPI 
procs themselves (i.e., the only processes launched by sge are the daemons).
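
(For anyone trying to steer that layout: the PE's allocation_rule only shapes 
how many slots SGE grants on each node, while the rank-to-node mapping is 
chosen by mpirun options such as -byslot (fill each node's slots first, the 
default) or -bynode (round-robin across nodes). A minimal sketch, with the PE 
name and binary as placeholders:

   qsub -pe orte 24 job.sh          # SGE grants 24 slots per its allocation_rule
   mpirun -bynode -np 24 ./a.out    # inside job.sh: OMPI round-robins ranks over the granted nodes
)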



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
IMPORTANT WARNING: This message is intended for the use of the person or entity 
to which it is addressed and may contain information that is privileged and 
confidential, the disclosure of which is governed by applicable law. If the 
reader of this message is not the intended recipient, or the employee or agent 
responsible for delivering it to the intended recipient, you are hereby 
notified that any dissemination, distribution or copying of this information is 
STRICTLY PROHIBITED. If you have received this message in error, please notify 
us immediately by calling (310) 423-6428 and destroy the related message. Thank 
You for your cooperation.



Re: [OMPI users] parallel I/O on 64-bit indexed arays

2011-07-27 Thread Troels Haugboelle
For the benefit of people running into similar problems and ending up 
reading this thread, we finally found a solution.


One can use the MPI function MPI_TYPE_CREATE_HINDEXED to create an MPI 
datatype with a 32-bit local element count and 64-bit byte offsets, which 
works well enough for us for the time being.


Specifically, the code looks like this:

  ! Create vector type with 64-bit offset measured in bytes
  CALL MPI_TYPE_CREATE_HINDEXED(1, local_particle_number, &
       offset_in_global_particle_array, MPI_REAL, filetype, err)
  CALL MPI_TYPE_COMMIT(filetype, err)

  ! Write data
  CALL MPI_FILE_SET_VIEW(file_handle, file_position, MPI_REAL, &
       filetype, 'native', MPI_INFO_NULL, err)
  CALL MPI_FILE_WRITE_ALL(file_handle, data, local_particle_number, &
       MPI_REAL, status, err)
  file_position = file_position + global_particle_number

  ! Free type
  CALL MPI_TYPE_FREE(filetype, err)

and we get good (+GB/s) performance when writing files from large runs.

Interestingly, an alternative and conceptually simpler option is to use 
MPI_FILE_WRITE_ORDERED, but the performance of that function on 
Blue-Gene/P sucks - 20 MB/s instead of GB/s. I do not know why.


best,

Troels

On 6/7/11 15:04 , Jeff Squyres wrote:

On Jun 7, 2011, at 4:53 AM, Troels Haugboelle wrote:


In principle yes, but the problem is that we have an unequal number of particles 
on each node, so the length of each array is not guaranteed to be divisible by 2, 
4, or any other number. If I have understood the definition of 
MPI_TYPE_CREATE_SUBARRAY correctly, the offset can be 64-bit, but the global 
array size cannot. So, optimally, what I am looking for is a simple vector type 
with an unequal size for each rank, and with 64-bit offsets and a 64-bit global 
array size.

It's a bit awkward, but you can still make datatypes to give the offset that 
you want.  E.g., if you need an offset of 2B+31 bytes, you can make datatype A 
with type contig of N=(2B/sizeof(int)) int's.  Then make datatype B with type 
struct, containing type A and 31 MPI_BYTEs.  Then use 1 instance of datatype B 
to get the offset that you want.

You could make utility functions that, given a specific (64 bit) offset, it 
makes an MPI datatype that matches the offset, and then frees it (and all 
sub-datatypes).

There is a bit of overhead in creating these datatypes, but it should be 
dwarfed by the amount of data that you're reading/writing, right?

It's awkward, but it should work.
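
For readers who land here later, here is a minimal Fortran sketch of the kind of 
helper Jeff describes (the names and the 1 GiB chunk size are illustrative 
choices, not anything from Open MPI itself):

   ! Build a datatype whose extent equals an arbitrary 64-bit byte offset by
   ! chaining a contiguous "chunk" type with a remainder of single bytes.
   subroutine make_offset_type(offset_bytes, newtype, err)
     use mpi
     implicit none
     integer(kind=MPI_OFFSET_KIND), intent(in) :: offset_bytes
     integer, intent(out) :: newtype, err
     integer, parameter :: chunk = 1073741824   ! 1 GiB, fits in a default integer
     integer :: chunk_type, nchunks, nrest
     integer :: blocklens(2), types(2)
     integer(kind=MPI_ADDRESS_KIND) :: displs(2)

     nchunks = int(offset_bytes / chunk)
     nrest   = int(mod(offset_bytes, int(chunk, MPI_OFFSET_KIND)))

     ! Datatype A: one contiguous 1 GiB block of bytes
     call MPI_TYPE_CONTIGUOUS(chunk, MPI_BYTE, chunk_type, err)

     ! Datatype B: nchunks copies of A followed by the leftover bytes
     blocklens = (/ nchunks, nrest /)
     types     = (/ chunk_type, MPI_BYTE /)
     displs(1) = 0
     displs(2) = int(nchunks, MPI_ADDRESS_KIND) * chunk
     call MPI_TYPE_CREATE_STRUCT(2, blocklens, displs, types, newtype, err)
     call MPI_TYPE_COMMIT(newtype, err)
     call MPI_TYPE_FREE(chunk_type, err)   ! newtype keeps its own reference
   end subroutine make_offset_type

One instance of the resulting type spans exactly offset_bytes, so it can act as 
the padding element when building a larger struct type or file view; whether 
that bookkeeping beats the MPI_TYPE_CREATE_HINDEXED route shown above is an 
application-level call.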


Another possible workaround would be to identify subsections that do not pass 
2B elements, make sub communicators, and then let each of them dump their 
elements with proper offsets. It may work. The problematic architecture is a 
BG/P. On other clusters doing simple I/O, letting all threads open the file, 
seek to their position, and then write their chunk works fine, but somehow on 
BG/P performance drops dramatically. My guess is that there is some file 
locking, or we are overwhelming the I/O nodes..


This ticket for the MPI-3 standard is a first step in the right direction, but 
won't do everything you need (this is more FYI):

 https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/265

See the PDF attached to the ticket; it's going up for a "first reading" in a 
month.  It'll hopefully be part of the MPI-3 standard by the end of the year (Fab 
Tillier, CC'ed, has been the chief proponent of this ticket for the past several months).

Quincey Koziol from the HDF group is going to propose a follow on to this 
ticket, specifically about the case you're referring to -- large counts for 
file functions and datatype constructors.  Quincey -- can you expand on what 
you'll be proposing, perchance?

Interesting. I think something along the lines of the note would be very useful, 
and it is needed for large applications.

Thanks a lot for the pointers and your suggestions,

cheers,

Troels






Re: [OMPI users] btl_openib_cpc_include rdmacm questions

2011-07-27 Thread Brock Palen
Sorry to bring this back up.
We recently had an outage, updated the firmware on our GD4700, and installed a 
new Mellanox-provided OFED stack, and the problem has returned.
Specifically, I am able to reproduce the problem with IMB on 4 twelve-core nodes 
when it tries to go to 16 cores.  I have verified that enabling a different 
openib_flags value of 313 fixes the issue, albeit with a bit lower bandwidth for 
some message sizes. 
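(For context: btl_openib_flags is a bit mask, so 313 = 256 + 32 + 16 + 8 + 1, 
i.e. HETEROGENEOUS_RDMA + CHECKSUM + ACK + SEND_INPLACE + SEND in the flag 
values George lists further down the thread. It is normally passed on the 
command line, e.g.

   mpirun --mca btl_openib_flags 313 ...

or set in an MCA parameter file.)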

Has there been any progress on this issue?

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985



On May 18, 2011, at 10:25 AM, Brock Palen wrote:

> Well, I have a new wrench in this situation.
> We had a power failure at our datacenter that took down our entire system: 
> nodes, switch, SM.
> Now I am unable to reproduce the error with oob, default ibflags, etc.
> 
> Does this shed any light on the issue?  It also makes it hard to debug the 
> issue now without being able to reproduce it.
> 
> Any thoughts?  Am I overlooking something? 
> 
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
> 
> 
> 
> On May 17, 2011, at 2:18 PM, Brock Palen wrote:
> 
>> Sorry, typo: 314, not 313. 
>> 
>> Brock Palen
>> www.umich.edu/~brockp
>> Center for Advanced Computing
>> bro...@umich.edu
>> (734)936-1985
>> 
>> 
>> 
>> On May 17, 2011, at 2:02 PM, Brock Palen wrote:
>> 
>>> Thanks, I thought of looking at ompi_info after I sent that note, sigh.
>>> 
>>> SEND_INPLACE appears to help the performance of larger messages in my synthetic 
>>> benchmarks over regular SEND.  Also, it appears that SEND_INPLACE still 
>>> allows our code to run.
>>> 
>>> We are working on getting devs access to our system and code. 
>>> 
>>> Brock Palen
>>> www.umich.edu/~brockp
>>> Center for Advanced Computing
>>> bro...@umich.edu
>>> (734)936-1985
>>> 
>>> 
>>> 
>>> On May 16, 2011, at 11:49 AM, George Bosilca wrote:
>>> 
 Here is the output of the "ompi_info --param btl openib":
 
 MCA btl: parameter "btl_openib_flags" (current value: <306>, data source: default value)
          BTL bit flags (general flags: SEND=1, PUT=2, GET=4, SEND_INPLACE=8,
          RDMA_MATCHED=64, HETEROGENEOUS_RDMA=256; flags only used by the "dr" PML
          (ignored by others): ACK=16, CHECKSUM=32, RDMA_COMPLETION=128; flags only
          used by the "bfo" PML (ignored by others): FAILOVER_SUPPORT=512)
 
 So the value 305 means: HETEROGENEOUS_RDMA | CHECKSUM | ACK | SEND. Most 
 of these flags are totally useless in the current version of Open MPI (DR 
 is not supported), so the only part that really matters is SEND | 
 HETEROGENEOUS_RDMA.
 
 If you want to enable the send protocol, try SEND | SEND_INPLACE (9) first; 
 if that doesn't work, downgrade to SEND (1).
 
 george.
 
 On May 16, 2011, at 11:33 , Samuel K. Gutierrez wrote:
 
> 
> On May 16, 2011, at 8:53 AM, Brock Palen wrote:
> 
>> 
>> 
>> 
>> On May 16, 2011, at 10:23 AM, Samuel K. Gutierrez wrote:
>> 
>>> Hi,
>>> 
>>> Just out of curiosity - what happens when you add the following MCA 
>>> option to your openib runs?
>>> 
>>> -mca btl_openib_flags 305
>> 
>> You Sir found the magic combination.
> 
> :-)  - cool.
> 
> Developers - does this smell like a registered memory availability hang?
> 
>> I verified this lets IMB and CRASH progress pass their lockup points,
>> I will have a user test this, 
> 
> Please let us know what you find.
> 
>> Is this an ok option to put in our environment?  What does 305 mean?
> 
> There may be a performance hit associated with this configuration, but if 
> it lets your users run, then I don't see a problem with adding it to your 
> environment.
> 
> If I'm reading things correctly, 305 turns off RDMA PUT/GET and turns on 
> SEND.
> 
> OpenFabrics gurus - please correct me if I'm wrong :-).
> 
> Samuel Gutierrez
> Los Alamos National Laboratory
> 
> 
>> 
>> 
>> Brock Palen
>> www.umich.edu/~brockp
>> Center for Advanced Computing
>> bro...@umich.edu
>> (734)936-1985
>> 
>>> 
>>> Thanks,
>>> 
>>> Samuel Gutierrez
>>> Los Alamos National Laboratory
>>> 
>>> On May 13, 2011, at 2:38 PM, Brock Palen wrote:
>>> 
 On May 13, 2011, at 4:09 PM, Dave Love wrote:
 
> Jeff Squyres  writes:
> 
>> On May 11, 2011, at 3:21 PM, Dave Love wrote:
>> 
>>> We can reproduce it with IMB.  We could provide access, but we'd 
>>> have to
>>> negotiate with the owners of the relevant nodes to give you 
>>> interactive
>>> access to them.  Maybe Brock's would be more accessible?  (If you

Re: [OMPI users] Seg fault with PBS Pro 10.4

2011-07-27 Thread Youri LACAN-BARTLEY
Hi,

For what it's worth: we're successfully running OMPI 1.4.3 compiled with 
gcc-4.1.2 along with PBS Pro 10.4.

Kind regards,

Youri LACAN-BARTLEY

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf 
of Ralph Castain
Sent: Wednesday, July 27, 2011 02:49
To: Open MPI Users
Subject: Re: [OMPI users] Seg fault with PBS Pro 10.4

I don't believe we ever got anywhere with this due to lack of response. If you 
get some info on what happened to tm_init, please pass it along.

Best guess: something changed in a recent PBS Pro release. Since none of us 
have access to it, we don't know what's going on. :-(


On Jul 26, 2011, at 10:10 AM, Wood, Justin Contractor, SAIC wrote:

> I'm having a problem using OpenMPI under PBS Pro 10.4.  I tried both 1.4.3 
> and 1.5.3, both behave the same.  I'm able to run just fine if I don't use 
> PBS and go direct to the nodes.  Also, if I run under PBS and use only 1 
> node, it works fine, but as soon as I span nodes, I get the following:
> 
> [a4ou-n501:07366] *** Process received signal ***
> [a4ou-n501:07366] Signal: Segmentation fault (11)
> [a4ou-n501:07366] Signal code: Address not mapped (1)
> [a4ou-n501:07366] Failing at address: 0x3f
> [a4ou-n501:07366] [ 0] /lib64/libpthread.so.0 [0x3f2b20eb10]
> [a4ou-n501:07366] [ 1] 
> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(discui_+0x84) [0x2affa453765c]
> [a4ou-n501:07366] [ 2] 
> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(diswsi+0xc3) [0x2affa4534c6f]
> [a4ou-n501:07366] [ 3] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
> [0x2affa453290c]
> [a4ou-n501:07366] [ 4] 
> /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0(tm_init+0x1fe) [0x2affa4532bf8]
> [a4ou-n501:07366] [ 5] /opt/ompi/1.4.3/intel/lib/libopen-rte.so.0 
> [0x2affa452691c]
> [a4ou-n501:07366] [ 6] mpirun [0x404c17]
> [a4ou-n501:07366] [ 7] mpirun [0x403e28]
> [a4ou-n501:07366] [ 8] /lib64/libc.so.6(__libc_start_main+0xf4) [0x3f2a61d994]
> [a4ou-n501:07366] [ 9] mpirun [0x403d59]
> [a4ou-n501:07366] *** End of error message ***
> Segmentation fault
> 
> I searched the archives and found a similar issue from last year:
> 
> http://www.open-mpi.org/community/lists/users/2010/02/12084.php
> 
> The last update I saw was that someone was going to contact Altair and have 
> them look at why it was failing to do the tm_init.  Does anyone have an 
> update to this, and has anyone been able to run successfully using recent 
> versions of PBSPro?  I've also contacted our rep at Altair, but he hasn't 
> responded yet.
> 
> Thanks, Justin.
> 
> Justin Wood
> Systems Engineer
> FNMOC | SAIC
> 7 Grace Hopper, Stop 1
> Monterey, CA
> justin.g.wood@navy.mil
> justin.g.w...@saic.com
> office: 831.656.4671
> mobile: 831.869.1576
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users