Re: [OMPI devel] large virtual memory consumption on smp nodes and gridengine problems

2007-06-17 Thread Markus Daene
Hi Ralph,

many thanks. This is exactly what I need.
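For anyone else who runs into this: the current values and defaults of these
parameters can be listed with something like

   ompi_info --param mpool sm

before overriding them on the mpirun command line.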
Markus

Ralph Castain wrote:
> Hi Markus
>
> There are two MCA params that can help you, I believe:
>
> 1. You can set the maximum size of the shared memory file with
>
> -mca mpool_sm_max_size xxx
>
> where xxx is the maximum memory file you want, expressed in bytes. The
> default value I see is 512MBytes.
>
> 2. You can set the per-peer size of the file, again in bytes:
>
> -mca mpool_sm_per_peer_size xxx
>
> This will allocate a file that is xxx * num_procs_on_the_node on each node,
> up to the maximum file size (either the 512MB default or whatever you
> specified using the previous param). This defaults to 32MBytes/proc.
>
>
> I see that there is also a minimum (total, not per-proc) file size that
> defaults to 128MBytes. If that is still too large, you can adjust it using
>
> -mca mpool_sm_min_size yyy
>
>
> Hope that helps
> Ralph
>
>   


Re: [OMPI devel] large virtual memory consumption on smp nodes and gridengine problems

2007-06-17 Thread Markus Daene
Hi Jeff,

thanks for your comments.

1.  I will report this to the GE mailing list.

2. We have a cluster of 18 nodes with 16 cores each (8x dual-core
Opteron). We plan to run between 1 and 128 processes in total, 16 per
node. Of course, if the sm component allocates 512MB x 16 on one node,
that is 8GB just for MPI, which is too much. I have reduced the size as
Ralph suggested.
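Concretely, I am now testing something like this (the values are only a first
guess, and my_test_program is just a placeholder for the application):

   mpirun -np 16 -mca mpool_sm_per_peer_size 16777216 \
          -mca mpool_sm_max_size 268435456 ./my_test_program

With 16 processes per node this caps the shared memory file at
16 x 16MB = 256MB, and since every process on the node maps the same file,
that is roughly 16 x 256MB = 4GB of virtual memory instead of
16 x 512MB = 8GB with the defaults. A per-process mapping of a file that
itself grows with the number of processes would also explain the roughly
quadratic numbers I reported earlier.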

3. I think it will not be possible to use the OpenFabrics kernel/user
stack. The machine was installed by SUN, and it seems they did not use
the OpenFabrics stack. I expect it would be a hard discussion to change
this, and we cannot do it on our own without eventually losing support.

4. I will test whether the DMA engine works better than the sm
component. We will run 16 processes per node with different message
sizes, using two bonded HCAs on each node.
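For that test I will probably just take the sm BTL out of the picture,
e.g. with something like

   mpirun -np 16 -mca btl mvapi,self ./my_test_program

(again with my_test_program as a placeholder), so that even on-node
traffic goes through the HCAs, and compare it against a run with the
default BTL selection.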

Markus

Jeff Squyres wrote:
> In addition to what Ralph said, I have the following random comments:
>
> 1. You'll have to ask on the GE mailing lists about the GE issues  
> (2gb vs. 2000mb, etc.); I doubt we'll be of much help here on this list.
>
> 2. Do you have a very large SMP machine (i.e., 16 cores or more)?   
> More specifically, how many MPI processes do you plan to run at once  
> on a host?
>
> 3. Unrelated to the SMP issue, I see that you are using the  
> InfiniBand Mellanox VAPI interface (mvapi BTL).  Is there any chance  
> that you can upgrade to the newer OpenFabrics kernel/user stack?  All  
> the IB vendors support it for their HPC customers.  FWIW: all Open  
> MPI InfiniBand work is being done in support of OpenFabrics; the  
> "mvapi" BTL is only maintained for backward compatibility and has had  
> no new work done on it in at least a year.  See
> http://www.open-mpi.org/faq/?category=openfabrics#vapi-support.
>
> 4. Note that depending on your application (e.g., if it primarily  
> sends large messages), it *may* be faster to use the DMA engine in  
> your IB interface and not use Open MPI's shared memory interface.   
> But there are a lot of factors involved here, such as the size of  
> your typical messages, how many processes you run per host (i.e., I'm  
> assuming you have one HCA that would need to service all the  
> processes), etc.
>
>
> On Jun 10, 2007, at 6:04 PM, Ralph Castain wrote:
>
>   
>> Hi Markus
>>
>> There are two MCA params that can help you, I believe:
>>
>> 1. You can set the maximum size of the shared memory file with
>>
>> -mca mpool_sm_max_size xxx
>>
>> where xxx is the maximum memory file you want, expressed in bytes. The
>> default value I see is 512MBytes.
>>
>> 2. You can set the per-peer size of the file, again in bytes:
>>
>> -mca mpool_sm_per_peer_size xxx
>>
>> This will allocate a file that is xxx * num_procs_on_the_node on  
>> each node,
>> up to the maximum file size (either the 512MB default or whatever you
>> specified using the previous param). This defaults to 32MBytes/proc.
>>
>>
>> I see that there is also a minimum (total, not per-proc) file size  
>> that
>> defaults to 128MBytes. If that is still too large, you can adjust  
>> it using
>>
>> -mca mpool_sm_min_size yyy
>>
>>
>> Hope that helps
>> Ralph
>>
>>
>>
>> On 6/10/07 2:55 PM, "Markus Daene" wrote:
>>
>> 
>>> Dear all,
>>>
>>> I hope this is the right mailing list for my problem.
>>> I am trying to run Open MPI with the gridengine (6.0u10, 6.1). For this
>>> I compiled Open MPI 1.2.2, which has the gridengine support included;
>>> I have checked it with ompi_info.
>>> In principle, Open MPI runs well.
>>> The gridengine is configured such that the user has to specify the
>>> memory consumption via the h_vmem option. I noticed that with a larger
>>> number of processes the job is killed by the gridengine for taking too
>>> much memory.
>>> To take a closer look at this, I wrote a small and simple (Fortran) MPI
>>> program which has just an MPI_Init and a (static) array, in my case of
>>> 50MB; the program then goes into an (infinite) loop, because it takes
>>> some time until the gridengine reports the maxvmem.
>>> I found that if the processes all run on different nodes, there is only
>>> an offset per process, i.e. a linear scaling. But it becomes worse when
>>> the processes run on one node: there the scaling appears to be
>>> quadratic, with an offset of about 30MB in my case. Here is the virtual
>>> memory reported by the gridengine when running on a 16-processor node:
>>>
>>> #N proc   virt. Mem [MB]
>>>  1          182
>>>  2          468
>>>  3          825
>>>  4         1065
>>>  5         1001
>>>  6         1378
>>>  7         1817
>>>  8         2303
>>> 12         4927
>>> 16         8559
>>>
>>> The pure program should need N*50MB, which for 16 processes is only
>>> 800MB, but it takes 10 times more, >7GB!!!
>>> Of course, the gridengine will kill the job if this overhead is not
>>> taken into account, because of too much virtual memory consumption.