Re: [OMPI users] [Fwd: mpi alltoall memory requirement]

2009-04-23 Thread viral . vkm
Or any link which helps to understand the system requirements for a certain test
scenario...


On Apr 23, 2009 12:42pm, viral@gmail.com wrote:

Hi
Thanks for your response.
However, I am running
mpiexec -ppn 24 -n 192 /opt/IMB-MPI1 alltoall -msglen /root/temp


And the file /root/temp contains entries only up to size 65535. That means the
alltoall test will run only up to 65535-byte messages.
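For reference, -msglen just takes a file with one message length in bytes per
line; mine looks roughly like this (the exact values here are only an
illustration):

0
1024
16384
65535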


So in that case I should need far less memory, but the test still runs out of
memory. Can someone please help me understand this scenario?
Or do I need to switch to a different algorithm, or set some other environment
variables, or anything like that?
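(As far as I understand, Open MPI's tuned collectives let you force a
particular alltoall algorithm through MCA parameters; a sketch of what I would
try, assuming these parameters exist in my version:

mpiexec -ppn 24 -n 192 --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_alltoall_algorithm 2 /opt/IMB-MPI1 alltoall -msglen /root/temp

where algorithm 2 would select the pairwise exchange implementation.)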



On Apr 22, 2009 6:43pm, Ashley Pittman <ash...@pittman.co.uk> wrote:
> On Wed, 2009-04-22 at 12:40 +0530, vkm wrote:
> > The same amount of memory required for recvbuf. So at the least each
> > node should have 36GB of memory.
> >
> > Am I calculating right ? Please correct.
>
> Your calculation looks correct; the conclusion is slightly wrong,
> however. The application buffers will consume 36GB of memory; the rest
> of the application, any comms buffers, and the usual OS overhead will be
> on top of this, so putting only 36GB of RAM in your nodes will still
> leave you short.
>
> Ashley


[OMPI users] [Fwd: mpi alltoall memory requirement]

2009-04-22 Thread vkm

Hi,
I am running an MPI alltoall test on my 8-node cluster. All nodes have
24-core CPUs, so the total number of processes I am running is 8*24 = 192.
In summary: an alltoall test on 8 nodes with 24 processes per node.


But my test consumes all of the RAM and swap space, even though the required
memory, as I count it, comes out as below.


The alltoall test runs up to a maximum message size of 4MB. Each process has
ONE sendbuf and ONE recvbuf, each holding one slot for every one of the other
191 processes to talk to (and one slot to talk to itself).
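In plain MPI terms, the buffer scaling I am assuming looks like this (a
minimal C sketch, not the IMB source itself):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* 192 in my runs */

    size_t msglen = 4 * 1024 * 1024;          /* largest IMB message size, 4MB */

    /* one msglen-sized slot per rank (peers + self) in each buffer:
       192 * 4MB = 768MB per rank, per buffer */
    char *sendbuf = malloc((size_t)nprocs * msglen);
    char *recvbuf = malloc((size_t)nprocs * msglen);
    if (!sendbuf || !recvbuf) {
        fprintf(stderr, "allocation failed\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Alltoall(sendbuf, (int)msglen, MPI_CHAR,
                 recvbuf, (int)msglen, MPI_CHAR, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}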


So one process will need 192 * 4MB = 768MB of memory for its sendbuf. Now, on
one node there are in fact 24 processes running, so in total each node needs
768MB * 24 = 18432MB ≈ 18GB for sendbufs.


The same amount of memory is required for recvbufs. So at the least each
node should have 36GB of memory.


Am I calculating this right? Please correct me.





[OMPI users] openmpi src rpm and message coalesce

2009-04-10 Thread vkm

Hi,

I was trying to understand how "btl_openib_use_message_coalescing" works.

In a certain test scenario, IMB-EXT works if I use
"btl_openib_use_message_coalescing = 0" but not with
"btl_openib_use_message_coalescing = 1".
I have no idea which side could have the bug here, Open MPI or the low-level driver.
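(For example, I am toggling the parameter per run on the mpirun command line;
the binary path here is just from my setup:

mpirun --mca btl_openib_use_message_coalescing 0 -np 192 /opt/IMB-EXT

A value of 1 re-enables coalescing.)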

However, I have one more concern as well. I added some prints to debug
Open MPI.

I was following the procedure below (sketched as shell commands after the list):
Extract the OFED tarball
Extract openmpi*.src.rpm
Go to SOURCES
Extract openmpi*.tgz
Modify the code
Re-create the tarball
Re-create openmpi*.src.rpm
Build the RPM
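Roughly, as commands (versions and paths here are placeholders for whatever
your rpmbuild setup uses):

tar xzf OFED-<version>.tgz
rpm -ivh openmpi-<version>.src.rpm     # drops the spec into SPECS/ and the tarball into SOURCES/
cd SOURCES
tar xzf openmpi-<version>.tgz
# ... modify the code ...
tar czf openmpi-<version>.tgz openmpi-<version>/
rpmbuild -ba ../SPECS/openmpi.spec     # rebuilds both the src.rpm and the binary rpm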

But this whole cycle was taking a lot of my time. Is there any shortcut?