The Ethernet port is simply not up to this type of thing and becomes the main bottleneck.
It was an interesting exercise to go through, and the setup would almost
certainly work nicely for MPI applications that are more in the map-reduce vein.
Steve
From: Steve O'Hara
Sent: 24 January 2016
et the cpu0 activity on the LED0. Do you have a
quick read on how you did that?
Thanks,
Spencer
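For anyone else wondering about the LED trick: a common way on Raspbian is the sysfs LED trigger interface, e.g. echo cpu0 > /sys/class/leds/led0/trigger as root. The snippet below is just a sketch of the same idea from C, assuming the kernel was built with the CPU LED trigger and that led0 is the ACT LED on your board; it is not necessarily how Steve did it.

/* led_cpu0.c - point the Pi's ACT LED (led0) at the cpu0 activity trigger.
 * Sketch only: assumes /sys/class/leds/led0/trigger exists and the kernel
 * has the LED "cpu" trigger enabled. Run as root. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/class/leds/led0/trigger", "w");
    if (!f) {
        perror("open led0 trigger");
        return 1;
    }
    /* Writing a trigger name selects it; "cpu0" ties the LED to CPU 0 activity. */
    fputs("cpu0", f);
    fclose(f);
    return 0;
}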
From: users <users-boun...@open-mpi.org> on
behalf of Steve O'Hara <soh...@pivotal-solutions.co.uk>
Sent: Sunday, January 24, 2016
node and see how it impacts performance.
You can also run some standard MPI benchmarks (OSU, IMB) and see if you get the
performance you expect.
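If building OSU or IMB on the Pis is a hassle, a minimal ping-pong between two ranks on different nodes gives a rough idea of whether the on-board Ethernet is the limit. This is only a sketch (the 1 MiB message size, 100 repetitions and the hostfile name are arbitrary), not a substitute for the proper benchmarks:

/* pingpong.c - rough point-to-point bandwidth check between ranks 0 and 1.
 * Compile: mpicc pingpong.c -o pingpong
 * Run:     mpirun -np 2 --hostfile hosts ./pingpong
 * (put the two hosts on different Pis so traffic actually crosses the switch) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 1 << 20;   /* 1 MiB per message */
    const int reps = 100;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(msg_bytes);
    memset(buf, 0, msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0) {
        /* two messages per iteration, each msg_bytes long */
        double mib = 2.0 * reps * msg_bytes / (1024.0 * 1024.0);
        printf("~%.1f MiB/s over %d round trips\n", mib / elapsed, reps);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}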
Cheers,
Gilles
On Sunday, January 24, 2016, Steve O'Hara
<soh...@pivotal-solutions.co.uk> wrote:
Hi,
I’m afraid I’m pretty
January 2016 09:28
To: Open MPI Users
Subject: Re: [OMPI users] Raspberry Pi 2 Beowulf Cluster for OpenFOAM
Hi Steve.
Regarding Step 3, have you thought of using some shared storage?
An NFS shared drive perhaps, though there are many alternatives!
On 23 January 2016 at 20:47, Steve O'Hara
<soh...@pivotal-solutions.co.uk> wrote:
Hi,
I'm afraid I'm pretty new to both OpenFOAM and Open MPI, so please excuse me if
my questions are either stupid or badly framed.
I've created a 10-node Raspberry Pi Beowulf cluster to test out MPI concepts and
see how they are harnessed in OpenFOAM. After a helluva lot of hassle, I've
got the