Hmmm...well, a few points here. First, the Phis sadly don't show up in the
hwloc tree, as they are apparently hidden behind the PCIe bridge. I don't know
if there is a way for hwloc to "probe" and find processors on PCI cards, but
that's something I'll have to defer to Jeff and Brice.
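For what it's worth, hwloc does have I/O discovery (lstopo's --whole-io option in the 1.x series shows PCI devices), and on Linux it ultimately reads the same sysfs locality files the kernel exposes. As a rough illustration of what that discovery looks like — this is NOT hwloc code, just a hypothetical sketch reading standard Linux sysfs paths:

```python
import os

SYSFS_PCI = "/sys/bus/pci/devices"  # standard Linux sysfs PCI tree

def list_pci_numa():
    """Return [(pci_address, numa_node)] for every PCI device the kernel
    exposes; numa_node is -1 when the platform reports no locality (which
    is exactly when tools like lstopo can't place a device in the tree)."""
    devices = []
    if not os.path.isdir(SYSFS_PCI):
        return devices  # non-Linux or sysfs unavailable
    for addr in sorted(os.listdir(SYSFS_PCI)):
        node_file = os.path.join(SYSFS_PCI, addr, "numa_node")
        try:
            with open(node_file) as f:
                node = int(f.read().strip())
        except (OSError, ValueError):
            node = -1
        devices.append((addr, node))
    return devices

if __name__ == "__main__":
    for addr, node in list_pci_numa():
        print(f"{addr} -> NUMA node {node}")
```

If the BIOS never attaches locality information to the bridge the Phi sits behind, every entry comes back -1 and there is nothing for a topology tool to probe.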
So the
Jeff,
I know Intel MPI (MPICH based) "just works" with Phi, but you need to do
things like:
mpirun -n 2 -host cpu host.exe : -n 4 -host mic0 mic.exe
if you want to use the Phi for more than just kernel-offload (in which case
they won't have/need an MPI rank).
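The ":"-separated segments in that command line are separate MPMD app contexts sharing one MPI_COMM_WORLD, with ranks assigned consecutively across segments. A small hypothetical sketch (not mpirun's actual parser) of how those semantics split the example above:

```python
def parse_mpmd(argv):
    """Split an MPMD command line on ':' into app contexts and assign
    consecutive MPI ranks, mirroring how mpirun treats each segment as
    a separate executable within a single MPI_COMM_WORLD."""
    contexts, next_rank = [], 0
    for segment in " ".join(argv).split(":"):
        tokens = segment.split()
        n, host, exe = 1, None, tokens[-1]
        i = 0
        while i < len(tokens) - 1:
            if tokens[i] == "-n":
                n = int(tokens[i + 1]); i += 2
            elif tokens[i] == "-host":
                host = tokens[i + 1]; i += 2
            else:
                i += 1
        contexts.append({"exe": exe, "host": host,
                         "ranks": list(range(next_rank, next_rank + n))})
        next_rank += n
    return contexts

# The Intel MPI example from above:
spec = ["-n", "2", "-host", "cpu", "host.exe",
        ":", "-n", "4", "-host", "mic0", "mic.exe"]
apps = parse_mpmd(spec)
# ranks 0-1 run host.exe on the host, ranks 2-5 run mic.exe on mic0
```

The point is that the host and MIC binaries are different executables (built for different ISAs) but still form one job — which is the part a launcher has to understand.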
So, launch procs is PART of the
I know the MPICH guys did a bunch of work to support the Phis. I don't know
exactly what that means (I haven't read their docs about this stuff), but I
suspect that it's more than just launching MPI processes on them...
On May 2, 2013, at 8:54 PM, Paul Hargrove wrote:
On 03/05/13 10:47, Ralph Castain wrote:
> We had something similar at one time - I developed it for the
> Roadrunner cluster so you could run MPI tasks on the GPUs. Worked
> well, but eventually fell into disrepair due to lack of use.
OK,
Ralph,
I am not an expert, by any means, but based on a presentation I heard 4
hours ago:
The Xeon and Phi instruction sets have a large intersection, but neither is
a subset of the other.
In particular, Phi has its own SIMD instructions *instead* of Xeon's MMX,
SSEn, etc.
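That ISA mismatch is why a host-built binary cannot simply be launched on the card. On Linux, the kernel advertises the features a CPU actually supports; a small sketch (hypothetical helper, just for illustration) of checking them:

```python
def cpu_isa_flags(path="/proc/cpuinfo"):
    """Return the set of ISA feature flags the kernel reports for the
    boot CPU (empty set if the file is unavailable, e.g. non-Linux)."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_isa_flags()
# A host-built SSE binary needs e.g. "sse2" to appear here; a Phi-native
# binary instead needs the card's own vector ISA, which no Xeon reports.
```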
There is also on
On May 2, 2013, at 5:12 PM, Christopher Samuel wrote:
>
> Hi folks,
>
> The new system we're bringing up has 10 nodes with dual Xeon Phi MIC
> cards, are there any plans to support them by launching MPI tasks
> directly
Hi folks,
The new system we're bringing up has 10 nodes with dual Xeon Phi MIC
cards, are there any plans to support them by launching MPI tasks
directly on the Phis themselves (rather than just as offload devices
for code on the hosts)?
All the
Hi Ralph, Jeff, Paul,
On 02/05/13 14:14, Ralph Castain wrote:
> Depends on what you think you might want, and how tolerant you and
> your users are about bugs.
>
> The 1.6 series is clearly more mature and stable. It has nearly
> all the MPI-2
On my trunk MTT runs, I'm getting a bunch of timed out tests with this message:
[node014][[56709,1],31][btl_tcp_endpoint.c:678:mca_btl_tcp_endpoint_complete_connect]
connect() to 10.1.0.7 failed: No route to host (113)
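Before digging into Open MPI itself, a plain TCP probe from the failing node to the peer's BTL address surfaces the same errno as that message. A hypothetical helper (the host/port below are whatever address the BTL reported, e.g. 10.1.0.7):

```python
import errno
import socket

def probe_tcp(host, port, timeout=2.0):
    """Attempt one TCP connect and report (ok, errno_name). The failure
    the TCP BTL logs ('No route to host', 113) shows up here as
    EHOSTUNREACH, independent of anything MPI is doing."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True, None
    except OSError as e:
        return False, errno.errorcode.get(e.errno, str(e))
    finally:
        s.close()
```

If the bare probe also fails with EHOSTUNREACH, the problem is routing/switching, not Open MPI (restricting the BTL to a known-good network with btl_tcp_if_include is the usual workaround on multi-homed clusters).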
This appeared to be due to a problem with a switch in my cluster, but the
Supermicro does provide very good documents about its hardware
architecture. Unfortunately, my machine is from Dell.
But it does prove that the I/O connections in a multi-processor machine are
heterogeneous.
Thanks,
Da
On Thu, May 2, 2013 at 1:42 PM, Brice Goglin wrote:
>
All processor info is easy to find on Google once you have exact model
numbers. Then the best source of information is the motherboard
manuals. We have several Supermicro servers whose manuals have nice
pictures of all QPI, PCI, etc. links. With other vendors such as Dell,
it's sometimes harder
Thank you for your reply.
Where is such information documented? I mean, how are I/O hubs connected to
processors/sockets? I really have a hard time finding such
information.
I'm interested in knowing how I/O hubs are connected in a machine with
multiple sockets (4, 8 or even more sockets).
On May 2, 2013, at 12:14 AM, Ralph Castain wrote:
> Personally, even though I'm one of the 1.7 release managers, I'm a little
> leery of recommending it for a production installation until we get further
> down the road. You might consider installing 1.6 as your "base"
On May 1, 2013, at 10:32 PM, Orion Poplawski wrote:
> Great! I'll try to take a look next week.
Hold off on this -- Ralph and I looked at this a bit closer, and the work is
not quite complete yet (read: it doesn't work).
> I noticed another message about using a
Hello,
Both cases are possible, and lstopo reports the correct information as
long as the BIOS gives correct locality information.
On Nehalem and Westmere dual-xeon servers, in most cases, you have a
single I/O hub (host bridge) connected to both sockets (what your "intel
documents" say). Some
Hello,
I was recently told about this tool. It's really nice; it shows me a lot of
information that I had previously tried very hard to find.
Right now, I try to find out the PCI connectivity in my NUMA machine.
lstopo shows me that each processor connects to a separate host bridge.
However, many
Depends on what you think you might want, and how tolerant you and your users
are about bugs.
The 1.6 series is clearly more mature and stable. It has nearly all the MPI-2
stuff now, but no MPI-3.
If you think there is something in MPI-3 you might want, then the 1.7 series
could be the way to
Hi folks,
We're about to bring up a new cluster (IBM iDataplex with SandyBridge
CPUs including 10 nodes with two Intel Xeon Phi cards) and I'm at the
stage where we need to pick an OMPI release to put on.
Given that this system is at the start of