Thanks Tom, I will test it out...
regards
Michael

On Mon, Jul 8, 2013 at 1:16 PM, Elken, Tom <tom.el...@intel.com> wrote:

>
> Thanks Tom, that sounds good. I will give it a try as soon as our Phi host
> here gets installed.
>
>
> I assume that all the prerequisite libs and bins on the Phi side are
> available when we download the Phi s/w stack from Intel's site, right?
>
> [Tom]
>
> Right.  When you install Intel’s MPSS (Manycore Platform Software
> Stack), including following the section on “OFED Support” in the readme
> file, you should have all the prerequisite libs and bins.  Note that I have
> not built Open MPI for Xeon Phi for your interconnect, but it seems to me
> that it should work.
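>
> Purely as an illustration (these commands are not taken from the MPSS
> readme; the package, service, and utility names are assumptions and may
> differ between MPSS releases), the host-side setup is roughly:
>
>   # install the MPSS packages downloaded from Intel's site
>   sudo yum install mpss-*.rpm
>   # start the coprocessor and verify it responds
>   sudo service mpss start
>   micctrl --status
>   miccheck
>   # enable InfiniBand/OFED support for the card per the "OFED Support" section
>   sudo service ofed-mic start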
>
>
> -Tom
>
>
> Cheers
>
> Michael
>
>
> On Mon, Jul 8, 2013 at 12:10 PM, Elken, Tom <tom.el...@intel.com> wrote:
>
> Do you guys have any plan to support Intel Phi in the future? That is,
> running MPI code on the Phi cards, or across the host multicore CPUs and the
> Phi, as Intel MPI does?
>
> [Tom]
>
> Hi Michael,
>
> Because a Xeon Phi card acts a lot like a Linux host with an x86
> architecture, you can build your own Open MPI libraries to serve this
> purpose.
>
> Our team has used an existing (older, 1.4.3) version of the Open MPI source
> to build an Open MPI for running MPI code on Intel Xeon Phi cards over
> Intel’s (formerly QLogic’s) True Scale InfiniBand fabric, and it works quite
> well. We have not released a pre-built Open MPI as part of any Intel
> software release.  But I think if you have a compiler for Xeon Phi (Intel
> Compiler or GCC) and an interconnect for it, you should be able to build an
> Open MPI that works on Xeon Phi.
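>
> For example, a native (cross-compiled) build driven from the host might
> look roughly like the sketch below. This is only an illustration of the
> idea -- I have not verified these exact flags, the install prefix is a
> placeholder, and the configure options for your particular fabric will
> differ:
>
>   ./configure --prefix=/opt/openmpi-mic \
>       --build=x86_64-unknown-linux-gnu --host=x86_64-k1om-linux \
>       CC="icc -mmic" CXX="icpc -mmic" FC="ifort -mmic"
>   make -j8 && make install
>   # make the install prefix visible on the card, e.g. by NFS-mounting it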
>
> Cheers,
> Tom Elken
>
> thanks...
>
> Michael
>
>
> On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
> Rolf will have to answer the question on level of support. The CUDA code
> is not in the 1.6 series as it was developed after that series went
> "stable". It is in the 1.7 series, although the level of support will
> likely be incrementally increasing as that "feature" series continues to
> evolve.
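>
> For reference, the CUDA-aware support described in the FAQ is enabled at
> configure time. With a 1.7.x tarball that looks roughly like the following
> (the install and CUDA paths here are just placeholders):
>
>   ./configure --prefix=/opt/openmpi-1.7.2 --with-cuda=/usr/local/cuda
>   make -j8 && make install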
>
>
>
> On Jul 6, 2013, at 12:06 PM, Michael Thomadakis <drmichaelt7...@gmail.com>
> wrote:
>
> > Hello OpenMPI,
> >
> > I am wondering what level of support there is for CUDA and GPUDirect on
> > Open MPI 1.6.5 and 1.7.2.
> >
> > I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it
> > seems that configure in v1.6.5 ignored it.
> >
> > Can you identify GPU memory and send messages from it directly, without
> > copying to host memory first?
> >
> >
> > Or, in general, what level of CUDA support is there in 1.6.5 and 1.7.2?
> > Do you support SDK 5.0 and above?
> >
> > Cheers ...
> > Michael
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
