You shouldn't have to do anything special; just write normal MPI programs. There are a variety of MPI tutorials on the web; a particularly good one is available here:

    http://webct.ncsa.uiuc.edu:8900/public/MPI/

As someone mentioned earlier in this thread, you can use MPI_ALLOC_MEM to get pre-registered memory (and, as was also pointed out, "registered" is typically more than just "pinning" -- it frequently also means notifying the NIC of the pinned memory). If you are unable to use MPI_ALLOC_MEM for the buffers that you share with your DMA-able PCI-Express devices, you can also experiment with the MCA parameter mpi_leave_pinned (e.g., set it to 1).
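
Here is a minimal sketch of the MPI_ALLOC_MEM approach (the 1 MB size and the surrounding program are just placeholders, not a recommendation):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        void *buf;

        MPI_Init(&argc, &argv);

        /* Ask the MPI library for the buffer instead of using malloc();
           over an RDMA-capable network such as InfiniBand, Open MPI can
           hand back memory that is already registered with the NIC. */
        MPI_Alloc_mem(1048576, MPI_INFO_NULL, &buf);

        /* ... use buf as an ordinary send/receive buffer ... */

        /* Memory from MPI_ALLOC_MEM must be released with
           MPI_FREE_MEM, not free(). */
        MPI_Free_mem(buf);

        MPI_Finalize();
        return 0;
    }

The mpi_leave_pinned experiment requires no code changes at all; you just set the parameter when launching, e.g., "mpirun --mca mpi_leave_pinned 1 ...".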



On Nov 2, 2006, at 10:30 PM, Brian Budge wrote:

Ha, yeah, I should have been more clear there. I'm simply writing an MPI application.

Thanks,
  Brian

On 11/2/06, Jeff Squyres <jsquy...@cisco.com> wrote:
It depends on what you're trying to do. Are you writing new
components internal to Open MPI, or are you just trying to leverage
OMPI's PML for some other project?  Or are you writing MPI
applications?  Or ...?


On Nov 2, 2006, at 2:22 PM, Brian Budge wrote:

> Thanks for the pointer, it was a very interesting read.
>
> It seems that by default Open MPI uses the nifty pipelining trick
> of pinning pages while the transfer is happening.  Also, the pinning
> can be (somewhat) permanent, and that state is cached so that the next
> usage requires no registration.  I guess it is possible to use pre-
> pinned memory, but do I need to do anything special to do so?  I
> will already have some buffers pinned to allow DMAs to devices
> across PCI-Express, so it makes sense to use one pinned buffer so
> that I can avoid memcpys.
>
> Are there any HOWTO tutorials or anything?  I've searched around,
> but it's possible I just used the wrong search terms.
>
> Thanks,
>   Brian
>
>
>
> On 11/2/06, Jeff Squyres <jsquy...@cisco.com> wrote:
> This paper explains it pretty well:
>
>      http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/
>
>
>
> On Nov 2, 2006, at 1:37 PM, Brian Budge wrote:
>
> > Hi all -
> >
> > I'm wondering how DMA is handled in OpenMPI when using the
> > infiniband protocol.  In particular, will I get a speed gain if my
> > read/write buffers are already pinned via mlock?
> >
> > Thanks,
> >   Brian
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
>




--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
