[OMPI devel] Hi!! need to find internal state changes
Hi everyone. I am Pooja and I am doing a project in my high-performance computing lab. In this project I need to find the internal state changes of Open MPI. For example: when MPI_Send is used, how are messages actually sent, and how are control messages sent? Please help me.

Thanks and Regards,
Pooja
[OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi,

I am Pooja and I am working on a course project which requires me
-> to track the internal state changes of MPI, and to figure out how ORTE maps MPI processes to actual physical processes;
-> also, to find a way to get BTL transports to work directly with MPI-level calls.

I just want to know whether this is possible and, if yes, what procedure I should follow or which files I should look into (for changes). Please help.

Thanks and Regards,
Pooja
Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi,

Actually I am working on a course project in which I am running a huge, computationally intensive code. I am running this code on a cluster. Now my work is to find out when the processes send control messages (e.g., a compute process signaling an I/O process that I/O data is ready) and when they send actual data (e.g., I/O nodes fetching the actual data that is to be transferred), and I have to log the timing and duration in another file. For this I need to know the states of Open MPI (control messages), so that I can simply put print statements in the Open MPI code and find out how it works. For this reason I was asking about the state changes, or at least the way to find them out. Also, my professor asked me to look into the BTL transport layer to be used with the MPI API.

I hope you will help.

Thanks and Regards,
Pooja

> On Apr 1, 2007, at 3:12 PM, Ralph Castain wrote:
>
>> I can't help you with the BTL question. On the others:
>
> Yes, you can "sorta" call BTLs directly from application programs (are you trying to use MPI alongside other communication libraries, and using the BTL components as a sample?), but there are issues involved with this.
>
> First, you need to install Open MPI with all the development headers. Open MPI normally only installs "mpi.h" and a small number of other headers; installing *all* the headers will allow you to write applications that use OMPI's internal headers (such as btl.h) while developing outside of the Open MPI source tree.
>
> Second, you probably won't want to access the BTLs directly. To make this make sense, here's how the code is organized (even if the specific call sequence is not exactly this layered, for performance/optimization reasons):
>
> MPI layer (e.g., MPI_SEND)
> -> PML
> -> BML
> -> BTL
>
> You have two choices:
>
> 1. Go through the PML instead (this is what we do in the MPI collectives, for example) -- but this imposes MPI semantics on sending and receiving, which assumedly you are trying to avoid. Check out ompi/mca/pml/pml.h.
>
> 2. Go through the BML instead -- the BTL Management Layer. This is essentially a multiplexor for all the BTLs that have been instantiated. I'm guessing that this is what you want to do (remember that OMPI has true multi-device support; using the BML and multiple BTLs is one of the ways that we do this). Have a look at ompi/mca/bml/bml.h for the interface.
>
> There is also currently no mechanism to get the BML and BTL pointers that were instantiated by the PML. However, if you're just doing proof-of-concept code, you can extract these directly from the MPI layer's global variables to see how this stuff works.
>
> To have full interoperability of the underlying BTLs and between multiple upper-layer communication libraries (e.g., between OMPI and something else) is something that we have talked about a little, but have not done much work on.
>
> To see the BTL interface (just for completeness), see ompi/mca/btl/btl.h.
>
> You can probably see the pattern here... In all of Open MPI's frameworks, the public interface is in <project>/mca/<framework>/<framework>.h, where <project> is one of opal, orte, or ompi, and <framework> is the name of the framework.
>
>> 1. States are reported via the orte/mca/smr framework. You will see the states listed in orte/mca/smr/smr_types.h. We track both process and job states. Hopefully, the state names will be somewhat self-explanatory and indicative of the order in which they are traversed. The job states are set when *all* of the processes in the job reach the corresponding state.
>
> Note that these are very coarse-grained process-level states (e.g., is a given process running or not?). It's not clear what kind of states you were asking about -- the Open MPI code base has many internal state machines for various message passing and other mechanisms.
>
> What information are you looking for, specifically?
>
>> 2. I'm not sure what you mean by mapping MPI processes to "physical" processes, but I assume you mean how we assign MPI ranks to processes on specific nodes. You will find that done in the orte/mca/rmaps framework. We currently only have one component in that framework - the round-robin implementation - that maps either by slot or by node, as indicated by the user. That code is fairly heavily commented, so you hopefully can understand what it is doing.
>>
>> Hope that helps!
>> Ralph
>>
>> On 4/1/07 1:32 PM, "po...@cc.gatech.edu" wrote:
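To make the layering above concrete, here is a hedged, minimal C sketch of what a proof-of-concept built against the internal headers might look like. The header paths are the ones named in the reply; everything else (that these headers compile cleanly outside the source tree, and how exactly the instantiated PML/BML objects are reached) is an assumption that depends on the Open MPI version and on a --with-devel-headers install:

    /*
     * Hedged sketch, not a documented API. Requires an Open MPI
     * installation configured with --with-devel-headers. The header
     * paths below are the ones named in this thread; how you reach
     * the instantiated BML/BTL objects from the MPI layer's global
     * variables is version-dependent -- treat that part as pseudocode.
     */
    #include <mpi.h>
    #include "ompi/mca/pml/pml.h"   /* PML interface (choice 1 above) */
    #include "ompi/mca/bml/bml.h"   /* BML interface (choice 2 above) */

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
        /* By this point the PML has instantiated the BML and BTLs.
         * A proof-of-concept would inspect the MPI layer's globals
         * here (e.g., the PML module pointer) to obtain BML/BTL
         * handles, as the reply suggests. */
        MPI_Finalize();
        return 0;
    }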
Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi,

I need to find out when the underlying network is free. I mean I don't need to go into the details of how MPI_Send is implemented; what I want to know is when MPI_Send is started, or rather when MPI does not use the underlying network. I need to find the timing for: 1) when the application issues the send command, 2) when MPI actually issues the send command, 3) when the BTL performs the actual transfer (send), 4) when the send completes, 5) who the receiver was, etc. This was an example for MPI_Send; likewise I need to know about MPI_Isend, broadcast, etc.

I guess this can be done using PMPI, but PMPI does it during the profiling stage, while I want all this data at run time, so that I can improve the performance of the system by using that idle time.

Well, the I/O used is Lustre (it's ROMIO). What I mean by I/O nodes is nodes that do the input and output processing, i.e., they write to Lustre, and the compute nodes just transfer data to the I/O nodes to write it to Lustre. The compute nodes have no local storage at all, so whenever they have something to write it gets transferred to an I/O node, and then the I/O node does the reads and writes. So when MPI_Send is not issued, the network (InfiniBand interconnect) can be used for some other transfer.

Can anyone help me with how to go about tracing this at run time?

Please help,
Pooja

> On Apr 3, 2007, at 9:07 AM, po...@cc.gatech.edu wrote:
>
>> Actually I am working on a course project in which I am running a huge, computationally intensive code. I am running this code on a cluster. Now my work is to find out when the processes send control messages (e.g. compute process to I/O process indicating I/O data is ready)
>
> By "I/O", do you mean stdin/stdout/stderr, or other file I/O?
>
> If you mean stdin/stdout/stderr, this is handled by the IOF (I/O Forwarding) framework/components in Open MPI. It's somewhat complicated, system-level code involving logically multiplexing data sent across pipes to sockets (i.e., local process(es) to remote process(es)).
>
> If you mean MPI-2 file I/O, you want to look at the ROMIO package; it handles all the MPI-2 API for I/O.
>
> Or do you mean "I/O" such as normal MPI messages (such as those generated by MPI_SEND and MPI_RECV)? FWIW, we normally refer to these as MPI messages, not really "I/O" (we typically reserve the term "I/O" for file I/O and/or stdin/stdout/stderr).
>
> Which do you mean?
>
>> and when they send actual data (e.g. I/O nodes fetching actual data that is to be transferred.)
>
> This seems to imply that you're talking about parallel/network filesystems. I have to admit that I'm now quite confused about what you're asking for. :-)
>
>> And I have to log the timing and duration in another file.
>
> If you need to log the timing and duration of MPI calls, this is pretty easy to do with the PMPI interface -- you can intercept all MPI calls, log whatever information you want to log, invoke the underlying MPI function to do the real work, and then log the duration.
>
>> For this I need to know the states of Open MPI (control messages), so that I can simply put print statements in the Open MPI code and find out how it works.
>
> I would [strongly] advise using a debugger. Printf statements will only take you so far, and can be quite confusing in a parallel scenario -- especially when they can alter the timing of the system (i.e., Heisenberg kinds of effects).
>
>> For this reason I was asking to know the state changes or at least the way to find them out.
>
> I'm still not clear on what state changes you're asking about.
>
> From this e-mail and your prior e-mails, it *seems* like you're asking about how data gets from MPI_SEND in one process to MPI_RECV in another process. Is that right?
>
> If so, I would not characterize the code that does this as a state machine in the traditional sense. Sure, as a computer program, it technically *is* a state machine that changes states according to assembly instructions, registers, etc., but we did not use generic state machine abstractions throughout the code base. In many places, there's simply a linear sequence of events -- not a re-entrant state machine.
>
> So if you're asking how a user message gets from MPI_SEND in one process to MPI_RECV in another, we can describe that (it's a very complicated answer that depends on many factors, actually -- it is *not* a straightforward answer, not only because OMPI deals with many device/network types, but also because there can be many variables decided at run time that determine how a message is sent from a process to a peer).
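Since the reply points at PMPI, here is a minimal sketch of such an interception wrapper in C. Note that PMPI wrappers run *while* the application runs (not in a separate profiling stage), so they can feed the run-time decisions described above; the log format and the use of stderr are illustrative choices, not part of any tool:

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal PMPI sketch: intercept MPI_Send, log when it started,
     * how long it took, and who the receiver was, then defer to the
     * real implementation. Link this ahead of the MPI library. */
    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        double start = MPI_Wtime();
        int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
        double elapsed = MPI_Wtime() - start;

        /* A real tool would write to a per-rank log file instead. */
        fprintf(stderr, "MPI_Send: dest=%d start=%.6f duration=%.6f\n",
                dest, start, elapsed);
        return rc;
    }

Analogous wrappers can be written for MPI_Isend, MPI_Bcast, and the other calls listed in the question.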
Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi,

I have downloaded the developer version of the source code by downloading a nightly Subversion snapshot tarball, and have installed Open MPI using:

./configure --prefix=/usr/local
make all install

But I want to install with all the development headers, so that I can write an application that can use OMPI internal headers.

Thanks and Regards,
Pooja

> On Apr 1, 2007, at 3:12 PM, Ralph Castain wrote:
>
>> I can't help you with the BTL question. On the others:
>
> Yes, you can "sorta" call BTLs directly from application programs (are you trying to use MPI alongside other communication libraries, and using the BTL components as a sample?), but there are issues involved with this.
>
> First, you need to install Open MPI with all the development headers. Open MPI normally only installs "mpi.h" and a small number of other headers; installing *all* the headers will allow you to write applications that use OMPI's internal headers (such as btl.h) while developing outside of the Open MPI source tree.
>
> Second, you probably won't want to access the BTLs directly. To make this make sense, here's how the code is organized (even if the specific call sequence is not exactly this layered, for performance/optimization reasons):
>
> MPI layer (e.g., MPI_SEND)
> -> PML
> -> BML
> -> BTL
>
> You have two choices:
>
> 1. Go through the PML instead (this is what we do in the MPI collectives, for example) -- but this imposes MPI semantics on sending and receiving, which assumedly you are trying to avoid. Check out ompi/mca/pml/pml.h.
>
> 2. Go through the BML instead -- the BTL Management Layer. This is essentially a multiplexor for all the BTLs that have been instantiated. I'm guessing that this is what you want to do (remember that OMPI has true multi-device support; using the BML and multiple BTLs is one of the ways that we do this). Have a look at ompi/mca/bml/bml.h for the interface.
>
> There is also currently no mechanism to get the BML and BTL pointers that were instantiated by the PML. However, if you're just doing proof-of-concept code, you can extract these directly from the MPI layer's global variables to see how this stuff works.
>
> To have full interoperability of the underlying BTLs and between multiple upper-layer communication libraries (e.g., between OMPI and something else) is something that we have talked about a little, but have not done much work on.
>
> To see the BTL interface (just for completeness), see ompi/mca/btl/btl.h.
>
> You can probably see the pattern here... In all of Open MPI's frameworks, the public interface is in <project>/mca/<framework>/<framework>.h, where <project> is one of opal, orte, or ompi, and <framework> is the name of the framework.
>
>> 1. States are reported via the orte/mca/smr framework. You will see the states listed in orte/mca/smr/smr_types.h. We track both process and job states. Hopefully, the state names will be somewhat self-explanatory and indicative of the order in which they are traversed. The job states are set when *all* of the processes in the job reach the corresponding state.
>
> Note that these are very coarse-grained process-level states (e.g., is a given process running or not?). It's not clear what kind of states you were asking about -- the Open MPI code base has many internal state machines for various message passing and other mechanisms.
>
> What information are you looking for, specifically?
>
>> 2. I'm not sure what you mean by mapping MPI processes to "physical" processes, but I assume you mean how we assign MPI ranks to processes on specific nodes. You will find that done in the orte/mca/rmaps framework. We currently only have one component in that framework - the round-robin implementation - that maps either by slot or by node, as indicated by the user. That code is fairly heavily commented, so you hopefully can understand what it is doing.
>>
>> Hope that helps!
>> Ralph
>>
>> On 4/1/07 1:32 PM, "po...@cc.gatech.edu" wrote:
>>
>>> Hi
>>> I am Pooja and I am working on a course project which requires me
>>> -> to track the internal state changes of MPI, and to figure out how ORTE maps MPI processes to actual physical processes;
>>> -> also, to find a way to get BTL transports to work directly with MPI-level calls.
>>> I just want to know whether this is possible and, if yes, what procedure I should follow or which files I should look into (for changes).
Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi!

Thanks for the help!

Right now I am just trying to install normal Open MPI (without all the development header files), but it is still giving me an error. I have downloaded the developer version from the openmpi.org site. Then I ran:

./configure --prefix=/net/hc293/pooja/dev_openmpi
(lots of output)
make all install
(lots of output)

and got the error:

ld returned 1 exit status
make[2]: *** [libopen-pal.la] Error 1
make[2]: Leaving directory `/net/hc293/pooja/openmpi-1.2.1a0r14362-dev/opal'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/net/hc293/pooja/openmpi-1.2.1a0r14362-dev/opal'
make: *** [all-recursive] Error 1

Also, the dev_openmpi folder is empty, so I am not able to compile the normal ring_c.c example either.

Please help.

Thanks and Regards,
Pooja

> Configure with the --with-devel-headers switch. This will install all the developer headers.
>
> If you care, check out "./configure --help" -- that shows all the options available to the configure script (including --with-devel-headers).
>
> On Apr 13, 2007, at 7:36 PM, po...@cc.gatech.edu wrote:
>
>> Hi
>>
>> I have downloaded the developer version of the source code by downloading a nightly Subversion snapshot tarball, and have installed Open MPI using:
>>
>> ./configure --prefix=/usr/local
>> make all install
>>
>> But I want to install with all the development headers, so that I can write an application that can use OMPI internal headers.
>>
>> Thanks and Regards
>> Pooja
>>
>>> On Apr 1, 2007, at 3:12 PM, Ralph Castain wrote:
>>>
>>>> I can't help you with the BTL question. On the others:
>>>
>>> Yes, you can "sorta" call BTLs directly from application programs (are you trying to use MPI alongside other communication libraries, and using the BTL components as a sample?), but there are issues involved with this.
>>>
>>> First, you need to install Open MPI with all the development headers. Open MPI normally only installs "mpi.h" and a small number of other headers; installing *all* the headers will allow you to write applications that use OMPI's internal headers (such as btl.h) while developing outside of the Open MPI source tree.
>>>
>>> Second, you probably won't want to access the BTLs directly. To make this make sense, here's how the code is organized (even if the specific call sequence is not exactly this layered, for performance/optimization reasons):
>>>
>>> MPI layer (e.g., MPI_SEND)
>>> -> PML
>>> -> BML
>>> -> BTL
>>>
>>> You have two choices:
>>>
>>> 1. Go through the PML instead (this is what we do in the MPI collectives, for example) -- but this imposes MPI semantics on sending and receiving, which assumedly you are trying to avoid. Check out ompi/mca/pml/pml.h.
>>>
>>> 2. Go through the BML instead -- the BTL Management Layer. This is essentially a multiplexor for all the BTLs that have been instantiated. I'm guessing that this is what you want to do (remember that OMPI has true multi-device support; using the BML and multiple BTLs is one of the ways that we do this). Have a look at ompi/mca/bml/bml.h for the interface.
>>>
>>> There is also currently no mechanism to get the BML and BTL pointers that were instantiated by the PML. However, if you're just doing proof-of-concept code, you can extract these directly from the MPI layer's global variables to see how this stuff works.
>>>
>>> To have full interoperability of the underlying BTLs and between multiple upper-layer communication libraries (e.g., between OMPI and something else) is something that we have talked about a little, but have not done much work on.
>>>
>>> To see the BTL interface (just for completeness), see ompi/mca/btl/btl.h.
>>>
>>> You can probably see the pattern here... In all of Open MPI's frameworks, the public interface is in <project>/mca/<framework>/<framework>.h, where <project> is one of opal, orte, or ompi, and <framework> is the name of the framework.
>>>
>>>> 1. States are reported via the orte/mca/smr framework. You will see the states listed in orte/mca/smr/smr_types.h. We track both process and job states.
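Putting the pieces of this exchange together -- the prefix from the failed attempt above plus the switch from the reply -- the devel-headers build being asked for would look like the following (assuming the build error above is resolved first):

    ./configure --prefix=/net/hc293/pooja/dev_openmpi --with-devel-headers
    make all install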
Re: [OMPI devel] Is it possible to get BTL transport work directly with MPI level
Hi!

Thanks for the reply. Actually there was some problem with my downloaded version of Open MPI, but when I downloaded everything again and did all the configure and make steps again, it worked fine.

Thanks a lot. And next time I will make sure that I give all the details.

Thanks,
Pooja

> This is unfortunately not enough information to provide any help -- the (lots of output) parts are pretty important. Can you provide all the information cited here:
>
> http://www.open-mpi.org/community/help/
>
> On Apr 14, 2007, at 11:36 PM, po...@cc.gatech.edu wrote:
>
>> Hi!
>> Thanks for the help!
>>
>> Right now I am just trying to install normal Open MPI (without all the development header files), but it is still giving me an error. I have downloaded the developer version from the openmpi.org site. Then I ran:
>>
>> ./configure --prefix=/net/hc293/pooja/dev_openmpi
>> (lots of output)
>> make all install
>> (lots of output)
>>
>> and got the error:
>>
>> ld returned 1 exit status
>> make[2]: *** [libopen-pal.la] Error 1
>> make[2]: Leaving directory `/net/hc293/pooja/openmpi-1.2.1a0r14362-dev/opal'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/net/hc293/pooja/openmpi-1.2.1a0r14362-dev/opal'
>> make: *** [all-recursive] Error 1
>>
>> Also, the dev_openmpi folder is empty, so I am not able to compile the normal ring_c.c example either.
>>
>> Please help.
>>
>> Thanks and Regards
>> Pooja
>>
>>> Configure with the --with-devel-headers switch. This will install all the developer headers.
>>>
>>> If you care, check out "./configure --help" -- that shows all the options available to the configure script (including --with-devel-headers).
>>>
>>> On Apr 13, 2007, at 7:36 PM, po...@cc.gatech.edu wrote:
>>>
>>>> Hi
>>>>
>>>> I have downloaded the developer version of the source code by downloading a nightly Subversion snapshot tarball, and have installed Open MPI using:
>>>>
>>>> ./configure --prefix=/usr/local
>>>> make all install
>>>>
>>>> But I want to install with all the development headers, so that I can write an application that can use OMPI internal headers.
>>>>
>>>> Thanks and Regards
>>>> Pooja
>>>>
>>>>> On Apr 1, 2007, at 3:12 PM, Ralph Castain wrote:
>>>>>
>>>>>> I can't help you with the BTL question. On the others:
>>>>>
>>>>> Yes, you can "sorta" call BTLs directly from application programs (are you trying to use MPI alongside other communication libraries, and using the BTL components as a sample?), but there are issues involved with this.
>>>>>
>>>>> First, you need to install Open MPI with all the development headers. Open MPI normally only installs "mpi.h" and a small number of other headers; installing *all* the headers will allow you to write applications that use OMPI's internal headers (such as btl.h) while developing outside of the Open MPI source tree.
>>>>>
>>>>> Second, you probably won't want to access the BTLs directly. To make this make sense, here's how the code is organized (even if the specific call sequence is not exactly this layered, for performance/optimization reasons):
>>>>>
>>>>> MPI layer (e.g., MPI_SEND)
>>>>> -> PML
>>>>> -> BML
>>>>> -> BTL
>>>>>
>>>>> You have two choices:
>>>>>
>>>>> 1. Go through the PML instead (this is what we do in the MPI collectives, for example) -- but this imposes MPI semantics on sending and receiving, which assumedly you are trying to avoid. Check out ompi/mca/pml/pml.h.
>>>>>
>>>>> 2. Go through the BML instead -- the BTL Management Layer. This is essentially a multiplexor for all the BTLs that have been instantiated.
Re: [OMPI devel] SOS... help needed :(
Hi,

I am Pooja; I am working with Chaitali on this project. We want to move the BTL transport up to the higher levels, so we are configuring Open MPI with all the development header files. This will enable us to call BTL/BML functions directly at the higher level, so we can send a normal file across the network by calling the BTL's btl_send function directly.

Another option is not to use btl_send but to send using a TCP socket (writing our own tcp_send code).

So we just want to ask whether that is possible, and if yes, whether what we are planning is right.

Thanks a lot,
Pooja

> On Sun, Apr 15, 2007 at 10:25:06PM -0400, chaitali dherange wrote:
>
>> Hi,
>
> Hi!
>
>> giving more priority to the MPI calls over the non-MPI ones.
>
>> static I mean... we know that our clusters use Infiniband for MPI... so all the non-MPI communication can be assumed to be TCP communication using the 'mca_btl_tcp_send()' from the ompi/mca/btl/tcp/btl_tcp.c file.
>
> I don't see why you call BTL/IB an MPI call, but BTL/TCP non-MPI.
>
> The BTL components are used to provide MPI data transport. Depending on your installed hardware, this transport can be done via IB, Myrinet or at least TCP. Open MPI is even able to mix multiple transports and do message striping.
>
> I suggest you read the comments in pml.h to make things clear. Don't get confused: they still use the old terminology 'PTL' instead of 'BTL', but just consider them to be equal.
>
> --
> Cluster and Metacomputing Working Group
> Friedrich-Schiller-Universität Jena, Germany
>
> private: http://adi.thur.de
Re: [OMPI devel] SOS... help needed :(
Hi!

I am Pooja; I am working with Chaitali on this project. What we meant by BTL/TCP is a call to btl_send that our program will make directly at the higher levels. In short, we want to call the BTL transport at the higher levels, and so we have configured Open MPI with all the development header files (so that we can include btl.h directly and use btl_tcp_send). As a result it will not be an MPI call, but a direct call from our own code.

So we just want to know whether this can be done, and if yes, whether what we are planning is right and doable.

Thanks and Regards,
Pooja

> On Sun, Apr 15, 2007 at 10:25:06PM -0400, chaitali dherange wrote:
>
>> Hi,
>
> Hi!
>
>> giving more priority to the MPI calls over the non-MPI ones.
>
>> static I mean... we know that our clusters use Infiniband for MPI... so all the non-MPI communication can be assumed to be TCP communication using the 'mca_btl_tcp_send()' from the ompi/mca/btl/tcp/btl_tcp.c file.
>
> I don't see why you call BTL/IB an MPI call, but BTL/TCP non-MPI.
>
> The BTL components are used to provide MPI data transport. Depending on your installed hardware, this transport can be done via IB, Myrinet or at least TCP. Open MPI is even able to mix multiple transports and do message striping.
>
> I suggest you read the comments in pml.h to make things clear. Don't get confused: they still use the old terminology 'PTL' instead of 'BTL', but just consider them to be equal.
>
> --
> Cluster and Metacomputing Working Group
> Friedrich-Schiller-Universität Jena, Germany
>
> private: http://adi.thur.de
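For the second option above (bypassing the BTL entirely and writing "our own tcp_send code"), a minimal sketch over plain POSIX sockets might look like the following. The function name, IPv4 addressing, and the absence of any framing protocol are simplifying assumptions for illustration, not part of Open MPI:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical helper: open a TCP connection to ip:port and send
     * len bytes from buf, looping until everything is written. */
    int tcp_send_buf(const char *ip, int port, const void *buf, size_t len)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            close(fd);
            return -1;
        }

        const char *p = buf;
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, p + sent, len - sent, 0);
            if (n <= 0) { close(fd); return -1; }
            sent += (size_t)n;
        }
        close(fd);
        return 0;
    }

Note that, as the reply points out, a transfer like this is just ordinary TCP traffic; it only becomes interesting alongside MPI if it is scheduled around the MPI sends, which is what the rest of this thread discusses.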
[OMPI devel] Need help for semaphore in BML
Hi,

I want to put a semaphore in bml.h -- in mca_bml_send, before and after calling btl_send -- so that when a process calls btl_send it first locks a global variable X and then proceeds. Also, if an external TCP function wants to send data, it should first lock the global variable X and then proceed.

Can anyone tell me whether changing only bml.h is enough, or are there other files where I need to make changes? (I tried doing this, and when I ran an MPI program it gave me an ORTE timeout error; also, when I changed the file back to normal it was not compiling, giving me errors in libmca_bml.la etc. Unfortunately I deleted the entire folder and downloaded a new version.)

Can anyone please help me and tell me how I should go about implementing locks/semaphores in the BML layer, so that all MPI processes acquire the lock (at the same priority) and continue working, while TCP acquires it only when the network is free (or when there is a lot of serial computation between two MPI sends)?

Thanks and Regards,
Pooja
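As a sketch of the locking scheme described here -- not actual Open MPI code; inside the OMPI tree you would presumably use its own locking primitives rather than raw pthreads, and the exact hook point in the BML is version-dependent -- the "global variable X" idea might look like this, with all names hypothetical:

    #include <pthread.h>

    /* Hypothetical "global variable X": one lock shared by the MPI
     * send path and the external TCP sender. */
    static pthread_mutex_t network_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Wrap the BML/BTL send call site with the lock... */
    void locked_mpi_send_path(void (*btl_send_call)(void *), void *arg)
    {
        pthread_mutex_lock(&network_lock);
        btl_send_call(arg);              /* the real btl_send goes here */
        pthread_mutex_unlock(&network_lock);
    }

    /* ...and have the external TCP sender take the same lock, so it
     * only runs while the MPI side is not sending. */
    void locked_tcp_send_path(void (*tcp_send_call)(void *), void *arg)
    {
        pthread_mutex_lock(&network_lock);
        tcp_send_call(arg);
        pthread_mutex_unlock(&network_lock);
    }

Note that a plain mutex gives no priority to MPI over TCP; the "MPI first" policy discussed in this thread would need something more, e.g., the TCP side using a try-lock and backing off instead of blocking.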
Re: [OMPI devel] SOS... help needed :(
Hi,

I am Pooja, working with Chaitali on this project. The idea behind this is that while running parallelized code, if a huge chunk of serial computation is encountered, the underlying network infrastructure can be used for some other data transfer during that time. This increases network utilization. But this (non-MPI) data transfer should not leave MPI calls blocked, so we need to give them priorities.

We are also trying to predict the behavior of the code (e.g., whether more MPI calls are coming at short intervals or whether they are coming after a large interval), based on previous calls. As a result we can make this mechanism more efficient.

Thanks and regards,
Pooja

> On Sun, Apr 15, 2007 at 10:25:06PM -0400, chaitali dherange wrote:
>
>> schedule MPI and non-MPI calls... giving more priority to the MPI calls over the non-MPI ones.
>
> What is the idea behind this and what advantages are expected?
>
> Christian Leber
>
> --
> http://rettetdieti.vde-uni-mannheim.de/
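The "predict the next MPI call from previous calls" idea can be illustrated with a simple exponentially weighted moving average of the gaps between sends. The smoothing factor, the 0.5 threshold, and the function names below are arbitrary illustration choices, not a tested policy:

    #include <mpi.h>

    static double ewma_gap = 0.0;   /* predicted gap between MPI sends */
    static double last_send = -1.0; /* time of the most recent send */

    /* Call this from a send wrapper (e.g., a PMPI MPI_Send intercept,
     * as sketched earlier in this thread). */
    void record_send(void)
    {
        double now = MPI_Wtime();
        if (last_send >= 0.0) {
            double gap = now - last_send;
            if (ewma_gap == 0.0)
                ewma_gap = gap;                       /* first sample */
            else
                ewma_gap = 0.8 * ewma_gap + 0.2 * gap; /* smoothing */
        }
        last_send = now;
    }

    /* Heuristic: the network is probably free if we are still early
     * inside a predicted-to-be-long gap, leaving time for a
     * low-priority transfer before the next MPI send. */
    int network_probably_free(void)
    {
        if (ewma_gap <= 0.0 || last_send < 0.0) return 0;
        double since = MPI_Wtime() - last_send;
        return since < 0.5 * ewma_gap;
    }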
Re: [OMPI devel] SOS... help needed :(
Hi,

Some of our clusters use Gigabit Ethernet and InfiniBand, so we are trying to multiplex them.

Thanks and Regards,
Pooja

> On Thu, Apr 19, 2007 at 06:58:37PM -0400, po...@cc.gatech.edu wrote:
>
>> I am Pooja, working with Chaitali on this project. The idea behind this is that while running parallelized code, if a huge chunk of serial computation is encountered, the underlying network infrastructure can be used for some other data transfer during that time. This increases network utilization. But this (non-MPI) data transfer should not leave MPI calls blocked, so we need to give them priorities. We are also trying to predict the behavior of the code (e.g., whether more MPI calls are coming at short intervals or whether they are coming after a large interval), based on previous calls. As a result we can make this mechanism more efficient.
>
> Ok, so you have a cluster with InfiniBand, and while the network traffic is low you want to utilize the InfiniBand network for other data transfers with a lower priority?
>
> What does this have to do with TCP, or are you using TCP over InfiniBand?
>
> Regards,
> Christian Leber
>
> --
> http://rettetdieti.vde-uni-mannheim.de/
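For what it's worth, Open MPI of this era can already be told to use several BTLs at once from the mpirun command line, and it stripes/multiplexes messages across them itself, per the earlier note about multi-device support. Assuming the InfiniBand BTL in this installation is named openib (as in the 1.2 series), a hedged example:

    mpirun --mca btl openib,tcp,self -np 4 ./my_app

This enables both the InfiniBand and TCP (Gigabit Ethernet) transports for MPI traffic; it does not by itself implement the MPI-vs-non-MPI prioritization discussed above.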