[OMPI devel] How is an MPI process launched?
Hi everyone, I wanted to know how Open MPI launches an MPI process in a cluster environment. I am assuming that for process lifecycle management it will be using rsh. Any help would be greatly appreciated.
[OMPI devel] 1.5 branch broken
https://svn.open-mpi.org/trac/ompi/changeset/23025 broke the v1.5 branch; I get compile failures on Linux.

  CC     ess_singleton_module.lo
ess_singleton_module.c:89: error: ‘orte_ess_base_query_sys_info’ undeclared here (not in a function)
ess_singleton_module.c:91: warning: excess elements in struct initializer
ess_singleton_module.c:91: warning: (near initialization for ‘orte_ess_singleton_module’)
make[2]: *** [ess_singleton_module.lo] Error 1

Please fix.

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
Re: [OMPI devel] How is an MPI process launched?
It depends: if you have an environment like SLURM, SGE, or Torque, then we use that to launch our daemons on each node. Otherwise, we default to using ssh. Once the daemons are launched, we then tell each daemon which processes it is to run. So it is a two-stage launch procedure.
Re: [OMPI devel] 1.5 branch broken
Just delete the offending line; the 1.5 ESS API doesn't contain it.
Re: [OMPI devel] How is an MPI process launched?
Hi Ralph,

Thank you for your response. I really appreciate it, as usual. :)

It depends - if you have an environment like slurm, sge, or torque, then we use that to launch our daemons on each node. Otherwise, we default to using ssh. Once the daemons are launched, we then tell the daemons what processes each is to run. So it is a two-stage launch procedure.

Ralph, after starting the orte daemon:
1. What is the role of ssh then?
2. Also, I am assuming the HNP is created before using ssh. Am I right?
3. Also, Ralph, I would like to know how I can tell the daemon to run a process.

Ralph, I am trying to run a simple experiment where I create a process on each of two computers using the SSH module, without using mpirun. I would like to hack the MPI library so that I can send a simple "Hello World" from process A running on computer A to process B running on computer B. I would be creating both processes myself. I hope I am being clear.

Basically, what I am saying is that I would like to create an MPI_COMM_WORLD comprising two processes, Process A and Process B. For that I would like to create functions called Create_Process_A, Create_Process_B, and Send_Message by utilizing the Open MPI source code.

Also, I know I should be looking into the PLM, RMAPS, ODLS, and ORTED subsystems. But Ralph, if you guide me a bit I can finish the experiment with fewer sleepless nights, headaches, and stress.

Leo P
[OMPI devel] r23023 change to trunk causes problems with exit value
With our MTT testing we have noticed a problem that has cropped up in the trunk. There are some tests that are supposed to return a non-zero status because they are getting errors, but they are instead returning 0. This problem does not exist in r23022 but does exist in r23023.

One can use the ibm/final test to reproduce the problem. An example of a passing case followed by a failing case is shown below.

Ralph, do you want me to open a ticket on this, or do you just want to take a look? I am asking you since you did the r23023 commit.

Rolf

TRUNK VERSION r23022:
[rolfv@burl-ct-x2200-6 environment]$ mpirun -np 1 -mca btl sm,self final
**
This test should generate a message about MPI is either not initialized or has already been finialized.
ERRORS ARE EXPECTED AND NORMAL IN THIS PROGRAM!!
**
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[burl-ct-x2200-6:6072] Abort after MPI_FINALIZE completed successfully; not able to guarantee that all other processes were killed!
--
mpirun noticed that the job aborted, but has no info as to the process that caused that situation.
--
[rolfv@burl-ct-x2200-6 environment]$ echo $status
1
[rolfv@burl-ct-x2200-6 environment]$

TRUNK VERSION r23023:
[rolfv@burl-ct-x2200-6 environment]$ mpirun -np 1 -mca btl sm,self final
**
This test should generate a message about MPI is either not initialized or has already been finialized.
ERRORS ARE EXPECTED AND NORMAL IN THIS PROGRAM!!
**
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[burl-ct-x2200-6:4089] Abort after MPI_FINALIZE completed successfully; not able to guarantee that all other processes were killed!
[rolfv@burl-ct-x2200-6 environment]$ echo $status
0
[rolfv@burl-ct-x2200-6 environment]$
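Note that the transcripts above check the launcher's exit status with csh's `$status`; the POSIX-shell equivalent is `$?`. A minimal sketch of the behavior being tested, using a plain `sh -c` command as a stand-in for an aborting MPI job:

```shell
#!/bin/sh
# Stand-in for an aborting job: it prints a message and exits non-zero,
# as mpirun should when a rank aborts after MPI_FINALIZE.
sh -c 'echo "job aborting"; exit 1'
# A correct launcher propagates the non-zero status; reporting 0 here
# is the regression described above.
echo "exit status: $?"
```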
Re: [OMPI devel] r23023 change to trunk causes problems with exit value
I'll try to keep it in mind as I continue the errmgr work. I gather these tests all call MPI_Abort?
Re: [OMPI devel] r23023 change to trunk causes problems with exit value
The ibm/final test does not call MPI_Abort directly. It calls MPI_Barrier after MPI_Finalize has been called, which is a no-no. This is detected, and eventually the library calls ompi_mpi_abort(). That is very similar to MPI_Abort(), which ultimately calls ompi_mpi_abort() as well. So I guess I am saying that, for all intents and purposes, it calls MPI_Abort.

Rolf
Re: [OMPI devel] How is an MPI process launched?
I sincerely hope you are kidding :-)

Is there some reason why you don't just use MPI_Comm_spawn? This is precisely what it was created to do. You can still execute it from a singleton if you don't want to start your first process via mpirun (and is there some reason why you don't use mpirun???).

Yes, you -could- hack the MPI code to do this. Starting from scratch, with little knowledge of the code base, figure on it taking a while. I could probably come up with a way to do it, but this would have to be a very low priority for me.
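A minimal sketch of the MPI_Comm_spawn approach Ralph suggests: one binary that, run as a singleton, spawns a copy of itself and sends it a greeting over the resulting intercommunicator. These are standard MPI-2 calls, but the program obviously still needs a working Open MPI installation (compile with mpicc; run the parent directly, without mpirun, to exercise the singleton path):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Comm parent, inter;
    char buf[32];

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* We are the original singleton: spawn one child copy of
         * ourselves and send it a message over the intercommunicator. */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);
        MPI_Send("Hello World", 12, MPI_CHAR, 0, 0, inter);
    } else {
        /* We are the spawned child: receive the greeting from the parent
         * side of the intercommunicator. */
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, parent,
                 MPI_STATUS_IGNORE);
        printf("child got: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}
```

This avoids hacking the launch path entirely: the runtime handles starting the second process and wiring it into a shared communicator.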
Re: [OMPI devel] How is an MPI process launched?
Hi Ralph,

Is there some reason why you don't just use MPI_Comm_spawn? This is precisely what it was created to do. You can still execute it from a singleton, if you don't want to start your first process via mpirun (and is there some reason why you don't use mpirun???).

The reason why I am using MPI_Comm_spawn and a singleton is that I am going to route the MPI communication (BTL and OOB) through another computer before it reaches its intended destination. :)

Yes, you -could- hack the MPI code to do this. Starting from scratch, with little knowledge of the code base - figure on taking awhile. I could probably come up with a way to do it, but this would have to be a very low priority for me.

I am trying to learn the Open MPI code base, and I know it is going to take time. Now I need to understand how the processes are started and made part of MPI_COMM_WORLD. I really want to do this, but I need help. If you can suggest how this can be done, I would really appreciate it.

Leo
Re: [OMPI devel] How is an MPI process launched?
On Apr 26, 2010, at 9:05 PM, Leo P. wrote:

> The reason why I am using MPI_Comm_spawn and a singleton is that I am going to route the MPI communication (BTL and OOB) through another computer before it reaches its intended destination. :)

Others have done this - the OOB is simple, but the BTL is not easy. I'll suggest they contact you.

I honestly don't think you understand how the OOB, comm_spawn, and singletons work. What you are trying to do for the OOB has already been done for you in the system as shipped today. All you need to do is start the "ompi-server" daemon where the two procs can both see it, and then use the patch I provided to the other people who were just asking about something like this. See the following thread:

http://www.open-mpi.org/community/lists/users/2010/04/12763.php

If this is a commonly desired feature, it would be simple to apply my patch to the devel trunk and the other releases.

> I am trying to learn the Open MPI code base, and I know it is going to take time. Now I need to understand how the processes are started and made part of MPI_COMM_WORLD. If you can suggest how this can be done, I would really appreciate it.

See the above thread - you'll still need to route the BTL, but at least the launch and wireup are resolved.
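The ompi-server workflow Ralph references looks roughly like this (the flags shown match the 1.4/1.5-era tools; the file path and program names are illustrative, and exact URI handling may differ by release):

```shell
# On a host both jobs can reach, start the rendezvous daemon and have it
# record its contact URI in a file:
ompi-server --report-uri /tmp/ompi-server.uri

# Each independently started job is then pointed at that URI so the two
# can find each other (e.g. for MPI_Comm_connect/MPI_Comm_accept):
mpirun -np 1 --ompi-server file:/tmp/ompi-server.uri ./proc_a
mpirun -np 1 --ompi-server file:/tmp/ompi-server.uri ./proc_b
```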
Re: [OMPI devel] r23023 change to trunk causes problems with exit value
Okay, this should finally be fixed. See the commit message for r23045 for an explanation.

It really wasn't anything in the cited changeset that caused the problem. The root cause is that $#@$ abort file we dropped in the session dir to indicate you called MPI_Abort vs. trying to thoroughly clean up. It has been biting us in the butt for years - I finally removed it.