Re: [OMPI users] Crashes over TCP/ethernet but not on shared memory

2008-10-27 Thread Jeff Squyres

On Oct 24, 2008, at 12:10 PM, V. Ram wrote:


Resuscitating this thread...

Well, we spent some time testing the various options, and Leonardo's
suggestion seems to work!

We disabled TCP Segment Offloading on the e1000 NICs using "ethtool -K
eth tso off" and this type of crash no longer happens.

I hope this message can help anyone else experiencing the same issues.
Thanks Leonardo!

OMPI devs: does this imply bug(s) in the e1000 driver/chip?  Should I
contact the driver authors?


Maybe?  :-)

I don't think that we do anything particularly whacky, TCP-wise -- we  
just open sockets and read/write plain vanilla data down the fd's.  So  
it might be worth contacting them and asking if there are any known  
issues...?


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-27 Thread Jeff Squyres
After a little digging, I am able to run your code (it looks like it  
expects both an input file and an output file on the command line, or  
it segv's).  But I don't get those errors, either with OMPI v1.2.8 or  
the upcoming v1.3 series; I ran with as many as 16 processes across 4  
nodes.


Can you narrow the problem down a bit more?  You still didn't provide  
too many details about the problem. :-)



On Oct 27, 2008, at 5:27 PM, Davi Vercillo C. Garcia (ダヴィ) wrote:


Hi,

On Mon, Oct 27, 2008 at 6:48 PM, Jeff Squyres wrote:

I can't seem to run your code, either.  Can you provide a more precise
description of what exactly is happening?  It's quite possible / probable
that Rob's old post is the answer, but I can't tell from your original
post -- there just aren't enough details.


When I execute this code with more than one process (-n > 1), this
error message appears.  My code is a distributed compressor, i.e. the
compression work is distributed: a single process reads a block from a
file, compresses it, and writes a compressed block to a file.


On Oct 27, 2008, at 3:26 AM, jody wrote:

Perhaps this post in the Open-MPI archives can help:
http://www.open-mpi.org/community/lists/users/2008/01/4898.php



I already saw this post before, but this didn't help me. I'm not using
MPI_File_delete in my code.

--
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon




--
Jeff Squyres
Cisco Systems




Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-27 Thread Davi Vercillo C. Garcia (ダヴィ)
Hi,

On Mon, Oct 27, 2008 at 6:48 PM, Jeff Squyres  wrote:
> I can't seem to run your code, either.  Can you provide a more precise
> description of what exactly is happening?  It's quite possible / probable
> that Rob's old post is the answer, but I can't tell from your original post
> -- there just aren't enough details.

When I execute this code with more than one process (-n > 1), this
error message appears.  My code is a distributed compressor, i.e. the
compression work is distributed: a single process reads a block from a
file, compresses it, and writes a compressed block to a file.

> On Oct 27, 2008, at 3:26 AM, jody wrote:
>> Perhaps this post in the Open-MPI archives can help:
>> http://www.open-mpi.org/community/lists/users/2008/01/4898.php

I already saw this post before, but this didn't help me. I'm not using
MPI_File_delete in my code.

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon



Re: [OMPI users] Working with a CellBlade cluster

2008-10-27 Thread Lenny Verkhovsky
Can you update me with the mapping, or with a way to get it from the OS on
the Cell?

thanks

On Thu, Oct 23, 2008 at 8:08 PM, Mi Yan  wrote:

> Lenny,
>
> Thanks.
> I asked the Cell/BE Linux Kernel developer to get the CPU mapping :) The
> mapping is fixed in current kernel.
>
> Mi
> "Lenny Verkhovsky" <lenny.verkhov...@gmail.com> wrote on 10/23/2008
> 01:52 PM (Subject: Re: [OMPI users] Working with a CellBlade cluster):
>
> According to https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3
> it should be available very soon, but you can download the trunk version
> from http://www.open-mpi.org/svn/ and check if it works for you.
>
> How can you check the CPU mapping from the OS?  My cat /proc/cpuinfo shows
> very little info:
> # cat /proc/cpuinfo
> processor : 0
> cpu : Cell Broadband Engine, altivec supported
> clock : 3200.00MHz
> revision : 48.0 (pvr 0070 3000)
> processor : 1
> cpu : Cell Broadband Engine, altivec supported
> clock : 3200.00MHz
> revision : 48.0 (pvr 0070 3000)
> processor : 2
> cpu : Cell Broadband Engine, altivec supported
> clock : 3200.00MHz
> revision : 48.0 (pvr 0070 3000)
> processor : 3
> cpu : Cell Broadband Engine, altivec supported
> clock : 3200.00MHz
> revision : 48.0 (pvr 0070 3000)
> timebase : 2666
> platform : Cell
> machine : CHRP IBM,0793-1RZ
>
>
>
> On Thu, Oct 23, 2008 at 3:00 PM, Mi Yan <mi...@us.ibm.com> wrote:
>
>    Hi, Lenny,
>
>    So rank file mapping will be supported in OpenMPI 1.3?  I'm using
>    OpenMPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
>    Do you have an idea when OpenMPI 1.3 will be available?  OpenMPI 1.3
>    has quite a few features I'm looking for.
>
>    Thanks,
>
>    Mi
>    "Lenny Verkhovsky" <lenny.verkhov...@gmail.com> wrote on 10/23/2008
>    05:48 AM (Subject: Re: [OMPI users] Working with a CellBlade cluster):
>
>
>    Hi,
>
>    If I understand you correctly, the most suitable way to do this is with
>    the processor affinity (paffinity) support we have in Open MPI 1.3 and
>    the trunk.  However, the OS usually distributes processes evenly between
>    the sockets by itself.
>
>    There is still no formal FAQ, for a number of reasons, but you can read
>    how to use it in the attached draft (there have been a few renamings of
>    the params, so check with ompi_info).
>
>    Shared memory is used between processes that share the same machine,
>    and openib is used between different machines (hostnames); no special
>    MCA params are needed.
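>
>    For the two-socket QS22 case below, a command line along these lines
>    should be enough (the executable and hostfile names here are just
>    placeholders):
>
>    mpirun -np 4 --hostfile blades -mca btl sm,openib,self ./my_app
>
>    sm then carries the traffic between the two ranks on the same blade,
>    openib carries the traffic between blades, and self is for a rank
>    sending to itself.  For the debug-flag question, raising the BTL
>    verbosity, e.g. "-mca btl_base_verbose 30", should show which BTL each
>    connection ends up using.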
>
>    Best regards,
>    Lenny
>
>
> On Sun, Oct 19, 2008 at 10:32 AM, Gilbert Grosdidier
> <gro...@mail.cern.ch> wrote:
>   Working with a CellBlade cluster (QS22), the requirement is to have one
>   instance of the executable running on each socket of the blade (there
>   are 2 sockets).  The application is of the 'domain decomposition' type,
>   and each instance is required to often send/receive data with both the
>   remote blades and the neighbor socket.
>
>   Question is: which specification must be used for the mca btl component
>   to force 1) shmem-type messages when communicating with this neighbor
>   socket, while 2) using openib to communicate with the remote blades?
>   Is '-mca btl sm,openib,self' suitable for this?
>
>   Also, which debug flags could be used to crosscheck that the messages
>   are _actually_ going through the right channel, please?
>
>   We are currently using OpenMPI 1.2.5 shipped with RHEL5.2 (ppc64).
>   Which version do you think is currently the most optimised for these
>   processors and problem type?  Should we go towards OpenMPI 1.2.8
>   instead?  Or even try some OpenMPI 1.3 nightly build?
>
>   Thanks in advance for your help, Gilbert.
>

Re: [OMPI users] Fwd: Problems installing in Cygwin

2008-10-27 Thread Jeff Squyres
Sorry for the lack of reply; several of us were at the MPI Forum  
meeting last week, and although I can't speak for everyone else, I  
know that I always fall [way] behind on e-mail when I travel.  :-\


The windows port is very much a work-in-progress.  I'm not surprised  
that it doesn't work.  :-\


The good folks at U. Stuttgart/HLRS are actively working on a real  
Windows port, but it's off in a side-branch right now.  I don't know  
the exact status of this port -- George / Rainer / Shiqing, can you  
comment?



On Oct 22, 2008, at 9:54 AM, Gustavo Seabra wrote:


Hi All,

(Sorry if you already got this message before, but since I didn't get
any answer, I'm assuming it didn't get through to the list.)

I am trying to install OpenMPI in Cygwin.  From a Cygwin bash shell, I
configured OpenMPI with the command below:

$ echo $MPI_HOME
/home/seabra/local/openmpi-1.2.7
$ ./configure --prefix=$MPI_HOME \
   --with-mpi-param_check=always \
   --with-threads=posix \
   --enable-mpi-threads \
   --disable-io-romio \
   FC="g95" FFLAGS="-O0  -fno-second-underscore" \
   CXX="g++"

The configuration *seems* to be OK (it finishes with: "configure: exit
0"). However, when I try to install it, the installation finishes with
the error below. I wonder if anyone here could help me figure out what
is going wrong.

Thanks a lot!
Gustavo.

==
$ make clean
[...]
$ make install
[...]
Making install in mca/timer/windows
make[2]: Entering directory
`/home/seabra/local/openmpi-1.2.7/opal/mca/timer/windows'
depbase=`echo timer_windows_component.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\

   /bin/sh ../../../../libtool --tag=CC   --mode=compile gcc
-DHAVE_CONFIG_H -I. -I../../../../opal/include
-I../../../../orte/include -I../../../../ompi/include   -I../../../..
-D_REENTRANT  -O3 -DNDEBUG -finline-functions -fno-strict-aliasing
-MT timer_windows_component.lo -MD -MP -MF $depbase.Tpo -c -o
timer_windows_component.lo timer_windows_component.c &&\
   mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../../../../opal/include
-I../../../../orte/include -I../../../../ompi/include -I../../../..
-D_REENTRANT -O3 -DNDEBUG -finline-functions -fno-strict-aliasing -MT
timer_windows_component.lo -MD -MP -MF
.deps/timer_windows_component.Tpo -c timer_windows_component.c
-DDLL_EXPORT -DPIC -o .libs/timer_windows_component.o
timer_windows_component.c:22:60:
opal/mca/timer/windows/timer_windows_component.h: No such file or
directory
timer_windows_component.c:25: error: parse error before
"opal_timer_windows_freq"
timer_windows_component.c:25: warning: data definition has no type or
storage class
timer_windows_component.c:26: error: parse error before
"opal_timer_windows_start"
timer_windows_component.c:26: warning: data definition has no type or
storage class
timer_windows_component.c: In function `opal_timer_windows_open':
timer_windows_component.c:60: error: `LARGE_INTEGER' undeclared (first
use in this function)
timer_windows_component.c:60: error: (Each undeclared identifier is
reported only once
timer_windows_component.c:60: error: for each function it appears in.)
timer_windows_component.c:60: error: parse error before "now"
timer_windows_component.c:62: error: `now' undeclared (first use in
this function)
make[2]: *** [timer_windows_component.lo] Error 1
make[2]: Leaving directory
`/home/seabra/local/openmpi-1.2.7/opal/mca/timer/windows'
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory `/home/seabra/local/openmpi-1.2.7/opal'
make: *** [install-recursive] Error 1



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-27 Thread Jeff Squyres
I can't seem to run your code, either.  Can you provide a more precise  
description of what exactly is happening?  It's quite possible /  
probable that Rob's old post is the answer, but I can't tell from your  
original post -- there just aren't enough details.


Thanks.



On Oct 27, 2008, at 3:26 AM, jody wrote:


Perhaps this post in the Open-MPI archives can help:
http://www.open-mpi.org/community/lists/users/2008/01/4898.php

Jody

On Sun, Oct 26, 2008 at 4:30 AM, Davi Vercillo C. Garcia (ダヴィ)
 wrote:

Anybody !?

On Thu, Oct 23, 2008 at 12:41 AM, Davi Vercillo C. Garcia (ダヴィ)
 wrote:

Hi,

I'm trying to run a code using OpenMPI and I'm getting this error:

ADIOI_GEN_DELETE (line 22): **io No such file or directory

I don't know why this occurs, I only know this happens when I use more
than one process.

The code can be found at: http://pastebin.com/m149a1302

--
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon





--
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon




--
Jeff Squyres
Cisco Systems




Re: [OMPI users] MPI_SUM and MPI_REAL16 with MPI_ALLREDUCE in fortran90

2008-10-27 Thread Eugene Loh
I think the KINDs are compiler dependent.  For Sun Studio Fortran, 
REAL*16 and REAL(16) are the same thing.  For Intel, maybe it's 
different.  I don't know.  Try running this program:


double precision xDP
real(16) x16
real*16 xSTAR16
write(6,*) kind(xDP), kind(x16), kind(xSTAR16), kind(1.0_16)
end

and checking if the output matches your expectations.
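
(As an aside, a more portable way to request quad precision than
hard-coding the kind value 16 is SELECTED_REAL_KIND, e.g. something like

integer, parameter :: qp = selected_real_kind(p=30)
real(kind=qp) :: x   ! quad precision where the compiler supports it

though that doesn't change which MPI datatype you have to pass.)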

Jeff Squyres wrote:

I dabble in Fortran but am not an expert -- is REAL(kind=16) the same  
as REAL*16?  MPI_REAL16 should be a 16 byte REAL; I'm not 100% sure  
that REAL(kind=16) is the same thing...?


On Oct 23, 2008, at 7:37 AM, Julien Devriendt wrote:


Hi,

I'm trying to do an MPI_ALLREDUCE with quadruple precision real and
MPI_SUM and open mpi does not give me the correct answer (vartemp
is equal to vartored instead of 2*vartored).  Switching to double
precision real works fine.
My version of openmpi is 1.2.7 and it has been compiled with ifort v10.1
and icc/icpc at installation.

Here's the simple f90 code which fails:

program test_quad

   implicit none

   include "mpif.h"

   real(kind=16) :: vartored(8),vartemp(8)
   integer   :: nn,nslaves,my_index
   integer   :: mpierror

   call MPI_INIT(mpierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD,nslaves,mpierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD,my_index,mpierror)

   nn   = 8
   vartored = 1.0_16
   vartemp  = 0.0_16
   print*,"P1 ",my_index,vartored
   call MPI_ALLREDUCE(vartored,vartemp,nn,MPI_REAL16,MPI_SUM,MPI_COMM_WORLD,mpierror)

   print*,"P2 ",my_index,vartemp

   stop

end program test_quad

Any idea why this happens?




Re: [OMPI users] MPI_SUM and MPI_REAL16 with MPI_ALLREDUCE in fortran90

2008-10-27 Thread Jeff Squyres
I dabble in Fortran but am not an expert -- is REAL(kind=16) the same  
as REAL*16?  MPI_REAL16 should be a 16 byte REAL; I'm not 100% sure  
that REAL(kind=16) is the same thing...?
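
One quick sanity check might be to ask the MPI library how big it thinks
MPI_REAL16 is and compare that against your REAL(kind=16), e.g. something
like:

   integer :: tsize, ierr
   call MPI_TYPE_SIZE(MPI_REAL16, tsize, ierr)
   print *, 'MPI_REAL16 is ', tsize, ' bytes'   ! expect 16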



On Oct 23, 2008, at 7:37 AM, Julien Devriendt wrote:



Hi,

I'm trying to do an MPI_ALLREDUCE with quadruple precision real and
MPI_SUM and open mpi does not give me the correct answer (vartemp
is equal to vartored instead of 2*vartored).  Switching to double
precision real works fine.
My version of openmpi is 1.2.7 and it has been compiled with ifort v10.1
and icc/icpc at installation.

Here's the simple f90 code which fails:

program test_quad

   implicit none

   include "mpif.h"


   real(kind=16) :: vartored(8),vartemp(8)
   integer   :: nn,nslaves,my_index
   integer   :: mpierror


   call MPI_INIT(mpierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD,nslaves,mpierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD,my_index,mpierror)

   nn   = 8
   vartored = 1.0_16
   vartemp  = 0.0_16
   print*,"P1 ",my_index,vartored
   call MPI_ALLREDUCE(vartored,vartemp,nn,MPI_REAL16,MPI_SUM,MPI_COMM_WORLD,mpierror)

   print*,"P2 ",my_index,vartemp

   stop

end program test_quad

Any idea why this happens?

Many thanks in advance!

J.



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] job abort on MPI task exit

2008-10-27 Thread Ralph Castain
This was added to the 1.3 version - it was not back-ported to the  
1.2.x series.


Ralph

On Oct 27, 2008, at 5:46 AM, David Singleton wrote:



Apologies if this has been covered in a previous thread - I
went back through a lot of posts without seeing anything
similar.

In an attempt to protect some users from themselves, I was hoping
that OpenMPI could be configured so that an MPI task calling
exit before calling MPI_Finalize() would cause job cleanup, i.e.
behave effectively as if MPI_Abort() was called.  The reason is
that many users don't realise they need to use MPI_Abort()
instead of Fortran stop or C exit.  If exit is called, all
other processes get stuck in the next blocking call and, for a
large walltime limit batch job, that can be a real waste of
resources.

I think LAM terminated the job if a task exited with non-zero
exit status or due to a signal.  OpenMPI appears to clean up
only in the case of a signalled task.  Ideally, any exit before
MPI_Finalize() should be terminal.  Why is this not the case?

Thanks,
David




Re: [OMPI users] Mixed Threaded MPI code, how to launch?

2008-10-27 Thread Ralph Castain
I take it this is using OMPI 1.2.x? If so, there really isn't a way to  
do this in that series.


If they are using 1.3 (in some pre-release form), then there are two  
options:


1. they could use the sequential mapper by specifying "-mca rmaps  
seq". This mapper takes a hostfile and maps one process to each entry,  
in rank order. So they could specify that we only map to half of the  
actual number of cores on a particular node


2. they could use the rank_file mapper that allows you to specify what  
cores are to be used by what rank. I am less familiar with this option  
and there isn't a lot of documentation on how to use it - but you may  
have to provide a fairly comprehensive map file since your nodes are  
not all the same.
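
A rank file for the same hypothetical pair of nodes might look like

rank 0=node01 slot=0,1
rank 1=node01 slot=2,3
rank 2=node02 slot=0,1

so that each rank owns two cores; it is handed to mpirun via the rank_file
mapper (check ompi_info for the exact parameter names, since they have
changed during the 1.3 pre-releases).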


I have been asked by some other folks to provide a mapping option
"--stride x" that would cause the default round-robin mapper to step
across the specified number of slots. So a stride of 2 would  
automatically cause byslot mapping to increment by 2 instead of the  
current stride of 1. I doubt that will be in 1.3.0, but it will show  
up in later releases.


Ralph


On Oct 25, 2008, at 3:36 PM, Brock Palen wrote:

We have a user with a code that uses threaded solvers inside each  
MPI rank.  They would like to run two threads per process.


The question is how to launch this?  The default -byslot puts all  
the processes on the first sets of cpus not leaving any cpus for the  
second thread for each process.  And half the cpus are wasted.


The -bynode option would work in theory if all our nodes had the same
number of cores (they do not).


So right now the user did:

#PBS -l nodes=22:ppn=2
export OMP_NUM_THREADS=2
mpirun -np 22 app

Which made me aware of the problem.

How can I basically tell OMPI that a 'slot' is two cores on the
same machine?  This needs to work inside our Torque-based queueing
system.


Sorry If I was not clear about my goal.


Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985







[OMPI users] job abort on MPI task exit

2008-10-27 Thread David Singleton


Apologies if this has been covered in a previous thread - I
went back through a lot of posts without seeing anything
similar.

In an attempt to protect some users from themselves, I was hoping
that OpenMPI could be configured so that an MPI task calling
exit before calling MPI_Finalize() would cause job cleanup, i.e.
behave effectively as if MPI_Abort() was called.  The reason is
that many users don't realise they need to use MPI_Abort()
instead of Fortran stop or C exit.  If exit is called, all
other processes get stuck in the next blocking call and, for a
large walltime limit batch job, that can be a real waste of
resources.
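
Schematically (with made-up names), the pattern we want to discourage is

   if (something_bad) then
      stop                                ! leaves the other ranks blocked
   end if

when what is really needed is

   if (something_bad) then
      call MPI_Abort(MPI_COMM_WORLD, 1, ierr)
   end if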

I think LAM terminated the job if a task exited with non-zero
exit status or due to a signal.  OpenMPI appears to clean up
only in the case of a signalled task.  Ideally, any exit before
MPI_Finalize() should be terminal.  Why is this not the case?

Thanks,
David


Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-27 Thread jody
Perhaps this post in the Open-MPI archives can help:
http://www.open-mpi.org/community/lists/users/2008/01/4898.php

Jody

On Sun, Oct 26, 2008 at 4:30 AM, Davi Vercillo C. Garcia (ダヴィ)
 wrote:
> Anybody !?
>
> On Thu, Oct 23, 2008 at 12:41 AM, Davi Vercillo C. Garcia (ダヴィ)
>  wrote:
>> Hi,
>>
>> I'm trying to run a code using OpenMPI and I'm getting this error:
>>
>> ADIOI_GEN_DELETE (line 22): **io No such file or directory
>>
>> I don't know why this occurs, I only know this happens when I use more
>> than one process.
>>
>> The code can be found at: http://pastebin.com/m149a1302
>>
>> --
>> Davi Vercillo Carneiro Garcia
>> http://davivercillo.blogspot.com/
>>
>> Universidade Federal do Rio de Janeiro
>> Departamento de Ciência da Computação
>> DCC-IM/UFRJ - http://www.dcc.ufrj.br
>>
>> Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
>> http://www.dcc.ufrj.br/~gul
>>
>> Linux User: #388711
>> http://counter.li.org/
>>
>> "Theory is when you know something, but it doesn't work. Practice is
>> when something works, but you don't know why.
>> Programmers combine theory and practice: Nothing works and they don't
>> know why." - Anon
>>
>
>
>
> --
> Davi Vercillo Carneiro Garcia
> http://davivercillo.blogspot.com/
>
> Universidade Federal do Rio de Janeiro
> Departamento de Ciência da Computação
> DCC-IM/UFRJ - http://www.dcc.ufrj.br
>
> Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
> http://www.dcc.ufrj.br/~gul
>
> Linux User: #388711
> http://counter.li.org/
>
> "Theory is when you know something, but it doesn't work. Practice is
> when something works, but you don't know why.
> Programmers combine theory and practice: Nothing works and they don't
> know why." - Anon
>