Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-27 Thread Davi Vercillo C. Garcia (ダヴィ)
Hi,

On Mon, Oct 27, 2008 at 6:48 PM, Jeff Squyres  wrote:
> I can't seem to run your code, either.  Can you provide a more precise
> description of what exactly is happening?  It's quite possible / probable
> that Rob's old post is the answer, but I can't tell from your original post
> -- there just aren't enough details.

When I execute this code with more than one process (-n > 1), this
error message appears. My code is a distributed compressor: the
compression work is distributed. A single process reads a block from a
file, compresses it, and writes the compressed block to a file.

> On Oct 27, 2008, at 3:26 AM, jody wrote:
>> Perhaps this post in the Open-MPI archives can help:
>> http://www.open-mpi.org/community/lists/users/2008/01/4898.php
> http://www.open-mpi.org/mailman/listinfo.cgi/users

I had already seen this post, but it didn't help me. I'm not using
MPI_File_delete in my code.

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon



Re: [OMPI users] ADIOI_GEN_DELETE

2008-10-25 Thread Davi Vercillo C. Garcia (ダヴィ)
Anybody !?

On Thu, Oct 23, 2008 at 12:41 AM, Davi Vercillo C. Garcia (ダヴィ)
<daviverci...@gmail.com> wrote:
> Hi,
>
> I'm trying to run a code using OpenMPI and I'm getting this error:
>
> ADIOI_GEN_DELETE (line 22): **io No such file or directory
>
> I don't know why this occurs; I only know it happens when I use more
> than one process.
>
> The code can be found at: http://pastebin.com/m149a1302
>
> --
> Davi Vercillo Carneiro Garcia
> http://davivercillo.blogspot.com/
>
> Universidade Federal do Rio de Janeiro
> Departamento de Ciência da Computação
> DCC-IM/UFRJ - http://www.dcc.ufrj.br
>
> Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
> http://www.dcc.ufrj.br/~gul
>
> Linux User: #388711
> http://counter.li.org/
>
> "Theory is when you know something, but it doesn't work. Practice is
> when something works, but you don't know why.
> Programmers combine theory and practice: Nothing works and they don't
> know why." - Anon
>



-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon



Re: [OMPI users] how to install openmpi with a specific gcc

2008-09-26 Thread Davi Vercillo C. Garcia (デビッド)
Hi,

> on my system the default gcc is 2.95.3. For openmpi I have installed
> gcc-3.4.6 but I kept the default gcc as gcc-2.95.3. Now, how can I
> configure/install openmpi with gcc-3.4.6? What options should I give when
> configuring it so that it doesn't go and pick up the default 2.95.3?

Why do you need to use that version? Most modern software recommends
GCC version 3 or greater (version 4 is best).
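To the original question: configure accepts the standard CC/CXX variables, so you can point Open MPI's build at a specific compiler. A sketch (the paths are assumptions; adjust them to where your gcc-3.4.6 actually lives):

```shell
# Tell configure which compilers to use instead of the default gcc
./configure CC=/usr/bin/gcc-3.4.6 CXX=/usr/bin/g++-3.4.6 \
            --prefix=/usr/local/openmpi
make all install
```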

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Theory is when you know something, but it doesn't work. Practice is
when something works, but you don't know why.
Programmers combine theory and practice: Nothing works and they don't
know why." - Anon



Re: [OMPI users] Newbie doubt.

2008-09-17 Thread Davi Vercillo C. Garcia (デビッド)
Hi,

> You must close the file using
> MPI_File_close(MPI_File *fh)
> before calling MPI_Finalize.

Newbie question... newbie problem! Hahaha... Thanks!!!

> By the way, I think you shouldn't do
>  strcat(argv[1], ".bz2");
> as this would overwrite any following arguments.

I know... I was just experimenting! =D
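For the record, jody's two fixes together would look roughly like this (a minimal sketch, not the poster's actual program: build the output name in a separate buffer instead of strcat'ing onto argv[1], and close every MPI_File before MPI_Finalize):

```c
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    char saida[1024];
    MPI_File entrada, comprimido;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Build the output name in our own buffer, not on top of argv[1] */
    snprintf(saida, sizeof(saida), "%s.bz2", argv[1]);

    MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &entrada);
    MPI_File_open(MPI_COMM_WORLD, saida, MPI_MODE_RDWR | MPI_MODE_CREATE,
                  MPI_INFO_NULL, &comprimido);

    /* ... do the work ... */

    /* Close every open MPI_File BEFORE finalizing */
    MPI_File_close(&entrada);
    MPI_File_close(&comprimido);
    MPI_Finalize();
    return 0;
}
```

(Requires an MPI installation to compile and run, e.g. via mpicc.)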

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds



[OMPI users] Newbie doubt.

2008-09-17 Thread Davi Vercillo C. Garcia (デビッド)
Hi,

I'm starting to use Open MPI and I'm having some trouble. I wrote a
simple program that tries to open files using the function
MPI_File_open(), like below:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <mpi.h>

int processoPrincipal(void);
int processosEscravos(void);

int main(int argc, char** argv) {
int meuRank, numeroProcessos;
MPI_File arquivoEntrada, arquivoSaida;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &meuRank);
MPI_Comm_size(MPI_COMM_WORLD, &numeroProcessos);

MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_RDONLY,
MPI_INFO_NULL, &arquivoEntrada);
strcat(argv[1], ".bz2");
MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_RDWR |
MPI_MODE_CREATE, MPI_INFO_NULL, &arquivoSaida);

if (meuRank != 0) {
processoPrincipal();
} else {
processosEscravos();
}

MPI_Finalize();
return 0;
}

But I'm getting an error message like:

*** An error occurred in MPI_Barrier
*** An error occurred in MPI_Barrier
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (goodbye)
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (goodbye)

What is this ?

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://www.dcc.ufrj.br/~gul

Linux User: #388711
http://counter.li.org/

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds



Re: [OMPI users] Warnings in Ubuntu Hardy

2008-09-06 Thread Davi Vercillo C. Garcia (デビッド)
Thanks !

On Sat, Sep 6, 2008 at 10:34 PM, Dirk Eddelbuettel <e...@debian.org> wrote:
>
> On 6 September 2008 at 22:13, Davi Vercillo C. Garcia () wrote:
> | I'm trying to execute some programs in my notebook (Ubuntu 8.04) using
> | OpenMPI, and I always get a warning message like:
> |
> | libibverbs: Fatal: couldn't read uverbs ABI version.
> | --
> | [0,0,0]: OpenIB on host juliana was unable to find any HCAs.
> | Another transport will be used instead, although this may result in
> | lower performance.
> | --
> |
> | What is this ?!
>
> Uncomment this in /etc/openmpi/openmpi-mca-params.conf:
>
>  # Disable the use of InfiniBand
>  btl = ^openib
>
> which is the default in newer packages.
>
> Dirk
>
> --
> Three out of two people have difficulties with fractions.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://groups.google.com/group/gul-ufrj

Linux User: #388711
http://counter.li.org/

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds



[OMPI users] Warnings in Ubuntu Hardy

2008-09-06 Thread Davi Vercillo C. Garcia (デビッド)
Hi,

I'm trying to execute some programs in my notebook (Ubuntu 8.04) using
OpenMPI, and I always get a warning message like:

libibverbs: Fatal: couldn't read uverbs ABI version.
--
[0,0,0]: OpenIB on host juliana was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--

What is this ?!

-- 
Davi Vercillo Carneiro Garcia
http://davivercillo.blogspot.com/

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

Grupo de Usuários GNU/Linux da UFRJ (GUL-UFRJ)
http://groups.google.com/group/gul-ufrj

Linux User: #388711
http://counter.li.org/

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds



[OMPI users] OpenMPI and C++

2008-06-16 Thread Davi Vercillo C. Garcia
Hi,

Can I use Open MPI with C++ without any kludges? Is there some kind of
C++ wrapper for Open MPI?
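For what it's worth, the MPI-2 standard defined C++ bindings (since deprecated and later removed from the standard), and Open MPI shipped them; the plain C API also works fine from C++. A minimal sketch using the C++ bindings:

```cpp
// Uses the MPI-2 C++ bindings; the C bindings are equally valid from C++.
#include <mpi.h>
#include <iostream>

int main(int argc, char **argv) {
    MPI::Init(argc, argv);
    int rank = MPI::COMM_WORLD.Get_rank();
    int size = MPI::COMM_WORLD.Get_size();
    std::cout << "Hello from rank " << rank << " of " << size << std::endl;
    MPI::Finalize();
    return 0;
}
```

(Compile with mpic++; requires an MPI installation.)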

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-30 Thread Davi Vercillo C. Garcia
Hi,

Sorry, I made a mistake... I'm not trying to use PVFS over NFS but
PVFS over ext3. I still don't understand this error message...
On Thu, May 29, 2008 at 5:33 PM, Robert Latham <r...@mcs.anl.gov> wrote:
> On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote:
>> > Oh, I see you want to use ordered i/o in your application.  PVFS
>> > doesn't support that mode.  However, since you know how much data each
>> > process wants to write, a combination of MPI_Scan (to compute each
>> > processes offset) and MPI_File_write_at_all (to carry out the
>> > collective i/o) will give you the same result with likely better
>> > performance (and has the nice side effect of working with pvfs).
>>
>> I don't understand very well this... what do I need to change in my code ?
>
> MPI_File_write_ordered has an interesting property (which you probably
> know since you use it, but i'll spell it out anyway):  writes end up
> in the file in rank-order, but are not necessarily carried out in
> rank-order.
>
> Once each process knows the offsets and lengths of the writes the
> other processes will do, it can write its own data.  Observe that
> rank 0 can write immediately.  Rank 1 only needs to know how much data
> rank 0 will write, and so on.
>
> Rank N can compute its offset by knowing how much data the preceding
> N-1 processes want to write.  The most efficient way to collect this is
> to use MPI_Scan to compute a running sum of the data sizes:
>
> http://www.mpi-forum.org/docs/mpi-11-html/node84.html#Node84
>
> Once you've computed these offsets, MPI_File_write_at_all has enough
> information to carry out a collective write of the data.
>
> ==rob
>
> --
> Rob Latham
> Mathematics and Computer Science DivisionA215 0178 EA2D B059 8CDF
> Argonne National Lab, IL USA B29D F333 664A 4280 315B
>
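Rob's recipe can be sketched as follows (hypothetical helper; it assumes each rank holds `nbytes` bytes of compressed output in `buf`):

```c
#include <mpi.h>

/* Each rank writes its block at an offset equal to the sum of the
 * byte counts of all lower-ranked processes. */
void write_in_rank_order(MPI_File fh, const char *buf, long long nbytes)
{
    long long sum = 0;
    MPI_Status status;

    /* Inclusive prefix sum: on rank N, sum = bytes of ranks 0..N */
    MPI_Scan(&nbytes, &sum, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);

    /* Subtract our own contribution to get the exclusive offset */
    MPI_Offset offset = (MPI_Offset)(sum - nbytes);

    /* Collective write: same file layout as MPI_File_write_ordered,
     * but without serializing the ranks */
    MPI_File_write_at_all(fh, offset, (void *)buf, (int)nbytes,
                          MPI_BYTE, &status);
}
```

(Requires an MPI installation; unlike MPI_File_write_ordered, this also works on PVFS.)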



-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Davi Vercillo C. Garcia
Hi,

> Oh, I see you want to use ordered i/o in your application.  PVFS
> doesn't support that mode.  However, since you know how much data each
> process wants to write, a combination of MPI_Scan (to compute each
> processes offset) and MPI_File_write_at_all (to carry out the
> collective i/o) will give you the same result with likely better
> performance (and has the nice side effect of working with pvfs).

I don't understand this very well... what do I need to change in my code?

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



[OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Davi Vercillo C. Garcia
Hi,

I'm trying to run my program in my environment and I'm running into
problems. My environment is based on PVFS2 over NFS (PVFS is mounted
on an NFS partition), Open MPI, and Ubuntu. My program uses the MPI-IO
and BZ2 development libraries. When I try to run it, this message appears:

File locking failed in ADIOI_Set_lock. If the file system is NFS, you
need to use NFS version 3, ensure that the lockd daemon is running on
all the machines, and mount the directory with the 'noac' option (no
attribute caching).
[campogrande05.dcc.ufrj.br:05005] MPI_ABORT invoked on rank 0 in
communicator MPI_COMM_WORLD with errorcode 1
mpiexec noticed that job rank 1 with PID 5008 on node campogrande04
exited on signal 15 (Terminated).

Why ?!
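The advice in the error message translates to mount options roughly like these (a sketch; the server and paths are placeholders):

```shell
# NFS v3 with attribute caching disabled ('noac'), as ADIOI_Set_lock
# requires for file locking over NFS; lockd must be running on all nodes
mount -t nfs -o vers=3,noac server:/export/pvfs /mnt/pvfs
```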

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein
/**
 * NOTE: when running with MPI, remember that the users MUST have the same ID.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <math.h>
#include <errno.h>
#include <ctype.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/times.h>
#include "bzlib.h"
#include <omp.h>
#include "mpi.h"

#define FILE_NAME_LEN 1034
#define BENCH   1

typedef unsigned char uchar;
typedef char Char;
typedef unsigned char Bool;
typedef unsigned char UChar;
typedef int Int32;
typedef unsigned int UInt32;
typedef short Int16;
typedef unsigned short UInt16;

#define True  ((Bool)1)
#define False ((Bool)0)

/**
 * Enables verbose mode
 */
int VERBOSE = 1;

/*--
 IntNative is your platform's `native' int size.
 Only here to avoid probs with 64-bit platforms.
 --*/
typedef int IntNative;

Int32 blockSize100k = 9;
Int32 verbosity = 0;
Int32 workFactor= 30;

/**
 * Maximum queue size
 */
long TAM_FILA = 10;
/**
 * Block size read by each thread
 */
long M_BLOCK  = 900*1000;
#define M_BLOCK_OUT (M_BLOCK + M_BLOCK)


/**
 * MPI Variables
 */

int  nProcs= 0;
int  rank  = 0;
int  nfiles= 0;
int  nBlocosPorProc= 0;
int  nBlocosResto  = 0;
long nBlocos   = 0;
long long filesize = 0;
long long tamComprimidoPorProc = 0;

typedef struct SBloco{
   UChar* dado;
   long int id;
} Bloco;


typedef struct s_OutputBuffer{
   long size;
   uchar *zbuf;
} OutputBuffer;


/**
 * TODO: implementation in progress
 */
static void comprime( MPI_File stream, MPI_File zStream )
{
   // 1 reader thread, 1 writer thread; the rest are compressor threads
   // NOTE: there must be at least 3 threads
   #define NUM_THREADS 4
   MPI_Status status;
   //MPI_Offset offset; [DAVI]

   uchar *zbuf;
   long r, count;
   unsigned int nZ;
   long nIdBlock;
   UChar *ibuf[TAM_FILA]; // input buffer
   OutputBuffer **obuf; // output buffer
   Int32 nIbuf[TAM_FILA];
   Int32 block_in_use[TAM_FILA];

   long nLeituraAtual;
   long nProcAtual;
   long nGravacaoAtual;
   Int32 erro;
   Int32 endRead;
   long long nTamOBuf = ( filesize / M_BLOCK ) + 1;

   // initialize the output buffer
   obuf = (OutputBuffer**)malloc( sizeof(OutputBuffer*)*nTamOBuf );

   for( count = 0; count < nTamOBuf; count++ )
   {
   if( count < TAM_FILA )
   ibuf[count] = (UChar*)malloc( sizeof(UChar) * M_BLOCK );
   obuf[count] = (OutputBuffer*)malloc( sizeof(OutputBuffer) );
   obuf[count]->size = -1;
   obuf[count]->zbuf = NULL;
   }

   // set the number of threads
   omp_set_num_threads( NUM_THREADS );

   erro   = 0;
   nLeituraAtual  = 0;
   nProcAtual = 0;
   nGravacaoAtual = 0;
   endRead= 0;
   nIdBlock   = -1;
//  char str[10];
   //int nPrinted = 0;
   int tsleep   = 0;

   for (count = 0; count < TAM_FILA; ++count) {
   block_in_use[count] = 0;
   }

   MPI_File_set_view( stream,  0, MPI_BYTE, MPI_BYTE, "native", MPI_INFO_NULL );
   MPI_File_set_view( zStream, 0, MPI_BYTE, MPI_BYTE, "native", MPI_INFO_NULL );

// Start of parallel region
#pragma omp parallel default(shared) private(zbuf, nZ, r, nIdBlock )
{
   zbuf = (uchar*)malloc( (M_BLOCK + 600 + (M_BLOCK / 100)) * sizeof(uchar) );

   while ( !erro && omp_get_thread_num() != 1 )
   {
   //printf( "PROCESSO %d\n", rank );
   if( omp_get_thread_num() == 0 ) // reader thread
   {
   if( VERBOSE )printf( "Processo %d Thread Leitora\n", rank );

   if ( (rank + nLeituraAtual*nProcs) >= nBlocos &&
nLeituraAtual > 0

Re: [OMPI users] ROMIO of OpenMPI

2008-05-18 Thread Davi Vercillo C. Garcia
Thanks.

On 5/18/08, Andreas Schäfer <gent...@gmx.de> wrote:
> On 02:22 Sun 18 May     , Davi Vercillo C. Garcia wrote:
>  > I want to know if Open MPI has a built-in ROMIO implementation. I
>  > compiled and installed the latest version of Open MPI and I didn't use
>  > any option at the configuration step to enable it.
>
>
> From the configure script:
>
>  > gentryx@wintermute ~/ompi-trunk $ ./configure --help
>  > [snip]
>  > Default is to use the internal component system and its specially
>  > modified version of ROMIO
>  > [snip]
>
>  I guess this counts as yes. ;-) But you can disable it with configure,
>  if you want to.
>
>  Cheers!
>  -Andi
>
>
>  --
>  
>  Andreas Schäfer
>  Cluster and Metacomputing Working Group
>  Friedrich-Schiller-Universität Jena, Germany
>  PGP/GPG key via keyserver
>  I'm a bright... http://www.the-brights.net
>  
>
>  (\___/)
>  (+'.'+)
>  (")_(")
>  This is Bunny. Copy and paste Bunny into your
>  signature to help him gain world domination!
>
>
>


-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



[OMPI users] ROMIO of OpenMPI

2008-05-18 Thread Davi Vercillo C. Garcia
Hi,

I want to know if Open MPI has a built-in ROMIO implementation. I
compiled and installed the latest version of Open MPI and I didn't use
any option at the configuration step to enable it.

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



Re: [OMPI users] "Permission denied" during MPI installation

2008-04-29 Thread Davi Vercillo C. Garcia
Hi,

>  Making install in etc
>  test -z "/usr/local/etc" || ../../config/install-sh -c -d "/usr/local/
>  etc"
>  /usr/bin/install -c -m 644 openmpi-mca-params.conf /usr/local/etc/
>  openmpi-mca-params.conf
>  install: /usr/local/etc/openmpi-mca-params.conf: Permission denied
>  make[3]: *** [install-data-local] Error 71
>  make[2]: *** [install-am] Error 2
>  make[1]: *** [install-recursive] Error 1
>  make: *** [install-recursive] Error 1

Which user are you running "make install" as? Installing into
/usr/local usually requires root privileges; make sure you have them.
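Two common ways around this (sketches; the prefix path is just an example):

```shell
# Option 1: install system-wide with root privileges
sudo make install

# Option 2: install into a prefix you own, no root needed
./configure --prefix=$HOME/openmpi
make all install
```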

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



[OMPI users] Troubles with MPI-IO Test and Torque/PVFS

2008-04-10 Thread Davi Vercillo C. Garcia
Hi all,

I have a cluster with Torque and PVFS. I'm trying to test my
environment with MPI-IO Test, but some segfaults are occurring.
Does anyone know what is happening? The error output is below:

Rank 1 Host campogrande03.dcc.ufrj.br WARNING ERROR 1207853304: 1 bad
bytes at file offset 0.  Expected (null), received (null)
Rank 2 Host campogrande02.dcc.ufrj.br WARNING ERROR 1207853304: 1 bad
bytes at file offset 0.  Expected (null), received (null)
[campogrande01:10646] *** Process received signal ***
Rank 0 Host campogrande04.dcc.ufrj.br WARNING ERROR 1207853304: 1 bad
bytes at file offset 0.  Expected (null), received (null)
Rank 0 Host campogrande04.dcc.ufrj.br WARNING ERROR 1207853304: 65537
bad bytes at file offset 0.  Expected (null), received (null)
[campogrande04:05192] *** Process received signal ***
[campogrande04:05192] Signal: Segmentation fault (11)
[campogrande04:05192] Signal code: Address not mapped (1)
[campogrande04:05192] Failing at address: 0x1
Rank 1 Host campogrande03.dcc.ufrj.br WARNING ERROR 1207853304: 65537
bad bytes at file offset 0.  Expected (null), received (null)
[campogrande03:05377] *** Process received signal ***
[campogrande03:05377] Signal: Segmentation fault (11)
[campogrande03:05377] Signal code: Address not mapped (1)
[campogrande03:05377] Failing at address: 0x1
[campogrande03:05377] [ 0] [0xe440]
[campogrande03:05377] [ 1]
/lib/tls/i686/cmov/libc.so.6(vsnprintf+0xb4) [0xb7d5fef4]
[campogrande03:05377] [ 2] mpiIO_test(make_error_messages+0xcf) [0x80502e4]
[campogrande03:05377] [ 3] mpiIO_test(warning_msg+0x8c) [0x8050569]
[campogrande03:05377] [ 4] mpiIO_test(report_errs+0xe2) [0x804d413]
[campogrande03:05377] [ 5] mpiIO_test(read_write_file+0x594) [0x804d9c2]
[campogrande03:05377] [ 6] mpiIO_test(main+0x1d0) [0x804aa14]
[campogrande03:05377] [ 7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7d15050]
[campogrande03:05377] [ 8] mpiIO_test [0x804a7e1]
[campogrande03:05377] *** End of error message ***
Rank 2 Host campogrande02.dcc.ufrj.br WARNING ERROR 1207853304: 65537
bad bytes at file offset 0.  Expected (null), received (null)
[campogrande02:05187] *** Process received signal ***
[campogrande02:05187] Signal: Segmentation fault (11)
[campogrande02:05187] Signal code: Address not mapped (1)
[campogrande02:05187] Failing at address: 0x1
[campogrande01:10646] Signal: Segmentation fault (11)
[campogrande01:10646] Signal code: Address not mapped (1)
[campogrande01:10646] Failing at address: 0x1a
[campogrande02:05187] [ 0] [0xe440]
[campogrande02:05187] [ 1]
/lib/tls/i686/cmov/libc.so.6(vsnprintf+0xb4) [0xb7d5fef4]
[campogrande02:05187] [ 2] mpiIO_test(make_error_messages+0xcf) [0x80502e4]
[campogrande02:05187] [ 3] mpiIO_test(warning_msg+0x8c) [0x8050569]
[campogrande02:05187] [ 4] mpiIO_test(report_errs+0xe2) [0x804d413]
[campogrande02:05187] [ 5] mpiIO_test(read_write_file+0x594) [0x804d9c2]
[campogrande02:05187] [ 6] mpiIO_test(main+0x1d0) [0x804aa14]
[campogrande02:05187] [ 7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7d15050]
[campogrande02:05187] [ 8] mpiIO_test [0x804a7e1]
[campogrande02:05187] *** End of error message ***
[campogrande04:05192] [ 0] [0xe440]
[campogrande04:05192] [ 1]
/lib/tls/i686/cmov/libc.so.6(vsnprintf+0xb4) [0xb7d5fef4]
[campogrande04:05192] [ 2] mpiIO_test(make_error_messages+0xcf) [0x80502e4]
[campogrande04:05192] [ 3] mpiIO_test(warning_msg+0x8c) [0x8050569]
[campogrande04:05192] [ 4] mpiIO_test(report_errs+0xe2) [0x804d413]
[campogrande04:05192] [ 5] mpiIO_test(read_write_file+0x594) [0x804d9c2]
[campogrande04:05192] [ 6] mpiIO_test(main+0x1d0) [0x804aa14]
[campogrande04:05192] [ 7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7d15050]
[campogrande04:05192] [ 8] mpiIO_test [0x804a7e1]
[campogrande04:05192] *** End of error message ***
[campogrande01:10646] [ 0] [0xe440]
[campogrande01:10646] [ 1]
/lib/tls/i686/cmov/libc.so.6(vsnprintf+0xb4) [0xb7d5fef4]
[campogrande01:10646] [ 2] mpiIO_test(make_error_messages+0xcf) [0x80502e4]
[campogrande01:10646] [ 3] mpiIO_test(warning_msg+0x8c) [0x8050569]
[campogrande01:10646] [ 4] mpiIO_test(report_errs+0xe2) [0x804d413]
[campogrande01:10646] [ 5] mpiIO_test(read_write_file+0x594) [0x804d9c2]
[campogrande01:10646] [ 6] mpiIO_test(main+0x1d0) [0x804aa14]
[campogrande01:10646] [ 7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7d15050]
[campogrande01:10646] [ 8] mpiIO_test [0x804a7e1]
[campogrande01:10646] *** End of error message ***
mpiexec noticed that job rank 0 with PID 5192 on node campogrande04
exited on signal 11 (Segmentation fault).

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em 

Re: [OMPI users] Introduce myself.

2008-04-06 Thread Davi Vercillo C. Garcia
Brock,

On Sun, Apr 6, 2008 at 12:39 AM, Brock Palen  wrote:
> The best online way to learn MPI I have ever found was the NCSA class
>  here:
>
>  http://ci-tutor.ncsa.uiuc.edu/login.php

Thanks a lot !!!

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



[OMPI users] Introduce myself.

2008-04-05 Thread Davi Vercillo C. Garcia
HI all,

My name is Davi Vercillo and I'm from Brazil. I'm just starting to
study MPI and I'll be using Open MPI for it. I want to know if there is
a kind of "MPI for Dummies" that I can find on the Internet. I'd also
like to know how it differs from other MPI implementations like MPICH,
MPI-2, etc.

PS: Sorry about my English.

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein