Re: [OMPI users] Simple question on GRID

2012-03-01 Thread Alexander Beck-Ratzka

Hi Shaandar,

this is not a simple question! If you want to bring your cluster into a 
Grid, you first have to decide which Grid, because different Grids use 
different Grid middleware. Once you have made that decision, I recommend 
looking at the web page of that Grid community; you will usually find 
instructions there on how to integrate your cluster into their Grid. 
Depending on the Grid software used, these instructions can differ 
considerably, so I cannot be more precise here and now. If you decide on a 
Grid that uses the Globus software, feel free to contact me with further 
questions; in the case of Globus I can help you...


Best wishes

Alexander


Hi
I have two Beowulf clusters (both Ubuntu 10.10, one running OpenMPI, one 
running MPICH2).
They run separately in their local network environments. I understand there 
is a way to integrate them over the Internet, presumably with Grid software.

Is there any tutorial on how to do this?








Re: [OMPI users] problems with parallel IO solved!

2011-08-25 Thread Alexander Beck-Ratzka
Hi Folks,

the problem could be solved by using the same compiler settings for writing 
out and reading in. The writer was compiled with -trace (Intel compiler), the 
reader without any additional options.

Best wishes

Alexander

> Hi Folks,
> 
> I have problems retrieving data that I have written out with MPI
> parallel IO. In tests everything works fine, but within a larger
> environment, the data read in differ from those written out.
> 
> Here is the setup of my experiment:
> 
> # the writer #
> program parallel_io
> 
>   use mpi
> 
>   implicit none
> 
>   integer,parameter :: nx=1,ny=300,nz=256,nv=12
>   integer ierr, i, myrank, comm_size, BUFSIZE, thefile, intsize
> 
>   parameter (BUFSIZE=1075200)
> 
>   real,dimension(nv+2,nx,ny,nz) :: v1
> 
>   integer (kind=MPI_OFFSET_KIND) disp
>   integer ix, iy, iz, nn, counter
> 
>   character(6) cname
>   call mpi_init(ierr)
>   call mpi_comm_size(mpi_comm_world, comm_size, ierr)
>   call mpi_comm_rank(mpi_comm_world, myrank,ierr)
> 
>   counter=0
>   do ix = 1,nz
>  do iy=1,ny
> do iz=1,nx
>do nn=1,nv+2
>   v1(nn,ix,iy,iz) = counter*(myrank+20)/200.
>   counter=counter+1
>end do
> end do
>  end do
>   end do
> 
>   call mpi_barrier(mpi_comm_world,ierr)
> 
>   call mpi_type_extent(mpi_real, intsize, ierr)
>   call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_WRONLY + &
>        MPI_MODE_CREATE, mpi_info_null, thefile, ierr)
>   call mpi_type_size(MPI_INTEGER, intsize, ierr)
> 
>   disp = myrank * BUFSIZE * intsize
> 
>   !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
>   !     'native', mpi_info_null, ierr)
>   call mpi_file_write_at(thefile, disp, v1(1,1,1,1), BUFSIZE, MPI_REAL, &
>        mpi_status_ignore, ierr)
> 
>   call mpi_file_close(thefile, ierr)
> 
>   !  print the data read in...
> 
>   open (12, file='out000.dat-parallel-write-0')
> 
>   if (myrank.eq.0) then
>  write (12,'(i4,e18.8)') myrank, &
>   ((((v1(nn,ix,iy,iz),nn=1,nv+2),ix=1,nx),iy=1,ny),iz=1,nz)
>   endif
> 
>   close (12)
> 
>   call mpi_finalize(ierr)
> 
> 
> end program parallel_io
> 
> ###
> 
> and the reader...
> 
> reader###
>  program parallel_read_io
> 
>   use mpi
> 
>   implicit none
>   integer,parameter :: nx=1,ny=300,nz=256,nv=12
> 
>   integer ierr, i, myrank, comm_size, BUFSIZE, thefile, realsize
>   parameter (BUFSIZE=1075200)
> 
>   real,dimension(nv+2,nx,ny,nz) :: v1
> 
>   integer (kind=MPI_OFFSET_KIND) disp
> 
>   integer ix, iy, iz, nn
> 
>   call mpi_init(ierr)
>   call mpi_comm_size(mpi_comm_world, comm_size, ierr)
>   call mpi_comm_rank(mpi_comm_world, myrank,ierr)
> 
>   !  do i=0,BUFSIZE
>   ! buf(i) = myrank*BUFSIZE + i
>   !  end do
> 
>   call mpi_type_extent(mpi_integer, realsize, ierr)
>   call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_RDONLY, &
>        mpi_info_null, thefile, ierr)
>   call mpi_type_size(MPI_REAL, realsize, ierr)
> 
>   disp = myrank * BUFSIZE * realsize
>   print*, 'myrank: ', myrank, '  disp: ', disp, '  realsize: ', realsize
> 
>   !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
>   !     'native', mpi_info_null, ierr)
>   !  call mpi_file_read(thefile, buf, BUFSIZE, MPI_INTEGER, &
>   !     mpi_status_ignore, ierr)
> 
>   call mpi_file_read_at(thefile, disp, v1(1,1,1,1), BUFSIZE, MPI_REAL, &
>        mpi_status_ignore, ierr)
> 
>   call mpi_file_close(thefile, ierr)
> 
>   call mpi_barrier(mpi_comm_world,ierr)
> 
>   !  print the data read in...
> 
>   open (12, file='out000.dat-parallel-read-0')
> 
>   if (myrank.eq.0) then
>  write (12,'(i4,e18.8)') myrank, &
>   ((((v1(nn,ix,iy,iz),nn=1,nv+2),ix=1,nx),iy=1,ny),iz=1,nz)
>   endif
> 
>   close (12)
> 
>   call mpi_finalize(ierr)
> 
> 
> end program parallel_read_io
> ###
> 
> Here everything is working fine. However, when I integrate this into a
> larger program, I get totally different data written out and read in.
> 
> The setup is the same as in the experiment, but I need some more
> memory...
> 
> What might be the reason for such problems, and if I get an MPI error, how
> can I detect and handle it within a Fortran program? I have only found
> examples for handling MPI errors in C or C++. I would need an example for
> Fortran.
> 
> So any hints or ideas?
> 
> Best wishes
> 
> Alexander
> 


[OMPI users] problems with parallel IO

2011-08-25 Thread Alexander Beck-Ratzka
Hi Folks,

I have problems retrieving data that I have written out with MPI parallel 
IO. In tests everything works fine, but within a larger environment, the data 
read in differ from those written out. 

Here is the setup of my experiment:

# the writer #
program parallel_io

  use mpi

  implicit none

  integer,parameter :: nx=1,ny=300,nz=256,nv=12
  integer ierr, i, myrank, comm_size, BUFSIZE, thefile, intsize

  parameter (BUFSIZE=1075200)

  real,dimension(nv+2,nx,ny,nz) :: v1

  integer (kind=MPI_OFFSET_KIND) disp
  integer ix, iy, iz, nn, counter

  character(6) cname
  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world, comm_size, ierr)
  call mpi_comm_rank(mpi_comm_world, myrank,ierr)

  counter=0
  do ix = 1,nz
 do iy=1,ny
do iz=1,nx
   do nn=1,nv+2
  v1(nn,ix,iy,iz) = counter*(myrank+20)/200.
  counter=counter+1
   end do
end do
 end do
  end do

  call mpi_barrier(mpi_comm_world,ierr)

  call mpi_type_extent(mpi_real, intsize, ierr)
  call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_WRONLY + &
       MPI_MODE_CREATE, mpi_info_null, thefile, ierr)
  call mpi_type_size(MPI_INTEGER, intsize, ierr)

  disp = myrank * BUFSIZE * intsize

  !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
  !     'native', mpi_info_null, ierr)
  call mpi_file_write_at(thefile, disp, v1(1,1,1,1), BUFSIZE, MPI_REAL, &
       mpi_status_ignore, ierr)

  call mpi_file_close(thefile, ierr)

  !  print the data read in...

  open (12, file='out000.dat-parallel-write-0')

  if (myrank.eq.0) then
 write (12,'(i4,e18.8)') myrank, &
  ((((v1(nn,ix,iy,iz),nn=1,nv+2),ix=1,nx),iy=1,ny),iz=1,nz)
  endif

  close (12)

  call mpi_finalize(ierr)


end program parallel_io

###
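One detail worth double-checking when this writer is embedded in a larger run 
(an aside from me, not something confirmed in this thread): the right-hand 
side of "disp = myrank * BUFSIZE * intsize" is evaluated in default integer 
arithmetic before it is assigned to the MPI_OFFSET_KIND variable, so for 
large rank counts or buffer sizes it can overflow before the 8-byte offset 
ever sees it. A minimal sketch of an overflow-safe variant, taking the 
element size from MPI_REAL since that is the type actually written:

  integer (kind=MPI_OFFSET_KIND) disp
  integer realsize

  ! size in bytes of one MPI_REAL element
  call mpi_type_size(MPI_REAL, realsize, ierr)

  ! do the multiplication in MPI_OFFSET_KIND so the byte offset
  ! cannot wrap around in 32-bit integer arithmetic
  disp = int(myrank, MPI_OFFSET_KIND) * int(BUFSIZE, MPI_OFFSET_KIND) &
       * int(realsize, MPI_OFFSET_KIND)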

and the reader...

reader###
 program parallel_read_io

  use mpi

  implicit none
  integer,parameter :: nx=1,ny=300,nz=256,nv=12

  integer ierr, i, myrank, comm_size, BUFSIZE, thefile, realsize
  parameter (BUFSIZE=1075200)

  real,dimension(nv+2,nx,ny,nz) :: v1

  integer (kind=MPI_OFFSET_KIND) disp

  integer ix, iy, iz, nn

  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world, comm_size, ierr)
  call mpi_comm_rank(mpi_comm_world, myrank,ierr)

  !  do i=0,BUFSIZE
  ! buf(i) = myrank*BUFSIZE + i
  !  end do

  call mpi_type_extent(mpi_integer, realsize, ierr)
  call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_RDONLY, &
       mpi_info_null, thefile, ierr)
  call mpi_type_size(MPI_REAL, realsize, ierr)

  disp = myrank * BUFSIZE * realsize
  print*, 'myrank: ', myrank, '  disp: ', disp, '  realsize: ', realsize

  !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
  !     'native', mpi_info_null, ierr)
  !  call mpi_file_read(thefile, buf, BUFSIZE, MPI_INTEGER, &
  !     mpi_status_ignore, ierr)

  call mpi_file_read_at(thefile, disp, v1(1,1,1,1), BUFSIZE, MPI_REAL, &
       mpi_status_ignore, ierr)

  call mpi_file_close(thefile, ierr)

  call mpi_barrier(mpi_comm_world,ierr)

  !  print the data read in...

  open (12, file='out000.dat-parallel-read-0')

  if (myrank.eq.0) then
 write (12,'(i4,e18.8)') myrank, &
  ((((v1(nn,ix,iy,iz),nn=1,nv+2),ix=1,nx),iy=1,ny),iz=1,nz)
  endif

  close (12)

  call mpi_finalize(ierr)


end program parallel_read_io
###
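For reference, a minimal sketch of what the commented-out MPI_FILE_SET_VIEW 
variant could look like; I am assuming MPI_REAL as etype and filetype (the 
comments above use MPI_INTEGER) so that the view matches the buffer actually 
transferred. With a view in place, the displacement is given once, in bytes, 
to mpi_file_set_view, and the explicit offsets passed to mpi_file_read_at / 
mpi_file_write_at are then counted in etypes relative to that view, so each 
rank simply uses offset 0:

  integer (kind=MPI_OFFSET_KIND) disp
  integer realsize

  call mpi_type_size(MPI_REAL, realsize, ierr)
  disp = int(myrank, MPI_OFFSET_KIND) * int(BUFSIZE, MPI_OFFSET_KIND) &
       * int(realsize, MPI_OFFSET_KIND)

  ! byte displacement of this rank's block is set once in the view
  call mpi_file_set_view(thefile, disp, MPI_REAL, MPI_REAL, 'native', &
       mpi_info_null, ierr)

  ! offsets are now in units of MPI_REAL relative to the view,
  ! so this rank reads its own block starting at offset 0
  call mpi_file_read_at(thefile, 0_MPI_OFFSET_KIND, v1(1,1,1,1), BUFSIZE, &
       MPI_REAL, mpi_status_ignore, ierr)

The writer side would mirror this with mpi_file_write_at.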

Here everything is working fine. However, when I integrate this into a larger
program, I get totally different data written out and read in.

The setup is the same as in the experiment, but I need some more memory...

What might be the reason for such problems, and if I get an MPI error, how 
can I detect and handle it within a Fortran program? I have only found 
examples for handling MPI errors in C or C++. I would need an example for Fortran.
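
(For what it is worth, a minimal sketch of what Fortran-side error checking 
can look like. It relies only on standard MPI: file operations return error 
codes by default, because their default error handler is MPI_ERRORS_RETURN, 
and mpi_error_string translates a code into readable text; 
mpi_comm_set_errhandler is only needed if non-file calls should also return 
instead of aborting. The file name below is assumed not to exist, just to 
provoke an error.)

  program check_mpi_io_errors

    use mpi

    implicit none
    integer ierr, rc, myrank, thefile, reslen
    character(len=MPI_MAX_ERROR_STRING) :: errmsg

    call mpi_init(ierr)
    call mpi_comm_rank(mpi_comm_world, myrank, ierr)

    ! optional: make ordinary MPI calls return error codes as well
    call mpi_comm_set_errhandler(mpi_comm_world, mpi_errors_return, ierr)

    ! try to open a file that is assumed not to exist
    call mpi_file_open(mpi_comm_world, 'no-such-file', MPI_MODE_RDONLY, &
         mpi_info_null, thefile, rc)

    if (rc .ne. MPI_SUCCESS) then
       call mpi_error_string(rc, errmsg, reslen, ierr)
       print *, 'rank ', myrank, ': mpi_file_open failed: ', errmsg(1:reslen)
    else
       call mpi_file_close(thefile, ierr)
    end if

    call mpi_finalize(ierr)

  end program check_mpi_io_errors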

So any hints or ideas?

Best wishes

Alexander



Re: [OMPI users] problems with parallel MPI-IO...

2011-07-19 Thread Alexander Beck-Ratzka
On Tuesday, July 19, 2011 16:13:38 Jonathan Dursi wrote:
> On 19 Jul 9:02AM, Alexander Beck-Ratzka wrote:
> >integer ierr, i, myrank, BUFSIZE, thefile, intsize
> >parameter (BUFSIZE=100)
> >integer buf(BUFSIZE)
> >
> >do i=0,BUFSIZE
> >
> >   buf(i) = myrank*BUFSIZE + i
> >   print*, 'i =', i, 'myrank =', myrank, 'buf(i)=',buf(i)
> >
> >end do
> 
> [...]
> 
> > When I am reading the data in again and print them out, I always have:
> > 
> > buf(0)=0
> 
> If you compile your code with -check bounds and run, you'll get an error
> pointing out that buf(0) is an illegal access; in Fortran arrays start at
> 1.
> 

Thanks a lot, that solved the problem!

Best wishes

Alexander
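
(For later readers, a minimal sketch of the corrected loop, following the 
advice above; Fortran arrays are 1-based by default, so the loop must run 
from 1 to BUFSIZE:)

  program fixed_loop

    use mpi

    implicit none
    integer, parameter :: BUFSIZE = 100
    integer buf(BUFSIZE), i, myrank, ierr

    call mpi_init(ierr)
    call mpi_comm_rank(mpi_comm_world, myrank, ierr)

    ! valid indices are 1..BUFSIZE; buf(0) in the original code is out of bounds
    do i = 1, BUFSIZE
       buf(i) = myrank*BUFSIZE + i
    end do

    call mpi_finalize(ierr)

  end program fixed_loop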


[OMPI users] problems with parallel MPI-IO...

2011-07-19 Thread Alexander Beck-Ratzka
Hi Folks,

I am using the following f90 example program for writing a file in parallel 
with MPI:

[snip]
  program parallel_io

  use mpi

  implicit none

  integer ierr, i, myrank, BUFSIZE, thefile, intsize
  parameter (BUFSIZE=100)
  integer buf(BUFSIZE)
  !  integer (kind=MPI_OFFSET_KIND) disp
  integer*8 disp

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, myrank,ierr)

  do i=0,BUFSIZE
 buf(i) = myrank*BUFSIZE + i
 print*, 'i =', i, 'myrank =', myrank, 'buf(i)=',buf(i)
  end do

  call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_WRONLY + &
       MPI_MODE_CREATE, mpi_info_null, thefile, ierr)
  call mpi_type_size(MPI_INTEGER, intsize, ierr)

  disp = myrank * BUFSIZE * intsize

  !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
  !     'native', mpi_info_null, ierr)
  call mpi_file_write_at(thefile, disp, buf, BUFSIZE, MPI_INTEGER, &
       mpi_status_ignore, ierr)

  call mpi_file_close(thefile, ierr)
  call mpi_finalize(ierr)


end program parallel_io
[snip]


And the following program should read all the data back in and print it out:

[snip]
  program parallel_read_io

  use mpi

  implicit none

  integer ierr, i, myrank, BUFSIZE, thefile, intsize
  parameter (BUFSIZE=100)
  integer buf(BUFSIZE)
  !  integer (kind=MPI_OFFSET_KIND) disp
  integer*8 disp

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, myrank,ierr)

  !  do i=0,BUFSIZE
  ! buf(i) = myrank*BUFSIZE + i
  !  end do

  call mpi_file_open(mpi_comm_world, 'testfile', MPI_MODE_RDONLY, &
       mpi_info_null, thefile, ierr)
  call mpi_type_size(MPI_INTEGER, intsize, ierr)

  disp = myrank * BUFSIZE * intsize

  !  call mpi_file_set_view(thefile, disp, MPI_INTEGER, MPI_INTEGER, &
  !     'native', mpi_info_null, ierr)
  !  call mpi_file_read(thefile, buf, BUFSIZE, MPI_INTEGER, &
  !     mpi_status_ignore, ierr)
  call mpi_file_read_at(thefile, disp, buf, BUFSIZE, MPI_INTEGER, &
       mpi_status_ignore, ierr)

  call mpi_file_close(thefile, ierr)

  !  print the data read in...

  if (myrank.eq.1) then
 do i = 0,BUFSIZE
print*, 'i =', i, 'myrank =', myrank, 'buf(i)=', buf(i)
 end do
  endif

  call mpi_finalize(ierr)
[snip]

I have made several tests, also with do loops running only from 0 to (BUFSIZE-1), 
and with MPI_FILE_SET_VIEW and MPI_FILE_READ, etc.

When I read the data back in and print them out, I always get:

buf(0)=0

for every rank, so I assume that something is wrong with the offset. I am using 
Open MPI with the Intel f90 compiler.

What am I doing wrong?

Best wishes

Alexander