Hi
When is 1.6.1 expected to go public?
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
All problems gone, thanks for the input and assistance.
cheers,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
Anyway, although it becomes obvious after tracking it down, I think it can be a
normal pitfall for the unaware...
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 2:19 PM, Ricardo Reis wrote:
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64-bit integer for your compiler?
I'm still interested in the answer to this question
On Tue, 15 May 2012, Jeff Squyres wrote:
On May 15, 2012, at 10:53 AM, Ricardo Reis wrote:
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
Is it not a 64-bit integer for your compiler?
There *is* a bug in OMPI
My problem is rather that
INTEGER(kind=MPI_OFFSET_KIND) :: offset
MPI_OFFSET_KIND is insufficient to represent my offset...
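A minimal sketch of the usual pitfall (the dimensions here are illustrative, not
from this thread): even when MPI_OFFSET_KIND is 64-bit, an offset computed from
default 32-bit integers typically wraps before the assignment ever happens:

  program check_offset
    use mpi
    implicit none
    integer :: ierr, nx, ny, nz
    integer(kind=MPI_OFFSET_KIND) :: offset
    call MPI_Init(ierr)
    nx = 1024; ny = 1024; nz = 384
    ! WRONG: nx*ny*nz*8 is evaluated in default 32-bit arithmetic
    ! and typically wraps negative before being assigned:
    ! offset = nx * ny * nz * 8
    ! RIGHT: force 64-bit arithmetic from the first operand:
    offset = int(nx, MPI_OFFSET_KIND) * ny * nz * 8
    print *, 'MPI_OFFSET_KIND =', MPI_OFFSET_KIND, ' offset =', offset
    call MPI_Finalize(ierr)
  end program check_offset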
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
-2045256448
offset is of type MPI_OFFSET_KIND, which seems insufficient to hold the
correct size for the offset.
So... am I condemned to write my own MPI datatype so I can write the
files? Ideas...?
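One possible answer, sketched under assumptions (the name write_big and the
chunk size are illustrative): since the count argument of the Fortran MPI write
calls is a default INTEGER capped at 2**31-1, wrapping a large chunk in a
contiguous derived datatype keeps the count small:

  subroutine write_big(fh, buf, nelem)
    use mpi
    implicit none
    integer, intent(in) :: fh
    double precision, intent(in) :: buf(*)
    integer(kind=MPI_OFFSET_KIND), intent(in) :: nelem
    integer, parameter :: chunk = 2**20   ! elements per derived type
    integer :: bigtype, nchunks, ierr
    ! pack "chunk" doubles into one datatype so the 32-bit count stays small
    call MPI_Type_contiguous(chunk, MPI_DOUBLE_PRECISION, bigtype, ierr)
    call MPI_Type_commit(bigtype, ierr)
    nchunks = int(nelem / chunk)          ! assumes chunk divides nelem
    call MPI_File_write_all(fh, buf, nchunks, bigtype, &
                            MPI_STATUS_IGNORE, ierr)
    call MPI_Type_free(bigtype, ierr)
  end subroutine write_big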
best regards,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
what file system is this on?
Gluster connected by InfiniBand. All disks are in the same machine; everything
speaks over InfiniBand.
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
some feedback.
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
http://www.flickr.com/ph
…write_at_all call.
Any ideas of what to do or where to look are welcome.
best,
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
…with
HOSTFILE=hostfile
awk '{print $1" cpu="$2}' ${PE_HOSTFILE} > ${HOSTFILE}
mpirun -machinefile ${HOSTFILE} -np ${NSLOTS} ${EXEC}
best (sorry if the answer ran long)
Ricardo Reis
'Non Serviam'
PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering
Computational Fluid Dynamics, High Performance Computing, Turbulence
…ess must be lower than 4 GB
There was a discussion a short time ago about this...
best,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
if you do `mpirun --version` in the said script (or node)?
(What I mean is: how are you sure that it is not LAM's mpirun that is
being called on that particular node?)
best,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
d to catch their attention right now,
but eventually somebody will clarify this.
Oh, just a small grain of sand... it doesn't seem worth stopping the whole
machine for it...
:)
many thanks all
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
…is exactly 2^31-1).
Thanks for the explanation. Then this should be updated in the spec, no...?
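For reference, the arithmetic behind that number, as a one-line Fortran check
(standard Fortran, not from the thread; it assumes the usual 32-bit default
INTEGER):

  print *, huge(1)   ! prints 2147483647 = 2**31 - 1, just under 2.1 GB in bytes

hence a "wall" that sits between 2.0 and 2.1 GB.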
cheers!
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
On Tue, 16 Nov 2010, Gus Correa wrote:
Ricardo Reis wrote:
and sorry to be such a nuisance...
but is there any reason for an MPI-IO "wall" between 2.0 and 2.1 GB?
Greetings, Ricardo Reis!
Is this "wall" perhaps the 2 GB Linux file size limit on 32-bit systems?
No. This…
and sorry to be such a nuisance...
but is there any reason for an MPI-IO "wall" between 2.0 and 2.1 GB?
(1 MPI process)
best,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
?
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
Keep them Flying! Ajude a/help Aero Fénix!
http://www.aeronauta.com/aero.fenix
write!
The code is here:
http://aero.ist.utl.pt/~rreis/test_io.f90
Can some kind soul just look at it and give some input?
Or simply point me to where the meaning of Fortran error no. 3 is
explained?
best and many thanks for your time,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
sh is just a symlink to bash...
2028.0 $ ls -l /bin/sh
lrwxrwxrwx 1 root root 4 Sep 7 2009 /bin/sh -> bash
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
…should I watch out for to make this work? I've
already taken care of making the send and receive buffers THREAD_PRIVATE.
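A hedged aside, not from the original mail: the first thing worth checking in a
threaded MPI code is the thread support level the library actually grants, e.g.

  program thread_check
    use mpi
    implicit none
    integer :: ierr, provided
    ! request full thread support; the library reports what it grants
    call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
    if (provided < MPI_THREAD_MULTIPLE) then
       print *, 'MPI_THREAD_MULTIPLE unavailable, got level', provided
    end if
    call MPI_Finalize(ierr)
  end program thread_check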
cheers and thanks for your input,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
be an environment variable set, for instance, in
the init file of your shell.
Please read http://www.absoft.com/Support/FAQ/lixfaq_installation.htm
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
On Mon, 5 Apr 2010, Rob Latham wrote:
On Tue, Mar 30, 2010 at 11:51:39PM +0100, Ricardo Reis wrote:
If using the master/slave IO model, would it be better to cycle
through all the processes, each one writing its part of the
array into the file? This file would be open in "stream"…
On Tue, 30 Mar 2010, Gus Correa wrote:
Greetings, Ricardo Reis!
How is Rádio Zero doing?
:) busy, busy, busy. We are preparing to celebrate Yuri's Night, April the
12th!
Doesn't this serialize the I/O operation across the processors,
whereas MPI_Gather followed by rank 0 I/O may perhaps move…
   …write_to_file
   closefile
endif
call MPI_Barrier(world,ierr)
enddo
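A fuller sketch of the loop quoted above (the unit number, file name and the
commented write step are illustrative placeholders; "world" is taken to be
MPI_COMM_WORLD):

  subroutine serialized_write(nprocs, my_rank)
    use mpi
    implicit none
    integer, intent(in) :: nprocs, my_rank
    integer :: i, ierr
    do i = 0, nprocs - 1
       if (my_rank == i) then
          open(unit=10, file='out.dat', form='unformatted', &
               access='stream', position='append')
          ! ... write this rank's slice of the array to unit 10 ...
          close(10)
       end if
       call MPI_Barrier(MPI_COMM_WORLD, ierr)   ! enforce rank order
    end do
  end subroutine serialized_write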
cheers,
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
http://www.lasef.ist.utl.pt
Cultural Instigator @ Rádio Zero
http://www.radiozero.pt
make a copy yourself and allow the original buffer to be freed.
Thanks. So in an asynchronous write, the old buffer would only be
available after the I/O has ended. So maybe I really need to think about
setting a process aside just for I/O...
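A hedged illustration of that rule (fh, buf and n stand for an already-open MPI
file handle, the data buffer and its element count; none of these names are from
the original mail):

  subroutine async_write(fh, buf, n)
    use mpi
    implicit none
    integer, intent(in) :: fh, n
    double precision, intent(inout) :: buf(*)
    integer :: request, ierr
    ! nonblocking write: the call returns at once, I/O runs in the background
    call MPI_File_iwrite(fh, buf, n, MPI_DOUBLE_PRECISION, request, ierr)
    ! ... overlap computation here, but do NOT touch buf ...
    call MPI_Wait(request, MPI_STATUS_IGNORE, ierr)
    ! only now may buf safely be reused or freed
  end subroutine async_write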
Ricardo Reis
'Non Serviam'
PhD candidate @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
…sh it.
Yes, I know. But this should work if the Barrier were working as
supposed. I've seen it working previously and I'm seeing it work in
other MPI implementations (MVAPICH).
So, what's the catch?
A big hug to a connoisseur of Pessoa and an inhabitant of the country of Walt
Whitman,
Ricardo Reis
…idest 0
ISTEP 2 IDX 3 my_rank 3 idest 1
*
ISTEP 3 IDX 0 my_rank 0 idest 3
ISTEP 3 IDX 1 my_rank 1 idest 2
ISTEP 3 IDX 2 my_rank 2 idest 1
ISTEP 3 IDX 3 my_rank 3 idest 0
-- expected output - cut here --
Ricardo Reis
'Non Serviam'
On Wed, 25 Jul 2007, Jeff Squyres wrote:
I'm still awaiting access to the Intel 10 compilers to try to
reproduce this problem myself. Sorry for the delay...
What do you need for this to happen? The Intel packages? I can give you
access to a machine if you want to try it out.
Ricardo
and it worked. ompi gives what is asked, no problem...
greets,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
<http://www.lasef.ist.utl.pt>
&
Cultural Instigator @ Rádio Zero
http://radio.ist.utl.pt
Intel Corporation. All rights reserved.
Do the Intel compilers come with any error checking tools to give
more diagnostics?
Yes, they come with their own debugger. I'll try to use it and send more
info when done.
thanks!,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
On Wed, 11 Jul 2007, Jeff Squyres wrote:
LAM uses C++ for the laminfo command and its wrapper compilers (mpicc
and friends). Did you use those successfully?
Yes, no problem.
Attached: output from laminfo -all and strace laminfo.
greets,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
…As I said previously, I can compile and use
LAM MPI with my Intel compiler installation. I believe that LAM uses C++
inside, no?
greets,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
<http://www.lasef.ist.utl.pt>
…non-trivial C++ apps to compile on this machine...
Do you want to suggest some? (hello_world works...)
greets,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
<http://www.lasef.ist.utl.pt>
&
Cultural Instigator @ Rádio Zero
Symbols already loaded for /lib/i686/cmov/libc.so.6
Symbols already loaded for /lib/i686/cmov/libdl.so.2
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libimf.so
Symbols already loaded for /opt/intel/cc/10.0.023/lib/libintlc.so.5
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
…-np) gives a segmentation fault.
ompi_info gives output and then segfaults; ompi_info --all segfaults
immediately.
Added ompi_info log (without --all)
Added strace ompi_info --all log
Added strace mpirun log
greets,
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence
bit, no flags given to the compilers.
4999.0 $ uname -a
Linux umdrum 2.6.21.5-rt17 #2 SMP PREEMPT RT Mon Jun 25 23:02:11 WEST 2007
i686 GNU/Linux
5003.0 $ ldd --version
ldd (GNU libc) 2.5
help?
Ricardo Reis
'Non Serviam'
PhD student @ Lasef
Computational Fluid Dynamics, High Performance Computing, Turbulence