Hello all.
I'm using openmpi-1.3 in this example, on Linux with gcc-4.3.2, configured
with nothing special.
If I run the following simple C code under valgrind, single process, I
get some errors about reading and writing already-freed memory:
---
#include
#include
int delete_
On Wed, Feb 18, 2009 at 02:44:09PM -0800, Brian Austin wrote:
> I don't know whether this is the correct behavior, but it is the
> correct origin of my confusion.
> I suspected this would be attributed to the standard, but it is
> contrary to what I'm used to with C's fopen:
> I expected MPI_File_
On Wed, Feb 18, 2009 at 02:24:03PM -0700, Ralph Castain wrote:
> Hi Rob
>
> Guess I'll display my own ignorance here:
>
>>> MPI_File_open( MPI_COMM_WORLD, "foo.txt",
>>>MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>>MPI_INFO_NULL, &fh );
>
>
> Since the file was opened with MPI_MODE
On Wed, Feb 18, 2009 at 11:10:51AM -0800, Brian Austin wrote:
> >> Can you confirm - are you -really- using 1.1.2???
> >>
> >> You might consider updating to something more recent, like 1.3.0 or
> >>at least 1.2.8. It would be interesting to know if you see the same
> >> problem.
>
> > Also, if yo
On Fri, Oct 31, 2008 at 11:19:39AM -0400, Antonio Molins wrote:
> Hi again,
>
> The problem in a nutshell: it looks like, when I use
> MPI_Type_create_darray with an argument array_of_gsizes where
> array_of_gsizes[0]>array_of_gsizes[1], the datatype returned goes
> through MPI_Type_commit()
On Thu, Oct 23, 2008 at 12:41:45AM -0200, Davi Vercillo C. Garcia (ダヴィ) wrote:
> Hi,
>
> I'm trying to run a code using OpenMPI and I'm getting this error:
>
> ADIOI_GEN_DELETE (line 22): **io No such file or directory
>
> I don't know why this occurs, I only know this happens when I use more
>
On Sat, Aug 16, 2008 at 08:05:14AM -0400, Jeff Squyres wrote:
> On Aug 13, 2008, at 7:06 PM, Yvan Fournier wrote:
>
>> I seem to have encountered a bug in MPI-IO, in which
>> MPI_File_get_position_shared hangs when called by multiple processes
>> in
>> a communicator. It can be illustrated by the
On Wed, Jul 23, 2008 at 09:47:56AM -0400, Robert Kubrick wrote:
> HDF5 supports parallel I/O through MPI-I/O. I've never used it, but I
> think the API is easier than direct MPI-I/O, maybe even easier than raw
> read/writes given its support for hierarchical objects and metadata.
In addition to t
On Wed, Jul 23, 2008 at 01:28:53PM +0100, Neil Storer wrote:
> Unless you have a parallel filesystem, such as GPFS, which is
> well-defined and does support file-locking, I would suggest writing to
> different files, or doing I/O via a single MPI task, or via MPI-IO.
I concur that NFS for a parall
On Wed, Jul 23, 2008 at 02:24:03PM +0200, Gabriele Fatigati wrote:
> > You could always effect your own parallel IO (e.g., use MPI sends and
> > receives to coordinate parallel reads and writes), but why? It's already
> > done in the MPI-IO implementation.
>
> Just a moment: you're saying that i can
On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote:
> > Oh, I see you want to use ordered i/o in your application. PVFS
> > doesn't support that mode. However, since you know how much data each
> > process wants to write, a combination of MPI_Scan (to compute each
> > process
On Thu, May 29, 2008 at 04:24:18PM -0300, Davi Vercillo C. Garcia wrote:
> Hi,
>
> I'm trying to run my program in my environment and some problems are
> happening. My environment is based on PVFS2 over NFS (PVFS is mounted
> over NFS partition), OpenMPI and Ubuntu. My program uses MPI-IO and
> BZ
On Mon, Jan 28, 2008 at 03:26:14PM -0800, R C wrote:
> Hi,
> I compiled a molecular dynamics program DLPOLY3.09 on an AMD64 cluster
> running
> openmpi 1.2.4 with Portland Group compilers. The program seems to run alright,
> however, each processor outputs:
>
> ADIOI_GEN_DELETE (line 22): **io N
On Tue, Jan 22, 2008 at 11:25:25AM -0500, Brock Palen wrote:
> Has anyone had trouble using flash with openmpi? We get segfaults
> when flash tries to write checkpoints.
Segfaults are good if you also get core files. Do the backtraces
from those core files look at all interesting?
==rob
On Fri, Jan 18, 2008 at 07:44:12PM -0500, Jeff Squyres wrote:
> FWIW, you might want to ask the ROMIO maintainers if this is a known
> problem. I unfortunately have no idea. :-\
Sorry, we're not much more help either... I know hdf5+pvfs+openMPI works.
What if you run the test programs in the
On Fri, Nov 02, 2007 at 12:18:54PM +0100, Oleg Morajko wrote:
> Is there any standard way of attaching/retrieving attributes to MPI_Request
> object?
>
> E.g., typically there is dynamic user data that is created when starting the
> asynchronous operation and freed when it completes. It would be convenient
On Fri, Sep 14, 2007 at 02:31:51PM -0400, Jeff Squyres wrote:
> Ok. Maybe we'll just make a hard-coded string somewhere "ROMIO from
> MPICH2 vABC, on AA/BB/" or somesuch. That'll at least give some
> indication of what version you've got.
That sort-of reminds me: ROMIO (well, all of MPI
On Fri, Sep 14, 2007 at 02:16:46PM -0400, Jeff Squyres wrote:
> Rob -- is there a public constant/symbol somewhere where we can
> access some form of ROMIO's version number? If so, we can also make
> that query-able via ompi_info.
There really isn't. We used to have a VERSION variable in
con
On Fri, Sep 07, 2007 at 10:18:55AM -0400, Brock Palen wrote:
> Is there a way to find out which ADIO options romio was built with?
Not easily. You can use 'nm' and look at the symbols :>
> Also does OpenMPI's romio come with pvfs2 support included? What
> about Lustre or GPFS?
OpenMPI has ship
On Sun, Aug 26, 2007 at 06:44:18PM +0200, Bernd Schubert wrote:
> I'm presently trying to add lustre support to open-mpi's romio using this
> patch http://ft.ornl.gov/projects/io/src/adio-lustre-mpich2-v02.patch.
>
> It basically applies, only a few C files have been renamed in open-mpi, but
> the
On Tue, Jul 10, 2007 at 04:36:01PM +, jody wrote:
> I think there is still some problem.
> I create different datatypes by resizing MPI_SHORT with
> different negative lower bounds (depending on the rank)
> and the same extent (only depending on the number of processes).
>
> However, I get an
On Tue, Jul 10, 2007 at 04:36:01PM +, jody wrote:
> Error: Unsupported datatype passed to ADIOI_Count_contiguous_blocks
> [aim-nano_02:9] MPI_ABORT invoked on rank 0 in communicator
> MPI_COMM_WORLD with errorcode 1
Hi Jody:
OpenMPI uses an old version of ROMIO. You get this error becaus
On Mon, Jul 02, 2007 at 12:49:27PM -0500, Adams, Samuel D Contr AFRL/HEDR wrote:
> Anyway, so if anyone can tell me how I should configure my NFS server,
> or OpenMPI to make ROMIO work properly, I would appreciate it.
Well, as Jeff said, the only safe way to run NFS servers for ROMIO is
by dis
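Concretely, "disabling caching" usually means mounting with attribute caching off on every client; a sketch of an /etc/fstab entry (exact option names vary between NFS implementations, and the server/export paths here are placeholders):

```shell
# NFS mount intended for ROMIO use:
# 'noac' disables attribute caching so clients see each other's writes;
# 'sync' forces synchronous writes. Both hurt performance badly, which
# is one reason NFS is a poor substrate for parallel I/O.
server:/export  /mnt/shared  nfs  noac,sync  0  0
```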
On Tue, Jan 30, 2007 at 04:55:09PM -0500, Ivan de Jesus Deras Tabora wrote:
> Then I find all the references to the MPI_Type_create_subarray and
> create a little program just to test that part of the code, the code I
> created is:
...
> After running this little program using mpirun, it raises the
On Tue, Jan 09, 2007 at 02:53:24PM -0700, Tom Lund wrote:
> Rob,
>Thank you for your informative reply. I had no luck finding the
> external32 data representation in any of several mpi implementations and
> thus I do need to devise an alternative strategy. Do you know of a good
> reference
On Mon, Jan 08, 2007 at 02:32:14PM -0700, Tom Lund wrote:
> Rainer,
> Thank you for taking time to reply to my query. Do I understand
> correctly that external32 data representation for i/o is not
> implemented? I am puzzled since the MPI-2 standard clearly indicates
> the existence of ext
On Mon, Aug 14, 2006 at 10:57:34AM -0400, Brock Palen wrote:
> We will be evaluating pvfs2 (www.pvfs.org) in the future. Are there
> any special considerations to take to get romio support with openmpi
> with pvfs2 ?
Hi
Since I wrote the ad_pvfs2 driver for ROMIO, and spend a lot of time
on P
On Tue, Jul 11, 2006 at 12:14:51PM -0400, Abhishek Agarwal wrote:
> Hello,
>
> Is there a way of providing a specific port number in MPI_Info when using a
> MPI_Open_port command, so that clients know which port number to connect to?
The other replies have covered this pretty well but if you are
dea
On Tue, Mar 14, 2006 at 12:37:52PM -0600, Edgar Gabriel wrote:
> I think I know what goes wrong. Since they are in different 'universes',
> they will have exactly the same 'Open MPI name', and therefore the
> algorithm in intercomm_merge can not determine which process should be
> first and whic
On Tue, May 02, 2006 at 10:32:56PM +0200, Dries Kimpe wrote:
> It looks as if the problem is not really due to Open MPI, but to the
> included ROM-IO:
>
> All tests fail with the same error message:
>
> For example, test/test_double/test_write shows:
>
> Testing write ... Error: Unsupported data
On Tue, Mar 14, 2006 at 12:00:57PM -0600, Edgar Gabriel wrote:
> you are touching here a difficult area in Open MPI:
I don't doubt it. I haven't found an MPI implementation yet that does
this without any quirks or oddities :>
> - name publishing across independent jobs unfortunately does not work
Hello
In playing around with process management routines, I found another
issue. This one might very well be operator error, or something
implementation specific.
I've got two processes (a and b), linked with openmpi, but started
independently (no mpiexec).
- A starts up and calls MPI_Init
- A
Hi
I've got a bit of an odd bug here. I've been playing around with MPI
process management routines and I noticed the following behavior with
openmpi-1.0.1:
Two processes (a and b), linked with ompi, but started independently
(no mpiexec, just started the programs directly).
- a and b: call MPI_