Re: [OMPI devel] Memory leak

2017-07-25 Thread Gilles Gouaillardet
Samuel,

fwiw, the issue is fixed in the upcoming Open MPI 3.0

Cheers,

Gilles

On Wed, Jul 26, 2017 at 3:43 AM, Samuel Poncé  wrote:
> Dear OpenMPI developers,
>
> I would like to report a bug in openmpi/2.0.2.
>
> This bug may already have been fixed in a more recent version; apologies
> if that is the case.
> I verified that the same code works (no leak) with openmpi/1.10.
>
> If you open and close a file many times with openmpi 2.0.2, the memory
> usage grows linearly with the number of times the file is opened.
>
> Here is a small Fortran file to reproduce the issue:
>
> ---
> PROGRAM test
>
> INCLUDE 'mpif.h'
>
> INTEGER :: iunepmatwp2, ierr, ii
>
> CALL MPI_INIT(ierr)
>
> DO ii=1, 100000   ! iteration count assumed; any large value exposes the leak
>   print*,'ii ',ii
>   CALL MPI_FILE_OPEN(MPI_COMM_WORLD, 'XXX.FILE', MPI_MODE_RDONLY, &
>                      MPI_INFO_NULL, iunepmatwp2, ierr)
>
>   CALL MPI_FILE_CLOSE(iunepmatwp2, ierr)
> ENDDO
>
> CALL MPI_FINALIZE(ierr)
>
> END PROGRAM
>
> --
>
> where 'XXX.FILE' is a large binary file (100 MB or so).
>
> mpif90 -O2 -assume byterecl -g -traceback -nomodule -c  test.f90
> mpif90 -static-intel  -o test.x test.o
>
> So when compiled against openmpi 2.0.2 the memory usage increases, whereas
> with 1.10 it stays constant as the iterations proceed.
>
>
> Best Regards,
> Samuel
>
___
devel mailing list
devel@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/devel

[OMPI devel] Memory leak

2017-07-25 Thread Samuel Poncé
Dear OpenMPI developers,

I would like to report a bug in openmpi/2.0.2.

This bug may already have been fixed in a more recent version; apologies if
that is the case.
I verified that the same code works (no leak) with openmpi/1.10.

If you open and close a file many times with openmpi 2.0.2, the memory
usage grows linearly with the number of times the file is opened.

Here is a small Fortran file to reproduce the issue:

---
PROGRAM test

INCLUDE 'mpif.h'

INTEGER :: iunepmatwp2, ierr, ii

CALL MPI_INIT(ierr)

DO ii=1, 100000   ! iteration count assumed; any large value exposes the leak
  print*,'ii ',ii
  CALL MPI_FILE_OPEN(MPI_COMM_WORLD, 'XXX.FILE', MPI_MODE_RDONLY, &
                     MPI_INFO_NULL, iunepmatwp2, ierr)

  CALL MPI_FILE_CLOSE(iunepmatwp2, ierr)
ENDDO

CALL MPI_FINALIZE(ierr)

END PROGRAM

--

where 'XXX.FILE' is a large binary file (100 MB or so).

mpif90 -O2 -assume byterecl -g -traceback -nomodule -c  test.f90
mpif90 -static-intel  -o test.x test.o

So when compiled against openmpi 2.0.2 the memory usage increases, whereas
with 1.10 it stays constant as the iterations proceed.
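
For what it's worth, a rough C equivalent of the same loop (a sketch; the
loop count and reporting interval are arbitrary) that prints the peak
resident set size as it runs, so the growth is directly visible on Linux:

---
#include <mpi.h>
#include <stdio.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    struct rusage ru;
    int i;

    MPI_Init(&argc, &argv);
    for (i = 0; i < 100000; i++) {
        /* same pattern as the Fortran program: open and close repeatedly */
        MPI_File_open(MPI_COMM_WORLD, "XXX.FILE", MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_close(&fh);
        if (i % 10000 == 0) {
            getrusage(RUSAGE_SELF, &ru);
            /* on Linux, ru_maxrss is the peak RSS in kilobytes */
            printf("iter %d  maxrss %ld kB\n", i, ru.ru_maxrss);
        }
    }
    MPI_Finalize();
    return 0;
}
---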


Best Regards,
Samuel
___
devel mailing list
devel@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/devel

Re: [OMPI devel] memory leak caused by possibly wrong initialization in ompi_ddt_duplicate()

2006-11-13 Thread George Bosilca

Andreas,

Thanks for the patch and the example. You're right, we should avoid
copying the ref count of the old datatype. The new datatype needs a
ref count set to one when it comes out of the dup function. I will
commit the patch to the trunk.


  Thanks,
george.

On Nov 12, 2006, at 9:00 PM, Andreas Schäfer wrote:


Hi,

one of our projects recently exposed severe memory leakage when using
ROMIO to write a complex derived datatype (a struct made of other
structs) to a file. From our code we distilled the attached short
program to reproduce the leak.

After some Valgrind sessions, it appears as if the memcpy in
ompi_ddt_duplicate() is a bit overhasty, as it does copy the old
type's reference counter, too.

I don't know if this is the right way to fix it, but if I apply the
patch below to ompi, the leak is fixed.

Cheers!
-Andreas


diff -ru openmpi-1.1.1/ompi/datatype/dt_create_dup.c openmpi-1.1.1-fixed/ompi/datatype/dt_create_dup.c
--- openmpi-1.1.1/ompi/datatype/dt_create_dup.c 2006-06-14 21:56:41.0 +0200
+++ openmpi-1.1.1-fixed/ompi/datatype/dt_create_dup.c   2006-11-13 00:35:03.0 +0100
@@ -33,6 +33,7 @@
     int32_t old_index = pdt->d_f_to_c_index;
 
     memcpy( pdt, oldType, sizeof(ompi_datatype_t) );
+    ((opal_object_t *)pdt)->obj_reference_count = 1;
     pdt->desc.desc = temp;
     pdt->flags &= (~DT_FLAG_PREDEFINED);
     /* ompi_ddt_create() creates a new f_to_c index that was saved
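
To make the failure mode explicit, here is a self-contained sketch
(illustrative only, not the actual Open MPI code; obj_t and the helper
names are invented) of duplicating a reference-counted object with a raw
memcpy:

#include <stdlib.h>
#include <string.h>

typedef struct {
    int refcount;   /* stands in for opal_object_t's obj_reference_count */
    int payload;
} obj_t;

static void obj_release(obj_t *o)
{
    if (--o->refcount == 0)
        free(o);    /* freed only when the count reaches zero */
}

static obj_t *obj_dup(const obj_t *old)
{
    obj_t *copy = malloc(sizeof(*copy));
    memcpy(copy, old, sizeof(*copy));  /* copies old->refcount too (the bug) */
    copy->refcount = 1;                /* the patch's equivalent: start at one */
    return copy;
}

Without the reset, every duplicate starts life with the donor's count, so a
matching release never brings it to zero and the object is never freed.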



___
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel