How is the file read? From stdin? Or do they open it directly?

If the latter, then it sure sounds like a CGNS issue to me - it looks like they 
are slurping in the entire file and then forgetting to free the memory when they 
close it. I can’t think of any solution short of getting them to look at the 
problem.
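
For what it's worth, the test being discussed amounts to just an open/close pair 
through the Mid-Level Library. A minimal sketch, assuming cgnslib.h (older 2.x 
headers spell the read mode MODE_READ rather than CG_MODE_READ):

    #include <stdio.h>
    #include "cgnslib.h"

    int main(int argc, char **argv)
    {
        int fn;

        if (argc < 2) {
            fprintf(stderr, "usage: %s file.cgns\n", argv[0]);
            return 1;
        }
        /* cg_open/cg_close return non-zero on failure. */
        if (cg_open(argv[1], CG_MODE_READ, &fn)) {
            fprintf(stderr, "cg_open: %s\n", cg_error_message());
            return 1;
        }
        /* ... read descriptors/arrays here ... */
        if (cg_close(fn)) {
            fprintf(stderr, "cg_close: %s\n", cg_error_message());
            return 1;
        }
        return 0;
    }

If the reported memory stays high after cg_close() even in a program this small, 
the allocations are being held by the library (or simply not returned to the OS 
by the allocator), not by the application.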




> On Jun 17, 2015, at 5:28 AM, Manoj Vaghela <manoj.vagh...@gmail.com> wrote:
> 
> Hi,
> 
> While checking for memory issues related to CGNS 2.5.5 on a test machine, 
> the following output is displayed when just opening and closing a CGNS file.
> 
> Can anybody please help me with this?
> This machine runs CentOS 7 (minimal installation) with GCC 4.8.3. The CGNS 
> library is statically compiled locally with the default configuration options.
> 
> ===================
> before open:files 0/0: memory 0/0
> after  open:files 1/1: memory 969250618/969250618
> no CGNS error reported
> no CGNS error reported
> before close:files 1/1: memory 969250618/969250618
> after  close:files 0/0: memory 969250618/969250618
> no CGNS error reported
> ===================
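
One way to take before/after snapshots like the ones above is to ask the 
allocator how much heap is in use around the cg_open()/cg_close() calls. A small 
sketch assuming glibc (mallinfo); this is not necessarily how the numbers above 
were produced:

    #include <malloc.h>   /* mallinfo() is glibc-specific */
    #include <stdio.h>

    /* Print the number of heap bytes currently allocated by this process. */
    static void report(const char *tag)
    {
        struct mallinfo mi = mallinfo();
        printf("%s: heap in use = %d bytes\n", tag, mi.uordblks);
    }

Calling report() before and after cg_open() and cg_close() shows whether the 
bytes allocated at open time are actually released at close time.
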
> 
> I am still not able to conclude whether this is an OS, machine, or CGNS 
> library problem. Should I compile with the shared (dynamic) library option 
> instead?
> 
> If I use MPI and open the CGNS file only for reading, should it occupy that 
> much memory on each process? Ideally the memory should be freed after 
> cg_close(), but it is not.
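
If the per-process cost cannot be avoided inside the library, one possible 
workaround is to let only rank 0 touch the file and broadcast the (small) text 
to the other ranks. A rough sketch; the string handed to MPI_Bcast below is a 
stand-in for whatever cg_open()/cg_descriptor_read()/cg_close() would actually 
return:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, len = 0;
        char *text = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Stand-in for the real read (cg_open + cg_descriptor_read + cg_close). */
            text = strdup("descriptor text read from the CGNS file");
            len = (int)strlen(text) + 1;
        }
        /* Ship the length first, then the text itself, to every rank. */
        MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank != 0)
            text = malloc(len);
        MPI_Bcast(text, len, MPI_CHAR, 0, MPI_COMM_WORLD);

        printf("rank %d got: %s\n", rank, text);
        free(text);
        MPI_Finalize();
        return 0;
    }

With this pattern only one process pays the cg_open() memory cost, instead of 
all 16.
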
> 
> Also, in most cases the CGNS file will be larger than 2GB. Do I need to 
> compile with any specific flags? 
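
Whether the CGNS build itself needs an extra option for files over 2GB is 
something the CGNS build documentation has to answer; as a general Linux point, 
32-bit builds need large-file support (e.g. -D_FILE_OFFSET_BITS=64), while on 
x86_64 file offsets are already 64-bit. A quick check of what the toolchain 
gives you:

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* 8 => 64-bit file offsets (files > 2GB addressable); 4 => 2GB limit. */
        printf("sizeof(off_t) = %zu\n", sizeof(off_t));
        return 0;
    }
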
> 
> Please help.
> Thanks.
> 
> --
> regards,
> Manoj
> 
> 
> On Tue, Jun 2, 2015 at 2:19 AM, Manoj Vaghela <manoj.vagh...@gmail.com> wrote:
> Dear Nathan,
> 
> After some initial debugging, I found that the problem is with the CGNS 
> (v 2.5) file that each processor reads. The file, which has 3 levels of 
> user-defined descriptor/array data, is read by each processor only to 
> extract some text, yet this takes 1% of memory per process (about 5GB in 
> total). I have no idea why this is happening. I have asked the CGNS forum 
> for help with this memory-related issue.
> 
> I am checking the memory of each process using the "top" command. Each 
> process shows its % memory usage, so in my case with 16 processes it is 
> 16 * 1% = 16% of the total memory (32GB), i.e. about 5GB, which is very 
> large for just extracting text data from the file.
> 
> Any comments are welcome.
> Thanks.
> 
> --
> regards,
> Manoj B Vaghela
> 
> On Mon, Jun 1, 2015 at 3:14 PM, Nathan Hjelm <hje...@lanl.gov> wrote:
> 
> Just to be sure: how are you measuring the memory usage? If you are
> using /proc/meminfo, are you subtracting out the Cached memory usage?
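
For reference, the usual way to discount the page cache is 
used = MemTotal - MemFree - Buffers - Cached. A small sketch that computes this 
from /proc/meminfo (assumes the standard kB-valued fields):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256], key[64];
        long total = 0, freemem = 0, buffers = 0, cached = 0, v;

        if (!f) { perror("/proc/meminfo"); return 1; }
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "%63[^:]: %ld", key, &v) != 2)
                continue;
            if      (!strcmp(key, "MemTotal")) total   = v;
            else if (!strcmp(key, "MemFree"))  freemem = v;
            else if (!strcmp(key, "Buffers"))  buffers = v;
            else if (!strcmp(key, "Cached"))   cached  = v;
        }
        fclose(f);

        /* Memory actually used by processes, excluding reclaimable page cache. */
        printf("used: %ld kB\n", total - freemem - buffers - cached);
        return 0;
    }
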
> 
> -Nathan
> 
> On Mon, Jun 01, 2015 at 04:54:45AM -0400, Manoj Vaghela wrote:
> >    Hi OpenMPI users,
> >
> >    I have been using OpenMPI for quite a few years now. Recently I ran into
> >    some memory-related issues that are quite bothering me.
> >
> >    I have OpenMPI 1.8.3 installed on several machines. All machines are
> >    SMPs running Linux x86_64. Machines one and one-1 run Red Hat Enterprise
> >    Linux Server release 5.4 and the others run CentOS 7.
> >
> >    I am using 16 cores on each machine. For a finite-volume problem of
> >    3 million cells, memory consumption should be nearly 3GB on each machine
> >    when using 16 cores. The following are some of the memory consumption
> >    values that I observed.
> >
> >    machine   mem used (GB)   total memory (GB)   per-core memory usage (%)
> >    ========================================================================
> >    one         2.114413568       66.075424                0.2
> >    one-1       2.368967808       24.676748                0.6
> >    two         7.362867456       32.869944                1.4
> >    three       7.333295872       16.368964                2.8
> >    four        7.356667136       32.842264                1.4
> >    five        7.350758912       32.815888                1.4
> >
> >    I am wondering why machines two to five use so much more memory than
> >    machines one and one-1 with the same setup files for this problem.
> >
> >    I have compiled OpenMPI with its default options on all machines.
> >
> >    It would help if somebody has an idea about this problem. Is there
> >    anything that needs to be set while building OpenMPI, or is it an OS
> >    problem?
> >    Thanks.
> >
> >    Manoj
> >
> 