Re: Memleaks or not?

2011-09-03 Thread Reinhold Kainhofer
On Wednesday, 24 August 2011, 20:30:20, Reinhold Kainhofer wrote:
> Running lilypond on a lot of files in one run, I observe that lilypond's
> memory usage slowly goes up with time, i.e. it seems that lilypond does not
> properly free all memory used for one score, before it starts with the next
> one.

Following up on this, I have discovered the (check-memory) Scheme function in
lily.scm and used it to display some Scheme memory stats when running on
multiple files:

After the first file we have:
Processing `accidental-ancient.ly'
VMDATA: 91360
((gc-time-taken . 15)
 (cells-allocated . 4117699)
 (total-cells-allocated . 5066751)
 (cell-heap-size . 10737028)
 (bytes-malloced . 3726859)
 (gc-malloc-threshold . 4037751)
 (gc-times . 6)
 (gc-mark-time-taken . 15)
 (cells-marked . 34142411.0)
 (cells-swept . 35398335.0)
 (malloc-yield . 23)
 (cell-yield . 17)
 (protected-objects . 1177)
 (cell-heap-segments (140744704 . 140779520) (2988156928 . 3044208640)
                     (3047643136 . 3067260928) (3067570176 . 3078098944)))


But after compiling all regtests starting with [abc]*.ly, we have:

Processing `custos.ly'
VMDATA: 120884
((gc-time-taken . 2623)
 (cells-allocated . 75745641)
 (total-cells-allocated . 513621680)
 (cell-heap-size . 10737028)
 (bytes-malloced . 35585796)
 (gc-malloc-threshold . 45739370)
 (gc-times . 227)
 (gc-mark-time-taken . 2623)
 (cells-marked . 2013812174.0)
 (cells-swept . 2451995085.0)
 (malloc-yield . 12)
 (cell-yield . 20)
 (protected-objects . 2576)
 (cell-heap-segments (140744704 . 140779520) (2988156928 . 3044208640)
                     (3047643136 . 3067260928) (3067570176 . 3078098944)))


Notice that VMDATA goes up (almost linearly in the number of files processed),
and so do bytes-malloced, protected-objects and most of the other counts.
The cells-allocated figure went up, too, although I don't really know what a
cell actually means here.
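The growing protected-objects count in particular looks as if some SCM objects
are protected via scm_gc_protect_object() and never unprotected again. Just to
illustrate the pairing I mean (a made-up snippet, not a specific LilyPond call
site):

  #include <libguile.h>

  static void
  use_object_across_gc ()
  {
    SCM obj = scm_cons (SCM_EOL, SCM_EOL);
    scm_gc_protect_object (obj);    // counted in 'protected-objects'
    // ... keep obj in a C++ data structure, possibly across several GCs ...
    scm_gc_unprotect_object (obj);  // must be paired, otherwise obj stays live forever
  }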

Since I really don't know too much about the internals of Guile, the question
that immediately comes to mind is: doesn't this indicate that we don't release
all Guile memory after processing a file? In particular, are we missing some
objects from garbage collection, or are we not clearing the GC mark of some
SCM objects after one file is finished?
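For reproducing this from the C++ side, something along the following lines
should work (a rough sketch using Guile 1.8's C API, not the actual
check-memory helper from lily.scm):

  #include <libguile.h>
  #include <cstdio>

  static void
  dump_gc_stats (const char *filename)
  {
    scm_gc ();                                // collect first, so the counts are comparable
    printf ("GC stats after %s: ", filename);
    scm_display (scm_gc_stats (), scm_current_output_port ());
    scm_newline (scm_current_output_port ());
  }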

Cheers,
Reinhold
-- 
Reinhold Kainhofer, reinh...@kainhofer.com, http://reinhold.kainhofer.com/
 * Financial & Actuarial Math., Vienna Univ. of Technology, Austria
 * http://www.fam.tuwien.ac.at/, DVR: 0005886
 * LilyPond, Music typesetting, http://www.lilypond.org



Re: Memleaks or not?

2011-09-03 Thread Han-Wen Nienhuys
On Wed, Aug 24, 2011 at 3:30 PM, Reinhold Kainhofer
<reinh...@kainhofer.com> wrote:
> Running lilypond on a lot of files in one run, I observe that lilypond's
> memory usage slowly goes up with time, i.e. it seems that lilypond does not
> properly free all memory used for one score, before it starts with the next
> one.

> 1) In Pango_font::text_stencil we have
>    PangoLayout *layout = pango_layout_new (context_);
> but after using the layout, we never call g_object_unref (layout).

looks like a leak.
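Something along these lines should plug it (a sketch only, not a tested patch:
pango_layout_new() hands the only reference to the caller, so text_stencil has
to drop it once the stencil has been built):

  #include <pango/pango.h>

  static void
  measure_text (PangoContext *context, const char *str)
  {
    PangoLayout *layout = pango_layout_new (context);
    pango_layout_set_text (layout, str, -1);

    PangoRectangle ink, logical;
    pango_layout_get_extents (layout, &ink, &logical);
    // ... build the stencil from the extents / lines ...

    g_object_unref (layout);   // without this, one PangoLayout leaks per call
  }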

> 3) There are many, many warnings about possibly lost memory in
> pango_layout_get_lines calls, but I don't see how they can be real memory
> leaks from the pango docs. Still, the numbers go up linearly in the number of
> files...

looks suspect. Pango_font::text_stencil allocates stuff but does not deallocate.
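For what it's worth, the Pango docs say the line list returned by
pango_layout_get_lines() is owned by the layout and must not be freed by the
caller, so a guess (only a guess) is that these "possibly lost" blocks are
again the un-unreffed layout rather than the lines. A usage sketch of the
ownership rules:

  #include <pango/pango.h>

  static int
  count_layout_lines (PangoLayout *layout)
  {
    GSList *lines = pango_layout_get_lines (layout);  // borrowed, owned by the layout
    int n = 0;
    for (GSList *l = lines; l; l = l->next)
      n++;
    // unreffing the layout (by the caller) releases the lines as well
    return n;
  }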

> 4) In all-font-metrics.cc we have
>  PangoFontMap *pfm = pango_ft2_font_map_new ();
>  pango_ft2_fontmap_ = PANGO_FT2_FONT_MAP (pfm);
> And in All_font_metrics::~All_font_metrics we have:
>  g_object_unref (pango_ft2_fontmap_);
>
> Still, valgrind reports that pango_ft2_fontmap_ is possibly lost, and the
> number of bytes goes up linearly in the number of files...

The font handling is fugly; there is a global variable holding a list
of fonts somewhere. Getting this fixed is dirty work, without much
benefit.

For the Scheme side: when running multiple files, a GC is run after
every file.  There is a warning about "object should be dead" that
should trigger if we find any live Scheme objects that should have
died.  The GC allocation strategy may still cause memory to go up
overall, as it will accommodate the peak memory use (i.e. the most
expensive file).

-- 
Han-Wen Nienhuys - han...@xs4all.nl - http://www.xs4all.nl/~hanwen



Memleaks or not?

2011-08-24 Thread Reinhold Kainhofer
Running lilypond on a lot of files in one run, I observe that lilypond's 
memory usage slowly goes up with time, i.e. it seems that lilypond does not 
properly free all memory used for one score, before it starts with the next 
one. 

In particular, running lilypond on all 1010 regtests, the output of ps looks
like this as time goes on:
USER       PID %CPU %MEM    VSZ    RSS TTY      STAT START   TIME COMMAND
reinhold 28946  127  4.7 108432  97856 pts/3    R+   19:27   0:01 lilypond
reinhold 28946 74.4  5.4 121564 110988 pts/3   R+   19:27   0:18 lilypond 
reinhold 28946 72.6  5.9 137168 123036 pts/3   R+   19:27   0:39 lilypond 
reinhold 28946 73.4  6.4 151544 133308 pts/3   R+   19:27   0:59 lilypond 
reinhold 28946 73.8  6.7 159784 139320 pts/3   S+   19:27   1:11 lilypond 
reinhold 28946 72.1  8.1 199532 166512 pts/3   R+   19:27   1:54 lilypond 
reinhold 28946 72.6 11.2 271932 230260 pts/3   R+   19:27   2:28 lilypond 
reinhold 28946 72.0 12.0 316108 247232 pts/3   R+   19:27   2:56 lilypond
reinhold 28946 72.0 12.6 341956 259276 pts/3   R+   19:27   3:25 lilypond
...
reinhold 28946 72.1 15.4 423400 315956 pts/3   S+   19:27   5:35 lilypond
reinhold 28946 72.4 18.8 526744 387140 pts/3   R+   19:27   7:47 lilypond
reinhold 28946 72.5 27.9 747168 572740 pts/3   R+   19:27   9:59 lilypond


Using valgrind's memcheck tool, I get some "possibly lost" memory warnings that
might be leaks. Can anyone confirm whether these are really leaks?

However, the leaks reported by valgrind do not explain why lilypond's memory
grows on average by ~650kB VSZ and ~475kB RSS per regtest file.

All valgrind warnings in this mail were obtained by running valgrind on just
two files. Most of the reportedly lost memory numbers go up practically
linearly in the number of files processed.
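I'm not listing the exact command line here; something like the following
(with a.ly and b.ly standing in for the two regtest files, and the flags
chosen as a reasonable default rather than the exact ones used) reproduces
this kind of report:

  valgrind --tool=memcheck --leak-check=full --num-callers=20 lilypond a.ly b.ly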



1) In Pango_font::text_stencil we have
PangoLayout *layout = pango_layout_new (context_);
but after using the layout, we never call g_object_unref (layout).


==28530== 4,080 bytes in 2 blocks are possibly lost in loss record 6,178 of 6,263
==28530==    at 0x402517B: memalign (vg_replace_malloc.c:581)
==28530==    by 0x40251D8: posix_memalign (vg_replace_malloc.c:709)
==28530==    by 0x43B3546: ??? (in /lib/i386-linux-gnu/libglib-2.0.so.0.2800.6)
==28530==    by 0x43B4A4C: g_slice_alloc (in /lib/i386-linux-gnu/libglib-2.0.so.0.2800.6)
==28530==    by 0x43B4B74: g_slice_alloc0 (in /lib/i386-linux-gnu/libglib-2.0.so.0.2800.6)
==28530==    by 0x432C8C6: g_type_create_instance (in /usr/lib/i386-linux-gnu/libgobject-2.0.so.0.2800.6)
==28530==    by 0x4309674: ??? (in /usr/lib/i386-linux-gnu/libgobject-2.0.so.0.2800.6)
==28530==    by 0x430CCE6: g_object_newv (in /usr/lib/i386-linux-gnu/libgobject-2.0.so.0.2800.6)
==28530==    by 0x430DA3F: g_object_new (in /usr/lib/i386-linux-gnu/libgobject-2.0.so.0.2800.6)
==28530==    by 0x42224C5: pango_layout_new (in /usr/lib/i386-linux-gnu/libpango-1.0.so.0.2800.4)
==28530==    by 0x81D02D3: Pango_font::text_stencil(Output_def*, std::string, bool) const (pango-font.cc:312)
==28530==    by 0x8292E68: Text_interface::interpret_string(scm_unused_struct*, scm_unused_struct*, scm_unused_struct*) (text-interface.cc:79)
==28530==    by 0x82932BE: Text_interface::interpret_markup(scm_unused_struct*, scm_unused_struct*, scm_unused_struct*) (text-interface.cc:95)



2) In includable-lexer.cc we use
  yy_switch_to_buffer (yy_create_buffer (file->get_istream (), YY_BUF_SIZE));
and in Source_file::~Source_file we have "delete istream_;". On the other
hand, running valgrind with 2 and with 3 .ly files shows that the memory
recorded as possibly lost really increases.
Still, valgrind reports:


==28530== 118,852 bytes in 18 blocks are possibly lost in loss record 6,257 of 6,263
==28530==    at 0x402641D: operator new(unsigned int) (vg_replace_malloc.c:255)
==28530==    by 0x44B89F7: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.14)
==28530==    by 0x44BAC33: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.14)
==28530==    by 0x44BAD20: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned int, std::allocator<char> const&) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.14)
==28530==    by 0x44B2B27: std::basic_istringstream<char, std::char_traits<char>, std::allocator<char> >::basic_istringstream(std::string const&, std::_Ios_Openmode) (in /usr/lib/i386-linux-gnu/libstdc++.so.6.0.14)
==28530==    by 0x824A340: Source_file::get_istream() (source-file.cc:163)
==28530==    by 0x8136886: Includable_lexer::new_input(std::string, Sources*) (includable-lexer.cc:93)
==28530==    by 0x814B3A5: Lily_lexer::new_input(std::string, Sources*) (lily-lexer.cc:269)
==28530==    by 0x82D26AF: Lily_lexer::yylex() (lexer.ll:305)
==28530==    by 0x82E4599: yylex(YYSTYPE*, Input*, void*)