Hi,

This problem does NOT occur after I modified the free_parser code. May I modify the code, or should I wait for the next release?

Thanks
2011/7/11 Yoshiyuki Asaba <[email protected]>

> It looks like the SQL parser does not release memory.
>
> ==8398== 1,490,944 bytes in 182 blocks are still reachable in loss record 86 of 86
> ==8398==    at 0x4C274A8: malloc (vg_replace_malloc.c:236)
> ==8398==    by 0x466525: pool_memory_alloc (pool_memory.c:110)
> ==8398==    by 0x44C1DE: scanner_init (scan.l:1052)
> ==8398==    by 0x46617F: raw_parser (parser.c:61)
> ==8398==    by 0x445DF7: SimpleQuery (pool_proto_modules.c:152)
> ==8398==    by 0x446E12: ProcessFrontendResponse (pool_proto_modules.c:1974)
> ==8398==    by 0x419825: pool_process_query (pool_process_query.c:344)
> ==8398==    by 0x40A9E1: do_child (child.c:361)
> ==8398==    by 0x4050A4: fork_a_child (main.c:1066)
> ==8398==    by 0x4076BD: main (main.c:543)
>
> I'm not sure whether this is a correct fix, but it probably fixes the issue.
>
> Index: pool_proto_modules.c
> ===================================================================
> RCS file: /cvsroot/pgpool/pgpool-II/pool_proto_modules.c,v
> retrieving revision 1.98
> diff -c -r1.98 pool_proto_modules.c
> *** pool_proto_modules.c	5 Jun 2011 23:03:06 -0000	1.98
> --- pool_proto_modules.c	10 Jul 2011 14:58:36 -0000
> ***************
> *** 453,461 ****
>   			free_parser();
>   			return POOL_END;
>   		}
> ! /*
>   		free_parser();
> ! */
>   	}
>   	return POOL_CONTINUE;
>   }
> --- 453,461 ----
>   			free_parser();
>   			return POOL_END;
>   		}
> !
>   		free_parser();
> !
>   	}
>   	return POOL_CONTINUE;
>   }
>
> Thanks,
>
> 2011/7/10 takehiro wada <[email protected]>:
> > Hi
> >
> > I got the valgrind info.
> > Do you have any idea why this problem occurs?
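[The patch above re-enables the free_parser() call that had been commented out, so that the per-query parser memory is released on the success path as well as the error path. A minimal C sketch of that pattern follows; the names raw_parse, simple_query, and parser_pool are illustrative stand-ins, not pgpool's actual API:]

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the per-query parser memory pool that
 * pgpool's scanner_init() allocates via pool_memory_alloc(). */
static char *parser_pool = NULL;

/* Stand-in for raw_parser(): allocates per-query memory. */
static void raw_parse(const char *query)
{
    parser_pool = malloc(strlen(query) + 1);
    strcpy(parser_pool, query);
}

/* Releases everything the parse allocated, like pgpool's free_parser(). */
static void free_parser(void)
{
    free(parser_pool);
    parser_pool = NULL;
}

/* Mirrors the fixed control flow of SimpleQuery(): free_parser() now
 * runs on EVERY exit path, not only the error path. */
int simple_query(const char *query, int error)
{
    raw_parse(query);
    if (error)
    {
        free_parser();  /* error path: this call was already present */
        return -1;      /* POOL_END */
    }
    free_parser();      /* success path: this call had been commented out,
                         * leaking one pool per query */
    return 0;           /* POOL_CONTINUE */
}
```

Without the second free_parser() call, each successful query would leave its pool allocated, matching the "still reachable" growth that valgrind reported above.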
> >
> > ==11672==
> > ==11672== HEAP SUMMARY:
> > ==11672==     in use at exit: 16,116,283 bytes in 3,990 blocks
> > ==11672==   total heap usage: 350,642 allocs, 346,652 frees, 48,943,488 bytes allocated
> > ==11672==
> > ==11672== 7 bytes in 1 blocks are possibly lost in loss record 2 of 38
> > ==11672==    at 0x40268A4: malloc (vg_replace_malloc.c:236)
> > ==11672==    by 0x4161A9F: strdup (strdup.c:43)
> > ==11672==    by 0x805AA89: extract_string (pool_config.l:1651)
> > ==11672==    by 0x805783B: pool_get_config (pool_config.l:479)
> > ==11672==    by 0x804A7E1: main (main.c:310)
> > ==11672==
> > ==11672== 12 bytes in 1 blocks are definitely lost in loss record 5 of 38
> > ==11672==    at 0x40268A4: malloc (vg_replace_malloc.c:236)
> > ==11672==    by 0x4161A9F: strdup (strdup.c:43)
> > ==11672==    by 0x805AA89: extract_string (pool_config.l:1651)
> > ==11672==    by 0x8059929: pool_get_config (pool_config.l:1253)
> > ==11672==    by 0x804A7E1: main (main.c:310)
> > ==11672==
> > ==11672== 40 bytes in 1 blocks are possibly lost in loss record 12 of 38
> > ==11672==    at 0x40268A4: malloc (vg_replace_malloc.c:236)
> > ==11672==    by 0x4161A9F: strdup (strdup.c:43)
> > ==11672==    by 0x805AA89: extract_string (pool_config.l:1651)
> > ==11672==    by 0x805790A: pool_get_config (pool_config.l:497)
> > ==11672==    by 0x804A7E1: main (main.c:310)
> > ==11672==
> > ==11672== 1,993 (80 direct, 1,913 indirect) bytes in 1 blocks are definitely lost in loss record 32 of 38
> > ==11672==    at 0x40268A4: malloc (vg_replace_malloc.c:236)
> > ==11672==    by 0x8089252: save_ps_display_args (ps_status.c:173)
> > ==11672==    by 0x804C392: fork_a_child (main.c:1027)
> > ==11672==    by 0x804AE5C: main (main.c:518)
> > ==11672==
> > ==11672== LEAK SUMMARY:
> > ==11672==    definitely lost: 92 bytes in 2 blocks
> > ==11672==    indirectly lost: 1,913 bytes in 19 blocks
> > ==11672==      possibly lost: 47 bytes in 2 blocks
> > ==11672==    still reachable: 16,114,231 bytes in 3,967 blocks
> > ==11672==         suppressed: 0 bytes in 0 blocks
> > ==11672== Reachable blocks (those to which a pointer was found) are not shown.
> > ==11672== To see them, rerun with: --leak-check=full --show-reachable=yes
> > ==11672==
> > ==11672== For counts of detected and suppressed errors, rerun with: -v
> > ==11672== ERROR SUMMARY: 4 errors from 4 contexts (suppressed: 29 from 10)
> >
> > 2011/7/8 Yoshiyuki Asaba <[email protected]>
> >>
> >> Valgrind may help. Can you try the following steps?
> >>
> >> 1. Set num_init_children to 1 in pgpool.conf
> >> 2. Launch pgpool with valgrind:
> >>    $ valgrind --leak-check=full pgpool -n -d -f pgpool.conf >> valgrind.log 2>&1
> >> 3. Run your script
> >> 4. Stop pgpool
> >>
> >> Thanks,
> >>
> >> 2011/7/8 takehiro wada <[email protected]>:
> >> > Hi
> >> >
> >> > As far as I have confirmed, this problem occurs on Linux kernels 2.6.9-67,
> >> > 2.6.26-2 and 2.6.38-8 (Red Hat, Debian and Ubuntu), regardless of pgpool's
> >> > operating mode. On our production environment, where this problem occurs,
> >> > pgpool's operating mode is streaming replication.
> >> >
> >> > My pgpool.conf for the test case is below.
> >> >
> >> > pgpool.conf
> >> > --------------------------------------------
> >> > pid_file_name = '/var/run/pgpool/pgpool.pid'
> >> > port = 10000
> >> > backend_hostname0 = 'localhost'
> >> > backend_port0 = 5432
> >> > --------------------------------------------
> >> >
> >> > thanks
> >> >
> >> > 2011/7/7 Tatsuo Ishii <[email protected]>
> >> >>
> >> >> I have tried your shell script with pgpool-II CVS HEAD. Between
> >> >> 16:56 and 17:26, about 30k SELECTs have been executed so far. As you
> >> >> can see, RSS grew from 1904 to 1912, but VSZ remained unchanged at
> >> >> 23208. Thus I think there is no memory leak. Pgpool-II is operated
> >> >> in streaming replication mode. This is Linux kernel 2.6.27. What mode
> >> >> are you operating in? What platform are you using?
> >> >>
> >> >> Thu Jul 7 16:56:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1904 ? S ? 0:03 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:01:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1904 ? S ? 0:04 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:06:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1908 ? S ? 0:05 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:11:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1908 ? S ? 0:06 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:16:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1908 ? S ? 0:07 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:21:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1908 ? S ? 0:08 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> Thu Jul 7 17:26:43 JST 2011
> >> >> F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
> >> >> 1 48 29388 29356 20 0 23208 1912 ? S ? 0:09 pgpool: t-ishii postgres 127.0.0.1(46350) idle
> >> >> --
> >> >> Tatsuo Ishii
> >> >> SRA OSS, Inc. Japan
> >> >> English: http://www.sraoss.co.jp/index_en.php
> >> >> Japanese: http://www.sraoss.co.jp
> >> >
> >> > --
> >> > *************************
> >> > Name: Takehiro Wada
> >> > mail: [email protected]
> >> > *************************
> >> >
> >> > _______________________________________________
> >> > Pgpool-general mailing list
> >> > [email protected]
> >> > http://pgfoundry.org/mailman/listinfo/pgpool-general
> >>
> >> --
> >> Yoshiyuki Asaba
> >> [email protected]
> >
> > --
> > *************************
> > Name: Takehiro Wada
> > mail: [email protected]
> > *************************
>
> --
> Yoshiyuki Asaba
> [email protected]

--
*************************
Name: Takehiro Wada
mail: [email protected]
*************************
_______________________________________________
Pgpool-general mailing list
[email protected]
http://pgfoundry.org/mailman/listinfo/pgpool-general
