RE: Postgresql JDBC process consumes more memory with partition tables update delete
Yes, it's the same prepared statement from both psql and JDBC. We compared them one by one and saw the big difference as explained: psql and JDBC differ widely. Let's focus on the JDBC driver client: why does it consume 160MB of memory even though the table size is very small, yet only 25MB for non-partitioned tables with the same table attributes and data volume?

-----Original Message-----
From: Justin Pryzby
Sent: Wednesday, September 7, 2022 12:15 AM
To: James Pang (chaolpan)
Cc: pgsql-performance@lists.postgresql.org
Subject: Re: Postgresql JDBC process consumes more memory than psql client

On Tue, Sep 06, 2022 at 04:15:03AM +, James Pang (chaolpan) wrote:
> We made 2 comparisons between partitioned (256 HASH) and non-partitioned tables
> (same data volume, same table attributes), running the same UPDATE/DELETE:
> 1. With partitioned tables, "RES" memory from the top command increased quickly
>    to 160MB and stayed stable there. From the auto_explain trace, we did see
>    partition pruning to the specific partition when executing the prepared SQL
>    statement via the PostgreSQL JDBC driver.
> 2. With non-partitioned tables, "RES" memory from the top command stayed stable
>    at only 24MB. Same auto_explain trace, and only a table and index scan for
>    the prepared SQL statement via the PostgreSQL JDBC driver.
> 3. With the psql client, running the UPDATE/DELETE SQL locally, partition
>    pruning works and "RES" memory is much less, about 9MB.
>
> Yesterday, during a workload test, a lot of PostgreSQL JDBC connections
> used 150-160MB of memory each, so we got ERROR: out of memory

How many JDBC clients were there? Did you use the same number of clients when you used psql? Otherwise it wasn't a fair test.

Also, did you try using psql with PREPARE+EXECUTE? I imagine memory use would match JDBC.

It's probably not important, but if you set the log level high enough, you could log memory use more accurately using log_executor_stats (maxrss).

> So, it looks like something with the PostgreSQL JDBC driver leads to the high
> memory consumption when the table is partitioned; even when the table is not
> partitioned, it consumes more memory compared with the psql client. Any
> suggestions to tune that? PG v13, OS RHEL8, virtual machine on VMware. We set
> shared_buffers = 36% of physical memory and effective_cache_size = 70% of
> physical memory; total physical memory is about 128GB.

I sent this before hoping to get answers to all the most common questions earlier, rather than being spread out over the first handful of emails.
https://wiki.postgresql.org/wiki/Slow_Query_Questions

version 13 point what ?
what are the other non-default gucs ?
what are the query plans ?

--
Justin
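Justin's PREPARE+EXECUTE suggestion can be tried from psql roughly as below. This is a minimal sketch, not the thread's actual workload: the table and column names (`orders`, `order_id`, `status`) are hypothetical stand-ins for the 256-way HASH-partitioned table described above.

```sql
-- Hypothetical schema standing in for the partitioned table under test.
PREPARE upd (bigint) AS
    UPDATE orders SET status = 'done' WHERE order_id = $1;

EXECUTE upd(42);
EXECUTE upd(42);
EXECUTE upd(42);
EXECUTE upd(42);
EXECUTE upd(42);
EXECUTE upd(42);  -- from the sixth execution onward the planner may switch
                  -- to a cached generic plan; for a 256-partition table that
                  -- cached plan can keep per-partition data in backend memory
```

If the generic plan turns out to be the culprit, one knob to experiment with on PG 12+ is `SET plan_cache_mode = force_custom_plan;`, which keeps re-planning (and pruning) on every execution at some CPU cost. On the JDBC side, the driver's `prepareThreshold` connection parameter controls when it switches to server-side prepared statements, which may explain why psql's simple queries behave differently.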
Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries
Hello guys. I'd like to report back on this issue. I've been monitoring this installation, which has very large distinct SQL statements, and I noticed something that probably isn't new here but that I'd like to confirm.

After I reduced pg_stat_statements.max from 10k to 3k, pgss_query_texts.stat peaked at a reasonable size of ~450MB, and by monitoring the file size I was able to catch a 1-minute window in which the pgss_query_texts.stat gc was happening. But whenever a gc was detected, a bunch of statements would get logged in the PG log as slow statements, all reportedly taking around 1s; some statements were as trivial as "BEGIN" and "COMMIT".

Then last week I asked for another reduction of pg_stat_statements.max, from 3k to 300, and those slow-statement reports aren't happening anymore, even though the pgss_query_texts.stat gc still occurs.

My question is: is it safe to assume that, because the gc of pgss_query_texts.stat requires a global lock, this is a limitation of the current pg_stat_statements implementation?

Thanks

On Wed, Aug 3, 2022 at 11:17 AM Tom Lane wrote:
> bruno da silva writes:
> > *Question:* Besides the gc issue that you mentioned, could having a large
> > (700MB or 1GB) pgss_query_texts.stat cause slowness in pg_stat_statements
> > processing, leading to slower query responses with a 32-bit PG? I'm
> > thinking of reducing pg_stat_statements.max from 10k to 3k.
>
> Whether or not we've fully identified the problem, I think cutting
> pg_stat_statements.max is a good idea. Especially as long as you're
> stuck on an unsupported PG version.
>
> regards, tom lane

--
Bruno da Silva
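For reference, the settings discussed above can be inspected from SQL. A minimal sketch, assuming the pg_stat_statements extension is installed (changing `pg_stat_statements.max` itself requires editing postgresql.conf and restarting the server):

```sql
SHOW pg_stat_statements.max;                 -- current cap on tracked statements
SELECT count(*) FROM pg_stat_statements;     -- entries currently tracked

-- Discards all accumulated entries; this also lets the external
-- query-text file (pgss_query_texts.stat) shrink.
SELECT pg_stat_statements_reset();
```

Fewer tracked entries means a smaller query-text file, so the gc pass that rewrites that file, which holds a lock while it runs, completes faster; that is consistent with the observation that dropping the max from 3k to 300 made the slow-statement reports disappear.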
Re: Postgresql JDBC process consumes more memory than psql client
On Tue, Sep 06, 2022 at 04:15:03AM +, James Pang (chaolpan) wrote:
> We made 2 comparisons between partitioned (256 HASH) and non-partitioned tables
> (same data volume, same table attributes), running the same UPDATE/DELETE:
> 1. With partitioned tables, "RES" memory from the top command increased quickly
>    to 160MB and stayed stable there. From the auto_explain trace, we did see
>    partition pruning to the specific partition when executing the prepared SQL
>    statement via the PostgreSQL JDBC driver.
> 2. With non-partitioned tables, "RES" memory from the top command stayed stable
>    at only 24MB. Same auto_explain trace, and only a table and index scan for
>    the prepared SQL statement via the PostgreSQL JDBC driver.
> 3. With the psql client, running the UPDATE/DELETE SQL locally, partition
>    pruning works and "RES" memory is much less, about 9MB.
>
> Yesterday, during a workload test, a lot of PostgreSQL JDBC connections
> used 150-160MB of memory each, so we got ERROR: out of memory

How many JDBC clients were there? Did you use the same number of clients when you used psql? Otherwise it wasn't a fair test.

Also, did you try using psql with PREPARE+EXECUTE? I imagine memory use would match JDBC.

It's probably not important, but if you set the log level high enough, you could log memory use more accurately using log_executor_stats (maxrss).

> So, it looks like something with the PostgreSQL JDBC driver leads to the high
> memory consumption when the table is partitioned; even when the table is not
> partitioned, it consumes more memory compared with the psql client. Any
> suggestions to tune that? PG v13, OS RHEL8, virtual machine on VMware. We set
> shared_buffers = 36% of physical memory and effective_cache_size = 70% of
> physical memory; total physical memory is about 128GB.

I sent this before hoping to get answers to all the most common questions earlier, rather than being spread out over the first handful of emails.
https://wiki.postgresql.org/wiki/Slow_Query_Questions

version 13 point what ?
what are the other non-default gucs ?
what are the query plans ?

--
Justin
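As a sketch of the log_executor_stats suggestion (setting it typically requires superuser privileges, and the output goes to the server log, not the client; the table name below is a hypothetical stand-in):

```sql
SET log_executor_stats = on;

UPDATE orders SET status = 'done' WHERE order_id = 42;

-- The server log now contains an "EXECUTOR STATISTICS" block whose
-- resource-usage lines include the backend's maximum resident set
-- size (maxrss), a more precise figure than eyeballing RES in top.
```

This measures the executing backend itself, so it can be run from both a psql session and a JDBC connection to get directly comparable per-process numbers.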