Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
On 6/26/2014 12:23 PM, Shaun Thomas wrote: On 06/26/2014 11:19 AM, Claudio Freire wrote: Try changing node_id in (...) into node.id in (...) That looks much better to my untrained eye! (Am I right?) Nested Loop (cost=218.29..21305.47 rows=53480 width=187) (actual time=42.347..43.617
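For illustration, the change under discussion amounts to something like the following sketch (alf_node comes from later in the thread; the detail table name and the IN-list values are assumptions):

-- Original form: the IN-list filters the detail table's FK column
SELECT *
  FROM alf_node node
  JOIN alf_node_properties p ON p.node_id = node.id
 WHERE p.node_id IN (1, 2, 3);

-- Rewritten form: the same filter expressed against the master
-- table's PK, which produced the nested-loop plan shown above
SELECT *
  FROM alf_node node
  JOIN alf_node_properties p ON p.node_id = node.id
 WHERE node.id IN (1, 2, 3);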

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
FWIW: I tested removing the quotes around each value, and it did not change the plan (I am a little surprised too, but I guess PG is "smarter than that"). Thanks for the idea. On 6/26/2014 11:38 AM, AJ Weber wrote: I noticed this too. I am trying to find where the actual SQL is
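The point about the quotes, in miniature (values hypothetical; PostgreSQL coerces the quoted literals to the column's type during planning, so both forms yield the same plan):

EXPLAIN SELECT * FROM alf_node WHERE id IN (100, 200, 300);
EXPLAIN SELECT * FROM alf_node WHERE id IN ('100', '200', '300');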

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
11:35 AM, Claudio Freire wrote: On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber wrote: OK, the sample query is attached (hopefully attachments are allowed) as "query.sql". The "master table" definition is attached as "table1.sql". The "detail table" defini

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
227265.29 rows=5733429 width=16) (actual time=0.004..1908.493 rows=5734255 loops=1) Buffers: shared hit=146433 read=23498 Total runtime: 4967.674 ms (15 rows) On 6/26/2014 10:37 AM, Shaun Thomas wrote: On 06/26/2014 09:22 AM, AJ Weber wrote: I sent the details as identified by pg
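As a rough sanity check on the buffer counts quoted above (assuming the default 8 kB page size):

-- shared hit = pages found in shared_buffers; read = pages pulled
-- from the OS page cache or disk
SELECT pg_size_pretty((146433 + 23498) * 8192::bigint) AS total_touched,
       pg_size_pretty(23498 * 8192::bigint)            AS read_outside;
-- ~1.3 GB touched in total, ~184 MB of it from outside shared_buffers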

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
btree (id) CLUSTER would indicate to me that there is a PK on alf_node table, it is on column "id", it is of type btree, and the table is clustered around that index. Am I reading this totally wrong? The supporting table actually seems to have a multi-column PK defined, and a separate
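That reading is correct: "btree (id) CLUSTER" in \d output means the table was physically reordered on that index at some point, e.g. (index name assumed):

CLUSTER alf_node USING alf_node_pkey;
-- Note: CLUSTER is a one-time reorder; PostgreSQL does not maintain
-- the ordering as new rows are inserted or updated.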

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
you for the feedback and offer to help! -AJ On 6/26/2014 9:56 AM, Matheus de Oliveira wrote: On Thu, Jun 26, 2014 at 10:26 AM, AJ Weber <awe...@comcast.net> wrote: OK, the sample query is attached (hopefully attachments are allowed) as "query.sql". The

Re: [PERFORM] how to improve perf of 131MM row table?

2014-06-26 Thread AJ Weber
OK, the sample query is attached (hopefully attachments are allowed) as "query.sql". The "master table" definition is attached as "table1.sql". The "detail table" definition is attached as "table2.sql". The EXPLAIN (ANALYZE, BUFFERS) output is here: http://explain.depesz.com/s/vd5 Let me know

[PERFORM] how to improve perf of 131MM row table?

2014-06-25 Thread AJ Weber
Sorry for the semi-newbie question... I have a relatively sizable postgresql 9.0.2 DB with a few large tables (keep in mind "large" is relative, I'm sure there are plenty larger out there). One of my queries that seems to be bogging-down performance is a join between two tables on each of th
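One quick way to see whether that join is falling back to sequential scans (table names assumed from later in the thread; pg_stat_user_tables is available in 9.0):

SELECT relname, seq_scan, seq_tup_read, idx_scan
  FROM pg_stat_user_tables
 WHERE relname IN ('alf_node', 'alf_node_properties');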

Re: [PERFORM] What setup would you choose for postgresql 9.2 installation?

2013-03-04 Thread AJ Weber
AJ On 3/4/2013 9:36 AM, Scott Marlowe wrote: On Mon, Mar 4, 2013 at 7:04 AM, AJ Weber wrote: Apologies for the tangential question, but how would pgpool2 "increase throughput"? Wouldn't the same number of statements be issued by your application? It would likely reduce the number

Re: [PERFORM] What setup would you choose for postgresql 9.2 installation?

2013-03-04 Thread AJ Weber
Apologies for the tangential question, but how would pgpool2 "increase throughput"? Wouldn't the same number of statements be issued by your application? It would likely reduce the number of concurrent connections, but that doesn't necessarily equate to "increased throughput". -AJ On 3/4/2
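For what it's worth, the usual claim is that the gain comes from fewer simultaneous backends rather than fewer statements; the effect is visible server-side with something like:

-- With a pooler in front, this stays near the pool size instead of
-- tracking application concurrency:
SELECT count(*) AS live_backends FROM pg_stat_activity;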

Re: [PERFORM] autovacuum fringe case?

2013-01-23 Thread AJ Weber
On 1/23/2013 2:13 PM, Jeff Janes wrote: On Wed, Jan 23, 2013 at 8:53 AM, AJ Weber wrote: I have a server that is IO-bound right now (it's 4 cores, and top indicates the usage rarely hits 25%, but the Wait spikes above 25-40% regularly). How long do the spikes last? From what I can gath

[PERFORM] autovacuum fringe case?

2013-01-23 Thread AJ Weber
I have a server that is IO-bound right now (it's 4 cores, and top indicates the usage rarely hits 25%, but the Wait spikes above 25-40% regularly). The server is running postgresql 9.0 and tomcat 6. As I have mentioned in a previous thread, I can't alter the hardware to add disks unfortunately,
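A quick check on whether autovacuum is keeping up with the larger tables (all columns below exist in 9.0's pg_stat_user_tables):

SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;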

[PERFORM] Analyze and default_statistics_target

2013-01-21 Thread AJ Weber
I was under the impression that the default_statistics_target was a percentage of rows to analyze. Maybe this is not the case? I ran an analyze during a "quiet point" last night and for a few of my large tables, I didn't get what I consider a reasonable sampling of rows. When running with "v
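For reference, the target is not a percentage: ANALYZE samples roughly 300 * statistics-target rows per table. A deeper sample can be forced per column without raising the global setting (table and column names hypothetical):

ALTER TABLE big_table ALTER COLUMN hot_column SET STATISTICS 500;
ANALYZE VERBOSE big_table;  -- will now sample ~300 * 500 = 150,000 rows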

Re: [PERFORM] Two Necessary Kernel Tweaks for Linux Systems

2013-01-08 Thread AJ Weber
When I checked these, both of these settings exist on my CentOS 6.x host (2.6.32-279.5.1.el6.x86_64). However, the autogroup_enabled was already set to 0. (The migration_cost was set to the 0.5ms default noted in the OP.) So I don't know if this is strictly limited to kernel 3.0. Is there

Re: [PERFORM] Partition table in 9.0.x?

2013-01-08 Thread AJ Weber
It does if you use it without an argument, to display all the tables in the search path:

jjanes=# \d+
                   List of relations
 Schema | Name | Type | Owner | Size | Description
--------+------+------+-------+------+-------------
 public |

Re: [PERFORM] Partition table in 9.0.x?

2013-01-08 Thread AJ Weber
It probably does, but from psql command line, you can do \d+ and \di+ \d+ doesn't appear to display any size information. If you have little control over your storage and are already IO bound, and the tables are growing rapidly, you may need to rethink that "deletes are rare" bit. So the

Re: [PERFORM] Partition table in 9.0.x?

2013-01-06 Thread AJ Weber
All fair questions... Thank you for your detailed response! On 1/4/2013 11:03 PM, Jeff Janes wrote: On Friday, January 4, 2013, AJ Weber wrote: Hi all, I have a table that has about 73mm rows in it and growing. How big is the table in MB? Its indexes? Not sure on this. Will
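The size questions at the top are answerable directly in SQL (table name hypothetical; pg_indexes_size exists as of 9.0):

SELECT pg_size_pretty(pg_total_relation_size('my_table')) AS total,
       pg_size_pretty(pg_relation_size('my_table'))       AS heap_only,
       pg_size_pretty(pg_indexes_size('my_table'))        AS indexes;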

[PERFORM] Partition table in 9.0.x?

2013-01-04 Thread AJ Weber
Hi all, I have a table that has about 73mm rows in it and growing. Running 9.0.x on a server that unfortunately is a little I/O constrained. Some (maybe) pertinent settings:

default_statistics_target = 50
maintenance_work_mem = 512MB
constraint_exclusion = on
effective_cache_size = 5GB
work_
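Since 9.0 has no declarative partitioning, the standard recipe is inheritance plus CHECK constraints, which the constraint_exclusion = on setting above can use to prune partitions. A minimal sketch (names and ranges hypothetical):

CREATE TABLE parent_tbl (id bigint, created date, payload text);

CREATE TABLE parent_tbl_2013q1 (
    CHECK (created >= DATE '2013-01-01' AND created < DATE '2013-04-01')
) INHERITS (parent_tbl);

-- Inserts are routed by a trigger on parent_tbl (or done directly
-- against the children); queries filtering on "created" then skip
-- partitions whose CHECK constraint cannot match.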