PostgreSQL 9.3.3 on RHEL 6.4
Total db server memory: 64GB

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
max_connections = 100
shared_buffers = 16GB
work_mem = 32MB
maintenance_work_mem = 1GB
seq_page_cost = 1.0
random_page_cost = …
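As a side note (standard PostgreSQL behavior, not something stated in the thread): shared_buffers and max_connections only take effect after a full server restart, while the other settings listed above can be applied with a configuration reload:

-- Apply reloadable changes after editing postgresql.conf
-- (shared_buffers and max_connections still require a restart):
SELECT pg_reload_conf();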
We have 64GB of memory on RHEL 6.4:

shared_buffers = 8GB
work_mem = 64MB
maintenance_work_mem = 1GB
effective_cache_size = 48GB

I found this list of recommended parameters for memory management in PostgreSQL. About shared_buffers:
Below 2GB, set to 20% of total system memory.
Below 32GB, set to 25% of total system memory.
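For completeness, the values actually in effect can be verified from a live session (standard catalog query, no assumptions about this particular setup):

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'maintenance_work_mem', 'effective_cache_size');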
If we have the following trigger:

CREATE TRIGGER admin_update_trigger
    BEFORE UPDATE ON admin_logger_overflow
    FOR EACH ROW
    WHEN (old.start_date_time IS DISTINCT FROM new.start_date_time)
    EXECUTE PROCEDURE update_logger_config();

and the database call issues an:

update admin_logger_overflow set sto…
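Assuming the question is whether the trigger fires when a column other than start_date_time is updated: with this WHEN clause it does not, because old.start_date_time and new.start_date_time are then equal. A minimal sketch (the stop_date_time column and the WHERE condition are our assumptions, since the original statement is cut off):

-- Does NOT fire the trigger: start_date_time is unchanged,
-- so the WHEN condition evaluates to false for every row.
UPDATE admin_logger_overflow SET stop_date_time = now() WHERE id = 1;

-- Fires the trigger: start_date_time changes, so
-- old.start_date_time IS DISTINCT FROM new.start_date_time is true.
UPDATE admin_logger_overflow SET start_date_time = now() WHERE id = 1;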
We are running PostgreSQL 9.1.6 with autovacuum = on, and I am reporting on dead tuples using the pgstattuple extension. Each time I run pgstattuple, our dead tuple counts decrease. My colleague is under the impression that dead tuples are only cleaned up via VACUUM FULL, while I suspect that autovacuum is removing them as well.
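For reference, this is roughly how such a check can be run (the table name is a placeholder; pgstattuple itself is the standard contrib extension):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Report dead tuples for one table:
SELECT dead_tuple_count, dead_tuple_percent
FROM pgstattuple('some_table');

A decreasing dead tuple count between runs is consistent with autovacuum at work: plain VACUUM, which autovacuum runs, makes dead tuple space reusable, while VACUUM FULL is only needed to return space to the operating system.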
Hi Greg,

The labor_task_report table is already partitioned by this_.work_date_time, and this table contains approx. 15 billion rows. The other table, labor_tasks, is not partitioned. I'm thinking that the size of the external sort is part of the problem. If I remove the labor_tasks table from the SQL, …
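One generic way to confirm whether a sort is spilling to disk (our suggestion, not from the thread): run the query under EXPLAIN ANALYZE and look for "Sort Method: external merge  Disk: ...kB" in the output. Raising work_mem for just that session changes the spill threshold:

-- Raise the per-sort memory budget for this session only
-- (the value is illustrative), then re-check the plan:
SET work_mem = '256MB';
EXPLAIN ANALYZE
SELECT ...;   -- the original report query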
Can anyone offer suggestions on how I can optimize a query that contains a LIMIT OFFSET clause? The explain plan of the query is included in the notepad attachment.

thanks

The rows skipped by an OFFSET clause still have to be computed inside the server; therefore a large OFFSET might be inefficient.
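One common workaround is keyset pagination: instead of skipping N rows, remember the last key seen and seek past it with an indexed comparison. A sketch with an assumed events table ordered by id (not the poster's actual query):

-- OFFSET pagination: the server still reads and discards 100000 rows.
SELECT * FROM events ORDER BY id LIMIT 50 OFFSET 100000;

-- Keyset pagination: an index on id lets the server start
-- directly after the last row of the previous page.
SELECT * FROM events
WHERE id > 100050        -- last id seen on the previous page
ORDER BY id
LIMIT 50;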
Two questions, please:

1.) Is there any way to clear the cache, so that when we run "explain analyze" on a query, make some minor adjustments, and re-execute it, the plan is not cached? The cached plan returns runtimes that are much lower than the initial execution.
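What usually produces this effect is data caching (shared_buffers plus the OS page cache) rather than plan caching; PostgreSQL does not cache plans for ad-hoc SQL. One way to see how much of a run was served from cache (standard EXPLAIN option, available since 9.0):

EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;   -- the query under test

-- In the output, "Buffers: shared hit=..." counts pages found in
-- shared_buffers, while "read=..." counts pages fetched from the
-- OS cache or disk; a second run typically shows far more hits.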
We re-tested these settings a few times after our initial test and realized that the execution time I posted was skewed, because the execution plan was cached after the initial run. Subsequent executions ran in a little over a second.
There ended up being no significant savings from setting these parameters.
1.) Server setting
memory: 32960116kB = 32GB

2.) Current PostgreSQL configuration settings of note in my environment:

enable_hashjoin = off
work_mem = 16MB
#random_page_cost = 4.0   <- default
maintenance_work_mem = 256MB
shared_buffers = 8GB

serverdb=# explain analyze select count(*) as y0_ from SARS_ACTS this_ …
serverdb=# set enable_hashjoin=off;
SET
serverdb=# explain select count(*) as y0_ from SARS_ACTS this_
               inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID = tr1_.ID
               where tr1_.ALGORITHM = 'SMAT';
                                  QUERY PLAN
--…
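To compare the two plans side by side in one session, the override can be dropped with RESET (standard commands; the query is the one from the thread):

set enable_hashjoin = off;
explain analyze select count(*) as y0_ from SARS_ACTS this_
    inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID = tr1_.ID
    where tr1_.ALGORITHM = 'SMAT';

-- Restore the server default and re-run to see the hash join plan:
reset enable_hashjoin;
explain analyze select count(*) as y0_ from SARS_ACTS this_
    inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID = tr1_.ID
    where tr1_.ALGORITHM = 'SMAT';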
PostgreSQL 9.1.6 on Linux
Original Message
Subject: Re: [PERFORM] Very slow inner join query Unacceptable latency.
From: Jaime Casanova
Date: Tue, May 21, 2013 2:59 pm
To: Freddie Burgess
Cc: psql performance list
The SARS_ACTS table currently has 37,115,515 rows.

We have indexed:
    idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id)
We have a pk constraint on the SARS_ACTS_RUN table:
    sars_acts_run_pkey PRIMARY KEY (id)

serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS…
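Neither of the indexes above covers the ALGORITHM filter on SARS_ACTS_RUN. A generic option worth testing (our suggestion, not something proposed in the thread) is an index supporting that predicate, or a partial index if 'SMAT' is a commonly queried value:

create index idx_sars_acts_run_algorithm
    on SARS_ACTS_RUN (algorithm);

-- Or, if most queries filter on ALGORITHM = 'SMAT', a smaller partial index:
create index idx_sars_acts_run_smat
    on SARS_ACTS_RUN (id)
    where algorithm = 'SMAT';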