On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett tgarn...@panjiva.com wrote:
Though maybe in a lot of common use situations people only supply values
that are known to be present, so maybe this would make things worse more often than
better (maybe limit 1 or better EXISTS would be a hint the value is
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
On Monday 26 September 2011 19:39, Marc Cousin wrote:
Because Index Scans are sorted, not Bitmap Index Scans, which
build a list of pages to visit, to then be visited by the Bitmap
Heap Scan step.
Marc.
On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
The thing is, the optimizer doesn't know if your data will be in cache
when you run your query… If you are sure most of your data is in
the cache most of the time, you could try to tune random_page_cost
(lower it) to reflect that data
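As a hedged sketch of that tuning (the value 1.5 is only an illustration, not a recommendation): the setting can be changed per session first, so the plan change can be verified before touching postgresql.conf.

```sql
-- The default random_page_cost is 4.0; values near 1.0 tell the
-- planner "random I/O costs about the same as sequential I/O",
-- which is closer to reality for a mostly-cached database.
SET random_page_cost = 1.5;      -- session-level experiment only

-- Re-run the problem query under the new cost and compare plans:
-- EXPLAIN ANALYZE SELECT ...;

-- If the plan improves, the setting can later be made persistent
-- in postgresql.conf (random_page_cost = 1.5).
```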
On 27/09/11 22:05, anthony.ship...@symstream.com wrote:
What I really want is to just read a sequence of records in timestamp order
between two timestamps. The number of records to be read may be in the
millions totalling more than 1GB of data so I'm trying to read them a slice
at a time but I
1. First things first: vacuum cannot delete tuples that are still
visible to any old running transactions. You might have some very long
queries or transactions that prevent it from cleaning properly:
select * from pg_stat_activity where xact_start < now() - interval '10 minutes';
2. On 8.3 and
We had performed VACUUM FULL on our production and performance has improved
a lot !
I started using pgstattuple and pg_freespacemap for tracking free space in
the tables and indexes, and they are helping us a lot.
Thanks for all your inputs and help !
Regards,
VB
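For reference, a minimal sketch of the kind of bloat check those modules allow. The table name is hypothetical, and the interface shown is the 8.4+ contrib one:

```sql
-- pgstattuple: exact (but full-table-scan) dead-tuple and
-- free-space statistics for one relation.
SELECT table_len, dead_tuple_percent, free_percent
FROM pgstattuple('my_table');        -- 'my_table' is an example

-- pg_freespacemap: per-page free space as tracked by the free
-- space map, summarized here into a single average.
SELECT count(*) AS pages, avg(avail) AS avg_free_bytes
FROM pg_freespace('my_table');
```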
On Thu, Sep 22, 2011 at 12:11 AM,
Forgot to mention -
Kevin,
CLUSTER seems to be a very interesting concept to me.
I am thinking of testing CLUSTER on our production tables according to the
index usage on each table.
Will let you know once i get the results.
Regards,
VB
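As a sketch of such a test (table and index names are hypothetical; the `USING` form of the command is the 8.4+ syntax), CLUSTER rewrites the table in index order and holds an exclusive lock for the duration:

```sql
-- Rewrite the table physically ordered by the chosen index.
-- Takes an ACCESS EXCLUSIVE lock, so run in a maintenance window.
CLUSTER my_table USING my_table_created_at_idx;

-- Later runs without an index name reuse the remembered index:
CLUSTER my_table;

-- Note: new and updated rows are not kept in index order, so the
-- benefit decays and the command must be re-run periodically.
```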
On Tue, Sep 27, 2011 at 5:59 PM, Venkat Balaji
Hello Everyone,
I am preparing a plan to track the tables undergoing Full Table Scans the
most number of times.
If I track seq_scan from pg_stat_user_tables, will that help
(considering the latest analyzed ones)?
Please help !
Thanks
VB
Royce Ausburn royce...@inomial.com writes:
Since sending this first email I've up'd the autovacuum log level and I've
noticed that the same tables seem to be auto vacuum'd over and over again…
Some of the tables are a bit surprising in that they're updated
semi-regularly, but not enough
On Tue, Sep 27, 2011 at 7:49 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Royce Ausburn royce...@inomial.com writes:
Since sending this first email I've up'd the autovacuum log level and I've
noticed that the same tables seem to be auto vacuum'd over and over again…
Some of the tables are a bit
On 27/09/2011, at 8:29 PM, Marti Raudsepp wrote:
1. First things first: vacuum cannot delete tuples that are still
visible to any old running transactions. You might have some very long
queries or transactions that prevent it from cleaning properly:
select * from pg_stat_activity where
Venkat Balaji venkat.bal...@verse.in wrote:
I am preparing a plan to track the tables undergoing Full Table
Scans the most number of times.
If I track seq_scan from pg_stat_user_tables, will that help
(considering the latest analyzed ones)?
Well, yeah; but be careful not to assume
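A minimal sketch of such a check against pg_stat_user_tables. Reading seq_tup_read alongside seq_scan helps separate harmless scans of tiny tables from genuinely expensive ones:

```sql
SELECT relname,
       seq_scan,            -- number of sequential scans started
       seq_tup_read,        -- live rows fetched by those scans
       idx_scan             -- index scans, for comparison
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC  -- rank by rows actually read, not scan count
LIMIT 20;
```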
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
The thing is, the optimizer doesn't know if your data will be in
cache when you run your query… If you are sure most of your
data is in the cache most of
Thank you, Marti,
Is there any comprehensive survey of (at least most, if not all)
modern features of operating systems, for example I/O scheduling,
extent-based filesystems, etc.?
--
Thanks a lot Kevin !!
Yes. I intended to track full table scans first to ensure that only small
tables or tables with very few pages are (as you said) getting scanned
full.
I am yet to identify slow running queries. Will surely hit back with them in
future.
Thanks
VB
On Tue, Sep 27, 2011 at
Hello Everyone,
I am implementing a PostgreSQL performance monitoring system (to monitor the
below) which would help us understand the database behavior -
1. Big Full Table Scans
2. Table with high IOs (hot tables)
3. Highly used Indexes
4. Tables undergoing high DMLs with index scans = 0 (with
I would like to know the difference between n_tup_upd and n_tup_hot_upd.
Thanks
VB
On Tue, Sep 27, 2011 at 9:56 PM, Venkat Balaji venkat.bal...@verse.in wrote:
Thanks a lot Kevin !!
Yes. I intended to track full table scans first to ensure that only small
tables or tables with very few
Actually, thinking about this a little more, what we really want the planner
to do is to consider the codes one at a time till it finds one that exists.
If we write that out explicitly we get a good plan whether the ids
select many rows or none.
= explain analyze
select 1 from (
select * from
Hi Sam,
The purpose of this (framework generated) code is to find out if there is at
least one row that has one of the selected hts_code_ids. We don't care
about anything that's returned other than whether at least one row exists or
not (rewriting the query with EXISTS generates that same plan).
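A sketch of the two query shapes being discussed (the table and column names are hypothetical, following the thread's hts_code_id example). In the explicit form, each branch can stop at its first match instead of collecting all matching rows:

```sql
-- Original shape: one IN list; for rare values the planner may
-- choose an expensive bitmap plan before LIMIT/EXISTS can help.
SELECT EXISTS (
    SELECT 1 FROM exports
    WHERE hts_code_id IN (101, 102, 103)
);

-- Explicit "one code at a time" shape: each UNION ALL branch is
-- a cheap single-value probe, and LIMIT 1 stops the whole union
-- as soon as any branch finds a row.
SELECT EXISTS (
    SELECT 1 FROM exports WHERE hts_code_id = 101
    UNION ALL
    SELECT 1 FROM exports WHERE hts_code_id = 102
    UNION ALL
    SELECT 1 FROM exports WHERE hts_code_id = 103
    LIMIT 1
);
```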
Venkat Balaji venkat.bal...@verse.in wrote:
I would like to know the difference between n_tup_upd and
n_tup_hot_upd.
A HOT update is used when none of the updated columns are used in an
index and there is room for the new tuple (version of the row) on
the same page as the old tuple. This is
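That distinction can be watched directly in pg_stat_user_tables; a hedged sketch follows (view and columns are standard, the interpretation in the comments is the assumption). A low HOT ratio on a heavily updated table can suggest that indexed columns are being updated, or that pages are too full to hold the new tuple version:

```sql
SELECT relname,
       n_tup_upd,                         -- all row updates
       n_tup_hot_upd,                     -- updates done as HOT
       round(100.0 * n_tup_hot_upd
             / nullif(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE n_tup_upd > 0
ORDER BY n_tup_upd DESC;
```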
On 09/27/2011 12:54 PM, Ben Chobot wrote:
On Sep 26, 2011, at 10:52 AM, Maria L. Wilson wrote:
Our first try to solve this problem has been to convert these triggers
into a constraint trigger which allows for DEFERRABLE INITIALLY
DEFERRED flags. This, we are finding, is forcing the trigger
On 09/28/2011 12:26 AM, Venkat Balaji wrote:
Thanks a lot Kevin !!
Yes. I intended to track full table scans first to ensure that only
small tables or tables with very few pages are (as you said) getting
scanned full.
It can also be best to do a full table scan of a big table for some
On Tuesday 27 September 2011 19:22, Mark Kirkwood wrote:
The query that I've shown is one of a sequence of queries with the
timestamp range progressing in steps of 1 hour through the timestamp
range. All I want PG to do is find the range in the index, find the
matching records in the table