On Fri, Oct 9, 2009 at 11:11 PM, Tom Lane wrote:
> Scott Carey writes:
> > I've got 200,000 tables in one db (8.4), and some tools barely work. The
> > system catalogs get inefficient when large and psql especially has
> trouble.
> > Tab completion takes forever, even if I make a schema "s" wit
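One workaround (a sketch, not taken from the thread): keep the session's working tables in a single schema and narrow search_path, so the catalog queries behind psql's tab completion have fewer visible relations to consider. The schema name here is illustrative.

```sql
-- Illustrative: confine the session to one small schema so psql's
-- tab-completion catalog lookups consider fewer visible relations.
CREATE SCHEMA s;
SET search_path = s;

-- Rough gauge of catalog size, which is what completion queries scan:
SELECT count(*) FROM pg_class;
```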
On Wed, Aug 13, 2008 at 10:59 AM, Decibel! <[EMAIL PROTECTED]> wrote:
> On Aug 12, 2008, at 4:59 PM, Chris Kratz wrote:
>
>> Ran into a recurring performance problem with some report queries again
>> today. In a nutshell, we have filters on either multiple joined tables
Hello All,
Ran into a recurring performance problem with some report queries again
today. In a nutshell, we have filters on either multiple joined tables, or
multiple columns on a single table that are highly correlated. So, the
estimates come out grossly incorrect (the planner has no way to k
s.
-Chris
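The independence assumption behind this mis-estimate is exactly what extended statistics later addressed. A sketch for readers on PostgreSQL 10 or newer (not an option on the 8.x servers in this thread); the table and column names are hypothetical:

```sql
-- PostgreSQL 10+ only: teach the planner that two filter columns are
-- correlated instead of assuming independence. Names are hypothetical.
CREATE STATISTICS report_filter_stats (dependencies)
    ON region_id, office_id FROM accounts;
ANALYZE accounts;  -- populates the new statistics object
```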
On 3/18/08, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
>
> On Tue, 18 Mar 2008 11:35:08 -0400
> "Chris Kratz" <[EMAIL PROTECTED]> wrote:
>
> > Nondefault settings o
A number of weeks ago, I had posted a request for help regarding join
estimates in pg 8.2.6. In moderately complex to very complex ad hoc queries
in our system, we were consistently having the system massively
underestimate the number of rows coming out of join at a low level making
these queries
On 3/4/08, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Kevin Grittner" <[EMAIL PROTECTED]> writes:
> > On Tue, Mar 4, 2008 at 8:42 AM, in message
> > <[EMAIL PROTECTED]>, Chris Kratz
> > <[EMAIL PROTECTED]> wrote:
> >> So, I'
On 3/4/08, Kevin Grittner <[EMAIL PROTECTED]> wrote:
>
> >>> On Tue, Mar 4, 2008 at 8:42 AM, in message
> > Any other thoughts or suggestions?
>
>
> Make sure your effective_cache_size is properly configured.
>
> Increase random_page_cost and/or decrease seq_page_cost.
> You can play with the cost
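These knobs are ordinary settings and can be tried per session before touching postgresql.conf. The values below are illustrative starting points, not recommendations:

```sql
-- Per-session experiment with the planner cost settings named above.
SET effective_cache_size = '4GB';  -- approximate RAM the OS can use for caching
SET random_page_cost = 2.0;        -- relative cost of a random page fetch
SET seq_page_cost = 1.0;           -- baseline sequential page cost
-- Then re-run EXPLAIN ANALYZE on the problem query and compare plans.
```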
Hello Everyone,
I had posted an issue previously that we've been unable to resolve.
An early mis-estimation in one or more subqueries causes the remainder
of the query to choose nested loops instead of a more efficient method
and runs very slowly (CPU Bound). I don't think there is any wa
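A common diagnostic for this pattern (a sketch, not something prescribed in the thread) is to forbid nested loops for a single transaction and see whether a hash or merge join wins:

```sql
-- If the query speeds up with nested loops disabled, the planner was
-- choosing them off a bad row estimate.
BEGIN;
SET LOCAL enable_nestloop = off;  -- reverts automatically at COMMIT/ROLLBACK
-- EXPLAIN ANALYZE <the slow query here>;
ROLLBACK;
```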
On 2/18/08, Chris Kratz <[EMAIL PROTECTED]> wrote:
>
> On 2/11/08, Tom Lane <[EMAIL PROTECTED]> wrote:
> >
> > Chris Kratz <[EMAIL PROTECTED]> writes:
> > > The first frustration is that I can't get the transaction details scan
> > > to
On 2/11/08, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > The first frustration is that I can't get the transaction details scan
> > to get any more accurate. It thinks it will find 1407 records,
> > instead it f
On 2/11/08, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual
> > time=2.932..27.772 rows=20153 loops=1)
> > -> Hash Join (cost=10.89..22.58 rows
On 2/11/08, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > -> Nested Loop (cost=42.74..161.76 rows=1 width=38) (actual
> > time=2.932..27.772 rows=20153 loops=1)
> > -> Hash Join (cost=10.89..22.58 rows
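When a join is estimated at 1 row but produces 20,153, one low-risk experiment is raising the statistics target on the join column so ANALYZE builds a finer histogram. The table and column names below are hypothetical stand-ins:

```sql
-- Hypothetical names; the point is the per-column statistics target.
ALTER TABLE transaction_details
    ALTER COLUMN account_id SET STATISTICS 1000;
ANALYZE transaction_details;  -- rebuild statistics at the new target
```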
Hello,
I've been wrestling with a complex query for another developer for a
while today. The problem consistently seems to be a mis-estimation of
the number of rows resulting from a join. This causes the query early
on to think it's only going to be processing 1 row and so it chooses
nested l
On Wednesday 17 October 2007 20:23, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > On Wednesday 17 October 2007 14:49, Tom Lane wrote:
> >> Evidently it's not realizing that every row of par will have a join
> >> partner, but why not? I suppo
On Wednesday 17 October 2007 14:49, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > I'm struggling to get postgres to run a particular query quickly.
>
> The key problem seems to be the join size misestimate here:
> >-> Hash Join
Hello Everyone,
I'm struggling to get postgres to run a particular query quickly. It seems
that very early on, the planner seems to mis-estimate the number of rows
returned by a join which causes it to assume that there is only 1 row as it
goes up the tree. It then picks a nested loop join wh
the awkwardness in the calculation. That calculation gets used for a lot
of different things, including column definitions when people want to
see the column on screen.
Thanks,
-Chris
On Wednesday 14 September 2005 05:13 am, Richard Huxton wrote:
> Chris Kratz wrote:
> > Hello All,
> >
>
CASE WHEN (role_id IS NULL) THEN NULL::text WHEN (("begin" IS NOT
NULL) AND ("end" IS NOT NULL)) THEN CASE WHEN (('2005-09-13
16:43:18.721214'::timestamp without time zone >= "begin") AND ('2005-09-13
16:43:18.721214'::timestamp without time zon
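The truncated expression appears to test whether a reference timestamp falls between "begin" and "end". Assuming that reading, an equivalent form using BETWEEN is easier to follow; the table name and result labels are hypothetical:

```sql
-- Sketch of what the truncated CASE appears to compute; the semantics
-- of role_id, "begin", and "end" are assumed from context.
SELECT CASE
         WHEN role_id IS NULL THEN NULL::text
         WHEN "begin" IS NOT NULL AND "end" IS NOT NULL
              AND localtimestamp BETWEEN "begin" AND "end" THEN 'current'
         ELSE 'not current'
       END AS status
FROM role_assignment;
```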
On Thursday 10 February 2005 03:09 pm, Darcy Buskermolen wrote:
> On February 10, 2005 10:58 am, Tom Lane wrote:
> > Chris Kratz <[EMAIL PROTECTED]> writes:
> > > Does anyone have any idea why there would be over a 4s difference between
> > > running the statement d
On Thursday 10 February 2005 01:58 pm, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > Does anyone have any idea why there would be over a 4s difference between
> > running the statement directly and using explain analyze?
> >
> > Aggregate (cost=9848.12
Does anyone have any idea why there would be over a 4s difference between running
the statement directly and using explain analyze? Multiple runs give the
same result and I've tested on several servers.
db=# \timing
Timing is on.
db=# select count(*) from answer;
 count
--------
 530576
(1 row)
Time
On Wednesday 09 February 2005 05:08 pm, Merlin Moncure wrote:
> > Hello All,
> >
> > In contrast to what we hear from most others on this list, we find our
> > database servers are mostly CPU bound. We are wondering if this is
> > because
> > we have postgres configured incorrectly in some way, or
On Wednesday 09 February 2005 03:27 pm, you wrote:
---snip---
> > We continue to tune our individual queries where we can, but it seems we
> > still are waiting on the db a lot in our app. When we run most queries,
> > top shows the postmaster running at 90%+ constantly during the duration
> > of
On Wednesday 09 February 2005 03:59 pm, Greg Stark wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > We continue to tune our individual queries where we can, but it seems we
> > still are waiting on the db a lot in our app. When we run most queries,
> > top shows t
ops=1)
Total runtime: 4916.942 ms
(3 rows)
Now here is a very curious thing. If I turn on timing and run the count
without explain analyze, I get...
 count
--------
 514729
(1 row)
Time: 441.539 ms
How odd. Running the explain adds 4.5s to it. Running the explain again goes
back to almost 5
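The gap is instrumentation overhead: EXPLAIN ANALYZE reads the system clock for every row at every plan node, which on some platforms costs microseconds per call. PostgreSQL 9.2 and later (not the versions in this thread) can report actual row counts without the per-row timing:

```sql
-- 9.2+ only: keep actual row counts, skip the expensive per-row clock reads.
EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM answer;
```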
Hello All,
In contrast to what we hear from most others on this list, we find our
database servers are mostly CPU bound. We are wondering if this is because
we have postgres configured incorrectly in some way, or if we really need
more powerful processor(s) to gain more performance from postg
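On a CPU-bound box, a first step is simply seeing what the busy backends are running. A sketch against pg_stat_activity; note the column names changed over time (procpid/current_query on 8.x-era servers, pid/query from 9.2 on):

```sql
-- 8.x-era column names; use pid and query on PostgreSQL 9.2+.
SELECT procpid, current_query
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%';
```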
On Friday 16 April 2004 5:12 pm, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > ... Or if worse comes to worse to actually kill long running
> > processes without taking down the whole db as we have had to do on
> > occasion.
>
> A quick "kill
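Rather than a shell-level kill, a single backend can be cancelled from SQL; the pid below is a placeholder taken from pg_stat_activity:

```sql
SELECT pg_cancel_backend(12345);      -- cancels that backend's current query
-- PostgreSQL 8.3+ can also close the connection entirely:
-- SELECT pg_terminate_backend(12345);
```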
's because some of the users have mice that give double-clicks
> even when they only want one click.
Hmmm, never thought of doing that. Might be interesting to do something like
that in a few key places where we have problems.
> --
> Mike Nolan
--
Chris Kratz
Systems Analyst/Program
seconds.
>
> I've seen this behavior on both my development system and on the
> production server.
>
> The same query a while later might respond quickly again.
>
> I'm not sure where to look for the delay, either, and it is intermittent
> enough that I'm not e
r, or are there other things we should be
looking at hardware wise. Thank you for your time.
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com
the triggers. Time spent in triggers is not shown in the pg 7.3.4 version of
explain (nor would I necessarily expect it to).
Thanks for your time, expertise and responses.
-Chris
On Tuesday 09 March 2004 7:18 pm, Stephan Szabo wrote:
> On Wed, 3 Mar 2004, Chris Kratz wrote:
> > Which certa
of the data, we know that the
inherited tables don't need to be inspected. The table structure has worked
quite well up till now and we are hoping to not have to drop our foreign keys
and inheritance if possible. Any ideas?
Thanks for your time,
-Chris
--
Chris Kratz
Systems Analyst/P