Tom Lane wrote:
That guy doesn't actually have the foggiest idea what he's doing.
The reason there is no built-in capability to do that is that it *does
not work well*. Search the list archives for "priority inversion" to
find out why not.
I agree that that particular author seems clueless, but ...
On Tue, 2006-06-06 at 16:11, Tom Lane wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > The cost of not aborting on the first sort is that you may never see
> > what the part of the plan is that's killing your query, since you never
> > get the actual plan.
>
> Well, you can get the plan without waiting a long time; that's what plain EXPLAIN is for ...
Scott Marlowe <[EMAIL PROTECTED]> writes:
> The cost of not aborting on the first sort is that you may never see
> what the part of the plan is that's killing your query, since you never
> get the actual plan.
Well, you can get the plan without waiting a long time; that's what
plain EXPLAIN is for ...
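For reference, a minimal illustration of the distinction Tom is drawing (the table and column names below are placeholders, not taken from the thread):

-- Plain EXPLAIN only plans the query; it returns immediately no matter how
-- long the query itself would take to run.
EXPLAIN
SELECT *
FROM big_table b
JOIN other_table o ON o.id = b.other_id
WHERE b.created_on > '2006-01-01';

-- EXPLAIN ANALYZE actually executes the query and reports measured times
-- and row counts, so it can take as long as the query itself does.
EXPLAIN ANALYZE
SELECT *
FROM big_table b
JOIN other_table o ON o.id = b.other_id
WHERE b.created_on > '2006-01-01';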
On Tue, 2006-06-06 at 15:51, Jim C. Nasby wrote:
> On Tue, Jun 06, 2006 at 12:54:27PM -0500, Scott Marlowe wrote:
> > On Tue, 2006-06-06 at 12:50, Craig A. James wrote:
> > > Tom Lane wrote:
> > > >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to
> > > >>completion? In severe cases like this thread, we might be able to ...
On Tue, Jun 06, 2006 at 12:54:27PM -0500, Scott Marlowe wrote:
> On Tue, 2006-06-06 at 12:50, Craig A. James wrote:
> > Tom Lane wrote:
> > >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to
> > >>completion? In severe cases like this thread, we might be able to
> > >>discover the root cause by a *partial* execution of the plan ...
Joshua D. Drake wrote:
- in our env, clients occasionally hit max_connections. This is a
known and (sort of) desired pushback on load. However, that sometimes
knocks pg_autovacuum out.
That is when you use:
superuser_reserved_connections
Blush. Good point. Though, when we hit max_connections ...
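A quick sketch of how the reserved slots interact with max_connections (the numbers are examples only; superuser_reserved_connections is normally set in postgresql.conf):

-- Check the current values from any session:
SHOW max_connections;
SHOW superuser_reserved_connections;

-- With, say, max_connections = 100 and superuser_reserved_connections = 2,
-- ordinary clients are refused once 98 connections are in use, leaving two
-- slots that only superuser sessions (such as a superuser-owned
-- pg_autovacuum daemon) can still claim.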
Mischa Sandberg wrote:
Andrew Sullivan wrote:
On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:
Hi,
We just don't seem to be getting much benefit from autovacuum. Running
a manual vacuum seems to still be doing a LOT, which suggests to me
that I should either run a cron job and disable autovacuum, or just run a ...
Andrew Sullivan wrote:
On Thu, Jun 01, 2006 at 01:54:08PM +0200, Antoine wrote:
Hi,
We just don't seem to be getting much benefit from autovacuum. Running
a manual vacuum seems to still be doing a LOT, which suggests to me
that I should either run a cron job and disable autovacuum, or just
run a ...
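For anyone weighing the cron-job alternative Antoine describes, a sketch of the manual route (the table name and schedule are placeholders):

-- From psql, a manual pass over one busy table:
VACUUM ANALYZE some_busy_table;

-- Or the whole cluster from cron, e.g. nightly at 03:00:
--   0 3 * * *   vacuumdb --all --analyze

-- On 8.1+, confirm whether the integrated autovacuum daemon is enabled:
SHOW autovacuum;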
On Tue, 2006-06-06 at 12:50, Craig A. James wrote:
> Tom Lane wrote:
> >>The idea I just had was: why do we need EXPLAIN ANALYZE to run to
> >>completion? In severe cases like this thread, we might be able to
> >>discover the root cause by a *partial* execution of the plan, as long as
> >>it was properly instrumented. That way, the OP might have been able to ...
Tom Lane wrote:
The idea I just had was: why do we need EXPLAIN ANALYZE to run to
completion? In severe cases like this thread, we might be able to
discover the root cause by a *partial* execution of the plan, as long as
it was properly instrumented. That way, the OP might have been able to
discover the root cause himself ...
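Nothing like a partial EXPLAIN ANALYZE exists yet; the closest manual approximation (a sketch only, with placeholder names, and with the caveat that LIMIT or a narrower filter can change the very plan you are trying to diagnose) is to instrument a cut-down version of the query:

-- Instrument a smaller slice of the problem query and compare its estimated
-- vs. actual row counts against the plain EXPLAIN of the full query.
EXPLAIN ANALYZE
SELECT *
FROM big_table b
JOIN other_table o ON o.id = b.other_id
WHERE b.created_on > '2006-06-01'   -- narrower range than the real query
LIMIT 1000;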
Simon Riggs wrote:
Well, it's a big query. If it ought to take a second or two, and
instead is taking an hour or two (1800 times the expected runtime), that
might be close enough to "never" to exhaust Chris' patience. Besides,
we don't know whether the 1800 might itself be an underestimate (too ...)
On Tue, Jun 06, 2006 at 11:37:46AM -0400, Greg Stark wrote:
> An alternate approach would be to implement a SIGINFO handler that
> prints out the explain analyze output for the data built up so far.
> You would be able to keep hitting C-t and keep getting updates
> until the query completes or you ...
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:
>> I don't think that helps, as it just replaces one uncertainty by
>> another: how far did the EXPLAIN really get towards completion of the
>> plan? You still don't have any hard data.
> Does t ...
Simon Riggs <[EMAIL PROTECTED]> writes:
> The idea I just had was: why do we need EXPLAIN ANALYZE to run to
> completion? In severe cases like this thread, we might be able to
> discover the root cause by a *partial* execution of the plan, as long as
> it was properly instrumented. That way, the OP might have been able to ...
Hmmm... It could generate NOTICEs whenever there is a drastic difference in
rowcount or actual time...
On Tue, June 6, 2006 11:29 am, Andrew Sullivan wrote:
> On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:
>
>>> it was properly instrumented. That way, the OP might have been able
>>> to discover the root cause himself ...
On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:
> > it was properly instrumented. That way, the OP might have been able to
> > discover the root cause himself...
>
> I don't think that helps, as it just replaces one uncertainty by
> another: how far did the EXPLAIN really get towards completion of the plan? You still don't have any hard data.
Explain analyze could at least put an asterisk around actual time that
deviated by some factor from the estimated time.
On Tue, June 6, 2006 10:39 am, Simon Riggs wrote:
>
> This is a good example of a case where the inefficiency of EXPLAIN
> ANALYZE would be a contributory factor to it not actually ...
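What either proposal (a NOTICE or an asterisk) would key on is the gap between the planner's estimate and reality in the EXPLAIN ANALYZE output. A hypothetical example of the kind of line to look for (table names and numbers are made up):

EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- Illustrative plan line:
--   Hash Join  (cost=... rows=120 width=...) (actual time=... rows=215000 loops=1)
-- An estimate of 120 rows against an actual 215000 is exactly the sort of
-- deviation that would deserve a warning or an asterisk.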
On Tue, Jun 06, 2006 at 11:06:09AM -0400, Tom Lane wrote:
> > The idea I just had was: why do we need EXPLAIN ANALYZE to run to
> > completion? In severe cases like this thread, we might be able to
> > discover the root cause by a *partial* execution of the plan, as long as
> > it was properly instrumented. That way, the OP might have been able to ...
Simon Riggs <[EMAIL PROTECTED]> writes:
> You mentioned it would be good if the OP had delivered an EXPLAIN
> ANALYZE; I agree(d). The lack of EXPLAIN ANALYZE is frequently because
> you can't get them to run to completion - more so when the query you
> wish to analyze doesn't appear to complete either ...
On Mon, Jun 05, 2006 at 01:39:38PM -0700, Chris Beecroft wrote:
> Thanks Tom,
>
> I knew you would come through again!
>
> Query is now returning with results on our replicated database. Will
> vacuum analyze production now. So it seems to have done the trick. Now
> the question is has our autovacuum ...
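One way to answer that kind of question, sketched here with the caveat that the last_autovacuum / last_autoanalyze columns only appear in releases newer than the servers discussed in this thread:

-- Shows when each table was last vacuumed/analyzed, manually or by autovacuum:
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;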
On Tue, 2006-06-06 at 10:43 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:
> >> Well, it's a big query. If it ought to take a second or two, and
> >> instead is taking an hour or two (1800 times the expected runtime), that
> >> might be close enough to "never" to exhaust Chris' patience ...
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:
>> Well, it's a big query. If it ought to take a second or two, and
>> instead is taking an hour or two (1800 times the expected runtime), that
>> might be close enough to "never" to exhaust Chris' patience ...
On Mon, 2006-06-05 at 17:06 -0400, Tom Lane wrote:
> Andrew Sullivan <[EMAIL PROTECTED]> writes:
> > On Mon, Jun 05, 2006 at 04:07:19PM -0400, Tom Lane wrote:
> >> I'm wondering about out-of-date or nonexistent ANALYZE stats, missing
> >> custom adjustments of statistics target settings, etc.
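For anyone wanting to check the things Tom lists (whether ANALYZE stats exist at all, and per-column statistics targets), a sketch with placeholder names:

-- No rows here for a table usually means it has never been ANALYZEd:
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'some_table';

-- Raise the statistics target for a skewed column, then re-analyze:
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;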
"Andrus" <[EMAIL PROTECTED]> writes:
>> 1) why are you using CHAR and not VARCHAR or TEXT? CHAR will give you
>> problems using an index, period.
> 1. I haven't seen any example where VARCHAR is better than CHAR for indexing
The advice you were given is good, even if the explanation is bad.
CHAR ...
>> SELECT toode, nimetus
>> FROM toode
>> WHERE toode ILIKE 'x10%' ESCAPE '!'
>> ORDER BY UPPER(toode ),nimetus LIMIT 100
>>
>> takes about 1 minute on the first run, even though the table is small.
>>
>> Toode field type is CHAR(20)
>
> 1) why are you using CHAR and not VARCHAR or TEXT? CHAR will give you problems using an index, period.
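A sketch of the fix being hinted at, reusing the names from Andrus' query but with an assumed schema: switch the column to varchar, index the upper-cased value with the pattern opclass, and write the prefix match so the planner can use it (the rewritten query drops the ESCAPE clause, which the 'x10%' pattern does not need):

-- Assumes a table toode(toode char(20), nimetus ...) as implied above.
ALTER TABLE toode ALTER COLUMN toode TYPE varchar(20);

-- upper(varchar) yields text, so use the text pattern opclass:
CREATE INDEX toode_upper_prefix_idx
    ON toode (upper(toode) text_pattern_ops);

-- Anchored prefix match that can use the index:
SELECT toode, nimetus
FROM toode
WHERE upper(toode) LIKE 'X10%'
ORDER BY upper(toode), nimetus
LIMIT 100;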