On Tue, Oct 17, 2017 at 2:29 AM, Thomas Munro
wrote:
> Vik Fearing asked off-list why hash joins appear to read slightly more
> temporary data than they write. The reason is that we notch up a
> phantom block read when we hit the end of each file. Harmless but it
> looks a bit weird and it's eas
Hi hackers,
Vik Fearing asked off-list why hash joins appear to read slightly more
temporary data than they write. The reason is that we notch up a
phantom block read when we hit the end of each file. Harmless but it
looks a bit weird and it's easily fixed.
Unpatched, a 16 batch hash join repor
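For readers skimming the archive, the off-by-one can be sketched in a few lines of Python. This is an illustrative model only, not the actual buffile.c change; the block size and helper name are made up. The naive loop bumps the read counter before noticing EOF, booking one "phantom" block read per file, while the fixed counter only increments when the read actually returned data.

```python
import io

BLOCK_SIZE = 8192  # assumed block size, for illustration only

def copy_and_count(data: bytes):
    """Write data out in blocks, then read it back, counting block reads
    two ways: naively (count before checking for EOF) and fixed (count
    only reads that actually returned data)."""
    buf = io.BytesIO()
    writes = 0
    for off in range(0, len(data), BLOCK_SIZE):
        buf.write(data[off:off + BLOCK_SIZE])
        writes += 1

    buf.seek(0)
    naive_reads = fixed_reads = 0
    while True:
        chunk = buf.read(BLOCK_SIZE)
        naive_reads += 1           # phantom: the final empty read at EOF is counted
        if not chunk:
            break
        fixed_reads += 1           # fixed: only count reads that returned data
    return writes, naive_reads, fixed_reads

writes, naive, fixed = copy_and_count(b"x" * (BLOCK_SIZE * 16))
# with 16 blocks: naive == writes + 1, fixed == writes
```

With one such counter per batch file, the naive scheme reports one extra block read per file, which matches the "reads slightly more temporary data than it writes" symptom.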
On Fri, Jul 8, 2016 at 12:55 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jul 7, 2016 at 10:07 AM, Tom Lane wrote:
>>> Presumably the instrumentation data needed for that is not getting
>>> returned from the worker to the leader.
>
>> Yes.
>
>> ...
>> I'm not sure about the rest of you,
Robert Haas writes:
> On Thu, Jul 7, 2016 at 10:07 AM, Tom Lane wrote:
>> Presumably the instrumentation data needed for that is not getting
>> returned from the worker to the leader.
> Yes.
> ...
> I'm not sure about the rest of you, but I'd kind of like to finish
> this release and start work
On Thu, Jul 7, 2016 at 10:07 AM, Tom Lane wrote:
> Amit Kapila writes:
>> On Thu, Jul 7, 2016 at 1:23 PM, Fujii Masao wrote:
>>> I found $SUBJECT while trying to test parallel queries. Is this a bug?
>
> Presumably the instrumentation data needed for that is not getting
> returned from the worke
On Thu, Jul 7, 2016 at 7:37 PM, Tom Lane wrote:
> Amit Kapila writes:
>> On Thu, Jul 7, 2016 at 1:23 PM, Fujii Masao wrote:
>>> I found $SUBJECT while trying to test parallel queries. Is this a bug?
>
> Presumably the instrumentation data needed for that is not getting
> returned from the worker
Amit Kapila writes:
> On Thu, Jul 7, 2016 at 1:23 PM, Fujii Masao wrote:
>> I found $SUBJECT while trying to test parallel queries. Is this a bug?
Presumably the instrumentation data needed for that is not getting
returned from the worker to the leader. I would bet there's a lot
of other plan-n
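The shape of the problem: per-node counters accumulated in a worker are useless unless the leader folds them into its own totals before EXPLAIN prints. A minimal Python sketch of that folding step, purely for illustration (the field and function names here are invented, not PostgreSQL's Instrumentation API):

```python
from dataclasses import dataclass

@dataclass
class NodeInstr:
    """Per-plan-node counters, as each worker would accumulate them."""
    ntuples: float = 0.0
    nloops: float = 0.0
    total_time: float = 0.0

def fold_into_leader(leader, workers):
    """Fold each worker's counters into the leader's; without this step,
    EXPLAIN ANALYZE only reports the leader's share of the work."""
    for w in workers:
        leader.ntuples += w.ntuples
        leader.nloops += w.nloops
        leader.total_time += w.total_time
    return leader

leader = NodeInstr(ntuples=100, nloops=1, total_time=0.5)
workers = [NodeInstr(ntuples=300, nloops=1, total_time=1.2),
           NodeInstr(ntuples=250, nloops=1, total_time=1.1)]
total = fold_into_leader(leader, workers)
# total.ntuples is 650: the full row count, not just the leader's 100
```

Note that node-specific details like the Sort Method string ($SUBJECT here) are not simple additive counters, so they need their own channel back from the worker, which is why they can go missing entirely rather than merely being undercounted.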
On Thu, Jul 7, 2016 at 1:23 PM, Fujii Masao wrote:
> Hi,
>
> I found $SUBJECT while trying to test parallel queries. Is this a bug?
>
>
> In non-parallel mode, EXPLAIN ANALYZE reports information about
> Sort Method as follows.
>
> =# EXPLAIN ANALYZE SELECT * FROM pgbench_accounts ORDER BY bid
Hi,
I found $SUBJECT while trying to test parallel queries. Is this a bug?
In non-parallel mode, EXPLAIN ANALYZE reports information about
Sort Method as follows.
=# EXPLAIN ANALYZE SELECT * FROM pgbench_accounts ORDER BY bid;
QUER
chang chao writes:
> The actual rows(rows=9950) part in the following line contained in the above
> query plan seems strange.
> " -> Sort (cost=10.64..11.14 rows=200 width=520) (actual time=0.045..0.561
> rows=9950 loops=1)"
> Shouldn't it be 200?
No, that's probably correct, seeing that thi
Hi all
The test data and sql is in the following mail.
http://www.postgresql.org/message-id/sg2pr06mb114954351fa5dae7566854a984...@sg2pr06mb1149.apcprd06.prod.outlook.com
After I disabled hash join and nested-loop join (set enable_hashjoin = off; set
enable_nestloop = off;) and executed the query
On Tue, Mar 15, 2016 at 5:22 AM, Robert Haas wrote:
>
> On Sat, Mar 12, 2016 at 1:58 AM, Amit Kapila
wrote:
> > Yeah, that makes the addition of a test for this functionality difficult.
> > Robert, do you have any idea what kind of test would have caught this
> > issue?
>
> Yep. Committed with that te
On Sat, Mar 12, 2016 at 1:58 AM, Amit Kapila wrote:
> Yeah, that makes the addition of a test for this functionality difficult.
> Robert, do you have any idea what kind of test would have caught this issue?
Yep. Committed with that test:
DO $$
BEGIN
EXECUTE 'EXPLAIN ANALYZE SELECT * INTO T
On Sat, Mar 12, 2016 at 7:11 PM, Mithun Cy
wrote:
>
>
>
> On Sat, Mar 12, 2016 at 2:32 PM, Amit Kapila
wrote:
> >With force_parallel_mode=on, I could see many other failures as well. I
> >think it is better to have a test which tests this functionality with
> >force_parallel_mode=regress
>
> as per us
Sorry, there was some issue with my mail settings; the same mail got sent more
than once.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com
On Sat, Mar 12, 2016 at 2:32 PM, Amit Kapila
wrote:
>With force_parallel_mode=on, I could see many other failures as well. I
>think it is better to have a test which tests this functionality with
>force_parallel_mode=regress
As per the user manual:
Setting this value to regress has all of the same eff
On Sat, Mar 12, 2016 at 2:02 PM, Mithun Cy
wrote:
>
>
>
> On Sat, Mar 12, 2016 at 12:28 PM, Amit Kapila
wrote
> >I don't see how this test will fail with force_parallel_mode=regress and
> >max_parallel_degree > 0 even without the patch proposed to fix the issue in
> >hand. In short, I don't think thi
On Sat, Mar 12, 2016 at 12:28 PM, Amit Kapila
wrote
>I don't see how this test will fail with force_parallel_mode=regress and
>max_parallel_degree > 0 even without the patch proposed to fix the issue in
>hand. In short, I don't think this test would have caught the issue, so I
>don't see much advan
On Fri, Mar 11, 2016 at 3:34 PM, Mithun Cy
wrote:
>
> On Thu, Mar 10, 2016 at 9:39 PM, Robert Haas
wrote:
> >I guess there must not be an occurrence of this pattern in the
> >regression tests, or previous force_parallel_mode testing would have
> >found this problem. Perhaps this patch should add
On Thu, Mar 10, 2016 at 9:39 PM, Robert Haas wrote:
>I guess there must not be an occurrence of this pattern in the
>regression tests, or previous force_parallel_mode testing would have
>found this problem. Perhaps this patch should add one?
I have added the test to select_into.sql. Added Explai
On Thu, Mar 10, 2016 at 4:43 AM, Amit Kapila wrote:
> There should be a white space between 0:CURSOR_OPT_PARALLEL_OK. Also I
> don't see this comment is required as similar other usage doesn't have any
> such comment. Fixed these two points in the attached patch.
>
> In general, the patch looks
On Wed, Mar 9, 2016 at 8:18 PM, Mithun Cy
wrote:
>
> Hi All,
>
> Explain [Analyze] Select Into table produces a plan which uses
> parallel scans.
>
> Possible Fix:
>
> I tried to make a patch to fix this. Now in ExplainOneQuery, if the into
> clause is defined, then parallel plans are disabled as
Hi All,
Explain [Analyze] Select Into table produces a plan which uses
parallel scans.
*Test:*
create table table1 (n int);
insert into table1 values (generate_series(1,500));
analyze table1;
set parallel_tuple_cost=0;
set max_parallel_degree=3;
postgres=# explain select into table
On 14/11/14 14:54, Tom Lane wrote:
> Jeremy Harris writes:
>> On 14/11/14 00:46, Simon Riggs wrote:
>>> Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
>>> -> Sort (cost= rows=568733 width=175) (actual time=
>>> rows=20 loops=1)
>>> Sort Method: top-N heapsort
>
Jeremy Harris writes:
> On 14/11/14 00:46, Simon Riggs wrote:
>> Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
>> -> Sort (cost= rows=568733 width=175) (actual time=
>> rows=20 loops=1)
>> Sort Method: top-N heapsort
> Going off on a tangent, when I was playing
On 14/11/14 00:46, Simon Riggs wrote:
> Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
>-> Sort (cost= rows=568733 width=175) (actual time=
> rows=20 loops=1)
> Sort Method: top-N heapsort
Going off on a tangent, when I was playing with a merge-sort
David G Johnston writes:
> Tom Lane-2 wrote
>> [ shrug... ] The estimated value is the planner's estimate of what would
>> happen *if you ran the node to completion*, which in practice doesn't
>> happen because of the LIMIT.
> I don't see how a sort node cannot run to completion...
The sort m
Tom Lane-2 wrote
> [ shrug... ] The estimated value is the planner's estimate of what would
> happen *if you ran the node to completion*, which in practice doesn't
> happen because of the LIMIT.
I don't see how a sort node cannot run to completion...raising the thought
that the "actual" row cou
David G Johnston writes:
> Tom Lane-2 wrote
>> [ shrug... ] The estimated value is the planner's estimate of what would
>> happen *if you ran the node to completion*, which in practice doesn't
>> happen because of the LIMIT. The actual value is, well, the actual value.
>> We certainly should not
Tom Lane-2 wrote
> Simon Riggs writes:
>> Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
>>-> Sort (cost= rows=568733 width=175) (actual time=
>> rows=20 loops=1)
>> Sort Method: top-N heapsort
>
>> The Sort estimate shows 568733 row
Simon Riggs writes:
> Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
>-> Sort (cost= rows=568733 width=175) (actual time=
> rows=20 loops=1)
> Sort Method: top-N heapsort
> The Sort estimate shows 568733 rows, whereas the actual rows are 20.
[ shru
Limit (cost= rows=20 width=175) (actual time= rows=20 loops=1)
-> Sort (cost= rows=568733 width=175) (actual time=
rows=20 loops=1)
Sort Method: top-N heapsort
The Sort estimate shows 568733 rows, whereas the actual rows are 20.
Both are correct, in a way.
The node
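The discrepancy is inherent to how a top-N heapsort works, and it is easy to reproduce outside the database. A hedged Python sketch (illustrative only; PostgreSQL's tuplesort is C and differs in detail): every input row is consumed, but at most N survive the bounded heap, so the estimate describes the input while the actual count describes the output.

```python
import heapq
import random

def top_n_heapsort(rows, n):
    """Top-N heapsort as used under a LIMIT: scan every input row, but keep
    only a bounded heap of the n smallest, so the node returns n rows even
    though it consumed the whole input."""
    scanned = 0
    heap = []                       # max-heap of the n smallest, via negated keys
    for row in rows:
        scanned += 1
        if len(heap) < n:
            heapq.heappush(heap, -row)
        elif -heap[0] > row:        # new row beats the worst of the current top n
            heapq.heapreplace(heap, -row)
    return sorted(-x for x in heap), scanned

random.seed(0)
data = [random.randrange(10_000_000) for _ in range(568_733)]
top20, scanned = top_n_heapsort(data, 20)
# scanned is 568733 (what the estimate describes);
# len(top20) is 20 (what "actual rows" shows)
```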
Kevin Grittner writes:
> Commit 2c92edad48796119c83d7dbe6c33425d1924626d has broken
> EXPLAIN's ANALYZE option in parentheses, which also makes some
> other options unusable.
> test=# EXPLAIN (ANALYZE) SELECT 1;
> ERROR: syntax error at or near "ANALYZE"
> LINE 1: EXPLAIN (ANALYZE) SELECT 1;
>
Commit 2c92edad48796119c83d7dbe6c33425d1924626d has broken
EXPLAIN's ANALYZE option in parentheses, which also makes some
other options unusable.
test=# EXPLAIN (ANALYZE) SELECT 1;
ERROR: syntax error at or near "ANALYZE"
LINE 1: EXPLAIN (ANALYZE) SELECT 1;
                 ^
--
Kevin Grittner
Ente
On 22/11/11 10:26, Kevin Grittner wrote:
Rudyar wrote:
what tool you recommend for measure the query "real" query
execution time?
The -hackers list is for discussion to coordinate development of the
PostgreSQL database product. For user questions like this, please
pick a more appropriate li
Rudyar wrote:
> what tool you recommend for measure the query "real" query
> execution time?
The -hackers list is for discussion to coordinate development of the
PostgreSQL database product. For user questions like this, please
pick a more appropriate list based on the descriptions here:
ht
On 22/11/11 02:58, Kevin Grittner wrote:
Rudyar wrote:
I try to get the execution time of a query workload. I try using
explain analyze but this time is always higher than the execution
time of a query across a client like pgadmin3.
What is the reason for that difference?
It's the "observ
Rudyar wrote:
> I try to get the execution time of a query workload. I try using
> explain analyze but this time is always higher than the execution
> time of a query across a client like pgadmin3.
>
> What is the reason for that difference?
It's the "observer effect" -- there is a cost to
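The overhead is easy to demonstrate. A small Python sketch of the observer effect (illustrative only; PostgreSQL's per-node timing is done in C via InstrStartNode/InstrStopNode, which this merely mimics):

```python
import time

def run(n, instrument):
    """Do a trivial per-row workload n times, optionally wrapping each row
    with two clock reads, the way per-node instrumentation does."""
    acc = 0
    node_time = 0.0
    start = time.perf_counter()
    for i in range(n):
        if instrument:
            t0 = time.perf_counter()                 # like starting the node's timer
        acc += i                                     # stand-in for real per-row work
        if instrument:
            node_time += time.perf_counter() - t0    # like stopping it
    return time.perf_counter() - start, acc

plain, _ = run(1_000_000, instrument=False)
timed, _ = run(1_000_000, instrument=True)
# timed comes out larger than plain: measuring each row adds cost, so
# EXPLAIN ANALYZE runtimes exceed what a plain client sees for the same query
```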
On 11/21/11, Rudyar wrote:
> Hello,
>
> I try to get the execution time of a query workload. I try using explain
> analyze but this time is always higher than
> the execution time of a query across a client like pgadmin3.
>
> What is the reason for that difference?
Analyze has to do a lot of ge
Hello,
I try to get the execution time of a query workload. I try using explain
analyze but this time is always higher than
the execution time of a query across a client like pgadmin3.
What is the reason for that difference?
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.
On Mon, Nov 22, 2010 at 12:40 PM, Magnus Hagander wrote:
> I tried building this under windows, and got a bunch of errors.
Thanks!
--
greg
On Mon, Nov 15, 2010 at 03:33, Greg Stark wrote:
> This is an update to my earlier patch to add getrusage resource
> tracking to EXPLAIN ANALYZE.
>
> With this patch you get something like:
>
> QUERY PLAN
> --
On Tue, Nov 16, 2010 at 2:53 PM, Greg Stark wrote:
>> Yeah, VERBOSE is kind of a catch-all for things that we don't have
>> individual flags for. But I think it's better for each piece of data
>> to depend on one setting, rather than a combination of two or more
>> settings. Otherwise you end up
On Tue, Nov 16, 2010 at 11:38 AM, Robert Haas wrote:
>> I think we should have a project policy of always printing memory and
>> disk usage in kB, MB, GB etc unless they're functions returning an
>> integer intended for machine use.
>
> rhaas=# set work_mem to '1048577kB';
Interesting. Though in
On Tue, Nov 16, 2010 at 12:19 PM, Greg Stark wrote:
> On Tue, Nov 16, 2010 at 2:43 AM, Robert Haas wrote:
>> I don't really think these changes to the INSTR macros make much
>> sense. The macros don't really add much notational convenience;
>> they're mostly wrappers to make the WIN32 and non-WI
On Tue, Nov 16, 2010 at 2:43 AM, Robert Haas wrote:
> I don't really think these changes to the INSTR macros make much
> sense. The macros don't really add much notational convenience;
> they're mostly wrappers to make the WIN32 and non-WIN32 cases work
> similarly for the instrumentation stuff,
On Sun, Nov 14, 2010 at 9:33 PM, Greg Stark wrote:
> This is an update to my earlier patch to add getrusage resource
> tracking to EXPLAIN ANALYZE.
>
> With this patch you get something like:
>
> QUERY PLAN
>
This is an update to my earlier patch to add getrusage resource
tracking to EXPLAIN ANALYZE.
With this patch you get something like:
QUERY PLAN
---
...Robert
On Jun 2, 2009, at 10:38 AM, Tom Lane wrote:
Robert Haas writes:
On Jun 2, 2009, at 9:41 AM, Simon Riggs
wrote:
You're right that the number of significant digits already exceeds the
true accuracy of the computation. I think what Robert wants to see is
the exact value used
Robert Haas writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thor
On Jun 2, 2009, at 9:41 AM, Simon Riggs wrote:
On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:
What I'd find strange about "6.67 rows" in your example is more that on
the estimated rows side, it seems to imply an unrealistically precise estimate
in the same way that "667 rows" would s
On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:
> What I'd find strange about "6.67 rows" in your example is more that on
> the estimated rows side, it seems to imply an unrealistically precise estimate
> in the same way that "667 rows" would seem unrealistically precise to me.
> Maybe roundi
Euler Taveira de Oliveira wrote:
> Robert Haas escreveu:
>> ...EXPLAIN ANALYZE reports the number of rows as an integer... Any
>> chance we could reconsider this decision? I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the neare
On Fri, May 29, 2009 at 1:30 PM, Tom Lane wrote:
> Joshua Tolley writes:
>> On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
>>> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
Don't you think it is too strange having, for example, 6.67 rows?
>>>
>>> No stranger than hav
Joshua Tolley writes:
> On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
>> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
>>> Don't you think it is too strange having, for example, 6.67 rows?
>>
>> No stranger than having it say 7 when it's really not. Actually mine
>> mos
On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
> > Don't you think it is too strange having, for example, 6.67 rows?
>
> No stranger than having it say 7 when it's really not. Actually mine
> mostly come out 1 when the real
On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
wrote:
> Robert Haas escreveu:
>> I have always assumed that there is some very good reason why EXPLAIN
>> ANALYZE reports the number of rows as an integer rather than a
>> floating point value, but in reading explain.c it seems that the
Robert Haas escreveu:
> I have always assumed that there is some very good reason why EXPLAIN
> ANALYZE reports the number of rows as an integer rather than a
> floating point value, but in reading explain.c it seems that the
> reason is just that we decided to round to zero decimal places. Any
>
I have always assumed that there is some very good reason why EXPLAIN
ANALYZE reports the number of rows as an integer rather than a
floating point value, but in reading explain.c it seems that the
reason is just that we decided to round to zero decimal places. Any
chance we could reconsider this
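The value being rounded is the per-loop average ntuples / nloops, as the thread notes. A two-line sketch (the helper name is hypothetical) shows where a value like 6.67 comes from and what the integer display hides:

```python
def reported_rows(ntuples, nloops, decimals=0):
    """EXPLAIN ANALYZE's 'rows' is the per-loop average ntuples / nloops;
    rounding it to zero decimal places is what hides fractions like 6.67."""
    return round(ntuples / nloops, decimals)

# 20 tuples over 3 loops: today's integer display says 7,
# while the underlying average is 6.67
assert reported_rows(20, 3) == 7
assert reported_rows(20, 3, decimals=2) == 6.67
```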
Greg,
Gregory Stark wrote:
I don't think DTrace is overkill either. The programmatic interface is
undocumented (but I've gotten Sun people to admit it exists -- I just have to
reverse engineer it from the existing code samples) but should be more or less
exactly what we need.
You probably k
"Decibel!" <[EMAIL PROTECTED]> writes:
> When a read() call returns, surely the kernel knows whether it actually issued
> a physical read request to satisfy that. I don't see any reason why you
> couldn't have a version of read() that returns that information. I also rather
> doubt that
On Dec 18, 2007, at 3:32 AM, Gregory Stark wrote:
Also, has anyone looked into adding a class of system calls that would
actually tell us if the kernel issued physical IO? I find it hard to believe
that other RDBMSes wouldn't like to have that info...
Yeah, I think that's called "DTrace"
"Decibel!" <[EMAIL PROTECTED]> writes:
> Also, has anyone looked into adding a class of system calls that would
> actually tell us if the kernel issued physical IO? I find it hard to believe
> that other RDBMSes wouldn't like to have that info...
Yeah, I think that's called "DTrace"
--
Greg
On 12/17/07, Decibel! <[EMAIL PROTECTED]> wrote:
> Also, has anyone looked into adding a class of system calls that
> would actually tell us if the kernel issued physical IO? I find it
> hard to believe that other RDBMSes wouldn't like to have that info...
Non-blocking style interfaces can help h
On Mon, 17 Dec 2007, Decibel! wrote:
Someone want to throw together some code that actually measures this? Maybe
something that keeps a histogram of how many instructions take place per I/O
request? If it turns out that counters do vary too much between CPUs, there
might be ways that we can ac
On Dec 14, 2007, at 11:10 PM, Neil Conway wrote:
But it occurred to me just now that the hardware instruction counter available
on just about every platform would be good enough for a heuristic guess at
whether the read(2) was cached.
I'm skeptical that this would be reliable enough to be v
On Dec 16, 2007 1:03 AM, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Gokulakannan Somasundaram wrote:
> > I was going to say that I'm really only interested in physical I/O. Logical
> > I/O which is satisfied by the kernel cache is only marginally interesting and
> > buffer fetches
Gokulakannan Somasundaram wrote:
I was going to say that I'm really only interested in physical I/O. Logical
I/O which is satisfied by the kernel cache is only marginally interesting and
buffer fetches from Postgres's shared buffer is entirely uninteresting from
the point of view of trying to fi
> I was going to say that I'm really only interested in physical I/O. Logical
> I/O which is satisfied by the kernel cache is only marginally interesting and
> buffer fetches from Postgres's shared buffer is entirely uninteresting from
> the point of view of trying to figure out what is slowing d
"Gokulakannan Somasundaram" <[EMAIL PROTECTED]> writes:
> Hi,
> I already raised this in an earlier discussion. We can view the logical I/Os if
> we enable log_statement_stats in the conf file and apply the following
> patch. But putting it in Explain analyze makes more sense to
> m
Hi,
I already made a discussion about it. We can view the Logical I/Os. If
we enable the log_statement_stats in the conf file and apply the following
patch, it is possible. But putting it in Explain analyze makes more sense to
me.
*** postgresql-8.3beta1/src/backend/storage/buffer/bufmgr.c
On Fri, 2007-12-14 at 15:47 +, Gregory Stark wrote:
> I've wanted for a long time to have EXPLAIN ANALYZE output per-node I/O usage.
> This would be especially useful if we could distinguish hardware versus
> logical I/O though. And I always thought that would be very hard.
>
> My thought in t
I've wanted for a long time to have EXPLAIN ANALYZE output per-node I/O usage.
This would be especially useful if we could distinguish hardware versus
logical I/O though. And I always thought that would be very hard.
My thought in the past was that would could do it on Solaris by having
Postgres
Thumbs up on this from a lurker.
I recall a previous post about some sort of "progress bar" hack that
would show you where in a plan a currently executing query was at. Has
any work been done on this?
Josh Reich
Jim C. Nasby wrote:
On Mon, Dec 11, 2006 at 12:24:12AM +0100, Peter Eisentraut
On Mon, Dec 11, 2006 at 12:24:12AM +0100, Peter Eisentraut wrote:
> Simon Riggs wrote:
> > Well, I'd like a way of making EXPLAIN ANALYZE return something
> > useful within a reasonable amount of time. We can define that as the
> > amount of time that the user considers is their goal for the query.
Neil Conway wrote:
> On Tue, 2006-12-12 at 17:30 -0500, Bruce Momjian wrote:
> > * Have EXPLAIN ANALYZE highlight poor optimizer estimates
>
> > TODO updated:
> >
> > * Have EXPLAIN ANALYZE issue NOTICE messages when the estimated and
> > actual row counts differ by a specified per
On Tue, 2006-12-12 at 17:30 -0500, Bruce Momjian wrote:
> * Have EXPLAIN ANALYZE highlight poor optimizer estimates
> TODO updated:
>
> * Have EXPLAIN ANALYZE issue NOTICE messages when the estimated and
> actual row counts differ by a specified percentage
I don't think this
Richard Huxton wrote:
> Simon Riggs wrote:
> > Intermediate results are always better than none at all. I do understand
> > what a partial execution would look like - frequently it is the
> > preparatory stages that slow a query down - costly sorts, underestimated
> > hash joins etc. Other times it
Neil Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Yeah ... a protocol change is *painful*, especially if you really want
>> clients to behave in a significantly new way.
> A backward-incompatible protocol change is painful, sure, but ISTM we
> could implement what Greg describes as a s
Simon Riggs wrote:
I like the idea, but its more work than I really wanted to get into
right now.
Well, from another point of view: do we need this feature so urgently
that there is not enough time to do it properly? IMHO, no.
-Neil
Tom Lane wrote:
Yeah ... a protocol change is *painful*, especially if you really want
clients to behave in a significantly new way.
A backward-incompatible protocol change is painful, sure, but ISTM we
could implement what Greg describes as a straightforward extension to
the V3 protocol. The
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Mon, 2006-12-11 at 11:00 +, Gregory Stark wrote:
>> What I suggested was introducing a new FE/BE message type for analyze query
>> plans.
> I like the idea, but its more work than I really wanted to get into
> right now.
Yeah ... a protocol chang
On Mon, 2006-12-11 at 11:00 +, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
> > We might be able to finesse the protocol problem by teaching EA to
> > respond to query cancel by emitting the data-so-far as a NOTICE (like it
> > used to do many moons ago), rather than a stand
Simon Riggs wrote:
Intermediate results are always better than none at all. I do understand
what a partial execution would look like - frequently it is the
preparatory stages that slow a query down - costly sorts, underestimated
hash joins etc. Other times it is loop underestimation, which can
us
"Tom Lane" <[EMAIL PROTECTED]> writes:
> We might be able to finesse the protocol problem by teaching EA to
> respond to query cancel by emitting the data-so-far as a NOTICE (like it
> used to do many moons ago), rather than a standard query result, then
> allowing the query to error out. Howeve
On Sun, 2006-12-10 at 18:09 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > The EA case is pretty straightforward though;
>
> Well, no its not, as you'll recall if you re-read the prior discussions.
> The killer problem is that it's unclear whether the early termination of
>
See your postgresql.conf for
log_planner_stats = true #false
log_executor_stats = true #false
#log_statement_stats = false
Oleg
On Fri, 11 Mar 2005, Ioannis Theoharis wrote:
Hi, I found this form of output of explain analyze while looking at some old
mails in the lists.
test4=# explain analyze select * from
Hi, I found this form of output of explain analyze while looking at some old
mails in the lists.
test4=# explain analyze select * from patients;
LOG: query: explain analyze select * from patients;
LOG: duration: 0.603887 sec
LOG: QUERY STATISTICS
! system usage stats:
! 0.624269 elapsed 0.458985 u
I'm curious if anyone's considered adding logic to count actual disk/cache
hits to report for EXPLAIN ANALYZE so that we get a more apples-to-apples
comparison?
The other question is whether anyone has got scripts or tools or what not
for testing and getting accurate numbers for the following tuni