On Aug 18, 2008, at 11:49 AM, Tom Lane wrote:
> Perhaps what's also needed here is to measure just how accurate the
> cpu_* costs are. Perhaps they need to be raised somewhat if we're
> underestimating the cost of digging through 200 tuples on a heap page
> and the benefit of a binary search on
Tom Lane [EMAIL PROTECTED] writes:
> Gregory Stark [EMAIL PROTECTED] writes:
>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>>> This is only going to matter for a table of 1 block (or at least very
>>> few blocks), and for such a table it's highly likely that it's in RAM
>>> anyway. So I'm
On Mon, 2008-08-18 at 16:44 +0100, Gregory Stark wrote:
> Tom Lane [EMAIL PROTECTED] writes:
>> Gregory Stark [EMAIL PROTECTED] writes:
>>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>>>> This is only going to matter for a table of 1 block (or at least
>>>> very few blocks), and for such a table
Gregory Stark [EMAIL PROTECTED] writes:
> Tom Lane [EMAIL PROTECTED] writes:
>> I'm not necessarily opposed to making this change --- it does sound
>> kinda plausible --- but I want to see some hard evidence that it does
>> more good than harm before we put it in.

I don't want to see this thread
On Aug 13, 2008, at 10:45 PM, Andrew Gierth wrote:
> You could likely expose a difference using LIMIT 1 in the subselect,
> but that doesn't tell us anything we didn't already know (which is
> that yes, index scan is much faster than seqscan even for 1-block
> tables, except in the rare case when
On Tue, 2008-08-12 at 23:22 -0400, Tom Lane wrote:
> Gregory Stark [EMAIL PROTECTED] writes:
>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>>> This is only going to matter for a table of 1 block (or at least very
>>> few blocks), and for such a table it's highly likely that it's in RAM

> Proposal: Make the first block of a seq scan cost random_page_cost, then
> after that every additional block costs seq_page_cost.

+1

AFAICS the cost cross-over is much higher than the actual elapsed time
cross-over for both narrow and wide tables. Which makes absolute sense,
since
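The proposal above amounts to a one-line change in how a sequential scan's
page-fetch cost is accumulated. A minimal sketch of the current versus
proposed models, using the default GUC values discussed in this thread (the
function names and structure are illustrative, not PostgreSQL's actual
costsize.c code):

```python
# Default planner cost settings (PostgreSQL defaults at the time).
SEQ_PAGE_COST = 1.0
RANDOM_PAGE_COST = 4.0

def seqscan_cost_current(pages):
    # Current model: every page of a sequential scan costs seq_page_cost.
    return pages * SEQ_PAGE_COST

def seqscan_cost_proposed(pages):
    # Proposed model: the first block is charged as a random fetch,
    # every additional block as a sequential fetch.
    if pages == 0:
        return 0.0
    return RANDOM_PAGE_COST + (pages - 1) * SEQ_PAGE_COST
```

For a 1-block table the difference is largest (1.0 vs 4.0); for large tables
the two models converge, which matches the observation that the change only
matters for tables of very few blocks.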
On Aug 12, 2008, at 4:52 PM, Andrew Gierth wrote:
>>>>> "Tom" == Tom Lane [EMAIL PROTECTED] writes:

> Proposal: Make the first block of a seq scan cost
> random_page_cost, then after that every additional block costs
> seq_page_cost.

Tom> This is only going to matter for a table of 1 block (or at least
>>>>> "Decibel" == Decibel! [EMAIL PROTECTED] writes:

Decibel> Roughly what I get on my MBP (I'm about a factor of 2
Decibel> slower). This makes me think it's an issue of having to slog
Decibel> through an entire page one row at a time vs just using the
Decibel> index. To test this I tested selecting
On Wed, Aug 13, 2008 at 07:33:40PM +0100, Andrew Gierth wrote:
> The following message is a courtesy copy of an article
> that has been posted to pgsql.hackers as well.
>
> >>>>> "Decibel" == Decibel! [EMAIL PROTECTED] writes:
>
> Decibel> Roughly what I get on my MBP (I'm about a factor of 2
> Decibel>
Decibel! [EMAIL PROTECTED] writes:
> Makes sense, and yeah, I was wondering a bit about that. I'll try to
> fake it out with offset 0 later on if no one beats me to it; I do still
> think we could just be seeing the effect of slogging through 200 tuples
> instead of going directly to the one we want.
Gregory Stark [EMAIL PROTECTED] writes:
> That means going to the index meta page, find the fast root pointer,
> look up that page, look at the single leaf page pointer, look up that
> page, and do a binary search of the 200 leaf pointers. Once you find
> the resulting match, look up the heap page and
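For the 200-row, single-leaf-page index described above, that descent
touches only a handful of pages. A rough accounting of the work involved (a
sketch of the argument only; a real btree descent in PostgreSQL also pays
buffer pins, locks, and cpu_operator_cost per comparison):

```python
import math

def btree_lookup_page_fetches():
    # Pages touched for the shallow btree described above: the metapage
    # (to find the fast root), the single root/leaf page, and finally
    # the heap page holding the matching tuple.
    meta, leaf, heap = 1, 1, 1
    return meta + leaf + heap

def binary_search_comparisons(n):
    # A binary search over n leaf items needs about log2(n) comparisons.
    return math.ceil(math.log2(n))
```

So the index path is roughly 3 page fetches plus ~8 comparisons for 200
items, versus wading through up to 200 heap tuples for a seqscan of the
same one-block table.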
On Aug 13, 2008, at 3:52 PM, Decibel! wrote:
> The problem is that by looking for a constant row, you're actually
> eliminating the entire effect being tested, because the uncorrelated
> subselect is run ONCE as an initplan, and the entire query time is
> then spent elsewhere. The differences in runtime
Decibel! [EMAIL PROTECTED] writes:
> It'd be useful to get strace data on this, but OS X doesn't have
> that :/ (and I'm on 10.4 so no dtrace either).

See ktrace.

			regards, tom lane
>>>>> "Decibel" == Decibel! [EMAIL PROTECTED] writes:

Decibel> OK, ran the test again via this query:

Decibel> explain analyze select (select value from oneblock where id = i)
Decibel> from generate_series(1,1) i, generate_series(1,10) j;

Decibel> changing 1,1 to 200,200 as needed. I don't see
If we access a 1 block table using a SeqScan then it costs seq_page_cost,
or 1 by default. If we access the same 1 block table using an IndexScan
then the access costs random_page_cost to access the index block and then
random_page_cost to access the data block. So the same block accessed
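With the default settings (seq_page_cost = 1.0, random_page_cost = 4.0),
the arithmetic above works out as follows (a worked sketch of the argument,
not actual planner code):

```python
SEQ_PAGE_COST = 1.0
RANDOM_PAGE_COST = 4.0

# SeqScan of a 1-block table: one sequential page fetch.
seqscan_cost = 1 * SEQ_PAGE_COST

# IndexScan of the same table: one random fetch for the index block,
# then one random fetch for the data block.
indexscan_cost = 2 * RANDOM_PAGE_COST
```

The planner therefore prices the index path at 8.0 against 1.0 for the
seqscan, even though the timings reported in this thread show the index
scan winning on such tables.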
Simon Riggs [EMAIL PROTECTED] writes:
> Proposal: Make the first block of a seq scan cost random_page_cost, then
> after that every additional block costs seq_page_cost.

This is only going to matter for a table of 1 block (or at least very
few blocks), and for such a table it's highly likely that
On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
>> Proposal: Make the first block of a seq scan cost random_page_cost, then
>> after that every additional block costs seq_page_cost.
> This is only going to matter for a table of 1 block (or at least very
>>>>> "Tom" == Tom Lane [EMAIL PROTECTED] writes:

> Proposal: Make the first block of a seq scan cost
> random_page_cost, then after that every additional block costs
> seq_page_cost.

Tom> This is only going to matter for a table of 1 block (or at least
Tom> very few blocks), and for such a table it's
Simon Riggs [EMAIL PROTECTED] writes:
> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>> Simon Riggs [EMAIL PROTECTED] writes:
>>> Proposal: Make the first block of a seq scan cost random_page_cost, then
>>> after that every additional block costs seq_page_cost.
>> This is only going to matter for
On Tue, 2008-08-12 at 23:58 +0100, Gregory Stark wrote:
> People lower random_page_cost because we're not doing a good job
> estimating how much of a table is in cache.

Is it because of a bad estimate about how much of a table is in cache,
or a bad assumption about the distribution of access to a
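One way to see why lowering random_page_cost compensates for poor cache
estimates: if some fraction of "random" fetches actually hit pages already
in RAM, the effective per-page cost is a blend of the in-cache and on-disk
costs. A sketch under an assumed cache-hit fraction and an assumed cost for
an already-cached page (the blending model is an illustration of the
argument, not what the planner does):

```python
RANDOM_PAGE_COST = 4.0   # default on-disk random fetch cost
CACHED_PAGE_COST = 0.01  # assumed cost of a page already in RAM

def effective_random_page_cost(cache_hit_fraction):
    # Blend the in-cache and on-disk costs by the assumed hit rate.
    miss_fraction = 1.0 - cache_hit_fraction
    return (cache_hit_fraction * CACHED_PAGE_COST
            + miss_fraction * RANDOM_PAGE_COST)
```

With 90% of a table cached, the effective cost comes out near 0.4, in the
neighborhood of the lowered random_page_cost settings people report using
in practice.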
Gregory Stark [EMAIL PROTECTED] writes:
> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>> This is only going to matter for a table of 1 block (or at least very
>> few blocks), and for such a table it's highly likely that it's in RAM
>> anyway. So I'm unconvinced that the proposed change represents
On Tue, 2008-08-12 at 23:58 +0100, Gregory Stark wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
>> On Tue, 2008-08-12 at 15:46 -0400, Tom Lane wrote:
>>> Simon Riggs [EMAIL PROTECTED] writes:
>>>> Proposal: Make the first block of a seq scan cost random_page_cost,
>>>> then after that every additional