--- Tom Lane [EMAIL PROTECTED] wrote:
Can't see how we optimize your case without pessimizing more-common cases.
My case appears to be pretty common, i.e. 1 small and 1 large table with an RI
constraint between them. In order to delete a record from the small table, the
large table must not have any rows that reference it.
In this precise example, could you not:
1. Check index for value
2. If found, seq-scan
Of course that's only going to be a sensible thing to do if you're expecting
one of two results:
1. Value not there
2. Lengthy seq-scan if it is there
Most of the queries are going to be for
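One way to see whether an index scan would actually win here (a diagnostic only; it does not change the plan the RI trigger has already prepared) is to disable sequential scans for the session and re-run the check query from the thread:

```sql
-- Diagnostic sketch: force the planner away from the seqscan it prefers.
-- 239 is the probe value discussed in the thread.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT 1 FROM ONLY my.large x WHERE small_id = 239 FOR UPDATE OF x;
SET enable_seqscan = on;
```

Comparing this timing against the default plan shows whether the planner's 10%-of-table estimate is what is hurting, or the index scan genuinely loses.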
On Thu, 2004-02-12 at 23:25, Tom Lane wrote:
Rod Taylor [EMAIL PROTECTED] writes:
Statistics say there are 10 values. Statistics list the 10 most common
values (all of them). Given this, would it not be reasonable to assume
that 239 is a recent addition (if there at all) to the table and
On Thu, 2004-02-12 at 20:10, Tom Lane wrote:
ow [EMAIL PROTECTED] writes:
Sounds pretty bad for my case. Any way to avoid the 10% scan?
Can't see how we optimize your case without pessimizing more-common cases.
Sorry.
Statistics say there are 10 values. Statistics list the 10 most common
On Thu, 12 Feb 2004, Tom Lane wrote:
Rod Taylor [EMAIL PROTECTED] writes:
Statistics say there are 10 values. Statistics list the 10 most common
values (all of them). Given this, would it not be reasonable to assume
that 239 is a recent addition (if there at all) to the table and not
Stephan Szabo [EMAIL PROTECTED] writes:
One thing is that IIRC we're going to ask for only one row when we do the
SPI_execp_current. However, unless I misremember, the behavior of for
update and limit means that saying limit 1 is potentially unsafe (if you
block on a row that goes away). Is
On Fri, 13 Feb 2004, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
One thing is that IIRC we're going to ask for only one row when we do the
SPI_execp_current. However, unless I misremember, the behavior of for
update and limit means that saying limit 1 is potentially unsafe (if
Stephan Szabo [EMAIL PROTECTED] writes:
On Fri, 13 Feb 2004, Tom Lane wrote:
I was looking at that last night. It seems like we could add a LIMIT at
least in some contexts. In the case at hand, we're just going to error
out immediately if we find a matching row, and so there's no need for
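Concretely, the variant under discussion would be the RI check query with a LIMIT bolted on; this is a sketch only, since Stephan's caveat above is that FOR UPDATE combined with LIMIT is potentially unsafe if the locked row goes away:

```sql
-- Sketch of the no-action RI check with LIMIT 1 added.
-- Per the thread, FOR UPDATE + LIMIT is potentially unsafe if you
-- block on a row that is then deleted, so this is not a drop-in fix.
SELECT 1 FROM ONLY my.large x WHERE small_id = $1 FOR UPDATE OF x LIMIT 1;
```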
--- Stephan Szabo [EMAIL PROTECTED] wrote:
Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
case? As a random question, does increasing the statistics target on
Large.small_id and re-analyzing change its behavior?
Ran analyze, the result is the same. Here's more info:
ow [EMAIL PROTECTED] writes:
When I try to delete record, it takes 3 min.
I think it must be using a seqscan for the foreign key check query.
Could you try this and show the results?
prepare foo(my.dint) as
SELECT 1 FROM ONLY my.large x WHERE small_id = $1 FOR UPDATE OF x;
explain analyze execute foo(201);
--- Stephan Szabo [EMAIL PROTECTED] wrote:
You also did the alter table to up the statistics target on the column,
right?
Not really. I did not change the default stats settings in
postgresql.conf. Not sure what needs to be changed, can you clarify?
Thanks
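For reference, the per-column statistics target Stephan is asking about is not a postgresql.conf setting; it is set with ALTER TABLE and takes effect at the next ANALYZE. A sketch (the target value 100 is just an example):

```sql
-- Raise the statistics target for the FK column, then re-gather stats.
ALTER TABLE my.large ALTER COLUMN small_id SET STATISTICS 100;
ANALYZE my.large;
```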
--- Tom Lane [EMAIL PROTECTED] wrote:
ow [EMAIL PROTECTED] writes:
When I try to delete record, it takes 3 min.
I think it must be using a seqscan for the foreign key check query.
Could you try this and show the results?
1) SELECT 1 FROM ONLY my.large x WHERE small_id = 201 FOR UPDATE OF x;
--- Tom Lane [EMAIL PROTECTED] wrote:
Well, there's the smoking gun all right. Why does it think there are
going to be 7893843 matching rows!? Could we see the pg_stats row for
the large.small_id column?
regards, tom lane
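The row Tom asks for can be pulled from the pg_stats system view, e.g.:

```sql
-- Fetch the planner statistics for the FK column Tom is asking about.
SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct,
       most_common_vals, most_common_freqs, histogram_bounds, correlation
FROM pg_stats
WHERE schemaname = 'my' AND tablename = 'large' AND attname = 'small_id';
```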
ow [EMAIL PROTECTED] writes:
 schemaname | tablename | attname  | null_frac | avg_width | n_distinct | most_common_vals
------------+-----------+----------+-----------+-----------+------------+------------------------
 my         | large     | small_id |         0 |         4 |         10 | {7,3,5,1,4,2,8,10,6,9}
(the most_common_freqs, histogram_bounds, and correlation values were truncated in the archive)
ow [EMAIL PROTECTED] writes:
--- Tom Lane [EMAIL PROTECTED] wrote:
I think it must be using a seqscan for the foreign key check query.
2) prepare foo(my.dint) as
SELECT 1 FROM ONLY my.large x WHERE small_id = $1 FOR UPDATE OF x;
explain analyze execute foo(201);
QUERY PLAN
Seq Scan on
ow [EMAIL PROTECTED] writes:
Sounds pretty bad for my case. Any way to avoid the 10% scan?
Can't see how we optimize your case without pessimizing more-common cases.
Sorry.
regards, tom lane
ow [EMAIL PROTECTED] writes:
--- Tom Lane [EMAIL PROTECTED] wrote:
Can't see how we optimize your case without pessimizing more-common cases.
I think other RDBMSs simply use a preset value instead of a partial table
scan when there's not enough stat info. That might be a better way.
The problem here
On Friday 13 February 2004 04:25, Tom Lane wrote:
Rod Taylor [EMAIL PROTECTED] writes:
Statistics say there are 10 values. Statistics list the 10 most common
values (all of them). Given this, would it not be reasonable to assume
that 239 is a recent addition (if there at all) to the table
PostgreSQL 7.4 on i386-linux-gnu, compiled by GCC 2.96
-- about 10 records
CREATE TABLE my.Small
(
id my.dint NOT NULL,
code my.dvalue NOT NULL,
CONSTRAINT pk_1 PRIMARY KEY (id)
) WITHOUT OIDS;
-- about 80M rows
CREATE TABLE my.Large
(
id my.dlong NOT NULL,
small_id my.dint NOT NULL
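The archived message cuts off here. Based on the rest of the thread (an RI constraint from Large.small_id to Small.id), the full DDL presumably looked something like the following sketch; the constraint name fk_large_small is a guess, not from the source:

```sql
-- Hedged reconstruction of the truncated DDL; the FK is implied by the
-- thread, and the constraint name fk_large_small is hypothetical.
CREATE TABLE my.Large
(
    id my.dlong NOT NULL,
    small_id my.dint NOT NULL,
    CONSTRAINT fk_large_small FOREIGN KEY (small_id) REFERENCES my.Small (id)
) WITHOUT OIDS;
```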