--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Can't see how we optimize your case without pessimizing more-common cases.
My case appears to be pretty common, i.e. one small and one large table with an RI
constraint between them. In order to delete a record from the small table, the
large table must not have any records referencing it.
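Concretely, the slow statement is just a delete against the small table; the time goes into the RI check it fires against the large one. A minimal illustration (239 is the id value probed later in the thread):

  -- The DELETE itself touches one row; the >3 min is spent in the FK
  -- trigger's check that no my.Large row still references this id.
  DELETE FROM my.Small WHERE id = 239;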
On Thu, 2004-02-12 at 20:10, Tom Lane wrote:
> ow <[EMAIL PROTECTED]> writes:
> > Sounds pretty bad for my case. Any way to avoid the 10% scan?
>
> Can't see how we optimize your case without pessimizing more-common cases.
> Sorry.
Statistics say there are 10 values. Statistics list the 10 most common values (all of them).
On Thu, 2004-02-12 at 23:25, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not very common?
> In this precise example, could you not:
> 1. Check index for value
> 2. If found, seq-scan
>
> Of course that's only going to be a sensible thing to do if you're expecting
> one of two results:
> 1. Value not there
> 2. Lengthy seq-scan if it is there
Most of the queries are going to b
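As a rough SQL illustration of the two-step idea quoted above (probe the index first, do the long scan only if the value is actually present); this is not from the thread itself and assumes an index exists on large.small_id:

  -- Step 1: cheap existence probe; with an index on small_id this is a
  -- quick lookup (239 is the value being probed in this thread).
  SELECT EXISTS (SELECT 1 FROM ONLY my.large WHERE small_id = 239);

  -- Step 2: only if step 1 found a match, run the expensive query that
  -- actually locks a matching row (the form the RI trigger uses).
  SELECT 1 FROM ONLY my.large x WHERE small_id = 239 FOR UPDATE OF x;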
On Fri, 13 Feb 2004, Tom Lane wrote:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > On Fri, 13 Feb 2004, Tom Lane wrote:
> >> I was looking at that last night. It seems like we could add a LIMIT at
> >> least in some contexts. In the case at hand, we're just going to error
> >> out immediately
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Fri, 13 Feb 2004, Tom Lane wrote:
>> I was looking at that last night. It seems like we could add a LIMIT at
>> least in some contexts. In the case at hand, we're just going to error
>> out immediately if we find a matching row, and so there's no need to fetch any more rows.
On Fri, 13 Feb 2004, Tom Lane wrote:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > One thing is that IIRC we're going to ask for only one row when we do the
> > SPI_execp_current. However, unless I misremember, the behavior of for
> > update and limit means that saying limit 1 is potentially unsafe (if you block on a row that goes away).
Stephan Szabo <[EMAIL PROTECTED]> writes:
> One thing is that IIRC we're going to ask for only one row when we do the
> SPI_execp_current. However, unless I misremember, the behavior of for
> update and limit means that saying limit 1 is potentially unsafe (if you
> block on a row that goes away).
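For concreteness, the change being debated would make the FK check look roughly like the sketch below (hypothetical statement name; the real query is generated internally by the RI triggers). The worry above is exactly the LIMIT/FOR UPDATE interaction: if the single row we pick is deleted while we wait for its lock, LIMIT 1 returns nothing even though other matching rows may exist.

  -- Sketch only: the RI check with an explicit LIMIT added.
  PREPARE fk_check_limited(my.dint) AS
    SELECT 1 FROM ONLY "my"."large" x
    WHERE "small_id" = $1
    LIMIT 1
    FOR UPDATE OF x;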
On Thu, 12 Feb 2004, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not very common?
On Friday 13 February 2004 04:25, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not very common?
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> Can't see how we optimize your case without pessimizing more-common cases.
> I think other RDBMSs simply use a preset value instead of a partial table
> scan when there's not enough stat info. Might be a better way.
The probl
Rod Taylor <[EMAIL PROTECTED]> writes:
> Statistics say there are 10 values. Statistics list the 10 most common
> values (all of them). Given this, would it not be reasonable to assume
> that 239 is a recent addition (if there at all) to the table and not
> very common?
We don't know that it's 239
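A note on where a 7893843-row guess can come from: the RI check is a prepared query with a $1 parameter, so at plan time the planner does not know the value will be 239; lacking that, a uniform guess over the 10 known values gives roughly a tenth of the ~80M rows, i.e. about 7.9M. With a literal value it could instead consult the most-common-values list. A comparison sketch (the prepared form is the one used elsewhere in the thread; the literal form is added for illustration):

  -- Parameterized form: planned without the value, so the estimate falls
  -- back on n_distinct (= 10), i.e. about one tenth of the table.
  PREPARE foo(my.dint) AS
    SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
  EXPLAIN ANALYZE EXECUTE foo(239);

  -- Literal form: the planner can see that 239 is not among the ten
  -- most_common_vals and estimate a very small row count.
  EXPLAIN ANALYZE
    SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = 239 FOR UPDATE OF x;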
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> According to this entry, your small_id column only contains the ten
> values 1..10, roughly evenly distributed. So why are you probing for
> 239??
Let's say we have City (small) and Person (large) tables. A new city was added
(mistakenly) with id=239; it has no Person records referencing it yet, and now it needs to be deleted.
ow <[EMAIL PROTECTED]> writes:
> Sounds pretty bad for my case. Any way to avoid the 10% scan?
Can't see how we optimize your case without pessimizing more-common cases.
Sorry.
regards, tom lane
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> I think it must be using a seqscan for the foreign key check query.
> 2) prepare foo(my.dint) as
> SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
> explain analyze execute foo(201);
> QUERY PLAN
ow <[EMAIL PROTECTED]> writes:
> schemaname | tablename | attname  | null_frac | avg_width | n_distinct | most_common_vals       | most_common_freqs | histogram_bounds | correlation
> my         | large     | small_id | 0         | 4         | 10         | {7,3,5,1,4,2,8,10,6,9} | {0.108667,0.105,0.1
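For reference, a row like the one above comes from the pg_stats view; a query of this shape reproduces it (schema/table/column names taken from the thread):

  SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct,
         most_common_vals, most_common_freqs, histogram_bounds, correlation
  FROM pg_stats
  WHERE schemaname = 'my' AND tablename = 'large' AND attname = 'small_id';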
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Well, there's the smoking gun all right. Why does it think there are
> going to be 7893843 matching rows!? Could we see the pg_stats row for
> the large.small_id column?
>
> regards, tom lane
schemaname tablename attnam
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> ow <[EMAIL PROTECTED]> writes:
> > When I try to delete a record, it takes > 3 min.
>
> I think it must be using a seqscan for the foreign key check query.
> Could you try this and show the results?
1) SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = 2
On Thu, 12 Feb 2004, ow wrote:
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > You also did the alter table to up the statistics target on the column,
> > right?
>
> Not really. I did not change the default stats settings in
> postgresql.conf. Not sure what needs to be changed, can you clarify?
--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> You also did the alter table to up the statistics target on the column,
> right?
Not really. I did not change the default stats settings in
postgresql.conf. Not sure what needs to be changed, can you clarify?
Thanks
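For anyone wondering what the suggested change actually is: the per-column statistics target is set with ALTER TABLE rather than in postgresql.conf (default_statistics_target there only sets the global default). Something along these lines, with 100 as an arbitrary example value:

  -- Raise the statistics target for the referencing column, then re-analyze.
  -- 100 is an example value; the 7.4 default is 10.
  ALTER TABLE my.Large ALTER COLUMN small_id SET STATISTICS 100;
  ANALYZE my.Large;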
ow <[EMAIL PROTECTED]> writes:
> When I try to delete a record, it takes > 3 min.
I think it must be using a seqscan for the foreign key check query.
Could you try this and show the results?
prepare foo(my.dint) as
SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
explain analyze execute foo(...);
On Thu, 12 Feb 2004, ow wrote:
>
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> > case? As a random question, does increasing the statistics target on
> > Large.small_id and re-analyzing change its behavior?
>
Ran analyze, the result is the same.
--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> case? As a random question, does increasing the statistics target on
> Large.small_id and re-analyzing change its behavior?
Ran analyze, the result is the same. Here's more info:
ow <[EMAIL PROTECTED]> writes:
> When I try to delete a record, it takes > 3 min. Why is it taking so long if
> practically the same select query (see (2)) runs very quickly? Is there anything
> that can be done to fix it?
What are those column datatypes?
regards, tom lane
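(The types in question turn out to be the my.dint / my.dlong / my.dvalue domains shown in the schema below. For anyone following along, a catalog query like this one, which is illustrative and not from the thread, shows what each domain is based on; psql's \dD reports the same thing.)

  -- List the domains in schema "my" together with their base types.
  SELECT t.typname AS domain_name, bt.typname AS base_type, t.typnotnull
  FROM pg_type t
  JOIN pg_namespace n ON n.oid = t.typnamespace
  JOIN pg_type bt ON bt.oid = t.typbasetype
  WHERE n.nspname = 'my' AND t.typtype = 'd';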
On Wed, 11 Feb 2004, ow wrote:
> PostgreSQL 7.4 on i386-linux-gnu, compiled by GCC 2.96
>
> -- about 10 records
> CREATE TABLE my.Small
> (
> id my.dint NOT NULL,
> code my.dvalue NOT NULL,
> CONSTRAINT pk_1 PRIMARY KEY (id)
> ) WITHOUT OIDS;
>
> -- about 80M rows
> CREATE TABLE my.Large
>
PostgreSQL 7.4 on i386-linux-gnu, compiled by GCC 2.96
-- about 10 records
CREATE TABLE my.Small
(
id my.dint NOT NULL,
code my.dvalue NOT NULL,
CONSTRAINT pk_1 PRIMARY KEY (id)
) WITHOUT OIDS;
-- about 80M rows
CREATE TABLE my.Large
(
id my.dlong NOT NULL,
small_id my.dint NOT NULL,
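The definition above is cut off; the rest of the thread implies that Large.small_id carries the foreign key back to Small.id. A minimal self-contained sketch of the schema under that assumption (the domain definitions, the pk_2/fk constraint names, and the varchar length are guesses, not from the original post):

  -- Assumed domain definitions; the originals are not shown in the thread.
  CREATE SCHEMA my;
  CREATE DOMAIN my.dint   AS integer;
  CREATE DOMAIN my.dlong  AS bigint;
  CREATE DOMAIN my.dvalue AS varchar(40);

  -- about 10 records
  CREATE TABLE my.Small
  (
      id   my.dint   NOT NULL,
      code my.dvalue NOT NULL,
      CONSTRAINT pk_1 PRIMARY KEY (id)
  ) WITHOUT OIDS;

  -- about 80M rows; the FK on small_id is what the delete on my.Small
  -- has to check.
  CREATE TABLE my.Large
  (
      id       my.dlong NOT NULL,
      small_id my.dint  NOT NULL,
      CONSTRAINT pk_2 PRIMARY KEY (id),
      CONSTRAINT fk_large_small FOREIGN KEY (small_id) REFERENCES my.Small (id)
  ) WITHOUT OIDS;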