On Friday 13 February 2004 04:25, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not
> > very common?
Oh, OK, understood.
What will happen for a timestamp field? Let us say c1 is a timestamp column.
sqlstr := 'insert into test(c1, c2) values ('
          ||'\''||COALESCE(rec.c1,'NULL')||'\','
          ||'\''||rec.c2||'\')';
In this case the query will be
insert into test(c1,c2) values ('2004-02-13', 'Hai')
On 2004-02-13 05:53, Kumar wrote:
I am having a problem there. See what happens:
sqlstr := 'insert into test(c1, c2) values ('||COALESCE(rec.c1,'NULL')||','
||'\''||rec.c2||'\')';
You are preparing a string, so make sure you have strings everywhere:
sqlstr := 'insert into test
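The reply is cut off here in the archive. A minimal sketch of the "strings everywhere" idea, using quote_literal() together with COALESCE() so that NULLs and embedded quotes are both handled (a reconstruction, not the original code from this message):

sqlstr := 'insert into test(c1, c2) values ('
          || COALESCE(quote_literal(rec.c1::text), 'NULL') || ', '
          || COALESCE(quote_literal(rec.c2), 'NULL') || ')';

quote_literal() returns NULL when its argument is NULL, so the COALESCE supplies the literal SQL keyword NULL in that case; the explicit ::text cast keeps it working for non-text columns such as the timestamp discussed earlier.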
There are 18321 records for 20040127, so the estimate of 8839 for the = case is low,
but it still does the right thing. There are 227197 records between '20040127' and
current_date, so the estimate in the >= case is accurate, but the estimate for the
BETWEEN case is an order of magnitude too low.
Dear all,
I solved it using the ISNULL function.
sqlstr := 'insert into test(c1, c2) values ('||ISNULL(rec.c1,'NULL')||','
          ||'\''||rec.c2||'\')';
Thanks
kumar
- Original Message -
From: "Kumar" <[EMAIL PROTECTED]>
To: "Tomasz Myrta" <[EMAIL PROTECTED]>
Cc: "psql" <[EMAIL PROTECTED]>
I am having a problem there. See what happens:
sqlstr := 'insert into test(c1, c2) values ('||COALESCE(rec.c1,'NULL')||','
||'\''||rec.c2||'\')';
WARNING: Error occurred while executing PL/pgSQL function test_fn
WARNING: line 8 at assignment
ERROR: pg_atoi: error in "NULL": can't parse "NULL"
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> Can't see how we optimize your case without pessimizing more-common cases.
> I think other RDBMSs simply use preset value instead of partial table
> scan when there's not enough stat info. Might be a better way.
The probl
Rod Taylor <[EMAIL PROTECTED]> writes:
> Statistics say there are 10 values. Statistics list the 10 most common
> values (all of them). Given this, would it not be reasonable to assume
> that 239 is a recent addition (if there at all) to the table and not
> very common?
We don't know that it's 239
Hi all,
I have a table with around 3M records in it and a few indexes on it. One of them is on
the day column. I get 10-20K new records a day. After running ANALYSE in psql I tried
the following queries:
buns=# explain select count(*) from cdr where day >= '20040127';
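The plans themselves are cut off in the archive. The other two forms discussed in this thread (the = case and the BETWEEN case) would be checked the same way; a sketch using the table and column names from the post:

buns=# explain select count(*) from cdr where day = '20040127';
buns=# explain select count(*) from cdr where day between '20040127' and current_date;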
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> According to this entry, your small_id column only contains the ten
> values 1..10, roughly evenly distributed. So why are you probing for
> 239??
Let's say we have City (small) and Person (large) tables. A new city was added
(mistakenly) with id=239, it
ow <[EMAIL PROTECTED]> writes:
> Sounds pretty bad for my case. Any way to avoid the 10% scan?
Can't see how we optimize your case without pessimizing more-common cases.
Sorry.
regards, tom lane
Bruce Momjian wrote:
Is this a TODO?
Probably -- something like:
Modify array literal representation to handle array index lower bound
of other than one
Joe
Joe Conway wrote:
> Bruce Momjian wrote:
> > Is this a TODO?
>
> Probably -- something like:
>   Modify array literal representation to handle array index lower bound
>   of other than one
Added to TODO.
--
Bruce Momjian  |  http://candle.pha.pa.us
[EMAIL PROTECTED]
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> I think it must be using a seqscan for the foreign key check query.
> 2) prepare foo(my.dint) as
> SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
> explain analyze execute foo(201);
> QUERY PLAN
ow <[EMAIL PROTECTED]> writes:
> schemaname | tablename | attname  | null_frac | avg_width | n_distinct | most_common_vals       | most_common_freqs   | histogram_bounds | correlation
> my         | large     | small_id | 0         | 4         | 10         | {7,3,5,1,4,2,8,10,6,9} | {0.108667,0.105,0.1
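For reference, a row like the one above can be pulled directly from the pg_stats view; a sketch using the schema, table, and column names from this thread:

SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct,
       most_common_vals, most_common_freqs, histogram_bounds, correlation
FROM   pg_stats
WHERE  schemaname = 'my' AND tablename = 'large' AND attname = 'small_id';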
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Well, there's the smoking gun all right. Why does it think there are
> going to be 7893843 matching rows!? Could we see the pg_stats row for
> the large.small_id column?
>
> regards, tom lane
schemaname tablename attnam
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> ow <[EMAIL PROTECTED]> writes:
> > When I try to delete record, it takes > 3 min.
>
> I think it must be using a seqscan for the foreign key check query.
> Could you try this and show the results?
1) SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = 2
On Thu, 12 Feb 2004, ow wrote:
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > You also did the alter table to up the statistics target on the column,
> > right?
>
> Not really. I did not change the default stats settings in the
> postgresql.conf. Not sure what needs to be changed, can you clarify?
--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> You also did the alter table to up the statistics target on the column,
> right?
Not really. I did not change the default stats settings in the
postgresql.conf. Not sure what needs to be changed, can you clarify?
Thanks
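What Stephan is referring to is a per-column setting changed with ALTER TABLE, not anything in postgresql.conf. A sketch using the column from this thread (the target value 100 is only an example, not a recommendation from the thread):

ALTER TABLE my.large ALTER COLUMN small_id SET STATISTICS 100;
ANALYZE my.large;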
ow <[EMAIL PROTECTED]> writes:
> When I try to delete record, it takes > 3 min.
I think it must be using a seqscan for the foreign key check query.
Could you try this and show the results?
prepare foo(my.dint) as
SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
explain analyze
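The preview is cut off here. Pieced together from the parts quoted elsewhere in this thread (the parameter value 201 is taken from the later reply, not from this message), the full round trip would look roughly like:

prepare foo(my.dint) as
  SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;
explain analyze execute foo(201);
deallocate foo;  -- drop the prepared statement when done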
On Thu, 12 Feb 2004, ow wrote:
>
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> > case? As a random question, does increasing the statistics target on
> > Large.small_id and re-analyzing change its behavior?
>
> Ran analyze, the result is the same.
--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> case? As a random question, does increasing the statistics target on
> Large.small_id and re-analyzing change its behavior?
Ran analyze, the result is the same. Here's more info
On 2004-02-12 13:31, Kumar wrote:
The error is because there is no value for column c1. If the column c1 were a
string I might have replaced it with an empty string. I don't want to
substitute it with '0', even though that could work.
sqlstr := 'insert into test(c1, c2) values (' ||ISNULL(rec.c1,'0')||','
Dear Friends,
Postgres 7.3.4 on RH Linux 7.2.
I wanted to write a dynamic query for an insert statement.
create table test(c1 int, c2 varchar);
insert into test(c1, c2) values (1, 'Hai1');
insert into test(c1, c2) values (NULL, 'Hai2');
so I wrote a function called test_fn()
DECLARE
s
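The declaration section is cut off in the archive. A minimal sketch of what such a test_fn() could look like, with the quoting handled as discussed elsewhere in this thread (a reconstruction, not Kumar's actual code; the source table test_source and the dollar quoting are assumptions — on 7.3 the function body must be a single-quoted string, so every quote below would have to be doubled):

CREATE OR REPLACE FUNCTION test_fn() RETURNS integer AS $$
DECLARE
    sqlstr  varchar;
    rec     record;
BEGIN
    -- test_source is a hypothetical table with the same columns as test
    FOR rec IN SELECT c1, c2 FROM test_source LOOP
        -- build the INSERT as a string, quoting values and handling NULLs
        sqlstr := 'insert into test(c1, c2) values ('
                  || COALESCE(quote_literal(rec.c1::text), 'NULL') || ', '
                  || COALESCE(quote_literal(rec.c2), 'NULL') || ')';
        EXECUTE sqlstr;
    END LOOP;
    RETURN 0;
END;
$$ LANGUAGE plpgsql;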
> Check if it is using indexes.
> 7.3 seems to ignore them sometimes, try upgrading to 7.4 where index use
> is apparently improved.
Good,
I will upgrade anyway,
but how can I check index usage when running "DELETE FROM table1;"?
EXPLAIN tells me only "seq scan on table1"
when many other tables are involved.
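EXPLAIN on the DELETE itself only shows the plan of the delete; the foreign-key checks against the other tables run as separate trigger queries and do not appear in that output. One way to inspect them is to EXPLAIN the kind of query those checks issue by hand; a sketch with made-up table and column names:

-- the referencing table and its FK column are hypothetical names
EXPLAIN SELECT 1 FROM other_table WHERE table1_id = 42;
-- if this shows a seq scan, an index on the referencing column usually helps:
CREATE INDEX other_table_table1_id_idx ON other_table (table1_id);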