Re: [SQL] cascade delete

2004-02-12 Thread sad
> Check if it is using indexes.
> 7.3 seems to ignore them sometimes; try upgrading to 7.4, where index use
> is apparently improved.

Good, I will upgrade anyway.
But how can I check index usage for a DELETE FROM table1;?
EXPLAIN tells me only "seq scan on table1",
even when many other tables are involved in this DELETE via foreign keys.



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



[SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Kumar



Dear Friends,

Postgres 7.3.4 on RH Linux 7.2.

I wanted to write a dynamic query for an insert statement.

create table test(c1 int, c2 varchar);

insert into test(c1, c2) values (1,'Hai1');
insert into test(c1, c2) values (NULL,'Hai2');

So I wrote a function called test_fn():

DECLARE
    sqlstr  VARCHAR(100);
    rec     RECORD;
BEGIN
    FOR rec IN SELECT * FROM test
    LOOP
        sqlstr := 'insert into test(c1, c2) values ('
                  || rec.c1 || ','
                  || '\'' || rec.c2 || '\')';
        RAISE NOTICE '%', sqlstr;
        EXECUTE sqlstr;
    END LOOP;
    RETURN 'DONE';
END;

NOTICE:  insert into test(c1, c2) values (1,'Hai1')
NOTICE:

So for the NULL row the whole concatenated string came out null, and I changed
it to use ISNULL:

sqlstr := 'insert into test(c1, c2) values ('
          || ISNULL(rec.c1,'') || ','
          || '\'' || rec.c2 || '\')';

Now I got these results:

NOTICE:  insert into test(c1, c2) values (1,'Hai1')
NOTICE:  insert into test(c1, c2) values (,'Hai2')
WARNING:  Error occurred while executing PL/pgSQL function test_fn
WARNING:  line 11 at execute statement

ERROR:  parser: parse error at or near "," at character 34

The error is because there is no value for column c1. If column c1 were a
string, I could have replaced it with an empty string. I don't want to
substitute '0', although that would work:

sqlstr := 'insert into test(c1, c2) values ('
          || ISNULL(rec.c1,'0') || ','
          || '\'' || rec.c2 || '\')';

NOTICE:  insert into test(c1, c2) values (1,'Hai1')
NOTICE:  insert into test(c1, c2) values (0,'Hai2')

Total query runtime: 47 ms. Data retrieval runtime: 0 ms. 1 row retrieved.

How can I do that? Please advise me.

Thanks
Kumar




Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Tomasz Myrta
On 2004-02-12 13:31, Kumar wrote:
> The error is because there is no value for column c1. If column c1 were a
> string, I could have replaced it with an empty string. I don't want to
> substitute '0', although that would work:
>
> sqlstr := 'insert into test(c1, c2) values (' ||ISNULL(rec.c1,'0')||','
> ||'\''||rec.c2||'\')';
Substitute it with NULL value:
sqlstr := 'insert into test(c1, c2) values(' || coalesce(rec.c1,'NULL') 
||...

Regards,
Tomasz Myrta


Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> case?  As a random question, does increasing the statistics target on
> Large.small_id and re-analyzing change its behavior?

Ran analyze; the result is the same. Here's more info:

1) There's 1 row in "Large" for "small_id" = 239
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 239 FOR UPDATE OF x

Quick query. Explain shows an index scan.

2) There are many rows in "Large" for "small_id" = 1
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 1 FOR UPDATE OF x

Runs for about 3 min. Explain shows a table scan.

3) delete from Small where id = 239
Runs for about 3 min. It does appear that a table scan is used for the FK
verification. But why? I am deleting "id = 239", not "id = 1", and the query in
(1) runs very quickly. I suspected that the wrong id was being passed during FK
verification, but the FK constraint DOES work.

4) Domain types used in the example above:
my.dint = int
my.dlong = int8
my.dvalue = varchar(15)

Thanks






Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Stephan Szabo
On Thu, 12 Feb 2004, ow wrote:

>
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > Hmm, I'd wonder if maybe it's choosing a sequential scan in the second
> > case?  As a random question, does increasing the statistics target on
> > Large.small_id and re-analyzing change its behavior?
>
> Ran analyze, the result is the same. Here's more info:

You also did the alter table to up the statistics target on the column,
right?

> 3) delete from Small where id = 239
> Runs for about 3 min. It does appear that a table scan is used for the FK
> verification. But why? I am deleting "id = 239", not "id = 1", and the query
> in (1) runs very quickly. I suspected that the wrong id was being passed
> during FK verification, but the FK constraint DOES work.

It doesn't plan it as id=239 but as id=$1 and then executes it with
$1=239.  The plan data gets reused for other id values if it needs the
same fk action again later in the session.

I'd hoped that upping the statistics target and re-analyzing would make it
choose an index scan for the case where it doesn't know what constant is
going to be used.  Hmm, what is the estimated cost difference and real
time difference on id=1 between seqscan and index scan?  (EXPLAIN ANALYZE
output with and without enable_seqscan=off should show you.)
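For reference, that comparison can be run like this (a sketch, using the table
and value from earlier in this thread):

```sql
-- Run the slow fk-check query both ways and compare cost and runtime.
EXPLAIN ANALYZE
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 1 FOR UPDATE OF x;

SET enable_seqscan = off;  -- temporarily discourage the sequential scan
EXPLAIN ANALYZE
SELECT 1 FROM ONLY "my"."Large" x WHERE "small_id" = 1 FOR UPDATE OF x;
SET enable_seqscan = on;   -- restore the default
```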




Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
ow <[EMAIL PROTECTED]> writes:
> When I try to delete record, it takes > 3 min.

I think it must be using a seqscan for the foreign key check query.
Could you try this and show the results?

prepare foo(my.dint) as
SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;

explain analyze execute foo(239);

When I try it I see an indexscan plan, but maybe there's some aspect of
your setup that's causing problems.

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> You also did the alter table to up the statistics target on the column,
> right?

Not really. I did not change the default stats settings in
postgresql.conf. Not sure what needs to be changed; can you clarify?

Thanks






Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Stephan Szabo
On Thu, 12 Feb 2004, ow wrote:

> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > You also did the alter table to up the statistics target on the column,
> > right?
>
> Not really. I did not change the default stats settings in
> postgresql.conf. Not sure what needs to be changed; can you clarify?

Basically, run something like:
-- 1000 is just an arbitrary choice, but we can lower it later if this
-- works.
ALTER TABLE my.Large ALTER COLUMN small_id SET STATISTICS 1000;
ANALYZE my.Large;

This increases the target for just the column in question, which means the
planner should have a better idea of the distribution and may make a better
guess if you've got a somewhat uneven distribution.
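To confirm the higher target took effect, the column's entry in the pg_stats
view can be inspected after the ANALYZE (a sketch; with a larger target the
most_common_vals/most_common_freqs lists should grow):

```sql
-- The stats row for my.Large.small_id, as gathered by the latest ANALYZE.
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE schemaname = 'my' AND tablename = 'large' AND attname = 'small_id';
```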




[SQL] test

2004-02-12 Thread beyaRecords - The home Urban music
testing 1,2,3



Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> ow <[EMAIL PROTECTED]> writes:
> > When I try to delete record, it takes > 3 min.
> 
> I think it must be using a seqscan for the foreign key check query.
> Could you try this and show the results? 

1) SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = 201 FOR UPDATE OF x;

QUERY PLAN
Index Scan using small_fk on large x  (cost=0.00..6.01 rows=1 width=6) (actual
time=0.251..0.251 rows=0 loops=1)
  Index Cond: ((small_id)::integer = 201)
Total runtime: 0.338 ms

2) prepare foo(my.dint) as
SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;

explain analyze execute foo(201);

QUERY PLAN
Seq Scan on large x  (cost=0.00..1787052.30 rows=7893843 width=6) (actual
time=210566.301..210566.301 rows=0 loops=1)
  Filter: ((small_id)::integer = ($1)::integer)
Total runtime: 210566.411 ms

Thanks





Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow

--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Well, there's the smoking gun all right.  Why does it think there are
> going to be 7893843 matching rows!?  Could we see the pg_stats row for
> the large.small_id column?
> 
>   regards, tom lane

schemaname        | my
tablename         | large
attname           | small_id
null_frac         | 0
avg_width         | 4
n_distinct        | 10
most_common_vals  | {7,3,5,1,4,2,8,10,6,9}
most_common_freqs | {0.108667,0.105,0.104333,0.101333,0.100667,0.098,0.098,0.098,0.094,0.091}
histogram_bounds  |
correlation       | 0.0597573






Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
ow <[EMAIL PROTECTED]> writes:
> schemaname        | my
> tablename         | large
> attname           | small_id
> null_frac         | 0
> avg_width         | 4
> n_distinct        | 10
> most_common_vals  | {7,3,5,1,4,2,8,10,6,9}
> most_common_freqs | {0.108667,0.105,0.104333,0.101333,0.100667,0.098,0.098,0.098,0.094,0.091}
> histogram_bounds  |
> correlation       | 0.0597573

According to this entry, your small_id column only contains the ten
values 1..10, roughly evenly distributed.  So why are you probing for
239??

The planner is certainly going to estimate a probe for an unspecified
value as retrieving 10% of the table, and under that assumption it's
quite right to use a seqscan.

If this estimate is not right, perhaps you could give us a more accurate
view of the column statistics?

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> I think it must be using a seqscan for the foreign key check query.

> 2) prepare foo(my.dint) as
> SELECT 1 FROM ONLY "my"."large" x WHERE "small_id" = $1 FOR UPDATE OF x;

> explain analyze execute foo(201);

> QUERY PLAN
> Seq Scan on large x  (cost=0.00..1787052.30 rows=7893843 width=6) (actual
> time=210566.301..210566.301 rows=0 loops=1)
>   Filter: ((small_id)::integer = ($1)::integer)
> Total runtime: 210566.411 ms

Well, there's the smoking gun all right.  Why does it think there are
going to be 7893843 matching rows!?  Could we see the pg_stats row for
the large.small_id column?

regards, tom lane



Re: [SQL] array_lower /array_prepend doubt

2004-02-12 Thread Bruce Momjian
Joe Conway wrote:
> Bruce Momjian wrote:
> > Is this a TODO?
> 
> Probably -- something like:
>Modify array literal representation to handle array index lower bound
>of other than one

Added to TODO.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [SQL] array_lower /array_prepend doubt

2004-02-12 Thread Joe Conway
Bruce Momjian wrote:
Is this a TODO?
Probably -- something like:
  Modify array literal representation to handle array index lower bound
  of other than one
Joe



Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
ow <[EMAIL PROTECTED]> writes:
> Sounds pretty bad for my case. Any way to avoid the 10% scan?

Can't see how we optimize your case without pessimizing more-common cases.
Sorry.

regards, tom lane




Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread ow
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> According to this entry, your small_id column only contains the ten
> values 1..10, roughly evenly distributed.  So why are you probing for
> 239??

Let's say we have City (small) and Person (large) tables. A new city was added
(mistakenly) with id=239; it does not have any "persons" assigned yet. Hence,
we want to remove the wrong "city" record.

In any case, one can't remove a record from "small" unless there are NO matching
records in "large"; RI will not allow it. The initial problem was that I tried
to delete a record from "small" and it was taking about 3 min to do that.

> The planner is certainly going to estimate a probe for an unspecified
> value as retrieving 10% of the table, and under that assumption it's
> quite right to use a seqscan.

Sounds pretty bad for my case. Any way to avoid the 10% scan?

Thanks






[SQL] Index question

2004-02-12 Thread David Witham
Hi all,

I have a table with around 3M records in it and a few indexes on it. One of
them is on the day column. I get 10-20K new records a day. After running
ANALYZE in psql I tried the following queries:

buns=# explain select count(*) from cdr where day >= '20040127';
QUERY PLAN
--
 Aggregate  (cost=85596.50..85596.50 rows=1 width=0)
   ->  Seq Scan on cdr  (cost=0.00..85053.86 rows=217055 width=0)
 Filter: ("day" >= '2004-01-27'::date)
(3 rows)

buns=# explain select count(*) from cdr where day = '20040127'; 
   QUERY PLAN   

 Aggregate  (cost=12950.10..12950.10 rows=1 width=0)
   ->  Index Scan using cdr_ix1 on cdr  (cost=0.00..12928.00 rows=8839 width=0)
 Index Cond: ("day" = '2004-01-27'::date)
(3 rows)

buns=# explain select count(*) from cdr where day between '20040127' and current_date;
   QUERY PLAN  
 

 Aggregate  (cost=20129.91..20129.91 rows=1 width=0)
   ->  Index Scan using cdr_ix1 on cdr  (cost=0.00..20095.66 rows=13699 width=0)
 Index Cond: (("day" >= '2004-01-27'::date) AND ("day" <= ('now'::text)::date))
(3 rows)

I understand that selecting count(*) will involve a scan at some stage, but I
was surprised that the index wasn't used in the >= case yet was used in the
BETWEEN case.

Why is this so? Do I need to ANALYZE some more, or is this just the way the
query planner works?

Thanks,
David

David Witham
Telephony Platforms Architect
Unidial
Ph: 03 8628 3383
Fax: 03 8628 3399




Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes:
> Statistics say there are 10 values. Statistics list the 10 most common
> values (all of them). Given this, would it not be reasonable to assume
> that 239 is a recent addition (if there at all) to the table and not
> very common?

We don't know that it's 239 when we make the plan.  In order to know
that, we'd have to abandon caching of RI check query plans and re-plan
for each row.  That strikes me as inevitably a losing proposition.

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Tom Lane
ow <[EMAIL PROTECTED]> writes:
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>> Can't see how we optimize your case without pessimizing more-common cases.

> I think other RDBMSs simply use preset value instead of partial table
> scan when there's not enough stat info. Might be a better way.

The problem here cannot be described as "not enough stat info".  The
available stats are complete and they point very strongly to the
conclusion that searches in the large table should be seqscans.
To do otherwise would be folly in general, even if it happens to
be the correct thing in this particular example.

regards, tom lane



Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Kumar
I am having a problem there. See what happens:

sqlstr := 'insert into test(c1, c2) values ('||COALESCE(rec.c1,'NULL')||','
||'\''||rec.c2||'\')';

WARNING:  Error occurred while executing PL/pgSQL function test_fn
WARNING:  line 8 at assignment
ERROR:  pg_atoi: error in "NULL": can't parse "NULL"

Don't forget that c1 is int.

When I have it like this:
sqlstr := 'insert into test(c1, c2) values ('||COALESCE(rec.c1,NULL)||','
||'\''||rec.c2||'\')';

NOTICE:
WARNING:  Error occurred while executing PL/pgSQL function test_fn
WARNING:  line 11 at execute statement
ERROR:  cannot EXECUTE NULL query

That is the problem I am facing. Please shed some light.

Thanks
Kumar



Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Kumar
Dear all,

I solved it using the ISNULL function:

sqlstr := 'insert into test(c1, c2) values ('||ISNULL(rec.c1,'NULL')||','
||'\''||rec.c2||'\')';

Thanks
Kumar


Re: [SQL] Index question

2004-02-12 Thread David Witham
There are 18321 records for 20040127, so the estimate of 8839 for the = case
is low, but the planner still does the right thing.

There are 227197 records between '20040127' and current_date, so the estimate
in the >= case is accurate, but the estimate for the BETWEEN case is an order
of magnitude too low. Yet the BETWEEN case used the index I wanted and the
>= case didn't.

Regards,
David

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, 13 February 2004 16:38
To: David Witham
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Index question 


"David Witham" <[EMAIL PROTECTED]> writes:
> I understand that selecting count(*) will involve a scan at some
> stage, but I was surprised that the index wasn't used in the >= case,
> but was used in the between case.

Given the estimated row counts in your examples, the planner's choices
are not surprising.  You have not given us any information on whether
those estimates are accurate.

regards, tom lane
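One way to check those estimates is EXPLAIN ANALYZE, which prints the actual
row counts next to the planner's guesses (a sketch using the queries from the
original post):

```sql
-- "rows=" in the plan line is the planner's estimate;
-- "actual ... rows=" is what the scan really returned.
EXPLAIN ANALYZE SELECT count(*) FROM cdr WHERE day >= '20040127';
EXPLAIN ANALYZE SELECT count(*) FROM cdr WHERE day BETWEEN '20040127' AND current_date;
```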



Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Tomasz Myrta
On 2004-02-13 05:53, Kumar wrote:

I am having problem there. see what happens

sqlstr := 'insert into test(c1, c2) values ('||COALESCE(rec.c1,'NULL')||','
||'\''||rec.c2||'\')';
You are preparing a string, so make sure you have strings everywhere:
sqlstr := 'insert into test(c1, c2) values 
('||COALESCE(rec.c1::text,'NULL')||','||'\''||rec.c2||'\')';

Regards,
Tomasz Myrta
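Put together, the loop body of test_fn would then look something like this (a
sketch; the NOTICE comments show what the thread's sample rows should produce):

```sql
-- rec.c1 is int, so cast it to text before COALESCE; the fallback 'NULL'
-- is spliced into the string unquoted, so the parser sees the NULL keyword.
sqlstr := 'insert into test(c1, c2) values ('
          || COALESCE(rec.c1::text, 'NULL') || ','
          || '\'' || rec.c2 || '\')';
RAISE NOTICE '%', sqlstr;
EXECUTE sqlstr;
-- NOTICE:  insert into test(c1, c2) values (1,'Hai1')
-- NOTICE:  insert into test(c1, c2) values (NULL,'Hai2')
```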


[SQL] How to unsubscribe

2004-02-12 Thread Mona






Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-12 Thread Kumar
Oh, OK, understood.
What will happen for a timestamp field? Let us say c1 is a timestamp column.

sqlstr := 'insert into test(c1, c2) values
('||'\''||COALESCE(rec.c1,'NULL')||'\','
||'\''||rec.c2||'\')';

In this case the query will be
insert into test(c1,c2) values ('2004-02-13', 'Hai')

If a null value is encountered, the following query will return an error:
insert into test(c1,c2) values ('NULL', 'Hai')
ERROR:  Bad timestamp external representation 'NULL'

I think this could be solved using CASE, but is there any other simple way
to do it?

Thanks a lot, Mr. Tomasz Myrta
Kumar
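The CASE mentioned above could look like this (a sketch: quote the timestamp
only when it is present, otherwise splice in the bare NULL keyword):

```sql
-- For a timestamp c1: emit '<timestamp>' when not null, else unquoted NULL.
sqlstr := 'insert into test(c1, c2) values ('
          || CASE WHEN rec.c1 IS NULL THEN 'NULL'
                  ELSE '\'' || rec.c1::text || '\''
             END
          || ',' || '\'' || rec.c2 || '\')';
```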




Re: [SQL] 7.4 - FK constraint performance

2004-02-12 Thread Richard Huxton
On Friday 13 February 2004 04:25, Tom Lane wrote:
> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not
> > very common?
>
> We don't know that it's 239 when we make the plan.  In order to know
> that, we'd have to abandon caching of RI check query plans and re-plan
> for each row.  That strikes me as inevitably a losing proposition.

In this precise example, could you not:
  1. Check index for value
  2. If found, seq-scan

Of course that's only going to be a sensible thing to do if you're expecting 
one of two results:
  1. Value not there
  2. Lengthy seq-scan if it is there
-- 
  Richard Huxton
  Archonet Ltd
