Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-13 Thread Tomasz Myrta
On 2004-02-13 08:13, Kumar wrote:
> oh, ok understood.
> What will happen for a timestamp field? Let us say c1 is a timestamp column.
>
> sqlstr := 'insert into test(c1, c2) values
> ('||'\''||COALESCE(rec.c1,'NULL')||'\','
>    ||'\''||rec.c2||'\')';
>
> In this case the query will be
> insert into test(c1,c2) values ('2004-02-13', 'Hai')
>
> If a null value is encountered, it will return an error for the
> following query:
> insert into test(c1,c2) values ('NULL', 'Hai')
> ERROR:  Bad timestamp external representation 'NULL'
That's because you can't put NULL inside quotes. The valid query is:
insert into test(c1,c2) values (NULL, 'Hai');
Your dynamic query will then look like:

sqlstr := 'insert into test(c1, c2) values (' ||COALESCE('\'' || rec.c1 
|| '\'','NULL') ...

or more elegant:

sqlstr := 'insert into test(c1, c2) values (' 
||COALESCE(quote_literal(rec.c1),'NULL') ...
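For clarity, the NULL-handling logic of COALESCE(quote_literal(...), 'NULL') can be sketched in Python; the helper names below are illustrative, not part of PostgreSQL:

```python
# Illustrative sketch of the string-building discussed above. quote_lit
# mimics the relevant behavior of PostgreSQL's quote_literal() for simple
# values: it single-quotes (and doubles embedded quotes in) a non-NULL
# value, and yields NULL (None) for NULL input.

def quote_lit(value):
    """Return a single-quoted SQL literal, or None for NULL input."""
    if value is None:
        return None
    return "'" + str(value).replace("'", "''") + "'"

def build_insert(c1, c2):
    # COALESCE(quote_literal(rec.c1), 'NULL'): quote only when non-NULL,
    # otherwise emit the unquoted keyword NULL.
    c1_sql = quote_lit(c1) if c1 is not None else "NULL"
    return "insert into test(c1,c2) values (%s, %s)" % (c1_sql, quote_lit(c2))

print(build_insert("2004-02-13", "Hai"))
# insert into test(c1,c2) values ('2004-02-13', 'Hai')
print(build_insert(None, "Hai"))
# insert into test(c1,c2) values (NULL, 'Hai')
```

The key point is that NULL must reach the SQL string as the bare keyword, never wrapped in quotes.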

Regards,
Tomasz Myrta
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-13 Thread Kumar
Thanks, Tomasz Myrta. It is wonderful. I am still amazed at how you guys
know about options like quote_literal, etc.

Kumar





Re: [SQL] How to avoid nulls while writing string for dynamic query

2004-02-13 Thread Tomasz Myrta
On 2004-02-13 10:14, Kumar wrote:
> Thanks, Tomasz Myrta. It is wonderful. I am still amazed at how you guys
> know about options like quote_literal, etc.
> Kumar
Just read the manual ;-)
6.4. String Functions and Operators
Tomasz



Re: [SQL] column alias and group by/having/order

2004-02-13 Thread Rod Taylor
> select val1+val2 as val
> from some_table
> group by val having val>1;
> ERROR:  Attribute "val" not found
> 
> Is it a bug or a feature?

It's a mis-feature that GROUP BY accepts aliases from the select list at all.

HAVING's behavior here is correct.





Re: [SQL] arrays and polygons

2004-02-13 Thread Tom Lane
"David" <[EMAIL PROTECTED]> writes:
> ALTER TABLE species ADD COLUMN location polygon[];

> INSERT INTO species (location) VALUES ('{((432,89), (45,87), (89,87)),
> ((54,78), (34,98))};

I think you'd need to double-quote each polygon within the array
literal.

'{"((432,89), (45,87), (89,87))", "..."}'

The array parser doesn't think parens are special, so it's not going to
magically distinguish array commas from polygon commas for you.
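The quoting rule can be sketched in Python (the helper name is illustrative; this only shows how the literal string is assembled, not PostgreSQL's own parser):

```python
# Illustrative sketch: why each polygon must be double-quoted inside the
# array literal. The array parser splits elements on commas, so a polygon's
# internal commas would otherwise be taken as element separators.

def polygon_array_literal(polygons):
    """Build a PostgreSQL array literal from polygon strings, wrapping
    each element in double quotes so its internal commas and parens are
    treated as part of the element, not as array syntax."""
    return "{" + ", ".join('"%s"' % p for p in polygons) + "}"

lit = polygon_array_literal(["((432,89), (45,87), (89,87))",
                             "((54,78), (34,98))"])
print(lit)
# {"((432,89), (45,87), (89,87))", "((54,78), (34,98))"}
```

The resulting string is what goes between the outer single quotes of the INSERT statement's array literal.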

BTW, if you are using 7.4, the ARRAY[] constructor syntax might be
easier to use.

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-13 Thread Stephan Szabo
On Thu, 12 Feb 2004, Tom Lane wrote:

> Rod Taylor <[EMAIL PROTECTED]> writes:
> > Statistics say there are 10 values. Statistics list the 10 most common
> > values (all of them). Given this, would it not be reasonable to assume
> > that 239 is a recent addition (if there at all) to the table and not
> > very common?
>
> We don't know that it's 239 when we make the plan.  In order to know
> that, we'd have to abandon caching of RI check query plans and re-plan
> for each row.  That strikes me as inevitably a losing proposition.

One thing is that IIRC we're going to ask for only one row when we do the
SPI_execp_current.  However, unless I misremember, the behavior of FOR
UPDATE and LIMIT means that saying LIMIT 1 is potentially unsafe (if you
block on a row that goes away).  Is there any way for us to let the
planner know this?





Re: [SQL] column alias and group by/having/order

2004-02-13 Thread Tom Lane
Tomasz Myrta <[EMAIL PROTECTED]> writes:
> I found some strange column alias behaviour:

That's how it is.  The fact that you can reference output-column aliases
at all in GROUP BY or ORDER BY is a wart, because it conflicts with the
logical model of query evaluation --- the output list should be computed
last.  We have extended the SQL92 requirement (unadorned aliases in
ORDER BY) to allow the same in GROUP BY, but we don't take it to the
level of allowing them inside arbitrary expressions.

regards, tom lane




Re: [SQL] 7.4 - FK constraint performance

2004-02-13 Thread Tom Lane
Stephan Szabo <[EMAIL PROTECTED]> writes:
> One thing is that IIRC we're going to ask for only one row when we do the
> SPI_execp_current.  However, unless I misremember, the behavior of FOR
> UPDATE and LIMIT means that saying LIMIT 1 is potentially unsafe (if you
> block on a row that goes away).  Is there any way for us to let the
> planner know this?

I was looking at that last night.  It seems like we could add a LIMIT at
least in some contexts.  In the case at hand, we're just going to error
out immediately if we find a matching row, and so there's no need for
FOR UPDATE, is there?

However, I'm not sure it would help the OP anyway.  With the stats he
had, the planner would still take a seqscan, because it's going to
expect that it can find a match by probing the first ten or so rows of
the first page.  With anything close to the normal cost parameters,
that's going to look more expensive than an index probe.  Possibly if
the table had a few more values it would work.

But in general it would be a good idea if the planner knew that plan
evaluation would stop after the first row.  We could look at passing
that info down out-of-band instead of using LIMIT.  There's already
support for this to allow EXISTS() subqueries to be planned properly;
see the tuple_fraction stuff in planner.c.  We just can't get at it
via SPI ...

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-13 Thread Stephan Szabo
On Fri, 13 Feb 2004, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > One thing is that IIRC we're going to ask for only one row when we do the
> > SPI_execp_current.  However, unless I misremember, the behavior of FOR
> > UPDATE and LIMIT means that saying LIMIT 1 is potentially unsafe (if you
> > block on a row that goes away).  Is there any way for us to let the
> > planner know this?
>
> I was looking at that last night.  It seems like we could add a LIMIT at
> least in some contexts.  In the case at hand, we're just going to error
> out immediately if we find a matching row, and so there's no need for
> FOR UPDATE, is there?

I think there still is, because a not-yet-committed transaction could have
deleted them all, in which case I think the correct behavior is to wait:
if that transaction commits, allow the action; if it rolls back, raise
the error.

Really we'd want a different behavior, where in these cases we block only
if all the matching rows are locked by other transactions.

> However, I'm not sure it would help the OP anyway.  With the stats he
> had, the planner would still take a seqscan, because it's going to
> expect that it can find a match by probing the first ten or so rows of
> the first page.  With anything close to the normal cost parameters,
> that's going to look more expensive than an index probe.  Possibly if
> the table had a few more values it would work.

Hmm, that's true. It also doesn't help the real actions (cascade, set *)
since those really do need to get at all the rows, but it probably helps
in a reasonable number of cases.



Re: [SQL] 7.4 - FK constraint performance

2004-02-13 Thread Tom Lane
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Fri, 13 Feb 2004, Tom Lane wrote:
>> I was looking at that last night.  It seems like we could add a LIMIT at
>> least in some contexts.  In the case at hand, we're just going to error
>> out immediately if we find a matching row, and so there's no need for
>> FOR UPDATE, is there?

> I think there still is, because a not-yet-committed transaction could have
> deleted them all, in which case I think the correct behavior is to wait:
> if that transaction commits, allow the action; if it rolls back, raise
> the error.

Good point.  Okay, we can't put in a LIMIT.  But we could still hack the
planner to prefer a fast-start plan by passing an out-of-band tuple
fraction, for those RI plans where it's appropriate.  That would not
affect correctness.

>> However, I'm not sure it would help the OP anyway.  With the stats he
>> had, the planner would still take a seqscan, because it's going to
>> expect that it can find a match by probing the first ten or so rows of
>> the first page.  With anything close to the normal cost parameters,
>> that's going to look more expensive than an index probe.

s/more expensive/less expensive/ ... need more caffeine obviously ...

regards, tom lane



Re: [SQL] 7.4 - FK constraint performance

2004-02-13 Thread Stephan Szabo

On Fri, 13 Feb 2004, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > On Fri, 13 Feb 2004, Tom Lane wrote:
> >> I was looking at that last night.  It seems like we could add a LIMIT at
> >> least in some contexts.  In the case at hand, we're just going to error
> >> out immediately if we find a matching row, and so there's no need for
> >> FOR UPDATE, is there?
>
> > I think there still is, because a not-yet-committed transaction could have
> > deleted them all, in which case I think the correct behavior is to wait:
> > if that transaction commits, allow the action; if it rolls back, raise
> > the error.
>
> Good point.  Okay, we can't put in a LIMIT.  But we could still hack the
> planner to prefer a fast-start plan by passing an out-of-band tuple
> fraction, for those RI plans where it's appropriate.  That would not
> affect correctness.

Right, I can try to look through the stuff you pointed at in the previous
message over the weekend.

> >> However, I'm not sure it would help the OP anyway.  With the stats he
> >> had, the planner would still take a seqscan, because it's going to
> >> expect that it can find a match by probing the first ten or so rows of
> >> the first page.  With anything close to the normal cost parameters,
> >> that's going to look more expensive than an index probe.
>
> s/more expensive/less expensive/ ... need more caffeine obviously ...

Me too apparently, since I knew what you were saying and agreed despite
the wording.



Re: [SQL] arrays and polygons

2004-02-13 Thread David
Thanks to you both; that helped enormously.

Dave





Re: [SQL] arrays and polygons

2004-02-13 Thread Joe Conway
Tom Lane wrote:
> "David" <[EMAIL PROTECTED]> writes:
> > INSERT INTO species (location) VALUES ('{((432,89), (45,87), (89,87)),
> > ((54,78), (34,98))};
>
> I think you'd need to double-quote each polygon within the array
> literal.
>
> 	'{"((432,89), (45,87), (89,87))", "..."}'
>
> The array parser doesn't think parens are special, so it's not going to
> magically distinguish array commas from polygon commas for you.
>
> BTW, if you are using 7.4, the ARRAY[] constructor syntax might be
> easier to use.
FWIW, here's what it would look like in 7.4.x:

regression=# select ARRAY['((432,89), (45,87), (89,87))'::polygon,
'((432,89), (45,87), (89,87))'];
                            array
--------------------------------------------------------------
 {"((432,89),(45,87),(89,87))","((432,89),(45,87),(89,87))"}
(1 row)

You need to explicitly cast at least the first polygon in order to get 
an array of polygons (versus an array of text).

HTH,

Joe
