[SQL] exception handling and CONTINUE

2008-07-08 Thread Marcin Krawczyk
Hi all. Can anyone tell me if there's a way to use the CONTINUE clause outside
a loop?
An example :

FOR a IN SELECT * FROM xxx
LOOP

INSERT INTO yyy VALUES (a.***, ..)

END LOOP;

EXCEPTION WHEN unique_violation THEN CONTINUE;

I get an error saying I can't use CONTINUE outside of a loop. Is there a way
around this?

regards
mk


Re: [SQL] exception handling and CONTINUE

2008-07-08 Thread Alvaro Herrera
Marcin Krawczyk wrote:
 Hi all. Can anyone tell me if there's a way to use CONTINUE clause outside
 the loop ?
 An example :
 
 FOR a IN SELECT * FROM xxx
 LOOP
 
 INSERT INTO yyy VALUES (a.***, ..)
 
 END LOOP;
 
 EXCEPTION WHEN unique_violation THEN CONTINUE;

Try something like this:

for a in select * from xxx
loop
  begin
    insert into yyy values (...);
  exception when unique_violation then
    null;   -- noop, just for clarity; skip this row and keep looping
  end;
end loop;
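
For completeness, the same idea wrapped up as a whole function; a minimal
sketch only, assuming hypothetical tables xxx and yyy that share a single
column named val:

create or replace function copy_xxx_to_yyy() returns void as $$
declare
  a record;
begin
  for a in select * from xxx loop
    begin
      insert into yyy values (a.val);
    exception when unique_violation then
      null;  -- duplicate key: skip this row and keep looping
    end;
  end loop;
end;
$$ language plpgsql;

Note that each begin/exception sub-block starts a subtransaction, so there
is a small per-row overhead.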

-- 
Alvaro Herrera            http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [SQL] exception handling and CONTINUE

2008-07-08 Thread Pavel Stehule
No, you can use CONTINUE only inside a loop. When you want to ignore an
exception, just do nothing:

 For example, the following two fragments of code are equivalent:

BEGIN
    y := x / 0;
EXCEPTION
    WHEN division_by_zero THEN
        NULL;  -- ignore the error
END;

BEGIN
    y := x / 0;
EXCEPTION
    WHEN division_by_zero THEN  -- ignore the error
END;

http://www.postgresql.org/docs/8.3/static/plpgsql-statements.html

Regards
Pavel Stehule

2008/7/8 Marcin Krawczyk [EMAIL PROTECTED]:
 Hi all. Can anyone tell me if there's a way to use CONTINUE clause outside
 the loop ?
 An example :

 FOR a IN SELECT * FROM xxx
 LOOP

 INSERT INTO yyy VALUES (a.***, ..)

 END LOOP;

 EXCEPTION WHEN unique_violation THEN CONTINUE;

 I get an error saying I can't use CONTINUE outside of a loop. Is there a way
 around this ?

 regards
 mk



Re: [SQL] how to control the execution plan ?

2008-07-08 Thread Sabin Coanda
Hi Scott,

I add the answers below.


 Just wondering what the query plans look like here, both regular
 explain, and if you can wait for it to execute, explain analyze.


Just with EXPLAIN, because the function crashes when the query actually runs:

Merge Join  (cost=141.41..188.32 rows=1 width=24)
  Merge Cond: (TABLE_C.PK_ID = bp.COL_C)
  ->  Merge Left Join  (cost=62.33..96.69 rows=1000 width=44)
        Merge Cond: (s.PK_ID = MY_FUNCTION_B.COL_D)
        ->  Index Scan using TABLE_D_pkey on TABLE_D s  (cost=0.00..18.49 rows=349 width=4)
        ->  Sort  (cost=62.33..64.83 rows=1000 width=44)
              Sort Key: MY_FUNCTION_B.COL_D
              ->  Function Scan on MY_FUNCTION_B  (cost=0.00..12.50 rows=1000 width=44)
  ->  Sort  (cost=79.08..79.09 rows=1 width=28)
        Sort Key: bp.COL_C
        ->  Hash Join  (cost=10.59..79.07 rows=1 width=28)
              Hash Cond: (bp.COL_B = pn.PK_ID)
              ->  Seq Scan on TABLE_A bp  (cost=0.00..68.46 rows=4 width=32)
                    Filter: ((COL_A IS NOT NULL) AND ((COL_A)::text <> ''::text) AND (((MY_FUNCTION_A(COL_A, NULL::boolean))::text || ' '::text) IS NULL))
              ->  Hash  (cost=10.50..10.50 rows=7 width=4)
                    ->  Seq Scan on TABLE_B pn  (cost=0.00..10.50 rows=7 width=4)
                          Filter: ((COL_E)::text ~~ 'Some%'::text)

 I'm guessing that the function is not indexed / indexable.  Is it
 marked immutable (and is it actually immutable) or stable (and is
 stable)?


The function is marked stable.
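
(For reference, volatility is declared on the function itself; the argument
types below are only a guess based on the casts visible in the plan:)

ALTER FUNCTION MY_FUNCTION_A(character varying, boolean) STABLE;
-- or IMMUTABLE if the result really depends only on the arguments;
-- only IMMUTABLE functions can be used in an expression index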

 If it's still too smart, you can run two queries, one to pull the set
 you want to work with from the custom function into a temp table, then
 analyze it, then run the query against that.
 Not an optimal solution, but it might be the fastest if you can't
 index your function.


In fact I would use that statement to define a permanent view, not in a 
procedure.
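
For reference, the temp-table route quoted above would look roughly like
this; a sketch only, assuming MY_FUNCTION_B takes no arguments and reusing
the placeholder names from the plans:

CREATE TEMP TABLE tmp_func_b AS
SELECT * FROM MY_FUNCTION_B();   -- materialize the set-returning function once

ANALYZE tmp_func_b;              -- give the planner real row counts

SELECT s.PK_ID, fb.COL_D         -- then join the temp table instead
FROM TABLE_D s
LEFT JOIN tmp_func_b fb ON fb.COL_D = s.PK_ID;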

Finally I found a trick: instead of specifying just WHERE (x.ALIAS_A::text) IS NULL,
I combine the expression with a constant supplied by a cross join.
By the way, it doesn't work with just the constant or with just the join :(
See the query and the plan below:

EXPLAIN SELECT *
FROM (
SELECT MY_FUNCTION_A(bp.COL_A, NULL::boolean) AS ALIAS_A, MY_AUX
FROM TABLE_A bp
CROSS JOIN (
SELECT '*'::character varying AS MY_AUX
) afp
JOIN TABLE_B pn ON bp.COL_B = pn.PK_ID
JOIN TABLE_C vbo ON bp.COL_C = vbo.PK_ID
WHERE pn.COL_E::text ~~ 'Some%'::text AND bp.COL_A IS NOT NULL
AND bp.COL_A::text <> ''::text
) x
WHERE (x.ALIAS_A::text || ' ' || MY_AUX) IS NULL;


Merge Join  (cost=131.68..178.60 rows=1 width=56)
  Merge Cond: (TABLE_C.PK_ID = bp.COL_C)
  ->  Merge Left Join  (cost=62.33..96.69 rows=1000 width=44)
        Merge Cond: (s.PK_ID = MY_FUNCTION_B.COL_D)
        ->  Index Scan using TABLE_D_pkey on TABLE_D s  (cost=0.00..18.49 rows=349 width=4)
        ->  Sort  (cost=62.33..64.83 rows=1000 width=44)
              Sort Key: MY_FUNCTION_B.COL_D
              ->  Function Scan on MY_FUNCTION_B  (cost=0.00..12.50 rows=1000 width=44)
  ->  Sort  (cost=69.36..69.36 rows=1 width=60)
        Sort Key: bp.COL_C
        ->  Nested Loop  (cost=10.59..69.34 rows=1 width=60)
              Join Filter: ((((MY_FUNCTION_A(bp.COL_A, NULL::boolean))::text || ' '::text) || (afp.MY_AUX)::text) IS NULL)
              ->  Result  (cost=0.00..0.01 rows=1 width=0)
              ->  Hash Join  (cost=10.59..68.94 rows=22 width=28)
                    Hash Cond: (bp.COL_B = pn.PK_ID)
                    ->  Seq Scan on TABLE_A bp  (cost=0.00..54.90 rows=862 width=32)
                          Filter: ((COL_A IS NOT NULL) AND ((COL_A)::text <> ''::text))
                    ->  Hash  (cost=10.50..10.50 rows=7 width=4)
                          ->  Seq Scan on TABLE_B pn  (cost=0.00..10.50 rows=7 width=4)
                                Filter: ((COL_E)::text ~~ 'Some%'::text)

However, I'm not sure there are no circumstances in which the planner will
see through my trick and optimize the query again :((
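
For what it's worth, another commonly used barrier is OFFSET 0 in the
subquery: the planner will neither flatten such a subquery nor push the
outer qual down into it, so MY_FUNCTION_A is evaluated only on the joined,
already filtered rows. A sketch only, reusing the placeholder names above:

SELECT *
FROM (
    SELECT MY_FUNCTION_A(bp.COL_A, NULL::boolean) AS ALIAS_A
    FROM TABLE_A bp
    JOIN TABLE_B pn ON bp.COL_B = pn.PK_ID
    JOIN TABLE_C vbo ON bp.COL_C = vbo.PK_ID
    WHERE pn.COL_E::text ~~ 'Some%'::text
      AND bp.COL_A IS NOT NULL
      AND bp.COL_A::text <> ''::text
    OFFSET 0    -- blocks subquery flattening and qual pushdown
) x
WHERE x.ALIAS_A::text IS NULL;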

Sabin 





Re: [SQL] exception handling and CONTINUE

2008-07-08 Thread Marcin Krawczyk
Thank you guys.

Alvaro, your idea works the way I wanted. Why didn't I think of it? :)

regards

mk


[SQL] Best practice for long-lived journal tables: bigint or recycling IDs?

2008-07-08 Thread Mark Stosberg

Hello,

I have some tables that continually collect statistics, and then over time are
pruned as the stats are aggregated into more useful formats. 

For some of these tables, it is foreseeable that the associated sequences
will be incremented past the max value of the int type in the normal course
of things.

I see two options to prepare for that:

1. Convert the primary keys to bigint, which should be good enough forever.
I suppose there would be some minor storage and performance penalty.

2. Reset the sequence at some point. There would be no collisions, because the
older rows would have long been pruned out. I suppose there is an improbable
edge case in which we restore some old data from tape and then are confused
because some new data has the same IDs, but as I said, these tables are used as
temporary holding locations, not permanent storage. 

Both options have some appeal for me. What have others done?
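
For concreteness, a rough sketch of what each option amounts to, using a
hypothetical table stats_journal with a serial primary key id:

-- option 1: widen the key (and any referencing columns) to bigint
ALTER TABLE stats_journal ALTER COLUMN id TYPE bigint;

-- option 2: let the sequence wrap back to 1 instead of erroring out
ALTER SEQUENCE stats_journal_id_seq MAXVALUE 2147483647 CYCLE;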

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .





Re: [SQL] Best practice for long-lived journal tables: bigint or recycling IDs?

2008-07-08 Thread Alvaro Herrera
Mark Stosberg wrote:
 
 Hello,
 
 I have some tables that continually collect statistics, and then over time are
 pruned as the stats are aggregated into more useful formats. 
 
 For some of these tables, it is foreseeable that the associated sequences
 would be incremented past the max value of the int type in the normal course
 of things. 
 
 I see two options to prepare for that:

3. Deal with wraparound by ensuring that the applications behave sanely

-- 
Alvaro Herrera            http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[SQL] Re: Best practice for long-lived journal tables: bigint or recycling IDs?

2008-07-08 Thread Mark Stosberg
On Tue, 8 Jul 2008 17:20:13 -0400
Alvaro Herrera [EMAIL PROTECTED] wrote:

 Mark Stosberg wrote:
  
  Hello,
  
  I have some tables that continually collect statistics, and then over time 
  are
  pruned as the stats are aggregated into more useful formats. 
  
  For some of these tables, it is foreseeable that the associated 
  sequences
  would be incremented past the max value of the int type in the normal 
  course
  of things. 
  
  I see two options to prepare for that:
 
 3. Deal with wraparound by ensuring that the applications behave sanely

Wrap-around?  

Exceeding the max size of int looks more like a brick wall than wrap-around 
to me:

 insert into t values (2147483648);
 ERROR:  integer out of range

Mark

-- 
 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
   Mark Stosberg          Principal Developer
   [EMAIL PROTECTED] Summersault, LLC 
   765-939-9301 ext 202 database driven websites
 . . . . . http://www.summersault.com/ . . . . . . . .






Re: [SQL] Re: Best practice for long-lived journal tables: bigint or recycling IDs?

2008-07-08 Thread Alvaro Herrera
Mark Stosberg wrote:
 On Tue, 8 Jul 2008 17:20:13 -0400
 Alvaro Herrera [EMAIL PROTECTED] wrote:
 
  3. Deal with wraparound by ensuring that the applications behave sanely
 
 Wrap-around?  
 
 Exceeding the max size of int looks more like a brick wall than wrap-around 
 to me:
 
  insert into t values (2147483648);
  ERROR:  integer out of range

Hmm, you can alter the sequence so that it wraps around at the point it
reaches INT_MAX.  So inserting this number would never actually happen.

alvherre=# create table t (a serial);
NOTICE:  CREATE TABLE will create implicit sequence "t_a_seq" for serial column "t.a"
CREATE TABLE
alvherre=# alter sequence t_a_seq maxvalue 2147483647;
ALTER SEQUENCE
alvherre=# alter sequence t_a_seq cycle;
ALTER SEQUENCE
alvherre=# select setval('t_a_seq', 2147483645);
   setval   
------------
 2147483645
(1 row)

alvherre=# insert into t default values;
INSERT 0 1
alvherre=# insert into t default values;
INSERT 0 1
alvherre=# insert into t default values;
INSERT 0 1
alvherre=# insert into t default values;
INSERT 0 1
alvherre=# insert into t default values;
INSERT 0 1
alvherre=# select * from t;
     a      
------------
 2147483646
 2147483647
          1
          2
          3
(5 rows)


-- 
Alvaro Herrera            http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
