[SQL] Very low performance on table with only 298 rows

2005-04-14 Thread Andreas Joseph Krogh
Hi all.
I have a problem with a table which only has 298 rows in it. A select count(*) 
takes forever and CTRL-C also takes a long time canceling the query. There 
are only 298 rows in the table when count(*) returns. How come it takes such 
a long time counting only 298 rows? Here is EXPLAIN ANALYZE:

nbeweb=> EXPLAIN ANALYZE select count(*) from onp_web_index;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=205759.52..205759.52 rows=1 width=0) (actual time=179748.910..179748.913 rows=1 loops=1)
   ->  Seq Scan on onp_web_index  (cost=0.00..205455.41 rows=121641 width=0) (actual time=179735.956..179747.821 rows=298 loops=1)
 Total runtime: 179748.993 ms
(3 rows)


Here is the schema:

nbeweb=> \d onp_web_index
                             Table "public.onp_web_index"
   Column    |       Type        |                           Modifiers
-------------+-------------------+----------------------------------------------------------------
 id          | integer           | not null default nextval('public.onp_web_index_id_seq'::text)
 starturl_id | integer           | not null
 lang_id     | integer           | not null
 url_host    | character varying | not null
 url_path    | character varying | not null
 title       | character varying | not null
 content     | tsvector          | not null
 plain_text  | character varying | not null
Indexes:
    "onp_web_index_pkey" primary key, btree (id)
    "onp_web_index_url_host_key" unique, btree (url_host, url_path)
    "onp_web_index_idx" gist (content)
Foreign-key constraints:
    "$2" FOREIGN KEY (lang_id) REFERENCES code(id)
    "$1" FOREIGN KEY (starturl_id) REFERENCES onp_web_index_starturl(id) ON DELETE CASCADE

Any hints on how I can "fix" this lousy performance?

-- 
Andreas Joseph Krogh <[EMAIL PROTECTED]>
Senior Software Developer / Manager
gpg public_key: http://dev.officenet.no/~andreak/public_key.asc
+-+
OfficeNet AS| The most difficult thing in the world is to |
Hoffsveien 17   | know how to do a thing and to watch |
PO. Box 425 Skøyen  | somebody else doing it wrong, without   |
0213 Oslo   | comment.|
NORWAY  | |
Phone : +47 22 13 01 00 | |
Direct: +47 22 13 10 03 | |
Mobile: +47 909  56 963 | |
+-+

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Very low performance on table with only 298 rows

2005-04-14 Thread Richard Huxton
Andreas Joseph Krogh wrote:
> Hi all.
> I have a problem with a table which only has 298 rows in it. A select count(*)
> takes forever and CTRL-C also takes a long time canceling the query. There
> are only 298 rows in the table when count(*) returns. How come it takes such
> a long time counting only 298 rows? Here is EXPLAIN ANALYZE:
>
> nbeweb=> EXPLAIN ANALYZE select count(*) from onp_web_index;
>  Aggregate  (cost=205759.52..205759.52 rows=1 width=0) (actual time=179748.910..179748.913 rows=1 loops=1)
>    ->  Seq Scan on onp_web_index  (cost=0.00..205455.41 rows=121641 width=0) (actual time=179735.956..179747.821 rows=298 loops=1)
>  Total runtime: 179748.993 ms
Why does PG think you have 121641 rows in this table?
How often do you vacuum it?
How often do you analyse it?
Does "vacuum full verbose" show a lot of dead rows being removed?
I'm suspecting a *lot* of dead rows need to be removed.
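
A quick way to check what the planner believes about the table: reltuples and relpages in pg_class are set by the last VACUUM/ANALYZE, so a large gap between reltuples and the real row count points at bloat or stale statistics. A sketch, using the table name from the thread:

```sql
-- Planner's view of the table (updated by VACUUM/ANALYZE)
SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'onp_web_index';
```

If reltuples is near 121641 while count(*) says 298, the statistics are stale and/or the table is full of dead rows.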
--
  Richard Huxton
  Archonet Ltd


[SQL] dynamic 'INSERT' query?

2005-04-14 Thread Dinesh Pandey
How can we set A1, A2 values in a dynamic 'INSERT' query?

DECLARE
    _record RECORD;
    _sql    VARCHAR(2000);

FOR _record IN SELECT A1, A2 FROM A
LOOP
    _sql := 'INSERT INTO B VALUES (:A1, :A2)';
    EXECUTE (_sql);
END LOOP;

=

I can do this as follows (but I need another way, instead of using the
|| operator):

_sql := 'INSERT INTO B VALUES (' || _record.A1 || ',' || _record.A2 || ')';

=

Regards
Dinesh Pandey


--
Dinesh Pandey 
Sr. Software Engineer

Re: [SQL] Very low performance on table with only 298 rows

2005-04-14 Thread Andreas Joseph Krogh
On Thursday 14 April 2005 13:28, Richard Huxton wrote:
> Andreas Joseph Krogh wrote:
> > Hi all.
> > I have a problem with a table which only has 298 rows in it. A select
> > count(*) takes forever and CTRL-C also takes a long time canceling the
> > query. There are only 298 rows in the table when count(*) returns. How
> > come it takes such a long time counting only 298 rows? Here is EXPLAIN
> > ANALYZE:
> >
> > nbeweb=> EXPLAIN ANALYZE select count(*) from onp_web_index;
> >  Aggregate  (cost=205759.52..205759.52 rows=1 width=0) (actual time=179748.910..179748.913 rows=1 loops=1)
> >    ->  Seq Scan on onp_web_index  (cost=0.00..205455.41 rows=121641 width=0) (actual time=179735.956..179747.821 rows=298 loops=1)
> >  Total runtime: 179748.993 ms
>
> Why does PG think you have 121641 rows in this table?
> How often do you vacuum it?
> How often do you analyse it?
> Does "vacuum full verbose" show a lot of dead rows being removed?
>
> I'm suspecting a *lot* of dead rows need to be removed.

I vacuum the whole database every sunday with the command:
VACUUM ANALYZE
Do I need to issue a VACUUM FULL? I've never VACUUM FULL'ed it..

I have other, much larger tables which take a lot less time to count(*). Can it 
be that this table is hosed somehow? Note: I have a column of type tsvector 
in there; can it be that this column causes some unexpected trouble?

I'm issuing a VACUUM FULL verbose now...




Re: [SQL] dynamic 'INSERT' query?

2005-04-14 Thread Dinesh Pandey

FOR _record IN SELECT A1, A2 FROM A

LOOP

--[Dinesh Pandey] -- Sorry I didn't get this part.
_sql := 'INSERT INTO B VALUES (''' || _record.A1 || ''', ''' || _record.A2 || ''')';

EXECUTE (_sql);

END LOOP;



Dinesh Pandey wrote:

> How can we set A1, A2 values in dynamic 'INSERT' query?
>
> DECLARE
>
> _record RECORD;
>
> _sql VARCHAR(2000);
>
> FOR _record IN SELECT A1, A2 FROM A
>
> LOOP
>
> _sql := 'INSERT INTO B VALUES (:A1, :A2)';
>
> EXECUTE (_sql);
>
> END LOOP;
>
> =
>
> I can do this as (but I need another way instead of using || operator).
>
> _sql := 'INSERT INTO B VALUES (' || _record.A1 || ',' || _record.A2 || 
> ')';
>
> =
>
> Regards
> Dinesh Pandey
>
>
>
> --
> Dinesh Pandey
> Sr. Software Engineer
>






Re: [SQL] dynamic 'INSERT' query?

2005-04-14 Thread Sean Davis
On Apr 14, 2005, at 7:37 AM, Dinesh Pandey wrote:
> How can we set A1, A2 values in a dynamic 'INSERT' query?
>
> DECLARE
>     _record RECORD;
>     _sql    VARCHAR(2000);
>
> FOR _record IN SELECT A1, A2 FROM A
> LOOP
>     _sql := 'INSERT INTO B VALUES (:A1, :A2)';
>     EXECUTE (_sql);
> END LOOP;
>
> =
> I can do this as (but I need another way instead of using the || operator).
>
> _sql := 'INSERT INTO B VALUES (' || _record.A1 || ',' || _record.A2 || ')';

Dinesh,
I think what you are showing here IS the way to build up a dynamic sql 
statement.  I'm not sure that you can write a prepared statement within 
the body of a function, which would then look more like what you are 
suggesting you want to do--perhaps others on the list can enlighten us 
about that.  In any case, why won't using the || operator work for you?

Sean


[SQL] Prepared query ?

2005-04-14 Thread Dinesh Pandey
How can I

 1. Write a prepared query and
 2. Set values at run time and
 3. Execute that query for different values.

LOOP
    ...
    ...
    sql := INSERT INTO MYTABLE VALUES(?, ?);
    ...
    --EXECUTE (sql);
END LOOP;

Regards
Dinesh Pandey


--
Dinesh Pandey 
Sr. Software Engineer

Second Foundation (India) Pvt. Ltd.
Plot# 52, Industrial Area, Phase II
Chandigarh (India)
PH: (O) 0172-2639202, Extn: 233


Re: [SQL] dynamic 'INSERT' query?

2005-04-14 Thread Tom Lane
"Dinesh Pandey" <[EMAIL PROTECTED]> writes:
> FOR _record IN SELECT A1, A2 FROM A
> LOOP
> _sql := 'INSERT INTO B VALUES (:A1, :A2)';
> EXECUTE (_sql);

Why do you need a dynamic query here at all?  You could just do

  FOR _record IN SELECT A1, A2 FROM A
  LOOP
INSERT INTO B VALUES (_record.A1, _record.A2);

This would be much faster as well as simpler to code.

As far as I can see offhand, you only need EXECUTE when you want to
change table and/or field names from one execution to the next of
a particular query.  If you have such a problem, you are omitting
the important details ...

regards, tom lane
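
Tom's direct-INSERT suggestion, written out as a complete function (a sketch: copy_a_to_b is a made-up name; tables A and B, and the single-quoted function body, follow the pre-8.0 style used elsewhere in this thread):

```sql
CREATE OR REPLACE FUNCTION copy_a_to_b() RETURNS integer AS '
DECLARE
    _record RECORD;
    _n      integer := 0;
BEGIN
    FOR _record IN SELECT A1, A2 FROM A LOOP
        -- no EXECUTE needed: table and column names are fixed
        INSERT INTO B VALUES (_record.A1, _record.A2);
        _n := _n + 1;
    END LOOP;
    RETURN _n;
END;
' LANGUAGE plpgsql;
```

For this particular case, of course, a plain "INSERT INTO B SELECT A1, A2 FROM A" needs no loop or function at all.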



Re: [SQL] Prepared query ?

2005-04-14 Thread Richard Huxton
Dinesh Pandey wrote:
How can I
   1. Write a prepared query and
   2. Set values at run time and
   3. Execute that query for different values.

LOOP
 ...
 ...
sql := INSERT INTO MYTABLE VALUES(?, ?);

--EXECUTE (sql);
You don't say what language you want to use, but assuming it's plpgsql, 
then for the simple case you can just do:

LOOP
  INSERT INTO mytable VALUES (var1, var2);
END LOOP
Now, let's say you wanted to change mytable as well as the values, you'd use
  sql := ''INSERT INTO '' || quote_ident(my_table_variable) || '' 
VALUES  ...etc'';
  EXECUTE sql;

Does that help?
--
  Richard Huxton
  Archonet Ltd


Re: [SQL] dynamic 'INSERT' query?

2005-04-14 Thread Dinesh Pandey
Yes, here the TABLE NAME is also coming at runtime.

In Oracle I have done it this way:
---
LOOP
v_sql := 'INSERT INTO ' || mytable || ' VALUES(:in_sentryid_id,
:in_node_id)';  
EXECUTE IMMEDIATE v_sql USING v_sentryid, v_nodeid; 
END LOOP;

How to convert this into Postgres?

Thanks
Dinesh Pandey
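
For the record, a PL/pgSQL rendering of the Oracle loop above (a sketch: EXECUTE ... USING did not exist in the PostgreSQL versions of this era, so the values are spliced into the string, with quote_ident/quote_literal guarding against quoting problems; mytable, v_sentryid and v_nodeid are taken from the Oracle snippet):

```sql
LOOP
    -- build the statement text; quote_ident/quote_literal handle escaping
    v_sql := 'INSERT INTO ' || quote_ident(mytable)
          || ' VALUES (' || quote_literal(v_sentryid)
          || ', ' || quote_literal(v_nodeid) || ')';
    EXECUTE v_sql;
END LOOP;
```

Modern PostgreSQL (8.4 and later) accepts EXECUTE v_sql USING v_sentryid, v_nodeid with $1/$2 placeholders, which is the closest match to Oracle's EXECUTE IMMEDIATE ... USING.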





Re: [SQL] Prepared query ?

2005-04-14 Thread Dinesh Pandey
I am using PL/pgSQL.

Thanks
Dinesh Pandey

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Frank Bax
Sent: Thursday, April 14, 2005 8:24 PM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] Prepared query ?

At 10:28 AM 4/14/05, Dinesh Pandey wrote:
> How can I
>  1. Write a prepared query and
>  2. Set values at run time and
>  3. Execute that query for different values.

Depends on what language you are using - try the interfaces mailing list.



Re: [SQL] Prepared query ?

2005-04-14 Thread Frank Bax


At 10:28 AM 4/14/05, Dinesh Pandey wrote:
> How can I
>  1. Write a prepared query and
>  2. Set values at run time and
>  3. Execute that query for different values.

Depends on what language you are using - try the interfaces mailing list.


Re: [SQL] Prepared query ?

2005-04-14 Thread Dinesh Pandey
Yes, you got my problem correctly. I want to change the table name and values
at run time.

Is there any other way instead of using the "||" (append) operator? Something
like a prepared statement:

PREPARE fooplan (int, text, bool, numeric) AS
INSERT INTO foo VALUES($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);


Thanks
Dinesh Pandey

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 14, 2005 8:29 PM
To: [EMAIL PROTECTED]
Cc: 'PostgreSQL'
Subject: Re: [SQL] Prepared query ?

Dinesh Pandey wrote:
> How can I
> 
>1. Write a prepared query and
>2. Set values at run time and
>3. Execute that query for different values.

> LOOP
>  ...
>  ...
> sql := INSERT INTO MYTABLE VALUES(?, ?);

> --EXECUTE (sql);

You don't say what language you want to use, but assuming it's plpgsql, 
then for the simple case you can just do:

LOOP
   INSERT INTO mytable VALUES (var1, var2);
END LOOP

Now, let's say you wanted to change mytable as well as the values, you'd use
   sql := ''INSERT INTO '' || quote_ident(my_table_variable) || '' 
VALUES  ...etc'';
   EXECUTE sql;

Does that help?

-- 
   Richard Huxton
   Archonet Ltd





[SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andreas Joseph Krogh
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE 
shows that PG thinks it has as much as 160057 rows AFAICT from the 
EXPLAIN output.

I'm using PG-8.0.0

Here is the schema:

nbeweb=> \d onp_web_index
                             Table "public.onp_web_index"
   Column    |       Type        |                           Modifiers
-------------+-------------------+----------------------------------------------------------------
 id          | integer           | not null default nextval('public.onp_web_index_id_seq'::text)
 starturl_id | integer           | not null
 lang_id     | integer           | not null
 url_host    | character varying | not null
 url_path    | character varying | not null
 title       | character varying | not null
 content     | tsvector          | not null
 plain_text  | character varying | not null
Indexes:
    "onp_web_index_pkey" primary key, btree (id)
    "onp_web_index_url_host_key" unique, btree (url_host, url_path)
    "onp_web_index_idx" gist (content)
Foreign-key constraints:
    "$2" FOREIGN KEY (lang_id) REFERENCES code(id)
    "$1" FOREIGN KEY (starturl_id) REFERENCES onp_web_index_starturl(id) ON DELETE CASCADE

 I've just run a "VACUUM FULL VERBOSE" on the whole database, which gave the 
following output on the relevant table(onp_web_index):

INFO:  vacuuming "public.onp_web_index"
INFO:  "onp_web_index": found 0 removable, 160057 nonremovable row versions in 206940 pages
DETAIL:  159759 dead row versions cannot be removed yet.
Nonremovable row versions range from 188 to 2036 bytes long.
There were 2205045 unused item pointers.
Total free space (including removable row versions) is 1562667708 bytes.
191561 pages are or will become empty, including 0 at the end of the table.
201809 pages containing 1562243144 free bytes are potential move destinations.
CPU 4.89s/0.88u sec elapsed 286.44 sec.
INFO:  index "onp_web_index_pkey" now contains 160057 row versions in 4562 pages
DETAIL:  0 index row versions were removed.
4072 index pages have been deleted, 3429 are currently reusable.
CPU 0.09s/0.03u sec elapsed 4.77 sec.
INFO:  index "onp_web_index_url_host_key" now contains 160057 row versions in 23508 pages
DETAIL:  0 index row versions were removed.
10589 index pages have been deleted, 9885 are currently reusable.
CPU 1.19s/0.30u sec elapsed 198.82 sec.
INFO:  index "onp_web_index_idx" now contains 160057 row versions in 52050 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 2.43s/0.53u sec elapsed 270.59 sec.
INFO:  "onp_web_index": moved 160057 row versions, truncated 206940 to 14892 pages
DETAIL:  CPU 28.25s/615.27u sec elapsed 2458.49 sec.
INFO:  index "onp_web_index_pkey" now contains 160057 row versions in 4858 pages
DETAIL:  160057 index row versions were removed.
3880 index pages have been deleted, 3237 are currently reusable.
CPU 0.15s/0.41u sec elapsed 7.93 sec.
INFO:  index "onp_web_index_url_host_key" now contains 160057 row versions in 23508 pages
DETAIL:  160057 index row versions were removed.
10492 index pages have been deleted, 9788 are currently reusable.
CPU 1.07s/1.61u sec elapsed 90.54 sec.
INFO:  index "onp_web_index_idx" now contains 160057 row versions in 55361 pages
DETAIL:  160057 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 2.40s/0.92u sec elapsed 286.41 sec.

Here it says that it removed 160057 rows, doesn't it? But still EXPLAIN shows 
the output below:

nbeweb=> EXPLAIN ANALYZE SELECT count(*) from onp_web_index;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=16892.72..16892.72 rows=1 width=0) (actual time=362.055..362.058 rows=1 loops=1)
   ->  Seq Scan on onp_web_index  (cost=0.00..16492.57 rows=160057 width=0) (actual time=0.043..360.926 rows=298 loops=1)
 Total runtime: 380.155 ms
(3 rows)

nbeweb=> REINDEX table onp_web_index;
REINDEX
nbeweb=> EXPLAIN ANALYZE SELECT count(*) from onp_web_index;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=16892.72..16892.72 rows=1 width=0) (actual time=229.076..229.079 rows=1 loops=1)
   ->  Seq Scan on onp_web_index  (cost=0.00..16492.57 rows=160057 width=0) (actual time=0.034..227.985 rows=298 loops=1)
 Total runtime: 229.157 ms
(3 rows)

nbeweb=> select count(*) from onp_web_index;
 count
-------
   298
(1 row)


Is this normal, shall I just overlook the "rows=160057" output from EXPLAIN, 
or is something wrong?
What does the line "DETAIL:  159759 dead row versions cannot be removed yet." 
mean?

Thanks for any info.


Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2005 at 06:38:14PM +0200, Andreas Joseph Krogh wrote:
> Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE 
> shows that PG thinks it has as much as 160057 rows AFAICT from the 
> EXPLAIN output.

It does have 160057 rows:

> INFO:  vacuuming "public.onp_web_index"
> INFO:  "onp_web_index": found 0 removable, 160057 nonremovable row versions 
> in 
> 206940 pages
> DETAIL:  159759 dead row versions cannot be removed yet.
   ^^^

You no doubt have a long-running transaction keeping the rows from
being recovered.  Look for "idle in transaction" as a first guess.
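
(The same thing can be seen from inside the database; a sketch using the pg_stat_activity view as it looked in the 7.4/8.0 era, which requires stats_command_string = true in postgresql.conf:

```sql
-- backends sitting in an open but idle transaction
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```

The procpid column gives the PID to feed to "ps" or a signal.)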

> DETAIL:  160057 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 2.40s/0.92u sec elapsed 286.41 sec.
> 
> Here it says that it removed 160057 rows, doesn't it? 

No; that's the index row versions.

> Is this normal, shall I just overlook the "rows=160057" output from EXPLAIN, 
> or is something wrong?
> What does the line "DETAIL:  159759 dead row versions cannot be removed yet." 
> mean?

It means that those rows were marked invalidated after some existing
transaction started.  

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner



Re: [SQL] Very low performance on table with only 298 rows

2005-04-14 Thread PFC

nbeweb=> EXPLAIN ANALYZE select count(*) from onp_web_index;
 Total runtime: 179748.993 ms
WOW.
It's possible your table is bloated.
What version of PG are you using ?
VACUUM FULL VERBOSE ANALYZE onp_web_index;
if it solves your problem, good, else post the results from that 
command.


Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Richard Huxton
Andreas Joseph Krogh wrote:
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE 
shows that PG thinks it has as much as 160057 rows AFAICT from the 
EXPLAIN output.


 I've just run a "VACUUM FULL VERBOSE" on the whole database, which gave the 
following output on the relevant table(onp_web_index):

INFO:  vacuuming "public.onp_web_index"
INFO:  "onp_web_index": found 0 removable, 160057 nonremovable row versions in 
206940 pages
DETAIL:  159759 dead row versions cannot be removed yet.
A long-lived transaction can still see these row versions, so vacuum 
can't delete them. Until that transaction is committed/rolled back PG 
has to keep the old versions of those rows available, because it might 
be able to see them.

So - do you have an application/client that has had a single transaction 
open for a long time? A very long time, unless you update this table a lot.

--
  Richard Huxton
  Archonet Ltd


Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andreas Joseph Krogh
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
> Andreas Joseph Krogh wrote:
> > Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN
> > ANALYZE shows that PG thinks it has as much as 160057 rows AFAICT from
> > the EXPLAIN output.
> >
> >
> >  I've just run a "VACUUM FULL VERBOSE" on the whole database, which gave
> > the following output on the relevant table(onp_web_index):
> >
> > INFO:  vacuuming "public.onp_web_index"
> > INFO:  "onp_web_index": found 0 removable, 160057 nonremovable row
> > versions in 206940 pages
> > DETAIL:  159759 dead row versions cannot be removed yet.
>
> A long-lived transaction can still see these row versions, so vacuum
> can't delete them. Until that transaction is committed/rolled back PG
> has to keep the old versions of those rows available, because it might
> be able to see them.
>
> So - do you have an application/client that has had a single transaction
> open for a long time. A very long time unless you update this table a lot.

Hm, it's a web-app, and I don't hold a transaction open for a long time on 
purpose, but I have a line which says "idle in transaction" from "ps":

postgres  2754  1646  0 Apr06 ?  00:00:07 postgres: nbe nbeweb 127.0.0.1 
idle in transaction

And by the way, I lied about the PG-version, it's 7.4.5, not 8.0.0

So, what you're suggesting is that a restart of the webapp should make vacuum 
able to delete those dead rows?




Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> 
> So, what you're suggesting is that a restart of the webapp should make vacuum 
> able to delete those dead rows?

Yes, but that'll only solve your problem for now.  You'll have the
problem again soon.  What's keeping open the transaction?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the marketplace.
--Philip Greenspun



Re: [SQL] PGCrypto: Realworld scenario and advice needed

2005-04-14 Thread Moran.Michael
Thank you, Joe.
 
Your solution provided below works great. Much better/simpler than my
original approach. You rock!
 
-Michael
 

  _  

From: Joe Conway [mailto:[EMAIL PROTECTED]
Sent: Mon 4/11/2005 3:26 PM
To: Moran.Michael
Cc: PostgreSQL
Subject: Re: [SQL] PGCrypto: Realworld scenario and advice needed



Moran.Michael wrote: 
> My initial attack plan was to do the following: 
>  
> 1. Call decrypt() with the old-passphrase to decrypt each table's existing

> data. 
> 2. Temporarily store the decrypted data in temp tables. 
> 3. Delete all rows of encrypted data from the original tables -- thereby 
> clearing the tables of all data encrypted with the old passphrase. 
> 4. Call encrypt() with the new passphrase to encrypt all data in the temp 
> tables -- thereby repopulating the production tables with data encrypted 
> with the new passphrase. 
> 5. Blow away the temp tables. 
>  
> But this seems like a tedious procedure. 
>  
> Is there any simple way to update ALL existing encrypted data with a new 
> passphrase, assuming you know the old passphrase and encryption type (i.e.

> AES, Blowfish, etc.) without having to go through the 5-step process 
> mentioned above? 

Why not use a single UPDATE command, e.g. something like: 

UPDATE tbl 
  SET f1 = encrypt(decrypt(f1, 'oldkey', 'aes'), 'newkey', 'aes'); 

Joe 
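
If several tables (or columns) share the key, the per-table UPDATEs can go in one explicit transaction so that the key rotation is all-or-nothing; a sketch using the names from Joe's example, with tbl2/f2 as a hypothetical second encrypted column:

```sql
BEGIN;
-- re-encrypt every column that uses the old key
UPDATE tbl  SET f1 = encrypt(decrypt(f1, 'oldkey', 'aes'), 'newkey', 'aes');
UPDATE tbl2 SET f2 = encrypt(decrypt(f2, 'oldkey', 'aes'), 'newkey', 'aes');
COMMIT;
```

A failure part-way rolls everything back, leaving all rows still encrypted with the old key.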




Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andreas Joseph Krogh
On Thursday 14 April 2005 19:33, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now.  You'll have the
> problem again soon.  What's keeping open the transaction?

Don't know... All my web-apps seem to have *one* line each in "ps" which says 
"dbuser dbname 127.0.0.1 idle in transaction". Those are Java web-apps which 
get their connections from a connection pool (Apache Commons ConnectionPool), 
but there is exactly *one* line per web-app which has the "idle in 
transaction" line, even though they have many connections open each.

Any hints on how I can find out what's keeping the connection idle in a 
transaction? I realize now that I should probably ask that question on the 
pgsql-jdbc-list:-)

But thanks a lot for helping me find the problem, appreciate it!




Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Richard Huxton
Andreas Joseph Krogh wrote:
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
So - do you have an application/client that has had a single transaction
open for a long time. A very long time unless you update this table a lot.

Hm, it's a web-app, and I don't hold a transaction open for a long time on 
purpose, but I have a line which says "idle in transaction" from "ps":

postgres  2754  1646  0 Apr06 ?00:00:07 postgres: nbe nbeweb 127.0.0.1 
idle in transaction
Some Java systems can open a transaction and leave it there idle. That 
one's been there since April 6th.

And by the way, I lied about the PG-version, it's 7.4.5, not 8.0.0
So, what you're suggesting is that a restart of the webapp should make vacuum 
able to delete those dead rows?
Exactly.
--
  Richard Huxton
  Archonet Ltd


Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> Any hints on how I can find out what's keeping the connection idle in a 
> transaction? I realize now that I should probably ask that question on the 
> pgsql-jdbc-list:-)

Nope.  That's a problem with your pool software.  It's no doubt
issuing "BEGIN;" as soon as it connects.  It's BAD (broken as
designed) in that case.  You need to convince it not to do that, or
else you need to go around and kill -2 such connections from time to
time.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Alvaro Herrera
On Thu, Apr 14, 2005 at 02:56:23PM -0400, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> > Any hints on how I can find out what's keeping the connection idle in a 
> > transaction? I realize now that I should probably ask that question on the 
> > pgsql-jdbc-list:-)
> 
> Nope.  That's a problem with your pool software.  It's no doubt
> issuing "BEGIN;" as soon as it connects.  It's BAD (broken as
> designed) in that case.  You need to convince it not to do that, or
> else you need to go around and kill -2 such connections from time to
> time.

It is a Postgres limitation as well.  We _could_ make the server "really
start the transaction" at the point the first query is issued instead of
when the BEGIN is issued.  In fact this problem would go away if we did
that.

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
Voy a acabar con todos los humanos / con los humanos yo acabaré
voy a acabar con todos / con todos los humanos acabaré (Bender)



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
> 
> It is a Postgres limitation as well.  We _could_ make the server "really
> start the transaction" at the point the first query is issued instead of
> when the BEGIN is issued.  In fact this problem would go away if we did
> that.

Yeah; I seem to remember Tom Lane suggesting such an approach at one
point.  I had the vague idea it might happen in 8.0, but for some
reason I think it didn't.  (Anyway, that's probably off topic for
this list.)

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.  What do you do sir?
--attr. John Maynard Keynes



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Greg Stark
Andrew Sullivan <[EMAIL PROTECTED]> writes:

> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> > 
> > So, what you're suggesting is that a restart of the webapp should make 
> > vacuum 
> > able to delete those dead rows?
> 
> Yes, but that'll only solve your problem for now.  You'll have the
> problem again soon.  What's keeping open the transaction?

This is presumably because of the long-standing issue that Postgres takes the
snapshot as soon as the BEGIN is issued. A lot of drivers issue a "COMMIT;
BEGIN;" right away even though it could be a long time before any actual work
is done.

Other databases (by which I mean Oracle) treat BEGIN as a noop. The snapshot
starts when the first SQL statement that needs a snapshot is executed. So
until a SELECT is issued the connection doesn't participate in any
transactional issues like keeping old versions of records around.

-- 
greg




Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andreas Joseph Krogh
On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
> > It is a Postgres limitation as well.  We _could_ make the server "really
> > start the transaction" at the point the first query is issued instead of
> > when the BEGIN is issued.  In fact this problem would go away if we did
> > that.
>
> Yeah; I seem to remember Tom Lane suggesting such an approach at one
> point.  I had the vague idea it might happen in 8.0, but for some
> reason I think it didn't.  (Anyway, that's probably off topic for
> this list.)

I don't see this behaviour under 8.0.0, and it's presumably fixed:
http://archives.postgresql.org/pgsql-jdbc/2004-06/msg00018.php

Is it safe to issue a "kill -2 ", and will the 
connection-pool reconnect?

-- 
Andreas Joseph Krogh <[EMAIL PROTECTED]>
Senior Software Developer / Manager
gpg public_key: http://dev.officenet.no/~andreak/public_key.asc
+-+
OfficeNet AS| The most difficult thing in the world is to |
Hoffsveien 17   | know how to do a thing and to watch |
PO. Box 425 Skøyen  | somebody else doing it wrong, without   |
0213 Oslo   | comment.|
NORWAY  | |
Phone : +47 22 13 01 00 | |
Direct: +47 22 13 10 03 | |
Mobile: +47 909  56 963 | |
+-+



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> Is it safe to issue a "kill -2 ", will the 
> connection-pool reconnect?

The kill -2 will definitely work to shut the connection down
cleanly.  Actually, I should put that more precisely: you should send
SIGINT.  I don't know of any systems where that's not 2, but I
suppose it's logically possible.

As for the reconnection, it'll depend entirely on what your pool does
when its connections are closed.  You'll have to test it.
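
Andrew's signal-number point can be checked from the shell, and the signal can be sent by name so the numeric mapping never matters. A sketch (the backend PID placeholder is not from the thread and must be looked up on your system):

```shell
# Confirm which signal the number 2 maps to on this system
kill -l 2            # prints: INT

# Safer: send SIGINT by name instead of by number.
# <backend_pid> is a hypothetical placeholder for the stuck backend's PID.
# kill -INT <backend_pid>
```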

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Information security isn't a technological problem.  It's an economics
problem.
--Bruce Schneier



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Andreas Joseph Krogh
On Thursday 14 April 2005 22:44, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> > Is it safe to issue a "kill -2 ", will the
> > connection-pool reconnect?
>
> The kill -2 will definitely work to shut the connection down
> cleanly.  Actually, I should put that more precisely: you should send
> SIGINT.  I don't know of any systems where that's not 2, but I
> suppose it's logically possible.
>
> As for the reconnection, it'll depend entirely on what your pool does
> when its connections are closed.  You'll have to test it.

Just did, and SIGINT didn't help (or I was too impatient), so I sent a SIGTERM, 
and then it went away. But it didn't help with my "vacuum dead rows" problem; 
vacuum still won't reclaim those > 100,000 "dead" rows, not even after a 
web-app restart (shut down Tomcat, ran "vacuum full verbose", then started 
Tomcat again).

INFO:  vacuuming "public.onp_web_index"
INFO:  "onp_web_index": found 0 removable, 160057 nonremovable row versions in 
14892 pages
DETAIL:  159759 dead row versions cannot be removed yet.
...

Any hints?
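
One way to find what is holding a transaction open, assuming PostgreSQL 8.0's pg_stat_activity columns (procpid, current_query) and that stats_command_string is enabled, is a query along these lines:

```sql
-- Hypothetical diagnostic sketch: sessions shown as '<IDLE> in transaction'
-- are sitting in an open transaction and holding back VACUUM.
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```

If current_query shows only blanks, stats_command_string is probably off in postgresql.conf.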

-- 
Andreas Joseph Krogh <[EMAIL PROTECTED]>



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Tom Lane
Andreas Joseph Krogh <[EMAIL PROTECTED]> writes:
> On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
>> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
>>> It is a Postgres limitation as well.  We _could_ make the server "really
>>> start the transaction" at the point the first query is issued instead of
>>> when the BEGIN is issued.  In fact this problem would go away if we did
>>> that.

> I don't see this behaviour under 8.0.0, and it's presumably fixed:
> http://archives.postgresql.org/pgsql-jdbc/2004-06/msg00018.php

Well, recent JDBC drivers avoid the problem by postponing BEGIN,
but the issue still exists for some other client software.  It's
something that we probably should fix on the server side.

IIRC the thing that was unresolved when we last discussed this
was what time now() should reflect --- when you issue BEGIN,
or when things really start to happen?  To make the change fully
transparent we'd have to define now() as "when you issued BEGIN",
but there's a pretty good argument that now() should correspond
to the time of the transaction snapshot.

regards, tom lane



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> This is presumably because of the long-standing issue that Postgres takes the
> snapshot as soon as the BEGIN is issued.

No, we don't set the snapshot until the first DML query is issued.  This
is critical for serializable transactions: you have to be able to take
locks before the snapshot is frozen.

There are at least three interesting events involved:
1  BEGIN command issued
2  First lock taken (presumably as a consequence of another command)
3  Transaction snapshot taken (ditto; might be a different command)

We have to start the transaction no later than event #2 since there has
to be something to hold the lock.  But it'd be easy enough to decouple
this from BEGIN, and it'd be good enough to solve the "COMMIT;BEGIN"
problem.
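
As an illustrative psql sketch of the three events (table name borrowed from the earlier thread; not a transcript from the original mail):

```sql
BEGIN;                        -- event #1: command issued; today the xact "starts" here
-- ... the connection may now sit idle "in transaction" for a long time ...
LOCK TABLE onp_web_index;     -- event #2: first lock taken; the xact must exist by now
SELECT count(*) FROM onp_web_index;
                              -- event #3: snapshot taken (for a plain SELECT,
                              -- events #2 and #3 happen within the same command)
COMMIT;
```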

Which of these three times do you think now() ought to correspond to?
I recall having argued that it should be event #3 since that corresponds
to the database snapshot you see.  100% backwards compatibility would
require setting now() at event #1, but will anyone weep if we change that?

regards, tom lane



Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Greg Stark
Tom Lane <[EMAIL PROTECTED]> writes:

> We have to start the transaction no later than event #2 since there has
> to be something to hold the lock.  But it'd be easy enough to decouple
> this from BEGIN, and it'd be good enough to solve the "COMMIT;BEGIN"
> problem.

Oh I think I finally figured out what you're saying here. 

So vacuum doesn't really know what tuples are actually visible to the
snapshots actually taken by a transaction? It's making the conservative
estimate that a snapshot could have been taken as early as the start of the
transaction even if no snapshot was taken until later?


> Which of these three times do you think now() ought to correspond to?
> I recall having argued that it should be event #3 since that corresponds
> to the database snapshot you see.  100% backwards compatibility would
> require setting now() at event #1, but will anyone weep if we change that?

I think it would be weird to perform a select and see records with dates after
now(). It would also be weird to perform a select and not see records inserted
before now(). I'm not sure any of the above guarantees those criteria for READ
COMMITTED mode, but I think I'm on the same general path as you.

-- 
greg




[SQL] EXCEPTION handling in PL/pgSQL.

2005-04-14 Thread Dinesh Pandey



Hi,

I am new to Postgres.

Can anyone tell me the best way of doing EXCEPTION handling in PL/pgSQL?

1. How do I get raised exception messages in the "EXCEPTION" block (I mean 
exception messages thrown by us or by Postgres itself)?

2. Is an EXCEPTION datatype available in Postgres, as it is in Oracle?

Regards,
Dinesh Pandey
--
Dinesh Pandey
Sr. Software Engineer
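
An illustrative sketch, not part of the original message: PL/pgSQL in 8.0 traps errors by condition name in an EXCEPTION block, and there is no Oracle-style EXCEPTION datatype — you name the condition in the WHEN list instead. The function name and logic below are hypothetical:

```sql
-- Hypothetical sketch of PL/pgSQL (8.0+) exception handling.
CREATE OR REPLACE FUNCTION safe_divide(a integer, b integer) RETURNS integer AS $$
BEGIN
    IF b = 42 THEN
        RAISE EXCEPTION 'our own error for %', b;  -- user-raised exception
    END IF;
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN    -- a condition raised by Postgres itself
        RAISE NOTICE 'division by zero trapped';
        RETURN NULL;
    WHEN OTHERS THEN              -- catch-all for any other error
        RAISE NOTICE 'some other error trapped';
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```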
 


Re: [SQL] row-attribute in EXPLAIN-output doesn't match count(*)

2005-04-14 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> So vacuum doesn't really know what tuples are actually visible to the
> snapshots actually taken by a transaction? It's making the conservative
> estimate that a snapshot could have been taken as early as the start of the
> transaction even if no snapshot was taken until later?

Not quite, because what it looks at is the snapshot xmin, which each
backend publishes in the PGPROC array.  A backend that has started a
transaction but hasn't yet set its snapshot can be recognized.

IIRC the problem comes up indirectly, because such a backend affects
the xmins that are computed by *other* transactions.  What VACUUM is
actually using as the cutoff is the least xmin that it can see anywhere
in PGPROC --- and everyone else's xmin will be no higher than the XID
of the laggard, even if the laggard hasn't yet set its own xmin.

Thinking about this, it seems like xmin might not be quite the right
metric for this purpose.  It might be worth thinking about whether we
could do better with a little more info in PGPROC.
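
The cutoff computation can be sketched as a toy model (a hypothetical simplification; the real PGPROC logic differs) of how the laggard's XID caps everyone else's xmin and hence VACUUM's cutoff:

```python
# Toy model of the xmin cutoff. A backend that has begun a transaction but
# not yet taken a snapshot advertises no xmin of its own, yet its XID still
# counts as running, so every later snapshot's xmin is capped by it.

def snapshot_xmin(running_xids):
    """When a backend takes a snapshot, its advertised xmin is the oldest
    transaction still running at that moment (toy model)."""
    return min(running_xids) if running_xids else None

def vacuum_cutoff(advertised_xmins):
    """VACUUM may only remove row versions older than the least xmin it can
    see anywhere in (toy) PGPROC."""
    xmins = [x for x in advertised_xmins if x is not None]
    return min(xmins) if xmins else None

# Laggard: opened xact 100 long ago ("COMMIT;BEGIN"), no snapshot yet,
# so it advertises xmin = None -- but xid 100 still counts as running.
laggard_xid = 100
# Another backend takes its snapshot much later, while 100 and 5000 run.
other_xmin = snapshot_xmin([laggard_xid, 5000])   # capped at 100

cutoff = vacuum_cutoff([None, other_xmin])
print(cutoff)  # 100: dead rows newer than xact 100 cannot be reclaimed
```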

regards, tom lane



[SQL] Salam Kenal

2005-04-14 Thread Rizal



Dear All,

Warm greetings from me. I am a newcomer to the PostgreSQL world and have 
only just begun to study it. I would appreciate the help of fellow 
mailing-list members.

Thanks,

Rizal
Technical Support
PT. Garansindo Inter Global
Jl. Taman Margasatwa No. 25
Ragunan - Ps. Minggu
Jakarta Selatan 12540
Office : (021) 7883 
Mobile : 08151620470
Email  : [EMAIL PROTECTED]