[GENERAL] vacuuming strangeness

2005-10-31 Thread Joe Maldonado
Hello all,

I am currently experiencing some strange behaviour when vacuuming an active table.

This table is constantly being updated by one process which gets a new connection every time it updates the table.

There is a second process which is selecting from this table, also acquiring a new connection prior to each operation.

While this is ongoing I manually issued "VACUUM VERBOSE
test_table;" and what happened thereafter was unexpected and
somewhat unnerving.

The VACUUM process does not seem to be able to exit; instead it seems to be stuck in some strange loop for some time.

Is this something to be expected?

Also, should I expect a performance improvement in selects and update statements after this type of operation?

My current measurements show that the only thing making a
difference for update/select statements is the use of a "VACUUM FULL",
though given the aggressive nature of that statement it is not something
I want to do on a high-transaction db unless I absolutely must.
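
One hedged guess, judging from the repeated "removed ... row versions"
lines in the output below: VACUUM may be filling its dead-tuple memory and
making multiple cleanup passes over the indexes. A sketch of raising that
memory for one session (parameter name depends on version; value is in KB):

SET maintenance_work_mem = 262144;  -- 8.0; on 7.4 use: SET vacuum_mem = 262144;
VACUUM VERBOSE test_table;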

Thanks in advance,

-- 
Joe Maldonado


OUTPUT:

db=# vacuum verbose test_table;
INFO:  vacuuming "public.test_table"
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.92s/1.64u sec elapsed 9.38 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.95s/1.67u sec elapsed 40.74 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.84s/1.83u sec elapsed 11.86 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.90s/1.47u sec elapsed 9.54 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.79s/1.60u sec elapsed 10.06 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.82s/1.63u sec elapsed 9.29 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.84s/1.73u sec elapsed 10.46 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.63s/1.69u sec elapsed 18.69 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.42s/1.64u sec elapsed 10.60 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.61s/1.45u sec elapsed 9.51 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.51s/1.64u sec elapsed 10.96 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.58s/1.75u sec elapsed 9.67 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.76s/1.47u sec elapsed 23.65 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.49s/1.60u sec elapsed 10.59 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.58s/1.63u sec elapsed 9.17 sec.
INFO:  "test_table": removed 1397848 row versions in 13064 pages
DETAIL:  CPU 0.60s/1.55u sec elapsed 11.20 sec.
INFO:  "test_table": removed 991515 row versions in 9267 pages
DETAIL:  CPU 0.05s/1.45u sec elapsed 8.42 sec.
INFO:  "test_table": found 23357083 removable, 200037 nonremovable row versions in 442991 pages
DETAIL:  20 dead row versions cannot be removed yet.
There were 23842917 unused item pointers.
0 pages are entirely empty.
CPU 21.21s/32.60u sec elapsed 475.26 sec.
INFO:  vacuuming "pg_toast.pg_toast_2391937"
INFO:  index "pg_toast_2391937_index" now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.07 sec.
INFO:  "pg_toast_2391937": found 0 removable, 0 nonremovable row versions in 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.15 sec.
VACUUM



[GENERAL] not quite expected behaviour when using IN clause

2005-09-30 Thread Joe Maldonado
Hello all,

I apologize for the wide distribution but we recently ran into an
interesting behaviour using PostgreSQL 8.0.3 and did not know whether
this was a bug or intended behaviour.

When a NOT IN clause contains a NULL value the entire clause evaluates to NULL (and is therefore treated as false), thus no records are returned.

Why doesn't IN evaluate NULL as a value?

So, for example:

SELECT count(*) FROM test WHERE key NOT IN ('something');

returns the count of rows, whereas

SELECT count(*) FROM test WHERE key NOT IN ('something', NULL);

does not.  The table test does not have any NULL values in the key column.

The query plans follow...

mazu=# EXPLAIN ANALYZE SELECT count(*) FROM test WHERE key NOT IN ('something');
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Aggregate  (cost=10022.44..10022.44 rows=1 width=0) (actual time=0.664..0.665 rows=1 loops=1)
   ->  Seq Scan on test  (cost=1.00..10020.38 rows=826 width=0) (actual time=0.030..0.349 rows=168 loops=1)
         Filter: (("key")::text <> 'something'::text)
 Total runtime: 0.826 ms
(4 rows)

mazu=# EXPLAIN ANALYZE SELECT count(*) FROM test WHERE key NOT IN ('something', NULL);
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Aggregate  (cost=10022.44..10022.44 rows=1 width=0) (actual time=0.027..0.029 rows=1 loops=1)
   ->  Result  (cost=1.00..10020.38 rows=826 width=0) (actual time=0.002..0.002 rows=0 loops=1)
         One-Time Filter: NULL::boolean
         ->  Seq Scan on test  (cost=1.00..10020.38 rows=826 width=0) (never executed)
               Filter: (("key")::text <> 'something'::text)
 Total runtime: 0.110 ms
(6 rows)
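
For what it's worth, this looks like SQL's three-valued logic rather than a
planner quirk. A minimal sketch of the expansion (our reading, not from the
list):

-- key NOT IN ('something', NULL) expands to
--     key <> 'something' AND key <> NULL
-- and "key <> NULL" yields NULL (unknown), so the predicate can never be
-- true; WHERE keeps only rows where the predicate is true.
SELECT 1 <> NULL;               -- NULL, not false
SELECT 'x' NOT IN ('y', NULL);  -- NULL as well, hence zero rows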


-- Joe Maldonado


Re: [GENERAL] TRUNCATE locking problem

2005-07-21 Thread Joe Maldonado

Tom Lane wrote:

> Joe Maldonado <[EMAIL PROTECTED]> writes:
>> While researching this locking issue I got some of the logs and found
>> that in one of the cases there was a SELECT running for a long time,
>> about 2 hours.  This SELECT statement does not usually take more than a
>> few seconds; however, it appeared that TRUNCATE was waiting on it to
>> finish before continuing.
>>
>> The SELECT statement in question contains a sub-SELECT in the FROM
>> clause which in turn joins with a view that contains the table
>> which TRUNCATE is being executed against.
>>
>> Is it possible that the SELECT was issued just before the TRUNCATE
>> statement was issued and the view in the sub-SELECT was waiting on
>> TRUNCATE's lock?
>
> No.  That would be a deadlock and would be reported as such.
>
> 			regards, tom lane

Again many thanks :)

- Joe Maldonado

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] TRUNCATE locking problem

2005-07-21 Thread Joe Maldonado

Hello all again...

While researching this locking issue I got some of the logs and found 
that in one of the cases there was a SELECT running for a long time, 
about 2 hours.  This SELECT statement does not usually take more than a 
few seconds; however, it appeared that TRUNCATE was waiting on it to 
finish before continuing.


When I looked at the SELECT statement I found something which I think 
may have caused something of a deadlock, though since I am quite ignorant 
as to how these locks interact, I apologize if this is a dumb question :)


The SELECT statement in question contains a sub-SELECT in the FROM 
clause which in turn joins with a view that contains the table 
which TRUNCATE is being executed against.


Is it possible that the SELECT was issued just before the TRUNCATE 
statement was issued and the view in the sub-SELECT was waiting on 
TRUNCATE's lock?
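
In case it helps, a hedged way to watch this should it recur (pg_locks is
available on 7.4/8.0; the table name is a placeholder):

SELECT l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'test_table';
-- rows with granted = f are waiters, e.g. TRUNCATE's ACCESS EXCLUSIVE
-- request queued behind the long-running SELECT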


- Joe Maldonado

Joe Maldonado wrote:

> Thanks...I just wanted to verify that it was the intended behaviour
> prior to going in and changing code :)
>
> - Joe Maldonado
>
> Tom Lane wrote:
>
>> Joe Maldonado <[EMAIL PROTECTED]> writes:
>>> It seems that TRUNCATE is first posting a lock on the table and then
>>> waiting for other transactions to finish before truncating the table
>>> thus blocking all other operations.
>>
>> That's what it's supposed to do.  If you have a problem with the length
>> of the delay, find the (other) transaction that's accessing the table
>> for too long and fix that.
>>
>> 			regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] TRUNCATE locking problem

2005-07-18 Thread Joe Maldonado
Thanks...I just wanted to verify that it was the intended behaviour 
prior to going in and changing code :)


- Joe Maldonado

Tom Lane wrote:

> Joe Maldonado <[EMAIL PROTECTED]> writes:
>> It seems that TRUNCATE is first posting a lock on the table and then
>> waiting for other transactions to finish before truncating the table
>> thus blocking all other operations.
>
> That's what it's supposed to do.  If you have a problem with the length
> of the delay, find the (other) transaction that's accessing the table
> for too long and fix that.
>
> 			regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] TRUNCATE locking problem

2005-07-18 Thread Joe Maldonado

Hello all,

We are running PostgreSQL 7.4.5 and recently we have noticed some strange 
behaviour with regard to the TRUNCATE statement.


I think it would help to provide a quick overview of what we are doing 
with the table in question in order to properly explain this.


The applications which use the database require frequent access to 
this table to perform lookups, and all of these are via SELECT 
statements.  Most of them join the table in one way or another to 
perform the lookup needed.  Every so often, once per day or so, a 
process will receive new data to populate this table.  Once the data is 
received, the process TRUNCATEs the table and then performs a COPY 
operation to repopulate it.


There is also an autovacuum process which routinely VACUUMs the database, 
though the logs do not show that it is vacuuming when this happens.


The behaviour we are experiencing is that the TRUNCATE statement will 
acquire an ACCESS EXCLUSIVE lock on the table and then go into a waiting 
state.  While TRUNCATE is in this state no other process can SELECT on 
this table.


It seems that TRUNCATE is first posting a lock on the table and then 
waiting for other transactions to finish before truncating the table 
thus blocking all other operations.


Is this what is actually going on, or am I missing something else?  And is 
there a way to prevent this condition from happening?
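
One workaround we are considering, sketched here only (it assumes CREATE
TABLE ... LIKE is available, that nothing such as a view or foreign key
depends on the table's OID, and the names are made up), is to load a
staging table and swap it in so the exclusive lock is held only briefly:

BEGIN;
CREATE TABLE lookup_new (LIKE lookup);
-- recreate lookup's indexes on lookup_new here
COPY lookup_new FROM '/tmp/new_data.copy';
DROP TABLE lookup;
ALTER TABLE lookup_new RENAME TO lookup;
COMMIT;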


Thanks in advance,

- Joe Maldonado

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] info on strange error messages on postgresql

2005-06-16 Thread Joe Maldonado
>What's at the top and bottom of that?

>PG prints out a memory stats dump like this when it runs out of memory.
>The dump itself isn't much use to anyone but a developer; what you want
>to look into is what triggered it.  The error message appearing just
>after (or maybe just before, I forget) should be relevant.
>  
>

Sorry for the really late response, but here is the bottom of one of
these...

MdSmgr: 8192 total in 1 blocks; 6096 free (20 chunks); 2096 used
DynaHash: 8192 total in 1 blocks; 6928 free (0 chunks); 1264 used
DynaHashTable: 24576 total in 2 blocks; 14280 free (6 chunks); 10296 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 952 free (0 chunks); 7240 used
DynaHashTable: 8192 total in 1 blocks; 3000 free (0 chunks); 5192 used
DynaHashTable: 24576 total in 2 blocks; 9136 free (4 chunks); 15440 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (3 chunks); 16 used
2005-04-21 22:30:47 [21315] ERROR:  out of memory
DETAIL:  Failed on request of size 268.
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block
2005-04-21 22:30:47 [21315] ERROR:  current transaction is aborted, commands ignored until end of transaction block


We suspect pg_autovacuum as the culprit but cannot nail it
down quite yet.

-Joe

>  
>
>>followed by about 16GB of the following type of entries...
>>pg_temp_1486707494: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
>>
>>
>
>Could you have been trying to vacuum a ridiculously large number of
>tables, or some such?
>
>   regards, tom lane
>
>---(end of broadcast)---
>TIP 5: Have you checked our extensive FAQ?
>
>   http://www.postgresql.org/docs/faq
>  
>


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] info on strange error messages on postgresql

2005-04-29 Thread Joe Maldonado
Tom Lane wrote:

>Joe Maldonado <[EMAIL PROTECTED]> writes:
>  
>
>>I have these messages on my 7.4.7 database log...
>>TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61
>>chunks); 87315304 used
>>TopTransactionContext: 57344 total in 3 blocks; 648 free (5 chunks);
>>56696 used
>>DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>>...
>>
>>
>
>What's at the top and bottom of that?
>  
>
Above this there are just a few expected error messages about a table
already existing; unfortunately this log happened between logrotates and
filled the logs partition to 100%, so I do not have the post-dump
messages.

The last entry in the log is ...
pg_temp_1486707604: 1024 total in 1 blocks; 640 free

>PG prints out a memory stats dump like this when it runs out of memory.
>The dump itself isn't much use to anyone but a developer; what you want
>to look into is what triggered it.  The error message appearing just
>after (or maybe just before, I forget) should be relevant.
>
>  
>
>>followed by about 16GB of the following type of entries...
>>pg_temp_1486707494: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
>>
>>
>
>Could you have been trying to vacuum a ridiculously large number of
>tables, or some such?
>  
>
At first I suspected vacuum to be the culprit of the 16G of data, though
this db is being vacuumed by pg_autovacuum with (-s 900 -d 3 -S 0 -A 0
-V 0) as options and it has recently undergone a vacuum full and reindex
operation.

>   regards, tom lane
>  
>
One other data point is that the shared buffers are very high:

max_connections = 512
shared_buffers = 10

though there are no other high memory consumers on that machine and it
has 4G of physical RAM.

-Joe

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] info on strange error messages on postgresql

2005-04-28 Thread Joe Maldonado
Hello,

I have these messages on my 7.4.7 database log...
TopMemoryContext: 87494704 total in 10676 blocks; 179400 free (61
chunks); 87315304 used
TopTransactionContext: 57344 total in 3 blocks; 648 free (5 chunks);
56696 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
SPI Plan: 3072 total in 2 blocks; 64 free (0 chunks); 3008 used
MessageContext: 8192 total in 1 blocks; 7248 free (1 chunks); 944 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 3072 total in 2 blocks; 1136 free (13 chunks); 1936 used
CacheMemoryContext: 421519360 total in 60 blocks; 1116168 free (769
chunks); 420403192 used

followed by about 16GB of the following type of entries...
pg_temp_1486707494: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used


First, are these two sets of messages related, and second, what would
cause either or both sets of messages to appear in the log?
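
Since the bulk of the 16GB is pg_temp_* entries, one hedged diagnostic (our
guess being that those entries correspond to temporary relations) is to
count what lives in the temp namespaces:

SELECT n.nspname, count(*)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp%'
GROUP BY n.nspname
ORDER BY count(*) DESC;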

Appreciate any help :)

-Joe




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Filesystem options for storing pg_data

2005-04-20 Thread Joe Maldonado
Hello all,

I am in a position where I'm torn between using ext2 vs ext3 to keep the
pg_data, pg_xlog, and pg_clog contents.

The main concern is that ext2 will not recover well from an improper
shutdown, such as a power loss.

My question is: what is the preferred filesystem for keeping this data,
to optimize performance while still having some fault tolerance?

-Joe

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] truncate/create slowness

2005-03-31 Thread Joe Maldonado
Hello all,
I frequently find that TRUNCATE TABLE and CREATE OR REPLACE FUNCTION
are both very slow, taking 50 secs or more to complete.  We have to run
both commands every minute, so this makes our application
non-functional.  But it is not a slow deterioration over time:
sometimes they run under a second, and sometimes they take over 50 secs.

As for other jobs in the system, there is nothing that accesses these
tables/functions when this happens.  Also, when I do a strace on the
postgres backend that is running either of the commands, I find it
doing a ton of llseek/read calls, as below.  I don't find it in S state
or sleeping on a semop (I understand this would happen if the function
was being used by some other client).  So I believe most of this time is
spent in I/O.  Now, there is no reason to believe slow I/O is the cause,
since the I/O on the system does not vary much from one minute to the
next.

pg_autovacuum is enabled, and this behavior does not necessarily
coincide with any vacuums being run simultaneously (although a vacuum of
pg_class seems to slow down these commands considerably).

I suspect that pg_class has too many pages (49182, as below) and for
some reason the above commands spend a lot of time updating it.
vacuum/analyze on pg_class has not helped.  Also, since the time taken
for these commands is not consistently fast or slow, I'm puzzled.

Any help/feedback will be great.
data
---
_llseek(9, 81469440, [81469440], SEEK_SET) = 0
read(9, "\335\1\0\0P10s\23\0\0\0\\\0\244\0\0 \1 @\236z\3\214\234"...,
8192) = 8192
_llseek(9, 81461248, [81461248], SEEK_SET) = 0
read(9, "\335\1\0\0l\310\311q\23\0\0\0\\\0\260\0\0 \1
D\236r\3\214"..., 8192) = 8192
_llseek(9, 81453056, [81453056], SEEK_SET) = 0
...
Query took 78 seconds:

CREATE OR REPLACE FUNCTION a() RETURNS SETOF a_type AS '
DECLARE
    q   TEXT;
    rec a_type%ROWTYPE;
BEGIN
    q := ''SELECT * FROM a1'';
    FOR rec IN EXECUTE q LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';

db=# select reltuples, relpages from pg_class where relname = 'pg_class';
 reltuples | relpages
-----------+----------
      1425 |    49182
(1 row)
db=# vacuum analyze verbose pg_class;
INFO:  vacuuming "pg_catalog.pg_class"
INFO:  index "pg_class_oid_index" now contains 1404 row versions in 
14486 pages
DETAIL:  443 index row versions were removed.
14362 index pages have been deleted, 14350 are currently reusable.
CPU 0.04s/0.03u sec elapsed 0.07 sec.
INFO:  index "pg_class_relname_nsp_index" now contains 1404 row
versions in 52396 pages
DETAIL:  443 index row versions were removed.
51453 index pages have been deleted, 2 are currently reusable.
CPU 0.13s/0.09u sec elapsed 0.23 sec.
INFO:  "pg_class": removed 443 row versions in 37 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_class": found 443 removable, 1404 nonremovable row versions
in 49182 pages
DETAIL:  114 dead row versions cannot be removed yet.
There were 2546542 unused item pointers.
0 pages are entirely empty.
CPU 0.32s/0.28u sec elapsed 0.67 sec.
INFO:  analyzing "pg_catalog.pg_class"
INFO:  "pg_class": 49182 pages, 1290 rows sampled, 1290 estimated total rows
VACUUM
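
For reference, a hedged query to see how big pg_class's own indexes are
(relpages are 8 kB blocks by default):

SELECT c2.relname, c2.relpages
FROM pg_class c2
JOIN pg_index i ON c2.oid = i.indexrelid
WHERE i.indrelid = 'pg_catalog.pg_class'::regclass;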

-Joe
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] pg_attribute index space question

2005-03-18 Thread Joe Maldonado
Hello,
After a VACUUM FULL I saw that pg_attribute's indexes haven't shrunk, as
reported by a subsequent vacuum analyze; the table itself, however, has
been reduced to 196 pages, while the index still occupies about 181557
pages. Are all system tables affected by this? How can we reclaim this
space without shutting down the postmaster? Is this fixed in any newer
release?

db=# select relpages, reltuples from pg_catalog.pg_class where
relname = 'pg_attribute';
 relpages | reltuples
----------+-----------
      196 |      9965
(1 row)
db=# vacuum verbose analyze pg_catalog.pg_attribute;
INFO:  vacuuming "pg_catalog.pg_attribute"
INFO:  index "pg_attribute_relid_attnam_index" now contains 9965 row
versions in 181557 pages
DETAIL:  1518 index row versions were removed.
181263 index pages have been deleted, 2 are currently reusable.
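
A sketch of one possible remedy, not verified on this release: REINDEX
rebuilds the bloated index, though reindexing system catalogs on older
versions may need special handling (e.g. a standalone backend), so check
the REINDEX docs for your version first.

REINDEX TABLE pg_catalog.pg_attribute;
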
thanks


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] view/plpgsql functions bug

2005-02-25 Thread Joe Maldonado
Tom Lane wrote:

> Joe Maldonado <[EMAIL PROTECTED]> writes:
>> After a create or replace view, the new view definition is not being
>> used by plpgsql functions that use the view. Is this a known bug? Is
>> there a workaround for it?
>
> Start a fresh backend session.  The old query plan is presumably being
> cached by plpgsql.  (Yes, this is something we'd like to fix.)
>
> 			regards, tom lane
Since we need processes that hold persistent DB connections to be able
to get data from the changed view, does it make sense to use plpgsql
functions instead of views?

I'll try to be precise about what I'm solving. If I have tables
A1..An, all of which I would like to truncate periodically and insert
with new data, but still block any kind of querying (queries are
selects with some kind of join on a subset of these tables, and
perhaps with other tables) until such truncates and inserts across all
tables are completed, then the thing that comes to mind is locking
each of these in order A1..An and then doing the truncates and inserts
within a transaction.

However, this can lead to deadlocks or failed select queries (if
postgres detects the deadlock), since a select may have obtained data
from Aj and be waiting on Ak, where k < j, when the above transaction is
ongoing.

So my current solution is as follows. Maintain two sets of tables
A1..An and B1..Bn, plus functions A1func() etc. for each of the
tables A1..An. All select queries use the functions instead of the
actual tables. Within a single transaction, insert new data into
B1..Bn, change these functions (using create or replace) to use Bi
instead of Ai, truncate A1..An, and commit.

Could this scheme lead to inconsistencies? I'm aware of one: a select
query that joins A1func() and A2func() could obtain old data from
A1func() and then obtain freshly imported data from A2func(). Since
the likelihood of this happening is rare (the above transaction
completes in milliseconds) I'm willing to compromise on this to avoid
explicitly locking N tables. Other than this, everything else seems to
work ok.

Since this requires some understanding of the internals of SELECT, JOIN
etc., I'd like to hear what you guys think about it.

Note that I prefer not to DROP tables since I have several
dependencies on A1..An and do not wish to recreate them periodically.
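
For concreteness, a minimal sketch of the swap transaction described above
(a1/b1 and a1_type are the scheme's names; the details are illustrative,
not tested):

BEGIN;
-- fresh data has already been loaded into b1
CREATE OR REPLACE FUNCTION a1func() RETURNS SETOF a1_type AS '
DECLARE
    rec a1_type%ROWTYPE;
BEGIN
    FOR rec IN EXECUTE ''SELECT * FROM b1'' LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';
TRUNCATE a1;   -- transaction-safe as of 7.4
COMMIT;
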
Joe


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] view/plpgsql functions bug

2005-02-23 Thread Joe Maldonado
After a create or replace view, the new view definition is not being
used by plpgsql functions that use the view. Is this a known bug? Is
there a workaround for it?
For instance, selecting from afunc() still returns the old view's results.
create table c ( a int );
create or replace view a as select * from c;
insert into c values (1);
create or replace function afunc() returns integer AS'
declare
   val int;
begin
   select into val a from A;
   return val;
end;
' language 'plpgsql';
select * from afunc();
 afunc
-------
     1
(1 row)

create table d ( a int );
create or replace view a as select * from d;
insert into d values (2);
select * from afunc();
 afunc
-------
     1
(1 row)
Thanks.
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] vacuuming slow

2005-02-22 Thread Joe Maldonado
Tom Lane wrote:

> Joe Maldonado <[EMAIL PROTECTED]> writes:
>> Can concurrent updates/deletes slow down vacuum when it is progressing? I
>> mean to ask if vacuum would have to redo or stall its work because of the
>> updates/deletes. Is it even possible that it goes into a long loop while
>> such updates occur?
>
> vacuum has to take an exclusive lock at the page level on any page it is
> trying to remove tuples from.  The code is such that any ordinary
> operations on the same page will block the vacuum; vacuum doesn't get
> the lock until no one else is interested in the page.  Given
> sufficiently heavy concurrent activity on the table, I suppose it could
> take quite a while for vacuum to finish.
>
>> The reason for my question is that I'm seeing vacuuming take several hours
>> on a big table (~1 million rows) that is frequently being updated (1000
>> updates/min). They are run around 2 times a day and each time it takes ~3
>> hrs. There are various other processes on the box using the database, but
>> the I/O load isn't very high. When vacuum runs, it causes various I/O
>> tasks to run very slowly.
>
> However, if the scenario I described were your problem, the vacuum would
> be spending most of its time just blocked waiting for page locks; it
> therefore wouldn't have much effect on I/O.  What I suspect is that your
> machine has no I/O bandwidth to spare and the extra demands of the
> vacuum are just saturating your disk.  You might look at iostat or
> vmstat output to see what's going on.  Also check with ps or top to
> see if the vacuuming backend spends most of its time in "S" (sleep)
> or "D" (disk IO) state.

Most of the time is spent in D state. Also, the rest of the system isn't
doing much I/O. iostat shows that without vacuum (on SCSI RAID1), our
application is not using much I/O on average (although there is bursty I/O):

Device:   rrqm/s  wrqm/s   r/s    w/s  rsec/s   wsec/s  rkB/s   wkB/s  avgrq-sz  avgqu-sz    await  svctm  %util
/dev/sda    0.00    1.60  0.00   1.20    0.00    22.40   0.00   11.20     18.67      1.97   164.17  42.50   5.10

and with vacuum:

Device:   rrqm/s  wrqm/s   r/s    w/s  rsec/s   wsec/s  rkB/s   wkB/s  avgrq-sz  avgqu-sz    await  svctm  %util
/dev/sda    0.20   96.20  0.30  90.70    4.00  1468.80   2.00  734.40     16.18   3050.21  3244.55  10.99 100.00

An await time of ~3 seconds is sure to slow down other queries from our
application and the system in general. But our tables and the nature of
our I/O updates are bursty.

I have 2 questions:
1. Does checkpointing write the pages that have been vacuumed, or does
vacuum not affect checkpointing I/O?
2. Since vacuum in 7.4 is capable of disrupting disk latency so much
(although disk bandwidth is prob only 40% used), given that it is in D
state most of the time, is rewriting the application to use temp
tables, truncate etc. so as to avoid vacuum a good idea?

Thanks.

> If you are using PG 8.0 you could experiment with vacuum_cost_delay and
> associated parameters.  This would slow down vacuum even more in terms
> of elapsed time to finish, but it should reduce the impact on concurrent
> processing.
>
> 			regards, tom lane

OK thanks
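
For reference, a hedged sketch of the 8.0 knobs Tom mentions (session
level; the table name is a placeholder):

SET vacuum_cost_delay = 10;    -- ms to sleep each time the cost limit is hit
SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers the sleep
VACUUM ANALYZE bigtable;
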
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] vacuuming slow

2005-02-22 Thread Joe Maldonado
Can concurrent updates/deletes slow down vacuum when it is progressing ? I
mean to ask if vacuum would have to redo or stall its work because of the
updates/deletes. Is it even possible that it goes into a long loop while
such updates occur ?
The reason for my question is that I'm seeing vacuuming take several hours
on a big table (~1million rows) that is frequently being updated (1000
updates/min). They are run around 2 times a day and each time it takes ~3
hrs. There are various other processes on the box using the database, but
the I/O load isn't very high. When vacuum runs, it causes various I/O
tasks to run very slowly.
To speed this up I have changed vacuum_mem from 8MB to 1GB. This did help;
I'm still collecting data to see how much it has improved things. But I
suspect this isn't the end.
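
For reference, that change looks like this at session level (7.4's
vacuum_mem is in KB; the table name is a placeholder):

SET vacuum_mem = 1048576;   -- 1 GB
VACUUM ANALYZE big_table;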

The other thing I do not understand is as follows: I looked at
pg_stat_all_tables and obtained n_tup_ins, upd, del etc. on a production
machine in order to reproduce the long vacuum times on my local system.
But when I simulated inserts/updates/deletes to get to the same stats for
the table, vacuum ran surprisingly fast (production took 1.5 hrs, my local
system 3 mins). On my local system, though, I had turned off other
application processes. But I do not find my results reasonable, since
these processes are not I/O intensive.

Thus, I'm trying to either (1) fix my application if it is inevitable, so
I don't do as many updates/min or (2) understand vacuuming better and fix
something I've missed.
Thanks all,
Joe
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] VACUUMING questions...

2004-11-04 Thread Joe Maldonado
Hello all,
I have a few somewhat simple questions.

Does the postmaster vacuum its internal (pg_*) tables? If not, what is
the best way to vacuum them without having to vacuum the entire db? And
how often is this recommended to be done?
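
To make the question concrete, a sketch of what I mean; presumably the
catalogs can be named like ordinary tables:

VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_proc;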

Thanks,
-Joe
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] sequences in schemas

2004-08-31 Thread Joe Maldonado
Hello,
How come within a CREATE SCHEMA block I cannot create a sequence?
I have entered:

CREATE SCHEMA joe
   CREATE SEQUENCE joe_seq start 1
   CREATE TABLE joe_table (id int, name varchar)
;

and I get a syntax error for SEQUENCE, though if it is just tables I do not.
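
A workaround, sketched here, would be to create the sequence in its own
statement against the new schema:

CREATE SCHEMA joe
   CREATE TABLE joe_table (id int, name varchar);

CREATE SEQUENCE joe.joe_seq START 1;
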
-Joe
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] postmaster out of memory....

2004-02-28 Thread Joe Maldonado
Hello all!

The postmaster runs out of memory when asked to aggregate totals across
4.5 or so million records.  The visible effect is that the postmaster will
grow to the 3GB process limit and die without a core :(.

I have seen this same behaviour discussed back in the 6.5 archives in the
thread with subject "[SQL] How to avoid "Out of memory" using aggregate
functions?".  Is this fixed?  Why is the postmaster exceeding its 102MB
sort mem size when doing these queries instead of paging out the data?
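
If this is 7.4, one unconfirmed guess is hashed aggregation overshooting
sort_mem when the planner underestimates the number of groups; a hedged
experiment:

SET enable_hashagg = off;   -- falls back to sort-based grouping
-- then re-run the aggregate query and watch the backend's memory use
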
-Joe Maldonado

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Index storage question...

2004-02-14 Thread Joe Maldonado
Hello all,
	I am in the process of planning disk utilization for postgres and was 
wondering what the storage size is for btree, rtree, and hash indexes.
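
In the meantime, a hedged way to measure existing index sizes (assumes the
default 8 kB block size):

SELECT relname, relpages, relpages * 8 AS approx_kb
FROM pg_class
WHERE relkind = 'i'
ORDER BY relpages DESC;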

Thanks,

-Joe

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] selects during vacuum

2003-07-16 Thread Joe Maldonado
Here it is...

Things to note:
- This table contains 1 record always.
- The record is updated once per second.
- Every 167 seconds a vacuum analyze is run on the table.
- After some time it hangs in the analyze and blocks all access to that
  table, including selects.


# vacuum analyze verbose 
# ;
NOTICE:  --Relation --
NOTICE:  Pages 513: Changed 1, Empty 0; Tup 80: Vac 0, Keep 79, UnUsed 4540.
Total CPU 0.02s/0.00u sec elapsed 0.02 sec.
NOTICE:  --Relation pg_toast_18119--
NOTICE:  Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE:  Analyzing 

We dropped the table and recreated it, and things seem to be working
right for now, though I have an accelerated simulation of the conditions
running on another db to see if we can reproduce it...

-Joe

On Tue, 2003-07-15 at 16:35, Richard Huxton wrote: 
> On Tuesday 15 Jul 2003 9:09 pm, Joe Maldonado wrote:
> > Hello,
> > Vacuum analyze is taking a really long time on a fairly small table and
> > during the time the vacuum is running all "select * from ;"
> > seems to hang untill the vacuum is done.  Any help on this matter would
> > be greatly appreciated...
> 
> Can you post the output of a "verbose" vacuum analyse? That will tell us what 
> it's up to.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] selects during vacuum

2003-07-15 Thread Joe Maldonado
Sorry, I forgot to mention that we are running postgres 7.2.3.

-Joe
On Tue, 2003-07-15 at 16:15, scott.marlowe wrote:
> On 15 Jul 2003, Joe Maldonado wrote:
> 
> > Hello,
> > Vacuum analyze is taking a really long time on a fairly small table and
> > during the time the vacuum is running all "select * from ;"
> > seems to hang untill the vacuum is done.  Any help on this matter would
> > be greatly appreciated...
> 
> What version of postgresql are you running?


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly