Re: [GENERAL] Strange duplicate key violation error

2007-06-28 Thread Richard Huxton

Casey Duncan wrote:
There is in fact one of these tables for each schema; each one contains 
exactly one row (the "log" in the name is a bit misleading, these just 
contain the current replica state, not a running log).


2007-06-28 08:53:54.937 PDT [d:radio_reports_new u:slony s:4683d86f.3681 
3] ERROR:  duplicate key violates unique constraint "replica_sync_log_pkey"


I've never seen this error before in millions of updates to this table. 
It confuses me because the columns indexed by the primary key *are not* 
being modified, so how could the update violate the constraint? 
Furthermore there is only one record in the table anyway.


The updates are performed by daemons that are in charge of replicating 
the changes for one database each. So there is no concurrent updating on 
this table, only one process updates it.


OK, so a single slony process is updating a single row and causing a 
pkey violation. Has this happened only the once?


The error caused the daemon to pause, so I restarted it. The error did 
not recur the second time.


fwiw this is postgres 8.1.8 on Linux.


Well, 8.1.9 is the latest release, but I don't see anything concerning this.

Two things occur to me:
1. Slony adds its own triggers to assorted tables. I don't see how it 
could make a difference, but it might be worth keeping in mind.
2. Primary keys are enforced by unique btree indexes and you could have 
come across a strange corner-case where it's not updating correctly. If 
it's only happened the once, that will make it very hard to track down.


If it doesn't do it again in the next day or so, I'd be inclined to 
REINDEX the table, in case it is in a slightly odd state.
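
A minimal sketch of that cleanup, using the table name from the log message 
above (note that REINDEX takes an exclusive lock on the table, so run it 
during a quiet period):

REINDEX TABLE replica_sync_log;       -- rebuild all indexes on the table
-- or, to rebuild only the suspect index:
REINDEX INDEX replica_sync_log_pkey;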


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] date time function

2007-06-28 Thread A. Kretschmer
On Thu, 28.06.2007 at 16:04:48 -0400, Jasbinder Singh Bali
wrote the following:
> Hi,
> 
> I have a timestamp field in my table.
> I need to check its difference in days with the current date.
> 
> field name is time_stamp and I did it as follows:
> 
> select age(timestamp '2000-06-28 15:39:47.272045 ')
> 
> it gives me something like
> 
> 6 years 11 mons 29 days 08:20:12.727955
> 
> How can I convert this result into an absolute number of days?

test=*# select current_date - '2000-06-28 15:39:47.272045'::date;
 ?column?
----------
     2557
(1 row)
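
Applied to the column from the original question (assuming the table is 
named t here, which is not given in the original message; the cast to date 
drops the time-of-day part before subtracting):

select current_date - time_stamp::date as days_old from t;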



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Oh, I was just thinking of a way for Bruce to get out of his current
> situation.

Oh, for that a manual "drop table" as superuser should work fine.

regards, tom lane



Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Yeah, we had better investigate some way to clean them up.  It was never
> >> obvious before that it mattered to get rid of orphan temp tables, but I
> >> guess it does.
> 
> > Would it be enough to delete the tuple from pg_class?
> 
> No, you need a full DROP.  I don't see that that's harder than removing
> only the pg_class tuple --- the problem in either case is to be sure
> it's OK.  In particular, how to avoid a race condition against an
> incoming backend that adopts that BackendId?  Worst-case, you could be
> deleting a temp table he just made.

Oh, I was just thinking of a way for Bruce to get out of his current
situation.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Yeah, we had better investigate some way to clean them up.  It was never
>> obvious before that it mattered to get rid of orphan temp tables, but I
>> guess it does.

> Would it be enough to delete the tuple from pg_class?

No, you need a full DROP.  I don't see that that's harder than removing
only the pg_class tuple --- the problem in either case is to be sure
it's OK.  In particular, how to avoid a race condition against an
incoming backend that adopts that BackendId?  Worst-case, you could be
deleting a temp table he just made.

regards, tom lane



Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Well, it certainly seems like this shouldn't be happening.  Maybe the
> > table belonged to a session that crashed, but the pg_class entry has not
> > been cleaned up -- possibly because that backend has not connected to
> > that particular database.
> 
> Hm --- a crash would mean that the temp table would remain until some
> other session (a) connected to the same database (b) using the same
> BackendId (sinval slot number), and (c) decided to create some temp
> tables of its own.  So indeed it's not implausible that the table could
> hang around for a long time, especially if you were unlucky enough that
> the original creator had been using a very high BackendId slot.  (Which
> pg_temp schema is this table attached to, anyway?)

It's pg_temp_63.  Backend 63 is running in another database.  It seems
perfectly possible that a backend connects to database A, creates a temp
table, crashes, then connects to database B after restart and then keeps
running there forever :-(

> > Maybe autovacuum itself could do something about cleaning up this kind
> > of stuff on sight (--> dropping temp tables belonging to sessions that
> > crash).  I'm not sure though.
> 
> Yeah, we had better investigate some way to clean them up.  It was never
> obvious before that it mattered to get rid of orphan temp tables, but I
> guess it does.

Would it be enough to delete the tuple from pg_class?  I guess that will
leave behind the tuples in pg_attribute etc, but I don't see another way
to drop it ...  Maybe UPDATE to move it to the local temp schema and
then DROP it?

Or maybe it works to do DROP TABLE pg_temp_63.temp2394 as superuser ...?
I haven't tried.
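
For what it's worth, a superuser could first check what is actually sitting 
in that schema with a narrowed variant of the catalog query quoted elsewhere 
in this thread:

select n.nspname, c.relname
from pg_class c join pg_namespace n on (c.relnamespace = n.oid)
where n.nspname = 'pg_temp_63';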

> Another possibility is just to ignore temp tables while computing
> datvacuumxid.  A temp table that survives for > 2G transactions is going
> to be trouble, but I'm not sure there's anything we can usefully do
> about it anyway --- certainly autovacuum has no power to fix it.

Yes, I was going to suggest that, though it doesn't seem right.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"Tiene valor aquel que admite que es un cobarde" (Fernandel)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Well, it certainly seems like this shouldn't be happening.  Maybe the
> table belonged to a session that crashed, but the pg_class entry has not
> been cleaned up -- possibly because that backend has not connected to
> that particular database.

Hm --- a crash would mean that the temp table would remain until some
other session (a) connected to the same database (b) using the same
BackendId (sinval slot number), and (c) decided to create some temp
tables of its own.  So indeed it's not implausible that the table could
hang around for a long time, especially if you were unlucky enough that
the original creator had been using a very high BackendId slot.  (Which
pg_temp schema is this table attached to, anyway?)

> Maybe autovacuum itself could do something about cleaning up this kind
> of stuff on sight (--> dropping temp tables belonging to sessions that
> crash).  I'm not sure though.

Yeah, we had better investigate some way to clean them up.  It was never
obvious before that it mattered to get rid of orphan temp tables, but I
guess it does.

Another possibility is just to ignore temp tables while computing
datvacuumxid.  A temp table that survives for > 2G transactions is going
to be trouble, but I'm not sure there's anything we can usefully do
about it anyway --- certainly autovacuum has no power to fix it.

regards, tom lane



Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Alvaro Herrera

Bruce, please make sure to keep the list copied on replies.  I think
there is an important bug here and I don't want it to get lost just
because I lose track of it.  I'm also crossposting to pgsql-hackers.

Bruce McAlister wrote:

> okidoki, I tried this:
> 
> blueface-crm=# select relname, nspname from pg_class join pg_namespace
> on (relnamespace = pg_namespace.oid) where
> pg_is_other_temp_schema(relnamespace);
>  relname  |  nspname
> --+
>  temp4295 | pg_temp_63
> (1 row)
> 
> blueface-crm=# select pg_stat_get_backend_pid(63);
>  pg_stat_get_backend_pid
> -
> 6661
> (1 row)
> 
> blueface-crm=# select datname, client_addr, client_port from
> pg_stat_activity where procpid = '6661';
> datname | client_addr | client_port
> +-+-
>  whitelabel-ibb | 10.6.0.181  |1587
> (1 row)
> 
> Is that correct? If it is then I'm really confused, how can a connection
> to the whitelabel-ibb database create temporary tables in the
> blueface-crm database?

Well, it certainly seems like this shouldn't be happening.  Maybe the
table belonged to a session that crashed, but the pg_class entry has not
been cleaned up -- possibly because that backend has not connected to
that particular database.

Maybe autovacuum itself could do something about cleaning up this kind
of stuff on sight (--> dropping temp tables belonging to sessions that
crash).  I'm not sure though.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] date time function

2007-06-28 Thread Michael Glaesemann


On Jun 28, 2007, at 15:13 , Raymond O'Donnell wrote:


Cast your result to type INTERVAL - something like this:

postgres=# select (current_timestamp - timestamp '2007-05-01')::interval;

       interval
----------------------
 58 days 21:10:36.748
(1 row)


The cast to interval is superfluous: timestamp - timestamp already  
gives you an interval result. Also, Postgres will interpret  
'2007-05-01' as a date in this context:


# select current_timestamp - '2007-05-01';
        ?column?
-------------------------
 58 days 16:25:53.776695
(1 row)
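
To get the original poster's "absolute number of days" from such a result, 
extract(day ...) works here, since a timestamp difference is reported as 
days plus a time component rather than years/months:

# select extract(day from current_timestamp - timestamp '2007-05-01');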

Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] date time function

2007-06-28 Thread Raymond O'Donnell

On 28/06/2007 21:04, Jasbinder Singh Bali wrote:


How can i convert this result into absolute number of days.


Cast your result to type INTERVAL - something like this:

postgres=# select (current_timestamp - timestamp '2007-05-01')::interval;

       interval
----------------------
 58 days 21:10:36.748
(1 row)

Of course, you'll need to decide how to handle the part of a day left over.

HTH,

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] date time function

2007-06-28 Thread Jasbinder Singh Bali

Hi,

I have a timestamp field in my table.
I need to check its difference in days with the current date.

field name is time_stamp and I did it as follows:

select age(timestamp '2000-06-28 15:39:47.272045')

it gives me something like

6 years 11 mons 29 days 08:20:12.727955

How can I convert this result into an absolute number of days?

thanks,

~Jas


Re: [GENERAL] i need a rad/ide open source for work with postgresql

2007-06-28 Thread Raymond O'Donnell

On 28/06/2007 18:47, Mario Jose Canto Barea wrote:


Why can you make a good relational database server like PostgreSQL 8.1,
but not make an open-source RAD/IDE for programming with PostgreSQL 8.1,
like Delphi, C++Builder, or Progress 4GL?


Because they do different jobs. The languages you mention are for making 
front-end GUIs (for the most part), whereas PostgreSQL is a database.


Have you looked at pgAdmin?

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] OFFSET and LIMIT - performance

2007-06-28 Thread David Wall



Network transmission costs alone would make the second way a loser.

Large OFFSETs are pretty inefficient because the backend generates and
discards the rows internally ... but at least it never converts them to
external form or ships them to the client.  Rows beyond the LIMIT are
not generated at all.
  
Some of this would depend on the query, too, I suspect, since an ORDER 
BY would require the entire result set to be determined and sorted before 
the limit/offset could take place. Regardless, it's better than 
filtering on the Java/client side, to avoid shipping the rows from the 
database backend to the client.


But how would that compare to using a cursor/fetch query? It seems like 
the JDBC library will automatically use a cursor if you specify some 
params on the PreparedStatement, though the details escape me. I think 
it's related to setFetchSize() and/or setMaxRows(). Of course, those 
are not guaranteed to do anything special either, and you'd still need 
to retrieve and discard initial rows unless you can adjust your WHERE 
condition to find the "next set".

If you have an ORDER BY on a unique field, for example, you could use 
that field to query the next set by remembering the last value in your 
previous query set (or select 1 more row than you need so you have the 
exact value that would be next) and specifying it in the WHERE clause.  
Even this could be an issue if updates would change the grouping.
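
A sketch of that "remember the last value" approach, with made-up table and 
column names for illustration (the first query fetches a page; the second 
uses the last id seen in place of OFFSET):

select id, payload from items order by id limit 50;
select id, payload from items where id > 12345 order by id limit 50;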


LIMIT/OFFSET are not part of the SQL standard either, should that matter 
for DB portability. I believe MySQL supports it, but it seems like 
Oracle didn't (at least at one time).


David


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] OFFSET and LIMIT - performance

2007-06-28 Thread Tom Lane
"Jan Bilek" <[EMAIL PROTECTED]> writes:
> I'm using PGDB with JDBC. In my app i need to select only portion of all =
> available rows. I know i can do it two ways:
> 1. I can use OFFSET and LIMIT SQL statements or
> 2. I can select all rows and then filter requested portion in Java.

> My question - Does the second way significantly affect performance =
> especially when used with JDBC?

Network transmission costs alone would make the second way a loser.

Large OFFSETs are pretty inefficient because the backend generates and
discards the rows internally ... but at least it never converts them to
external form or ships them to the client.  Rows beyond the LIMIT are
not generated at all.

regards, tom lane



Re: [GENERAL] [ADMIN] i need a rad/ide open source for work with postgresql

2007-06-28 Thread Joshua D. Drake

Mario Jose Canto Barea wrote:

Why can you make a good relational database server like PostgreSQL 8.1,
but not make an open-source RAD/IDE for programming with PostgreSQL 8.1,
like Delphi, C++Builder, or Progress 4GL?


Uhhh... why not just use Delphi or C++Builder with ODBC?

Joshua D. Drake






thanks














--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/





Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-28 Thread Gurjeet Singh

On 6/28/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:



This is called a 'correlated subquery'. Basically the subquery is
performed for each record in the top query.

Google gave me this:

http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm



I think the sub-section titled "Example: Correlated subquery in a WHERE
Clause" is appropriate to explain our query at hand.

Simply put, correlated queries are like nested FOR loops of any high level
programming language.

1. FOR( record R in result of outer-query )
2.   execute inner query, using any R.colname1
3.   compare R.colname2 with the result of the correlated-subquery
4.   produce R in output, iff the above comparison succeeded

Line 2 can be treated as another FOR loop, where every record of the inner
query is processed, comparing its local expressions with a column (or
expression) that comes from the outer query.

The comparison in step 3 can be against any expression, with columns or
against a pure constant too!

For example, the following query produces the name of all the employees, who
manage at least one other employee.

select empno, ename
from   emp e1
where  exists (select 1
  from   emp e2
  where e2.mgr = e1.empno);

The only thing I would add for our query is that the outer SELECT of
our query produces a cartesian product (no join condition between t1 and
t2), but only one row from t2 qualifies for the join, since the WHERE
condition is on a unique column and the correlated subquery returns just
the required value (the lowest of the IDs that are greater than the current
t1.ID being processed).

I know the above one-line-paragraph may sound a bit cryptic for someone new
to correlated subqueries, but if you understand the example in the link
above, then this would start making some sense.
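
For the record, a hedged reconstruction of the kind of query being 
described (table and column names are assumed, since the original query is 
not quoted in this message):

select a.id, b.id as next_id
from   t a, t b
where  b.id = (select min(c.id)
               from   t c
               where  c.id > a.id);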

And there's probably more to find. Interestingly enough, Wikipedia doesn't
seem to have an article on the subject.






Regards,
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device


[GENERAL] i need a rad/ide open source for work with postgresql

2007-06-28 Thread Mario Jose Canto Barea
Why can you make a good relational database server like PostgreSQL 8.1,
but not make an open-source RAD/IDE for programming with PostgreSQL 8.1,
like Delphi, C++Builder, or Progress 4GL?




thanks











[GENERAL] Strange duplicate key violation error

2007-06-28 Thread Casey Duncan
I have this table "replica_sync_log" which is updated thousands of  
times a day to reflect the state of various schemas in a database  
which acts as an offline secondary to various other databases (each  
of the source databases is mapped to its own schema in the  
secondary). The table has the following definition:


Table "radio_prod_default.replica_sync_log"
  Column  |Type | Modifiers
--+-+---
db_host  | text| not null
db_port  | text| not null
db_name  | text| not null
last_sync_id | integer |
last_sync_time   | timestamp without time zone |
last_commit_time | timestamp without time zone |
Indexes:
"replica_sync_log_pkey" PRIMARY KEY, btree (db_host, db_port,  
db_name), tablespace "data1"

Tablespace: "data1"

There is in fact one of these tables for each schema; each one 
contains exactly one row (the "log" in the name is a bit misleading, 
these just contain the current replica state, not a running log).


Here is an error that I got this morning that doesn't make much sense  
to me:


2007-06-28 08:53:54.937 PDT [d:radio_reports_new u:slony s:4683d86f.3681 3]
ERROR:  duplicate key violates unique constraint "replica_sync_log_pkey"
2007-06-28 08:53:54.937 PDT [d:radio_reports_new u:slony s:4683d86f.3681 4]
STATEMENT:  UPDATE replica_sync_log SET last_sync_id = 7147892,
last_sync_time = '2007-06-27 23:28:04.586846'::timestamp, last_commit_time
= 'Thu Jun 28 08:53:54 2007'::timestamp WHERE db_name = 'radio_prod_default'
AND db_host = 'radiodb-default-1' AND db_port = '5432';


I've never seen this error before in millions of updates to this  
table. It confuses me because the columns indexed by the primary key  
*are not* being modified, so how could the update violate the  
constraint? Furthermore there is only one record in the table anyway.


The updates are performed by daemons that are in charge of  
replicating the changes for one database each. So there is no  
concurrent updating on this table, only one process updates it.


The error caused the daemon to pause, so I restarted it. The error  
did not recur the second time.


fwiw this is postgres 8.1.8 on Linux.

Thanks for any insights.

-Casey




Re: [GENERAL] using PREPAREd statements in CURSOR

2007-06-28 Thread Björn Lundin


28 jun 2007 kl. 16.45 skrev Tom Lane:


=?ISO-8859-1?Q?Bj=F6rn_Lundin?= <[EMAIL PROTECTED]> writes:

I'm connecting via libpq and want to
use prepared statements in a cursor.


You can't.


That explains why I could not find an example...


If you're just interested in fetching a large query result in  
sections,


No, I'm writing a sql-binding to libpq, for use with a current
system,  that is written with Oracle as db.
The processes all use a sql-binding to Oracle,
and in order not to rewrite them, I want to keep the cursor handling.

The processes are filled with code snippets like this:

prepare(Statement1,"select col3,col4 from table_a where col1 = :COL_A  
and col2 = :COL_B");

set(Statement1,"COLA_A",10);
set(Statement1,"COLA_B","ABC");
open_cursor(Statement1)
loop
  fetch(Statement1, end_of_set);
  exit when end_of_set;
  get(Statement1,"col3",var3);
  get(Statement1,"col4",var4);
end loop;
close_cursor(Statement);

--use var3 and var4 here

So I redesigned, and use plain strings, that I pass to
libpq. They are build on the fly.

Since I prefer keeping the cursor over the prepared statements, is there
any performance gain I can get, besides fetching, say, 100 rows at a time
rather than 1?
(The fetch above is a wrapper around libpq's fetch.)

I'm thinking, is it better to explicitly cast the bind variables in  
the statement string?

The above statement would be sent to libpq as

declare xyz cursor for select col3,col4 from table_a where col1 = 10
and col2 = 'ABC'


Would it be better to send it as

declare xyz cursor for select col3,col4 from table_a where col1 =
10::integer and col2 = 'ABC'::text


I will use integer, float, character(n), date, time w/o tz
(should perhaps be 'ABC'::character(3))

Or should I use say int4 instead of integer?
The character(3) are constrained by the host language,
ie Ada.



A well-thought-out API proposal
would probably be favorably received.


Hmm, I would think that would be over my head...
But, in a way I'm glad that the PQPrepare is not an option,
passing variables in an array from Ada to C would
give at least some headache.

The 'set' approach would be easier, when interfacing from other  
languages, I think.




 (yes, DECLARE CURSOR
is planned differently than a plain select).


And which way is to be preferred?

/Björn





[GENERAL] OFFSET and LIMIT - performance

2007-06-28 Thread Jan Bilek
Hello,
I'm using PGDB with JDBC. In my app I need to select only a portion of all 
available rows. I know I can do it two ways:
1. I can use OFFSET and LIMIT SQL statements or
2. I can select all rows and then filter the requested portion in Java.

My question - Does the second way significantly affect performance especially 
when used with JDBC?
Does the LIMIT statement do anything else than browsing/filtering rows with 
a cursor (= the same thing I'd do in Java)?
How are the selected rows processed in select queries? Does PG select all 
rows and then filter them using statements like LIMIT and OFFSET, or does 
it apply the statements while processing the query?

Thanks for your analysis!

JB

Re: [GENERAL] Column Default Clause and User Defined Functions

2007-06-28 Thread Michael Glaesemann


On Jun 28, 2007, at 0:01 , Tom Lane wrote:


Whether that is a good idea is another question entirely ... it seems
a bit questionable, but on the other hand time-varying defaults like
"default now()" have time-honored usefulness, so I'm not quite sure
why I feel uncomfortable with it.


I thought it was probably possible to do so by wrapping it in a 
function, but considered it in the same vein as wrapping queries in 
functions to use them in CHECK constraints. It's a way to fake out 
the checking and may lead to unexpected results if the data under the 
subquery changes. It's not quite as serious as with CHECK constraints, 
which are used to ensure data integrity. And ISTM any subquery you'd put 
in a DEFAULT could just as well go into your INSERT, where it's more 
obvious what's going on. Though perhaps I'm being too conservative 
here: it could be convenient to put a commonly used subquery into the 
DEFAULT.


It looks like allowing functions other than those of the current date-time 
variety (which Postgres does support) is an extension of the SQL 2003 
spec (if I'm reading this correctly):


11.5 <default clause>

Function
Specify the default for a column, domain, or attribute.

Format
<default clause> ::= DEFAULT <default option>
<default option> ::=
    <literal>
  | <datetime value function>
  | USER
  | CURRENT_USER
  | CURRENT_ROLE
  | SESSION_USER
  | SYSTEM_USER
  | CURRENT_PATH
  | <implicitly typed value specification>

Not that I would support limiting Postgres to a spec-strict  
definition of this :)


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Bruce McAlister
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Alvaro Herrera wrote:
>>> Bruce McAlister wrote:
>>>> Martijn van Oosterhout wrote:
>>>>> All the values here look OK, except one:
>>>>>
>>>>> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>>>>>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind
>>>>>> in ('r', 't');
>>>>>>    oid   | relfrozenxid
>>>>>> ---------+--------------
>>>>>>  2570051 |   2947120794
>>>>>
>>>>> Whatever this table is, the freeze XID isn't getting updated for
>>>>> some reason...
>>>
>>> Doh.
>>>
>>>> This looks like a temporary relation,
>>>>
>>>>  temp4295   |   2947120794
>>>>
>>>> Is there a way we can manually force these to update?
>>>
>>> No.  Only the session that created the temp table can vacuum it.
>>> Autovacuum skips temp tables.  I guess the only thing you can do here
>>> is close that session.
>>
>> How could I go about finding out which session created the temporary
>> table?
>
> Do this:
>
> select relname, nspname
> from pg_class join pg_namespace on (relnamespace = pg_namespace.oid)
> where pg_is_other_temp_schema(relnamespace);
>
> It returns something like
>
>  relname |  nspname
> ---------+-----------
>  foo     | pg_temp_1
> (1 fila)
>
> So it is session with ID 1.  You can then find out the PID with
>
> alvherre=# select pg_stat_get_backend_pid(1);
>  pg_stat_get_backend_pid
> -------------------------
>                    13416
> (1 fila)
okidoki, I tried this:

blueface-crm=# select relname, nspname from pg_class join pg_namespace
on (relnamespace = pg_namespace.oid) where
pg_is_other_temp_schema(relnamespace);
 relname  |  nspname
--+
 temp4295 | pg_temp_63
(1 row)

blueface-crm=# select pg_stat_get_backend_pid(63);
 pg_stat_get_backend_pid
-
6661
(1 row)

blueface-crm=# select datname, client_addr, client_port from
pg_stat_activity where procpid = '6661';
datname | client_addr | client_port
+-+-
 whitelabel-ibb | 10.6.0.181  |1587
(1 row)

Is that correct? If it is then I'm really confused, how can a connection
to the whitelabel-ibb database create temporary tables in the
blueface-crm database?


>> So this could be a potential issue for autovacuum then. If, for example,
>> our environment uses connection pooling, then these connections are
>> persistent to the database? From Martijn's comments, I would assume then
>> that the connection pooling driver (ODBC/NPGSQL etc etc) should
>> implement the "RESET SESSION" DDL after each transaction/query so that
>> we don't have these temporary tables lying about indefinitely?
>
> Right -- but sadly RESET SESSION (actually called DISCARD because RESET
> was getting too overcrowded) is only available in 8.3.
>
> But you are right, temp tables and connection pools are a rather serious
> issue, it seems.

Is there a way we can actually work around this in the 8.2 release? Is
there a way or a command that we could run manually over the connection
that would clean up the session environment? Is it possible to do it
programmatically?

>>> I'm thinking that maybe we should make vac_update_datfrozenxid ignore
>>> temp tables.  But this doesn't really work, because if we were to
>>> truncate pg_clog there would be tuples on the temp table marked with
>>> XIDs that are nowhere to be found.  Maybe we could make some noise
>>> about it though.
>>>
>>> This is a problem only in recent releases (8.2) because we started
>>> allowing the max freeze age to be configurable.
>>
>> I think the max/min freeze age parameters we are using here are the
>> default ones, I think I just uncommented them.
>
> Humm, I would like to think that the default freeze age is 2 billion ...
> [checks the code]  but no, you are right, the default is 200 million.

Is 2 billion a better value to set it to?
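
For reference, the 8.3 command being discussed looks like this (not 
available in 8.2, as noted above):

DISCARD TEMP;  -- drops the session's temporary tables
DISCARD ALL;   -- resets the entire session state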



Re: [GENERAL] autovacumm not working ?

2007-06-28 Thread Alvaro Herrera
Tomasz Rakowski wrote:
> 
> How does a restart of the database server influence the autovacuum
> process?
> 
> I think that somewhere on this mailing list I read that autovacuum in
> such a case loses some important information and after a database server
> restart will not behave as expected until VACUUM ANALYZE is executed.
> Is that true?

No, this was only true in versions previous to 8.1.

Unless you stop the database in "immediate" mode.  If you do that, then
all the information that autovacuum uses is lost.  If you use "smart" or
"fast" it should work fine.

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
"Porque francamente, si para saber manejarse a uno mismo hubiera que
rendir examen... ¿Quién es el machito que tendría carnet?"  (Mafalda)



Re: [GENERAL] autovacumm not working ?

2007-06-28 Thread Tom Lane
Tomasz Rakowski <[EMAIL PROTECTED]> writes:
> The strange thing is that the number of pages allocated for the "t_ais_position" 
> table and the "t_ais_position_pkey" index hasn't changed
> (so autovacuum works OK on them), but the number of pages allocated to the 
> "ix_t_ais_position_update_time" index increased 
> from 250 to 2218 (roughly 9 times). 

> The "ix_t_ais_position_update_time" index is created as "create index 
> ix_t_ais_position_update_time on t_ais_position(update_time)", so it is 
> supposed to be updated very frequently (like the main table). The other index, 
> "t_ais_position_pkey", is on the primary key and its values don't change at all. 

I think you may be running into one of the usage patterns that btree
vacuuming doesn't clean up very well.  Do you initially create a lot of
rows with nearby update_times, and eventually remove all but a few?
The worst case is where you're leaving only one live row per index page.
The current VACUUM code can only recycle index pages that have become
totally empty ...
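A quick way to watch this effect (a sketch, using the table and index names from this thread) is to compare index sizes over time and, once an index has bloated, rebuild it:

```sql
-- Check how many pages/bytes each index occupies
SELECT relname, relpages, pg_relation_size(oid) AS bytes
FROM pg_class
WHERE relname IN ('t_ais_position_pkey', 'ix_t_ais_position_update_time');

-- VACUUM cannot shrink a bloated btree; REINDEX rebuilds it compactly
-- (note: REINDEX takes an exclusive lock on the table while it runs)
REINDEX INDEX ix_t_ais_position_update_time;
```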

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] autovacumm not working ?

2007-06-28 Thread Tomasz Rakowski

 How does a restart of the database server influence the autovacuum process?

I think that somewhere on this mailing list I read that autovacuum in such a 
case loses some important information, 
and after a database server restart it will not behave as expected until VACUUM 
ANALYZE is executed.
Is it true?


Tomasz Rakowski



- Original Message 
From: Alvaro Herrera <[EMAIL PROTECTED]>
To: Tomasz Rakowski <[EMAIL PROTECTED]>
Cc: Matthew T. O'Connor <[EMAIL PROTECTED]>; pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 2:50:40 PM
Subject: Re: [GENERAL] autovacumm not working ?

Tomasz Rakowski wrote:
> Matthew,
> 
>   Thank you for reply.
> 
>   I was trying to configure autovacuum for given table to be more aggressive 
> (min=100, scale factor=0.01).
>   Then waited for autovacuum to be activated for given table 
>   (watching  Server status window; about 1000 updates/min, 30.000 rows
>   in the table, so I didn't have to wait too long).

Did you reload (pg_ctl reload) after changing the postgresql.conf
settings?  Also note that you can alter values for a specific table by
putting them in the pg_autovacuum table.
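For example, per-table settings in the 8.1/8.2 era go into the pg_autovacuum system catalog; a sketch (a value of -1 means "fall back to the server-wide default"):

```sql
-- aggressive per-table autovacuum settings (min=100, scale factor=0.01)
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit)
VALUES
    ('t_ais_position'::regclass, true,
     100, 0.01,
     100, 0.01,
     -1, -1);
```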

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and
lack of hesitasion in answering a lost soul's question, I just wished the rest
of the mailing list could be like this."   (Fotis)
   (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)







   

Re: [GENERAL] autovacumm not working ?

2007-06-28 Thread Tomasz Rakowski
Hi there,

I ran VACUUM VERBOSE and the output from it is below:
-
INFO:  vacuuming "ais.t_ais_position"
INFO:  scanned index "t_ais_position_pkey" to remove 972 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.32 sec.
INFO:  scanned index "ix_t_ais_position_update_time" to remove 972 row versions
DETAIL:  CPU 0.00s/0.01u sec elapsed 2.81 sec.
INFO:  scanned index "idx_ais_position" to remove 972 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 1.29 sec.
INFO:  "t_ais_position": removed 972 row versions in 305 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.46 sec.
INFO:  index "t_ais_position_pkey" now contains 26582 row versions in 145 pages
DETAIL:  972 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_t_ais_position_update_time" now contains 26582 row versions in 
250 pages
DETAIL:  972 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "idx_ais_position" now contains 26664 row versions in 246 pages
DETAIL:  972 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.93 sec.
INFO:  "t_ais_position": found 972 removable, 26582 nonremovable row versions 
in 498 pages
DETAIL:  22 dead row versions cannot be removed yet.
There were 9796 unused item pointers.
498 pages contain useful free space.
0 pages are entirely empty.
CPU 0.00s/0.01u sec elapsed 6.81 sec.

Query returned successfully with no result in 6889 ms.
--


Then I left the system running for several hours. There were about 1 million 
updates to the table (1000/min).
The number of rows in the table hasn't changed much: from 26582 to 26962 rows.
Autovacuum was executed on average every 5 minutes (scale_factor for this table 
is 0.01, base_thresh is 100, naptime is the default 1 min).

Then I ran VACUUM VERBOSE one more time:


INFO:  vacuuming "ais.t_ais_position"
INFO:  scanned index "t_ais_position_pkey" to remove 2387 row versions
DETAIL:  CPU 0.00s/0.01u sec elapsed 0.32 sec.
INFO:  scanned index "ix_t_ais_position_update_time" to remove 2387 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 19.22 sec.
INFO:  scanned index "idx_ais_position" to remove 2387 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 2.17 sec.
INFO:  "t_ais_position": removed 2387 row versions in 489 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 1.54 sec.
INFO:  index "t_ais_position_pkey" now contains 26962 row versions in 146 pages
DETAIL:  2387 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "ix_t_ais_position_update_time" now contains 26962 row versions in 
2218 pages
DETAIL:  2387 index row versions were removed.
19 index pages have been deleted, 11 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "idx_ais_position" now contains 27306 row versions in 348 pages
DETAIL:  2387 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 1.68 sec.
INFO:  "t_ais_position": found 2387 removable, 26962 nonremovable row versions 
in 498 pages
DETAIL:  19 dead row versions cannot be removed yet.
There were 8001 unused item pointers.
498 pages contain useful free space.
0 pages are entirely empty.
CPU 0.00s/0.01u sec elapsed 26.06 sec.

Query returned successfully with no result in 26101 ms.
-

The strange thing is that the number of pages allocated for the "t_ais_position" 
table and the "t_ais_position_pkey" index hasn't changed
(so autovacuum works OK on them), but the number of pages allocated to the 
"ix_t_ais_position_update_time" index increased 
from 250 to 2218 (roughly 9 times). 

The "ix_t_ais_position_update_time" index is created as "create index 
ix_t_ais_position_update_time on t_ais_position(update_time)", so it is 
supposed to be updated very frequently (like the main table). The other index, 
"t_ais_position_pkey", is on the primary key and its values don't change at all. 


Could somebody explain that to me? (Something more than "autovacuum doesn't keep 
up with the updates" would be nice.)

Is it still possible to use autovacuum for such tables, or should I really 
switch to VACUUM run from cron?

Do you think that if I ran VACUUM from cron every 5 minutes I would see exactly 
the same behaviour?
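If you do end up scheduling VACUUM from cron, a minimal crontab entry could look like this (a sketch; the paths and database name are placeholders):

```shell
# every 5 minutes: vacuum and analyze just this one table
*/5 * * * *  /usr/bin/vacuumdb --analyze --table t_ais_position mydb
```

Note that a cron-driven VACUUM cannot shrink an already-bloated btree index any more than autovacuum can, so the page counts may well behave the same way.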

Tomasz Rakowski




- Original Message 
From: Alvaro Herrera <[EMAIL PROTECTED]>
To: Tomasz Rakowski <[EMAIL PROTECTED]>
Cc: Matthew T. O'Connor <[EMAIL PROTECTED]>; pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 3:54:14 PM
Subject: Re: [GENERAL] autovacumm not working ?

Tomasz Rakowski wrote:
> Alvaro,
> 
>   I changed autovacuum parametrs for this specific table in pg_autovacuum
> 
> insert into pg_autovacuum(vacrelid,enabled,vac_base_thresh, 
> vac_scale_factor, anl_base_thresh, anl_scale_fact

Re: [GENERAL] using PREPAREd statements in CURSOR

2007-06-28 Thread Tom Lane
Björn Lundin <[EMAIL PROTECTED]> writes:
> I'm connecting via libpq and want to
> use prepared statements in a cursor.

You can't.

If you're just interested in fetching a large query result in sections,
there is protocol-level support for doing that without an explicit
cursor, but libpq doesn't expose that feature because it doesn't fit
into its API very well.  (I think JDBC does expose it, but that doesn't
help you if you want to code in C...)  A well-thought-out API proposal
would probably be favorably received.

If you wanted some other cursor feature like scrollability, the whole
thing is a bit problematic, because the prepared statement's plan was
not made with the intention of using it that way (yes, DECLARE CURSOR
is planned differently than a plain select).
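For plain section-at-a-time fetching, the usual workaround is to skip PREPARE and put the query text directly in the cursor declaration (a sketch; "my_table" and the column names are placeholders):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT * FROM my_table ORDER BY id;
-- fetch the result in sections; repeat until FETCH returns no rows
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```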

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Bruce McAlister
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Martijn van Oosterhout wrote:
>>> All the values here look OK, except one:
>>>
>>> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
 blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
 ('r', 't');
oid   | relfrozenxid
 -+--
  2570051 |   2947120794
>>> Whatever this table is, the freeze XID isn't getting updated for some
>>> reason...
> 
> Doh.
> 
>> This looks like a temporary relation,
>>
>> temp4295   |   2947120794
>>
>> Is there a way we can manually force these to update?
> 
> No.  Only the session that created the temp table can vacuum it.
> Autovacuum skips temp tables.  I guess the only thing you can do here is
> close that session.
> 

How could I go about finding out which session created the temporary table?

So this could be a potential issue for autovacuum then if, for example,
our environment uses connection pooling, since those connections are
persistent to the database. From Martijn's comments, I would assume
that the connection pooling driver (ODBC/Npgsql, etc.) should
issue the "RESET SESSION" command after each transaction/query so that
we don't have these temporary tables lying about indefinitely?

> I'm thinking that maybe we should make vac_update_datfrozenxid ignore temp
> tables.  But this doesn't really work, because if we were to truncate
> pg_clog there would be tuples on the temp table marked with XIDs that
> are nowhere to be found.  Maybe we could make some noise about it
> though.
> 
> This is a problem only in recent releases (8.2) because we started
> allowing the max freeze age to be configurable.
> 

I think the max/min freeze age parameters we are using here are the
default ones, I think I just uncommented them.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Execution variability

2007-06-28 Thread Alvaro Herrera
Vincenzo Romano wrote:

> The values are here below. I suppose that the "hashed"
> ones imply a default value.

Correct (widely known as "commented out")

> By the way, it seems that the problem arises with only one query, 
> while the other ones behave almost the same all the time.

Let's see the EXPLAIN ANALYZE.  Does it involve the big table?

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"There is evil in the world. There are dark, awful things. Occasionally, we get
a glimpse of them. But there are dark corners; horrors almost impossible to
imagine... even in our worst nightmares." (Van Helsing, Dracula A.D. 1972)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Alvaro Herrera
Bruce McAlister wrote:
> Martijn van Oosterhout wrote:
> > All the values here look OK, except one:
> > 
> > On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
> >> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
> >> ('r', 't');
> >>oid   | relfrozenxid
> >> -+--
> >>  2570051 |   2947120794
> > 
> > Whatever this table is, the freeze XID isn't getting updated for some
> > reason...

Doh.

> This looks like a temporary relation,
> 
> temp4295   |   2947120794
> 
> Is there a way we can manually force these to update?

No.  Only the session that created the temp table can vacuum it.
Autovacuum skips temp tables.  I guess the only thing you can do here is
close that session.

I'm thinking that maybe we should make vac_update_datfrozenxid ignore temp
tables.  But this doesn't really work, because if we were to truncate
pg_clog there would be tuples on the temp table marked with XIDs that
are nowhere to be found.  Maybe we could make some noise about it
though.

This is a problem only in recent releases (8.2) because we started
allowing the max freeze age to be configurable.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Possible bug (or I don't understand how foreign keys should work with partitions)

2007-06-28 Thread Masaru Sugawara
On Fri, 22 Jun 2007 18:23:44 -0300
"Daniel van Ham Colchete" <[EMAIL PROTECTED]> wrote:


Hi, 

As far as I can tell from the documentation (see below), it seems to be correct
that no error message occurred in your case. 

http://www.postgresql.org/docs/8.2/interactive/ddl-inherit.html
-- All check constraints and not-null constraints on a parent table are 
-- automatically inherited by its children. Other types of constraints
-- (unique, primary key, and foreign key constraints) are not inherited. 


For example, you probably need to declare the tables like the ones below.

CREATE TABLE regsemail (dID serial PRIMARY KEY,
  rID integer, email text,
  FOREIGN KEY(rID) REFERENCES regs(rID)
  ON DELETE CASCADE);
  CREATE TABLE regsemail_00 (CHECK ( rID >= 0 AND rID < 10 ), 
  FOREIGN KEY(rID) REFERENCES regs_00(rID)
  ON DELETE CASCADE) INHERITS(regsemail);


By the way, why will you have such a huge number of e-mail addresses ?

-- 
Masaru Sugawara




> People,
> 
> either I don't understand how partitioning works, or I think I found a bug
> here.
> 
> I'm using PostgreSQL-8.2.4 with Gentoo.
> 
> The code explains:
> 
> # First I create the table regs with 2 partitions:
> create table regs (rID serial primary key, name text, number int);
> create table regs_00 (  CHECK ( number >= 00 AND number < 10 )) INHERITS
> (regs);
> create rule insert_regs_00 AS ON INSERT TO regs WHERE ( number >= 00 AND
> number < 10 ) DO INSTEAD INSERT INTO regs_00 VALUES ( NEW.rID, NEW.name,
> NEW.number );
> create table regs_10 (  CHECK ( number >= 10 AND number < 20 )) INHERITS
> (regs);
> create rule insert_regs_10 AS ON INSERT TO regs WHERE ( number >= 10 AND
> number < 20 ) DO INSTEAD INSERT INTO regs_10 VALUES ( NEW.rID, NEW.name,
> NEW.number );
> 
> # Them I create the table regsemail also with 2 partitions but with a
> foreign key:
> create table regsemail (dID serial primary key, fk_regs_id integer
> REFERENCES regs (rID) ON DELETE CASCADE, email text);
> create table regsemail_00 (  CHECK ( fk_regs_id >= 0 AND fk_regs_id < 10 ))
> INHERITS (regsemail);
> CREATE RULE insert_regsemail_00 AS ON INSERT TO regsemail WHERE ( fk_regs_id
> >= 0 AND fk_regs_id < 10 ) DO INSTEAD INSERT INTO regsemail_00 VALUES (
> NEW.dID, NEW.fk_regs_id, NEW.email );
> create table regsemail_10 (  CHECK ( fk_regs_id >= 10 AND fk_regs_id < 20 ))
> INHERITS (regsemail);
> CREATE RULE insert_regsemail_10 AS ON INSERT TO regsemail WHERE ( fk_regs_id
> >= 10 AND fk_regs_id < 20 ) DO INSTEAD INSERT INTO regsemail_10 VALUES (
> NEW.dID, NEW.fk_regs_id, NEW.email );
> 
> # Insert four rows in regs (rID will go from 1 to 4):
> insert into regs (name, number) values ('Daniel', 4);
> insert into regs (name, number) values ('Daniel', 14);
> insert into regs (name, number) values ('Daniel', 5);
> insert into regs (name, number) values ('Daniel', 15);
> 
> # Insert an 'invalid' row in regsemail
> insert into regsemail (fk_regs_id, email) values (6, '[EMAIL PROTECTED]');
> # END!
> 
> I should get an error saying something like  "...violates foreign key
> constraint..." but I'm not getting anything. That's the bug. If I don't have
> the partitions then I get the error message (as I think I should).
> 
> The problem I'm trying to solve is: I'll have a 1.8 billion rows table
> (regs) and another one having at least one row to each row from the first
> one. The solution is very simple: partitions. The 1.8 billion rows is
> distributed uniformly in the days of the year, so I'll create one partition
> for each day. But I have to do something similar with the second table as
> well otherwise I wouldn't win much if I had to do a JOIN. I was testing how
> foreign keys would work in this case and ran into this.
> 
> Is this really a bug? If not, what am I doing wrong please?
> 
> Best regards,
> Daniel




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Execution variability

2007-06-28 Thread Vincenzo Romano
Hi.

The test system has 1GB Ram.
The main table has 20+ million rows.
All the other ones account for less than 10K rows.

The values are here below. I suppose that the "hashed"
ones imply a default value.

shared_buffers = 24MB
#temp_buffers = 8MB
#max_prepared_transactions = 5
work_mem = 16MB
#maintenance_work_mem = 16MB
#max_stack_depth = 2MB
max_fsm_pages = 153600
#max_fsm_relations = 1000
max_files_per_process = 1000
#shared_preload_libraries = ''

By the way, it seems that the problem arises with only one query, 
while the other ones behave almost the same all the time.

I thank you very much for your attention and help.

On Thursday 28 June 2007 13:17:54 Richard Huxton wrote:
> Vincenzo Romano wrote:
>  The very same query on the very same db shows very variable
>  timings. I'm the only one client on an unpupolated server so
>  I'd expect a rather constant timing.
> >>>
> >>> What's really weird is that after some time the timings get
> >>> back to normal. With no explicit action. Then, later, timings
> >>> get worse again.
> >
> > From the "top" command (I'm running Linux) the only process that
> > jumps high with the load is just the postrgres instance managing
> > the SQL connection.
> > I agree about "something else must be happening in the
> > background". All rthe available RAM gets used as well as some
> > swap.
> > During "fast" operations the used RAM remains low and no swap
> > happens.
>
> That suggests it's not the "same query" that's causing problems. If
> you're going into swap then performance will vary wildly. You may
> have allocated more memory to PostgreSQL than is available on the
> machine.
>
> > I would exclude any other "system" process.
> >
> > How can I log what the PGSQL is actually doing?
>
> See the "when to log" and "what to log" parts of this:
> http://www.postgresql.org/docs/8.2/static/runtime-config.html
>
> As postgres (or other superuser) you can do:
>    ALTER DATABASE your_database SET log_min_duration_statement = 1000;
> That will log all statements that take longer than 1 second.
>
> Alternatively log_statement = 'all' will show all queries executed.
>
> You probably want to read the section on "Resource Consumption"
> linked above too. In particular work_mem is *per sort*, which means
> one query can use several times the amount set.
>
> If you post the values for the settings listed in chapter 17.4.1 of
> the manuals and a description of what your machine is like, what
> else it is doing then we might be able to suggest some changes.



-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Execution variability

2007-06-28 Thread Richard Huxton

Vincenzo Romano wrote:

The very same query on the very same db shows very variable
timings. I'm the only client on an unpopulated server so I'd
expect a rather constant timing.

What's really weird is that after some time the timings get back
to normal. With no explicit action. Then, later, timings get
worse again.


From the "top" command (I'm running Linux) the only process that jumps 
high with the load is just the postgres instance managing the SQL
connection.
I agree that "something else must be happening in the background".
All the available RAM gets used, as well as some swap.
During "fast" operations the used RAM remains low and no swap
happens.


That suggests it's not the "same query" that's causing problems. If 
you're going into swap then performance will vary wildly. You may have 
allocated more memory to PostgreSQL than is available on the machine.



I would exclude any other "system" process.

How can I log what the PGSQL is actually doing?


See the "when to log" and "what to log" parts of this:
http://www.postgresql.org/docs/8.2/static/runtime-config.html

As postgres (or other superuser) you can do:
  ALTER DATABASE your_database SET log_min_duration_statement = 1000;
That will log all statements that take longer than 1 second.

Alternatively log_statement = 'all' will show all queries executed.

You probably want to read the section on "Resource Consumption" linked 
above too. In particular work_mem is *per sort*, which means one query 
can use several times the amount set.


If you post the values for the settings listed in chapter 17.4.1 of the 
manuals and a description of what your machine is like, what else it is 
doing then we might be able to suggest some changes.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Martijn van Oosterhout
On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
> I just want to verify that I understand you correctly here, do you mean
> that the temporary table is created by specific sql, for example, create
> temp table, then perform some actions on that temp table, then, either
> you remove the temp table, or, if you close the session/connection the
> postmaster will clean up the temp table? What happens if you're using
> connection pools, i mean are those sessions deemed "closed" after the
> queries complete, when the pool connections are persistent.

Yes, the temp table is private to the session and will be removed once
the session closes, if not sooner. As for connection pools, IIRC there
is a RESET SESSION command which should also get rid of the temporary
tables.

That's what's weird about your case. I can believe that autovacuum
ignores temporary tables, and yet somehow you've got a temporary table
that's been alive for hundreds of millions of transactions...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Dave Page
Bruce McAlister wrote:
> Excuse my PGSQL ignorance, I'm new to PostgreSQL, and waiting for these
> PGSQL books to become available:
> 
> http://www.network-theory.co.uk/newtitles.html

I'm pretty sure you'll find those are just bound copies of
http://www.postgresql.org/docs/8.2/interactive/index.html

Those are the only docs/books that have been produced by the PostgreSQL
Global Development Group.

Regards, Dave

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Execution variability

2007-06-28 Thread Vincenzo Romano
On Thursday 28 June 2007 12:00:40 Richard Huxton wrote:
> Vincenzo Romano wrote:
> > On Wednesday 27 June 2007 23:46:25 Vincenzo Romano wrote:
> >> Hi all.
> >> I understand this can be a ridiculous question for most you.
> >>
> >> The very same query on the very same db shows very variable
> >> timings. I'm the only client on an unpopulated server so I'd
> >> expect a rather constant timing.
> >
> > What's really weird is that after some time the timings get back
> > to normal. With no explicit action. Then, later, timings get
> > worse again.
> >
> > I fear it can be a "DBA problem" but but still have no clue.
>
> So what's the computer doing? If it is the "very same" query on the
> "very same" data then something else must be happening in the
> background. Check the output of top/vmstat (or Windows equivalents)
> when this happens and see if there's a clue.

From the "top" command (I'm running Linux) the only process that jumps 
high with the load is just the postgres instance managing the SQL
connection.
I agree that "something else must be happening in the background".
All the available RAM gets used, as well as some swap.
During "fast" operations the used RAM remains low and no swap
happens.
I would exclude any other "system" process.

How can I log what the PGSQL is actually doing?

-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Bruce McAlister
Alban Hertroys wrote:
> Bruce McAlister wrote:
>> Which brings me onto a possibly related question. I've noticed that in
>> this particular database, that there are temporary tables that are
>> created. I'm not 100% sure how/why these temporary tables are being
>> created, but I do assume that it must be by some sort of SQL query that
>> runs against the database. How does postgresql handle these temporary
>> tables, i mean, if a temporary table is created by some sql query, is it
>> up to the user performing the query to remove the temporary table
>> afterwards, or does postgresql automagically remove the temporary table
>> when the query completes?
> 
> That would defeat the purpose of temporary tables. You usually create
> them to perform queries on a data set from another query from the same
> session.
> 

I just want to verify that I understand you correctly here. Do you mean
that the temporary table is created by specific SQL - for example, create a
temp table, then perform some actions on that temp table, and then either
you remove the temp table or, if you close the session/connection, the
postmaster will clean up the temp table? What happens if you're using
connection pools - I mean, are those sessions deemed "closed" after the
queries complete, when the pool connections are persistent?

Excuse my PGSQL ignorance, I'm new to PostgreSQL, and waiting for these
PGSQL books to become available:

http://www.network-theory.co.uk/newtitles.html

So at the moment I'm working on principles of databases as opposed to
actual intimate knowledge of PGSQL itself.

> AFAIK temporary tables are dropped when the session in which they were
> created terminates, or optionally on commit if specified that way (ON
> COMMIT DROP).
> 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Execution variability

2007-06-28 Thread Richard Huxton

Vincenzo Romano wrote:

On Wednesday 27 June 2007 23:46:25 Vincenzo Romano wrote:

Hi all.
I understand this can be a ridiculous question for most you.

The very same query on the very same db shows very variable
timings. I'm the only client on an unpopulated server so I'd
expect a rather constant timing.



What's really weird is that after some time the timings get back to
normal. With no explicit action. Then, later, timings get worse again.

I fear it can be a "DBA problem" but but still have no clue.


So what's the computer doing? If it is the "very same" query on the 
"very same" data then something else must be happening in the 
background. Check the output of top/vmstat (or Windows equivalents) when 
this happens and see if there's a clue.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Image Archiving with postgres

2007-06-28 Thread Raymond O'Donnell

On 28/06/2007 00:58, Eddy D. Sanchez wrote:

I want to scan a large quantity of books and documents and store them 
as images. I want to use postgres. Does anyone have experience with this kind 
of system? Can you suggest an open-source solution?


There have been several lively discussions on this list in the last 
eighteen months or so about storing binary files in a database vs 
storing them in the filesystem - you may find it useful to have a look 
through the archives.


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Alban Hertroys
Bruce McAlister wrote:
> Which brings me onto a possibly related question. I've noticed that in
> this particular database, that there are temporary tables that are
> created. I'm not 100% sure how/why these temporary tables are being
> created, but I do assume that it must be by some sort of SQL query that
> runs against the database. How does postgresql handle these temporary
> tables, i mean, if a temporary table is created by some sql query, is it
> up to the user performing the query to remove the temporary table
> afterwards, or does postgresql automagically remove the temporary table
> when the query completes?

That would defeat the purpose of temporary tables. You usually create
them to perform queries on a data set from another query from the same
session.

AFAIK temporary tables are dropped when the session in which they were
created terminates, or optionally on commit if specified that way (ON
COMMIT DROP).
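For example (a sketch):

```sql
BEGIN;
-- dropped automatically when this transaction commits
CREATE TEMP TABLE scratch (id int) ON COMMIT DROP;
INSERT INTO scratch VALUES (1), (2);
COMMIT;

-- without ON COMMIT DROP, the table lives until the session disconnects
CREATE TEMP TABLE session_scratch (id int);
```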

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-28 Thread Alban Hertroys
Rafal Pietrak wrote:
> Gurjeet,
> 
> Focusing on the standard solution, I did some 'exercises' - works fine,
> just learning. 
> 
> But the embarrassing thing is that it looks like I really don't get it,
> meaning - what exactly the internal query does. I've never ever seen or
> used a subquery with data/params from 'upper level' query used within a
> subquery - any time I've written a hierarchical query (e.g. with
> subqueries), the relations were always hierarchical. In other words, I
> was always able to run an internal subquery outside of the compound
> query and get consistent results. With this one I cannot do that due to
> the 'entanglement' of t3 and t1.

This is called a 'correlated subquery'. Basically the subquery is
performed for each record in the top query.

Google gave me this:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm

And there's probably more to find. Interestingly enough wikipedia
doesn't seem to have an article on the subject.
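A minimal illustration (hypothetical "emp" table and columns): the inner query is re-evaluated for each outer row, using that row's dept value:

```sql
-- For each employee, keep those earning more than the average
-- of their own department; e1.dept "reaches into" the subquery
SELECT e1.name, e1.salary
FROM emp e1
WHERE e1.salary > (SELECT avg(e2.salary)
                   FROM emp e2
                   WHERE e2.dept = e1.dept);
```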

> Postgress query plan from EXPLAIN doesn't help me here - probably I'm
> unable to interpret it correctly without 'a paradigm mind shift'.
> 
> So, would you mind commenting a little on how exactly the t1.id
> influences subquery (with t3), and the result influences back the
> selection of t1 set?
> 
> Will greatly appreciate that.


-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Bruce McAlister
Martijn van Oosterhout wrote:
> All the values here look OK, except one:
> 
> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
>> ('r', 't');
>>oid   | relfrozenxid
>> -+--
>>  2570051 |   2947120794
> 
> Whatever this table is, the freeze XID isn't getting updated for some
> reason...
> 
> Have a nice day,

This looks like a temporary relation,

temp4295   |   2947120794

Is there a way we can manually force these to update?

Which brings me onto a possibly related question. I've noticed that in
this particular database, that there are temporary tables that are
created. I'm not 100% sure how/why these temporary tables are being
created, but I do assume that it must be by some sort of SQL query that
runs against the database. How does PostgreSQL handle these temporary
tables? I mean, if a temporary table is created by some SQL query, is it
up to the user performing the query to remove the temporary table
afterwards, or does PostgreSQL automagically remove the temporary table
when the query completes?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-28 Thread Rafal Pietrak
Gurjeet,

Focusing on the standard solution, I did some 'exercises' - works fine,
just learning.

But the embarrassing thing is that it looks like I really don't get it,
meaning - what exactly does the internal query do? I've never ever seen or
used a subquery with data/params from the 'upper level' query used within a
subquery - any time I've written a hierarchical query (e.g. with
subqueries), the relations were always hierarchical. In other words, I
was always able to run an internal subquery outside of the compound
query and get consistent results. With this one I cannot do that due to
the 'entanglement' of t3 and t1.

Postgres' query plan from EXPLAIN doesn't help me here - probably I'm
unable to interpret it correctly without 'a paradigm mind shift'.

So, would you mind commenting a little on how exactly the t1.id
influences subquery (with t3), and the result influences back the
selection of t1 set?

Will greatly appreciate that.

-R

On Tue, 2007-06-26 at 19:14 +0530, Gurjeet Singh wrote:
> I missed the ORDER BY clause... Here it goes:
> 
> selectt1.id as id, t2.id as "id+1",
> t1.thread as thread, t2.thread as "thread+1",
> t1.info as info, t2.info as "info+1"
> from test as t1, test as t2
> where t2.id = ( select min(id) from test as t3 where t3.id > t1.id )
> order by t1.id asc;
> 
> Also note that this query is much cheaper than the 'distinct on' query,
> by more than two orders of magnitude (217.86 vs. 98040.67):
> 
> postgres=# explain
> postgres-# select
> postgres-# distinct on (t1.id)
> postgres-# t1.*, t2.*
> postgres-# from
> postgres-# test t1
> postgres-# join test t2 on t2.id > t1.id
> postgres-# order by t1.id asc, t2.id asc;
>                                    QUERY PLAN
> ---------------------------------------------------------------------------------
>  Unique  (cost=95798.00..98040.67 rows=1160 width=80)
>    ->  Sort  (cost=95798.00..96919.33 rows=448533 width=80)
>          Sort Key: t1.id, t2.id
>          ->  Nested Loop  (cost=0.00..13827.29 rows=448533 width=80)
>                ->  Seq Scan on test t1  (cost=0.00..21.60 rows=1160 width=40)
>                ->  Index Scan using test_id_key on test t2  (cost=0.00..7.06 rows=387 width=40)
>                      Index Cond: (t2.id > t1.id)
> (7 rows)
> Time: 5.003 ms
> postgres=# explain
> postgres-# select   t1.id as id, t2.id as "id+1",
> postgres-#  t1.thread as thread, t2.thread as "thread+1",
> postgres-#  t1.info as info, t2.info as "info+1"
> postgres-# from test as t1, test as t2
> postgres-# where t2.id = ( select min(id) from test as t3 where t3.id
> > t1.id )
> postgres-# order by t1.id asc;
>                                    QUERY PLAN
> ---------------------------------------------------------------------------------
>  Sort  (cost=214.96..217.86 rows=1160 width=80)
>    Sort Key: t1.id
>    ->  Hash Join  (cost=36.10..155.92 rows=1160 width=80)
>          Hash Cond: ((subplan) = t2.id)
>          ->  Seq Scan on test t1  (cost=0.00..21.60 rows=1160 width=40)
>          ->  Hash  (cost=21.60..21.60 rows=1160 width=40)
>                ->  Seq Scan on test t2  (cost=0.00..21.60 rows=1160 width=40)
>          SubPlan
>            ->  Result  (cost=0.13..0.14 rows=1 width=0)
>                  InitPlan
>                    ->  Limit  (cost=0.00..0.13 rows=1 width=4)
>                          ->  Index Scan using test_id_key on test t3  (cost=0.00..51.02 rows=387 width=4)
>                                Index Cond: (id > $0)
>                                Filter: (id IS NOT NULL)
> (14 rows)
> Time: 4.125 ms
> 
> 
> Best regards,
> -- 
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com
> 
> 17°29'34.37"N  78°30'59.76"E - Hyderabad *
> 18°32'57.25"N  73°56'25.42"E - Pune 
> 
> Sent from my BlackLaptop device
> 
> On 6/26/07, Gurjeet Singh <[EMAIL PROTECTED] > wrote:
> Hi Rafal,
> 
> Just a note that this is not standard SQL... 'distinct on'
> is an extension to SQL provided by postgres. 
> 
> Following query utilizes the standard SQL to get the same
> results:
> 
> selectt1.id as id, t2.id as "id+1", 
> t1.thread as thread, t2.thread as "thread+1",
> t1.info as info, t2.info as "info+1"
> from test as t1, test as t2
> where t2.id = ( select min(id) from test as t3 where t3.id >
> t1.id);
> 
> HTH
> -- 
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com
> 
> 17°29'34.37"N  78°30'59.76"E - Hyderabad *
> 18°32'57.25"N  73°56' 25.42 "E - Pune
> 
> Sent from my BlackLaptop device
> 
> 
> On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> Marvelous! Thenx!
>   
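Gurjeet's standard-SQL pairing query can be reproduced end to end with Python's sqlite3 (a sketch with a minimal, made-up three-row table; column and alias names follow the thread). Each row is paired with its successor; the last row, having no successor, drops out of the join because no `t2.id` equals NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY, thread INTEGER, info TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(1, 10, "a"), (3, 10, "b"), (7, 20, "c")])

# For each t1 row, t2.id is the smallest id strictly greater than t1.id,
# i.e. the 'next' row in id order.
pairs = conn.execute("""
    SELECT t1.id, t2.id
    FROM test t1, test t2
    WHERE t2.id = (SELECT min(id) FROM test t3 WHERE t3.id > t1.id)
    ORDER BY t1.id
""").fetchall()
print(pairs)  # [(1, 3), (3, 7)]
```

Note the gaps in `id` don't matter: `min(id) ... > t1.id` finds the next existing id, not `t1.id + 1`.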

[GENERAL] using PREPAREd statements in CURSOR

2007-06-28 Thread Björn Lundin

Hello!
I'm connecting via libpq and want to
use prepared statements in a cursor.
Is there a sample somewhere? I cannot get
it to work.

sebjlun=# \d ssignal
   Table "public.ssignal"
 Column  | Type  | Modifiers
-+---+---
 ssignam | character(12) | not null
 ssigdes | character(30) | not null
 ssopcid | character(40) |
 sstatyp | integer   | not null
 sstatid | integer   | not null
 ssigtyp | integer   | not null
 ssigadd | integer   | not null
 ssigran | integer   | not null
Indexes:
"ssignalp1" PRIMARY KEY, btree (ssignam)
"ssignali3" btree (sstatyp, sstatid)

sebjlun=# prepare test as
sebjlun-# select * from SSIGNAL where SSTATYP=$1 and SSTATID=$2 order  
by SSIGNAM

sebjlun-# ;
PREPARE

sebjlun=# execute test(4,6);
   ssignam    |           ssigdes            | ssopcid | sstatyp | sstatid | ssigtyp | ssigadd | ssigran
--------------+------------------------------+---------+---------+---------+---------+---------+---------
 CD_PLC_SS    | Counter delay output PLC     |         |       4 |       6 |       2 |    2103 |      16
 CD_SS_PLC    | Counter delay input SS       |         |       4 |       6 |       2 |    2003 |      16
 CN_PLC_SS    | Counter number output PLC    |         |       4 |       6 |       2 |    2102 |      16
 CN_SS_PLC    | Counter Number input SS      |         |       4 |       6 |       2 |    2002 |      16

so far so good


sebjlun=# declare cursor ctest for test(4,6);
ERROR:  syntax error at or near "ctest"
LINE 1: declare cursor ctest for test(4,6);
   ^
sebjlun=#

How would I express that?

Combining cursors with parameters doesn't seem to be the way either:

sebjlun=# Declare C240 cursor for select * from SSIGNAL
where SSTATYP=$1 and SSTATID=$2 order by SSIGNAM;

ERROR:  there is no parameter $1
sebjlun=#





/Björn
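The intent here - bind parameters once, then pull rows incrementally through a cursor - can at least be sketched outside libpq with Python's sqlite3 driver. This is an illustration of the pattern only, not the libpq or PostgreSQL cursor API, and the sample data is made up from the schema above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ssignal (
    ssignam TEXT PRIMARY KEY, sstatyp INTEGER, sstatid INTEGER)""")
conn.executemany("INSERT INTO ssignal VALUES (?, ?, ?)",
                 [("CD_PLC_SS", 4, 6), ("CD_SS_PLC", 4, 6), ("OTHER", 1, 1)])

# Parameters are bound at execute time; rows are then fetched one at a
# time through the cursor, much like FETCH FROM a declared cursor.
cur = conn.execute(
    "SELECT ssignam FROM ssignal WHERE sstatyp = ? AND sstatid = ? "
    "ORDER BY ssignam", (4, 6))
first = cur.fetchone()   # incremental fetch: one row
rest = cur.fetchall()    # remaining matching rows
print(first, rest)  # ('CD_PLC_SS',) [('CD_SS_PLC',)]
```

Whether the same combination is expressible in server-side SQL (DECLARE ... FOR EXECUTE) is exactly the open question of this message.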






Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Martijn van Oosterhout
All the values here look OK, except one:

On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
> ('r', 't');
>oid   | relfrozenxid
> -+--
>  2570051 |   2947120794

Whatever this table is, the freeze XID isn't getting updated for some
reason...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Execution variability

2007-06-28 Thread Vincenzo Romano
On Wednesday 27 June 2007 23:46:25 Vincenzo Romano wrote:
> Hi all.
> I understand this can be a ridiculous question for most you.
>
> The very same query on the very same db shows very variable
> timings. I'm the only client on an unpopulated server, so I'd
> expect a rather constant timing.
>
> Instead, for a while the query becomes very slow, the CPU
> reaches 60 to 70%, and the time needed is about 1.5 minutes.
> Again, with just me on it.
> Normally the same query raises the usage to a mere 5% to 7%, with
> timings within a tenth of a second.
>
> I have disabled both the autovacuum and the "stats_start_collector"
> in the attempt to rule out possible reasons for the slowdown. No luck
> with this.
>
> Do you have any hint?

What's really weird is that after some time the timings get back to
normal. With no explicit action. Then, later, timings get worse again.

I fear it can be a "DBA problem" but I still have no clue.

-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)---


Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-28 Thread Bruce McAlister
Bruce McAlister wrote:

> I will run with DEBUG2 for a while and see if my output looks anything
> like this :)

I've been running in DEBUG2 mode for a couple days now and I can see the
extra information being logged into the log file, but it looks like the
autovacuum is not actually starting; it does not look anything like the
output you showed me, i.e., what it is supposed to look like. Here's an
excerpt of our log from the last 15 - 20 minutes.

Jun 28 07:56:01 bfiedb01 postgres[17003]: [ID 748848 local0.debug]
[45371-1]17003 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 07:57:01 bfiedb01 postgres[17025]: [ID 748848 local0.debug]
[45371-1]17025 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 07:58:01 bfiedb01 postgres[17047]: [ID 748848 local0.debug]
[45371-1]17047 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 07:58:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5390-1]29224 DEBUG:  checkpoint starting
Jun 28 07:58:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5391-1]29224 DEBUG:  checkpoint complete; 0 transaction log file(s)
added, 0 removed, 0 recycled
Jun 28 07:59:01 bfiedb01 postgres[17069]: [ID 748848 local0.debug]
[45371-1]17069 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45371-1]29121 DEBUG:  forked new backend, pid=17098 socket=9
Jun 28 08:00:01 bfiedb01 postgres[17099]: [ID 748848 local0.debug]
[45372-1]17099 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45372-1]29121 DEBUG:  server process (PID 17098) exited with exit
code 0
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45373-1]29121 DEBUG:  forked new backend, pid=17100 socket=9
Jun 28 08:00:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45374-1]29121 DEBUG:  server process (PID 17100) exited with exit
code 0
Jun 28 08:01:01 bfiedb01 postgres[17122]: [ID 748848 local0.debug]
[45375-1]17122 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:02:01 bfiedb01 postgres[17144]: [ID 748848 local0.debug]
[45375-1]17144 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:03:01 bfiedb01 postgres[17166]: [ID 748848 local0.debug]
[45375-1]17166 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5392-1]29224 DEBUG:  checkpoint starting
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5393-1]29224 DEBUG:  recycled transaction log file
"000102880072"
Jun 28 08:03:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5394-1]29224 DEBUG:  checkpoint complete; 0 transaction log file(s)
added, 0 removed, 1 recycled
Jun 28 08:04:01 bfiedb01 postgres[17188]: [ID 748848 local0.debug]
[45375-1]17188 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45375-1]29121 DEBUG:  forked new backend, pid=17216 socket=9
Jun 28 08:05:01 bfiedb01 postgres[17217]: [ID 748848 local0.debug]
[45376-1]17217 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45376-1]29121 DEBUG:  server process (PID 17216) exited with exit
code 0
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45377-1]29121 DEBUG:  forked new backend, pid=17218 socket=9
Jun 28 08:05:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45378-1]29121 DEBUG:  server process (PID 17218) exited with exit
code 0
Jun 28 08:06:01 bfiedb01 postgres[17240]: [ID 748848 local0.debug]
[45379-1]17240 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:07:01 bfiedb01 postgres[17262]: [ID 748848 local0.debug]
[45379-1]17262 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:08:01 bfiedb01 postgres[17286]: [ID 748848 local0.debug]
[45379-1]17286 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:08:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5395-1]29224 DEBUG:  checkpoint starting
Jun 28 08:08:41 bfiedb01 postgres[29224]: [ID 748848 local0.debug]
[5396-1]29224 DEBUG:  checkpoint complete; 0 transaction log file(s)
added, 0 removed, 0 recycled
Jun 28 08:09:01 bfiedb01 postgres[17308]: [ID 748848 local0.debug]
[45379-1]17308 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45379-1]29121 DEBUG:  forked new backend, pid=17337 socket=9
Jun 28 08:10:01 bfiedb01 postgres[17338]: [ID 748848 local0.debug]
[45380-1]17338 DEBUG:  autovacuum: processing database "blueface-crm"
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 748848 local0.debug]
[45380-1]29121 DEBUG:  server process (PID 17337) exited with exit
code 0
Jun 28 08:10:01 bfiedb01 postgres[29121]: [ID 74