[GENERAL] autovacuum log?

2007-12-13 Thread MG
Hello,

when I turn autovacuum on, where can I check what it does? Is there any
log file?

Regards
Michaela
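(A hedged sketch: assuming PostgreSQL 8.1 as in the later mails in this digest, autovacuum has no log file of its own; its activity only shows up in the regular server log, and reportedly only at DEBUG verbosity. The exact level is an assumption to verify for your minor version; the commented-out parameter exists only from 8.3 on.)

# postgresql.conf
log_min_messages = debug2    # assumption: autovacuum messages appear at DEBUG level in 8.1/8.2
redirect_stderr = on         # capture the server log into a file
log_directory = 'pg_log'     # relative to the data directory
# log_autovacuum_min_duration = 0   # dedicated autovacuum logging, 8.3 and later only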


[GENERAL] max_connections (postgresql.conf)

2007-03-14 Thread MG
Hello,

we are using SCO OpenServer 6 and PostgreSQL 8.1.4.

We increased the parameter max_connections in the postgresql.conf to 300.

In Section 16.4.1 of the documentation we tried to find out how to adjust
the dependent parameters, but we can't figure it out.

What would be reasonable values for the parameters?

Regards
Michaela
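(A sketch of settings that commonly scale with max_connections; the numbers are illustrative assumptions, not recommendations. In 8.1, shared_buffers is counted in 8 kB pages and work_mem in kB.)

# postgresql.conf
max_connections = 300
shared_buffers = 30000           # about 234 MB of shared buffer cache
work_mem = 1024                  # per sort/hash step, so many of these can exist at once
max_locks_per_transaction = 64   # enters the shared-memory sizing formula

# Section 16.4.1 covers the kernel side: SHMMAX must admit the resulting
# shared memory segment, and PostgreSQL needs roughly one semaphore set
# per 16 allowed connections.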

Re: [GENERAL] WARNING: some databases have not been vacuumed in 1953945422 transactions

2007-02-21 Thread MG

Yes, there are changes.
db1, db2 and template1 have been vacuumed overnight, but not template0.

20.02.07:
  datname     age
  db1         1.075.878.187
  db2         1.075.847.556
  template1   1.976.569.889
  template0   1.976.569.889

21.02.07:
  datname     age
  db1         1.074.758.205
  db2         1.074.728.832
  template1   1.074.728.720
  template0   1.978.965.587


Regards
Michaela


- Original Message - 
From: "Albe Laurenz" <[EMAIL PROTECTED]>

To: "MG *EXTERN*" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, February 19, 2007 9:41 AM
Subject: Re: [GENERAL] WARNING: some databases have not been vacuumed in 
1953945422 transactions




> But I don't understand why the databases template1 and template0 change
> at all. I thought they were only templates.


I don't think that there were any changes to the template databases.

You detected a difference in age(datfrozenxid); try selecting datfrozenxid
itself and you will probably see that it does not change over time.

Yours,
Laurenz Albe
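(A quick way to see what Laurenz describes; both columns exist in 8.0's pg_database:)

SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database;
-- age() is measured against the advancing transaction counter, so it keeps
-- growing even while datfrozenxid itself stays the same.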



Re: [GENERAL] pg_dump: [tar archiver] write error appending to tar archive

2007-02-21 Thread MG

Where do I find the man page?

Regards
Michaela

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "MG" <[EMAIL PROTECTED]>
Sent: Tuesday, February 13, 2007 3:46 PM
Subject: Re: [GENERAL] pg_dump: [tar archiver] write error appending to tar 
archive




"MG" <[EMAIL PROTECTED]> writes:
could you tell me the directory where the temp files are built. It 
doesn't

seem to be in /tmp.


It'd be wherever tmpfile() wants to put it; see the man page for that
function.

regards, tom lane
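(For reference, a pointer rather than an answer, since the directory is platform-specific:)

man 3 tmpfile
# on many Unix systems tmpfile() creates its file in the directory named by
# P_tmpdir in <stdio.h>, commonly /tmp, but platforms differ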






Re: [GENERAL] WARNING: some databases have not been vacuumed in 1953945422 transactions

2007-02-16 Thread MG

Hello Tom,

thanks for your answer.

But I don't understand why the databases template1 and template0 change at
all.

I thought they were only templates.

Regards
Michaela

- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "MG" <[EMAIL PROTECTED]>
Cc: 
Sent: Thursday, February 15, 2007 4:57 PM
Subject: Re: [GENERAL] WARNING: some databases have not been vacuumed in 
1953945422 transactions




"MG" <[EMAIL PROTECTED]> writes:

Each night a shell script is being executed.



vacuumdb --analyze -U cmduser db1
vacuumdb --analyze -U cmduser db2


You need to hit template1 every so often, too.  You probably might
as well just do that every night; it won't take long.


> For the last few weeks the following warning has been given out:
> WARNING:  some databases have not been vacuumed in 1953945422 transactions
>
> Is this critical?

Yes.

regards, tom lane
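(A sketch of the nightly script with Tom's suggestion applied; it assumes cmduser may connect to template1, otherwise run that line as a superuser:)

vacuumdb --analyze -U cmduser db1
vacuumdb --analyze -U cmduser db2
vacuumdb --analyze -U cmduser template1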






[GENERAL] WARNING: some databases have not been vacuumed in 1953945422 transactions

2007-02-15 Thread MG
Hello,

we are using PostgreSQL 8.0.3 and have two databases, db1 and db2, in one
cluster.
Each night a shell script is executed:

vacuumdb --analyze -U cmduser db1
vacuumdb --analyze -U cmduser db2

For the last few weeks the following warning has been given out:
WARNING:  some databases have not been vacuumed in 1953945422 transactions
HINT:  Better vacuum them within 193538225 transactions, or you may have a
wraparound failure.

Now I ran this SQL statement:
SELECT datname, age(datfrozenxid) FROM pg_database;

Yesterday I got this result:

  datname     age
  db1         1.090.080.531
  db2         1.940.858.511
  template1   1.940.858.511
  template0   1.940.858.511

Today I got the following result:

  datname     age
  db1         1.075.558.667
  db2         1.075.513.031
  template1   1.955.716.521
  template0   1.955.716.521


Why do the databases template1 and template0 change at all?

Is this critical?

Regards
Michaela





[GENERAL] pg_dump: [tar archiver] write error appending to tar archive

2007-02-12 Thread MG
Hello,

we have a shell script that executes pg_dump once a day. The script has
already been running successfully for about six months.

Now we got the following error:

pg_dump: [tar archiver] write error appending to tar archive (wrote 28186, 
attempted 32767)

There is enough space on the hard disk.

Any ideas?
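(One idea, as a hedged sketch: it assumes the script dumps in tar format with -Ft, and borrows the cmduser name from the vacuum thread. pg_dump's tar writer stages each table in a temporary file first; the custom format does not, and pg_restore reads it just the same:)

pg_dump -Fc -U cmduser db1 > db1.dump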

Regards
Michaela





[GENERAL] RAID + PostgreSQL?

2006-06-26 Thread MG



Hello,
 
we are using PostgreSQL 8.0.3 together with RAID on OpenServer 6.
 
When we do a big SELECT query, the whole machine becomes very, very slow or
even freezes.
The machine has 3 GB RAM, so we suppose it's the RAID.

Has anyone some experience with RAID + PostgreSQL?

Where does PostgreSQL save its temporary files? Perhaps they are the reason
for the bad performance.
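(In case it helps, a sketch for 8.0: the spill threshold for sorts is work_mem, and temporary sort files live below the data directory:)

SHOW work_mem;         -- memory one sort may use before it spills to disk
SHOW data_directory;   -- spill files go to <data_directory>/base/<db-oid>/pgsql_tmp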
 
Greetings
 
Michaela


Fw: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG

Hello,
I have the following scenario:
The application reads one record from a file that contains 100 000 records.
The application checks, based on different conditions, whether this record is
already saved in the table 'test'.
If the record exists, the application manipulates it and updates it with an
UPDATE in PostgreSQL. This UPDATE always affects 50 columns, and PostgreSQL
uses an index.
If the record doesn't exist, the application manipulates it and makes an
INSERT.
Then it reads the next record from the file, and so on ...

In one extreme case the application makes 100 000 INSERTs; in the other
extreme case, 100 000 UPDATEs. Normally there are more UPDATEs than INSERTs.
Here I noticed that the first case (100 000 INSERTs) takes about 30 minutes,
but the 100 000 UPDATEs about 1 hour.

I can't prepare the file to use a COPY, because the application manipulates
the records from the file in a complex way.
I've also tried vacuum to get more performance, but had no success.
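(A sketch of the per-record statement pair described above; the column names and values are hypothetical, and the real existence check uses several conditions:)

UPDATE test SET f1 = 'changed' WHERE id = 1;   -- record already in the table
INSERT INTO test (id, f1) VALUES (2, 'new');   -- record seen for the first time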

Michaela


- Original Message - 
From: "A. Kretschmer" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:48 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



On 11.04.2006 at 11:32:55 +0200, MG wrote:
> I can't use COPY for INSERTs, because I have to manipulate each record
> individually.
>
> But the problem is the UPDATEs

test=# \timing
Timing is on.
test=# update mira set y = 123;
UPDATE 150000
Time: 1874.894 ms

150 000 records, a simple table with 2 int columns.

Have you run vacuum? Which version? Can you tell us the EXPLAIN for the
update?




> Greetings
> Michaela

Please, no silly TOFU.


HTH, Andreas
--
Andreas Kretschmer (contact: see header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===











Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG
I can't use COPY or put it in a transaction, because the application that
writes to PostgreSQL decides, depending on different conditions, whether to
INSERT or UPDATE, and it also manipulates the records.
In one extreme case there can be only INSERTs; in the other extreme case
everything can be UPDATEs. Normally there are more UPDATEs than INSERTs.


I have to improve the performance of the UPDATEs, and I was wondering why the
UPDATEs need twice as long as the INSERTs.


Greetings
Michaela

- Original Message - 
From: "Thomas Pundt" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:30 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



Hi,

On Tuesday 11 April 2006 11:08, MG wrote:
| I have about 100 000 records, which need about 30 minutes to write them
| with single INSERTs into PostgreSQL. If I go through these 100 000 records
| and make an UPDATE on each record, it takes 1 hour.
|
| Can anyone tell me about their experience with the performance of INSERT
| and UPDATE.

how about using COPY for the import? Or at least putting the INSERTs into
a transaction using BEGIN and COMMIT? I would do the same with the UPDATEs:
put them into a transaction frame.
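(A sketch of the transaction frame Thomas means; the statements themselves are hypothetical placeholders:)

BEGIN;
UPDATE test SET f1 = 'changed' WHERE id = 1;   -- one statement per record ...
INSERT INTO test (id, f1) VALUES (2, 'new');
COMMIT;                                        -- ... but a single commit for the whole batch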

Ciao,
Thomas

--
Thomas Pundt <[EMAIL PROTECTED]>  http://rp-online.de/ 









Re: [GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG
I can't use COPY for INSERTs, because I have to manipulate each record
individually.

But the problem is the UPDATEs.

Greetings
Michaela

- Original Message - 
From: "A. Kretschmer" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, April 11, 2006 11:20 AM
Subject: Re: [GENERAL] Performance UPDATE/INSERT



On 11.04.2006 at 11:08:57 +0200, MG wrote:
> Hello,
>
> I have about 100 000 records, which need about 30 minutes to write them
> with single INSERTs into PostgreSQL.

You should rather use the COPY command for bulk inserts. It is faster.
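(A minimal COPY sketch; the file path and column layout are hypothetical. CSV mode exists from 8.0 on; COPY FROM a server-side file needs superuser rights, while psql's \copy reads a client-side file without them:)

COPY test (id, f1) FROM '/tmp/records.csv' WITH CSV;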


HTH, Andreas
--
Andreas Kretschmer (contact: see header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===









[GENERAL] Performance UPDATE/INSERT

2006-04-11 Thread MG



Hello,
 
I have about 100 000 records, which need about 30 minutes to write them
with single INSERTs into PostgreSQL.
If I go through these 100 000 records and make an UPDATE on each record, it
takes 1 hour.

Can anyone tell me about their experience with the performance of INSERT and
UPDATE?
 
Greetings
Michaela


[GENERAL] actual SQL statement

2006-02-23 Thread MG



Hello,
 
can I find out what SQL statement the PostgreSQL server is currently
executing?
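(A sketch for 8.0: the view below only fills current_query when stats_command_string = on is set in postgresql.conf:)

SELECT procpid, usename, current_query FROM pg_stat_activity;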
 
Thanks

Michaela
 
 


[GENERAL] LIMIT + OFFSET

2006-01-17 Thread MG



Hello,
 
I want to show an overview where you can place 16 data sets.
I use the SQL statement
SELECT F1,F2 FROM testtable LIMIT 16 OFFSET 0

To show different pages I vary the value of OFFSET.

But if I want to stay on a specific data set, I have the problem of finding
the right value for OFFSET.
How can I realize this?
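(One way, as a sketch: it assumes a stable ORDER BY column, which paging needs anyway, since LIMIT/OFFSET without ORDER BY returns rows in no fixed order; 'mykey' and the sort column F1 are hypothetical:)

-- the rows sorting before the target record determine its page:
SELECT count(*) FROM testtable WHERE f1 < 'mykey';
-- with 16 rows per page: OFFSET = (count / 16) * 16; e.g. for count = 40:
SELECT f1, f2 FROM testtable ORDER BY f1 LIMIT 16 OFFSET 32;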
 
Thanks for your help
 
Michaela


[GENERAL] Sequence Manipulation Functions

2006-01-10 Thread MG



Hello,
 
I use PostgreSQL 8.0.3.
 
I want to get the last value of a sequence.

The function currval only gives a value back if a nextval has been executed
before in the same session. The documentation says:

"Return the value most recently obtained by nextval for this sequence in the
current session. (An error is reported if nextval has never been called for
this sequence in this session.) Notice that because this is returning a
session-local value, it gives a predictable answer whether or not other
sessions have executed nextval since the current session did."

But that is not very helpful.
I noticed that phpPgAdmin has that information:


  
  
Name:          adr_dsnr
Last value:    108
Increment by:  1
Max value:     9223372036854775807
Min value:     1
Cache value:   1
Log count:     25
Is cycled?:    No
Is called?:    Yes
So how can I get that information?
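(phpPgAdmin reads these fields straight from the sequence relation itself, which plain SQL can do too; a sketch using the sequence shown above:)

SELECT last_value, is_called FROM adr_dsnr;
-- caveat: another session may call nextval at any moment, so the value can
-- be stale the instant it is read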
Thanks
Michaela
 
 


[GENERAL] Re: Error: 'cache lookup failed' w/trigger

2001-02-23 Thread mg

Meanwhile, I've found the source of my problem myself. When I drop a
function and re-create it (this is what phpPgAdmin does when "modifying" a
function), I also have to drop and re-create the corresponding trigger.
Again one of these 'gotchas' for beginners...
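(A sketch of the fix described above. Note that the function body as posted would also need a semicolon after the assignment and a RETURN NEW to work as a BEFORE ROW trigger:)

DROP TRIGGER "accounts_datetimestamp" ON "accounts";
DROP FUNCTION "datetimestamp"();

CREATE FUNCTION "datetimestamp"() RETURNS OPAQUE AS
'BEGIN NEW.modificationdatetime := now(); RETURN NEW; END;'
LANGUAGE 'plpgsql';

CREATE TRIGGER "accounts_datetimestamp" BEFORE INSERT OR UPDATE ON
"accounts" FOR EACH ROW EXECUTE PROCEDURE datetimestamp();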

mg wrote:

> Hello,
>
> I'm new to PostgreSQL and just tried to write my very first pgsql
> trigger procedure. So
> please forgive me if this is a stupid question.
>
> This is what I did:
> 1. using bash as user postgres (my db superuser) on my RH6.2 linux box:
> bash$ export PGLIB=/usr/lib/pgsql
> bash$ createlang plpgsql mydb
>
> 2. using phpPgAdmin, I then could create the following procedure:
>
> CREATE FUNCTION "datetimestamp"() RETURNS OPAQUE AS
> 'BEGIN NEW.modificationdatetime := \'now\' END;'
> LANGUAGE 'plpgsql'
>
> 3. Then I created a trigger:
> CREATE TRIGGER "accounts_datetimestamp" BEFORE INSERT OR UPDATE ON
> "accounts" FOR EACH ROW EXECUTE PROCEDURE datetimestamp();
>
> Now every time I try to update something in my accounts table I get the
> message:
> ERROR: fmgr_info: function 51232: cache lookup failed
>
> What's wrong here?
> BTW: is there a source for trigger procedure examples? These trigger
> procedures could help me solve a lot of problems.
>
> Thanks for any suggestions
> Greetings from switzerland
> --Marcel




[GENERAL] Error: 'cache lookup failed' w/trigger

2001-02-23 Thread mg

Hello,

I'm new to PostgreSQL and just tried to write my very first pgsql
trigger procedure. So
please forgive me if this is a stupid question.

This is what I did:
1. using bash as user postgres (my db superuser) on my RH6.2 linux box:
bash$ export PGLIB=/usr/lib/pgsql
bash$ createlang plpgsql mydb

2. using phpPgAdmin, I then could create the following procedure:

CREATE FUNCTION "datetimestamp"() RETURNS OPAQUE AS
'BEGIN NEW.modificationdatetime := \'now\' END;'
LANGUAGE 'plpgsql'

3. Then I created a trigger:
CREATE TRIGGER "accounts_datetimestamp" BEFORE INSERT OR UPDATE ON
"accounts" FOR EACH ROW EXECUTE PROCEDURE datetimestamp();

Now every time I try to update something in my accounts table I get the
message:
ERROR: fmgr_info: function 51232: cache lookup failed

What's wrong here?
BTW: is there a source for trigger procedure examples? These trigger
procedures could help me solve a lot of problems.

Thanks for any suggestions
Greetings from switzerland
--Marcel