Re: [GENERAL] Can't change working directory to C:/Documents and Settings in Windows

2007-07-02 Thread Magnus Hagander
On Sun, Jul 01, 2007 at 10:46:22PM -0300, Jorge Godoy wrote:
 On Sunday 01 July 2007 22:25:24 Casey Crosbie wrote:
  Jorge,
 
  Thanks for the suggestion. But unfortunately, I tried both
\cd "C:/Document~1" and just \cd C:/Document~1 and neither worked.
 
 Sorry.  It should be up to 8 chars: Docume~1 or some variation like that 
 (I've seen ~2 due to some unknown reason).  This looks like a Windows 
 problem with finding directories with spaces in their names.  The same 
 happens with diacriticals...

To help others in the future: to find out what the short name of a directory
actually is, use "dir /x c:\".

Not that it was the problem this time, but I'm sure someone will need it at
some point.

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgressql not support in windows 2000

2007-07-02 Thread Dave Page

Magnus Hagander wrote:

Yes, but it was not necessarily launched as msiexec. If the file was
just double-clicked on, the path to msiexec will be fetched from the
registry and not the system PATH. That's the only explanation I can find.


Not being installed on Windows 2000 is possible iirc - but breaking the 
path and/or renaming the .exe (and then updating the registry to match) 
seems like some really contrived breakage!!



Siva; did you extract both msi files from the zip file before running the 
installer?


That gives a different error message - it starts msiexec and then
msiexec is the one that complains. This error indicates that it can't
even find msiexec.exe to run.


So it does. I'm sure I've seen that one before though; can't remember 
where...


/D


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is this a bug?

2007-07-02 Thread Albe Laurenz
Harry Jackson wrote:
 
 The following sql statement fails because the column user_id does
 not exist in the users table.
 
 =# select user_id from users WHERE  username = 'blah';
 ERROR:  column user_id does not exist
 LINE 1: select user_id from users WHERE  username = 'blah..
^
[...]

 =# delete from map_users_roles where user_id = (select user_id from
 users WHERE  username = 'blah');
 DELETE 33631
 
[...]
 
 Still, this was quite a surprise to me and I would consider this a bug.

 Thoughts?

It is not a bug, but it is one of the best examples I have ever seen of
why it is good to always qualify columns!
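
For instance, qualifying the column inside the subquery turns the silent
outer reference into an immediate error (a sketch using the tables from
the original post):

  -- Unqualified, user_id silently binds to map_users_roles.user_id and
  -- the WHERE clause degenerates to "user_id = user_id".  Qualified:
  DELETE FROM map_users_roles
   WHERE user_id = (SELECT u.user_id FROM users u
                     WHERE u.username = 'blah');
  -- ERROR:  column u.user_id does not exist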

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Tables not created in proper schema

2007-07-02 Thread A. Kretschmer
On Mon, 02.07.2007 at 11:13:54 +0530, Ashish Karalkar wrote the following:
 Hello All,
 I am trying to create a database with a script.
 I run this script from the root prompt with the command
  
  
 $ su - postgres -c 'path to script.sql'
  
  
 In the script I follow these steps
  
  
 1) create user xyz
 2) create database xyz -O xyz
 3) create schema xyz

ALTER USER xyz SET SEARCH_PATH = ' ... ';

or set the search_path at the beginning of your sql-file.
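
For example, with the names from your steps (a sketch):

  ALTER USER xyz SET search_path = xyz, public;
  -- or, per session / at the top of the script:
  SET search_path TO xyz, public;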


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Simon Riggs
On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:
 I am trying to learn/practice the administrative steps that would need 
 to be taken in a 'fat finger' scenario, and I am running into problems. 
 I am trying to use 'recovery.conf' to set the database state to about 15 
 minutes ago in order to recover from accidentally deleting important 
 data. However, each time I restart the database in recovery mode, it 
 seems to always return me to the state it was in when I shut it down, 
 ignoring my 'recovery_target_time' setting.
 
 For example:
 
 1. I have a production 8.2.4 database running with WAL archiving enabled.
 2. Thinking I am logged into a development database I  issue the commands:
 
 start transaction;
 delete from billing_info;
 delete from customer_account;
 commit;
 
 3. I suddenly realize I was logged into the production database.
 4. I fall out of my chair, then regain consciousness 10 minutes later.
 5. I shutdown the database, and create a 'recovery.conf' file as follows:
 
 # pretend that 2007-07-01 20:50:00 PDT was 15 minutes ago.
 recovery_target_time = '2007-07-01 20:50:00 PDT'
 restore_command = 'cp /pgdata/archive_logs/%f %p'
 recovery_target_inclusive = 'false'
 
 6. I start the database, and I see the following log messages:
 
 LOG:  starting archive recovery
 LOG:  recovery_target_time = 2007-07-01 20:50:00-07
 LOG:  restore_command = cp /pgdata/archive_logs/%f %p
 LOG:  recovery_target_inclusive = false
 LOG:  checkpoint record is at F/7E0DD5A4
 LOG:  redo record is at F/7E0DD5A4; undo record is at 0/0; shutdown TRUE
 LOG:  next transaction ID: 0/693577; next OID: 35828734
 LOG:  next MultiXactId: 28; next MultiXactOffset: 55
 LOG:  automatic recovery in progress
 LOG:  record with zero length at F/7E0DD5EC
 LOG:  redo is not required
 LOG:  archive recovery complete
 LOG:  database system is ready
 
 7. I log back in to the database, expecting to see all of my 
 billing_info and customer_account records in place. But instead, the 
 tables are empty - just as they were when the db was shutdown.
 
 What have I done wrong? Or is there some other procedure to use in 
 these situations?

Your example transactions are so large that going back 15 minutes is not
enough. You'll need to go back further.

recovery_target_time can only stop on a COMMIT or ABORT record. This is
because it makes no sense to recover half a transaction, only whole
transactions have meaning for recovery. So if the transactions are very
large, you need to go back further.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] ERROR: unexpected hash relation size:

2007-07-02 Thread Gerhard Hintermayer

... x should be y

I get these messages on 2 of my servers running mostly identical 
data. The servers are unix-based and are running 8.1.8 under Linux (Gentoo).
On one server the message appears only after vacuuming; on the other I 
had some errors after inserting into the table too (about 12 inserts, no 
more messages except while vacuuming since then).


The affected table has only one index based upon an integer.

The table definition is:

p_code character varying(10)
a_nr integer
ch_nr integer
and some other columns

The index is a hash based upon a_nr. The table stores protocols for 
batches, with batch number ch_nr, order number a_nr, and product 
identifier code p_code. None of these columns are primary keys.


On the backup server I got, e.g., errors while inserting
INSERT INTO d_kochmi (p_code,a_nr,ch_nr ) 
VALUES('613CL8D110',56117,1 ...)

batch number (column 3) varied from 1 to 9.

Should I do some more investigation?
On one server I dropped and recreated the index
(CREATE INDEX idx_d_kochmi ON d_kochmi (a_nr)) and the error while 
running VACUUM was gone.


I'm wondering whether the hash index is disabled when such an 
error occurs, or whether the index is still used, possibly with wrong data?


Regards
Gerhard

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] table disk space usage query?

2007-07-02 Thread Dimitri Fontaine
On Thursday 28 June 2007, [EMAIL PROTECTED] wrote:
 Hello all -

 I was looking for a way to find out how much disk space each table is
 using.

As of PostgreSQL 8.1 you can use the following:
http://www.postgresql.org/docs/8.1/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE

Example query to obtain per-table on-disk size:

select tablename,
pg_size_pretty(pg_relation_size(tablename)) as size,
pg_size_pretty(pg_total_relation_size(tablename)) as total,
pg_relation_size(tablename)
from  pg_tables
where schemaname = 'public' and tablename !~ '^pg_'
order by 4 desc;
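
The same family of functions also reports the size of the whole database,
for example:

  SELECT pg_size_pretty(pg_database_size(current_database()));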

Regards,
-- 
dim




Re: [GENERAL] shmctl EIDRM preventing startup

2007-07-02 Thread Martijn van Oosterhout
On Sun, Jul 01, 2007 at 10:39:01PM -0400, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Maybe what is happening is that an entirely unrelated process created a
  segment with that ID, attached to it, and then it was deleted.  I don't
  know how to check however.
 
 AFAIK, EIDRM should imply that the segment has been IPC_RMID'd but still
 exists because there are still processes attached to it.  So the thing
 to look for is processes still attached.  Not 100% sure how to do that,
 but I'm sure the info is exposed under /proc somehow...

If it's installed, this:

lsof |grep SYSV

will list all processes attached to a SHM segment on the system. I
think ipcs can do the same. You can grep /proc/*/maps for the same
info.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[GENERAL] importing limited set of rows/tuples

2007-07-02 Thread cha

Hi All,

I excluded certain tables while exporting, as the size of the data in
those tables is huge, though I do have the schema, which contains all the
tables.
 
Now I want to import limited set of records/rows ( 1000 tuples ) from the
eliminated tables.
 
Is this possible? If yes, please tell me how to accomplish this?
 
Secondly, I would like to know how PostgreSQL takes care of all the
constraints/referential integrity while importing the records from the
different tables?

Cheers,
Cha
-- 
View this message in context: 
http://www.nabble.com/importing-limited-set-of-rows-tuples-tf4011639.html#a11392319
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Tables not created in proper schema

2007-07-02 Thread Ashish Karalkar

Thanks for your reply.

I think the problem is with schema not being recognised.

Following are the lines from the sql script (this script is run as the postgres 
user with password authentication from the .pgpass file):




${PG_PATH}/createuser qsweb -S -d -R -l -P -E -q -U postgres

${PG_PATH}/createdb -E UTF8 -O qsweb -U postgres qsweb

${PG_PATH}/createlang -U postgres -d qsweb plpgsql

${PG_PATH}/psql -d qsweb --command "CREATE SCHEMA qsweb"

${PG_PATH}/psql -d qsweb --command "ALTER SCHEMA qsweb OWNER TO qsweb"

${PG_PATH}/psql --command "ALTER USER qsweb SET SEARCH_PATH='qsweb'"



The output is right till ALTER SCHEMA, but then while setting the search 
path it says NOTICE: schema qsweb does not exist.




Can you suggest what is going wrong?

Thanks in advance

Ashish...















- Original Message - 
From: A. Kretschmer [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Monday, July 02, 2007 2:28 PM
Subject: Re: [GENERAL] Tables not created in proper schema


On Mon, 02.07.2007 at 11:13:54 +0530, Ashish Karalkar wrote the following:

Hello All,
I am trying to create a database with a script.
I run this script from the root prompt with the command


$ su - postgres -c 'path to script.sql'


In the script I follow these steps


1) create user xyz
2) create database xyz -O xyz
3) create schema xyz


ALTER USER xyz SET SEARCH_PATH = ' ... ';

or set the search_path at the beginning of your sql-file.


Andreas
--
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] importing limited set of rows/tuples

2007-07-02 Thread Dimitri Fontaine
Hi,

On Monday 02 July 2007, cha wrote:
 Now I want to import limited set of records/rows ( 1000 tuples ) from the
 eliminated tables.

 Is this possible? If yes, please tell me how to accomplish this?

If you have a CSV or CSV-like data file format, you can use pgloader with the -C 
option; you can even load from any line in the file (given by number or data 
id, multi-column keys supported):
  http://pgfoundry.org/projects/pgloader
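
If you can still query the source database and both ends run 8.2 or
later, COPY with a query is another option -- a minimal sketch, with a
hypothetical table name:

  -- on the source database (COPY (query) needs 8.2+; writing to a
  -- server-side file requires superuser):
  COPY (SELECT * FROM big_table LIMIT 1000) TO '/tmp/big_table_sample.copy';

  -- on the target database; constraints and referential integrity are
  -- checked as the rows are loaded, so load referenced tables first:
  COPY big_table FROM '/tmp/big_table_sample.copy';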

Regards,
-- 
dim




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Simon,

Thanks for the tip. I had assumed that so long as I set 
'recovery_target_time' to a value that occurred before the 'fatal 
commit' and set the 'inclusive' flag to false, I would be able to 
return to just before the deletion occurred.


I'll play with it a bit more and see. I just want to know what to do in 
the future should a real emergency like this occur.


Thanks,
jason

Simon Riggs wrote:

On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:
  
Your example transactions are so large that going back 15 minutes is not

enough. You'll need to go back further.

recovery_target_time can only stop on a COMMIT or ABORT record. This is
because it makes no sense to recover half a transaction, only whole
transactions have meaning for recovery. So if the transactions are very
large, you need to go back further.

  


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] shmctl EIDRM preventing startup

2007-07-02 Thread Michael Fuhr
On Mon, Jul 02, 2007 at 01:05:35PM +0200, Martijn van Oosterhout wrote:
 If it's installed, this:
 
 lsof |grep SYSV
 
 will list all processes attached to a SHM segment on the system. I
 think ipcs can do the same. You can grep /proc/*/maps for the same
 info.

I already tried those; none show the shared memory key that the
postmaster is complaining about.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] tables are not listable by \dt

2007-07-02 Thread Rajarshi Guha
Hi, we recently upgraded our Postgres installation from 7.4 to 8.2 by  
doing a dump and restore. However after logging into the database (as  
a user that is not the superuser) and doing \dt I get the error:


No relations found

But when I do

SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;

I get a list of the tables and their sizes.

I'm not even sure where to begin looking for the solution and any  
pointers would be much appreciated.


---
Rajarshi Guha  [EMAIL PROTECTED]
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04  06F7 1BB9 E634 9B87 56EE
---
A mathematician is a device for turning coffee into theorems.
-- P. Erdos



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] ERROR: unexpected hash relation size:

2007-07-02 Thread Martijn van Oosterhout
On Mon, Jul 02, 2007 at 11:46:54AM +0200, Gerhard Hintermayer wrote:
 ... x should be y
 
 I get these messages on 2 of my servers running mostly identical 
 data. The servers are unix-based and are running 8.1.8 under Linux (Gentoo).
 On one server the message appears only after vacuuming; on the other I 
 had some errors after inserting into the table too (about 12 inserts, no 
 more messages except while vacuuming since then).

Has the server crashed recently? Hash indexes are known not to be crash
safe. Using REINDEX on the index should fix it.
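
For example, with the index name from your earlier message:

  REINDEX INDEX idx_d_kochmi;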

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] tables are not listable by \dt

2007-07-02 Thread Michael Fuhr
On Mon, Jul 02, 2007 at 10:04:21AM -0400, Rajarshi Guha wrote:
 Hi, we recently upgraded our Postgres installation from 7.4 to 8.2 by  
 doing a dump and restore. However after logging into the database (as  
 a user that is not the superuser) and doing \dt I get the error:
 
 No relations found
 
 But when I do
 
 SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
 
 I get a list of the tables and their sizes.

Are the tables in schemas that are in your search_path?
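
For example, to compare the current setting against the schemas that
actually contain your tables (a sketch):

  SHOW search_path;

  SELECT table_schema, table_name
    FROM information_schema.tables
   WHERE table_schema NOT IN ('pg_catalog', 'information_schema');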

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] tables are not listable by \dt

2007-07-02 Thread Tom Lane
Rajarshi Guha [EMAIL PROTECTED] writes:
 Hi, we recently upgraded our Postgres installation from 7.4 to 8.2 by  
 doing a dump and restore. However after logging into the database (as  
 a user that is not the superuser) and doing \dt I get the error:
 No relations found
 But when I do
 SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;
 I get a list of the tables and their sizes.

\dt does joins to pg_roles and pg_namespace ... are those nonempty?
What have you got search_path set to?
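
A variant of your pg_class query that also shows the schema (a sketch):

  SELECT n.nspname, c.relname, c.relpages
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE c.relkind = 'r'
   ORDER BY c.relpages DESC;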

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] tables are not listable by \dt

2007-07-02 Thread Mikko Partio

On 7/2/07, Rajarshi Guha [EMAIL PROTECTED] wrote:


Hi, we recently upgraded our Postgres installation from 7.4 to 8.2 by
doing a dump and restore. However after logging into the database (as
a user that is not the superuser) and doing \dt I get the error:

No relations found




Are you using the 8.2 version of psql?

Regards

MP


Re: [GENERAL] tables are not listable by \dt

2007-07-02 Thread Rajarshi Guha


On Jul 2, 2007, at 10:04 AM, Rajarshi Guha wrote:

Hi, we recently upgraded our Postgres installation from 7.4 to 8.2  
by doing a dump and restore. However after logging into the  
database (as a user that is not the superuser) and doing \dt I get  
the error:


No relations found

But when I do

SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;

I get a list of the tables and their sizes.


Thanks to posters - it was indeed an error with the search path.

---
Rajarshi Guha  [EMAIL PROTECTED]
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04  06F7 1BB9 E634 9B87 56EE
---
If you don't get a good night kiss, you get Kafka dreams.
-Hobbes



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:
 I am trying to learn/practice the administrative steps that would need 
 to be taken in a 'fat finger' scenario, and I am running into problems. 

 Your example transactions are so large that going back 15 minutes is not
 enough. You'll need to go back further.
 recovery_target_time can only stop on a COMMIT or ABORT record. This is
 because it makes no sense to recover half a transaction, only whole
 transactions have meaning for recovery. So if the transactions are very
 large, you need to go back further.

No, that doesn't explain it.  As long as he set the stop time before the
commit of the unwanted transaction, it should do what he's expecting.
It might uselessly replay a lot of the actions of a long-running
transaction, but it will stop before the COMMIT xlog record when it
reaches it, and thus the transaction will not be committed.

What's actually happening according to the log output is that it's
running all the way to the end of WAL.  I can't really think of an
explanation for that other than a mistake in choosing the stop time,
ie, it's later than the commit of the unwanted transaction.  Or maybe
the WAL file is a stale copy that doesn't even contain the unwanted
commit?

Jason, if you can't figure it out you might grab xlogviewer
http://pgfoundry.org/projects/xlogviewer/
and see what it says the timestamps of the commit records in your WAL
files are.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ERROR: unexpected hash relation size:

2007-07-02 Thread Tom Lane
Gerhard Hintermayer [EMAIL PROTECTED] writes:
 ... x should be y
 I get these messages on 2 of my servers running mostly identical 
 data. The servers are unix-based and are running 8.1.8 under Linux (Gentoo).

Update.

http://developer.postgresql.org/pgdocs/postgres/release-8-1-9.html

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] ERROR: unexpected hash relation size:

2007-07-02 Thread Gerhard Hintermayer

Martijn van Oosterhout wrote:

On Mon, Jul 02, 2007 at 11:46:54AM +0200, Gerhard Hintermayer wrote:

... x should be y

I get these messages on 2 of my servers running mostly identical 
data. The servers are unix-based and are running 8.1.8 under Linux (Gentoo).
On one server the message appears only after vacuuming; on the other I 
had some errors after inserting into the table too (about 12 inserts, no 
more messages except while vacuuming since then).


Has the server crashed recently? Hash indexes are known not to be crash
safe. Using REINDEX on the index should fix it.

Have a nice day,

No, the server(s) did not crash.
Will do the update Tom Lane sugested ASAP (unfortunately server is 
running 7*24 :-( )
Strange, that the error is encountered after more that 2 month running 
flawless.


Thanks

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] ERROR: unexpected hash relation size:

2007-07-02 Thread Gerhard Hintermayer

Tom Lane wrote:

Gerhard Hintermayer [EMAIL PROTECTED] writes:

... x should be y
I get these messages on 2 of my servers running mostly identical 
data. The servers are unix-based and are running 8.1.8 under Linux (Gentoo).


Update.

http://developer.postgresql.org/pgdocs/postgres/release-8-1-9.html

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Thanks, will do that ASAP.
But strange that this problem is encountered after 2 months of running 8.1.8 
flawlessly (without any changes to indexes or the database schema).


Regards
Gerhard

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Harrumph -

I downloaded the latest xlogdump source, and built/installed it against 
my 8.2.4 source tree. When I execute it however, I am informed that all 
of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
copies in my /pgdata/archive_logs dir) appear to be malformed:


$ /opt/postgres-8.2.4/bin/xlogdump --port 54824 --host 127.0.0.1 --user 
postgres  ../../../archive_logs/*

../../../archive_logs/0001000F007C:
Bogus page magic number D05E at offset 0
invalid record length at F/7C1C
../../../archive_logs/0001000F007C.00550700.backup:
Partial page of 241 bytes ignored
../../../archive_logs/0001000F007D:
Bogus page magic number D05E at offset 0
invalid record length at F/7D1C
../../../archive_logs/0001000F007D.0006C01C.backup:
Partial page of 241 bytes ignored

Which does not help particularly much :)

I'll keep plugging away at this - perhaps my problem in setting the 
database state to a PITR is related to timezones or timestamp formatting?


-jason

Tom Lane wrote:

Jason, if you can't figure it out you might grab xlogviewer
http://pgfoundry.org/projects/xlogviewer/
and see what it says the timestamps of the commit records in your WAL
files are.
  


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Some more bits on this:

Playing with the date format does not seem to change the outcome 
(the db is always recovered to the most current state). In this case, I 
removed the timezone designation 'PDT' from my timestamp, and the db 
properly figured out that it is running in GMT-7 (pacific) time (see 
syslog output below).


What worries me is the 'record with zero length', combined with my 
issues (in the previous email) with xlogdump not finding the right magic 
bits. Perhaps that (or a problem related to it) is causing the recovery 
process to ignore my PITR information, leading it to simply recover the 
database to the most recent state?


LOG:  database system was shut down at 2007-07-02 10:12:06 PDT
LOG:  starting archive recovery
LOG:  recovery_target_time = 2007-06-29 00:00:00-07
LOG:  restore_command = cp /pgdata/archive_logs/%f %p
LOG:  recovery_target_inclusive = false
LOG:  checkpoint record is at F/7E0DDA60
LOG:  redo record is at F/7E0DDA60; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 0/695227; next OID: 35828734
LOG:  next MultiXactId: 28; next MultiXactOffset: 55
LOG:  automatic recovery in progress
LOG:  record with zero length at F/7E0DDAA8
LOG:  redo is not required
LOG:  archive recovery complete
LOG:  database system is ready

-jason

Jason L. Buberel wrote:

Harrumph -

I downloaded the latest xlogdump source, and built/installed it 
against my 8.2.4 source tree. When I execute it however, I am informed 
that all of my WAL files (either the 'active' copies in pg_xlog or the 
'archived' copies in my /pgdata/archive_logs dir) appear to be malformed:


$ /opt/postgres-8.2.4/bin/xlogdump --port 54824 --host 127.0.0.1 
--user postgres  ../../../archive_logs/*

../../../archive_logs/0001000F007C:
Bogus page magic number D05E at offset 0
invalid record length at F/7C1C
../../../archive_logs/0001000F007C.00550700.backup:
Partial page of 241 bytes ignored
../../../archive_logs/0001000F007D:
Bogus page magic number D05E at offset 0
invalid record length at F/7D1C
../../../archive_logs/0001000F007D.0006C01C.backup:
Partial page of 241 bytes ignored

Which does not help particularly much :)

I'll keep plugging away at this - perhaps my problem in setting the 
database state to a PITR is related to timezones or timestamp formatting?


-jason

Tom Lane wrote:

Jason, if you can't figure it out you might grab xlogviewer
http://pgfoundry.org/projects/xlogviewer/
and see what it says the timestamps of the commit records in your WAL
files are.
  


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] How-To: Aggregate data from multiple rows into a delimited list.

2007-07-02 Thread D. Dante Lorenso
This is not a question, but a solution.  I just wanted to share this 
with others on the list in case it saves you a few hours of searching...


I wanted to select several rows of data and have them returned in a 
single record with the rows joined by a delimiter.  Turns out this is 
very easy to do in PostgreSQL:


  SELECT a.id, a.name,
ARRAY_TO_STRING(ARRAY(
  SELECT b.name
  FROM b
  WHERE b.id = a.id
  ORDER BY b.name ASC
), ',') AS b_names
  FROM a
  ORDER BY a.id ASC;

Sample data would look like this:

[table a]
 id | name
----+-------
  1 | one
  2 | two
  3 | three
  4 | four

[table b]
 id | name
----+---------
  1 | pizza
  1 | hot dog
  2 | gorilla
  2 | monkey
  3 | apple
  4 | cheese
  4 | milk
  4 | eggs

And the result would look like this:

 id | name  | b_names
----+-------+------------------
  1 | one   | hot dog,pizza
  2 | two   | gorilla,monkey
  3 | three | apple
  4 | four  | cheese,eggs,milk

This is an easy way to return attributes of a record from another table 
without having to issue multiple queries or deal with multiple result 
records.
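
If you prefer a plain GROUP BY, a small custom aggregate does the same
job -- a sketch with hypothetical names (unlike the ARRAY(... ORDER BY)
form above, the element order here is not guaranteed):

  CREATE OR REPLACE FUNCTION comma_cat(text, text) RETURNS text AS $$
      SELECT CASE WHEN $1 = '' THEN $2 ELSE $1 || ',' || $2 END
  $$ LANGUAGE sql IMMUTABLE;

  CREATE AGGREGATE comma_list (
      basetype = text,
      sfunc    = comma_cat,
      stype    = text,
      initcond = ''
  );

  -- note: the inner join omits rows of a with no match in b
  SELECT a.id, a.name, comma_list(b.name) AS b_names
    FROM a JOIN b ON b.id = a.id
   GROUP BY a.id, a.name
   ORDER BY a.id;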


Enjoy!

-- Dante


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Simon Riggs
On Mon, 2007-07-02 at 09:21 -0700, Jason L. Buberel wrote:

 I downloaded the latest xlogdump source, and built/installed it against 
 my 8.2.4 source tree. When I execute it however, I am informed that all 
 of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
 copies in my /pgdata/archive_logs dir) appear to be malformed:
 
 $ /opt/postgres-8.2.4/bin/xlogdump --port 54824 --host 127.0.0.1 --user 
 postgres  ../../../archive_logs/*
 ../../../archive_logs/0001000F007C:
 Bogus page magic number D05E at offset 0
 invalid record length at F/7C1C
 ../../../archive_logs/0001000F007C.00550700.backup:
 Partial page of 241 bytes ignored
 ../../../archive_logs/0001000F007D:
 Bogus page magic number D05E at offset 0
 invalid record length at F/7D1C
 ../../../archive_logs/0001000F007D.0006C01C.backup:
 Partial page of 241 bytes ignored
 
 Which does not help particularly much :)
 
 I'll keep plugging away at this - perhaps my problem in setting the 
 database state to a PITR is related to timezones or timestamp formatting?

For now, remove these lines from xlogdump.c, l.82-86
  if (((XLogPageHeader) pageBuffer)->xlp_magic != XLOG_PAGE_MAGIC)
  {
      printf("Bogus page magic number %04X at offset %X\n",
             ((XLogPageHeader) pageBuffer)->xlp_magic, logPageOff);
  }

The program is unfortunately release specific, which is not very useful
for you now. D05E is the correct magic number for 8.2.4.

I'll update that program once we have the main software done for 8.3. I
was hoping that Diogo would continue to support it.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Simon Riggs
On Mon, 2007-07-02 at 11:06 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:
  I am trying to learn/practice the administrative steps that would need 
  to be taken in a 'fat finger' scenario, and I am running into problems. 
 
  Your example transactions are so large that going back 15 minutes is not
  enough. You'll need to go back further.
  recovery_target_time can only stop on a COMMIT or ABORT record. This is
  because it makes no sense to recover half a transaction, only whole
  transactions have meaning for recovery. So if the transactions are very
  large, you need to go back further.
 
 No, that doesn't explain it.  As long as he set the stop time before the
 commit of the unwanted transaction, it should do what he's expecting.
 It might uselessly replay a lot of the actions of a long-running
 transaction, but it will stop before the COMMIT xlog record when it
 reaches it, and thus the transaction will not be committed.
 
 What's actually happening according to the log output is that it's
 running all the way to the end of WAL.  I can't really think of an
 explanation for that other than a mistake in choosing the stop time,
 ie, it's later than the commit of the unwanted transaction.  Or maybe
 the WAL file is a stale copy that doesn't even contain the unwanted
 commit?
 
 Jason, if you can't figure it out you might grab xlogviewer
 http://pgfoundry.org/projects/xlogviewer/
 and see what it says the timestamps of the commit records in your WAL
 files are.

There's a patch awaiting review that adds the time of the last committed
transaction into the LOG output. That should help in cases like this.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Restart after poweroutage

2007-07-02 Thread Tom Lane
Some time ago, Jon Lapham [EMAIL PROTECTED] wrote:
 Today I had a power outage which upon reboot seems to have done 
 something to cause Postgresql to not restart properly.  This has 
 happened to me before:
 http://archives.postgresql.org/pgsql-general/2006-09/msg00938.php

We finally tracked down the cause of this, and it is indeed a Linux
kernel bug: it's simply returning the wrong error code.  There'll be
a workaround in the next set of Postgres releases.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Insert speed new post

2007-07-02 Thread Terry Fielder
My first posting stalled because I posted from the wrong email account; 
here is the new posting, plus some more info:

I have a user application use log.

Under pg 7.x the system performed fine.

In 8.1.9, the insert statements seem to take a long time sometimes, up to 
several seconds or more.


Here is the table:
CREATE TABLE user_app_use_log
(
user_id int4 NOT NULL,
access_stamp timestamptz NOT NULL DEFAULT now(),
app_name char(50) NOT NULL,
url char(150) NOT NULL,
form_params text,
division_id char(3),
url_params text,
ip_address varchar(31)
)
WITHOUT OIDS;

There is no primary key, but the table is never updated, only inserted into.
I removed the only index, with no improvement in performance (in case 
the index locking issues that 8.2 resolves were the concern for an 8.1 install).


Should I add a primary key column of serial?  Will that help?

If anyone has any ideas it would be appreciated.

And in addition, I do a regular vacuum analyze, and to be clear this 
table has 948,851 records and rising.  I USED to purge the table 
regularly, but not since SOX came around.  (But that happened prior to 
my upgrade from 7.4 to 8.1.)
The server is a very powerful 8-CPU machine on SCSI RAID.
iostat tells me it's not backlogged on disk IO:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
  6.540.000.661.310.00   91.49

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda   0.51 2.0222.361292832   14285368
sda   0.00 0.01 0.00   4585   2552
sdb   0.65 4.66 7.3929758134720552
sdc  40.37   384.92  1072.08  245922466  684946704
sdd   0.34 0.00 7.392484720552
sde  40.27   389.03  1066.04  248548400  681086784
sdf  40.21   385.00  1072.58  245976056  685265296
dm-0  1.26 4.66 7.3929755814720552
dm-1  0.00 0.00 0.00   1662128
dm-2  1.26 4.65 7.3929730504720424
hdd   0.00 0.00 0.00140  0
md0 230.85   373.72  1783.57  238766922 1139514032

And top tells me minimal cpu load:
top - 16:28:55 up 7 days,  9:30,  2 users,  load average: 2.61, 2.82, 2.86
Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.3%us,  2.0%sy,  0.0%ni, 95.7%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu1  :  2.0%us,  3.0%sy,  0.0%ni, 91.0%id,  2.3%wa,  0.7%hi,  1.0%si,  
0.0%st
Cpu2  :  0.0%us,  0.3%sy,  0.0%ni, 89.4%id, 10.3%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu5  :  4.3%us,  0.3%sy,  0.0%ni, 95.0%id,  0.3%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  
0.0%st

Mem:  15894024k total, 15527992k used,   366032k free,   323760k buffers
Swap: 17880304k total, 1084k used, 17879220k free, 13912888k cached

 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
20914 postgres  15   0  200m  93m  90m S4  0.6   1:14.89 postmaster
20014 postgres  15   0  200m  93m  90m S4  0.6   2:55.08 postmaster
2389 root  10  -5 000 S3  0.0  33:46.72 md0_raid5
15111 postgres  15   0  209m 102m  90m S2  0.7  25:32.37 postmaster
2577 root  10  -5 000 D1  0.0  22:59.43 kjournald
4949 root  15   0 12996 1336  792 S1  0.0  38:54.10 top



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Mon, 2007-07-02 at 09:21 -0700, Jason L. Buberel wrote:
 I downloaded the latest xlogdump source, and built/installed it against 
 my 8.2.4 source tree. When I execute it however, I am informed that all 
 of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
 copies in my /pgdata/archive_logs dir) appear to be malformed:
 Bogus page magic number D05E at offset 0

 For now, remove these lines from xlogdump.c, l.82-86
   if (((XLogPageHeader) pageBuffer)->xlp_magic != XLOG_PAGE_MAGIC)

I don't think that's a very good solution; the reason the magic number
changed is that some of the record formats changed.  Jason needs a copy
that's actually appropriate to 8.2.

However, there is something odd here, because the value of XLOG_PAGE_MAGIC
comes from src/include/access/xlog_internal.h and not from the text of
xlogdump.c itself.  What it looks like to me is that Jason compiled it
against the wrong set of Postgres header files.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Tom Lane
Jason L. Buberel [EMAIL PROTECTED] writes:
 What worries me is the 'record with zero length',

That's just the normal way of detecting end of WAL.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Insert speed new post

2007-07-02 Thread Tom Lane
Terry Fielder [EMAIL PROTECTED] writes:
 Under pg 7.x the system performed fine.

 In 8.1.9, the insert statements seem to take a long time sometimes, up to 
 several seconds or more.

 There is no primary key, but the table is never updated, only inserted into.
 I removed the only index, with no improvement in performance (in case 
 the index locking issues that 8.2 resolves were the concern for an 8.1 install).

Hmm.  With no indexes, inserts ought to be basically a constant-time
operation.  I suspect what you are looking at is stalls caused by
checkpoints or other competing disk activity.  I'd suggest watching the
output of "vmstat 1" or local equivalent, and seeing if you can
correlate the slow inserts with bursts of disk activity.

Have you tuned the 8.1 installation?  I'm wondering about things like
checkpoint_segments and wal_buffers maybe being set lower than you had
them in 7.4.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Simon Riggs
On Mon, 2007-07-02 at 16:32 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Mon, 2007-07-02 at 09:21 -0700, Jason L. Buberel wrote:
  I downloaded the latest xlogdump source, and built/installed it against 
  my 8.2.4 source tree. When I execute it however, I am informed that all 
  of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
  copies in my /pgdata/archive_logs dir) appear to be malformed:
  Bogus page magic number D05E at offset 0
 
  For now, remove these lines from xlogdump.c, l.82-86
    if (((XLogPageHeader) pageBuffer)->xlp_magic != XLOG_PAGE_MAGIC)
 
 I don't think that's a very good solution; the reason the magic number
 changed is that some of the record formats changed.  Jason needs a copy
 that's actually appropriate to 8.2.

That was the hack for Jason, not the longterm solution. I've said I'll
work on that once other core software is done.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Invalid page header

2007-07-02 Thread Poul Møller Hansen
Over time I have had more problems with invalid data in different 
parts of a PostgreSQL database.


Until now it has been pointers to non-present clog files and an index 
file, but now it's in a data file.

I'm getting this error when doing a backup:
invalid page header in block 5377 of relation "events"

Using pg_filedump I get the output below.
Is there any way to recover from that error except doing a restore of 
the complete database ?


The errors I get tell me there must be some kind of software/hardware 
failure on the server.

It's running Ubuntu 6.06LTS, in the beginning with LVM and XFS filesystem.
I expected that this combination could be the cause, so I took the 
server out of service for a week and tested everything with a burn-in 
testing tool from the hardware vendor.
I even ran a destructive test on the hard drives. No fault found.

I reinstalled the same Ubuntu version now using ext3 filesystem and no LVM
and now I'm in trouble again.

Any suggestions on what to do ?


Thanks
Poul




***
* PostgreSQL File/Block Formatted Dump Utility - Version 8.1.1
*
* File: 17455
* Options used: -y -f -R 5377 5377
*
* Dump created on: Mon Jul  2 22:37:15 2007
***

Block 5377 
Header -
Block Offset: 0x02a02000 Offsets: Lower   0 (0x)
Block: Size 45568  Version  146Upper   0 (0x)
LSN:  logid  19268 recoff 0x  Special  57392 (0xe030)
Items:0   Free Space:0
Length (including item array): 24

Error: Invalid header information.

 : 444b  0100   DK..
 0010: 30e092b2 aa2a0*..   


Data --
Empty block - no items listed

Special Section -
Error: Invalid special section encountered.
Error: Special section points off page. Unable to dump contents.

*** End of Requested Range Encountered. Last Block Read: 5377 ***


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] General search problem - challenge

2007-07-02 Thread Postgres User

I have a table of around 6,000 places in the world.  Every time my
server receives a ping, I'm grabbing the content of an article from an
RSS feed.  Then I search the article for the presence of any of the 6000
terms.
A typical article is around 1200 words.

I don't need to save the article in a table and the search is
performed only once, so it's not about FTS.

Any thoughts on the best way to execute these searches using a
traditional language like C++ ?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] General search problem - challenge

2007-07-02 Thread Steve Atkins


On Jul 2, 2007, at 3:36 PM, Postgres User wrote:


I have a table of around 6,000 places in the world.  Everytime my
server receives a ping, I'm grabbing the content of an article from an
RSS feed.  Then I search the article for the presence of any the 6000
terms.
A typical article is around 1200 words.

I don't need to save the article in a table and the search is
performed only once, so it's not about FTS.

Any thoughts on the best way to execute these searches using a
traditional language like C++ ?


That'll depend heavily on the performance you need and the
language you use. C++ is very different to C++/STL is very
different to C++/Qt.

Naive approach: On receiving an article, read all 6000 terms
from the search table. See if any of them are in the article, with
strstr(3).

If that's fast enough for you, you're done. If not, you'll need to
do some work to cache / precompile search patterns in core,
or preprocess the articles for fast multi-term search. It's very
unlikely you'd need to do that, though.

(Also, this is an application that screams "I could be written
faster in perl than c++".)

Cheers,
  Steve


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Restart after poweroutage

2007-07-02 Thread Jon Lapham

Tom Lane wrote:

Some time ago, Jon Lapham [EMAIL PROTECTED] wrote:
Today I had a power outage which upon reboot seems to have done 
something to cause Postgresql to not restart properly.  This has 
happened to me before:

http://archives.postgresql.org/pgsql-general/2006-09/msg00938.php


We finally tracked down the cause of this, and it is indeed a Linux
kernel bug: it's simply returning the wrong error code.  There'll be
a workaround in the next set of Postgres releases.


Thanks for the diligence!

--
-**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
 Jon Lapham  [EMAIL PROTECTED]Rio de Janeiro, Brasil
 Personal: http://www.jandr.org/
***-*--**---***---


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Insert speed new post

2007-07-02 Thread Terry Fielder

Responses below.

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Tom Lane wrote:

Terry Fielder [EMAIL PROTECTED] writes:
  

Under pg 7.x the system performed fine.



  
In 8.1.9, the insert statements seem to take a long time sometimes, up to 
several seconds or more.



  

There is no primary key, but the table is never updated, only inserted into.
I removed the only index, with no improvement in performance (in case 
the index locking issues that 8.2 resolves were the concern for an 8.1 install).



Hmm.  With no indexes, inserts ought to be basically a constant-time
operation.  

My sentiments exactly.

I suspect what you are looking at is stalls caused by
checkpoints or other competing disk activity.  
There were warnings in the logs when I first deployed the 8.1 version.  
Sorry, I should have mentioned.  Here are the postgresql.conf changes I 
made based on what I know I need from my 7.4 install:

max_connections   increased to 250

shared_buffers increased to 11000

The below I increased based on HINT's in the log file.
max_fsm_pages increased to 80   (I have max_fsm_relations to 1000)

checkpoint_segments increased to 300

And much playing around with logging settings, done on my own.



I'd suggest watching the
output of "vmstat 1" or local equivalent, and seeing if you can
correlate the slow inserts with bursts of disk activity.
  

I can do that, next peak period (tomorrow).

Have you tuned the 8.1 installation?
I have tweaked the settings based on HINT's as described above.  Is 
there a document or something else you are referring to?



  I'm wondering about things like
checkpoint_segments and wal_buffers maybe being set lower than you had
them in 7.4.
  

I left the default in 8.1, which is:
#fsync = on # turns forced synchronization 
on or off

#wal_sync_method = fsync# the default is the first option
#full_page_writes = on  # recover from partial page writes
#wal_buffers = 8# min 4, 8KB each
#commit_delay = 0   # range 0-10, in microseconds
#commit_siblings = 5# range 1-1000

# - Checkpoints -

#checkpoint_segments = 3# in logfile segments, min 1, 
16MB each

#increased by terry 20070402
checkpoint_segments = 300   # in logfile segments, min 1, 
16MB each

#checkpoint_timeout = 300   # range 30-3600, in seconds
#checkpoint_warning = 30# in seconds, 0 is off

Any ideas based on this?

Thanks for your help.

Terry


regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

  


Re: [GENERAL] What O/S or hardware feature would be useful for databases?

2007-07-02 Thread Ron Johnson

On 06/18/07 08:05, Merlin Moncure wrote:
[snip]


That being said, it's pretty clear to me we are in the last days of
the disk drive.


Oh, puhleeze.  Seagate, Hitachi, Fuji and WD aren't sitting around 
with their thumbs up their arses.  In 3-4 years, large companies 
and spooky TLAs will be stuffing SANs with hundreds of 2TB drives.


My (young) kids will be out of college before the density/dollar of 
RAM gets anywhere near that of disks.  If it ever does.


What we are in, though, is the last decade of tape.


When solid state drives become prevalent in server
environments, database development will enter a new era...physical
considerations will play less and less a role in how systems are
engineered.


Oh, puhleeze redux.

There will always be physical considerations.  Why?

Even if static RAM drives *do* overtake spindles, you'll still need 
to engineer them properly.  Why?



1) There's always a bottleneck.

2) There's always more data to find the bottleneck.


So, to answer the OP, my answer would be to 'get rid of
the spinning disk!' :-)


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Experiences of PostgreSQL on-disk bitmap index patch

2007-07-02 Thread Ron Johnson

On 06/25/07 09:58, Tom Lane wrote:
[snip]


The fly in the ointment is that if the column value is so high
cardinality as all that, it's questionable whether you want an index
search at all rather than just seqscanning; and it's definite that
the index access cost will be only a fraction of the heap access cost.
So the prospects for actual net performance gain are a lot less than
the index-size argument makes them look.


Well they definitely are for data warehouses, in which many 
high-cardinality columns each have an index.


Because of their small disk size, ANDing them is fast and winnows 
down the result set.  That's the theory, of course.


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [NOVICE] Recursive relationship - preventing cross-index entries.

2007-07-02 Thread Andrew Maclean

Thanks to you all for all your help and comments.
I finally ended up creating a trigger to check the constraints. This has the
added benefit that more than one constraint can be checked in the one
trigger.
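
In case it helps someone else, the shape of it was roughly this -- a
sketch with hypothetical names, assuming plpgsql is installed:

  CREATE OR REPLACE FUNCTION check_adjustment() RETURNS trigger AS $$
  BEGIN
      -- reject an Id that already appears as a Ref, and vice versa
      IF EXISTS (SELECT 1 FROM adjustments WHERE ref = NEW.id) THEN
          RAISE EXCEPTION 'id % already appears in the Ref column', NEW.id;
      END IF;
      IF EXISTS (SELECT 1 FROM adjustments WHERE id = NEW.ref) THEN
          RAISE EXCEPTION 'ref % already appears in the Id column', NEW.ref;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER adjustments_check
      BEFORE INSERT OR UPDATE ON adjustments
      FOR EACH ROW EXECUTE PROCEDURE check_adjustment();
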
As to whether it is the best model or not for what I want to do. This
question is more difficult to answer, but after giving it a lot of thought,
I think it is, mainly on the grounds of elegance and convenience. However
when I construct a GUI all may change!

Thank you all for your input.




On 6/20/07, Sean Davis [EMAIL PROTECTED] wrote:


Andrew Maclean wrote:
 I got no answer so I am trying again.

 In a nutshell, if I have a recursive relationship as outlined below, how
 do I implement a rule for the adjustments table that prevents the entry
 of an Id into the Ref column if the id exists in the Id column and vice
 versa?

 If I have a payments table which holds an Id and a payment and I also
 have an adjustments table that holds a payment id and a reference id so
 that adjustments can be made to payments.
 So the payments table looks like this:
 Id Payment
 1 500.0
 2 100.0
 3 1000.0
 4 50.0
 5 750.0
 6 50.0
 7 800.0
 8 1200.0

 and the adjustments table looks like this:
 Id Ref
 1 2
 3 4
 1 6
 3 5
 The idea is that, if for example Id=1 is a credit card payment, then
 entries 2 and 6 could be payments that are already included in the
 credit card payment so we need to adjust the total payment to take this
 into account.

 This means that the payment for Id=1 ($500) in the payments table needs
 to be reduced by $100 (ref=2) and $50 (ref=6); similarly, the payment
 for Id=3 ($1000) needs to be reduced by $800 ($50 for ref=4 and $750
 for ref=5). So the question is:

  How do I design the adjustments table to ensure that:
 a) For any value entered in the Id column a check should occur to
 ensure that it does not exist in the Ref column.
 b) For any value entered in the Ref column, a check should occur to
 ensure that it does not exist in the Id column.

 In other words, looking at the adjustments table, I should be
 prevented from entering 2,4,6,5 in the Id column and 1, 3 in the Ref
 column.

I think you can put a trigger on the table that can check the
constraints.

http://www.postgresql.org/docs/8.2/static/triggers.html

However, I wonder whether it might not make more sense to go with an
account system, with an account balance and credits and debits to the
account.  Is the system you are proposing really the best data model?

Sean

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly





--
___
Andrew J. P. Maclean
Centre for Autonomous Systems
The Rose Street Building J04
The University of Sydney  2006  NSW
AUSTRALIA
Ph: +61 2 9351 3283
Fax: +61 2 9351 7474
URL: http://www.acfr.usyd.edu.au/
___


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
I now have a working xlogdump, which has allowed me to put together the 
following steps which I believe demonstrate that the recovery process 
insists on recovering to the most recent state.


Here is the sequence of events shown below:

1. Display contents of 'account_note' table
2. Update note field to be 'foobar'.
3. Dump trxnlog, note the new xid
4. Update note fied to be 'foobaz'.
5. Dump trxnlog, note the new xid
6. Update note field to be 'foobing'
7. Dump trxnlog, note the new xid
8. Create recovery.conf file with xid corresponding to 'foobar' update.
9. Stop/start database.
10. Display contents of 'account_note' table
11. Gasp in surprise at seeing 'note' field with value 'foobing'.
12. Expected to see original value 'foobar' (xid 696493, inclusive = true)
13. Scratch head in puzzlement.


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date | note
-----------------+---------------------+-------------------+---------------+-------------------------------
          410805 |              410795 |            258460 | 2006-02-03    | Ratel Investments
          441835 |                 552 |            258459 | 2006-02-16    | testing new account note fix.
         2540171 |             2540085 |            258460 | 2006-09-16    | requested to be removed
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me


altos_research=# begin transaction; update account_note set note = 
'foobar'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.

altos_research=# begin transaction; update account_note set note = 
'foobaz'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.
xid: 696498 total length: 824 status: COMMITED - foobaz trxn

altos_research=# begin transaction; update account_note set note = 
'foobing'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.
xid: 696498 total length: 824 status: COMMITED - foobaz trxn
xid: 696502 total length: 2672 status: COMMITED - foobing trxn

## created recovery.conf file:
recovery_target_xid = '696493'
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_inclusive = 'true'

## stopped and started postgres, following syslog output:
Jul  2 20:51:10 localhost postgres-8.2[9125]: [3-1] LOG:  starting archive recovery
Jul  2 20:51:10 localhost postgres-8.2[9125]: [4-1] LOG:  recovery_target_xid = 696493
Jul  2 20:51:10 localhost postgres-8.2[9125]: [5-1] LOG:  restore_command = cp /pgdata/archive_logs/%f %p
Jul  2 20:51:10 localhost postgres-8.2[9125]: [6-1] LOG:  recovery_target_inclusive = true
Jul  2 20:51:10 localhost postgres-8.2[9125]: [7-1] LOG:  checkpoint record is at F/7E0DF258
Jul  2 20:51:10 localhost postgres-8.2[9125]: [8-1] LOG:  redo record is at F/7E0DF258; undo record is at 0/0; shutdown TRUE
Jul  2 20:51:10 localhost postgres-8.2[9125]: [9-1] LOG:  next transaction ID: 0/696512; next OID: 35828734
Jul  2 20:51:10 localhost postgres-8.2[9125]: [10-1] LOG:  next MultiXactId: 28; next MultiXactOffset: 55
Jul  2 20:51:10 localhost postgres-8.2[9125]: [11-1] LOG:  automatic recovery in progress
Jul  2 20:51:10 localhost postgres-8.2[9125]: [12-1] LOG:  record with zero length at F/7E0DF2A0
Jul  2 20:51:10 localhost postgres-8.2[9125]: [13-1] LOG:  redo is not required
Jul  2 20:51:10 localhost postgres-8.2[9125]: [14-1] LOG:  archive recovery complete
Jul  2 20:51:10 localhost postgres-8.2[9125]: [15-1] LOG:  database system is ready


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date | note


Re: [GENERAL] Invalid page header

2007-07-02 Thread Michael Harris (BR/EPA)
Hi,

I had a similar problem and overcame it by temporarily setting 
zero_damaged_pages, then doing a full vacuum and re-index on the affected table.

The rows contained in the corrupted page were lost but the rest of the table 
was OK after this.
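
Roughly, the sequence looked like the sketch below; the database and table 
names are placeholders, and take a filesystem-level copy of the data 
directory first, since zero_damaged_pages discards the damaged pages for good:

psql mydb <<'EOF'
-- Superuser only: pages with invalid headers are zeroed out when read.
SET zero_damaged_pages = on;
-- VACUUM FULL reads every page, so the damaged one gets zeroed and the table rewritten.
VACUUM FULL mytable;
-- Rebuild indexes that may still reference the lost rows.
REINDEX TABLE mytable;
EOF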

Regards // Mike

-Original Message-
From: Poul Møller Hansen [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 3 July 2007 6:57 AM
To: pgsql-general@postgresql.org
Subject: Invalid page header

For some time I have had recurring problems with invalid data in different 
parts of a PostgreSQL database.

Until now it has been pointers to clog files that were not present, and a 
broken index file, but now it's in a data file.
I'm getting this error when doing a backup:
invalid page header in block 5377 of relation events

Using pg_filedump I get the output below.
Is there any way to recover from that error, short of restoring the 
complete database?
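
For reference, the invocation was along these lines, matching the 'Options 
used' line in the dump header below (the path and database OID are 
illustrative; 17455 is the table's relfilenode):

# Formatted dump of block 5377 only, run against the table's data file.
# Path and database OID are illustrative; 17455 comes from pg_class.relfilenode.
pg_filedump -y -f -R 5377 5377 /var/lib/postgresql/8.1/main/base/16384/17455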

The errors I get tell me there must be some kind of software or hardware 
failure on the server.
It's running Ubuntu 6.06 LTS, initially with LVM and an XFS filesystem.
I suspected that this combination could be the cause, so I took the 
server out of service for a week
and tested everything with a burn-in testing tool from the hardware vendor.
I even ran a destructive test on the hard drives. No fault found.

I reinstalled the same Ubuntu version, this time using an ext3 filesystem 
and no LVM, and now I'm in trouble again.

Any suggestions on what to do?


Thanks
Poul




***
* PostgreSQL File/Block Formatted Dump Utility - Version 8.1.1
*
* File: 17455
* Options used: -y -f -R 5377 5377
*
* Dump created on: Mon Jul  2 22:37:15 2007
***

Block 5377 
Header -
 Block Offset: 0x02a02000 Offsets: Lower   0 (0x)
 Block: Size 45568  Version  146Upper   0 (0x)
 LSN:  logid  19268 recoff 0x  Special  57392 (0xe030)
 Items:0   Free Space:0
 Length (including item array): 24

 Error: Invalid header information.

  : 444b  0100   DK..
  0010: 30e092b2 aa2a0*..   

Data --
 Empty block - no items listed

Special Section -
 Error: Invalid special section encountered.
 Error: Special section points off page. Unable to dump contents.

*** End of Requested Range Encountered. Last Block Read: 5377 ***




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
Minor correction to the output below - the final table dump actually 
contained the following - my apologies for the copy/paste error:


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date |  note
-----------------+---------------------+-------------------+---------------+---------
          410805 |              410795 |            258460 | 2006-02-03    | foobing
          441835 |                 552 |            258459 | 2006-02-16    | foobing
         2540171 |             2540085 |            258460 | 2006-09-16    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing


Every note now reads 'foobing', i.e. the most recent transaction's update, 
not the recovery target.

-jason

Jason L. Buberel wrote:
[original message quoted in full above; snipped]

Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Tom Lane
Jason L. Buberel [EMAIL PROTECTED] writes:
 ## stopped and started postgres, following syslog output:

You seem to have omitted all the interesting details about what you did
here; but stopping and starting postgres is certainly not intended to
cause it to discard data.  There would need to have been some steps
involving restoring a previous base backup and rolling forward through
archived xlog files.

regards, tom lane



Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
I am now learning that fact, but recall the original scenario that I am 
trying to mimic:


1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll back db state to just prior to that deletion.
3. (Me) Assumes that creating a recovery.conf file, setting the target to 
an earlier trxn id, and restarting the database would simply do the trick.


So now I think that my scenario should look more like:

1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll db state back to just prior to that delete.
3. (Me) Performs steps, in addition to creating the recovery.conf with 
the selected xid, that will cause the DB to restart at that PITR.


Now all I need to pin down are the additional steps needed to force the 
recovery process to recover only up to the specified xid and no further.


Such as:

- Remove from pg_xlog all of the log files containing transactions that 
come after the selected xid?

- Other?

-jason


Tom Lane wrote:
[quoted above; snipped]


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

I think that I now see the error of my ways.

When I shut down my server, the files under the ./data/ directory all 
still point to 'now', not to 'a week ago when the backups were taken', so 
the recovery process insists on bringing the database to a PITR equal to 
'now'.
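
pg_controldata makes this visible; for example (the data directory path is 
illustrative):

# Prints, among other things, the latest checkpoint location and next
# transaction ID that recovery will insist on reaching.
pg_controldata /pgdata/data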


Instead, to achieve my goal I would have to restore that base backup and 
rely on the contents of the archived logs, so that the recovery process 
rolls forward to the selected xid and stops there.
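
In outline, something like the sketch below, where the base-backup 
location is illustrative and the recovery.conf values are the ones from my 
test above:

# Stop the current cluster and set its data directory aside.
pg_ctl -D /pgdata/data stop -m fast
mv /pgdata/data /pgdata/data.broken

# Restore last week's base backup in its place (path is illustrative).
cp -rp /pgdata/base_backup /pgdata/data

# Recovery will now roll forward from the backup and stop at the target xid.
cat > /pgdata/data/recovery.conf <<'EOF'
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_xid = '696493'
recovery_target_inclusive = 'true'
EOF
pg_ctl -D /pgdata/data start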


So is there any way to 'trick' or force the server into forgetting what it 
thinks 'now' is, stepping back to the selected xid, and making that the 
new 'now'?


-jason

Jason L. Buberel wrote:
[previous message quoted in full above; snipped]