[GENERAL] Prepared statement parameters for an 'IN ()' clause

2007-10-17 Thread Jason L. Buberel
Can someone point me to an example of creating a prepared statement for 
a query with an 'IN' clause?


The query looks like this:

select value from table where
state = $1 and city = $2 and zip = $3 and
date in ( $4 );

For the prepared statement, I have tried:

prepare st1(text, text, text, text[]) as
  select value from table where
  state = $1 and city = $2 and zip = $3 and
  date in ( $4 );

Then invoked it as:

execute st1('CA', 'SUNNYVALE', '94086', 
'{2007-10-01,2007-09-25,2007-09-15}' );


But the text array input parameter does not seem to be expanded 
correctly inside the 'IN' clause. My query consistently returns no results.


Any quick syntax fix for this?
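
One thing I have not yet tried - comparing against the array with 
'= any()' instead of 'in ()', since 'in ( $4 )' treats $4 as a single 
value rather than a list. A sketch of what I mean (with the placeholder 
table renamed to my_table, since 'table' is a reserved word):

prepare st1(text, text, text, text[]) as
  select value from my_table where
  state = $1 and city = $2 and zip = $3 and
  date = any ( $4 );

execute st1('CA', 'SUNNYVALE', '94086',
  '{2007-10-01,2007-09-25,2007-09-15}');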

Thanks
Jason


Re: [GENERAL] Recommended method for creating file of zeros?

2007-10-16 Thread Jason L. Buberel
And thank you to Kevin - this did the trick perfectly. I've been able to 
recover everything successfully.


Regards,
Jason

Kevin Hunter wrote:

The tool is 'dd' and /dev.  /dev/zero in this case.  The summary of what
you asked:

$ dd if=/dev/zero of=./zblah count=1 bs=256k
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.00130993 seconds, 200 MB/s

$ dd if=/dev/zero of=./zblah count=1 bs=256000
1+0 records in
1+0 records out
256000 bytes (256 kB) copied, 0.00136915 seconds, 187 MB/s

HTH,

Kevin


[GENERAL] Recommended method for creating file of zeros?

2007-10-13 Thread Jason L. Buberel

I have a recovery situation related to:

Oct 13 23:04:58 66-162-145-116 postgres[16955]: [1-1] LOG:  database 
system was shut down at 2007-10-13 23:04:54 PDT
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [2-1] LOG:  checkpoint 
record is at F0/E21C
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [3-1] LOG:  redo record 
is at F0/E21C; undo record is at F0/E21C; shutdown TRUE
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [4-1] LOG:  next 
transaction ID: 172668192; next OID: 88470513
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [5-1] LOG:  next 
MultiXactId: 32334; next MultiXactOffset: 69955
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [6-1] PANIC:  could not 
access status of transaction 172668192
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [6-2] DETAIL:  could not 
open file "pg_clog/00A4": No such file or directory
Oct 13 23:04:58 66-162-145-116 postgres[16953]: [1-1] LOG:  startup 
process (PID 16955) was terminated by signal 6
Oct 13 23:04:58 66-162-145-116 postgres[16953]: [2-1] LOG:  aborting 
startup due to startup process failure

~

Based on what I've read in the mail archives, the recommended fix is to 
create a file '00A4' filled with 256k of zeros. Is there a quick and 
easy Linux way of creating such a beast?


-jason


[GENERAL] Solutions for listening on multiple ports?

2007-10-09 Thread Jason L. Buberel
Is there a 'generally accepted' best practice for enabling a single 
postgres instance to listen for client connections on more than one 
ip/port combination?


As far as I can tell, the 'listen_addresses' configuration variable can 
accept multiple addresses, but the 'port' variable only accommodates a 
single value:

listen_addresses = '127.0.0.1'
port = 5432

What I would like to simulate is Apache's notation:

Listen: 127.0.0.1:5432
Listen: 192.168.0.1:54824
...

The driving force behind this is network security policy. I would 
prefer not to resort to kernel-level netfilter trickery to accomplish 
this, if possible.
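
For reference, the closest I can get with a single instance (a sketch; 
the second address is hypothetical) still shares one port across all 
addresses:

listen_addresses = '127.0.0.1, 192.168.0.1'   # multiple addresses are fine
port = 5432                                   # ...but one port for all of them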


Thanks,
Jason


Re: [GENERAL] Strange discrepancy in query performance...

2007-10-02 Thread Jason L. Buberel
I agree that this is a bug in JasperReports. I've been stepping through 
their code to determine where the parameter type is set to 
'java.lang.String', but have not yet figured out how their Java API will 
allow me to override that with 'java.lang.Integer' or something more 
appropriate.


If I figure something out, I'll post to the list.

Regards,
jason

Tom Lane wrote:

Hmm ... if Postgres were just given the parameter symbol with no type
information, I believe it would have assumed it was bigint (or in
general, the same type as what it's being compared to).  So your
problem suggests that Jasper is deliberately telling the backend that
that parameter is of type text.  If that's coming from something you
did in your code, you probably ought to change the code.  If not,
it seems like a bug/omission in Jasper.

regards, tom lane


Re: [GENERAL] Strange discrepancy in query performance...

2007-10-01 Thread Jason L. Buberel

Tom - right as usual:

Yep - you were right: the query plan for the prepared statement (a 
sequential scan of the table) differed a bit from the directly-executed 
version :)


For reference, when using JasperReports .jrxml files as the basis for 
the query, I only had to do the following to 'force' postgres to 
treat the jasper report parameter as a number and not text, thereby 
allowing the correct index to be used:

select * from city_summary where city_master_id = 
$P{city_master_id}::bigint ...


Query times went from 300+ seconds back down to ~100ms.
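
In case it helps anyone else reproduce this, the two plans can be 
compared directly from psql (a sketch; 'q_text' and 'q_big' are 
throwaway names, and the bad plan relies on 8.2's implicit casts to text):

-- parameter declared text, the way Jasper sends it: the bigint column
-- gets compared as text, so the primary-key index cannot be used
prepare q_text(text) as
  select fact_id from city_summary where city_master_id = $1;
explain execute q_text('291');

-- an explicit cast back to bigint restores the index scan
prepare q_big(text) as
  select fact_id from city_summary where city_master_id = $1::bigint;
explain execute q_big('291');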

-jason

Tom Lane wrote:

"Jason L. Buberel" <[EMAIL PROTECTED]> writes:

In my syslog output, I see entries indicating that the
JDBC-driver-originated query on a table named 'city_summary' is taking
upwards of 300 seconds:

Oct  1 18:27:47 srv3 postgres-8.2[1625]: [12-1]
LOG:  duration: 307077.037 ms  execute S_42: select * from city_summary
  where state = $1 and city_master_id = $2 and res_type = 'single_family'
  and date = $3 and range = 90 and zip = $4 and quartile = '__ALL'
DETAIL:  parameters: $1 = 'CA', $2 = '291', $3 = '2007-09-28', $4 = '__ALL'

However, if I run the same query on the same host at the same time that
the Java application is running, but from the psql command line, it
takes only 0.37 seconds:

time /opt/postgres-8.2.4/bin/psql --port 54824 -U postgres -d altos_research \
  -c "select fact_id from city_summary where state = 'CA'
      and city_master_id = 291 and zip = '__ALL' and quartile = '__ALL'
      and res_type = 'single_family' and range = '90' and date = '2007-09-28';"

This is not, in fact, the same query --- the JDBC-originated one is
parameterized, which means it very possibly has a different plan
(since the planner doesn't know the particular values to plan for).

Try using PREPARE and EXPLAIN EXECUTE to examine the plan that is
being produced for the parameterized query.

regards, tom lane


[GENERAL] Strange discrepancy in query performance...

2007-10-01 Thread Jason L. Buberel
I'm hoping that someone on the list can help me understand an apparent 
discrepancy in the performance information that I'm collecting on a 
particularly troublesome query.


The configuration: pg-8.2.4 on RHEL4. log_min_duration_statement = 1m.

In my syslog output, I see entries indicating that the 
JDBC-driver-originated query on a table named 'city_summary' is taking 
upwards of 300 seconds:


Oct  1 18:27:47 srv3 postgres-8.2[1625]: [12-1]
LOG:  duration: 307077.037 ms  execute S_42: select * from city_summary
  where state = $1 and city_master_id = $2 and res_type = 'single_family'
  and date = $3 and range = 90 and zip = $4 and quartile = '__ALL'
DETAIL:  parameters: $1 = 'CA', $2 = '291', $3 = '2007-09-28', $4 = '__ALL'

However, if I run the same query on the same host at the same time that 
the Java application is running, but from the psql  command line, it 
takes only 0.37 seconds:


> time /opt/postgres-8.2.4/bin/psql --port 54824 -U postgres -d altos_research \
    -c "select fact_id from city_summary where state = 'CA'
        and city_master_id = 291 and zip = '__ALL' and quartile = '__ALL'
        and res_type = 'single_family' and range = '90' and date = '2007-09-28';"


 fact_id
----------
 46624925
(1 row)

0.00user 0.00system 0:00.37elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+285minor)pagefaults 0swaps

The output of 'explain' seems to indicate that the right index is being 
used:


                                        QUERY PLAN
-------------------------------------------------------------------------------------------
 Index Scan using city_summary_pkey on city_summary  (cost=0.00..12.27 rows=1 width=2721)
   Index Cond: ((date = '2007-09-28'::text) AND (state = 'CA'::text) AND
                (city_master_id = 334::bigint) AND (quartile = '__ALL'::text) AND
                (range = '90'::text))
   Filter: ((zip = '__ALL'::text) AND ((res_type)::text = 'single_family'::text))
(3 rows)

The index looks like this:

# \d city_summary_pkey
Index "public.city_summary_pkey"
       Column       |  Type
--------------------+---------
 date               | text
 state              | text
 city_master_id     | bigint
 zip_master_id      | integer
 res_type_master_id | bigint
 quartile           | text
 range              | text
primary key, btree, for table "public.city_summary"

Any ideas on why I am seeing such a big difference between the two 
measurements (JDBC/syslog vs. command line)?


Thanks,
Jason








[GENERAL] Preferred usage for 'copy to' for a subset of data

2007-09-13 Thread Jason L. Buberel
For recent postgres releases, is there any effective difference 
(performance/memory/io) between:


create temp table foo as select * from bar where bar.date > '2007-01-01';
copy foo to '/tmp/bar.out';
drop table foo;

and this:

copy ( select * from bar where bar.date > '2007-01-01' ) to '/tmp/bar.out';

...that would lead me to use one method vs. the other on large data sets 
(1M+ records)?
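
In case it matters, here is how I would time the two from psql (a 
sketch; 'bar' is the same hypothetical table, and COPY writes 
server-side files, so this runs as a superuser):

\timing
create temp table foo as select * from bar where bar.date > '2007-01-01';
copy foo to '/tmp/bar.out';
drop table foo;
copy ( select * from bar where bar.date > '2007-01-01' ) to '/tmp/bar.out2';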


Just wondering,
Jason



Re: [GENERAL] Alternative to drop index, load data, recreate index?

2007-09-13 Thread Jason L. Buberel

Depesz,

Thank you for the suggestion- I thought I had read up on that tool 
earlier but had somehow managed to forget about it when starting this 
phase of my investigation.


Needless to say, I can confirm the claims made on the project homepage 
when using very large data sets.


- Loading 1.2M records into an indexed table:
 - pg_bulkload: 5m 29s
 - copy from: 53m 20s

These results were obtained using pg-8.2.4 with pg_bulkload-2.2.0.

-jason

hubert depesz lubaczewski wrote:

On Mon, Sep 10, 2007 at 05:06:35PM -0700, Jason L. Buberel wrote:
  
I am considering moving to date-based partitioned tables (each table = 
one month-year of data, for example). Before I go that far - are there 
any other tricks I could or should be using to speed up my bulk data loading?



did you try pgbulkload? (http://pgbulkload.projects.postgresql.org/)

depesz


[GENERAL] Alternative to drop index, load data, recreate index?

2007-09-10 Thread Jason L. Buberel
When loading very large data exports (> 1 million records) I have found 
it necessary to use the following sequence to achieve even reasonable 
import performance:


1. Drop all indices on the recipient table
2. Use "copy recipient_table from '/tmp/input.file';"
3. Recreate all indices on the recipient table
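
For reference, the same sequence spelled out in SQL (a sketch with a 
single hypothetical index; bumping maintenance_work_mem for the session 
is the one extra trick I know of to speed up step 3):

-- 1. drop the index(es) on the recipient table
drop index recipient_table_date_idx;

-- 2. bulk load the export
copy recipient_table from '/tmp/input.file';

-- 3. rebuild, giving the index build extra sort memory for this session
set maintenance_work_mem = '512MB';   -- hypothetical value
create index recipient_table_date_idx on recipient_table (date);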

However, I now have tables so large that even the 'recreate all indices' 
step is taking too long (15-20 minutes on 8.2.4).


I am considering moving to date-based partitioned tables (each table = 
one month-year of data, for example). Before I go that far - are there 
any other tricks I could or should be using to speed up my bulk data loading?


Thanks,
jason


Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-03 Thread Jason L. Buberel
Understood all around - thanks to everyone for helping me clarify (in my 
head/understanding) how these pieces fit together.


One last request - can you glance over the plan below and let me know if 
it sounds sane? The goal again is to be able to recover to a point in 
time in the very recent past after a mistake (accidental deletion) occurs. 
This is not meant to be a substitute for a real backup strategy:


1. On a daily basis, run a hot backup more/less as described in this 
Wiki article on postgresqlforums.com: http://shorl.com/naprofamynone


2. But instead of creating a .tar.gz file, simply 'rsync' the entire 
$PGDATA dir to another directory stored on a local disk.


3. When an accidental deletion occurs, shutdown the database, restore 
the $PGDATA dir from yesterday's rsync snapshot (after saving any 
unarchived pg_xlogs), and use the normal recovery process to effectively 
'roll back' the database state to what it was just prior to the accident.
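
Steps 1 and 2 in shell form (a sketch of how I plan to script it; the 
snapshot path is hypothetical, and the database stays up throughout):

$ psql -U postgres -c "select pg_start_backup('daily');"
$ rsync -a --delete $PGDATA/ /backup/pgdata.snapshot/
$ psql -U postgres -c "select pg_stop_backup();"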


Thanks again for all the helpful comments and clarifications. I am now a 
more clueful person as a result ;)

-jason



Erik Jones wrote:

On Jul 2, 2007, at 11:58 PM, Jason L. Buberel wrote:

I am now learning that fact, but recall the original scenario that I 
am trying to mimic:


1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll back db state to just prior to that 
deletion.
3. (Me) Assumes that creating a recovery.conf file, setting the 
target to an earlier trxn id, and restarting the database would 
simply do the trick.


So now I think that my scenario should look more like:

1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll db state back to just prior to that delete.
3. (Me) Performs steps, in addition to creating the recovery.conf 
with the selected xid, that will cause the DB to restart at that PITR.


Now all I need to lock down are those 'additional steps needed to 
force the recovery process to only recover up to the specified xid 
and no further'.


Such as:

- Remove from pg_xlog all of the log files containing transactions 
that come after the selected xid?

- Other?

-jason


Whoa.  If what you're asking for is a database-level rollback or undo 
type of function, I'm pretty sure that's not doable in the way you're 
asking.  Once a postgres cluster (data/*) has committed or rolled back 
transactions, you cannot roll that particular cluster instance back; 
i.e., you cannot stop the database and have it start from some previous 
transaction state and stop at an arbitrary point.  The only way to do 
this is if you have a filesystem-level backup from a point in time 
before the point to which you wish to return, along with all of the 
transaction logs from just before the backup was taken up to the point 
to which you wish to return.  You can then bring that backup up in 
recovery mode and have it replay up to a transaction id you specify.


The main point here is that PITR requires and, is run on, a base backup.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

I think that I now see the error of my ways.

When I shut down my server, the files under the ./data/ directory still 
all point to 'now' and not 'a week ago when the backups were taken'. So 
the recovery process insists on bringing the database to a PITR equal to 
'now'.


Instead, in order to achieve my goal I would have to restore to that 
backup, and rely on the contents of the archive_logs to have the 
recovery process return me to the selected xid PITR.
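
In other words, something like the following (a sketch of my 
understanding; the paths and snapshot location are hypothetical):

$ pg_ctl stop -D $PGDATA
$ mv $PGDATA $PGDATA.broken
$ cp -a /backup/pgdata.snapshot $PGDATA          # week-old base backup
$ cp $PGDATA.broken/pg_xlog/* $PGDATA/pg_xlog/   # keep any unarchived WAL
$ cat > $PGDATA/recovery.conf <<'EOF'
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_xid = '696493'
recovery_target_inclusive = 'true'
EOF
$ pg_ctl start -D $PGDATA    # replays WAL, stopping at the target xid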


So is there any way to 'trick' or force the server to forget what it 
thinks 'now' is and instead to step back to the selected xid and make 
that the new version of 'now'?


-jason

Jason L. Buberel wrote:
I am now learning that fact, but recall the original scenario that I 
am trying to mimic:


1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll back db state to just prior to that deletion.
3. (Me) Assumes that creating a recovery.conf file, setting the 
target to an earlier trxn id, and restarting the database would 
simply do the trick.


So now I think that my scenario should look more like:

1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll db state back to just prior to that delete.
3. (Me) Performs steps, in addition to creating the recovery.conf with 
the selected xid, that will cause the DB to restart at that PITR.


Now all I need to lock down are those 'additional steps needed to 
force the recovery process to only recover up to the specified xid and 
no further'.


Such as:

- Remove from pg_xlog all of the log files containing transactions 
that come after the selected xid?

- Other?

-jason


Tom Lane wrote:

"Jason L. Buberel" <[EMAIL PROTECTED]> writes:

## stopped and started postgres, following syslog output:



You seem to have omitted all the interesting details about what you did
here; but "stopping and starting" postgres is certainly not intended to
cause it to discard data.  There would need to have been some steps
involving restoring a previous base backup and rolling forward through
archived xlog files.

regards, tom lane



Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
I am now learning that fact, but recall the original scenario that I am 
trying to mimic:


1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll back db state to just prior to that deletion.
3. (Me) Assumes that creating a recovery.conf file, setting the 
target to an earlier trxn id, and restarting the database would 
simply do the trick.


So now I think that my scenario should look more like:

1. Person accidentally deletes contents of important table.
2. Admin (me) wants to roll db state back to just prior to that delete.
3. (Me) Performs steps, in addition to creating the recovery.conf with 
the selected xid, that will cause the DB to restart at that PITR.


Now all I need to lock down are those 'additional steps needed to force 
the recovery process to only recover up to the specified xid and no 
further'.


Such as:

- Remove from pg_xlog all of the log files containing transactions that 
come after the selected xid?

- Other?

-jason


Tom Lane wrote:

"Jason L. Buberel" <[EMAIL PROTECTED]> writes:

## stopped and started postgres, following syslog output:



You seem to have omitted all the interesting details about what you did
here; but "stopping and starting" postgres is certainly not intended to
cause it to discard data.  There would need to have been some steps
involving restoring a previous base backup and rolling forward through
archived xlog files.

regards, tom lane



Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
Minor correction to the output below - the final table dump actually 
contained the following - my apologies for the copy/paste error:


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date |  note
-----------------+---------------------+-------------------+---------------+---------
          410805 |              410795 |            258460 | 2006-02-03    | foobing
          441835 |                 552 |            258459 | 2006-02-16    | foobing
         2540171 |             2540085 |            258460 | 2006-09-16    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing
             999 |                 552 |            258460 | 2007-06-29    | foobing


Which is the most recent transaction update.

-jason

Jason L. Buberel wrote:
I now have a working xlogdump, which has allowed me to put together 
the following steps which I believe demonstrate that the recovery 
process insists on recovering to the most recent state.


Here is the sequence of events shown below:

1. Display contents of 'account_note' table
2. Update note field to be 'foobar'.
3. Dump trxnlog, note the new xid
4. Update note field to be 'foobaz'.
5. Dump trxnlog, note the new xid
6. Update note field to be 'foobing'
7. Dump trxnlog, note the new xid
8. Create recovery.conf file with xid corresponding to 'foobar' update.
9. Stop/start database.
10. Display contents of 'account_note' table
11. Gasp in surprise at seeing 'note' field with value 'foobing'.
12. Expected to see original value 'foobar' (xid 696493, inclusive = 
true)

13. Scratch head in puzzlement.


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date |             note
-----------------+---------------------+-------------------+---------------+-------------------------------
          410805 |              410795 |            258460 | 2006-02-03    | Ratel Investments
          441835 |                 552 |            258459 | 2006-02-16    | testing new account note fix.
         2540171 |             2540085 |            258460 | 2006-09-16    | requested to be removed
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me


altos_research=# begin transaction; update account_note set note = 
'foobar'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.

altos_research=# begin transaction; update account_note set note = 
'foobaz'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.
xid: 696498 total length: 824 status: COMMITED - foobaz trxn

altos_research=# begin transaction; update account_note set note = 
'foobing'; commit transaction;


##xlogdump of most recent file in pg_xlog:
xid: 695073 total length: 466 status: COMMITED
xid: 695081 total length: 131 status: COMMITED
xid: 695082 total length: 131 status: COMMITED
xid: 695083 total length: 131 status: COMMITED
xid: 695084 total length: 131 status: COMMITED
xid: 696493 total length: 1520 status: COMMITED - foobar trxn.
xid: 696498 total length: 824 status: COMMITED - foobaz trxn
xid: 696502 total length: 2672 status: COMMITED - foobing trxn

## created recovery.conf file:
recovery_target_xid = '696493'
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_inclusive = 'true'

## stopped and started postgres, following syslog output:
Jul  2 20:51:10 localhost postgres-8.2[9125]: [3-1] LOG:  starting 
archive recovery
Jul  2 20:51:10 localhost postgres-8.2[9125]: [4-1] LOG:  
recovery_target_xid = 696493
Jul  2 20:51:10 localhost postgres-8.2[9125]: [5-1] LO

Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel
se 
system is ready


altos_research=# select * from account_note;
 account_note_id | customer_account_id | user_id_of_author | creation_date |             note
-----------------+---------------------+-------------------+---------------+-------------------------------
          410805 |              410795 |            258460 | 2006-02-03    | Ratel Investments
          441835 |                 552 |            258459 | 2006-02-16    | testing new account note fix.
         2540171 |             2540085 |            258460 | 2006-09-16    | requested to be removed
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me
             999 |                 552 |            258460 | 2007-06-29    | help me



So now can someone tell me what I'm doing incorrectly :) ?

-jason


Simon Riggs wrote:

On Mon, 2007-07-02 at 16:32 -0400, Tom Lane wrote:

"Simon Riggs" <[EMAIL PROTECTED]> writes:


On Mon, 2007-07-02 at 09:21 -0700, Jason L. Buberel wrote:
I downloaded the latest xlogdump source, and built/installed it against 
my 8.2.4 source tree. When I execute it however, I am informed that all 
of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
copies in my /pgdata/archive_logs dir) appear to be malformed:

Bogus page magic number D05E at offset 0


For now, remove these lines from xlogdump.c, l.82-86
  if (((XLogPageHeader) pageBuffer)->xlp_magic != XLOG_PAGE_MAGIC)

I don't think that's a very good solution; the reason the magic number
changed is that some of the record formats changed.  Jason needs a copy
that's actually appropriate to 8.2.



That was the hack for Jason, not the longterm solution. I've said I'll
work on that once other core software is done.




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Some more bits on this:

And playing with the date format does not seem to change the outcome 
(the db is always recovered to the most current state). In this case, I 
removed the timezone designation 'PDT' from my timestamp, and the db 
properly figured out that it is running in GMT-7 (pacific) time (see 
syslog output below).


What worries me is the 'record with zero length', combined with my 
issues (in the previous email) with xlogdump not finding the right magic 
bits. Perhaps that (or a problem related to it) is causing the recovery 
process to ignore my PITR information, leading it to simply recover the 
database to the most recent state?


LOG:  database system was shut down at 2007-07-02 10:12:06 PDT
LOG:  starting archive recovery
LOG:  recovery_target_time = 2007-06-29 00:00:00-07
LOG:  restore_command = "cp /pgdata/archive_logs/%f %p"
LOG:  recovery_target_inclusive = false
LOG:  checkpoint record is at F/7E0DDA60
LOG:  redo record is at F/7E0DDA60; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 0/695227; next OID: 35828734
LOG:  next MultiXactId: 28; next MultiXactOffset: 55
LOG:  automatic recovery in progress
LOG:  record with zero length at F/7E0DDAA8
LOG:  redo is not required
LOG:  archive recovery complete
LOG:  database system is ready

-jason

Jason L. Buberel wrote:

Harrumph -

I downloaded the latest xlogdump source, and built/installed it 
against my 8.2.4 source tree. When I execute it however, I am informed 
that all of my WAL files (either the 'active' copies in pg_xlog or the 
'archived' copies in my /pgdata/archive_logs dir) appear to be malformed:


$ /opt/postgres-8.2.4/bin/xlogdump --port 54824 --host 127.0.0.1 
--user postgres  ../../../archive_logs/*

../../../archive_logs/0001000F007C:
Bogus page magic number D05E at offset 0
invalid record length at F/7C1C
../../../archive_logs/0001000F007C.00550700.backup:
Partial page of 241 bytes ignored
../../../archive_logs/0001000F007D:
Bogus page magic number D05E at offset 0
invalid record length at F/7D1C
../../../archive_logs/0001000F007D.0006C01C.backup:
Partial page of 241 bytes ignored

Which does not help particularly much :)

I'll keep plugging away at this - perhaps my problem in setting the 
database state to a PITR is related to timezones or timestamp formatting?


-jason

Tom Lane wrote:

Jason, if you can't figure it out you might grab xlogviewer
http://pgfoundry.org/projects/xlogviewer/
and see what it says the timestamps of the commit records in your WAL
files are.




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Harrumph -

I downloaded the latest xlogdump source, and built/installed it against 
my 8.2.4 source tree. When I execute it however, I am informed that all 
of my WAL files (either the 'active' copies in pg_xlog or the 'archived' 
copies in my /pgdata/archive_logs dir) appear to be malformed:


$ /opt/postgres-8.2.4/bin/xlogdump --port 54824 --host 127.0.0.1 --user 
postgres  ../../../archive_logs/*

../../../archive_logs/0001000F007C:
Bogus page magic number D05E at offset 0
invalid record length at F/7C1C
../../../archive_logs/0001000F007C.00550700.backup:
Partial page of 241 bytes ignored
../../../archive_logs/0001000F007D:
Bogus page magic number D05E at offset 0
invalid record length at F/7D1C
../../../archive_logs/0001000F007D.0006C01C.backup:
Partial page of 241 bytes ignored

Which does not help particularly much :)

I'll keep plugging away at this - perhaps my problem in setting the 
database state to a PITR is related to timezones or timestamp formatting?


-jason

Tom Lane wrote:

Jason, if you can't figure it out you might grab xlogviewer
http://pgfoundry.org/projects/xlogviewer/
and see what it says the timestamps of the commit records in your WAL
files are.




Re: [GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-02 Thread Jason L. Buberel

Simon,

Thanks for the tip. I had assumed that so long as I set 
'recovery_target_time' to a value that occurred before the 'fatal 
commit' and set the 'inclusive' flag to false, I would be able to 
return to just before the deletion occurred.


I'll play with it a bit more and see. I just want to know what to do in 
the future should a real emergency like this occur.


Thanks,
jason

Simon Riggs wrote:

On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:

Your example transactions are so large that going back 15 minutes is not
enough. You'll need to go back further.

recovery_target_time can only stop on a COMMIT or ABORT record. This is
because it makes no sense to recover half a transaction, only whole
transactions have meaning for recovery. So if the transactions are very
large, you need to go back further.




[GENERAL] recovery_target_time ignored or recovery always recovers to end of WAL

2007-07-01 Thread Jason L. Buberel
I am trying to learn/practice the administrative steps that would need 
to be taken in a 'fat finger' scenario, and I am running into problems. 
I am trying to use 'recovery.conf' to set the database state to about 15 
minutes ago in order to recover from accidentally deleting important 
data. However, each time I restart the database in recovery mode, it 
seems to always return me to the state it was in when I shut it down, 
ignoring my 'recovery_target_time' setting.


For example:

1. I have a production 8.2.4 database running with WAL archiving enabled.
2. Thinking I am logged into a development database, I issue the commands:

start transaction;
delete from billing_info;
delete from customer_account;
commit;

3. I suddenly realize I was logged into the production database.
4. I fall out of my chair, then regain consciousness 10 minutes later.
5. I shut down the database and create a 'recovery.conf' file as follows:

# pretend that 2007-07-01 20:50:00 PDT was 15 minutes ago.
recovery_target_time = '2007-07-01 20:50:00 PDT'
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_inclusive = 'false'

6. I start the database, and I see the following log messages:

LOG:  starting archive recovery
LOG:  recovery_target_time = 2007-07-01 20:50:00-07
LOG:  restore_command = "cp /pgdata/archive_logs/%f %p"
LOG:  recovery_target_inclusive = false
LOG:  checkpoint record is at F/7E0DD5A4
LOG:  redo record is at F/7E0DD5A4; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 0/693577; next OID: 35828734
LOG:  next MultiXactId: 28; next MultiXactOffset: 55
LOG:  automatic recovery in progress
LOG:  record with zero length at F/7E0DD5EC
LOG:  redo is not required
LOG:  archive recovery complete
LOG:  database system is ready

7. I log back in to the database, expecting to see all of my 
billing_info and customer_account records in place. But instead, the 
tables are empty - just as they were when the db was shut down.


What have I done wrong? Or is there some other procedure to use in 
these situations?


Thanks,
jason



Re: [GENERAL] When an index and a constraint have the same name...

2007-01-30 Thread Jason L. Buberel

Thanks for taking a look Tom:

I am running postgres 8.1.4 on RedHat (CentOS) v4.0. Here is the 
description of the purchase_record table (somewhat abbreviated, with 
uninvolved columns omitted):


# \d purchase_record
          Table "public.purchase_record"
         Column          |  Type  |     Modifiers
-------------------------+--------+--------------------
 purchase_record_id      | bigint | not null default 0
 report_specification_id | bigint |
Indexes:
    "pr_pkey" PRIMARY KEY, btree (purchase_record_id)
    "fki_pr_rs" btree (report_specification_id)
Foreign-key constraints:
    "pr_rs" FOREIGN KEY (report_specification_id) REFERENCES
        report_specification(report_specification_id)
        ON UPDATE RESTRICT ON DELETE CASCADE



# \d report_specification
       Table "public.report_specification"
         Column          |  Type  | Modifiers
-------------------------+--------+-----------
 report_specification_id | bigint | not null
 report_template_id      | bigint |
Indexes:
    "rs_pkey" PRIMARY KEY, btree (report_specification_id)
    "fki_rs_rt_fkey" btree (report_template_id)

Regards,
Jason

Tom Lane wrote:

[EMAIL PROTECTED] writes:

Seems as though I've gotten myself into something of a pickle:
I wound up with a fkey constraint and an index on the same table having the 
same name ('rs_fkey').



That shouldn't be a problem particularly.

The result is an error message when I try to drop the table (cascade) or 
even drop the constraint:

# alter table report_specification drop constraint rs_pkey;
NOTICE:  constraint pr_rs on table purchase_record depends on index rs_pkey
ERROR:  "rs_pkey" is an index



That seems odd.  What PG version is this exactly ("8.1" is not good
enough)?  What does psql show for "\d report_specification" and
"\d purchase_record"?

regards, tom lane

