Can someone point me to an example of creating a prepared statement for
a query with an 'IN' clause?
The query looks like this:
select value from table where
state = $1 and city = $2 and zip = $3 and
date in ( $4 );
For the prepared statement, I have tried:
prepare st1(text, text, text, text[
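For what it's worth, a minimal sketch of the usual workaround, assuming the
list of dates can be passed as one array parameter (the table name and sample
values below are illustrative): the '= any(array)' form behaves like IN here.

-- Sketch only, not the statement above; 'my_table' is a placeholder name.
prepare st1(text, text, text, date[]) as
  select value from my_table
   where state = $1 and city = $2 and zip = $3
     and date = any ($4);

execute st1('CA', 'Los Altos', '94022', '{2007-06-01,2007-06-02}');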
And thank you to Kevin - this did the trick perfectly. I've been able to
recover everything successfully.
Regards,
Jason
Kevin Hunter wrote:
The tool is 'dd', with /dev/zero in this case. The summary of what
you asked:
$ dd if=/dev/zero of=./zblah count=1 bs=256k
1+0 records in
1+0 reco
I have a recovery situation related to:
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [1-1] LOG: database
system was shut down at 2007-10-13 23:04:54 PDT
Oct 13 23:04:58 66-162-145-116 postgres[16955]: [2-1] LOG: checkpoint
record is at F0/E21C
Oct 13 23:04:58 66-162-145-116 postgres[169
Is there a 'generally accepted' best practice for enabling a single
postgres instance to listen for client connections on more than one
ip/port combination?
As far as I can tell, the 'listen_addresses' and 'port' configuration
variables can only accommodate single values:
listen_addresses = 127.
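As a hedged aside: in the 8.x releases the setting is 'listen_addresses'
(plural), and it does accept a comma-separated list of addresses, for example:

# postgresql.conf sketch; the second address is illustrative
listen_addresses = 'localhost, 192.168.1.10'   # change requires a restart
port = 5432                                    # one port per instance

The port, on the other hand, really is a single value per instance, so
listening on multiple ports generally means running a second instance or
putting a proxy in front of the server.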
I agree that this is a bug in JasperReports. I've been stepping through
their code to determine where the parameter type is set to
'java.lang.String', but have not yet figured out how their Java API will
allow me to override that with 'java.lang.Integer' or something more
appropriate.
If I figu
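One hedged workaround in the meantime, assuming the report's SQL text can be
edited (the table and column names here are illustrative): cast the
string-typed parameter on the SQL side so the comparison is done as integers.

-- Sketch only; '?' is the JDBC parameter placeholder.
select value
  from city_summary
 where zip_code = cast(? as integer);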
Lane wrote:
"Jason L. Buberel" <[EMAIL PROTECTED]> writes:
In my syslog output, I see entries indicating that the
JDBC-driver-originated queries on a table named 'city_summary' are taking
upwards of 300 seconds:
Oct 1 18:27:47 srv3 postgres-8.2[1625]: [12-1]
LO
I'm hoping that someone on the list can help me understand an apparent
discrepancy in the performance information that I'm collecting on a
particularly troublesome query.
The configuration: pg-8.2.4 on RHEL4. log_min_duration_statement = 1m.
In my syslog output, I see entries indicating that t
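A hedged suggestion for the next step, with purely illustrative literals: run
the statement by hand under EXPLAIN ANALYZE so the planner's row estimates can
be compared against the actual run times showing up in syslog.

-- Sketch only: substitute the real predicate values from the logged query.
explain analyze
select * from city_summary
 where state = 'CA' and city = 'Los Altos';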
For recent postgres releases, is there any effective difference
(performance/memory/io) between:
create temp table foo as select * from bar where bar.date > '2007-01-01';
copy foo to '/tmp/bar.out';
drop table foo;
and this:
copy ( select * from bar where bar.date > '2007-01-01' ) to '/tmp/ba
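For reference, a minimal sketch of the single-statement form (supported as of
8.2; the output path here is illustrative), which streams the rows straight to
the file without ever materializing a temporary table:

-- Sketch only: nothing is created or dropped.
copy (select * from bar where bar.date > '2007-01-01')
  to '/tmp/bar_direct.out';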
1.2M records into an indexed table:
- pg_bulkload: 5m 29s
- copy to: 53m 20s
These results were obtained using pg-8.2.4 with pg_bulkload-2.2.0.
-jason
hubert depesz lubaczewski wrote:
On Mon, Sep 10, 2007 at 05:06:35PM -0700, Jason L. Buberel wrote:
I am considering moving to date-based
When loading very large data exports (> 1 million records) I have found
it necessary to use the following sequence to achieve even reasonable
import performance:
1. Drop all indices on the recipient table
2. Use "copy recipient_table from '/tmp/input.file';"
3. Recreate all indices on the recip
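A minimal sketch of that sequence, assuming a single illustrative index named
recipient_table_date_idx on the date column:

-- Sketch only: the index name is a placeholder; adjust for the real schema.
drop index recipient_table_date_idx;
copy recipient_table from '/tmp/input.file';
create index recipient_table_date_idx on recipient_table (date);
analyze recipient_table;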
to effectively
'roll back' the database state to what it was just prior to the accident.
Thanks again for all the helpful comments and clarifications. I am now a
more clueful person as a result ;)
-jason
Erik Jones wrote:
On Jul 2, 2007, at 11:58 PM, Jason L. Buberel wrote:
I am no
7; is and instead to step back to the selected xid and make
that the new version of 'now'?
-jason
Jason L. Buberel wrote:
I am now learning that fact, but recall the original scenario that I
am trying to mimic:
1. Person accidentally deletes contents of important table.
2. Admin (me) w
emove from pg_xlog all of the log files containing transactions that
come after the selected xid?
- Other?
-jason
Tom Lane wrote:
"Jason L. Buberel" <[EMAIL PROTECTED]> writes:
## stopped and started postgres, following syslog output:
You seem to have omitted all the inte
| 552 | 258460 | 2007-06-29 | foobing
999 | 552 | 258460 | 2007-06-29 | foobing
Which is the most recent transaction update.
-jason
Jason L. Buberel wrote:
I now have a working xlogdump, which has allowed me to put together
the following
258460 | 2007-06-29 | help me
999 | 552 | 258460 | 2007-06-29 | help me
999 | 552 | 258460 | 2007-06-29 | help me
So now can someone tell me what I'm doing incorrectly :) ?
-jason
Simon Riggs wrote:
On Mo
record with zero length at F/7E0DDAA8
LOG: redo is not required
LOG: archive recovery complete
LOG: database system is ready
-jason
Jason L. Buberel wrote:
Harrumph -
I downloaded the latest xlogdump source, and built/installed it
against my 8.2.4 source tree. When I execute it however, I a
Harrumph -
I downloaded the latest xlogdump source, and built/installed it against
my 8.2.4 source tree. When I execute it however, I am informed that all
of my WAL files (either the 'active' copies in pg_xlog or the 'archived'
copies in my /pgdata/archive_logs dir) appear to be malformed:
$
with it a bit more and see. I just want to know what to do in
the future should a real emergency like this occur.
Thanks,
jason
Simon Riggs wrote:
On Sun, 2007-07-01 at 21:41 -0700, Jason L. Buberel wrote:
Your example transactions are so large that going back 15 minutes is not
eno
I am trying to learn/practice the administrative steps that would need
to be taken in a 'fat finger' scenario, and I am running into problems.
I am trying to use 'recovery.conf' to set the database state to about 15
minutes ago in order to recover from accidentally deleting important
data. Howe
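For anyone following along, a minimal recovery.conf sketch for that kind of
point-in-time recovery; the archive path and target timestamp are illustrative
and would need to match the real archive_command setup:

# Sketch only: adjust restore_command and the target time for your system.
restore_command = 'cp /pgdata/archive_logs/%f %p'
recovery_target_time = '2007-07-01 21:30:00 PDT'
recovery_target_inclusive = 'false'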
Thanks for taking a look Tom:
I am running postgres 8.1.4 on RedHat (CentOS) v4.0. Here is the
description of the purchase_record table (somewhat abbreviated with
uninvolved columns omitted):
# \d purchase_record
Table "public.purchase_record"
Column