The only issue I have with autovacuum is that I have to briefly
stop/restart postgres every couple of days, which kills autovacuum, and
it has no memory of previous work done. I work with several databases
with partitioned tables having high daily volume. Dropping partitioned
tables locks o
Sriram Dandapani wrote:
>
> If I were to specify in the pg_autovacuum catalog that certain high
> volume partitioned tables (that get dropped daily) be ignored, then
> when autovacuum finishes, will it update the transaction id wraparound
> counter (this way, I can get autovacuum to finish quickly)
"Paul B. Anderson" <[EMAIL PROTECTED]> writes:
> I have a large stored procedure with 16 arrays, mostly varchar, each
> with dimension 3000. The procedure works fine on a small number of rows
> used from these arrays but gets the following error for large rowsets:
> ERROR: invalid array subs
Hi Jim,

Thanks for your info. Then in PostgreSQL 7.3, how can I use the WAL log to recover?

Michael

"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
On Thu, Sep 28, 2006 at 10:16:19PM +0800, Chan Michael wrote:
> Hi,
>
> I am new to PostgreSQL and now want to know how I can recover from a database crash.
>
> I know
You might want to try 8.2 beta then, as support for NULLs in arrays was
just added. Testing of that magnitude would be great!
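A quick sanity check of the new behaviour (a sketch, assuming an 8.2 beta server; earlier releases reject NULL array elements outright):

```sql
-- On 8.2, NULL elements are legal inside array values:
SELECT ARRAY['a', NULL, 'c'];                          -- displays as {a,NULL,c}
SELECT (ARRAY['a', NULL, 'c'])[2] IS NULL AS is_null;  -- t
```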
On Thu, Sep 28, 2006 at 09:08:19PM -0400, Paul B. Anderson wrote:
> There are definitely nulls in there, and in the fields where the error
> is signaled. I missed in my r
There are definitely nulls in there, and in the fields where the
error is signaled. I missed in my reading that they are not allowed.
Thanks.
Paul
Jim C. Nasby wrote:
On Thu, Sep 28, 2006 at 06:13:04PM -0400, Paul B. Anderson wrote:
I'm running RHEL 3.0 and postgreSQL 8.1.4, c
On Thu, Sep 28, 2006 at 06:13:04PM -0400, Paul B. Anderson wrote:
> I'm running RHEL 3.0 and postgreSQL 8.1.4, compiled from source.
Have you tried enforcing that there are no NULLs in the arrays? That code
is pretty new (in fact, I thought it was only in 8.2...), so it's
possible that therein lies
On Thu, Sep 28, 2006 at 02:51:24PM -0700, Sriram Dandapani wrote:
> I have a strange problem with the number of transactions generated
> within my application. I use jdbc batching to submit insert statements
> from a jboss app server to postgres 8.1.2.
>
> A batch can have from 100 to 3000 insert
On Thu, Sep 28, 2006 at 11:58:12AM -0700, Talha Khan wrote:
> The datfrozenxid column of a pg_database row is updated at the completion of any
> database-wide vacuum operation. The value stored here is the cutoff xid used
> by the vacuum operation; all xids older than this cutoff xid are replaced by
> the
Sriram Dandapani wrote:
Autocommit is set to off once a connection is obtained from the jboss
pool. It is turned back on when it is closed (so that any idle
transactions are committed).
Don't know what to suggest then -- if autocommit is off then the JDBC
driver should be sending BEGIN before
FYI, I use the postgres 8.1-404 JDBC driver.
-Original Message-
From: Oliver Jowett [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 28, 2006 3:24 PM
To: Sriram Dandapani
Cc: pgsql-jdbc@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [JDBC] number of transactions doubling
Sriram
Autocommit is set to off once a connection is obtained from the jboss
pool. It is turned back on when it is closed (so that any idle
transactions are committed).
I had the following 2 lines in my postgres-ds.xml, which I commented out.
Even after this, I am seeing twice the number of transactions.
Sriram Dandapani wrote:
The target table has triggers that route data to the appropriate tables. The
tables to which data is routed have check constraints that do further
inserts. (All of this happens in one JDBC transaction.)
I expect JDBC batching to generate fewer transactions depending on
batch
I'm running RHEL 3.0 and postgreSQL 8.1.4, compiled from source.
Paul
I have a large stored procedure with 16 arrays, mostly varchar, each
with dimension 3000. The procedure works fine on a small number of rows
used from these arrays but gets the following error for large rowsets:
ERROR: invalid array subscripts
CONTEXT: PL/pgSQL function "name_search" line
If I were to specify in the pg_autovacuum catalog that
certain high volume partitioned tables (that get dropped daily) be ignored, then
when autovacuum finishes, will it update the transaction id wraparound counter
(this way, I can get autovacuum to finish quickly)
OR
Will I still need
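For reference, a per-table opt-out on 8.1 looks roughly like this (the table name is hypothetical, and -1 means "use the default"; note that even a disabled entry does not exempt a table from the anti-wraparound vacuum):

```sql
-- Rough sketch: tell 8.1 autovacuum to skip one daily partition
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit)
VALUES ('public.daily_part'::regclass, false, -1, -1, -1, -1, -1, -1);
```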
Hi
I have a strange problem with the number of transactions
generated within my application. I use jdbc batching to submit insert
statements from a jboss app server to postgres 8.1.2.
A batch can have from 100 to 3000 inserts.
I noticed the following:
When I run the following
Interesting... age(xid) isn't documented anywhere.
No, vacuum shouldn't be generating a lot of xid's. My guess is that your
generating process actually does 2 transactions per row.
On Thu, Sep 28, 2006 at 11:16:24AM -0700, Sriram Dandapani wrote:
> When I run this query
>
>
>
> fwdb01=# selec
On Thu, Sep 28, 2006 at 03:26:36PM +, Mathias Laurent wrote:
> If I do :
> Session 1:
> decibel=# begin;
Does decibel have some meaning in some language other than English?
> BEGIN
> decibel=# select * from i where i=1 for update nowait;
> i
> ---
> 1
> (1 row)
>
> decibel=# begin;
> BEGIN
>
On Thu, 2006-09-28 at 12:32 -0400, Bruce Momjian wrote:
> Lane Van Ingen wrote:
> > Forgot about 'pg_ctl status'. That will work fine for my needs.
> >
> > Ray Stell mentioned it would be helpful to see a description of all the
> > things that go on from start to finish, in general of course. I a
On Thu, Sep 28, 2006 at 10:16:19PM +0800, Chan Michael wrote:
> Hi,
>
> I am new to PostgreSQL and now want to know how can I recover from a database
> crash.
>
> I know pg_dump and pg_restore but with pg_dump all transactions between every
> pg_dump will be lost. I found WAL in the doc and see
Hi sriram,
The datfrozenxid column of a pg_database row is updated at the completion of any database-wide vacuum operation. The value stored here is the cutoff xid used by the vacuum operation; all xids older than this cutoff xid are replaced by this xid, so I think the behaviour being shown by your dat
When I run this query
fwdb01=# select current_timestamp, datname, age(datfrozenxid)
fwdb01-# from pg_database;
              now              | datname  |    age
-------------------------------+----------+------------
 2006-09-28 18:04:24.489935+00 | postgres | 1087834006
 2006-09-28 18:04:
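A related query (a sketch, assuming the roughly 2 billion xid horizon before wraparound) to see how much headroom each database has left before forced wraparound vacuuming:

```sql
SELECT datname,
       age(datfrozenxid) AS xid_age,
       2000000000 - age(datfrozenxid) AS xids_remaining  -- rough headroom
FROM pg_database
ORDER BY xid_age DESC;
```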
Lane Van Ingen wrote:
> Forgot about 'pg_ctl status'. That will work fine for my needs.
>
> Ray Stell mentioned it would be helpful to see a description of all the
> things that go on from start to finish, in general of course. I agree. It
> helps to know the logic going on behind the scenes when
If I do :
Session 1:
decibel=# begin;
BEGIN
decibel=# select * from i where i=1 for update nowait;
i
---
1
(1 row)
decibel=# begin;
BEGIN
decibel=# select * from i where i=2 for update nowait;
i
---
2
(1 row)
Session 2 :
decibel=# select * from i where i=1 for update nowait;
ERROR: could not o
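For contrast: without NOWAIT, session 2 would not error out; it would simply block on session 1's row lock. On 8.1 the NOWAIT failure reads roughly "ERROR: could not obtain lock on row in relation "i"".

```sql
-- Session 2, without NOWAIT: this blocks until session 1 commits or aborts
SELECT * FROM i WHERE i = 1 FOR UPDATE;
```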
What sort of lock does TRUNCATE require? If it is not an access exclusive lock, what locks would block the truncate?
Thanks,
Chris
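TRUNCATE does take ACCESS EXCLUSIVE, the strongest table lock, so it conflicts with every other lock mode, including the ACCESS SHARE held by a plain SELECT. A sketch for spotting sessions holding locks on the table (table name hypothetical):

```sql
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'my_table';  -- replace with the table you want to truncate
```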
Hi,
I am new to PostgreSQL and now want to know how I can recover from a database crash.
I know pg_dump and pg_restore, but with pg_dump all transactions between every pg_dump will be lost. I found WAL in the docs and it seems that with it I can achieve point-in-time recovery. But I don't know the exact steps