Alvaro Herrera wrote:
It is a Postgres limitation as well. We _could_ make the server "really
start the transaction" at the point the first query is issued instead of
when the BEGIN is issued.
And also, really finish the transaction right after the last
statement is executed, instead of waiting until
On Friday 15 April 2005 18:34, Markus Schaber wrote:
> Hi, Andreas,
>
> Andreas Joseph Krogh schrieb:
> >>Could you try the latest postgresql-8.0-311.jdbcX.jar? The current
> >>versions should solve the idle in transaction problem, the previous ones
> >>tended to issue BEGIN; just after every COMMIT
Hi, Andreas,
Andreas Joseph Krogh schrieb:
>>Could you try the latest postgresql-8.0-311.jdbcX.jar? The current
>>versions should solve the idle in transaction problem, the previous ones
>>tended to issue BEGIN; just after every COMMIT, so there's always an
>>open transaction.
> I could, but is it
Hi, Andrew,
Andrew Sullivan schrieb:
> Nope. That's a problem with your pool software. It's no doubt
> issuing "BEGIN;" as soon as it connects.
This problem may as well be caused by some versions of the postgresql
jdbc driver, no need to blame the pool software. This is fixed with the
current versions.
On Friday 15 April 2005 18:22, Markus Schaber wrote:
> Hi, Andreas,
>
> Andreas Joseph Krogh schrieb:
> >>>So, what you're suggesting is that a restart of the webapp should make
> >>>vacuum able to delete those dead rows?
> >>
> >>Yes, but that'll only solve your problem for now. You'll have the
>
Hi, Andreas,
Andreas Joseph Krogh schrieb:
>>>So, what you're suggesting is that a restart of the webapp should make
>>>vacuum able to delete those dead rows?
>>Yes, but that'll only solve your problem for now. You'll have the
>>problem again soon. What's keeping open the transaction?
> Don't know
Greg Stark <[EMAIL PROTECTED]> writes:
> So vacuum doesn't really know what tuples are actually visible to the
> snapshots actually taken by a transaction? It's making the conservative
> estimate that a snapshot could have been taken as early as the start of the
> transaction even if no snapshot was
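Greg's point above -- that vacuum conservatively assumes a snapshot could date from the transaction's start -- can be sketched with a toy model of the xmin horizon. This is plain Python for illustration, not PostgreSQL's actual code, and the transaction ids are made up:

```python
# Toy model: VACUUM may only reclaim a dead tuple if the transaction that
# deleted it is older than the oldest transaction still open -- and an
# "idle in transaction" backend counts as open.

def removable(dead_tuples, open_xids):
    """Return ids of dead tuples VACUUM could reclaim.

    dead_tuples: list of (tuple_id, xmax) where xmax is the deleting xid.
    open_xids:   xids of transactions still open.
    """
    horizon = min(open_xids) if open_xids else float("inf")
    return [t for t, xmax in dead_tuples if xmax < horizon]

# Five dead row versions, deleted by xids 100..104. One forgotten
# transaction (xid 100) pins them all:
dead = [(i, 100 + i) for i in range(5)]
print(removable(dead, open_xids=[100]))  # -> [] : nothing reclaimable
print(removable(dead, open_xids=[]))     # -> [0, 1, 2, 3, 4]
```

This is why the 298-row table below can carry 160057 row versions: one long-open transaction moves the horizon back far enough that none of the dead versions qualify.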
Tom Lane <[EMAIL PROTECTED]> writes:
> We have to start the transaction no later than event #2 since there has
> to be something to hold the lock. But it'd be easy enough to decouple
> this from BEGIN, and it'd be good enough to solve the "COMMIT;BEGIN"
> problem.
Oh I think I finally figured out
Greg Stark <[EMAIL PROTECTED]> writes:
> This is presumably because of the long-standing issue that Postgres takes the
> snapshot as soon as the BEGIN is issued.
No, we don't set the snapshot until the first DML query is issued. This
is critical for serializable transactions: you have to be able
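Tom's point -- that the snapshot is taken at the first query, not at BEGIN -- can be illustrated with a toy visibility check. This is a sketch only: integers stand in for commit ordering, and it models none of the real MVCC machinery:

```python
# Rows visible to a snapshot are those committed strictly before it was taken.
# Taking the snapshot at the first query (later) instead of at BEGIN lets the
# transaction see work committed in between.

def snapshot_view(rows, snapshot_time):
    """rows: list of (row_id, committed_at). Return ids visible at snapshot."""
    return [r for r, committed_at in rows if committed_at < snapshot_time]

rows = [("old", 1), ("committed_after_begin", 5)]
begin_at, first_query_at = 3, 8

print(snapshot_view(rows, begin_at))        # -> ['old']
print(snapshot_view(rows, first_query_at))  # -> ['old', 'committed_after_begin']
```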
Andreas Joseph Krogh <[EMAIL PROTECTED]> writes:
> On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
>> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
>>> It is a Postgres limitation as well. We _could_ make the server "really
>>> start the transaction" at the point the firs
On Thursday 14 April 2005 22:44, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> > Is it safe to issue a "kill -2 ", will the
> > connection-pool reconnect?
>
> The kill -2 will definitely work to shut the connections down
> cleanly. Actually, I should
On Thu, Apr 14, 2005 at 09:57:03PM +0200, Andreas Joseph Krogh wrote:
> Is it safe to issue a "kill -2 ", will the
> connection-pool reconnect?
The kill -2 will definitely work to shut the connections down
cleanly. Actually, I should put that more precisely: you should send
SIGINT. I don't know
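The mechanism being discussed (kill -2 is SIGINT) can be demonstrated on a throwaway child process -- not a Postgres backend -- with the Python standard library (POSIX only):

```python
# Sending SIGINT ("kill -2") interrupts a blocked process cleanly.
# Here the victim is a disposable Python child, purely to show the signal.
import os
import signal
import subprocess
import sys
import time

child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(0.5)                     # give the child time to start sleeping
os.kill(child.pid, signal.SIGINT)   # same signal as 'kill -2 <pid>'
exited_abnormally = child.wait() != 0
print(exited_abnormally)            # -> True: the child was interrupted
```

Whether a pool reconnects after its backend is interrupted this way depends entirely on the pool, which is the open question in the thread.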
On Thursday 14 April 2005 21:08, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
> > It is a Postgres limitation as well. We _could_ make the server "really
> > start the transaction" at the point the first query is issued instead of
> > when the BEGIN is issued.
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> >
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum
> > able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now
On Thu, Apr 14, 2005 at 03:04:47PM -0400, Alvaro Herrera wrote:
>
> It is a Postgres limitation as well. We _could_ make the server "really
> start the transaction" at the point the first query is issued instead of
> when the BEGIN is issued. In fact this problem would go away if we did
> that.
On Thu, Apr 14, 2005 at 02:56:23PM -0400, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> > Any hints on how I can find out what's keeping the connection idle in a
> > transaction? I realize now that I should probably ask that question on the
> > pgsql-jdbc-list:-)
On Thu, Apr 14, 2005 at 07:54:19PM +0200, Andreas Joseph Krogh wrote:
> Any hints on how I can find out what's keeping the connection idle in a
> transaction? I realize now that I should probably ask that question on the
> pgsql-jdbc-list:-)
Nope. That's a problem with your pool software. It's no doubt issuing "BEGIN;" as soon as it connects.
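The question of what is holding the transaction open can be answered from pg_stat_activity. A monitoring script could pick the offenders out of its rows like the sketch below -- the procpid/current_query column names match the 8.0-era view (command strings must be enabled with stats_command_string), and the sample rows are fabricated:

```python
# Sketch: flag backends stuck "idle in transaction" from pg_stat_activity rows.

def idle_in_transaction(rows):
    """rows: dicts with 8.0-style 'procpid' and 'current_query' keys.
    Return pids whose last command finished but whose transaction is open."""
    return [r["procpid"] for r in rows
            if r["current_query"] == "<IDLE> in transaction"]

sample = [
    {"procpid": 4711, "current_query": "<IDLE>"},
    {"procpid": 4712, "current_query": "<IDLE> in transaction"},
    {"procpid": 4713, "current_query": "SELECT 1;"},
]
print(idle_in_transaction(sample))  # -> [4712]
```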
Andreas Joseph Krogh wrote:
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
So - do you have an application/client that has had a single transaction
open for a long time? A very long time, unless you update this table a lot.
Hm, it's a web-app, and I don't hold a transaction open for a long time
On Thursday 14 April 2005 19:33, Andrew Sullivan wrote:
> On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
> > So, what you're suggesting is that a restart of the webapp should make
> > vacuum able to delete those dead rows?
>
> Yes, but that'll only solve your problem for now.
On Thu, Apr 14, 2005 at 07:21:38PM +0200, Andreas Joseph Krogh wrote:
>
> So, what you're suggesting is that a restart of the webapp should make vacuum
> able to delete those dead rows?
Yes, but that'll only solve your problem for now. You'll have the
problem again soon. What's keeping open the transaction?
On Thursday 14 April 2005 19:12, Richard Huxton wrote:
> Andreas Joseph Krogh wrote:
> > Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN
> > ANALYZE shows that PG thinks it has as much as 160057 rows AFAICT from
> > the EXPLAIN-output.
> >
> >
> > I've just run a "VACUUM FULL VERBOSE"
Andreas Joseph Krogh wrote:
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
shows that PG thinks it has as much as 160057 rows AFAICT from the
EXPLAIN-output.
I've just run a "VACUUM FULL VERBOSE" on the whole database, which gave the
following output on the relevant
On Thu, Apr 14, 2005 at 06:38:14PM +0200, Andreas Joseph Krogh wrote:
> Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
> shows that PG thinks it has as much as 160057 rows AFAICT from the
> EXPLAIN-output.
It does have 160057 rows:
> INFO: vacuuming "public.onp_web_index"
Hi everybody, I have a table which only has 298 rows in it, but EXPLAIN ANALYZE
shows that PG thinks it has as much as 160057 rows AFAICT from the
EXPLAIN-output.
I'm using PG-8.0.0
Here is the schema:
nbeweb=> \d onp_web_index;
Table "public.onp_web_index"
Column | Type | Modifiers