> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvaging of dead tuples".
>
> That's impossible on its face, except for the special case where the
> same transaction
> The real issue with any such scheme is that you are putting maintenance
> costs into the critical paths of foreground processes that are executing
> user queries. I think that one of the primary advantages of the
> Postgres storage design is that we keep that work outside the critical
> path and
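Tom's point about keeping maintenance out of the critical path can be sketched with a toy model (plain Python; all names here are invented for illustration, nothing is taken from the PostgreSQL source): the foreground update only marks the old version dead and appends the new one, while a separate vacuum pass does the batch reclamation.

```python
# Toy model of deferred maintenance: UPDATE does no reclamation work;
# a background VACUUM-style pass cleans up dead versions in batch.

class Table:
    def __init__(self):
        self.tuples = []          # (value, live) pairs

    def update(self, idx, value):
        # Foreground path: mark the old version dead, append the new
        # one -- no reclamation in the critical path.
        self.tuples[idx] = (self.tuples[idx][0], False)
        self.tuples.append((value, True))
        return len(self.tuples) - 1

    def vacuum(self):
        # Background path: reclaim all dead versions in one pass.
        before = len(self.tuples)
        self.tuples = [t for t in self.tuples if t[1]]
        return before - len(self.tuples)

t = Table()
t.tuples.append(("v1", True))
t.update(0, "v2")
reclaimed = t.vacuum()   # one dead version removed
```

The trade-off the thread debates is exactly this split: "on the spot salvaging" would move the `vacuum()` work into `update()`.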
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> In English, each bucket defines a specific time period, and no two
> buckets can overlap (though there are no constraints defined to actually
> prevent that). So in reality, each row in page_log.log will in fact
> only match one row in bucket (at least
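A minimal recreation of that bucket join, using Python's sqlite3 so it is self-contained; the table and column names are assumptions loosely based on the quoted query, and the half-open [start, end) ranges stand in for whatever the real bucket bounds were. As long as no two buckets overlap, each log row joins to exactly one bucket.

```python
import sqlite3

# Hypothetical recreation of the rrs.bucket / page_log.log join:
# buckets are non-overlapping half-open time ranges, so each log
# timestamp falls into exactly one bucket.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bucket (bucket_id INTEGER, start_ts INTEGER, end_ts INTEGER);
    CREATE TABLE log (log_id INTEGER, ts INTEGER);
    INSERT INTO bucket VALUES (1, 0, 100), (2, 100, 200);
    INSERT INTO log VALUES (10, 50), (11, 150);
""")
rows = con.execute("""
    SELECT b.bucket_id, s.log_id
      FROM bucket b
      JOIN log s ON s.ts >= b.start_ts AND s.ts < b.end_ts
     ORDER BY s.log_id
""").fetchall()
# each log row matched exactly one bucket: [(1, 10), (2, 11)]
```

Note that nothing here enforces non-overlap; as the post says, that invariant is only by convention unless a constraint is added.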
Tatsuo,
> I'm not clear what "pgPool only needs to monitor "update switching" by
> *connection* not by *table*" means. In your example:
> > (1) 00:00 User A updates "My Profile"
> > (2) 00:01 "My Profile" UPDATE finishes executing.
> > (3) 00:02 User A sees "My Profile" re-displayed
> > (6) 00:
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
> There was some discussion in Toronto this week about storing bitmaps
> that would tell VACUUM whether or not there was any need to visit
> individual pages of each table. Getting rid of useless scans through
> not-recently-changed areas o
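The bitmap idea can be sketched as a per-page dirty bitmap (a toy model with invented names, not the visibility map PostgreSQL later shipped in 8.4): writers set a bit whenever they dirty a heap page, and VACUUM visits only the pages whose bit is set, clearing each bit as it goes, so unchanged areas of the table are never scanned.

```python
# Toy per-page bitmap: one bit per heap page. Writers set the bit;
# VACUUM scans only set pages and clears the bits afterwards.

N_PAGES = 8
needs_vacuum = bytearray(1)          # one bit per page, 8 pages total

def mark_dirty(page):
    needs_vacuum[page // 8] |= 1 << (page % 8)

def vacuum_pages():
    visited = []
    for page in range(N_PAGES):
        if needs_vacuum[page // 8] & (1 << (page % 8)):
            visited.append(page)     # reclaim dead tuples on this page
            needs_vacuum[page // 8] &= ~(1 << (page % 8))
    return visited

mark_dirty(2)
mark_dirty(5)
visited = vacuum_pages()   # only pages 2 and 5 are scanned
```

A second vacuum pass finds nothing to do, which is the whole point for large, mostly-static tables.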
On Sat, Jan 22, 2005 at 10:18:00PM -0500, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > (SELECT b.bucket_id AS rrs_bucket_id, s.*
> > FROM rrs.bucket b
> > JOIN page_log.log s
> > ON (
> >
For reference, here's the discussion about this that took place on
hackers: http://lnk.nu/archives.postgresql.org/142.php
On Sun, Jan 23, 2005 at 01:16:20AM -0500, Christopher Browne wrote:
> A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (Greg Stark)
> wrote:
> > Dawid Kuroczko <
Simon Riggs <[EMAIL PROTECTED]> writes:
> Changing the idea slightly might be better: if a row update would cause
> a block split, then if there is more than one row version then we vacuum
> the whole block first, then re-attempt the update.
"Block split"? I think you are confusing tables with indexes.
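Simon's proposal, as I read it, sketched as a toy page model (capacities and tuple layout are invented for illustration): if the new row version would not fit on its page and the page holds more than one row version, compact just that page first, then retry the update in place.

```python
# Toy model of "vacuum the page, then retry the update": a page is a
# list of (value, live) tuples with a fixed capacity.

PAGE_CAPACITY = 4

def update_on_page(page, old_idx, new_value):
    """Mark the old version dead; if the page is full but holds dead
    versions, compact only this page before placing the new version.
    Returns True if the new version fit on the same page."""
    page[old_idx] = (page[old_idx][0], False)     # old version dies
    if len(page) >= PAGE_CAPACITY and any(not live for _, live in page):
        page[:] = [t for t in page if t[1]]       # vacuum this page only
    if len(page) < PAGE_CAPACITY:
        page.append((new_value, True))
        return True
    return False                                  # caller must go off-page

page = [("a", True), ("b", False), ("c", False), ("d", True)]
fit = update_on_page(page, 0, "a2")   # page compacted, update fits
```

This puts reclamation work into the foreground UPDATE path, which is precisely the cost Tom objects to upthread.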
On Sat, 2005-01-22 at 16:10 -0500, Tom Lane wrote:
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvaging of dead tuples".
>
> That's impossible on its face, except for the special case where the same transaction