Alvaro Herrera wrote:
This seems a good idea. Possibly pushing the betas more aggressively to
current users would get them tested not only by PG hackers ...
Isn't this the purpose of the new alpha releases, at least to some extent?
--
Sent via pgsql-hackers mailing list
Tom Lane wrote:
However, my comment above was too optimistic, because in an insert-only
scenario autovac would in fact not trigger VACUUM at all, only ANALYZE.
So it seems like we do indeed want to rejigger autovac's rules a bit
to account for the possibility of wanting to apply vacuum to get
Aidan Van Dyk wrote:
* Greg Stark [EMAIL PROTECTED] [081117 03:54]:
I thought of saying that too but it doesn't really solve the problem.
Think of what happens if someone sets a hint bit on a dirty page.
If the page is dirty from a real change, then it has a WAL backup block
record already,
Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
I think that's pretty seriously undesirable. It's not at all
uncommon for databases to stick around for a very long time and then
jump ahead many versions. I don't think we want to tell people they
can't do that.
Of course they
Kevin Grittner wrote:
An idea for a possible enhancement to PostgreSQL: allow creation of a
temporary table without generating any disk I/O. (Creating and
dropping a three-column temporary table within a database transaction
currently generates about 150 disk writes).
If some circumstances
Tom Lane wrote:
Andrew Chernow [EMAIL PROTECTED] writes:
Be careful. From LockFileEx docs:
However, the time it takes for the operating system to unlock these
locks depends upon available system resources. Therefore, it is
recommended that your process explicitly unlock all files it has
Josh Berkus wrote:
For the September commitfest, 29 patches were applied (one to pgFoundry)
and 18 patches were sent back for more work.
More importantly, six *new* reviewers completed reviews of various
patches: Abbas Butt, Alex Hunsaker, Markus Wanner, Ibrar Ahmed, Ryan
Bradetich and
Joshua D. Drake wrote:
Merlin Moncure wrote:
Well, there doesn't seem to be a TODO for partial/restartable vacuums,
which were mentioned upthread. This is a really desirable feature for
big databases and removes one of the reasons to partition large
tables.
I would agree that partial vacuums
Tom Lane wrote:
Matthew T. O'Connor [EMAIL PROTECTED] writes:
I think everyone agrees that partial vacuums would be useful / *A Good
Thing* but it's the implementation that is the issue.
I'm not sure how important it will really be once we have support for
dead-space-map-driven vacuum
Tom Lane wrote:
Greg Sabino Mullane [EMAIL PROTECTED] writes:
Code outside of core, is, in reality, less reviewed, less likely to work
well with recent PG versions, and more likely to cause problems. It's also
less likely to be found by people, less likely to be used by people, and
less likely
Jonah H. Harris wrote:
On Mon, Jul 21, 2008 at 10:19 PM, Tom Lane [EMAIL PROTECTED] wrote:
I don't find this a compelling argument, at least not without proof that
the various vacuum-improvement projects already on the radar screen
(DSM-driven vacuum, etc) aren't going to fix your problem.
Tom Lane wrote:
We might have to rearrange the logic a bit to make that happen (I'm not
sure what order things get tested in), but a log message does seem like
a good idea. I'd go for logging anytime an orphaned table is seen,
and dropping once it's past the anti-wraparound horizon.
Is there
Hans-Juergen Schoenig wrote:
i suggest to introduce a --with-long-xids flag which would give me 62 /
64 bit XIDs per vacuum on the entire database.
this should be fairly easy to implement.
i am not too concerned about the size of the tuple header here - if we
waste 500 gb of storage here i am
Alex Hunsaker wrote:
In fact I
would argue -patches should go away so we don't have that split.
+1. I think the main argument for the split is to keep the large
patch emails off the hackers list, but I don't think that limit is so
high that it's a problem. People have to gzip their patches
Alex Hunsaker wrote:
A big part of my problem with the split is if there is a discussion
taking place on -hackers I want to be able to reply to the discussion
and say well, here is what I was thinking. Sending it to -patches
first waiting for it to hit the archive so I can link to it in my
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
This is an interesting idea, but I think it's attacking the wrong
problem. To me, the problem here is that an ANALYZE should not block
CREATE INDEX or certain forms of ALTER TABLE.
I doubt that that will work; in particular I'm
Tom Lane wrote:
If you insist on crafting a solution that only fixes this problem for
pg_restore's narrow usage, you'll be back revisiting it before beta1
has been out a month.
I don't know much about what is involved in crafting these solutions,
but it seems we're close to beta and probably
Gregory Stark wrote:
I'm having trouble following what's going on with autovacuum and I'm finding
the existing logging insufficient. In particular that it's only logging vacuum
runs *after* the vacuum finishes makes it hard to see what vacuums are running
at any given time. Also, I want to see
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
Well, if a table has 10 rows, and we keep the current threshold of 1000
rows, then this table must have 1002 dead tuples (99% dead tuples, 1002
dead + 10 live) before being vacuumed. This seems wasteful because
there are 500 dead tuples
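The arithmetic behind this snippet can be sketched as follows. This is a minimal illustration, assuming a base threshold of 1000 and a scale factor of 0.2 (which reproduces the 1002 figure quoted above); the real values are configurable GUC settings and their defaults vary by release.

```python
# Sketch of the autovacuum vacuum-trigger arithmetic discussed above.
# Assumed values for illustration: base threshold 1000, scale factor 0.2.

def vacuum_threshold(live_tuples, base=1000, scale_factor=0.2):
    """Dead tuples needed before autovacuum triggers a VACUUM."""
    return base + scale_factor * live_tuples

# The 10-row table from the example: 1002 dead tuples are required,
# i.e. the table is ~99% dead before it is ever vacuumed.
dead_needed = vacuum_threshold(10)
print(dead_needed)                       # 1002.0
print(dead_needed / (dead_needed + 10))  # ~0.990
```

This is why the thread argues the base threshold is wasteful for small tables: the threshold term dominates, so tiny tables must become almost entirely dead space before qualifying.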
Alvaro Herrera wrote:
Jim C. Nasby wrote:
FWIW, I normally go with the 8.2 defaults, though I could see dropping
vacuum_scale_factor down to 0.1 or 0.15. I also think the thresholds
could be decreased further, maybe divide by 10.
How about pushing thresholds all the way down to 0?
As long
Andrew Dunstan wrote:
The situation with this patch is that I now have it in a state where I
think it could be applied, but there is one blocker, namely that we do
not have a way of preventing the interleaving of log messages from
different backends, which leads to garbled logs. This is an
Tom Lane wrote:
Matthew T. O'Connor [EMAIL PROTECTED] writes:
How about creating a log-writing-process? Postmaster could write to the
log files directly until the log-writer is up and running, then all
processes can send their log output through the log-writer.
We *have* a log-writing
Alvaro Herrera wrote:
Jim C. Nasby wrote:
There *is* reason to allow setting the naptime smaller, though (or at
least there was; perhaps Alvaro's recent changes negate this need):
clusters that have a large number of databases. I've worked with folks
who are in a hosted environment and give
Alvaro Herrera wrote:
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
But this is misleading (started postmaster with good value, then edited
postgresql.conf and entered -2):
17903 LOG: received SIGHUP, reloading configuration files
17903 LOG: -2 is outside the valid range for
Tom Lane wrote:
Andrew Hammond [EMAIL PROTECTED] writes:
Hmmm... it seems to me that points new users towards not using
autovacuum, which doesn't seem like the best idea. I think it'd be
better to say that setting the naptime really high is a Bad Idea.
It seems like we should have an upper
Florian G. Pflug wrote:
Work done so far:
-
.) Don't start autovacuum and bgwriter.
Do table stats used by the planner get replicated on a PITR slave? I
assume so, but if not, you would need autovac to do analyzes.
Alvaro Herrera wrote:
Simon Riggs wrote:
On Wed, 2007-06-06 at 12:17 -0400, Matthew T. O'Connor wrote:
Florian G. Pflug wrote:
Work done so far:
-
.) Don't start autovacuum and bgwriter.
Do table stats used by the planner get replicated on a PITR slave? I
assume so
Tom Lane wrote:
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
Our documentation says
| analyze threshold = analyze base threshold
| + analyze scale factor * number of tuples
| is compared to the total number of tuples inserted, updated, or deleted
| since the last ANALYZE.
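The documented rule quoted above can be transcribed directly; the parameter values below are illustrative defaults, not authoritative ones, since the actual numbers are GUC settings.

```python
# The analyze trigger as the quoted documentation states it:
# changed tuples since last ANALYZE are compared against
# base threshold + scale factor * number of tuples.

def analyze_due(changed_tuples, total_tuples, base=500, scale_factor=0.1):
    """True when inserted+updated+deleted tuples since the last
    ANALYZE exceed the computed analyze threshold."""
    return changed_tuples > base + scale_factor * total_tuples

print(analyze_due(700, 1000))  # True: 700 > 500 + 0.1 * 1000
print(analyze_due(550, 1000))  # False
```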
Larry Rosenman wrote:
I might use that as the base then, since the hardware finishes getting here
tomorrow.
The other thing to consider is that CentOS 5 has Xen built right in, so
you should be able run VMs without VMWare on it.
Joshua D. Drake wrote:
The big thing for me, is a single document, zero clicks, that is
searchable. PDF and plain text are the only thing that give me that. If
you are really zealous you can even use Beagle (which I don't) to
preindex the PDF for you for easy searching.
Lots of projects
Bruce Momjian wrote:
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
Lots of projects publish their HTML docs in two formats: One Big HTML
file with everything; Broken up into many HTML files that link to each
other. This would allow you to have one big searchable document
My initial reaction is that this looks good to me, but still a few
comments below.
Alvaro Herrera wrote:
Here is a low-level, very detailed description of the implementation of
the autovacuum ideas we have so far.
launcher's dealing with databases
-
[ Snip ]
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
It's not clear to me why a worker cares that there is a new worker,
since the new worker is going to ignore all the tables that are already
claimed by all worker todo lists.
That seems wrong to me, since it means that new workers
Jim C. Nasby wrote:
On Tue, Feb 27, 2007 at 12:00:41AM -0300, Alvaro Herrera wrote:
Jim C. Nasby wrote:
The advantage to keying this to autovac_naptime is that it means we
don't need another GUC, but after I suggested that before I realized
that's probably not the best idea. For example, I've
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Tue, 2007-02-27 at 10:37 -0600, Jim C. Nasby wrote:
... The idea would be to give vacuum a target run time, and it
would monitor how much time it had remaining, taking into account how
long it should take to scan the indexes based on how
Alvaro Herrera wrote:
Jim C. Nasby wrote:
That's why I'm thinking it would be best to keep the maximum size of
stuff for the second worker small. It probably also makes sense to tie
it to time and not size, since the key factor is that you want it to hit
the high-update tables every X number
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
How can you determine what tables can be vacuumed within
autovacuum_naptime?
My assumption is that
pg_class.relpages * vacuum_cost_page_miss * vacuum_cost_delay = time to vacuum
This is of course not the reality, because the delay is not how
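Taken at face value, the quoted back-of-envelope estimate multiplies out as below. The thread itself immediately qualifies it as "not the reality" (the delay is applied per cost-limit batch, not per page), so this is only the poster's rough model, with assumed settings.

```python
# The quoted rough estimate: time to vacuum grows with relpages,
# throttled by the cost settings. Units are deliberately crude here,
# mirroring the snippet; a real estimate would account for the
# vacuum_cost_limit batching.

def est_vacuum_ms(relpages, cost_page_miss=10, cost_delay_ms=20):
    return relpages * cost_page_miss * cost_delay_ms

# A hypothetical 10,000-page (~80 MB) table under these assumed settings:
print(est_vacuum_ms(10_000))  # 2000000 "ms" by this crude model
```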
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
I'm not sure how pg_class.relpages is maintained but what happens to a
bloated table? For example, a 100 row table that is constantly updated
and hasn't been vacuumed in a while (say the admin disabled autovacuum
for a while), now that small
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Matthew T. O'Connor wrote:
I'm not sure it's a good idea to tie this to the vacuum cost delay
settings either, so let me as you this, how is this better than just
allowing the admin to set a new GUC variable like
Jim C. Nasby wrote:
On Mon, Feb 26, 2007 at 06:23:22PM -0500, Matthew T. O'Connor wrote:
I'm not sure how pg_class.relpages is maintained but what happens to a
bloated table? For example, a 100 row table that is constantly updated
and hasn't been vacuumed in a while (say the admin disabled
Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
The real problem is trying to set that up in such a fashion that keeps
hot tables frequently vacuumed;
Are we assuming that no single worker instance will vacuum a given table
more than once? (That's not a necessary assumption,
Tom Lane wrote:
BTW, to what extent might this whole problem be simplified if we adopt
chunk-at-a-time vacuuming (compare current discussion with Galy Lee)?
If the unit of work has a reasonable upper bound regardless of table
size, maybe the problem of big tables starving small ones goes away.
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
Tom Lane wrote:
I'm inclined to propose an even simpler algorithm in which every worker
acts alike;
That is what I'm proposing except for one difference, when you catch up
to an older worker, exit.
No, that's a bad idea, because
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
That does sounds simpler. Is chunk-at-a-time a realistic option for 8.3?
It seems fairly trivial to me to have a scheme where you do one
fill-workmem-and-scan-indexes cycle per invocation, and store the
next-heap-page-to-scan
Jim C. Nasby wrote:
On Mon, Feb 26, 2007 at 10:18:36PM -0500, Matthew T. O'Connor wrote:
Jim C. Nasby wrote:
Here is a worst case example: A DB with 6 tables all of which are highly
active and will need to be vacuumed constantly. While this is totally
hypothetical, it is how I envision
Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
The proposal to save enough state to be able to resume a vacuum at
pretty much any point in its cycle might work; we'd have to benchmark
it. With the default maintenance_work_mem of 128M it would mean writing
out 64M of state every minute
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
I'm not sure what you are saying here, are you now saying that partial
vacuum won't work for autovac? Or are you saying that saving state as
Jim is describing above won't work?
I'm saying that I don't like the idea of trying
Jim C. Nasby wrote:
On Wed, Feb 21, 2007 at 05:40:53PM -0500, Matthew T. O'Connor wrote:
My Proposal: If we require admins to identify hot tables, then:
1) Launcher fires-off a worker1 into database X.
2) worker1 deals with hot tables first, then regular tables.
3) Launcher
Jim C. Nasby wrote:
On Thu, Feb 22, 2007 at 09:32:57AM -0500, Matthew T. O'Connor wrote:
So the heuristic would be:
* Launcher fires off workers into a database at a given interval
(perhaps configurable?)
* Each worker works on tables in size order.
* If a worker ever catches up
Alvaro Herrera wrote:
Ok, scratch that :-) Another round of braindumping below.
I still think this is a solution in search of a problem. The main problem
we have right now is that hot tables can be starved from vacuum. Most
of this proposal doesn't touch that. I would like to see that
Alvaro Herrera wrote:
After staring at my previous notes for autovac scheduling, it has become
clear that the basics of it are not really going to work as specified.
So here is a more realistic plan:
[Snip Detailed Description]
How does this sound?
On first blush, I'm not sure I like this
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
On first blush, I'm not sure I like this as it doesn't directly attack
the table starvation problem, and I think it could be a net loss of speed.
VACUUM is I/O bound, as such, just sending multiple vacuum commands at a
DB isn't going to make
Alvaro Herrera wrote:
This is how I think autovacuum should change with an eye towards being
able to run multiple vacuums simultaneously:
[snip details]
Does this raise some red flags? It seems straightforward enough to me;
I'll submit a patch implementing this, so that scheduling will
Alvaro Herrera wrote:
I'd like to hear other people's opinions on Darcy Buskermolen proposal
to have a log table, on which we'd register what did we run, at what
time, how long did it last, how many tuples did it clean, etc. I feel
having it on the regular text log is useful but it's not good
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
This still seems ambiguous to me, how would I handle a maintenance
window of Weekends from Friday at 8PM though Monday morning at 6AM? My
guess from what said is:
mon   dom   dow   starttime   endtime
null  null  6     20:00       null
null  null
First, thanks for working on this. I hope to be helpful with the design
discussion and possibly some coding if I can find the time.
My initial reaction to this proposal is that it seems overly complex,
however I don't see a more elegant solution. I'm a bit concerned that
most users won't
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
Alvaro Herrera wrote:
pg_av_igroupmembers
groupid oid
month int
dom int
dow int
starttime timetz
endtime timetz
This seems to assume that the start and end time for an interval
Walter Cruz wrote:
The larger version is only hidden from everyone :)
http://people.planetpostgresql.org/mha/uploads/photo/conf/conference_group.jpg
Very cool, I was hoping someone would post this. Any chance we
Peter Eisentraut wrote:
Tom Lane wrote:
Not a solution for make installcheck,
Well, for make installcheck we don't have any control over whether
autovacuum has been turned on or off manually anyway. If you are
concerned about build farm reliability, the build farm scripts can
surely be
Jim C. Nasby wrote:
On Sat, Aug 26, 2006 at 01:32:17PM -0700, Joshua D. Drake wrote:
I am not exactly sure why we initdb at all. IMHO it would be better if
the start script just checked if there was a cluster. If not, it
wouldn't start, it would error with: You do not have a cluster, please
Joshua D. Drake wrote:
Matthew T. O'Connor wrote:
Jim C. Nasby wrote:
As Tom mentioned, it's for newbie-friendliness. While I can understand
that, I think it needs to be easy to shut that off.
I understand that, but it seems the whole problem of people
overwriting their data dir is because
Peter Eisentraut wrote:
Summarizing this thread, I see support for the following:
- autovacuum set to on by default in 8.2.
Yes.
- stats_row_level also defaults to on.
Yes.
(Perhaps stats_block_level should also default to on so it's not inconsistent,
seeing that everything else is on,
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
While there is talk of removing this all together, I think it was also
agreed that as long as these values are there, they should be reduced.
I think the defaults in 8.1 are 1000/500, I think 200/100 was suggested.
ISTM
Peter Eisentraut wrote:
Am Donnerstag, 17. August 2006 18:40 schrieb Josh Berkus:
I'm in favor of this, but do we want to turn on vacuum_delay by default
as well?
People might complain that suddenly their vacuum runs take four times as long
(or whatever). Of course, if we turn on autovacuum
ITAGAKI Takahiro wrote:
Matthew T. O'Connor matthew@zeut.net wrote:
What is this based on? That is, based on what information is it
deciding to reduce the naptime?
If there are some vacuum or analyze jobs, the naptime is shortened
(i.e, autovacuum is accelerated). And if there are no jobs
Peter Eisentraut wrote:
Is it time to turn on autovacuum by default in 8.2? I know we wanted to
be on the side of caution with 8.1, but perhaps we should evaluate the
experiences now. Comments?
Would be fine by me, but I'm curious to see what the community has to
say. A few comments:
Bruce Momjian wrote:
Matthew T. O'Connor wrote:
Would be fine by me, but I'm curious to see what the community has to
say. A few comments:
Autovacuum can cause unpredictable performance issues; that is, if it
vacuums in the middle of a busy day and people don't want that, of
course
Rod Taylor wrote:
The defaults could be a little more aggressive for both vacuum and
analyze scale_factor settings; 10% and 5% respectively.
I would agree with this, not sure of 10%/5% are right, but the general
feedback I have heard is that while the defaults in 8.1 are much better
than the
Bruce Momjian wrote:
Matthew T. O'Connor wrote:
and increasing the log level when autovacuum actually fires off a VACUUM
or ANALYZE command.
This was not done because the logging control only for autovacuum was
going to be added. Right now, if you want to see the vacuum activity,
you
Alvaro Herrera wrote:
My vision is a little more complex than that. You define group of
tables, and separately you define time intervals. For each combination
of group and interval you can configure certain parameters, like a
multiplier for the autovacuum thresholds and factors; and also the
Bruce Momjian wrote:
Matthew T. O'Connor wrote:
Any chance we can make this change before release? I think it's very
important to be able to look through the logs and *know* that you tables
are getting vacuumed or not.
Agreed. I just IM'ed Alvaro and he says pg_stat_activity should
Josh Berkus wrote:
Is it time to turn on autovacuum by default in 8.2? I know we wanted
to be on the side of caution with 8.1, but perhaps we should evaluate
the experiences now. Comments?
I'm in favor of this, but do we want to turn on vacuum_delay by default
as well?
I thought about
Jim C. Nasby wrote:
On Thu, Aug 17, 2006 at 03:00:00PM +0900, ITAGAKI Takahiro wrote:
IMO, the only reason at all for naptime is because there is a
non-trivial cost associated with checking a database to see if any
vacuuming is needed.
This cost is reduced significantly in the
Jim C. Nasby wrote:
On Thu, Aug 17, 2006 at 12:41:57PM -0400, Matthew T. O'Connor wrote:
Would be fine by me, but I'm curious to see what the community has to
say. A few comments:
Autovacuum can cause unpredictable performance issues; that is, if it
vacuums in the middle of a busy day
Larry Rosenman wrote:
Alvaro Herrera wrote:
Bruce Momjian wrote:
Well, the problem is that it shows what it's *currently* doing, but it
doesn't let you know what has happened in the last day or whatever.
It can't answer "has table foo been vacuumed recently?" or "what
tables haven't been
Alvaro Herrera wrote:
Matthew T. O'Connor wrote:
I assume you are suggesting that the base value be 0? Well for one
thing if the table doesn't have any rows that will result in constant
vacuuming of that table, so it needs to be greater than 0. For a small
table, say 100 rows
Alvaro Herrera wrote:
ITAGAKI Takahiro wrote:
In the case of a heavy-update workload, the default naptime (60 seconds)
is too long to keep the number of dead tuples low. With my patch, the naptime
will be adjusted to around 3 seconds in the case of pgbench (scale=10, 80 tps)
with default other
Darcy Buskermolen wrote:
Dear Community members,
It is with great enthusiasm I announce that I have accepted an offer from
Joshua D. Drake of Command Prompt Inc, to join his team. As former Vice
President of Software Development with Wavefire Technologies Corp, I endeavor
to leverage over 10
Robert Treat wrote:
So, the things I hear most non-postgresql people complain about wrt postgresql
are:
no full text indexing built in
no replication built in
no stored procedures (with a mix of wanting in db cron facility)
the planner is not smart enough (with a mix of wanting hints)
vacuum
Bill Bartlett wrote:
Can't -- the main production database is over at a CoLo site with access
only available via SSH, and tightly-restricted SSH at that. Generally
one of the developers will SSH over to the server, pull out whatever
data is needed into a text file via psql or pg_dump, scp the
Tom Lane wrote:
Josh Berkus josh@agliodbs.com writes:
Other projects need even more intensive coding help. OpenOffice, for example,
doesn't offer the Postgres driver by default because it's still too buggy.
That seems like something that it'd be worth our while to help fix.
+1 (or +10 if
I think there are two things people typically want to know from the logs:
1) Is autovacuum running
2) Did autovacuum take action (issue a VACUUM or ANALYZE)
I don't think we need mention the name of each and every database we
touch, we can, but it should be at a lower level like DEBUG1 or
Stefan Kaltenbrunner wrote:
foo=# set maintenance_work_mem to 200;
SET
foo=# VACUUM ANALYZE verbose;
INFO: vacuuming information_schema.sql_features
ERROR: invalid memory alloc request size 204798
Just an FYI, I reported a similar problem on my 8.0.0 database a few
weeks ago. I
Alvaro Herrera wrote:
Csaba Nagy wrote:
Now when the queue tables get 1000 times dead space compared to their
normal size, I get performance problems. So tweaking vacuum cost delay
doesn't buy me anything, as vacuum per se is not the performance
problem; it's the long run time for big tables that is.
Csaba Nagy wrote:
So he rather needs Hannu Krosing's patch for simultaneous vacuum ...
Well, I guess that would be a good solution to the queue table
problem. The problem is that I can't deploy that patch on our production
systems without being fairly sure it won't corrupt any data... and
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
That patch is a step forward if it's deemed OK by the powers that be.
However, autovacuum would still need to be taught to handle simultaneous
vacuums. I suppose that in the interim, you could disable autovacuum
Csaba Nagy wrote
From my POV, there must be a way to speed up vacuums on huge tables and
small percentage of to-be-vacuumed tuples... a 200 million rows table
with frequent updates of the _same_ record is causing me some pain right
now. I would like to have that table vacuumed as often as
Alvaro Herrera wrote:
Chris Browne wrote:
It strikes me as a slick idea for autovacuum to take on that
behaviour. If the daily backup runs for 2h, then it is quite futile
to bother vacuuming a table multiple times during that 2h period when
none of the tuples obsoleted during the 2h period
Tom Lane wrote:
I'd argue it's fine: there are tons of people using row-level stats
via autovacuum, and (AFAICT) just about nobody using 'em for any other
purpose. Certainly you never see anyone suggesting them as a tool for
investigating problems on pgsql-performance. Sure, it's a repurposing
Tom Lane wrote:
hmm... That's true. I don't think autovacuum does anything to account
for the concept of rolledback inserts.
I think this is the fault of the stats system design. AFAICT from a
quick look at the code, inserted/updated/deleted tuples are reported
to the collector in
daveg wrote:
Apologies if this is old news, but pg_autovacuum in 8.0.x has the bad habit
of SEGVing and exiting when a table gets dropped out from under it. This
creates problems if you rely on pg_autovacuum for the bulk of your vacuuming
as it forgets its statistics when it is restarted and so
Marc G. Fournier wrote:
As a couple of ppl have found out by becoming 'moderators' for the
mailing lists, there are *a lot* of messages through the server that
aren't list subscribers, but are legit emails ...
Perhaps that shouldn't be allowed? Would it help things if all
non-subscriber
All the items you mentioned look like 8.2 issues to me. But here are
some thoughts.
Alvaro Herrera wrote:
* Enable autovacuum by default.
Get some field experience with it first, so the worst bugs are covered.
(Has anybody tested it?)
I have done some testing and it seems to be
Andrew - Supernews wrote:
On 2005-08-01, Matthew T. O'Connor matthew@zeut.net wrote:
* Stop a running VACUUM if the system load is too high.
What if vacuum used a vacuum delay that was equal to the vacuum delay
GUC settings * the system load. Or something more sophisticated
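The load-scaled delay suggested here can be sketched as below. This is only an illustration of the idea, not anything that exists in PostgreSQL; `os.getloadavg()` is POSIX-only, and the clamping to an upper bound is my own addition to keep the delay finite on heavily loaded machines.

```python
# Sketch of the suggestion above: scale the vacuum cost delay by the
# current system load average, so vacuum backs off when the box is busy.
import os

def load_scaled_delay(base_delay_ms, max_delay_ms=1000):
    """Return base delay multiplied by the 1-minute load average,
    never below the base delay and never above max_delay_ms."""
    load1, _, _ = os.getloadavg()
    return min(base_delay_ms * max(load1, 1.0), max_delay_ms)
```

A "something more sophisticated", as the poster puts it, might use a smoothed load average or I/O wait instead of the raw 1-minute figure.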
This is great news! I will do what I can to continue improving the code
and address these concerns as best I can. Many of the items below will
need to be addressed by Alvaro, but I will comment where I think I have
something useful to say :-)
Tom Lane wrote:
I've applied Alvaro's latest
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
Speaking of which, I think I mentioned this to Alvaro, but I guess it
just didn't make it in. The pg_autovacuum table should have a few
additional columns that allow setting vacuum delay settings on a per
table basis. I also
Tom Lane wrote:
Denis Lussier [EMAIL PROTECTED] writes:
I got to thinking it'd be kewl if PgAdmin3 supported an interactive debugger
for pl/pgsql.
That's been kicked around before, although I don't think anyone wants to
tie it to pgAdmin specifically. Check the archives...
I
Tom Lane wrote:
One thing that neither Dave nor I wanted to touch is pg_autovacuum.
If that gets integrated into the backend by feature freeze then the
question is moot, but if it doesn't then we'll have to decide whether
autovac should preferentially connect to template1 or postgres. Neither
Sorry to do this on the hackers list, but I have tried to email Alvaro
off-list and my email keeps getting bounced so
Alvaro,
I was just wondering what the current status of your work with the
Autovacuum patch is. Also, if you would like to discuss anything and
also if I can help you.
Joshua D. Drake wrote:
Josh Berkus wrote:
I've personally seen at least a dozen user requests for autovacuum
in the backend, and had this conversation about 1,100 times:
NB: After a week, my database got really slow.
Me: How often are you running VACUUM ANALYZE?
NB: Running what?
Can't