Tom Lane wrote:
> Neil Conway <[EMAIL PROTECTED]> writes:
> > Attached is a patch that adds information about function volatility to
> > the output of psql's "\df+" slash command. I'll apply this to HEAD
> > tomorrow, barring any objections.
>
> +1, but are there not any documentation changes to make?
ITAGAKI Takahiro wrote:
> ITAGAKI Takahiro <[EMAIL PROTECTED]> wrote:
>
> > Here is a patch that cancels autovacuum workers conflicting with
> > DROP TABLE, TRUNCATE and CLUSTER. It was discussed here:
> > http://archives.postgresql.org/pgsql-hackers/2007-06/msg00556.php
>
> I made an adjustment
On Tue, 2007-26-06 at 20:29 -0700, Joshua D. Drake wrote:
> Tom Lane wrote:
> > +1, but are there not any documentation changes to make?
Sure, I'll update the psql ref page.
> Well here is a question (just because I haven't seen it) is there a list
> of functions and their volatility in the docs
Tom Lane wrote:
Neil Conway <[EMAIL PROTECTED]> writes:
Attached is a patch that adds information about function volatility to
the output of psql's "\df+" slash command. I'll apply this to HEAD
tomorrow, barring any objections.
+1, but are there not any documentation changes to make?
Well here is a question (just because I haven't seen it): is there a list
of functions and their volatility in the docs?
Neil Conway <[EMAIL PROTECTED]> writes:
> Attached is a patch that adds information about function volatility to
> the output of psql's "\df+" slash command. I'll apply this to HEAD
> tomorrow, barring any objections.
+1, but are there not any documentation changes to make?
Gregory Stark wrote:
> I can imagine a scenario where you have a system that's very busy for 60s and
> then idle for 60s repeatedly. And for some reason you configure a
> checkpoint_timeout on the order of 20m or so (assuming you're travelling
> precisely 60mph).
Is that Scottish m?
--
Alvaro
"Greg Smith" <[EMAIL PROTECTED]> writes:
> If you write them twice, so what? You didn't even get to that point as an
> option until all the important stuff was taken care of and the system was
> near idle.
Well even if it's near idle you were still occupying the i/o system for a few
milliseconds.
On Tue, 26 Jun 2007, Heikki Linnakangas wrote:
I'm scheduling more DBT-2 tests at a high # of warehouses per Greg Smith's
suggestion just to see what happens, but I doubt that will change my mind on
the above decisions.
I don't either, at worst I'd expect a small documentation update perhaps
Attached is a patch that adds information about function volatility to
the output of psql's "\df+" slash command. I'll apply this to HEAD
tomorrow, barring any objections.
-Neil
Index: src/bin/psql/describe.c
===
RCS file: /home/neil
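The volatility column added by this patch comes from pg_proc.provolatile, which stores a one-character code per function. A minimal sketch of the code-to-label mapping \df+ would display (the function name `volatility_label` is illustrative, not psql's internal name):

```python
# pg_proc.provolatile codes: 'i' = immutable, 's' = stable, 'v' = volatile.
PROVOLATILE_LABELS = {"i": "immutable", "s": "stable", "v": "volatile"}

def volatility_label(provolatile: str) -> str:
    """Translate a pg_proc.provolatile code into the label shown to the user."""
    try:
        return PROVOLATILE_LABELS[provolatile]
    except KeyError:
        raise ValueError(f"unknown provolatile code: {provolatile!r}")

print(volatility_label("v"))  # volatile
```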
On Tue, 26 Jun 2007, Tom Lane wrote:
I'm not impressed with the idea of writing buffers because we might need
them someday; that just costs extra I/O due to re-dirtying in too many
scenarios.
This is kind of an interesting statement to me because it really
highlights the difference in how I
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
We could just allow any value up to 1.0, and note in the docs that you
should leave some headroom, unless you don't mind starting the next
checkpoint a bit late. That actually sounds pretty good.
Yeah, that sounds fine. There isn't actually a
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> We could just allow any value up to 1.0, and note in the docs that you
> should leave some headroom, unless you don't mind starting the next
> checkpoint a bit late. That actually sounds pretty good.
Yeah, that sounds fine. There isn't actually a
On Tue, 26 Jun 2007, Gregory Stark wrote:
What exactly happens if a checkpoint takes so long that the next checkpoint
starts. Aside from it not actually helping is there much reason to avoid this
situation? Have we ever actually tested it?
More segments get created, and because of how they are
Gregory Stark wrote:
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
We could just allow any value up to 1.0, and note in the docs that you should
leave some headroom, unless you don't mind starting the next checkpoint a bit
late. That actually sounds pretty good.
What exactly happens if a checkpoint takes so long that the next checkpoint starts?
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> We could just allow any value up to 1.0, and note in the docs that you should
> leave some headroom, unless you don't mind starting the next checkpoint a bit
> late. That actually sounds pretty good.
What exactly happens if a checkpoint takes so long that the next checkpoint starts?
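The arithmetic behind the headroom discussion above can be sketched simply. This is illustrative pacing math, not PostgreSQL's actual implementation: checkpoint writes are spread over roughly checkpoint_timeout * checkpoint_completion_target seconds, so a target of 1.0 leaves no slack for the fsync phase before the next checkpoint is due.

```python
# Illustrative sketch: how much of the checkpoint interval is budgeted for
# writes, and how much headroom remains for fsync and other end-of-checkpoint
# work. Function name is hypothetical.
def write_budget_seconds(checkpoint_timeout_s: float, completion_target: float) -> float:
    if not 0.0 < completion_target <= 1.0:
        raise ValueError("completion_target must be in (0, 1]")
    return checkpoint_timeout_s * completion_target

# 5-minute timeout with target 0.9: writes are paced over ~270 s,
# leaving ~30 s of headroom; with target 1.0 the headroom is zero.
budget = write_budget_seconds(300, 0.9)
print(budget, 300 - budget)
```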
The attached patch implements the following fixes to pg_ctl on Windows:
- Fix the -w (wait) option to work in Windows service mode, per bug
#3382. This is required on Windows because pg_ctl reports running status
to the service control manager when actually still in recovery/startup,
causing a
Michael Glaesemann wrote:
On Jun 26, 2007, at 13:49 , Heikki Linnakangas wrote:
Maximum is 0.9, to leave some headroom for fsync and any other things
that need to happen during a checkpoint.
I think it might be more user-friendly to make the maximum 1 (meaning as
much smoothing as you could
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Barring any objections from committer, I'm finished with this patch.
Sounds great, I'll start looking this over.
I'm scheduling more DBT-2 tests at a high # of warehouses per Greg
Smith's suggestion just to see what happens, but I doubt that will change
my mind on the above decisions.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Barring any objections from committer, I'm finished with this patch.
Sounds great, I'll start looking this over.
> I'm scheduling more DBT-2 tests at a high # of warehouses per Greg
> Smith's suggestion just to see what happens, but I doubt that will
> change my mind on the above decisions.
On Jun 26, 2007, at 13:49 , Heikki Linnakangas wrote:
Maximum is 0.9, to leave some headroom for fsync and any other
things that need to happen during a checkpoint.
I think it might be more user-friendly to make the maximum 1 (meaning
as much smoothing as you could possibly get) and intern
On Tue, 26 Jun 2007, Tom Lane wrote:
I have no doubt that there are scenarios such as you are thinking about,
but it definitely seems like a corner case that doesn't justify keeping
the all-buffers scan. That scan is costing us extra I/O in ordinary
non-corner cases, so it's not free to keep it
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> To recap, the sequence is:
> 1. COPY FROM
> 2. checkpoint
> 3. VACUUM
> Now you have buffer cache full of dirty buffers with usage_count=1,
Well, it won't be very full, because VACUUM works in a limited number of
buffers (and did even before the B
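Tom's point that VACUUM "works in a limited number of buffers" refers to the buffer access strategy ring. A toy model of that idea (assumed behavior for illustration, not the real implementation; the class name is hypothetical): a bulk operation recycles a small fixed set of cache slots instead of flooding the whole buffer cache.

```python
from collections import deque

# Toy buffer-ring model: once the ring is full, the oldest slot is reused,
# so a scan of arbitrarily many pages only ever occupies `size` cache slots.
class BufferRing:
    def __init__(self, size: int):
        self.size = size
        self.ring = deque()

    def get_buffer(self, page: int) -> int:
        if len(self.ring) == self.size:
            self.ring.popleft()  # recycle the oldest slot
        self.ring.append(page)
        return page

ring = BufferRing(4)
for page in range(100):  # "vacuum" 100 pages
    ring.get_buffer(page)
print(len(ring.ring))  # 4
```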
On Mon, 25 Jun 2007, Tom Lane wrote:
right now, BgBufferSync starts over from the current clock-sweep point
on each call --- that is, each bgwriter cycle. So it can't really be
made to write very many buffers without excessive CPU work. Maybe we
should redefine it to have some static state c
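The static-state idea Tom sketches above can be shown in miniature. This is a hedged illustration, not PostgreSQL code, and the names are invented: a scanner that keeps its position across bgwriter cycles covers new buffers each cycle, instead of re-examining the same ones from the clock-sweep point every time.

```python
# Hypothetical scanner keeping a persistent position across cycles.
class BgWriterScan:
    def __init__(self, n_buffers: int):
        self.n_buffers = n_buffers
        self.next_to_scan = 0  # static state carried between cycles

    def cycle(self, budget: int) -> list:
        """Scan `budget` buffers, continuing where the last cycle stopped."""
        scanned = []
        for _ in range(budget):
            scanned.append(self.next_to_scan)
            self.next_to_scan = (self.next_to_scan + 1) % self.n_buffers
        return scanned

scan = BgWriterScan(16)
print(scan.cycle(4))  # [0, 1, 2, 3]
print(scan.cycle(4))  # [4, 5, 6, 7] -- continues, does not restart at 0
```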
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
Who's "we"? AFAICS, CVS HEAD will treat a large copy the same as any
other large heapscan.
Umm, I'm talking about populating a table with COPY *FROM*. That's not a
heap scan at all.
No wonder we're failing to communicate
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Who's "we"? AFAICS, CVS HEAD will treat a large copy the same as any
>> other large heapscan.
> Umm, I'm talking about populating a table with COPY *FROM*. That's not a
> heap scan at all.
No wonder we're failing to communicate
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
(Note that COPY per se will not trigger this behavior anyway, since it
will act in a limited number of buffers because of the recent buffer
access strategy patch.)
Actually we dropped it from COPY, because it didn't seem to improve
performance in the tests we ran.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> (Note that COPY per se will not trigger this behavior anyway, since it
>> will act in a limited number of buffers because of the recent buffer
>> access strategy patch.)
> Actually we dropped it from COPY, because it didn't seem to improve
> performance in the tests we ran.
Tom Lane wrote:
(Note that COPY per se will not trigger this behavior anyway, since it
will act in a limited number of buffers because of the recent buffer
access strategy patch.)
Actually we dropped it from COPY, because it didn't seem to improve
performance in the tests we ran.
--
Heikki
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> This argument supposes that the bgwriter will do nothing while the COPY
>> is proceeding.
> It will clean buffers ahead of the COPY, but it won't write the buffers
> COPY leaves behind since they have usage_count=1.
Yeah, and th
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
One pathological case is a COPY of a table slightly smaller than
shared_buffers. That will fill the buffer cache. If you then have a
checkpoint, and after that a SELECT COUNT(*), or a VACUUM, the buffer
cache will be full of pages
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> ... that's what the LRU scan is for.
> Yeah, except the LRU scan is not doing a very good job at that. It will
> ignore buffers with usage_count > 0, and it only scans
> bgwriter_lru_percent buffers ahead of the clock hand.
Whi
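Heikki's complaint about the LRU scan can be reproduced with a toy clock sweep (illustrative only; the function name and shape are not PostgreSQL's): a scan that skips buffers with usage_count > 0 writes nothing on its first pass over a cache that COPY just filled with usage_count=1 buffers, because each buffer must first be decremented to zero.

```python
# Toy LRU/clock-sweep pass: collect evictable buffers (usage_count == 0),
# decrementing the usage_count of everything else it touches.
def lru_scan(usage_counts: list, scan_limit: int) -> list:
    evictable = []
    for i in range(min(scan_limit, len(usage_counts))):
        if usage_counts[i] == 0:
            evictable.append(i)
        else:
            usage_counts[i] -= 1  # a later pass can evict these
    return evictable

cache = [1] * 8            # cache freshly filled by a bulk COPY
print(lru_scan(cache, 8))  # []
print(lru_scan(cache, 8))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The first pass finds nothing evictable even though every buffer is cold, which matches the observation that the LRU scan "is not doing a very good job" right after a bulk load.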
Tom Lane wrote:
Anyway, if there are no XLOG records since the last checkpoint, there's
probably nothing in shared buffers that needs flushing. There might be
some dirty hint-bits, but the only reason to push those out is to make
some free buffers available, and doing that is not checkpoint's job.
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Hmm. But if we're going to do that, we might as well have a checkpoint
>> for our troubles, no? The reason for the current design is the
>> assumption that a bgwriter_all scan is less burdensome than a
>> checkpoint, but that is no longer true given this rewrite.
Tom Lane wrote:
Greg Smith <[EMAIL PROTECTED]> writes:
The way transitions between completely idle and all-out bursts happen were
one problematic area I struggled with. Since the LRU point doesn't move
during the idle parts, and the lingering buffers have a usage_count>0, the
LRU scan won't t
On Mon, Jun 25, 2007 at 03:19:32PM -0400, Andrew Dunstan wrote:
>
>
> I wrote:
> >
> >
> >Would making a change like this in those 12 places be so ugly?
> >
>
> Specifically, I propose the following patch, which should fix the issues
> buildfarm apparently has with the XP command shell (or some
I wrote:
> > pgstat_drop_relation() is expecting relid (pg_class.oid) as the argument,
> > but we pass it relfilenode.
> I'm trying to fix the bug, because there is a possibility that some useless
> statistics data continue to occupy some parts of the statistics table.
Here is a patch to fix undro
Tom Lane wrote:
Hmm. But if we're going to do that, we might as well have a checkpoint
for our troubles, no? The reason for the current design is the
assumption that a bgwriter_all scan is less burdensome than a
checkpoint, but that is no longer true given this rewrite.
Per comments in Create