Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Alvaro Herrera wrote:
I did it that way (i.e. added locking) and then realized that it
shouldn't really be a problem, because the only one who can be setting
vacuum flags is the process itself. Other processes can only read
in cache, and on tables that don't. Tables
large enough that the index doesn't fit in cache. And as a special case,
on a table just the right size that a normal index fits in cache, but a
thick one doesn't.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
be recipient of my reminder request. Really it's
not nice if your work is repeatedly deleted from or re-inserted into the
current queue. Nobody can make any plans.
All I can say is that I can feel your pain. Let's hope and do our best
to make 8.4 smoother.
uuid/uuid.h.
Attached is a patch that adds some autoconf magic to deal with that. I'm
testing ossp/uuid.h first, because presumably if that exists it's the
right one, while I suspect there might be other files called just uuid.h
on other systems that are not the same thing.
if it really has to be
a reserved keyword to implement that syntax. Looking at the plpgsql
grammar closely, we don't categorize keywords like we do in the main
grammar, so maybe what I'm saying doesn't make any sense.
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Heikki Linnakangas wrote:
ITAGAKI Takahiro wrote:
Tom Lane [EMAIL PROTECTED] wrote:
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
Here is a trivial fix of locking issue in pgstattuple().
Hmm, is this really a bug, and if so how far back does it go?
I'm thinking that having a pin on the buffer
Here's an updated version of the patch. There was a bogus assertion in
the previous one, comparing against mdsync_cycle_ctr instead of
mdunlink_cycle_ctr.
Heikki Linnakangas wrote:
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
The best I can think of is to rename the obsolete
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
The best I can think of is to rename the obsolete file to
relfilenode.stale, when it's scheduled for deletion at next
checkpoint, and check for .stale-suffixed files in GetNewRelFileNode,
and delete them immediately
with this?
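The rename-to-.stale scheme described above can be sketched in a few lines. This is a toy in-memory model under stated assumptions, not PostgreSQL code: a dict stands in for the filesystem, and schedule_for_deletion, checkpoint and get_new_rel_file_node are hypothetical names, not the actual backend functions.

```python
# Toy model of the proposal: when a relation file becomes obsolete, rename it
# to "<relfilenode>.stale"; the next checkpoint unlinks all .stale files, and
# the relfilenode allocator deletes any leftover .stale file immediately
# before reusing the name (covering a crash before the checkpoint ran).

def schedule_for_deletion(fs, name):
    """Rename the obsolete file; actual deletion happens at next checkpoint."""
    fs[name + ".stale"] = fs.pop(name)

def checkpoint(fs):
    """At checkpoint time, unlink every file scheduled for deletion."""
    for name in [n for n in fs if n.endswith(".stale")]:
        del fs[name]

def get_new_rel_file_node(fs, name):
    """Hand out a relfilenode, removing any stale leftover file first."""
    fs.pop(name + ".stale", None)   # delete the leftover immediately
    fs[name] = b""                  # create the new, empty relation file
    return name
```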
</listitem>
<listitem>
! <para>Positional information must be greater than 0 and less than
16,384</para>
</listitem>
<listitem>
<para>No more than 256 positions per lexeme</para>
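The two limits quoted in the documentation fragment above follow from tsvector's storage format: each position is stored in 14 bits (hence 1..16383), and at most 256 positions are kept per lexeme. As a quick illustrative model (not the backend's actual LIMITPOS macro; clamp_positions is a hypothetical helper):

```python
# Toy model of the documented tsvector limits: positions are 14-bit values,
# so valid positions are 1..16383, and at most 256 positions per lexeme.
MAX_POS = (1 << 14) - 1   # 16383
MAX_NUM_POS = 256         # positions kept per lexeme

def clamp_positions(positions):
    """Clamp each position into 1..MAX_POS; keep only the first MAX_NUM_POS."""
    clamped = [min(max(p, 1), MAX_POS) for p in positions]
    return clamped[:MAX_NUM_POS]
```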
Index: doc/src/sgml/ref/alter_sequence.sgml
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/doc/src/sgml/ref/alter_sequence.sgml,v
retrieving revision 1.17
diff
be a problem.
BIO_new_mem_buf was introduced in OpenSSL 0.9.7. What versions do we
support?
://www.openssl.org/support/faq.html#PROG2
How come we only bump into the crash with client certs?
for
no good reason.
TransactionId*Is*InProgress is misspelled in a couple of comments in
twophase.c.
Index: src/backend/access/transam/twophase.c
===
RCS file: /home/hlinnaka
It would be nice to slip this into 8.3...
Index: src/timezone/localtime.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/timezone/localtime.c,v
retrieving revision
Alvaro Herrera wrote:
Heikki Linnakangas wrote:
+ -- Basic test
+ COPY xmltest TO
'/home/hlinnaka/pg_sandbox/pgsql.cvshead/src/test/regress/results/xmltest.data'
WITH BINARY;
+ TRUNCATE xmltest;
+ COPY xmltest FROM
'/home/hlinnaka/pg_sandbox/pgsql.cvshead/src/test/regress/results
Heikki Linnakangas wrote:
Alvaro Herrera wrote:
Heikki Linnakangas wrote:
+ -- Basic test
+ COPY xmltest TO
'/home/hlinnaka/pg_sandbox/pgsql.cvshead/src/test/regress/results/xmltest.data'
WITH BINARY;
+ TRUNCATE xmltest;
+ COPY xmltest FROM
'/home/hlinnaka/pg_sandbox/pgsql.cvshead/src
possible that I broke it in the process, I was only interested in
testing the performance characteristics of the simplified pruning scheme.
undetectably corrupted --- t_ctid
is 6 bytes and could easily cross a hardware sector boundary.
We're only changing the offsetnumber part of it, which is 2 bytes. That
shouldn't cross a hardware sector boundary on any reasonable hardware.
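The argument above is easy to check arithmetically. This is an illustrative sanity check, not PostgreSQL code; it assumes 512-byte hardware sectors, an 8 KB page, and that the 2-byte offset-number part of t_ctid (which sits 4 bytes into the 6-byte field, after the block number) is always 2-byte aligned:

```python
# A write straddles a hardware sector iff its first and last byte fall into
# different 512-byte sectors. A 2-byte field at a 2-byte-aligned offset can
# never straddle; the full 6-byte t_ctid can.
SECTOR = 512

def straddles_sector(offset, size):
    """True if bytes [offset, offset+size) span a sector boundary."""
    return offset // SECTOR != (offset + size - 1) // SECTOR

def offsetnumber_can_straddle():
    # The 2-byte offsetnumber lives at t_ctid offset + 4, which stays
    # 2-byte aligned for every 2-byte-aligned tuple position in the page.
    return any(straddles_sector(base + 4, 2) for base in range(0, 8192, 2))

def full_ctid_can_straddle():
    # The whole 6-byte t_ctid, by contrast, can cross a sector boundary.
    return any(straddles_sector(base, 6) for base in range(0, 8192, 2))
```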
not hang ourselves on that. I'm sure there's a way to deal
with the page format conversion if we have to. The crucial design
decision for HOT is when to prune and when to defragment the page, so
that when we're doing the UPDATE, there's room in the page.
(PageRepairFragmentation). Tom wondered why they're
separated in the patch. As the patch stands, there is no reason, but I
feel that separating them and doing them at different times might be an
important piece in the puzzle.
. Pruning
clearly does help.
Given that this test is pretty much the worst case scenario, I'm ok with
not pruning for the purpose of keeping chains short.
set enable_seqscan=on;
DROP TABLE hottest;
CREATE TABLE hottest (id
Heikki Linnakangas wrote:
Explanations of the columns:
HEAD: CVS HEAD
HOT-pruned: CVS HEAD + HOT patch v15
HOT: CVS HEAD + HOT patch v15, but with heap_page_prune_defrag short
circuited to do nothing
HOT-opt: CVS HEAD + HOT patch v15, but with static
in source code. The POSDATAPTR and POSDATALEN
macros are still used, though it would now be more readable to access
the fields in WordEntryPosVector directly.
* Removed needfree field from DocRepresentation. It was always set to false.
* Miscellaneous other commenting and refactoring
if there is one, to point to the latest live tuple, without
removing the dead tuples or the line pointers.
the line pointer as not used.
Otherwise someone might reuse the line pointer for another tuple, which
is a problem if you then crash. WAL replay would see an insert to a line
pointer that's already in use (the quick pruning wouldn't be WAL
logged), which would raise an error.
tcop tsearch utils \
port port/win32 port/win32/arpa port/win32/netinet port/win32/sys
# Install all headers
Teodor Sigaev wrote:
Heikki Linnakangas wrote:
* Defined new struct WordEntryPosVector that holds a uint16 length and a
variable size array of WordEntries. This replaces the previous
convention of a variable size uint16 array, with the first element
implying the length. WordEntryPosVector has
not very worried about the cost of
following the chain either. But that's something we can quite easily
measure if we want to.
Florian Pflug wrote:
Heikki Linnakangas wrote:
Tom Lane wrote:
Compared to what it currently takes to check the same tuple (a separate
index entry fetch and traversal to the heap page), this is already an
enormous performance improvement.
Though keep in mind that we kill index tuples
this far before I continue hacking.
*** ../pgsql.tsearch-2/src/backend/utils/adt/tsginidx.c 2007-09-06 11:19:57.0 +0100
--- ./src/backend/utils/adt/tsginidx.c 2007-09-07 09:20:27.0 +0100
***
*** 22,28
Heikki Linnakangas wrote:
BTW, the encoding of the XML datatype looks pretty funky. xml_recv first
reads the xml string with pq_getmsgtext, which applies a client-server
conversion. Then the xml declaration is parsed, extracting the encoding
attribute. Then the string is converted again from
changes.
Thanks!
Can you please apply this fix for the bug Pavel found as well:
http://archives.postgresql.org/pgsql-hackers/2007-09/msg00127.php
. For that as well, we could prune only, but not defragment, the page
in the lookup path.
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
When I suggested that we get rid of the LP_DELETE flag for heap tuples,
the tuple-level fragmentation and all that, and just take the vacuum
lock and call PageRepairFragmentation, I was thinking that we'd do it in
heap_update
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
Another real problem with doing pruning only in UPDATE path is that
we may end up with long HOT chains if the page does not receive a
UPDATE, after many consecutive HOT updates.
How is that, if the same number
Alvaro Herrera wrote:
Pruning is going to take place on next vacuum anyway, isn't it?
Yes. If HOT is working well, you're not going to run vacuum very often,
though.
it is
optimization which can be left for a later version (8.4?)
Wait, did you mean that we don't do pruning at all in 8.3? That's a bad
idea, the main purpose of HOT is to be able to vacuum page at a time,
reducing the need to do regular vacuums.
. It will work
well with fixed-size tuples, but not so well otherwise.
Hmm. I wonder if we could prune/defragment in bgwriter?
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Imagine a page with just one tuple on it:
1
After a bunch of updates, it looks like this
1 - 2 - 3 - 4 - 5
1 is the tuple the indexes point to, others are heap only.
But if we were attempting prune at every update
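The 1 - 2 - 3 - 4 - 5 chain above can be modeled in a few lines. This is a toy sketch, not PostgreSQL code: line pointers are a dict mapping a slot to its HOT successor (None marks the live tuple), and prune_chain is a hypothetical helper showing how pruning redirects the root without touching the index:

```python
# The index points at line pointer 1; each HOT update links the old tuple to
# its successor. Pruning walks the chain and redirects the root line pointer
# to the latest live tuple, so the index entry stays valid while the dead
# intermediate tuples become reclaimable.

def prune_chain(line_pointers, root):
    """Follow the HOT chain from `root`; redirect root to the chain's tip."""
    tip = root
    dead = []
    while line_pointers[tip] is not None:   # None marks the live tuple
        dead.append(tip)
        tip = line_pointers[tip]
    line_pointers[root] = tip               # root becomes a redirect
    return tip, dead[1:]                    # intermediates can be reclaimed
```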
references to in our own backend, but even that seems like a
non-starter to me.
Yet another idea is to add an intent argument (or somehow pass it out
of line) to heap_fetch. You would prune the page in heap_fetch, but only
if you're fetching for the purpose of updating the tuple.
Teodor Sigaev wrote:
Heikki Linnakangas wrote:
In any case, I think we need to calculate the CRC/hash in tsqueryrecv,
instead of trusting the client.
Agreed.
I started to write a patch for that, as I realized that we're
transferring the strings in a tsvector/tsquery in server encoding.
That's
Alvaro Herrera wrote:
Heikki Linnakangas wrote:
Hmm. I wonder if we could prune/defragment in bgwriter?
That would be best, if at all possible. You can prune without accessing
anything outside the page itself, right?
Yes, though you do need to have an oldest xmin to determine which
Florian Pflug wrote:
Heikki Linnakangas wrote:
That's a pretty sensitive tradeoff, we want to prune often to cut the
long HOT chains, but not too often because it's pretty expensive to
acquire the vacuum lock and move tuples around. I don't think we've
found the optimal solution yet
the first time we see we *can* prune.
Not necessarily. Pruning is expensive, you need to scan all tuples on
the page and write WAL record. And defragment the page if you consider
that part of pruning. You don't want to do it too aggressively.
, before
bgwriter has the chance to prune anything on it.
But if it works reasonably well in typical scenarios, we can go with
that for 8.3 and improve later.
of reassigning it to the dummy PGPROC, is
ok because the transaction can't insert anything to the table after
PREPARE TRANSACTION.
Sounds valid to me, but better add some comments to note that the lock
is released early, in case it's going to be used for some other purpose
in the future.
might have a natural limit so that you can't force
arbitrarily deep recursions, but check_stack_depth() is cheap enough
that seems best to just stick it into anything that might be a problem.
Patch attached. It's on top of the tsearch-refactor-2.patch I posted
earlier.
places.
Ok. Probably easiest to do that by changing the palloc to palloc0 in
parse_tsquery.
the CRC/hash in tsqueryrecv,
instead of trusting the client.
Bruce Momjian wrote:
Heikki Linnakangas wrote:
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Please see the version 14 of HOT patch attached.
I expected to find either a large new README, or some pretty substantial
additions to existing README files, to document how this all works
to run VACUUM before xid wrap-around?
posted earlier. It now
reflects the changes to how pruning works.
Use case
The best use case for HOT is a table that's frequently UPDATEd, and is large
enough that VACUUM is painful. On small tables that fit in cache
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Should there be new a log_line_prefix percent code for virtual
transaction ids? Or should we change the meaning of %x to be virtual
transaction id instead of the real one
Heikki Linnakangas wrote:
Tom Lane wrote:
Something that was annoying me yesterday was that it was not clear
whether we had fixed every single place that uses a tsearch config file
to assume that the file is in UTF8 and should be converted to database
encoding. So I was thinking
And here's the attachment I forgot.
Heikki Linnakangas wrote:
Heikki Linnakangas wrote:
Tom Lane wrote:
Something that was annoying me yesterday was that it was not clear
whether we had fixed every single place that uses a tsearch config file
to assume that the file is in UTF8 and should
moved and renamed twice.
Here's a patch to unite them again.
Index: src/backend/tsearch/ts_parse.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend
recode_and_lowerstr anyway, so this simplifies the code a little bit. Are
there any external dictionary implementations that would require
different behavior?
- bunch of comments added, typos fixed, and other cleanup
The code still needs lots of love, but it's a start...
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
- readstopwords calls recode_and_lowerstr directly, instead of using the
wordop function pointer in StopList struct. All callers used
recode_and_lowerstr anyway, so this simplifies the code a little bit. Is
there any external
to always run input files through pg_verify_mbstr.
We do it for stopwords, and synonym files (though incorrectly), but not
for thesaurus files or ispell files. It's probably best to do that
within the recode-function as well.
about which
line pointers are not used. It would also be better if we didn't emit a
separate WAL record for defraging a page, if we also prune it at the
same time. I'm not that worried about WAL usage in general, but that
seems simple enough to fix.
worth of WAL output.
I wonder what it would take to offload the CRC calculation to the wal
writer. And if that would then become a bottleneck, making it actually
counterproductive.
No, not in this release :).
or tombstone?
Sounds good to me. Stub is a bit generic, I'd go for tombstone.
_H
#define _H
boilerplate.
is unsigned, and InvalidOffsetNumber is -1, so I fixed that
as well.
Patch attached.
*** src/backend/storage/page/bufpage.c 2007-07-14 20:54:04.0 +0100
--- src/backend/storage/page/bufpage.c 2007-07-14 20:37:36.0
it.
Especially the glossary is helpful, since the patch introduces a lot of
new concepts.
I have some suggestions which I'll post separately, this just describes
the status quo of the patch.
Use case
The best use case
Heikki Linnakangas wrote:
I have some suggestions which I'll post separately,
A significant chunk of the complexity and new code in the patch comes
from pruning hot chains and reusing the space for new updates. Because
we can't reclaim dead space in the page like a VACUUM does, without
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
A much simpler approach would be to try to acquire the vacuum lock, and
compact the page the usual way, and fall back to a cold update if we
can't get the lock
Michael Glaesemann wrote:
On Jul 13, 2007, at 8:31 , Heikki Linnakangas wrote:
Row-level fragmentation
---
If there's no LP_DELETEd tuples large enough to fit the new tuple in,
the row-level fragmentation is repaired in the hope that some of the
slots were actually big
difference to your results is that the DELETE, VACUUM and INSERT
operations are much faster both with and without the patch. Most INSERTs
for example took 1 s, and in your results they took 15 s. Any idea why?
executed CREATE
there'll
be a lot of churn when tuples need to be moved back and forth, along
with updating indexes, but it'd take the overhead out of the critical path.
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Heikki Linnakangas wrote:
For comparison, imola-328 has full_page_writes=off. Checkpoints last ~9
minutes there, and the graphs look very smooth. That suggests that
spreading the writes over a longer time wouldn't make a difference
Heikki Linnakangas wrote:
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
I'm scheduling more DBT-2 tests at a high # of warehouses per Greg
Smith's suggestion just to see what happens, but I doubt that will
change my mind on the above decisions.
When do you expect to have
Greg Smith wrote:
On Fri, 29 Jun 2007, Heikki Linnakangas wrote:
LOG: checkpoint complete; buffers written=5869 (35.8%); write=2.081
s, sync=4.851 s, total=7.066 s
My original patch converted the buffers written to MB. Easier to
estimate MB/s by eye; I really came to hate multiplying by 8K
log message for immediate checkpoint requests.
Differentiating between immediate checkpoints triggered by a command
like CHECKPOINT or CREATE DATABASE and normal checkpoints should be
enough in practice.
I'm done with this.
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Here's latest revision of Itagaki-sans Load Distributed Checkpoints patch:
Applied with some minor revisions to make some of the internal APIs a
bit cleaner; mostly, it seemed like a good idea to replace all those
bool parameters
a
checkpoint,
! and if one is already running it has to wait for it to finish first. You
! can adjust <varname>checkpoint_completion_target</varname> to perform the
! checkpoints more aggressively.
</para>
</listitem>
<listitem>
as we're writing the update.
* (Perhaps it'd make even more sense to checkpoint only when the
previous
* checkpoint record is in a different xlog page?)
. We don't count them towards the progress
made ATM, but we probably should. Lastly, distributing the writes even a
little bit is going to be smoother than the current behavior anyway.
work around its deficiencies.
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
One pathological case is a COPY of a table slightly smaller than
shared_buffers. That will fill the buffer cache. If you then have a
checkpoint, and after that a SELECT COUNT(*), or a VACUUM, the buffer
cache will be full of pages
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
(Note that COPY per se will not trigger this behavior anyway, since it
will act in a limited number of buffers because of the recent buffer
access strategy patch.)
Actually we dropped it from COPY, because it didn't
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
Who's we? AFAICS, CVS HEAD will treat a large copy the same as any
other large heapscan.
Umm, I'm talking about populating a table with COPY *FROM*. That's not a
heap scan at all.
No wonder we're failing
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Barring any objections from committer, I'm finished with this patch.
Sounds great, I'll start looking this over.
I'm scheduling more DBT-2 tests at a high # of warehouses per Greg
Smith's suggestion just to see what happens, but I
Michael Glaesemann wrote:
On Jun 26, 2007, at 13:49 , Heikki Linnakangas wrote:
Maximum is 0.9, to leave some headroom for fsync and any other things
that need to happen during a checkpoint.
I think it might be more user-friendly to make the maximum 1 (meaning as
much smoothing as you
Gregory Stark wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
We could just allow any value up to 1.0, and note in the docs that you should
leave some headroom, unless you don't mind starting the next checkpoint a bit
late. That actually sounds pretty good.
What exactly happens
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
We could just allow any value up to 1.0, and note in the docs that you
should leave some headroom, unless you don't mind starting the next
checkpoint a bit late. That actually sounds pretty good.
Yeah, that sounds fine
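The headroom trade-off discussed above comes down to simple arithmetic. A back-of-envelope model (illustrative numbers only; next_checkpoint_delay is a hypothetical helper, not PostgreSQL code): the write phase is throttled to finish at completion_target times the checkpoint interval, and whatever remains of the interval must absorb the sync phase.

```python
# With completion_target 1.0 there is no slack at the end of the interval,
# so any sync (fsync) time pushes the start of the next checkpoint late;
# with 0.9 and a 5-minute interval, 30 seconds of slack remain.

def next_checkpoint_delay(interval_s, completion_target, sync_s):
    """Seconds the next checkpoint starts late (0 if slack absorbs the sync)."""
    write_phase = interval_s * completion_target
    slack = interval_s - write_phase
    return max(0.0, sync_s - slack)
```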
run into
problems in that area.
Please describe the class of transactions and the service guarantees so
that we can reproduce that, and figure out what's the best solution.
at all (for example with full_page_writes=off).
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
On further thought, there is one workload where removing the non-LRU
part would be counterproductive:
If you have a system with a very bursty transaction rate, it's possible
that when it's time for a checkpoint, there hasn't been
Simon Riggs wrote:
On Fri, 2007-06-22 at 22:19 +0100, Heikki Linnakangas wrote:
However, I think shortening the checkpoint interval is a perfectly valid
solution to that.
Agreed. That's what checkpoint_timeout is for. Greg can't choose to use
checkpoint_segments as the limit
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
Tom Lane wrote:
(BTW, the patch seems
a bit schizoid about whether checkpoint_rate is int or float.)
Yeah, I've gone back and forth on the data type. I wanted it to be a
float, but guc code doesn't let you specify a float in KB
add an 'immediate'
parameter to pg_start_backup if necessary.
, and doesn't show up on a laptop or a small server with a single disk.
That's one of the first things I'm planning to tackle when the 8.4 dev
cycle opens. And I'm planning to look at recovery times in general; I've
never even measured it before so who knows what comes up.
anything to change, don't upgrade.
with it
and why you would change it.
Sorry if I'm missing discussions about the naming.
No, I chose _smoothing on my own. People didn't like
checkpoint_write_percent either (including).
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
I added a spinlock to protect the signaling fields between bgwriter and
backends. The current non-locking approach gets really difficult as the
patch adds two new flags, and both are more important than the existing
ckpt_time_warn