On 5/2/07, Gregory Stark [EMAIL PROTECTED] wrote:
Can we? I mean, sure you can break the patch up into chunks which might
make it easier to read, but are any of the chunks useful alone?
Well, I agree, it would be a tough job. I can try to break the patch into
several self-complete
On 5/2/07, Tom Lane [EMAIL PROTECTED] wrote:
* [PATCHES] HOT Patch - Ready for review /Pavan Deolasee/
This needs a *lot* of review. Can we break it down into more manageable
chunks? I'm not sure that anyone's got a full grasp of the implications
of this patch, and that's a scary thought
On 5/2/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
I'm going to go with pgdiagnostics. We could shorten it to just pgdiag,
but that feels too short :). We could make it pgdiagfuncs, but that's
not much shorter than pgdiagnostics.
Just to add more confusion :-), how about pginspect?
On 5/2/07, Tom Lane [EMAIL PROTECTED] wrote:
* [pgsql-patches] Ctid chain following enhancement
/Pavan Deolasee/
I'm not very excited about this --- it seems to me to complicate the code
in some places that are not in fact performance-critical. While it
doesn't seem likely to break
On 4/1/07, Tom Lane [EMAIL PROTECTED] wrote:
Good point. I'm envisioning a procarray.c function along the
lines of
bool TransactionHasSnapshot(xid)
which returns true if the xid is currently listed in PGPROC
and has a nonzero xmin. CIC's cleanup wait loop would check
this and ignore
On 4/11/07, Tom Lane [EMAIL PROTECTED] wrote:
[ itch... ] The problem is with time-extended execution of
GetSnapshotData; what happens if the other guy lost the CPU for a good
long time while in the middle of GetSnapshotData? He might set his
xmin based on info you saw as long gone.
You
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Good point. I'm envisioning a procarray.c function along the
lines of
bool TransactionHasSnapshot(xid)
which returns true if the xid is currently listed in PGPROC
and has a nonzero xmin. CIC's cleanup wait loop would check
On 4/6/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:
BTW, is anybody working on enabling the fill factor for the tables used
by pgbench? 8.3 will introduce HOT, and I think adding the feature
will make it easier to test HOT.
Please see if the attached patch looks good. It adds a new -F option
On 4/3/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Where are we on this?
---
Tom Lane wrote:
[squint...] How can that fail during a reload if it worked the first
time? Needs a closer look at what's happening.
I noticed that the plan invalidation is not immediately effective.
Not sure whether it's worth fixing or has any other side-effects,
but I thought I would just post it.
I was testing the following scenario:
session1                          session2
CREATE TABLE test
(int a, int
On 4/3/07, Tom Lane [EMAIL PROTECTED] wrote:
I'm not particularly worried about missing a potential improvement
in the plan during the first command after a change is committed.
Me neither. I just noticed it, so I brought it up.
If the invalidation were something that *had* to be accounted for,
Please see the HOT version 6.3 patch posted on pgsql-patches.
I've implemented support for CREATE INDEX and CREATE INDEX
CONCURRENTLY based on the recent discussions. The implementation
is not yet complete and needs some more testing/work/discussion
before we can start considering it for review.
Isn't CREATE INDEX CONCURRENTLY prone to deadlock conditions?
I saw one with VACUUM today. But I think it can happen with other
commands like VACUUM FULL, CLUSTER, CREATE INDEX
CONCURRENTLY and so on. These commands conflict on the
ShareUpdateExclusiveLock held by CIC and hence would wait for
CIC
On 3/31/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Isn't CREATE INDEX CONCURRENTLY prone to deadlock conditions?
Can you give a specific example?
txn1 - CREATE INDEX CONCURRENTLY (takes ShareUpdateExclusiveLock)
txn2 - VACUUM ANALYZE (waits
On 3/31/07, Tom Lane [EMAIL PROTECTED] wrote:
Hmm ... only if it's already set inVacuum true ... there's a window
where it has not.
I wonder whether we could change CIC so that the reference
snapshot lists only transactions that are running and have already
determined their serializable
Tom Lane wrote:
I'm getting tired of repeating this, but: the planner doesn't use a
snapshot. System catalogs run on SnapshotNow.
I am really sorry if I sound foolish here. I am NOT suggesting
that we use snapshot to read system catalogs. I understand
that system catalogs run on SnapshotNow
On 3/30/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
What about doing
PREPARE myplan select ... ;
outside of a transaction? Will this be executed inside a transaction?
I checked that. PREPARE runs with ActiveSnapshot set.
Thanks,
Pavan
--
EnterpriseDB http://www.enterprisedb.com
On 3/30/07, Tom Lane [EMAIL PROTECTED] wrote:
That might work, but it doesn't seem to address the core objection:
there's no mechanism to cause the query to be replanned once the
snapshot is new enough, because no relcache inval will happen. So
most likely existing backends will keep using
On 3/30/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
My idea was to store a list of xids together with the cached plan that
are assumed to be uncommitted according to the IndexSnapshot. The query
is replanned if, upon execution, the IndexSnapshot assumes that one of
these xids is committed.
On 3/31/07, Simon Riggs [EMAIL PROTECTED] wrote:
On Fri, 2007-03-30 at 13:54 -0400, Tom Lane wrote:
Hm. So anytime we reject a potentially useful index as being not valid
yet, we mark the plan as only good for this top-level transaction?
That seems possibly workable --- in particular it
On 3/30/07, Tom Lane [EMAIL PROTECTED] wrote:
I do not think you can assume that the plan won't be used later with
some older snapshot. Consider recursive plpgsql functions for a
counterexample: the inner occurrence might be the first to arrive at
a given line of the function, hence the first
On 3/29/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
Yes, but the non-index plan PREPARE generated will be used until the end
of the session, not only until the end of the transaction.
Frankly, I don't know how this works, but are you sure that the plan will
be used until the end of the session
On 3/29/07, Tom Lane [EMAIL PROTECTED] wrote:
It will replan at the first use of the plan after seeing the relcache
inval sent by commit of the index-creating transaction. If you have
two separate transactions to create an index and then mark it valid
later, everything's fine because there
Sorry to start another thread while we are still discussing CREATE
INDEX design, but I need help/suggestions to finish the patch on
time for 8.3
We earlier thought that CREATE INDEX CONCURRENTLY (CIC)
would be simpler to do because of the existing waits in CIC.
But one major problem with CIC is
On 3/29/07, Gregory Stark [EMAIL PROTECTED] wrote:
Besides, it seems if people are
happy to have indexes take a long time to build they could just do a
concurrent build.
I think we discussed this earlier. One of the downsides of CIC is that
it needs two complete heap scans. Apart from that
On 3/30/07, Simon Riggs [EMAIL PROTECTED] wrote:
Pavan, ISTM you have misunderstood Tom slightly.
Oh, yes. Now that I re-read Tom's comment, his plan invalidation
design and code, I understand things better.
Having the index invisible to all current transactions is acceptable.
Ok.
On 3/23/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
It's slightly different for the HOT-chains created by the transaction
which is creating the index. We should index the latest version of the
row, which is not yet committed. But that's OK because when CREATE INDEX
commits this latest version
On 3/28/07, Tom Lane [EMAIL PROTECTED] wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Couldn't you store the creating transaction's xid in pg_index, and
let other transaction check that against their snapshot like they
would for any tuple's xmin or xmax?
What snapshot? I keep having to
On 3/28/07, Simon Riggs [EMAIL PROTECTED] wrote:
Set it at the end, not the beginning.
At the end of what? It does not help to set it at the end of CREATE
INDEX because the transaction may not commit immediately. In
the meantime, many new transactions may start with
transaction id
On 3/29/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
Tom, please correct me if I am wrong. But ISTM that this idea might
work in this context. In get_relation_info(), we would check if the
xcreate xid stored in pg_index for the index under consideration is
seen as committed
On 3/28/07, Tom Lane [EMAIL PROTECTED] wrote:
It seems a bit brute-force. Why didn't you use SearchSysCache(INDEXRELID)
the same as RelationInitIndexAccessInfo does?
I tried that initially, but it gets into infinite recursion during initdb.
And what's the point of
the extra tuple copy
On 3/26/07, Tom Lane [EMAIL PROTECTED] wrote:
It might be feasible to have RelationReloadClassinfo re-read the
pg_index row and apply only the updates for specific known-changeable
columns. The stuff it's worried about is the subsidiary data such
as support function fmgr lookup records, but
While experimenting with the proposed CREATE INDEX support with
HOT, I realized that SI invalidations are not sent properly for pg_index
updates.
I noticed the following comment in relcache.c
/*
* RelationReloadClassinfo - reload the pg_class row (only)
*
* This function is used only for
On 3/26/07, Tom Lane [EMAIL PROTECTED] wrote:
Hmm ... actually, CREATE INDEX CONCURRENTLY gets this wrong already, no?
I suspect that sessions existing at the time C.I.C is done will never
see the new index as valid, unless something else happens to make them
drop and rebuild their relcache
On 3/23/07, Hannu Krosing [EMAIL PROTECTED] wrote:
My argument is that it's enough to index only the LIVE tuple which
is at the end of the chain if we don't use the new index for queries
in transactions which were started before CREATE INDEX.
You mean, which were started before CREATE
On 3/23/07, Florian G. Pflug [EMAIL PROTECTED] wrote:
Why exactly can't a SERIALIZABLE transaction use the index it created
itself? If you add a pointer to the root of all HOT update chains where
either the HEAD is alive, or some tuple is visible to the transaction
creating the index,
On 3/22/07, Csaba Nagy [EMAIL PROTECTED] wrote:
Speaking with Pavan off-list, he seems to think that only CREATE
INDEX is outside a transaction, not the other DDL flavors of it, because
they are generally acquiring an exclusive lock. So, in that sense it is
possibly acceptable to me, although still
On 3/21/07, Bruce Momjian [EMAIL PROTECTED] wrote:
A different idea is to flag the _index_ as using HOT for the table or
not, using a boolean in pg_index. The idea is that when a new index is
created, it has its HOT boolean set to false and indexes all tuples and
ignores HOT chains. Then
On 3/22/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
When CREATE INDEX starts, it acquires ShareLock on the table.
At this point we may have one or more HOT-update chains in the
table. Tuples in this chain may be visible to one or more running
transactions
On 3/23/07, Simon Riggs [EMAIL PROTECTED] wrote:
The ShareLock taken by CREATE INDEX guarantees all transactions that
wrote data to the table have completed and that no new data can be added
until after the index build commits. So the end of the chain is visible
to CREATE INDEX and won't
On 3/21/07, Simon Riggs [EMAIL PROTECTED] wrote:
It should do this, but it's probably worth posting a TODO of minor items
like this, otherwise we'll lose focus on the major items.
Well, I didn't add anything new here. VACUUM validates the
number of index entries and heap entries. With HOT I
On 3/21/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Bruce Momjian wrote:
I have read the HOT discussion and wanted to give my input. The major
issue is that CREATE INDEX might require a HOT chain to be split apart
if one of the new indexed columns changed in the HOT chain.
To expand a
On 3/21/07, Bruce Momjian [EMAIL PROTECTED] wrote:
I am worried that will require CREATE INDEX to wait for a long time.
Not unless there are long running transactions. We are not waiting
for the lock, but only for the current transactions to finish.
Is the pg_index xid idea too
On 3/21/07, Merlin Moncure [EMAIL PROTECTED] wrote:
On 3/21/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
It seems much simpler to me to do something like this. But the important
question is whether the restriction that CREATE INDEX cannot
be run in a transaction block is acceptable?
yikes
On 3/21/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Effectively, my idea is not to chill/break the HOT chains during index
creation, but rather to abandon them and wait for VACUUM to clean them
up.
My idea is much closer to the idea of a bit per index on every tuple,
except the tuple xmax and
On 3/22/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Grzegorz Jaskiewicz wrote:
any idea how this patch is going to play with HOT? Or should I just
give it a spin and see if my world collapses :D
I've run tests with both patches applied. I haven't tried with the
latest HOT-versions, but
The version 5.0 of HOT WIP patch is posted on pgsql-patches. This
fixes the VACUUM FULL issue with HOT. In all the earlier versions,
I'd disabled VACUUM FULL.
When we move the HOT-chain, we move the chains but don't carry
the HOT_UPDATED or HEAP_ONLY flags and insert as many index
entries as
Simon Riggs wrote:
We *must* make CREATE INDEX CONCURRENTLY work with HOT. The good news is
I think we can without significant difficulty.
Yeah, I think CREATE INDEX CONCURRENTLY is much easier to solve. Though
I am not completely convinced that we can do it without many changes
to CREATE
There are a few things I realized over the weekend while going
through the code:
1. It looks like a bad idea to use ALTER TABLE .. to chill a table
because ALTER TABLE takes an AccessExclusive lock on the table.
But it would still be a good idea to have ALTER TABLE .. to turn
HOT-updates ON/OFF.
Heikki Linnakangas wrote:
Pavan Deolasee wrote:
2. Heikki suggested an approach where we add a byte
to tuple header and track HOT-ness of different indexes.
The idea looks good but had a downside of increasing tuple
header and complexity.
We would only need the extra byte in HOT-updated
Simon Riggs wrote:
This problem is solved by moving the wait (for all transactions in
reference snapshot to finish) so that it is now between the first and
second scans, as described.
During the second Vscan we would prune each block, so the only remaining
tuple in the block when the
How do we move forward with the CREATE INDEX issue with
HOT ? There are quite a few suggestions and objections.
Can we please discuss and decide on the plan? I am very
comfortable with the current state of HOT, the results
are encouraging and I hope this issue does not become
a showstopper.
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
While creating an index, if a HEAP_ONLY tuple is found,
CREATE INDEX [CONCURRENTLY] fails with an error and the
user needs to SET HOT OFF and then try again. While turning
HOT off, the entire table is CHILLed, holding AccessExclusive
Simon Riggs wrote:
As a result of the issues, I think Pavan is playing safe, to make sure
there is *an* option, so that we can build upwards from there. The
proposal is pragmatism only, while we discuss other approaches.
Absolutely true. I agree that CHILLing the table with AccessExclusive
Simon Riggs wrote:
We need to be clear that we already have a solution to CREATE INDEX
CONCURRENTLY. Do you agree that we do? Does anyone see a problem with
the posted design for that?
Hopefully it is only CREATE INDEX that we need to think about.
I agree. Let's first decide whether it's
What is the safest way to access/modify the pg_class attribute
and still avoid any race conditions with the other backends?
A specific example is: To solve the CREATE INDEX problem with
HOT, I am thinking of adding (along with other things) a pg_class
boolean attribute, say hot_update_enable.
Tom Lane wrote:
In what context are you proposing to do that, and won't this
high-strength lock in itself lead to deadlocks?
The whole thing sounds exceedingly ugly anyway --- for example
what happens if the backend doing the CREATE INDEX fails and
is therefore unable to clear the flag
Tom Lane wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
Any thoughts on the overall approach?
Fragile and full of race conditions :-(.
Yes, it looks a bit complex. But IMHO we can get around that.
Do you have any ideas in mind about doing that ?
I thought from the beginning
that CREATE
Heikki Linnakangas wrote:
Tom Lane wrote:
What if we only applied
HOT to primary-key indexes, so that there was certainly not more than
one index per table that the property applies to?
The main objective of HOT is to enable retail vacuum of HOT-updated
tuples. Doing the above would make
Tom Lane wrote:
I've developed the attached patch against HEAD, and no longer see any
funny behavior. Would appreciate it if you'd test some more, though.
The patch works for me. With the patch applied, I don't see the
weird errors in the pgbench and other customized tests that I
used to
Please see the version 4.4 of HOT WIP patch posted on pgsql-patches.
I have fixed a couple of bugs in the earlier version posted. Other than
that there are not any significant changes in the patch.
The row-level fragmentation had a bug where we were
unintentionally sorting the line pointers
On 3/10/07, Tom Lane [EMAIL PROTECTED] wrote:
I've been banging away on this since yesterday, and I think I've
achieved a full understanding of what's going on. There are three or
four different-looking pathologies but they all seem to arise from
the same problem: the update-chain-moving code
On 3/10/07, Tom Lane [EMAIL PROTECTED] wrote:
Also, we know this case works because it already is working: in the
situation where VACUUM happens to visit and remove the DEAD tuple(s)
before reaching the RECENTLY_DEAD tuples that link forward to them,
it treats the RECENTLY_DEAD tuples as a
On 3/10/07, Tom Lane [EMAIL PROTECTED] wrote:
Although this shouldn't happen anymore after fixing the chaining
conditions, I'm inclined to introduce an additional test to verify that
the starting tuple is actually MOVED_OFF after we finish the chain move.
If not, give up on repair_frag the
On 3/10/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
scan_heap() would usually have collected the DEAD tuple in the
offsets_free list. How do you plan to check if the tuple is in the middle
of a chain which has a RECENTLY_DEAD tuple before the tuple under check?
Don't we need to collect the TID
Please see HOT WIP patch, version 4.1 posted on -patches.
There are no significant changes since the version 4.0 patch that
I posted a week back.
This patch includes some optimizations for efficiently looking
up LP_DELETEd tuples. I have used the recent changes made by
Tom/Heikki which
Pavan Deolasee wrote:
Thanks a lot, Tom. It seems to work fine for me. I will do some
more tests and report if I see any issue.
The problem mentioned before is hard to reproduce with the
suggested change, but it has not completely gone away. I have
seen it again on CVS HEAD with the patch
Hi,
I am currently working on getting HOT and VACUUM FULL to work
together. I hit upon a bug which I initially thought was
something that HOT had introduced. But I can reproduce
it with CVS HEAD as well.
Here is what I do: create a simple table with
three columns and one index. Insert a
Tom Lane wrote:
Please check if this makes it go away for you --- I'm a bit busy
at the moment.
Thanks a lot, Tom. It seems to work fine for me. I will do some
more tests and report if I see any issue. Btw, the patch as per
your suggestion is attached.
Thanks,
Pavan
Simon Riggs wrote:
- VACUUM FULL - The best solution, for now, is to make VACUUM FULL
perform a reindex on all indexes on the table. Chilling may require us
to modify considerably more index entries than previously. UPDATE WAIT
would be very good, but probably should wait for the next
Simon Riggs wrote:
On Mon, 2007-03-05 at 21:39 +0530, Pavan Deolasee wrote:
Currently each tuple is moved individually. You'd need to inspect the
whole HOT chain on a page, calculate space for that and then try to move
them all in one go. I was originally thinking that would be a problem
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
Shared Buffers   Elapsed   IO rate (from vmstat)
--------------   -------   ---------------------
400MB            101 s     122 MB/s
2MB              100 s
1MB               97 s
768KB             93 s
512KB             86 s
256KB             77
Tom Lane wrote:
Nope, Pavan's nailed it: the problem is that after using a buffer, the
seqscan leaves it with usage_count = 1, which means it has to be passed
over once by the clock sweep before it can be re-used. I was misled in
the 32-buffer case because catalog accesses during startup had
Hi All,
The version 4.0 of HOT patch is very close to the state where
we can start considering it for testing for correctness as well
as benchmarking, if there is sufficient interest to give it a
chance for 8.3
I have very little clue about what the community thinks about
HOT and the patch, but I
On 3/2/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
- Another problem with the current HOT patch is that it generates
tuple level fragmentation while reusing LP_DELETEd items when
the new tuple is of smaller size than the original one. Heikki
On 3/2/07, Tatsuo Ishii [EMAIL PROTECTED] wrote:
Just for curiosity, I would like to ask you why you need to modify
pgbench. pgbench can accept custom SQL scripts...
Oh yes, there was no real need to modify pgbench.
Thanks,
Pavan
--
EnterpriseDB http://www.enterprisedb.com
Hi All,
Here are some preliminary numbers with the HOT 4.0 patch that I sent
out earlier today. These are only indicative results and should not be
used to judge the performance of HOT in general. I have intentionally
used the setup favorable to HOT. The goal here is to point out the best
Hi All,
Please see the version 4.0 of HOT WIP patch posted on pgsql-patches.
I have been having some trouble posting the patch since this afternoon;
I have tried multiple times, so I am not sure if it has made it to
-patches yet. If it doesn't get through on -patches this time either,
I will retry after a few hours.
On 3/1/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
accounts 157895 (initial size) 49284 (increase)
accounts_pkey 19709 (initial size) 19705 (increase)
Just to clarify, the relation size and increase is in number of blocks.
Thanks,
Pavan
--
EnterpriseDB http
Merlin Moncure wrote:
On 3/1/07, Pavan Deolasee [EMAIL PROTECTED] wrote:
seems pretty solid except for one possible problem...at one point when
I dropped then later added the index on 'abalance', I got spammed
'WARNING: found a HOT-updated tuple' from psql prompt.
That's intentional. We
Zeugswetter Andreas ADI SD wrote:
accounts 157895 (initial size) 49284 (increase)
accounts_pkey 19709 (initial size) 19705 (increase)
Just to clarify, the relation size and increase is in number
of blocks.
The numbers are quite impressive :-) Have you removed the
On 2/27/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
- What do we do with the LP_DELETEd tuples at VACUUM time?
In this patch, we are collecting them and vacuuming them like
any other dead tuples. But is that the best thing to do?
Since they don't need index
On 2/24/07, Joshua D. Drake [EMAIL PROTECTED] wrote:
Pavan Deolasee: HOT ( never met him )
I am working on it with a target of 8.3. I have been posting WIP patches
for a couple of weeks now. One of the objectives of publishing WIP
patches, even though they are not well tested (for correctness
Please see the attached WIP HOT patch - version 3.2. It now
implements the logic for reusing heap-only dead tuples. When a
HOT-update chain is pruned, the heap-only tuples are marked
LP_DELETE. The lp_offset and lp_len fields in the line pointer are
maintained.
When a backend runs out of free
On 2/22/07, Zeugswetter Andreas ADI SD [EMAIL PROTECTED] wrote:
I very much like Hannu's idea, but it does present some issues.
I too liked Hannu's idea initially, but Tom raised a valid
concern that it does not address the basic issue of root
tuples. According to the idea, a DEAD
On 2/22/07, Zeugswetter Andreas ADI SD [EMAIL PROTECTED] wrote:
Yes, that's one option. Though given a choice I would rather waste
four bytes in the heap page than insert a new index entry.
No question about that. My point was that it would mean wasting
the 2 (2 must be enough for a slot
On 2/22/07, Zeugswetter Andreas ADI SD [EMAIL PROTECTED] wrote:
I think you are still misunderstanding me, sorry if I am not being
clear enough. When the row is HOT-updated it is too late. You do not
have room in the root for the line pointer.
I think the term "line pointer" is causing
On 2/21/07, Simon Riggs [EMAIL PROTECTED] wrote:
I very much like Hannu's idea, but it does present some issues.
I too liked Hannu's idea initially, but Tom raised a valid concern
that it does not address the basic issue of root tuples. According
to the idea, a DEAD root tuple can be used
On 2/20/07, Hannu Krosing [EMAIL PROTECTED] wrote:
On Tue, 2007-02-20 at 12:08, Pavan Deolasee wrote:
What do you do if there are no live tuples on the page? Will this
un-HOTify the root and free all other tuples in the HOT chain?
Yes. The HOT-updated status of the root
On 2/20/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
When following a HOT-update chain from the index fetch, if we notice
that the root tuple is dead and it is HOT-updated, we try to prune the
chain to the smallest possible length. To do that, the share lock is
upgraded
On 2/20/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
... Yes. The HOT-updated status of the root and all intermediate
tuples is cleared and their respective ctid pointers are made to
point to themselves.
Doesn't that destroy the knowledge that they form
On 2/20/07, Bruce Momjian [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Recently dead means still live to somebody, so those tids better not
change either. But I don't think that's what he meant. I'm more
worried about the deadlock possibilities inherent in trying to upgrade a
buffer lock.
On 2/19/07, Tom Lane [EMAIL PROTECTED] wrote:
Peter Eisentraut [EMAIL PROTECTED] writes:
On Monday, 19 February 2007, at 13:12, Alvaro Herrera wrote:
I don't understand -- what problem do you have with NO OPERATION? It
seemed a sound idea to me.
It seems nonorthogonal. What if only some of the
Reposting - looks like the message did not get through on the first
attempt. My apologies if multiple copies are received.
This is the next version of the HOT WIP patch. Since the last patch that
I sent out, I have implemented the HOT-update chain pruning mechanism.
When following a HOT-update
On 2/17/07, Lukas Kahwe Smith [EMAIL PROTECTED] wrote:
I have emailed Gregory, Pavan and Simon only 2 days ago, so I am not
surprised not to have gotten feedback yet.
Oops, I haven't received the email you mentioned. Can you resend it
to me?
Thanks,
Pavan
--
EnterpriseDB
On 2/16/07, Hannu Krosing [EMAIL PROTECTED] wrote:
On Wed, 2007-02-14 at 10:41, Tom Lane wrote:
Hannu Krosing [EMAIL PROTECTED] writes:
OTOH, for same-page HOT tuples, we have the command and transaction ids
stored twice: first as cmax,xmax of the old tuple and then as cmin,xmin of
On 2/16/07, Zeugswetter Andreas ADI SD [EMAIL PROTECTED] wrote:
As described, you've made that problem worse because you're trying to
say we don't know which of the chain entries is pointed at.
There should be a flag, say HOT_CHAIN_ENTRY, for the tuple the
It's called HEAP_UPDATE_ROOT
On 2/16/07, Zeugswetter Andreas ADI SD [EMAIL PROTECTED] wrote:
Oh sorry. Thanks for the clarification. IMHO HEAP_UPDATE_ROOT should be
renamed for this meaning then (or what does ROOT mean here?).
Maybe HEAP_UPDATE_CHAIN?
Yes, you are right. There is some disconnect between what Simon
On 2/15/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
Do we actually ever want to remove dead tuples from the middle of the
chain? If a tuple in the middle of the chain is dead, surely every tuple
before it in the chain is dead as well, and we want to remove them as
well. I'm thinking,
This is a WIP patch based on the recent posting by Simon and the
discussions thereafter. We are trying to do one piece at a time, and the
intention is to post the work ASAP so that we can get early and
continuous feedback from the community. We could then incorporate those
suggestions in the next WIP