cause
of HOT pruning and page defragmentation. If we look just at the WAL
overhead caused by 2PC, the reduction is close to 50%. I took numbers
using simple 1PC for reference, to understand the overhead of 2PC.
HEAD (1PC): 382 bytes / transaction
Thanks,
Pavan
--
Pavan Deolasee
it. I've revised the patch by incrementing the
TWOPHASE_MAGIC identifier.
Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
reduce_gid_wal_v2.patch
Description: Binary data
--
Sent via pgsql-hackers mailing list
eady). That still
fits in the same aligned width, on both 32-bit and 64-bit machines. A new
version is attached.
Thanks,
Pavan
reduce_gid_wal_v3.patch
Description: Binary data
nished sentence here.
>
> + targetRecOff = tmpRecPtr % XLOG_BLCKSZ;
> targetRecOff, pageHeaderSize and targetPagePtr could be declared
> inside directly the new while loop.
>
Thanks Michael for reviewing the patch. I've fixed these issues and a new
version is attached.
table branches, IMHO it will be a good
idea to fix this before the upcoming minor releases.
Thanks,
Pavan
ld like me to put together a SQL script based on this
description.
Thanks,
Pavan
On Wed, Aug 31, 2016 at 10:38 PM, Claudio Freire
wrote:
>
> On Wed, Aug 31, 2016 at 1:45 PM, Pavan Deolasee
> wrote:
>
>> We discussed a few ideas to address the "Duplicate Scan" problem. For
>> example, we can teach Index AMs to discard any duplicate (key, CTI
On Thu, Sep 1, 2016 at 1:33 AM, Bruce Momjian wrote:
> On Wed, Aug 31, 2016 at 10:15:33PM +0530, Pavan Deolasee wrote:
> > Instead, what I would like to propose and the patch currently implements
> is to
> > restrict WARM update to once per chain. So the first non-HOT update t
likely to be pursued, in which case I can start reviewing that patch.
Thanks,
Pavan
"waiting on author" but then the design draft may not get enough
attention. So I'm leaving it in the current state for others to look at.
Thanks,
Pavan
,
will post that patch as a separate thread.
Thanks,
Pavan
don't know if better and more elegant ways exist to
automatically assign module/file/msgids. I couldn't think of any without
making excessive changes to all source files and hence did what I did. That
does not mean better ways don't exist.
Thanks,
Pavan
ame with __FILE__. I believe we need a
unique integer constant to make it a fast, O(1) lookup. I couldn't find any
other way to do that when I wrote the facility and hence did what I did.
Thanks,
Pavan
this issue as soon as possible.
>
>
Indeed, it's a bug. Thanks Stas for tracking it down and Michael for the
review and for checking other places. I also looked at the patch and it seems
fine to me. FWIW I looked at all the other places where TwoPhaseFileHeader is
referenced and they look safe to
as we scan the table and see many more tuples
in the overflow region than we provisioned for. There will be some
challenges in converting the representation mid-way, especially in terms of
memory allocation, but I think those can be sorted out if we think the
idea has merit.
Thanks,
Pavan
On Thu, Sep 8, 2016 at 8:42 PM, Claudio Freire
wrote:
> On Thu, Sep 8, 2016 at 11:54 AM, Pavan Deolasee
> wrote:
> > For example, for a table with 60 bytes wide tuple (including 24 byte
> > header), each page can approximately have 8192/60 = 136 tuples. Say we
> > provis
tion of both representations as well
as performance implications on index vacuum. I don't expect to see any
major difference in either heap scans.
Thanks,
Pavan
leanup buffer is
not required. Later we could extend it further such that multiple workers can
vacuum a single index by splitting the work on physical boundaries, but
even that will ensure a clear demarcation of work and hence no contention on
index blocks.
ISTM this will require further work and it p
On Tue, Sep 6, 2016 at 8:39 PM, Anastasia Lubennikova <
a.lubennik...@postgrespro.ru> wrote:
> 06.09.2016 07:44, Pavan Deolasee:
>
> 2. I don't understand the difference between PageGetItemHeapHeaderOnly()
> and PageGetItemHeap(). They seem to do exactly the same
an anticipate when the vacuum starts and use the current
representation, or (b) we can detect at run time and do a one-time switch
between representations. You may argue that managing two representations is
clumsy, and I agree, but the code is completely isolated and probably not
more than a few hundred lines.
Thanks,
Pavan
On Wed, Sep 14, 2016 at 8:47 AM, Pavan Deolasee
wrote:
>
>>
> Sawada-san offered to reimplement the patch based on what I proposed
> upthread. In the new scheme of things, we will allocate a fixed size bitmap
> of length 2D bits per page where D is average page density of li
gdb.
>
>
Can we not do this as gdb macros? My knowledge is rusty in this area and
lately I'm using the LLDB debugger (which probably does not have something
equivalent), but I believe gdb allows you to write useful macros. As a
bonus, you can then use them even for inspecting c
On Wed, Sep 14, 2016 at 3:46 PM, Pavan Deolasee
wrote:
>
>
> lately I'm using the LLDB debugger (which probably does not have something
> equivalent),
>
And I was so clueless about lldb's powerful scripting interface. For
example, you can write something like this in bms_ut
On Wed, Sep 14, 2016 at 5:32 PM, Robert Haas wrote:
> On Wed, Sep 14, 2016 at 5:45 AM, Pavan Deolasee
> wrote:
> > Another interesting bit about these small tables is that the largest used
> > offset for these tables never went beyond 291 which is the value of
> > MaxHe
here bitmap for every block or a set of blocks
will either be recorded or not, depending on whether a bit is set for the
range. If the bitmap exists, the indirection map will give out the offset
into the larger bitmap area. Seems similar to what you described.
Thanks,
Pavan
y blocks or block ranges have at least one dead tuple, to
know if it's worthwhile to have some sort of indirection. Together that can
tell us how much compression can be achieved and let us choose the
optimal representation.
Thanks,
Pavan
y, the output
went to the logfile instead of appearing at the debugger prompt. Maybe I did
something wrong, or maybe that's not inconvenient for those who use it
regularly.
So yeah, no objections to the patch. I was happy to discover what I did and
thought of sharing assuming others mig
On Thu, Sep 15, 2016 at 7:55 PM, Pavan Deolasee
wrote:
>
> (lldb) script print print_bms_members(lldb.frame.FindVariable ("a"))
> nwords = 1 bitmap: 0x200
>
>
Or even this if lldb.frame.FindVariable() is pushed inside the function:
(lldb) script print print_bms_membe
belong to). Given that usable offsets are also just 13 bits, the
TID array needs only 4 bytes per TID instead of 6.
Many people are working on implementing different ideas, and I can
volunteer to write a patch on these lines unless someone wants to do that.
Thanks,
Pavan
On Fri, Sep 16, 2016 at 9:09 AM, Pavan Deolasee
wrote:
>
> I also realised that we can compact the TID array in step (b) above
> because we only need to store 17 bits for block numbers (we already know
> which 1GB segment they belong to). Given that usable offsets are also just
On Fri, Sep 16, 2016 at 7:03 PM, Robert Haas wrote:
> On Thu, Sep 15, 2016 at 11:39 PM, Pavan Deolasee
> wrote:
> > But I actually wonder if we are over engineering things and
> overestimating
> > cost of memmove etc. How about this simpler approach:
>
> Don't fo
d a
bunch of xlog-related stuff from htup.h to this new file. Hard to tell
whether there were other users before that and they were all dropped in this
one commit, or in various commits leading up to it.
Thanks,
Pavan
On Tue, Sep 20, 2016 at 8:34 PM, Tom Lane wrote:
> Pavan Deolasee writes:
> > I happened to notice this comment in src/include/storage/itemptr.h
>
> > * Note: because there is an item pointer in each tuple header and index
> > * tuple header on disk, it's very
mes an XID, but
does not do any write activity at all? A completely artificial workload, but
good enough to tell us if and how much the patch helps in the best case. We
can probably do that with a simple txid_current() call, right?
2. Each subsequent pgbench run will bloat the tables. Now that may not be
not find any
discussion or evidence of why SizeOfIptrData magic is no longer necessary
and to see if it was an unintentional change at some point.
> > While htup.h refactoring happened in 9.5, I don't see any point in back
> > patching this.
>
> Agreed. Pus
On Tue, Sep 6, 2016 at 8:39 PM, Anastasia Lubennikova <
a.lubennik...@postgrespro.ru> wrote:
> 06.09.2016 07:44, Pavan Deolasee:
>
>
> I don't know what to do with the CF entry itself. I could change the
> status to "waiting on author" but then the design draf
the same treatment
as far as pruning is concerned, but since they cause fresh index inserts, I
wonder if we need some mechanism to cleanup the dead line pointers and dead
index entries. This will become more important if we do something to
convert WARM chains into HOT chains, something that only VACUUM can do in
the design I've proposed so far.
Thanks,
Pavan
is intact, then the query
won't return problematic FSMs. Of course, if the fix is applied to the
standby and it is restarted, then corrupt FSMs can be detected.
>
> At the same time, I have translated your script into a TAP test, I
> found that more useful when testing..
>
> Tha
ize(oid) / block_size) != 0)
>
>
Yes, that will be enough once the fix is in place.
I think this is a major bug and I would appreciate any ideas for getting the
patch into committable shape before the next minor release goes out. We
probably need a committer to get interested in this to make pr
e reports so far involved standbys, and the bug can also hit a standalone
master during crash recovery, I wonder if a race can occur even on a live
system.
Thanks,
Pavan
ossible with the page locked in EXCLUSIVE
mode. So maybe there is another problem somewhere, or crash recovery may
have left the FSM in an inconsistent state.
Anyway, we seem good to go with the patch.
Thanks,
Pavan
erhead for normal cases (see my original
patch). If you have better ideas, they might be worth pursuing.
Thanks,
Pavan
index keys.
> There is no direct link between indirect index tuple and heap tuple, only
> logical link using PK. Thus, you would anyway have to recheck.
>
>
I agree. Also, I think the recheck mechanism will have to be something like
what I wrote for WARM i.e. only checking for index quals
) as corrupt_fsm
FROM pg_class
WHERE relkind = 'r'
) b
WHERE b.corrupt_fsm = true;
Thanks,
Pavan
Also, AFAICS we will need to backport
pg_truncate_visibility_map() to older releases because unless the VM is
truncated along with the FSM, VACUUM may not scan all pages and the FSM for
those pages won't be recorded.
Thanks,
Pavan
On Thu, Oct 20, 2016 at 11:34 AM, Michael Paquier wrote:
> On Thu, Oct 20, 2016 at 2:50 PM, Pavan Deolasee
> wrote:
> > Actually, if we could add an API which can truncate FSM to the given heap
> > block, then the user may not even need to run VACUUM, which could be
> cost
's thoughts
> about which use-cases he expects indirect indexes to work better than WARM.
>
>
Yes, it will be interesting to see that comparison. Maybe we need both, or
maybe just one. Even better, maybe they complement each other. I'll also put
in some thoughts in this area.
initial WARM patch is to show significant
improvement even with just 50% WARM updates, yet keep the patch simple. But
there are of course several things we can do to improve it further and
support other index types.
Thanks,
Pavan
#define NonLeafNodesPerPage (BLCKSZ / 2 - 1)
#define LeafNodesPerPage (NodesPerPage - NonLeafNodesPerPage)
/*
* Number of FSM "slots" on a FSM page. This is what should be used
* outside fsmpage.c.
*/
#define SlotsPerFSMPage LeafNodesPerPage
Thanks,
Pavan
size')::BIGINT
>
>
+1 for doing that.
Thanks,
Pavan
oncurrently updated.
So I'm not sure even that will catch every possible case.
Thanks,
Pavan
run some very long tests with data validation and haven't found
any new issues with the most recent patches.
Thanks,
Pavan
check-world with --enable-tap-tests passes.
Comments/suggestions?
Thanks,
Pavan
remove_ip_posid_blkid_ref_v3.patch
Description: Binary data
On Thu, Feb 23, 2017 at 11:30 PM, Bruce Momjian wrote:
> On Wed, Feb 1, 2017 at 10:46:45AM +0530, Pavan Deolasee wrote:
> > > contains a WARM tuple. Alternate ideas/suggestions and review of
> the
> > design
> > > are welcome!
> >
> >
tes
insert a new index entry *only* in the affected index. That itself does a
good bit for performance.
So to answer your question: yes, joining two HOT chains via WARM is much
cheaper because it results in creating new index entries only for the
affected indexes.
Thanks,
Pavan
ght be
worthwhile to see if the patch causes any regression in these scenarios,
though I think it will be minimal or zero.
Thanks,
Pavan
te that in this example the chain has just one tuple, which
will typically be the case, but the algorithm can deal with the case where
there are multiple tuples with matching index keys.
Hope this helps.
Thanks,
Pavan
's why when a tuple is WARM updated, we carry
that information in the subsequent versions even when later updates are HOT
updates. The chain conversion algorithm will handle this by clearing those
bits and thus allowing index-only scans again.
Thanks,
Pavan
. Similarly,
we will convert all WARM chains to HOT chains and then check for
all-visibility of the page.
Thanks,
Pavan
a few times now and
haven't heard any objection so far. Well, that could either mean that
nobody has read those emails seriously or that there is general acceptance
of the idea. I am assuming the latter :-)
Thanks,
Pavan
ing VACUUM which conflicts with CIC on the
relation lock, I don't see any risk of incorrectly skipping pages that the
second scan should have scanned.
Comments?
Thanks,
Pavan
On Thu, Mar 2, 2017 at 9:55 PM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 2/22/17 08:38, Pavan Deolasee wrote:
> > One reason why these macros are not always used is because they
> > typically do assert-validation to ensure ip_posid has a valid value
On Wed, Mar 8, 2017 at 7:33 AM, Robert Haas wrote:
> On Tue, Mar 7, 2017 at 4:26 PM, Stephen Frost wrote:
> > Right, that's what I thought he was getting at and my general thinking
> > was that we would need a way to discover if a CIC is ongoing on the
> > relation and therefore heap_page_prune(
ke concurrent VM set) is in progress. We could use what Stephen
suggested upthread to find that state. But right now it's hard to reason
about, because there is nothing on either side, so we don't know what gets
impacted by an aggressive VM set and how.
Thanks,
Pavan
--
Pavan Deolasee
ing your idea?
I wonder if we should instead invent something similar to IndexRecheck(),
but instead of running ExecQual(), this new routine would compare the index
values computed from the given HeapTuple against the given IndexTuple. ISTM
that for this to work we'll need to modify all callers of index_getnext()
and teach them to invoke the AM-specific recheck method if the
xs_tuple_recheck flag is set to true by index_getnext().
Thanks,
Pavan
On Tue, Mar 14, 2017 at 7:19 PM, Alvaro Herrera
wrote:
> Pavan Deolasee wrote:
>
> > BTW I wanted to share some more numbers from a recent performance test. I
> > thought it's important because the latest patch has fully functional
> chain
> > conversion code as
On Tue, Mar 14, 2017 at 7:16 PM, Alvaro Herrera
wrote:
> Pavan Deolasee wrote:
> > On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <
> alvhe...@2ndquadrant.com>
> > wrote:
>
> > > I have already commented about the executor involvement in btrecheck();
On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 3:44 PM, Pavan Deolasee
> wrote:
> > I couldn't find a better way without a lot of complex infrastructure.
> Even
> > though we now have ability to mark index pointers and we know that a
On Mon, Mar 20, 2017 at 8:11 PM, Robert Haas wrote:
> On Sun, Mar 19, 2017 at 3:05 AM, Pavan Deolasee
> wrote:
> > On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas
> wrote:
>
> >>
> >> /me scratches head.
> >>
> >> Aren't
On Wed, Mar 15, 2017 at 12:46 AM, Alvaro Herrera
wrote:
> Pavan Deolasee wrote:
> > On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <
> alvhe...@2ndquadrant.com>
> > wrote:
>
> > > I have already commented about the executor involvement in btrecheck();
On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera
wrote:
> > @@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
> > scan->heapRelation = heapRelation;
> > scan->xs_snapshot = snapshot;
> >
> > + /*
> > + * If the index supports recheck, make sure that index tuple is
>
the header of each WAL record. What about just using the backup block
> header?
>
>
+1. We can also steal a few bits from the ForkNumber field in the backup
block header if required.
Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
; Moving that allocation out of the outer for loop it's currently in is
> *nothing* to do with performance, but about making the code easier to
> read.
>
>
+1.
Thanks,
Pavan
g. I wonder if it's a bug in the GiST index
build, though I could not spot anything at first glance. FWIW changing
the char[] from 20 to 22 or 24 does not cause any failure in the rangetypes
test. So I am thinking it's some alignment issue (mine is a 64-bit build).
Thanks,
Pavan
test_range_spgist where ir -|- int4range(100,500);
ir
---
[90,100)
[500,510)
(2 rows)
So the last INSERT suddenly makes one row disappear via the index scan
though it's still reachable via a seq scan. I tried looking at the SP-GiST
code but clearly I don't understand it a whole
On Wed, Jun 25, 2014 at 10:39 PM, Heikki Linnakangas <
hlinnakan...@vmware.com> wrote:
>
> I came up with the attached. There were several bugs:
>
>
I tested for the original bug report and the patch definitely fixes that. I
don't feel qualified enough with SP-GiST to really comment on the other
bugs
On Wed, Jul 2, 2014 at 11:11 AM, Pavan Deolasee
wrote:
> On Wed, Jun 25, 2014 at 10:39 PM, Heikki Linnakangas <
> hlinnakan...@vmware.com> wrote:
>
>>
>> I came up with the attached. There were several bugs:
>>
>>
> I tested for the original bug report
I'm trying to understand what it would take to have this patch in an
acceptable form before the next commitfest. Both Abhijit and Andres have
done some extensive review of the patch and have given many useful
suggestions to Rahila. While she has incorporated most of them, I feel we
are still some di
change backup block header so that it contains the info for a
> "hole", e.g., location that a "hole" starts. No?
>
>
AFAICS it's not required if we compress the stream of BkpBlock and the block
data. The current mechanism of constructing the additional rdata chain
items takes care of the hole anyway.
Thanks,
Pavan
possible. In this case, since there is just one
index, as soon as we check the second column we know neither HOT nor
WARM is possible and we will return early. It might complicate the API a
lot, but I can give it a shot if that's what is needed to make progress.
Any other ideas?
Thanks,
Pavan
e second idea would require, is that it can
> easily mask bugs.
Agreed. That's probably one reason why Alvaro wrote the patch to start with.
I'll give the first of those two options a try.
Thanks,
Pavan
only on a narrow test
> case,
Hmm. I am kind of surprised you say that, because I never thought it was a
narrow test case we are targeting here. But maybe I'm wrong.
Thanks,
Pavan
On Tue, Mar 21, 2017 at 10:34 PM, Robert Haas wrote:
> On Tue, Mar 21, 2017 at 12:49 PM, Bruce Momjian wrote:
> > On Tue, Mar 21, 2017 at 09:25:49AM -0400, Robert Haas wrote:
> >> On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
> >> > TBH I see many artificial s
E and we have just enough
space to do a HOT update this time, but I think that's too narrow).
Thanks,
Pavan
0001_interesting_attrs_v19.patch
Description: Binary data
On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee
wrote:
>
>
> BTW may I request another test with the attached patch? In this patch, we
> check if the PageIsFull() even before deciding which attributes to check
> for modification. If the page is already full, there is hardly any ch
ce completely, but the pattern in your
tests looks very similar, where the number slowly and steadily keeps going
up. If you do a complete retest but run v18/v19 first and then run master,
maybe we'll see the complete opposite picture?
Thanks,
Pavan
On Wed, Mar 22, 2017 at 4:53 PM, Mithun Cy
wrote:
> On Wed, Mar 22, 2017 at 3:44 PM, Pavan Deolasee
> wrote:
> >
> > This looks quite weird to me. Obviously these numbers are completely
> > non-comparable. Even the time for VACUUM FULL goes up with every run.
> >
Thanks Amit. v19 addresses some of the comments below.
On Thu, Mar 23, 2017 at 10:28 AM, Amit Kapila
wrote:
> On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila
> wrote:
> > On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
> > wrote:
> >>
> >>>
ly when there is more than one
index on the table, and sometimes those checks will go to waste. I am OK if
we want to provide a table-specific knob to disable WARM, but I'm not sure
others would like that idea.
Thanks,
Pavan
On Thu, Mar 23, 2017 at 4:08 PM, Pavan Deolasee
wrote:
>
>
> On Thu, Mar 23, 2017 at 3:02 PM, Amit Kapila
> wrote:
>
>>
>>
>> That sounds like you are dodging the actual problem. I mean you can
>> put that same PageIsFull() check in master code as well a
On Thu, Mar 23, 2017 at 11:44 PM, Mithun Cy
wrote:
> Hi Pavan,
> On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
> wrote:
> > Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
> > RAM, attached SSD) and results are shown below. But I think it is
&
On Thu, Mar 23, 2017 at 7:53 PM, Amit Kapila
wrote:
> On Thu, Mar 23, 2017 at 3:44 PM, Pavan Deolasee
>
> >
> > Yes, this is a very fair point. The way I proposed to address this
> upthread
> > is by introducing a set of threshold/scale GUCs specific to WARM. So
>
On Fri, Mar 24, 2017 at 4:04 PM, Amit Kapila
wrote:
> On Fri, Mar 24, 2017 at 12:25 AM, Pavan Deolasee
> wrote:
> >
> >
> > On Thu, Mar 23, 2017 at 7:53 PM, Amit Kapila
> >
> > The general sense I've got
> > here is that we're ok to push s
rue for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned, unless we find a way to avoid duplicate index entries.
Thanks,
Pavan
do a WARM update or not in heap_update, we rely only on binary
comparison. Could it happen that for two different binary heap values, we
still compute the same index attribute? Even when expression indexes are
not supported?
Thanks,
Pavan
I happened to notice a stale comment at the very beginning of vacuumlazy.c.
ISTM we forgot to fix it when we introduced the FSM. With the FSM, vacuum no
longer needs to track per-page free space info. I propose the attached fix.
Thanks,
Pavan
g that in Postgres. I've run these tests on OS X; I will try on some Linux
platform too.
Thanks,
Pavan
On Tue, Mar 28, 2017 at 1:59 AM, Robert Haas wrote:
> On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
> wrote:
> > It's quite hard to say that until we see many more benchmarks. As author
> of
> > the patch, I might have got repetitive with my benchmarks. But I'v
On Tue, Mar 28, 2017 at 7:49 AM, Bruce Momjian wrote:
> On Mon, Mar 27, 2017 at 04:29:56PM -0400, Robert Haas wrote:
> > On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
> > wrote:
> > > It's quite hard to say that until we see many more benchmarks. As
> author