On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee
wrote:
> Sorry, I did not mean to suggest that you set it up wrongly, I was just
> trying to point out that the test case itself may not be very practical.
That is cool, no problem! I was just trying to explain why those tests were
On 21 March 2017 at 20:04, Bruce Momjian wrote:
> Yes, but once it is written it will take years before those bits can be
> used on most installations.
Well the problem isn't most installations. On most installations it
should be pretty straightforward to check the oldest
On Wed, Mar 22, 2017 at 3:44 PM, Pavan Deolasee
wrote:
>
> This looks quite weird to me. Obviously these numbers are completely
> non-comparable. Even the time for VACUUM FULL goes up with every run.
>
> May be we can blame it on AWS instance completely, but the pattern
On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
wrote:
>
>>
>
> Please find attached rebased patches.
>
Few comments on 0005_warm_updates_v18.patch:
1.
@@ -806,20 +835,35 @@ hashbucketcleanup(Relation rel, Bucket
cur_bucket, Buffer bucket_buf,
{
..
- if (callback &&
On Wed, Mar 22, 2017 at 3:51 AM, Mithun Cy
wrote:
>
> CREATE INDEX testindx ON testtab (col1, col2, col3, col4, col5, col6,
> col7, col8, col9);
> Performance measurement tests: ran 12 times to eliminate run-to-run
> latencies.
> ==
> VACUUM
On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee
wrote:
>
>
> BTW may I request another test with the attached patch? In this patch, we
> check if the PageIsFull() even before deciding which attributes to check
> for modification. If the page is already full, there is
On Tue, Mar 21, 2017 at 8:10 PM, Robert Haas wrote:
> If the WAL writing hides the loss, then I agree that's not a big
> concern. But if the loss is still visible even when WAL is written,
> then I'm not so sure.
The test table schema was taken from earlier tests what
Bruce Momjian wrote:
> I don't think it makes sense to try and save bits and add complexity
> when we have no idea if we will ever use them,
If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason
On Tue, Mar 21, 2017 at 11:54:25PM +0530, Pavan Deolasee wrote:
> We can also save HEAP_WARM_UPDATED flag since this is required only for
> abort-handling case. We can find a way to push that information down to the
> old
> tuple if UPDATE aborts and we detect the broken chain. Again, not fully
On Tue, Mar 21, 2017 at 11:45:09PM +0530, Pavan Deolasee wrote:
> Early in the discussion we talked about allowing multiple changes per
> WARM chain if they all changed the same index and were in the same
> direction so there were no duplicates, but it was complicated. There
> was
On Tue, Mar 21, 2017 at 07:05:15PM +0100, Petr Jelinek wrote:
> >> Well, I don't want to rule it out either, but if we do a release to
> >> which you can't pg_upgrade, it's going to be really painful for a lot
> >> of users. Many users can't realistically upgrade using pg_dump, ever.
> >> So
On Tue, Mar 21, 2017 at 2:03 PM, Petr Jelinek
wrote:
> This is why I like the idea of pluggable storage, if we ever get that it
> would buy us ability to implement completely different heap format
> without breaking pg_upgrade.
You probably won't be surprised to
On Tue, Mar 21, 2017 at 01:04:14PM -0400, Robert Haas wrote:
> > I know we have talked about it, but not recently, and if everyone else
> > is fine with it, I am too, but I have to ask these questions.
>
> I think that's a good question. I previously expressed similar
> concerns. On the one
On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas wrote:
> I think that's a good question. I previously expressed similar
> concerns. On the one hand, it's hard to ignore the fact that, in the
> cases where this wins, it already buys us a lot of performance
> improvement. On
On Tue, Mar 21, 2017 at 09:25:49AM -0400, Robert Haas wrote:
> On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
> > TBH I see many artificial scenarios here. It will be very useful if he can
> > rerun the query with some of these restrictions lifted. I'm all for
> > addressing whatever we can, but
Robert Haas wrote:
> On Tue, Mar 21, 2017 at 10:01 AM, Amit Kapila wrote:
> > Sure, we can try that. I think we need to try it with
> > synchronous_commit = off, otherwise, WAL writes completely overshadows
> > everything.
>
> synchronous_commit = off is a much more
On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas wrote:
>
> I think that very wide columns and highly indexed tables are not
> particularly unrealistic, nor do I think updating all the rows is
> particularly unrealistic.
Ok. But those who update 10M rows in a single
Amit Kapila wrote:
> I think it is because heap_getattr() is not that cheap. We have
> noticed the similar problem during development of scan key push down
> work [1].
One possibility to reduce the cost of that is to use whole tuple deform
instead of repeated individual heap_getattr() calls.
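To make the cost difference concrete, here is a standalone toy model of the trade-off: pulling attributes one at a time (as repeated heap_getattr() calls do for variable-width columns) re-walks the tuple from the start, while a single deform pass visits each field once. The tuple layout and every name here are invented for illustration; this is not PostgreSQL's actual tuple format or executor code.

```c
#include <assert.h>
#include <string.h>

/* Toy tuple format: each field is a 1-byte length followed by payload. */

/* Fetch one attribute by walking from the start of the tuple;
 * *steps counts how many field boundaries we skip. */
static const unsigned char *
toy_getattr(const unsigned char *tup, int attno, int *len, long *steps)
{
    const unsigned char *p = tup;
    for (int i = 0; i < attno; i++)
    {
        p += 1 + p[0];          /* skip length byte plus payload */
        (*steps)++;
    }
    *len = p[0];
    return p + 1;
}

/* Deform the whole tuple in one pass, recording every field. */
static void
toy_deform(const unsigned char *tup, int natts,
           const unsigned char **vals, int *lens, long *steps)
{
    const unsigned char *p = tup;
    for (int i = 0; i < natts; i++)
    {
        lens[i] = p[0];
        vals[i] = p + 1;
        p += 1 + p[0];
        (*steps)++;
    }
}
```

For a four-field tuple in this model, fetching every attribute individually crosses 0+1+2+3 = 6 field boundaries versus 4 for the single deform pass; with wide tables the gap grows quadratically.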
On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
wrote:
>> Yeah. So what's the deal with this? Is somebody working on figuring
>> out a different approach that would reduce this overhead? Are we
>> going to defer WARM to v11? Or is the intent to just ignore the 5-10%
On Tue, Mar 21, 2017 at 6:56 AM, Amit Kapila wrote:
>> Hmm, that test case isn't all that synthetic. It's just a single
>> column bulk update, which isn't anything all that crazy, and 5-10%
>> isn't nothing.
>>
>> I'm kinda surprised it made that much difference, though.
On Sun, Mar 19, 2017 at 12:15 AM, Pavan Deolasee
wrote:
>> It seems like an important invariant for WARM is that any duplicate
>> index values ought to have different TIDs (actually, it's a bit
>> stricter than that, since btrecheck() cares about simple binary
>>
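The invariant quoted above can be sketched as a standalone check: within one index, two entries with identical key bytes must not carry the same TID. This is a simplification; the real btrecheck() compares an index tuple against heap values, while this toy just scans a flat entry list using plain byte equality, and all the names are invented.

```c
#include <assert.h>
#include <string.h>

/* Toy index entry: a key and the heap TID it points at. */
typedef struct { const char *key; int tid; } ToyIndexEntry;

/* Return 1 if no two entries share both key bytes and TID. */
static int toy_invariant_holds(const ToyIndexEntry *e, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (strcmp(e[i].key, e[j].key) == 0 && e[i].tid == e[j].tid)
                return 0;       /* duplicate key with duplicate TID */
    return 1;
}
```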
Pavan Deolasee wrote:
> On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera
> wrote:
> > I didn't like this comment very much. But it's not necessary: you have
> > already given relcache responsibility for setting rd_supportswarm. The
> > only problem seems to be that you
On Mon, Mar 20, 2017 at 8:11 PM, Robert Haas wrote:
> On Sun, Mar 19, 2017 at 3:05 AM, Pavan Deolasee
> wrote:
> > On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas
> wrote:
>
> >>
> >> /me scratches head.
> >>
> >> Aren't
On Wed, Mar 15, 2017 at 3:44 PM, Pavan Deolasee
wrote:
> I couldn't find a better way without a lot of complex infrastructure. Even
> though we now have ability to mark index pointers and we know that a given
> pointer either points to the pre-WARM chain or post-WARM
On Tue, Mar 14, 2017 at 12:19 PM, Alvaro Herrera
wrote:
> Impressive results.
Agreed.
Pavan Deolasee wrote:
> BTW I wanted to share some more numbers from a recent performance test. I
> thought it's important because the latest patch has fully functional chain
> conversion code as well as all WAL-logging related pieces are in place
> too. I ran these tests on a box borrowed from
Pavan Deolasee wrote:
> On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera
> wrote:
> > I have already commented about the executor involvement in btrecheck();
> > that doesn't seem good. I previously suggested to pass the EState down
> > from caller, but that's not a
After looking at how index_fetch_heap and heap_hot_search_buffer
interact, I can't say I'm in love with the idea. I started thinking
that we should not have index_fetch_heap release the buffer lock only to
re-acquire it five lines later, so it should keep the buffer lock, do
the recheck and only
> @@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
> scan->heapRelation = heapRelation;
> scan->xs_snapshot = snapshot;
>
> + /*
> + * If the index supports recheck, make sure that index tuple is saved
> + * during index scans.
> + *
> + * XXX
On Wed, Mar 8, 2017 at 2:30 PM, Alvaro Herrera wrote:
> Not really -- it's a bit slower actually in a synthetic case measuring
> exactly the slowed-down case. See
> https://www.postgresql.org/message-id/cad__ougk12zqmwwjzim-yyud1y8jmmy6x9yectnif3rpp6h...@mail.gmail.com
On Wed, Mar 8, 2017 at 12:14 PM, Alvaro Herrera
wrote:
> Alvaro Herrera wrote:
>> Here's a rebased set of patches. This is the same Pavan posted; I only
>> fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.
>
> Jaime noted that I forgot the
On Wed, Mar 8, 2017 at 12:00 PM, Alvaro Herrera
wrote:
> Here's a rebased set of patches. This is the same Pavan posted; I only
> fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.
No attachments.
--
Robert Haas
EnterpriseDB:
Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.
--
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Fri, Feb 24, 2017 at 9:47 PM, Robert Haas wrote:
>
> And there are no bugs, right? :-)
Yeah yeah absolutely nothing. Just like any other feature committed to
Postgres so far ;-)
I need to polish the chain conversion patch a bit and also add missing
support for redo,
On Fri, Feb 24, 2017 at 3:42 PM, Robert Haas wrote:
>
>
> Wow, OK. In my view, that makes the chain conversion code pretty much
> essential, because if you had WARM without chain conversion then the
> visibility map gets more or less irrevocably less effective over time,
On Fri, Feb 24, 2017 at 3:23 PM, Robert Haas wrote:
>
> I don't immediately see how this will work with index-only scans. If
> the tuple is HOT updated several times, HOT-pruned back to a single
> version, and then the page is all-visible, the index entries are
>
On Fri, Feb 24, 2017 at 2:42 PM, Pavan Deolasee
wrote:
> Let's take an example. Say, we have a table (a int, b int, c text) and two
> indexes on first two columns.
>
>        H                  W
> (1, 100, 'foo') -> (1, 100, 'bar')
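The example above can be sketched as a toy model of the index side-effects of a WARM update, assuming the behaviour described in this thread: an update that changes only one indexed column adds a new entry to that column's index, while the other index keeps its single entry, and both entries point at the root of the same chain. All structures and names here are invented for illustration.

```c
#include <assert.h>

#define MAXENT 4

/* Toy index entry: a key and the chain-root TID it points at. */
typedef struct { int key; int root_tid; } Ent;
typedef struct { Ent ent[MAXENT]; int n; } ToyIndex;

static void toy_insert(ToyIndex *idx, int key, int root_tid)
{
    idx->ent[idx->n].key = key;
    idx->ent[idx->n].root_tid = root_tid;
    idx->n++;
}

/* WARM update of column b only: no new entry for the index on a,
 * one new entry for the index on b carrying the new key but the
 * same chain root. */
static void toy_warm_update_b(ToyIndex *index_b, int new_b, int root_tid)
{
    toy_insert(index_b, new_b, root_tid);
}
```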
On Fri, Feb 24, 2017 at 2:13 PM, Robert Haas wrote:
> On Thu, Feb 23, 2017 at 9:21 PM, Bruce Momjian wrote:
> >
>
> > I have what might be a stupid question. As I remember, WARM only allows
> > a single index-column change in the chain. Why are you
Bruce Momjian wrote:
> On Thu, Feb 23, 2017 at 03:45:24PM -0300, Alvaro Herrera wrote:
> > > and potentially trim the first HOT chain as those tuples become
> > > invisible.
> >
> > That can already happen even without WARM, no?
>
> Uh, the point is that with WARM those four early tuples can be
Bruce Momjian wrote:
> Well, let's walk through this. Let's suppose you have three updates
> that stay on the same page and don't update any indexed columns --- that
> would produce a HOT chain of four tuples. If you then do an update that
> changes an indexed column, prior to this patch, you
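The walk-through above can be modeled as a toy chain: three no-longer-visible versions plus the live tail, with pruning compacting the chain back down to one tuple. This is not the real heap pruning machinery, just a sketch with invented names.

```c
#include <assert.h>

/* Toy HOT chain: an array of versions in update order; pruning removes
 * versions no longer visible to any snapshot, keeping the live tail. */
typedef struct { int visible; } ToyTuple;

/* Compact the chain in place, returning the new length. */
static int toy_prune(ToyTuple *chain, int len)
{
    int live = 0;
    for (int i = 0; i < len; i++)
        if (chain[i].visible)
            chain[live++] = chain[i];
    return live;
}
```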
Bruce Momjian wrote:
> As I remember, WARM only allows
> a single index-column change in the chain. Why are you seeing such a
> large performance improvement? I would have thought it would be that
> high if we allowed an unlimited number of index changes in the chain.
The second update in a
On Wed, Feb 1, 2017 at 10:46:45AM +0530, Pavan Deolasee wrote:
> > contains a WARM tuple. Alternate ideas/suggestions and review of the
> design
> > are welcome!
>
> t_infomask2 contains one last unused bit,
>
>
> Umm, WARM is using 2 unused bits from t_infomask2. You mean
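For readers unfamiliar with infomask flags, this is the general pattern under discussion: carving flag bits out of a 16-bit mask. The bit positions and names below are placeholders for illustration, not the values the WARM patch actually assigns in t_infomask2.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit assignments; the real patch picks its own. */
#define TOY_WARM_TUPLE    0x4000   /* tuple belongs to a WARM chain */
#define TOY_LATEST_TUPLE  0x8000   /* tuple is the last in its chain */

static int  toy_is_warm(uint16_t mask)    { return (mask & TOY_WARM_TUPLE) != 0; }
static void toy_set_warm(uint16_t *mask)  { *mask |= TOY_WARM_TUPLE; }
static void toy_clear_warm(uint16_t *mask){ *mask &= (uint16_t) ~TOY_WARM_TUPLE; }
```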
On Tue, Jan 31, 2017 at 04:52:39PM +0530, Pavan Deolasee wrote:
> The other critical bug I found, which unfortunately exists in the master too,
> is the index corruption during CIC. The patch includes the same fix that I've
> proposed on the other thread. With these changes, WARM stress is running
Hi Tom,
On Wed, Feb 1, 2017 at 3:51 AM, Tom Lane wrote:
>
> (I'm a little more concerned by Alvaro's apparent position that WARM
> is a done deal; I didn't think so.
Are there any specific aspects of the design that you're not comfortable
with? I'm sure there could be some
Pavan Deolasee wrote:
> Do you think we should apply the patch to remove ItemPointerCopy()? I will
> rework the HeapTupleHeaderGetNextTid() after that. Not that it depends on
> removing ItemPointerCopy(), but decided to postpone it until we make a call
> on that patch.
My inclination is not to.
Alvaro Herrera writes:
> Tom Lane wrote:
>> I think what we ought to do about this is invent additional API
>> functions, say
>>
>> Oid CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
>> CatalogIndexState indstate);
>> void
Tom Lane wrote:
> The source of both of those problems is that in some places, we
> did CatalogOpenIndexes and then used the CatalogIndexState for
> multiple tuple inserts/updates before doing CatalogCloseIndexes.
> The patch dealt with these either by not touching them, just
> leaving the
Alvaro Herrera writes:
> Tom Lane wrote:
>> However, the patch misses an
>> important part of such an abstraction layer by not also converting
>> catalog-related simple_heap_delete() calls into some sort of
>> CatalogTupleDelete() operation. It is certainly a
On Tue, Jan 31, 2017 at 7:21 PM, Alvaro Herrera
wrote:
>
> > > +#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
> > > +do { \
> > > + AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
> > > + ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
> > > +}
On Wed, Feb 1, 2017 at 9:36 AM, Alvaro Herrera wrote:
>> I propose that we should finish the job by inventing CatalogTupleDelete(),
>> which for the moment would be a trivial wrapper around
>> simple_heap_delete(), maybe just a macro for it.
>>
>> If there's no
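The proposed wrapper is trivial by design: it does nothing beyond forwarding to simple_heap_delete() today, but gives catalog callers a single seam if index maintenance ever needs to hook in. The sketch below stubs out Relation and ItemPointer with toy stand-ins so it compiles on its own; only the wrapper's shape reflects the proposal.

```c
#include <assert.h>

/* Toy stand-ins for the real types; only the wrapper shape matters. */
typedef struct { int deletes; } Relation;
typedef struct { int blk, off; } ItemPointerData;
typedef ItemPointerData *ItemPointer;

static void simple_heap_delete(Relation *rel, ItemPointer tid)
{
    (void) tid;
    rel->deletes++;            /* stand-in for the real heap delete */
}

/* The proposed abstraction: trivial today, a hook point tomorrow. */
static void CatalogTupleDelete(Relation *rel, ItemPointer tid)
{
    simple_heap_delete(rel, tid);
}
```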
Tom Lane wrote:
> BTW, the reason I think it's good cleanup is that it's something that my
> colleagues at Salesforce also had to do as part of putting PG on top of a
> different storage engine that had different ideas about index handling.
> Essentially it's providing a bit of abstraction as to
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> (I'm a little more concerned by Alvaro's apparent position that WARM
>> is a done deal; I didn't think so. This particular change seems like
>> good cleanup anyhow, however.)
> Agreed.
BTW, the reason I think
Andres Freund writes:
> Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
> fairly widely used in extensions - it seems like a pretty harsh change
> to not leave some backward compatibility layer in place.
If an extension is doing that, it is probably
On 2017-01-31 19:10:05 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2017-01-31 14:10:01 -0300, Alvaro Herrera wrote:
>
> > > Hmm, I was thinking we would get rid of CatalogUpdateIndexes altogether.
> > > Two of the callers are in the new routines (which I propose to rename to
> > >