Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Amit Kapila
On Thu, Dec 6, 2018 at 9:32 PM Robert Haas  wrote:
>
> On Thu, Dec 6, 2018 at 10:53 AM Pavel Stehule  wrote:
> > On Thu, Dec 6, 2018 at 16:26 Robert Haas wrote:
> >> On Thu, Dec 6, 2018 at 10:23 AM Pavel Stehule  
> >> wrote:
> >> > I have a problem imagining it. When the fill factor is low, there is a
> >> > high risk of heavy fragmentation - or somebody has to do
> >> > defragmentation.
> >>
> >> I don't understand this.
> >
> > I don't know whether zheap has any tools for eliminating fragmentation of
> > free space within a page. But I expect that after some set of updates,
> > when record sizes vary, the free space on a page will become fragmented.
> > Usually, when you have less memory, fragmentation happens faster.
>
> Still not sure I completely understand, but it's true that zheap
> sometimes needs to compact free space on a page.  For example, if
> you've got a page with a 100-byte hole, and somebody updates a tuple
> to make it 2 bytes bigger, you've got to shift that tuple and any that
> precede it backwards to reduce the size of the hole to 98 bytes, so
> that you can fit the new version of the tuple.  If, later, somebody
> shrinks that tuple back to the original size, you've now got 100 bytes
> of free space on the page, but they are fragmented: 98 bytes in the
> "hole," and 2 bytes following the newly-shrunk tuple.  If someone
> tries to insert a 100-byte tuple in that page, we'll need to
> reorganize the page a second time to bring all that free space back
> together in a single chunk.
>
> In my view, and I'm not sure if this is how the code currently works,
> we should have just one routine to do a zheap page reorganization
> which can cope with all possible scenarios.  I imagine that you would
> give it the page as it currently exists plus a "minimum tuple size"
> for one or more tuples on the page (which must not be smaller than the
> current size of that tuple, but could be bigger).  It then reorganizes
> the page so that every tuple for which a minimum size was given
> consumes exactly that amount of space, every other tuple consumes the
> minimum possible amount of space, and the remaining space goes into
> the hole.  So if you call this function with no minimal tuple sizes,
> it does a straight defragmentation; if you give it minimum tuple
> sizes, then it rearranges the page to make it suitable for a pending
> in-place update of those tuples.
>

Yeah, the code is also along these lines; however, as of now, the API
takes input for one tuple (its offset number and the delta space, i.e.,
the additional space required by an update that makes the tuple bigger).
As of now, we don't have a requirement for multiple tuples, but if there
is a case, I think the API can be adapted.  One more thing we do during
repair-fragmentation is to arrange tuples in their offset order so that
future sequential scans can be faster.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 17:02 Robert Haas wrote:

> On Thu, Dec 6, 2018 at 10:53 AM Pavel Stehule 
> wrote:
> > On Thu, Dec 6, 2018 at 16:26 Robert Haas wrote:
> >> On Thu, Dec 6, 2018 at 10:23 AM Pavel Stehule 
> wrote:
> >> > I have a problem imagining it. When the fill factor is low, there is a
> >> > high risk of heavy fragmentation - or somebody has to do defragmentation.
> >>
> >> I don't understand this.
> >
> > I don't know whether zheap has any tools for eliminating fragmentation of
> > free space within a page. But I expect that after some set of updates,
> > when record sizes vary, the free space on a page will become fragmented.
> > Usually, when you have less memory, fragmentation happens faster.
>
> Still not sure I completely understand, but it's true that zheap
> sometimes needs to compact free space on a page.  For example, if
> you've got a page with a 100-byte hole, and somebody updates a tuple
> to make it 2 bytes bigger, you've got to shift that tuple and any that
> precede it backwards to reduce the size of the hole to 98 bytes, so
> that you can fit the new version of the tuple.  If, later, somebody
> shrinks that tuple back to the original size, you've now got 100 bytes
> of free space on the page, but they are fragmented: 98 bytes in the
> "hole," and 2 bytes following the newly-shrunk tuple.  If someone
> tries to insert a 100-byte tuple in that page, we'll need to
> reorganize the page a second time to bring all that free space back
> together in a single chunk.
>
> In my view, and I'm not sure if this is how the code currently works,
> we should have just one routine to do a zheap page reorganization
> which can cope with all possible scenarios.  I imagine that you would
> give it the page as it currently exists plus a "minimum tuple size"
> for one or more tuples on the page (which must not be smaller than the
> current size of that tuple, but could be bigger).  It then reorganizes
> the page so that every tuple for which a minimum size was given
> consumes exactly that amount of space, every other tuple consumes the
> minimum possible amount of space, and the remaining space goes into
> the hole.  So if you call this function with no minimal tuple sizes,
> it does a straight defragmentation; if you give it minimum tuple
> sizes, then it rearranges the page to make it suitable for a pending
> in-place update of those tuples.
>
> Actually, I think Amit and I discussed further refining this by
> splitting the page reorganization function in half.  One half would
> make a plan for where to put each tuple on the page following the
> reorg, but would not actually do anything.  That would be executed
> before entering a critical section, and might fail if the requested
> minimum tuple sizes can't be satisfied.  The other half would take the
> previously-constructed plan as input and perform the reorganization.
> That would be done in the critical section.
>
>
Thank you for the reply

Pavel

-- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Robert Haas
On Thu, Dec 6, 2018 at 10:53 AM Pavel Stehule  wrote:
> On Thu, Dec 6, 2018 at 16:26 Robert Haas wrote:
>> On Thu, Dec 6, 2018 at 10:23 AM Pavel Stehule  
>> wrote:
>> > I have a problem imagining it. When the fill factor is low, there is a
>> > high risk of heavy fragmentation - or somebody has to do
>> > defragmentation.
>>
>> I don't understand this.
>
> I don't know whether zheap has any tools for eliminating fragmentation of
> free space within a page. But I expect that after some set of updates, when
> record sizes vary, the free space on a page will become fragmented. Usually,
> when you have less memory, fragmentation happens faster.

Still not sure I completely understand, but it's true that zheap
sometimes needs to compact free space on a page.  For example, if
you've got a page with a 100-byte hole, and somebody updates a tuple
to make it 2 bytes bigger, you've got to shift that tuple and any that
precede it backwards to reduce the size of the hole to 98 bytes, so
that you can fit the new version of the tuple.  If, later, somebody
shrinks that tuple back to the original size, you've now got 100 bytes
of free space on the page, but they are fragmented: 98 bytes in the
"hole," and 2 bytes following the newly-shrunk tuple.  If someone
tries to insert a 100-byte tuple in that page, we'll need to
reorganize the page a second time to bring all that free space back
together in a single chunk.

In my view, and I'm not sure if this is how the code currently works,
we should have just one routine to do a zheap page reorganization
which can cope with all possible scenarios.  I imagine that you would
give it the page as it currently exists plus a "minimum tuple size"
for one or more tuples on the page (which must not be smaller than the
current size of that tuple, but could be bigger).  It then reorganizes
the page so that every tuple for which a minimum size was given
consumes exactly that amount of space, every other tuple consumes the
minimum possible amount of space, and the remaining space goes into
the hole.  So if you call this function with no minimal tuple sizes,
it does a straight defragmentation; if you give it minimum tuple
sizes, then it rearranges the page to make it suitable for a pending
in-place update of those tuples.

Actually, I think Amit and I discussed further refining this by
splitting the page reorganization function in half.  One half would
make a plan for where to put each tuple on the page following the
reorg, but would not actually do anything.  That would be executed
before entering a critical section, and might fail if the requested
minimum tuple sizes can't be satisfied.  The other half would take the
previously-constructed plan as input and perform the reorganization.
That would be done in the critical section.
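
To make the two-phase idea concrete, here is a minimal standalone C sketch
of it (the names and data structures are made up for illustration and are
not the actual zheap code): phase one computes a layout and can fail cleanly
before anything is touched; phase two applies the plan and is the part that
would run inside the critical section.

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SPACE 8192          /* usable space in this toy model */
#define MAX_TUPLES 128

typedef struct
{
    int    ntuples;
    size_t size[MAX_TUPLES];     /* current size of each tuple, in offset order */
} ToyPage;

typedef struct
{
    size_t new_size[MAX_TUPLES]; /* planned size of each tuple */
    size_t new_off[MAX_TUPLES];  /* planned starting position of each tuple */
} ReorgPlan;

/*
 * Phase 1: plan the reorganization.  min_size[i] > 0 requests at least that
 * much space for tuple i (not smaller than its current size); min_size == NULL
 * means "just defragment".  Returns false, without changing anything, if the
 * request cannot fit.
 */
static bool
plan_page_reorg(const ToyPage *page, const size_t *min_size, ReorgPlan *plan)
{
    size_t off = 0;

    for (int i = 0; i < page->ntuples; i++)
    {
        size_t want = page->size[i];

        if (min_size && min_size[i] > want)
            want = min_size[i];          /* reserve room for the pending update */

        plan->new_size[i] = want;
        plan->new_off[i] = off;          /* tuples stay in offset-number order */
        off += want;
    }
    return off <= PAGE_SPACE;            /* whatever is left becomes the hole */
}

/*
 * Phase 2: apply a previously built plan.  In the real thing this would move
 * the tuple bodies on the page and would run inside the critical section.
 */
static void
execute_page_reorg(ToyPage *page, const ReorgPlan *plan)
{
    for (int i = 0; i < page->ntuples; i++)
        page->size[i] = plan->new_size[i];
}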

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 16:26 Robert Haas wrote:

> On Thu, Dec 6, 2018 at 10:23 AM Pavel Stehule 
> wrote:
> > I have a problem imagining it. When the fill factor is low, there is a
> > high risk of heavy fragmentation - or somebody has to do defragmentation.
>
> I don't understand this.
>

I don't know whether zheap has any tools for eliminating fragmentation of
free space within a page. But I expect that after some set of updates, when
record sizes vary, the free space on a page will become fragmented.
Usually, when you have less memory, fragmentation happens faster.

Pavel


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Robert Haas
On Thu, Dec 6, 2018 at 10:23 AM Pavel Stehule  wrote:
> I have a problem imagining it. When the fill factor is low, there is a high
> risk of heavy fragmentation - or somebody has to do defragmentation.

I don't understand this.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 16:12 Robert Haas wrote:

> On Thu, Dec 6, 2018 at 2:11 AM Pavel Stehule 
> wrote:
> >> > I am sorry, I know zero about zheap - does zheap use fill factor? if
> yes, why?
> >>
> >> Good question.  It is required because tuples can expand (an update can
> >> make a tuple longer).  In such cases, we try to perform an in-place update
> >> if there is space in the page.  So, having a fillfactor can help.
> >
> > Thank you for reply :)
>
> I suspect fillfactor is *more* likely to help with zheap than with the
> current heap.  With the current heap, you need to leave enough space
> to store entire copies of the tuples to try to get HOT updates.  But
> with zheap you only need enough room for the anticipated growth in the
> tuples.
>
> For instance, let's say that you plan to update 30% of the tuples in a
> table and make them 1 byte larger.  With the heap, you'd need to leave
> ~ 3/13 = 23% of each page empty, plus a little bit more to allow for
> the storage growth.  So to make all of those updates HOT, you would
> probably need a fillfactor of roughly 75%.  Unfortunately, that will
> make your table larger by one-third, which is terrible.
>
> On the other hand, with zheap, you only need to leave enough room for
> the increased amount of tuple data.  If you've got 121 items per page,
> as in Mithun's statistics, that means you need 121 bytes of free space
> to do all the updates in place.  That means you need a fillfactor of 1
> - (121/8192) = ~98%.  To be conservative you can set a fillfactor of
> say 95%.  Your table will only get slightly bigger, and all of your
> updates will be in place, and everything will be great.  At least with
> respect to fillfactor -- zheap is not free of other problems.
>

I have a problem imagining it. When the fill factor is low, there is a high
risk of heavy fragmentation - or somebody has to do defragmentation.


> Of course, you don't really set fillfactor based on an expectation of
> a single round of tuple updates, but imagine that the workload goes on
> for a while, with tuples getting bigger and smaller again as the exact
> values being stored change.  In a heap table, you need LOTS of empty
> space on each page to get HOT updates.  In a zheap table, you need
> very little, because the updates are in place.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: zheap: a new storage format for PostgreSQL

2018-12-06 Thread Robert Haas
On Thu, Dec 6, 2018 at 2:11 AM Pavel Stehule  wrote:
>> > I am sorry, I know zero about zheap - does zheap use fill factor? if yes, 
>> > why?
>>
>> Good question.  It is required because tuples can expand (an update can
>> make a tuple longer).  In such cases, we try to perform an in-place update
>> if there is space in the page.  So, having a fillfactor can help.
>
> Thank you for reply :)

I suspect fillfactor is *more* likely to help with zheap than with the
current heap.  With the current heap, you need to leave enough space
to store entire copies of the tuples to try to get HOT updates.  But
with zheap you only need enough room for the anticipated growth in the
tuples.

For instance, let's say that you plan to update 30% of the tuples in a
table and make them 1 byte larger.  With the heap, you'd need to leave
~ 3/13 = 23% of each page empty, plus a little bit more to allow for
the storage growth.  So to make all of those updates HOT, you would
probably need a fillfactor of roughly 75%.  Unfortunately, that will
make your table larger by one-third, which is terrible.

On the other hand, with zheap, you only need to leave enough room for
the increased amount of tuple data.  If you've got 121 items per page,
as in Mithun's statistics, that means you need 121 bytes of free space
to do all the updates in place.  That means you need a fillfactor of 1
- (121/8192) = ~98%.  To be conservative you can set a fillfactor of
say 95%.  Your table will only get slightly bigger, and all of your
updates will be in place, and everything will be great.  At least with
respect to fillfactor -- zheap is not free of other problems.
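
For anyone who wants to check the arithmetic, here is a small C program that
restates the example above (30% of tuples updated, 1 extra byte each, an
8192-byte page, 121 items per page); it reproduces the ~23% / ~75% heap
figures (once a little slack for growth is allowed) and the ~98% zheap figure.

#include <stdio.h>

int main(void)
{
    double page_bytes     = 8192.0;
    double items_per_page = 121.0;
    double update_frac    = 0.30;   /* 30% of the tuples get updated */

    /* Heap: a HOT update needs room for a whole second copy of the tuple,
     * so roughly update_frac / (1 + update_frac) of the page must stay empty. */
    double heap_empty = update_frac / (1.0 + update_frac);
    printf("heap:  leave ~%.1f%% empty, fillfactor ~%.1f%% (before growth)\n",
           heap_empty * 100.0, (1.0 - heap_empty) * 100.0);

    /* zheap: an in-place update only needs the growth itself, so even updating
     * every tuple by one byte needs just items_per_page free bytes. */
    double zheap_empty = items_per_page / page_bytes;
    printf("zheap: leave ~%.1f%% empty, fillfactor ~%.1f%%\n",
           zheap_empty * 100.0, (1.0 - zheap_empty) * 100.0);
    return 0;
}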

Of course, you don't really set fillfactor based on an expectation of
a single round of tuple updates, but imagine that the workload goes on
for a while, with tuples getting bigger and smaller again as the exact
values being stored change.  In a heap table, you need LOTS of empty
space on each page to get HOT updates.  In a zheap table, you need
very little, because the updates are in place.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 8:08 Amit Kapila wrote:

> On Thu, Dec 6, 2018 at 12:30 PM Pavel Stehule 
> wrote:
> >
> > On Thu, Dec 6, 2018 at 7:55 Mithun Cy wrote:
> >>
> >> On Thu, Dec 6, 2018 at 11:13 AM Amit Kapila 
> wrote:
> >> >
> >> > On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule <
> pavel.steh...@gmail.com> wrote:
> >> > >
> >> > > On Thu, Dec 6, 2018 at 5:02 Mithun Cy <mithun...@enterprisedb.com> wrote:
> >> > >>
> >> > >> The COPY command seems to have improved very slightly with zheap,
> >> > >> both in WAL size and execution time. I also did some tests with
> >> > >> INSERT statements where I could see some regression in zheap compared
> >> > >> to heap with respect to execution time. I will reply here after
> >> > >> further investigation.
> >> > >>
> >> > >
> >> > > 20% of size reduction looks like effect of fill factor.
> >> > >
> >> >
> >> > I think it is because of smaller zheap tuple sizes.  Mithun can tell
> >> > more about setup whether he has used different fillfactor or anything
> >> > else which could lead to such a big difference.
> >>
> >> Yes, the default fillfactor is unaltered; zheap tuple sizes are smaller
> >> and each is aligned at 2 bytes.
> >>
> >
> > I am sorry, I know zero about zheap - does zheap use fill factor? if
> yes, why?
> >
>
> Good question.  It is required because tuples can expand (an update can
> make a tuple longer).  In such cases, we try to perform an in-place update
> if there is space in the page.  So, having a fillfactor can help.
>
>
Thank you for the reply :)

Pavel


-- 
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>


Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Amit Kapila
On Thu, Dec 6, 2018 at 12:30 PM Pavel Stehule  wrote:
>
> On Thu, Dec 6, 2018 at 7:55 Mithun Cy wrote:
>>
>> On Thu, Dec 6, 2018 at 11:13 AM Amit Kapila  wrote:
>> >
>> > On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule  
>> > wrote:
>> > >
>> > > On Thu, Dec 6, 2018 at 5:02 Mithun Cy wrote:
>> > >>
>> > >> The COPY command seems to have improved very slightly with zheap,
>> > >> both in WAL size and execution time. I also did some tests with INSERT
>> > >> statements where I could see some regression in zheap compared to heap
>> > >> with respect to execution time. I will reply here after further
>> > >> investigation.
>> > >>
>> > >
>> > > 20% of size reduction looks like effect of fill factor.
>> > >
>> >
>> > I think it is because of smaller zheap tuple sizes.  Mithun can tell
>> > more about setup whether he has used different fillfactor or anything
>> > else which could lead to such a big difference.
>>
>> Yes, the default fillfactor is unaltered; zheap tuple sizes are smaller
>> and each is aligned at 2 bytes.
>>
>
> I am sorry, I know zero about zheap - does zheap use fill factor? if yes, why?
>

Good question.  It is required because tuples can expand (an update can
make a tuple longer).  In such cases, we try to perform an in-place update
if there is space in the page.  So, having a fillfactor can help.
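
As a rough illustration of the point (a made-up function, not the actual
zheap code): an update that makes a tuple longer can only stay in place if
the page still has enough free space for the growth, which is exactly the
space that a lower fillfactor reserves.

#include <stdbool.h>
#include <stddef.h>

/* Toy decision: can the new, longer version of a tuple stay on its page? */
static bool
can_update_in_place(size_t page_free_space, size_t old_len, size_t new_len)
{
    if (new_len <= old_len)
        return true;                               /* no growth, always fits */
    return (new_len - old_len) <= page_free_space; /* need room only for the growth */
}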

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 7:55 Mithun Cy wrote:

> On Thu, Dec 6, 2018 at 11:13 AM Amit Kapila 
> wrote:
> >
> > On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule 
> wrote:
> > >
> > > On Thu, Dec 6, 2018 at 5:02 Mithun Cy wrote:
> > >>
> > >> The COPY command seems to have improved very slightly with zheap, both
> > >> in WAL size and execution time. I also did some tests with INSERT
> > >> statements where I could see some regression in zheap compared to heap
> > >> with respect to execution time. I will reply here after further
> > >> investigation.
> > >>
> > >
> > > 20% of size reduction looks like effect of fill factor.
> > >
> >
> > I think it is because of smaller zheap tuple sizes.  Mithun can tell
> > more about setup whether he has used different fillfactor or anything
> > else which could lead to such a big difference.
>
> Yes, the default fillfactor is unaltered; zheap tuple sizes are smaller and
> each is aligned at 2 bytes.
>
>
I am sorry, I know zero about zheap - does zheap use fill factor? If yes,
why? I thought it made sense just for the current format.

Regards

Pavel


> Length of each item. (all Items are identical)
> =
> postgres=# SELECT lp_len FROM
> zheap_page_items(get_raw_page('pgbench_zheap', 9)) limit 1;
>  lp_len
> 
> 102
> (1 row)
>
> postgres=# SELECT lp_len FROM
> heap_page_items(get_raw_page('pgbench_heap', 9)) limit 1;
>  lp_len
> 
> 121
> (1 row)
>
> Total tuples per page
> =
> postgres=# SELECT count(*) FROM
> zheap_page_items(get_raw_page('pgbench_zheap', 9));
>  count
> ---
> 76
> (1 row)
>
> postgres=# SELECT count(*) FROM
> heap_page_items(get_raw_page('pgbench_heap', 9));
>  count
> ---
> 61
> (1 row)
>
> because of this zheap takes less space as reported above.
>

>
> --
> Thanks and Regards
> Mithun Chicklore Yogendra
> EnterpriseDB: http://www.enterprisedb.com
>


Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Mithun Cy
On Thu, Dec 6, 2018 at 11:13 AM Amit Kapila  wrote:
>
> On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule  wrote:
> >
> > On Thu, Dec 6, 2018 at 5:02 Mithun Cy wrote:
> >>
> >> The COPY command seems to have improved very slightly with zheap, both
> >> in WAL size and execution time. I also did some tests with INSERT
> >> statements where I could see some regression in zheap compared to heap
> >> with respect to execution time. I will reply here after further
> >> investigation.
> >>
> >
> > 20% of size reduction looks like effect of fill factor.
> >
>
> I think it is because of smaller zheap tuple sizes.  Mithun can tell
> more about setup whether he has used different fillfactor or anything
> else which could lead to such a big difference.

Yes, the default fillfactor is unaltered; zheap tuple sizes are smaller and
each is aligned at 2 bytes.

Length of each item. (all Items are identical)
=
postgres=# SELECT lp_len FROM
zheap_page_items(get_raw_page('pgbench_zheap', 9)) limit 1;
 lp_len
--------
    102
(1 row)

postgres=# SELECT lp_len FROM
heap_page_items(get_raw_page('pgbench_heap', 9)) limit 1;
 lp_len
--------
    121
(1 row)

Total tuples per page
=
postgres=# SELECT count(*) FROM
zheap_page_items(get_raw_page('pgbench_zheap', 9));
 count
---
76
(1 row)

postgres=# SELECT count(*) FROM
heap_page_items(get_raw_page('pgbench_heap', 9));
 count
---
61
(1 row)

Because of this, zheap takes less space, as reported above.
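
A quick sanity check of that conclusion from the two numbers above (assuming
the same total row count in both tables, so relative table size is just the
inverse ratio of rows per page):

#include <stdio.h>

int main(void)
{
    double heap_rows_per_page  = 61.0;  /* from heap_page_items() above */
    double zheap_rows_per_page = 76.0;  /* from zheap_page_items() above */

    double relative_size = heap_rows_per_page / zheap_rows_per_page;
    printf("zheap table is ~%.0f%% of the heap size, i.e. ~%.0f%% smaller\n",
           relative_size * 100.0, (1.0 - relative_size) * 100.0);
    return 0;
}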


-- 
Thanks and Regards
Mithun Chicklore Yogendra
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Amit Kapila
On Thu, Dec 6, 2018 at 10:03 AM Pavel Stehule  wrote:
>
> On Thu, Dec 6, 2018 at 5:02 Mithun Cy wrote:
>>
>> The COPY command seems to have improved very slightly with zheap, both in
>> WAL size and execution time. I also did some tests with INSERT statements
>> where I could see some regression in zheap compared to heap with respect to
>> execution time. I will reply here after further investigation.
>>
>
> 20% of size reduction looks like effect of fill factor.
>

I think it is because of smaller zheap tuple sizes.  Mithun can tell us
more about the setup, and whether he has used a different fillfactor or
anything else that could lead to such a big difference.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Pavel Stehule
On Thu, Dec 6, 2018 at 5:02 Mithun Cy wrote:

> > On Thu, Mar 1, 2018 at 7:39 PM Amit Kapila 
> wrote:
>
> I did some testing for performance of COPY command for zheap against heap,
> here are my results,
> Machine : cthulhu (an 8-node NUMA machine with 500GB of RAM)
> server non-default settings: shared_buffers = 32GB, max_wal_size = 20GB,
> min_wal_size = 15GB
>
> Test tables and data:
> 
> I have used the pgbench_accounts table of the pgbench tool as the data
> source with 3 different scale factors: 100, 1000, 2000. Both the heap and
> zheap tables are lookalikes of pgbench_accounts.
>
> CREATE TABLE pgbench_zheap (LIKE pgbench_accounts) WITH
> (storage_engine='zheap');
> CREATE TABLE pgbench_heap (LIKE pgbench_accounts) WITH
> (storage_engine='heap');
>
> Test Commands:
> Command to generate datafile: COPY pgbench_accounts TO '/mnt/data-mag/
> mithun.cy/zheapperfbin/bin/pgbench.data';
>
> Command to load from datafile:
> COPY pgbench_heap FROM '/mnt/data-mag/
> mithun.cy/zheapperfbin/bin/pgbench.data'; -- heap table
> COPY pgbench_zheap FROM '/mnt/data-mag/
> mithun.cy/zheapperfbin/bin/pgbench.data'; -- zheap table
>
> Results
> ==
>
> Scale factor : 100
> 
> zheap table size : 1028 MB
> heap table size: 1281 MB
> -- table size reduction: 19% size reduction.
> zheap wal size: 1007 MB
> heap wal size: 1024 MB
> -- wal size difference: 1.6% size reduction.
> zheap COPY  execution time: 24869.451 ms
> heap COPY  execution time: 25858.773 ms
> -- % of improvement -- 3.8% reduction in execution time for zheap
>
> Scale factor : 1000
> -
> zheap table size : 10 GB
> heap table size: 13 GB
> -- table size reduction: 23% size reduction.
> zheap wal size: 10071 MB
> heap wal size: 10243 MB
> -- wal size difference: 1.67% size reduction.
> zheap COPY  execution time: 270790.235 ms
> heap COPY  execution time:  280325.632 ms
> -- % of improvement -- 3.4% reduction in execution time for zheap
>
> Scale factor : 2000
> -
> zheap table size : 20GB
> heap table size: 25GB
> -- table size reduction: 20% size reduction.
> zheap wal size: 20142 MB
> heap wal size: 20499 MB
> -- wal size difference: 1.7% size reduction.
> zheap COPY  execution time: 523702.904 ms
> heap COPY  execution time: 537537.720 ms
> -- % of improvement -- 2.5 % reduction in execution time for zheap
>
>
> The COPY command seems to have improved very slightly with zheap, both in
> WAL size and execution time. I also did some tests with INSERT statements
> where I could see some regression in zheap compared to heap with respect to
> execution time. I will reply here after further investigation.
>
>
The 20% size reduction looks like an effect of the fill factor.

Regards

Pavel

-- 
> Thanks and Regards
> Mithun Chicklore Yogendra
> EnterpriseDB: http://www.enterprisedb.com
>


Re: zheap: a new storage format for PostgreSQL

2018-12-05 Thread Mithun Cy
> On Thu, Mar 1, 2018 at 7:39 PM Amit Kapila 
wrote:

I did some testing for performance of COPY command for zheap against heap,
here are my results,
Machine : cthulhu (an 8-node NUMA machine with 500GB of RAM)
server non-default settings: shared_buffers = 32GB, max_wal_size = 20GB,
min_wal_size = 15GB

Test tables and data:

I have used the pgbench_accounts table of the pgbench tool as the data source
with 3 different scale factors: 100, 1000, 2000. Both the heap and zheap
tables are lookalikes of pgbench_accounts.

CREATE TABLE pgbench_zheap (LIKE pgbench_accounts) WITH
(storage_engine='zheap');
CREATE TABLE pgbench_heap (LIKE pgbench_accounts) WITH
(storage_engine='heap');

Test Commands:
Command to generate datafile: COPY pgbench_accounts TO '/mnt/data-mag/
mithun.cy/zheapperfbin/bin/pgbench.data';

Command to load from datafile:
COPY pgbench_heap FROM '/mnt/data-mag/
mithun.cy/zheapperfbin/bin/pgbench.data'; -- heap table
COPY pgbench_zheap FROM '/mnt/data-mag/
mithun.cy/zheapperfbin/bin/pgbench.data'; -- zheap table

Results
==

Scale factor : 100

zheap table size : 1028 MB
heap table size: 1281 MB
-- table size reduction: 19% size reduction.
zheap wal size: 1007 MB
heap wal size: 1024 MB
-- wal size difference: 1.6% size reduction.
zheap COPY  execution time: 24869.451 ms
heap COPY  execution time: 25858.773 ms
-- % of improvement -- 3.8% reduction in execution time for zheap

Scale factor : 1000
-
zheap table size : 10 GB
heap table size: 13 GB
-- table size reduction: 23% size reduction.
zheap wal size: 10071 MB
heap wal size: 10243 MB
-- wal size difference: 1.67% size reduction.
zheap COPY  execution time: 270790.235 ms
heap COPY  execution time:  280325.632 ms
-- % of improvement -- 3.4% reduction in execution time for zheap

Scale factor : 2000
-
zheap table size : 20GB
heap table size: 25GB
-- table size reduction: 20% size reduction.
zheap wal size: 20142 MB
heap wal size: 20499 MB
-- wal size difference: 1.7% size reduction.
zheap COPY  execution time: 523702.904 ms
heap COPY  execution time: 537537.720 ms
-- % of improvement -- 2.5 % reduction in execution time for zheap


The COPY command seems to have improved very slightly with zheap, both in
WAL size and execution time. I also did some tests with INSERT statements
where I could see some regression in zheap compared to heap with respect to
execution time. I will reply here after further investigation.

-- 
Thanks and Regards
Mithun Chicklore Yogendra
EnterpriseDB: http://www.enterprisedb.com


Re: zheap: a new storage format for PostgreSQL

2018-11-21 Thread Amit Kapila
On Tue, Nov 20, 2018 at 12:53 PM Darafei "Komяpa" Praliaskouski
 wrote:
>>
>> > In PostGIS workloads, UPDATE table SET geom = ST_CostyFunction(geom, 
>> > magicnumber); is one of biggest time-eaters that happen upon initial load 
>> > and clean up of your data. It is commonly followed by CLUSTER table using 
>> > table_geom_idx; to make sure you're back at full speed and no VACUUM is 
>> > needed, and your table (usually static after that) is more-or-less 
>> > spatially ordered. I see that zheap can remove the need for VACUUM, which 
>> > is a big win already. If you can do something that will allow reorder of 
>> > tuples according to index happen during an UPDATE that rewrites most of 
>> > table, that would be a game changer :)
>> >
>>
>> If the tuples are already in the order of the index, then we would
>> retain the order; otherwise, we might not want to do anything special for
>> ordering w.r.t. the index.  I think this is important as we are not sure of
>> the user's intention, and I guess it won't be easy to do such a
>> rearrangement during an UPDATE statement.
>
>
> User's clustering intention is recorded in existence of CLUSTER index over 
> table. That's not used by anything other than CLUSTER command now though.
>
> When I was looking into current heap implementation it seemed that it's 
> possible to hook in a lookup for a couple blocks with values adjacent to the 
> new value, and prefer them to FSM lookup and "current page", for clustered 
> table. Due to dead tuples, free space is going to end very very soon in usual 
> heap, so it probably doesn't make sense there - you're consuming space with 
> old one in old page and new one in new page.
>
> If I understand correctly, in zheap an update would not result in a dead 
> tuple in old page, so space is not going to end immediately, and this may 
> unblock path for such further developments. That is, if there is a spot where 
> to plug in such or similar logic in code :)
>

Yeah, in zheap there will be fewer dead tuples, or none at all, in many
cases, but I am not sure how much that can help for your use case.

> I've described the business case in [1].
>

I am not sure, but maybe you need something like a clustered index where
heap pages are linked via the leaf pages of a btree.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-19 Thread Komяpa
>
> > In PostGIS workloads, UPDATE table SET geom = ST_CostyFunction(geom,
> magicnumber); is one of biggest time-eaters that happen upon initial load
> and clean up of your data. It is commonly followed by CLUSTER table using
> table_geom_idx; to make sure you're back at full speed and no VACUUM is
> needed, and your table (usually static after that) is more-or-less
> spatially ordered. I see that zheap can remove the need for VACUUM, which
> is a big win already. If you can do something that will allow reorder of
> tuples according to index happen during an UPDATE that rewrites most of
> table, that would be a game changer :)
> >
>
> If the tuples are already in the order of the index, then we would
> retain the order; otherwise, we might not want to do anything special for
> ordering w.r.t. the index.  I think this is important as we are not sure of
> the user's intention, and I guess it won't be easy to do such a
> rearrangement during an UPDATE statement.
>

The user's clustering intention is recorded in the existence of a CLUSTER
index over the table. That's not used by anything other than the CLUSTER
command now, though.

When I was looking into the current heap implementation, it seemed possible
to hook in a lookup for a couple of blocks with values adjacent to the new
value, and prefer them to the FSM lookup and the "current page", for a
clustered table. Due to dead tuples, free space runs out very quickly in the
usual heap, so it probably doesn't make sense there - you're consuming space
with the old version in the old page and the new one in the new page.

If I understand correctly, in zheap an update would not result in a dead
tuple in the old page, so space is not going to run out immediately, and
this may unblock the path for such further developments. That is, if there
is a spot in the code where such or similar logic could be plugged in :)

I've described the business case in [1].

1:
https://www.postgresql.org/message-id/flat/CAC8Q8tLBeAxR%2BBXWuKK%2BHP5m8tEVYn270CVrDvKXt%3D0PkJTY9g%40mail.gmail.com

-- 
Darafei Praliaskouski
Support me: http://patreon.com/komzpa


Re: zheap: a new storage format for PostgreSQL

2018-11-19 Thread Amit Kapila
On Mon, Nov 19, 2018 at 6:36 PM Daniel Westermann
 wrote:
>
> >Yes, we need to connect to the database for performing rollback
> >actions.  Once the rollback for that database is over, undo apply
> >worker will exit and you should be able to drop the database.
>
> Thank you, Amit.
> Can you have a look at this one?
>
> create table t1 ( a text ) partition by list (a);
> create table t1_1 PARTITION of t1 (a) for values in ('a');
> create table t1_2 PARTITION of t1 (a) for values in ('b');
> create table t1_3 PARTITION of t1 (a) for values in ('c');
> create table t1_4 PARTITION of t1 (a) default;
>
> postgres=# \d+ t1
>Table "public.t1"
>  Column | Type | Collation | Nullable | Default | Storage  | Stats target | Description
> --------+------+-----------+----------+---------+----------+--------------+-------------
>  a      | text |           |          |         | extended |              | 
> Partition key: LIST (a)
> Partitions: t1_1 FOR VALUES IN ('a'),
> t1_2 FOR VALUES IN ('b'),
> t1_3 FOR VALUES IN ('c'),
> t1_4 DEFAULT
> Options: storage_engine=zheap
>
>
> insert into t1 select 'a' from generate_series ( 1, 100 );
> insert into t1 select 'b' from generate_series ( 1, 100 );
> insert into t1 select 'c' from generate_series ( 1, 100 );
>
> postgres=# begin;
> BEGIN
> postgres=# update t1 set a = 'd' where a = 'a';
> UPDATE 100
> postgres=# rollback;
> ROLLBACK
>

Here, you are doing a big rollback, so I guess it will be pushed to the
background unless you increase the value of 'rollback_overflow_size'.
You can confirm that by checking whether any undo apply worker is active
and whether the rollback finishes immediately.
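
In other words, the decision is roughly the following (a sketch with made-up
names, treating 'rollback_overflow_size' as a simple size threshold; the real
logic lives in the undo machinery):

#include <stdbool.h>
#include <stddef.h>

/* Small rollbacks are applied immediately in the foreground; big ones are
 * handed to a background undo apply worker, so the ROLLBACK returns quickly
 * while the undo is still being applied. */
static bool
perform_rollback_in_foreground(size_t undo_to_apply, size_t rollback_overflow_size)
{
    return undo_to_apply <= rollback_overflow_size;
}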

> postgres=# select * from t1 where a = 'd';
> postgres=# select * from t1 where a = 'd';
> postgres=# select * from t1 where a = 'd';
>
> The selects at the end take seconds
>

I think what is happening is that, as the rollback is still in progress, the
scan needs to fetch the data from undo, and that is slow.

> and a lot of checkpoints are happening.
>

That is because rollbacks also write WAL, and you are doing a big
rollback, which will lead to a rewrite of the entire table.

I guess if you allow rollback to complete before issuing a select, you
will see better results.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-19 Thread Daniel Westermann
>Yes, we need to connect to the database for performing rollback
>actions.  Once the rollback for that database is over, undo apply
>worker will exit and you should be able to drop the database.

Thank you, Amit.
Can you have a look at this one?

create table t1 ( a text ) partition by list (a);
create table t1_1 PARTITION of t1 (a) for values in ('a');
create table t1_2 PARTITION of t1 (a) for values in ('b');
create table t1_3 PARTITION of t1 (a) for values in ('c');
create table t1_4 PARTITION of t1 (a) default;

postgres=# \d+ t1
   Table "public.t1"
 Column | Type | Collation | Nullable | Default | Storage  | Stats target | Description
--------+------+-----------+----------+---------+----------+--------------+-------------
 a      | text |           |          |         | extended |              | 
Partition key: LIST (a)
Partitions: t1_1 FOR VALUES IN ('a'),
t1_2 FOR VALUES IN ('b'),
t1_3 FOR VALUES IN ('c'),
t1_4 DEFAULT
Options: storage_engine=zheap


insert into t1 select 'a' from generate_series ( 1, 100 );
insert into t1 select 'b' from generate_series ( 1, 100 );
insert into t1 select 'c' from generate_series ( 1, 100 );

postgres=# begin;
BEGIN
postgres=# update t1 set a = 'd' where a = 'a';
UPDATE 100
postgres=# rollback;
ROLLBACK
postgres=# select * from t1 where a = 'd';
postgres=# select * from t1 where a = 'd';
postgres=# select * from t1 where a = 'd';

The selects at the end take seconds and a lot of checkpoints are happening.

Regards
Daniel








Re: zheap: a new storage format for PostgreSQL

2018-11-19 Thread Amit Kapila
On Mon, Nov 19, 2018 at 3:59 PM Daniel Westermann
 wrote:
>
> > Thanks, it makes a difference and keeps us motivated to make progress.
> +1
>
> Is it intended behavior that a database can not be dropped when undo apply is 
> running in the background?
>

Yes, we need to connect to the database for performing rollback
actions.  Once the rollback for that database is over, undo apply
worker will exit and you should be able to drop the database.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-19 Thread Daniel Westermann

> Thanks, it makes a difference and keeps us motivated to make progress.
+1

Is it intended behavior that a database can not be dropped when undo apply is 
running in the background?

zheap=# update pgbench_accounts set filler = 'bbb' where mod(aid,10) = 0;
UPDATE 100
zheap=# rollback;
ROLLBACK
zheap=# drop database zheap;
ERROR:  cannot drop the currently open database
zheap=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=# drop database zheap;
ERROR:  database "zheap" is being accessed by other users
DETAIL:  There is 1 other session using the database.
postgres=# drop database zheap;
ERROR:  database "zheap" is being accessed by other users
DETAIL:  There is 1 other session using the database.
postgres=#

Regards
Daniel


Re: zheap: a new storage format for PostgreSQL

2018-11-18 Thread Amit Kapila
On Sun, Nov 18, 2018 at 3:42 PM Darafei "Komяpa" Praliaskouski
 wrote:
>
> On Sat, Nov 17, 2018 at 8:51 AM Adam Brusselback  
> wrote:
>>
>> >  I don't know how much what I write on this thread is read by others or
>> how useful this is for others who are following this work
>>
>> I've been following this thread and many others like it, silently soaking it 
>> up, because I don't feel like i'd have anything useful to add in most cases. 
>> It is very interesting seeing the development take place though, so just 
>> know it's appreciated at least from my perspective.
>
> I'm also following the development and have hopes about it going forward. Not 
> much low-level details I can comment on though :)
>
> In PostGIS workloads, UPDATE table SET geom = ST_CostyFunction(geom, 
> magicnumber); is one of biggest time-eaters that happen upon initial load and 
> clean up of your data. It is commonly followed by CLUSTER table using 
> table_geom_idx; to make sure you're back at full speed and no VACUUM is 
> needed, and your table (usually static after that) is more-or-less spatially 
> ordered. I see that zheap can remove the need for VACUUM, which is a big win 
> already. If you can do something that will allow reorder of tuples according 
> to index happen during an UPDATE that rewrites most of table, that would be a 
> game changer :)
>

If the tuples are already in the order of the index, then we would
retain the order; otherwise, we might not want to do anything special for
ordering w.r.t. the index.  I think this is important as we are not sure of
the user's intention, and I guess it won't be easy to do such a
rearrangement during an UPDATE statement.

> Another story is Visibility Map and Index-Only Scans. Right now there is a 
> huge gap between the insert of rows and the moment they are available for 
> index only scan, as VACUUM is required. Do I understand correctly that for 
> zheap this all can be inverted, and UNDO can become "invisibility map" that 
> may be quite small and discarded quickly?
>

Yeah, eventually that is our goal, with the help of delete-marking in
indexes; however, for the first version, we still need to rely on
visibility maps for index-only scans.

Thank you for showing interest in this work.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-18 Thread Komяpa
On Sat, Nov 17, 2018 at 8:51 AM Adam Brusselback 
wrote:

> >  I don't know how much what I write on this thread is read by others or
> how useful this is for others who are following this work
>
> I've been following this thread and many others like it, silently soaking
> it up, because I don't feel like i'd have anything useful to add in most
> cases. It is very interesting seeing the development take place though, so
> just know it's appreciated at least from my perspective.
>


I'm also following the development and have hopes about it going forward.
Not much low-level details I can comment on though :)

In PostGIS workloads, UPDATE table SET geom = ST_CostyFunction(geom,
magicnumber); is one of the biggest time-eaters that happen upon initial load
and cleanup of your data. It is commonly followed by CLUSTER table USING
table_geom_idx; to make sure you're back at full speed and no VACUUM is
needed, and your table (usually static after that) is more-or-less
spatially ordered. I see that zheap can remove the need for VACUUM, which
is a big win already. If you can do something that allows the reordering of
tuples according to the index to happen during an UPDATE that rewrites most
of the table, that would be a game changer :)

Another story is the Visibility Map and Index-Only Scans. Right now there is
a huge gap between the insert of rows and the moment they are available for
an index-only scan, as VACUUM is required. Do I understand correctly that
for zheap this all can be inverted, and UNDO can become an "invisibility
map" that may be quite small and discarded quickly?




-- 
Darafei Praliaskouski
Support me: http://patreon.com/komzpa


Re: zheap: a new storage format for PostgreSQL

2018-11-17 Thread Hakan Kocaman
Adam Brusselback wrote on Sat., Nov 17, 2018 at 06:51:

> >  I don't know how much what I write on this thread is read by others or
> how useful this is for others who are following this work
>
> I've been following this thread and many others like it, silently soaking
> it up, because I don't feel like i'd have anything useful to add in most
> cases. It is very interesting seeing the development take place though, so
> just know it's appreciated at least from my perspective.
>

+1
count me in

kind regards
hakan kocaman


Re: zheap: a new storage format for PostgreSQL

2018-11-16 Thread Amit Kapila
On Sat, Nov 17, 2018 at 11:21 AM Adam Brusselback
 wrote:
>
> >  I don't know how much what I write on this thread is read by others or
> how useful this is for others who are following this work
>
> I've been following this thread and many others like it, silently soaking it 
> up, because I don't feel like i'd have anything useful to add in most cases. 
> It is very interesting seeing the development take place though, so just know 
> it's appreciated at least from my perspective.
>

Thanks, it makes a difference and keeps us motivated to make progress.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-16 Thread Adam Brusselback
>  I don't know how much what I write on this thread is read by others or
how useful this is for others who are following this work

I've been following this thread and many others like it, silently soaking
it up, because I don't feel like I'd have anything useful to add in most
cases. It is very interesting seeing the development take place though, so
just know it's appreciated at least from my perspective.


Re: zheap: a new storage format for PostgreSQL

2018-11-16 Thread Amit Kapila
On Thu, Nov 1, 2018 at 12:13 PM Amit Kapila  wrote:
>
>
> Now, we have a working solution for this problem.  The extended
> transaction slots are stored in TPD pages (those contain only
> transaction slot arrays) which are interleaved with regular pages.
> For a detailed idea, you can see the notes atop src/backend/access/zheap/tpd.c.
> We still have a caveat here, which is that once the TPD pages are pruned
> (a TPD page can be pruned if all its transaction slots are old
> enough to no longer matter), they are not added to the FSM for reuse.  We are
> working on a patch for this which we expect to finish in a week or so.
>

Now, this work is also committed to the zheap branch.  The basic idea is
that if all the TPD entries are old enough that they can be pruned,
then we clean such a page and record the same in the FSM.  The empty
pages from the FSM can be used either by zheap or by TPD when required.  We
have one optimization where, without going through each of the TPD
entries, we can decide whether the entire page can be pruned.  We use
the tpd_latest_xid_epoch stored in the page header to prune the
entire TPD page: basically, if tpd_latest_xid_epoch precedes the
oldest XID having undo, then we can assume all the entries in the page
can be pruned.
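
Schematically, that page-level shortcut amounts to a comparison like the one
below (illustrative names only; not the actual function):

#include <stdbool.h>
#include <stdint.h>

/* If the newest (epoch, xid) recorded in the TPD page header is older than
 * the oldest transaction that still has undo, no slot on the page can still
 * be needed, so the whole page can be pruned without examining each entry. */
static bool
tpd_page_entirely_prunable(uint64_t tpd_latest_xid_epoch,
                           uint64_t oldest_xid_epoch_having_undo)
{
    return tpd_latest_xid_epoch < oldest_xid_epoch_having_undo;
}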

Another interesting feature which is now working in zheap is ALTER
TABLE .. SET TABLESPACE.  The basic idea is the same as for the heap (copy
the relation page by page), except that in zheap we can have some pending
aborts (as sometimes rollback requests are pushed to the undo worker), so
we finish those aborts before copying the page to the new tablespace.  I
think we could do without this as well, but as we are already dirtying
and writing the page, it seems wise to complete the aborts.
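
The copy loop being described is, in outline, something like this
(illustrative helper names only):

#include <stdbool.h>

typedef struct PageBuf PageBuf;    /* opaque stand-ins for this sketch */

extern PageBuf *read_page(unsigned blkno);
extern bool     page_has_pending_aborts(const PageBuf *page);
extern void     apply_pending_aborts(PageBuf *page);
extern void     write_page_to_new_tablespace(const PageBuf *page, unsigned blkno);

static void
copy_relation_to_new_tablespace(unsigned nblocks)
{
    for (unsigned blkno = 0; blkno < nblocks; blkno++)
    {
        PageBuf *page = read_page(blkno);

        /* The page is being dirtied and written anyway, so finish any
         * pending (pushed-to-background) aborts that touch it first. */
        if (page_has_pending_aborts(page))
            apply_pending_aborts(page);

        write_page_to_new_tablespace(page, blkno);
    }
}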

Now, single-user mode is also working.  In single-user mode, we always
perform the rollback requests in the foreground as there are no undo
workers present.  Also, we discard the undo at commit as we won't need
it later.

Other than that we have made miscellaneous code-improvements and
bug-fixes in the branch.

The next big step now is to port it over the pluggable storage API, for
which Andres has done the legwork, and we will take it forward.  The other
thing we are going to focus on next is performance optimization of the code
in various scenarios.

I don't know how much what I write on this thread is read by others or
how useful this is for others who are following this work, but I am
trying to be precise here, so feel free to ask for more information.


--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-11 Thread Amit Kapila
On Sun, Nov 11, 2018 at 11:55 PM Kuntal Ghosh
 wrote:
>
> On Sat, Nov 10, 2018 at 8:51 PM Daniel Westermann
>  wrote:
> >
> > >>Thanks. Initializing the variable seems like the right fix here.
> >
> > ... just had a warning when recompiling from the latest sources on CentOS 7:
> >
> > labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing 
> > -fwrapv -fexcess-precision=standard -O2 -I../../../../src/include  
> > -D_GNU_SOURCE -I/usr/include/libxml2   -c -o tpd.o tpd.c
> > tpd.c: In function ‘TPDFreePage’:
> > tpd.c:1003:15: warning: variable ‘curblkno’ set but not used 
> > [-Wunused-but-set-variable]
> >   BlockNumber  curblkno = InvalidBlockNumber;
> >^

This variable is used only for Asserts, so we need to use
PG_USED_FOR_ASSERTS_ONLY while declaring it.
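
For reference, marking such an assert-only variable follows the usual
PostgreSQL pattern, roughly like this (a sketch of the idea, not the exact
committed change):

#include "postgres.h"   /* BlockNumber, InvalidBlockNumber, PG_USED_FOR_ASSERTS_ONLY */

static void
example(BlockNumber expected)
{
    /* Marked as assert-only so a non-assert build does not emit
     * -Wunused-but-set-variable for it. */
    BlockNumber curblkno PG_USED_FOR_ASSERTS_ONLY = InvalidBlockNumber;

    curblkno = expected;
    Assert(curblkno == expected);
}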

> Thanks Daniel for testing zheap and reporting the issue. We'll push a
> fix for the same.
>

Pushed the fix now.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-11 Thread Kuntal Ghosh
On Sat, Nov 10, 2018 at 8:51 PM Daniel Westermann
 wrote:
>
> >>Thanks. Initializing the variable seems like the right fix here.
>
> ... just had a warning when recompiling from the latest sources on CentOS 7:
>
> labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing 
> -fwrapv -fexcess-precision=standard -O2 -I../../../../src/include  
> -D_GNU_SOURCE -I/usr/include/libxml2   -c -o tpd.o tpd.c
> tpd.c: In function ‘TPDFreePage’:
> tpd.c:1003:15: warning: variable ‘curblkno’ set but not used 
> [-Wunused-but-set-variable]
>   BlockNumber  curblkno = InvalidBlockNumber;
>^
Thanks Daniel for testing zheap and reporting the issue. We'll push a
fix for the same.



-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-10 Thread Daniel Westermann
>>Thanks. Initializing the variable seems like the right fix here.

... just had a warning when recompiling from the latest sources on CentOS 7:

labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing 
-fwrapv -fexcess-precision=standard -O2 -I../../../../src/include  
-D_GNU_SOURCE -I/usr/include/libxml2   -c -o tpd.o tpd.c
tpd.c: In function ‘TPDFreePage’:
tpd.c:1003:15: warning: variable ‘curblkno’ set but not used 
[-Wunused-but-set-variable]
  BlockNumber  curblkno = InvalidBlockNumber;
   ^

Not sure if this is important but as I could not find anything on this thread 
related to this I thought I'd report it

Regards
Daniel


Re: zheap: a new storage format for PostgreSQL

2018-11-05 Thread Tomas Vondra
On 11/5/18 4:00 AM, Amit Kapila wrote:
> On Sat, Nov 3, 2018 at 9:30 AM Amit Kapila  wrote:
>> On Fri, Nov 2, 2018 at 6:41 PM Tomas Vondra
>>  wrote:
>>> I'm sure
>>> it's not the only place where we do something like this, and the other
>>> places don't trigger the valgrind warning, so how do those places do
>>> this? heapam seems to call fetch_att in the end, which essentially calls
> >> > Int32GetDatum/Int16GetDatum/CharGetDatum, so why not use the same
>>> trick here?
>>>
>>
>> This is because, in zheap, we have omitted all alignment padding for
>> pass-by-value types.  See the description in my previous email [1].  I
>> think here we need to initialize ret_datum at the beginning of the
>> function unless you have some better idea.
>>
> 
> I have pushed a fix on the above lines in zheap-branch, but I am open
> to change it if you have better ideas for the same.
> 

Thanks. Initializing the variable seems like the right fix here.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: zheap: a new storage format for PostgreSQL

2018-11-04 Thread Amit Kapila
On Sat, Nov 3, 2018 at 9:30 AM Amit Kapila  wrote:
> On Fri, Nov 2, 2018 at 6:41 PM Tomas Vondra
>  wrote:
> > I'm sure
> > it's not the only place where we do something like this, and the other
> > places don't trigger the valgrind warning, so how do those places do
> > this? heapam seems to call fetch_att in the end, which essentially calls
> > Int32GetDatum/Int16GetDatum/CharGetDatum, so why not use the same
> > trick here?
> >
>
> This is because, in zheap, we have omitted all alignment padding for
> pass-by-value types.  See the description in my previous email [1].  I
> think here we need to initialize ret_datum at the beginning of the
> function unless you have some better idea.
>

I have pushed a fix on the above lines in zheap-branch, but I am open
to change it if you have better ideas for the same.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-02 Thread Amit Kapila
On Fri, Nov 2, 2018 at 6:41 PM Tomas Vondra
 wrote:
>
> On 11/02/2018 12:12 PM, Amit Kapila wrote:
> > On Thu, Nov 1, 2018 at 7:26 PM Tomas Vondra
> >  wrote:
> >>
> >> On 11/01/2018 07:43 AM, Amit Kapila wrote:
> >>>
> >>> You can find the latest code at https://github.com/EnterpriseDB/zheap
> >>>
> >>
> >> Seems valgrind complains about a couple of places in the code - nothing
> >> major, might be noise, but probably worth a look.
> >>
> >
> > I have looked at the report and one of those seems to be problematic,
> > so I have pushed the fix for the same.  The other one for below stack
> > seems to be bogus:
> > ==7569==  Uninitialised value was created by a stack allocation
> > ==7569==at 0x59043D: znocachegetattr (zheapam.c:6206)
> > ==7569==
> > {
> >
> >Memcheck:Cond
> >fun:ZHeapDetermineModifiedColumns
> >fun:zheap_update
> >
> > I have checked in the function znocachegetattr that if we initialize
> > the value of ret_datum, it fixes the reported error, but actually
> > there is no need for doing it as the code always assigns a valid
> > value to this variable.  I have left it as is for now as I am not sure
> > whether there is any value in doing such an initialization.
> >
>
> Well, the problem is the ret_datum is modified like this:
>
>
> thisatt = TupleDescAttr(tupleDesc, attnum);
> if (thisatt->attbyval)
> memcpy(&ret_datum, tp + off, thisatt->attlen);
> else
> ret_datum = PointerGetDatum((char *) (tp + off));
>
> which means that for cases with attlen < sizeof(Datum), this ends up
> leaving part of the value undefined. So it's a valid issue.
>

Agreed.

> I'm sure
> it's not the only place where we do something like this, and the other
> places don't trigger the valgrind warning, so how do those places do
> this? heapam seems to call fetch_att in the end, which essentially calls
> Int32GetDatum/Int16GetDatum/CharGetDatum, so why not use the same
> trick here?
>

This is because, in zheap, we have omitted all alignment padding for
pass-by-value types.  See the description in my previous email [1].  I
think here we need to initialize ret_datum at the beginning of the
function unless you have some better idea.

One thing unrelated to the above problem: I forgot to mention in my
previous email that Daniel Westermann, whom I have cc'ed in this email,
has reported a few bugs in this branch, which seem to have been fixed.
He seems to be interested in doing more tests.  Daniel, I encourage you
to share your findings here.

Thanks, Tomas and Daniel for looking into the branch and reporting
problems, it is really helpful.

[1] - 
https://www.postgresql.org/message-id/CAA4eK1Lwb%2BrGeB_z%2BjUbnSndvgnsDUK%2B9tjfng4sy1AZyrHqRg%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-02 Thread Tomas Vondra



On 11/02/2018 12:12 PM, Amit Kapila wrote:
> On Thu, Nov 1, 2018 at 7:26 PM Tomas Vondra
>  wrote:
>>
>> On 11/01/2018 07:43 AM, Amit Kapila wrote:
>>>
>>> You can find the latest code at https://github.com/EnterpriseDB/zheap
>>>
>>
>> Seems valgrind complains about a couple of places in the code - nothing
>> major, might be noise, but probably worth a look.
>>
> 
> I have looked at the report and one of those seems to be problematic,
> so I have pushed the fix for the same.  The other one for below stack
> seems to be bogus:
> ==7569==  Uninitialised value was created by a stack allocation
> ==7569==at 0x59043D: znocachegetattr (zheapam.c:6206)
> ==7569==
> {
>
>Memcheck:Cond
>fun:ZHeapDetermineModifiedColumns
>fun:zheap_update
> 
> I have checked in the function znocachegetattr that if we initialize
> the value of ret_datum, it fixes the reported error, but actually
> there is no need for doing it as the code always assign the valid
> value to this variable.  I have left it as is for now as I am not sure
> whether there is any value in doing such an initialization.
> 

Well, the problem is that ret_datum is modified like this:


thisatt = TupleDescAttr(tupleDesc, attnum);
if (thisatt->attbyval)
memcpy(&ret_datum, tp + off, thisatt->attlen);
else
ret_datum = PointerGetDatum((char *) (tp + off));

which means that for cases with attlen < sizeof(Datum), this ends up
leaving part of the value undefined. So it's a valid issue. I'm sure
it's not the only place where we do something like this, and the other
places don't trigger the valgrind warning, so how do those places do
this? heapam seems to call fetch_att in the end, which essentially calls
Int32GetDatum/Int16GetDatum/CharGetDatum, so why not use the same
trick here?
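
A tiny standalone reproduction of the effect (Datum is just modelled as
uintptr_t here): copying only attlen bytes into an uninitialized Datum leaves
its remaining bytes undefined, which is what valgrind then reports when the
whole Datum is later compared; zero-initializing it first (or going through a
fetch_att / Int16GetDatum-style conversion) makes the value fully defined.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uintptr_t Datum;            /* stand-in for PostgreSQL's Datum */

int main(void)
{
    int16_t attval = 42;            /* a 2-byte pass-by-value attribute */

    Datum ret_datum;                /* uninitialized, as in znocachegetattr */
    memcpy(&ret_datum, &attval, sizeof(attval));
    /* Only 2 of sizeof(Datum) bytes are defined now; comparing ret_datum
     * as a whole is what valgrind flags. */

    Datum fixed = 0;                /* zero-initialize first ... */
    memcpy(&fixed, &attval, sizeof(attval));
    /* ... and every byte of 'fixed' is defined. */

    printf("sizeof(Datum) = %zu, copied bytes = %zu\n",
           sizeof(Datum), sizeof(attval));
    return 0;
}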

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: zheap: a new storage format for PostgreSQL

2018-11-02 Thread Amit Kapila
On Thu, Nov 1, 2018 at 7:26 PM Tomas Vondra
 wrote:
>
> On 11/01/2018 07:43 AM, Amit Kapila wrote:
> >
> > You can find the latest code at https://github.com/EnterpriseDB/zheap
> >
>
> Seems valgrind complains about a couple of places in the code - nothing
> major, might be noise, but probably worth a look.
>

I have looked at the report and one of those seems to be problematic,
so I have pushed the fix for the same.  The other one for below stack
seems to be bogus:
==7569==  Uninitialised value was created by a stack allocation
==7569==at 0x59043D: znocachegetattr (zheapam.c:6206)
==7569==
{
   
   Memcheck:Cond
   fun:ZHeapDetermineModifiedColumns
   fun:zheap_update

I have checked in the function znocachegetattr that if we initialize
the value of ret_datum, it fixes the reported error, but actually
there is no need for doing it as the code always assigns a valid
value to this variable.  I have left it as is for now as I am not sure
whether there is any value in doing such an initialization.

Thanks for the report.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-11-01 Thread Tomas Vondra

On 11/01/2018 07:43 AM, Amit Kapila wrote:


You can find the latest code at https://github.com/EnterpriseDB/zheap



Seems valgrind complains about a couple of places in the code - nothing 
major, might be noise, but probably worth a look.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
==7569== Conditional jump or move depends on uninitialised value(s)
==7569==at 0x5914C8: ZHeapDetermineModifiedColumns (zheapam.c:)
==7569==by 0x587453: zheap_update (zheapam.c:2176)
==7569==by 0x7577D3: ExecUpdate (nodeModifyTable.c:1426)
==7569==by 0x759C0B: ExecModifyTable (nodeModifyTable.c:2629)
==7569==by 0x729687: ExecProcNodeFirst (execProcnode.c:445)
==7569==by 0x71D8E9: ExecProcNode (executor.h:241)
==7569==by 0x720194: ExecutePlan (execMain.c:1711)
==7569==by 0x71DF11: standard_ExecutorRun (execMain.c:367)
==7569==by 0x71DD3F: ExecutorRun (execMain.c:310)
==7569==by 0x9113CA: ProcessQuery (pquery.c:161)
==7569==by 0x912CE8: PortalRunMulti (pquery.c:1286)
==7569==by 0x9122C1: PortalRun (pquery.c:799)
==7569==by 0x90C1C2: exec_simple_query (postgres.c:1215)
==7569==by 0x9104D9: PostgresMain (postgres.c:4244)
==7569==by 0x86A911: BackendRun (postmaster.c:4388)
==7569==by 0x86A0DF: BackendStartup (postmaster.c:4079)
==7569==by 0x8665F7: ServerLoop (postmaster.c:1711)
==7569==by 0x865EA3: PostmasterMain (postmaster.c:1384)
==7569==by 0x78E3AB: main (main.c:228)
==7569==  Uninitialised value was created by a stack allocation
==7569==at 0x59043D: znocachegetattr (zheapam.c:6206)
==7569== 
{
   
   Memcheck:Cond
   fun:ZHeapDetermineModifiedColumns
   fun:zheap_update
   fun:ExecUpdate
   fun:ExecModifyTable
   fun:ExecProcNodeFirst
   fun:ExecProcNode
   fun:ExecutePlan
   fun:standard_ExecutorRun
   fun:ExecutorRun
   fun:ProcessQuery
   fun:PortalRunMulti
   fun:PortalRun
   fun:exec_simple_query
   fun:PostgresMain
   fun:BackendRun
   fun:BackendStartup
   fun:ServerLoop
   fun:PostmasterMain
   fun:main
}
==8811== Conditional jump or move depends on uninitialised value(s)
==8811==at 0x5914C8: ZHeapDetermineModifiedColumns (zheapam.c:)
==8811==by 0x587453: zheap_update (zheapam.c:2176)
==8811==by 0x7577D3: ExecUpdate (nodeModifyTable.c:1426)
==8811==by 0x759C0B: ExecModifyTable (nodeModifyTable.c:2629)
==8811==by 0x729687: ExecProcNodeFirst (execProcnode.c:445)
==8811==by 0x71D8E9: ExecProcNode (executor.h:241)
==8811==by 0x720194: ExecutePlan (execMain.c:1711)
==8811==by 0x71DF11: standard_ExecutorRun (execMain.c:367)
==8811==by 0x71DD3F: ExecutorRun (execMain.c:310)
==8811==by 0x9113CA: ProcessQuery (pquery.c:161)
==8811==by 0x912CE8: PortalRunMulti (pquery.c:1286)
==8811==by 0x9122C1: PortalRun (pquery.c:799)
==8811==by 0x90C1C2: exec_simple_query (postgres.c:1215)
==8811==by 0x9104D9: PostgresMain (postgres.c:4244)
==8811==by 0x86A911: BackendRun (postmaster.c:4388)
==8811==by 0x86A0DF: BackendStartup (postmaster.c:4079)
==8811==by 0x8665F7: ServerLoop (postmaster.c:1711)
==8811==by 0x865EA3: PostmasterMain (postmaster.c:1384)
==8811==by 0x78E3AB: main (main.c:228)
==8811==  Uninitialised value was created by a stack allocation
==8811==at 0x59043D: znocachegetattr (zheapam.c:6206)
==8811== 
{
   
   Memcheck:Cond
   fun:ZHeapDetermineModifiedColumns
   fun:zheap_update
   fun:ExecUpdate
   fun:ExecModifyTable
   fun:ExecProcNodeFirst
   fun:ExecProcNode
   fun:ExecutePlan
   fun:standard_ExecutorRun
   fun:ExecutorRun
   fun:ProcessQuery
   fun:PortalRunMulti
   fun:PortalRun
   fun:exec_simple_query
   fun:PostgresMain
   fun:BackendRun
   fun:BackendStartup
   fun:ServerLoop
   fun:PostmasterMain
   fun:main
}
==8811== Conditional jump or move depends on uninitialised value(s)
==8811==at 0x574A76: InsertUndoRecord (undorecord.c:135)
==8811==by 0x56F405: InsertPreparedUndo (undoinsert.c:840)
==8811==by 0x5848E0: zheap_insert (zheapam.c:896)
==8811==by 0x755F21: ExecInsert (nodeModifyTable.c:692)
==8811==by 0x759BB2: ExecModifyTable (nodeModifyTable.c:2622)
==8811==by 0x729687: ExecProcNodeFirst (execProcnode.c:445)
==8811==by 0x71D8E9: ExecProcNode (executor.h:241)
==8811==by 0x720194: ExecutePlan (execMain.c:1711)
==8811==by 0x71DF11: standard_ExecutorRun (execMain.c:367)
==8811==by 0x71DD3F: ExecutorRun (execMain.c:310)
==8811==by 0x9113CA: ProcessQuery (pquery.c:161)
==8811==by 0x912CE8: PortalRunMulti (pquery.c:1286)
==8811==by 0x9122C1: PortalRun (pquery.c:799)
==8811==by 0x90C1C2: exec_simple_query (postgres.c:1215)
==8811==by 0x9104D9: PostgresMain (postgres.c:4244)
==8811==by 0x86A911: BackendRun (postmaster.c:4388)
==8811==by 0x86A0DF: BackendStartup (postmaster.c:4079)
==8811==by 0x8665F7: ServerLoop (postmaster.c:1711)
==8811==by 0x865EA3: PostmasterMain (postmaster.c

Re: zheap: a new storage format for PostgreSQL

2018-10-31 Thread Amit Kapila
On Sat, May 26, 2018 at 6:33 PM Amit Kapila  wrote:
> On Fri, Mar 2, 2018 at 4:05 PM, Alexander Korotkov
>  wrote:
>
> It's been a while since we have updated the progress on this project,
> so here is an update.
>

Yet another update.

>  This is based on the features that were not
> working (as mentioned in Readme.md) when the branch was published.
> 1. TID Scans are working now.
> 2. Insert .. On Conflict is working now.
> 3. Tuple locking is working with a restriction that if there are more
> concurrent lockers on a page than the number of transaction slots on a
> page, then some of the lockers will wait till others get committed.
> We are working on a solution to extend the number of transaction slots
> on a separate set of pages which exist in heap, but will contain only
> transaction data.
>

Now, we have a working solution for this problem.  The extended
transaction slots are stored in TPD pages (which contain only
transaction slot arrays) that are interleaved with regular pages.
For a detailed idea, see the comments atop src/backend/access/zheap/tpd.c.
One remaining caveat is that once the TPD pages are pruned
(a TPD page can be pruned if all its transaction slots are old
enough to no longer matter), they are not added to the FSM for reuse.
We are working on a patch for this which we expect to finish in a week or so.

Toast tables are working now; the toast data is stored in zheap.
Apart from the consistency of storing toast data in the same
storage engine as the main data, this has the advantage of early cleanup,
which means the space for deleted rows can be reclaimed as soon as the
transaction commits.  This is good for toast tables, as each update of a
toast table is a DELETE+INSERT.

Alignment of tuples is changed such that we don’t have alignment padding
between the tuple header and the tuple data, as we always make a copy
of the tuple to support in-place updates. Likewise, we ideally don't
need any alignment padding between tuples. However, there are places
in the zheap code where we access the tuple header directly from the page
(e.g. zheap_delete, zheap_update, etc.), for which we want it to be
aligned at a two-byte boundary.  We omit all alignment padding for
pass-by-value types. Even in the current heap, we never point directly
to such values, so the alignment padding doesn’t help much; it lets us
fetch the value using a single instruction, but that is all.
Pass-by-reference types will work as they do in the heap. We can't
directly access unaligned values; instead, we need to use memcpy.  We
believe that the space savings will more than pay for the additional
CPU costs.
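
To make the memcpy point concrete, here is a minimal illustrative sketch
(not actual zheap code) of reading a 4-byte pass-by-value attribute from a
possibly unaligned offset inside a tuple:

    /*
     * Illustrative sketch only: copy a 4-byte value out of a possibly
     * unaligned offset.  Dereferencing an unaligned int32 pointer directly
     * would be unsafe on strict-alignment platforms, so the bytes are
     * fetched with memcpy instead.
     */
    #include <stdint.h>
    #include <string.h>

    static int32_t
    read_unaligned_int32(const char *tuple_data, size_t off)
    {
        int32_t     val;

        memcpy(&val, tuple_data + off, sizeof(val));
        return val;
    }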

Vacuum full is implemented in such a way that we don't copy the
information required for MVCC-aware scans.  We copy only LIVE tuples
into the new heap and freeze them before storing them there.  This is not
a good idea, as we lose all the visibility information of the tuples, but
OTOH, the same can't be copied from the original tuple as that is
maintained in undo and we don't have the facility to modify
undo records.  We could either allow modifying undo records or write a
special kind of undo record which captures the required
visibility information.  I think it will be tricky to do this, and I am
not sure it is worth putting in a whole lot of effort before making the
basic things work; another thing is that with zheap, the need for
vacuum will anyway be minimized to a good extent.

Serializable isolation is also supported; we don't need to make any
major changes except for making it understand ZheapTuple (we used the TID
in the required APIs).  I think this part needs some changes after
integration with the pluggable storage API.  We have special handling
for tuples which are updated in place or whose latest modifying
transaction got aborted. In that case, we check whether
the latest committed transaction that modified that tuple is a
concurrent transaction. Based on that, we decide whether
we have any serialization conflict.

In zheap, for sub-transactions we don't need to generate a new xid, as
the visibility information for a particular tuple is present in undo,
and on Rollback To Savepoint we apply the required undo to restore the
state of the tuples as they were before the particular transaction.
This gives us a performance/scalability boost when sub-transactions
are involved, as we don't need to acquire XIDGenLock for
subtransactions.  Apart from the above benefits, we need this for zheap
as otherwise the undo chain for each transaction won't be linear, and
we save allocating additional slots for each transaction id at the
page level.

Undo workers and transaction rollbacks are working now.  My colleague
Dilip has posted a separate patch [1] for this, as it can have some
use cases without zheap as well, and Thomas has just posted a patch
using that facility.

Some of the other features like row movement for an update of
partition key are also handled.

In short, now most of the user-visible features are working.  The make
installch

Re: zheap: a new storage format for PostgreSQL

2018-05-26 Thread Amit Kapila
On Fri, Mar 2, 2018 at 4:05 PM, Alexander Korotkov
 wrote:
> On Fri, Mar 2, 2018 at 1:31 PM, Amit Kapila  wrote:
>>
>> On Fri, Mar 2, 2018 at 1:50 PM, Tsunakawa, Takayuki
>>  wrote:
>> > From: Amit Kapila [mailto:amit.kapil...@gmail.com]
>> >> At EnterpriseDB, we (me and some of my colleagues) are working from
>> >> more
>> >> than a year on the new storage format in which only the latest version
>> >> of
>> >> the data is kept in main storage and the old versions are moved to an
>> >> undo
>> >> log.  We call this new storage format "zheap".  To be clear, this
>> >> proposal
>> >> is for PG-12.
>> >
>> > Wonderful!  BTW, what "z" stand for?  Ultimate?
>> >
>>
>> There is no special meaning to 'z'.  We have discussed quite a few
>> names (like newheap, nheap, zheap and some more on those lines), but
>> zheap sounds better.  IIRC, one among Robert or Thomas has come up
>> with this name.
>
>
> I would propose "zero-bloat heap" disambiguation of zheap.  Seems like fair
> enough explanation for me without need to rename :)
>

It's been a while since we have updated the progress on this project,
so here is an update.  This is based on the features that were not
working (as mentioned in Readme.md) when the branch was published.
1. TID Scans are working now.
2. Insert .. On Conflict is working now.
3. Tuple locking is working with a restriction that if there are more
concurrent lockers on a page than the number of transaction slots on a
page, then some of the lockers will wait till others get committed.
We are working on a solution to extend the number of transaction slots
on a separate set of pages which exist in heap, but will contain only
transaction data.  There are also some corner cases where it doesn't
work for Rollbacks.
4. Foreign keys are working.
5. Vacuum/Autovacuum is working.
6. Rollback prepared transactions.

Apart from this, we have fixed some other open issues.  I think to
discuss some of the designs, we need to start separate threads (like
Thomas has already started a thread on undo logs [1]), but it is also
okay to discuss them on this thread.  One specific thing where we
need some input is the testing of this new heap.  As of now, the
idea we are using to test it is to have a GUC parameter
(storage_engine) which, if set to zheap, makes all the regression tests
create their tables in zheap so that the operations are zheap specific.
This basically works okay, but the results differ from the expected
output in some cases: (a) in-place updates cause rows to be printed in
a different order, (b) ctid-based tests give different results because
zheap has a metapage and TPD pages, (c) \d+ shows storage_engine as an
option, etc.  We work around it by either creating a separate .out file
for zheap or sometimes by masking the expected difference in output (for
example, we don't compare the additional storage_engine option in the
output of \d+).  I know this is not the best way to test a new storage
engine, but for now it has helped us a lot.  I think we need some generic
way to test new storage engines.  I am not sure whether it is good to
discuss it here or whether this belongs to the Pluggable API thread.

Any thoughts?

[1] - 
https://www.postgresql.org/message-id/CAEepm%3D2EqROYJ_xYz4v5kfr4b0qw_Lq_6Pe8RTEC8rx3upWsSQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Mark Kirkwood



On 03/03/18 05:03, Robert Haas wrote:

On Fri, Mar 2, 2018 at 5:35 AM, Alexander Korotkov
 wrote:

I would propose "zero-bloat heap" disambiguation of zheap.  Seems like fair
enough explanation for me without need to rename :)

It will be possible to bloat a zheap table in certain usage patterns.
For example, if you bulk-load the table with a ton of data, commit the
transaction, delete every other row, and then never insert any more
rows ever again, the table is bloated: it's twice as large as it
really needs to be, and we have no provision for shrinking it.  In
general, I think it's very hard to keep bulk deletes from leaving
bloat in the table, and to the extent that it *is* possible, we're not
doing it.  One could imagine, for example, an index-organized table
that automatically combines adjacent pages when they're empty enough,
and that also relocates data to physically lower-numbered pages
whenever possible.  Such a storage engine might automatically shrink
the on-disk footprint after a large delete, but we have no plans to go
in that direction.

Rather, our assumption is that the bloat most people care about comes
from updates.  By performing updates in-place as often as possible, we
hope to avoid bloating both the heap (because we're not adding new row
versions to it which then have to be removed) and the indexes (because
if we don't add new row versions at some other TID, then we don't need
to add index pointers to that new TID either, or remove the old index
pointers to the old TID).  Without delete-marking, we can basically
optimize the case that is currently handled via HOT updates: no
indexed columns have changed.  However, the in-place update has a
major advantage that it still works even when the page is completely
full, provided that the row does not expand.  As Amit's results show,
that can hugely reduce bloat and increase performance in the face of
long-running concurrent transactions.  With delete-marking, we can
also optimize the case where indexed columns have been changed.  We
don't know exactly how well this will work yet because the code isn't
written and therefore can't be benchmarked, but I am hopeful that
in-place updates will be a big win here too.

So, I would not describe a zheap table as zero-bloat, but it should
involve a lot less bloat than our standard heap.



For folks doing ETL-type data warehousing this should be great, as the
typical workload tends to be: COPY (or similar) from a foreign data
source, then several sets of UPDATEs to fix/check/scrub the
data... which tends to result in huge bloat with the current heap design
(despite telling people 'you can do it another way' to avoid bloat -
I guess it seems more intuitive to just do it as described).


regards
Mark




Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Amit Kapila
On Fri, Mar 2, 2018 at 7:06 PM, Aleksander Alekseev
 wrote:
> Hello Amit,
>
>> Sometime back Robert has proposed a solution to reduce the bloat in
>> PostgreSQL [1] which has some other advantages of its own as well.  To
>> recap, in the existing heap, we always create a new version of a tuple on
>> an update which must eventually be removed by periodic vacuuming or by
>> HOT-pruning, but still in many cases space is never reclaimed completely.
>> A similar problem occurs for tuples that are deleted.  This leads to bloat
>> in the database.
>
> This is an impressive work!
>

Thanks.

> Personally I would like to note that performance is probably not a
> priority at this stage.
>

Right, but we are also trying to see that we just don't fall off the
cliff for some more common workloads.

> Most important parts, in my humble opinion at
> least, are correctness, maintainability (tests, documentation, how
> readable the code is), extendability (e.g. an ability to add point in
> time recovery in the future), interfaces and heap format.
>

+1.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Robert Haas
On Fri, Mar 2, 2018 at 5:35 AM, Alexander Korotkov
 wrote:
> I would propose "zero-bloat heap" disambiguation of zheap.  Seems like fair
> enough explanation for me without need to rename :)

It will be possible to bloat a zheap table in certain usage patterns.
For example, if you bulk-load the table with a ton of data, commit the
transaction, delete every other row, and then never insert any more
rows ever again, the table is bloated: it's twice as large as it
really needs to be, and we have no provision for shrinking it.  In
general, I think it's very hard to keep bulk deletes from leaving
bloat in the table, and to the extent that it *is* possible, we're not
doing it.  One could imagine, for example, an index-organized table
that automatically combines adjacent pages when they're empty enough,
and that also relocates data to physically lower-numbered pages
whenever possible.  Such a storage engine might automatically shrink
the on-disk footprint after a large delete, but we have no plans to go
in that direction.

Rather, our assumption is that the bloat most people care about comes
from updates.  By performing updates in-place as often as possible, we
hope to avoid bloating both the heap (because we're not adding new row
versions to it which then have to be removed) and the indexes (because
if we don't add new row versions at some other TID, then we don't need
to add index pointers to that new TID either, or remove the old index
pointers to the old TID).  Without delete-marking, we can basically
optimize the case that is currently handled via HOT updates: no
indexed columns have changed.  However, the in-place update has a
major advantage that it still works even when the page is completely
full, provided that the row does not expand.  As Amit's results show,
that can hugely reduce bloat and increase performance in the face of
long-running concurrent transactions.  With delete-marking, we can
also optimize the case where indexed columns have been changed.  We
don't know exactly how well this will work yet because the code isn't
written and therefore can't be benchmarked, but am hopeful that that
in-place updates will be a big win here too.

So, I would not describe a zheap table as zero-bloat, but it should
involve a lot less bloat than our standard heap.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Aleksander Alekseev
Hello Amit,

> Sometime back Robert has proposed a solution to reduce the bloat in
> PostgreSQL [1] which has some other advantages of its own as well.  To
> recap, in the existing heap, we always create a new version of a tuple on
> an update which must eventually be removed by periodic vacuuming or by
> HOT-pruning, but still in many cases space is never reclaimed completely.
> A similar problem occurs for tuples that are deleted.  This leads to bloat
> in the database.

This is an impressive work!

Personally I would like to note that performance is probably not a
priority at this stage. Most important parts, in my humble opinion at
least, are correctness, maintainability (tests, documentation, how
readable the code is), extendability (e.g. an ability to add point in
time recovery in the future), interfaces and heap format. There is some
saying about premature optimization... I don't remember the exact words
or who said it.

-- 
Best regards,
Aleksander Alekseev


signature.asc
Description: PGP signature


Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Thomas Munro
On Fri, Mar 2, 2018 at 11:35 PM, Alexander Korotkov
 wrote:
> On Fri, Mar 2, 2018 at 1:31 PM, Amit Kapila  wrote:
>> On Fri, Mar 2, 2018 at 1:50 PM, Tsunakawa, Takayuki
>>  wrote:
>> > Wonderful!  BTW, what "z" stand for?  Ultimate?
>>
>> There is no special meaning to 'z'.  We have discussed quite a few
>> names (like newheap, nheap, zheap and some more on those lines), but
>> zheap sounds better.  IIRC, one among Robert or Thomas has come up
>> with this name.
>
> I would propose "zero-bloat heap" disambiguation of zheap.  Seems like fair
> enough explanation for me without need to rename :)

Nice.

A weird idea I had is that it adds a Z dimension to your tables.
That's a bit... far fetched, I admit.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Alexander Korotkov
On Fri, Mar 2, 2018 at 1:31 PM, Amit Kapila  wrote:

> On Fri, Mar 2, 2018 at 1:50 PM, Tsunakawa, Takayuki
>  wrote:
> > From: Amit Kapila [mailto:amit.kapil...@gmail.com]
> >> At EnterpriseDB, we (me and some of my colleagues) are working from more
> >> than a year on the new storage format in which only the latest version
> of
> >> the data is kept in main storage and the old versions are moved to an
> undo
> >> log.  We call this new storage format "zheap".  To be clear, this
> proposal
> >> is for PG-12.
> >
> > Wonderful!  BTW, what "z" stand for?  Ultimate?
> >
>
> There is no special meaning to 'z'.  We have discussed quite a few
> names (like newheap, nheap, zheap and some more on those lines), but
> zheap sounds better.  IIRC, one among Robert or Thomas has come up
> with this name.
>

I would propose "zero-bloat heap" disambiguation of zheap.  Seems like fair
enough explanation for me without need to rename :)

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Amit Kapila
On Fri, Mar 2, 2018 at 1:50 PM, Tsunakawa, Takayuki
 wrote:
> From: Amit Kapila [mailto:amit.kapil...@gmail.com]
>> At EnterpriseDB, we (me and some of my colleagues) are working from more
>> than a year on the new storage format in which only the latest version of
>> the data is kept in main storage and the old versions are moved to an undo
>> log.  We call this new storage format "zheap".  To be clear, this proposal
>> is for PG-12.
>
> Wonderful!  BTW, what "z" stand for?  Ultimate?
>

There is no special meaning to 'z'.  We have discussed quite a few
names (like newheap, nheap, zheap and some more on those lines), but
zheap sounds better.  IIRC, one among Robert or Thomas has come up
with this name.

>
>
> Below are my first questions and comments.
>
> (1)
> This is a pure simple question from the user's perspective.  What kind of 
> workloads would you recommend zheap and heap respectively?
>

I think you have already mentioned some of the important use cases for
zheap, namely, update-intensive workloads and probably the cases where
users have long-running queries with updates.

>  Are you going to recommend zheap for all use cases, and will heap be 
> deprecated?
>

Oh, no. I don't think so.  We have not yet measured zheap's
performance in very many scenarios, so it is difficult to say about
all the cases, but I think eventually Deletes, Updates that update
most of the indexed columns, and Rollbacks will be somewhat costlier in
zheap.  Now, I think at this stage we can't measure everything because
(a) a few things are not implemented and (b) we have not done much on
performance optimization of the code.


> I felt zheap would be better for update-intensive workloads.  Then, how about 
> insert-and-read-mostly databases like a data warehouse?  zheap seems better 
> for that, since the database size is reduced.  Although data loading may 
> generate more transaction logs for undo, that increase is offset by the 
> reduction of the tuple header in WAL.
>

We have done an optimization where we don't need to WAL-log the complete
undo data, as it can be regenerated from the page during recovery if
full_page_writes is enabled.

> zheap allows us to run long-running analytics and reporting queries 
> simultaneously with updates without the concern on database bloat, so zheap 
> is a way toward HTAP, right?
>

I think so.

>
> (2)
> Can zheap be used for system catalogs?
>

As of now, we are not planning to support it for system catalogs, as
it involves much more work, but I think if we want we can do it.

>
> (3)
>> Scenario 1: A 15 minutes simple-update pgbench test with scale factor 100
>> shows 5.13% TPS improvement with 64 clients. The performance improvement
>> increases as we increase the scale factor; at scale factor 1000, it
>> reaches11.5% with 64 clients.
>
> What was the fillfactor?
>

Default.

>  What would be the comparison when HOT works effectively for heap?
>

I guess this is the case where HOT works effectively.

>
> (4)
> "Undo logs are not yet crash-safe. Fsync and some recovery details are yet to 
> be implemented."
>
> "We also want to make FSM crash-safe, since we can’t count on
> VACUUM to recover free space that we neglect to record."
>
> Would these directly affect the response time of each transaction?
>

Not the first one, but the second one might depend upon the actual
implementation; I think it is difficult to predict much at this
stage.

>
> (5)
> "The tuple header is reduced from 24 bytes to 5 bytes (8 bytes with 
> alignment):
> 2 bytes each for infomask and infomask2, and one byte for t_hoff.  I think we
> might be able to squeeze some space from t_infomask, but for now, I have kept
> it as two bytes.  All transactional information is stored in undo, so fields
> that store such information are not needed here."
>
> "To check the visibility of a
> tuple, we fetch the transaction slot number stored in the tuple header, and
> then get the transaction id and undo record pointer from transaction slot."
>
> Where in the tuple header is the transaction slot number stored?
>

In t_infomask2; refer to zhtup.h.
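
Roughly speaking, retrieving it is just a mask-and-shift on t_infomask2.
The following is only an illustrative sketch based on the ZHEAP_XACT_SLOT
and ZHEAP_XACT_SLOT_MASK macros in the github code, not necessarily the
exact helper that zhtup.h provides:

    /*
     * Illustrative sketch: the transaction slot number occupies a few bits
     * of t_infomask2; ZHEAP_XACT_SLOT is the bit mask and
     * ZHEAP_XACT_SLOT_MASK is (despite its name) the shift, both from
     * zhtup.h.
     */
    static inline int
    zheap_tuple_get_trans_slot(uint16 t_infomask2)
    {
        return (t_infomask2 & ZHEAP_XACT_SLOT) >> ZHEAP_XACT_SLOT_MASK;
    }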

>
> (6)
> "As of now, we have four transaction slots per
> page, but this can be changed.  Currently, this is a compile-time option;  we
> can decide later whether such an option is desirable in general for users."
>
> "The one known problem with the fixed number of slots is that
> it can lead to deadlock, so we are planning to add  a mechanism to allow the
> array of transactions slots to be continued on a separate overflow page.   We
> also need such a mechanism to support cases where a large number of
> transactions acquire SHARE or KEY SHARE locks on a single page."
>
> I wish for this.  I was bothered with deadlocks with Oracle and had to tune 
> INITRANS with CREATE TABLE.  The fixed number of slots introduces a new 
> configuration parameter, which adds something the DBA has to be worried about 
> and monitor a statistics figure for tuning.
>

Yeah.

>
> (7)
> What index AMs does "ind

Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Kuntal Ghosh
On Fri, Mar 2, 2018 at 2:42 AM, Alexander Korotkov
 wrote:
>
> I think results representation should be improved.  You show total size of 
> the database, but it's hard to understand how bloat degree was really 
> decreased, assuming that there are both update and append-only tables.  So, I 
> propose to show the results in per table manner.
>
> What is total number of transactions processed in both cases?  It would be 
> also more fair to compare sizes for the same number of processed transactions.
>
> Also, what are index sizes?  What are undo log sizes for zheap?
>
I've added the table sizes and TPS in the performance results. As of
now, we've just performed stress testing using pgbench. We've plans
for performing other tests including:
1. Introduce random delay in the transactions instead of keeping a
transaction open for 15 minutes.
2. Combination of ROLLBACK and COMMIT (As suggested by Fabien)
3. PGbench tests for fixed number of transaction.
4. Modify the distribution (As suggested by Alexander Korotkov)

Do let me know if any other tests are required.

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


zheap_perf_data_1.pdf
Description: Adobe PDF document


Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Amit Kapila
On Fri, Mar 2, 2018 at 1:35 PM, Fabien COELHO  wrote:
>
> Hello Amit,
>
>> At EnterpriseDB, we (me and some of my colleagues) are working from more
>> than a year on the new storage format in which only the latest version of
>> the data is kept in main storage and the old versions are moved to an undo
>> log.  [...]
>
>
> This looks more than great!
>

Thanks.

>> *We’ve shown the performance improvement of zheap over heap in a few
>> different pgbench scenarios. [...]
>
>
>> 2. Transaction aborts will be expensive.
>
>
> ISTM that some scenarii should also test the performance impact when the
> zheap storage is expected to be worse than the heap storage, i.e. with some
> rollback which will exercise the undo stuff. There does not seem to be any
> in your report, I apologise if I misread it.
>

No, there isn't any.  One idea we have to mitigate this cost is to
allow rollbacks to happen in the background.  Currently, the patch for
this is being worked on.

> I would suggest that you can use pgbench scripts such as:
>
>   -- commit.sql
>   \set aid random(1, 10 * :scale)
>   BEGIN;
>   UPDATE pgbench_accounts
> SET abalance = abalance + 1
> WHERE aid = :aid;
>   COMMIT;
>
> and
>
>   -- rollback.sql
>   \set aid random(1, 10 * :scale)
>   BEGIN;
>   UPDATE pgbench_accounts
> SET abalance = abalance + 1
> WHERE aid = :aid;
>   ROLLBACK;
>
> that can run with various weights to change how much rollback is injected,
> eg 1% rollback rate is achieved with:
>
>   pgbench -T 10 -P 1 -M prepared -r \
> -f SQL/commit.sql@99 -f SQL/rollback.sql@1
>
> Also, I would be wary of doing only max speed test, and consider more
> realistic --rate tests where the tps is fixed.
>

Your suggestions are good, we will try to do some tests based on these
ideas after making some more progress in the Rollbacks (there is some
pending work in Rollbacks as mentioned in README.md).

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Amit Kapila
On Fri, Mar 2, 2018 at 9:29 AM, Mark Kirkwood
 wrote:
> On 02/03/18 16:53, Alvaro Herrera wrote:
>
>> I think it was impolite to post this on the very same day the commitfest
>> started.  We have enough patches as it is ...
>>
>
> To be fair - he did say things like "wanting feedback..." and "shows an
> example of using pluggable storage.." and for PG 12. If he held onto the
> patches and waited - he'd get criticism of the form "you should have given a
> heads up earlier...".
>
>
> P.s: awesome work.
>

Thanks.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Amit Kapila
On Fri, Mar 2, 2018 at 9:23 AM, Alvaro Herrera  wrote:
> I think it was impolite to post this on the very same day the commitfest
> started.  We have enough patches as it is ...
>

I can understand your concern, but honestly, I have no intention to
hinder the current commit fest work.  We have been preparing to post this
for more than a month, but it took some time to finish the
documentation and to fix some other issues.  I could have posted this
after the CF as well, but I was not sure if there was any benefit in
delaying because, at this stage, we are not expecting much code
review, but rather some feedback on the high-level design, and I think it
can certainly help the pluggable API project.  I think the chances of
getting the pluggable API into this release are remote, but maybe we can
get some small portion of it.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



RE: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Tsunakawa, Takayuki
From: Amit Kapila [mailto:amit.kapil...@gmail.com]
> At EnterpriseDB, we (me and some of my colleagues) are working from more
> than a year on the new storage format in which only the latest version of
> the data is kept in main storage and the old versions are moved to an undo
> log.  We call this new storage format "zheap".  To be clear, this proposal
> is for PG-12.

Wonderful!  BTW, what "z" stand for?  Ultimate?


> Credits
> 
> Robert did much of the basic design work.  The design and development of
> various subsystems of zheap have been done by a team comprising of me, Dilip
> Kumar, Kuntal Ghosh, Mithun CY, Ashutosh Sharma, Rafia Sabih, Beena Emerson,
> and Amit Khandekar.  Thomas Munro wrote the undo storage system.  Marc
> Linster has provided unfailing management support, and Andres Freund has
> provided some design input (and criticism).  Neha Sharma and Tushar Ahuja
> are helping with the testing of this project.

What a gorgeous star team!


Below are my first questions and comments.

(1)
This is a pure simple question from the user's perspective.  What kind of 
workloads would you recommend zheap and heap respectively?  Are you going to 
recommend zheap for all use cases, and will heap be deprecated?  I think we 
need to be clear on this in the manual, at least before the final release.

I felt zheap would be better for update-intensive workloads.  Then, how about 
insert-and-read-mostly databases like a data warehouse?  zheap seems better for 
that, since the database size is reduced.  Although data loading may generate 
more transaction logs for undo, that increase is offset by the reduction of the 
tuple header in WAL.

zheap allows us to run long-running analytics and reporting queries 
simultaneously with updates without the concern on database bloat, so zheap is 
a way toward HTAP, right?


(2)
Can zheap be used for system catalogs?  If yes, we won't be bothered with 
system catalog bloat, e.g. as a result of repeated creation and deletion of 
temporary tables.


(3)
> Scenario 1: A 15 minutes simple-update pgbench test with scale factor 100
> shows 5.13% TPS improvement with 64 clients. The performance improvement
> increases as we increase the scale factor; at scale factor 1000, it
> reaches11.5% with 64 clients.

What was the fillfactor?  What would be the comparison when HOT works 
effectively for heap?


(4)
"Undo logs are not yet crash-safe. Fsync and some recovery details are yet to 
be implemented."

"We also want to make FSM crash-safe, since we can’t count on
VACUUM to recover free space that we neglect to record."

Would these directly affect the response time of each transaction?  Do you 
predict that the performance difference will get smaller when these are 
implemented?


(5)
"The tuple header is reduced from 24 bytes to 5 bytes (8 bytes with alignment):
2 bytes each for infomask and infomask2, and one byte for t_hoff.  I think we
might be able to squeeze some space from t_infomask, but for now, I have kept
it as two bytes.  All transactional information is stored in undo, so fields
that store such information are not needed here."

"To check the visibility of a
tuple, we fetch the transaction slot number stored in the tuple header, and
then get the transaction id and undo record pointer from transaction slot."

Where in the tuple header is the transaction slot number stored?


(6)
"As of now, we have four transaction slots per
page, but this can be changed.  Currently, this is a compile-time option;  we
can decide later whether such an option is desirable in general for users."

"The one known problem with the fixed number of slots is that
it can lead to deadlock, so we are planning to add  a mechanism to allow the
array of transactions slots to be continued on a separate overflow page.   We
also need such a mechanism to support cases where a large number of
transactions acquire SHARE or KEY SHARE locks on a single page."

I wish for this.  I was bothered by deadlocks with Oracle and had to tune
INITRANS with CREATE TABLE.  The fixed number of slots introduces a new
configuration parameter, which adds something the DBA has to worry about
and a statistics figure to monitor for tuning.


(7)
What index AMs does "indexes which lack delete-marking support" apply to?

Can we be freed from vacuum in a typical use case where only zheap and B-tree 
indexes are used?


(8)
How does rollback after subtransaction rollback work?  Does the undo of a whole 
transaction skip the undo of the subtransaction?



(9)
Will the prepare of 2pc transactions be slower, as they have to safely save 
undo log?

Regards
Takayuki Tsunakawa





Re: zheap: a new storage format for PostgreSQL

2018-03-02 Thread Fabien COELHO


Hello Amit,


At EnterpriseDB, we (me and some of my colleagues) are working from more
than a year on the new storage format in which only the latest version of
the data is kept in main storage and the old versions are moved to an undo
log.  [...]


This looks more than great!


*We’ve shown the performance improvement of zheap over heap in a few
different pgbench scenarios. [...]



2. Transaction aborts will be expensive.


ISTM that some scenarii should also test the performance impact when the 
zheap storage is expected to be worse than the heap storage, i.e. with 
some rollback which will exercise the undo stuff. There does not seem to 
be any in your report, I apologise if I misread it.


I would suggest that you can use pgbench scripts such as:

  -- commit.sql
  \set aid random(1, 10 * :scale)
  BEGIN;
  UPDATE pgbench_accounts
    SET abalance = abalance + 1
    WHERE aid = :aid;
  COMMIT;

and

  -- rollback.sql
  \set aid random(1, 10 * :scale)
  BEGIN;
  UPDATE pgbench_accounts
    SET abalance = abalance + 1
    WHERE aid = :aid;
  ROLLBACK;

that can run with various weights to change how much rollback is injected,
eg 1% rollback rate is achieved with:

  pgbench -T 10 -P 1 -M prepared -r \
-f SQL/commit.sql@99 -f SQL/rollback.sql@1

Also, I would be wary of doing only max speed test, and consider more 
realistic --rate tests where the tps is fixed.


--
Fabien.

Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Mark Kirkwood

On 02/03/18 16:53, Alvaro Herrera wrote:


I think it was impolite to post this on the very same day the commitfest
started.  We have enough patches as it is ...



To be fair - he did say things like "wanting feedback..." and "shows an 
example of using pluggable storage.." and for PG 12. If he held onto the 
patches and waited - he'd get criticism of the form "you should have 
given a heads up earlier...".


This is earlier :-)

Best wishes

Mark

P.s: awesome work.



Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Alvaro Herrera
I think it was impolite to post this on the very same day the commitfest
started.  We have enough patches as it is ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Amit Kapila
On Fri, Mar 2, 2018 at 2:42 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:

> On Thu, Mar 1, 2018 at 5:09 PM, Amit Kapila 
> wrote:
>
>> Preliminary performance results
>> ---
>>
>> *We’ve shown the performance improvement of zheap over heap in a few
>> different pgbench scenarios.  All of these tests were run with data that
>> fits in shared_buffers (32GB), and 16 transaction slots per zheap page.
>> Scenario-1 and Scenario-2 has used synchronous_commit = off and Scenario-3
>> and Scenario-4 has used synchronous_commit = on*
>>
>
> What hardware did you use for benchmarks?
>
Also, I note that you have 4 transaction slots per zheap page in github
> code while you use 16 in benchmarks.
>
> #define MAX_PAGE_TRANS_INFO_SLOTS 4
>
> I would also note that in the code you preserve only 3 bits for
> transaction slot number.  So, one have to redefine 3 macros to change
> transaction slot number to the value you used in the benchmarks.
>
> #define ZHEAP_XACT_SLOT 0x3800 /* 3 bits (12, 13 and 14) for transaction
> slot */
> #define ZHEAP_XACT_SLOT_MASK 0x000B /* 11 - mask to retrieve transaction
> slot */
>
> I'm only starting reviewing this, but it makes me think that we need
> transaction slots number to be tunable (or even auto-tunable).
>
>
Yeah, that is the plan.  So, the idea is that for now we will provide a
compile-time option to configure the number of slots (the patch for this is
ready; currently we are testing it); later we can even give the option to
the user at the relation level, or whatever we decide.  The reason I think
it makes sense to give an option at the relation level is that for larger
relations we can do with very few transaction slots, considering that the
chances of many transactions operating on the same page are low; it is only
for smaller relations that we need a larger number of slots.  OTOH, there
could be workloads where we can expect many concurrent transactions on the
same page.  However, for now if you want to test, the patch to increase
transaction slots is attached; you need to change the values of a few
macros according to the number of slots you want.
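
To give an idea of the kind of change involved (purely illustrative,
hypothetical values -- the attached patch is authoritative), the slot count
and the bit-field in t_infomask2 have to be widened together:

    /*
     * Hypothetical values for illustration only: with 16 slots the slot
     * number needs 4 bits in t_infomask2 instead of 3, so the mask widens
     * along with the slot count while the shift stays at the lowest mask
     * bit.
     */
    #define MAX_PAGE_TRANS_INFO_SLOTS   16
    #define ZHEAP_XACT_SLOT             0x7800  /* 4 bits (12..15) for transaction slot */
    #define ZHEAP_XACT_SLOT_MASK        0x000B  /* 11 - shift to retrieve transaction slot */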


> BTW, last two macros don't look properly named for me.  I would rather
> rename them in a following way:
> ZHEAP_XACT_SLOT_MASK => ZHEAP_XACT_SLOT_OFFSET
>

How about ZHEAP_XACT_SLOT_SHIFT?  I see similar things named with a *_SHIFT
suffix in the code.


> ZHEAP_XACT_SLOT => ZHEAP_XACT_SLOT_MASK
>
>
makes sense.  I will change it.


>
>> Scenario 1: A 15 minutes simple-update pgbench test with scale factor
>> 100 shows 5.13% TPS improvement with 64 clients. The performance
>> improvement increases as we increase the scale factor; at scale factor
>> 1000, it reaches 11.5% with 64 clients.
>>
>>                 Scale Factor   HEAP       ZHEAP (tables)*   Improvement
>>   Before test   100            1281 MB    1149 MB           -10.3%
>>                 1000           13 GB      11 GB             -15.38%
>>   After test    100            4.08 GB    3 GB              -26.47%
>>                 1000           15 GB      12.6 GB           -16%
>>
>> * The size of zheap tables increase because of the insertions in
>> pgbench_history table.
>>
>
> I think results representation should be improved.  You show total size of
> the database, but it's hard to understand how bloat degree was really
> decreased, assuming that there are both update and append-only tables.  So,
> I propose to show the results in per table manner.
>
>
Fair enough.  Kuntal has done this testing; he will share the results as
you have requested.


> What is total number of transactions processed in both cases?  It would be
> also more fair to compare sizes for the same number of processed
> transactions.
>
> Also, what are index sizes?  What are undo log sizes for zheap?
>
>

There shouldn't be any change in the index sizes, and by the end of the
tests the undo is completely discarded.  I think to see the impact of undo
size, we need some different tests wherein we keep a transaction open till
the end of the test, or some such.


> I also suggest to use Zipfian distribution in testing.  It's more close to
> real world workloads.  And it would be a good stress test for both HOT and
> transaction slots.
>
>
Yeah, we can do such tests, but keep in mind this code is still a work in
progress, a lot of things are going to change, and so far we have not done
much optimization in the code to improve the performance numbers.

Thanks a lot for showing interest in this work!

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


increase_slots_15.patch
Description: Binary data


Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Alexander Korotkov
On Thu, Mar 1, 2018 at 5:09 PM, Amit Kapila  wrote:

> Preliminary performance results
> ---
>
> *We’ve shown the performance improvement of zheap over heap in a few
> different pgbench scenarios.  All of these tests were run with data that
> fits in shared_buffers (32GB), and 16 transaction slots per zheap page.
> Scenario-1 and Scenario-2 has used synchronous_commit = off and Scenario-3
> and Scenario-4 has used synchronous_commit = on*
>

What hardware did you use for benchmarks?
Also, I note that you have 4 transaction slots per zheap page in github
code while you use 16 in benchmarks.

#define MAX_PAGE_TRANS_INFO_SLOTS 4

I would also note that in the code you preserve only 3 bits for the
transaction slot number.  So, one has to redefine 3 macros to change the
transaction slot number to the value you used in the benchmarks.

#define ZHEAP_XACT_SLOT 0x3800 /* 3 bits (12, 13 and 14) for transaction
slot */
#define ZHEAP_XACT_SLOT_MASK 0x000B /* 11 - mask to retrieve transaction
slot */

I'm only starting to review this, but it makes me think that we need
the number of transaction slots to be tunable (or even auto-tunable).

BTW, the last two macros don't look properly named to me.  I would rather
rename them in the following way:
ZHEAP_XACT_SLOT_MASK => ZHEAP_XACT_SLOT_OFFSET
ZHEAP_XACT_SLOT => ZHEAP_XACT_SLOT_MASK


> Scenario 1: A 15 minutes simple-update pgbench test with scale factor 100
> shows 5.13% TPS improvement with 64 clients. The performance improvement
> increases as we increase the scale factor; at scale factor 1000, it
> reaches 11.5% with 64 clients.
>
>                 Scale Factor   HEAP       ZHEAP (tables)*   Improvement
>   Before test   100            1281 MB    1149 MB           -10.3%
>                 1000           13 GB      11 GB             -15.38%
>   After test    100            4.08 GB    3 GB              -26.47%
>                 1000           15 GB      12.6 GB           -16%
>
> * The size of zheap tables increase because of the insertions in
> pgbench_history table.
>

I think the representation of the results should be improved.  You show the
total size of the database, but it's hard to understand how much the bloat
was really decreased, given that there are both update and append-only
tables.  So, I propose to show the results in a per-table manner.

What is the total number of transactions processed in both cases?  It would
also be fairer to compare sizes for the same number of processed
transactions.

Also, what are index sizes?  What are undo log sizes for zheap?

I also suggest using a Zipfian distribution in testing.  It's closer to
real-world workloads.  And it would be a good stress test for both HOT and
transaction slots.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Amit Kapila
On Thu, Mar 1, 2018 at 9:00 PM, Satyanarayana Narlapuram
 wrote:
>
>>> Cons
>
>>> ---
>>> 1. Deletes can be somewhat expensive.
>>> 2. Transaction aborts will be expensive.
>>> 3. Updates that update most of the indexed columns can be somewhat
>>> expensive.
>
> Given transaction aborts are expensive, is there any impact on the crash
> recovery?

I don't think there should be any direct impact of aborts on recovery
time, as we start processing the undo records after recovery is done.
Basically, we invoke the undo worker after recovery, which performs the
aborts in the background.

>Did you perform any tests on the recovery duration?
>

Not yet, but I think we will do it after making some more progress.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Hartmut Holzgraefe

On 01.03.2018 16:30, Satyanarayana Narlapuram wrote:
Given transaction aborts are expensive, is there any impact on the crash 
recovery?


In InnoDB/XtraDB, which has used the "move old row versions to UNDO log"
approach since the very beginning, rollbacks are indeed costly, and
especially so on recovery when the UNDO log pages are not yet cached in RAM.

There is a cost trade-off between this kind of "optimistic MVCC" and
rollback/recovery that one has to be aware of.

We get support issues about this at MariaDB every once in a while, but
it is not happening that often.

I can dig up some more info on this from the InnoDB side if you are
interested ...

--
hartmut



Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Satyanarayana Narlapuram

>> Cons

>> ---
>> 1. Deletes can be somewhat expensive.
>> 2. Transaction aborts will be expensive.
>> 3. Updates that update most of the indexed columns can be somewhat expensive.

Given transaction aborts are expensive, is there any impact on the crash 
recovery? Did you perform any tests on the recovery duration?

Thanks,
Satya





From: Amit Kapila 
Sent: Thursday, March 1, 2018 7:05:12 AM
To: PostgreSQL Hackers
Subject: Re: zheap: a new storage format for PostgreSQL

On Thu, Mar 1, 2018 at 7:39 PM, Amit Kapila  wrote:
>
> Preliminary performance results
> ---
>

I have not used plain text mode in my previous email due to which
performance results might not be clear in some email clients, so
attaching it again in the form of pdf.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: zheap: a new storage format for PostgreSQL

2018-03-01 Thread Amit Kapila
On Thu, Mar 1, 2018 at 7:39 PM, Amit Kapila  wrote:
>
> Preliminary performance results
> ---
>

I have not used plain text mode in my previous email due to which
performance results might not be clear in some email clients, so
attaching it again in the form of pdf.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


zheap_perf_data.pdf
Description: Adobe PDF document


zheap: a new storage format for PostgreSQL

2018-03-01 Thread Amit Kapila
Sometime back Robert has proposed a solution to reduce the bloat in
PostgreSQL [1] which has some other advantages of its own as well.  To
recap, in the existing heap, we always create a new version of a tuple on
an update which must eventually be removed by periodic vacuuming or by
HOT-pruning, but still in many cases space is never reclaimed completely.
A similar problem occurs for tuples that are deleted.  This leads to bloat
in the database.

At EnterpriseDB, we (me and some of my colleagues) have been working for
more than a year on a new storage format in which only the latest version
of the data is kept in main storage and the old versions are moved to an
undo log.  We call this new storage format "zheap".  To be clear, this
proposal is for PG-12.  The purpose of posting this at this stage is that
it can help as an example to be integrated with the pluggable storage API
patch and to get some early feedback on the design.  The purpose of this
email is to introduce the overall project; however, I think going forward
we need to discuss some of the subsystems (like Indexing, Tuple locking,
Vacuum for non-delete-marked indexes, Undo Log Storage, Undo Workers,
etc.) in separate threads.

The three main advantages of this new format are:
1. Provide better control over bloat (a) by allowing in-place updates in
common cases and (b) by reusing space as soon as a transaction that has
performed a delete or non-in-place-update has committed.  In short, with
this new storage, whenever possible, we’ll avoid creating bloat in the
first place.

2. Reduce write amplification both by avoiding rewrites of heap pages (for
setting hint-bits, freezing, etc.) and by making it possible to do an
update that touches indexed columns without updating every index.

3. Reduce the tuple size by (a) shrinking the tuple header and (b)
eliminating most alignment padding.

You can check README.md in the project folder [1] to understand how to use
it and also what the open issues are. The detailed design of the project is
present at src/backend/access/zheap/README.  The code for this project is
being developed in the Github repository [1].  You can also read about this
project in Robert's recent blog [2].  I have also added a few notes on
integration with the pluggable API on the zheap wiki page [3].

Preliminary performance results
---

We’ve shown the performance improvement of zheap over heap in a few
different pgbench scenarios.  All of these tests were run with data that
fits in shared_buffers (32GB), and 16 transaction slots per zheap page.
Scenario-1 and Scenario-2 have used synchronous_commit = off and
Scenario-3 and Scenario-4 have used synchronous_commit = on.

Scenario 1: A 15 minutes simple-update pgbench test with scale factor 100
shows 5.13% TPS improvement with 64 clients. The performance improvement
increases as we increase the scale factor; at scale factor 1000, it
reaches 11.5% with 64 clients.

                Scale Factor   HEAP       ZHEAP (tables)*   Improvement
  Before test   100            1281 MB    1149 MB           -10.3%
                1000           13 GB      11 GB             -15.38%
  After test    100            4.08 GB    3 GB              -26.47%
                1000           15 GB      12.6 GB           -16%

* The size of zheap tables increases because of the insertions in the
pgbench_history table.

Scenario 2: To show the effect of bloat, we’ve performed another test
similar to the previous scenario, but a transaction is kept open for the
first 15 minutes of a 30-minute test. This restricts HOT-pruning for the
heap and undo-discarding for zheap for the first half of the test.
Scale factor 1000 - 75.86% TPS improvement for zheap at 64 client count.
Scale factor 3000 - 98.18% TPS improvement for zheap at 64 client count.

                Scale Factor   HEAP       ZHEAP (tables)*   Improvement
  After test    1000           19 GB      14 GB             -26.3%
                3000           45 GB      37 GB             -17.7%

* The size of zheap tables increases because of the insertions in the
pgbench_history table.

The reason for this huge performance improvement is that when the
long-running transaction gets committed after 900 seconds, autovacuum
workers start working and degrade the performance of heap for a long
time. In addition, the heap tables are also bloated by a significant
amount. On the other hand, the undo worker discards the undo very
quickly, and we don't have any bloat in the zheap relations. In brief,
zheap clusters the bloat in undo segments. We just need to determine how
much undo can be discarded and remove it, which is cheap.

Scenario 3: A 15 minutes simple-update pgbench test with scale factor 100
shows 6% TPS improvement with 64 clients. The performance improvement
increases as we increase the scale factor to 1000, achieving 11.8% with
64 clients.

                Scale Factor   HEAP       ZHEAP (tables)*   Improvement
  Before test   100            1281 MB    1149 MB           -10.3%
                1000           13 GB      11 GB             -15.38%
  After test    100            2.88 GB    2.20 GB           -23.61%
                1000           13.9 GB    11.7 GB           -15.8%

* The size of zheap tables increases because of the insertions in the
pgbench_history table.

Scenario 4: To amplify the effect of bloat in scenario 3, we’ve performed
another test similar to that scenario, but a transaction is kept open for
the first 15 minutes of a 30 minute