[ 
https://issues.apache.org/jira/browse/ASTERIXDB-3407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wail Y. Alkowaileet updated ASTERIXDB-3407:
-------------------------------------------
    Description: 
An issue was observed when testing cloud caching:
{noformat}
Failure while trying to read a page from disk
org.apache.hyracks.api.exceptions.HyracksDataException: java.io.IOException: 
FAILED_TO_UNCOMPRESS(5) {noformat}
 

Given the following code snippet in 
[CloudMegaPageReadContext.java|https://github.com/apache/asterixdb/blob/master/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java]
{noformat}
 @Override
    public void onPin(ICachedPage page) throws HyracksDataException {
        CloudCachedPage cachedPage = (CloudCachedPage) page;
        if (gapStream != null && cachedPage.skipCloudStream()) {
            /*
             * This page is requested but the buffer cache has a valid copy in memory. Also, the page itself was
             * requested to be read from the cloud. Since this page is valid, no buffer cache read() will be performed.
             * As the buffer cache read() is also responsible for persisting the bytes read from the cloud, we can end
             * up writing the bytes of this page in the position of another page. Therefore, we should skip the bytes
             * for this particular page to avoid placing the bytes of this page into another page's position.
             */
            try {
                long remaining = cachedPage.getCompressedPageSize();
                while (remaining > 0) {
                    remaining -= gapStream.skip(remaining);
                }
            } catch (IOException e) {
                throw HyracksDataException.create(e);
            }
        }
    } {noformat}
 

The issue appears when the following sequence is performed to read a range of 
columnar pages:

1- Reading N valid pages from the buffer cache

2- Doing a read of a page that must be retrieved from the cloud. A stream will 
be created. However, 
[pageCounter|https://github.com/apache/asterixdb/blob/7b3f1eb481a54119529d1372df148468613dd69e/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java#L53]
 was never incremented when the N valid pages were pinned.

3- The [created 
stream|https://github.com/apache/asterixdb/blob/7b3f1eb481a54119529d1372df148468613dd69e/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java#L191]
 will start at the page that must be read from the cloud and cover 
[numberOfContiguousPages|https://github.com/apache/asterixdb/blob/7b3f1eb481a54119529d1372df148468613dd69e/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java#L52]
 - 
[pageCounter|https://github.com/apache/asterixdb/blob/7b3f1eb481a54119529d1372df148468613dd69e/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java#L53C17-L53C28]
 pages. The issue is that 
[pageCounter|https://github.com/apache/asterixdb/blob/7b3f1eb481a54119529d1372df148468613dd69e/hyracks-fullstack/hyracks/hyracks-storage-am-lsm-btree-column/src/main/java/org/apache/hyracks/storage/am/lsm/btree/column/cloud/buffercache/read/CloudMegaPageReadContext.java#L53C17-L53C28]
 was never incremented (i.e., it is 0). Thus, we request more pages than we are 
supposed to read. This is harmless if the component actually has those pages. 
However, if the requested range of pages exceeds the component's pages, 
undefined behavior occurs. In the case of compressed pages, this surfaced as a 
page that failed to decompress. 
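
The arithmetic in step 3 can be modeled in isolation. The sketch below is 
illustrative only: the field names mirror the linked source, but the class and 
the concrete numbers are hypothetical and not part of AsterixDB.
{noformat}
// Models the page-range arithmetic described above. numberOfContiguousPages
// and pageCounter mirror the fields in CloudMegaPageReadContext; everything
// else here is a hypothetical stand-in.
public class PageRangeModel {
    // The created stream requests the remaining pages of the mega page.
    static int pagesToRequest(int numberOfContiguousPages, int pageCounter) {
        return numberOfContiguousPages - pageCounter;
    }

    public static void main(String[] args) {
        int numberOfContiguousPages = 8; // pages in the requested range
        int cachedPages = 5;             // N pages already valid in the buffer cache

        // Buggy behavior: pageCounter stays 0 while the N cached pages are
        // pinned, so the stream requests all 8 pages starting at page 5 and
        // reads past the end of the range.
        System.out.println(pagesToRequest(numberOfContiguousPages, 0));

        // Fixed behavior: incrementing pageCounter for each pinned cached page
        // leaves exactly the 3 uncached pages to be requested.
        System.out.println(pagesToRequest(numberOfContiguousPages, cachedPages));
    }
} {noformat}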

> Increment page counter when skipping a valid page's cloud bytes
> ---------------------------------------------------------------
>
>                 Key: ASTERIXDB-3407
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-3407
>             Project: Apache AsterixDB
>          Issue Type: Bug
>          Components: STO - Storage
>    Affects Versions: 0.9.10
>            Reporter: Wail Y. Alkowaileet
>            Assignee: Wail Y. Alkowaileet
>            Priority: Critical
>             Fix For: 0.9.10
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
