[ https://issues.apache.org/jira/browse/ASTERIXDB-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18014169#comment-18014169 ]

ASF subversion and git services commented on ASTERIXDB-3601:
------------------------------------------------------------

Commit b19148f4604a5bc8323db64d703e6031b1a4ccc9 in asterixdb's branch 
refs/heads/master from Ritik Raj
[ https://gitbox.apache.org/repos/asf?p=asterixdb.git;h=b19148f460 ]

[ASTERIXDB-3601][STO] Fixing Merge failure

- user model changes: no
- storage format changes: no
- interface changes: no

Details:
While bulk-loading during a merge, we calculate the columns
present in each leaf. However, a cursor may be closed once
all of its tuples have been read. Closing a range cursor
releases its page, which can then be reused.

During bulk-loading, even after the rangeCursor was closed, the
leaf was still being asked for its set of present columns. Since
the page had been reused, it contained a different buffer, which,
when read, gave wrong column details.

Hence, the fix is to compute this information when the cursor is
reset with a new leaf, which always happens before the cursor is
closed.
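The hazard and the fix can be sketched as follows. This is an illustrative model only: the class and method names (LeafFrame, RangeCursor, presentColumns) are assumptions for the sketch, not AsterixDB's actual API, and the "page reuse" is simulated by overwriting the buffer on close.

```java
import java.util.BitSet;

public class StaleColumnDemo {

    // Simulates a leaf page: the buffer records which columns are present.
    static class LeafFrame {
        BitSet buffer = new BitSet();
        BitSet presentColumns() { return (BitSet) buffer.clone(); }
    }

    static class RangeCursor {
        private LeafFrame leaf;
        private BitSet snapshot;   // captured eagerly at reset (the fix)

        // The fix: snapshot the present columns when the cursor moves to a
        // new leaf, before the page can ever be released and reused.
        void reset(LeafFrame newLeaf) {
            leaf = newLeaf;
            snapshot = newLeaf.presentColumns();
        }

        // Closing releases the page; simulate its reuse by another
        // component overwriting the buffer with unrelated contents.
        void close() {
            leaf.buffer.clear();
            leaf.buffer.set(99);
        }

        // Buggy pattern: query the leaf after close -> reads reused buffer.
        BitSet columnsFromLeaf()     { return leaf.presentColumns(); }
        // Fixed pattern: use the snapshot taken at reset time.
        BitSet columnsFromSnapshot() { return snapshot; }
    }

    // Scenario: leaf holds columns {0, 1}; the cursor is closed; then the
    // column set is read both the buggy way and the fixed way.
    public static BitSet[] run() {
        LeafFrame leaf = new LeafFrame();
        leaf.buffer.set(0);
        leaf.buffer.set(1);
        RangeCursor cursor = new RangeCursor();
        cursor.reset(leaf);
        cursor.close();   // page released and "reused"
        return new BitSet[] { cursor.columnsFromLeaf(),
                              cursor.columnsFromSnapshot() };
    }

    public static void main(String[] args) {
        BitSet[] r = run();
        System.out.println("after close, leaf reports: " + r[0]);
        System.out.println("snapshot taken at reset:   " + r[1]);
    }
}
```

After close, the leaf reports the reused buffer's contents, while the reset-time snapshot still reflects the columns that were actually present in that leaf.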

Ext-ref: MB-67570
Change-Id: I87b3a084d01986dd5c2abd9452a2ad5619fbab15
Reviewed-on: https://asterix-gerrit.ics.uci.edu/c/asterixdb/+/20038
Integration-Tests: Jenkins <[email protected]>
Tested-by: Jenkins <[email protected]>
Reviewed-by: Peeyush Gupta <[email protected]>


> Support unlimited number of columns in columnar storage
> -------------------------------------------------------
>
>                 Key: ASTERIXDB-3601
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-3601
>             Project: Apache AsterixDB
>          Issue Type: Improvement
>          Components: STO - Storage
>    Affects Versions: 0.9.10
>            Reporter: Ritik Raj
>            Assignee: Ritik Raj
>            Priority: Major
>              Labels: triaged
>             Fix For: 0.9.10
>
>
> There may be cases where the number of columns is too large to fit within the 
> standard pageSize. This can prevent a tuple from being added to the dataset 
> and may also trigger a flush failure, leading to repeated I/O retries that 
> ultimately fail.
> To mitigate this issue, we can expand pageZero to accommodate the necessary 
> metadata.
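A back-of-the-envelope sketch of why a fixed-size pageZero caps the column count, and how expanding it raises the cap. All sizes here (page size, header, per-column entry) are illustrative assumptions, not AsterixDB's actual on-disk layout.

```java
public class PageZeroCapacity {
    static final int PAGE_SIZE = 128 * 1024;  // assumed page size in bytes
    static final int HEADER_BYTES = 64;       // assumed fixed header overhead
    static final int PER_COLUMN_BYTES = 8;    // assumed offset+length entry

    // Maximum columns whose metadata fits when pageZero spans the given
    // number of pages.
    static int maxColumns(int pageZeroPages) {
        int usable = pageZeroPages * PAGE_SIZE - HEADER_BYTES;
        return usable / PER_COLUMN_BYTES;
    }

    public static void main(String[] args) {
        System.out.println("1 page:  " + maxColumns(1));
        System.out.println("4 pages: " + maxColumns(4));
    }
}
```

Under these assumptions, a single page caps out at a few tens of thousands of column entries; letting pageZero span multiple pages scales the limit linearly.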



--
This message was sent by Atlassian Jira
(v8.20.10#820010)