[ 
https://issues.apache.org/jira/browse/ARROW-661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe L. Korn resolved ARROW-661.
-------------------------------
    Resolution: Fixed

Issue resolved by pull request 404
[https://github.com/apache/arrow/pull/404]

> [C++] Add a Flatbuffer metadata type that supports array data over 2^31 - 1 
> elements
> ------------------------------------------------------------------------------------
>
>                 Key: ARROW-661
>                 URL: https://issues.apache.org/jira/browse/ARROW-661
>             Project: Apache Arrow
>          Issue Type: New Feature
>          Components: C++
>            Reporter: Wes McKinney
>            Assignee: Wes McKinney
>
> Users of Arrow C++ have reported needing to store large arrays using our 
> shared memory IPC machinery. While data over 2^31 - 1 elements is "out of 
> spec" as far as what it means to "implement Arrow", it would be useful to 
> have a "LargeRecordBatch" type that permits storing large arrays. Other Arrow 
> implementations in general will not need to support reading and writing this 
> data -- this is closely related to the tensor discussion in ARROW-550.
> {code}
> struct LargeFieldNode {
>   length: long;       // 64-bit element count, permitting > 2^31 - 1 elements
>   null_count: long;   // 64-bit null count
> }
> table LargeRecordBatch {
>   length: long;       // 64-bit row count for the batch
>   nodes: [LargeFieldNode];
>   buffers: [Buffer];
> }
> {code}
> An initial implementation of this will be marked experimental so that we can 
> get the code into the hands of users (like the Ray project 
> https://github.com/ray-project/ray) in case there's some ongoing discussion 
> to be had about handling very large vectors/arrays.
> Reading and writing this metadata should be achievable without a huge amount 
> of code duplication; I will see what I can do.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
