GitHub user nongli reopened a pull request:

    https://github.com/apache/spark/pull/10628

    [SPARK-12635][SQL] Add ColumnarBatch, an in-memory columnar format for execution.

    There are many potential benefits to having an efficient in-memory columnar format as an
    alternative to UnsafeRow. This patch introduces ColumnarBatch/ColumnVector, which starts this
    effort. The remaining implementation can be done in follow-up patches.
    
    As stated in the JIRA, there are useful external components that operate on memory in a
    simple columnar format. ColumnarBatch would serve that purpose and could serve as a
    zero-serialization/zero-copy exchange for this use case.
    
    This patch supports storing the underlying data either on heap or off heap. On-heap runs a bit
    faster, but we would need off-heap for zero-copy exchanges. Currently, this choice is hidden
    behind one interface (ColumnVector).
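
    To make the on-heap/off-heap split concrete, here is a minimal illustrative sketch (hypothetical
    names, not Spark's actual ColumnVector API): one interface, with one implementation backed by a
    plain Java array and one backed by direct (off-heap) memory.

```java
import java.nio.ByteBuffer;

// Hypothetical single-column interface hiding where the values live.
interface IntColumnVector {
    void putInt(int rowId, int value);
    int getInt(int rowId);
}

// On-heap variant: a plain Java array, GC-managed, slightly faster to access.
final class OnHeapIntVector implements IntColumnVector {
    private final int[] data;
    OnHeapIntVector(int capacity) { data = new int[capacity]; }
    public void putInt(int rowId, int value) { data[rowId] = value; }
    public int getInt(int rowId) { return data[rowId]; }
}

// Off-heap variant: a direct ByteBuffer standing in for raw memory, so the
// underlying buffer could be handed to external code without copying.
final class OffHeapIntVector implements IntColumnVector {
    private final ByteBuffer data;
    OffHeapIntVector(int capacity) { data = ByteBuffer.allocateDirect(capacity * 4); }
    public void putInt(int rowId, int value) { data.putInt(rowId * 4, value); }
    public int getInt(int rowId) { return data.getInt(rowId * 4); }
}
```

    Callers program against the interface only, so switching modes is a construction-time decision.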
    
    This differs from Parquet or the existing columnar cache because this is *not* intended to be
    used as a storage format. The focus is entirely on CPU efficiency, as we expect to have only
    one of these batches in memory per task. The layout of the values is just dense arrays of the
    value type.
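
    A sketch of what "dense arrays of the value type" means in practice (illustrative field and
    class names, not the actual patch): a batch is simply one primitive array per column plus a row
    count, which keeps scans in tight, cache-friendly loops.

```java
// Hypothetical two-column batch: values are stored positionally in dense
// primitive arrays, with no per-row object or serialization overhead.
final class IntBatch {
    final int[] id;       // column 0
    final long[] amount;  // column 1
    int numRows;

    IntBatch(int capacity) {
        id = new int[capacity];
        amount = new long[capacity];
    }

    long sumAmount() {
        long total = 0;
        // Tight loop over a dense primitive array: this is the CPU-efficiency
        // goal the description mentions.
        for (int i = 0; i < numRows; i++) total += amount[i];
        return total;
    }
}
```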

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/nongli/spark spark-12635

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/10628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #10628
    
----
commit 84c657bdbc19105608e0d07fd59587b84eaaebcf
Author: Nong <non...@gmail.com>
Date:   2016-01-01T05:12:44Z

    [SPARK-12635][SQL] Add ColumnarBatch, an in-memory columnar format for execution.
    
    There are many potential benefits to having an efficient in-memory columnar format as an
    alternative to UnsafeRow. This patch introduces ColumnarBatch/ColumnVector, which starts this
    effort. The remaining implementation can be done in follow-up patches.
    
    As stated in the JIRA, there are useful external components that operate on memory in a
    simple columnar format. ColumnarBatch would serve that purpose and could serve as a
    zero-serialization/zero-copy exchange for this use case.
    
    This patch supports storing the underlying data either on heap or off heap. On-heap runs a bit
    faster, but we would need off-heap for zero-copy exchanges. Currently, this choice is hidden
    behind one interface (ColumnVector).
    
    This differs from Parquet or the existing columnar cache because this is *not* intended to be
    used as a storage format. The focus is entirely on CPU efficiency, as we expect to have only
    one of these batches in memory per task.

commit c547ec5df19fcd8dbd1becdd4fa98fd6246d5732
Author: Nong Li <n...@databricks.com>
Date:   2016-01-07T05:14:08Z

    CR

commit 48574de403e05c3d3833356056eb88eb3c286d29
Author: Nong Li <n...@databricks.com>
Date:   2016-01-12T00:40:26Z

    Fix double put api.

commit 91b6fc04fd6100ce14f1bab1fb2893464c79fbbf
Author: Nong Li <n...@databricks.com>
Date:   2016-01-12T02:06:30Z

    Fix imports and rebase.

commit 36c8ddb6b51b3a7ee33950732287644a758615e7
Author: Nong Li <n...@databricks.com>
Date:   2016-01-12T05:52:08Z

    Support java 7 iterator interface.
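
    The commit above adds a Java 7-compatible iterator. A minimal sketch of what iterating the rows
    of a columnar batch through `java.util.Iterator` could look like (hypothetical class, not the
    patch's actual code); note that under Java 7 `remove()` must be implemented explicitly, since
    default interface methods only arrived in Java 8:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical row iterator over one column of a batch: rows are never
// materialized; the iterator just advances an index into the dense array.
final class BatchRowIterator implements Iterator<Integer> {
    private final int[] column;
    private final int numRows;
    private int rowId = 0;

    BatchRowIterator(int[] column, int numRows) {
        this.column = column;
        this.numRows = numRows;
    }

    @Override public boolean hasNext() { return rowId < numRows; }

    @Override public Integer next() {
        if (!hasNext()) throw new NoSuchElementException();
        return column[rowId++];
    }

    // Required explicitly on Java 7 (no default methods yet).
    @Override public void remove() { throw new UnsupportedOperationException(); }
}
```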

----
