[
https://issues.apache.org/jira/browse/ARROW-4083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725486#comment-16725486
]
Wes McKinney commented on ARROW-4083:
-------------------------------------
Well, we need to confront the fact that dictionary encoding is used in some
applications as a compression technique, especially in databases. Parquet files
are a canonical example.
It is not practical in general to convert everything to dense representation,
because of both memory use and performance. I think we are going to get
ourselves into trouble if we implement stream-oriented data processing code
that requires that a field in an entire stream be either all dictionary encoded
or all dense.
We can always wait until it becomes a problem, but I would like to think it
through carefully right now to see what we may have to do to accommodate this.
> [C++] Allowing ChunkedArrays to contain a mix of DictionaryArray and dense
> Array (of the dictionary type)
> ---------------------------------------------------------------------------------------------------------
>
> Key: ARROW-4083
> URL: https://issues.apache.org/jira/browse/ARROW-4083
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++
> Reporter: Wes McKinney
> Priority: Major
> Fix For: 0.13.0
>
>
> In some applications we may receive a stream of some dictionary encoded data
> followed by some non-dictionary encoded data. For example this happens in
> Parquet files when the dictionary reaches a certain configurable size
> threshold.
> We should think about how we can model this in our in-memory data structures,
> and how it can flow through to relevant computational components (e.g.
> certain data flow observers -- like an Aggregation -- might need to be able
> to process either a dense or dictionary encoded version of a particular array
> in the same stream).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)