[ https://issues.apache.org/jira/browse/PARQUET-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17676912#comment-17676912 ]
ASF GitHub Bot commented on PARQUET-2075:
-----------------------------------------
gszadovszky commented on PR #1014:
URL: https://github.com/apache/parquet-mr/pull/1014#issuecomment-1382754916
> * I'd prefer creating a new JIRA for this refactor to be a prerequisite.
> Merging multiple files into a single one with customized pruning, encryption,
> and codec is also on my mind and will be supported later. I will create
> separate JIRAs as sub-tasks of PARQUET-2075 and work on them progressively.
Perfect! :)
> * Putting the original `created_by` into `key_value_metadata` is a good
> idea. However, it is tricky if a file has been rewritten several times. What
> about adding a key named `original_created_by` to `key_value_metadata` and
> concatenating all old `created_by`s to it?
It sounds good to me. Maybe have the latest one at the beginning and use the
separator `'\n'`?
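
To make the idea concrete, here is a minimal sketch of the concatenation, assuming the `original_created_by` key discussed above; the class and method names are hypothetical and not part of parquet-mr:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating the proposal: keep previous created_by
// values under an "original_created_by" key in key_value_metadata,
// newest first, joined by '\n'.
public class OriginalCreatedByUpdater {

  static final String ORIGINAL_CREATED_BY_KEY = "original_created_by";

  /**
   * Returns an updated copy of the key/value metadata where the created_by of
   * the file being rewritten is prepended to any previously recorded values.
   */
  static Map<String, String> withOriginalCreatedBy(
      Map<String, String> keyValueMetadata, String previousCreatedBy) {
    Map<String, String> updated = new HashMap<>(keyValueMetadata);
    String existing = updated.get(ORIGINAL_CREATED_BY_KEY);
    // Latest created_by goes first; older entries follow, separated by '\n'.
    String merged = (existing == null || existing.isEmpty())
        ? previousCreatedBy
        : previousCreatedBy + "\n" + existing;
    updated.put(ORIGINAL_CREATED_BY_KEY, merged);
    return updated;
  }
}
```

Keeping the newest value first lets a reader see the most recent pre-rewrite writer without scanning the whole string.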
> Unified Rewriter Tool
> -----------------------
>
> Key: PARQUET-2075
> URL: https://issues.apache.org/jira/browse/PARQUET-2075
> Project: Parquet
> Issue Type: New Feature
> Reporter: Xinli Shang
> Assignee: Gang Wu
> Priority: Major
>
> During the discussion of PARQUET-2071, we came up with the idea of a
> universal tool that translates an existing file into a different state while
> skipping some low-level steps like encoding/decoding to gain speed. For
> example, it could only decompress pages and then re-compress them directly.
> For PARQUET-2071, it would only decrypt and then re-encrypt directly. This
> will be useful for onboarding existing data to Parquet features like column
> encryption, ZSTD compression, etc.
> We already have tools like trans-compression and column pruning. We will
> consolidate all of these tools into this universal tool.
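
As an illustration of the "skip steps" idea (not the planned unified rewriter itself), the existing low-level `ParquetFileWriter.appendFile` API already copies row groups as raw bytes, with no decoding or re-encoding. A minimal sketch follows; the exact `ParquetFileWriter` constructor arguments may differ across parquet-mr versions:

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.hadoop.util.HadoopOutputFile;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.schema.MessageType;

public class AppendFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    InputFile in = HadoopInputFile.fromPath(new Path(args[0]), conf);

    // Read the schema from the source file's footer.
    MessageType schema;
    try (ParquetFileReader reader = ParquetFileReader.open(in)) {
      schema = reader.getFooter().getFileMetaData().getSchema();
    }

    ParquetFileWriter writer = new ParquetFileWriter(
        HadoopOutputFile.fromPath(new Path(args[1]), conf),
        schema,
        ParquetFileWriter.Mode.CREATE,
        128 * 1024 * 1024,  // row group size hint
        8 * 1024 * 1024);   // max padding
    writer.start();
    // appendFile copies row groups byte-for-byte: no decoding, no re-encoding.
    writer.appendFile(in);
    writer.end(Collections.emptyMap());
  }
}
```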