[
https://issues.apache.org/jira/browse/SQOOP-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Qian Xu updated SQOOP-1390:
---------------------------
Attachment: 1390.patch
Implemented ParquetJob, ParquetOutputFormat, and ParquetImportMapper with the
help of the Kite SDK's dataset API.
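For context, a minimal sketch of what such a mapper could look like (the
configuration key and the field-copying loop are illustrative assumptions,
not necessarily the code in the attached patch):

    // Hypothetical sketch: convert each imported SqoopRecord into an Avro
    // GenericRecord so that a Parquet-backed writer can persist it.
    import java.io.IOException;
    import java.util.Map;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.sqoop.lib.SqoopRecord;

    public class ParquetImportMapper
        extends Mapper<LongWritable, SqoopRecord, GenericRecord, NullWritable> {

      private Schema schema;

      @Override
      protected void setup(Context context) {
        // "parquet.avro.schema" is a placeholder property name; the job setup
        // would serialize the generated Avro schema into the configuration.
        schema = new Schema.Parser().parse(
            context.getConfiguration().get("parquet.avro.schema"));
      }

      @Override
      protected void map(LongWritable key, SqoopRecord val, Context context)
          throws IOException, InterruptedException {
        GenericRecord record = new GenericData.Record(schema);
        // Copy column values onto the Avro record field by field.
        for (Map.Entry<String, Object> e : val.getFieldMap().entrySet()) {
          record.put(e.getKey(), e.getValue());
        }
        context.write(record, NullWritable.get());
      }
    }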
> Import data to HDFS as a set of Parquet files
> ---------------------------------------------
>
> Key: SQOOP-1390
> URL: https://issues.apache.org/jira/browse/SQOOP-1390
> Project: Sqoop
> Issue Type: Sub-task
> Components: tools
> Reporter: Qian Xu
> Assignee: Qian Xu
> Attachments: 1390.patch
>
>
> Parquet files keep data in contiguous chunks by column, so appending new
> records to a dataset requires rewriting substantial portions of an existing
> file or buffering records to create a new file. In return, the columnar
> layout can yield better compression and faster column-oriented queries.
> This JIRA proposes adding the ability to import an individual table from an
> RDBMS into HDFS as a set of Parquet files. We will also provide a
> command-line interface.
> Example invocation:
> sqoop import --connect JDBC_URI --table TABLE --as-parquetfile
> --target-dir /path/to/files
> The major items are listed as follows:
> * Implement ParquetImportMapper
> * Hook up the ParquetOutputFormat and ParquetImportMapper in the import job
>   (see the wiring sketch below).
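> A minimal wiring sketch (the factory method and job name are assumptions;
> only the standard Hadoop Job API is taken as given):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.mapreduce.Job;
>
>   public class ParquetJob {
>     // Hypothetical hook-up: wire the new mapper and output format into a
>     // map-only import job; class names mirror the items above.
>     public static Job createJob(Configuration conf) throws Exception {
>       Job job = Job.getInstance(conf, "sqoop-parquet-import");
>       job.setMapperClass(ParquetImportMapper.class);
>       job.setOutputFormatClass(ParquetOutputFormat.class);
>       job.setNumReduceTasks(0);  // mappers write the Parquet output directly
>       return job;
>     }
>   }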
> Note that as Parquet is a columnar storage format, it does not make sense to
> write to it directly from record-based tools. We therefore consider using the
> Kite SDK to simplify the handling of Parquet-specific details.
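> To illustrate, a minimal sketch of writing records through the Kite SDK's
> dataset API (the dataset URI and the one-field schema are placeholders):
>
>   import org.apache.avro.Schema;
>   import org.apache.avro.generic.GenericData;
>   import org.apache.avro.generic.GenericRecord;
>   import org.kitesdk.data.Dataset;
>   import org.kitesdk.data.DatasetDescriptor;
>   import org.kitesdk.data.DatasetWriter;
>   import org.kitesdk.data.Datasets;
>   import org.kitesdk.data.Formats;
>
>   public class KiteParquetExample {
>     public static void main(String[] args) {
>       Schema schema = new Schema.Parser().parse(
>           "{\"type\":\"record\",\"name\":\"T\",\"fields\":"
>           + "[{\"name\":\"id\",\"type\":\"long\"}]}");
>
>       // Describe a dataset stored in the Parquet format.
>       DatasetDescriptor descriptor = new DatasetDescriptor.Builder()
>           .schema(schema)
>           .format(Formats.PARQUET)
>           .build();
>
>       // The "dataset:hdfs:/path/to/files" URI is a placeholder target.
>       Dataset<GenericRecord> dataset =
>           Datasets.create("dataset:hdfs:/path/to/files", descriptor);
>
>       DatasetWriter<GenericRecord> writer = dataset.newWriter();
>       try {
>         GenericRecord r = new GenericData.Record(schema);
>         r.put("id", 1L);
>         writer.write(r);  // Kite buffers rows and flushes Parquet row groups
>       } finally {
>         writer.close();
>       }
>     }
>   }
>
> This keeps the record-to-columnar buffering inside Kite, so Sqoop only deals
> with Avro records.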
--
This message was sent by Atlassian JIRA
(v6.2#6252)