[ https://issues.apache.org/jira/browse/SQOOP-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Qian Xu updated SQOOP-1390:
---------------------------

    Description: 
Parquet files keep data in contiguous chunks organized by column, so appending 
new records to a dataset requires rewriting substantial portions of an existing 
file or buffering records to create a new file. In exchange, Parquet may offer 
storage and query benefits.

This JIRA proposes adding the ability to import an individual table from an 
RDBMS into HDFS as a set of Parquet files. We will also provide a command-line 
interface.

Example invocation: 
    sqoop import --connect JDBC_URI --table TABLE --as-parquetfile --target-dir /path/to/files

The major items are as follows:
* Implement ParquetImportMapper (a sketch follows this list).
* Hook up the ParquetOutputFormat and ParquetImportMapper in the import job (a 
wiring sketch follows the note below).
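
To make the first item concrete, here is a minimal sketch of what a 
ParquetImportMapper could look like under the Kite/Avro approach described in 
the note below. The configuration key "parquetjob.avro.schema" and the verbatim 
field copy are illustrative assumptions, not the final design; a real 
implementation would also convert SQL values (timestamps, BLOBs, etc.) into 
Avro-compatible types.

    import java.io.IOException;
    import java.util.Map;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.generic.GenericRecordBuilder;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.sqoop.lib.SqoopRecord;

    public class ParquetImportMapper
        extends Mapper<LongWritable, SqoopRecord, GenericRecord, NullWritable> {

      private Schema schema;

      @Override
      protected void setup(Context context) {
        // Assumption: the job setup code has serialized the table's Avro
        // schema into the configuration under this (hypothetical) key.
        schema = new Schema.Parser().parse(
            context.getConfiguration().get("parquetjob.avro.schema"));
      }

      @Override
      protected void map(LongWritable key, SqoopRecord val, Context context)
          throws IOException, InterruptedException {
        // Convert the record-oriented SqoopRecord into an Avro GenericRecord.
        // A real implementation would map SQL types to Avro-compatible values
        // here instead of copying field values verbatim.
        GenericRecordBuilder builder = new GenericRecordBuilder(schema);
        for (Map.Entry<String, Object> field : val.getFieldMap().entrySet()) {
          builder.set(field.getKey(), field.getValue());
        }
        context.write(builder.build(), NullWritable.get());
      }
    }

The mapper emits Avro GenericRecords so that a dataset-aware output format can 
take care of buffering them into Parquet row groups.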

Note that because Parquet is a columnar storage format, it does not make sense 
to write to it directly from record-based tools. We will therefore consider 
using the Kite SDK to simplify the handling of Parquet-specific details.
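
Under that assumption, hooking the pieces into the import job could look like 
the sketch below. Datasets.load() and DatasetKeyOutputFormat.configure(...).writeTo(...) 
are Kite SDK calls; the class name ParquetImportJobConfigurator, the dataset 
URI parameter, and the expectation that the dataset's descriptor requests the 
Parquet format are assumptions for illustration only.

    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.kitesdk.data.Dataset;
    import org.kitesdk.data.Datasets;
    import org.kitesdk.data.mapreduce.DatasetKeyOutputFormat;

    public final class ParquetImportJobConfigurator {

      private ParquetImportJobConfigurator() { }

      // Wires the Parquet pieces into an otherwise ordinary Sqoop import job.
      public static void configureOutput(Job job, String datasetUri) {
        // Kite hides ParquetOutputFormat behind a Dataset; the dataset at
        // datasetUri is expected to have been created with the Parquet format.
        Dataset<GenericRecord> dataset =
            Datasets.load(datasetUri, GenericRecord.class);
        DatasetKeyOutputFormat.configure(job).writeTo(dataset);

        job.setMapperClass(ParquetImportMapper.class);
        job.setOutputKeyClass(GenericRecord.class);
        job.setOutputValueClass(NullWritable.class);
        job.setOutputFormatClass(DatasetKeyOutputFormat.class);
        job.setNumReduceTasks(0); // map-only import
      }
    }

Going through Kite rather than ParquetOutputFormat directly keeps schema 
handling and file layout out of Sqoop's code, which is the main motivation for 
the note above.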


  was:
Parquet files keep data in contiguous chunks organized by column; appending new 
records to a dataset requires rewriting substantial portions of an existing 
file or buffering records to create a new file. So while Parquet may have 
storage and query benefits, it does not make sense to write to it directly from 
record-based tools. We would consider using the Kite SDK to simplify the 
handling of Parquet-specific details.

The following lists the major areas for this:
* Implement ParquetImportMapper
* Hook up the ParquetOutputFormat and ParquetImportMapper in the import job.

        Summary: Import data to HDFS as a set of Parquet files  (was: Convert 
Sqoop format to Parquet format via MapReduce)

Updated the description (SQOOP-1389 is now merged)

> Import data to HDFS as a set of Parquet files
> ---------------------------------------------
>
>                 Key: SQOOP-1390
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1390
>             Project: Sqoop
>          Issue Type: Sub-task
>          Components: tools
>            Reporter: Qian Xu
>            Assignee: Qian Xu
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
