[
https://issues.apache.org/jira/browse/SQOOP-1393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14131143#comment-14131143
]
Richard commented on SQOOP-1393:
--------------------------------
I have tested the following sample. It works without setting the jobtracker
{{-jt my_jobtracker:xxxx}}, since Hadoop 2.5.1 is used.
{code}
bin/sqoop import \
  --connect jdbc:mysql://server-391/test \
  --username admin \
  --password admin \
  --target-dir /user/pkhadloya/sqoop/extusersegments \
  --table test \
  --hive-import \
  --hive-table extusersegments \
  --create-hive-table \
  --as-parquetfile
{code}
[~tispratik], which exact Hadoop version are you using? Could you please attach
the detailed error message for further investigation? Thanks.
> Import data from database to Hive as Parquet files
> --------------------------------------------------
>
> Key: SQOOP-1393
> URL: https://issues.apache.org/jira/browse/SQOOP-1393
> Project: Sqoop
> Issue Type: Sub-task
> Components: tools
> Reporter: Qian Xu
> Assignee: Richard
> Fix For: 1.4.6
>
> Attachments: patch.diff, patch_v2.diff, patch_v3.diff
>
>
> Import data to Hive as Parquet file can be separated into two steps:
> 1. Import an individual table from an RDBMS to HDFS as a set of Parquet files.
> 2. Import the data into Hive by generating and executing a CREATE TABLE
> statement to define the data's layout in Hive with Parquet format table
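For step 2, the generated DDL might look roughly like the following sketch; the table name matches the sample command above, but the column names and types are purely illustrative, since the actual schema is derived from the source table:

{code}
-- Hypothetical CREATE TABLE that Sqoop's Hive import could generate;
-- columns shown here are placeholders, not the real schema.
CREATE TABLE extusersegments (
  id INT,
  name STRING
)
STORED AS PARQUET;
{code}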
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)