Thanks a lot, Karthik, Girish and Laurent.
On Fri, Oct 24, 2014 at 9:30 PM, Laurent H wrote:
> That's right, it's better to use the Oozie scheduler for your production
> environment! (You can easily check job status & logs.) Check the link
> below: http://oozie.apache.org/docs/4.0.0/DG_SqoopActionExtension.html
My comment was in response to the suggestion to use PySpark. Perhaps I
misunderstand what PySpark is. It was my understanding that it let you work
with Spark in Python. Is that not correct?
B.
From: Edward Capriolo
Sent: Tuesday, October 21, 2014 11:06 AM
To: user@hadoop.apache.org
Subject: R
I just noticed that when I run a "hadoop jar
my-fat-jar-with-all-dependencies.jar", it unjars the job jar into
/tmp/hadoop-username/hadoop-unjar-/ and extracts all the classes
there.
The fat jar is pretty big, so it took up a lot of space (particularly
inodes) and I ran out of quota.
I wonder
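One way to keep this off the quota'd /tmp, sketched here with example paths
only: depending on the Hadoop version, RunJar extracts under
${hadoop.tmp.dir} (older releases, which matches the /tmp/hadoop-username
path above) or under ${java.io.tmpdir}, so point whichever applies at a
roomier filesystem.

    <!-- core-site.xml on the client host (versions where RunJar honors
         hadoop.tmp.dir); /data/scratch is an example path -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/data/scratch/hadoop-${user.name}</value>
    </property>

    # Versions where RunJar uses java.io.tmpdir: override it per invocation
    HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/data/scratch" \
      hadoop jar my-fat-jar-with-all-dependencies.jar

RunJar normally removes its hadoop-unjar* working directory via a shutdown
hook, so directories that linger usually come from killed JVMs; periodic
cleanup of the scratch directory is still worthwhile.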
That's right, it's better to use the Oozie scheduler for your production
environment! (You can easily check job status & logs.) Check the link
below: http://oozie.apache.org/docs/4.0.0/DG_SqoopActionExtension.html
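For example, status and logs for a run can be pulled from the Oozie CLI;
the server URL and the job id below are placeholders:

    oozie job -oozie http://oozie-host:11000/oozie -info 0000001-141024120000001-oozie-oozi-W
    oozie job -oozie http://oozie-host:11000/oozie -log 0000001-141024120000001-oozie-oozi-W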
--
Laurent HATIER - Big Data & Business Intelligence Consultant at CapGemini
fr.li
Ravi
If you are using Oozie in your production environment, one option is to plug
your Sqoop job into the Oozie workflow XML using the Oozie Sqoop action; a
sketch follows below.
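A minimal sketch of such a workflow, modeled on the Oozie Sqoop action
docs; the workflow name and all ${...} EL parameters are placeholders to be
supplied from job.properties:

    <workflow-app name="sqoop-import-wf" xmlns="uri:oozie:workflow:0.4">
      <start to="sqoop-import"/>
      <action name="sqoop-import">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <command>import --connect jdbc:oracle:thin:@${oraHost}:1521/${oraSid} --username ${oraUser} --password-file ${pwFile} --table ${table}</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Sqoop import failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
      </kill>
      <end name="end"/>
    </workflow-app>

A Hive import from inside the action adds more moving parts (hive-site.xml
shipped with the workflow via <file>), so the sketch stops at a plain import.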
Thanks
Girish
Sent from my iPhone
> On Oct 24, 2014, at 4:17 AM, Dhandapani, Karthik
> wrote:
>
> Hi,
>
> There is an option.
>
> Use --
Hi,
There is an option:
Use --password-file (sets the path to a file containing the authentication
password):
http://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html
All the dynamic parameter values can be passed in as Unix shell variables to
automate the Sqoop script for different tables; a sketch follows. Copy the be
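A sketch of that parameterization (the wrapper name, host, SID, and paths
are made-up examples; per the Sqoop 1.4.4 guide, --password-file is read
through the Hadoop filesystem, so an HDFS file restricted to 400
permissions is the usual choice):

    #!/bin/sh
    # sqoop_import.sh TABLE -- hypothetical wrapper: one script, many tables
    TABLE="$1"
    ORA_HOST="oracle-host.example.com"        # example host
    ORA_SID="ORCL"                            # example service name
    PW_FILE="/user/${USER}/.oracle.password"  # HDFS file, chmod 400

    sqoop import \
      --connect "jdbc:oracle:thin:@${ORA_HOST}:1521/${ORA_SID}" \
      --username loader \
      --password-file "${PW_FILE}" \
      --table "${TABLE}" \
      --hive-import --hive-table "stage.${TABLE}"

Invoked as, e.g., ./sqoop_import.sh CUSTOMERS, so the same script covers
every table.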
Hi all,
1) Can anyone please suggest how to automate Sqoop scripts in a
production environment?
I need to import data from Oracle tables into Hadoop Hive tables using the
script below.
sqoop import --connect jdbc:oracle:thin:@:1521/ --username
username --password password --table