Thanks for the suggestion,

I tried the first example you gave.
Here is my .sh file (saved as try_bash_sqoop.sh):
sqoop --options-file /home_dir/z070061/sqoop_import_param_td.txt --fields-terminated-by '\t' --warehouse-dir ../../user/z070061/UDC_DPCI/ --table inovbidt.UDC_DPCI
sqoop --options-file /home_dir/z070061/sqoop_import_param_td.txt --fields-terminated-by '\t' --warehouse-dir ../../user/z070061/UDC_STR_DC_MAP/ --table inovbidt.UDC_STR_DC_MAP

When I run it using the command below
bash try_bash_sqoop.sh

it only runs the last sqoop command successfully and fails the first one. If I run the commands separately, each runs fine without any error.
Attached are the code and the log file (copied from the screen output).
Thanks
Sandipan


From: Abraham Elmahrek [mailto:[email protected]]
Sent: Tuesday, April 08, 2014 7:26 PM
To: [email protected]
Subject: Re: Run sqoop from .sh script

Hey there,

You should just be able to put your sqoop commands in series:
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table b --split-by a
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table c --split-by a

The CLI returns 0 on success and a nonzero code on failure, so you can add conditions around each job:
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table b --split-by a
    if [[ $? -eq 0 ]]; then
      sqoop import --connect jdbc:mysql:///test --username root --password hue --table c --split-by a
    fi
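
A minimal equivalent sketch (same placeholder connect string and tables as above): since "if" tests a command's exit status directly, the $? check can be folded into the condition:
    # Run the second import only if the first one succeeds.
    if sqoop import --connect jdbc:mysql:///test --username root --password hue --table b --split-by a; then
      sqoop import --connect jdbc:mysql:///test --username root --password hue --table c --split-by a
    fi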

Also, this means you can use "set -e" to tell the script to exit if any one 
command fails:
    set -e
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table b --split-by a
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table c --split-by a
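
If you would rather log which import failed and keep going instead of exiting, a hedged variation (the echo messages here are just illustrative) is to attach a fallback to each command with ||:
    # Without set -e: report a failure to stderr, then continue to the next import.
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table b --split-by a || echo "import of table b failed" >&2
    sqoop import --connect jdbc:mysql:///test --username root --password hue --table c --split-by a || echo "import of table c failed" >&2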

For running the commands first and then adding them to a .sh file, you can manually add the commands you've previously run. You can find these commands via bash history: type "history" at the command line to get a full list of the commands you've entered in your current session.
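
For example, a rough sketch (run it interactively, since history is per-session; the grep pattern and target filename are just illustrations, and the pipeline will match its own history entry, so review the file afterward):
    # Append this session's sqoop commands to a script, stripping history's line numbers.
    history | grep 'sqoop' | sed 's/^ *[0-9]* *//' >> sqoop_imports.sh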

-Abe

On Tue, Apr 8, 2014 at 4:51 AM, Sandipan.Ghosh <[email protected]> wrote:

Hi,

I want to run multiple Sqoop commands by saving them into a .sh file and then executing it from the bash shell.

How do I do that?

Thanks
Sandipan

bash-4.1$ bash try_bash_sqoop.sh
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
14/04/10 00:58:05 INFO sqoop.Sqoop: Running Sqoop version: 1.4.3-cdh4.5.0
14/04/10 00:58:06 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
14/04/10 00:58:06 INFO manager.SqlManager: Using default fetchSize of 1000
14/04/10 00:58:06 INFO tool.CodeGenTool: Beginning code generation
14/04/10 00:58:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM inovbidt.UDC_DPCI AS t WHERE 1=0
14/04/10 00:58:07 ERROR tool.ImportTool: Imported Failed: Attempted to generate 
class with no columns!
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
14/04/10 00:58:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.3-cdh4.5.0
14/04/10 00:58:08 WARN tool.BaseSqoopTool: Setting your password on the 
command-line is insecure. Consider using -P instead.
14/04/10 00:58:08 INFO manager.SqlManager: Using default fetchSize of 1000
14/04/10 00:58:08 INFO tool.CodeGenTool: Beginning code generation
14/04/10 00:58:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* 
FROM inovbidt.UDC_STR_DC_MAP AS t WHERE 1=0
14/04/10 00:58:34 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is 
/apps/tdp/software/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/hadoop-0.20-mapreduce
14/04/10 00:58:34 INFO orm.CompilationManager: Found hadoop core jar at: 
/apps/tdp/software/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: 
/tmp/sqoop-z070061/compile/289b7772769910039ed315db9a7557a8/inovbidt_UDC_STR_DC_MAP.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/04/10 00:58:35 INFO orm.CompilationManager: Writing jar file: 
/tmp/sqoop-z070061/compile/289b7772769910039ed315db9a7557a8/inovbidt.UDC_STR_DC_MAP.jar
14/04/10 00:58:35 INFO teradata.TeradataManager: Beginning Teradata import
14/04/10 00:58:36 INFO util.TeradataUtil: JDBC URL used by Teradata manager: 
jdbc:teradata://10.67.192.180/DATABASE=INOVBIDT,LOGMECH=LDAP,TYPE=FASTEXPORT
14/04/10 00:58:36 INFO mapreduce.ImportJobBase: Beginning import of 
inovbidt.UDC_STR_DC_MAP
14/04/10 00:58:36 INFO util.TeradataUtil: Current database used by Teradata 
manager: INOVBIDT
14/04/10 00:58:36 INFO imports.TeradataInputFormat: Staging table is turned OFF
14/04/10 00:58:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing 
the arguments. Applications should implement Tool for the same.
14/04/10 00:58:39 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT 
MIN("UDC"), MAX("UDC") FROM inovbidt.UDC_STR_DC_MAP
14/04/10 00:58:39 INFO mapred.JobClient: Running job: job_201403121312_6012
14/04/10 00:58:40 INFO mapred.JobClient:  map 0% reduce 0%
14/04/10 00:58:56 INFO mapred.JobClient:  map 75% reduce 0%
14/04/10 00:58:59 INFO mapred.JobClient:  map 100% reduce 0%
14/04/10 00:59:00 INFO mapred.JobClient: Job complete: job_201403121312_6012
14/04/10 00:59:00 INFO mapred.JobClient: Counters: 23
14/04/10 00:59:00 INFO mapred.JobClient:   File System Counters
14/04/10 00:59:00 INFO mapred.JobClient:     FILE: Number of bytes read=0
14/04/10 00:59:00 INFO mapred.JobClient:     FILE: Number of bytes 
written=831512
14/04/10 00:59:00 INFO mapred.JobClient:     FILE: Number of read operations=0
14/04/10 00:59:00 INFO mapred.JobClient:     FILE: Number of large read 
operations=0
14/04/10 00:59:00 INFO mapred.JobClient:     FILE: Number of write operations=0
14/04/10 00:59:00 INFO mapred.JobClient:     HDFS: Number of bytes read=425
14/04/10 00:59:00 INFO mapred.JobClient:     HDFS: Number of bytes written=52946
14/04/10 00:59:00 INFO mapred.JobClient:     HDFS: Number of read operations=4
14/04/10 00:59:00 INFO mapred.JobClient:     HDFS: Number of large read 
operations=0
14/04/10 00:59:00 INFO mapred.JobClient:     HDFS: Number of write operations=4
14/04/10 00:59:00 INFO mapred.JobClient:   Job Counters
14/04/10 00:59:00 INFO mapred.JobClient:     Launched map tasks=4
14/04/10 00:59:00 INFO mapred.JobClient:     Total time spent by all maps in 
occupied slots (ms)=48401
14/04/10 00:59:00 INFO mapred.JobClient:     Total time spent by all reduces in 
occupied slots (ms)=0
14/04/10 00:59:00 INFO mapred.JobClient:     Total time spent by all maps 
waiting after reserving slots (ms)=0
14/04/10 00:59:00 INFO mapred.JobClient:     Total time spent by all reduces 
waiting after reserving slots (ms)=0
14/04/10 00:59:00 INFO mapred.JobClient:   Map-Reduce Framework
14/04/10 00:59:00 INFO mapred.JobClient:     Map input records=1883
14/04/10 00:59:00 INFO mapred.JobClient:     Map output records=1883
14/04/10 00:59:00 INFO mapred.JobClient:     Input split bytes=425
14/04/10 00:59:00 INFO mapred.JobClient:     Spilled Records=0
14/04/10 00:59:00 INFO mapred.JobClient:     CPU time spent (ms)=13040
14/04/10 00:59:00 INFO mapred.JobClient:     Physical memory (bytes) 
snapshot=1226002432
14/04/10 00:59:00 INFO mapred.JobClient:     Virtual memory (bytes) 
snapshot=19536457728
14/04/10 00:59:00 INFO mapred.JobClient:     Total committed heap usage 
(bytes)=8232108032
14/04/10 00:59:00 INFO mapreduce.ImportJobBase: Transferred 51.7051 KB in 
24.0544 seconds (2.1495 KB/sec)
14/04/10 00:59:00 INFO mapreduce.ImportJobBase: Retrieved 1883 records.
bash-4.1$

Attachment: try_bash_sqoop.sh
