Hi,

After trying a lot of different options, and following the way the options are supplied in this article, http://www.devx.com/Java/hadoop-using-sqoop-for-data-splitting.html, I was able to make it work.
The trick involved adding the --append option and moving all the options related to the incremental append above the --connect option, in the following way:

/home/hadoop/sqoop/bin/sqoop job --create import-events-inc2 \
  --meta-connect jdbc:hsqldb:hsql://localhost:16000/sqoop \
  -- import \
  --append \
  --check-column id \
  --incremental append \
  --last-value 215885892 \
  --connect 'jdbc:mysql://mysql01:3306/db' \
  --username user \
  --password pass \
  --table events \
  --direct \
  -m 10 \
  --hive-import \
  --target-dir /data/events \
  -- \
  --skip-lock-tables --single-transaction --quick

Cheers,
Juan.

On Fri, Mar 6, 2015 at 11:55 AM, Juan Martin Pampliega <[email protected]> wrote:
> Hi,
>
> I am using *sqoop-1.4.5.bin__hadoop-1.0.0* in Amazon EMR.
>
> I have compiled it for hadoop 2 with *ant clean package
> -Dhadoopversion=200*.
>
> I am running an incremental job which I have saved in the metastore with
> the following configuration:
>
> /home/hadoop/sqoop/bin/sqoop job --create import-events-inc2 \
>   --meta-connect jdbc:hsqldb:hsql://localhost:16000/sqoop \
>   -- import \
>   --connect 'jdbc:mysql://mysql01:3306/db' \
>   --username user \
>   --password pass \
>   --table events \
>   --check-column id \
>   --incremental append \
>   --last-value 215885892 \
>   --direct \
>   -m 10 \
>   --hive-import \
>   --target-dir /data/events \
>   -- \
>   --skip-lock-tables --single-transaction --quick
>
> When I execute the job with:
>
> /home/hadoop/sqoop/bin/sqoop job --exec events-inc \
>   --meta-connect jdbc:hsqldb:hsql://localhost:16000/sqoop
>
> It works fine, brings the data to HDFS, and does not throw any errors.
>
> The problem is that the property incremental.last.value in the
> SQOOP_SESSIONS table is not being modified.
>
> I tried moving the metastore to an outside database (MySQL) and I keep
> having the same problem.
>
> Any idea what the problem is, or things I should try to fix it?
>
> Cheers,
> Juan.
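P.S. For anyone hitting the same issue: a quick way to confirm that the metastore actually updated incremental.last.value after a run is to print the saved job's configuration with sqoop job --show. This is just a sketch assuming the same job name and metastore URL as above:

```shell
# Print the stored parameters of the saved job, including
# incremental.last.value, without executing an import.
/home/hadoop/sqoop/bin/sqoop job --show import-events-inc2 \
  --meta-connect jdbc:hsqldb:hsql://localhost:16000/sqoop
```

If the value shown there does not advance after a successful --exec, the metastore is not persisting the new high-water mark.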
