Hey there,

Sqoop shouldn't have any restrictions on the amount of data you transfer; it simply parallelizes the transfer process in a database-agnostic fashion. A couple of notes that should hopefully help:
- I believe "target-dir" is specific to HDFS. So it seems unnecessary here. - I believe "--append" is for HDFS only. Are you looking for "append" mode? Could you run your command with the --verbose option and attach the output to this thread? Place --verbose at the beginning of your import arguments (right after the word "import" in your command). -Abe On Fri, Jun 13, 2014 at 12:07 AM, Alberto Crespi <[email protected]> wrote: > Hello, > > I'm using CDH 5.0.1 > i'm trying to import data into hbase using sqoop. > i launch my sqoop line-command: > sudo -u hdfs sqoop import --connect jdbc:mysql://10.0.0.221/db > --username XXX --password XXX --table test -m 1 --target-dir /user/import > --incremental lastmodified --check-column date --append --hbase-table > forHive --column-family infos > During the log, i have this error: > Error during import: HBase jars are not present in classpath, cannot > import to HBase! > > I set my $HBASE_HOME : export HBASE_HOME=/usr/lib/hbase > after that i change my hbase_home because i use CDH with this string: > export HBASE_HOME=/opt/cloudera/parcels/CDH/lib/hbase > and i also try: > export HBASE_HOME=/opt/cloudera/parcels/CDH/lib/hbase/lib > > But in every case i have this error: > ERROR tool.ImportTool: Error during import: HBase jars are not present in > classpath, cannot import to HBase! > > What kind of HBASE_HOME must i set? > > and is possible use builk-load HBase with sqoop for massive import? > > thanks >
