Hi,

I am executing a saved job in the following way:

sqoop job \
  -D mapreduce.task.timeout=0 \
  -D mapreduce.map.maxattempts=8 \
  --exec ${JOB_NAME} \
  --meta-connect ${SQOOP_METASTORE} \
  -- --hive-partition-value "${HIVE_PARTITION}"

The job starts fine, but the supplied MapReduce options are not applied.

For example, the map tasks fail with a timeout of 600 seconds, which is the
Hadoop default, rather than running with no timeout as
mapreduce.task.timeout=0 implies.
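
To check which value the submitted job actually received, the effective job
configuration can be inspected through the MapReduce History Server REST API.
Something along these lines (the history server host and the job id are
placeholders for my cluster's):

# Dump the job's effective configuration and look for the timeout property
curl -s "http://historyserver:19888/ws/v1/history/mapreduce/jobs/job_1497882838413_0042/conf" \
  | python -m json.tool \
  | grep -A 3 '"mapreduce.task.timeout"'

If 0 never made it into the submitted configuration, that would at least
narrow the problem down to how "sqoop job --exec" forwards the generic
options.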

When I run the same import directly, without saving it as a job, the options
are applied correctly and it finishes fine. The problem is that I need an
incremental import keyed on the table's primary key, so I have to save the
job in the metastore.
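
In case it helps, the saved job was created along these lines (the JDBC
connect string, table, and column names here are placeholders, not the real
ones):

# Create the incremental-append import job in the metastore
# (placeholder connect string / table / check column):
sqoop job \
  --meta-connect ${SQOOP_METASTORE} \
  --create ${JOB_NAME} \
  -- import \
  --connect jdbc:mysql://dbhost/mydb \
  --table my_table \
  --incremental append \
  --check-column id \
  --last-value 0 \
  --hive-import \
  --hive-partition-key dt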

Any ideas on how to fix this?

Cheers,
Juan.
