Hello, it is working OK now.
I think the problem was in the .pgpass file.
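
For reference, the ~/.pgpass entry that libpq (and therefore pg_bulkload)
reads has the format host:port:database:user:password, and the file must be
chmod 0600 or libpq will not use it. For the connection in the command below
it would look something like this (values taken from the export command;
adjust to the real host and password):

  # ~/.pgpass on each node that runs the pg_bulkload tasks, owned by the
  # user the task runs as
  X.X.X.X:5432:postgis_performancetest:postgres:1234

  chmod 0600 ~/.pgpass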

Thanks

2017-01-25 0:38 GMT+01:00 Anna Szonyi <[email protected]>:

> Hi,
>
> Could you please provide some more information? Maybe the relevant portion
> of the logs with --verbose (practically anything after "Starting
> pg_bulkload with arguments:" would be helpful).
>
> Sqoop actually just starts pg_bulkload as a separate process, and the
> return value of 2 is the actual return value of that process itself.
>
> Could you also please rerun the process (pg_bulkload) by itself, without
> Sqoop, and see if it succeeds that way: you can add the --verbose option to
> Sqoop and look for a log message like "Starting pg_bulkload with
> arguments:". That line writes out all the precise switches and arguments
> Sqoop uses to call pg_bulkload, so you can rerun it exactly as Sqoop would
> have and see whether the return value is different this way.
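>
> Roughly, the standalone rerun would look something like the sketch below;
> the input path and options here are just placeholders, the real switches
> are whatever the verbose log prints after "Starting pg_bulkload with
> arguments:".
>
>   # run on the node that executed the failing map task, as the same user
>   /usr/pgsql-9.6/bin/pg_bulkload \
>       -h X.X.X.X -p 5432 -U postgres -d postgis_performancetest \
>       -i /tmp/part-m-00000 \
>       -O test \
>       -o "TYPE=CSV"
>   echo "pg_bulkload exit code: $?"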
>
> Regards,
> Anna
>
>
> On Thu, Jan 19, 2017 at 7:03 AM, Alberto Ramón <[email protected]>
> wrote:
>
>> Hello
>>
>> System: pgsql-9.6 and HDP 2.4
>> Error on MapReduce:
>>
>>  *Exception running child : java.lang.RuntimeException: Unexpected return 
>> value from pg_bulkload: 2*
>>
>> In the code I saw that this corresponds to *PostgreSQL connection error*
>> (https://github.com/ossc-db/pg_bulkload/blob/6f7240eb1438b8dcc58b68667639abc32ed69b03/bin/pgut/pgut.h#L96)
>>
>> But a previous step does run "INFO manager.SqlManager: Executing SQL statement"
>> and it is successful.
>>
>>  sqoop export \
>>     -Dmapred.reduce.tasks=1 \
>>     -Dpgbulkload.bin="/usr/pgsql-9.6/bin/pg_bulkload" \
>>     -Dpgbulkload.input.field.delim=$'\t' \
>>     -Dpgbulkload.check.constraints="NO" \
>>     -Dpgbulkload.parse.errors="INFINITE" \
>>     -Dpgbulkload.duplicate.errors="INFINITE" \
>>     --connect jdbc:postgresql://X.X.X.X:5432/postgis_performancetest \
>>     --connection-manager org.apache.sqoop.manager.PGBulkloadManager \
>>     --table test \
>>     --username postgres --password 1234 --export-dir=/user/hdfs/data -m 1
>>
>> I tried launching it as both root and the HDFS user.
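>>
>> Since pg_bulkload connects through libpq (the earlier SqlManager step goes
>> through JDBC instead), a quick check of that side of the connection from
>> the worker node, run as the same user, is something like:
>>
>>  psql -w -h X.X.X.X -p 5432 -U postgres -d postgis_performancetest -c '\conninfo'
>>
>> The -w flag makes psql fail instead of prompting, so it only succeeds if
>> the password is supplied non-interactively (for example via ~/.pgpass).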
>>
>
>
