Hi Sanjiv,
when you use

-Doraoop.import.consistent.read=true \

from
https://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html

Set to true to ensure all mappers read from the same point in time.
The System Change Number (SCN) is passed down to all mappers, which
use the Oracle Flashback Query to query the table as at that SCN.

=>

Oracle needs to keep that 'snapshot' of all the data available somewhere (this
is what the undo space I mentioned in my previous mail is for), while at the
same time the data keeps changing as new DML arrives on the table...
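
To make that concrete, the docs above mean roughly this: OraOop notes the SCN once and every mapper then runs its chunk query pinned to that SCN with a Flashback Query. A minimal sketch of the idea only (credentials, connection string and the SCN value 1234567 are made-up placeholders; DATE_DATA is the table from your command):

sqlplus -s your_user/your_password@host:port/db <<'EOF'
-- the SCN that gets passed down to every mapper
SELECT current_scn FROM v$database;
-- each mapper then reads its rows as at that SCN, along these lines
SELECT COUNT(*) FROM DATE_DATA AS OF SCN 1234567;
EOF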

You need that undo space available for the whole duration of the import.
Contact your Oracle admin for details; a rough sketch of what they could check is below.
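
For example, a minimal sketch of what your admin could look at (standard Oracle data dictionary views, run with DBA privileges; what the numbers should be depends entirely on your load, so treat this as a starting point only):

sqlplus -s / as sysdba <<'EOF'
-- how long Oracle tries to keep undo around (seconds); flashback reads older than this can fail
SHOW PARAMETER undo_retention
-- current size and autoextend setting of the undo tablespace datafiles
SELECT tablespace_name, ROUND(bytes/1024/1024) AS size_mb, autoextensible
  FROM dba_data_files
 WHERE tablespace_name = (SELECT value FROM v$parameter WHERE name = 'undo_tablespace');
EOF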

Best, Mario
Kind Regards,
Mario Amatucci


On 25 September 2015 at 08:32, @Sanjiv Singh <[email protected]> wrote:
> Hi Mario,
>
> Thanks for the reply.
>
> Please help me understand "maybe it's the case that you don't have enough undo
> space on Oracle".
>
> Are you talking about disk or memory configuration? Please help me verify the same on
> Oracle.
>
> Any help is highly appreciated !!!
>
>
>
> Regards
> Sanjiv Singh
> Mob :  +091 9990-447-339
>
> On Fri, Sep 25, 2015 at 11:57 AM, Mario Amatucci <[email protected]>
> wrote:
>>
>> Hi Sanjiv,
>> maybe it's the case that you don't have enough undo space on Oracle; I saw that
>> error in my case when loading data. Can you try with just 1 (the smallest)
>> partition?
>> Kind Regards,
>> Mario Amatucci
>>
>>
>> On 25 September 2015 at 06:23, @Sanjiv Singh <[email protected]>
>> wrote:
>> > Hi David,
>> >
>> > PFA the log file with "--verbose" added to the sqoop command.
>> >
>> >
>> > Sqoop version: 1.4.5
>> > hadoop-2.6.0
>> >
>> > Let me know if you need other details.
>> >
>> >
>> >
>> > Regards
>> > Sanjiv Singh
>> > Mob :  +091 9990-447-339
>> >
>> > On Fri, Sep 25, 2015 at 6:04 AM, David Robson
>> > <[email protected]> wrote:
>> >>
>> >> Hi Sanjiv,
>> >>
>> >>
>> >>
>> >> Could you please run the failing command again and add "--verbose" to
>> >> generate debug logging and post the full log file?
>> >>
>> >>
>> >>
>> >> David
>> >>
>> >>
>> >>
>> >> From: @Sanjiv Singh [mailto:[email protected]]
>> >> Sent: Thursday, 24 September 2015 10:10 PM
>> >> To: [email protected]
>> >> Cc: Sanjiv Singh
>> >> Subject: OraOop : Sqoop Direct Oracle import failed with error "Error:
>> >> java.io.IOException: SQLException in nextKeyValue"
>> >>
>> >>
>> >>
>> >> Hi Folks,
>> >>
>> >>
>> >>
>> >> I am trying to import a partitioned Oracle table into Hive through "OraOop"
>> >> direct mode and I am getting an error.
>> >>
>> >> I tried other permutations and combinations of Sqoop parameters; here
>> >> is what I have tried.
>> >>
>> >> Worked (chunk.method=PARTITION and only 1 mapper):
>> >>
>> >>
>> >>
>> >> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71' \
>> >> -Doraoop.chunk.method=PARTITION  \
>> >> --m 1  \
>> >> --direct \
>> >>
>> >> Worked (chunk.method=PARTITION  removed and 100 mappers):
>> >>
>> >>
>> >>
>> >> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71' \
>> >> --m 100  \
>> >> --direct \
>> >>
>> >> Doesn't work (chunk.method=PARTITION and  100 mappers):
>> >>
>> >>
>> >>
>> >> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71' \
>> >> -Doraoop.chunk.method=PARTITION  \
>> >> --m 100  \
>> >> --direct \
>> >>
>> >> Though other combinations are working, can you please help me understand
>> >> why chunk.method=PARTITION with multiple mappers is failing?
>> >>
>> >> Is there something that needs to be done on the Hive side for partitions?
>> >>
>> >> Please help me in resolving this issue.
>> >>
>> >> Any help is highly appreciated
>> >>
>> >>
>> >>
>> >>
>> >> See below the full Sqoop command which is failing, and the error logs.
>> >>
>> >> Sqoop Import command (which is failing):
>> >>
>> >> $SQOOP_HOME/bin/sqoop import  \
>> >> -Doraoop.disabled=false \
>> >> -Doraoop.import.partitions='OLD_DAYS,SYS_P41,SYS_P42,SYS,SYS_P68,SYS_P69,SYS_P70,SYS_P71' \
>> >> -Doraoop.chunk.method=PARTITION  \
>> >> -Doraoop.import.consistent.read=true \
>> >> -Dmapred.child.java.opts="-Djava.security.egd=file:/dev/../dev/urandom" \
>> >> --connect jdbc:oracle:thin:@host:port/db \
>> >> --username ***** \
>> >> --password ***** \
>> >> --table DATE_DATA \
>> >> --direct \
>> >> --hive-import \
>> >> --hive-table tempDB.DATE_DATA \
>> >> --split-by D_DATE_SK \
>> >> --m 100  \
>> >> --delete-target-dir \
>> >> --target-dir /tmp/34/DATE_DATA
>> >>
>> >>
>> >> Error logs :
>> >>
>> >> 2015-09-24 16:23:57,068 [myid:] - INFO  [main:Job@1452] - Task Id : attempt_1442839036383_0051_m_000006_0, Status : FAILED
>> >> Error: java.io.IOException: SQLException in nextKeyValue
>> >>     at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
>> >>     at org.apache.sqoop.manager.oracle.OraOopDBRecordReader.nextKeyValue(OraOopDBRecordReader.java:351)
>> >>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
>> >>     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>> >>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>> >>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>> >>     at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
>> >>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
>> >>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>> >>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>> >>     at java.security.AccessController.doPrivileged(Native Method)
>> >>     at javax.security.auth.Subject.doAs(Subject.java:415)
>> >>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>> >>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>> >> Caused by: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
>> >>     at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:91)
>> >>     at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
>> >>     at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
>> >>     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
>> >>     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
>> >>     at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
>> >>     at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
>> >>     at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
>> >>     at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
>> >>     at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
>> >>     at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
>> >>     at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3431)
>> >>     at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1491)
>> >>     at org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
>> >>     at org.apache.sqoop.manager.oracle.OraOopDBRecordReader.executeQuery(OraOopDBRecordReader.java:417)
>> >>     at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
>> >>     ... 13 more
>> >>
>> >>
>> >>
>> >>
>> >> Regards
>> >> Sanjiv Singh
>> >> Mob :  +091 9990-447-339
>> >
>> >
>
>
