[ https://issues.apache.org/jira/browse/SQOOP-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mirek Szymanski updated SQOOP-2564:
-----------------------------------
    Environment: Teradata 13, Oracle 11g

> Problem with Teradata input table with INTEGER and FLOAT columns
> ----------------------------------------------------------------
>
>                 Key: SQOOP-2564
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2564
>             Project: Sqoop
>          Issue Type: Bug
>    Affects Versions: 1.99.6
>         Environment: Teradata 13, Oracle 11g
>            Reporter: Mirek Szymanski
>
> I'm trying to transfer data between Teradata and Oracle databases.
> If the input table in Teradata has a column of type INTEGER (columntype 'I' 
> in dbc.columnsV) or FLOAT (columntype 'F') and the data is transferred to an 
> Oracle table, the exception "org.apache.sqoop.common.SqoopException: 
> MAPRED_EXEC_0017:Error occurs during extractor run" is thrown.
> Exception in case of a FLOAT column:
> 2015-09-14 10:59:09,200 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1441804561641_0239_m_000001_0: Error: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0017:Error occurs during extractor run
>       at org.apache.sqoop.job.mr.SqoopMapper.run(SqoopMapper.java:99)
>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>       at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>       at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.ClassCastException: java.lang.Double cannot be cast to java.lang.Float
>       at org.apache.sqoop.connector.common.SqoopIDFUtils.toCSVFloatingPoint(SqoopIDFUtils.java:164)
>       at org.apache.sqoop.connector.common.SqoopIDFUtils.toCSV(SqoopIDFUtils.java:606)
>       at org.apache.sqoop.connector.idf.CSVIntermediateDataFormat.toCSV(CSVIntermediateDataFormat.java:116)
>       at org.apache.sqoop.connector.idf.CSVIntermediateDataFormat.setObjectData(CSVIntermediateDataFormat.java:87)
>       at org.apache.sqoop.job.mr.SqoopMapper$SqoopMapDataWriter.writeArrayRecord(SqoopMapper.java:125)
>       at org.apache.sqoop.connector.jdbc.GenericJdbcExtractor.extract(GenericJdbcExtractor.java:91)
>       at org.apache.sqoop.connector.jdbc.GenericJdbcExtractor.extract(GenericJdbcExtractor.java:38)
>       at org.apache.sqoop.job.mr.SqoopMapper.run(SqoopMapper.java:95)
>       ... 7 more
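> The failing pattern, as a minimal standalone illustration (hypothetical 
> code, not the actual Sqoop source): the JDBC driver returns a Teradata 
> FLOAT value as java.lang.Double, and the hard cast to Float then fails at 
> runtime:
>
>     // Hypothetical illustration of the failing cast; not Sqoop code.
>     public class CastFailureSketch {
>       public static void main(String[] args) {
>         Object value = Double.valueOf(3.14); // what ResultSet.getObject() yields for FLOAT
>         Float f = (Float) value;             // throws java.lang.ClassCastException
>         System.out.println(f);
>       }
>     }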
> In case of an INTEGER column:
> ...
> java.lang.ClassCastException: java.lang.Integer cannot be cast to java.math.BigDecimal
>       at org.apache.sqoop.connector.common.SqoopIDFUtils.toCSVDecimal(SqoopIDFUtils.java:176)
>       at org.apache.sqoop.connector.common.SqoopIDFUtils.toCSV(SqoopIDFUtils.java:602)
>       at org.apache.sqoop.connector.idf.CSVIntermediateDataFormat.toCSV(CSVIntermediateDataFormat.java:116)
>       at org.apache.sqoop.connector.idf.CSVIntermediateDataFormat.setObjectData(CSVIntermediateDataFormat.java:87)
>       at org.apache.sqoop.job.mr.SqoopMapper$SqoopMapDataWriter.writeContent(SqoopMapper.java:149)
>       ... 11 more
> The fix (or maybe just a workaround) is to remove the cast to BigDecimal 
> from SqoopIDFUtils.toCSVDecimal(). With the cast removed, my scenario works 
> fine.
> I wonder whether the casts in the toCSVFloatingPoint and toCSVDecimal 
> methods are necessary at all. What is their real purpose - some kind of 
> type checking? If so, why not use something more explicit, such as an 
> instanceof check (see the sketch below)?
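> For illustration, a hedged sketch of what such an instanceof-based 
> conversion could look like (the class name, method signatures, and the 
> exception type below are my assumptions, not the actual Sqoop code):
>
>     import java.math.BigDecimal;
>
>     // Hypothetical sketch: check the runtime type with instanceof and rely
>     // on toString(), instead of hard-casting to one concrete class.
>     public class CsvConversionSketch {
>       static String toCSVDecimal(Object obj) {
>         if (obj instanceof BigDecimal || obj instanceof Integer || obj instanceof Long) {
>           return obj.toString(); // all of these print valid decimal literals
>         }
>         throw new IllegalArgumentException("Unexpected type: " + obj.getClass().getName());
>       }
>
>       static String toCSVFloatingPoint(Object obj) {
>         if (obj instanceof Float || obj instanceof Double) {
>           return obj.toString(); // valid floating-point literal either way
>         }
>         throw new IllegalArgumentException("Unexpected type: " + obj.getClass().getName());
>       }
>
>       public static void main(String[] args) {
>         System.out.println(toCSVDecimal(Integer.valueOf(42)));       // prints 42
>         System.out.println(toCSVFloatingPoint(Double.valueOf(3.5))); // prints 3.5
>       }
>     }
>
> Since both Float.toString() and Double.toString() already produce valid 
> floating-point literals, accepting either type removes the need for the 
> hard cast entirely.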



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)