[ https://issues.apache.org/jira/browse/SPARK-18877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15756554#comment-15756554 ]

Navya Krishnappa edited comment on SPARK-18877 at 3/30/17 12:45 PM:
--------------------------------------------------------------------

Thank you for replying, [~dongjoon]. Can you help me understand whether the 
above-mentioned PR will resolve the issue described below?

I have another issue related to the decimal scale. When I read the CSV source 
file shown below and create a Parquet file from it, a 
java.lang.IllegalArgumentException: Invalid DECIMAL scale: -9 exception is 
thrown.


The source file content is:
Row(column name)
9.03E+12
1.19E+11
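
For context, a scientific-notation value like these parses to a 
java.math.BigDecimal with a negative scale, and that negative scale is what 
shows up in the error below. A small standalone illustration, plain Java and 
independent of Spark:

import java.math.BigDecimal;

public class NegativeScaleDemo {
    public static void main(String[] args) {
        // Scientific-notation strings parse to BigDecimals with negative scales,
        // and CSV schema inference can carry that negative scale into DecimalType.
        System.out.println(new BigDecimal("9.03E+12").scale());  // prints -10
        System.out.println(new BigDecimal("1.19E+11").scale());  // prints -9
    }
}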

Refer to the code below, used to read the CSV file and create the Parquet file:

// Read the CSV file
Dataset dataset = getSqlContext().read()
.option(HEADER, "true")
.option(PARSER_LIB, "commons")
.option(INFER_SCHEMA, "true")
.option(DELIMITER, ",")
.option(QUOTE, "\"")
.option(ESCAPE, "
")
.option(MODE, Mode.PERMISSIVE)
.csv(sourceFile);

// Create a Parquet file
dataset.write().parquet("//path.parquet");


Stack trace:

Caused by: java.lang.IllegalArgumentException: Invalid DECIMAL scale: -9
        at org.apache.parquet.Preconditions.checkArgument(Preconditions.java:55)
        at org.apache.parquet.schema.Types$PrimitiveBuilder.decimalMetadata(Types.java:410)
        at org.apache.parquet.schema.Types$PrimitiveBuilder.build(Types.java:324)
        at org.apache.parquet.schema.Types$PrimitiveBuilder.build(Types.java:250)
        at org.apache.parquet.schema.Types$Builder.named(Types.java:228)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter.convertField(ParquetSchemaConverter.scala:412)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter.convertField(ParquetSchemaConverter.scala:321)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$$anonfun$convert$1.apply(ParquetSchemaConverter.scala:313)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter$$anonfun$convert$1.apply(ParquetSchemaConverter.scala:313)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at org.apache.spark.sql.types.StructType.foreach(StructType.scala:95)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at org.apache.spark.sql.types.StructType.map(StructType.scala:95)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter.convert(ParquetSchemaConverter.scala:313)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.init(ParquetWriteSupport.scala:85)
        at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:288)
        at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetFileFormat.scala:562)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:139)
        at org.apache.spark.sql.execution.datasources.BaseWriterContainer.newOutputWriter(WriterContainer.scala:131)
        at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:247)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
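
One possible workaround sketch, not verified here: cast the affected column to 
a decimal type with an explicit, non-negative scale before writing the Parquet 
file, so the schema handed to the Parquet converter never carries a negative 
scale. The column name and file paths below are placeholders, not the real 
ones from the job above.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;

public class NegativeScaleWorkaround {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("NegativeScaleWorkaround")
            .getOrCreate();

        // Read the CSV with schema inference, as in the snippet above.
        Dataset<Row> dataset = spark.read()
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("/path/source.csv");               // placeholder path

        // Cast the scientific-notation column to a decimal with an explicit,
        // non-negative scale before writing, so the Parquet schema converter
        // never sees a negative DECIMAL scale. "colName" is a placeholder.
        Dataset<Row> fixed = dataset.withColumn("colName",
            functions.col("colName").cast(DataTypes.createDecimalType(38, 0)));

        fixed.write().parquet("/path/output.parquet");  // placeholder path
        spark.stop();
    }
}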




> Unable to read given csv data. Exception: java.lang.IllegalArgumentException: 
> requirement failed: Decimal precision 28 exceeds max precision 20
> ----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18877
>                 URL: https://issues.apache.org/jira/browse/SPARK-18877
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2
>            Reporter: Navya Krishnappa
>            Assignee: Dongjoon Hyun
>             Fix For: 2.0.3, 2.1.1, 2.2.0
>
>
> When reading the CSV data below, the following exception is thrown even 
> though the maximum decimal precision is 38: 
> java.lang.IllegalArgumentException: requirement failed: Decimal precision 28 
> exceeds max precision 20
> Decimal
> 2323366225312000000000000000
> 24335739714000000
> 23233662253000
> 232336622530000
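
For anyone who hits the quoted precision error on an affected version before 
moving to a fixed release, one possible stop-gap, sketched here with 
placeholder path names, is to bypass CSV schema inference for that column by 
declaring it with the maximum decimal precision up front:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class ExplicitDecimalSchema {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ExplicitDecimalSchema")
            .getOrCreate();

        // Declare the "Decimal" column up front with the maximum precision (38),
        // so CSV schema inference never gets a chance to pick a narrower type.
        StructType schema = new StructType(new StructField[] {
            new StructField("Decimal", DataTypes.createDecimalType(38, 0),
                true, Metadata.empty())
        });

        Dataset<Row> dataset = spark.read()
            .option("header", "true")
            .schema(schema)
            .csv("/path/source.csv");   // placeholder path

        dataset.show();
        spark.stop();
    }
}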


