[ https://issues.apache.org/jira/browse/SPARK-22439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16249341#comment-16249341 ]
Navya Krishnappa commented on SPARK-22439:
------------------------------------------

[~sowen] Thank you for your response. As described in the issue, if we just add a header to the data given above, it works fine. I don't understand why it stops working with only the header changed. Let me know if you need more inputs.

SourceFile:
Column1
8.95977565356765764E+20
8.95977565356765764E+20
8.95977565356765764E+20

Source code1:
Dataset dataset = getSqlContext().read()
    .option(PARSER_LIB, "commons")
    .option(INFER_SCHEMA, "true")
    .option(HEADER, "true")
    .option(DELIMITER, ",")
    .option(QUOTE, "\"")
    .option(ESCAPE, " ")
    .option(MODE, Mode.PERMISSIVE)
    .csv(sourceFile);
dataset.numericColumns()

Result: Column1 - decimal(18,-3)

> Not able to get numeric columns for the file having decimal values
> ------------------------------------------------------------------
>
>                 Key: SPARK-22439
>                 URL: https://issues.apache.org/jira/browse/SPARK-22439
>             Project: Spark
>          Issue Type: Bug
>      Components: Java API, SQL
> Affects Versions: 2.2.0
>        Reporter: Navya Krishnappa
>
> When reading the below-mentioned decimal value with header specified as true:
> SourceFile:
> 8.95977565356765764E+20
> 8.95977565356765764E+20
> 8.95977565356765764E+20
>
> Source code1:
> Dataset dataset = getSqlContext().read()
>     .option(PARSER_LIB, "commons")
>     .option(INFER_SCHEMA, "true")
>     .option(HEADER, "true")
>     .option(DELIMITER, ",")
>     .option(QUOTE, "\"")
>     .option(ESCAPE, " ")
>     .option(MODE, Mode.PERMISSIVE)
>     .csv(sourceFile);
> dataset.numericColumns()
>
> Result:
> Caused by: java.util.NoSuchElementException: None.get
>     at scala.None$.get(Option.scala:347)
>     at scala.None$.get(Option.scala:345)
>     at org.apache.spark.sql.Dataset$$anonfun$numericColumns$2.apply(Dataset.scala:223)
>     at org.apache.spark.sql.Dataset$$anonfun$numericColumns$2.apply(Dataset.scala:222)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
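A note on where the decimal(18,-3) type in this report likely comes from: Spark's CSV schema inference appears to derive DecimalType precision and scale from java.math.BigDecimal's parse of the field, and for this literal BigDecimal yields a negative scale. A minimal JDK-only sketch (no Spark involved; the class name is ours for illustration):

```java
import java.math.BigDecimal;

public class NegativeScaleDemo {
    public static void main(String[] args) {
        // The literal from the bug report, exactly as it appears in the CSV.
        BigDecimal d = new BigDecimal("8.95977565356765764E+20");

        // 18 significant digits in the unscaled value -> precision 18.
        System.out.println("precision = " + d.precision());

        // 17 fractional digits minus exponent 20 -> scale 17 - 20 = -3.
        System.out.println("scale     = " + d.scale());

        // A negative scale means the unscaled value 895977565356765764
        // must be multiplied by 10^3 to recover the number. If schema
        // inference carries this pair straight into decimal(18,-3), a
        // later lookup that assumes a non-negative scale could plausibly
        // fail with the None.get seen in the stack trace above.
    }
}
```

Compiled and run with `java NegativeScaleDemo`, this prints precision 18 and scale -3, matching the decimal(18,-3) type reported for Column1.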