[ https://issues.apache.org/jira/browse/SPARK-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon resolved SPARK-21246.
----------------------------------
    Resolution: Invalid

Of course, it follows the schema the user specified.

{code}
scala> peopleDF.schema == schema
res9: Boolean = true
{code}

and it throws an exception if the schema is mismatched. At least as far as the schema is concerned, I don't see an issue here. I am resolving this.

> Unexpected Data Type conversion from LONG to BIGINT
> ---------------------------------------------------
>
>                 Key: SPARK-21246
>                 URL: https://issues.apache.org/jira/browse/SPARK-21246
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>         Environment: Using Zeppelin Notebook or Spark Shell
>            Reporter: Monica Raj
>
> The unexpected conversion occurred when creating a data frame out of an
> existing data collection. The following code can be run in a Zeppelin notebook
> to reproduce the bug:
>
> import org.apache.spark.sql.types._
> import org.apache.spark.sql.Row
>
> val schemaString = "name"
> val lstVals = Seq(3)
> val rowRdd = sc.parallelize(lstVals).map(x => Row( x ))
> rowRdd.collect()
>
> // Generate the schema based on the string of schema
> val fields = schemaString.split(" ")
>   .map(fieldName => StructField(fieldName, LongType, nullable = true))
> val schema = StructType(fields)
> print(schema)
>
> val peopleDF = sqlContext.createDataFrame(rowRdd, schema)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
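A minimal sketch of the behaviour discussed above, assuming a shell session like the reporter's (sc and sqlContext in scope, Spark 1.6-style API): "bigint" is only the SQL-facing name of LongType, and the DataFrame keeps exactly the schema that was passed in. The 3L literal below (instead of the reporter's Int value 3) is an adjustment so the row value actually matches the declared LongType.

{code}
// Sketch only: same imports as the reporter's snippet.
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

val schema = StructType(Seq(StructField("name", LongType, nullable = true)))
val rowRdd = sc.parallelize(Seq(Row(3L)))        // 3L is a Long, matching LongType
val peopleDF = sqlContext.createDataFrame(rowRdd, schema)

peopleDF.schema == schema     // true: the user-specified schema is kept as-is
peopleDF.printSchema()        // |-- name: long (nullable = true)
peopleDF.schema.simpleString  // struct<name:bigint> -- "bigint" is just LongType's SQL name
{code}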