[ https://issues.apache.org/jira/browse/SPARK-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15708082#comment-15708082 ]
Thomas Powell commented on SPARK-17608:
---------------------------------------

Yes, the confusing thing at the moment is the round-tripping, so this sounds like a good solution.

> Long type has incorrect serialization/deserialization
> -----------------------------------------------------
>
>                 Key: SPARK-17608
>                 URL: https://issues.apache.org/jira/browse/SPARK-17608
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Thomas Powell
>
> I am hitting issues when using {{dapply}} on a data frame that contains a
> {{bigint}} in its schema. When this is converted to a SparkR data frame, a
> "bigint" gets converted to an R {{numeric}} type:
> https://github.com/apache/spark/blob/master/R/pkg/R/types.R#L25.
> However, the R {{numeric}} type gets converted back to
> {{org.apache.spark.sql.types.DoubleType}}:
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala#L97.
> The two directions therefore aren't compatible. If I use the same schema
> with {{dapply}} (and just an identity function), I get type collisions
> because the output type is a double but the schema expects a bigint.
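For reference, a minimal sketch of the round-trip failure described above, assuming the Spark 2.0 SparkR API; the single-column query and identity function are illustrative, not taken from the reporter's code:

{code:r}
library(SparkR)
sparkR.session()

# A DataFrame whose schema contains a bigint column.
df <- sql("SELECT CAST(1 AS BIGINT) AS id")
printSchema(df)                     # root |-- id: long

# Apply an identity function with the *same* schema: on the worker the
# column arrives as R numeric, which SQLUtils maps back to DoubleType
# rather than LongType, so it collides with the declared bigint when
# the result is read back.
out <- dapply(df, function(x) { x }, schema(df))
collect(out)                        # fails: output is double, schema expects bigint
{code}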