[ https://issues.apache.org/jira/browse/SPARK-18011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15827032#comment-15827032 ]
Miao Wang edited comment on SPARK-18011 at 1/17/17 11:27 PM:
-------------------------------------------------------------
[~felixcheung] I did intensive debugging on both the R side and the Scala side.

On the R side, I debugged `createDataFrame.default` and `parallelize`, which convert the data.frame into an RDD and then a DataFrame. The conversion of the data into an RDD happens in `parallelize`:

{code}
sliceLen <- ceiling(length(coll) / numSlices)
slices <- split(coll, rep(1:(numSlices + 1), each = sliceLen)[1:length(coll)])
serializedSlices <- lapply(slices, serialize, connection = NULL)
{code}

I added a debug message after the `serialize` call:

{code}
lapply(serializedSlices, function(x) { message(paste("unserialized ", unserialize(x))) })
{code}

The `NA` value is unserialized successfully. The serialized data is then transferred to the Scala side by

{code}
jrdd <- callJStatic("org.apache.spark.api.r.RRDD", "createRDDFromArray", sc, serializedSlices)
{code}

which returns a handle to the RDD in `jrdd`, later used by `createDataFrame.default`. I did not find anything wrong here.

On the Scala side, the problem happens in:

{code}
def readString(in: DataInputStream): String = {
  val len = in.readInt()   // <=== fails here when reading `NA` as a string
  readStringBytes(in, len)
}
{code}

I then changed the logic as follows:

{code}
def readString(in: DataInputStream): String = {
  var len = in.readInt()
  if (len < 0) {
    len = 3   // <=== force reading 3 bytes, because I believe this is the `NA` case
  }
  readStringBytes(in, len)
}
{code}

With that change I ran the following commands in SparkR:

{code}
> a <- as.Date(NA)
> b <- as.data.frame(a)
> c <- collect(select(createDataFrame(b), "*"))
> c
   a
1 NA
{code}

It executes correctly without hitting the exception-handling path. (I added debug output to that path; when it is hit, an error message is printed on the console, and I verified that the message is printed when the change above is absent.)

So we can conclude that the problem is caused by the `serialize` function of my local R installation, which serializes `NA` as a string without packing its length before the actual value. Since `unserialize` can decode the serialized data, this behavior appears to be by design in R when handling an `NA` of `Date` type. I could not find the underlying source of `serialize` in the R code base; the R-level function just calls

{code}
.Internal(serialize(object, connection, type, version, refhook))
{code}

For the fix, we can either leave it as it is (covered by the existing exception handling) or explicitly handle a negative length in `readString`. What do you think? Thanks!
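To make the failure mode concrete, below is a minimal, self-contained Scala sketch (not the actual SerDe.scala code) of a length-prefixed string reader. When a negative length arrives instead of the byte count for "NA", allocating `new Array[Byte](len)` is what raises `NegativeArraySizeException`; one possible guard maps a negative length to an NA/null result. The helper names `writeLengthPrefixed` and `readLengthPrefixedOrNA` are hypothetical and used only for illustration.

{code}
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, DataInputStream, DataOutputStream}
import java.nio.charset.StandardCharsets

object LengthPrefixedStringDemo {
  // Hypothetical helper: write a string as <int length><UTF-8 bytes>,
  // the length-prefixed layout that a reader like readString/readStringBytes expects.
  def writeLengthPrefixed(out: DataOutputStream, s: String): Unit = {
    val bytes = s.getBytes(StandardCharsets.UTF_8)
    out.writeInt(bytes.length)
    out.write(bytes)
  }

  // Hypothetical reader with a guard: a negative length is interpreted as NA
  // (returned here as null) instead of allocating a negative-sized array.
  def readLengthPrefixedOrNA(in: DataInputStream): String = {
    val len = in.readInt()
    if (len < 0) {
      // Without this guard, new Array[Byte](len) throws NegativeArraySizeException.
      null
    } else {
      val buf = new Array[Byte](len)
      in.readFully(buf)
      new String(buf, StandardCharsets.UTF_8)
    }
  }

  def main(args: Array[String]): Unit = {
    val bos = new ByteArrayOutputStream()
    val out = new DataOutputStream(bos)
    writeLengthPrefixed(out, "2016-11-11") // a normal date string
    out.writeInt(-1)                       // simulate an NA sent as a bare negative length
    out.flush()

    val in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray))
    println(readLengthPrefixedOrNA(in)) // 2016-11-11
    println(readLengthPrefixedOrNA(in)) // null (interpreted as NA)
  }
}
{code}

Either approach keeps the reader from allocating a negative-sized array: mapping a negative length to an explicit NA up front (as in this sketch), or forcing a fixed length / relying on the existing exception handling as discussed in the comment above.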
> SparkR serialize "NA" throws exception
> --------------------------------------
>
>                 Key: SPARK-18011
>                 URL: https://issues.apache.org/jira/browse/SPARK-18011
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>            Reporter: Miao Wang
>
> For some versions of R, if a Date column contains an "NA" field, the backend throws a negative index exception.
> To reproduce the problem:
> {code}
> > a <- as.Date(c("2016-11-11", "NA"))
> > b <- as.data.frame(a)
> > c <- createDataFrame(b)
> > dim(c)
> 16/10/19 10:31:24 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
> java.lang.NegativeArraySizeException
>   at org.apache.spark.api.r.SerDe$.readStringBytes(SerDe.scala:110)
>   at org.apache.spark.api.r.SerDe$.readString(SerDe.scala:119)
>   at org.apache.spark.api.r.SerDe$.readDate(SerDe.scala:128)
>   at org.apache.spark.api.r.SerDe$.readTypedObject(SerDe.scala:77)
>   at org.apache.spark.api.r.SerDe$.readObject(SerDe.scala:61)
>   at org.apache.spark.sql.api.r.SQLUtils$$anonfun$bytesToRow$1.apply(SQLUtils.scala:161)
>   at org.apache.spark.sql.api.r.SQLUtils$$anonfun$bytesToRow$1.apply(SQLUtils.scala:160)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:160)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at org.apache.spark.sql.api.r.SQLUtils$.bytesToRow(SQLUtils.scala:160)
>   at org.apache.spark.sql.api.r.SQLUtils$$anonfun$5.apply(SQLUtils.scala:138)
>   at org.apache.spark.sql.api.r.SQLUtils$$anonfun$5.apply(SQLUtils.scala:138)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:372)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>   at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:99)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org