[ https://issues.apache.org/jira/browse/SPARK-32478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-32478:
---------------------------------
    Description: 
Currently, the error message is confusing when the output schema type does not match the actual R DataFrame in gapply:

{code}
./bin/sparkR --conf spark.sql.execution.arrow.sparkr.enabled=true
{code}

{code}
df <- createDataFrame(list(list(a=1L, b="2")))
count(gapply(df, "a", function(key, group) { group }, structType("a int, b int")))
{code}

{code}
org.apache.spark.SparkException: Job aborted due to stage failure: Task 43 in stage 2.0 failed 1 times, most recent failure: Lost task 43.0 in stage 2.0 (TID 2, 192.168.35.193, executor driver): java.lang.UnsupportedOperationException
	at org.apache.spark.sql.vectorized.ArrowColumnVector$ArrowVectorAccessor.getInt(ArrowColumnVector.java:212)
	...
{code}

We should probably also document that the types must always match.

  was:
Currently, the error message is confusing when the output schema type does not match the actual R DataFrame in gapply:

{code}
df <- createDataFrame(list(list(a=1L, b="2")))
count(gapply(df, "a", function(key, group) { group }, structType("a int, b int")))
{code}

{code}
org.apache.spark.SparkException: Job aborted due to stage failure: Task 43 in stage 2.0 failed 1 times, most recent failure: Lost task 43.0 in stage 2.0 (TID 2, 192.168.35.193, executor driver): java.lang.UnsupportedOperationException
	at org.apache.spark.sql.vectorized.ArrowColumnVector$ArrowVectorAccessor.getInt(ArrowColumnVector.java:212)
	...
{code}

We should probably also document that the types must always match.
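For reference, the failure above goes away when the declared schema matches the types the R function actually returns: in the reproducer, column {{b}} of the input DataFrame is a string, so a minimal sketch of the working call (same SparkR session as above) would be:

{code}
df <- createDataFrame(list(list(a=1L, b="2")))
# Declare "b" as string to match the column type the R function actually returns
count(gapply(df, "a", function(key, group) { group }, structType("a int, b string")))
{code}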
> Error message to show the schema mismatch in gapply with Arrow vectorization
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-32478
>                 URL: https://issues.apache.org/jira/browse/SPARK-32478
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)