[ https://issues.apache.org/jira/browse/SPARK-40114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun reassigned SPARK-40114:
-------------------------------------

    Assignee: Hyukjin Kwon

> Arrow 9.0.0 support with SparkR
> -------------------------------
>
>                 Key: SPARK-40114
>                 URL: https://issues.apache.org/jira/browse/SPARK-40114
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 3.4.0
>            Reporter: Hyukjin Kwon
>            Assignee: Hyukjin Kwon
>            Priority: Major
>
> {code}
> == Failed ======================================================================
> -- 1. Error (test_sparkSQL_arrow.R:103:3): dapply() Arrow optimization ---------
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>  1. SparkR::collect(ret)
>       at test_sparkSQL_arrow.R:103:2
>  2. SparkR::collect(ret)
>  3. SparkR (local) .local(x, ...)
>  7. SparkR:::readRaw(conn)
>  8. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 2. Error (test_sparkSQL_arrow.R:133:3): dapply() Arrow optimization - type sp
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>  1. SparkR::collect(ret)
>       at test_sparkSQL_arrow.R:133:2
>  2. SparkR::collect(ret)
>  3. SparkR (local) .local(x, ...)
>  7. SparkR:::readRaw(conn)
>  8. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 3. Error (test_sparkSQL_arrow.R:143:3): dapply() Arrow optimization - type sp
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>   1. testthat::expect_true(all(collect(ret) == rdf))
>        at test_sparkSQL_arrow.R:143:2
>   5. SparkR::collect(ret)
>   6. SparkR (local) .local(x, ...)
>  10. SparkR:::readRaw(conn)
>  11. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 4. Error (test_sparkSQL_arrow.R:184:3): gapply() Arrow optimization ---------
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>  1. SparkR::collect(ret)
>       at test_sparkSQL_arrow.R:184:2
>  2. SparkR::collect(ret)
>  3. SparkR (local) .local(x, ...)
>  7. SparkR:::readRaw(conn)
>  8. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 5. Error (test_sparkSQL_arrow.R:217:3): gapply() Arrow optimization - type sp
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>  1. SparkR::collect(ret)
>       at test_sparkSQL_arrow.R:217:2
>  2. SparkR::collect(ret)
>  3. SparkR (local) .local(x, ...)
>  7. SparkR:::readRaw(conn)
>  8. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 6. Error (test_sparkSQL_arrow.R:229:3): gapply() Arrow optimization - type sp
> Error in `readBin(con, raw(), as.integer(dataLen), endian = "big")`: invalid 'n' argument
> Backtrace:
>   1. testthat::expect_true(all(collect(ret) == rdf))
>        at test_sparkSQL_arrow.R:229:2
>   5. SparkR::collect(ret)
>   6. SparkR (local) .local(x, ...)
>  10. SparkR:::readRaw(conn)
>  11. base::readBin(con, raw(), as.integer(dataLen), endian = "big")
> -- 7. Failure (test_sparkSQL_arrow.R:247:3): SPARK-32478: gapply() Arrow optimiz
> `count(...)` threw an error with unexpected message.
> Expected match: "expected IntegerType, IntegerType, got IntegerType, StringType"
> Actual message: "org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 29.0 failed 1 times, most recent failure: Lost task 0.0 in stage 29.0 (TID 54) (APPVYR-WIN executor driver): org.apache.spark.SparkException: R unexpectedly exited.
> R worker produced errors: The tzdb package is not installed. Timezones will not be available to Arrow compute functions.
> Error in arrow::write_arrow(df, raw()) : write_arrow has been removed
> Calls: <Anonymous> -> writeRaw -> writeInt -> writeBin -> <Anonymous>
> Execution halted
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator$$anonfun$1.applyOrElse(BaseRRunner.scala:144)
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator$$anonfun$1.applyOrElse(BaseRRunner.scala:137)
>   at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:194)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:123)
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator.hasNext(BaseRRunner.scala:113)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
>   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
>   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>   at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
>   at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:139)
>   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1490)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketException: Connection reset
>   at java.net.SocketInputStream.read(SocketInputStream.java:210)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>   at java.io.DataInputStream.readInt(DataInputStream.java:387)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:154)
>   ... 20 more
>
> Driver stacktrace:
>   at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2706)
>   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2642)
>   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2641)
>   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2641)
>   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1189)
>   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1189)
>   at scala.Option.foreach(Option.scala:407)
>   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1189)
>   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2897)
>   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2836)
>   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2825)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
> Caused by: org.apache.spark.SparkException: R unexpectedly exited.
> R worker produced errors: The tzdb package is not installed. Timezones will not be available to Arrow compute functions.
> Error in arrow::write_arrow(df, raw()) : write_arrow has been removed
> Calls: <Anonymous> -> writeRaw -> writeInt -> writeBin -> <Anonymous>
> Execution halted
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator$$anonfun$1.applyOrElse(BaseRRunner.scala:144)
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator$$anonfun$1.applyOrElse(BaseRRunner.scala:137)
>   at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:194)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:123)
>   at org.apache.spark.api.r.BaseRRunner$ReaderIterator.hasNext(BaseRRunner.scala:113)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
>   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.hashAgg_doAggregateWithoutKey_0$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
>   at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>   at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
>   at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:101)
>   at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:139)
>   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1490)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketException: Connection reset
>   at java.net.SocketInputStream.read(SocketInputStream.java:210)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>   at java.io.DataInputStream.readInt(DataInputStream.java:387)
>   at org.apache.spark.sql.execution.r.ArrowRRunner$$anon$2.read(ArrowRRunner.scala:154)
>   ... 20 more"
> Backtrace:
>   1. testthat::expect_error(...)
>        at test_sparkSQL_arrow.R:247:2
>   7. SparkR::count(...)
>   8. SparkR:::callJMethod(x@sdf, "count")
>   9. SparkR:::invokeJava(isStatic = FALSE, objId$id, methodName, ...)
>  10. SparkR:::handleErrors(returnStatus, conn)
> == DONE ========================================================================
> {code}
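> The six readBin() errors above look to be downstream symptoms rather than the root cause: once the R worker dies, SparkR:::readRaw() tries to read a length header from a dead connection, gets nothing back, and base::readBin() rejects the resulting empty 'n'. A minimal sketch of that failure mode in plain R (the rawConnection() here is a hypothetical stand-in for the JVM socket):
> {code}
> # Simulate SparkR's readRaw() against a connection the JVM side already
> # closed, so no 4-byte length header is available to read.
> con <- rawConnection(raw(0))  # stand-in for the dead socket
> dataLen <- readBin(con, integer(), n = 1L, endian = "big")  # integer(0): nothing left
> readBin(con, raw(), as.integer(dataLen), endian = "big")
> # Error in readBin(con, raw(), as.integer(dataLen), endian = "big") :
> #   invalid 'n' argument
> close(con)
> {code}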
> https://ci.appveyor.com/project/HyukjinKwon/spark/builds/44490387
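> The root cause is visible in failure 7: the Arrow R package removed the long-deprecated arrow::write_arrow() in 9.0.0, and the SparkR worker still calls it to serialize dapply()/gapply() results. A minimal sketch of the replacement API, arrow::write_to_raw(), which also serializes a data.frame to a raw vector in IPC stream format (this illustrates the API change, not necessarily the exact SparkR patch):
> {code}
> library(arrow)
>
> df <- data.frame(a = 1:3, b = c("x", "y", "z"))
>
> # Old call, removed in Arrow 9.0.0:
> #   bytes <- arrow::write_arrow(df, raw())
> bytes <- write_to_raw(df, format = "stream")
>
> # Round-trip through the matching reader to confirm the bytes are usable.
> stopifnot(isTRUE(all.equal(as.data.frame(read_ipc_stream(bytes)), df)))
> {code}
> The "tzdb package is not installed" message in the same log is separate noise from the R worker; installing tzdb on the Windows CI image should silence it.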


