[ https://issues.apache.org/jira/browse/SPARK-35717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17362696#comment-17362696 ]
Hyukjin Kwon commented on SPARK-35717:
--------------------------------------

[~hoeze] I would like to try reproducing this one. Would you mind sharing a sample version of your {code}df{code}?

> pandas_udf crashes in conjunction with .filter()
> ------------------------------------------------
>
>                 Key: SPARK-35717
>                 URL: https://issues.apache.org/jira/browse/SPARK-35717
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.0.0, 3.1.1, 3.1.2
>         Environment: Centos 8 with PySpark from conda
>            Reporter: F. H.
>            Priority: Major
>
> I wrote the following UDF, which always returns a "byte"-typed value:
>
> {code:python}
> from typing import Iterator
>
> import pandas as pd
> from pyspark.sql import functions as f
> from pyspark.sql import types as t
>
>
> @f.pandas_udf(returnType=t.ByteType())
> def spark_gt_mapping_fn(batch_iter: Iterator[pd.Series]) -> Iterator[pd.Series]:
>     mapping = dict()
>     mapping[(-1, -1)] = -1
>     mapping[(0, 0)] = 0
>     mapping[(0, 1)] = 1
>     mapping[(1, 0)] = 1
>     mapping[(1, 1)] = 2
>
>     def gt_mapping_fn(v):
>         # map a pair of genotype calls to an int8 code;
>         # -3 marks a malformed pair, -2 an unknown one
>         if len(v) != 2:
>             return -3
>         else:
>             a, b = v
>             return mapping.get((a, b), -2)
>
>     for x in batch_iter:
>         yield x.apply(gt_mapping_fn).astype("int8")
> {code}
>
> However, every time I filter on the resulting column, the query fails with the error shown below:
>
> {code:python}
> # works:
> (
>     df
>     .select(spark_gt_mapping_fn(f.col("genotype.calls")).alias("GT"))
>     .limit(10).toPandas()
> )
>
> # fails:
> (
>     df
>     .select(spark_gt_mapping_fn(f.col("genotype.calls")).alias("GT"))
>     .filter("GT > 0")
>     .limit(10).toPandas()
> )
> {code}
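Since the original {code}df{code} is not attached to the report, a minimal synthetic stand-in along the following lines should exercise the same code path. The nested {code}genotype.calls{code} schema here (a struct column with an integer-array field) is an assumption inferred from {code}f.col("genotype.calls"){code}, not taken from the reporter's data:

{code:python}
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-in for the reporter's df: a struct column "genotype"
# whose "calls" field is an array of ints, as f.col("genotype.calls") implies.
df = spark.createDataFrame([
    Row(genotype=Row(calls=[-1, -1])),
    Row(genotype=Row(calls=[0, 0])),
    Row(genotype=Row(calls=[0, 1])),
    Row(genotype=Row(calls=[1, 1])),
    Row(genotype=Row(calls=[0, 1, 1])),  # length != 2, mapped to -3
])
{code}

If the crash is data-independent, running the failing snippet above against this df should reproduce it.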
> {code:java}
> Py4JJavaError: An error occurred while calling o672.collectToPython.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 4 times, most recent failure: Lost task 0.3 in stage 9.0 (TID 125) (ouga05.cmm.in.tum.de executor driver): org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query.
> Memory leaked: (16384) Allocator(stdin reader for python3) 0/16384/34816/9223372036854775807 (res/actual/peak/limit)
> 	at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
> 	at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:147)
> 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
>
> Driver stacktrace:
> 	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
> 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2207)
> 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2206)
> 	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> 	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> 	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2206)
> 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1079)
> 	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1079)
> 	at scala.Option.foreach(Option.scala:407)
> 	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1079)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2445)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
> 	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
> 	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
> 	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
> 	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
> 	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:472)
> 	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:425)
> 	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
> 	at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3519)
> 	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
> 	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
> 	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
> 	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
> 	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
> 	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
> 	at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3516)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
> 	at py4j.Gateway.invoke(Gateway.java:282)
> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
> 	at py4j.GatewayConnection.run(GatewayConnection.java:238)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.spark.util.TaskCompletionListenerException: Memory was leaked by query. Memory leaked: (16384) Allocator(stdin reader for python3) 0/16384/34816/9223372036854775807 (res/actual/peak/limit)
> 	at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
> 	at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:147)
> 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	... 1 more
> {code}
>
> I tried this with different versions of PySpark and PyArrow, always with the same result.
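In case it helps with triage: one possible workaround sketch, untested against this bug, is to force materialization between the UDF and the filter, so the filter no longer runs in the same task as the Arrow-backed Python worker:

{code:python}
# Untested workaround sketch: cache and materialize the UDF output first,
# then filter the cached column instead of filtering the UDF result directly.
# This changes the physical plan, which may sidestep the allocator leak check.
gt_df = df.select(spark_gt_mapping_fn(f.col("genotype.calls")).alias("GT")).cache()
gt_df.count()  # force evaluation of the pandas UDF

result = gt_df.filter("GT > 0").limit(10).toPandas()
{code}

Whether this avoids the crash would still need to be confirmed against an affected Spark version.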