[ https://issues.apache.org/jira/browse/SPARK-21770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100948#comment-17100948 ]

David Mavashev edited comment on SPARK-21770 at 5/6/20, 4:24 PM:
-----------------------------------------------------------------

Hi,

I'm using version 2.4.5 and I'm hitting the above issue, where the whole job 
fails because of a single row that gets an all-zero probability vector:

 
{code:java}
class: SparkException, cause: Failed to execute user defined 
function($anonfun$2: 
(struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) => 
struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in 
stage 10251.0 failed 1 times, most recent failure: Lost task 5.0 in stage 
10251.0 (TID 128916, localhost, executor driver): 
org.apache.spark.SparkException: Failed to execute user defined 
function($anonfun$2: 
(struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) => 
struct<type:tinyint,size:int,indices:array<int>,values:array<double>>)
        at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
        at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:972)
        at 
org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:972)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
        at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:123)
        at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: requirement failed: Can't 
normalize the 0-vector.
        at scala.Predef$.require(Predef.scala:224)
        at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$.normalizeToProbabilitiesInPlace(ProbabilisticClassifier.scala:244)
        at 
org.apache.spark.ml.classification.DecisionTreeClassificationModel.raw2probabilityInPlace(DecisionTreeClassifier.scala:198)
        at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel.raw2probability(ProbabilisticClassifier.scala:172)
        at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$$anonfun$2.apply(ProbabilisticClassifier.scala:124)
        at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$$anonfun$2.apply(ProbabilisticClassifier.scala:124)
        ... 19 more
{code}
What is the correct way to handle this? It happens intermittently on models we 
train with the Random Forest Classifier.
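
One workaround we are considering (an untested sketch; {{model}} and {{df}} are placeholders for a trained RandomForestClassificationModel and the input DataFrame): clear the probability column so transform() never takes the normalization path that throws, then rebuild probabilities from rawPrediction with a guard for the all-zero case:

{code:scala}
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.functions.{col, udf}

// Skip the built-in probability column so the failing
// normalizeToProbabilitiesInPlace path is never executed.
val predicted = model
  .setProbabilityCol("")
  .transform(df)

// Rebuild probabilities from rawPrediction, falling back to a
// uniform distribution when every raw score is zero.
val safeProbs = udf { raw: Vector =>
  val sum = raw.toArray.sum
  if (sum == 0.0) Vectors.dense(Array.fill(raw.size)(1.0 / raw.size))
  else Vectors.dense(raw.toArray.map(_ / sum))
}

val withProbs = predicted.withColumn("probability", safeProbs(col("rawPrediction")))
{code}

This relies on ProbabilisticClassificationModel.transform skipping output columns whose names are empty, which appears to be the behavior in 2.4.x.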

 


> ProbabilisticClassificationModel: Improve normalization of all-zero raw 
> predictions
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-21770
>                 URL: https://issues.apache.org/jira/browse/SPARK-21770
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>    Affects Versions: 2.3.0
>            Reporter: Siddharth Murching
>            Assignee: Weichen Xu
>            Priority: Minor
>             Fix For: 2.3.0
>
>
> Given an n-element raw prediction vector of all zeros, 
> ProbabilisticClassificationModel.normalizeToProbabilitiesInPlace() should 
> output a probability vector with every entry equal to 1/n.
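
The behavior described above amounts to something like the following (a sketch of the intended semantics against Spark's public DenseVector API, not the committed patch):

{code:scala}
import org.apache.spark.ml.linalg.DenseVector

// Normalize raw predictions in place; when every raw score is zero,
// fall back to a uniform 1/n distribution instead of throwing.
def normalizeToProbabilitiesInPlace(v: DenseVector): Unit = {
  val sum = v.values.sum
  if (sum != 0.0) {
    var i = 0
    while (i < v.size) {
      v.values(i) /= sum
      i += 1
    }
  } else {
    // All-zero raw prediction: every class gets equal probability.
    java.util.Arrays.fill(v.values, 1.0 / v.size)
  }
}
{code}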


