[jira] [Comment Edited] (SPARK-21770) ProbabilisticClassificationModel: Improve normalization of all-zero raw predictions

2020-05-06 Thread David Mavashev (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-21770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100948#comment-17100948
 ] 

David Mavashev edited comment on SPARK-21770 at 5/6/20, 4:24 PM:
-

Hi,

I'm using version 2.4.5 and I'm hitting the above issue: the whole job fails 
because of a single row that gets an all-zero probability vector:

 
{code:java}
class: SparkException, cause: Failed to execute user defined 
function($anonfun$2: 
(struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) => 
struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in 
stage 10251.0 failed 1 times, most recent failure: Lost task 5.0 in stage 
10251.0 (TID 128916, localhost, executor driver): 
org.apache.spark.SparkException: Failed to execute user defined 
function($anonfun$2: 
(struct<type:tinyint,size:int,indices:array<int>,values:array<double>>) => 
struct<type:tinyint,size:int,indices:array<int>,values:array<double>>)
at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at 
org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:972)
at 
org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:972)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at 
org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: requirement failed: Can't 
normalize the 0-vector.
at scala.Predef$.require(Predef.scala:224)
at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$.normalizeToProbabilitiesInPlace(ProbabilisticClassifier.scala:244)
at 
org.apache.spark.ml.classification.DecisionTreeClassificationModel.raw2probabilityInPlace(DecisionTreeClassifier.scala:198)
at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel.raw2probability(ProbabilisticClassifier.scala:172)
at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$$anonfun$2.apply(ProbabilisticClassifier.scala:124)
at 
org.apache.spark.ml.classification.ProbabilisticClassificationModel$$anonfun$2.apply(ProbabilisticClassifier.scala:124)
... 19 more
{code}
What is the correct way to handle this? It happens sporadically on models we 
train with RandomForestClassifier.
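The failing requirement comes from normalizeToProbabilitiesInPlace, which rejects a raw vector whose entries sum to zero. As a sketch of the behavior this ticket proposes (a hypothetical standalone helper, not part of Spark's API), an all-zero raw prediction could fall back to a uniform distribution instead of throwing:

```scala
// Hypothetical helper mirroring the normalization done by
// ProbabilisticClassificationModel.normalizeToProbabilitiesInPlace,
// except that an all-zero input yields uniform 1/n probabilities
// rather than "requirement failed: Can't normalize the 0-vector."
object SafeNormalize {
  def normalizeToProbabilities(raw: Array[Double]): Array[Double] = {
    val total = raw.sum
    if (total == 0.0) {
      // All-zero raw prediction: fall back to equal probabilities.
      Array.fill(raw.length)(1.0 / raw.length)
    } else {
      raw.map(_ / total)
    }
  }
}
```

For example, `normalizeToProbabilities(Array(0.0, 0.0, 0.0, 0.0))` would return `Array(0.25, 0.25, 0.25, 0.25)` instead of raising an IllegalArgumentException.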

 


[jira] [Comment Edited] (SPARK-21770) ProbabilisticClassificationModel: Improve normalization of all-zero raw predictions

2017-08-18 Thread Siddharth Murching (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16131868#comment-16131868
 ] 

Siddharth Murching edited comment on SPARK-21770 at 8/18/17 7:48 AM:
-

Good question:

* Predictions on all-zero input don't change (they remain 0 for 
RandomForestClassifier and DecisionTreeClassifier, which are the only models 
that call normalizeToProbabilitiesInPlace())
* This proposal seeks to make predicted probabilities more interpretable when 
raw model output is all-zero
* Regardless, it currently seems impossible for normalizeToProbabilitiesInPlace 
to ever be called on all-zero input, since that'd mean a DecisionTree leaf node 
had a class count array (raw output) of all zeros.

More detail: both DecisionTreeClassifier and RandomForestClassifier inherit 
Classifier's [implementation of 
raw2prediction()|https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala#L221],
 which just takes an argmax ([preferring earlier maximal 
entries|https://github.com/apache/spark/blob/master/mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala#L176])
 over the model's output vector. A raw model output of all-equal entries would 
result in a prediction of 0 either way.
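The tie-breaking behavior described above can be illustrated with a small sketch (plain Scala mirroring the logic of the linked Vectors argmax, not calling Spark itself):

```scala
// Sketch of the argmax tie-breaking described above: a strict '>' comparison
// means that when several entries share the maximum, the earliest index wins,
// so an all-equal (or all-zero) raw output yields prediction 0 either way.
object ArgmaxSketch {
  def argmax(v: Array[Double]): Int = {
    require(v.nonEmpty, "argmax of an empty vector is undefined")
    var maxIdx = 0
    var maxVal = v(0)
    var i = 1
    while (i < v.length) {
      if (v(i) > maxVal) { // strict '>' preserves the earlier maximal entry
        maxVal = v(i)
        maxIdx = i
      }
      i += 1
    }
    maxIdx
  }
}
```

So `argmax(Array(0.0, 0.0, 0.0))` and `argmax(Array(1.0, 1.0, 1.0))` both return 0, which is why normalizing an all-zero raw vector to uniform probabilities would not change the predicted class.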





> ProbabilisticClassificationModel: Improve normalization of all-zero raw 
> predictions
> ---
>
> Key: SPARK-21770
> URL: https://issues.apache.org/jira/browse/SPARK-21770
> Project: Spark
>  Issue Type: Improvement
>  Components: ML
>Affects Versions: 2.3.0
>Reporter: Siddharth Murching
>Priority: Minor
>
> Given an n-element raw prediction vector of all-zeros, 
> ProbabilisticClassifierModel.normalizeToProbabilitiesInPlace() should output 
> a probability vector of all-equal 1/n entries



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org