[ 
https://issues.apache.org/jira/browse/SPARK-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-26646:
---------------------------------
    Description: 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101356/console
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101358/console
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101254/console
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/100941/console
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/100327/console

{code}
======================================================================
FAIL: test_training_and_prediction (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
Test that the model improves on toy data with no. of batches
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 367, in test_training_and_prediction
    self._eventually(condition, timeout=60.0)
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 69, in _eventually
    lastValue = condition()
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 362, in condition
    self.assertGreater(errors[1] - errors[-1], 0.3)
AssertionError: -0.070000000000000062 not greater than 0.3

----------------------------------------------------------------------
Ran 13 tests in 198.327s

FAILED (failures=1, skipped=1)

Had test failures in pyspark.mllib.tests.test_streaming_algorithms with python3.4; see logs.
{code}

It apparently became less flaky after the timeout was increased in SPARK-26275, but it now looks flaky again due to unexpected results: the error does not always improve by the expected margin.
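
For context, the failing assertion is a convergence heuristic: the test streams toy batches through StreamingLogisticRegressionWithSGD, records the error rate after each prediction batch, and polls via a timeout-bounded helper until the error has dropped by more than 0.3 from an early batch to the latest one. The sketch below illustrates that pattern; the helper bodies and names are hypothetical and not copied from the test file. It also shows why raising the timeout alone cannot fix a run where the model simply does not improve.

{code}
# Minimal sketch only -- hypothetical names and bodies, not the actual
# test_streaming_algorithms.py implementation.
import time

def _eventually(condition, timeout=60.0):
    """Poll `condition` until it returns True or the timeout expires."""
    start = time.time()
    last_value = None
    while time.time() - start < timeout:
        last_value = condition()  # may raise AssertionError, as in the log above
        if last_value is True:
            return
        time.sleep(0.01)
    raise AssertionError("Timed out waiting for condition; last value: %r" % (last_value,))

def make_condition(errors):
    """Build the convergence check over the per-batch error rates."""
    def condition():
        # The flaky check: expects the error to drop by more than 0.3 between an
        # early batch and the latest one. With an unlucky toy stream the model can
        # regress instead (e.g. a delta of -0.07, as in the failure above), and no
        # timeout is long enough to make that assertion pass once all batches are in.
        assert errors[1] - errors[-1] > 0.3, \
            "%r not greater than 0.3" % (errors[1] - errors[-1])
        return True
    return condition
{code}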

  was:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101356/console
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101358/console

{code}
======================================================================
FAIL: test_training_and_prediction (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
Test that the model improves on toy data with no. of batches
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 367, in test_training_and_prediction
    self._eventually(condition, timeout=60.0)
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 69, in _eventually
    lastValue = condition()
  File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 362, in condition
    self.assertGreater(errors[1] - errors[-1], 0.3)
AssertionError: -0.070000000000000062 not greater than 0.3

----------------------------------------------------------------------
Ran 13 tests in 198.327s

FAILED (failures=1, skipped=1)

Had test failures in pyspark.mllib.tests.test_streaming_algorithms with python3.4; see logs.
{code}

It apparently became less flaky after the timeout was increased in SPARK-26275, but it now looks flaky again due to unexpected results: the error does not always improve by the expected margin.


> Flaky test: pyspark.mllib.tests.test_streaming_algorithms 
> StreamingLogisticRegressionWithSGDTests.test_training_and_prediction
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-26646
>                 URL: https://issues.apache.org/jira/browse/SPARK-26646
>             Project: Spark
>          Issue Type: Test
>          Components: MLlib, PySpark
>    Affects Versions: 3.0.0
>            Reporter: Hyukjin Kwon
>            Priority: Major
>
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101356/console
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101358/console
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/101254/console
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/100941/console
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/100327/console
> {code}
> ======================================================================
> FAIL: test_training_and_prediction (pyspark.mllib.tests.test_streaming_algorithms.StreamingLogisticRegressionWithSGDTests)
> Test that the model improves on toy data with no. of batches
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 367, in test_training_and_prediction
>     self._eventually(condition, timeout=60.0)
>   File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 69, in _eventually
>     lastValue = condition()
>   File "/home/jenkins/workspace/SparkPullRequestBuilder/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 362, in condition
>     self.assertGreater(errors[1] - errors[-1], 0.3)
> AssertionError: -0.070000000000000062 not greater than 0.3
> ----------------------------------------------------------------------
> Ran 13 tests in 198.327s
> FAILED (failures=1, skipped=1)
> Had test failures in pyspark.mllib.tests.test_streaming_algorithms with python3.4; see logs.
> {code}
> It apparently became less flaky after the timeout was increased in SPARK-26275, but it now looks flaky again due to unexpected results: the error does not always improve by the expected margin.


