[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-19989:
---------------------------------
    Labels: bulk-closed flaky-test  (was: flaky-test)

> Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
> ----------------------------------------------------------------
>
>                 Key: SPARK-19989
>                 URL: https://issues.apache.org/jira/browse/SPARK-19989
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL, Structured Streaming, Tests
>    Affects Versions: 2.2.0
>            Reporter: Kay Ousterhout
>            Priority: Minor
>              Labels: bulk-closed, flaky-test
>
> This test failed recently here:
> https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/74683/testReport/junit/org.apache.spark.sql.kafka010/KafkaSourceStressSuite/stress_test_with_multiple_topics_and_partitions/
> And based on Josh's dashboard
> (https://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.sql.kafka010.KafkaSourceStressSuite_name=stress+test+with+multiple+topics+and+partitions),
> it seems to fail a few times every month.
> Here's the full error from the most recent failure:
>
> Error Message
> {code}
> org.scalatest.exceptions.TestFailedException: Error adding data: replication factor: 1 larger than available brokers: 0
> kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:117)
> kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:403)
> org.apache.spark.sql.kafka010.KafkaTestUtils.createTopic(KafkaTestUtils.scala:173)
> org.apache.spark.sql.kafka010.KafkaSourceStressSuite$$anonfun$16$$anonfun$apply$mcV$sp$17$$anonfun$37.apply(KafkaSourceSuite.scala:903)
> org.apache.spark.sql.kafka010.KafkaSourceStressSuite$$anonfun$16$$anonfun$apply$mcV$sp$17$$anonfun$37.apply(KafkaSourceSuite.scala:901)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData$$anonfun$addData$1.apply(KafkaSourceSuite.scala:93)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData$$anonfun$addData$1.apply(KafkaSourceSuite.scala:92)
> scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:316)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData.addData(KafkaSourceSuite.scala:92)
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:494)
> {code}
>
> {code}
> sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException:
> Error adding data: replication factor: 1 larger than available brokers: 0
> kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:117)
> kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:403)
> org.apache.spark.sql.kafka010.KafkaTestUtils.createTopic(KafkaTestUtils.scala:173)
> org.apache.spark.sql.kafka010.KafkaSourceStressSuite$$anonfun$16$$anonfun$apply$mcV$sp$17$$anonfun$37.apply(KafkaSourceSuite.scala:903)
> org.apache.spark.sql.kafka010.KafkaSourceStressSuite$$anonfun$16$$anonfun$apply$mcV$sp$17$$anonfun$37.apply(KafkaSourceSuite.scala:901)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData$$anonfun$addData$1.apply(KafkaSourceSuite.scala:93)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData$$anonfun$addData$1.apply(KafkaSourceSuite.scala:92)
> scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:316)
> org.apache.spark.sql.kafka010.KafkaSourceTest$AddKafkaData.addData(KafkaSourceSuite.scala:92)
> org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:494)
>
> == Progress ==
>    AssertOnQuery(, )
>    CheckAnswer:
>    StopStream
>    StartStream(ProcessingTime(0),org.apache.spark.util.SystemClock@5d888be0,Map())
>    AddKafkaData(topics = Set(stress4, stress2, stress1, stress5, stress3), data = Range(0, 1, 2, 3, 4, 5, 6, 7, 8), message = )
>    CheckAnswer: [1],[2],[3],[4],[5],[6],[7],[8],[9]
>    StopStream
>    StartStream(ProcessingTime(0),org.apache.spark.util.SystemClock@1be724ee,Map())
>    AddKafkaData(topics = Set(stress4, stress2, stress1, stress5, stress3), data = Range(9, 10, 11, 12, 13, 14), message = )
>    CheckAnswer: [1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15]
>    StopStream
>    AddKafkaData(topics = Set(stress4, stress2, stress1, stress5, stress3), data = Range(), message = )
> => AddKafkaData(topics = Set(stress4, stress6, stress2, stress1, stress5, stress3), data = Range(15), message = Add topic stress7)
>    AddKafkaData(topics = Set(stress4, stress6, stress2, stress1, stress5, stress3), data = Range(16, 17, 18, 19, 20, 21, 22), message = Add partition)
>    AddKafkaData(topics = Set(stress4, stress6, stress2, stress1, stress5, stress3), data = Range(23, 24), message = Add partition)
>    AddKafkaData(topics = Set(stress4, stress6, stress2, stress8, stress1, stress5, stress3),
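The error above ("replication factor: 1 larger than available brokers: 0") is thrown by AdminUtils.createTopic when, at the moment of topic creation, no broker is registered in ZooKeeper; in a stress test with an embedded broker this is usually a race with broker (re)registration rather than a real failure. One way a test harness could hedge against such a transient condition is a bounded retry around topic creation. The following is only an illustrative sketch, not Spark's actual fix for this ticket, and `createTopicUnsafe` is a hypothetical stand-in for the underlying createTopic call:

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object TransientRetry {
  /**
   * Run `op`, retrying up to `attempts` times with a fixed pause between
   * tries. Intended for operations that can fail transiently while an
   * embedded broker is still registering in ZooKeeper, e.g. a hypothetical
   * createTopicUnsafe(topic, partitions, replicationFactor).
   */
  @tailrec
  def withRetries[T](attempts: Int, pauseMs: Long)(op: => T): T =
    Try(op) match {
      case Success(v) => v
      case Failure(_) if attempts > 1 =>
        Thread.sleep(pauseMs)               // give the broker time to register
        withRetries(attempts - 1, pauseMs)(op)
      case Failure(e) => throw e            // out of attempts: surface the real error
    }
}
```

In terms of the trace above, wrapping the call at KafkaTestUtils.scala:173 in such a guard would turn the intermittent AdminUtils failure into a bounded wait; whether that masks a deeper broker-lifecycle problem in the suite is a judgment call for the test owners.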
[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-19989:
----------------------------------
    Target Version/s:   (was: 2.3.0)
[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-19989:
-------------------------------------
    Target Version/s: 2.3.0  (was: 2.2.0)
[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-19989:
-------------------------------------
    Description: updated with the failure links, error message, and progress trace quoted in the report above
[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-19989:
-------------------------------------
    Target Version/s: 2.2.0
[jira] [Updated] (SPARK-19989) Flaky Test: org.apache.spark.sql.kafka010.KafkaSourceStressSuite
[ https://issues.apache.org/jira/browse/SPARK-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liwei Lin updated SPARK-19989:
------------------------------
    Component/s: Structured Streaming