Sean, "stress test for failOnDataLoss=false" is because Kafka consumer may
be thrown NPE when a topic is deleted. I added some logic to retry on such
failure, however, it may still fail when topic deletion is too frequent
(the stress test). Just reopened
https://issues.apache.org/jira/browse/SPARK-18588.
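
For reference, here is a minimal sketch (not the actual patch tracked in
SPARK-18588) of what retrying around that NPE can look like; the name
pollWithRetry, the retry count, and the backoff are illustrative:

import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer}

// Retry consumer.poll a few times when topic deletion surfaces as an NPE
// thrown from inside the Kafka consumer; rethrow after maxRetries attempts.
def pollWithRetry[K, V](
    consumer: KafkaConsumer[K, V],
    timeoutMs: Long,
    maxRetries: Int = 3): ConsumerRecords[K, V] = {
  var attempt = 0
  while (true) {
    try {
      return consumer.poll(timeoutMs)
    } catch {
      case _: NullPointerException if attempt < maxRetries =>
        attempt += 1
        Thread.sleep(100L * attempt) // brief backoff before the next attempt
    }
  }
  throw new IllegalStateException("unreachable")
}

Even with a retry like this, a topic that is deleted and recreated faster
than the retries can keep up (as the stress test does) can still hit the
NPE.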

Anyway, this is just a best-effort workaround for a Kafka issue, and in
practice people won't delete topics frequently, so this is not a release
blocker.

On Fri, Dec 9, 2016 at 2:55 AM, Sean Owen <so...@cloudera.com> wrote:

> As usual, the sigs / hashes check out and the licenses look fine.
>
> I am still seeing some test failures. A few I've seen over time and aren't
> repeatable, but a few seem persistent. Has anyone else observed these? I'm
> on Ubuntu 16 / Java 8, building with -Pyarn -Phadoop-2.7 -Phive.
>
> If anyone can confirm, I'll investigate the cause if I can. I'd hesitate to
> support the release yet unless the build is definitely passing for others.
>
>
> udf3Test(test.org.apache.spark.sql.JavaUDFSuite)  Time elapsed: 0.281 sec  <<< ERROR!
> java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.JavaTypeInference$.inferDataType(Lcom/google/common/reflect/TypeToken;)Lscala/Tuple2;
> at test.org.apache.spark.sql.JavaUDFSuite.udf3Test(JavaUDFSuite.java:107)
>
>
>
> - caching on disk *** FAILED ***
>   java.util.concurrent.TimeoutException: Can't find 2 executors before 30000 milliseconds elapsed
>   at org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:584)
>   at org.apache.spark.DistributedSuite.org$apache$spark$DistributedSuite$$testCaching(DistributedSuite.scala:154)
>   at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply$mcV$sp(DistributedSuite.scala:191)
>   at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply(DistributedSuite.scala:191)
>   at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply(DistributedSuite.scala:191)
>   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   ...
>
>
> - stress test for failOnDataLoss=false *** FAILED ***
>   org.apache.spark.sql.streaming.StreamingQueryException: Query [id = 3b191b78-7f30-46d3-93f8-5fbeecce94a2, runId = 0cab93b6-19d8-47a7-88ad-d296bea72405] terminated with exception: null
>   at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:262)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:160)
>   ...
>   Cause: java.lang.NullPointerException:
>   ...
>
>
>
> On Thu, Dec 8, 2016 at 4:40 PM Reynold Xin <r...@databricks.com> wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 2.1.0. The vote is open until Sun, December 11, 2016 at 1:00 PT and passes
>> if a majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 2.1.0
>> [ ] -1 Do not release this package because ...
>>
>>
>> To learn more about Apache Spark, please see http://spark.apache.org/
>>
>> The tag to be voted on is v2.1.0-rc2 (080717497365b83bc202ab16812ced93eb1ea7bd)
>>
>> The list of JIRA tickets resolved can be found at:
>> https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.0
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1217
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-docs/
>>
>>
>> (Note that the docs and staging repo are still being uploaded and will be
>> available soon)
>>
>>
>> =======================================
>> How can I help test this release?
>> =======================================
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload, running it on this release candidate, and
>> reporting any regressions.
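
(One hypothetical way to do that for an sbt-based workload: point a test
build at the staging repository mentioned above and rebuild. The resolver
name below is illustrative:

  // build.sbt fragment: resolve the RC artifacts from the staging repo
  resolvers += "Spark 2.1.0 RC2 staging" at
    "https://repository.apache.org/content/repositories/orgapachespark-1217/"

  libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"

then rerun the workload and compare against your current Spark version.)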
>>
>> ===============================================================
>> What should happen to JIRA tickets still targeting 2.1.0?
>> ===============================================================
>> Committers should look at those and triage. Extremely important bug
>> fixes, documentation, and API tweaks that impact compatibility should be
>> worked on immediately. Everything else, please retarget to 2.1.1 or 2.2.0.
>>
>
