+1. This release passes all tests on the graphframes and tensorframes
packages.

On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger <c...@koeninger.org> wrote:

> If we're considering backporting changes for the 0.8 kafka
> integration, I am sure there are people who would like to get
>
> https://issues.apache.org/jira/browse/SPARK-10963
>
> into 1.6.x as well.
>
> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen <so...@cloudera.com> wrote:
> > Good call; probably worth back-porting, so I'll try to do that. I don't
> > think it blocks a release, but it would be good to get into the next RC,
> > if there is one.
> >
> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins <robbin...@gmail.com> wrote:
> >> This has been failing regularly on our 1.6 stream builds
> >> (https://issues.apache.org/jira/browse/SPARK-6005). It looks fixed in 2.0?
> >>
> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen <so...@cloudera.com> wrote:
> >>>
> >>> Oops, one more in the "does anybody else see this" department:
> >>>
> >>> - offset recovery *** FAILED ***
> >>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
> >>>     Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
> >>>     earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
> >>>     scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
> >>>     scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]))))
> >>>   was false Recovered ranges are not the same as the ones generated
> >>>   (DirectKafkaStreamSuite.scala:301)
> >>>
> >>> This actually fails consistently for me too in the Kafka integration
> >>> code. Not timezone related, I think.
> >
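For anyone trying to reproduce this, the failing check boils down to roughly
the following (a simplified sketch; the names come from the generated predicate
in the failure output above, and OffsetRange is from the 0.8 kafka integration):
every (batch time, offset ranges) pair recovered from the checkpoint must
appear among the pairs generated before the restart, with the ranges compared
as sets so their order within a batch does not matter.

    import org.apache.spark.streaming.Time
    import org.apache.spark.streaming.kafka.OffsetRange

    // Sketch of the property asserted around DirectKafkaStreamSuite.scala:301:
    // each recovered (time, ranges) pair must match one of the earlier pairs,
    // comparing the ranges as a Set so ordering within a batch is ignored.
    def recoveredMatchesGenerated(
        recoveredOffsetRanges: Seq[(Time, Array[OffsetRange])],
        earlierOffsetRangesAsSets: Seq[(Time, Set[OffsetRange])]): Boolean = {
      recoveredOffsetRanges.forall { case (time, ranges) =>
        earlierOffsetRangesAsSets.contains((time, ranges.toSet))
      }
    }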
