Please vote on releasing the following candidate as Apache Spark version
1.6.3. The vote is open until Thursday, Oct 20, 2016 at 18:00 PDT and
passes if a majority of at least 3 +1 PMC votes are cast.
[ ] +1 Release this package as Apache Spark 1.6.3
[ ] -1 Do not release this package because ...
(I don't think 2.0.2 will be released for a while, if at all, but I don't
think that's what you're asking.)
It's a fairly safe change, but it also isn't exactly a fix, in my opinion.
Because there are some other changes needed to make it all work for SPARC, I
think it's more realistic to look to the 2.1.0 release.
IIRC this was all about shading of dependencies, not changes to the source.
On Mon, Oct 17, 2016 at 6:26 PM Ryan Blue wrote:
> Are these changes that the Hive community has rejected? I don't see a
> compelling reason to have a long-term Spark fork of Hive.
>
> rb
>
> On Sat, Oct 15, 2016 at 5:27
Are these changes that the Hive community has rejected? I don't see a
compelling reason to have a long-term Spark fork of Hive.
rb
On Sat, Oct 15, 2016 at 5:27 AM, Steve Loughran wrote:
>
> On 15 Oct 2016, at 01:28, Ryan Blue wrote:
>
> The Spark 2 branch is based on this one: https://github.c
i just noticed that jenkins was still in quiet mode this morning due
to a hung build. i killed the build, backups happened, and the queue
is now happily building.
sorry for any delay!
shane
Hi,
Apologies if I’ve asked this question before but I didn’t see it in the list
and I’m certain that my last surviving brain cell has gone on strike over my
attempt to reduce my caffeine intake…
Posting this to both user and dev because I think the question / topic jumps
into both camps.
A
Hi Devs/All,
I am seeing a huge variation in Spark Task Deserialization Time for my
collect and reduce operations. While most tasks complete within 100 ms, a few
take more than a couple of seconds, which slows the entire program down. I
have attached a screenshot of the web UI where you can see the
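One way to pin down which tasks pay the deserialization cost, beyond eyeballing
the web UI, is a small SparkListener that logs executorDeserializeTime per task.
A minimal Scala sketch; the 1000 ms threshold and the class name are
illustrative choices, not anything from the original mail:

    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    // Logs any task whose deserialization took longer than one second.
    class SlowDeserializationListener extends SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        val m = taskEnd.taskMetrics
        if (m != null && m.executorDeserializeTime > 1000) {
          println(s"Task ${taskEnd.taskInfo.taskId} in stage ${taskEnd.stageId}: " +
            s"deserialization took ${m.executorDeserializeTime} ms")
        }
      }
    }

    // Register it before running the job:
    //   sc.addSparkListener(new SlowDeserializationListener)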
I would very much like to see SPARK-16962 included in 2.0.2 as it addresses
unaligned memory access patterns that crash non-x86 platforms. I believe
this falls in the category of "correctness fix". We (Oracle SAE) have
applied the fixes for SPARK-16962 to branch-2.0 and have not encountered any
p
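For readers unfamiliar with the failure mode: SPARC traps on multi-byte loads
at addresses that are not a multiple of the access size, so the usual remedy is
to assemble the value one byte at a time. A rough Scala sketch of that idea
only; the actual SPARK-16962 patch lives in Spark's unsafe Platform code and is
more involved:

    // Reads a little-endian long from an arbitrary (possibly unaligned)
    // offset by combining eight single-byte loads, which are always aligned.
    def getLongUnaligned(bytes: Array[Byte], offset: Int): Long = {
      var result = 0L
      var i = 0
      while (i < 8) {
        result |= (bytes(offset + i) & 0xffL) << (8 * i)
        i += 1
      }
      result
    }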
Maybe my mail was not clear enough.
I didn't want to write "let's focus on Flink" or any other framework. The idea
with benchmarks was to show two things:
- why some people are doing bad PR for Spark
- how, in an easy way, we can change that and show that Spark is still on top
No more, no less.
SPARK-17841: a three-line bugfix that has a week-old PR.
SPARK-17812: being able to specify starting offsets is a must-have for
a Kafka MVP in my opinion; it already has a PR (see the sketch below).
SPARK-17813: I can put in a PR for this tonight if it'll be considered.
On Mon, Oct 17, 2016 at 12:28 AM, Reynold Xin wrote:
> Si
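For context, SPARK-17812 is about letting users pass starting offsets to the
structured streaming Kafka source. Under the API proposed in that PR, usage
would look roughly like the sketch below (option names and the JSON format
follow the proposal and could change before merge; the broker and topic names
are made up):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("kafka-offsets-demo").getOrCreate()

    // Start reading topic1 from explicit per-partition offsets; the strings
    // "earliest" or "latest" are also accepted in place of the JSON.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "topic1")
      .option("startingOffsets", """{"topic1":{"0":1234,"1":-2}}""")
      .load()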
I think narrowly focusing on Flink or benchmarks is missing my point.
My point is: evolve or die. Spark's governance and organization are
hampering its ability to evolve technologically, and they need to
change.
On Sun, Oct 16, 2016 at 9:21 PM, Debasish Das wrote:
> Thanks Cody for bringing up a v
Thanks a lot Steve!
On Mon, Oct 17, 2016 at 4:59 PM, Steve Loughran wrote:
>
> On 17 Oct 2016, at 10:02, Prasun Ratn wrote:
>
> Hi
>
> I want to run some Spark applications with some changes in Kryo serializer.
>
> Please correct me, but I think I need to recompile spark (instead of
> just the S
On 17 Oct 2016, at 10:02, Prasun Ratn <prasun.r...@gmail.com> wrote:
Hi
I want to run some Spark applications with some changes in Kryo serializer.
Please correct me, but I think I need to recompile Spark (instead of
just the Spark applications) in order to use the newly built Kryo
serializer?
Hi all,
I am trying to write a custom Source for counting errors and to output it
via the Spark sink mechanism (CSV or JMX), and I am having some problems
understanding how this works.
1. I defined the Source, added counters created with MetricRegistry and
registered the Source
> SparkEnv.get().metricsS
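Not knowing the exact code involved, here is a minimal sketch of the pattern as
I understand it. Note that the Source trait is private[spark], so the class
typically has to live under an org.apache.spark package; the class, source, and
counter names below are illustrative:

    package org.apache.spark.metrics.source

    import com.codahale.metrics.{Counter, MetricRegistry}

    // A Source exposing a single error counter that the CSV or JMX sink polls.
    class ErrorCountSource extends Source {
      override val sourceName: String = "errorCounts"
      override val metricRegistry: MetricRegistry = new MetricRegistry
      val errors: Counter = metricRegistry.counter(MetricRegistry.name("errors"))
    }

    // Registration, e.g. on the driver after SparkContext startup:
    //   SparkEnv.get.metricsSystem.registerSource(new ErrorCountSource)

After registration, calls to errors.inc() should show up in whichever sinks are
enabled in metrics.properties.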
Hi
I want to run some Spark applications with some changes in Kryo serializer.
Please correct me, but I think I need to recompile Spark (instead of
just the Spark applications) in order to use the newly built Kryo
serializer?
I obtained Kryo 3.0.3 source and built it (mvn package install).
Next
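A possible alternative to rebuilding all of Spark, offered with the caveat that
it depends on the modified Kryo staying binary compatible and on how Spark pulls
Kryo in (via Twitter Chill's shaded artifact, which may defeat this entirely):
ship the locally built jar with the application and ask Spark to prefer user
jars on the classpath. A sketch, with the application name made up:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("custom-kryo-test")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Prefer application jars over Spark's bundled copies, so the locally
      // built Kryo 3.0.3 is picked up ahead of the one Spark ships with.
      .set("spark.driver.userClassPathFirst", "true")
      .set("spark.executor.userClassPathFirst", "true")
    val sc = new SparkContext(conf)

The jar itself would be shipped with --jars (or spark.jars) at submit time.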
There's no need to compare to Flink's streaming model. Spark should focus more
on how to go beyond itself.
From the beginning, Spark's success has come from its unified model, which can
satisfy SQL, Streaming, Machine Learning, and Graph jobs … all in one. But from
1.6 to 2.0, the abstraction