+1 (non-binding)
* Built the release from source.
* Compiled Java and Scala apps that interact with HDFS against it.
* Ran them in local mode.
* Ran them against a pseudo-distributed YARN cluster in both yarn-client mode and yarn-cluster mode.
On Tue, May 13, 2014 at 9:09 PM, witgo wi...@qq.com wrote:
I'm cancelling this vote in favor of rc6.
On Tue, May 13, 2014 at 8:01 AM, Sean Owen so...@cloudera.com wrote:
On Tue, May 13, 2014 at 2:49 PM, Sean Owen so...@cloudera.com wrote:
On Tue, May 13, 2014 at 9:36 AM, Patrick Wendell pwend...@gmail.com wrote:
The release files, including
Please vote on releasing the following candidate as Apache Spark version 1.0.0!
This patch has a few minor fixes on top of rc5. I've also built the
binary artifacts with Hive support enabled so people can test this
configuration. When we release 1.0 we might just release both vanilla and
Hi Sandy,
I assume you are referring to the caching added to datanodes via the new caching
API exposed through the NN? (To preemptively mmap blocks.)
I have not looked at it in detail, but does the NN tell us about this in block
locations?
If yes, we can simply make those process-local instead of node-local for
executors on
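A minimal sketch of the idea being floated here (class and host names are mine, hypothetical, and not Spark's actual scheduler internals): if the NameNode reports which datanodes have a block cached in addition to which hold replicas, a scheduler could rank an executor on a caching host above a merely node-local one.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical locality ranking: cached replicas are preferred over
// plain node-local replicas, which are preferred over anything else.
public class Locality {
    public enum Level { CACHED_LOCAL, NODE_LOCAL, ANY }

    public static Level levelFor(String executorHost,
                                 Set<String> replicaHosts,
                                 Set<String> cachedHosts) {
        if (cachedHosts.contains(executorHost)) return Level.CACHED_LOCAL;
        if (replicaHosts.contains(executorHost)) return Level.NODE_LOCAL;
        return Level.ANY;
    }

    public static void main(String[] args) {
        Set<String> replicas = new HashSet<>(Arrays.asList("dn1", "dn2", "dn3"));
        Set<String> cached = Collections.singleton("dn2");
        System.out.println(levelFor("dn2", replicas, cached)); // CACHED_LOCAL
        System.out.println(levelFor("dn1", replicas, cached)); // NODE_LOCAL
        System.out.println(levelFor("dn9", replicas, cached)); // ANY
    }
}
```

For reference, Hadoop 2.3+ does expose the cached hosts alongside the replica hosts on `org.apache.hadoop.fs.BlockLocation` (`getCachedHosts()` next to `getHosts()`), so the information the email asks about is available from block locations.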
Hi Matei,
Yes, I'm 100% positive the jar on the executors is the same version. I am
building everything and deploying myself. Additionally, while debugging the
issue, I forked spark's git repo and added additional logging, which I
could see in the driver and executors. These debugging jars
SHA-1 is being end-of-lifed, so I’d actually say switch to SHA-512 for all of them
instead.
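For anyone verifying the release artifacts against a SHA-512 digest, here is a small JDK-only sketch (the class name is mine, not from this thread) using `java.security.MessageDigest`, checked against the well-known test vector for the string "abc":

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Compute a hex-encoded SHA-512 digest with the JDK, as one would when
// comparing a downloaded artifact against its published .sha512 file.
public class Sha512Check {
    public static String sha512Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String digest = sha512Hex("abc".getBytes(StandardCharsets.UTF_8));
        // Known SHA-512 test vector: digest of "abc" begins ddaf35a193617aba...
        System.out.println(digest.startsWith("ddaf35a193617aba")); // true
    }
}
```

On the command line the same check is typically `shasum -a 512 <file>` (or `gpg --print-md SHA512 <file>`) compared against the digest published with the release.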
On May 13, 2014, at 6:49 AM, Sean Owen so...@cloudera.com wrote:
On Tue, May 13, 2014 at 9:36 AM, Patrick Wendell pwend...@gmail.com wrote:
The release files, including signatures, digests, etc. can be
The docs for how to run Spark on Mesos have changed very little since
0.6.0, but setting it up is much easier now than then. Does it make sense
to revamp with the below changes?
You no longer need to build Mesos yourself, as pre-built versions are
available from Mesosphere: