Hi Ted et al,
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 10:41 min
[INFO] Finished at: 2016-04-09T19:21:02-04:00
[INFO] Final Memory:
Sent PR:
https://github.com/apache/spark/pull/12276
I was able to get the build going past the mllib-local module.
FYI
On Sat, Apr 9, 2016 at 12:40 PM, Ted Yu wrote:
> The broken build was caused by the following:
>
> [SPARK-14462][ML][MLLIB] add the mllib-local build to maven
The broken build was caused by the following:
[SPARK-14462][ML][MLLIB] add the mllib-local build to maven pom
See
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.7/607/
FYI
On Sat, Apr 9, 2016 at 12:01 PM, Jacek Laskowski wrote:
> Hi,
>
> Is this
Hi,
Is it just me, or is the build broken today? I'm looking for help, as it looks scary.
$ ./build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.2 -Phive -Phive-thriftserver -DskipTests clean install
[INFO] --- scala-maven-plugin:3.2.2:testCompile (scala-test-compile-first) @ spark-mllib-local_2.11
> On 8 Apr 2016, at 10:01, Wojciech Indyk wrote:
>
> Hello!
> TL;DR Could you explain how (and which) Kerberos tokens should be
> delegated from driver to workers? Does it depend on spark mode?
Hadoop delegation tokens, not Kerberos tickets... though the original Kerberos tickets are
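To make the distinction concrete: on a kerberized YARN cluster, spark-submit uses the driver side's Kerberos credentials to fetch Hadoop delegation tokens (for HDFS etc.), and it is those tokens that get shipped to the executors, not the Kerberos ticket itself. A hedged sketch, with hypothetical principal and keytab paths:

```shell
# Log in once on the submitting machine (driver side); principal is hypothetical.
kinit user@EXAMPLE.COM

# spark-submit obtains Hadoop delegation tokens using the driver's Kerberos
# credentials and distributes those tokens (not the TGT) to the executors.
# --principal/--keytab additionally let Spark re-obtain tokens for
# long-running applications.
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.MyApp \
  myapp.jar
```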
The driver has the data and wouldn't need to rerun.
On Friday, April 8, 2016, Sung Hwan Chung wrote:
> Hello,
>
> Say, that I'm doing a simple rdd.map followed by collect. Say, also, that
> one of the executors finishes all of its tasks, but there are still other
> executors
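For reference, the scenario in the question is just a map followed by a collect; a minimal sketch, assuming a live SparkContext `sc` (e.g. inside spark-shell):

```scala
// Assumes a running SparkContext `sc`, e.g. in spark-shell.
val rdd = sc.parallelize(1 to 1000, numSlices = 8)

// Each executor applies the map function to its own partitions.
val mapped = rdd.map(_ * 2)

// collect() fetches every partition's result back to the driver.
// Once a task's output has been returned, the driver holds that data;
// an executor that finished its tasks early is not asked to rerun them.
val result: Array[Int] = mapped.collect()
```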