Hi Krishna,
Thanks for providing the notebook! I tried and found that the problem
is with PySpark's zip. I created a JIRA to track the issue:
https://issues.apache.org/jira/browse/SPARK-4841
-Xiangrui
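[Editor's note: the notebook itself isn't in this excerpt. As background, RDD.zip requires both RDDs to have the same number of partitions and the same number of elements in each partition. The following plain-Python sketch (no Spark needed; the helper name zip_partitioned is illustrative, not a Spark API) mimics that alignment contract:]

```python
# Plain-Python illustration of the contract that RDD.zip enforces:
# both sides need the same number of partitions and the same number
# of elements in each partition, or zip fails.
def zip_partitioned(a_parts, b_parts):
    if len(a_parts) != len(b_parts):
        raise ValueError("Can only zip RDDs with the same number of partitions")
    zipped = []
    for pa, pb in zip(a_parts, b_parts):
        if len(pa) != len(pb):
            raise ValueError("Can only zip RDDs with the same number of "
                             "elements in each partition")
        zipped.append(list(zip(pa, pb)))
    return zipped

# Two "RDDs", each split into two partitions:
points = [[(1.0, 1.0), (1.2, 0.8)], [(8.0, 8.0)]]
labels = [[0, 0], [1]]
print(zip_partitioned(points, labels))
```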
On Thu, Dec 11, 2014 at 1:55 PM, Krishna Sankar ksanka...@gmail.com wrote:
K-Means iPython
This vote is closed in favor of RC2.
On Fri, Dec 5, 2014 at 2:02 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey All,
Thanks all for the continued testing!
The issue I mentioned earlier SPARK-4498 was fixed earlier this week
(hat tip to Mark Hamstra, who contributed the fix).
In the interim a few smaller blocker-level issues with Spark SQL were
found and fixed (SPARK-4753, SPARK-4552, SPARK-4761).
+1 (non-binding)
Checked on CentOS 6.5, compiled from source.
Ran various examples with a standalone master and three slaves, and
browsed the web UI.
On Sat, Nov 29, 2014 at 2:16 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark
Will do. Am on the road - will annotate an iPython notebook with what works
and what didn't ...
Cheers
k/
On Wed, Dec 3, 2014 at 4:19 PM, Xiangrui Meng men...@gmail.com wrote:
Krishna, could you send me some code snippets for the issues you saw
in naive Bayes and k-means? -Xiangrui
On Sun,
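[Editor's note: the requested snippets aren't in this excerpt. As a rough reference for what MLlib's k-means is supposed to compute, here is a minimal pure-Python Lloyd's iteration - illustrative only, not MLlib code, and the function name and defaults are made up for this sketch:]

```python
import math

def kmeans(points, centers, iterations=10):
    """Minimal Lloyd's algorithm on tuples of floats; illustrative only."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep the old center if a cluster is empty).
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else ctr
            for cl, ctr in zip(clusters, centers)
        ]
    return centers

pts = [(0.0, 0.0), (0.2, 0.1), (9.0, 9.0), (9.1, 8.8)]
print(kmeans(pts, centers=[(0.0, 0.0), (9.0, 9.0)], iterations=5))
```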
+1 (non-binding)
Verified on OSX 10.10.2, built from source,
spark-shell / spark-submit jobs
ran various simple Spark / Scala queries
ran various SparkSQL queries (including HiveContext)
ran ThriftServer service and connected via beeline
ran SparkSVD
On Mon Dec 01 2014 at 11:09:26 PM Patrick
+1 (non-binding)
Installed version pre-built for Hadoop on a private HPC
ran PySpark shell w/ iPython
loaded data using custom Hadoop input formats
ran MLlib routines in PySpark
ran custom workflows in PySpark
browsed the web UI
Noticeable improvements in stability and performance during large
+1. I also tested on Windows just in case, with jars referring to other jars
and Python files referring to other Python files. Path resolution still works.
2014-12-02 10:16 GMT-08:00 Jeremy Freeman freeman.jer...@gmail.com:
+1 (non-binding)
Installed version pre-built for Hadoop on a private HPC
+1, tested on YARN.
Tom
On Friday, November 28, 2014 11:18 PM, Patrick Wendell
pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version 1.2.0!
The tag to be voted on is v1.2.0-rc1 (commit 1056e9ec1):
--
From: Patrick Wendell <pwend...@gmail.com>
Date: Sat, Nov 29, 2014 01:16 PM
To: dev@spark.apache.org
Subject: [VOTE] Release Apache Spark 1.2.0 (RC1)
Please vote on releasing the following candidate as Apache Spark version
1.2.0!
The tag to be voted
Hi everyone,
There’s an open bug report related to Spark standalone which could be a
potential release-blocker (pending investigation / a bug fix):
https://issues.apache.org/jira/browse/SPARK-4498. This issue seems
non-deterministic and only affects long-running Spark standalone deployments,
+0.9 from me. Tested it on Mac and Windows (someone has to do it) and while
things work, I noticed a few recent scripts don't have Windows equivalents,
namely https://issues.apache.org/jira/browse/SPARK-4683 and
https://issues.apache.org/jira/browse/SPARK-4684. The first one at least would
be
Hey All,
Just an update. Josh, Andrew, and others are working to reproduce
SPARK-4498 and fix it. Other than that issue no serious regressions
have been reported so far. If we are able to get a fix in for that
soon, we'll likely cut another RC with the patch.
Continued testing of RC1 is
+1 (non-binding)
-- Original --
From: Patrick Wendell <pwend...@gmail.com>
Date: Sat, Nov 29, 2014 01:16 PM
To: dev@spark.apache.org
Subject: [VOTE] Release Apache Spark 1.2.0 (RC1)
Please vote on releasing the following candidate
+1
1 Compiled binaries
2 All Tests Pass
3 Ran Python and Scala examples for Spark and MLlib on local and master + 4
workers
Thanks for pointing this out, Matei. I don't think a minor typo like
this is a big deal. Hopefully it's clear to everyone this is the 1.2.0
release vote, as indicated by the subject and all of the artifacts.
On Sat, Nov 29, 2014 at 1:26 AM, Matei Zaharia matei.zaha...@gmail.com wrote:
Hey
+1
1 Compiled binaries
2 All Tests Pass
Regards,
Vaquar khan
On 30 Nov 2014 04:21, Krishna Sankar ksanka...@gmail.com wrote:
+1
1. Compiled on OSX 10.10 (Yosemite): mvn -Pyarn -Phadoop-2.4
-Dhadoop.version=2.4.0 -DskipTests clean package. 16:46 min (slightly slower
connection)
2. Tested pyspark,
Please vote on releasing the following candidate as Apache Spark version 1.2.0!
The tag to be voted on is v1.2.0-rc1 (commit 1056e9ec1):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=1056e9ec13203d0c51564265e94d77a054498fdb
The release files, including signatures, digests, etc.
Krishna,
Docs don't block the RC vote because docs can be updated in parallel with
release candidates, up to the point a release is made.
On Fri, Nov 28, 2014 at 9:55 PM, Krishna Sankar ksanka...@gmail.com wrote:
Looks like the documentation hasn't caught up with the new features.
On the
Hey Patrick, unfortunately you got some of the text here wrong, saying 1.1.0
instead of 1.2.0. Not sure it will matter, since there may well be another RC
after testing, but we should be careful.
Matei
On Nov 28, 2014, at 9:16 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on