Actually +1 from me...
This is a recommendAll feature we are testing, which is really compute
intensive...
For the ranking metric calculation I was trying to run through the Netflix
matrix and generate a ranked list of recommendations for all 17K products,
and perhaps it needs more compute than what is
+1 Release this package as Apache Spark 1.1.1
On 20 Nov 2014 04:22, Andrew Or and...@databricks.com wrote:
I will start with a +1
2014-11-19 14:51 GMT-08:00 Andrew Or and...@databricks.com:
Please vote on releasing the following candidate as Apache Spark version
1.1.1.
This release
-1 from me...same FetchFailed issue as what Hector saw...
I am running Netflix dataset and dumping out recommendation for all users.
It shuffles around 100 GB of data on disk to run a reduceByKey per user on
utils.BoundedPriorityQueue... The code runs fine with the MovieLens1m dataset...
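For readers following along, the per-user top-K aggregation described above can be sketched in plain Scala. Spark's utils.BoundedPriorityQueue is an internal class, so this sketch substitutes a standard mutable.PriorityQueue capped at k, and stands in a local Seq and groupBy for the RDD and reduceByKey; the data shape (user, (item, score)) and the helper name topK are illustrative assumptions, not the actual job code:

```scala
import scala.collection.mutable

// Per-user top-K: for each user, keep only the k highest-scored items.
// This mirrors the reduceByKey-on-a-bounded-priority-queue pattern from
// the message above, using a plain min-heap capped at k entries.
def topK(
    k: Int,
    ratings: Seq[(Int, (Int, Double))]): Map[Int, List[(Int, Double)]] =
  ratings.groupBy(_._1).map { case (user, recs) =>
    // Min-heap by score: the lowest-scored item has the highest priority,
    // so it is the one evicted when the heap grows past k.
    val heap = mutable.PriorityQueue.empty[(Int, Double)](
      Ordering.by[(Int, Double), Double](r => -r._2))
    recs.foreach { case (_, rec) =>
      heap.enqueue(rec)
      if (heap.size > k) heap.dequeue()
    }
    // dequeueAll yields ascending scores; re-sort descending for output.
    user -> heap.dequeueAll.toList.sortBy(r => -r._2)
  }
```

The bounded heap is what keeps the per-user state small: each user's reducer holds at most k items regardless of how many candidate recommendations flow through, which is the point of using a bounded priority queue inside reduceByKey.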
I gave Spark 10
+1 (binding).
Don't see any evidence of regressions at this point. The issue
reported by Hector was not related to this release.
On Sun, Nov 23, 2014 at 9:50 AM, Debasish Das debasish.da...@gmail.com wrote:
-1 from me...same FetchFailed issue as what Hector saw...
I am running Netflix dataset
Hi,
I wanted to try 1.1.1-rc2 because we're running into SPARK-3633, but
the rc releases not being tagged with -rcX means the pre-built artifacts
are basically useless to me.
(Pedantically, to test a release, I have to upload it into our internal
repo to compile jobs, start clusters, etc.)
Interesting, perhaps we could publish each one with two IDs, of which the rc
one is unofficial. The problem is indeed that you have to vote on a hash for a
potentially final artifact.
Matei
On Nov 23, 2014, at 7:54 PM, Stephen Haberman stephen.haber...@gmail.com
wrote:
Hi,
I wanted to
Hey Stephen,
Thanks for bringing this up. Technically when we call a release vote
it needs to be on the exact commit that will be the final release.
However, one thing I've thought of doing for a while would be to
publish the maven artifacts using a version tag with $VERSION-rcX even
if the
Awesome, sounds great, guys; thanks for understanding.
Depending on how badly I need 1.1.1-rc2 (I'll check my jobs tomorrow) I'll
just build a local version for now. Should be easy, it's just been a while.
:-)
Thanks,
Stephen
On Sun Nov 23 2014 at 11:01:09 PM Patrick Wendell pwend...@gmail.com
http://maven.apache.org/plugins/maven-install-plugin/
examples/specific-local-repo.html
Hm, I didn't know about that plugin--assuming it does all of the
jar/pom/sources/etc., then, yes, that could work...
At first glance, I'm not sure it'll bring over the pom with all of the
transitive
+1 (non binding)
Signatures and license look good. I built the plain-vanilla
distribution and ran tests. While I still see the Java 8 + Hive test
failure, I think we've established this is ignorable.
On Wed, Nov 19, 2014 at 11:51 PM, Andrew Or and...@databricks.com wrote:
I will start with a
+1
Built successfully and ran the
python examples.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-1-RC2-tp9439p9452.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
I'm still seeing the fetch failed error and updated
https://issues.apache.org/jira/browse/SPARK-3633
On Thu, Nov 20, 2014 at 10:21 AM, Marcelo Vanzin van...@cloudera.com
wrote:
+1 (non-binding)
. ran simple things on spark-shell
. ran jobs in yarn client cluster modes, and standalone
I think it is a race condition caused by Netty deactivating a channel while
it is active.
Switched to nio and it works fine:
--conf spark.shuffle.blockTransferService=nio
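The same workaround can also be set programmatically when building the SparkConf (a sketch mirroring the --conf flag above; it assumes spark-core on the classpath, and note that, as pointed out later in the thread, this property only exists in Spark 1.2, not 1.1):

```scala
import org.apache.spark.SparkConf

// Programmatic equivalent of
//   --conf spark.shuffle.blockTransferService=nio
// falling back to the NIO-based shuffle transfer service instead of Netty.
// The app name here is a placeholder.
val conf = new SparkConf()
  .setAppName("recommendAll")
  .set("spark.shuffle.blockTransferService", "nio")
```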
On Thu, Nov 20, 2014 at 10:44 AM, Hector Yee hector@gmail.com wrote:
I'm still seeing the fetch failed error and updated
This is whatever was in http://people.apache.org/~andrewor14/spark-1.1.1-rc2/
On Thu, Nov 20, 2014 at 11:48 AM, Matei Zaharia matei.zaha...@gmail.com
wrote:
Hector, is this a comment on 1.1.1 or on the 1.2 preview?
Matei
On Nov 20, 2014, at 11:39 AM, Hector Yee hector@gmail.com wrote:
Ah, I see. But the spark.shuffle.blockTransferService property doesn't exist in
1.1 (AFAIK) -- what exactly are you doing to get this problem?
Matei
On Nov 20, 2014, at 11:50 AM, Hector Yee hector@gmail.com wrote:
This is whatever was in
Whoops I must have used the 1.2 preview and mixed them up.
spark-shell -version shows version 1.2.0
Will update the bug https://issues.apache.org/jira/browse/SPARK-4516 to 1.2
On Thu, Nov 20, 2014 at 11:59 AM, Matei Zaharia matei.zaha...@gmail.com
wrote:
Ah, I see. But the
I will start with a +1
2014-11-19 14:51 GMT-08:00 Andrew Or and...@databricks.com:
Please vote on releasing the following candidate as Apache Spark version
1.1.1.
This release fixes a number of bugs in Spark 1.1.0. Some of the notable
ones are
- [SPARK-3426] Sort-based shuffle compression
+1. Checked version numbers and doc. Tested a few ML examples with
Java 6 and verified some recently merged bug fixes. -Xiangrui
On Wed, Nov 19, 2014 at 2:51 PM, Andrew Or and...@databricks.com wrote:
I will start with a +1
2014-11-19 14:51 GMT-08:00 Andrew Or and...@databricks.com:
Please
+1
1. Compiled on OSX 10.10 (Yosemite) with mvn -Pyarn -Phadoop-2.4
-Dhadoop.version=2.4.0 -DskipTests clean package (10:49 min)
2. Tested pyspark, mllib
2.1. statistics OK
2.2. Linear/Ridge/Lasso Regression OK
2.3. Decision Tree, Naive Bayes OK
2.4. KMeans OK
2.5. rdd operations OK
2.6. recommendation OK