-1
This bug, SPARK-16515, in Spark 2.0 breaks our use cases, which run fine on 1.6.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-2-0-0-RC4-tp18317p18341.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
+1 (non-binding, of course)
1. Compiled on OS X 10.11.5 (El Capitan) OK. Total time: 26:27 min
mvn clean package -Pyarn -Phadoop-2.7 -DskipTests
2. Tested pyspark, mllib (IPython 4.0)
2.0 Spark version is 2.0.0
2.1. statistics (min,max,mean,Pearson,Spearman) OK
2.2. Linear/Ridge/Lasso Regression
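For anyone repeating the smoke tests in step 2.1, a pure-Python cross-check of the two correlation statistics being validated (Pearson and Spearman) can be handy. This is not pyspark code, just the textbook definitions the MLlib results were compared against; function names here are made up for illustration:

```python
# Pure-Python cross-check of Pearson and Spearman correlation,
# the statistics exercised in step 2.1 above.
from math import sqrt

def pearson(xs, ys):
    # Sample Pearson correlation: covariance over product of norms.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    # Spearman is Pearson computed on the ranks (ties not handled here).
    def ranks(vs):
        order = sorted(range(len(vs)), key=vs.__getitem__)
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(pearson(xs, ys))   # perfectly linear -> 1.0
print(spearman(xs, ys))  # monotone -> 1.0
```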
Yes. https://github.com/apache/spark/pull/11796
On Fri, Jul 15, 2016 at 2:50 PM, Krishna Sankar wrote:
> Can't find the "spark-assembly-2.0.0-hadoop2.7.0.jar" after compilation.
> Usually it is in the assembly/target/scala-2.11
> Has the packaging changed for 2.0.0 ?
> Cheers
>
>
> On Thu, Jul
Can't find the "spark-assembly-2.0.0-hadoop2.7.0.jar" after compilation.
Usually it is in the assembly/target/scala-2.11
Has the packaging changed for 2.0.0 ?
Cheers
On Thu, Jul 14, 2016 at 11:59 AM, Reynold Xin wrote:
> Please vote on releasing the following candidate as Apache Spark version
>
Thanks for the info, Burak. I will check the repo you mention. Do you know
concretely what the 'magic' is that spark-packages need, or whether there is
any document with info about it?
On Fri, Jul 15, 2016 at 10:12 PM, Luciano Resende wrote:
>
> On Fri, Jul 15, 2016 at 10:48 AM, Jacek Laskowski wrote:
On Fri, Jul 15, 2016 at 10:48 AM, Jacek Laskowski wrote:
> +1000
>
> Thanks Ismael for bringing this up! I meant to have sent it earlier too
> since I've been struggling with a sbt-based Scala project for a Spark
> package myself this week and haven't yet found out how to do local
> publishing.
>
Hi Ismael and Jacek,
If you use Maven for building your applications, you may use the
spark-package command line tool (
https://github.com/databricks/spark-package-cmd-tool) to perform packaging.
It requires you to build your jar using Maven first, and then does all the
extra magic that Spark Pack
Hi all,
I started the Hive Thrift Server with the command
./sbin/start-thriftserver.sh --master yarn --hiveconf
hive.server2.thrift.port=10003
The Thrift server started on that node without any error.
When doing the same, except pointing to a different node to start the server,
./sbin/start-th
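When the server on the second node appears to start but is unreachable, a quick TCP probe of the configured port (10003 in the command above) helps distinguish a bind failure from a network problem. A minimal stdlib sketch; the helper name is made up for illustration:

```python
# Probe whether a host accepts TCP connections on a port, e.g. the
# Thrift server port (10003) configured above. Pure stdlib.
import socket

def port_is_open(host, port, timeout=2.0):
    try:
        # create_connection raises OSError on refusal or timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the local machine before pointing a client at it.
print(port_is_open("127.0.0.1", 10003))
```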
I just retargeted SPARK-16011 to 2.1.
On Fri, Jul 15, 2016 at 10:43 AM, Shivaram Venkataraman
<shiva...@eecs.berkeley.edu> wrote:
> Hashes, sigs match. I built and ran tests with Hadoop 2.3 ("-Pyarn
> -Phadoop-2.3 -Phive -Pkinesis-asl -Phive-thriftserver"). I couldn't
> get the following tests
+1000
Thanks Ismael for bringing this up! I meant to have sent it earlier too
since I've been struggling with a sbt-based Scala project for a Spark
package myself this week and haven't yet found out how to do local
publishing.
If such a guide existed for Maven I could use it for sbt easily too :-)
Hashes, sigs match. I built and ran tests with Hadoop 2.3 ("-Pyarn
-Phadoop-2.3 -Phive -Pkinesis-asl -Phive-thriftserver"). I couldn't
get the following tests to pass but I think it might be something
specific to my setup as Jenkins on branch-2.0 seems quite stable.
[error] Failed tests:
[error] o
Signatures and hashes are OK. I built and ran tests successfully on
Ubuntu 16 + Java 8 with "-Phive -Phadoop-2.7 -Pyarn". Although I
encountered a few test failures, none were repeatable.
Regarding other issues brought up so far:
SPARK-16522
Does not seem quite enough to be a blocker if it's jus
Hello, I would like to know if there is an easy way to package a new
spark-package with Maven. I just found this repo, but I am not an sbt user:
https://github.com/databricks/sbt-spark-package
One more question: is there a formal specification or documentation of what
you need to include in a
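For the Maven side of the question, a minimal pom.xml sketch for building a Spark package jar might look like the following. The coordinates are placeholders, and the spark-package command line tool's exact requirements are in its repo linked earlier in the thread; the one point that is standard Maven practice is marking Spark itself as `provided`, since the cluster supplies it at runtime:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Placeholder coordinates: replace with your package's own. -->
  <groupId>com.example</groupId>
  <artifactId>my-spark-package</artifactId>
  <version>0.1.0</version>
  <packaging>jar</packaging>
  <dependencies>
    <!-- Spark is provided by the cluster at runtime, so don't bundle it. -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.0.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>
```

Running `mvn clean package` against a pom like this yields the jar that the command line tool then packages up.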