+1 (non-binding)
Verified on OS X 10.10.2, built from source:
spark-shell / spark-submit jobs
ran various simple Spark / Scala queries
ran various SparkSQL queries (including HiveContext)
ran ThriftServer service and connected via beeline
ran SparkSVD
On Mon Dec 01 2014 at 11:09:26 PM Patrick
Thanks.
And @Reynold, sorry, my bad. Guess I should have used something like
Stack Overflow!
On Tue, Dec 2, 2014 at 12:18 PM, Reynold Xin r...@databricks.com wrote:
Oops my previous response wasn't sent properly to the dev list. Here you
go for archiving.
Yes you can. Scala classes are
Thanks Sean, I followed suit (brew install zinc) and that is working.
2014-12-01 22:39 GMT-08:00 Sean Owen so...@cloudera.com:
I'm having no problems with the build or zinc on my Mac. I use zinc
from brew install zinc.
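For anyone else setting this up, a rough sketch of the workflow (assuming the
standalone zinc that Homebrew installs; exact flags may vary with your setup):

    brew install zinc
    zinc -start                # start the long-running incremental-compile server
    mvn -DskipTests compile    # the build should pick up the running zinc server on its default port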
On Tue, Dec 2, 2014 at 3:02 AM, Stephen Boesch java...@gmail.com wrote:
Apologies if people get this more than once -- I sent mail to dev@spark
last night and don't see it in the archives. Trying the incubator list
now...wanted to make sure it doesn't get lost in case it's a bug...
-- Forwarded message --
From: Yana Kadiyska yana.kadiy...@gmail.com
+1 (non-binding)
Installed version pre-built for Hadoop on a private HPC
ran PySpark shell w/ IPython
loaded data using custom Hadoop input formats
ran MLlib routines in PySpark
ran custom workflows in PySpark
browsed the web UI
Noticeable improvements in stability and performance during large
Hi all,
I've noticed a bunch of times lately where a pull request changes to be
pretty different from the original pull request, and the title /
description never get updated. Because the pull request title and
description are used as the commit message, the incorrect description lives
on
I second that!
Would also be great if the JIRA was updated accordingly too.
Regards,
Mridul
On Wed, Dec 3, 2014 at 1:53 AM, Kay Ousterhout kayousterh...@gmail.com wrote:
Hi all,
I've noticed a bunch of times lately where a pull request changes to be
pretty different from the original pull
+1. I also tested on Windows just in case, with jars referring to other jars
and Python files referring to other Python files. Path resolution still works.
2014-12-02 10:16 GMT-08:00 Jeremy Freeman freeman.jer...@gmail.com:
+1 (non-binding)
Installed version pre-built for Hadoop on a private HPC
Also a note on this for committers - it's possible to re-word the
title during merging, by just running git commit -a --amend before
you push the PR.
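For example, a minimal sketch (the remote and branch names here are just placeholders):

    git commit -a --amend      # opens your editor so the commit message / title can be reworded
    git push <remote> master   # then push the merge as usual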
- Patrick
On Tue, Dec 2, 2014 at 12:50 PM, Mridul Muralidharan mri...@gmail.com wrote:
I second that!
Would also be great if the JIRA was
I am happy to announce the availability of Spark 1.1.1! This is a
maintenance release with many bug fixes, most of which are concentrated in
the core. These include various fixes to sort-based shuffle, memory
leaks, and spilling issues. Contributions to this release came from 55
developers.
+1, tested on YARN.
Tom
On Friday, November 28, 2014 11:18 PM, Patrick Wendell
pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version 1.2.0!
The tag to be voted on is v1.2.0-rc1 (commit 1056e9ec1):
Following on Mark's Maven examples, here is another related issue I'm
having:
I'd like to compile just the `core` module after a `mvn clean`, without
building an assembly JAR first. Is this possible?
Attempting to do it myself, the steps I performed were:
- `mvn compile -pl core`: fails because
On Tue, Dec 2, 2014 at 2:40 PM, Ryan Williams
ryan.blake.willi...@gmail.com wrote:
Following on Mark's Maven examples, here is another related issue I'm
having:
I'd like to compile just the `core` module after a `mvn clean`, without
building an assembly JAR first. Is this possible?
Out of
Hey Ryan,
What if you run a single mvn install to install all libraries
locally - then can you mvn compile -pl core? I think this may be the
only way to make it work.
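Concretely, something like this (a sketch; add whatever profiles/flags you normally build with):

    mvn -DskipTests install            # one-time: install all modules into the local Maven repo
    mvn -DskipTests compile -pl core   # afterwards, compile just core against the installed artifacts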
- Patrick
On Tue, Dec 2, 2014 at 2:40 PM, Ryan Williams
ryan.blake.willi...@gmail.com wrote:
Following on Mark's Maven
Marcelo: by my count, there are 19 maven modules in the codebase. I am
typically only concerned with core (and therefore its two dependencies as
well, `network/{shuffle,common}`).
The `mvn package` workflow (and its sbt equivalent) that most people
apparently use involves (for me)
On Tue, Dec 2, 2014 at 3:39 PM, Ryan Williams
ryan.blake.willi...@gmail.com wrote:
Marcelo: by my count, there are 19 maven modules in the codebase. I am
typically only concerned with core (and therefore its two dependencies as
well, `network/{shuffle,common}`).
But you only need to compile
On Tue Dec 02 2014 at 4:46:20 PM Marcelo Vanzin van...@cloudera.com wrote:
On Tue, Dec 2, 2014 at 3:39 PM, Ryan Williams
ryan.blake.willi...@gmail.com wrote:
Marcelo: by my count, there are 19 maven modules in the codebase. I am
typically only concerned with core (and therefore its two
On Tue, Dec 2, 2014 at 4:40 PM, Ryan Williams
ryan.blake.willi...@gmail.com wrote:
But you only need to compile the others once.
once... every time I rebase off master, or am obliged to `mvn clean` by some
other build-correctness bug, as I said before. In my experience this works
out to a few
In Hive 13 (which is the default for Spark 1.2), Parquet is included, and
thus we no longer include the Hive Parquet bundle. You can now use the
included
ParquetSerDe: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
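For example, something along these lines from a HiveContext (just a sketch; the
table name and schema are made up):

    import org.apache.spark.sql.hive.HiveContext

    val hiveCtx = new HiveContext(sc)   // assumes an existing SparkContext `sc`
    hiveCtx.sql("""
      CREATE TABLE IF NOT EXISTS events_parquet (id INT, name STRING)
      ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
      STORED AS
        INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
        OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
    """)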
If you want to compile Spark 1.2 with Hive 12 instead you can pass
Hello everyone,
Could anybody tell me how to import and call third-party Java classes from
inside Spark?
Here's my case:
I have a jar file (the directory layout is com.xxx.yyy.zzz) which contains
some Java classes, and I need to call some of them in Spark code.
I used the statement import
I think you can place the jar in lib/ in SPARK_HOME and then compile without
any change to your classpath. This could be a temporary way to include your
jar. You can also add it to your pom.xml.
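As a sketch of what the call site might then look like (the class and method
names below are placeholders for whatever is in your jar):

    import com.xxx.yyy.zzz.SomeUtility   // hypothetical class from the third-party jar

    val processed = sc.textFile("hdfs:///input/data.txt")
      .map(line => SomeUtility.transform(line))   // invoke the Java method per record
    println(processed.count())

At submit time you can also ship the jar explicitly, e.g.
spark-submit --jars /path/to/your.jar, so it reaches the executors.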
Thanks,
Daoyuan
-Original Message-
From: flyson [mailto:m_...@msn.com]
Sent: