Yeah the best bet is to use ./build/mvn --force (otherwise we'll still
use your system maven).
- Patrick
On Mon, Aug 3, 2015 at 1:26 PM, Sean Owen so...@cloudera.com wrote:
That statement is true for Spark 1.4.x. But you've reminded me that I
failed to update this doc for 1.5, to say Maven
Hello,
In developing new third-party pipeline components for Spark ML 1.4 (see
dl4j-spark-ml), I encountered a few gaps in the earlier effort to make the ML
Developer APIs public (SPARK-5995). I plan to file issues after we discuss
on this thread. The below is a list of types that are
Hi,
I was looking at the spark-submit and spark-shell --help on both (Spark 1.3.1
and Spark 1.5-snapshot) versions and the Spark documentation for submitting
Spark applications to YARN. It seems that there is some mismatch between the
preferred syntax and the documentation.
Spark documentation
Please drop me from this list
Trevor Grant
Data Scientist
https://github.com/rawkintrevo
http://stackexchange.com/users/3002022/rawkintrevo
*Fortunate is he, who is able to know the causes of things. -Virgil*
Are these about the right rules of engagement for now until the
release candidate?
- Don't merge new features or improvements into 1.5 unless they're
Important and Have Been Discussed
- Docs and tests are OK to merge into 1.5
- Bug fixes can be merged into 1.5, with increasing conservativeness
as
I agree that it's high time to start changing/removing target versions,
especially if component maintainers have a good idea of what is not needed
for 1.5. I'll start doing that on ML.
On Mon, Aug 3, 2015 at 12:05 PM, Sean Owen so...@cloudera.com wrote:
Are these about the right rules of
I sent a note to the Mesos developers and created
https://github.com/apache/spark/pull/7899 to change the repository
pointer. There are 3-4 open PRs right now in the mesos/spark-ec2
repository and I'll work on migrating them to amplab/spark-ec2 later
today.
My thoughts on moving the python script
When I tried to compile against hbase 1.1.1, I got:
[ERROR]
/home/hbase/ssoh/src/main/scala/org/apache/spark/sql/hbase/SparkSqlRegionObserver.scala:124:
overloaded method next needs result type
[ERROR] override def next(result: java.util.List[Cell], limit: Int) =
next(result)
Is there plan to
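The compiler error above is Scala's general rule that an overloaded method whose body calls another overload must declare its result type explicitly. A minimal sketch of the fix, using a hypothetical Scanner class rather than the actual SparkSqlRegionObserver code (the Boolean result type is an assumption based on the usual scanner `next` contract):

```scala
// Minimal reproduction of "overloaded method next needs result type":
// when one overload delegates to another overload, scalac cannot infer
// the result type, so it must be written out explicitly.
class Scanner {
  def next(result: List[String]): Boolean = result.nonEmpty

  // Without the explicit ": Boolean" annotation this fails to compile
  // with the same error seen against HBase 1.1.1.
  def next(result: List[String], limit: Int): Boolean = next(result)
}
```

Annotating the `override def next(...)` in SparkSqlRegionObserver.scala with its result type the same way should resolve the error.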
The way to do that is to follow the Unsubscribe link here for dev@spark:
http://spark.apache.org/community.html
We can't drop you. You have to do it yourself.
Nick
On Mon, Aug 3, 2015 at 1:54 PM Trevor Grant trevor.d.gr...@gmail.com
wrote:
Please drop me from this list
Trevor Grant
Data
Hi Mesos developers
The Apache Spark project has been using
https://github.com/mesos/spark-ec2 as a supporting repository for some
of our EC2 scripts. This is a remnant from the days when the Spark
project itself was hosted at github.com/mesos/spark. Based on
discussions in the Spark
HBase 1.0 should work fine even though we have not completed full tests yet.
Support for 1.1 should be possible to add with minimal effort.
Thanks,
Yan
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Monday, August 03, 2015 10:33 AM
To: Bing Xiao (Bing)
Cc: dev@spark.apache.org;
If you use build/mvn or are already using Maven 3.3.3 locally (i.e.
via brew on OS X), then this won't affect you, but I wanted to call
attention to https://github.com/apache/spark/pull/7852 which makes
Maven 3.3.3 the minimum required to build Spark. This heads off
problems from some behavior
Hi Devs,
Just an announcement that I've cut Spark's branch-1.5 to form the basis of
the 1.5 release. Other than a few stragglers, this represents the end of
active feature development for Spark 1.5. *If committers are merging any
features (outside of alpha modules), please shoot me an email so I
Based on the latest Spark code (commit
608353c8e8e50461fafff91a2c885dca8af3aaa8), I used the same Spark SQL query
to test two groups of combined configurations, and from the results below it
seems that it currently doesn't work correctly with the tungsten-sort shuffle
manager:
*Test 1# (PASSED)*
Thanks Sean. I noticed this one while building Spark version 1.5.0-SNAPSHOT
this morning.
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireMavenVersion failed
with message:
Detected Maven Version: 3.2.5 is not in the allowed range 3.3.3.
Should we be using maven 3.3.3 locally or
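The warning quoted above comes from the Maven Enforcer plugin's RequireMavenVersion rule. As an illustrative sketch (not copied from Spark's actual pom.xml), a configuration of roughly this shape produces that failure on older Maven versions:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-versions</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <requireMavenVersion>
            <!-- Builds with Maven older than 3.3.3 fail this rule. -->
            <version>3.3.3</version>
          </requireMavenVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```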
Thanks Sean. The reason I asked is that in the Building Spark documentation
for 1.4.1, I still see this.
https://spark.apache.org/docs/latest/building-spark.html
Building Spark using Maven requires Maven 3.0.4 or newer and Java 6+.
But I
Using ./build/mvn should always be fine. Your local mvn is fine too if
it's 3.3.3 or later (3.3.3 is the latest). That's what any brew users
on OS X out there will have, by the way.
On Mon, Aug 3, 2015 at 8:37 PM, Guru Medasani gdm...@gmail.com wrote:
Thanks Sean. I noticed this one while
That statement is true for Spark 1.4.x. But you've reminded me that I
failed to update this doc for 1.5, to say Maven 3.3.3 is required.
Patch coming up.
On Mon, Aug 3, 2015 at 9:12 PM, Guru Medasani gdm...@gmail.com wrote:
Thanks Sean. Reason I asked this is, in Building Spark documentation of
Just note that if you have mvn in your path, you need to use build/mvn
--force.
On Mon, Aug 3, 2015 at 12:38 PM, Sean Owen so...@cloudera.com wrote:
Using ./build/mvn should always be fine. Your local mvn is fine too if
it's 3.3.3 or later (3.3.3 is the latest). That's what any brew users
on
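As a sketch of the check implied above (an assumed helper script, not part of Spark), a build wrapper could compare the local Maven version against the 3.3.3 minimum before deciding whether to fall back to `./build/mvn --force`:

```shell
# version_ge is a hypothetical helper: true when $1 >= $2 under
# version-number ordering (relies on GNU `sort -V`).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

MIN="3.3.3"
LOCAL="3.2.5"   # e.g. parsed from the first line of `mvn -version`
if version_ge "$LOCAL" "$MIN"; then
  echo "local mvn $LOCAL is fine"
else
  echo "use ./build/mvn --force"
fi
```

With `LOCAL="3.2.5"` this prints the fallback advice, matching the enforcer failure reported earlier in the thread for Maven 3.2.5.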
Would it be reasonable to start un-targeting non-bug non-blocker
issues? Like, would anyone yell if I started doing that? That would
leave ~100 JIRAs, which still seems like more than can actually go
into the release. And anyone can re-target as desired.
I think the maintainers of the