way
> that breaks something. I see there's already a reset() call in here to
> try to avoid that.
>
> Well, seems worth a PR, especially if you can demonstrate some
> performance gains.
>
> On Wed, Oct 24, 2018 at 3:09 PM Patrick Brown
> wrote:
> >
> > Hi,
> >
in merging in, and how I
might go about that.
Thanks,
Patrick
Yep, that sounds reasonable to me!
On Fri, Mar 30, 2018 at 5:50 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> +1
>
> Original message
> From: Ryan Blue <rb...@netflix.com>
> Date: 3/30/18 2:28 PM (GMT-08:00)
> To: Patrick Woody <patrick.woo...
first few values, so it must be that the intent was a global sort.
>
> On Fri, Mar 30, 2018 at 6:51 AM, Patrick Woody <patrick.woo...@gmail.com>
> wrote:
>
>> Right, you could use this to store a global ordering if there is only one
>>> write (e.g., CTAS). I don
> partition ordinal for the source to store would be required.
>>
>> For the second point that ordering is useful for statistics and
>> compression, I completely agree. Our best practices doc tells users to
>> always add a global sort when writing because you get the benefit of
.
>
> For your first use case, an explicit global ordering, the problem is that
> there can’t be an explicit global ordering for a table when it is populated
> by a series of independent writes. Each write could have a global order,
> but once those files are written, you have to deal with mul
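The compression benefit of sorting mentioned in this thread is easy to see in miniature. A quick illustrative experiment (plain Python, not Spark code):

```python
import random
import zlib

# Illustrative only: sorting data with repeated values before writing
# makes it far more compressible, which is one of the benefits of a
# global sort discussed above.
random.seed(0)
values = [i % 50 for i in range(10_000)]   # 50 distinct values, repeated
shuffled = values[:]
random.shuffle(shuffled)

unsorted_size = len(zlib.compress(bytes(shuffled)))
sorted_size = len(zlib.compress(bytes(sorted(shuffled))))
print(sorted_size < unsorted_size)  # prints True: sorted runs compress far better
```

The sorted byte stream collapses into 50 long runs, while the shuffled one looks nearly random to the compressor.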
>>>>>> other side of a join. And, it looks like task order matters? Maybe I'm
>>>>>> missing something?
>>>>>>
>>>>>> I think that we should design the write side independently based on
>>>>>> what data stores actually need, and
>>>> partition, i.e., how would the partition’s place in the global ordering be
>>>> passed?
>>>>
>>>> To your other questions, you might want to have a look at the recent
>>>> SPIP I’m working on to consolidate and clean up logical pla
Hey all,
I saw in some of the discussions around DataSourceV2 writes that we might
have the data source inform Spark of requirements for the input data's
ordering and partitioning. Has there been a proposed API for that yet?
Even one level up it would be helpful to understand how I should be
I've filed JIRA SPARK-22055 & SPARK-22054 to port the
> release scripts and allow injecting of the RM's key.
>
> On Mon, Sep 18, 2017 at 8:11 PM, Patrick Wendell <patr...@databricks.com>
> wrote:
>
>> For the current release - maybe Holden could just s
For the current release - maybe Holden could just sign the artifacts with
her own key manually, if this is a concern. I don't think that would
require modifying the release pipeline, except to just remove/ignore the
existing signatures.
- Patrick
On Mon, Sep 18, 2017 at 7:56 PM, Reynold Xin
https://github.com/apache/spark/tree/master/dev/create-release
- Patrick
On Mon, Sep 18, 2017 at 6:23 PM, Patrick Wendell <patr...@databricks.com>
wrote:
> One thing we could do is modify the release tooling to allow the key to be
> injected each time, thus allowing any RM to insert t
One thing we could do is modify the release tooling to allow the key to be
injected each time, thus allowing any RM to insert their own key at build
time.
Patrick
On Mon, Sep 18, 2017 at 4:56 PM Ryan Blue <rb...@netflix.com> wrote:
> I don't understand why it is necessary to share a re
then they themselves can do quite a
bit of nefarious things anyways.
It is true that we trust all previous release managers instead of only one.
We could probably rotate the jenkins credentials periodically in order to
compensate for this, if we think this is a nontrivial risk.
- Patrick
On Sun, Sep 17, 2017
JIRA
Patrick.
From: Katherine Prevost <k...@hypatian.org>
To: Jörn Franke <jornfra...@gmail.com>; Katherine Prevost <prevo...@cert.org>
Cc: dev@spark.apache.org
Sent: Wednesday, August 16, 2017 11:55 AM
Subject: Re: Questions about the future of UDTs and En
Hey all,
Just wondering if anyone has had issues with this or if it is expected that
the semantics around memory management are different here.
Thanks
-Pat
On Tue, Apr 19, 2016 at 9:32 AM, Patrick Woody <patrick.woo...@gmail.com>
wrote:
> Hey all,
>
> I had a question about
Hey all,
I had a question about the MemoryStore for the BlockManager with the
unified memory manager v.s. the legacy mode.
In the unified format, I would expect the max size of the MemoryStore to be
* *
in the same way that when using the StaticMemoryManager it is
* *
.
Instead it
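The two sizing formulas being contrasted here can be sketched numerically. This is a back-of-the-envelope sketch only; the 300 MB reserved size and the default fractions below are my recollection of the 1.5/1.6-era defaults and may differ by Spark version:

```python
# Hedged sketch of the two MemoryStore sizing formulas. The constants
# (300 MB reserved, the default fractions) are assumptions and vary by
# Spark version; this is not Spark code.
RESERVED_BYTES = 300 * 1024 * 1024

def unified_max_store(heap_bytes, memory_fraction=0.75):
    # UnifiedMemoryManager: storage may grow into the whole unified region
    return int((heap_bytes - RESERVED_BYTES) * memory_fraction)

def static_max_store(heap_bytes, storage_fraction=0.6, safety_fraction=0.9):
    # StaticMemoryManager (legacy mode): fixed storage region plus margin
    return int(heap_bytes * storage_fraction * safety_fraction)

gb = 1024 ** 3
print(unified_max_store(4 * gb) // (1024 ** 2), "MB unified")
print(static_max_store(4 * gb) // (1024 ** 2), "MB legacy")
```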
Hey Michael,
Any update on a first cut of the RC?
Thanks!
-Pat
On Mon, Feb 15, 2016 at 6:50 PM, Michael Armbrust
wrote:
> I'm not going to be able to do anything until after the Spark Summit, but
> I will kick off RC1 after that (end of week). Get your patches in
+1
On Wed, Dec 16, 2015 at 6:15 PM, Ted Yu wrote:
> Ran test suite (minus docker-integration-tests)
> All passed
>
> +1
>
> [INFO] Spark Project External ZeroMQ .. SUCCESS [
> 13.647 s]
> [INFO] Spark Project External Kafka ...
by has some idea of how things are going
and can chime in, etc.
Once an RC is cut then we do mostly rely on the mailing list for
discussion. At that point the number of known issues is small enough I
think to discuss in an all-to-all fashion.
- Patrick
On Wed, Dec 2, 2015 at 1:25 PM, Sean Owen <
of
years, with minimal impact for users.
- Patrick
On Tue, Nov 10, 2015 at 3:35 PM, Nicholas Chammas <
nicholas.cham...@gmail.com> wrote:
> > For this reason, I would *not* propose doing major releases to break
> substantial API's or perform large re-architecting that prevent users f
need to continue to
support maven - the coupling is intentional. But getting involved in the
build in general would be completely welcome.
- Patrick
On Thu, Nov 5, 2015 at 10:53 PM, Sean Owen <so...@cloudera.com> wrote:
> Maven isn't 'legacy', or supported for the benefit of third parti
I believe this is some bug in our tests. For some reason we are using way
more memory than necessary. We'll probably need to log into Jenkins and
heap dump some running tests and figure out what is going on.
On Mon, Nov 2, 2015 at 7:42 AM, Ted Yu wrote:
> Looks like
I verified that the issue with build binaries being present in the source
release is fixed. Haven't done enough vetting for a full vote, but did
verify that.
On Sun, Oct 25, 2015 at 12:07 AM, Reynold Xin wrote:
> Please vote on releasing the following candidate as Apache
de me w/ some specific failures so I can look
> in to them more closely?
>
> On Mon, Oct 19, 2015 at 12:27 PM, Patrick Wendell <pwend...@gmail.com>
> wrote:
> > Hey Shane,
> >
> > It also appears that every Spark build is failing right now. Could it be
> > related
I think many of them are coming from the Spark 1.4 builds:
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/Spark-1.4-Maven-pre-YARN/3900/console
On Mon, Oct 19, 2015 at 1:44 PM, Patrick Wendell <pwend...@gmail.com> wrote:
> This is what I'
Hey Shane,
It also appears that every Spark build is failing right now. Could it be
related to your changes?
- Patrick
On Mon, Oct 19, 2015 at 11:13 AM, shane knapp <skn...@berkeley.edu> wrote:
> worker 05 is back up now... looks like the machine OOMed and needed
> to be kicked
I would tend to agree with this approach. We should audit all
@Experimental labels before the 1.6 release and clear them out when
appropriate.
- Patrick
On Wed, Oct 14, 2015 at 2:13 AM, Sean Owen <so...@cloudera.com> wrote:
> Someone asked, is "ML pipelines" stable? I said,
today.
In terms of fixing the underlying issues, I am not sure whether there is a
JIRA for it yet, but we should make one if not. Does anyone know?
- Patrick
On Wed, Oct 14, 2015 at 12:13 PM, Jakob Odersky <joder...@gmail.com> wrote:
> Hi everyone,
>
> I've been having trouble
It's really easy to create and modify those builds. If the issue is that we
need to add SBT or Maven to the existing one, it's a short change. We can
just have it build both of them. I wasn't aware of things breaking before
in one build but not another.
- Patrick
On Mon, Oct 12, 2015 at 9:21 AM
was
not using the most current version of the build scripts. See related links:
https://issues.apache.org/jira/browse/SPARK-10511
https://github.com/apache/spark/pull/8774/files
I can update our build environment and we can repackage the Spark 1.5.1
source tarball to not include sources.
- Patrick
*to not include binaries.
On Sun, Oct 11, 2015 at 9:35 PM, Patrick Wendell <pwend...@gmail.com> wrote:
> I think Daniel is correct here. The source artifact incorrectly includes
> jars. It is inadvertent and not part of our intended release process. This
> was something I noticed
of the
source tree, including some effort to generate jars on the fly which a lot
of our tests use. I am not sure whether it's a firm policy that you can't
have jars in test folders, though. If it is, we could probably do some
magic to get rid of these few ones that have crept in.
- Patrick
On Sun
tic, but we ended up removing it from
our source tree and adding things to download it for the user.
- Patrick
On Sun, Oct 11, 2015 at 10:12 PM, Sean Owen <so...@cloudera.com> wrote:
> No we are voting on the artifacts being released (too) in principle.
> Although of course the art
I would push back slightly. The reason we have the PR builds taking so long
is death by a million small things that we add. Doing a full 2.11 compile
is on the order of minutes... it's a nontrivial increase to the build times.
It doesn't seem that bad to me to go back post-hoc once in a while and fix
2.11
the foreseeable future?
>
> Nick
>
>
> On Tue, Oct 6, 2015 at 1:13 AM Patrick Wendell <pwend...@gmail.com> wrote:
>
>> The missing artifacts are uploaded now. Things should propagate in the
>> next 24 hours. If there are still issues past then ping this
case, getting some high level view of the functionality you imagine
would be helpful to give more detailed feedback.
- Patrick
On Tue, Oct 6, 2015 at 3:12 PM, Holden Karau <hol...@pigscanfly.ca> wrote:
> Hi Spark Devs,
>
> So this has been brought up a few times before, and genera
The missing artifacts are uploaded now. Things should propagate in the next
24 hours. If there are still issues past then ping this thread. Thanks!
- Patrick
On Mon, Oct 5, 2015 at 2:41 PM, Nicholas Chammas <nicholas.cham...@gmail.com
> wrote:
> Thanks for looking into this Josh.
BTW - the merge window for 1.6 is September+October. The QA window is
November and we'll expect to ship probably early December. We are on a
3 month release cadence, with the caveat that there is some
pipelining... as we finish release X we are already starting on
release X+1.
- Patrick
On Thu
Ah - I can update it. Usually I do it after the release is cut. It's
just a standard 3 month cadence.
On Thu, Oct 1, 2015 at 3:55 AM, Sean Owen wrote:
> My guess is that the 1.6 merge window should close at the end of
> November (2 months from now)? I can update it but wanted
Hey Richard,
My assessment (just looked before I saw Sean's email) is the same as
his. The NOTICE file embeds other projects' licenses. If those
licenses themselves have pointers to other files or dependencies, we
don't embed them. I think this is standard practice.
- Patrick
On Thu, Sep 24
people are
supportive of this plan I can offer to help spend some time thinking
about any potential corner cases, etc.
- Patrick
On Wed, Sep 23, 2015 at 3:13 PM, Marcelo Vanzin <van...@cloudera.com> wrote:
> Hey all,
>
> This is something that we've discussed several times internal
I just added snapshot builds for 1.5. They will take a few hours to
build, but once we get them working should publish every few hours.
https://amplab.cs.berkeley.edu/jenkins/view/Spark-Packaging
- Patrick
On Mon, Sep 21, 2015 at 10:36 PM, Bin Wang <wbi...@gmail.com> wrote:
> Howev
.
I've documented this on the wiki:
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools
- Patrick
-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
There is already code in place that restricts which tests run
depending on which code is modified. However, changes inside of
Spark's core currently require running all dependent tests. If you
have some ideas about how to improve that heuristic, it would be
great.
- Patrick
On Tue, Aug 25, 2015
patches didn't
introduce problems.
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Test/
- Patrick
I have a follow up on this:
I see on JIRA that the idea of having a GLMNET implementation was more or
less abandoned, since a OWLQN implementation was chosen to construct a model
using L1/L2 regularization.
However, GLMNET has the property of returning a multitude of models
(corresponding to
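For readers unfamiliar with the glmnet behavior referenced here: it fits one model per value on a grid of regularization strengths. A minimal sketch of that path idea, using the one-feature lasso where the solution has a closed form (illustrative only, not Spark or glmnet code):

```python
# Regularization-path sketch: for the one-feature lasso the solution is
# beta(lambda) = S(x'y, lambda) / x'x, with S the soft-threshold operator.
# One fit per lambda yields a whole family of models, as glmnet does.
def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_path_1d(x, y, lambdas):
    xtx = sum(v * v for v in x)
    xty = sum(a * b for a, b in zip(x, y))
    return [soft_threshold(xty, lam) / xtx for lam in lambdas]

x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]                      # y = 2x exactly
print(lasso_path_1d(x, y, [28.0, 14.0, 0.0]))  # → [0.0, 1.0, 2.0]
```

As lambda shrinks, the coefficient moves from fully shrunk (0.0) to the unregularized least-squares fit (2.0).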
Hey Meihua,
If you are a user of Spark, one thing that is really helpful is to run
Spark 1.5 on your workload and report any issues, performance
regressions, etc.
- Patrick
On Mon, Aug 3, 2015 at 11:49 PM, Akhil Das ak...@sigmoidanalytics.com wrote:
I think you can start from here
https
Yeah the best bet is to use ./build/mvn --force (otherwise we'll still
use your system maven).
- Patrick
On Mon, Aug 3, 2015 at 1:26 PM, Sean Owen so...@cloudera.com wrote:
That statement is true for Spark 1.4.x. But you've reminded me that I
failed to update this doc for 1.5, to say Maven
Hey All,
I got it up and running - it was a newly surfaced bug in the build scripts.
- Patrick
On Wed, Jul 29, 2015 at 6:05 AM, Bharath Ravi Kumar reachb...@gmail.com wrote:
Hey Patrick,
Any update on this front please?
Thanks,
Bharath
On Fri, Jul 24, 2015 at 8:38 PM, Patrick Wendell
. I would vouch for having user continuity, for instance still
have a shim ec2/spark-ec2 script that could perhaps just download
and unpack the real script from github.
- Patrick
On Fri, Jul 31, 2015 at 2:13 PM, Shivaram Venkataraman
shiva...@eecs.berkeley.edu wrote:
Yes - It is still in progress
the best behavior would be. Ideally in my mind if the same shortname
were registered twice we'd force the user to use a fully qualified name and
say the short name is ambiguous.
Patrick
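That resolution rule could look roughly like the following sketch (the class and source names are made up; this is not Spark's actual registry code):

```python
# Hypothetical sketch of the behavior proposed above: if two data sources
# register the same short name, resolving by short name fails and the
# caller must supply the fully qualified name instead.
class SourceRegistry:
    def __init__(self):
        self._by_short = {}                     # short name -> set of FQNs

    def register(self, short_name, fqn):
        self._by_short.setdefault(short_name, set()).add(fqn)

    def resolve(self, name):
        if "." in name:                         # fully qualified: unambiguous
            return name
        candidates = self._by_short.get(name, set())
        if len(candidates) != 1:
            raise ValueError(
                f"short name {name!r} is ambiguous or unknown: {sorted(candidates)}")
        return next(iter(candidates))

reg = SourceRegistry()
reg.register("avro", "com.databricks.spark.avro.DefaultSource")
reg.register("avro", "org.example.avro.AvroSource")
try:
    reg.resolve("avro")                         # ambiguous: raises
except ValueError as e:
    print("ambiguous:", e)
print(reg.resolve("org.example.avro.AvroSource"))   # FQN still works
```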
On Jul 30, 2015 9:44 AM, Joseph Batchik josephbatc...@gmail.com wrote:
Hi all,
There are now starting
Thanks Ted for pointing this out. CC to Ryan and TD
On Tue, Jul 28, 2015 at 8:25 AM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
I noticed that ReceiverTrackerSuite is failing in master Jenkins build for
both hadoop profiles.
The failure seems to start with:
.
It's not worth wasting any time trying to figure out how to fix it,
or blocking on tracking down the commit author. This is because every
hour that we have the PRB broken is a major cost in terms of developer
productivity.
- Patrick
Hey Bharath,
There was actually an incompatible change to the build process that
broke several of the Jenkins builds. This should be patched up in the
next day or two and nightly builds will resume.
- Patrick
On Fri, Jul 24, 2015 at 12:51 AM, Bharath Ravi Kumar
reachb...@gmail.com wrote:
I
people on specific patches if they want
a sounding board to understand whether it makes sense to backport.
- Patrick
I think we should just revert this patch on all affected branches. No
reason to leave the builds broken until a fix is in place.
- Patrick
On Sun, Jul 19, 2015 at 6:03 PM, Josh Rosen rosenvi...@gmail.com wrote:
Yep, I emailed TD about it; I think that we may need to make a change to the
pull
and why you might sense some frustration.
[1]
https://web.archive.org/web/20061020220358/http://www.apache.org/dev/release.html
[2]
https://web.archive.org/web/20061231050046/http://www.apache.org/dev/release.html
- Patrick
On Tue, Jul 14, 2015 at 10:09 AM, Sean Busbey bus...@cloudera.com wrote
](link). I think this would preserve
discoverability while also placing the information on the wiki, which
seems to be the main ask of the policy.
- Patrick
On Sun, Jul 19, 2015 at 2:32 AM, Sean Owen so...@cloudera.com wrote:
I am going to make an edit to the download page on the web site to
start
+1 from me too
On Sat, Jul 18, 2015 at 3:32 AM, Ted Yu yuzhih...@gmail.com wrote:
+1 to removing commit messages.
On Jul 18, 2015, at 1:35 AM, Sean Owen so...@cloudera.com wrote:
+1 to removing them. Sometimes there are 50+ commits because people
have been merging from master into their
/java/org/apache/spark/JavaSparkListener.java#L23
I think it might be reasonable that the Scala trait provides only
source compatibility and the Java class provides binary compatibility.
- Patrick
On Wed, Jul 15, 2015 at 11:47 AM, Marcelo Vanzin van...@cloudera.com wrote:
Hey all,
Just noticed
Actually the java one is a concrete class.
On Wed, Jul 15, 2015 at 12:14 PM, Patrick Wendell pwend...@gmail.com wrote:
One related note here is that we have a Java version of this that is
an abstract class - in the doc it says that it exists more or less to
allow for binary compatibility
-release-1-4-1.html
Comprehensive list of fixes - http://s.apache.org/spark-1.4.1
Thanks to the 85 developers who worked on this release!
Please contact me directly for errata in the release notes.
- Patrick
This vote passes with 14 +1 (7 binding) votes and no 0 or -1 votes.
+1 (14):
Patrick Wendell
Reynold Xin
Sean Owen
Burak Yavuz
Mark Hamstra
Michael Armbrust
Andrew Or
York, Brennon
Krishna Sankar
Luciano Resende
Holden Karau
Tom Graves
Denny Lee
Sean McNamara
- Patrick
On Wed, Jul 8, 2015 at 10
Thanks Sean O. I was thinking something like NOTE: Nightly builds are
meant for development and testing purposes. They do not go through
Apache's release auditing process and are not official releases.
- Patrick
On Sun, Jul 12, 2015 at 3:39 PM, Sean Owen so...@cloudera.com wrote:
(This sounds
I think we can close this vote soon. Any additional votes/testing would
be much appreciated!
On Fri, Jul 10, 2015 at 11:30 AM, Sean McNamara
sean.mcnam...@webtrends.com wrote:
+1
Sean
On Jul 8, 2015, at 11:55 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing
policy asks us not to include links that encourage
non-developers to download the builds. Stating clearly that the
audience for those links is developers, in my interpretation that
would satisfy the letter and spirit of this policy.
- Patrick
On Sat, Jul 11, 2015 at 11:53 AM, Sean Owen so
+1
On Wed, Jul 8, 2015 at 10:55 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted
Yeah - we can fix the docs separately from the release.
- Patrick
On Wed, Jul 8, 2015 at 10:03 AM, Mark Hamstra m...@clearstorydata.com wrote:
HiveSparkSubmitSuite is fine for me, but I do see the same issue with
DataFrameStatSuite -- OSX 10.10.4, java
1.7.0_75, -Phive -Phive-thriftserver
in order to get that fix.
- Patrick
On Wed, Jul 8, 2015 at 12:00 PM, Josh Rosen rosenvi...@gmail.com wrote:
I've filed https://issues.apache.org/jira/browse/SPARK-8903 to fix the
DataFrameStatSuite test failure. The problem turned out to be caused by a
mistake made while resolving a merge
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc4 (commit dbaa5c2):
This vote is cancelled in favor of RC4.
- Patrick
On Tue, Jul 7, 2015 at 12:06 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http
Hey All,
This vote is cancelled in favor of RC3.
- Patrick
On Fri, Jul 3, 2015 at 1:15 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here
Hi Tomo,
For now you can do that as a work around. We are working on a fix for
this in the master branch but it may take a couple of days since the
issue is fairly complicated.
- Patrick
On Sat, Jul 4, 2015 at 7:00 AM, tomo cocoa cocoatom...@gmail.com wrote:
Hi all,
I have the same error
-Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean
package
this also gave the ‘Dependency-reduced POM’ loop
Robin
On 3 Jul 2015, at 23:41, Patrick Wendell pwend...@gmail.com wrote:
What if you use the built-in maven (i.e. build/mvn). It might be that
we require a newer version of maven than
Let's continue the discussion on the other thread relating to the master build.
On Fri, Jul 3, 2015 at 4:13 PM, Patrick Wendell pwend...@gmail.com wrote:
Thanks - it appears this is just a legitimate issue with the build,
affecting all versions of Maven.
On Fri, Jul 3, 2015 at 4:02 PM
typical users
won't have this bug.
2. Add a profile that re-enables that setting.
3. Use the above profile when publishing release artifacts to maven central.
4. Hope that we don't hit this bug for publishing.
- Patrick
On Fri, Jul 3, 2015 at 3:51 PM, Tarek Auel tarek.a...@gmail.com wrote
https://github.com/apache/spark/commit/bc51bcaea734fe64a90d007559e76f5ceebfea9e
On Fri, Jul 3, 2015 at 4:36 PM, Patrick Wendell pwend...@gmail.com wrote:
Okay I did some forensics with Sean Owen. Some things about this bug:
1. The underlying cause is that we added some code to make the tests
of sub
Can you try using the built in maven build/mvn...? All of our builds
are passing on Jenkins so I wonder if it's a maven version issue:
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/
- Patrick
On Fri, Jul 3, 2015 at 3:14 PM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look
, family: mac
Let me nuke it and reinstall maven.
Cheers
k
On Fri, Jul 3, 2015 at 3:41 PM, Patrick Wendell pwend...@gmail.com wrote:
What if you use the built-in maven (i.e. build/mvn). It might be that
we require a newer version of maven than you have. The release itself
is built with maven
the time of the RC voting is an
interesting topic, Sean I like your most recent proposal. Maybe we can
put that on the wiki or start a DISCUSS thread to cover that topic.
On Tue, Jun 23, 2015 at 10:37 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc2 (commit 07b95c7):
Hey Krishna - this is still the current release candidate.
- Patrick
On Sun, Jun 28, 2015 at 12:14 PM, Krishna Sankar ksanka...@gmail.com wrote:
Patrick,
Haven't seen any replies on test results. I will byte ;o) - Should I test
this version or is another one in the wings ?
Cheers
k
Hey Tom - no one voted on this yet, so I need to keep it open until
people vote. But I'm not aware of specific things we are waiting for.
Anyone else?
- Patrick
On Fri, Jun 26, 2015 at 7:10 AM, Tom Graves tgraves...@yahoo.com wrote:
So is this open for vote then or are we waiting on other
at this release means
we are targeting such that we get around 70% of issues merged. That
actually doesn't seem so bad to me since there is some uncertainty in
the process. B
- Patrick
On Wed, Jun 24, 2015 at 1:54 AM, Sean Owen so...@cloudera.com wrote:
There are 44 issues still targeted for 1.4.1. None
Please vote on releasing the following candidate as Apache Spark version 1.4.1!
This release fixes a handful of known issues in Spark 1.4.0, listed here:
http://s.apache.org/spark-1.4.1
The tag to be voted on is v1.4.1-rc1 (commit 60e08e5):
is that it's much more efficient for us as the Spark
maintainers to pay this cost rather than to force a lot of our users
to deal with painful upgrades.
On Sat, Jun 13, 2015 at 1:39 AM, Steve Loughran ste...@hortonworks.com wrote:
On 12 Jun 2015, at 17:12, Patrick Wendell pwend...@gmail.com
vs the inconvenience for users.
- Patrick
On Fri, Jun 12, 2015 at 8:45 AM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
I'm personally in favor, but I don't have a sense of how many people still
rely on Hadoop 1.
Nick
On Fri, Jun 12, 2015 at 9:13 AM, Steve Loughran
ste...@hortonworks.com wrote:
Hi All,
I'm happy to announce the availability of Spark 1.4.0! Spark 1.4.0 is
the fifth release on the API-compatible 1.X line. It is Spark's
largest release ever, with contributions from 210 developers and more
than 1,000 commits!
Huge thanks go to all of the individuals and organizations
This vote passes! Thanks to everyone who voted. I will get the release
artifacts and notes up within a day or two.
+1 (23 votes):
Reynold Xin*
Patrick Wendell*
Matei Zaharia*
Andrew Or*
Timothy Chen
Calvin Jia
Burak Yavuz
Krishna Sankar
Hari Shreedharan
Ram Sriharsha*
Kousuke Saruta
Sandy Ryza
Hey Hector,
It's not a bad idea. I think we'd want to do this by virtue of
allowing custom repositories, so users can add bintray or others.
- Patrick
On Wed, Jun 10, 2015 at 6:23 PM, Hector Yee hector@gmail.com wrote:
Hi Spark devs,
Is it possible to add jcenter or bintray support
Hi All,
Thanks for the continued voting! I'm going to leave this thread open
for another few days to continue to collect feedback.
- Patrick
On Tue, Jun 2, 2015 at 8:53 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
in there, but they are not in the logs, then let us know
(that would be a bug).
- Patrick
On Sun, Jun 7, 2015 at 9:06 AM, Akhil Das ak...@sigmoidanalytics.com wrote:
Are you seeing the same behavior on the driver UI? (that running on port
4040), If you click on the stage id header you can sort the stages based on
IDs
feel differently.
- Patrick
I will give +1 as well.
On Wed, Jun 3, 2015 at 11:59 PM, Reynold Xin r...@databricks.com wrote:
Let me give you the 1st
+1
On Tue, Jun 2, 2015 at 10:47 PM, Patrick Wendell pwend...@gmail.com wrote:
Hey all - a tiny nit from the last e-mail. The tag is v1.4.0-rc4. The
exact commit and all
randomSplit
9a88be1 [SPARK-6013] [ML] Add more Python ML examples for spark.ml
2bd4460 [SPARK-7954] [SPARKR] Create SparkContext in sparkRSQL init
On Fri, May 29, 2015 at 4:40 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.0
Please vote on releasing the following candidate as Apache Spark version 1.4.0!
The tag to be voted on is v1.4.0-rc3 (commit 22596c5):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=
22596c534a38cfdda91aef18aa9037ab101e4251
The release files, including signatures, digests, etc.
Hey all - a tiny nit from the last e-mail. The tag is v1.4.0-rc4. The
exact commit and all other information is correct. (thanks Shivaram
who pointed this out).
On Tue, Jun 2, 2015 at 8:53 PM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache
?
Thanks for helping test!
- Patrick
On Mon, Jun 1, 2015 at 5:18 PM, Bobby Chowdary
bobby.chowdar...@gmail.com wrote:
Hive Context works on RC3 for Mapr after adding
spark.sql.hive.metastore.sharedPrefixes as suggested in SPARK-7819. However,
there still seems to be some other issues with native
Thanks for all the discussion on the vote thread. I am canceling this
vote in favor of RC3.
On Sun, May 24, 2015 at 12:22 AM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.4.0!
The tag to be voted on is v1.4.0-rc2 (commit