I took the liberty of adding this to the wiki, where it can change
further if needed.
https://cwiki.apache.org/confluence/display/SPARK/Committers#Committers-PolicyonBackportingBugFixes
On Fri, Jul 24, 2015 at 8:57 PM, Patrick Wendell pwend...@gmail.com wrote:
Hi All,
A few times I've been
Hi all,
When I run `dev/lint-python` on the latest master branch, I get an error
message as follows.
Is the lint script broken? Or are there any problems with my environment?
```
$ ./dev/lint-python
./dev/lint-python: line 64: syntax error near unexpected token `'
./dev/lint-python: line 64: `
```
Hi Sean,
Thank you for answering my question.
It seems that I was using an old version of bash, which is the default on Mac.
```
$ bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin14)
Copyright (C) 2007 Free Software Foundation, Inc.
```
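As a side note, a small guard at the top of a dev script can surface this earlier. A minimal sketch — the bash-4 threshold here is an assumption for illustration, not something the Spark scripts state:

```shell
# Sketch: warn early if the running bash predates 4.x
# (OS X ships 3.2 by default; 'brew install bash' gets a newer one).
major="${BASH_VERSION:-0}"
major="${major%%.*}"
if [ "$major" -lt 4 ]; then
  echo "bash ${BASH_VERSION:-unknown} may be too old for this script" >&2
else
  echo "bash $BASH_VERSION looks recent enough"
fi
```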
Thanks,
Yu
-- Yu
I do not see any problem like that on master. The syntax looks valid.
Do you have an old version bash?
On Mon, Jul 27, 2015 at 2:29 PM, Yu Ishikawa
yuu.ishikawa+sp...@gmail.com wrote:
Hi all,
When I run `dev/lint-python` at the latest master branch, I got an error
message as follows.
Is the
I'm on 4.3.39, though that is probably newer than what comes with Macs
in general as I use brew to get newer versions of lots of things (this
may be a good option for you in general if you're a developer). What
version of OS X are you on -- is it also old? Or is this likely to be a more
widespread problem?
I am running a Spark application in YARN with 2 executors, with Xms/Xmx at
32 GB and spark.yarn.executor.memoryOverhead at 6 GB.
I am seeing that the app's physical memory is ever-increasing, and it finally
gets killed by the node manager:
2015-07-25 15:07:05,354 WARN
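For reference, this is roughly how those settings are passed — a sketch with placeholder jar/app names. Note the spelling is `spark.yarn.executor.memoryOverhead`, and in Spark 1.x the overhead value is given in MB:

```shell
# Sketch only: placeholder paths; an overhead of 6 GB expressed as 6144 MB.
spark-submit \
  --master yarn-cluster \
  --num-executors 2 \
  --executor-memory 32g \
  --conf spark.yarn.executor.memoryOverhead=6144 \
  your-app.jar
```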
Dear Spark developers,
Below is the GraphX Pregel code snippet from
https://spark.apache.org/docs/latest/graphx-programming-guide.html#pregel-api:
(it does not contain a caching step):
while (activeMessages > 0 && i < maxIterations) {
// Receive the messages:
I'm using 10.10.4, and Xcode is version 6.4. Maybe it isn't old.
I guess the old bash version causes the problem. I'll try to install another
bash with brew.
-- Yu Ishikawa
Thank you, your explanation does make sense to me. Do you think that one join
will work if `mapReduceTriplets` is replaced by the new `aggregateMessages`?
The latter does not return the vertices that did not receive a message.
From: Robin East [mailto:robin.e...@xense.co.uk]
Sent: Monday, July
I have promoted https://issues.apache.org/jira/browse/SPARK-9202 to a
blocker to ensure that we get a fix for it before 1.5.0. I'm pretty swamped
with other tasks for the next few days, but I'd be happy to shepherd a
bugfix patch for this (this should be pretty straightforward and the JIRA
ticket
There is this pull request: https://github.com/apache/spark/pull/5713
We mean to merge it for 1.5. Maybe you can help review it too?
On Mon, Jul 27, 2015 at 11:23 AM, Vyacheslav Baranov
slavik.bara...@gmail.com wrote:
Hi all,
For now it's possible to convert an RDD of a case class to a DataFrame:
Does scoverage work with the Spark build in 2.11? That sounds like a big
win.
On Sun, Jul 26, 2015 at 1:29 PM, Josh Rosen rosenvi...@gmail.com wrote:
Given that 2.11 may be more stringent with respect to warnings, we might
consider building with 2.11 instead of 2.10 in the pull request
I am having the same issue, but the python style checks are failing on the
Jenkins build server. Is anyone else having this problem? Failed build is
here:
https://amplab.cs.berkeley.edu/jenkins/job/SlowSparkPullRequestBuilder/121/console
Pedro Rodriguez
On Mon, Jul 27, 2015 at 7:10 AM, Yu
Should there be any delay in Jenkins using that? I rebased/pushed code to
most recent master after the hotfix commit (and double checked just now),
but the build still fails.
On Mon, Jul 27, 2015 at 1:11 PM, Reynold Xin r...@databricks.com wrote:
I just pushed a hotfix to disable Pylint.
On
I just pushed a hotfix to disable Pylint.
On Mon, Jul 27, 2015 at 1:09 PM, Pedro Rodriguez ski.rodrig...@gmail.com
wrote:
I am having the same issue, but the python style checks are failing on the
Jenkins build server. Is anyone else having this problem? Failed build is
here:
Hello!
Can both methods be compared in terms of performance? I tried the pull request
and it felt slow compared to manual mapping.
Cheers,
Jonathan
On Mon, Jul 27, 2015, 8:51 PM Reynold Xin r...@databricks.com wrote:
There is this pull request: https://github.com/apache/spark/pull/5713
We mean
Hi dev@spark, I wanted to quickly ping about Spree
http://www.hammerlab.org/2015/07/25/spree-58-a-live-updating-web-ui-for-spark/,
a live-updating web UI for Spark that I released on Friday (along with some
supporting infrastructure), and mention a couple things that came up while
I worked on it
Hi all,
I have been developing a custom recovery implementation for Spark masters
and workers using Hazelcast clustering.
In the Spark worker code [1], we see that a list of masters needs to be
provided at worker start-up in order to achieve high availability.
This effectively means that
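For context, in standalone HA mode the full master list is handed to the worker at launch — a sketch with placeholder hostnames; the worker registers with whichever master is the current leader:

```shell
# Sketch: start a worker against two masters (hostnames are placeholders);
# the comma-separated URL lists every master the worker may fail over to.
./sbin/start-slave.sh spark://master1:7077,master2:7077
```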
Hi Yan,
Is it possible to access the HBase table through the Spark SQL JDBC layer?
Thanks.
Deb
On Jul 22, 2015 9:03 PM, Yan Zhou.sc yan.zhou...@huawei.com wrote:
Yes, but not all SQL-standard insert variants.
*From:* Debasish Das [mailto:debasish.da...@gmail.com]
*Sent:* Wednesday, July
HBase in this case is no different from any other Spark SQL data source, so
yes, you should be able to access HBase data through Astro from Spark SQL's JDBC
interface.
Graphically, the access path is as follows:
Spark SQL JDBC Interface -> Spark SQL Parser/Analyzer/Optimizer -> Astro
Optimizer ->