Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52881556
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19036/consoleFull)
for PR 1905 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-52881674
@marmbrus this should be ready to be merged.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52882546
I think we need to modify this file: `sql/hive-thriftserver/pom.xml`
```xml
<dependency>
  <groupId>org.spark-project.hive</groupId>
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52882618
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19037/consoleFull)
for PR 1944 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52882907
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19038/consoleFull)
for PR 1905 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52883408
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19033/consoleFull)
for PR 2035 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-52883816
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19040/consoleFull)
for PR 1290 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2031#issuecomment-52883843
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19034/consoleFull)
for PR 2031 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52884177
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19035/consoleFull)
for PR 2078 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52884390
I pulled the latest master and only updated `sql/hive-thriftserver/pom.xml`
as you suggested; however, both netty-3.2.2.Final.jar and
netty-3.6.6.Final.jar exist
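Duplicate netty jars like this are usually resolved with a Maven `<exclusions>` block on the transitive dependency that drags in the old version. A hedged sketch only; the `hive-exec` artifact and the `org.jboss.netty` coordinates are assumptions, since the truncated comment does not name which dependency pulls in netty 3.2.2:

```xml
<dependency>
  <groupId>org.spark-project.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <exclusions>
    <!-- assumed coordinates for the legacy netty 3.2.2 artifact -->
    <exclusion>
      <groupId>org.jboss.netty</groupId>
      <artifactId>netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree -Dincludes=*netty*` before and after is the usual way to confirm which path each jar arrives through.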
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/2053#issuecomment-52884496
Jenkins, retest this please.
Github user u0jing commented on the pull request:
https://github.com/apache/spark/pull/2034#issuecomment-52884450
@chenghao-intel: Thanks for your advice.
Added the golden answer files.
Github user YanTangZhai commented on the pull request:
https://github.com/apache/spark/pull/2059#issuecomment-52884506
Hi @JoshRosen SparkContext1 creates broadcastManager and initializes
HttpBroadcast object. HttpBroadcast creates httpserver and broadcastDir and so
on. However
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52884684
Sorry for the confusion; I've removed `SQL` from the PR title.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52884754
`./make-distribution.sh -Pyarn -Phadoop-2.3 -Phive-thriftserver -Phive
-Dhadoop.version=2.3.0`.
`./bin/spark-sql --hiveconf hive.root.logger=INFO,console` seems to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2053#issuecomment-52884822
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19041/consoleFull)
for PR 2053 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52884992
Exactly, I ran the same test under hadoop-2.3 and it works well, but not for
hadoop.version=2.0.0-mr1-cdh4.3.0. Probably the Maven exclusion rules are not
well
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52885813
How do you plan to resolve the protobuf version requirement/conflict (akka
2.3 specifically requires protobuf 2.5, while hadoop1 specifically requires 2.4)? I
think
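A conflict like this, where two dependencies each hard-require a different protobuf, is commonly worked around by shading: relocating one copy under a different package with the maven-shade-plugin. A hedged sketch of that general technique, not necessarily what this PR does; the shaded package name here is illustrative:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- move one protobuf copy out of the way of the other -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.spark-project.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The shade plugin rewrites both the relocated classes and the bytecode references to them, so the two protobuf versions can coexist on one classpath.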
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52886252
Also, I have taken the repl port out of this PR; that should make it easier to
review.
There was a bug I noticed in Scala 2.11, where in some tests -
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2070#discussion_r16523887
--- Diff: docs/mllib-dimensionality-reduction.md ---
@@ -119,14 +137,13 @@ statistical method to find a rotation such that the
first coordinate has the lar
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2063#issuecomment-52886441
I've merged this into master and branch-1.1. Thanks!!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52886560
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19042/consoleFull)
for PR 1944 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2063
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52886607
@marmbrus Rebased. Also included the `-S` command line option fix from #1886,
since we should include this in the 1.1 release, and unfortunately #1886 conflicts
with master
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1616#issuecomment-52886900
Thanks a bunch for updating this; this seems like an important fix and I'd
like to try to get it included soon in a release. I'll try my best to review
this tomorrow
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52887418
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19038/consoleFull)
for PR 1905 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-52887465
Can you upload a screenshot of the UI?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52887603
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19037/consoleFull)
for PR 1944 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-52887795
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19040/consoleFull)
for PR 1290 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2053#issuecomment-5284
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19041/consoleFull)
for PR 2053 at commit
Github user bgreeven commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-52889153
I have updated the code. Indeed, the LeastSquaresGradientANN.compute
function was the culprit.
I removed the Breeze instructions and replaced them with simple
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2054#issuecomment-52889531
Updated tests passed.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2034#issuecomment-52889606
`.q` files are input files for the `HiveCompatibilitySuite` test cases.
@chenghao-intel Thanks for the explanation.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1968#issuecomment-52889655
Jenkins, retest this please.
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-52891108
Would you mind adding `[Core]` after `[SPARK-975]` in the PR title?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52892899
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19042/consoleFull)
for PR 1944 at commit
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52893814
@arahuja I've made the modifications. Can you test with the new PR?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52894136
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19043/consoleFull)
for PR 2019 at commit
Github user chutium commented on a diff in the pull request:
https://github.com/apache/spark/pull/1959#discussion_r16527681
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTypes.scala ---
@@ -373,9 +373,11 @@ private[parquet] object ParquetTypesConverter
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52895490
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19045/consoleFull)
for PR 2019 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52896332
I was able to run all the tests locally with `dev/run-tests`. I am not sure
why Jenkins is failing.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1944#issuecomment-52897065
I'll rebase this PR to #1886 after merging this PR.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/2079
[SPARK-2988] - Port repl to scala 2.11.
Why do we need to port the repl to Scala 2.11, especially when the Scala repl
has class-based wrappers?
We need to do this because they are not
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2079#issuecomment-52897341
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19046/consoleFull)
for PR 2079 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52898460
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19043/consoleFull)
for PR 2019 at commit
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52898610
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52899199
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19047/consoleFull)
for PR 2019 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2079#issuecomment-52900079
This patch is a continuation of #1905 and should be merged after it. The
diffs will be simplified once the other PR is merged.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52900626
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19045/consoleFull)
for PR 2019 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2076#discussion_r16530145
--- Diff: core/src/main/scala/org/apache/spark/ui/storage/StorageTab.scala
---
@@ -73,8 +73,9 @@ class StorageListener(storageStatusListener:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2079#issuecomment-52901077
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19048/consoleFull)
for PR 2079 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52901542
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19049/consoleFull)
for PR 1905 at commit
Github user chutium commented on a diff in the pull request:
https://github.com/apache/spark/pull/1959#discussion_r16530668
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTypes.scala ---
@@ -373,9 +373,11 @@ private[parquet] object ParquetTypesConverter
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/2080
[SPARK-2963] REGRESSION - The description about how to build for using CLI
and Thrift JDBC server is absent in proper document -
#1885 resolved documentation issue about building for using CLI and
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2075#issuecomment-52902320
I think this is the same issue already addressed by
https://github.com/apache/spark/pull/1726/files
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2080#issuecomment-52902357
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19050/consoleFull)
for PR 2080 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52904034
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19047/consoleFull)
for PR 2019 at commit
Github user avulanov commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-52905520
@bgreeven In the meantime I did another implementation of a neural network
classifier with an arbitrary number of hidden layers. It uses `GradientDescent` to
train on
Github user tanyatik commented on a diff in the pull request:
https://github.com/apache/spark/pull/2062#discussion_r16532536
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/DriverInfo.scala ---
@@ -33,4 +33,17 @@ private[spark] class DriverInfo(
@transient var
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-52906646
@pwendell Could you take a look? @tdas approves and I think this may
resolve https://github.com/apache/spark/pull/2075 too
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2080#issuecomment-52906753
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19050/consoleFull)
for PR 2080 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1905#issuecomment-52906960
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19049/consoleFull)
for PR 1905 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-52907543
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19051/consoleFull)
for PR 2014 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2079#issuecomment-52907540
**Tests timed out** after a configured wait of `120m`.
Github user uncleGen commented on the pull request:
https://github.com/apache/spark/pull/2076#issuecomment-52908740
@srowen Yes! Not only StorageTab; ExecutorTab may also lose some
RDD infos which have been overwritten by a following RDD in the same task.
StorageTab: when
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1804#issuecomment-52909994
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19052/consoleFull)
for PR 1804 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2079#issuecomment-52910803
**Tests timed out** after a configured wait of `120m`.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-52911412
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19051/consoleFull)
for PR 2014 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2014#issuecomment-52912040
Ha, false positive. It picked up a line of text in `README.md` that
contains a class declaration. Maybe the checker can avoid known non-source
extensions. The unit test
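The suggestion above - having the checker skip known non-source extensions so a class declaration quoted in `README.md` no longer trips it - can be sketched in a few lines. This is a hypothetical filter; the real check lives in Spark's `dev/run-tests` tooling and its rules may differ:

```python
import os

# Hypothetical set of extensions the class-declaration checker should skip;
# the actual checker's exclusion list may differ.
NON_SOURCE_EXTS = {".md", ".txt", ".rst", ".html"}

def is_checkable_source(path):
    """Return True if the file should be scanned for class declarations."""
    return os.path.splitext(path)[1].lower() not in NON_SOURCE_EXTS
```

With such a filter, documentation files are never scanned, so text that merely quotes a class declaration cannot produce a false positive.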
GitHub user darabos opened a pull request:
https://github.com/apache/spark/pull/2081
Add SSDs to block device mapping
On `m3.2xlarge` instances the 2x80GB SSDs are inaccessible if not added to
the block device mapping when the instance is created. They work when added
with this
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2081#issuecomment-52913235
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1804#issuecomment-52914255
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19052/consoleFull)
for PR 1804 at commit
GitHub user chuxi opened a pull request:
https://github.com/apache/spark/pull/2082
SPARK-2096 [SQL]: Correctly parse dot notations for accessing an array of
structs
For example, arrayOfStruct is an array of structs and every element of
this array has a field called field1.
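The accessor syntax in question can be illustrated with a small, hypothetical tokenizer that splits an expression like `arrayOfStruct[0].field1` into its access steps. This is only a sketch of the notation; Spark's actual parsing happens in the Catalyst code this PR modifies:

```python
import re

def parse_field_path(expr):
    """Split an accessor such as 'arrayOfStruct[0].field1' into ordered
    steps: ('field', name) for dot access, ('index', n) for subscripts."""
    steps = []
    for m in re.finditer(r"([A-Za-z_]\w*)|\[(\d+)\]", expr):
        if m.group(1) is not None:
            steps.append(("field", m.group(1)))
        else:
            steps.append(("index", int(m.group(2))))
    return steps

# parse_field_path("arrayOfStruct[0].field1")
# -> [('field', 'arrayOfStruct'), ('index', 0), ('field', 'field1')]
```

Each step can then be resolved against the schema in order: a field lookup on a struct, then an element lookup on the array, then a field lookup on the element struct.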
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2082#issuecomment-52916269
Can one of the admins verify this patch?
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/2020#issuecomment-52917400
@vanzin Yes, I believe you are correct; in most cases it just reports success.
We really need to bring the client up to par, but I was hoping to move
it to an unmanaged AM so
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1958#issuecomment-52917580
I see @pwendell temporarily removed the test:
https://github.com/apache/spark/commit/1d5e84a99076d3e0168dd2f4626c7911e7ba49e7#diff-d41d8cd98f00b204e9800998ecf8427e
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-52918929
Unless the user tries addJar, this should not be relevant to YARN modes,
Regards,
Mridul
On Thu, Aug 21, 2014 at 5:37 AM, Reynold Xin
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52920194
You can potentially have NPEs in some of these proposed log message
changes.
The reason to keep the information minimal was to ensure we get at least
some
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16539291
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/json/JsonSuite.scala
---
@@ -292,24 +292,29 @@ class JsonSuite extends QueryTest {
Github user adrian-wang commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16539375
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -108,6 +109,8 @@ abstract class LogicalPlan
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-52922473
I tested, but compile failed.
Github user rnowling commented on the pull request:
https://github.com/apache/spark/pull/1964#issuecomment-52924889
@erikerlandson Breeze uses non-JVM data structures so it can use BLAS. The
malloc calls and copying could be expensive and lead to 2-3x higher memory usage.
This
Github user chuxi commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16540807
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/json/JsonSuite.scala
---
@@ -292,24 +292,29 @@ class JsonSuite extends QueryTest {
sql(select
Github user chuxi commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16540895
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -108,6 +109,8 @@ abstract class LogicalPlan extends
Github user chuxi commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16541208
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/json/JsonSuite.scala
---
@@ -292,24 +292,29 @@ class JsonSuite extends QueryTest {
sql(select
Github user chuxi commented on a diff in the pull request:
https://github.com/apache/spark/pull/2082#discussion_r16540937
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -108,6 +109,8 @@ abstract class LogicalPlan extends
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1726#discussion_r16541517
--- Diff: external/flume-sink/pom.xml ---
@@ -65,12 +66,9 @@
</exclusions>
</dependency>
<dependency>
-
Github user tanyatik commented on a diff in the pull request:
https://github.com/apache/spark/pull/2062#discussion_r16541679
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/DriverInfo.scala ---
@@ -33,4 +33,17 @@ private[spark] class DriverInfo(
@transient var
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2075#issuecomment-52927703
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19053/consoleFull)
for PR 2075 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52932355
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19054/consoleFull)
for PR 2078 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52932490
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19054/consoleFull)
for PR 2078 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52933110
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19055/consoleFull)
for PR 2019 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52933277
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19055/consoleFull)
for PR 2019 at commit
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52933408
@mridulm I don't think so. It's not too noisy to print which host:port a key
is related to.
Printing only the key (`.toString()`) is rather noisy because it's not
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-52934582
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19057/consoleFull)
for PR 2019 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-52934588
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19056/consoleFull)
for PR 2078 at commit
Github user dnprock commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-52934791
Attached a screenshot.
![screen shot 2014-08-21 at 8 16 01
am](https://cloud.githubusercontent.com/assets/497205/3998201/27896df0-2946-11e4-9ea3-d4e4f40928c1.png)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2075#issuecomment-52936126
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19053/consoleFull)
for PR 2075 at commit
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1684#issuecomment-52936217
Sorry, I haven't had time to get back to this. Originally there were some
issues with the ordering and with making sure all the options worked properly from all
the various ways
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2083
[WIP][SPARK-3098] In some cases, the result of RDD.distinct is inconsistent
cc @srowen
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark