Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51879912
QA results for PR 1891:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1902#issuecomment-51879950
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enab
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1900#issuecomment-51879862
QA results for PR 1900:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
GitHub user larryxiao opened a pull request:
https://github.com/apache/spark/pull/1902
[SPARK-2981][GraphX] EdgePartition1D Int overflow
minor fix
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/larryxiao/spark 2981
Alternatively
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51878493
QA results for PR 1760:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user Ishiihara commented on the pull request:
https://github.com/apache/spark/pull/1871#issuecomment-51878228
@mateiz The performance of PrimitiveKeyOpenHashMap is on par with
mutable.HashMap. For the one-partition case, PrimitiveKeyOpenHashMap is
slightly faster than using big
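For context on the comparison above: `PrimitiveKeyOpenHashMap` is an open-addressing hash table specialized for primitive keys. As a rough illustration of the open-addressing technique only (this is not Spark's actual implementation, and Python cannot avoid boxing the way the Scala class does), a minimal linear-probing map looks like this:

```python
class OpenHashMap:
    """Minimal open-addressing hash map with linear probing.

    Illustrative sketch only. Spark's PrimitiveKeyOpenHashMap additionally
    specializes key/value storage for primitive types to avoid boxing.
    None is used as the empty-slot marker, so None itself cannot be a key.
    """

    def __init__(self, capacity=16):
        self._keys = [None] * capacity    # None marks an empty slot
        self._values = [None] * capacity
        self._size = 0

    def _slot(self, key):
        # Probe linearly from the hash position until we find the key
        # or an empty slot.
        i = hash(key) % len(self._keys)
        while self._keys[i] is not None and self._keys[i] != key:
            i = (i + 1) % len(self._keys)
        return i

    def put(self, key, value):
        # Grow before the load factor exceeds 0.5 so probes stay short.
        if (self._size + 1) * 2 > len(self._keys):
            self._grow()
        i = self._slot(key)
        if self._keys[i] is None:
            self._size += 1
        self._keys[i] = key
        self._values[i] = value

    def get(self, key, default=None):
        i = self._slot(key)
        return self._values[i] if self._keys[i] == key else default

    def _grow(self):
        # Rehash all live entries into a table of twice the capacity.
        old = [(k, v) for k, v in zip(self._keys, self._values) if k is not None]
        self._keys = [None] * (len(self._keys) * 2)
        self._values = [None] * len(self._keys)
        self._size = 0
        for k, v in old:
            self.put(k, v)
```

The performance win in the Scala version comes from flat primitive arrays (no per-entry node objects, no boxing), which is why it can match or beat `mutable.HashMap`.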
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1889#issuecomment-51877969
QA tests have started for PR 1889. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18363/consoleFull
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16097181
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueContain
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1901#issuecomment-51877331
QA tests have started for PR 1901. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18362/consoleFull
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16097043
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueConta
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1880#issuecomment-51877140
Opened #1901 for precise initial buffer size estimation.
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/1901
[SPARK-2650][SQL] More precise initial buffer size estimation for in-memory
column buffer
This is a follow up of #1880.
Since the row number within a single batch is known, we can estimat
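The PR description is truncated above, but the stated idea is that a known per-batch row count lets the initial buffer be sized up front instead of starting from a fixed default and reallocating. A hypothetical sketch of that arithmetic (the type names and byte sizes below are assumptions for illustration, not Spark's actual `ColumnType` defaults):

```python
# Hypothetical per-type size estimates in bytes; Spark's actual
# ColumnType default sizes may differ.
DEFAULT_SIZE = {"int": 4, "long": 8, "double": 8, "string": 32}

def initial_buffer_size(column_type, rows_per_batch):
    """Estimate the initial buffer size (bytes) for one in-memory
    column batch, given a known row count per batch."""
    return DEFAULT_SIZE[column_type] * rows_per_batch

# With the row count known, one allocation of the right size replaces
# a sequence of grow-and-copy reallocations from a small default.
```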
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1900#issuecomment-51876764
QA tests have started for PR 1900. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18361/consoleFull
GitHub user Ishiihara opened a pull request:
https://github.com/apache/spark/pull/1900
[MLlib] Correctly set vectorSize and alpha
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/Ishiihara/spark Word2Vec-bugfix
Alternatively you
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16096763
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueContain
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51876493
Be aware that the `udf_unix_timestamp` case is timezone sensitive. That's
why we reset the timezone to "America/Los_Angeles" in `beforeAll`. This may be
related to your tes
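The timezone sensitivity liancheng describes is easy to reproduce outside Spark: parsing the same wall-clock string yields different epoch values depending on the process timezone, which is why tests pin a timezone in setup. A small Python sketch (using `time.tzset`, which is Unix-only):

```python
import os
import time

def unix_timestamp(s, fmt="%Y-%m-%d %H:%M:%S"):
    # Interprets the wall-clock string in the *process-local* timezone,
    # which is exactly why the result varies across environments.
    return int(time.mktime(time.strptime(s, fmt)))

def with_tz(tz, f):
    """Run f() with the TZ environment variable temporarily set,
    mirroring what a test suite's beforeAll/afterAll would do."""
    old = os.environ.get("TZ")
    os.environ["TZ"] = tz
    time.tzset()
    try:
        return f()
    finally:
        if old is None:
            del os.environ["TZ"]
        else:
            os.environ["TZ"] = old
        time.tzset()

# The same string, two different epochs: in August, Los Angeles is on
# PDT (UTC-7), so midnight there is 07:00 UTC.
la = with_tz("America/Los_Angeles",
             lambda: unix_timestamp("2014-08-12 00:00:00"))
utc = with_tz("UTC",
              lambda: unix_timestamp("2014-08-12 00:00:00"))
```

Pinning the timezone once in test setup (as the Spark suite does with `America/Los_Angeles` in `beforeAll`) makes such cases deterministic across machines.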
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51876017
QA tests have started for PR 1891. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18360/consoleFull
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51875922
test this please, let's see if he likes me :)
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1849
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1849#issuecomment-51875312
Thanks! I've merged this into both master and branch-1.1.
Github user ash211 closed the pull request at:
https://github.com/apache/spark/pull/1850
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/1850#issuecomment-51875295
Superseded by Graham's better fix here:
https://github.com/apache/spark/pull/1890
Github user bgreeven commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-51875281
SteepestDescend -> SteepestDescent can be changed. Thanks for noticing.
Hung Pham, did it work out for you now?
Github user tianyi commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51875134
I'm sorry for forgetting to run the test yesterday.
This time, I passed all the tests on my laptop except the udf_unix_timestamp
function; I guess it should be an environment pr
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51875106
QA tests have started for PR 1760. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18359/consoleFull
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51874701
Embarrassing... I don't know why Jenkins doesn't respond to me; he used to :(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1897#issuecomment-51873603
QA results for PR 1897:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16095492
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -22,6 +22,7 @@ import java.nio._
import java.nio.channels._
impo
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51873138
@sarutak You are right that using poll wouldn't clear up the internal state
in ConnectionManager. I think @JoshRosen 's idea of using a shared timer pool
or re-using som
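The "shared timer pool" idea mentioned above (re-using one timer to service many timeouts instead of creating a timer per request) can be sketched as follows. This is an illustrative Python version under assumed semantics, not the ConnectionManager code:

```python
import heapq
import threading
import time

class SharedTimer:
    """One background thread services timeouts for many requests.

    Sketch of the shared-timer idea: instead of a new timer per request,
    all deadlines go into a single heap drained by one worker thread.
    """

    def __init__(self):
        self._heap = []          # entries: (deadline, seq, callback)
        self._seq = 0            # tie-breaker so callbacks never compare
        self._cv = threading.Condition()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def schedule(self, delay, callback):
        """Register callback to fire after delay seconds."""
        with self._cv:
            heapq.heappush(
                self._heap, (time.monotonic() + delay, self._seq, callback))
            self._seq += 1
            self._cv.notify()

    def _run(self):
        while True:
            with self._cv:
                while not self._heap:
                    self._cv.wait()
                deadline, _, cb = self._heap[0]
                now = time.monotonic()
                if deadline > now:
                    # Sleep until the earliest deadline or a new entry.
                    self._cv.wait(deadline - now)
                    continue
                heapq.heappop(self._heap)
            cb()  # run callbacks outside the lock
```

In the ConnectionManager setting, each callback would fail the corresponding pending message's promise with a timeout, leaving per-request state cleanup to that callback.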
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16095425
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -72,6 +73,7 @@ private[spark] class ConnectionManager(
// d
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16095403
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -72,6 +73,7 @@ private[spark] class ConnectionManager(
// def
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1897#issuecomment-51872737
QA results for PR 1897:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51872672
@JoshRosen Thanks!
I'll try it.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51872227
@sarutak I left updates on a couple of my earlier comments. This solution
can work and I have a few suggestions for minor cleanup (e.g. re-using a Timer).
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16095159
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -652,19 +655,25 @@ private[spark] class ConnectionManager(
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16094973
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -836,9 +845,14 @@ private[spark] class ConnectionManager(
def s
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r16094928
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -836,9 +845,14 @@ private[spark] class ConnectionManager(
def s
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094838
--- Diff: examples/src/main/python/mllib/statistical_summary.py ---
@@ -0,0 +1,60 @@
+#
--- End diff --
`correlations.py` for `pearson` and `sp
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094850
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -160,6 +161,15 @@ def squared_distance(self, other):
j += 1
return result
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094851
--- Diff: python/pyspark/mllib/linalg.py ---
@@ -160,6 +161,15 @@ def squared_distance(self, other):
j += 1
return result
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094844
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/RandomAndSampledRDDs.scala
---
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Soft
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094836
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094835
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094834
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094832
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094831
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1878#discussion_r16094833
--- Diff: examples/src/main/python/mllib/random_and_sampled_rdds.py ---
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under on
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1897#issuecomment-51871303
QA tests have started for PR 1897. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18358/consoleFull
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51870702
Thanks @marmbrus !
The main issue I mention in this ticket is that how to build in order to use
the CLI / Thrift JDBC server is not documented in the proper place.
As you said, ex
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16094479
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueConta
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1765
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1765#issuecomment-51870399
I went ahead and merged this since it improves perf and my only comments
were cosmetic. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1897#issuecomment-51870344
QA tests have started for PR 1897. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18356/consoleFull
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1765#discussion_r16094402
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins.scala ---
@@ -170,6 +164,9 @@ case class HashOuterJoin(
def output = left
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16094328
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueContain
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51869764
QA results for PR 1760:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51869746
Thanks for adding this! One suggestion: instead of just adding
documentation about the thrift server we should probably make these general
sections about Spark SQL's Hi
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51869591
QA tests have started for PR 1760. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18355/consoleFull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51869583
QA tests have started for PR 1851. This patch merges cleanly. View
progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18354/consoleFull
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1760#issuecomment-51869442
ok to test
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1889#discussion_r16094101
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -372,7 +372,7 @@ object MapType {
* The `valueConta
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51869386
ok to test
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51869368
add to whitelist
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1880#issuecomment-51869212
Merged to master and 1.1
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1880
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1880#issuecomment-51869125
@liancheng, thanks for reviewing! Would you mind creating a JIRA/followup
PR to set the defaults correctly as you propose?
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1888
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1888#issuecomment-51869014
Thanks! I've merged to master and 1.1
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1887#issuecomment-51868903
Thanks! I've merged this into master and 1.1.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1887
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/1849#issuecomment-51868853
Yes, LGTM
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1846#discussion_r16093801
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateTableAsSelect.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1881
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51868710
I think the current solution is better. `LinkedBlockingQueue.poll` would
bring a lot of problems.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1852
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1881#issuecomment-51868653
Thanks! I've merged to master and 1.1
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1852#issuecomment-51868595
Thanks! I've merged this to master and 1.1
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1853
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1853#issuecomment-51868479
Thanks! I've merged this to master and 1.1
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1768
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1768#issuecomment-51868430
Thanks! I've merged to master and 1.1
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51868406
Jenkins, retest this please.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51868341
O.K. I'll try to resolve this using poll somehow.
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51867803
The reason I didn't use Await.ready and Await.result is that those are
blocking methods. The current approach, which uses an onComplete callback, is non-blocking.
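The distinction sarutak draws, a blocking wait versus a completion callback, maps onto most futures APIs. A Python sketch of both styles using `concurrent.futures`, purely for illustration of the idea (the actual PR uses Scala's `Await` and `onComplete`):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

pool = ThreadPoolExecutor(max_workers=1)

def work():
    return 42

# Blocking style (analogous to Await.result): the calling thread
# stops here until the result is ready or the timeout expires.
fut = pool.submit(work)
blocking_result = fut.result(timeout=5)

# Non-blocking style (analogous to onComplete): register a callback
# and return immediately; the callback runs when the future completes.
done = threading.Event()
results = []

def on_complete(f):
    results.append(f.result())
    done.set()

fut2 = pool.submit(work)
fut2.add_done_callback(on_complete)
done.wait(5)
```

The callback style keeps the event-loop thread free, which is the property sarutak wants to preserve in ConnectionManager.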
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51867662
Hi @shivaram , @JoshRosen
At first, I had the idea of using poll; I thought it was the easy way.
But if we use poll and catch TimeoutException, I think
Connect
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1862#issuecomment-51867619
LGTM. Merged into both master and branch-1.1. (This is a much better
algorithm than SGD and we have tested it since v1.0.) Thanks @dbtsai for adding
it!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1862
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1899#issuecomment-51867038
Why is it useful to have the cluster name be different from the security
group prefix? If I want to re-use an existing security group, I can just name
my cluster after
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1816#issuecomment-51866562
@pwendell @rxin - After lots of futzing around and some gold plating, I can
say this PR is ready for another review. Apologies for how long this took.
Highlights
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-51866549
@shivaram That's a really good suggestion. I'll try to write a failing
unit test that directly uses BasicBlockFetcherIterator so that we can test your
approach.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1849#issuecomment-51866552
We can create a binary search version in Python in another PR. For this PR,
does it look good to you?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1898#issuecomment-51866410
QA results for PR 1898:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test output: https://amplab.c
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1816#issuecomment-51866292
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18350/consoleFull)
for PR 1816 at commit
[`c1be644`](https://github.com/a
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1733
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51866184
updated
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1733#issuecomment-51866151
LGTM. Merged into both master and branch-1.1. Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1896#issuecomment-51865757
I've merged this into `master` and `branch-1.1`.
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51865781
Yeah, my mistake. We should also use SUBMISSION_ARGS+=("$1") instead of
SUBMISSION_ARGS+=($1); then it will be OK.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1896
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1896#issuecomment-51865679
@mridulm To address your earlier comment, the resulting
`currentLocalityIndex` will always be valid because
computeCurrentLocalityLevels() always returns an array conta