GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/1885
[SPARK-2963]
...
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sarutak/spark SPARK-2963
Alternatively you can review and apply these changes
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51745369
QA tests have started for PR 1885. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18301/consoleFull
---
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1467#issuecomment-51745751
O.K. I got it.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/1467
---
Github user tianyi commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51746115
I think the build details have already been added in
https://github.com/apache/spark/blob/master/docs/sql-programming-guide.md
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1256#issuecomment-51746411
QA tests have started for PR 1256. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18302/consoleFull
---
Github user DannyGuoHT commented on the pull request:
https://github.com/apache/spark/pull/495#issuecomment-51747013
I don't get how this patch can resolve this issue, because it just
changes the host to 0.0.0.0.
---
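For background on the 0.0.0.0 change questioned above: binding a server socket to 0.0.0.0 (the wildcard address) makes it listen on all local interfaces rather than on a single hostname. A generic Python sketch, not the Spark patch itself:

```python
import socket

# Generic illustration (not the Spark patch): a socket bound to 0.0.0.0
# accepts connections on every local interface, whereas binding to a
# specific hostname restricts it to that interface's address.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))       # port 0: let the OS pick a free port
host, port = s.getsockname()
print(host, port)            # host is the wildcard address "0.0.0.0"
s.close()
```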
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51747945
Hi @tianyi .
I know that document, but it is outdated. We can no longer use the -Phive
option alone to use ThriftServer. In the master branch, that moved into a
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/1886
[SPARK-2964] [SQL] Wrong silent option in spark-sql script
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sarutak/spark SPARK-2964
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51748196
QA results for PR 1885:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-51748453
QA tests have started for PR 1886. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18303/consoleFull
---
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1880#discussion_r16039955
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/columnar/InMemoryColumnarTableScan.scala
---
@@ -90,22 +101,31 @@ private[sql] case class
Github user tianyi commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51748813
@sarutak the latest sql-programming-guide.md already includes the
-Phive-thriftserver option
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1256#issuecomment-51749415
QA results for PR 1256:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
class
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51749549
Oh, master has been updated.
But, as I mentioned, it's not friendly for builders.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1884#discussion_r16040902
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -744,13 +744,21 @@ private[spark] class ExternalSorter[K, V, C](
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1884#discussion_r16040916
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -744,13 +744,21 @@ private[spark] class ExternalSorter[K, V, C](
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-51751337
QA results for PR 1886:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user colorant commented on a diff in the pull request:
https://github.com/apache/spark/pull/1884#discussion_r16043137
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -744,13 +744,21 @@ private[spark] class ExternalSorter[K, V, C](
GitHub user ueshin opened a pull request:
https://github.com/apache/spark/pull/1887
[SPARK-2965][SQL] Fix HashOuterJoin output nullabilities.
Output attributes of opposite side of `OuterJoin` should be nullable.
You can merge this pull request into a Git repository by running:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1887#issuecomment-51756600
QA tests have started for PR 1887. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18304/consoleFull
---
Github user nrchandan commented on the pull request:
https://github.com/apache/spark/pull/1787#issuecomment-51759498
@srowen The histogram method needs the steps to be doubles too. Your
approach (using a custom function vs the built-in Range) should work. I will
give it a try tomorrow.
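The point about double steps can be illustrated with a small sketch (a hypothetical helper, not the actual patch): an integer Range cannot express a fractional step such as (max - min) / buckets, so histogram bucket boundaries have to be computed with doubles.

```python
# Hypothetical illustration, not the PR code: histogram bucket boundaries
# need a fractional (double) step, which an integer Range cannot represent.
def bucket_boundaries(min_val, max_val, buckets):
    """Return buckets + 1 evenly spaced floating-point boundaries."""
    step = (max_val - min_val) / buckets
    return [min_val + i * step for i in range(buckets + 1)]

print(bucket_boundaries(0.0, 1.0, 4))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```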
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-51760298
I think a much better solution is to modify the spark-sql script to use utils.sh.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-51760458
QA tests have started for PR 1886. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18305/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1887#issuecomment-51763025
QA results for PR 1887:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1888#issuecomment-51763060
QA tests have started for PR 1888. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18306/consoleFull
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1877#discussion_r16045934
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1024,31 +1024,33 @@ class DAGScheduler(
case
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1877#discussion_r16046073
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1024,31 +1024,33 @@ class DAGScheduler(
case
Github user guowei2 commented on the pull request:
https://github.com/apache/spark/pull/1822#issuecomment-51764097
Thank you for your suggestion, it truly encourages me. I'll do my best to
fix it up.
---
GitHub user ueshin opened a pull request:
https://github.com/apache/spark/pull/1889
[SPARK-2969][SQL] Make ScalaReflection be able to handle
MapType.containsNull and MapType.valueContainsNull.
Make `ScalaReflection` able to handle types like:
- `Seq[Int]` as
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1889#issuecomment-51766557
QA tests have started for PR 1889. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18307/consoleFull
---
GitHub user GrahamDennis opened a pull request:
https://github.com/apache/spark/pull/1890
[SPARK-2878]: Fix custom spark.kryo.registrator
This is a work-in-progress, and I'm looking for feedback on my current
approach. My aim here is to add the user jars specified in SparkConf
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-51767200
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1888#issuecomment-51769120
QA results for PR 1888:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/1891
[SPARK-2970] [SQL] spark-sql script ends with IOException when EventLogging
is enabled
You can merge this pull request into a Git repository by running:
$ git pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51771105
QA tests have started for PR 1891. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18308/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51771163
QA results for PR 1891:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1889#issuecomment-51771668
QA results for PR 1889:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51775233
QA tests have started for PR 1891. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18309/consoleFull
---
Github user hunggpham commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-51779387
Hi Bert,
I want to try your ANN on Spark but could not find it in the latest clone.
It's probably not there yet despite the successful tests and merge
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1880#discussion_r16052399
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/columnar/InMemoryColumnarTableScan.scala
---
@@ -90,22 +101,31 @@ private[sql] case class
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16053704
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizer.scala
---
@@ -0,0 +1,276 @@
+/*
+ * Licensed to
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16053868
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerModel.scala
---
@@ -0,0 +1,82 @@
+/*
+ * Licensed
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16054011
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ * Licensed
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16053983
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ * Licensed
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16054027
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ * Licensed
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16054017
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ * Licensed
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51783932
QA results for PR 1891:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/216#discussion_r16053912
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/discretization/EntropyMinimizationDiscretizerModel.scala
---
@@ -0,0 +1,82 @@
+/*
+ * Licensed
Github user avulanov commented on the pull request:
https://github.com/apache/spark/pull/216#issuecomment-51784545
@mengxr I've tested the code on a few examples after making it compatible
with the current version of `LabeledPoint`. It seems to work and produce
results similar to what
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1851#issuecomment-51784944
@scwf Bash escaping is really annoying... Thanks for spotting and fixing
this issue! This PR fixed the `--driver-java-options` option, but any other
options that may
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/1892
[SPARK-1777 (partial)] bugfix: make size of requested memory correctly
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1892#issuecomment-51788524
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1889#issuecomment-51788777
QA tests have started for PR 1889. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18310/consoleFull
---
Github user critikaled commented on the pull request:
https://github.com/apache/spark/pull/1372#issuecomment-51789142
It's just annoying, but it's OK; I have built Spark from source and am using
it as an external lib. BTW, what would be the approximate release date for
1.1? I was reading about it in
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1816#issuecomment-51790052
Also, if the patch does merge cleanly, would it make more sense to let
contributors know at the start of the test cycle that they are adding new public
classes, as opposed
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51795351
@sarutak Just checked the SQL programming guide in master; as @tianyi said,
Thrift server and CLI related contents were both added. But we didn't mention
`sbt
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51796416
@liancheng Thanks for your reply!
"Not friendly" means the description of where -Phive-thriftserver
is needed is not good.
Currently, the description is
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-51796460
Hung,
You can merge the repository into your Spark fork and you should be able to
see the code.
---
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-51798466
SteepestDescend should be SteepestDescent!
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1886#discussion_r16061170
--- Diff: bin/spark-sql ---
@@ -53,42 +57,26 @@ function ensure_arg_number {
fi
}
-if [[ $@ = --help ]] || [[ $@ = -h ]]; then
+if
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1886#discussion_r16061353
--- Diff: bin/spark-sql ---
@@ -53,42 +57,26 @@ function ensure_arg_number {
fi
}
-if [[ $@ = --help ]] || [[ $@ = -h ]]; then
+if
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1886#discussion_r16061186
--- Diff: bin/spark-sql ---
@@ -53,42 +57,26 @@ function ensure_arg_number {
fi
}
-if [[ $@ = --help ]] || [[ $@ = -h ]]; then
+if
Github user roji commented on the pull request:
https://github.com/apache/spark/pull/1875#issuecomment-51799530
Have just noticed the additional scripts under sbin, which also require
treatment. Just before I go ahead and work on that, can you confirm this is a
desirable PR?
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51799557
Ah OK, fair enough. I agree with you.
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-5178
`bin/spark-sql` and `sbin/start-thriftserver.sh` were both added earlier and
didn't get updated when `bin/utils.sh` was added. Thanks for fixing both this
issue and
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1891#issuecomment-51801303
LGTM, thanks!
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1889#issuecomment-51801368
QA results for PR 1889:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes

For more information see test
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51802147
Then I guess we should add `-Phive` and `-Phive-thriftserver` related
instructions to both our main README file and `building-with-maven.md`.
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-51805750
FWIW I think this is already what happens in YARN, as we use Hadoop's
distributed cache to send out the jars and include them on the executor
classpath at startup.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1877#issuecomment-51806432
It takes some time to add a test for this.
---
Github user erikerlandson commented on the pull request:
https://github.com/apache/spark/pull/1839#issuecomment-51806430
Assuming this is correct, "okay" is not the same as "ok":
the following regex checks for that: `.*ok\W+to\W+test.*`
So I think you should be able to use it in a
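A quick check of the quoted trigger pattern (a sketch; the actual Jenkins trigger configuration is not shown in this thread):

```python
import re

# The pattern quoted above: "ok", "to", and "test" separated by one or
# more non-word characters, anywhere in the comment.
pattern = re.compile(r'.*ok\W+to\W+test.*')

print(bool(pattern.match("Jenkins, ok to test")))  # True
# "okay to test" does not match: the "ay" after "ok" is made of word
# characters, so \W+ cannot match there.
print(bool(pattern.match("okay to test")))         # False
```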
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51807938
Ah, you're right.
Now I've modified building-with-maven.md. Thanks!
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51808354
QA tests have started for PR 1885. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18311/consoleFull
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1813#discussion_r16065213
--- Diff: core/src/main/java/com/google/common/base/Optional.java ---
@@ -0,0 +1,243 @@
+/*
+ * Copyright (C) 2011 The Guava Authors
--- End diff
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/1886#discussion_r16065658
--- Diff: bin/spark-sql ---
@@ -53,42 +57,26 @@ function ensure_arg_number {
fi
}
-if [[ $@ = --help ]] || [[ $@ = -h ]]; then
+if
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1759#issuecomment-51810035
I was planning to query the database or file at compile time for these
sorts of data sources. While you are right that this is less `deterministic`,
it's not clear to
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1760#discussion_r16066159
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TestHive.scala
---
@@ -60,6 +60,8 @@ class TestHiveContext(sc: SparkContext) extends
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1760#discussion_r16066399
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TestHive.scala
---
@@ -70,6 +72,17 @@ class TestHiveContext(sc: SparkContext) extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1893#issuecomment-51811506
Can one of the admins verify this patch?
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/1893
SPARK-1297 Upgrade HBase dependency to 0.98
Two profiles are added to examples/pom.xml:
hbase-hadoop1 (default)
hbase-hadoop2
I verified that compilation passes with either profile
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066774
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066804
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066911
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066899
--- Diff: examples/pom.xml ---
@@ -110,36 +143,52 @@
<version>${project.version}</version>
</dependency>
<dependency>
-
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066969
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16067104
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1759#issuecomment-51812566
I see. I think querying Hive at compile time will be tricky (since it may
be behind all kinds of firewalls, etc.). But I guess we can start with the
current approach, as
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16067144
--- Diff: examples/pom.xml ---
@@ -110,36 +143,52 @@
<version>${project.version}</version>
</dependency>
<dependency>
-
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1885#discussion_r16067265
--- Diff: README.md ---
@@ -115,6 +115,15 @@ If your project is built with Maven, add this to your
POM file's `dependencies
</dependency>
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1813#discussion_r16067286
--- Diff: core/src/main/java/com/google/common/base/Optional.java ---
@@ -0,0 +1,243 @@
+/*
+ * Copyright (C) 2011 The Guava Authors
--- End diff
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1885#discussion_r16067388
--- Diff: README.md ---
@@ -115,6 +115,15 @@ If your project is built with Maven, add this to your
POM file's `dependencies
</dependency>
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1874#discussion_r16067428
--- Diff: python/pyspark/tests.py ---
@@ -905,8 +911,9 @@ def createFileInZip(self, name, content):
pattern = re.compile(r'^ *\|', re.MULTILINE)
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/1885#discussion_r16067441
--- Diff: README.md ---
@@ -115,6 +115,15 @@ If your project is built with Maven, add this to your
POM file's `dependencies
</dependency>
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1813#discussion_r16067467
--- Diff: core/src/main/java/com/google/common/base/Optional.java ---
@@ -0,0 +1,243 @@
+/*
+ * Copyright (C) 2011 The Guava Authors
--- End diff
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1813#discussion_r16067603
--- Diff: core/src/main/java/com/google/common/base/Optional.java ---
@@ -0,0 +1,243 @@
+/*
+ * Copyright (C) 2011 The Guava Authors
--- End diff
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1885#issuecomment-51813564
Since I'm not a native speaker either, we'd better wait for a native
speaker to have a look. Otherwise LGTM, thanks!
---
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1856#issuecomment-51814332
Jenkins, retest this please.
---
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-51814876
@liancheng Thanks for reviewing!
I've just fixed the coding style and removed useless code.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1856#issuecomment-51815059
QA tests have started for PR 1856. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18313/consoleFull
---