Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/22356
Thanks for taking my code. Looks good.
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/21638
Here is the test code; I am not sure whether it is right:
```
test("Number of partitions") {
sc = new SparkContext(new
SparkConf().setAppName("test").setMaster("
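// A hedged reconstruction of how this truncated snippet might read in full;
// the master URL, input path, and assertion are illustrative assumptions,
// not the author's original code.
test("Number of partitions") {
  sc = new SparkContext(new SparkConf().setAppName("test").setMaster("local[4]"))
  val dir = "/tmp/binary-files"                      // hypothetical input directory
  val rdd = sc.binaryFiles(dir, minPartitions = 50)  // SPARK-22357: minPartitions should be honored
  assert(rdd.getNumPartitions >= 50)
}
```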
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/21638#discussion_r215022562
--- Diff:
core/src/main/scala/org/apache/spark/input/PortableDataStream.scala ---
@@ -47,7 +47,7 @@ private[spark] abstract class StreamFileInputFormat[T
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/21638#discussion_r215010040
--- Diff:
core/src/main/scala/org/apache/spark/input/PortableDataStream.scala ---
@@ -47,7 +47,7 @@ private[spark] abstract class StreamFileInputFormat[T
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/22276
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/22276
Ok, closing
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/22276
The tests failed due to the method signature change, but it should not affect
the existing test cases or existing usages.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/22276
[SPARK-25242][SQL] make sql config setting fluent
## What changes were proposed in this pull request?
Users can now set the conf more easily by doing this:
```
sparkSession.conf.set
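// A hedged sketch of the fluent style the PR title suggests, assuming set(...)
// returns the config object so calls can be chained; keys and values are illustrative.
sparkSession.conf
  .set("spark.sql.shuffle.partitions", "100")
  .set("spark.sql.autoBroadcastJoinThreshold", "-1")
```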
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/22127
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/22127
Good points. I will leave it open for any suggestions on improving the
user experience.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/22127
[SPARK-25032][SQL] fix drop database issue
## What changes were proposed in this pull request?
When a user tries to drop the current database (other than the default database),
after the database
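A hedged sketch of the scenario the (truncated) description points at, assuming the problem concerns dropping the database currently in use; the database name is made up:
```
// Illustrative repro only.
spark.sql("CREATE DATABASE temp_db")
spark.sql("USE temp_db")
spark.sql("DROP DATABASE temp_db")
// Commands that resolve against the current database, e.g. spark.sql("SHOW TABLES"),
// now point at a database that no longer exists.
```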
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/22115
I have already done a global search. That is the only place that needs the change.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/22115
[SPARK-25082] [SQL] improve the javadoc for expm1()
## What changes were proposed in this pull request?
Correct the javadoc for the expm1() function.
## How was this patch tested?
None
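For background (not taken from the PR, whose description is truncated): expm1(x) computes exp(x) - 1 in a way that stays accurate when x is near zero, where evaluating exp(x) - 1 directly loses the result to cancellation. A quick Scala illustration:
```
// For tiny x, exp(x) rounds to 1.0, so exp(x) - 1 collapses to 0.0,
// while expm1(x) preserves the small result.
math.exp(1e-20) - 1   // 0.0
math.expm1(1e-20)     // 1.0E-20
```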
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/21638#discussion_r204517923
--- Diff:
core/src/main/scala/org/apache/spark/input/PortableDataStream.scala ---
@@ -47,7 +47,7 @@ private[spark] abstract class StreamFileInputFormat[T
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/21638
Either way works for me, but since this is not a private method, people may use
it in their own code, so the minimal change would be best.
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/21638#discussion_r202907829
--- Diff:
core/src/main/scala/org/apache/spark/input/PortableDataStream.scala ---
@@ -47,7 +47,7 @@ private[spark] abstract class StreamFileInputFormat[T
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/21638
@HyukjinKwon please review. thanks.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/21638
[SPARK-22357][CORE] SparkContext.binaryFiles ignore minPartitions parameter
## What changes were proposed in this pull request?
Fix the issue that minPartitions was not used in the method
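A hedged sketch of the kind of change the title implies, inside StreamFileInputFormat.setMinPartitions (the file shown in the diff comments above); the exact before/after code is an assumption, not quoted from the PR:
```
// Illustrative only; assumes the surrounding CombineFileInputFormat subclass and
// import scala.collection.JavaConverters._
def setMinPartitions(sc: SparkContext, context: JobContext, minPartitions: Int): Unit = {
  val totalLen = listStatus(context).asScala.filterNot(_.isDirectory).map(_.getLen).sum
  // Size splits from the requested minPartitions instead of ignoring the parameter.
  val maxSplitSize = math.ceil(totalLen / math.max(minPartitions, 1).toDouble).toLong
  super.setMaxSplitSize(maxSplitSize)
}
```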
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/19614
I will fix the style shortly.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/19614
update the location of reference paper
## What changes were proposed in this pull request?
Update the URL of the reference paper.
## How was this patch tested?
It is comments, so
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/17470
[SPARK-20146][SQL] fix comment missing issue for thrift server
## What changes were proposed in this pull request?
The column comment was missing while constructing the Hive TableSchema
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13720
@cloud-fan please review again, thanks.
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13720
OK, I will work on it based on the comments. Thanks.
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13720
@cloud-fan Is this one worth fixing?
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/12739
Closing this PR, as it was fixed by another PR.
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/12739
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13140
I do not know what happened to Jenkins; the failure looks unrelated.
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13140
@cloud-fan thanks for your concise code!
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13720#discussion_r67958472
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -127,7 +127,7 @@ case class CatalogTable
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13720#discussion_r67957269
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -180,7 +180,8 @@ case class CatalogTable(
Seq
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13720#discussion_r67812926
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -127,7 +127,7 @@ case class CatalogTable
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13791
[SPARK-16084] [SQL] Minor Javadoc update for "DESCRIBE" table
## What changes were proposed in this pull request?
1. FORMATTED is actually supported, but partition is not suppor
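A hedged reading of the truncated item above, assuming it contrasts the DESCRIBE forms accepted at the time; the table name and partition spec are made up:
```
spark.sql("DESCRIBE FORMATTED my_table")                                // supported
spark.sql("DESCRIBE FORMATTED my_table PARTITION (dt = '2016-06-01')")  // not supported at the time, per the note above
```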
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13720
@srowen please review. thanks!
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13720
[SPARK-16004] [SQL] improve the display of CatalogTable information
## What changes were proposed in this pull request?
A few issues found when running "describe extended | form
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/12739
@andrewor14 Hey Andrew, could you please review this one as well?
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13695
Thanks for merging!
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13695
@rxin could you please review it again?
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13695
[SPARK-15978] [SQL] remove unnecessary format
## What changes were proposed in this pull request?
I've found some minor issues in "show tables" command:
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13671
thanks for merging!
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13671
For issue 1, I have updated the existing test case to test this (the
original one just tests the count of the result). For issue 2, it is minor and
just a text change.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13671
[SPARK-15952] [SQL] fix "show databases" ordering issue
## What changes were proposed in this pull request?
Two issues I've found for "show databases" commands:
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13543
@srowen Thanks for merging.
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13533
@srowen Thanks for merging.
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13543#discussion_r66701686
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -20,18 +20,24 @@ package org.apache.spark.deploy.master
import
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13543#discussion_r66647339
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -20,18 +20,24 @@ package org.apache.spark.deploy.master
import
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13543#discussion_r66493098
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -20,18 +20,24 @@ package org.apache.spark.deploy.master
import
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13543#discussion_r66488409
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -20,18 +20,24 @@ package org.apache.spark.deploy.master
import
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13543
Yes. I can add a warning if SPARK_MASTER_IP is set. Ideally we should use
SPARK_MASTER_HOST in all places to avoid confusion.
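A hedged sketch of such a warning, assuming it sits where the master arguments read the environment; the message text and variable handling are illustrative:
```
// Illustrative only: warn about the deprecated variable but keep honoring it
// so existing deployments do not break. Assumes `host` and logWarning are in scope.
if (System.getenv("SPARK_MASTER_IP") != null) {
  logWarning("SPARK_MASTER_IP is deprecated, please use SPARK_MASTER_HOST instead")
  host = System.getenv("SPARK_MASTER_IP")
}
```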
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13543
Here is the link:
[MasterArguments.scala](https://github.com/bomeng/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala#L56-L59)
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13543
Please note that there are also some places still using SPARK_MASTER_IP,
for example start-master.sh. I did not replace them, because doing so may break
currently running scripts.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13543
[SPARK-15806] [Documentation] update doc for SPARK_MASTER_IP
## What changes were proposed in this pull request?
SPARK_MASTER_IP is a deprecated environment variable. It is replaced
Github user bomeng commented on the issue:
https://github.com/apache/spark/pull/13533
That could be another JIRA as we do not want to use one JIRA to fix all
issues. Please file one if desired.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13533
[SPARK-17581] [Documentation] remove deprecated environment variable doc
## What changes were proposed in this pull request?
Like `SPARK_JAVA_OPTS` and `SPARK_CLASSPATH`, we will remove
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13475
[SPARK-15737] [CORE] fix jetty warning
## What changes were proposed in this pull request?
After upgrading Jetty to 9.2, we always see "
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/13141
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13304#discussion_r64665909
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcSourceSuite.scala ---
@@ -38,12 +39,12 @@ abstract class OrcSuite extends QueryTest
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13304
[SPARK-15537] [SQL] fix dir delete issue
## What changes were proposed in this pull request?
Some of the test cases, e.g. OrcSourceSuite, will create temp
folders and temp files
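A hedged sketch of the usual cleanup pattern for such suites (the PR's actual change is not visible above); the `tempDir` field and the hook are assumptions:
```
// Illustrative cleanup: remove the suite's temp directory when the suite finishes.
override def afterAll(): Unit = {
  try {
    org.apache.spark.util.Utils.deleteRecursively(tempDir)
  } finally {
    super.afterAll()
  }
}
```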
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13246#discussion_r64142270
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -227,8 +227,8 @@ object IntegerIndex {
* - Unnamed
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13246
[SPARK-15468] [SQL] fix some typos
## What changes were proposed in this pull request?
Fix some typos while browsing the codes.
## How was this patch tested?
None
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12661#issuecomment-219582450
Since this one has been here for more than 10 days, I've provided another
approach with a new test case. Please take a look. Thanks.
[PR for SPARK-14752](https
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13141
[SPARK-14752] [SQL] fix kryo ordering serialization
## What changes were proposed in this pull request?
When using Kryo as the serializer, we will get a `NullPointerException`
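For context, a minimal sketch of enabling Kryo as the serializer, the configuration under which the reported `NullPointerException` occurs; the PR's actual repro is truncated above:
```
// Enable Kryo; whether a given job then hits the reported NPE depends on the
// (truncated) details above.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("kryo-test")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)
```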
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/13140
[SPARK-15230] [SQL] distinct() does not handle column name with dot properly
## What changes were proposed in this pull request?
When a table is created with a column name containing a dot
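A hedged illustration of the kind of column name involved (the PR's own example is truncated above); the data and names are made up:
```
// Column names containing dots must be quoted with backticks; the reported
// issue is that distinct() mishandled such names.
val df = Seq((1, "a"), (1, "a")).toDF("column.with.dot", "value")
df.select("`column.with.dot`").distinct().show()
```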
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12916#discussion_r63060249
--- Diff: yarn/pom.xml ---
@@ -102,6 +102,10 @@
org.eclipse.jetty
jetty-servlet
+
+ org.eclipse.jetty
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12916#discussion_r63003988
--- Diff: core/pom.xml ---
@@ -125,12 +125,17 @@
jetty-servlet
compile
+
+ org.eclipse.jetty
+ jetty
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12916#issuecomment-218668164
@srowen sorry for the late reply, I did not notice it. I have run `mvn
dependency:tree` and only javax.servlet-api 3.1.0 is listed, so it should be
fine.
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12916#discussion_r62968602
--- Diff: core/pom.xml ---
@@ -125,12 +125,17 @@
jetty-servlet
compile
+
+ org.eclipse.jetty
+ jetty
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12916#issuecomment-218340455
@srowen Finally I've got it working. Servlet and Derby were upgraded as
well due to Jetty's requirements. Please review.
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12916#issuecomment-218320176
retest please
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12916#issuecomment-217338702
The test failure was caused by a timeout in HiveThriftHttpServerSuite and
SingleSessionSuite... I have not figured out the cause; any suggestions are
welcome
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12916
[SPARK-14897] [SQL] upgrade to jetty 9.2.16
## What changes were proposed in this pull request?
Since Jetty 8 is EOL (end of life) and has a critical security issue
[http
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12849#issuecomment-216384799
Making the changes based on the comments; will post them shortly. List[_]
should be supported like Seq[_]; for now, you can use Seq[_] as a workaround.
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12849
[SPARK-15062] [SQL] fix list type infer serializer issue
## What changes were proposed in this pull request?
Make the serializer correctly inferred when the input type is List[_], since
List
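A hedged illustration of the case the truncated sentence describes; the data is made up and assumes an active SparkSession named `spark`:
```
import spark.implicits._
// Before the fix, inferring a serializer when the element type is List[_]
// reportedly failed, while Seq[_] worked (the workaround mentioned above).
val ds = Seq(List(1, 2, 3), List(4, 5)).toDS()
ds.show()
```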
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12739#issuecomment-215450711
@srowen Please review again. Thanks.
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12739#discussion_r61342768
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -54,15 +54,22 @@ private[sql] object
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12739
[SPARK-14955] [SQL] avoid stride value equal to zero
## What changes were proposed in this pull request?
In the columnPartition() method of JDBCRelation, stride is used for
calculating
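For context, a hedged illustration (not the PR's own numbers) of how an integer stride can collapse to zero when the requested number of partitions exceeds the range between the bounds:
```
// Illustrative arithmetic only; names mirror the JDBC partitioning options.
val lowerBound = 1L
val upperBound = 5L
val numPartitions = 10
val stride = (upperBound - lowerBound) / numPartitions  // 0 under integer division
// With a zero stride the partition bounds never advance, so the generated
// WHERE clauses concentrate the rows into a single partition.
```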
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/12607
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12607#issuecomment-214958322
Closing it. Thanks.
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12709#issuecomment-214906151
Yes, I missed that. The parser already handles it.
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/12709
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12709
[SPARK-14928] [SQL] support substitution in SET key=value
## What changes were proposed in this pull request?
In the `SET key=value` command, value can be defined as a variable
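A hedged illustration of the kind of substitution meant here (the description is truncated above); the variable names are made up and assume substitution is enabled via spark.sql.variable.substitute:
```
// The value on the right-hand side of SET references another variable.
spark.sql("SET myvar=world")
spark.sql("SET greeting=hello ${myvar}")
```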
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12607#issuecomment-214818178
@rxin I am open to your decision. I think it is still useful to allow users
to use the "SET" command, with spark.sql.variable.substitute as the configuration.
Github user bomeng closed the pull request at:
https://github.com/apache/spark/pull/12347
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12347#issuecomment-214518666
Closing this PR. Thanks.
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12607#issuecomment-213614415
I think you mean setting the value of `spark.sql.variable.substitute` and reading
`spark.sql.variable.substitute` above. I will post another try shortly.
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12607#issuecomment-213533820
@rxin Just want to confirm: you want to let users do `SET
hive.variable.substitute=true/false` in SQL? It will logWarning in the
`setConfWithCheck()` method, and I just
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12607
[SPARK-14806] [SQL] support substitution in set command
## What changes were proposed in this pull request?
Since we have spark.sql.variable.substitute as an alias
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12583#discussion_r60685224
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -716,4 +716,8 @@ class DDLSuite extends QueryTest
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12583#discussion_r60669363
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -716,4 +716,8 @@ class DDLSuite extends QueryTest
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12583
[SPARK-14819] [SQL] Improve SET / SET -v command
## What changes were proposed in this pull request?
Currently the `SET` and `SET -v` commands are similar to the Hive `SET` command,
except
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12373#issuecomment-212169908
@rxin Could you please take a look if you get a chance? Thanks.
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12191#issuecomment-211543283
Yes, the reason for sorting the keywords is ease of searching.
I have checked the generated code and see the switch/case for each
non-reserved word
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12373#discussion_r59928147
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -128,6 +128,143 @@ case class IsNaN(child
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12373#issuecomment-210605312
I have revisited the code and made it more robust. Heavily tested
against different data types by introducing testAllTypes2Values() with 2
different
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12373#discussion_r59904696
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -128,6 +128,143 @@ case class IsNaN(child
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12373#issuecomment-210300785
I will address these issues tomorrow! Thank you all!
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12252#discussion_r59824077
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -246,13 +247,23 @@ object JdbcUtils extends Logging
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12252#discussion_r59819746
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -246,13 +247,23 @@ object JdbcUtils extends Logging
Github user bomeng commented on a diff in the pull request:
https://github.com/apache/spark/pull/12373#discussion_r59659273
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -128,6 +128,58 @@ case class IsNaN(child
GitHub user bomeng opened a pull request:
https://github.com/apache/spark/pull/12373
[SPARK-14541] [SQL] [WIP] SQL function: IFNULL, NULLIF, NVL and NVL2
## What changes were proposed in this pull request?
I am trying to implement the function `NULLIF` in this PR. The meaning
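For background, the standard semantics of the functions named in the title (not quoted from the truncated description), with a query that works on Spark versions where these functions exist:
```
// NULLIF(a, b)  -> NULL if a equals b, otherwise a
// NVL(a, b)     -> b if a is NULL, otherwise a (IFNULL behaves the same way)
// NVL2(a, b, c) -> b if a is not NULL, otherwise c
spark.sql("SELECT nullif(1, 1), nvl(NULL, 'fallback'), nvl2('x', 'yes', 'no')").show()
```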
Github user bomeng commented on the pull request:
https://github.com/apache/spark/pull/12347#issuecomment-209622770
Ok, not a problem. Thanks.