Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17458
Probably, I guess this should be fine. Just in my experience, IntelliJ's
inspections were quite okay except for the cases that break Scala 2.10. It might
be better if they can be manually tested.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17435#discussion_r108437512
--- Diff: python/pyspark/sql/types.py ---
@@ -57,7 +57,25 @@ def __ne__(self, other):
@classmethod
def typeName(cls
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17458
These are suggestions from my point of view. It doesn't necessarily mean you
should follow them if there are reasons not to.
---
If your project is set up for it, you can reply to this email and
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17458#discussion_r108560956
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -103,7 +103,7 @@ private[ui] class StagePage(parent: StagesTab) extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17466
Hi @shaynativ, this PR looks like a non-trivial change that needs a JIRA. Please
refer to http://spark.apache.org/contributing.html.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17467
Hi @shaynativ, this PR looks like a non-trivial change that needs a JIRA. Please
refer to http://spark.apache.org/contributing.html.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17467
Hi @yssharma, this PR looks like a non-trivial change that needs a JIRA. Please
refer to http://spark.apache.org/contributing.html.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17467
You could just edit the title. I think closing this and opening a new one is
also a fine option.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17435#discussion_r108685381
--- Diff: python/pyspark/sql/types.py ---
@@ -57,7 +57,25 @@ def __ne__(self, other):
@classmethod
def typeName(cls
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17468
[SPARK-20143][SQL] DataType.fromJson should throw an exception with better
message
## What changes were proposed in this pull request?
Currently, `DataType.fromJson` throws
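The kind of improvement described here (a descriptive error rather than an opaque match failure on unsupported input) can be sketched in plain Python. This is a hypothetical stand-alone parser for illustration only, not Spark's actual `DataType.fromJson` implementation; the type-name set and the message wording are assumptions:

```python
import json

# Hypothetical subset of atomic type names, for illustration only.
_ATOMIC_TYPES = {"string", "integer", "long", "double", "boolean"}

def from_json_type(json_str):
    """Parse a JSON type description, failing with a message that names the
    offending input instead of raising a bare match/key error."""
    value = json.loads(json_str)
    if isinstance(value, str) and value in _ATOMIC_TYPES:
        return value
    raise ValueError(
        "Failed to convert the JSON string %r to a data type." % json_str)
```

The point of the sketch is only the error path: the invalid input is echoed back so users can see what failed to parse.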
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17468
(BTW, I believe this does not make a conflict with PR 17406)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17375
Yea, it might be less important, but I guess it is still a valid backport.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r108835449
--- Diff: python/pyspark/sql/column.py ---
@@ -124,6 +124,35 @@ def _(self, other):
return _
+like_doc = """ R
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17467
I am not familiar with Kinesis. I usually click the blame button and check both
the recent code modifiers and committers, e.g.,
https://github.com/yssharma/spark/blame
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17477
[SPARK-18692][BUILD][DOCS] Test Java 8 unidoc build on Jenkins
## What changes were proposed in this pull request?
This PR proposes to run Spark unidoc to test Javadoc 8 build as
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17477#discussion_r108852891
--- Diff: mllib/src/test/scala/org/apache/spark/ml/PipelineSuite.scala ---
@@ -230,7 +230,9 @@ class PipelineSuite extends SparkFunSuite with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17477#discussion_r108853043
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala ---
@@ -74,7 +74,7 @@ abstract class Classifier
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17477
FYI, if I haven't missed something, all the cases are instances of the ones
previously fixed. cc @joshrosen, @srowen and @jkbradley. Could you
take a look and see if it makes sense?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17477#discussion_r108857749
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -704,12 +704,12 @@ private[spark] object TaskSchedulerImpl
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17479
I think a UI change requires a screenshot, as written above. It seems trivial
though.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17468
@gatorsmile, could you take a look please?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17458#discussion_r108874489
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagesTab.scala ---
@@ -35,7 +35,7 @@ private[ui] class StagesTab(parent: SparkUI) extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17458#discussion_r108874623
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -180,8 +180,8 @@ private[spark] object UIData {
speculative
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17458#discussion_r108874836
--- Diff: core/src/main/scala/org/apache/spark/ui/storage/RDDPage.scala ---
@@ -42,7 +42,7 @@ private[ui] class RDDPage(parent: StorageTab) extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17479#discussion_r108879364
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -24,7 +24,7 @@ Summary
RDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17477#discussion_r108882566
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -704,12 +704,12 @@ private[spark] object TaskSchedulerImpl
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17458
It looks good to me too.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17468
Thank you @gatorsmile.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
cc @srowen and @ueshin, could you take a look please?
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17489
[SPARK-20166][SQL] Use XXX for ISO timezone instead of ZZ (FastDateFormat
specific) in CSV/JSON timeformat options
## What changes were proposed in this pull request?
This PR proposes
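The offset-format distinction this PR is about (FastDateFormat's `ZZ` vs `SimpleDateFormat`'s `XXX`, both covering ISO-8601 offsets such as `-1100` and `-11:00`) can be illustrated outside the JVM. As a rough analogue only, Python's `%z` directive (3.7+) accepts both spellings, so the two forms parse to the same instant; the timestamps below are made-up examples:

```python
from datetime import datetime, timedelta

def parse_ts(s):
    # %z accepts ISO-8601 offsets both with and without a colon (Python 3.7+),
    # roughly the family of forms the ZZ/XXX discussion concerns.
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S%z")

a = parse_ts("2017-03-21T00:00:00-1100")
b = parse_ts("2017-03-21T00:00:00-11:00")
```

Both values carry an offset of -11 hours, mirroring the "-1100" cases the author tested below.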
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17492
cc @NathanHowell and @cloud-fan.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17492
[SPARK-19641][SQL] JSON schema inference in DROPMALFORMED mode produces
incorrect schema for non-array/object JSONs
## What changes were proposed in this pull request?
Currently, when
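The inference problem being fixed can be sketched in plain Python: under DROPMALFORMED-like semantics, records that are not JSON objects should be dropped rather than folded into the inferred schema. This is a toy field-name inference under those assumed semantics, not Spark's `JsonInferSchema`:

```python
import json

def infer_fields(lines):
    """Collect top-level field names from JSON lines, dropping records that
    are malformed or are not JSON objects (toy DROPMALFORMED behavior)."""
    fields = set()
    for line in lines:
        try:
            record = json.loads(line)
        except ValueError:  # malformed JSON: drop the record
            continue
        if not isinstance(record, dict):  # bare string/number: drop, don't pollute the schema
            continue
        fields.update(record.keys())
    return sorted(fields)
```

For example, a bare string line contributes nothing to the result instead of corrupting it.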
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17492
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15751
I will close this for now and make a new one soon.
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/15751
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
> ZZ means something different to FastDateFormat, and what it means
actually matches what XXX means to SimpleDateFormat?
Yes, it seems so given my tests and checking the co
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
I tested "-1100" cases
[before](https://issues.apache.org/jira/browse/SPARK-17545?focusedCommentId=15509110&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#co
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
Let me just fix up the documentation if you are worried about it. I am fine
with it.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
Otherwise, let me ask the Apache Commons user mailing list whether `ZZ` and
`XXX` are really the same.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
Yes. I asked a question on the Commons mailing list about this for clarity. Let
me update you when I have an answer.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
@srowen, I got a great answer about `ZZ` -
[here](http://mail-archives.apache.org/mod_mbox/commons-user/201704.mbox/).
It seems okay to use `XXX` instead.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
I tried to grep this pattern. I think these are all of them, if I haven't
missed any. Thanks for approving it.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17468
@gatorsmile, could this get merged maybe?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17492#discussion_r109304123
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonInferSchema.scala
---
@@ -217,26 +221,43 @@ private[sql] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17477
(gentle ping @joshrosen).
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17375
(gentle ping @holdenk and @davies)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14660
@rxin, I think there is no downside versus what it improves. Do you mind if I
ask you to reconsider this?
If you think it is still not worth adding, I will close this, resolve the
JIRA and leave
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17489#discussion_r109305593
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -363,7 +363,7 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17489#discussion_r109305610
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -363,7 +363,7 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17492#discussion_r109307361
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonInferSchema.scala
---
@@ -202,41 +206,54 @@ private[sql] object
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17492#discussion_r109307493
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1903,9 +1932,8 @@ class JsonSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17492#discussion_r109307548
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -1041,7 +1041,6 @@ class JsonSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109309786
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,41 @@ def __iter__(self):
raise TypeError("Column is not ite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109309540
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +333,25 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109309750
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not ite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109309888
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +333,25 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17469
Also, you could run the Python tests via `./python/run-tests.py --module
pyspark-sql` after building. It is fine because I guess
Jenkins will catch this, but it might be nicer
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17517
[MINOR][DOCS] Replace non-breaking space to normal spaces that breaks
rendering markdown
## What changes were proposed in this pull request?
It seems there are several non-breaking
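Non-breaking spaces (U+00A0) render like ordinary spaces in most editors but can break markdown indentation and code-block detection, which is what this PR cleans up. A minimal sketch of the replacement (a hypothetical helper, not the actual change in the PR):

```python
def replace_nbsp(text):
    # U+00A0 looks identical to U+0020 on screen, but some markdown
    # renderers treat it differently (e.g. for indented code blocks),
    # so rendering silently breaks.
    return text.replace("\u00a0", " ")
```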
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17517#discussion_r109336173
--- Diff: docs/monitoring.md ---
@@ -257,7 +257,7 @@ In the API, an application is referenced by its
application ID, `[app-id]`.
When running on
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17517
cc @srowen. Could you take a look please?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17517#discussion_r109336405
--- Diff: docs/monitoring.md ---
@@ -257,7 +257,7 @@ In the API, an application is referenced by its
application ID, `[app-id]`.
When running on
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17517#discussion_r109336446
--- Diff: docs/building-spark.md ---
@@ -154,7 +154,7 @@ Developers who compile Spark frequently may want to
speed up compilation; e.g
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17517#discussion_r109336475
--- Diff: README.md ---
@@ -97,7 +97,7 @@ building for particular Hive and Hive Thriftserver
distributions.
Please refer to the [Configuration
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17489
Thank you for asking the details and merging it @srowen.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17492
Thank you @cloud-fan.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17523
(it would be nicer if the title were fixed to briefly indicate what this
proposes)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109574278
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +333,25 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109574284
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +333,25 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17469
It might be better to run `./dev/lint-python` locally if possible. It will
catch more minor nits ahead of time.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109575023
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not ite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109575589
--- Diff: python/pyspark/sql/column.py ---
@@ -303,8 +333,25 @@ def isin(self, *cols):
desc = _unary_op("desc", "Returns a
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/17528
[MINOR][R] Reorder `Collate` fields in DESCRIPTION file
## What changes were proposed in this pull request?
It seems the CRAN check script corrects `R/pkg/DESCRIPTION` and follows the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17528
@felixcheung, it is probably not worth being a separate PR. I am fine if
you add this in any of your PRs that is going to be merged soon.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109826847
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109826890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827045
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827042
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827145
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109854005
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109863558
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109865610
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17466
If you think that is the JIRA, fix the title to `[SPARK-14681][ML] Added
getter for impurityStats`. That will create a link to the JIRA automatically.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109892051
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -285,7 +285,7 @@ private[spark] class HiveExternalCatalog
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17527
I support this idea in general. I can at least identify several references,
for example,
https://hibernate.atlassian.net/plugins/servlet/mobile#issue/HHH-9722,
`https://github.com/hibernate
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r110060244
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -285,7 +285,7 @@ private[spark] class HiveExternalCatalog
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110064613
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -407,7 +408,7 @@ public UTF8String toLowerCase
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17527
@viirya, I think it is possible. `Lower`, `Upper` and `InitCap` would be
examples, maybe.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17527
Yea, that's the concern. The downside is when these are exposed to users.
However, it might be an advantage as well: the behavior doesn't depend on the
default JVM locale and is consistent.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17527
Thank you for clarifying it, @srowen and @viirya . I am okay with it too.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r110114252
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -285,7 +285,7 @@ private[spark] class HiveExternalCatalog
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17315
Currently, the corrupted record field should be set explicitly if I haven't
missed some changes in the related code path. Please refer to the test here -
https://github.com/apache/spark
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110315394
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
---
@@ -82,8 +84,8 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110314541
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/CaseInsensitiveMap.scala
---
@@ -26,11 +28,12 @@ package
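The locale concern threaded through these comments (string comparisons that silently depend on the default JVM locale, e.g. the Turkish dotless-i) is exactly what a case-insensitive map has to avoid. A minimal locale-independent sketch in Python, inspired by but not identical to Spark's `CaseInsensitiveMap`:

```python
class CaseInsensitiveMap:
    """String-keyed map with case-insensitive lookup. Uses str.casefold(),
    which applies Unicode default case folding and never consults the
    process locale (the property the Locale.ROOT changes aim for)."""

    def __init__(self, items=None):
        self._data = {}
        for key, value in dict(items or {}).items():
            # Keep the original key so its spelling could be preserved later.
            self._data[key.casefold()] = (key, value)

    def __getitem__(self, key):
        return self._data[key.casefold()][1]

    def __contains__(self, key):
        return key.casefold() in self._data
```

For instance, a value stored under `"Path"` is retrievable via `"PATH"` or `"path"` regardless of the host locale.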
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFsRelation.scala
---
@@ -52,7 +54,11 @@ case class HadoopFsRelation
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317441
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -128,7 +128,8 @@ object PartitioningUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110314669
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/StringKeyHashMap.scala
---
@@ -25,7 +27,7 @@ object StringKeyHashMap
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110298557
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala
---
@@ -396,7 +397,7 @@ object
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317695
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -222,7 +225,7 @@ case class PreprocessTableCreation
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17527#discussion_r110317549
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -328,7 +329,7 @@ object PartitioningUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r10990
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions