Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79330219
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79329962
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,209 @@
+/*
+ * Licensed to
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79329866
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15146
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15146
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65588/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15146
**[Test build #65588 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65588/consoleFull)**
for PR 15146 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79329639
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/StatisticsSuite.scala ---
@@ -101,4 +101,47 @@ class StatisticsSuite extends QueryTest with
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79329457
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
---
@@ -32,19 +34,70 @@ package
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79329382
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
@holdenk I am also cautious, but leaving everything as-is while adding `df.show()`
to the package docstring and cleaning up the duplicated DataFrame definitions in
each docstring would be a minimal change and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15054
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65587/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15054
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15054
**[Test build #65587 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65587/consoleFull)**
for PR 15054 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15024
For `FileFormat`, [`allPaths` is changed to `paths ++ new
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15024
It sounds like we also need to call `optionsToStorageFormat` for
`visitCreateTempViewUsing`.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15024#discussion_r79328309
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -507,3 +400,117 @@ case class DataSource(
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13599#discussion_r79328285
--- Diff:
core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala ---
@@ -69,6 +84,67 @@ private[spark] class
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13599#discussion_r79328171
--- Diff:
core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala ---
@@ -69,6 +84,67 @@ private[spark] class
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15053
I was thinking that the user would probably read the package docstring
before looking at the individual functions (or if they went
looking for the definition of the DataFrame). I'm a
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79327748
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/StatisticsSuite.scala ---
@@ -101,4 +101,47 @@ class StatisticsSuite extends QueryTest with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15131
**[Test build #65590 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65590/consoleFull)**
for PR 15131 at commit
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/15131
Jenkins, test this please.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/15102
> We do need to handle it comparing completely different topicpartitions,
because it's entirely possible to have a job with a single topicpartition A,
which is deleted or unsubscribed, and then
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79327304
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/Statistics.scala
---
@@ -32,19 +34,70 @@ package
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14784
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14784
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65589/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14784
**[Test build #65589 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65589/consoleFull)**
for PR 14784 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/11105
If @rxin or @squito has the bandwidth to continue reviewing I'd really
appreciate it (especially on the mergeImpl / addImpl wrapping, or whether I
should go about it in another way).
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15146
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15146
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65586/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15146
**[Test build #65586 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65586/consoleFull)**
for PR 15146 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r79326407
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14784
**[Test build #65589 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65589/consoleFull)**
for PR 14784 at commit
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14784
@shivaram @felixcheung Sorry for the late response. I just rebased the PR and
also made `spark.master` take precedence over `master`. Please help review.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Oh, I meant: don't touch the `globs` in `_test()`, but just print the
global dataframes (or rather `show()` them, to display their contents) so
that users can understand the input and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14597
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65585/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14597
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14597
**[Test build #65585 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65585/consoleFull)**
for PR 14597 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/15146
cc @hvanhovell @cloud-fan
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15146
**[Test build #65588 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65588/consoleFull)**
for PR 15146 at commit
Github user mortada commented on the issue:
https://github.com/apache/spark/pull/15053
@HyukjinKwon I understand we can have `py.test` and `doctest`, but I don't
quite see how we could define the input DataFrame globally while at the same
time having a clear, self-contained docstring
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14995
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14995
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65584/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14995
**[Test build #65584 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65584/consoleFull)**
for PR 14995 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/15041
cc @cloud-fan @hvanhovell
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15054
**[Test build #65587 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65587/consoleFull)**
for PR 15054 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15147
FYI, we backported this to branch 2.0 too. So this will be fixed from 2.0.1
https://github.com/apache/spark/pull/14799.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15147
cc @srowen who was in the JIRA too.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15147
To continue the discussion from the JIRA, I think the issue you faced is
reading those in CSV?
Whether it is intended or not in `FastDateFormat`, the default pattern
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15147
Can one of the admins verify this patch?
---
GitHub user nbeyer opened a pull request:
https://github.com/apache/spark/pull/15147
[SPARK-17545] [SQL] Handle additional time offset formats of ISO 8601
## What changes were proposed in this pull request?
Allows flexibility in handling additional ISO 8601 time offset variants.
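For illustration only (the PR itself changes Scala/Java date parsing, not Python), the kinds of ISO 8601 offset spellings at issue can be demonstrated with Python's `strptime`, whose `%z` directive accepts all three forms on Python 3.7+:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%dT%H:%M:%S%z"

# Three common ISO 8601 offset spellings; the timestamp values are made up.
variants = [
    "2016-09-19T12:00:00+0100",   # basic form, no colon
    "2016-09-19T12:00:00+01:00",  # extended form, with colon
    "2016-09-19T12:00:00Z",       # "Zulu" shorthand for UTC
]

parsed = [datetime.strptime(s, FMT) for s in variants]

# The first two spell the same instant; "Z" is identical to +00:00.
assert parsed[0] == parsed[1]
assert parsed[2].utcoffset() == timedelta(0)
```

A parser that accepts only one of these spellings will reject valid ISO 8601 input, which is the flexibility the PR description refers to.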
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15146
**[Test build #65586 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65586/consoleFull)**
for PR 15146 at commit
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/15146
[SPARK-17590][SQL] Analyze CTE definitions at once
## What changes were proposed in this pull request?
We substitute logical plan with CTE definitions in the analyzer rule
CTESubstitution.
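As a toy illustration of the idea (a hypothetical token-level sketch, nothing like Spark's actual `CTESubstitution` rule), resolving CTE definitions at once means walking them in order, letting each definition see the already-resolved earlier ones, and then substituting into the main query:

```python
def substitute(tokens, resolved):
    # Replace any token that names an already-resolved CTE with its body.
    out = []
    for tok in tokens:
        out.extend(resolved.get(tok, [tok]))
    return out

def resolve_ctes(definitions):
    # `definitions` is an ordered list of (name, token-list) pairs, as in a
    # SQL WITH clause: later CTEs may reference earlier ones.
    resolved = {}
    for name, body in definitions:
        resolved[name] = substitute(body, resolved)
    return resolved

# WITH a AS (SELECT 1), b AS (SELECT * FROM a) SELECT * FROM b
defs = [
    ("a", "( SELECT 1 )".split()),
    ("b", "( SELECT * FROM a )".split()),
]
query = "SELECT * FROM b".split()
expanded = substitute(query, resolve_ctes(defs))
assert " ".join(expanded) == "SELECT * FROM ( SELECT * FROM ( SELECT 1 ) )"
```

The point of analyzing definitions in one ordered pass is that `b` can reference `a` and get the fully substituted body, rather than an unresolved name.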
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15145
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65583/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15145
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15145
**[Test build #65583 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65583/consoleFull)**
for PR 15145 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14803
ping @marmbrus @zsxwing Would you mind taking a look at this and providing your
feedback? If this is not going to be fixed, please let me know too. This is a
small change and I don't think it should be
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15145#discussion_r79321615
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -509,7 +509,7 @@ class MetastoreDataSourcesSuite
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14452
@hvanhovell @davies I have rethought this PR in recent days. The changes include
some hacky parts and are too big to review. I would like to separate them into
individual small PRs which can be reviewed
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14597
**[Test build #65585 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65585/consoleFull)**
for PR 14597 at commit
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/13680
@cloud-fan would it be possible for you to review this? I think that I have
implemented your suggestions.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14995
**[Test build #65584 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65584/consoleFull)**
for PR 14995 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15145#discussion_r79320510
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -509,7 +509,7 @@ class MetastoreDataSourcesSuite
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15145#discussion_r79320458
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -509,7 +509,7 @@ class MetastoreDataSourcesSuite
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Hi @mortada, I am sorry that I am being a bit noisy here, but I just took a
look for myself.
I mirrored the PySpark structure and made a draft for myself.
```python
"""
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15054#discussion_r79320015
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -65,7 +64,11 @@ case class CreateTableLikeCommand(
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15136
Will do more investigation on this.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15145
**[Test build #65583 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65583/consoleFull)**
for PR 15145 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15145
[SPARK-17589] [TEST] [2.0] Fix test case `create external table` in
MetastoreDataSourcesSuite
### What changes were proposed in this pull request?
This PR is to fix a test failure on the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15145
cc @cloud-fan @srowen
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15122
@petermaxlee I believe you will get a runtime exception saying that the
file does not exist.
Also, regarding your option 2, are you suggesting that users of structured
streaming use
Github user peteb4ker commented on the issue:
https://github.com/apache/spark/pull/15142
Looks great, thx Sean.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15138
Yes, you are right and also yes, the purpose of the setting is to prevent
OOM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15099
Let me do a quick fix.
---
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/15114
I tried running this in SBT and ran into a bunch of spurious exceptions
from logging code:
```
SLF4J: Failed toString() invocation on an object of type
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
I haven't checked whether a package-level docstring can define global
variables that can be accessed from other docstrings. So, I would like to defer
this to @holdenk (if you are not sure too, then we
Github user mortada commented on the issue:
https://github.com/apache/spark/pull/15053
@HyukjinKwon thanks for your help! I'm happy to complete this PR and follow
what you suggest for testing.
How would the package-level docstring work? The goal (which I think we all
agree
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Hi @mortada, do you mind if I ask you to address my or @holdenk's comments? If
you find any problem with the testing, I am willing to take this over, in which
case I will ask committers to credit it to you.
Github user erenavsarogullari commented on the issue:
https://github.com/apache/spark/pull/15143
Hi @rxin,
Firstly, thanks for the quick reply.
I was thinking from a unit-test coverage perspective and as a starting point
for contributing to the project, but it is ok for me if the PR is
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15127
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15127
Merging in master/2.0.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15143
hm I agree having good unit test coverage is important -- this seems too
trivial to test?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15142
LGTM
---
Github user manugarri commented on the issue:
https://github.com/apache/spark/pull/3062
I'm not sure if this is the right place to ask, but is there any plan to
implement PMML export from PySpark? I can't find anything in the PySpark docs.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15144
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15144
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65582/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15144
**[Test build #65582 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65582/consoleFull)**
for PR 15144 at commit
Github user sadikovi commented on the issue:
https://github.com/apache/spark/pull/15134
@phalodi Does this solve (or intend to solve) the situation when spark-submit
is launched with an empty app name? Currently, as of 1.6, it will use an empty
application name.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15142
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65581/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15142
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15142
**[Test build #65581 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65581/consoleFull)**
for PR 15142 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15144
**[Test build #65582 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65582/consoleFull)**
for PR 15144 at commit
GitHub user zero323 opened a pull request:
https://github.com/apache/spark/pull/15144
[SPARK-17587][PYTHON][MLLIB] SparseVector __getitem__ should follow
__getitem__ contract
## What changes were proposed in this pull request?
Replaces ValueError with IndexError when index
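The contract in question is easy to demonstrate with a toy class (a hypothetical stand-in, not PySpark's `SparseVector`): Python's legacy sequence iteration protocol probes `__getitem__` with increasing indices and treats `IndexError`, not `ValueError`, as the end-of-sequence signal, so raising the wrong exception breaks `for`, `list()`, and `in`:

```python
class TinySparseVector:
    """Toy sparse vector: only nonzero entries are stored."""

    def __init__(self, size, indices, values):
        self.size = size
        self.data = dict(zip(indices, values))

    def __getitem__(self, index):
        if index < 0:                 # support negative indexing like lists
            index += self.size
        if not 0 <= index < self.size:
            raise IndexError("index out of range")  # the __getitem__ contract
        return self.data.get(index, 0.0)

v = TinySparseVector(4, [1, 3], [5.0, 7.0])
assert list(v) == [0.0, 5.0, 0.0, 7.0]  # iteration terminates via IndexError
assert v[-1] == 7.0
```

Had `__getitem__` raised `ValueError` instead, `list(v)` would propagate the exception rather than stopping at the end of the vector.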
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15143
Can one of the admins verify this patch?
---
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/1
@cloud-fan, please see this:
https://github.com/apache/spark/blob/1dbb725dbef30bf7633584ce8efdb573f2d92bca/sql/core/src/main/scala/org/apache/spark/sql/functions.scala#L1104-L1115
it seems
GitHub user erenavsarogullari opened a pull request:
https://github.com/apache/spark/pull/15143
[SPARK-17584][Test] - Add unit test coverage for TaskState and ExecutorState
## What changes were proposed in this pull request?
- TaskState and ExecutorState expose isFailed and
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/15114
I verified this works with native Docker on Linux with:
build/mvn -Pdocker-integration-tests -Pscala-2.11 -pl :spark-docker-integration-tests_2.11 clean compile test
LGTM.
---
Github user sumansomasundar commented on the issue:
https://github.com/apache/spark/pull/14762
@srowen I ran dev/lint-java, removed a few additional white spaces, and
shortened a few lines longer than 100 characters, then rebased it.
---
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/1
@hvanhovell, I'm currently trying your approach of testing `ne.resolved`
prior to accessing `ne.name`.
Tests are running as I write this, but a quick dive into the
`NamedExpression` hierarchy
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
@srowen Please don't get me wrong, I don't have any interest in this
extension either, but I just want to make sure we start doing the right thing
for Apache Spark. I will try to ping some of the