Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14746#discussion_r76366559
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -105,7 +105,13 @@ case class CreateViewCommand(
}
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14572
ping @yhuai : )
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/8880
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64451/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/8880
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/8880
**[Test build #64451 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64451/consoleFull)**
for PR 8880 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14572
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64452/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14572
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14572
**[Test build #64452 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64452/consoleFull)**
for PR 14572 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14801
**[Test build #64456 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64456/consoleFull)**
for PR 14801 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14801
**[Test build #64455 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64455/consoleFull)**
for PR 14801 at commit
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14808
This is going to have to be changed after
[SPARK-17163](https://issues.apache.org/jira/browse/SPARK-17163). Sorry about
the confusion! We'll still want to make an example with multiclass, though, so
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14818
In fact, we are actually just eliminating the
`MultinomialLogisticRegression` interface and merging it into the existing
`LogisticRegression` estimator. So, maybe we won't need a change after all? I'm
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13231
@tejasapatil any chance to update it soon? If not, I am interested in
implementing it.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14821
LGTM, cc @yhuai to confirm.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14617
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64450/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14617
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14617
**[Test build #64450 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64450/consoleFull)**
for PR 14617 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14728
**[Test build #64454 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64454/consoleFull)**
for PR 14728 at commit
Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/14802
@zsxwing yup I plan to consolidate them.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14814
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14819
Do other databases do this?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14814
Merging in master/2.0.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14618
**[Test build #64453 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64453/consoleFull)**
for PR 14618 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14818
cool, thanks for the heads up @sethah - please loop us in for the R side
changes.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14572
**[Test build #64452 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64452/consoleFull)**
for PR 14572 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14572
retest this please
---
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14818
This is going to have to wait. We are changing the interface completely.
See https://issues.apache.org/jira/browse/SPARK-17163.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/8880
**[Test build #64451 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64451/consoleFull)**
for PR 8880 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14821
cc @cloud-fan @yhuai
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
Yeah, that is a bug. We did not get an exception when we read it, but we do
get the error when trying to write it. The error message is confusing:
```
Can only write data to relations
```
Github user Downchuck commented on the issue:
https://github.com/apache/spark/pull/13452
Regarding the reason for disallowing bucket writes: "we have no idea [on
read] if the data is bucketed or not, so it doesn't make sense to use save to
write bucketed data"
It's easy
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64449/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64449 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64449/consoleFull)**
for PR 14537 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14821
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64448/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14821
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14821
**[Test build #64448 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64448/consoleFull)**
for PR 14821 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76356157
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -88,14 +89,70 @@ case class
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14809
Ah, I see. BTW, in your example, when will we throw an exception? When we read
it? A file-based external table without a path is invalid.
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/14712
@cloud-fan Can you please launch test for this pr? thanks!
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14819#discussion_r76355514
--- Diff: sql/core/src/test/resources/sql-tests/inputs/literals.sql ---
@@ -27,6 +27,12 @@ select 9223372036854775807L, -9223372036854775808L;
-- out of
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14537#discussion_r76355262
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -237,21 +237,27 @@ private[hive] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14617
**[Test build #64450 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64450/consoleFull)**
for PR 14617 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76354939
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -88,14 +89,70 @@ case class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14537
BTW, @rajeshbalamohan, since you directly use the metastore schema now, the PR
description no longer looks correct; can you also update it? Thanks.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14819
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64445/
Test PASSed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
Jenkins, retest this please.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14819
**[Test build #64445 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64445/consoleFull)**
for PR 14819 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76354802
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -88,14 +89,70 @@ case class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14537
@gatorsmile Thanks for cc'ing me.
As `spark.sql.hive.convertMetastoreOrc` is set to `false` by default, this
change looks fine. However, if setting the config to `true`, and hitting with
Github user f7753 commented on the issue:
https://github.com/apache/spark/pull/14239
@tgravescs To make it more readable and answer the question above.
**1. Are you saying that you are loading all the data for all the maps from
disk into memory and caching it waiting for the
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76354055
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeTableCommand.scala
---
@@ -88,14 +89,70 @@ case class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14710
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64442/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14710
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14710
**[Test build #64442 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64442/consoleFull)**
for PR 14710 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64446/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64446 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64446/consoleFull)**
for PR 14537 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
If we do not specify the schema, it will behave like what you said. For
example,
```Scala
sparkSession.catalog.createExternalTable(
  "createdParquetTable",
```
Github user angolon commented on the issue:
https://github.com/apache/spark/pull/14710
Thanks for the feedback, @vanzin - all good points. I'll fix them up.
---
Github user rajeshbalamohan commented on the issue:
https://github.com/apache/spark/pull/14537
Thanks @gatorsmile . Removed the changes related to OrcFileFormat
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64449 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64449/consoleFull)**
for PR 14537 at commit
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/14815#discussion_r76351420
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1386,15 +1386,17 @@ object EliminateOuterJoin
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14821
**[Test build #64448 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64448/consoleFull)**
for PR 14821 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14820
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64447/
Test PASSed.
---
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/14811
Done
---
Github user mallman closed the pull request at:
https://github.com/apache/spark/pull/14811
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14820
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14820
**[Test build #64447 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64447/consoleFull)**
for PR 14820 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64443/
Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14809
@gatorsmile can you explain more about this example? I think we will throw
exception in `CreateDataSourceTableCommand` when we create a `DataSource` and
call its `resolveRelation`.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64443 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64443/consoleFull)**
for PR 14816 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/14821
[SPARK-17250] [SQL] Remove HiveClient and setCurrentDatabase from
HiveSessionCatalog
### What changes were proposed in this pull request?
This is the first step to remove `HiveClient` from
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14753
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14786
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14786
thanks, merging to master!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14537
You might forget this comment
https://github.com/apache/spark/pull/14537#discussion_r76189474
---
Github user qualiu commented on the issue:
https://github.com/apache/spark/pull/14807
@srowen @tsudukim @tritab @andrewor14 : Hello, I've updated to a more
conservative fix, please review it, thanks!
I didn't push [my former fix](
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14820
**[Test build #64447 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64447/consoleFull)**
for PR 14820 at commit
GitHub user junyangq opened a pull request:
https://github.com/apache/spark/pull/14820
[SparkR][Minor] Fix example of spark.naiveBayes
## What changes were proposed in this pull request?
The original example doesn't work because the features are not categorical.
This PR
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14710
Looks OK, a couple of minor suggestions that, from my understanding, should
work now. I guess this is the next best thing without making all of these APIs
properly asynchronous.
pinging
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14710#discussion_r76348848
--- Diff:
yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -269,20 +258,22 @@ private[spark] abstract class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14638
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14638
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64440/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14638
**[Test build #64440 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64440/consoleFull)**
for PR 14638 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14802
It would be great if we can reuse codes in `FileStreamSinkLog` for both
`FileStreamSource` and `FileStreamSink`.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14710#discussion_r76347979
--- Diff:
core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala ---
@@ -220,19 +225,13 @@ private[spark] class StandaloneAppClient(
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14728
Looks pretty good. Just one comment about `Serializable`.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14749
@rxin It doesn't fail any tests. Found this issue while working on related
code path.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64446 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64446/consoleFull)**
for PR 14537 at commit
Github user rajeshbalamohan commented on the issue:
https://github.com/apache/spark/pull/14537
Fixed the test case name. I haven't changed the parquet code path as I
wasn't sure on whether it would break any backward compatibility.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14814
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/6/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14814
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14814
**[Test build #6 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14814 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14819
**[Test build #64445 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64445/consoleFull)**
for PR 14819 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14819
cc @JoshRosen
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/14819
[SPARK-17246][SQL] Add BigDecimal literal
## What changes were proposed in this pull request?
This PR adds parser support for `BigDecimal` literals. If you append the
suffix `BD` to a valid
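[Editorial note: for context on why a dedicated `BigDecimal` literal is useful, here is a minimal sketch using Python's stdlib `decimal` module (not Spark itself) to show the rounding problem that exact decimal types avoid:]

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly,
# so the sum drifts from the exact decimal value.
print(0.1 + 0.2 == 0.3)  # False

# An arbitrary-precision decimal type (analogous to java.math.BigDecimal)
# keeps exact decimal arithmetic when constructed from strings.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```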
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
Condition 2 is not always true when condition 1 is `true`. I found an
exception.
```Scala
val schema = StructType(StructField("b", StringType, true) :: Nil)
```