Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14537
You might forget this comment
https://github.com/apache/spark/pull/14537#discussion_r76189474
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user qualiu commented on the issue:
https://github.com/apache/spark/pull/14807
@srowen @tsudukim @tritab @andrewor14 : Hello, I've updated to a more
conservative fix, please review it, thanks!
I didn't push [my former fix](
https://github.com/qualiu/spark/tree/submit-cmd-
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14820
**[Test build #64447 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64447/consoleFull)**
for PR 14820 at commit
[`607f117`](https://github.com/apache/spark/commit/6
GitHub user junyangq opened a pull request:
https://github.com/apache/spark/pull/14820
[SparkR][Minor] Fix example of spark.naiveBayes
## What changes were proposed in this pull request?
The original example doesn't work because the features are not categorical.
This PR fix
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14710
Looks OK; a couple of minor suggestions that, from my understanding, should
work now. I guess this is the next best thing without making all of these APIs
properly asynchronous.
pinging @zsxwi
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14710#discussion_r76348848
--- Diff:
yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -269,20 +258,22 @@ private[spark] abstract class YarnSched
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14638
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14638
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64440/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14638
**[Test build #64440 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64440/consoleFull)**
for PR 14638 at commit
[`3c9adb3`](https://github.com/apache/spark/commit/
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14802
It would be great if we could reuse the code in `FileStreamSinkLog` for both
`FileStreamSource` and `FileStreamSink`.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14710#discussion_r76347979
--- Diff:
core/src/main/scala/org/apache/spark/deploy/client/StandaloneAppClient.scala ---
@@ -220,19 +225,13 @@ private[spark] class StandaloneAppClient(
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14728
Looks pretty good. Just one comment about `Serializable`.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14749
@rxin It doesn't fail any tests. Found this issue while working on related
code path.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64446 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64446/consoleFull)**
for PR 14537 at commit
[`fc14e2d`](https://github.com/apache/spark/commit/f
Github user rajeshbalamohan commented on the issue:
https://github.com/apache/spark/pull/14537
Fixed the test case name. I haven't changed the parquet code path as I
wasn't sure on whether it would break any backward compatibility.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14814
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/6/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14814
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14814
**[Test build #6 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14814 at commit
[`46bf9ab`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14819
**[Test build #64445 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64445/consoleFull)**
for PR 14819 at commit
[`fda100f`](https://github.com/apache/spark/commit/f
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14819
cc @JoshRosen
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/14819
[SPARK-17246][SQL] Add BigDecimal literal
## What changes were proposed in this pull request?
This PR adds parser support for `BigDecimal` literals. If you append the
suffix `BD` to a valid
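The intended semantics of a `BD`-suffixed literal can be sketched in plain Scala: the value denotes an exact `BigDecimal` rather than a lossy `Double`. This is a minimal illustration only; `parseBDLiteral` is a hypothetical helper, not Spark's actual parser code.

```scala
// Hedged sketch (not Spark's parser): a decimal literal carrying a "BD"
// suffix denotes an exact BigDecimal value. `parseBDLiteral` is
// illustrative only.
def parseBDLiteral(raw: String): Option[BigDecimal] =
  if (raw.endsWith("BD")) Some(BigDecimal(raw.stripSuffix("BD")))
  else None

// A suffixed literal parses to an exact decimal; an unsuffixed one is
// left for other literal rules to handle.
assert(parseBDLiteral("7.21BD").contains(BigDecimal("7.21")))
assert(parseBDLiteral("7.21").isEmpty)
```

In Spark itself the suffix would be handled inside the SQL parser; the sketch only shows what value such a literal denotes.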
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
Condition 2 is not always true when condition 1 is `true`. I found an
exception.
```scala
val schema = StructType(StructField("b", StringType, true) :: Nil)
sparkS
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14818
We may want to use a different name: a glmnet-related name could be confusing
if it is actually only multiclass logistic regression.
---
If your project is set up for it, you can reply to this email and have y
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
Sounds good. That's also what we meant.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/13584
Yeah, I was going to say that we need to handle cases where `labels_output`
is also used. We can just add a numeric suffix, maybe?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14753
Thanks. Overall looks good. I am merging this to master. Let me tweak the
interface later.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14811
@mallman Could you please close this PR, since GitHub won't close it
automatically?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14814
**[Test build #6 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14814 at commit
[`46bf9ab`](https://github.com/apache/spark/commit/4
Github user Parth-Brahmbhatt commented on the issue:
https://github.com/apache/spark/pull/14817
@hvanhovell The behavior in case fallbackToHdfs is not enabled (and by
default it is not enabled, for performance reasons) is to return the value
specified via spark.sql.defaultSizeInBy
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
@shivaram Does it sound reasonable to you? Just discussed this with
@jkbradley.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14811
Thanks! Merging into branch 2.0.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64443 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64443/consoleFull)**
for PR 14816 at commit
[`8b57886`](https://github.com/apache/spark/commit/8
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64437/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14811
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64438/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14811
**[Test build #64438 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64438/consoleFull)**
for PR 14811 at commit
[`e44d943`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64437 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64437/consoleFull)**
for PR 14815 at commit
[`d0b1009`](https://github.com/apache/spark/commit/
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14816
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14816
test this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14710
**[Test build #64442 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64442/consoleFull)**
for PR 14710 at commit
[`380291b`](https://github.com/apache/spark/commit/3
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14710
wow a core dump in the build. retest this please
---
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/13584
sure I'll try to scan through all the mllib algorithms
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14813
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14818
**[Test build #64441 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64441/consoleFull)**
for PR 14818 at commit
[`d6dbff8`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14818
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64441/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14818
**[Test build #64441 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64441/consoleFull)**
for PR 14818 at commit
[`d6dbff8`](https://github.com/apache/spark/commit/d
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14818
Merged build finished. Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14813
Merging to master.
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76341601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14817
@Parth-Brahmbhatt we are currently working on Cost-Based Optimization in
Spark. An important input will be the actual size of the table. Having partial
statistics (what you are suggesting) will not
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14818
[SPARK-17157][SPARKR][WIP]: Add multiclass logistic regression SparkR
Wrapper
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
Add
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14809
This change is based on another PR:
https://github.com/apache/spark/pull/13060. If users specify the location in
`CREATE TABLE`, we always set the table type to `EXTERNAL`.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14793
@rxin How about this one? This is porting our existing test cases, instead
of HiveCompatibilitySuite
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14782
I see. Thanks! Let me close this one.
---
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/14782
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76340251
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -518,15 +573,30 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76340181
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -518,15 +573,30 @@ case class HashAggregateExec(
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/13584
@keypointt Can we keep searching (in a random or sequential way) until an
unused column name has been found?
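The suffix-search idea discussed here can be sketched as follows; `freshName` is a hypothetical helper for illustration, not code from the PR.

```scala
// Illustrative sketch of the suggestion: append an increasing numeric
// suffix until the candidate column name is unused. `freshName` is a
// hypothetical helper, not SparkR's actual code.
def freshName(base: String, taken: Set[String]): String =
  if (!taken.contains(base)) base
  else Iterator.from(1).map(i => s"${base}_$i").find(n => !taken.contains(n)).get

// The base name is returned untouched when it is free; otherwise the
// first unused suffixed variant is picked.
assert(freshName("labels", Set.empty) == "labels")
assert(freshName("labels", Set("labels", "labels_1")) == "labels_2")
```

A sequential suffix keeps the generated names deterministic, which is easier to test than a random one.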
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76339927
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64435/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Merged build finished. Test PASSed.
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76339716
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64435 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64435/consoleFull)**
for PR 14637 at commit
[`09f3197`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14638
**[Test build #64440 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64440/consoleFull)**
for PR 14638 at commit
[`3c9adb3`](https://github.com/apache/spark/commit/3
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14638
Hi, @rxin. Until now, I couldn't find a more suitable class in the `hive`
package. I added the logic to check the table's InputFormat. Now, the
`skip.header.line.count` option is applied for the table
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/14728#discussion_r76339023
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamOptions.scala
---
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apac
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14817
Can one of the admins verify this patch?
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338880
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -459,52 +475,91 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338652
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -279,9 +280,14 @@ case class HashAggregateExec(
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14176#discussion_r76338535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -279,9 +280,14 @@ case class HashAggregateExec(
GitHub user Parth-Brahmbhatt opened a pull request:
https://github.com/apache/spark/pull/14817
[SPARK-17247][SQL]: when calculating size of a relation from hdfs, th…
## What changes were proposed in this pull request?
when calculating size of a relation from hdfs, the size calc
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14815
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64436/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14815
**[Test build #64436 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64436/consoleFull)**
for PR 14815 at commit
[`6728fc3`](https://github.com/apache/spark/commit/
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14655
Thank you!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14815#discussion_r76336122
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1386,15 +1386,17 @@ object EliminateOuterJoin exten
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11192
Can one of the admins verify this patch?
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76335953
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any, d
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64439/consoleFull)**
for PR 14816 at commit
[`8b57886`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64439/
Test FAILed.
---
Github user Parth-Brahmbhatt commented on the issue:
https://github.com/apache/spark/pull/14655
@gatorsmile not sure if it will simplify much in this case, as most of the
complexity is in figuring out which partitions can be pruned, which I don't
think will go away. We will rely on hive
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14777
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14813
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14813
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64433/
Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14815#discussion_r76334971
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1386,15 +1386,17 @@ object EliminateOuterJoin exten
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76334925
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any, d
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14813
**[Test build #64433 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64433/consoleFull)**
for PR 14813 at commit
[`45cf302`](https://github.com/apache/spark/commit/
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14777
LGTM - merging to master/2.0. Thanks!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #64439 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64439/consoleFull)**
for PR 14816 at commit
[`8b57886`](https://github.com/apache/spark/commit/8
Github user Parth-Brahmbhatt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14720#discussion_r76334577
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -865,6 +865,16 @@ class HiveQuerySuite extends Hi
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76334586
--- Diff:
yarn/src/test/scala/org/apache/spark/security/IOEncryptionSuite.scala ---
@@ -0,0 +1,332 @@
+/*
+ * Licensed to the Apache Software Foundatio
Github user Parth-Brahmbhatt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14720#discussion_r76334535
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -87,6 +88,9 @@ private[hive] class HiveClientImpl(
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/8880
> They could be different sizes (different config options control that). We
could change it so both use the same configs / same code to generate the keys,
but in general if they're used for different
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14816
[SPARK-17245] [SQL] [BRANCH-1.6] Do not rely on Hive's session state to
retrieve HiveConf
## What changes were proposed in this pull request?
Right now, we rely on Hive's `SessionState.get()` to
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76334131
--- Diff:
yarn/src/test/scala/org/apache/spark/security/IOEncryptionSuite.scala ---
@@ -0,0 +1,332 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14777#discussion_r76332385
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -251,8 +251,21 @@ case class Literal (value: Any, da
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/8880#discussion_r76332258
--- Diff: docs/configuration.md ---
@@ -559,6 +559,39 @@ Apart from these, the following properties are also
available, and may be useful
spark.io.co
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/8880
> Why needs to generate a new key for IO encryption?
They could be different sizes (different config options control that). We
could change it so both use the same configs / same code to gener