Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17312
Can you put a screenshot here? Might actually be useful to have.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17318
Can you put the after exception in the pr description as well?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17337
Merging in master/branch-2.1.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17330#discussion_r106758290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala
---
@@ -61,6 +63,36 @@ abstract class SubqueryExpression
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16483
Merging in master. Thanks!
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17322
[SPARK-19987][SQL] Pass all filters into FileIndex
## What changes were proposed in this pull request?
This is a tiny teeny refactoring to pass data filters also to the
FileIndex, so FileIndex
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17191
I personally have run into this issue and was surprised that we didn't
support it ... it's pretty verbose to retype everything.
If Postgres and MySQL both support it, I think we should do
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17303
Yes it'd be nice to have some benchmark on this.
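A benchmark result of the kind asked for above takes only a few lines to produce. The sketch below uses Python's timeit with a hypothetical stand-in workload; a real measurement would of course run the code path under review:

```python
import timeit

# Hypothetical stand-in for the code path under review; the point is the
# shape of a reproducible benchmark report, not these particular numbers.
def candidate(xs):
    return sorted(xs)

data = list(range(10_000, 0, -1))
runs = 100
elapsed = timeit.timeit(lambda: candidate(data), number=runs)
print(f"candidate: {elapsed / runs * 1e3:.3f} ms/iteration over {runs} runs")
```

Pasting the printed line, along with hardware and JVM details, into the PR description is usually enough for reviewers.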
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17273
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17304
Merging in master.
Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/17301
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17166
hm it might be useful to have details, but it'd also be useful to have this
in the overview page without having to drill down. iiuc, the pr already has the
information in the task list page, doesn't
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17301
[SPARK-19944][SQL] Move SQLConf from sql/core to sql/catalyst (branch-2.1)
## What changes were proposed in this pull request?
This patch moves SQLConf from sql/core to sql/catalyst. To minimize
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17273
I'd fix the log msg instead.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17292#discussion_r106093910
--- Diff: core/src/test/scala/org/apache/spark/SparkContextSuite.scala ---
@@ -537,6 +539,21 @@ class SparkContextSuite extends SparkFunSuite
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17264
In the future can we put the perf result in PR descriptions?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17285#discussion_r105976759
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SimpleCatalystConf.scala
---
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17285
[SPARK-19944][SQL] Move SQLConf from sql/core to sql/catalyst
## What changes were proposed in this pull request?
This patch moves SQLConf from sql/core to sql/catalyst. To minimize the
changes
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16541
I didn't look into the details here, but very often scanning data twice
doesn't necessarily slow things down, especially in the case of sequential
scan.
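The claim above — that a second sequential scan is often much cheaper than the first — can be illustrated with a small sketch. The file size and read-buffer size here are arbitrary choices, and on a machine with a cold cache or slow disk the numbers will differ:

```python
import os
import tempfile
import time

# Write ~8 MB once, then scan it twice sequentially. The second pass is
# typically served largely from the OS page cache, so "scanning data twice"
# usually costs far less than 2x the first scan.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * (8 * 1024 * 1024))
    path = f.name

def scan(p):
    start = time.perf_counter()
    with open(p, "rb") as fh:
        while fh.read(1 << 20):  # 1 MB sequential reads until EOF
            pass
    return time.perf_counter() - start

first, second = scan(path), scan(path)
os.unlink(path)
print(f"first scan: {first * 1e3:.1f} ms, second scan: {second * 1e3:.1f} ms")
```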
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16826#discussion_r105506911
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
@@ -17,43 +17,70 @@
package org.apache.spark.sql.internal
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17241#discussion_r105453191
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -595,6 +594,11 @@ class Analyzer(
case view
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17241
SGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17244
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17220
I don't think you understand this. This value is here so if at some point
some user picked tungsten-sort, we won't break it. In recent versions of Spark
the default sort manager accomplishes the thing
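The backward-compatibility point above can be made concrete. In Spark, both "sort" and "tungsten-sort" resolve to the same SortShuffleManager; the Python below is an illustrative sketch of that aliasing pattern, not Spark's actual (Scala) code:

```python
# Sketch (not Spark's real code) of keeping a legacy config value as an
# alias so that old user configurations keep working after a refactoring.
SHUFFLE_MANAGER_ALIASES = {
    "sort": "org.apache.spark.shuffle.sort.SortShuffleManager",
    # Kept for backward compatibility: users who once set "tungsten-sort"
    # must not break; it resolves to the same modern implementation.
    "tungsten-sort": "org.apache.spark.shuffle.sort.SortShuffleManager",
}

def resolve_shuffle_manager(name: str) -> str:
    """Resolve a short alias to a fully qualified class name; pass through
    anything that is not a known alias (e.g. a custom class name)."""
    return SHUFFLE_MANAGER_ALIASES.get(name.lower(), name)
```

Deleting the legacy key would silently break any deployment whose config still says "tungsten-sort", which is why the entry stays even though it looks redundant.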
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17220
If anything, we should just update the file to add a line of comment to
make sure people don't delete this in the future.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17220
Is this change even correct? This is here for backward compatibility.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17202#discussion_r104983300
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -576,6 +576,11 @@ class Dataset[T] private[sql](
val parsedDelay
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17202#discussion_r104983221
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -563,7 +563,7 @@ class Dataset[T] private[sql](
* @param eventTime the name
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17205
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17205
LGTM too
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17184#discussion_r104845661
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -897,41 +898,52 @@ public long toLong() {
break
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17184
I believe IBM J9 actually improved this specific case (their JIT handles
tons of exceptions better). Oh well -- if only the JIT were perfect.
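The JIT point above concerns exception-heavy parsing such as UTF8String.toLong. The sketch below contrasts the two styles in Python — exception-driven versus validate-first; note that Python's int is arbitrary precision, so this illustrates the control-flow difference only, not Java's overflow handling:

```python
# Two styles of parsing an integer from untrusted input. JITs differ in how
# cheaply they handle the exception-heavy path when many inputs fail to
# parse, which is why rewrites like the UTF8String.toLong one avoid throwing.

def parse_long_exceptions(s):
    """Exception-driven: let the failure path raise and catch it."""
    try:
        return int(s)
    except ValueError:
        return None

def parse_long_checked(s):
    """Validate-first: no exception is raised on malformed input."""
    t = s[1:] if s[:1] in ("+", "-") else s
    if t.isdigit():
        return int(s)
    return None
```

Both return the same results; the difference is that the second never pays the cost of constructing and unwinding an exception on the malformed-input path.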
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17184#discussion_r104841789
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -897,41 +898,52 @@ public long toLong() {
break
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17184#discussion_r104841761
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -897,41 +898,52 @@ public long toLong() {
break
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17184#discussion_r104841735
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -850,26 +850,27 @@ public UTF8String translate(Map<Charac
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17196#discussion_r104804384
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FilePartitionStrategy.scala
---
@@ -0,0 +1,156 @@
+/*
+ * Licensed
Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/16958
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17196#discussion_r104798525
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FilePartitionStrategy.scala
---
@@ -0,0 +1,156 @@
+/*
+ * Licensed
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17196
[SPARK-19855][SQL] Create an internal FilePartitionStrategy interface
## What changes were proposed in this pull request?
The way we currently do file partitioning strategy is hard coded
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104595706
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2250,6 +2250,25 @@ class SparkContext(config: SparkConf) extends
Logging
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104593920
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedClusterMessage.scala
---
@@ -40,7 +40,8 @@ private[spark] object
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104593825
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -732,6 +732,13 @@ class DAGScheduler
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104593790
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -158,7 +158,8 @@ private[spark] class Executor(
threadPool.execute(tr
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104593710
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2250,6 +2250,25 @@ class SparkContext(config: SparkConf) extends
Logging
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17166#discussion_r104593724
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2250,6 +2250,25 @@ class SparkContext(config: SparkConf) extends
Logging
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15928
What do you mean? The improvement was small?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17114
Put the test case in a sql file?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17099#discussion_r103501851
--- Diff: sql/core/src/test/resources/sql-tests/inputs/inner-join.sql ---
@@ -0,0 +1,25 @@
+CREATE TEMPORARY VIEW t1 AS SELECT * FROM VALUES (1
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17049
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17053#discussion_r102889140
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -251,7 +251,8 @@ abstract class ExternalCatalog
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17049
Looks good except that comment.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17049#discussion_r102881054
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -71,6 +75,242 @@ class HashExpressionsSuite
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17002
Yea @gatorsmile, be careful in the future and check the commit hash.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17002#discussion_r102070142
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -95,16 +95,26 @@ class SparkSession private(
/**
* State
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17002
[SPARK-19669][SQL] Open up visibility for sharedState, sessionState, and a
few other functions
## What changes were proposed in this pull request?
To ease debugging, most of Spark SQL internals
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16977
Are tests flaky right now? Otherwise it seems like this has introduced a
legitimate issue with the test timing out. Three times in a row.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16960
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16960
cc @hvanhovell if you have a min to review this ...
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16960#discussion_r101575264
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala
---
@@ -309,4 +314,84 @@ class SQLMetricsSuite extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16960#discussion_r101575199
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala
---
@@ -309,4 +314,84 @@ class SQLMetricsSuite extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16958
So nice when I got two LGTMs and then Jenkins disagreed.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16826
What's WIP about this?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16611
For SQL, rather than "array", can we follow Python, e.g.
```
CREATE TEMPORARY TABLE tableA USING csv
OPTIONS (nullValue ['NA', 'null'], ...)
```
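The Python behavior rxin is alluding to can be sketched with pandas, whose na_values parameter accepts a list of sentinel strings; the sample CSV below is hypothetical:

```python
import io

import pandas as pd

# Treat both "NA" and "null" as missing, mirroring the proposed
# OPTIONS (nullValue ['NA', 'null'], ...) syntax. keep_default_na=False
# makes the result depend only on the explicit list.
csv_text = "name,score\nalice,10\nbob,NA\ncarol,null\n"
df = pd.read_csv(
    io.StringIO(csv_text),
    na_values=["NA", "null"],
    keep_default_na=False,
)
print(df["score"].isna().sum())  # both sentinel strings became NaN
```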
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16611#discussion_r101553890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -97,6 +99,15 @@ class DataFrameReader private[sql](sparkSession
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16534
Change looks good to me but I didn't look super carefully.
@holdenk can you take a look at this?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16958
cc @hvanhovell @bogdanrdc
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16958
[SPARK-13721][SQL] Make GeneratorOuter unresolved.
## What changes were proposed in this pull request?
This is a small change to make GeneratorOuter always unresolved. It is
mostly no-op change
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16956#discussion_r101530187
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveHints.scala
---
@@ -54,10 +54,6 @@ object ResolveHints
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16943
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16941
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16941#discussion_r101329235
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala
---
@@ -524,7 +530,7 @@ class PlanParserSuite extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16940
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101289645
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -374,6 +374,16 @@ querySpecification
windows
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101289574
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -374,6 +374,16 @@ querySpecification
windows
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101288304
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -374,6 +374,16 @@ querySpecification
windows
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16920
Yea the only issue is that it requires another manual update. Why not use
the chrome plugin I sent?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16940
LGTM (pending Jenkins).
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16920
Why is this necessary? It seems like an extra step needed and doesn't
provide any real information.
I suggest you use this:
https://chrome.google.com/webstore/detail/jirafy
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16939
[SPARK-16475][SQL] broadcast hint for SQL queries - follow up
## What changes were proposed in this pull request?
A small update to https://github.com/apache/spark/pull/16925
1. Rename
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16925
the latest commit hasn't finished running tests yet ... but probably fine
given the small change.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101137229
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/SubstituteHintsSuite.scala
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101129634
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteHints.scala
---
@@ -0,0 +1,103 @@
+/*
+ * Licensed
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101129594
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteHints.scala
---
@@ -0,0 +1,103 @@
+/*
+ * Licensed
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101129453
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/SubstituteHintsSuite.scala
---
@@ -0,0 +1,123 @@
+/*
+ * Licensed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16925
cc @dongjoon-hyun, @cloud-fan, @gatorsmile and @hvanhovell This should be
ready for review. Note that the semantics is different from the earlier
versions.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101088496
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/SubstituteHints.scala
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16925
Actually I'm going to completely rewrite this. I don't think the current
implementation makes sense.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/16925
[SPARK-16475][SQL] Broadcast Hint for SQL Queries
## What changes were proposed in this pull request?
This PR aims to achieve the following two goals in Spark SQL.
1. Generic Hint
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14426
Actually I have some time. I will submit a pr based on this.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14426
@dongjoon-hyun do you have time to update the pull request now that the view
canonicalization work is done? Basically we can remove all the SQL generation
stuff.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16914
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16914
LGTM pending jenkins.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16872#discussion_r100789955
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DataFrameRangeSuite.scala ---
@@ -127,4 +133,28 @@ class DataFrameRangeSuite extends QueryTest
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16888
Are there specific benefits brought by updating to 4.1 of Netty? Netty is
so core to Spark that any bug in it would be extremely difficult to debug (yes,
we have found bugs in Netty and helped fix
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16386#discussion_r100687458
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -48,69 +47,110 @@ class JacksonParser
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16887
Merging in master!
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16888
BTW for Netty we shouldn't just bump to the highest version. We should use
the maintenance branches.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16888
Shouldn't we use netty-4.0.44.Final rather than 4.1.x?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
Yea we should fix that.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16664
Actually @cloud-fan are you sure it is a problem right now?
DataSource.write itself creates the commands, and if the information is
propagated correctly, the QueryExecution object should have