Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
got it. Thank you!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
@shaneknapp what was the version of pyarrow in that build? 0.8 or 0.10?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
@BryanCutler So, for this upgrade, even though the JVM-side dependency is 0.10, pyspark can work with any pyarrow version between 0.8 and 0.10 without problems?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/22003
@dongjoon-hyun no problem. Thank you!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/22003
lgtm. Merging to master.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207986831
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207962501
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207888608
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21865
lgtm. I am merging this PR to the master branch. Then, I will kick off
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-master-maven-snapshots
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21865
cc @HyukjinKwon @kiszk
I will merge this PR once it passes the test.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r16362
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165449847
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r165445947
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r165445232
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20465
So, jenkins jobs run those tests with python3? If so, I feel better because those tests are not completely skipped in Jenkins. If it is hard to make them run with python 2, let's have a log to
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20465
@felixcheung jenkins is actually skipping those tests (see the failure of
this pr). It makes sense to provide a way to allow developers to not run those
tests. But, I'd prefer that we run those
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165253818
--- Diff: python/pyspark/sql/tests.py ---
@@ -4353,6 +4347,446 @@ def test_unsupported_types(self):
df.groupby('id').apply(
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165253514
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165220142
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20037#discussion_r163463718
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -1271,7 +1271,7 @@ private[spark] object SparkSubmitUtils
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20110
Thank you! Let's also check the build result to make sure
`pyspark.streaming.tests.FlumePollingStreamTests` is indeed triggered (I hit
this issue while running this
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19535#discussion_r159019845
--- Diff: python/pyspark/streaming/flume.py ---
@@ -54,8 +54,13 @@ def createStream(ssc, hostname, port,
:param bodyDecoder: A function used to
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19535#discussion_r159013024
--- Diff: python/pyspark/streaming/flume.py ---
@@ -54,8 +54,13 @@ def createStream(ssc, hostname, port,
:param bodyDecoder: A function used to
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/5604#discussion_r157933488
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -0,0 +1,340 @@
+/*
+ * Licensed to the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
Thank you :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
I am not really worried about this particular change. It's already merged, and it seems like a small and safe change. I am not planning to revert it. But, in general, let's avoid merging
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
@HyukjinKwon branch-2.2 is a maintenance branch, so I am not sure it is appropriate to merge this change into it since it is not really a bug fix. If the doc is not accurate, we should fix the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19149
Can we add a test?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19080#discussion_r136214689
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
---
@@ -30,18 +30,43 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19080
I have a question after reading the new approach. Let's say that we have a join like `T1 JOIN T2 on T1.a = T2.a`. Also, `T1` is hash partitioned by the value of `T1.a` and it has 10 partitions, an
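For reference, a minimal spark-shell sketch of the partitioning scenario described above, assuming a running SparkSession named `spark`; the table shapes and column name `a` are illustrative:
```scala
import org.apache.spark.sql.functions.col

// T1: hash partitioned by the value of column `a` into 10 partitions.
val t1 = spark.range(100).select(col("id").as("a")).repartition(10, col("a"))
// T2: no particular partitioning on `a`.
val t2 = spark.range(100).select(col("id").as("a"))

// Whether the join can reuse T1's partitioning (rather than shuffling both
// sides) depends on how the planner matches each child's output partitioning
// against the join's required distribution; `explain` shows the chosen exchanges.
t1.join(t2, "a").explain()
```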
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18944
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
Thanks! I have merged this pr to branch-2.2.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
thanks! merging to branch-2.2
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18064
My suggestion was about moving the changes to the interfaces of ExecutedCommandExec and SaveIntoDataSourceCommand into separate PRs. That will help code review (both speed and quality).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18148
@vanzin Seems merging to branch-2.2 was an accident? Since it is not really a bug fix, should we revert it from branch-2.2 and just keep it in master?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18064
I just came across this PR. I have one piece of general feedback: it would be great if we could make each PR have a single purpose. This PR contains different kinds of changes in order to fix the UI. If refactoring
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18172
Reverting this because it breaks repl tests.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119938185
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -143,14 +144,29 @@ class SparkHadoopUtil extends Logging
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17763
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
I have reverted this change from both master and branch-2.2. I have
reopened the jira.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
I am going to revert this PR from master and branch-2.2.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
@maropu Sorry. I think this PR introduces a regression.
```
scala> spark.sql("select * from range(1, 10) cross join range(1, 10)").explain
==
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
I see. I think
https://github.com/apache/spark/pull/17905/commits/d4c1a9db25ee7386f7b12e4dabb54210a9892510
is good. How about we get it checked in first (after jenkins passes)?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@falaki's PR did not actually trigger that test.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@felixcheung you are right. That is the problem.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
I do not think https://github.com/apache/spark/pull/17649 caused the
problem. I saw failures without that internally.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
Thanks @falaki. Merging to master and branch-2.2.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
Seems the 2.2 build is fine. But I'd like to get this merged into branch-2.2 since this test will fail if any previous test leaks tables.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
@felixcheung fyi. I think the main problem with this test is that it will be broken if tests executed before this one leak any table. I think this change makes sense. I will merge it once it passes
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17892
@felixcheung Seems the master build is broken because the R tests are broken
(https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-sbt-hadoop-2.7/2844/console).
I am not sure
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17746
@dbtsai Thanks for the explanation and the context :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17746
Can I ask how we decided to merge this dependency change after the release branch was cut (especially since this change affects user code)?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17659
lgtm. Merging to master and branch-2.2.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17531
Thanks. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17531
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17423
got it. Thanks :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17423
@felixcheung `SparkContext.getOrCreate` is the preferred way to create a SparkContext. So, even if we have the check, it is still better to use `getOrCreate`.
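A minimal sketch of the point above (the app name and master are placeholders):
```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("example").setMaster("local[*]")

// getOrCreate returns the already-active SparkContext if one exists,
// and only constructs a new one otherwise.
val sc = SparkContext.getOrCreate(conf)

// By contrast, calling `new SparkContext(conf)` directly fails if a
// context is already active, which is why getOrCreate is preferred.
```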
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16952
LGTM. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17156
merged to branch-2.1
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17156
Let's also merge this to branch-2.1.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16917
Let's use a meaningful title in future :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
cool. It has been merged.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
Seems I cannot merge now... Will try again later.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
ok. Nothing new to add. I will merge this to master and branch-2.1 (in case
we want to debug any python test hanging issue in branch-2.1).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
Let's not merge it right now. I may need to log more.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16935
[SPARK-19604] [TESTS] Log the start of every Python test
## What changes were proposed in this pull request?
Right now, we only have an info-level log after we finish the tests of a Python
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16894
thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16067
@gatorsmile can we also add it to branch-2.0? Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16649
Cool. I am merging this to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16645
My main concern with this PR is that people may think it is recommended to add new batches to force those rules to run in a certain order. For these resolution rules, we can also use conditions
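A hypothetical sketch of that alternative: instead of adding a new batch to force ordering, a resolution rule can guard itself with a condition so it only fires once its prerequisites are met (`ResolveSomething` and the guard below are illustrative, not actual Spark rules):
```scala
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

object ResolveSomething extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transformUp {
    // Only touch an operator once its children are resolved, so the rule can
    // live in an existing fixed-point batch and still run at the right time.
    case p if p.childrenResolved =>
      p // the actual rewrite would go here
  }
}
```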
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16649
[SPARK-19295] [SQL] IsolatedClientLoader's downloadVersion should log the
location of downloaded metastore client jars
## What changes were proposed in this pull request?
This will help
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
nvm. On second thought, the feature flag does not really buy us anything. We just store the original view definition and the column mapping in the metastore. So, I think it is fine to just do the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
I am merging this to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
ok, I agree. Originally, I thought it would be helpful for figuring out which worker an executor belongs to. But if it does not provide very useful information, I am fine with dropping it.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
done
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
Is there a feature flag that determines whether we use this new approach? I feel it would be good to have an internal feature flag to choose the code path. So, if there is something wrong that
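A hypothetical sketch of such an internal flag; the config key, name, and default below are made up for illustration and only follow the general shape of Spark's SQLConf entries rather than quoting a real one:
```scala
import org.apache.spark.sql.internal.SQLConf

val USE_NEW_APPROACH = SQLConf
  .buildConf("spark.sql.someFeature.useNewApproach")  // hypothetical key
  .internal()
  .doc("When true, use the new code path; flip to false to fall back.")
  .booleanConf
  .createWithDefault(true)

// At the decision point the flag selects the code path, so the old path can
// be restored quickly if something goes wrong after release:
//   if (conf.getConf(USE_NEW_APPROACH)) newPath() else oldPath()
```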
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16517
Looks good to me. @gatorsmile can you explain your concerns? I am wondering what kinds of cases you think HiveFileFormat may not be able to handle.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96566857
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -276,40 +276,31 @@ case class InsertIntoHiveTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96566523
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -276,40 +276,31 @@ case class InsertIntoHiveTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96566290
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96566171
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,47 @@ class DetermineHiveSerde(conf: SQLConf) extends
Rule
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96549456
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
cc @lw-lin
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16628
Update known_translations for contributor names
## What changes were proposed in this pull request?
Update known_translations per
https://github.com/apache/spark/pull/16423#issuecomment
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16423
Sure.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96316695
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,47 @@ class DetermineHiveSerde(conf: SQLConf) extends
Rule
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96317272
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96317211
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96316863
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,47 @@ class DetermineHiveSerde(conf: SQLConf) extends
Rule
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96316302
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -108,35 +108,30 @@ class QueryExecution(val sparkSession
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96316982
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,47 @@ class DetermineHiveSerde(conf: SQLConf) extends
Rule
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96317150
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveFileFormat.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16528
looks good to me. If possible, I'd like to get
https://github.com/apache/spark/pull/16528/files#r96314156 reverted.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16528#discussion_r96314156
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateOrdering.scala
---
@@ -131,17 +131,15 @@ object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16561#discussion_r96313495
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/view.scala
---
@@ -28,22 +28,60 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16597
I am not sure if it is worth breaking this behavior. If the table is a managed table, it is possible that the existing behavior allows users to move a table from one managed location to another managed
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16568
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16233
@jiangxb1987 Once JIRA is back, let's create JIRAs to address the follow-up issues (probably you have already done that before JIRA went down).