[spark] branch master updated (f556946 -> f5360e7)

2020-09-06 Thread gengliang
This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from f556946  [SPARK-32800][SQL] Remove ExpressionSet from the 2.13 branch
 add f5360e7  [SPARK-32548][SQL] - Add Application attemptId support to SQL Rest API

No new revisions were added by this update.

Summary of changes:
 .../status/api/v1/sql/ApiSqlRootResource.scala |   9 +-
 .../spark/status/api/v1/sql/SqlResourceSuite.scala |   5 +-
 .../v1/sql/SqlResourceWithActualMetricsSuite.scala | 127 +
 3 files changed, 136 insertions(+), 5 deletions(-)
 create mode 100644 sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala
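SPARK-32548 adds an optional application attempt ID to the SQL status REST API, mirroring the other monitoring endpoints. A minimal sketch of how the resulting URLs are shaped (the helper function and host/port are illustrative, not part of the patch):

```python
def sql_endpoint(base, app_id, attempt_id=None):
    """Build the SQL monitoring REST path; the attempt segment is optional,
    matching the other /api/v1/applications endpoints."""
    parts = [base.rstrip("/"), "api/v1/applications", app_id]
    if attempt_id is not None:
        parts.append(attempt_id)
    parts.append("sql")
    return "/".join(parts)

print(sql_endpoint("http://localhost:18080", "app-123", "1"))
# http://localhost:18080/api/v1/applications/app-123/1/sql
```

Before this change, applications with multiple attempts (e.g. under YARN) could not be addressed per attempt through the SQL endpoint.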


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (f5360e7 -> de44e9c)

2020-09-06 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from f5360e7  [SPARK-32548][SQL] - Add Application attemptId support to SQL Rest API
 add de44e9c  [SPARK-32785][SQL] Interval with dangling parts should not results null

No new revisions were added by this update.

Summary of changes:
 docs/sql-migration-guide.md|   2 +
 .../spark/sql/catalyst/util/IntervalUtils.scala|   5 +-
 .../sql/catalyst/util/IntervalUtilsSuite.scala |  13 +++
 .../test/resources/sql-tests/inputs/interval.sql   |   8 ++
 .../sql-tests/results/ansi/interval.sql.out| 100 -
 .../resources/sql-tests/results/interval.sql.out   | 100 -
 6 files changed, 225 insertions(+), 3 deletions(-)
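SPARK-32785 makes interval parsing reject a trailing number with no unit (a "dangling part") instead of silently producing null. The idea can be sketched with a toy parser; the function and unit set below are illustrative, not Spark's actual `IntervalUtils` logic:

```python
import re

UNITS = {"year", "month", "week", "day", "hour", "minute", "second"}

def parse_interval(s):
    """Toy parser for '<number> <unit> ...' pairs. A trailing number with
    no unit (a "dangling part") raises an error rather than yielding None."""
    tokens = s.split()
    if len(tokens) % 2 != 0:
        raise ValueError(f"dangling part in interval: {s!r}")
    result = {}
    for num, unit in zip(tokens[::2], tokens[1::2]):
        unit = unit.rstrip("s")  # accept plural forms like 'hours'
        if not re.fullmatch(r"-?\d+", num) or unit not in UNITS:
            raise ValueError(f"bad interval component: {num} {unit}")
        result[unit] = result.get(unit, 0) + int(num)
    return result

parse_interval("1 day 2 hours")   # OK: {'day': 1, 'hour': 2}
```

With the fix, an input like `interval '1 day 2'` fails loudly at parse time instead of returning null downstream.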


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (de44e9c -> 05fcf26)

2020-09-06 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from de44e9c  [SPARK-32785][SQL] Interval with dangling parts should not results null
 add 05fcf26  [SPARK-32677][SQL] Load function resource before create

No new revisions were added by this update.

Summary of changes:
 python/pyspark/sql/tests/test_catalog.py |  5 +++--
 .../spark/sql/catalyst/catalog/SessionCatalog.scala  | 20 ++--
 .../spark/sql/execution/command/functions.scala  | 13 +
 .../test/resources/sql-tests/results/udaf.sql.out|  7 +--
 .../resources/sql-tests/results/udf/udf-udaf.sql.out |  7 +--
 .../spark/sql/execution/command/DDLSuite.scala   | 15 +++
 .../spark/sql/hive/HiveUDFDynamicLoadSuite.scala |  4 +++-
 7 files changed, 54 insertions(+), 17 deletions(-)
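SPARK-32677 reorders CREATE FUNCTION so that function resources (e.g. UDF jars) are loaded before the function is registered in the catalog, surfacing bad resources at creation time. A sketch of that ordering with a hypothetical toy catalog (none of these names are Spark APIs):

```python
class ToyCatalog:
    """Sketch of the fixed ordering: load resources *before* registering,
    so a failing resource never leaves a half-created function behind."""
    def __init__(self, loader):
        self.loader = loader
        self.functions = {}

    def create_function(self, name, resources):
        for r in resources:               # load first; may raise
            self.loader(r)
        self.functions[name] = resources  # register only after success

def jar_loader(path):
    if not path.endswith(".jar"):
        raise IOError(f"cannot load resource: {path}")

cat = ToyCatalog(jar_loader)
cat.create_function("f", ["udf.jar"])     # succeeds
try:
    cat.create_function("g", ["missing.txt"])
except IOError:
    pass
assert "g" not in cat.functions           # nothing half-registered
```

With the old order (register, then load), a bad resource could leave an unusable function entry in the catalog.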


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (05fcf26 -> b0322bf)

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 05fcf26  [SPARK-32677][SQL] Load function resource before create
 add b0322bf  [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

No new revisions were added by this update.

Summary of changes:
 .../src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
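The deadlock fixed here is the classic two-monitors-in-opposite-orders pattern: one thread held the external-catalog lock and waited for the session-catalog lock while another thread did the reverse. The shape of the fix, in which code running under the client lock no longer needs the session-catalog lock at all, can be sketched in Python; the lock names and function are illustrative stand-ins, not Spark APIs:

```python
import threading

# Stand-ins for the two monitors named in the deadlock report (sketch only):
session_lock = threading.Lock()   # HiveSessionCatalog
client_lock = threading.Lock()    # HiveExternalCatalog

def load_partition_fixed(table_name):
    """After the fix the shim calls Hive's overload that needs only the
    table name, so code running under client_lock never has to acquire
    session_lock -- the nested acquisition is gone and neither order of
    the two monitors can deadlock."""
    with client_lock:
        return f"loaded partition of {table_name}"

assert load_partition_fixed("t1") == "loaded partition of t1"
```

The general rule the fix restores: never acquire a second lock while holding one unless every code path takes the pair in the same order.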


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c2c7c9e  [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock
c2c7c9e is described below

commit c2c7c9ef78441682a585abb1dede9b668802a224
Author: sandeep.katta 
AuthorDate: Mon Sep 7 15:10:33 2020 +0900

[SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

### What changes were proposed in this pull request?

There is no need to use the database name in the `loadPartition` API of `Shim_v3_0` to get the Hive table: Hive provides an overloaded method that returns the table by table name alone. Using that API removes the dependency on `SessionCatalog` from the shim layer.

### Why are the changes needed?
To avoid deadlock when communicating with Hive metastore 3.1.x
```
Found one Java-level deadlock:
=============================
"worker3":
  waiting to lock monitor 0x7faf0be602b8 (object 0x0007858f85f0, a org.apache.spark.sql.hive.HiveSessionCatalog),
  which is held by "worker0"
"worker0":
  waiting to lock monitor 0x7faf0be5fc88 (object 0x000785c15c80, a org.apache.spark.sql.hive.HiveExternalCatalog),
  which is held by "worker3"

Java stack information for the threads listed above:
===================================================
"worker3":
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCurrentDatabase(SessionCatalog.scala:256)
  - waiting to lock <0x0007858f85f0> (a org.apache.spark.sql.hive.HiveSessionCatalog)
  at org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1332)
  at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:870)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$4459/1387095575.apply$mcV$sp(Unknown Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$2227/313239499.apply(Unknown Source)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
  - locked <0x000785ef9d78> (a org.apache.spark.sql.hive.client.IsolatedClientLoader)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
  at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:860)
  at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:911)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$Lambda$4457/2037578495.apply$mcV$sp(Unknown Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
  - locked <0x000785c15c80> (a org.apache.spark.sql.hive.HiveExternalCatalog)
  at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:890)
  at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.loadPartition(ExternalCatalogWithListener.scala:179)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadPartition(SessionCatalog.scala:512)
  at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:383)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  - locked <0x0007b1690ff8> (a org.apache.spark.sql.execution.command.ExecutedCommandExec)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
  at org.apache.spark.sql.Dataset$$Lambda$2084/428667685.apply(Unknown Source)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
  at org.apache.spark.sql.Dataset$$Lambda$2085/559530590.apply(Unknown Source)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
  at org.apache.spark.sql.execution.SQLExecution$$$Lambda$2093/139449177.apply(Unknown Source)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)

[spark] branch master updated (05fcf26 -> b0322bf)

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 05fcf26  [SPARK-32677][SQL] Load function resource before create
 add b0322bf  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock

No new revisions were added by this update.

Summary of changes:
 .../src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c2c7c9e  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock
c2c7c9e is described below

commit c2c7c9ef78441682a585abb1dede9b668802a224
Author: sandeep.katta 
AuthorDate: Mon Sep 7 15:10:33 2020 +0900

[SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in 
withClient flow, this leads to DeadLock

### What changes were proposed in this pull request?

No need of using database name in `loadPartition` API of `Shim_v3_0` to get 
the hive table, in hive there is a overloaded method which gives hive table 
using table name. By using this API dependency with `SessionCatalog` can be 
removed in Shim layer

### Why are the changes needed?
To avoid deadlock when communicating with Hive metastore 3.1.x
```
Found one Java-level deadlock:
=
"worker3":
  waiting to lock monitor 0x7faf0be602b8 (object 0x0007858f85f0, a 
org.apache.spark.sql.hive.HiveSessionCatalog),
  which is held by "worker0"
"worker0":
  waiting to lock monitor 0x7faf0be5fc88 (object 0x000785c15c80, a 
org.apache.spark.sql.hive.HiveExternalCatalog),
  which is held by "worker3"

Java stack information for the threads listed above:
===
"worker3":
  at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCurrentDatabase(SessionCatalog.scala:256)
  - waiting to lock <0x0007858f85f0> (a 
org.apache.spark.sql.hive.HiveSessionCatalog)
  at 
org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1332)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:870)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$4459/1387095575.apply$mcV$sp(Unknown
 Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$2227/313239499.apply(Unknown
 Source)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
  - locked <0x000785ef9d78> (a 
org.apache.spark.sql.hive.client.IsolatedClientLoader)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:860)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:911)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog$$Lambda$4457/2037578495.apply$mcV$sp(Unknown
 Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
  - locked <0x000785c15c80> (a 
org.apache.spark.sql.hive.HiveExternalCatalog)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:890)
  at 
org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.loadPartition(ExternalCatalogWithListener.scala:179)
  at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadPartition(SessionCatalog.scala:512)
  at 
org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:383)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  - locked <0x0007b1690ff8> (a 
org.apache.spark.sql.execution.command.ExecutedCommandExec)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
  at org.apache.spark.sql.Dataset$$Lambda$2084/428667685.apply(Unknown 
Source)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
  at org.apache.spark.sql.Dataset$$Lambda$2085/559530590.apply(Unknown 
Source)
  at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
  at 
org.apache.spark.sql.execution.SQLExecution$$$Lambda$2093/139449177.apply(Unknown
 Source)
  at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
  at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
   

[spark] branch master updated (05fcf26 -> b0322bf)

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 05fcf26  [SPARK-32677][SQL] Load function resource before create
 add b0322bf  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock

No new revisions were added by this update.

Summary of changes:
 .../src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c2c7c9e  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock
c2c7c9e is described below

commit c2c7c9ef78441682a585abb1dede9b668802a224
Author: sandeep.katta 
AuthorDate: Mon Sep 7 15:10:33 2020 +0900

[SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in 
withClient flow, this leads to DeadLock

### What changes were proposed in this pull request?

No need of using database name in `loadPartition` API of `Shim_v3_0` to get 
the hive table, in hive there is a overloaded method which gives hive table 
using table name. By using this API dependency with `SessionCatalog` can be 
removed in Shim layer

### Why are the changes needed?
To avoid deadlock when communicating with Hive metastore 3.1.x
```
Found one Java-level deadlock:
=
"worker3":
  waiting to lock monitor 0x7faf0be602b8 (object 0x0007858f85f0, a 
org.apache.spark.sql.hive.HiveSessionCatalog),
  which is held by "worker0"
"worker0":
  waiting to lock monitor 0x7faf0be5fc88 (object 0x000785c15c80, a 
org.apache.spark.sql.hive.HiveExternalCatalog),
  which is held by "worker3"

Java stack information for the threads listed above:
===
"worker3":
  at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCurrentDatabase(SessionCatalog.scala:256)
  - waiting to lock <0x0007858f85f0> (a 
org.apache.spark.sql.hive.HiveSessionCatalog)
  at 
org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1332)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:870)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$4459/1387095575.apply$mcV$sp(Unknown
 Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$2227/313239499.apply(Unknown
 Source)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
  - locked <0x000785ef9d78> (a 
org.apache.spark.sql.hive.client.IsolatedClientLoader)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
  at 
org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:860)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:911)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog$$Lambda$4457/2037578495.apply$mcV$sp(Unknown
 Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
  - locked <0x000785c15c80> (a 
org.apache.spark.sql.hive.HiveExternalCatalog)
  at 
org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:890)
  at 
org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.loadPartition(ExternalCatalogWithListener.scala:179)
  at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadPartition(SessionCatalog.scala:512)
  at 
org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:383)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  - locked <0x0007b1690ff8> (a 
org.apache.spark.sql.execution.command.ExecutedCommandExec)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
  at org.apache.spark.sql.Dataset$$Lambda$2084/428667685.apply(Unknown 
Source)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
  at org.apache.spark.sql.Dataset$$Lambda$2085/559530590.apply(Unknown 
Source)
  at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
  at 
org.apache.spark.sql.execution.SQLExecution$$$Lambda$2093/139449177.apply(Unknown
 Source)
  at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
  at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
   

[spark] branch master updated (05fcf26 -> b0322bf)

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 05fcf26  [SPARK-32677][SQL] Load function resource before create
 add b0322bf  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock

No new revisions were added by this update.

Summary of changes:
 .../src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.0 updated: [SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in withClient flow, this leads to DeadLock

2020-09-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new c2c7c9e  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock
c2c7c9e is described below

commit c2c7c9ef78441682a585abb1dede9b668802a224
Author: sandeep.katta 
AuthorDate: Mon Sep 7 15:10:33 2020 +0900

[SPARK-32779][SQL] Avoid using synchronized API of SessionCatalog in 
withClient flow, this leads to DeadLock

### What changes were proposed in this pull request?

There is no need to use the database name in the `loadPartition` API of `Shim_v3_0` to look up the Hive table: Hive provides an overloaded method that returns the table from the table name alone. Using that API removes the Shim layer's dependency on `SessionCatalog`.

### Why are the changes needed?
To avoid deadlock when communicating with Hive metastore 3.1.x
```
Found one Java-level deadlock:
=============================
"worker3":
  waiting to lock monitor 0x7faf0be602b8 (object 0x0007858f85f0, a org.apache.spark.sql.hive.HiveSessionCatalog),
  which is held by "worker0"
"worker0":
  waiting to lock monitor 0x7faf0be5fc88 (object 0x000785c15c80, a org.apache.spark.sql.hive.HiveExternalCatalog),
  which is held by "worker3"

Java stack information for the threads listed above:
===================================================
"worker3":
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCurrentDatabase(SessionCatalog.scala:256)
  - waiting to lock <0x0007858f85f0> (a org.apache.spark.sql.hive.HiveSessionCatalog)
  at org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1332)
  at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:870)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$4459/1387095575.apply$mcV$sp(Unknown Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$Lambda$2227/313239499.apply(Unknown Source)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
  - locked <0x000785ef9d78> (a org.apache.spark.sql.hive.client.IsolatedClientLoader)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
  at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:860)
  at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:911)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$Lambda$4457/2037578495.apply$mcV$sp(Unknown Source)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
  - locked <0x000785c15c80> (a org.apache.spark.sql.hive.HiveExternalCatalog)
  at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:890)
  at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.loadPartition(ExternalCatalogWithListener.scala:179)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadPartition(SessionCatalog.scala:512)
  at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:383)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  - locked <0x0007b1690ff8> (a org.apache.spark.sql.execution.command.ExecutedCommandExec)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
  at org.apache.spark.sql.Dataset$$Lambda$2084/428667685.apply(Unknown Source)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
  at org.apache.spark.sql.Dataset$$Lambda$2085/559530590.apply(Unknown Source)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
  at org.apache.spark.sql.execution.SQLExecution$$$Lambda$2093/139449177.apply(Unknown Source)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
```
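The report above is a classic lock-ordering deadlock: one thread nests the HiveSessionCatalog monitor inside the HiveExternalCatalog monitor while another thread nests them the other way around. A minimal Java sketch of the repaired pattern follows; the names `LockOrderingDemo`, `loadPartitionFixed`, `sessionCatalog`, and `externalCatalog` are hypothetical stand-ins for illustration, not Spark's API.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class LockOrderingDemo {
    // Hypothetical stand-ins for the two monitors in the report:
    // HiveSessionCatalog ("lock A") and HiveExternalCatalog ("lock B").
    static final Object sessionCatalog = new Object();
    static final Object externalCatalog = new Object();

    // Before the fix, code holding externalCatalog's monitor called a
    // synchronized SessionCatalog method (nesting A inside B) while another
    // thread nested B inside A, producing a wait-for cycle. After the fix,
    // the value that needs lock A is resolved *before* lock B is taken, so
    // no thread ever waits for a monitor while holding another one.
    static String loadPartitionFixed(String tableName) {
        final String db;
        synchronized (sessionCatalog) {      // hold A alone
            db = "default";                  // e.g. resolve the current database
        }
        synchronized (externalCatalog) {     // hold B alone; A already released
            return db + "." + tableName;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<String> t1 = () -> loadPartitionFixed("t1");
        Callable<String> t2 = () -> loadPartitionFixed("t2");
        Future<String> f1 = pool.submit(t1);
        Future<String> f2 = pool.submit(t2);
        // Both futures complete: with at most one monitor held at a time,
        // no wait-for cycle can form.
        System.out.println(f1.get(5, TimeUnit.SECONDS));  // default.t1
        System.out.println(f2.get(5, TimeUnit.SECONDS));  // default.t2
        pool.shutdown();
    }
}
```

The commit achieves the same effect more radically: by using Hive's table-name-only lookup, the `withClient` path no longer calls into `SessionCatalog` at all, so lock A is never needed while lock B is held.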

[spark] branch master updated (b0322bf -> 04f7f6da)

2020-09-06 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from b0322bf  [SPARK-32779][SQL] Avoid using synchronized API of 
SessionCatalog in withClient flow, this leads to DeadLock
 add 04f7f6da [SPARK-32748][SQL] Support local property propagation in 
SubqueryBroadcastExec

No new revisions were added by this update.

Summary of changes:
 .../sql/execution/SubqueryBroadcastExec.scala  | 16 --
 .../sql/internal/ExecutorSideSQLConfSuite.scala| 63 +-
 2 files changed, 72 insertions(+), 7 deletions(-)
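The SPARK-32748 summary above touches a general JVM pattern: thread-local state set by a submitting thread is invisible to pooled worker threads unless it is captured and re-installed explicitly. A minimal Java sketch of that capture-and-restore pattern, under the assumption of hypothetical names (`LocalPropertyPropagation`, `localProps`, `submitWithProps`) rather than Spark's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LocalPropertyPropagation {
    // Hypothetical stand-in for per-thread local properties
    // (e.g. what a context object might keep per caller thread).
    static final ThreadLocal<Map<String, String>> localProps =
            ThreadLocal.withInitial(HashMap::new);

    // Capture the caller's properties eagerly, install them in the pooled
    // worker thread before the task runs, and clean up afterwards so the
    // reused thread does not leak state into later tasks.
    static <T> Future<T> submitWithProps(ExecutorService pool, Callable<T> task) {
        Map<String, String> captured = new HashMap<>(localProps.get()); // caller thread
        return pool.submit(() -> {
            localProps.set(captured);   // worker now sees the caller's properties
            try {
                return task.call();
            } finally {
                localProps.remove();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        localProps.get().put("spark.sql.execution.id", "42");
        Future<String> f = submitWithProps(pool,
                () -> localProps.get().get("spark.sql.execution.id"));
        System.out.println(f.get());  // 42, even though the task ran on a pool thread
        pool.shutdown();
    }
}
```

An `InheritableThreadLocal` would not help here: pool threads are created once and then reused, so inheritance at thread-creation time misses properties set by later callers.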


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org


