LuciferYang commented on code in PR #43735:
URL: https://github.com/apache/spark/pull/43735#discussion_r1425221556
##
project/MimaExcludes.scala:
##
@@ -90,6 +90,16 @@ object MimaExcludes {
// SPARK-43299: Convert StreamingQueryException in Scala Client
vicennial commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1829823363
@fhalde Yes, with this PR, it would be possible to have isolated
classloaders per spark session on the executors without going through Spark
Connect.
The `withResources` method
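For illustration only, a minimal sketch of what per-session artifact isolation might look like after this PR, assuming the `addArtifact` API that the relocated `ArtifactManager` exposes on `SparkSession`; the session setup and jar paths below are hypothetical, not taken from the PR:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: two sessions share one SparkContext, but each keeps
// its own artifact state, assuming the post-PR SparkSession#addArtifact API.
val spark = SparkSession.builder().master("local[*]").getOrCreate()
val sessionA = spark.newSession()
val sessionB = spark.newSession()

// Artifacts added to sessionA are meant to be visible only to jobs run from
// sessionA; sessionB's tasks would resolve classes through their own
// isolated classloader on the executors. Paths are placeholders.
sessionA.addArtifact("/path/to/udfs-a.jar")
sessionB.addArtifact("/path/to/udfs-b.jar")
```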
fhalde commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1822029844
Hi, we are super interested in having the isolated-classloader-per-Spark-session
ability for our use case. I believe this is today only achievable if jobs are
run from a Connect client. We
HyukjinKwon commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1820076429
Merged to master.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
HyukjinKwon closed pull request #43735: [SPARK-45856] Move ArtifactManager from
Spark Connect into SparkSession (sql/core)
URL: https://github.com/apache/spark/pull/43735
vicennial commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1819895269
@dongjoon-hyun The CI is green now :)
vicennial commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1818584398
Hmm, JavaDocGeneration is failing in 2 tests and, from the logs, it's not
clear why...
I will merge master and see if that helps
vicennial commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1817850298
@dongjoon-hyun Thank you for the review!
I've updated the version and for the deprecated conf
`spark.connect.copyFromLocalToFs.allowDestLocal`, I've added it to the
dongjoon-hyun commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1817146711
Could you re-trigger the failed pipelines?
dongjoon-hyun commented on code in PR #43735:
URL: https://github.com/apache/spark/pull/43735#discussion_r1394847206
##
sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala:
##
@@ -243,6 +244,16 @@ class SparkSession private(
@Unstable
def streams:
vicennial commented on PR #43735:
URL: https://github.com/apache/spark/pull/43735#issuecomment-1810066835
PTAL @hvanhovell @HyukjinKwon
vicennial opened a new pull request, #43735:
URL: https://github.com/apache/spark/pull/43735
### What changes were proposed in this pull request?
The significant changes in this PR include:
- `SparkConnectArtifactManager` is renamed to `ArtifactManager` and