HyukjinKwon closed pull request #42989: [SPARK-45210][DOCS][3.4] Switch
languages consistently across docs for all code snippets (Spark 3.4 and below)
URL: https://github.com/apache/spark/pull/42989
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
HyukjinKwon commented on PR #42989:
URL: https://github.com/apache/spark/pull/42989#issuecomment-1724870114
Merged to branch-3.4, branch-3.3, branch-3.2, and branch-3.1.
panbingkun commented on PR #42917:
URL: https://github.com/apache/spark/pull/42917#issuecomment-1724869552
> looks much better now, thanks for your patience!
Thank you very much for patiently reviewing the code. ❤️
panbingkun commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329599785
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -860,6 +860,50 @@
"Exceeds char/varchar type length limitation: ."
]
},
+
panbingkun commented on PR #42990:
URL: https://github.com/apache/spark/pull/42990#issuecomment-1724855910
+1, LGTM.
rednaxelafx commented on PR #42988:
URL: https://github.com/apache/spark/pull/42988#issuecomment-1724837849
One important aspect of flame graphs is the semantics of the "width" of the
bars. It can be defined to mean anything, e.g. aggregated profiling ticks (i.e.
number of samples) or wall
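The sample-count interpretation of bar width mentioned above can be sketched with the folded-stack format commonly fed to flame graph renderers, where a frame's width is the number of samples whose stack contains it. This is a minimal illustration, not Spark's or any profiler's actual code; all names are made up:

```python
from collections import Counter

def fold_stacks(samples):
    """Collapse raw stack samples into folded-stack lines.

    Each sample is a list of frames, root first. The count appended to
    each folded line is what a flame graph renders as bar width when
    width means "number of samples" rather than wall time.
    """
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {n}" for stack, n in sorted(counts.items())]

# Hypothetical samples from a periodic stack-trace poll.
samples = [
    ["main", "runTask", "compute"],
    ["main", "runTask", "compute"],
    ["main", "runTask", "shuffle"],
]
for line in fold_stacks(samples):
    print(line)
# prints:
# main;runTask;compute 2
# main;runTask;shuffle 1
```

Under this definition a bar's width reflects how often a frame was on-CPU when sampled, which is subtly different from wall-clock time spent in it.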
zhengruifeng commented on PR #42990:
URL: https://github.com/apache/spark/pull/42990#issuecomment-1724834827
LGTM
rangadi commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329577300
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingQueryListenerHelper.scala:
##
@@ -76,16 +78,21 @@ class
bogao007 commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329574367
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingQueryListenerHelper.scala:
##
@@ -76,16 +78,21 @@ class
dongjoon-hyun closed pull request #42914: [SPARK-44910][SQL][3.4] Encoders.bean
does not support superclasses with generic type arguments
URL: https://github.com/apache/spark/pull/42914
dongjoon-hyun commented on PR #42914:
URL: https://github.com/apache/spark/pull/42914#issuecomment-1724820449
Merged to branch-3.4.
dongjoon-hyun commented on PR #42634:
URL: https://github.com/apache/spark/pull/42634#issuecomment-1724819818
Merged to master/3.5.
dongjoon-hyun closed pull request #42634: [SPARK-44910][SQL] Encoders.bean does
not support superclasses with generic type arguments
URL: https://github.com/apache/spark/pull/42634
dongjoon-hyun closed pull request #42956: [SPARK-43654][CONNECT][PS][TESTS]
Enable `InternalFrameParityTests.test_from_pandas`
URL: https://github.com/apache/spark/pull/42956
yaooqinn commented on PR #42988:
URL: https://github.com/apache/spark/pull/42988#issuecomment-1724810773
This PR mainly focuses on the UI, independent of the profiling steps. What
we might add in the future:
- Flame Graph Support For Task Thread Page, which
LuciferYang opened a new pull request, #42990:
URL: https://github.com/apache/spark/pull/42990
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
###
mridulm commented on PR #42988:
URL: https://github.com/apache/spark/pull/42988#issuecomment-1724800071
The UI looks nice! Thanks for working on this @yaooqinn :-)
My main concern is around effectively capturing stack frames without
safepoint bias, correlating it to the specific
sunchao commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329548074
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
allisonwang-db commented on code in PR #42949:
URL: https://github.com/apache/spark/pull/42949#discussion_r1329547039
##
python/pyspark/sql/connect/client/artifact.py:
##
@@ -243,11 +244,15 @@ def _create_requests(
self, *path: str, pyfile: bool, archive: bool, file:
LuciferYang commented on PR #42981:
URL: https://github.com/apache/spark/pull/42981#issuecomment-1724793150
Thanks @juliuszsompolski
ConeyLiu commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329541422
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
ConeyLiu commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329539929
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
zhengruifeng commented on PR #42988:
URL: https://github.com/apache/spark/pull/42988#issuecomment-1724783234
awesome!
HyukjinKwon commented on PR #42988:
URL: https://github.com/apache/spark/pull/42988#issuecomment-1724783071
Looks cool. cc @mridulm FYI
LuciferYang commented on PR #42981:
URL: https://github.com/apache/spark/pull/42981#issuecomment-1724777489
connect module test success with Scala 2.12 with this pr:
https://github.com/LuciferYang/spark/runs/16908220090
cloud-fan commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329529759
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
cloud-fan commented on PR #42917:
URL: https://github.com/apache/spark/pull/42917#issuecomment-1724772520
looks much better now, thanks for your patience!
yaooqinn commented on PR #42969:
URL: https://github.com/apache/spark/pull/42969#issuecomment-1724771959
cc @sarutak @HyukjinKwon @dongjoon-hyun thanks
cloud-fan commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329519381
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -860,6 +860,50 @@
"Exceeds char/varchar type length limitation: ."
]
},
+
HyukjinKwon commented on PR #42989:
URL: https://github.com/apache/spark/pull/42989#issuecomment-1724767346
cc @gengliangwang @panbingkun @sarutak @zhengruifeng FYI
HyukjinKwon opened a new pull request, #42989:
URL: https://github.com/apache/spark/pull/42989
### What changes were proposed in this pull request?
This PR proposes to recover the ability to switch languages consistently
across docs for all code snippets in Spark 3.4 and below by
cloud-fan commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329515855
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -860,6 +860,50 @@
"Exceeds char/varchar type length limitation: ."
]
},
+
yaooqinn commented on PR #42982:
URL: https://github.com/apache/spark/pull/42982#issuecomment-1724765818
The job containing lint-js passed.
Thanks @sarutak @dongjoon-hyun @HyukjinKwon , merged to master
yaooqinn closed pull request #42982: [SPARK-45202][BUILD] Fix lint-js tool and
js format
URL: https://github.com/apache/spark/pull/42982
mridulm commented on code in PR #42357:
URL: https://github.com/apache/spark/pull/42357#discussion_r1329512411
##
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:
##
@@ -533,9 +536,12 @@ private[spark] class Client(
// If preload is enabled,
yaooqinn opened a new pull request, #42988:
URL: https://github.com/apache/spark/pull/42988
### What changes were proposed in this pull request?
This PR draws a CPU flame graph from Java stack traces for executors and
drivers. Currently, the Java stack traces are just a
panbingkun commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329502401
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -1142,7 +1142,7 @@ class Analyzer(override val catalogManager:
panbingkun commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329502973
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -1142,7 +1142,7 @@ class Analyzer(override val catalogManager:
ConeyLiu commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329489593
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
yaooqinn commented on code in PR #42982:
URL: https://github.com/apache/spark/pull/42982#discussion_r1329460092
##
dev/lint-js:
##
@@ -44,8 +44,14 @@ if ! npm ls eslint > /dev/null; then
npm ci eslint
fi
-npx eslint -c "$SPARK_ROOT_DIR/dev/eslint.json" $LINT_TARGET_FILES
yaooqinn commented on code in PR #42982:
URL: https://github.com/apache/spark/pull/42982#discussion_r1329459486
##
dev/lint-js:
##
@@ -44,8 +44,14 @@ if ! npm ls eslint > /dev/null; then
npm ci eslint
fi
-npx eslint -c "$SPARK_ROOT_DIR/dev/eslint.json" $LINT_TARGET_FILES
rangadi commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329448797
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingForeachBatchHelper.scala:
##
@@ -125,8 +128,21 @@ object
ulysses-you commented on code in PR #42967:
URL: https://github.com/apache/spark/pull/42967#discussion_r1329450429
##
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryRelation.scala:
##
@@ -264,8 +269,7 @@ case class CachedRDDBuilder(
}
private
cloud-fan commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329447311
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
bogao007 commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329447168
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingQueryListenerHelper.scala:
##
@@ -76,16 +78,21 @@ class
copperybean commented on PR #42495:
URL: https://github.com/apache/spark/pull/42495#issuecomment-1724694181
@cloud-fan @wangyum Could you review this PR, please?
itholic commented on PR #42793:
URL: https://github.com/apache/spark/pull/42793#issuecomment-1724692463
Thanks all!
WweiL commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329441472
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingQueryListenerHelper.scala:
##
@@ -76,16 +78,21 @@ class
HeartSaVioR commented on PR #42895:
URL: https://github.com/apache/spark/pull/42895#issuecomment-1724683270
Maybe this is your first time contributing to Apache Spark? If so,
congrats on your first contribution!
https://spark.apache.org/contributing.html
Please check the
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329427441
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##
@@ -52,11 +52,40 @@ class MicroBatchExecution(
@volatile
panbingkun commented on code in PR #42917:
URL: https://github.com/apache/spark/pull/42917#discussion_r1329425447
##
common/utils/src/main/resources/error/error-classes.json:
##
@@ -860,6 +860,35 @@
"Exceeds char/varchar type length limitation: ."
]
},
+
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329424458
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##
@@ -52,11 +52,40 @@ class MicroBatchExecution(
@volatile
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329424143
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/memory.scala:
##
@@ -201,7 +207,15 @@ case class MemoryStream[A : Encoder](
override def
bogao007 commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329422764
##
python/pyspark/sql/connect/streaming/worker/foreach_batch_worker.py:
##
@@ -69,8 +73,32 @@ def process(df_id, batch_id): # type: ignore[no-untyped-def]
while
anishshri-db commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329420413
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/AvailableNowDataStreamWrapper.scala:
##
@@ -28,6 +28,12 @@ import
cloud-fan commented on code in PR #42971:
URL: https://github.com/apache/spark/pull/42971#discussion_r1329419904
##
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala:
##
@@ -1410,6 +1410,21 @@ class AnalysisSuite extends AnalysisTest with
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329418792
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/AvailableNowDataStreamWrapper.scala:
##
@@ -28,6 +28,12 @@ import
anishshri-db commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329417880
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##
@@ -52,11 +52,40 @@ class MicroBatchExecution(
@volatile
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329417758
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -2180,6 +2180,17 @@ object SQLConf {
.booleanConf
itholic commented on code in PR #40642:
URL: https://github.com/apache/spark/pull/40642#discussion_r1329417329
##
python/pyspark/errors/error_classes.py:
##
@@ -24,6 +24,21 @@
"Argument `` is required when ."
]
},
+ "CANNOT_ACCESS_TO_DUNDER": {
Review Comment:
Hisoka-X commented on PR #42960:
URL: https://github.com/apache/spark/pull/42960#issuecomment-1724658573
Thanks @MaxGekk @dongjoon-hyun
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329416576
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -2180,6 +2180,17 @@ object SQLConf {
.booleanConf
HeartSaVioR commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329415427
##
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##
@@ -2180,6 +2180,17 @@ object SQLConf {
.booleanConf
github-actions[bot] closed pull request #41498: [SPARK-44001][Protobuf] spark
protobuf: handle well known wrapper types
URL: https://github.com/apache/spark/pull/41498
zhengruifeng commented on PR #42977:
URL: https://github.com/apache/spark/pull/42977#issuecomment-1724649338
thank you guys!
sunchao commented on code in PR #42612:
URL: https://github.com/apache/spark/pull/42612#discussion_r1329406870
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala:
##
@@ -279,7 +283,9 @@ case class StaticInvoke(
inputTypes:
HyukjinKwon closed pull request #42968: [SPARK-45113][PYTHON][DOCS][FOLLOWUP]
Add sorting to the example of `collect_set/collect_list` to ensure stable
results
URL: https://github.com/apache/spark/pull/42968
HyukjinKwon commented on PR #42968:
URL: https://github.com/apache/spark/pull/42968#issuecomment-1724630071
Merged to master.
HyukjinKwon commented on PR #42986:
URL: https://github.com/apache/spark/pull/42986#issuecomment-1724629485
Otherwise looks sane to me. cc @ueshin
HyukjinKwon commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329393005
##
python/pyspark/sql/connect/streaming/worker/foreach_batch_worker.py:
##
@@ -69,8 +73,32 @@ def process(df_id, batch_id): # type: ignore[no-untyped-def]
HyukjinKwon commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329392786
##
python/pyspark/sql/connect/streaming/worker/foreach_batch_worker.py:
##
@@ -69,8 +73,32 @@ def process(df_id, batch_id): # type: ignore[no-untyped-def]
HyukjinKwon commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329392064
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/StreamingForeachBatchHelper.scala:
##
@@ -125,8 +128,21 @@ object
HyukjinKwon commented on code in PR #42949:
URL: https://github.com/apache/spark/pull/42949#discussion_r1329390408
##
python/pyspark/sql/connect/client/logging.py:
##
@@ -0,0 +1,42 @@
+import logging
Review Comment:
Seems like linter complains that there's no license header
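The header the linter is asking for is the standard ASF license boilerplate that Spark's Python sources carry at the top of each file; adding it as the first lines of `logging.py` would satisfy the check:

```python
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```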
HyukjinKwon commented on PR #42965:
URL: https://github.com/apache/spark/pull/42965#issuecomment-1724619612
Merged to branch-3.5 too.
HyukjinKwon closed pull request #42973: [SPARK-45167][CONNECT][PYTHON][3.5]
Python client must call `release_all`
URL: https://github.com/apache/spark/pull/42973
heyihong opened a new pull request, #42987:
URL: https://github.com/apache/spark/pull/42987
### What changes were proposed in this pull request?
### Why are the changes needed?
### Does this PR introduce _any_ user-facing change?
### How
HyukjinKwon commented on PR #42973:
URL: https://github.com/apache/spark/pull/42973#issuecomment-1724618155
Merged to branch-3.5.
HyukjinKwon closed pull request #42910: [SPARK-45133][CONNECT][TESTS][FOLLOWUP]
Add test that queries transition to FINISHED
URL: https://github.com/apache/spark/pull/42910
HyukjinKwon commented on PR #42910:
URL: https://github.com/apache/spark/pull/42910#issuecomment-1724617224
Merged to master.
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329325417
##
connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/GrpcExceptionConverter.scala:
##
@@ -26,47 +26,131 @@ import
dongjoon-hyun commented on PR #42793:
URL: https://github.com/apache/spark/pull/42793#issuecomment-1724614064
Thank you, @itholic and all!
dongjoon-hyun closed pull request #42793: [SPARK-45065][PYTHON][PS] Support
Pandas 2.1.0
URL: https://github.com/apache/spark/pull/42793
anishshri-db commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329378160
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##
@@ -52,11 +52,40 @@ class MicroBatchExecution(
@volatile
anishshri-db commented on code in PR #42940:
URL: https://github.com/apache/spark/pull/42940#discussion_r1329377950
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##
@@ -52,11 +52,40 @@ class MicroBatchExecution(
@volatile
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329300383
##
connector/connect/common/src/main/protobuf/spark/connect/base.proto:
##
@@ -778,6 +778,67 @@ message ReleaseExecuteResponse {
optional string operation_id = 2;
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329300029
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/utils/ErrorUtils.scala:
##
@@ -57,28 +69,105 @@ private[connect] object ErrorUtils extends
bogao007 commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329356792
##
python/pyspark/sql/connect/streaming/worker/listener_worker.py:
##
@@ -83,7 +86,14 @@ def process(listener_event_str, listener_event_type): #
type:
mridulm commented on PR #42893:
URL: https://github.com/apache/spark/pull/42893#issuecomment-1724562144
If this is specific to this deployment, as @srowen mentioned, why not do
this in user code/library?
You can run a thread which periodically does this
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329353558
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala:
##
@@ -291,15 +307,16 @@ object SparkConnectService extends
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329352625
##
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectFetchErrorDetailsHandler.scala:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to
mridulm commented on PR #42426:
URL: https://github.com/apache/spark/pull/42426#issuecomment-1724557591
Very good callout @Ngone51, we should probably add a checkstyle error as
well to prevent its usage
heyihong commented on code in PR #42377:
URL: https://github.com/apache/spark/pull/42377#discussion_r1329349965
##
connector/connect/common/src/main/scala/org/apache/spark/sql/connect/client/GrpcExceptionConverter.scala:
##
@@ -93,33 +177,44 @@ private[client] object
WweiL commented on code in PR #42986:
URL: https://github.com/apache/spark/pull/42986#discussion_r1329337505
##
python/pyspark/sql/connect/streaming/worker/listener_worker.py:
##
@@ -83,7 +86,14 @@ def process(listener_event_str, listener_event_type): #
type: