LuciferYang commented on PR #36611:
URL: https://github.com/apache/spark/pull/36611#issuecomment-1131660071
cc @attilapiros as discussed in
SPARK-39102 (https://github.com/apache/spark/pull/36529#discussion_r872838034),
this PR replaces `Utils.createTempDir()` with
gengliangwang commented on code in PR #36604:
URL: https://github.com/apache/spark/pull/36604#discussion_r877016968
##
core/src/main/scala/org/apache/spark/ErrorInfo.scala:
##
@@ -82,11 +85,11 @@ private[spark] object SparkThrowableHelper {
val subMessageParameters =
gengliangwang commented on code in PR #36612:
URL: https://github.com/apache/spark/pull/36612#discussion_r877015458
##
core/src/main/scala/org/apache/spark/ErrorInfo.scala:
##
@@ -77,20 +77,25 @@ private[spark] object SparkThrowableHelper {
queryContext: String = ""):
gengliangwang opened a new pull request, #36612:
URL: https://github.com/apache/spark/pull/36612
### What changes were proposed in this pull request?
1. Remove the starting "\n" in `Origin.context`. The "\n" will be appended in
the method `SparkThrowableHelper.getMessage`
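A minimal sketch of the convention this change establishes (hypothetical Python names, not Spark's actual Scala code): the message builder, not the context string itself, is responsible for the separating newline, so the context no longer needs to start with "\n":

```python
def get_message(main_message: str, query_context: str = "") -> str:
    # The builder appends the separating "\n" itself; the context string
    # is stored without a leading newline. Illustrative sketch mirroring
    # the behavior described for SparkThrowableHelper.getMessage.
    if query_context:
        return main_message + "\n" + query_context
    return main_message
```

With this convention, an empty context produces no trailing newline, and a non-empty one is always separated by exactly one.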
gengliangwang closed pull request #36609: [SPARK-39233][SQL] Remove the check
for TimestampNTZ output in Analyzer
URL: https://github.com/apache/spark/pull/36609
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
gengliangwang commented on PR #36609:
URL: https://github.com/apache/spark/pull/36609#issuecomment-1131629803
Merging to 3.3
Yikun commented on code in PR #36599:
URL: https://github.com/apache/spark/pull/36599#discussion_r876936498
##
python/pyspark/pandas/series.py:
##
@@ -6239,13 +6239,19 @@ def argsort(self) -> "Series":
ps.concat([psser,
LuciferYang opened a new pull request, #36611:
URL: https://github.com/apache/spark/pull/36611
### What changes were proposed in this pull request?
The main change of this PR is to replace all uses of `Utils.createTempDir` with
`JavaUtils.createTempDir`; the replacement rules are as
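A minimal sketch of what such a `createTempDir` helper typically provides: a unique temporary directory that is registered for recursive deletion at process exit. This is an illustrative Python sketch only, not the actual `JavaUtils.createTempDir` implementation:

```python
import atexit
import shutil
import tempfile

def create_temp_dir(prefix: str = "spark-") -> str:
    """Create a unique temporary directory and register it for
    recursive deletion when the process exits (illustrative sketch;
    not Spark's actual JavaUtils.createTempDir)."""
    path = tempfile.mkdtemp(prefix=prefix)
    # Best-effort cleanup at exit; ignore errors if already removed.
    atexit.register(shutil.rmtree, path, ignore_errors=True)
    return path
```

Consolidating on one such helper gives a single place to audit temp-dir creation and cleanup behavior.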
LuciferYang closed pull request #36610: [SPARK-39204][CORE][SQL] Replace
`Utils.createTempDir` with `JavaUtils.createTempDir`
URL: https://github.com/apache/spark/pull/36610
LuciferYang opened a new pull request, #36610:
URL: https://github.com/apache/spark/pull/36610
### What changes were proposed in this pull request?
The main change of this PR is to replace all uses of `Utils.createTempDir` with
`JavaUtils.createTempDir`; the replacement rules are as
AmplabJenkins commented on PR #36601:
URL: https://github.com/apache/spark/pull/36601#issuecomment-1131614769
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36602:
URL: https://github.com/apache/spark/pull/36602#issuecomment-1131614708
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36603:
URL: https://github.com/apache/spark/pull/36603#issuecomment-1131614649
Can one of the admins verify this patch?
AmplabJenkins commented on PR #36606:
URL: https://github.com/apache/spark/pull/36606#issuecomment-1131614614
Can one of the admins verify this patch?
AngersZh commented on PR #36377:
URL: https://github.com/apache/spark/pull/36377#issuecomment-1131608162
OK
gengliangwang closed pull request #36607: [SPARK-39229][SQL][3.3] Separate
query contexts from error-classes.json
URL: https://github.com/apache/spark/pull/36607
cloud-fan commented on PR #36377:
URL: https://github.com/apache/spark/pull/36377#issuecomment-1131603731
It seems like this also impacts Spark. First of all, Spark will read Hive
table stats and turn them into Spark's own stats; see
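For context, Hive stores table statistics as table parameters such as `totalSize`, `rawDataSize`, and `numRows`. A hedged plain-Python sketch of the kind of conversion being discussed (illustrative only, not Spark's actual code; the preference order shown is an assumption):

```python
def hive_stats_to_spark_stats(params: dict) -> dict:
    """Derive Spark-style statistics from Hive table parameters.
    Illustrative sketch: prefers rawDataSize over totalSize when
    both are present, and copies numRows when available."""
    stats = {}
    raw = int(params.get("rawDataSize", 0))
    total = int(params.get("totalSize", 0))
    if raw > 0:
        stats["sizeInBytes"] = raw
    elif total > 0:
        stats["sizeInBytes"] = total
    if "numRows" in params and int(params["numRows"]) >= 0:
        stats["rowCount"] = int(params["numRows"])
    return stats
```

The point of the thread is that if Hive writes inaccurate parameter values, any conversion like this propagates them into Spark's planner estimates.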
gengliangwang commented on PR #36607:
URL: https://github.com/apache/spark/pull/36607#issuecomment-1131600384
Merging to 3.3
cloud-fan commented on code in PR #36330:
URL: https://github.com/apache/spark/pull/36330#discussion_r876959009
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/util/V2ExpressionSQLBuilder.java:
##
@@ -228,4 +244,18 @@ protected String visitSQLFunction(String
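The diff is truncated here; as a minimal sketch of the pattern a `visitSQLFunction`-style method follows (hypothetical Python, not the actual Java source of `V2ExpressionSQLBuilder`), the builder renders a function expression as SQL text from its name and already-visited inputs:

```python
def visit_sql_function(func_name: str, inputs: list) -> str:
    # Render a function expression as SQL text: the name followed by a
    # comma-separated argument list. Hypothetical sketch of the visitor
    # pattern, not V2ExpressionSQLBuilder's actual implementation.
    return f"{func_name}({', '.join(inputs)})"
```

Each visitor method in such a builder handles one expression shape, so connectors can push down V2 expressions as SQL strings.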
yaooqinn commented on PR #36609:
URL: https://github.com/apache/spark/pull/36609#issuecomment-1131517228
LGTM
gengliangwang opened a new pull request, #36609:
URL: https://github.com/apache/spark/pull/36609
### What changes were proposed in this pull request?
In [#36094](https://github.com/apache/spark/pull/36094), a check for failing
TimestampNTZ output was added.
However, the
HyukjinKwon closed pull request #36569: [SPARK-39201][PYTHON][PS] Implement
`ignore_index` of `DataFrame.explode` and `DataFrame.drop_duplicates`
URL: https://github.com/apache/spark/pull/36569
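The `ignore_index` semantics being implemented match pandas: when `ignore_index=True`, the exploded result is relabeled with a fresh 0..n-1 index instead of repeating the original index labels. A plain-Python sketch of that behavior (not the pandas-on-Spark implementation):

```python
def explode(rows: dict, ignore_index: bool = False) -> list:
    """Explode a mapping of index-label -> list into (label, value) pairs.
    With ignore_index=True the result is relabeled 0..n-1, which is the
    semantics `ignore_index` adds to DataFrame.explode. Plain-Python
    sketch, not the pandas-on-Spark implementation."""
    out = []
    for label, values in rows.items():
        for v in values:
            out.append((label, v))
    if ignore_index:
        out = [(i, v) for i, (_, v) in enumerate(out)]
    return out
```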
HyukjinKwon commented on PR #36569:
URL: https://github.com/apache/spark/pull/36569#issuecomment-1131510364
Merged to master.
eejbyfeldt commented on PR #36004:
URL: https://github.com/apache/spark/pull/36004#issuecomment-1131496353
While testing the Spark 3.3.0 release candidate I noticed that this is
actually a regression from 3.2 that was introduced in
https://github.com/apache/spark/pull/33205 so I this
HyukjinKwon closed pull request #36464: [SPARK-38947][PYTHON][PS] Supports
groupby positional indexing
URL: https://github.com/apache/spark/pull/36464
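Groupby positional indexing here refers to the pandas semantics of `groupby(...).head(n)`/`tail(n)` where `n` may be negative: a negative `n` excludes that many rows from the end (or start) of each group. A plain-Python sketch of the `head` case (assumptions only, not the pandas-on-Spark implementation):

```python
def group_head(groups: dict, n: int) -> dict:
    """Positional head per group: for n >= 0 keep the first n rows of
    each group; for negative n keep all but the last |n| rows. Mirrors
    the pandas groupby.head semantics this PR supports; plain-Python
    sketch, not the actual implementation."""
    # Python slicing covers both cases: rows[:2] keeps the first two
    # rows, rows[:-1] keeps all but the last row of each group.
    return {key: rows[:n] for key, rows in groups.items()}
```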
HyukjinKwon commented on PR #36464:
URL: https://github.com/apache/spark/pull/36464#issuecomment-1131494802
Merged to master.
andersonm-ibm commented on PR #36523:
URL: https://github.com/apache/spark/pull/36523#issuecomment-1131484387
@HyukjinKwon Thank you for your review. I've addressed your comment. Could
you please have another look?
beliefer commented on PR #36593:
URL: https://github.com/apache/spark/pull/36593#issuecomment-1131436131
ping @huaxingao cc @cloud-fan
beliefer opened a new pull request, #36608:
URL: https://github.com/apache/spark/pull/36608
### What changes were proposed in this pull request?
`REGR_SLOPE` is an ANSI aggregate function.
**Syntax**: REGR_SLOPE(y, x)
**Arguments**:
- **y**: The dependent variable. This must
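`REGR_SLOPE(y, x)` returns the slope of the least-squares regression line fitted to the non-null (y, x) pairs, i.e. `covar_pop(y, x) / var_pop(x)`. A plain-Python sketch of that definition (not Spark's implementation):

```python
def regr_slope(pairs):
    """Slope of the least-squares line over (y, x) pairs:
    covar_pop(y, x) / var_pop(x). Pairs with a None component are
    skipped, as SQL aggregates skip NULLs; returns None when the
    slope is undefined. Plain-Python sketch, not Spark's code."""
    pts = [(y, x) for y, x in pairs if y is not None and x is not None]
    n = len(pts)
    if n == 0:
        return None
    mean_y = sum(y for y, _ in pts) / n
    mean_x = sum(x for _, x in pts) / n
    var_x = sum((x - mean_x) ** 2 for _, x in pts)
    if var_x == 0:
        return None  # all x equal: slope undefined
    cov = sum((y - mean_y) * (x - mean_x) for y, x in pts)
    return cov / var_x
```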
gengliangwang commented on PR #36604:
URL: https://github.com/apache/spark/pull/36604#issuecomment-1131435266
@MaxGekk I have opened a backport in
https://github.com/apache/spark/pull/36607
gengliangwang opened a new pull request, #36607:
URL: https://github.com/apache/spark/pull/36607
### What changes were proposed in this pull request?
Separate query contexts for runtime errors from error-classes.json.
### Why are the changes needed?
The message
beliefer commented on code in PR #36330:
URL: https://github.com/apache/spark/pull/36330#discussion_r876791321
##
sql/catalyst/src/main/java/org/apache/spark/sql/connector/util/V2ExpressionSQLBuilder.java:
##
@@ -228,4 +244,18 @@ protected String visitSQLFunction(String
gengliangwang closed pull request #36600: [SPARK-39212][SQL][3.3] Use double
quotes for values of SQL configs/DS options in error messages
URL: https://github.com/apache/spark/pull/36600
gengliangwang commented on PR #36600:
URL: https://github.com/apache/spark/pull/36600#issuecomment-1131420222
Thanks, merging to 3.3
MaxGekk commented on PR #36561:
URL: https://github.com/apache/spark/pull/36561#issuecomment-1131399297
@panbingkun Could you backport this to branch-3.3, please?
MaxGekk closed pull request #36561: [SPARK-37939][SQL] Use error classes in the
parsing errors of properties
URL: https://github.com/apache/spark/pull/36561
mridulm commented on PR #36162:
URL: https://github.com/apache/spark/pull/36162#issuecomment-1131396197
Thanks for the changes @weixiuli.
I will try to take a look early next week - a bit swamped by some other work
unfortunately.
mridulm commented on code in PR #36512:
URL: https://github.com/apache/spark/pull/36512#discussion_r876759477
##
core/src/main/scala/org/apache/spark/storage/BlockManager.scala:
##
@@ -933,10 +933,29 @@ private[spark] class BlockManager(
})
Some(new
MaxGekk commented on PR #36604:
URL: https://github.com/apache/spark/pull/36604#issuecomment-1131372321
@gengliangwang The changes cause conflicts in branch-3.3. Please open a
separate PR to backport this (if it is needed).
MaxGekk closed pull request #36604: [SPARK-39229][SQL] Separate query contexts
from error-classes.json
URL: https://github.com/apache/spark/pull/36604
Yikun commented on code in PR #36569:
URL: https://github.com/apache/spark/pull/36569#discussion_r876716230
##
python/pyspark/pandas/frame.py:
##
@@ -12212,7 +12237,8 @@ def explode(self, column: Name) -> "DataFrame":
data_fields[idx] = field.copy(dtype=dtype,
HyukjinKwon closed pull request #36594: [SPARK-39223][PS] Implement skew and
kurt in Rolling/RollingGroupby/Expanding/ExpandingGroupby
URL: https://github.com/apache/spark/pull/36594
HyukjinKwon commented on PR #36594:
URL: https://github.com/apache/spark/pull/36594#issuecomment-1131351089
Merged to master.
itholic commented on PR #36509:
URL: https://github.com/apache/spark/pull/36509#issuecomment-1131335722
Thanks!
Yikun commented on PR #36464:
URL: https://github.com/apache/spark/pull/36464#issuecomment-1131300293
@HyukjinKwon Thanks! Please let me know if you have any other concerns.
dongjoon-hyun commented on PR #36594:
URL: https://github.com/apache/spark/pull/36594#issuecomment-1131293193
Thanks! I caught up on the thread.
dongjoon-hyun commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1131292324
BTW, I revised the PR and JIRA titles to be more precise based on your PR content.
dongjoon-hyun commented on PR #36597:
URL: https://github.com/apache/spark/pull/36597#issuecomment-1131291360
Thank you for the updates, @hai-tao-1. Yes, the only remaining comment is the
test case.
> We need a test case for the configuration. Please check the corner cases
especially.
dongjoon-hyun commented on PR #36595:
URL: https://github.com/apache/spark/pull/36595#issuecomment-1131290098
Merged to master/3.3.
cc @MaxGekk
dongjoon-hyun closed pull request #36595: [SPARK-39216][SQL] Do not collapse
projects in CombineUnions if it hasCorrelatedSubquery
URL: https://github.com/apache/spark/pull/36595
dongjoon-hyun commented on code in PR #36358:
URL: https://github.com/apache/spark/pull/36358#discussion_r876653990
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala:
##
@@ -684,6 +684,15 @@ private[spark] object Config extends Logging
dongjoon-hyun commented on code in PR #36358:
URL: https://github.com/apache/spark/pull/36358#discussion_r876653355
##
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala:
##
@@ -684,6 +684,15 @@ private[spark] object Config extends Logging
dongjoon-hyun closed pull request #36605: [SPARK-28516][SQL][TEST][FOLLOWUP] Do
not run PostgreSQL tests that are not supported by Spark yet
URL: https://github.com/apache/spark/pull/36605
dongjoon-hyun commented on PR #36605:
URL: https://github.com/apache/spark/pull/36605#issuecomment-1131262690
Merged to master~
akpatnam25 commented on PR #36601:
URL: https://github.com/apache/spark/pull/36601#issuecomment-1131257367
@otterc @mridulm I've updated the PR to add a UT in the
`ShuffleBlockIteratorSuite`.