AmplabJenkins commented on PR #36418:
URL: https://github.com/apache/spark/pull/36418#issuecomment-1114150521
Can one of the admins verify this patch?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
AmplabJenkins commented on PR #36419:
URL: https://github.com/apache/spark/pull/36419#issuecomment-1114150514
Can one of the admins verify this patch?
zhengruifeng opened a new pull request, #36420:
URL: https://github.com/apache/spark/pull/36420
### What changes were proposed in this pull request?
Implement DataFrame.resample and Series.resample
### Why are the changes needed?
To increase pandas API coverage in PySpark.
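For context, the pandas semantics being mirrored look like this (plain pandas shown as a sketch, not the PySpark implementation under review):

```python
import pandas as pd

# Daily data over six days; resample into 3-day bins and sum each bin.
idx = pd.date_range("2022-01-01", periods=6, freq="D")
df = pd.DataFrame({"v": [1, 2, 3, 4, 5, 6]}, index=idx)
out = df.resample("3D").sum()
print(out["v"].tolist())  # → [6, 15]
```

The PR brings the same grouping-by-time-bin API to pandas-on-Spark DataFrames and Series.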
github-actions[bot] closed pull request #34647: [SPARK-36180][SQL] Support
TimestampNTZ type in Hive
URL: https://github.com/apache/spark/pull/34647
github-actions[bot] closed pull request #35244: [SPARK-37956][DOCS] Add Python
and Java examples of Parquet encryption in Spark SQL to documentation
URL: https://github.com/apache/spark/pull/35244
HyukjinKwon closed pull request #36399: [SPARK-39034][SQL][TESTS][DOCS] Add
tests for options from `to_json` and `from_json`.
URL: https://github.com/apache/spark/pull/36399
HyukjinKwon closed pull request #36401: [SPARK-39035][SQL][TESTS] Add tests for
options from `to_csv` and `from_csv`.
URL: https://github.com/apache/spark/pull/36401
HyukjinKwon commented on PR #36399:
URL: https://github.com/apache/spark/pull/36399#issuecomment-1114074194
Merged to master.
HyukjinKwon commented on PR #36401:
URL: https://github.com/apache/spark/pull/36401#issuecomment-1114074111
Merged to master.
pan3793 commented on PR #36418:
URL: https://github.com/apache/spark/pull/36418#issuecomment-1114041155
cc @cloud-fan
wankunde commented on PR #36419:
URL: https://github.com/apache/spark/pull/36419#issuecomment-1114007398
As a follow-up of SPARK-38965, if there is no BLOCK_APPEND_COLLISION
exception, the shuffle fetcher and shuffle pusher do not handle the exception
returned from the server, and RetryingBlockTran
wankunde opened a new pull request, #36419:
URL: https://github.com/apache/spark/pull/36419
### What changes were proposed in this pull request?
Optimize ErrorHandler code.
### Why are the changes needed?
Shuffle ErrorHandler only has two stateless methods, so
LuciferYang commented on PR #36403:
URL: https://github.com/apache/spark/pull/36403#issuecomment-1113994582
https://github.com/LuciferYang/spark/compare/SPARK-39063...LuciferYang:SPARK-39063-backup?expand=1
I added an additional global Tracker to audit the `create` and `close`
operat
LuciferYang commented on PR #36406:
URL: https://github.com/apache/spark/pull/36406#issuecomment-1113993059
Updated the PR description and GA passed.
srowen commented on PR #36406:
URL: https://github.com/apache/spark/pull/36406#issuecomment-1113987379
This seems fine; go ahead and fill it out and let it test.
pan3793 opened a new pull request, #36418:
URL: https://github.com/apache/spark/pull/36418
### What changes were proposed in this pull request?
Do not allow the v2 catalog's name to contain `.`
### Why are the changes needed?
In the following configuration, we d
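The description is truncated, but it presumably refers to Spark's `spark.sql.catalog.<name>` registration scheme. The catalog name below is hypothetical, sketched only to show why a dot in a catalog name is ambiguous:

```properties
# A v2 catalog is registered as:  spark.sql.catalog.<name>=<implementation class>
# and its options as:             spark.sql.catalog.<name>.<option>=<value>
#
# If the name itself contained a dot (e.g. "my.cat"), the key below could be
# read either as registering a catalog named "my.cat", or as setting an
# option "cat" on a catalog named "my" -- hence disallowing "." in names.
spark.sql.catalog.my.cat=com.example.MyCatalog
```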
AmplabJenkins commented on PR #36404:
URL: https://github.com/apache/spark/pull/36404#issuecomment-1113960101
Can one of the admins verify this patch?
HyukjinKwon closed pull request #36357: [SPARK-38820][PYTHON] Refresh
categories.dtype when astype('category')
URL: https://github.com/apache/spark/pull/36357
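For reference, this is the baseline pandas behavior that pandas-on-Spark presumably needs to match (plain pandas, not the code in the PR): `astype('category')` derives the categorical dtype from the column's distinct values.

```python
import pandas as pd

s = pd.Series(["b", "a", "c", "a"])
cat = s.astype("category")
# The CategoricalDtype is built from the column's distinct values, sorted.
print(list(cat.cat.categories))  # → ['a', 'b', 'c']
```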
HyukjinKwon commented on PR #36357:
URL: https://github.com/apache/spark/pull/36357#issuecomment-1113948187
Merged to master.
beliefer opened a new pull request, #36417:
URL: https://github.com/apache/spark/pull/36417
### What changes were proposed in this pull request?
https://github.com/apache/spark/pull/35975 supports the offset clause; it
creates a logical node named `GlobalLimitAndOffset`. In fact, we can av