This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 7c513807352 [MINOR][DOCS] Fix some spelling typos
7c513807352 is described below

commit 7c513807352b90464275667f8182dffd0019da77
Author: 袁焊忠 <yuanhanzhong...@gmail.com>
AuthorDate: Mon Nov 6 16:46:31 2023 +0800

    [MINOR][DOCS] Fix some spelling typos
    
    ### What changes were proposed in this pull request?
    Fixed spelling typos in a README and a test name.
    
    ### Why are the changes needed?
    To help make Spark perfect.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    Only comment/doc typos were fixed, so no new tests are needed.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #43634 from YuanHanzhong/patch-3.
    
    Lead-authored-by: 袁焊忠 <yuanhanzhong...@gmail.com>
    Co-authored-by: YuanHanzhong <yuanhanzhong...@gmail.com>
    Signed-off-by: Kent Yao <y...@apache.org>
---
 hadoop-cloud/README.md                                                  | 2 +-
 .../spark/sql/execution/command/AlterTableDropPartitionSuiteBase.scala  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-cloud/README.md b/hadoop-cloud/README.md
index 840ff1576f5..dc7647b75b1 100644
--- a/hadoop-cloud/README.md
+++ b/hadoop-cloud/README.md
@@ -16,5 +16,5 @@ Integration tests will have some extra configurations for example selecting the
 run the test against. Those configs are passed as environment variables and the existence of these
 variables must be checked by the test.
 Like for `AwsS3AbortableStreamBasedCheckpointFileManagerSuite` the S3 bucket used for testing
-is passed in the `S3A_PATH` and the credetinals to access AWS S3 are AWS_ACCESS_KEY_ID and
+is passed in the `S3A_PATH` and the credentials to access AWS S3 are AWS_ACCESS_KEY_ID and
 AWS_SECRET_ACCESS_KEY (in addition you can define optional AWS_SESSION_TOKEN and AWS_ENDPOINT_URL too).
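
For context, the convention the README describes (integration suites must check for their AWS environment variables before running) can be illustrated with a small ScalaTest guard. The sketch below is hypothetical: the class name S3CredentialsGuardExample and the missing helper are illustrative and are not part of the actual AwsS3AbortableStreamBasedCheckpointFileManagerSuite; only the variable names S3A_PATH, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY come from the README.

    import org.scalatest.funsuite.AnyFunSuite

    class S3CredentialsGuardExample extends AnyFunSuite {
      // Required by the README; AWS_SESSION_TOKEN and AWS_ENDPOINT_URL stay optional.
      private val requiredVars =
        Seq("S3A_PATH", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")

      // Names of required variables that are unset or empty in the environment.
      private def missing: Seq[String] =
        requiredVars.filter(v => sys.env.get(v).forall(_.isEmpty))

      test("S3 integration test runs only when AWS config is present") {
        // assume() cancels (rather than fails) the test when the guard is false.
        assume(missing.isEmpty, s"missing env vars: ${missing.mkString(", ")}")
        val bucketPath = sys.env("S3A_PATH")
        // ... exercise the checkpoint file manager against bucketPath ...
      }
    }

With this pattern, running the suite on a machine without credentials cancels the test with a clear message instead of failing it.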
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/AlterTableDropPartitionSuiteBase.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/AlterTableDropPartitionSuiteBase.scala
index 1e786c8e578..199d1b8b4b6 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/command/AlterTableDropPartitionSuiteBase.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/command/AlterTableDropPartitionSuiteBase.scala
@@ -155,7 +155,7 @@ trait AlterTableDropPartitionSuiteBase extends QueryTest with DDLCommandTestUtil
     }
   }
 
-  test("SPARK-33990: don not return data from dropped partition") {
+  test("SPARK-33990: do not return data from dropped partition") {
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(s"CREATE TABLE $t (id int, part int) $defaultUsing PARTITIONED BY (part)")
       sql(s"INSERT INTO $t PARTITION (part=0) SELECT 0")


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
