(spark) branch master updated: [MINOR] Fix some typos in `error-states.json`

2024-06-22 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 84d278ca99e3 [MINOR] Fix some typos in `error-states.json`
84d278ca99e3 is described below

commit 84d278ca99e379211d9a4ec667e83e34c6ce2b7c
Author: Wei Guo 
AuthorDate: Sun Jun 23 10:53:46 2024 +0900

[MINOR] Fix some typos in `error-states.json`

### What changes were proposed in this pull request?

This PR fixes some typos in `error-states.json` in the `common-utils` module.

### Why are the changes needed?

Fix the typos, most of which are hyphenation artifacts carried over from the SQL standard PDF (e.g. `infor- mation`).

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Passes existing GitHub Actions (GA) checks.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #47056 from wayneguow/typo_in_error.

Authored-by: Wei Guo 
Signed-off-by: Hyukjin Kwon 
---
 common/utils/src/main/resources/error/error-states.json | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/common/utils/src/main/resources/error/error-states.json b/common/utils/src/main/resources/error/error-states.json
index 791fa8cd887c..dd87d6bda5f2 100644
--- a/common/utils/src/main/resources/error/error-states.json
+++ b/common/utils/src/main/resources/error/error-states.json
@@ -108,7 +108,7 @@
 "usedBy": ["SQL/XML"]
 },
 "01011": {
-"description": "SQL-Java path too long for infor- mation schema",
+"description": "SQL-Java path too long for information schema",
 "origin": "SQL/JRT",
 "standard": "Y",
 "usedBy": ["SQL/JRT", "DB2"]
@@ -1584,7 +1584,7 @@
 "usedBy": ["SQL/Foundation"]
 },
 "2201J": {
-"description": "XQuery sequence cannot be vali- dated",
+"description": "XQuery sequence cannot be validated",
 "origin": "SQL/XML",
 "standard": "Y",
 "usedBy": ["SQL/XML"]
@@ -1896,7 +1896,7 @@
 "usedBy": ["SQL/MD"]
 },
 "2203L": {
-"description": "MD-array subset not within MD- extent",
+"description": "MD-array subset not within MD-extent",
 "origin": "SQL/MD",
 "standard": "Y",
 "usedBy": ["SQL/MD"]
@@ -1926,7 +1926,7 @@
 "usedBy": ["SQL/MD"]
 },
 "2203R": {
-"description": "MD-array operands with non- matching MD-extents",
+"description": "MD-array operands with non-matching MD-extents",
 "origin": "SQL/MD",
 "standard": "Y",
 "usedBy": ["SQL/MD"]
@@ -2502,7 +2502,7 @@
 "usedBy": ["SQL/Foundation", "PostgreSQL", "Redshift"]
 },
 "25P01": {
-"description": "no activ sql transaction",
+"description": "no active sql transaction",
 "origin": "PostgreSQL",
 "standard": "N",
 "usedBy": ["PostgreSQL", "Redshift"]
@@ -2682,7 +2682,7 @@
 "usedBy": ["SQL/Foundation"]
 },
 "33000": {
-"description": "invalid SQL descriptor nameno subclass)",
+"description": "invalid SQL descriptor nameno subclass",
 "origin": "SQL/Foundation",
 "standard": "Y",
 "usedBy": ["SQL/Foundation", "Oracle"]
@@ -6752,7 +6752,7 @@
 "usedBy": ["SQL/MED", "PostgreSQL"]
 },
 "HV021": {
-"description": "inconsistent descriptor informa- tion",
+"description": "inconsistent descriptor information",
 "origin": "SQL/MED",
 "standard": "Y",
 "usedBy": ["SQL/MED", "PostgreSQL"]
@@ -6848,7 +6848,7 @@
 "usedBy": ["SQL/CLI", "SQL Server"]
 },
 "HY007": {
-"description": "associated statement is not pre- pared",
+"description": "associated statement is not prepared",
 "origin": "SQL/CLI",
 "standard": "Y",
 "usedBy": ["SQL/CLI", "SQL Server"]





[spark] branch master updated: [MINOR] Fix some typos in QueryExecution and TaskSchedulerImpl

2022-12-30 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 6aac6428aae [MINOR] Fix some typos in QueryExecution and TaskSchedulerImpl
6aac6428aae is described below

commit 6aac6428aae89915c5634b6a9659aff3d450f173
Author: Silly Carbon 
AuthorDate: Sat Dec 31 10:29:50 2022 +0900

[MINOR] Fix some typos in QueryExecution and TaskSchedulerImpl

### What changes were proposed in this pull request?

Fix some typos in `QueryExecution` and `TaskSchedulerImpl`.

### Why are the changes needed?

The typos confuse users.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

No need to test.

Closes #39308 from silly-carbon/fix-typos.

Authored-by: Silly Carbon 
Signed-off-by: Hyukjin Kwon 
---
 core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala  | 2 +-
 docs/running-on-yarn.md | 2 +-
 .../src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
index 4580ec53289..91b0c983e4a 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
@@ -472,7 +472,7 @@ private[spark] class TaskSchedulerImpl(
 val taskCpus = ResourceProfile.getTaskCpusOrDefaultForProfile(taskSetProf, conf)
 // check if the ResourceProfile has cpus first since that is common case
 if (availCpus < taskCpus) return None
-// only look at the resource other then cpus
+// only look at the resource other than cpus
 val tsResources = taskSetProf.getCustomTaskResources()
 if (tsResources.isEmpty) return Some(Map.empty)
 val localTaskReqAssign = HashMap[String, ResourceInformation]()
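
The corrected comment describes the order of the checks: CPUs first, because that is the common case, then the resources other than CPUs. A simplified, self-contained sketch of that shape (hypothetical names and flattened types, not Spark's actual `TaskSchedulerImpl` internals):

```scala
// Hypothetical stand-in for the resource check above: returns the
// per-resource assignment if the offer can run one task, else None.
def meetsTaskRequirements(
    availCpus: Int,
    taskCpus: Int,
    availResources: Map[String, Int],
    taskResources: Map[String, Int]): Option[Map[String, Int]] = {
  // check CPUs first since that is the common case
  if (availCpus < taskCpus) return None
  // only look at the resources other than CPUs
  if (taskResources.isEmpty) return Some(Map.empty)
  val assigned = taskResources.flatMap { case (name, amount) =>
    availResources.get(name).collect { case avail if avail >= amount => name -> amount }
  }
  if (assigned.size == taskResources.size) Some(assigned) else None
}
```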
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 4112c71cdf9..35aaece15c5 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -730,7 +730,7 @@ Please make sure to have read the Custom Resource Scheduling and Configuration O
 YARN needs to be configured to support any resources the user wants to use with Spark. Resource scheduling on YARN was added in YARN 3.1.0. See the YARN documentation for more information on configuring resources and properly setting up isolation. Ideally the resources are setup isolated so that an executor can only see the resources it was allocated. If you do not have isolation enabled, the user is responsible for creating a discovery script that ensures the resource is not shared betw [...]
 
 YARN supports user defined resource types but has built in types for GPU (yarn.io/gpu) and FPGA (yarn.io/fpga). For that reason, if you are using either of those resources, Spark can translate your request for spark resources into YARN resources and you only have to specify the spark.{driver/executor}.resource. configs. Note, if you are using a custom resource type for GPUs or FPGAs with YARN you can change the Spark mapping using spark.yarn.r [...]
- If you are using a resource other then FPGA or GPU, the user is responsible for specifying the configs for both YARN (spark.yarn.{driver/executor}.resource.) and Spark (spark.{driver/executor}.resource.).
+ If you are using a resource other than FPGA or GPU, the user is responsible for specifying the configs for both YARN (spark.yarn.{driver/executor}.resource.) and Spark (spark.{driver/executor}.resource.).
 
 For example, the user wants to request 2 GPUs for each executor. The user can just specify spark.executor.resource.gpu.amount=2 and Spark will handle requesting yarn.io/gpu resource type from YARN.
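
As a hedged illustration of that shortcut (the discovery script path below is a placeholder, and a real deployment still needs the YARN-side resource setup described above):

```scala
import org.apache.spark.sql.SparkSession

// Request 2 GPUs per executor; on YARN, Spark maps this request to the
// yarn.io/gpu resource type, per the paragraph above.
val spark = SparkSession.builder()
  .appName("gpu-example")
  .config("spark.executor.resource.gpu.amount", "2")
  .config("spark.executor.resource.gpu.discoveryScript", "/path/to/getGpusResources.sh")
  .getOrCreate()
```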
 
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
index 796ec41ab51..362615770a3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
@@ -411,7 +411,7 @@ object QueryExecution {
 
   /**
   * Construct a sequence of rules that are used to prepare a planned [[SparkPlan]] for execution.
-   * These rules will make sure subqueries are planned, make use the data partitioning and ordering
+   * These rules will make sure subqueries are planned, make sure the data partitioning and ordering
   * are correct, insert whole stage code gen, and try to reduce the work done by reusing exchanges
* and subqueries.
*/
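
The prepared plan produced by this rule sequence is visible to users as `Dataset.queryExecution.executedPlan`; a minimal local-mode sketch with an illustrative query:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").appName("plan-inspection").getOrCreate()
val df = spark.range(1000).groupBy((col("id") % 10).as("bucket")).count()
// executedPlan is the SparkPlan after the preparation rules described above
// (subqueries planned, exchanges reused, whole-stage codegen inserted).
println(df.queryExecution.executedPlan)
spark.stop()
```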



[spark] branch master updated: [MINOR] Fix some typos

2022-12-20 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 52e4b31903c [MINOR] Fix some typos
52e4b31903c is described below

commit 52e4b31903cde37bef24a5abf808b11615845867
Author: Liu Chunbo 
AuthorDate: Wed Dec 21 10:36:40 2022 +0900

[MINOR] Fix some typos

### What changes were proposed in this pull request?

Fix some typos in the code comments.

### Why are the changes needed?

Fixing these two comment mistakes makes the code comments more standardized.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

No test required.

Closes #39111 from for08/SPARK-41560.

Authored-by: Liu Chunbo 
Signed-off-by: Hyukjin Kwon 
---
 .../src/main/java/org/apache/spark/network/buffer/ManagedBuffer.java   | 2 +-
 .../main/java/org/apache/spark/network/protocol/MessageWithHeader.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/common/network-common/src/main/java/org/apache/spark/network/buffer/ManagedBuffer.java b/common/network-common/src/main/java/org/apache/spark/network/buffer/ManagedBuffer.java
index 2d573f51243..4dd8cec2900 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/buffer/ManagedBuffer.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/buffer/ManagedBuffer.java
@@ -68,7 +68,7 @@ public abstract class ManagedBuffer {
   public abstract ManagedBuffer release();
 
   /**
-   * Convert the buffer into an Netty object, used to write the data out. The return value is either
+   * Convert the buffer into a Netty object, used to write the data out. The return value is either
   * a {@link io.netty.buffer.ByteBuf} or a {@link io.netty.channel.FileRegion}.
*
   * If this method returns a ByteBuf, then that buffer's reference count will be incremented and
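
A hedged sketch of what the documented contract means for a caller: `convertToNetty()` is declared to return `Object`, and the comment above narrows it to two Netty types (illustrative only, not Spark's transport code):

```scala
import io.netty.buffer.ByteBuf
import io.netty.channel.FileRegion

// Branch on the two documented return types of ManagedBuffer.convertToNetty().
def describePayload(payload: AnyRef): String = payload match {
  case bb: ByteBuf    => s"byte buffer, ${bb.readableBytes()} readable bytes"
  case fr: FileRegion => s"zero-copy file region, ${fr.count()} bytes"
  case other          => s"unexpected type: ${other.getClass.getName}"
}
```
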
diff --git a/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java b/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
index 19eeddb842c..dfcb1c642eb 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
@@ -140,7 +140,7 @@ class MessageWithHeader extends AbstractFileRegion {
    // SPARK-24578: cap the sub-region's size of returned nio buffer to improve the performance
 // for the case that the passed-in buffer has too many components.
 int length = Math.min(buf.readableBytes(), NIO_BUFFER_LIMIT);
-    // If the ByteBuf holds more then one ByteBuffer we should better call nioBuffers(...)
+    // If the ByteBuf holds more than one ByteBuffer we should better call nioBuffers(...)
 // to eliminate extra memory copies.
 int written = 0;
 if (buf.nioBufferCount() == 1) {
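
The archive cuts the hunk off here; as a simplified sketch of the branch the comment documents (a hypothetical helper over standard Netty `ByteBuf` APIs, not Spark's actual implementation): a single backing `ByteBuffer` can be handed to the channel directly, while a composite buffer should go through `nioBuffers(...)` to avoid a consolidation copy:

```scala
import java.nio.channels.WritableByteChannel
import io.netty.buffer.ByteBuf

// Hypothetical helper mirroring the capped-write logic described above;
// channel.write may write fewer bytes than requested, so callers loop.
def writeCapped(buf: ByteBuf, channel: WritableByteChannel, limit: Int): Int = {
  val length = math.min(buf.readableBytes(), limit)
  if (buf.nioBufferCount() == 1) {
    // single backing buffer: write it to the channel as-is
    channel.write(buf.nioBuffer(buf.readerIndex(), length))
  } else {
    // more than one backing ByteBuffer: nioBuffers(...) exposes them
    // without the extra memory copy that nioBuffer(...) would perform
    buf.nioBuffers(buf.readerIndex(), length).foldLeft(0) { (written, bb) =>
      written + channel.write(bb)
    }
  }
}
```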

