This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 4eab7a75ede [MINOR][DOCS] Reviews and updates the doc links for avro
4eab7a75ede is described below

commit 4eab7a75ededf804cd5ab0948e3192490a183258
Author: panbingkun <pbk1...@gmail.com>
AuthorDate: Mon Oct 3 14:28:34 2022 +0900

    [MINOR][DOCS] Reviews and updates the doc links for avro
    
    ### What changes were proposed in this pull request?
    This PR reviews and updates the doc links for Avro.
    
    ### Why are the changes needed?
    Improve docs.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    Manually verified.
    
    Closes #38067 from panbingkun/avro_docs_fix_broken_url.
    
    Authored-by: panbingkun <pbk1...@gmail.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 .../avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala   | 4 ++--
 docs/sql-data-sources-avro.md                                         | 4 ++--
 .../test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala b/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
index 540420974f5..e065bbce270 100644
--- a/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
+++ b/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroOptions.scala
@@ -79,14 +79,14 @@ private[sql] class AvroOptions(
 
   /**
    * Top level record name in write result, which is required in Avro spec.
-   * See https://avro.apache.org/docs/1.11.1/spec.html#schema_record .
+   * See https://avro.apache.org/docs/1.11.1/specification/#schema-record .
    * Default value is "topLevelRecord"
    */
   val recordName: String = parameters.getOrElse("recordName", "topLevelRecord")
 
   /**
    * Record namespace in write result. Default value is "".
-   * See Avro spec for details: https://avro.apache.org/docs/1.11.1/spec.html#schema_record .
+   * See Avro spec for details: https://avro.apache.org/docs/1.11.1/specification/#schema-record .
    */
   val recordNamespace: String = parameters.getOrElse("recordNamespace", "")
 
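
For context, the two options documented in the hunk above are ordinary spark-avro write options. Below is a minimal sketch of how they are set, assuming the external spark-avro module is on the classpath; the sample data, namespace value, and output path are illustrative only, not part of this commit.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("avro-options-sketch").getOrCreate()
    import spark.implicits._

    // recordName/recordNamespace control the top-level record name and namespace
    // written into the output Avro schema (defaults: "topLevelRecord" and "").
    Seq((1, "a"), (2, "b")).toDF("id", "value")
      .write
      .format("avro")
      .option("recordName", "topLevelRecord")        // default shown explicitly
      .option("recordNamespace", "com.example.avro") // illustrative namespace
      .save("/tmp/avro_options_sketch")              // illustrative output path
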
diff --git a/docs/sql-data-sources-avro.md b/docs/sql-data-sources-avro.md
index 117692a7618..4422baa4c29 100644
--- a/docs/sql-data-sources-avro.md
+++ b/docs/sql-data-sources-avro.md
@@ -393,7 +393,7 @@ applications. Read the [Advanced Dependency Management](https://spark.apache
 Submission Guide for more details. 
 
 ## Supported types for Avro -> Spark SQL conversion
-Currently Spark supports reading all [primitive types](https://avro.apache.org/docs/1.11.1/spec.html#schema_primitive) and [complex types](https://avro.apache.org/docs/1.11.1/spec.html#schema_complex) under records of Avro.
+Currently Spark supports reading all [primitive types](https://avro.apache.org/docs/1.11.1/specification/#primitive-types) and [complex types](https://avro.apache.org/docs/1.11.1/specification/#complex-types) under records of Avro.
 <table class="table">
   <tr><th><b>Avro type</b></th><th><b>Spark SQL type</b></th></tr>
   <tr>
@@ -457,7 +457,7 @@ In addition to the types listed above, it supports reading `union` types. The fo
 3. `union(something, null)`, where something is any supported Avro type. This will be mapped to the same Spark SQL type as that of something, with nullable set to true.
 All other union types are considered complex. They will be mapped to StructType where field names are member0, member1, etc., in accordance with members of the union. This is consistent with the behavior when converting between Avro and Parquet.
 
-It also supports reading the following Avro [logical types](https://avro.apache.org/docs/1.11.1/spec.html#Logical+Types):
+It also supports reading the following Avro [logical types](https://avro.apache.org/docs/1.11.1/specification/#logical-types):
 
 <table class="table">
   <tr><th><b>Avro logical type</b></th><th><b>Avro type</b></th><th><b>Spark SQL type</b></th></tr>
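
The documentation hunk above describes the read-side type mapping. As a minimal sketch of where that mapping surfaces (reusing the illustrative path from the write example earlier; not part of this commit), the Avro schema of the input files determines the resulting Spark SQL schema:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("avro-read-sketch").getOrCreate()

    // Primitive, complex and logical types in the input Avro schema are
    // converted to Spark SQL types per the tables on the doc page.
    val df = spark.read.format("avro").load("/tmp/avro_options_sketch")
    df.printSchema() // inspect the inferred Spark SQL schema
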
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
index 2645d411b01..184e03d088c 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveClientSuite.scala
@@ -895,7 +895,7 @@ class HiveClientSuite(version: String, allVersions: Seq[String])
   test("Decimal support of Avro Hive serde") {
     val tableName = "tab1"
     // TODO: add the other logical types. For details, see the link:
-    // https://avro.apache.org/docs/1.11.1/spec.html#Logical+Types
+    // https://avro.apache.org/docs/1.11.1/specification/#logical-types
     val avroSchema =
     """{
       |  "name": "test_record",

