Repository: spark
Updated Branches:
  refs/heads/branch-2.0 7d8cddfb4 -> fb0fab63c


[MINOR][DOCS][SQL] Fix some comments about types(TypeCoercion,Partition) and exceptions.

## What changes were proposed in this pull request?

This PR fixes a few code comments:
- `HiveTypeCoercion` was renamed to `TypeCoercion`, so stale references are updated.
- `NoSuchDatabaseException` is thrown only when a database is absent, not a table.
- For partition type inference of fractional values, only `DoubleType` is considered (not `FloatType`).
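The last bullet can be illustrated with a minimal sketch (not Spark's actual implementation, which lives in `PartitioningUtils.inferPartitionColumnValue`): partition-value inference tries progressively wider types, and a fractional literal such as "3.14" ends up as a double-precision value, hence `DoubleType` rather than `FloatType` in the doc comment's example. The function name below is hypothetical.

```python
# Illustrative sketch of partition-value type inference: try narrower
# types first, fall back to wider ones, and default to string.
def infer_partition_value(raw: str):
    for cast, type_name in ((int, "IntegerType"), (float, "DoubleType")):
        try:
            return cast(raw), type_name
        except ValueError:
            continue
    return raw, "StringType"

print(infer_partition_value("42"))     # (42, 'IntegerType')
print(infer_partition_value("3.14"))   # (3.14, 'DoubleType')
print(infer_partition_value("hello"))  # ('hello', 'StringType')
```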

## How was this patch tested?

N/A

Author: Dongjoon Hyun <dongj...@apache.org>

Closes #13674 from dongjoon-hyun/minor_doc_types.

(cherry picked from commit 2d27eb1e753daefbd311136fc7de1a3e8fb9dc63)
Signed-off-by: Andrew Or <and...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/fb0fab63
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/fb0fab63
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/fb0fab63

Branch: refs/heads/branch-2.0
Commit: fb0fab63cb005d9efc624aeb0ac85476a9ddc4f4
Parents: 7d8cddf
Author: Dongjoon Hyun <dongj...@apache.org>
Authored: Thu Jun 16 14:27:09 2016 -0700
Committer: Andrew Or <and...@databricks.com>
Committed: Thu Jun 16 14:27:17 2016 -0700

----------------------------------------------------------------------
 .../org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala    | 4 ++--
 .../org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala  | 2 +-
 .../src/main/scala/org/apache/spark/sql/types/Decimal.scala      | 2 +-
 .../spark/sql/execution/datasources/PartitioningUtils.scala      | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/fb0fab63/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
index 16df628..baec6d1 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
@@ -73,7 +73,7 @@ object TypeCoercion {
       DoubleType)
 
   /**
-   * Case 1 type widening (see the classdoc comment above for HiveTypeCoercion).
+   * Case 1 type widening (see the classdoc comment above for TypeCoercion).
    *
    * Find the tightest common type of two types that might be used in a binary expression.
    * This handles all numeric types except fixed-precision decimals interacting with each other or
@@ -132,7 +132,7 @@ object TypeCoercion {
   }
 
   /**
-   * Case 2 type widening (see the classdoc comment above for HiveTypeCoercion).
+   * Case 2 type widening (see the classdoc comment above for TypeCoercion).
    *
    * i.e. the main difference with [[findTightestCommonTypeOfTwo]] is that here we allow some
    * loss of precision when widening decimal and double.

http://git-wip-us.apache.org/repos/asf/spark/blob/fb0fab63/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
index 81974b2..6714846 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
@@ -27,7 +27,7 @@ import org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException
  * can be accessed in multiple threads. This is an external catalog because it is expected to
  * interact with external systems.
  *
- * Implementations should throw [[NoSuchDatabaseException]] when table or database don't exist.
+ * Implementations should throw [[NoSuchDatabaseException]] when databases don't exist.
  */
 abstract class ExternalCatalog {
   import CatalogTypes.TablePartitionSpec

http://git-wip-us.apache.org/repos/asf/spark/blob/fb0fab63/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
index 52e0210..cc8175c 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
@@ -322,7 +322,7 @@ final class Decimal extends Ordered[Decimal] with Serializable {
     }
   }
 
-  // HiveTypeCoercion will take care of the precision, scale of result
+  // TypeCoercion will take care of the precision, scale of result
   def * (that: Decimal): Decimal =
     Decimal(toJavaBigDecimal.multiply(that.toJavaBigDecimal, MATH_CONTEXT))
 

http://git-wip-us.apache.org/repos/asf/spark/blob/fb0fab63/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
index 2340ff0..388df70 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
@@ -159,7 +159,7 @@ private[sql] object PartitioningUtils {
    *     Seq(
    *       Literal.create(42, IntegerType),
    *       Literal.create("hello", StringType),
-   *       Literal.create(3.14, FloatType)))
+   *       Literal.create(3.14, DoubleType)))
    * }}}
    * and the path when we stop the discovery is:
    * {{{


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
