GitHub user kiszk opened a pull request:

    https://github.com/apache/spark/pull/22014

    [SPARK-25036][SQL] avoid match may not be exhaustive in Scala-2.12

    ## What changes were proposed in this pull request?
    
    This PR removes the following compilation errors, seen when building with sbt and Scala 2.12, by 
adding a default case to each `match` expression.
    
    ```
    /home/ishizaki/Spark/PR/scala212/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/ValueInterval.scala:63: match may not be exhaustive.
    [error] It would fail on the following inputs: (NumericValueInterval(_, _), _), (_, NumericValueInterval(_, _)), (_, _)
    [error] [warn]   def isIntersected(r1: ValueInterval, r2: ValueInterval): Boolean = (r1, r2) match {
    [error] [warn]
    [error] [warn] /home/ishizaki/Spark/PR/scala212/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/ValueInterval.scala:79: match may not be exhaustive.
    [error] It would fail on the following inputs: (NumericValueInterval(_, _), _), (_, NumericValueInterval(_, _)), (_, _)
    [error] [warn]     (r1, r2) match {
    [error] [warn]
    [error] [warn] /home/ishizaki/Spark/PR/scala212/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/ApproxCountDistinctForIntervals.scala:67: match may not be exhaustive.
    [error] It would fail on the following inputs: (ArrayType(_, _), _), (_, ArrayData()), (_, _)
    [error] [warn]     (endpointsExpression.dataType, endpointsExpression.eval()) match {
    [error] [warn]
    [error] [warn] /home/ishizaki/Spark/PR/scala212/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala:470: match may not be exhaustive.
    [error] It would fail on the following inputs: NewFunctionSpec(_, None, Some(_)), NewFunctionSpec(_, Some(_), None)
    [error] [warn]     newFunction match {
    [error] [warn]
    [error] [warn] [error] [warn] /home/ishizaki/Spark/PR/scala212/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala:709: match may not be exhaustive.
    [error] It would fail on the following input: Schema((x: org.apache.spark.sql.types.DataType forSome x not in org.apache.spark.sql.types.StructType), _)
    [error] [warn]   def attributesFor[T: TypeTag]: Seq[Attribute] = schemaFor[T] match {
    [error] [warn]
    ```
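    The pattern behind the fix can be sketched as follows. This is a simplified, hypothetical illustration (the types below are stand-ins, not the actual Spark classes): when the matched trait is not sealed, Scala 2.12 cannot prove the `match` exhaustive, and a default case resolves the error.

    ```scala
    // Hypothetical stand-in for Spark's ValueInterval hierarchy: the trait is
    // not sealed, so the compiler cannot prove exhaustiveness of a match on it.
    trait ValueInterval
    case class NumericValueInterval(min: Double, max: Double) extends ValueInterval
    case object DefaultValueInterval extends ValueInterval

    def isIntersected(r1: ValueInterval, r2: ValueInterval): Boolean = (r1, r2) match {
      case (n1: NumericValueInterval, n2: NumericValueInterval) =>
        // two numeric intervals overlap iff each starts before the other ends
        n1.min <= n2.max && n2.min <= n1.max
      case _ =>
        // default case added so Scala 2.12 no longer reports
        // "match may not be exhaustive"
        true
    }
    ```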
    
    ## How was this patch tested?
    
    Existing UTs with Scala 2.11.
    Manually built with Scala 2.12.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kiszk/spark SPARK-25036b

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22014
    
----
commit 9cc0c60611d413b363718066f246926f47e03ffd
Author: Kazuaki Ishizaki <ishizaki@...>
Date:   2018-08-06T19:24:08Z

    add default case to match

----


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
