GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21882#discussion_r205935411
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/PartitionBatchPruningSuite.scala ---
    @@ -95,6 +111,17 @@ class PartitionBatchPruningSuite
       checkBatchPruning("SELECT key FROM pruningData WHERE 11 >= key", 1, 2)(1 to 11)
       checkBatchPruning("SELECT key FROM pruningData WHERE 88 < key", 1, 2)(89 to 100)
       checkBatchPruning("SELECT key FROM pruningData WHERE 89 <= key", 1, 2)(89 to 100)
    +  // Do not filter on array type
    +  checkBatchPruning("SELECT _1 FROM pruningArrayData WHERE _1 = array(1)", 5, 10)(Seq(Array(1)))
    +  checkBatchPruning("SELECT _1 FROM pruningArrayData WHERE _1 <= array(1)", 5, 10)(Seq(Array(1)))
    +  checkBatchPruning("SELECT _1 FROM pruningArrayData WHERE _1 >= array(1)", 5, 10)(
    +    testArrayData.map(_._1))
    +  // Do not filter on binary type
    +  checkBatchPruning(
    +    title = "SELECT _1 FROM pruningBinaryData WHERE _1 == 0x01 (binary literal)",
    +    actual = spark.table("pruningBinaryData").filter($"_1".equalTo(Array[Byte](1.toByte))),
    --- End diff ---
    
    The problem here is that there seems to be no SQL literal syntax for binary values, so I had to use the DSL.
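    
    For context, here is a minimal standalone sketch of that DSL workaround (assuming a local `SparkSession` and a registered table `pruningBinaryData` with a binary column `_1`, as in this suite):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._
    
    // Since there seems to be no SQL literal syntax for binary values,
    // the predicate is built with the Column DSL instead of a SQL string.
    // Column.equalTo accepts Any; the Array[Byte] argument is wrapped as
    // a binary literal internally.
    val filtered = spark.table("pruningBinaryData")
      .filter($"_1".equalTo(Array[Byte](1.toByte)))
    ```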


---
