Github user rednaxelafx commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21643#discussion_r198370477
  
    --- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/util/ComplexDataSuite.scala ---
    @@ -104,4 +104,40 @@ class ComplexDataSuite extends SparkFunSuite {
         // The copied data should not be changed externally.
         assert(copied.getStruct(0, 1).getUTF8String(0).toString == "a")
       }
    +
    +  test("SPARK-24659: GenericArrayData.equals should respect element type 
differences") {
    +    import scala.reflect.ClassTag
    --- End diff ---
    
    Thanks for the suggestion! I'm used to making one-off imports inside a function when an import is used only within that function, so that its scope stays as narrow as possible without cluttering the file-level import list.
    Are there any Spark coding style guidelines that suggest otherwise? If so, I'll follow them and always import at the top of the file.
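    For reference, a minimal standalone sketch (not from the PR; the object and method names are hypothetical) of the pattern being described, i.e. a one-off import scoped to the method that needs it rather than placed at the top of the file:

        object ScopedImportSketch {

          def describeIntTag(): String = {
            // One-off import: ClassTag is needed (and visible) only inside
            // this method, so the file-level import list stays uncluttered.
            import scala.reflect.ClassTag
            val tag = implicitly[ClassTag[Int]]
            tag.runtimeClass.getName
          }

          def main(args: Array[String]): Unit =
            println(describeIntTag()) // prints "int"
        }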


---
