Github user hvanhovell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21193#discussion_r186644321
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala ---
    @@ -623,8 +624,14 @@ case class Cast(child: Expression, dataType: DataType, timeZoneId: Option[String
       override def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode = {
         val eval = child.genCode(ctx)
         val nullSafeCast = nullSafeCastFunction(child.dataType, dataType, ctx)
    +
    +    // Below the code comment including `eval.value` and `eval.isNull` is a trick. It makes the two
    +    // expr values are referred by this code block.
         ev.copy(code = eval.code +
    -      castCode(ctx, eval.value, eval.isNull, ev.value, ev.isNull, dataType, nullSafeCast))
    +      code"""
    +        // Cast from ${eval.value}, ${eval.isNull}
    --- End diff --
    
    In this particular case I think we should not use the string interpolator. My preferred end game would be for the `CodeGenerator` functions to just return blocks (or something like that) instead of opaque strings. That is definitely something we should do in a follow-up; for now, can we just manually create the block?
    
    That being said, if we are going to pick one of the two, then I'd strongly prefer option 2. I think option 1 is much harder to work with, and it is also potentially buggy (what happens if you get the order wrong?).
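    
    To be concrete about "manually create the block", something along these lines is what I have in mind. This is only a rough sketch: the `CodeBlock(codeParts, blockInputs)` shape and the `+` on blocks are assumptions about the Block API being introduced here, and the exact names may differ in the PR.
    
    ```scala
    // Rough sketch (untested): build the block explicitly so that eval.value and
    // eval.isNull are registered as inputs of the block, instead of being picked
    // up only because they happen to appear inside an interpolated comment.
    val castBody = castCode(ctx, eval.value, eval.isNull, ev.value, ev.isNull, dataType, nullSafeCast)
    val castBlock = CodeBlock(
      codeParts = Seq(castBody),                    // the generated Java source, unchanged
      blockInputs = Seq(eval.value, eval.isNull))   // expr values this block refers to
    ev.copy(code = eval.code + castBlock)
    ```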


---
