Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13846#discussion_r68240312
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
    @@ -206,15 +205,33 @@ object RemoveAliasOnlyProject extends Rule[LogicalPlan] {
     object EliminateSerialization extends Rule[LogicalPlan] {
       def apply(plan: LogicalPlan): LogicalPlan = plan transform {
         case d @ DeserializeToObject(_, _, s: SerializeFromObject)
    -        if d.outputObjectType == s.inputObjectType =>
    +        if d.outputObjAttr.dataType == s.inputObjAttr.dataType =>
           // Adds an extra Project here, to preserve the output expr id of `DeserializeToObject`.
           // We will remove it later in RemoveAliasOnlyProject rule.
    -      val objAttr =
    -        Alias(s.child.output.head, s.child.output.head.name)(exprId = d.output.head.exprId)
    +      val objAttr = Alias(s.inputObjAttr, s.inputObjAttr.name)(exprId = d.outputObjAttr.exprId)
           Project(objAttr :: Nil, s.child)
    +
         case a @ AppendColumns(_, _, _, s: SerializeFromObject)
    -        if a.deserializer.dataType == s.inputObjectType =>
    +        if a.deserializer.dataType == s.inputObjAttr.dataType =>
           AppendColumnsWithObject(a.func, s.serializer, a.serializer, s.child)
    +
    +    // If there is a `SerializeFromObject` under a typed filter and its input object type is the same
    +    // as the typed filter's deserializer, we can convert the typed filter to a normal filter without
    +    // deserialization in the condition, and push it down through `SerializeFromObject`.
    +    // e.g. `ds.map(...).filter(...)` can be optimized by this rule to save extra deserialization,
    +    // but `ds.map(...).as[AnotherType].filter(...)` can not be optimized.
    +    case f @ TypedFilter(_, _, s: SerializeFromObject)
    +        if f.deserializer.dataType == s.inputObjAttr.dataType =>
    +      s.copy(child = f.withObject(s.child))
    --- End diff --
    
    Well, it's true, and `Filter` here could be any other unary operator whose output is derived from its child, e.g. `Sort`.
    
    However, I don't think `ds.map(...).filter(byExpr).filter(byFunc)` is a common case, i.e. interleaving typed and untyped operations. If there is an easy and general way to optimize it, I'm happy to add it; otherwise I'd prefer to leave it as is.
    
    What do you think?
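    The rewrite under discussion pushes a typed filter below `SerializeFromObject` when the filter's deserializer type matches the serializer's input object type, so the predicate runs directly on the objects and the extra deserialize step disappears. A minimal, self-contained sketch of that idea (using simplified stand-in case classes, not Spark's actual `LogicalPlan` hierarchy):

```scala
object TypedFilterPushdown {
  // Hypothetical, simplified plan nodes; Spark's real classes carry
  // expressions, attributes, and deserializers rather than plain strings.
  sealed trait Plan
  final case class Relation(name: String) extends Plan
  final case class SerializeFromObject(inputObjType: String, child: Plan) extends Plan
  final case class TypedFilter(deserializerType: String, child: Plan) extends Plan

  // If the typed filter's deserializer type matches the serializer's input
  // object type, swap the two nodes: the filter then operates on objects
  // directly, and serialization happens once, after filtering.
  def pushTypedFilter(plan: Plan): Plan = plan match {
    case f @ TypedFilter(t, s @ SerializeFromObject(inT, grandchild)) if t == inT =>
      s.copy(child = f.copy(child = grandchild))
    case other => other
  }
}
```

    When the types differ (the `ds.map(...).as[AnotherType].filter(...)` case above), the guard fails and the plan is left unchanged, matching the rule's comment.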
