Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/448#discussion_r11818451
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
    @@ -313,4 +314,46 @@ class SchemaRDD(
           }
         }
       }
    +
    +  /**
    +   * Creates a SchemaRDD by applying this RDD's own schema to a child RDD. Typically used
    +   * to wrap results of base RDD functions that do not change the schema.
    +   *
    +   * @param childRDD an RDD derived from this one that has the same schema
    +   *
    +   * @group schema
    +   */
    +  private def applySchema(childRDD: RDD[Row]): SchemaRDD =
    +    new SchemaRDD(sqlContext, SparkLogicalPlan(ExistingRdd(logicalPlan.output, childRDD)))
    +
    +  // =======================================================================
    +  // Base RDD functions that do NOT change schema
    +  // =======================================================================
    +
    +  override def coalesce(numPartitions: Int, shuffle: Boolean = false): SchemaRDD =
    +    applySchema(super.coalesce(numPartitions, shuffle))
    +
    +  override def distinct(numPartitions: Int): SchemaRDD =
    +    applySchema(super.distinct(numPartitions))
    +
    +  override def filter(f: Row => Boolean): SchemaRDD =
    +    applySchema(super.filter(f))
    +
    +  override def intersection(other: RDD[Row]): SchemaRDD =
    +    applySchema(super.intersection(other))
    +
    +  override def intersection(other: RDD[Row], partitioner: Partitioner): SchemaRDD =
    +    applySchema(super.intersection(other, partitioner))
    +
    +  override def intersection(other: RDD[Row], numPartitions: Int): SchemaRDD =
    +    applySchema(super.intersection(other, numPartitions))
    +
    +  override def sample(withReplacement: Boolean, fraction: Double, seed: Int): SchemaRDD =
    --- End diff --
    
    Oh, you are right.  The base impl is probably lazy too.  The distinction I
    was trying to make is that while normal RDD operations are lazy, they are not
    holistically optimized before execution.  Whereas if we create a logical
    operator and defer the creation of RDDs, there may be some extra chances for
    optimization (at some point in the future).  We definitely want to override
    the base impl, but we don't need multiple redundant methods for creating
    samples.
    
    Also note that you might need to sync with the changes being made in #462 .
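
    To make the optimization point concrete, here is a toy sketch of why deferring RDD creation behind a logical operator helps: if each `sample` call eagerly produced an RDD, two nested samples would execute as two passes, but as deferred logical operators an optimizer can collapse them into one before anything runs.  All names below (`LogicalNode`, `Source`, `Sample`, `optimize`) are hypothetical illustrations, not Catalyst's actual classes.

    ```scala
    // Toy logical plan: nodes describe computations without creating RDDs.
    sealed trait LogicalNode
    case class Source(data: Seq[Int]) extends LogicalNode
    case class Sample(fraction: Double, child: LogicalNode) extends LogicalNode

    // Optimizer rule: collapse nested samples by multiplying their fractions.
    // This holistic rewrite is only possible because no RDD has been
    // materialized yet; eager per-call RDD creation would lock in two passes.
    def optimize(plan: LogicalNode): LogicalNode = plan match {
      case Sample(f1, Sample(f2, child)) => optimize(Sample(f1 * f2, child))
      case Sample(f, child)              => Sample(f, optimize(child))
      case other                         => other
    }

    val plan = Sample(0.5, Sample(0.2, Source(1 to 100)))
    val optimized = optimize(plan) // a single Sample over the source
    ```

    The same idea applies here: overriding `sample` to build a logical sampling operator keeps the door open for such plan-level rewrites later.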

