Github user dorx commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1367#discussion_r14846509
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/stat/correlation/Correlation.scala ---
    @@ -0,0 +1,121 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.mllib.stat.correlation
    +
    +import org.apache.spark.mllib.linalg.{DenseVector, Matrix, Vector}
    +import org.apache.spark.rdd.RDD
    +
    +/**
    + * New correlation algorithms should implement this trait
    + */
    +trait Correlation {
    +
    +  /**
    +   * Compute correlation for two datasets.
    +   */
    +  def computeCorrelation(x: RDD[Double], y: RDD[Double]): Double
    +
    +  /**
    +   * Compute the correlation matrix S, for the input matrix, where S(i, j) is the correlation
    +   * between column i and j.
    +   */
    +  def computeCorrelationMatrix(X: RDD[Vector]): Matrix
    +
    +  /**
    +   * Combine the two input RDD[Double]s into an RDD[Vector] and compute the correlation using the
    +   * correlation implementation for RDD[Vector]
    +   */
    +  def computeCorrelationWithMatrixImpl(x: RDD[Double], y: RDD[Double]): Double = {
    +    val mat: RDD[Vector] = x.zip(y).mapPartitions({ iter =>
    +      iter.map {case(xi, yi) => new DenseVector(Array(xi, yi))}
    +    }, preservesPartitioning = true)
    +    computeCorrelationMatrix(mat)(0, 1)
    +  }
    +
    +}
    +
    +/**
    + * Delegates computation to the specific correlation object based on the input method name
    + *
    + * Currently supported correlations: pearson, spearman.
    + * After new correlation algorithms are added, please update the documentation here and in
    + * Statistics.scala for the correlation APIs.
    + *
    + * Cases are ignored when doing method matching. We also allow initials, e.g. "P" for "pearson", as
    + * long as initials are unique in the supported set of correlation algorithms. In addition, a
    --- End diff --
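
    As a point of reference for the discussion below, here is a rough sketch of the lenient matching the docstring above describes: case-insensitive, and accepting a unique initial or prefix such as "P" for "pearson". This is illustrative only; the CorrelationNames object and the resolve helper are not code from this PR.

    object CorrelationNames {
      // The two method names currently listed in the docstring above.
      val supported = Seq("pearson", "spearman")

      // Resolve a user-supplied method name, ignoring case and accepting a unique
      // prefix such as "P" for "pearson"; otherwise fail with a descriptive error.
      def resolve(method: String): String = {
        val m = method.toLowerCase
        supported.filter(_.startsWith(m)) match {
          case Seq(unique) => unique
          case Seq() => throw new IllegalArgumentException(
            s"Unrecognized correlation method '$method'; supported: ${supported.mkString(", ")}")
          case _ => throw new IllegalArgumentException(
            s"Ambiguous correlation method '$method'; supported: ${supported.mkString(", ")}")
        }
      }
    }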
    
    While I agree we should definitely anticipate cases for generalization, I feel ambivalent about generalizing based on fairly hypothetical use cases (especially since there haven't been name collisions among well-known correlation algorithms in almost all major stats packages for many years).
    In any case, maybe fuzzy string matching isn't the optimal solution here, but the consideration was user friendliness and the fact that the method name isn't something that gets checked at compile time (unless we find a mechanism to do so). Since Spark attracts users with its fault tolerance and user friendliness, it seems silly to me to have something fail at runtime (potentially after a lot of other data processing) because of an extra "s" in "spearmans" (and both "spearman" and "spearmans" seem to be popular options). The Correlation interface idea sounds interesting (see the sketch below), and I can see people who need that extra robustness going the extra mile of implementing it (but of course by default we don't require it and use exact string matching instead).
    On the other hand, I understand that developers can only do so much to tolerate faults in user behavior, and maintainability seems much more important in comparison.
    At any rate, if it comes down to exact string matching or no deal, I'll gladly go with exact string matching only.
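
    A minimal caller-side sketch of that Correlation interface idea, assuming it lives alongside the Correlation trait quoted above; the corr helper and the SpearmanCorrelation name are illustrative, not taken from this PR. Passing the implementation object directly means the choice of method is checked by the compiler, so a typo like "spearmans" cannot turn into a runtime failure after a long job.

    import org.apache.spark.rdd.RDD

    object CorrSketch {
      // The caller supplies a Correlation object rather than a method name string.
      def corr(x: RDD[Double], y: RDD[Double], method: Correlation): Double =
        method.computeCorrelation(x, y)
    }

    // Usage, with SpearmanCorrelation standing in for a concrete implementation:
    //   val rho = CorrSketch.corr(x, y, SpearmanCorrelation)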

