Github user freeman-lab commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5267#discussion_r29121200
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/clustering/HierarchicalClustering.scala ---
    @@ -0,0 +1,574 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.mllib.clustering
    +
    +import breeze.linalg.{DenseVector => BDV, SparseVector => BSV, Vector => BV, norm => breezeNorm}
    +import org.apache.spark.mllib.linalg.{Vector, Vectors}
    +import org.apache.spark.rdd.RDD
    +import org.apache.spark.util.random.XORShiftRandom
    +import org.apache.spark.{Logging, SparkException}
    +
    +import scala.collection.{Map, mutable}
    +
    +
    +object HierarchicalClustering extends Logging {
    +
    +  private[clustering] val ROOT_INDEX_KEY: Long = 1
    +
    +  /**
    +   * Finds the closest cluster center
    +   *
    +   * @param metric a distance metric
    +   * @param centers centers of the clusters
    +   * @param point a target point
    +   * @return the index of the closest center in `centers`
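    +   *
    +   * Note: the curried form lets callers bind `metric` and `centers` once and
    +   * map the resulting function over an RDD of points, e.g.
    +   * `data.map(findClosestCenter(metric)(centers))`.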
    +   */
    +  private[mllib]
    +  def findClosestCenter(metric: (BV[Double], BV[Double]) => Double)
    +        (centers: Seq[BV[Double]])(point: BV[Double]): Int = {
    +    val (_, closestIndex) = centers.zipWithIndex
    +      .map { case (center, idx) => (metric(center, point), idx) }
    +      .minBy(_._1)
    +    closestIndex
    +  }
    +}
    +
    +/**
    + * This is a divisive hierarchical clustering algorithm based on the
    + * bisecting k-means algorithm.
    + *
    + * The main idea of this algorithm is based on "A comparison of document
    + * clustering techniques", M. Steinbach, G. Karypis and V. Kumar. Workshop
    + * on Text Mining, KDD, 2000.
    + * http://cs.fit.edu/~pkc/classes/ml-internet/papers/steinbach00tr.pdf
    + *
    + * @param numClusters the number of clusters you want
    + * @param clusterMap a Map from a cluster index to its ClusterTree
    + * @param maxIterations the maximum number of iterations in each clustering step
    + * @param maxRetries the maximum number of retries for each clustering step
    + * @param seed a random seed
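    + *
    + * Example (a usage sketch; `data` below is assumed to be an RDD[Vector]):
    + * {{{
    + *   val algo = new HierarchicalClustering().setNumClusters(5).setSeed(1L)
    + *   val model = algo.run(data)
    + * }}}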
    + */
    +class HierarchicalClustering private (
    +  private var numClusters: Int,
    +  private var clusterMap: Map[Long, ClusterTree],
    +  private var maxIterations: Int,
    +  private var maxRetries: Int,
    +  private var seed: Long) extends Logging {
    +
    +  /**
    +   * Constructs with the default configuration
    +   */
    +  def this() = this(20, mutable.ListMap.empty[Long, ClusterTree], 20, 10, 1)
    +
    +  /**
    +   * Sets the number of clusters you want
    +   */
    +  def setNumClusters(numClusters: Int): this.type = {
    +    this.numClusters = numClusters
    +    this
    +  }
    +
    +  def getNumClusters: Int = this.numClusters
    +
    +  /**
    +   * Sets the maximum number of iterations in each clustering step
    +   */
    +  def setMaxIterations(maxIterations: Int): this.type = {
    +    this.maxIterations = maxIterations
    +    this
    +  }
    +
    +  def getMaxIterations: Int = this.maxIterations
    +
    +  /**
    +   * Sets the maximum number of retries for each clustering step
    +   */
    +  def setMaxRetries(maxRetries: Int): this.type = {
    +    this.maxRetries = maxRetries
    +    this
    +  }
    +
    +  def getMaxRetries: Int = this.maxRetries
    +
    +  /**
    +   * Sets the random seed
    +   */
    +  def setSeed(seed: Long): this.type = {
    +    this.seed = seed
    +    this
    +  }
    +
    +  def getSeed: Long = this.seed
    +
    +  /**
    +   * Runs the hierarchical clustering algorithm
    +   * @param input RDD of vectors
    +   * @return model for the hierarchical clustering
    +   */
    +  def run(input: RDD[Vector]): HierarchicalClusteringModel = {
    +    val sc = input.sparkContext
    +    log.info(s"${sc.appName} starts a hierarchical clustering algorithm")
    +
    +    var data = initData(input).cache()
    --- End diff --
    
    This algorithm contains a lot of `cache()` and `unpersist()` calls. Can we
    add a more detailed note in the docstrings about how much of a data set will
    be cached? In other words, what is the expected memory footprint of this
    algorithm relative to the size of the data set? This will help users decide
    how much RAM to have available when running it.
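    
    For concreteness, a minimal sketch of the caching pattern in question
    (hypothetical names; `splitLevel` stands in for one bisecting step, and this
    is not the PR's actual code):
    
        import org.apache.spark.rdd.RDD
    
        // Sketch: build the tree one level at a time. During the hand-off between
        // levels, at most two cached copies of the assignment RDD are alive.
        def growTree[T](input: RDD[T], numClusters: Int)(splitLevel: RDD[T] => RDD[T]): RDD[T] = {
          var data = input.cache()
          var clusters = 1
          while (clusters < numClusters) {
            val next = splitLevel(data).cache() // cache the next level's assignments
            next.count()                        // materialize before releasing the old copy
            data.unpersist()                    // drop the previous level's cached partitions
            data = next
            clusters *= 2                       // bisecting roughly doubles the leaves per level
          }
          data
        }
    
    If the implementation follows this shape, the steady-state footprint is about
    one cached copy of the prepared data set, with a transient peak near two
    copies during each split; a docstring note to that effect would answer the
    question above.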

