Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15795#discussion_r86744662
  
    --- Diff: docs/ml-features.md ---
    @@ -1396,3 +1396,134 @@ for more details on the API.
     {% include_example python/ml/chisq_selector_example.py %}
     </div>
     </div>
    +
    +# Locality Sensitive Hashing
    +[Locality Sensitive Hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) is a class of hash families for dimensionality reduction, which can be used for both feature transformation and machine-learned ranking. Each distance metric has its own LSH family class in `spark.ml`, which can transform feature columns into new columns of hash values. Besides feature transformation, `spark.ml` also implements approximate nearest neighbor search and approximate similarity join algorithms using LSH.
    +
    +In this section, we call a pair of input features a false positive if the two features are hashed into the same bucket but are far apart in distance, and a false negative if the two features are close in distance but are not hashed into the same bucket.
    +
    +## Random Projection for Euclidean Distance
    +**Note:** This is different from the [Random Projection for cosine distance](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Random_projection).
    +
    +[Random 
Projection](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions)
 is the LSH family in `spark.ml` for Euclidean distance. The Euclidean distance 
is defined as follows:
    +`\[
    +d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}
    +\]`
    +Its LSH family projects feature vectors onto a random unit vector and divides the projected results into hash buckets:
    +`\[
    +h(\mathbf{x}) = \lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \rfloor
    +\]`
    +where `v` is a random unit vector and `r` is the user-defined bucket length.
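    +
    +To make the bucketing concrete, below is a minimal plain-Scala sketch of this hash function (an illustration only, not the `spark.ml` implementation; the Gaussian sampling and the seed are assumptions):
    +
    +```scala
    +import scala.util.Random
    +
    +// h(x) = floor((x . v) / r): project x onto a random unit vector v,
    +// then divide the projected line into buckets of width r.
    +def randomUnitVector(dim: Int, rng: Random): Array[Double] = {
    +  val raw = Array.fill(dim)(rng.nextGaussian())
    +  val norm = math.sqrt(raw.map(x => x * x).sum)
    +  raw.map(_ / norm)
    +}
    +
    +def hash(x: Array[Double], v: Array[Double], r: Double): Long = {
    +  val dot = x.zip(v).map { case (a, b) => a * b }.sum
    +  math.floor(dot / r).toLong
    +}
    +
    +val rng = new Random(42)
    +val v = randomUnitVector(3, rng)
    +// Nearby points tend to fall into the same bucket:
    +println(hash(Array(1.0, 2.0, 3.0), v, 2.0))
    +println(hash(Array(1.1, 2.1, 3.1), v, 2.0))
    +```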
    +
    +The input features in Euclidean space are represented as vectors. Both sparse and dense vectors are supported.
    +
    +The bucket length can be used to control the trade-offs of random projection: a larger bucket length lowers the false negative rate but usually increases the running time and the false positive rate.
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +Refer to the [RandomProjection Scala 
docs](api/scala/index.html#org.apache.spark.ml.feature.RandomProjection)
    +for more details on the API.
    +
    +{% include_example 
scala/org/apache/spark/examples/ml/RandomProjectionExample.scala %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +
    +Refer to the [RandomProjection Java 
docs](api/java/org/apache/spark/ml/feature/RandomProjection.html)
    +for more details on the API.
    +
    +{% include_example 
java/org/apache/spark/examples/ml/JavaRandomProjectionExample.java %}
    +</div>
    +</div>
    +
    +## MinHash for Jaccard Distance
    +[MinHash](https://en.wikipedia.org/wiki/MinHash) is the LSH family in `spark.ml` for Jaccard distance, where input features are sets of natural numbers. The Jaccard distance of two sets is defined by the cardinality of their intersection and union:
    +`\[
    +d(\mathbf{A}, \mathbf{B}) = 1 - \frac{|\mathbf{A} \cap 
\mathbf{B}|}{|\mathbf{A} \cup \mathbf{B}|}
    +\]`
    +As its LSH family, MinHash applies a random [perfect hash function](https://en.wikipedia.org/wiki/Perfect_hash_function) `g` to each element in the set and takes the minimum of all hashed values:
    +`\[
    +h(\mathbf{A}) = \min_{a \in \mathbf{A}}(g(a))
    +\]`
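    +
    +The intuition is that, for a random hash function, the probability that two sets share the same minimum hashed value equals their Jaccard similarity, so the agreement rate over many hash functions approximates it. Below is a toy plain-Scala sketch (not the `spark.ml` implementation; the hash family `g(x) = (ax + b) mod p` and the constants are assumptions):
    +
    +```scala
    +import scala.util.Random
    +
    +val prime = 2038074743L  // a prime larger than any element id (assumption)
    +
    +// One random hash function g(x) = (a*x + b) mod prime.
    +def makeHash(rng: Random): Int => Long = {
    +  val a = 1 + rng.nextInt(Int.MaxValue - 1)
    +  val b = rng.nextInt(Int.MaxValue)
    +  x => (a.toLong * x + b) % prime
    +}
    +
    +// h(A) = min over a in A of g(a)
    +def minHash(set: Set[Int], g: Int => Long): Long = set.map(g).min
    +
    +val rng = new Random(7)
    +val hashes = Seq.fill(500)(makeHash(rng))
    +val a = Set(1, 2, 3, 4, 5)
    +val b = Set(3, 4, 5, 6)
    +// The fraction of hash functions whose minima agree approximates
    +// the Jaccard similarity |A n B| / |A u B| = 3/6 = 0.5.
    +val agree = hashes.count(g => minHash(a, g) == minHash(b, g))
    +println(agree.toDouble / hashes.length)  // roughly 0.5
    +```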
    +
    +Input sets for MinHash are represented as vectors whose dimension equals the total number of elements in the space. Each dimension of the vector represents the status of an element: a zero value means the element is not in the set, while a non-zero value means the set contains the corresponding element. For example, `Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))` means there are 10 elements in the space, and this set contains elements 2, 3 and 5.
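    +
    +For reference, here is a hedged snippet constructing that representation (assuming the `Vectors.sparse(size, Seq[(Int, Double)])` overload of `org.apache.spark.ml.linalg.Vectors`):
    +
    +```scala
    +import org.apache.spark.ml.linalg.Vectors
    +
    +// The set {2, 3, 5} in a space of 10 elements, as a sparse vector:
    +val sparseSet = Vectors.sparse(10, Seq((2, 1.0), (3, 1.0), (5, 1.0)))
    +
    +// The same set as a dense vector; any non-zero entry marks membership:
    +val denseSet = Vectors.dense(0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0)
    +```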
    --- End diff ---
    
    Same sort of comment: isn't the intuition that MinHash approximates the Jaccard similarity without actually computing it completely?
    
    This may again reduce to my different understanding of the technique, but MinHash isn't a hashing technique per se; it relies on a given family of hash functions to approximate set similarity. I'm finding it a little hard to view it as an 'LSH family'.

