Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15795#discussion_r86744459
  
    --- Diff: docs/ml-features.md ---
    @@ -1396,3 +1396,134 @@ for more details on the API.
     {% include_example python/ml/chisq_selector_example.py %}
     </div>
     </div>
    +
    +# Locality Sensitive Hashing
    +[Locality Sensitive Hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) is a class of hash families for dimensionality reduction, used in feature transformation and machine-learned ranking. Each distance metric has its own LSH family class in `spark.ml`, which can transform feature columns into new columns of hash values. Besides feature transformation, `spark.ml` also implements approximate nearest neighbor search and approximate similarity join algorithms using LSH.
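    +
    +As a quick illustration of the feature transformation usage, here is a minimal PySpark sketch, assuming an active `spark` session; the estimator name `BucketedRandomProjectionLSH` and its `bucketLength`/`numHashTables` parameters are assumptions made for illustration, not taken from this guide:
    +
    +{% highlight python %}
    +from pyspark.ml.feature import BucketedRandomProjectionLSH
    +from pyspark.ml.linalg import Vectors
    +
    +# Toy dataset: each row holds an id and a 2-dimensional feature vector.
    +df = spark.createDataFrame([
    +    (0, Vectors.dense([1.0, 1.0])),
    +    (1, Vectors.dense([1.0, -1.0])),
    +    (2, Vectors.dense([-1.0, -1.0])),
    +], ["id", "features"])
    +
    +# Fit the LSH model; transform() appends a new column of hash values.
    +lsh = BucketedRandomProjectionLSH(inputCol="features", outputCol="hashes",
    +                                  bucketLength=2.0, numHashTables=3)
    +model = lsh.fit(df)
    +model.transform(df).show(truncate=False)
    +{% endhighlight %}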
    +
    +In this section, we call a pair of input features a false positive if the two features are hashed into the same bucket but are far apart in distance, and a false negative if the two features are close in distance but are not hashed into the same bucket.
    +
    +## Random Projection for Euclidean Distance
    +**Note:** This is different from the [Random Projection for cosine distance](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Random_projection).
    +
    +[Random Projection](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions) is the LSH family in `spark.ml` for Euclidean distance. The Euclidean distance is defined as follows:
    +`\[
    +d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}
    +\]`
    +Its LSH family projects feature vectors onto a random unit vector and divides the projected results into hash buckets:
    +`\[
    +h(\mathbf{x}) = \lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \rfloor
    +\]`
    +where `v` is a normalized random unit vector and `r` is the user-defined bucket length.
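    +
    +To make the hash function concrete, here is a small NumPy sketch; the random seed, the vector `v`, and the bucket length `r = 2.0` are illustrative choices only, not values used by `spark.ml`:
    +
    +{% highlight python %}
    +import numpy as np
    +
    +rng = np.random.default_rng(seed=42)
    +
    +# Draw a random direction and normalize it to a unit vector v.
    +v = rng.normal(size=3)
    +v /= np.linalg.norm(v)
    +
    +r = 2.0  # user-defined bucket length
    +
    +def h(x):
    +    # h(x) = floor((x . v) / r): nearby points tend to share a bucket.
    +    return int(np.floor(np.dot(x, v) / r))
    +
    +# Two nearby points usually land in the same bucket.
    +print(h(np.array([1.0, 2.0, 3.0])), h(np.array([1.1, 2.1, 3.1])))
    +{% endhighlight %}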
    +
    +The input features in Euclidean space are represented as vectors. Both sparse and dense vectors are supported.
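    +
    +For example, both of the following are valid input representations (a minimal sketch using the `pyspark.ml.linalg` vector factories):
    +
    +{% highlight python %}
    +from pyspark.ml.linalg import Vectors
    +
    +dense = Vectors.dense([1.0, 0.0, 3.0])
    +# Sparse form: size 3, with non-zero entries at indices 0 and 2.
    +sparse = Vectors.sparse(3, [0, 2], [1.0, 3.0])
    +{% endhighlight %}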
    +
    +The bucket length can be used to control the trade-offs of random projection: a larger bucket length lowers the false negative rate but usually increases the running time and the false positive rate.
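    +
    +Continuing the sketch above, the fitted model could also serve the approximate algorithms mentioned earlier; the method names `approxNearestNeighbors` and `approxSimilarityJoin` and their signatures are likewise assumptions made for illustration:
    +
    +{% highlight python %}
    +# Approximate nearest neighbor search: find the 2 rows of df whose
    +# features are closest to the key vector.
    +key = Vectors.dense([1.0, 0.0])
    +model.approxNearestNeighbors(df, key, 2).show()
    +
    +# Approximate similarity join: pairs of rows (df joined with itself)
    +# whose Euclidean distance is below the threshold 1.5.
    +model.approxSimilarityJoin(df, df, 1.5, distCol="EuclideanDistance").show()
    +{% endhighlight %}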
    --- End diff --
    
    Isn't the point here that near vectors end up in nearby buckets? I feel 
like this is skipping the intuition of why you would care about this technique 
or when you'd use it. Not like this needs a whole paragraph, just a few 
sentences of pointers?

