[ https://issues.apache.org/jira/browse/SPARK-13969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900825#comment-15900825 ]

Nick Pentreath commented on SPARK-13969:
----------------------------------------

I think {{HashingTF}} and {{FeatureHasher}} are different things - similar to 
{{HashingVectorizer}} and {{FeatureHasher}} in scikit-learn.

{{HashingTF}} ({{HashingVectorizer}}) transforms a Seq of terms (typically 
{{String}}) into a term frequency vector. Technically it can operate on a Seq 
of any type, though in practice only strings and numbers are hashed (see the 
[murmur3Hash function|https://github.com/apache/spark/blob/60022bfd65e4637efc0eb5f4cc0112289c783147/mllib/src/main/scala/org/apache/spark/mllib/feature/HashingTF.scala#L151]). 
It could certainly operate on multiple columns - that would hash all the 
columns of sentences into one term frequency vector - but that seems less 
likely to be used in practice (though Vowpal Wabbit supports a form of this 
with its namespaces).
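
To make the current behaviour concrete, here is a minimal sketch of the 
existing {{HashingTF}} usage (the column names and data are just illustrative):
{code}
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Illustrative only: tokenize a text column, then hash the resulting Seq[String]
val df = spark.createDataFrame(Seq(
  (0, "spark is fast"),
  (1, "hashing tricks scale")
)).toDF("id", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val hashingTF = new HashingTF()
  .setInputCol("words")
  .setOutputCol("features")
  .setNumFeatures(1 << 18)  // 2^18 hash buckets (the ml default)

val features = hashingTF.transform(tokenizer.transform(df))
{code}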

What {{HashingTF}} does not support is arbitrary categorical or numeric 
columns. It is possible to support categorical "one-hot" style encoding using 
what I have come to call the "stringify hack" - transforming a set of 
categorical columns into a Seq for input to HashingTF.

So taking say two categorical columns {{city}} and {{state}}, for example:
{code}
+--------+-----+-------------------------+
|city    |state|stringified              |
+--------+-----+-------------------------+
|Boston  |MA   |[city=Boston, state=MA]  |
|New York|NY   |[city=New York, state=NY]|
+--------+-----+-------------------------+
{code}
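
For illustration, one way to build that {{stringified}} column with DataFrame 
functions and feed it into {{HashingTF}} (a rough sketch; column names as in 
the table above):
{code}
import org.apache.spark.ml.feature.HashingTF
import org.apache.spark.sql.functions._

val df = spark.createDataFrame(Seq(
  ("Boston", "MA"),
  ("New York", "NY")
)).toDF("city", "state")

// "Stringify hack": encode each categorical column as a "name=value" term
val stringified = df.withColumn("stringified",
  array(concat(lit("city="), col("city")), concat(lit("state="), col("state"))))

// HashingTF then treats the Seq[String] as a bag of terms,
// producing a one-hot style hashed encoding
val hashed = new HashingTF()
  .setInputCol("stringified")
  .setOutputCol("features")
  .transform(stringified)
{code}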

This works but is pretty ugly, doesn't fit nicely into a pipeline, and can't 
support numeric columns.

The {{FeatureHasher}} I propose acts like the one in scikit-learn - it can 
handle multiple numeric and/or categorical columns in one pass. I go into some 
detail about all of this in my [Spark Summit East 2017 
talk|https://www.slideshare.net/SparkSummit/feature-hashing-for-scalable-machine-learning-spark-summit-east-talk-by-nick-pentreath]. 
The rough draft used for the talk is 
[here|https://github.com/MLnick/spark/blob/FeatureHasher/mllib/src/main/scala/org/apache/spark/ml/feature/FeatureHasher.scala].
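
For a sense of the intended usage, a sketch against the draft's API (subject 
to change; the setter names here are assumptions based on that draft):
{code}
import org.apache.spark.ml.feature.FeatureHasher  // from the draft branch above

val df = spark.createDataFrame(Seq(
  ("Boston", "MA", 2.0),
  ("New York", "NY", 3.0)
)).toDF("city", "state", "clicks")

// Categorical columns are hashed as "name=value" terms; numeric columns
// contribute their value at the hashed index - so it acts as one-hot encoder
// and vector assembler in a single pass
val hasher = new FeatureHasher()
  .setInputCols("city", "state", "clicks")
  .setOutputCol("features")

val hashed = hasher.transform(df)
{code}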

Another nice thing about {{FeatureHasher}} is that it opens up possibilities 
such as Vowpal Wabbit-style namespaces - it would be interesting to see 
whether we could mimic VW's internal feature crossing, and so on.

> Extend input format that feature hashing can handle
> ---------------------------------------------------
>
>                 Key: SPARK-13969
>                 URL: https://issues.apache.org/jira/browse/SPARK-13969
>             Project: Spark
>          Issue Type: Sub-task
>          Components: ML, MLlib
>            Reporter: Nick Pentreath
>            Priority: Minor
>
> Currently {{HashingTF}} works like {{CountVectorizer}} (the equivalent in 
> scikit-learn is {{HashingVectorizer}}). That is, it works on a sequence of 
> strings and computes term frequencies.
> The use cases for feature hashing extend to arbitrary feature values (binary, 
> count or real-valued). For example, scikit-learn's {{FeatureHasher}} can 
> accept a sequence of (feature_name, value) pairs (e.g. a map, list). In this 
> way, feature hashing can operate as both "one-hot encoder" and "vector 
> assembler" at the same time.
> Investigate adding a more generic feature hasher (that in turn can be used by 
> {{HashingTF}}).


