If you want to convert the hash back to a word, that goes against the very
idea of hashing: it is a one-way, many-to-one mapping, so the original term
is not recoverable from the index alone.
You could maintain your own map from words to hashed indices alongside the
transformation, but that wouldn't be a good approach.
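To make the point concrete, here is a minimal pure-Python sketch of the hashing trick (this is an illustration, not MLlib's implementation; the tiny feature count and the sample documents are made up). It shows that an index can only be mapped back to *candidate* words, and only if you build that reverse map yourself while you still have the raw terms:

```python
# Tiny feature space on purpose, so collisions are likely and visible.
NUM_FEATURES = 16

def term_index(term, num_features=NUM_FEATURES):
    # Map a term to a fixed-size index, as the hashing trick does.
    # This discards the term itself: many terms can share one index.
    return hash(term) % num_features

# To go from an index back to a word, you must record the mapping
# yourself while the raw terms are still available -- the hashed
# vector alone is not enough.
docs = [["spark", "mllib", "tfidf"], ["spark", "hashing"]]
index_to_terms = {}
for doc in docs:
    for term in doc:
        index_to_terms.setdefault(term_index(term), set()).add(term)

# Even this reverse map yields a *set* of candidate words per index
# (collisions), not a unique answer.
for idx, terms in sorted(index_to_terms.items()):
    print(idx, sorted(terms))
```

Note that Python salts `hash()` per process, so the concrete indices differ between runs; the one-way, collision-prone nature of the mapping is the point.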
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/TFIDF-Transformation-tp24086p24203.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hello spark users,
I hope your week is going fantastic! I am having some trouble with the TF-IDF
in MLlib and was wondering if anyone can point me in the right direction.
The data ingestion and the initial term-frequency count code taken from the
example work fine (I am using the first …); it does
the counting and spreads
the current array into two separate ones using Vectors.sparse.
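For reference, the steps the MLlib example performs can be sketched in plain Python without a Spark installation (this is a hedged illustration of what HashingTF and IDF compute, not the MLlib code itself; the feature count and corpus are made up). The sparse result is expressed as the two parallel arrays, indices and values, that Vectors.sparse takes:

```python
import math

NUM_FEATURES = 1 << 10  # illustrative; MLlib's default is much larger

def hashing_tf(doc):
    # Term-frequency vector via the hashing trick, kept as a sparse dict.
    tf = {}
    for term in doc:
        idx = hash(term) % NUM_FEATURES
        tf[idx] = tf.get(idx, 0) + 1
    return tf

corpus = [["spark", "rdd", "spark"], ["rdd", "mllib"], ["tfidf", "mllib"]]
tfs = [hashing_tf(doc) for doc in corpus]

# Document frequency per index, then the IDF weighting
# idf(t) = log((numDocs + 1) / (df(t) + 1)), as MLlib uses.
num_docs = len(tfs)
df = {}
for tf in tfs:
    for idx in tf:
        df[idx] = df.get(idx, 0) + 1

def to_sparse(tf):
    # Split the sparse TF-IDF result into the two parallel arrays
    # (indices, values) that Vectors.sparse expects.
    indices = sorted(tf)
    values = [tf[i] * math.log((num_docs + 1) / (df[i] + 1)) for i in indices]
    return indices, values

indices, values = to_sparse(tfs[0])
```

In Spark itself these steps correspond to HashingTF.transform followed by fitting an IDF model on the TF vectors and transforming them.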
Thanks in advance and I hope to hear from you soon!
Best,
Hans
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/TFIDF-Transformation-tp24086.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.