Github user ygcao commented on a diff in the pull request: https://github.com/apache/spark/pull/10152#discussion_r47435244

--- Diff: mllib/src/main/scala/org/apache/spark/mllib/feature/Word2Vec.scala ---

```
@@ -534,8 +577,15 @@ class Word2VecModel private[spark] (
     // Need not divide with the norm of the given vector since it is constant.
     val cosVec = cosineVec.map(_.toDouble)
     var ind = 0
+    var vecNorm = 1f
+    if (norm) {
```

--- End diff --

I don't mind much about making it always normalized. Just FYI: for the current brute-force kNN implementation in findSynonyms, the unnormalized version saves potentially millions of division operations when the vocabulary has millions of words, although that still amounts to at most seconds. When callers only care about the top K results without needing the similarity values, and they call it for every word they are interested in, that minor saving is multiplied again.
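To make the point concrete, here is a minimal standalone sketch (not the actual Word2VecModel code; the `topK` helper and the toy vocabulary are invented for illustration). It shows why the query-vector norm can always be skipped for ranking (dividing every score by the same constant cannot reorder them), while full cosine similarity costs one extra division per vocabulary entry, which is the saving the comment refers to.

```scala
// Illustrative sketch only, assuming a small in-memory vocabulary.
object TopKSketch {
  def dot(a: Array[Float], b: Array[Float]): Float =
    a.zip(b).map { case (x, y) => x * y }.sum

  def norm(v: Array[Float]): Float =
    math.sqrt(dot(v, v).toDouble).toFloat

  // Hypothetical helper: the k words with the highest scores.
  def topK(scores: Map[String, Float], k: Int): Seq[String] =
    scores.toSeq.sortBy(-_._2).take(k).map(_._1)

  def main(args: Array[String]): Unit = {
    val vocab: Map[String, Array[Float]] = Map(
      "cat" -> Array(1f, 0f),
      "dog" -> Array(0.9f, 0.1f),
      "car" -> Array(0f, 1f)
    )
    val query = Array(2f, 0f) // deliberately a non-unit query vector

    // Raw dot products: one multiply-add pass, no divisions.
    val dots = vocab.map { case (w, v) => w -> dot(query, v) }

    // Dividing all scores by the (constant) query norm cannot change
    // the ordering, so it is safely skipped for top-K ranking.
    val byQueryNorm = dots.map { case (w, s) => w -> s / norm(query) }
    assert(topK(dots, 2) == topK(byQueryNorm, 2))

    // Full cosine similarity needs one extra division per word --
    // the per-entry cost that adds up over a vocabulary of millions.
    val cosine = vocab.map { case (w, v) =>
      w -> dot(query, v) / (norm(query) * norm(v))
    }
    assert(topK(cosine, 2) == Seq("cat", "dog"))
  }
}
```

Whether the dot-product ranking matches the cosine ranking depends on the word vectors' norms being comparable; the sketch only demonstrates that the query norm itself is irrelevant to ordering, which is what the in-code comment ("Need not divide with the norm of the given vector since it is constant") asserts.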