GitHub user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18265#discussion_r121315648
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Word2Vec.scala ---
    @@ -355,9 +364,12 @@ object Word2VecModel extends MLReadable[Word2VecModel] {
           // Calculate the approximate size of the model.
           // Assuming an average word size of 15 bytes, the formula is:
           // (floatSize * vectorSize + 15) * numWords
    -      val numWords = instance.wordVectors.wordIndex.size
    -      val approximateSizeInBytes = (floatSize * instance.getVectorSize + averageWordSize) * numWords
    -      ((approximateSizeInBytes / bufferSizeInBytes) + 1).toInt
    +      val approximateSizeInBytes = (floatSize * vectorSize + averageWordSize) * numWords
    +      val numPartitions = (approximateSizeInBytes / bufferSizeInBytes) + 1
    +      require(numPartitions < 10e8, s"Word2VecModel calculated that it needs $numPartitions " +
    --- End diff ---
    
    I'm pretty sure it is necessary. If we cap it at Int.MaxValue and the user hits that cap, we'll fail later anyway, when we actually try to write that many partitions.
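    
    To illustrate the concern with a minimal sketch (not the PR's code; the partition count below is a made-up value above Int.MaxValue): an unguarded Long-to-Int cast wraps around silently, whereas the `require` fails fast with an actionable message.
    
        import scala.util.Try
    
        // Minimal sketch (not the PR's code): a Long partition count above
        // Int.MaxValue cannot be capped or cast safely, so failing fast is preferable.
        object PartitionCountSketch extends App {
          val numPartitions: Long = 3000000001L   // hypothetical count > Int.MaxValue (2147483647)
    
          // An unguarded .toInt, as in the old code path, silently wraps around:
          println(numPartitions.toInt)            // -1294967295, a nonsense partition count
    
          // The guard in the PR surfaces the problem up front instead:
          val checked = Try(require(numPartitions < 10e8,
            s"Word2VecModel calculated that it needs $numPartitions partitions"))
          println(checked)                        // Failure(java.lang.IllegalArgumentException: ...)
        }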

