ytatichno opened a new pull request, #652:
URL: https://github.com/apache/commons-collections/pull/652

   <!--
     Licensed to the Apache Software Foundation (ASF) under one
     or more contributor license agreements.  See the NOTICE file
     distributed with this work for additional information
     regarding copyright ownership.  The ASF licenses this file
     to you under the Apache License, Version 2.0 (the
     "License"); you may not use this file except in compliance
     with the License.  You may obtain a copy of the License at
   
       https://www.apache.org/licenses/LICENSE-2.0
   
     Unless required by applicable law or agreed to in writing,
     software distributed under the License is distributed on an
     "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
   -->
   
   As described in https://issues.apache.org/jira/projects/COLLECTIONS/issues/COLLECTIONS-878:
   
   Total summary:
   - Precompute the initial capacity for the new HashMap in MapUtils.invertMap, so the map is not resized during the subsequent insertions.
   - The implementation matches the JDK's HashMap static factory method:
   
   ```java
   // from java.util.HashMap (since JDK 19)
   static int calculateHashMapCapacity(int numMappings) {
       return (int) Math.ceil(numMappings / (double) DEFAULT_LOAD_FACTOR);
   }
   ```
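
   For illustration, here is a minimal, self-contained sketch of the idea (class, method and variable names are illustrative, not the actual commons-collections code): size the result map with the same formula so that inserting map.size() entries never triggers a resize.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public final class InvertMapSketch {

       // Illustrative copy of the JDK formula: the smallest capacity whose default
       // threshold (capacity * 0.75) holds numMappings entries without a resize.
       private static int calculateCapacity(final int numMappings) {
           return (int) Math.ceil(numMappings / 0.75d);
       }

       // Sketch of an invertMap-style method that sizes the result map up front.
       public static <K, V> Map<V, K> invert(final Map<K, V> map) {
           final Map<V, K> inverted = new HashMap<>(calculateCapacity(map.size()));
           for (final Map.Entry<K, V> entry : map.entrySet()) {
               inverted.put(entry.getValue(), entry.getKey());
           }
           return inverted;
       }
   }
   ```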
   
   Why `0.75d`:
   - It is DEFAULT_LOAD_FACTOR cast to a double.
   - The load factor is likewise cast to double inside HashMap.calculateHashMapCapacity.
   - For very big maps, doing the arithmetic in float can (rarely) lose precision and cause an extra resize; the sketch after this list makes that concrete.
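
   A minimal, hypothetical sketch (not part of this PR) that scans for a size where the float-based formula produces a table one power of two smaller than the double-based one; the tableSizeFor helper below just mirrors HashMap's power-of-two rounding of the requested capacity:

   ```java
   public class FloatVsDoubleCapacity {

       // Round the requested capacity up to the next power of two, as HashMap does
       // internally (the MAXIMUM_CAPACITY clamp is omitted for brevity).
       static int tableSizeFor(final int cap) {
           final int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
           return (n < 0) ? 1 : n + 1;
       }

       public static void main(final String[] args) {
           for (int size = 1; size <= 100_000_000; size++) {
               final int capFloat = tableSizeFor((int) Math.ceil(size / 0.75f));
               final int capDouble = tableSizeFor((int) Math.ceil(size / 0.75d));
               if (capFloat < capDouble) {
                   // The float-based table is one power of two smaller, so its
                   // threshold (capacity * 0.75) is below 'size': one extra resize.
                   System.out.printf("size=%d floatCapacity=%d doubleCapacity=%d%n",
                           size, capFloat, capDouble);
                   return;
               }
           }
           System.out.println("no difference found in the scanned range");
       }
   }
   ```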
   
   Without the precomputation we get an extra resize for roughly 50% of map sizes. For example, consider constructing a HashMap with initialCapacity equal to the map size, for sizes in [4, 16] (the sketch after this list reproduces these numbers):
   - For size 4 a table of capacity 4 is allocated, and the HashMap is resized when the 4th element is inserted.
   - For sizes 5 and 6 the capacity is rounded up to 8 and there are no resizes.
   - For sizes 7 and 8 the capacity is rounded up to 8 and a resize happens when the 7th element is inserted.
   - For sizes 9, 10, 11 and 12 the capacity is rounded up to 16 and everything is fine.
   - For sizes 13, 14, 15 and 16 the capacity is rounded up to 16 and a resize happens when the 13th element is inserted.
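
   A small sketch that reproduces the table above (assumption: tableSizeFor mirrors HashMap's power-of-two rounding and the threshold is capacity * 0.75, the default load factor):

   ```java
   public class ResizeTable {

       // Round the requested capacity up to the next power of two, as HashMap does.
       static int tableSizeFor(final int cap) {
           final int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
           return (n < 0) ? 1 : n + 1;
       }

       public static void main(final String[] args) {
           for (int size = 4; size <= 16; size++) {
               final int capacity = tableSizeFor(size);         // new HashMap<>(size)
               final int threshold = (int) (capacity * 0.75f);  // resize once size exceeds this
               final boolean extraResize = size > threshold;
               System.out.printf("size=%2d capacity=%2d threshold=%2d extraResize=%b%n",
                       size, capacity, threshold, extraResize);
           }
       }
   }
   ```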
   
   To sum up:
   For each power-of-two capacity, the sizes that round up to it lie between 50% and 100% of that capacity, because 50% of it is the previous power of two. Sizes between 50% and 75% (DEFAULT_LOAD_FACTOR) of the capacity fit without a resize, but sizes between 75% and 100% trigger an extra resize. So roughly 50% of sizes pay for an extra resize; the sketch below counts this per capacity.
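
   An illustrative sketch (not part of the patch) that counts, for each power-of-two capacity, how many of the sizes rounding up to it exceed the 75% threshold; every segment comes out at 50%:

   ```java
   public class ExtraResizeFraction {

       public static void main(final String[] args) {
           // For each power-of-two capacity C, the sizes in (C/2, C] round up to C.
           // Count how many of them exceed the threshold 0.75 * C and would resize.
           for (int capacity = 8; capacity <= 1 << 20; capacity <<= 1) {
               final int total = capacity / 2;
               int resizing = 0;
               for (int size = capacity / 2 + 1; size <= capacity; size++) {
                   if (size > capacity * 3 / 4) {
                       resizing++;
                   }
               }
               System.out.printf("capacity=%8d resizing=%7d of %7d (%.0f%%)%n",
                       capacity, resizing, total, 100.0 * resizing / total);
           }
       }
   }
   ```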
   

