[ 
https://issues.apache.org/jira/browse/HADOOP-17905?focusedWorklogId=649696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-649696
 ]

ASF GitHub Bot logged work on HADOOP-17905:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 12/Sep/21 12:18
            Start Date: 12/Sep/21 12:18
    Worklog Time Spent: 10m 
      Work Description: pbacsko commented on a change in pull request #3423:
URL: https://github.com/apache/hadoop/pull/3423#discussion_r706827825



##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
##########
@@ -73,6 +73,10 @@ protected CharsetDecoder initialValue() {
     }
   };
 
+  // max size of the byte array, seems to be a safe choice for multiple VMs

Review comment:
       Maybe I should have written "different kinds of VMs" (OpenJDK, HotSpot, 
etc.). It's more of a practical value that is likely to work across different 
versions. Some details: 
https://programming.guide/java/array-maximum-length.html. If this comment is 
confusing, I can remove it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 649696)
    Time Spent: 1.5h  (was: 1h 20m)

> Modify Text.ensureCapacity() to efficiently max out the backing array size
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-17905
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17905
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is a continuation of HADOOP-17901.
> Right now we use a factor of 1.5x to grow the byte array when it's full. 
> However, once the size reaches a certain point, the increment is only 
> (current size + length). This can cause performance issues if the textual 
> data we intend to store is beyond this point.
> Instead, let's max out the array at the maximum size. Based on different 
> sources, a safe choice seems to be Integer.MAX_VALUE - 8 (see ArrayList, 
> AbstractCollection, Hashtable, etc.).
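
The growth strategy described above could be sketched as follows. This is a
hypothetical illustration, not the actual Hadoop patch: the class name
GrowthSketch, the method newCapacity, and the constant MAX_ARRAY_SIZE are
invented here for the example, mirroring the pattern used by ArrayList.

```java
public class GrowthSketch {
  // Largest array size that is safe across common JVM implementations;
  // the same value (Integer.MAX_VALUE - 8) appears in ArrayList.
  private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  // Compute the new backing-array capacity: grow by 1.5x, but once the
  // grown size would exceed the safe limit, max out at MAX_ARRAY_SIZE
  // instead of falling back to small (current + length) increments.
  static int newCapacity(int current, int required) {
    if (required <= current) {
      return current;            // already large enough
    }
    // Use long arithmetic so 1.5x growth cannot overflow int.
    long grown = Math.max((long) current + (current >> 1), required);
    return (grown > MAX_ARRAY_SIZE) ? MAX_ARRAY_SIZE : (int) grown;
  }

  public static void main(String[] args) {
    System.out.println(newCapacity(100, 120));                     // 1.5x growth: 150
    System.out.println(newCapacity(2_000_000_000, 2_000_000_001)); // maxes out
  }
}
```

With this scheme a near-limit array jumps straight to the maximum size in one
allocation, rather than being re-grown repeatedly by small increments.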



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
