[ https://issues.apache.org/jira/browse/HADOOP-17905?focusedWorklogId=649600&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-649600 ]

ASF GitHub Bot logged work on HADOOP-17905:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Sep/21 17:40
            Start Date: 11/Sep/21 17:40
    Worklog Time Spent: 10m 
      Work Description: goiri commented on a change in pull request #3423:
URL: https://github.com/apache/hadoop/pull/3423#discussion_r706639391



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
##########
@@ -73,6 +73,10 @@ protected CharsetDecoder initialValue() {
     }
   };
 
+  // max size of the byte array, seems to be a safe choice for multiple VMs
+  // (see ArrayList.MAX_ARRAY_SIZE)

Review comment:
       Where is this ArrayList.MAX_ARRAY_SIZE?
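
       For context: in OpenJDK (e.g. JDK 8), java.util.ArrayList keeps this
   limit as a private constant, so it does not appear in the public API
   (newer JDKs moved it to an internal ArraysSupport class). A minimal
   sketch of the relevant OpenJDK declaration:

   // from OpenJDK's java.util.ArrayList (private, not public API);
   // some VMs reserve a few header words in an array, so requesting
   // more than this may fail with OutOfMemoryError
   private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;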

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
##########
@@ -73,6 +73,10 @@ protected CharsetDecoder initialValue() {
     }
   };
 
+  // max size of the byte array, seems to be a safe choice for multiple VMs

Review comment:
       Multiple VMs?
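
       For context, the "multiple VMs" in the comment refers to JVM
   implementations: some VMs reserve a few header words in every array, so
   an allocation close to Integer.MAX_VALUE elements can fail even when
   enough heap is available. A hedged illustration (exact behavior varies
   by JVM and heap settings):

   public class ArrayLimitDemo {
     public static void main(String[] args) {
       try {
         byte[] b = new byte[Integer.MAX_VALUE];
         System.out.println(b.length);
       } catch (OutOfMemoryError e) {
         // on HotSpot this is typically "Requested array size exceeds
         // VM limit" when the heap itself is large enough, because the
         // VM reserves a few header words per array
         System.out.println(e.getMessage());
       }
     }
   }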

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
##########
@@ -301,9 +305,18 @@ public void clear() {
    */
   private boolean ensureCapacity(final int capacity) {
     if (bytes.length < capacity) {
+      // use long so the growth computation can exceed Integer.MAX_VALUE
+      // without overflowing
+      long tmpLength = bytes.length;
+      long tmpCapacity = capacity;
+
       // Try to expand the backing array by the factor of 1.5x
-      // (by taking the current size + diving it by half)
-      int targetSize = Math.max(capacity, bytes.length + (bytes.length >> 1));
+      // (by taking the current size plus half of it).
+      //
+      // If the calculated value is beyond the size
+      // limit, we cap it to ARRAY_MAX_SIZE
+      int targetSize = (int)Math.min(ARRAY_MAX_SIZE,

Review comment:
       Why do we need the cast to int by the way?
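
       For the record: tmpLength and tmpCapacity are declared long, so
   Math.min(long, long) is the overload selected here, and its long result
   cannot be assigned to an int without an explicit narrowing cast. A
   minimal illustration:

   public class CastDemo {
     public static void main(String[] args) {
       long tmpLength = 1_000_000L;
       // Math.min(long, long) returns long, so assigning to int needs an
       // explicit cast; the cast is lossless here because the result is
       // capped at an int-ranged constant
       //   int bad = Math.min(Integer.MAX_VALUE - 8, tmpLength); // does not compile
       int ok = (int) Math.min(Integer.MAX_VALUE - 8, tmpLength);
       System.out.println(ok);
     }
   }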

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
##########
@@ -301,9 +305,18 @@ public void clear() {
    */
   private boolean ensureCapacity(final int capacity) {
     if (bytes.length < capacity) {
+      // use long so the growth computation can exceed Integer.MAX_VALUE
+      // without overflowing
+      long tmpLength = bytes.length;
+      long tmpCapacity = capacity;
+
       // Try to expand the backing array by the factor of 1.5x
-      // (by taking the current size + diving it by half)
-      int targetSize = Math.max(capacity, bytes.length + (bytes.length >> 1));
+      // (by taking the current size plus half of it).
+      //
+      // If the calculated value is beyond the size
+      // limit, we cap it to ARRAY_MAX_SIZE
+      int targetSize = (int)Math.min(ARRAY_MAX_SIZE,

Review comment:
       I would separate it into two lines. (Note that Math.max(long, long)
   returns long here, since tmpCapacity and tmpLength are long, so the
   intermediate result has to stay in a long until the final cast.)
   long grownSize = Math.max(tmpCapacity, tmpLength + (tmpLength >> 1));
   int targetSize = (int) Math.min(ARRAY_MAX_SIZE, grownSize);
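
       Putting the suggestion in context, a sketch of how the whole method
   might read with the split applied (the class and field names here are
   illustrative, following the diff above; this is not the merged code):

   import java.util.Arrays;

   public class TextSketch {
     // max size of the byte array; Integer.MAX_VALUE - 8 is the limit
     // several JDK collections use (see OpenJDK's ArrayList.MAX_ARRAY_SIZE)
     private static final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;

     private byte[] bytes = new byte[0];

     private boolean ensureCapacity(final int capacity) {
       if (bytes.length < capacity) {
         // compute in long so the 1.5x growth cannot overflow int
         long tmpLength = bytes.length;
         long tmpCapacity = capacity;

         // grow by 1.5x (current size plus half of it), then cap the
         // result at ARRAY_MAX_SIZE
         long grownSize = Math.max(tmpCapacity, tmpLength + (tmpLength >> 1));
         int targetSize = (int) Math.min(ARRAY_MAX_SIZE, grownSize);

         bytes = Arrays.copyOf(bytes, targetSize);
         return true;
       }
       return false;
     }

     public static void main(String[] args) {
       TextSketch t = new TextSketch();
       t.ensureCapacity(16);
       System.out.println(t.bytes.length);  // prints 16
     }
   }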






Issue Time Tracking
-------------------

    Worklog Id:     (was: 649600)
    Time Spent: 40m  (was: 0.5h)

> Modify Text.ensureCapacity() to efficiently max out the backing array size
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-17905
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17905
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a continuation of HADOOP-17901.
> Right now we use a factor of 1.5x to grow the byte array when it is full.
> However, once the size passes a certain point, the array grows only to the
> exact requested capacity (current size + length). This can cause performance
> issues if the textual data we intend to store is beyond this point.
> Instead, let's grow the array straight to the maximum. Based on several
> sources, a safe choice seems to be Integer.MAX_VALUE - 8 (see ArrayList,
> AbstractCollection, Hashtable, etc.).
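
A small sketch of the intended behavior near the limit (hypothetical sizes;
assumes the cap is Integer.MAX_VALUE - 8 as proposed above):

    public class GrowthDemo {
      public static void main(String[] args) {
        final int ARRAY_MAX_SIZE = Integer.MAX_VALUE - 8;
        long current = 1_600_000_000L;   // hypothetical current array length
        long required = 1_700_000_000L;  // hypothetical requested capacity

        // 1.5x growth would be 2.4 billion, beyond int range, so compute in long
        long grown = Math.max(required, current + (current >> 1));

        // cap at the safe maximum instead of growing only to the exact request
        int target = (int) Math.min(ARRAY_MAX_SIZE, grown);
        System.out.println(target);  // prints 2147483639 (Integer.MAX_VALUE - 8)
      }
    }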


