[jira] [Commented] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15340851#comment-15340851
 ] 

Colin Patrick McCabe commented on HDFS-10448:
-

Hi [~linyiqun],

Sorry, I misread the patch the first time around.  You are indeed changing 
computeNeeded to take the replication factor into account, which seems like a 
better way to go.
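
To make that concrete, here is a rough sketch of the direction: fold the 
replication factor into computeNeeded itself, so checkLimit neither multiplies 
nor divides by it. The names and the stand-in directory scan below are 
hypothetical; this is not the actual HDFS-10448.001.patch.

{code}
// Simplified, hypothetical sketch; not the real CacheManager code.
public class CacheLimitSketch {

  /** Stand-in for the real per-replica size scan of the path. */
  static long rawBytesForPath(String path) {
    return 128L * 1024 * 1024; // assume one 128 MB file for illustration
  }

  /** computeNeeded now takes replication into account. */
  static long computeNeededBytes(String path, short replication) {
    return rawBytesForPath(path) * replication;
  }

  static void checkLimit(long poolBytesNeeded, long poolLimit,
                         String path, short replication) {
    long bytesNeeded = computeNeededBytes(path, replication);
    if (poolBytesNeeded + bytesNeeded > poolLimit) {
      // one consistent value feeds both the comparison and the message
      throw new IllegalArgumentException("Caching path " + path + " of size "
          + bytesNeeded + " bytes at replication " + replication
          + " would exceed the pool's remaining capacity of "
          + (poolLimit - poolBytesNeeded) + " bytes.");
    }
  }

  public static void main(String[] args) {
    // 128 MB at replication 3 needs 384 MB of cache, so a 256 MB pool
    // rejects it even though a single replica would have fit.
    try {
      checkLimit(0L, 256L * 1024 * 1024, "/data/file", (short) 3);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}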

+1

> CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with 
> replication factors other than 1
> --
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: caching
> Affects Versions: 2.7.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs 
> three steps.
> First, it computes the bytes needed for the given path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the param {{replication}} is never used inside {{computeNeeded}}, so the 
> bytesNeeded it returns covers only a single replica.
> {code}
> return new CacheDirectiveStats.Builder()
>     .setBytesNeeded(requestedBytes)
>     .setFilesCached(requestedFiles)
>     .build();
> {code}
> Second, the caller must therefore multiply by the replication factor when 
> comparing against the pool limit, because {{computeNeeded}} ignored it:
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, if the size exceeds the limit, an exception is thrown. Its message 
> divides by replication, yet {{stats.getBytesNeeded()}} is already the 
> single-replica value, so the printed size is smaller than it should be 
> (illustrated below):
> {code}
>   throw new InvalidRequestException("Caching path " + path + " of size "
>   + stats.getBytesNeeded() / replication + " bytes at replication "
>   + replication + " would exceed pool " + pool.getPoolName()
>   + "'s remaining capacity of "
>   + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
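
To make the third step concrete, a worked example with made-up numbers (not 
taken from the issue): with computeNeeded returning the single-replica value, 
the limit check multiplies while the message divides, so the two disagree.

{code}
// Hypothetical numbers illustrating the mismatch described above.
public class ReplicationMismatchExample {
  public static void main(String[] args) {
    long perReplicaBytes = 90L * 1024 * 1024; // what computeNeeded returns
    short replication = 3;

    // The limit check multiplies, so it compares the true total: 270 MB.
    long compared = perReplicaBytes * replication;

    // The message divides, printing 30 MB where the per-replica size
    // is 90 MB and the total cached size is 270 MB.
    long printed = perReplicaBytes / replication;

    System.out.println("compared against limit: " + compared + " bytes");
    System.out.println("printed in message:     " + printed + " bytes");
  }
}
{code}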





[jira] [Commented] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15340862#comment-15340862
 ] 

Colin Patrick McCabe commented on HDFS-10448:
-

Committed to 2.8.  Thanks, [~linyiqun]!  Sorry for the delays in reviews.

> CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with 
> replication factors other than 1
> --
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: caching
> Affects Versions: 2.7.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-10448.001.patch





[jira] [Commented] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15340920#comment-15340920
 ] 

Hudson commented on HDFS-10448:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9992 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9992/])
HDFS-10448. CacheManager#addInternal tracks bytesNeeded incorrectly when 
dealing with replication factors other than 1 (cmccabe: rev 
46f1602e896273b308fbd5df6c75f6c142828227)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java







[jira] [Commented] (HDFS-10448) CacheManager#addInternal tracks bytesNeeded incorrectly when dealing with replication factors other than 1

2016-06-20 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15341213#comment-15341213
 ] 

Yiqun Lin commented on HDFS-10448:
--

Thanks [~cmccabe] for the re-review and the commit!



