[ https://issues.apache.org/jira/browse/HDFS-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
John Zhuge updated HDFS-9772:
-----------------------------
    Component/s: test

> TestBlockReplacement#testThrottler doesn't work as expected
> -----------------------------------------------------------
>
>                 Key: HDFS-9772
>                 URL: https://issues.apache.org/jira/browse/HDFS-9772
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>            Priority: Minor
>              Labels: test
>             Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
>         Attachments: HDFS.001.patch
>
>
> In {{TestBlockReplacement#testThrottler}}, the resulting bandwidth is calculated from the wrong variable: it uses the local variable {{totalBytes}} rather than the final constant {{TOTAL_BYTES}}, whose value is assigned to {{bytesToSend}}. {{totalBytes}} is never updated and serves no purpose here, so {{totalBytes*1000/(end-start)}} is always 0 and the comparison is always true. The method code is below:
> {code}
> @Test
> public void testThrottler() throws IOException {
>   Configuration conf = new HdfsConfiguration();
>   FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
>   long bandwidthPerSec = 1024*1024L;
>   final long TOTAL_BYTES = 6*bandwidthPerSec;
>   long bytesToSend = TOTAL_BYTES;
>   long start = Time.monotonicNow();
>   DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
>   long totalBytes = 0L;
>   long bytesSent = 1024*512L; // 0.5MB
>   throttler.throttle(bytesSent);
>   bytesToSend -= bytesSent;
>   bytesSent = 1024*768L; // 0.75MB
>   throttler.throttle(bytesSent);
>   bytesToSend -= bytesSent;
>   try {
>     Thread.sleep(1000);
>   } catch (InterruptedException ignored) {}
>   throttler.throttle(bytesToSend);
>   long end = Time.monotonicNow();
>   assertTrue(totalBytes*1000/(end-start) <= bandwidthPerSec);
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
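The likely shape of the fix (a sketch only; the actual change is in the attached HDFS.001.patch, which is not shown here) is to compute the observed rate from {{TOTAL_BYTES}}, the bytes actually pushed through the throttler, instead of the never-updated {{totalBytes}}. The self-contained illustration below uses a hypothetical {{SimpleThrottler}} standing in for Hadoop's {{DataTransferThrottler}}, so it runs without the HDFS test harness:

```java
public class Main {
    // Hypothetical stand-in for org.apache.hadoop.hdfs.util.DataTransferThrottler,
    // for illustration only: it sleeps just long enough to keep the average
    // rate at or below bytesPerSec, measured against a shared start time.
    static class SimpleThrottler {
        private final long bytesPerSec;
        private final long startMillis;
        private long totalSent = 0;

        SimpleThrottler(long bytesPerSec, long startMillis) {
            this.bytesPerSec = bytesPerSec;
            this.startMillis = startMillis;
        }

        void throttle(long numBytes) {
            totalSent += numBytes;
            // Earliest time (since start) at which totalSent bytes are allowed.
            long expectedMillis = totalSent * 1000 / bytesPerSec;
            long elapsedMillis = System.currentTimeMillis() - startMillis;
            long sleepMillis = expectedMillis - elapsedMillis;
            if (sleepMillis > 0) {
                try {
                    Thread.sleep(sleepMillis);
                } catch (InterruptedException ignored) { }
            }
        }
    }

    public static void main(String[] args) {
        long bandwidthPerSec = 1024 * 1024L;          // 1 MB/s
        final long TOTAL_BYTES = 6 * bandwidthPerSec; // 6 MB in total
        long bytesToSend = TOTAL_BYTES;

        long start = System.currentTimeMillis();
        SimpleThrottler throttler = new SimpleThrottler(bandwidthPerSec, start);

        long bytesSent = 1024 * 512L;   // 0.5 MB
        throttler.throttle(bytesSent);
        bytesToSend -= bytesSent;

        bytesSent = 1024 * 768L;        // 0.75 MB
        throttler.throttle(bytesSent);
        bytesToSend -= bytesSent;

        try {
            Thread.sleep(1000);         // idle period, as in the original test
        } catch (InterruptedException ignored) { }

        throttler.throttle(bytesToSend); // remaining 4.75 MB
        long end = System.currentTimeMillis();

        // The fix: derive the observed rate from TOTAL_BYTES (everything sent
        // through the throttler), not from the unused local totalBytes, which
        // is always 0 and would make the check trivially true.
        long observedBytesPerSec = TOTAL_BYTES * 1000 / (end - start);
        System.out.println(observedBytesPerSec <= bandwidthPerSec);
    }
}
```

Because 6 MB at 1 MB/s must take at least six seconds, the corrected check now actually exercises the throttler: if throttling were broken, the transfer would finish early and the printed comparison would be false.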