[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths
[ https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549400#comment-13549400 ] Todd Lipcon commented on HDFS-4356: --- - The patch still adds {{available()}} to {{BlockReader}}, but it's not actually used, as best I can tell. {code} + String msg = "access control error while " + + "attempting to set up short-circuit access to " + + file + resp.getMessage(); + DFSClient.LOG.error(msg); + throw new InvalidBlockTokenException(msg); {code} No need to log this here, especially at ERROR level. The exception will get logged in the catch clause at the callsite already. - A couple of unused imports (e.g. LoadingCache) {code} + DFSClient.LOG.warn("error while attempting to set up short-circuit " + + "access to " + file + ": " + resp.getMessage()); {code} Can you add an {{isDebugEnabled}} guard to this log? It's on the hot path for random read. - Can you add a warning log if the short-circuit flag is enabled, but there is no path configured? This will be a common case for people upgrading from HDFS-2246 support to HDFS-347. {code} + public static final String DFS_DATANODE_DOMAIN_SOCKET_PATH = "dfs.datanode.domain.socket.path"; {code} Rename the constant to end in {{_KEY}} like the others. Also, there's an extra space in this line. {code} +this.fileInputStreamCache = new FileInputStreamCache(5, 3); {code} Add config keys for these constants. I would also say the default number of streams should be higher -- e.g. a seeky workload in a 1GB file with 64MB blocks would need 20 entries in order to avoid churning the cache. {code} - // Don't use the cache on the last attempt - it's possible that there - // are arbitrarily many unusable sockets in the cache, but we don't - // want to fail the read. {code} Why'd you remove this comment? 
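As a rough illustration of the {{isDebugEnabled}} guard being requested for the hot path, here is a minimal sketch using java.util.logging as a stand-in for the Commons Logging API that DFSClient actually uses; the class and method names below are hypothetical, not the real patch code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hedged sketch: shows the guard pattern only. The real code would use
// Commons Logging (LOG.isDebugEnabled() / LOG.debug()); names here are made up.
public class GuardedLogSketch {
    private static final Logger LOG = Logger.getLogger(GuardedLogSketch.class.getName());

    // Pure helper so the message format is testable on its own.
    static String buildMessage(String file, String respMessage) {
        return "error while attempting to set up short-circuit access to "
                + file + ": " + respMessage;
    }

    static void logSetupFailure(String file, String respMessage) {
        // Guarding with isLoggable means buildMessage (and its string
        // concatenation) only runs when debug-level logging is enabled,
        // which is what matters on a per-read hot path.
        if (LOG.isLoggable(Level.FINE)) {  // analogous to LOG.isDebugEnabled()
            LOG.fine(buildMessage(file, respMessage));
        }
    }

    public static void main(String[] args) {
        System.out.println(buildMessage("/user/todd/data", "access denied"));
    }
}
```

The point of the guard is that at default log levels the branch costs one cheap check and allocates nothing.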
{code} +boolean allowShortCircuitcLocalReads = {code} typo: circuitc {code} +if (conf.domainSocketPath == null) return null; +// UNIX domain sockets can only be used to talk to local peers +if (!DFSClient.isLocalAddress(addr)) return null; +// If the DomainSocket code is not loaded, we can't create +// DomainSocket objects. +if (DomainSocket.getLoadingFailureReason() != null) return null; ... ... + // If we don't want to pass data over domain sockets, and we don't want + // to pass file descriptors over them either, we have no use for domain + // sockets. + return null; {code} Add guarded DEBUG logs in these cases. At DFSClient configuration time, if domain sockets are supposed to be enabled, and DomainSocket.getLoadingFailureReason is non-null, you should also WARN at that point (but only once, not once per read). This will help users be sure they've got their setup right. - FileInputStreamCache is missing a license. Also please add javadoc. - Please add some javadoc on the members in this class as well. {code} + private final static ScheduledThreadPoolExecutor executor + = new ScheduledThreadPoolExecutor(1); {code} Indentation. Also, pass a ThreadFactory so that it makes daemon threads with reasonable names. {code} + map.remove(entry.getKey(), entry.getValue()); {code} This will throw a ConcurrentModificationException. You need to remove from the iterator. This bug shows up in a few places. 
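The iterator-removal fix for the ConcurrentModificationException noted above can be sketched as follows; the map contents and expiry condition are invented for illustration, not taken from FileInputStreamCache:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hedged sketch of removing entries while iterating. Calling
// map.remove(entry.getKey()) inside the loop on a plain HashMap throws
// ConcurrentModificationException on the next it.next(); Iterator.remove()
// is the safe form.
public class IteratorRemoveSketch {
    static int expireAll(Map<String, Long> map, long cutoff) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> entry = it.next();
            if (entry.getValue() < cutoff) {
                it.remove();  // safe; map.remove(...) here would fail fast
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<String, Long> m = new HashMap<>();
        m.put("a", 1L);
        m.put("b", 5L);
        System.out.println(expireAll(m, 3L));  // prints 1
    }
}
```

(A {{ConcurrentHashMap}}, whose iterators are weakly consistent, would also tolerate {{map.remove(key, value)}} mid-iteration, but with a plain map the iterator is the only safe path.)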
{code} + return (datanodeID.equals(otherKey.datanodeID) && (block.equals(otherKey.block))); {code} The block equals function doesn't compare generation stamps, but I think you should do so here by adding block.getGenerationStamp() == otherKey.block.getGenerationStamp() Stupid performance things on the FileInputStreamCache.Key: - change equals to compare blocks first, then datanodes (datanodes usually won't differ, since there is only one local DN) - change hashCode to only use the block's hashCode, since datanodes won't differ {code} +for (FileInputStream f : fis) { + IOUtils.cleanup(LOG, f); +} {code} You can just use {{IOUtils.cleanup(LOG, fis)}} here, since it takes an array, no? {code} + static final int CURRENT_BLOCK_FORMAT_VERSION = 1; {code} Find a better spot for this? It seems like not quite the right place. {code} +String domainSocketPath = +conf.get(DFSConfigKeys.DFS_DATANODE_DOMAIN_SOCKET_PATH); +if (domainSocketPath == null) return null; {code} Should WARN if the flag says enabled, but there is no path configured. > BlockReaderLocal should use passed file descriptors rather than paths > - > > Key: HDFS-4356 > URL: https://issues.apache.org/jira/browse/HDFS-4356 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client, performance
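A minimal sketch of the suggested Key changes -- compare the block (including generation stamp) before the datanode, and hash only the block -- using hypothetical stand-in fields rather than the real ExtendedBlock/DatanodeID types:

```java
import java.util.Objects;

// Hedged sketch of a cache key per the review suggestions. Field names are
// illustrative stand-ins, not the actual FileInputStreamCache.Key members.
public class CacheKeySketch {
    static final class Key {
        final long blockId;
        final long genStamp;
        final String datanodeUuid;

        Key(long blockId, long genStamp, String datanodeUuid) {
            this.blockId = blockId;
            this.genStamp = genStamp;
            this.datanodeUuid = datanodeUuid;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key other = (Key) o;
            // Block fields (including generation stamp) first: with a single
            // local DN the datanode almost never differs, so checking it
            // last lets mismatches short-circuit early.
            return blockId == other.blockId
                    && genStamp == other.genStamp
                    && datanodeUuid.equals(other.datanodeUuid);
        }

        @Override
        public int hashCode() {
            // Only the block contributes; the datanode is nearly constant,
            // so mixing it in adds cost without spreading the buckets.
            return Objects.hash(blockId, genStamp);
        }
    }

    public static void main(String[] args) {
        Key a = new Key(1, 1001, "dn1");
        Key b = new Key(1, 1002, "dn1");  // same block id, newer gen stamp
        System.out.println(a.equals(b));  // prints false
    }
}
```

Note the equals/hashCode contract still holds: keys differing only in datanode hash to the same bucket but compare unequal, which is correct, just a (rare) collision.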
[jira] [Commented] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown
[ https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549394#comment-13549394 ] Aaron T. Myers commented on HDFS-4274: -- Hey y'all, it looks like this commit has broken TestLargeBlock. I can't tell for sure since the results from this JIRA's test-patch run have been deleted by Jenkins, but I suspect this wasn't noticed because test-patch doesn't report on tests that time out rather than explicitly fail. This test failure is tracked by HDFS-4328. Chris - would you mind taking a look at that JIRA to see if you can tell what the trouble is? > BlockPoolSliceScanner does not close verification log during shutdown > - > > Key: HDFS-4274 > URL: https://issues.apache.org/jira/browse/HDFS-4274 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.0.0, trunk-win >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0 > > Attachments: HDFS-4274.1.patch, HDFS-4274.2.patch > > > {{BlockPoolSliceScanner}} holds open a handle to a verification log. This > file is not getting closed during process shutdown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out
[ https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549393#comment-13549393 ] Aaron T. Myers commented on HDFS-4328: -- git-bisect indicates that this test timeout was introduced by HDFS-4274, and indeed I confirmed that reverting this commit allows the test to pass. > TestLargeBlock#testLargeBlockSize is timing out > --- > > Key: HDFS-4328 > URL: https://issues.apache.org/jira/browse/HDFS-4328 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Jason Lowe > > For some time now TestLargeBlock#testLargeBlockSize has been timing out on > trunk. It is getting hung up during cluster shutdown, and after 15 minutes > surefire kills it and causes the build to fail since it exited uncleanly. > In addition to fixing the hang, we should consider adding a timeout parameter > to the @Test decorator for this test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4333) Using right default value for creating files in HDFS
[ https://issues.apache.org/jira/browse/HDFS-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549368#comment-13549368 ] Binglin Chang commented on HDFS-4333: - Oops, I just realized that some hdfs changes were already accidentally included in HADOOP-9155 and committed. So I think this jira can be closed. > Using right default value for creating files in HDFS > > > Key: HDFS-4333 > URL: https://issues.apache.org/jira/browse/HDFS-4333 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.0.2-alpha >Reporter: Binglin Chang >Assignee: Binglin Chang >Priority: Minor > > The default permission to create a file should be 0666 rather than 0777. > HADOOP-9155 added a default permission for files and changed > localfilesystem.create to use this default value; this jira makes the similar > change for hdfs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4098) Support append to original files which are snapshotted
[ https://issues.apache.org/jira/browse/HDFS-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-4098: - Attachment: h4098_20130109b.patch h4098_20130109b.patch: enables all snapshot append tests. > Support append to original files which are snapshotted > -- > > Key: HDFS-4098 > URL: https://issues.apache.org/jira/browse/HDFS-4098 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Tsz Wo (Nicholas), SZE > Attachments: h4098_20130107.patch, h4098_20130109b.patch > > > When a regular file is reopened for append, the type is changed from > INodeFile to INodeFileUnderConstruction. The type of snapshotted files (i.e. > original files) is INodeFileWithLink. We have to support similar "under > construction" INodeFileWithLink. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4014) Fix warnings found by findbugs2
[ https://issues.apache.org/jira/browse/HDFS-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4014: -- Attachment: findbugs.out.26.html findbugs.out.25.html findbugs.out.24.html Attaching reports for the remaining hdfs warnings (bkjournal, httpfs, and some stuff that crept in since the last batch). These can probably be handled with one more patch when we're close to updating the findbugs version project-wide. Yarn and common still need to be done. > Fix warnings found by findbugs2 > > > Key: HDFS-4014 > URL: https://issues.apache.org/jira/browse/HDFS-4014 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: findbugs.out.24.html, findbugs.out.25.html, > findbugs.out.26.html > > > The HDFS side of HADOOP-8594. Umbrella jira for fixing the warnings found by > findbugs 2. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default
[ https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549305#comment-13549305 ] Hudson commented on HDFS-4032: -- Integrated in Hadoop-trunk-Commit #3211 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3211/]) HDFS-4032. Specify the charset explicitly rather than rely on the default. Contributed by Eli Collins (Revision 1431179) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431179 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RollingLogsImpl.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ClusterJspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RenewDelegationTokenServlet.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/StatisticsEditsVisitor.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TextWriterImageVisitor.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathComponents.java > Specify the charset explicitly rather than rely on the default > -- > > Key: HDFS-4032 > URL: https://issues.apache.org/jira/browse/HDFS-4032 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: hdfs-4032.txt, hdfs-4032.txt > > > Findbugs 2 warns about relying on the default Java charset instead of > specifying it explicitly. Given that we're porting Hadoop to different > platforms it's better to be explicit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4032) Specify the charset explicitly rather than rely on the default
[ https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4032: -- Resolution: Fixed Fix Version/s: 2.0.3-alpha Target Version/s: (was: 2.0.3-alpha) Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I've committed this and merged to branch-2. > Specify the charset explicitly rather than rely on the default > -- > > Key: HDFS-4032 > URL: https://issues.apache.org/jira/browse/HDFS-4032 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4032.txt, hdfs-4032.txt > > > Findbugs 2 warns about relying on the default Java charset instead of > specifying it explicitly. Given that we're porting Hadoop to different > platforms it's better to be explicit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths
[ https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-4356: --- Status: Open (was: Patch Available) Since this is going to a branch, trunk Jenkins cannot run on it. > BlockReaderLocal should use passed file descriptors rather than paths > - > > Key: HDFS-4356 > URL: https://issues.apache.org/jira/browse/HDFS-4356 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client, performance >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: 04b-cumulative.patch, _04b.patch, 04-cumulative.patch, > 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch > > > {{BlockReaderLocal}} should use file descriptors passed over UNIX domain > sockets rather than paths. We also need some configuration options for these > UNIX domain sockets. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths
[ https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549244#comment-13549244 ] Hadoop QA commented on HDFS-4356: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12564064/_04b.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3813//console This message is automatically generated. > BlockReaderLocal should use passed file descriptors rather than paths > - > > Key: HDFS-4356 > URL: https://issues.apache.org/jira/browse/HDFS-4356 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client, performance >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: 04b-cumulative.patch, _04b.patch, 04-cumulative.patch, > 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch > > > {{BlockReaderLocal}} should use file descriptors passed over UNIX domain > sockets rather than paths. We also need some configuration options for these > UNIX domain sockets. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths
[ https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-4356: --- Attachment: _04b.patch HDFS-347 branch version > BlockReaderLocal should use passed file descriptors rather than paths > - > > Key: HDFS-4356 > URL: https://issues.apache.org/jira/browse/HDFS-4356 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client, performance >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: 04b-cumulative.patch, _04b.patch, 04-cumulative.patch, > 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch > > > {{BlockReaderLocal}} should use file descriptors passed over UNIX domain > sockets rather than paths. We also need some configuration options for these > UNIX domain sockets. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4380) Opening a file for read before writer writes a block causes NPE
Todd Lipcon created HDFS-4380: - Summary: Opening a file for read before writer writes a block causes NPE Key: HDFS-4380 URL: https://issues.apache.org/jira/browse/HDFS-4380 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 1.0.3 Reporter: Todd Lipcon JD Cryans found this issue: it seems that if you open a file for read immediately after it's been created by the writer, after a block has been allocated but before the block is created on the DNs, you can end up with the following NPE: java.lang.NullPointerException at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.updateBlockInfo(DFSClient.java:1885) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1858) at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1834) at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578) at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154) This seems to be because {{getBlockInfo}} returns a null block when the DN doesn't yet have the replica. The client should probably either fall back to a different replica or treat it as zero-length. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
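The "treat it as zero-length" option mentioned in the report reduces to a small defensive pattern; the helper name and types below are illustrative, not the actual DFSInputStream.updateBlockInfo code:

```java
// Hedged sketch only: a null reply stands in for "the DN hasn't created the
// replica yet". Instead of dereferencing it (the NPE in the stack trace
// above), fall back to treating the block as zero-length.
public class ZeroLengthFallbackSketch {
    static long effectiveLength(Long replicaLengthFromDn) {
        // Null-check before use; the alternative fix direction would be to
        // retry against a different replica rather than assume zero length.
        return replicaLengthFromDn == null ? 0L : replicaLengthFromDn;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLength(null));  // prints 0
        System.out.println(effectiveLength(42L));   // prints 42
    }
}
```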
[jira] [Commented] (HDFS-2745) unclear to users which command to use to access the filesystem
[ https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549225#comment-13549225 ] Andy Isaacson commented on HDFS-2745: - I don't think we should remove {{hdfs dfs}} on branch-2. I've used that by accident more than once, and somebody's law says that if a command worked in one release, someone has added it to a script that will break if it's removed. > unclear to users which command to use to access the filesystem > -- > > Key: HDFS-2745 > URL: https://issues.apache.org/jira/browse/HDFS-2745 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha >Reporter: Thomas Graves >Assignee: Andrew Wang >Priority: Critical > Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch > > > It's unclear to users which command to use to access the filesystem. We need some > background, and then we can fix accordingly. We have 3 choices: > hadoop dfs -> says it's deprecated and to use hdfs. If I run hdfs usage it > doesn't list any options like -ls in the usage, although there is an hdfs dfs > command > hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop > dfs it should at least be in the usage. > hadoop fs -> seems like the one to use; it appears generic for any filesystem. > Any input on what the recommended way to do this is? Based on that we > can fix up the other issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default
[ https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549200#comment-13549200 ] Hadoop QA commented on HDFS-4032: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12564042/hdfs-4032.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3812//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3812//console This message is automatically generated. > Specify the charset explicitly rather than rely on the default > -- > > Key: HDFS-4032 > URL: https://issues.apache.org/jira/browse/HDFS-4032 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: hdfs-4032.txt, hdfs-4032.txt > > > Findbugs 2 warns about relying on the default Java charset instead of > specifying it explicitly. Given that we're porting Hadoop to different > platforms it's better to be explicit. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HDFS-4377: -- Priority: Trivial (was: Minor) > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Trivial > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4306) PBHelper.convertLocatedBlock miss convert BlockToken
[ https://issues.apache.org/jira/browse/HDFS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549129#comment-13549129 ] Hudson commented on HDFS-4306: -- Integrated in Hadoop-trunk-Commit #3207 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3207/]) HDFS-4306. PBHelper.convertLocatedBlock miss convert BlockToken. Contributed by Binglin Chang. (Revision 1431117) Result = SUCCESS atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431117 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java > PBHelper.convertLocatedBlock miss convert BlockToken > > > Key: HDFS-4306 > URL: https://issues.apache.org/jira/browse/HDFS-4306 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Binglin Chang >Assignee: Binglin Chang > Fix For: 2.0.3-alpha > > Attachments: HDFS-4306.patch, HDFS-4306.v2.patch, HDFS-4306.v3.patch, > HDFS-4306.v4.patch, HDFS-4306.v4.patch > > > PBHelper.convertLocatedBlock(from protobuf array to primitive array) miss > convert BlockToken. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549126#comment-13549126 ] Hadoop QA commented on HDFS-4353: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12564027/_02a.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 8 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3811//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3811//console This message is automatically generated. 
> Encapsulate connections to peers in Peer and PeerServer classes > --- > > Key: HDFS-4353 > URL: https://issues.apache.org/jira/browse/HDFS-4353 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: _02a.patch, 02b-cumulative.patch, 02c.patch, 02c.patch, > 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch > > > Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} > classes. Since many Java classes may be involved with these connections, it > makes sense to create a container for them. For example, a connection to a > peer may have an input stream, output stream, readablebytechannel, encrypted > output stream, and encrypted input stream associated with it. > This makes us less dependent on the {{NetUtils}} methods which use > {{instanceof}} to manipulate socket and stream states based on the runtime > type. it also paves the way to introduce UNIX domain sockets which don't > inherit from {{java.net.Socket}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4306) PBHelper.convertLocatedBlock miss convert BlockToken
[ https://issues.apache.org/jira/browse/HDFS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-4306: - Resolution: Fixed Fix Version/s: 2.0.3-alpha Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I've just committed this to trunk and branch-2. Thanks a lot for the contribution, Binglin. > PBHelper.convertLocatedBlock miss convert BlockToken > > > Key: HDFS-4306 > URL: https://issues.apache.org/jira/browse/HDFS-4306 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Binglin Chang >Assignee: Binglin Chang > Fix For: 2.0.3-alpha > > Attachments: HDFS-4306.patch, HDFS-4306.v2.patch, HDFS-4306.v3.patch, > HDFS-4306.v4.patch, HDFS-4306.v4.patch > > > PBHelper.convertLocatedBlock(from protobuf array to primitive array) miss > convert BlockToken. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4306) PBHelper.convertLocatedBlock miss convert BlockToken
[ https://issues.apache.org/jira/browse/HDFS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549101#comment-13549101 ] Aaron T. Myers commented on HDFS-4306: -- +1, I'm going to commit this momentarily. > PBHelper.convertLocatedBlock miss convert BlockToken > > > Key: HDFS-4306 > URL: https://issues.apache.org/jira/browse/HDFS-4306 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Binglin Chang >Assignee: Binglin Chang > Attachments: HDFS-4306.patch, HDFS-4306.v2.patch, HDFS-4306.v3.patch, > HDFS-4306.v4.patch, HDFS-4306.v4.patch > > > PBHelper.convertLocatedBlock(from protobuf array to primitive array) miss > convert BlockToken. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default
[ https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549079#comment-13549079 ] Aaron T. Myers commented on HDFS-4032: -- Great find, Eli. +1 pending Jenkins. > Specify the charset explicitly rather than rely on the default > -- > > Key: HDFS-4032 > URL: https://issues.apache.org/jira/browse/HDFS-4032 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: hdfs-4032.txt, hdfs-4032.txt > > > Findbugs 2 warns about relying on the default Java charset instead of > specifying it explicitly. Given that we're porting Hadoop to different > platforms it's better to be explicit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549074#comment-13549074 ] Hadoop QA commented on HDFS-4377: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12564025/hdfs-4377.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.TestPersistBlocks {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3810//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3810//console This message is automatically generated. 
> Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549069#comment-13549069 ] Hadoop QA commented on HDFS-4377: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12564025/hdfs-4377.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3809//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3809//console This message is automatically generated. 
> Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4032) Specify the charset explicitly rather than rely on the default
[ https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4032: -- Attachment: hdfs-4032.txt Updated the patch. The test failures were due to a missing flush in NamenodeWebHdfsMethods#getListingStream. Given that the new PrintWriter, like the old PrintStream, does not auto-flush, and that they are implemented similarly (the PrintStream was creating an OutputStreamWriter under the hood, like we're doing here), the only difference I can see is a different buffering implementation tickling this. According to the docs, the StreamingOutput method created here should be flushing its stream, so I think the flush was always needed; we've just gotten away without it so far. Also updated Journal.java, which came in after the last patch as part of the QJM merge. > Specify the charset explicitly rather than rely on the default > -- > > Key: HDFS-4032 > URL: https://issues.apache.org/jira/browse/HDFS-4032 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: hdfs-4032.txt, hdfs-4032.txt > > > Findbugs 2 warns about relying on the default Java charset instead of > specifying it explicitly. Given that we're porting Hadoop to different > platforms it's better to be explicit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
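For readers following along, a minimal illustration of the idiom this patch applies and of the flush issue Eli describes. The exact call sites are in the patch; this sketch just shows the pattern, using java.nio.charset.StandardCharsets for brevity:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

public class CharsetExample {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Relying on the platform default charset is what Findbugs 2 flags:
        //   new PrintWriter(out);   // uses file.encoding, varies by host
        // Naming the charset makes behavior identical on every platform:
        PrintWriter pw = new PrintWriter(
            new OutputStreamWriter(out, StandardCharsets.UTF_8));
        pw.print("héllo");
        // Neither the old PrintStream nor this PrintWriter auto-flushes on
        // print, so an explicit flush is needed before the bytes are visible:
        pw.flush();
        System.out.println(out.size());  // 6: 'é' is two bytes in UTF-8
    }
}
```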
[jira] [Commented] (HDFS-2745) unclear to users which command to use to access the filesystem
[ https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549057#comment-13549057 ] Thomas Graves commented on HDFS-2745: - {quote} I looked at branch-1, and it doesn't have a separate hdfs script. git log doesn't have HADOOP-4868 mentioned either. Is there another branch I should be doing this for too? {quote} I would say if we do anything on branch-1 it would be: Update hadoop dfs to indicate "hadoop fs" should be used. But I don't have strong opinion on that. As you found hdfs command doesn't exist there. > unclear to users which command to use to access the filesystem > -- > > Key: HDFS-2745 > URL: https://issues.apache.org/jira/browse/HDFS-2745 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha >Reporter: Thomas Graves >Assignee: Andrew Wang >Priority: Critical > Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch > > > Its unclear to users which command to use to access the filesystem. Need some > background and then we can fix accordingly. We have 3 choices: > hadoop dfs -> says its deprecated and to use hdfs. If I run hdfs usage it > doesn't list any options like -ls in the usage, although there is an hdfs dfs > command > hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop > dfs it should atleast be in the usage. > hadoop fs -> seems like one to use it appears generic for any filesystem. > Any input on this what is the recommended way to do this? Based on that we > can fix up the other issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response
[ https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549050#comment-13549050 ] Aaron T. Myers commented on HDFS-4367: -- Hey Suresh, if I understand you correctly, you're saying that, assuming dfs.block.access.token.enable is set to false and dfs.encrypt.data.transfer is set to true: # Before this change we'd end up with a null pointer exception on the server, since you can't set a "required" field to null. # After this change we'd end up with a null pointer exception on the client, since null would now be returned but this isn't handled correctly by 2.0.2 client code. If my understanding of your point is correct, then I would counter that having "dfs.block.access.token.enable" set to false and "dfs.encrypt.data.transfer" set to true is not a legitimate configuration. Clearly no existing (2.0.2) deployment could be running with such a configuration since HDFS reads/writes would not work. Given that, all existing deployments which are using this feature must have dfs.block.access.token.enable set to true if they have dfs.encrypt.data.transfer set to true. This would mean that, even after this change, all 2.0.2 clients could still communicate with 2.0.3 servers, and vice versa. Hence, this change should not be considered incompatible. > GetDataEncryptionKeyResponseProto does not handle null response > > > Key: HDFS-4367 > URL: https://issues.apache.org/jira/browse/HDFS-4367 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas >Priority: Blocker > Attachments: HDFS-4367.patch > > > GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional > to handle null response. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4354) Create DomainSocket and DomainPeer and associated unit tests
[ https://issues.apache.org/jira/browse/HDFS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-4354: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to branch. Thanks, Colin. > Create DomainSocket and DomainPeer and associated unit tests > > > Key: HDFS-4354 > URL: https://issues.apache.org/jira/browse/HDFS-4354 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client, performance >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: 03b.patch, 03-cumulative.patch > > > Create {{DomainSocket}}, a JNI class which provides UNIX domain sockets > functionality in Java. Also create {{DomainPeer}}, {{DomainPeerServer}}. > This change also adds a unit test as well as {{TemporarySocketDirectory}}. > Finally, this change adds a few C utility methods for handling JNI > exceptions, such as {{newException}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HDFS-4363: -- Resolution: Fixed Fix Version/s: 2.0.3-alpha Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed the patch to trunk and branch-2. Thank you Nicholas for the review. > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Fix For: 2.0.3-alpha > > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch, > HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon updated HDFS-4353: -- Resolution: Fixed Status: Resolved (was: Patch Available) +1. Committed to branch. > Encapsulate connections to peers in Peer and PeerServer classes > --- > > Key: HDFS-4353 > URL: https://issues.apache.org/jira/browse/HDFS-4353 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: _02a.patch, 02b-cumulative.patch, 02c.patch, 02c.patch, > 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch > > > Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} > classes. Since many Java classes may be involved with these connections, it > makes sense to create a container for them. For example, a connection to a > peer may have an input stream, output stream, readablebytechannel, encrypted > output stream, and encrypted input stream associated with it. > This makes us less dependent on the {{NetUtils}} methods which use > {{instanceof}} to manipulate socket and stream states based on the runtime > type. it also paves the way to introduce UNIX domain sockets which don't > inherit from {{java.net.Socket}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
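A tiny sketch of the encapsulation idea described above — bundle everything tied to one connection behind a single interface so callers never need instanceof checks on java.net.Socket versus a domain socket. The names here are illustrative, not the committed HDFS-4353 API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// One connection, one object: streams (possibly encrypted, possibly over a
// domain socket) live behind this interface instead of leaking to callers.
interface Peer extends Closeable {
    InputStream getInputStream();
    OutputStream getOutputStream();
}

// Trivial in-memory Peer, standing in for a TCP- or domain-socket-backed one.
class InMemoryPeer implements Peer {
    private final InputStream in =
        new ByteArrayInputStream("hi".getBytes(StandardCharsets.US_ASCII));
    private final OutputStream out = new ByteArrayOutputStream();
    public InputStream getInputStream() { return in; }
    public OutputStream getOutputStream() { return out; }
    public void close() throws IOException { in.close(); out.close(); }
}

public class PeerSketch {
    public static void main(String[] args) throws IOException {
        // Callers only see Peer; the transport behind it is interchangeable.
        try (Peer p = new InMemoryPeer()) {
            System.out.println(p.getInputStream().read());  // 104 ('h')
        }
    }
}
```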
[jira] [Commented] (HDFS-2745) unclear to users which command to use to access the filesystem
[ https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549033#comment-13549033 ] Jason Lowe commented on HDFS-2745: -- Ha, comment race. What Tom said. ;-) > unclear to users which command to use to access the filesystem > -- > > Key: HDFS-2745 > URL: https://issues.apache.org/jira/browse/HDFS-2745 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha >Reporter: Thomas Graves >Assignee: Andrew Wang >Priority: Critical > Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch > > > Its unclear to users which command to use to access the filesystem. Need some > background and then we can fix accordingly. We have 3 choices: > hadoop dfs -> says its deprecated and to use hdfs. If I run hdfs usage it > doesn't list any options like -ls in the usage, although there is an hdfs dfs > command > hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop > dfs it should atleast be in the usage. > hadoop fs -> seems like one to use it appears generic for any filesystem. > Any input on this what is the recommended way to do this? Based on that we > can fix up the other issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2745) unclear to users which command to use to access the filesystem
[ https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549030#comment-13549030 ] Jason Lowe commented on HDFS-2745: -- Hadoop 0.23 shipped many versions with this confusing pointer to the hdfs command, and there may now be users who have coded scripts relying on the "hdfs dfs" command to keep working. As those users move from 0.23 to 2.0, they may be surprised to find "hdfs dfs" simply doesn't work after this patch and no indication as to why. Can we mark "hdfs dfs" as deprecated in 2.0 and remove it in trunk? That provides a smoother migration path for users who tried to follow the original deprecation directions in 0.23 and coded their scripts to use "hdfs dfs" as directed. > unclear to users which command to use to access the filesystem > -- > > Key: HDFS-2745 > URL: https://issues.apache.org/jira/browse/HDFS-2745 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha >Reporter: Thomas Graves >Assignee: Andrew Wang >Priority: Critical > Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch > > > Its unclear to users which command to use to access the filesystem. Need some > background and then we can fix accordingly. We have 3 choices: > hadoop dfs -> says its deprecated and to use hdfs. If I run hdfs usage it > doesn't list any options like -ls in the usage, although there is an hdfs dfs > command > hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop > dfs it should atleast be in the usage. > hadoop fs -> seems like one to use it appears generic for any filesystem. > Any input on this what is the recommended way to do this? Based on that we > can fix up the other issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-2745) unclear to users which command to use to access the filesystem
[ https://issues.apache.org/jira/browse/HDFS-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549027#comment-13549027 ] Thomas Graves commented on HDFS-2745: - {quote} I do also find it a little odd that hadoop-daemon.sh could conceivably run dfsadmin/fsck, and arguably the balancer as well. Perhaps we should remove those from hadoop-daemon.sh as well, since we're removing dfs. Thoughts? {quote} I agree, this seems odd to me. I wasn't even aware of it before this patch. I also propose that for branch-2 we don't actually remove hdfs dfs but deprecate and hide it, while trunk actually removes it. My reasoning is that we have it in branch-0.23, I know customers are using it, and I want to give them one release to move off. It's also there for anyone who has started to use branch-2. Any objections? > unclear to users which command to use to access the filesystem > -- > > Key: HDFS-2745 > URL: https://issues.apache.org/jira/browse/HDFS-2745 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 0.23.0, 1.2.0, 2.0.2-alpha >Reporter: Thomas Graves >Assignee: Andrew Wang >Priority: Critical > Attachments: hdfs-2745-1.patch, hdfs-2745-2.patch > > > Its unclear to users which command to use to access the filesystem. Need some > background and then we can fix accordingly. We have 3 choices: > hadoop dfs -> says its deprecated and to use hdfs. If I run hdfs usage it > doesn't list any options like -ls in the usage, although there is an hdfs dfs > command > hdfs dfs -> not in the usage of hdfs. If we recommend it when running hadoop > dfs it should atleast be in the usage. > hadoop fs -> seems like one to use it appears generic for any filesystem. > Any input on this what is the recommended way to do this? Based on that we > can fix up the other issues. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4379) DN block reports should include a sequence number
Daryn Sharp created HDFS-4379: - Summary: DN block reports should include a sequence number Key: HDFS-4379 URL: https://issues.apache.org/jira/browse/HDFS-4379 Project: Hadoop HDFS Issue Type: Improvement Components: datanode, namenode Affects Versions: 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Block reports should include a monotonically increasing sequence number. If the sequence starts from zero, this will aid the NN in being able to distinguish a DN restart (seqNum == 0) versus a re-registration after network interruption (seqNum != 0). The NN may also use it to identify and skip already processed block reports. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
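The restart-versus-reregistration distinction the proposal enables can be sketched in a few lines. This is a hypothetical illustration of the idea, not NameNode code:

```java
// Hypothetical sketch of the HDFS-4379 proposal: a DN numbers its block
// reports from zero after every process restart, so the NameNode can tell a
// restart apart from a re-registration after a network interruption.
public class BlockReportSeq {
    static String classify(long seqNum) {
        // seqNum == 0: the DN process just started (counter reset to zero)
        // seqNum  > 0: same DN process re-registering, e.g. after a
        //              network interruption
        return seqNum == 0 ? "restart" : "re-registration";
    }

    public static void main(String[] args) {
        System.out.println(classify(0));  // restart
        System.out.println(classify(7));  // re-registration
    }
}
```

The NN could additionally remember the last sequence number seen per DN and skip a report whose number it has already processed.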
[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549018#comment-13549018 ] Hudson commented on HDFS-4363: -- Integrated in Hadoop-trunk-Commit #3206 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3206/]) HDFS-4363. Combine PBHelper and HdfsProtoUtil and remove redundant methods. Contributed by Suresh Srinivas. (Revision 1431088) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431088 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestHdfsProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch, > HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response
[ https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549011#comment-13549011 ] Suresh Srinivas commented on HDFS-4367: --- bq. I don't think this should be considered an incompatible change since old clients will in fact never request a DataEncryptionKey when data encryption isn't enabled. i.e. a client will never request an encryption key from the NN in a scenario where it's possible for the NN to return a null response. Aaron, sorry, I may not have understood the comment or this feature well. I still think this is incompatible. Let me know if I understand this correctly. This feature was introduced by HDFS-3637 in 2.0.2-alpha and works as follows: # Server returns null if dfs.block.access.token.enable is false or dfs.encrypt.data.transfer is false # Client methods that use DFSClient#getDataEncryptionKey() check server defaults and call ClientProtocol#getDataEncryptionKey if dfs.encrypt.data.transfer is set true #* These calls hit a null pointer exception if, on the server, dfs.block.access.token.enable is false and dfs.encrypt.data.transfer is true. When we fix the server, this hits a protobuf exception on the client side. > GetDataEncryptionKeyResponseProto does not handle null response > > > Key: HDFS-4367 > URL: https://issues.apache.org/jira/browse/HDFS-4367 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas >Priority: Blocker > Attachments: HDFS-4367.patch > > > GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional > to handle null response. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
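The shape of the schema change under discussion, as a hedged sketch. The message and field names come from the issue summary; the field tag number and the exact surrounding definitions are illustrative:

```proto
// Before (HDFS-3637 as shipped in 2.0.2-alpha): a required field cannot
// carry a null, so the server NPEs when no encryption key is configured.
//   required DataEncryptionKeyProto dataEncryptionKey = 1;

// After (the HDFS-4367 fix): optional lets the server simply omit the
// field; the client then checks hasDataEncryptionKey() before reading it.
message GetDataEncryptionKeyResponseProto {
  optional DataEncryptionKeyProto dataEncryptionKey = 1;
}
```

The compatibility debate above turns on whether any legitimate 2.0.2 deployment could have produced a response with this field absent.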
[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548977#comment-13548977 ] Hadoop QA commented on HDFS-4377: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12563989/hdfs-4377.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3808//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3808//console This message is automatically generated. 
> Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-4353: --- Status: Patch Available (was: Reopened) > Encapsulate connections to peers in Peer and PeerServer classes > --- > > Key: HDFS-4353 > URL: https://issues.apache.org/jira/browse/HDFS-4353 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: _02a.patch, 02b-cumulative.patch, 02c.patch, 02c.patch, > 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch > > > Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} > classes. Since many Java classes may be involved with these connections, it > makes sense to create a container for them. For example, a connection to a > peer may have an input stream, output stream, readablebytechannel, encrypted > output stream, and encrypted input stream associated with it. > This makes us less dependent on the {{NetUtils}} methods which use > {{instanceof}} to manipulate socket and stream states based on the runtime > type. it also paves the way to introduce UNIX domain sockets which don't > inherit from {{java.net.Socket}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-4353: --- Attachment: _02a.patch this patch applies against the HDFS-347 branch. > Encapsulate connections to peers in Peer and PeerServer classes > --- > > Key: HDFS-4353 > URL: https://issues.apache.org/jira/browse/HDFS-4353 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe > Attachments: _02a.patch, 02b-cumulative.patch, 02c.patch, 02c.patch, > 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch > > > Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} > classes. Since many Java classes may be involved with these connections, it > makes sense to create a container for them. For example, a connection to a > peer may have an input stream, output stream, readablebytechannel, encrypted > output stream, and encrypted input stream associated with it. > This makes us less dependent on the {{NetUtils}} methods which use > {{instanceof}} to manipulate socket and stream states based on the runtime > type. it also paves the way to introduce UNIX domain sockets which don't > inherit from {{java.net.Socket}}. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4377: -- Attachment: (was: hdfs-4377.txt) > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4377: -- Attachment: hdfs-4377.txt > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4377: -- Attachment: hdfs-4377.txt Thanks for the review Todd. Updated patch attached. #1 Good catch, I just noticed the type changed recently and updated the comment naively. I re-wrote it now. Related, I filed HDFS-4378 to make StorageID a class so the types are more readable and the code is less error prone. #2 I rewrote the implementation comment to fix spelling mistakes/grammar and hopefully improve the explanation. > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt, hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4288) NN accepts incremental BR as IBR in safemode
[ https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548938#comment-13548938 ] Suresh Srinivas commented on HDFS-4288: --- bq. Better yet, the logic would be to (seqNum == 0 || seqNum != lastSeqNum). However this requires writable/RPC changes on 23, and protobuf changes on trunk/2 and trying to ensure backwards compatibility with an optional protobuf field, etc. Would you be ok if I filed another jira? Separate jira should be fine. We may want to mark this jira as 0.23 only. > NN accepts incremental BR as IBR in safemode > > > Key: HDFS-4288 > URL: https://issues.apache.org/jira/browse/HDFS-4288 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HDFS-4288.branch-23.patch > > > If a DN is ready to send an incremental BR and the NN goes down, the DN will > repeatedly try to reconnect. The NN will then process the DN's incremental > BR as an initial BR. The NN now thinks the DN has only a few blocks, and > will ignore all subsequent BRs from that DN until out of safemode -- which it > may never do because of all the "missing" blocks on the affected DNs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4378) Create a StorageID class
Eli Collins created HDFS-4378: - Summary: Create a StorageID class Key: HDFS-4378 URL: https://issues.apache.org/jira/browse/HDFS-4378 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Minor We currently pass DataNode storage IDs around as strings; the code would be more readable (e.g. map keys could be specified as StorageIDs rather than strings) and less error prone if we used a simple class. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
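A minimal sketch of what such a wrapper might look like. This is illustrative only: the class name comes from the HDFS-4378 summary, but the fields and methods here are assumptions, not the eventual implementation.

```java
import java.util.Objects;

// Illustrative sketch of a StorageID wrapper class (not the actual
// HDFS-4378 patch). Wrapping the raw string gives compile-time type
// safety when storage IDs are used as map keys.
final class StorageID implements Comparable<StorageID> {
  private final String id;

  StorageID(String id) {
    this.id = Objects.requireNonNull(id, "storage ID string");
  }

  /** The underlying string form, e.g. for wire protocols and logs. */
  String getId() {
    return id;
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof StorageID && id.equals(((StorageID) o).id);
  }

  @Override
  public int hashCode() {
    return id.hashCode();
  }

  @Override
  public int compareTo(StorageID other) {
    return id.compareTo(other.id);
  }

  @Override
  public String toString() {
    return id;
  }
}
```

Maps could then be declared as {{Map<StorageID, ...>}} rather than {{Map<String, ...>}}, so accidentally passing some other string where a storage ID is expected becomes a compile error.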
[jira] [Commented] (HDFS-4249) Add status NameNode startup to webUI
[ https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548912#comment-13548912 ] Aaron T. Myers commented on HDFS-4249: -- No problem. Thanks for working on this! > Add status NameNode startup to webUI > - > > Key: HDFS-4249 > URL: https://issues.apache.org/jira/browse/HDFS-4249 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Suresh Srinivas >Assignee: Chris Nauroth > Attachments: HDFS-4249.1.pdf > > > Currently NameNode WebUI server starts only after the fsimage is loaded, > edits are applied and checkpoint is complete. Any status related to namenode > starting up is available only in the logs. I propose starting the webserver > before loading namespace and providing namenode startup information. > More details in the next comment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4249) Add status NameNode startup to webUI
[ https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548906#comment-13548906 ] Chris Nauroth commented on HDFS-4249: - I just converted the child issues into sub-tasks. Aaron, thanks for letting me know about this feature of JIRA. > Add status NameNode startup to webUI > - > > Key: HDFS-4249 > URL: https://issues.apache.org/jira/browse/HDFS-4249 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Suresh Srinivas >Assignee: Chris Nauroth > Attachments: HDFS-4249.1.pdf > > > Currently NameNode WebUI server starts only after the fsimage is loaded, > edits are applied and checkpoint is complete. Any status related to namenode > startin up is available only in the logs. I propose starting the webserver > before loading namespace and providing namenode startup information. > More details in the next comment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4374) Display NameNode startup progress in UI
[ https://issues.apache.org/jira/browse/HDFS-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-4374: Issue Type: Sub-task (was: Bug) Parent: HDFS-4249 > Display NameNode startup progress in UI > --- > > Key: HDFS-4374 > URL: https://issues.apache.org/jira/browse/HDFS-4374 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Display the information about the NameNode's startup progress in the NameNode > web UI. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4373) Add HTTP API for querying NameNode startup progress
[ https://issues.apache.org/jira/browse/HDFS-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-4373: Issue Type: Sub-task (was: Bug) Parent: HDFS-4249 > Add HTTP API for querying NameNode startup progress > --- > > Key: HDFS-4373 > URL: https://issues.apache.org/jira/browse/HDFS-4373 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Provide an HTTP API for non-browser clients to query the NameNode's current > progress through startup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4372) Track NameNode startup progress
[ https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-4372: Issue Type: Sub-task (was: Bug) Parent: HDFS-4249 > Track NameNode startup progress > --- > > Key: HDFS-4372 > URL: https://issues.apache.org/jira/browse/HDFS-4372 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > > Track detailed progress information about the steps of NameNode startup to > enable display to users. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4249) Add status NameNode startup to webUI
[ https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548890#comment-13548890 ] Aaron T. Myers commented on HDFS-4249: -- Hey Chris, perhaps we should move those three issues you created to be actual sub-task JIRAs of this one? > Add status NameNode startup to webUI > - > > Key: HDFS-4249 > URL: https://issues.apache.org/jira/browse/HDFS-4249 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Suresh Srinivas >Assignee: Chris Nauroth > Attachments: HDFS-4249.1.pdf > > > Currently NameNode WebUI server starts only after the fsimage is loaded, > edits are applied and checkpoint is complete. Any status related to namenode > startin up is available only in the logs. I propose starting the webserver > before loading namespace and providing namenode startup information. > More details in the next comment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548877#comment-13548877 ] Hadoop QA commented on HDFS-4363: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12563964/HDFS-4363.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3807//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3807//console This message is automatically generated. > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch, > HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4288) NN accepts incremental BR as IBR in safemode
[ https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548872#comment-13548872 ] Daryn Sharp commented on HDFS-4288: --- bq. This will also solve issues related to DN restart and NN may not process the block report. True, but the boolean patch (simple incremental improvement on the existing trunk behavior) fixes both DN restart and reregistration after a broken connection. The NN cannot distinguish the two. So with a boolean, the NN (naively) processes the BR associated with every (re)registration. A sequence number, that relies on a sentinel value, allows the DN to dictate the NN's behavior. This works well for restart since we know we are starting from 0. For a rereg, block updates may have been lost, so the sequence number must be guaranteed to always be reset to 0. That's naive like the boolean, and might be hard or fragile to ensure it's always reset - in which case we might as well go with the boolean. Better yet, the logic would be to {{(seqNum == 0 || seqNum != lastSeqNum)}}. However this requires writable/RPC changes on 23, and protobuf changes on trunk/2 and trying to ensure backwards compatibility with an optional protobuf field, etc. Would you be ok if I filed another jira? > NN accepts incremental BR as IBR in safemode > > > Key: HDFS-4288 > URL: https://issues.apache.org/jira/browse/HDFS-4288 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HDFS-4288.branch-23.patch > > > If a DN is ready to send an incremental BR and the NN goes down, the DN will > repeatedly try to reconnect. The NN will then process the DN's incremental > BR as an initial BR. 
The NN now thinks the DN has only a few blocks, and > will ignore all subsequent BRs from that DN until out of safemode -- which it > may never do because of all the "missing" blocks on the affected DNs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
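Daryn's proposed condition above can be sketched as a small NN-side helper. This is a hypothetical reading of the quoted expression {{(seqNum == 0 || seqNum != lastSeqNum)}}; the class and method names are invented here, since the actual writable/protobuf changes were deferred to a follow-up jira.

```java
// Hypothetical sketch of the check described above: a report is treated
// as an initial (full) block report if the DN marks it with the sentinel
// seqNum == 0, or if its sequence number does not match what the NN last
// recorded for this DN. Mirrors the quoted condition
// (seqNum == 0 || seqNum != lastSeqNum); names are illustrative only.
final class BlockReportTracker {
  private long lastSeqNum = 0;  // sentinel: no report processed yet

  /** Returns true if the report should be processed as an initial BR. */
  boolean isInitialReport(long seqNum) {
    return seqNum == 0 || seqNum != lastSeqNum;
  }

  /** Record the sequence number of a report the NN has processed. */
  void recordProcessed(long seqNum) {
    lastSeqNum = seqNum;
  }
}
```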
[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548861#comment-13548861 ] Todd Lipcon commented on HDFS-4377: --- {code} - // Keeps a TreeSet for every named node. Each treeset contains + // Keeps a TreeMap for every named node. Each treeset contains // a list of the blocks that are "extra" at that location. We'll // eventually remove these extras. - // Mapping: StorageID -> TreeSet + // Mapping: StorageID -> TreeMap // public final Map> excessReplicateMap = new TreeMap>(); {code} This isn't right -- the value of the map is itself a set, not a map. {code} - * It is considered extermely rare for all these numbers to match - * on a different machine accidentally for the following - * a) SecureRandom(INT_MAX) is pretty much random (1 in 2 billion), and - * b) Good chance ip address would be different, and - * c) Even on the same machine, Datanode is designed to use different ports. - * d) Good chance that these are started at different times. - * For a confict to occur all the 4 above have to match!. - * The format of this string can be changed anytime in future without - * affecting its functionality. {code} I think those comments were useful. Probably worth leaving them as '//' style comments. Otherwise seems fine > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
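JIRA's formatter stripped the generic parameters from the declaration Todd quotes above; the point being made is that the map's value type is a set of blocks, not another map. Roughly, in a simplified sketch (using {{String}} in place of the real {{Block}} class):

```java
import java.util.Collection;
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

// Simplified model of BlockManager's excessReplicateMap (types reduced
// to String for illustration). The value is a set of blocks, not a map.
final class ExcessReplicaSketch {
  // Keeps a collection for every datanode storage. Each collection
  // contains the blocks that are "extra" at that location; we'll
  // eventually remove these extras.
  // Mapping: StorageID -> TreeSet of blocks
  final Map<String, Collection<String>> excessReplicateMap =
      new TreeMap<String, Collection<String>>();

  void addExcess(String storageID, String block) {
    Collection<String> excess = excessReplicateMap.get(storageID);
    if (excess == null) {
      excess = new TreeSet<String>();
      excessReplicateMap.put(storageID, excess);
    }
    excess.add(block);
  }
}
```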
[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response
[ https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548857#comment-13548857 ] Aaron T. Myers commented on HDFS-4367: -- +1, the patch looks good to me. I don't think this should be considered an incompatible change since old clients will in fact never request a DataEncryptionKey when data encryption isn't enabled. i.e. a client will never request an encryption key from the NN in a scenario where it's possible for the NN to return a null response. > GetDataEncryptionKeyResponseProto does not handle null response > > > Key: HDFS-4367 > URL: https://issues.apache.org/jira/browse/HDFS-4367 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas >Priority: Blocker > Attachments: HDFS-4367.patch > > > GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional > to handle null response. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4365) Add junit timeout to TestBalancerWithNodeGroup
[ https://issues.apache.org/jira/browse/HDFS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-4365: - Resolution: Duplicate Status: Resolved (was: Patch Available) Looks like this issue got addressed as part of the commit of HDFS-4261. Thanks a lot for looking into this anyway, Colin. > Add junit timeout to TestBalancerWithNodeGroup > -- > > Key: HDFS-4365 > URL: https://issues.apache.org/jira/browse/HDFS-4365 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Trivial > Attachments: HDFS-4365.001.patch > > > TestBalancerWithNodeGroup should have a junit timeout so that when it fails, > we can easily identify it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4375) Use token request messages defined in hadoop common
[ https://issues.apache.org/jira/browse/HDFS-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HDFS-4375: -- Attachment: HDFS-4375.patch > Use token request messages defined in hadoop common > --- > > Key: HDFS-4375 > URL: https://issues.apache.org/jira/browse/HDFS-4375 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, security >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4375.patch, HDFS-4375.patch > > > HDFS changes related to HADOOP-9192 to reuse the protobuf messages defined in > common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4377: -- Attachment: hdfs-4377.txt Patch attached. > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4377) Some trivial DN comment cleanup
Eli Collins created HDFS-4377: - Summary: Some trivial DN comment cleanup Key: HDFS-4377 URL: https://issues.apache.org/jira/browse/HDFS-4377 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.0.0-alpha Reporter: Eli Collins Assignee: Eli Collins Priority: Minor Attachments: hdfs-4377.txt DataStorage.java - The "initilized" member is misspelled - Comment what the storageID member is DataNode.java - Cleanup createNewStorageId comment (should mention the port is included and is overly verbose) BlockManager.java - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup
[ https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HDFS-4377: -- Status: Patch Available (was: Open) > Some trivial DN comment cleanup > --- > > Key: HDFS-4377 > URL: https://issues.apache.org/jira/browse/HDFS-4377 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins >Priority: Minor > Attachments: hdfs-4377.txt > > > DataStorage.java > - The "initilized" member is misspelled > - Comment what the storageID member is > DataNode.java > - Cleanup createNewStorageId comment (should mention the port is included and > is overly verbose) > BlockManager.java > - TreeSet in the comment should be TreeMap -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-347) DFS read performance suboptimal when client co-located on nodes with data
[ https://issues.apache.org/jira/browse/HDFS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548757#comment-13548757 ] Todd Lipcon commented on HDFS-347: -- Sure. Created a branch. I anticipate having all the work committed in the next day or two and will call a merge immediately. Keep in mind this work has been under review here for 2-3 months now, and there are 100+ watchers on this JIRA, so I don't anticipate needing a lengthy review period like we did for other branches. > DFS read performance suboptimal when client co-located on nodes with data > - > > Key: HDFS-347 > URL: https://issues.apache.org/jira/browse/HDFS-347 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, hdfs-client, performance >Reporter: George Porter >Assignee: Colin Patrick McCabe > Attachments: all.tsv, BlockReaderLocal1.txt, HADOOP-4801.1.patch, > HADOOP-4801.2.patch, HADOOP-4801.3.patch, HDFS-347-016_cleaned.patch, > HDFS-347.016.patch, HDFS-347.017.clean.patch, HDFS-347.017.patch, > HDFS-347.018.clean.patch, HDFS-347.018.patch2, HDFS-347.019.patch, > HDFS-347.020.patch, HDFS-347.021.patch, HDFS-347.022.patch, > HDFS-347.024.patch, HDFS-347.025.patch, HDFS-347.026.patch, > HDFS-347.027.patch, HDFS-347.029.patch, HDFS-347.030.patch, > HDFS-347.033.patch, HDFS-347.035.patch, HDFS-347-branch-20-append.txt, > hdfs-347.png, hdfs-347.txt, local-reads-doc > > > One of the major strategies Hadoop uses to get scalable data processing is to > move the code to the data. However, putting the DFS client on the same > physical node as the data blocks it acts on doesn't improve read performance > as much as expected. > After looking at Hadoop and O/S traces (via HADOOP-4049), I think the problem > is due to the HDFS streaming protocol causing many more read I/O operations > (iops) than necessary. 
Consider the case of a DFSClient fetching a 64 MB > disk block from the DataNode process (running in a separate JVM) running on > the same machine. The DataNode will satisfy the single disk block request by > sending data back to the HDFS client in 64-KB chunks. In BlockSender.java, > this is done in the sendChunk() method, relying on Java's transferTo() > method. Depending on the host O/S and JVM implementation, transferTo() is > implemented as either a sendfilev() syscall or a pair of mmap() and write(). > In either case, each chunk is read from the disk by issuing a separate I/O > operation for each chunk. The result is that the single request for a 64-MB > block ends up hitting the disk as over a thousand smaller requests for 64-KB > each. > Since the DFSClient runs in a different JVM and process than the DataNode, > shuttling data from the disk to the DFSClient also results in context > switches each time network packets get sent (in this case, the 64-kb chunk > turns into a large number of 1500 byte packet send operations). Thus we see > a large number of context switches for each block send operation. > I'd like to get some feedback on the best way to address this, but I think > the answer is to provide a mechanism for a DFSClient to directly open data blocks that > happen to be on the same machine. It could do this by examining the set of > LocatedBlocks returned by the NameNode, marking those that should be resident > on the local host. Since the DataNode and DFSClient (probably) share the > same hadoop configuration, the DFSClient should be able to find the files > holding the block data, and it could directly open them and send data back to > the client. This would avoid the context switches imposed by the network > layer, and would allow for much larger read buffers than 64KB, which should > reduce the number of iops imposed by each read block operation. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
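The chunking behavior described in the issue above can be modeled in a few lines. This is a simplified stand-in for illustration, not the actual BlockSender code: it shows how pushing a block through FileChannel.transferTo() one fixed-size piece at a time multiplies the number of transfer operations.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

// Simplified model of the chunked send described above: a block is pushed
// through FileChannel.transferTo() one CHUNK_SIZE piece at a time, so a
// 64 MB block turns into ~1000 separate 64 KB transfer operations.
final class ChunkedSendDemo {
  static final int CHUNK_SIZE = 64 * 1024;

  /** Copies src to dst in CHUNK_SIZE pieces; returns the number of calls. */
  static long sendChunked(FileChannel src, FileChannel dst) throws IOException {
    long pos = 0, size = src.size(), calls = 0;
    while (pos < size) {
      long sent = src.transferTo(pos, Math.min(CHUNK_SIZE, size - pos), dst);
      pos += sent;  // transferTo may move fewer bytes than requested
      calls++;
    }
    return calls;
  }

  public static void main(String[] args) throws IOException {
    // A 1 MB "block" stands in for the 64 MB case described above.
    File src = File.createTempFile("block", ".dat");
    File dst = File.createTempFile("copy", ".dat");
    src.deleteOnExit();
    dst.deleteOnExit();
    try (RandomAccessFile raf = new RandomAccessFile(src, "rw")) {
      raf.setLength(1024 * 1024);
    }
    try (FileChannel in = new RandomAccessFile(src, "r").getChannel();
         FileChannel out = new FileOutputStream(dst).getChannel()) {
      long calls = sendChunked(in, out);
      // Typically 16 calls for 1 MB in 64 KB chunks; scale that up and a
      // 64 MB block becomes ~1024 separate transfer operations.
      System.out.println("transferTo calls: " + calls);
    }
  }
}
```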
[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out
[ https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548745#comment-13548745 ] Aaron T. Myers commented on HDFS-4261: -- Thanks for committing this, Nicholas. I've filed this JIRA to track the intermittent timeout which still occurs: HDFS-4376. > TestBalancerWithNodeGroup times out > --- > > Key: HDFS-4261 > URL: https://issues.apache.org/jira/browse/HDFS-4261 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer >Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Junping Du > Fix For: 3.0.0 > > Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, > HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, > HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac, > > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win, > test-balancer-with-node-group-timeout.txt > > > When I manually ran TestBalancerWithNodeGroup, it always timed out on my > machine. Looking at the Jenkins report [build > #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/], > TestBalancerWithNodeGroup somehow was skipped so that the problem was not > detected. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4376) Intermittent timeout of TestBalancerWithNodeGroup
[ https://issues.apache.org/jira/browse/HDFS-4376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HDFS-4376: - Attachment: test-balancer-with-node-group-timeout.txt Thread dump when the test timeout occurred. > Intermittent timeout of TestBalancerWithNodeGroup > - > > Key: HDFS-4376 > URL: https://issues.apache.org/jira/browse/HDFS-4376 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer, test >Affects Versions: 2.0.3-alpha >Reporter: Aaron T. Myers >Priority: Minor > Attachments: test-balancer-with-node-group-timeout.txt > > > HDFS-4261 fixed several issues with the balancer and balancer tests, and > reduced the frequency with which TestBalancerWithNodeGroup times out. Despite > this, occasional timeouts still occur in this test. This JIRA is to track and > fix this problem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4376) Intermittent timeout of TestBalancerWithNodeGroup
Aaron T. Myers created HDFS-4376: Summary: Intermittent timeout of TestBalancerWithNodeGroup Key: HDFS-4376 URL: https://issues.apache.org/jira/browse/HDFS-4376 Project: Hadoop HDFS Issue Type: Bug Components: balancer, test Affects Versions: 2.0.3-alpha Reporter: Aaron T. Myers Priority: Minor Attachments: test-balancer-with-node-group-timeout.txt HDFS-4261 fixed several issues with the balancer and balancer tests, and reduced the frequency with which TestBalancerWithNodeGroup times out. Despite this, occasional timeouts still occur in this test. This JIRA is to track and fix this problem. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HDFS-4375) Use token request messages defined in hadoop common
[ https://issues.apache.org/jira/browse/HDFS-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas reassigned HDFS-4375: - Assignee: Suresh Srinivas > Use token request messages defined in hadoop common > --- > > Key: HDFS-4375 > URL: https://issues.apache.org/jira/browse/HDFS-4375 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, security >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4375.patch > > > HDFS changes related to HADOOP-9192 to reuse the protobuf messages defined in > common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4375) Use token request messages defined in hadoop common
[ https://issues.apache.org/jira/browse/HDFS-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HDFS-4375: -- Attachment: HDFS-4375.patch > Use token request messages defined in hadoop common > --- > > Key: HDFS-4375 > URL: https://issues.apache.org/jira/browse/HDFS-4375 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, security >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas > Attachments: HDFS-4375.patch > > > HDFS changes related to HADOOP-9192 to reuse the protobuf messages defined in > common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4375) Use token request messages defined in hadoop common
Suresh Srinivas created HDFS-4375: - Summary: Use token request messages defined in hadoop common Key: HDFS-4375 URL: https://issues.apache.org/jira/browse/HDFS-4375 Project: Hadoop HDFS Issue Type: Improvement Components: namenode, security Affects Versions: 2.0.2-alpha Reporter: Suresh Srinivas HDFS changes related to HADOOP-9192 to reuse the protobuf messages defined in common. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suresh Srinivas updated HDFS-4363: -- Attachment: HDFS-4363.patch Rebased patch + addressed comments from Nicholas. > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch, > HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548670#comment-13548670 ] Tsz Wo (Nicholas), SZE commented on HDFS-4363: -- +1 once the above minor comments are addressed. > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HDFS-4244) Support deleting snapshots
[ https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE resolved HDFS-4244. -- Resolution: Fixed Fix Version/s: Snapshot (HDFS-2802) Hadoop Flags: Reviewed I have committed this. Thanks, Jing! > Support deleting snapshots > -- > > Key: HDFS-4244 > URL: https://issues.apache.org/jira/browse/HDFS-4244 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Jing Zhao >Assignee: Jing Zhao > Fix For: Snapshot (HDFS-2802) > > Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, > HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, > HDFS-4244.006.patch, HDFS-4244.007.patch > > > Provide functionality to delete a snapshot, given the name of the snapshot > and the path to the directory where the snapshot was taken. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4244) Support deleting snapshots
[ https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548658#comment-13548658 ] Tsz Wo (Nicholas), SZE commented on HDFS-4244: -- +1 patch looks good. > Support deleting snapshots > -- > > Key: HDFS-4244 > URL: https://issues.apache.org/jira/browse/HDFS-4244 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Jing Zhao >Assignee: Jing Zhao > Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, > HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, > HDFS-4244.006.patch, HDFS-4244.007.patch > > > Provide functionality to delete a snapshot, given the name of the snapshot > and the path to the directory where the snapshot was taken. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks
[ https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Derek Dagit updated HDFS-4366: -- Attachment: (was: hdfs-4366-unittest.patch) > Block Replication Policy Implementation May Skip Higher-Priority Blocks for > Lower-Priority Blocks > - > > Key: HDFS-4366 > URL: https://issues.apache.org/jira/browse/HDFS-4366 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0, 0.23.5 >Reporter: Derek Dagit >Assignee: Derek Dagit > Attachments: hdfs-4366-unittest.patch > > > In certain cases, higher-priority under-replicated blocks can be skipped by > the replication policy implementation. The current implementation maintains, > for each priority level, an index into a list of blocks that are > under-replicated. Together, the lists compose a priority queue (see note > later about branch-0.23). In some cases when blocks are removed from a list, > the caller (BlockManager) properly handles the index into the list from which > it removed a block. In some other cases, the index remains stationary while > the list changes. Whenever this happens, and the removed block happened to > be at or before the index, the implementation will skip over a block when > selecting blocks for replication work. > In situations when entire racks are decommissioned, leading to many > under-replicated blocks, loss of blocks can occur. > Background: HDFS-1765 > This patch to trunk greatly improved the state of the replication policy > implementation. Prior to the patch, the following details were true: > * The block "priority queue" was no such thing: It was really a set of > trees that held blocks in natural ordering, that being by the block's ID, > which resulted in iterator walks over the blocks in pseudo-random order. > * There was only a single index into an iteration over all of the > blocks... > * ... meaning the implementation was only successful in respecting > priority levels on the first pass. 
Overall, the behavior was a > round-robin-type scheduling of blocks. > After the patch > * A proper priority queue is implemented, preserving log n operations > while iterating over blocks in the order added. > * A separate index for each priority is kept... > * ... allowing for processing of the highest priority blocks first > regardless of which priority had last been processed. > The change was suggested for branch-0.23 as well as trunk, but it does not > appear to have been pulled in. > The problem: > Although the indices are now tracked in a better way, there is a > synchronization issue since the indices are managed outside of methods to > modify the contents of the queue. > Removal of a block from a priority level without adjusting the index can mean > that the index then points to the block after the block it originally pointed > to. In the next round of scheduling for that priority level, the block > originally pointed to by the index is skipped. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
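The skip described above can be reduced to a minimal sketch (a hypothetical demo class, not Hadoop's actual UnderReplicatedBlocks/BlockManager code): a scan index that persists across scheduling rounds is not decremented when a removal happens at or before it, so the next scan silently passes over a block.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the index-skip bug: the scan index into a priority
// level's block list survives across calls, but removals do not adjust it.
public class IndexSkipDemo {
    static List<String> blocks = new ArrayList<>();
    static int replIndex = 0; // persists across scheduling rounds

    // Returns the next under-replicated block to schedule, or null.
    static String nextBlockToReplicate() {
        if (replIndex >= blocks.size()) {
            return null;
        }
        return blocks.get(replIndex++);
    }

    public static void main(String[] args) {
        blocks.add("blk_1");
        blocks.add("blk_2");
        blocks.add("blk_3");

        System.out.println(nextBlockToReplicate()); // blk_1; replIndex is now 1

        // Another code path removes a block at/before the index without
        // decrementing replIndex, so index 1 now points past blk_2.
        blocks.remove("blk_1");

        // blk_2 is silently skipped; blk_3 is scheduled instead.
        System.out.println(nextBlockToReplicate()); // prints blk_3
    }
}
```

The fix direction described in the JIRA is to adjust (or encapsulate) the per-priority indices inside the queue's own removal methods, so a removal at or before the index also moves the index back by one.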
[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks
[ https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Derek Dagit updated HDFS-4366: -- Attachment: hdfs-4366-unittest.patch Replacing patch, as previous patch had extra unrelated changes. > Block Replication Policy Implementation May Skip Higher-Priority Blocks for > Lower-Priority Blocks > - > > Key: HDFS-4366 > URL: https://issues.apache.org/jira/browse/HDFS-4366 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0, 0.23.5 >Reporter: Derek Dagit >Assignee: Derek Dagit > Attachments: hdfs-4366-unittest.patch > > > In certain cases, higher-priority under-replicated blocks can be skipped by > the replication policy implementation. The current implementation maintains, > for each priority level, an index into a list of blocks that are > under-replicated. Together, the lists compose a priority queue (see note > later about branch-0.23). In some cases when blocks are removed from a list, > the caller (BlockManager) properly handles the index into the list from which > it removed a block. In some other cases, the index remains stationary while > the list changes. Whenever this happens, and the removed block happened to > be at or before the index, the implementation will skip over a block when > selecting blocks for replication work. > In situations when entire racks are decommissioned, leading to many > under-replicated blocks, loss of blocks can occur. > Background: HDFS-1765 > This patch to trunk greatly improved the state of the replication policy > implementation. Prior to the patch, the following details were true: > * The block "priority queue" was no such thing: It was really a set of > trees that held blocks in natural ordering, that being by the block's ID, > which resulted in iterator walks over the blocks in pseudo-random order. > * There was only a single index into an iteration over all of the > blocks... > * ... 
meaning the implementation was only successful in respecting > priority levels on the first pass. Overall, the behavior was a > round-robin-type scheduling of blocks. > After the patch > * A proper priority queue is implemented, preserving log n operations > while iterating over blocks in the order added. > * A separate index for each priority is kept... > * ... allowing for processing of the highest priority blocks first > regardless of which priority had last been processed. > The change was suggested for branch-0.23 as well as trunk, but it does not > appear to have been pulled in. > The problem: > Although the indices are now tracked in a better way, there is a > synchronization issue since the indices are managed outside of methods to > modify the contents of the queue. > Removal of a block from a priority level without adjusting the index can mean > that the index then points to the block after the block it originally pointed > to. In the next round of scheduling for that priority level, the block > originally pointed to by the index is skipped. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods
[ https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548611#comment-13548611 ] Tsz Wo (Nicholas), SZE commented on HDFS-4363: -- Accidentally deleted the first line below? {code} /** - * Call {@link #create(String, FsPermission, EnumSet, short, long, - * Progressable, int)} with default permission + * Progressable, int, ChecksumOpt)} with default permission * {@link FsPermission#getDefault()}. {code} BTW, the patch is out of sync. > Combine PBHelper and HdfsProtoUtil and remove redundant methods > --- > > Key: HDFS-4363 > URL: https://issues.apache.org/jira/browse/HDFS-4363 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.0.2-alpha >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch > > > There are many methods overlapping between PBHelper and HdfsProtoUtil. This > jira combines these two helper classes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out
[ https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548610#comment-13548610 ] Hudson commented on HDFS-4261: -- Integrated in Hadoop-trunk-Commit #3202 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3202/]) HDFS-4261. Fix bugs in Balancer causing infinite loop and TestBalancerWithNodeGroup timing out. Contributed by Junping Du (Revision 1430917) Result = SUCCESS szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430917 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java > TestBalancerWithNodeGroup times out > --- > > Key: HDFS-4261 > URL: https://issues.apache.org/jira/browse/HDFS-4261 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer >Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Junping Du > Fix For: 3.0.0 > > Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, > HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, > HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac, > > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win, > test-balancer-with-node-group-timeout.txt > > > When I manually ran TestBalancerWithNodeGroup, it always timed out in my > machine. 
Looking at the Jenkins report [build > #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/], > TestBalancerWithNodeGroup somehow was skipped so that the problem was not > detected. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4261) TestBalancerWithNodeGroup times out
[ https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-4261: - Resolution: Fixed Status: Resolved (was: Patch Available) I have committed this. Thanks, Junping! Also, thanks everyone for helping out here. > TestBalancerWithNodeGroup times out > --- > > Key: HDFS-4261 > URL: https://issues.apache.org/jira/browse/HDFS-4261 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer >Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Junping Du > Fix For: 3.0.0 > > Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, > HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, > HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac, > > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win, > test-balancer-with-node-group-timeout.txt > > > When I manually ran TestBalancerWithNodeGroup, it always timed out in my > machine. Looking at the Jenkins report [build > #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/], > TestBalancerWithNodeGroup somehow was skipped so that the problem was not > detected. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks
[ https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548598#comment-13548598 ] Kihwal Lee commented on HDFS-4366: -- HDFS-1765 was checked in to branch-0.23 of that time, which later became branch-2. Current branch-0.23 was created from branch-0.23.2. Current branch-0.23 does not have it because HDFS-1765 was not merged to 0.23.2 at that time. Would you check whether the attached patch is what you intended to post? > Block Replication Policy Implementation May Skip Higher-Priority Blocks for > Lower-Priority Blocks > - > > Key: HDFS-4366 > URL: https://issues.apache.org/jira/browse/HDFS-4366 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0, 0.23.5 >Reporter: Derek Dagit >Assignee: Derek Dagit > Attachments: hdfs-4366-unittest.patch > > > In certain cases, higher-priority under-replicated blocks can be skipped by > the replication policy implementation. The current implementation maintains, > for each priority level, an index into a list of blocks that are > under-replicated. Together, the lists compose a priority queue (see note > later about branch-0.23). In some cases when blocks are removed from a list, > the caller (BlockManager) properly handles the index into the list from which > it removed a block. In some other cases, the index remains stationary while > the list changes. Whenever this happens, and the removed block happened to > be at or before the index, the implementation will skip over a block when > selecting blocks for replication work. > In situations when entire racks are decommissioned, leading to many > under-replicated blocks, loss of blocks can occur. > Background: HDFS-1765 > This patch to trunk greatly improved the state of the replication policy > implementation. 
Prior to the patch, the following details were true: > * The block "priority queue" was no such thing: It was really a set of > trees that held blocks in natural ordering, that being by the block's ID, > which resulted in iterator walks over the blocks in pseudo-random order. > * There was only a single index into an iteration over all of the > blocks... > * ... meaning the implementation was only successful in respecting > priority levels on the first pass. Overall, the behavior was a > round-robin-type scheduling of blocks. > After the patch > * A proper priority queue is implemented, preserving log n operations > while iterating over blocks in the order added. > * A separate index for each priority is kept... > * ... allowing for processing of the highest priority blocks first > regardless of which priority had last been processed. > The change was suggested for branch-0.23 as well as trunk, but it does not > appear to have been pulled in. > The problem: > Although the indices are now tracked in a better way, there is a > synchronization issue since the indices are managed outside of methods to > modify the contents of the queue. > Removal of a block from a priority level without adjusting the index can mean > that the index then points to the block after the block it originally pointed > to. In the next round of scheduling for that priority level, the block > originally pointed to by the index is skipped. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4261) TestBalancerWithNodeGroup times out
[ https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo (Nicholas), SZE updated HDFS-4261: - +1 patch looks good. > TestBalancerWithNodeGroup times out > --- > > Key: HDFS-4261 > URL: https://issues.apache.org/jira/browse/HDFS-4261 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer >Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha >Reporter: Tsz Wo (Nicholas), SZE >Assignee: Junping Du > Fix For: 3.0.0 > > Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, > HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, > HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac, > > org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win, > test-balancer-with-node-group-timeout.txt > > > When I manually ran TestBalancerWithNodeGroup, it always timed out in my > machine. Looking at the Jenkins report [build > #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/], > TestBalancerWithNodeGroup somehow was skipped so that the problem was not > detected. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class
[ https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548575#comment-13548575 ] Tsz Wo (Nicholas), SZE commented on HDFS-4352: -- > However, test-patch failed to detect these warnings. The patch committed earlier actually has never passed Jenkins. It was not the case that test-patch failed to detect these warnings. Todd, why did you commit the patch? > Encapsulate arguments to BlockReaderFactory in a class > -- > > Key: HDFS-4352 > URL: https://issues.apache.org/jira/browse/HDFS-4352 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe > Attachments: 01b.patch, 01.patch > > > Encapsulate the arguments to BlockReaderFactory in a class to avoid having to > pass around 10+ arguments to a few different functions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HDFS-4249) Add status NameNode startup to webUI
[ https://issues.apache.org/jira/browse/HDFS-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-4249: Attachment: HDFS-4249.1.pdf I attached a design document and entered linked issues for task breakdown. > Add status NameNode startup to webUI > - > > Key: HDFS-4249 > URL: https://issues.apache.org/jira/browse/HDFS-4249 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Suresh Srinivas >Assignee: Chris Nauroth > Attachments: HDFS-4249.1.pdf > > > Currently NameNode WebUI server starts only after the fsimage is loaded, > edits are applied and checkpoint is complete. Any status related to namenode > startin up is available only in the logs. I propose starting the webserver > before loading namespace and providing namenode startup information. > More details in the next comment. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4374) Display NameNode startup progress in UI
Chris Nauroth created HDFS-4374: --- Summary: Display NameNode startup progress in UI Key: HDFS-4374 URL: https://issues.apache.org/jira/browse/HDFS-4374 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Display the information about the NameNode's startup progress in the NameNode web UI. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4373) Add HTTP API for querying NameNode startup progress
Chris Nauroth created HDFS-4373: --- Summary: Add HTTP API for querying NameNode startup progress Key: HDFS-4373 URL: https://issues.apache.org/jira/browse/HDFS-4373 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Provide an HTTP API for non-browser clients to query the NameNode's current progress through startup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-4372) Track NameNode startup progress
Chris Nauroth created HDFS-4372: --- Summary: Track NameNode startup progress Key: HDFS-4372 URL: https://issues.apache.org/jira/browse/HDFS-4372 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Track detailed progress information about the steps of NameNode startup to enable display to users. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions
[ https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548504#comment-13548504 ] Hudson commented on HDFS-4031: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 exclusions. Contributed by Eli Collins (Revision 1430468) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430468 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml > Update findbugsExcludeFile.xml to include findbugs 2 exclusions > --- > > Key: HDFS-4031 > URL: https://issues.apache.org/jira/browse/HDFS-4031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4031.txt > > > Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that > unlike HDFS-4029 and HDFS-4030 are less problematic: > - numFailedVolumes is only incremented in one thread and that access is > synchronized > - pendingReceivedRequests in BPServiceActor is clearly synchronized > It would be reasonable to make these Atomics as well but I think they're uses > are clearly correct so figured for these the warning was more obviously bogus > and so could be ignored. > There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's > anonymous class is serializable but it is not) in BPServiceActor is OK to > ignore since we don't serialize LocalDatanodeInfo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs
[ https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548503#comment-13548503 ] Hudson commented on HDFS-4030: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) HDFS-4030. BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli Collins (Revision 1430462) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430462 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should > be AtomicLongs > -- > > Key: HDFS-4030 > URL: https://issues.apache.org/jira/browse/HDFS-4030 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4030.txt, hdfs-4030.txt > > > The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount > fields are currently volatile longs which are incremented, which isn't thread > safe. It looks like they're always incremented on paths that hold the NN > write lock but it would be easier and less error prone for future changes if > we made them AtomicLongs. The other volatile long members are just set in one > thread and read in another so they're fine as is. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
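The hazard behind the HDFS-4030 change can be sketched as follows (a hypothetical demo class, not the actual BlockManager code): `x++` on a volatile long is a separate read, add, and write, so concurrent increments can interleave and be lost; AtomicLong makes the whole increment one atomic operation.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the fix pattern: replace "volatile long count; ... count++;"
// (flagged by findbugs 2 as VO_VOLATILE_INCREMENT) with an AtomicLong.
public class CounterDemo {
    private final AtomicLong excessBlocksCount = new AtomicLong();

    void incrementExcessBlocks() {
        excessBlocksCount.incrementAndGet(); // atomic read-modify-write
    }

    long getExcessBlocksCount() {
        return excessBlocksCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo demo = new CounterDemo();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    demo.incrementExcessBlocks();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // With AtomicLong every increment is retained.
        System.out.println(demo.getExcessBlocksCount()); // 40000
    }
}
```

As the JIRA notes, the increments here happen to occur under the NN write lock, so the volatile version was likely correct in practice; the AtomicLong form is simply harder to break with future changes.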
[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes
[ https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548498#comment-13548498 ] Hudson commented on HDFS-4033: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) Updated CHANGES.txt to add HDFS-4033. (Revision 1430581) HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins (Revision 1430534) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430581 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430534 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/ReceivedDeletedBlockInfo.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java > Miscellaneous findbugs 2 fixes > -- > > Key: HDFS-4033 > URL: https://issues.apache.org/jira/browse/HDFS-4033 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4033.txt, hdfs-4033.txt > > > Fix some miscellaneous findbugs 2 warnings: > - Switch statements missing default cases > - Using \n instead of %n in format methods > - A socket close that should use IOUtils#closeSocket that we missed > - A use of SimpleDateFormat that is not threadsafe > - In ReplicaInputStreams it's not clear that we always close the streams we > allocate, moving the stream creation into the class where we close them makes > that more obvious > - A couple missing null checks -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
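The ReplicaInputStreams point in the HDFS-4033 description can be sketched like this (hypothetical names, not the actual HDFS code): when the class that closes the streams is also the class that opens them, it is obvious by inspection that every stream allocated is eventually closed.

```java
import java.io.Closeable;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical sketch: the constructor opens the streams it will later
// close, rather than accepting streams opened elsewhere with unclear
// ownership of the duty to close them.
class OwnedStreams implements Closeable {
  private final FileInputStream dataIn;
  private final FileInputStream checksumIn;

  OwnedStreams(File dataFile, File checksumFile) throws IOException {
    dataIn = new FileInputStream(dataFile);
    try {
      checksumIn = new FileInputStream(checksumFile);
    } catch (IOException e) {
      dataIn.close();  // don't leak the first stream if the second open fails
      throw e;
    }
  }

  FileInputStream getDataIn() {
    return dataIn;
  }

  FileInputStream getChecksumIn() {
    return checksumIn;
  }

  @Override
  public void close() throws IOException {
    try {
      dataIn.close();
    } finally {
      checksumIn.close();  // runs even if closing dataIn throws
    }
  }
}
```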
[jira] [Commented] (HDFS-4034) Remove redundant null checks
[ https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548496#comment-13548496 ] Hudson commented on HDFS-4034: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) HDFS-4034. Remove redundant null checks. Contributed by Eli Collins (Revision 1430585) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430585 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java > Remove redundant null checks > > > Key: HDFS-4034 > URL: https://issues.apache.org/jira/browse/HDFS-4034 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4034.txt, hdfs-4034.txt > > > Findbugs 2 catches a number of places where we're checking for null in cases > where the value will never be null. 
> We might need to wait until we switch to findbugs 2 to commit this as the > current findbugs may not be so smart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548493#comment-13548493 ] Hudson commented on HDFS-4353: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes (Revision 1430662) HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes. Contributed by Colin Patrick McCabe. (Revision 1430507) Result = FAILURE szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430662 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientBlockVerification.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDisableConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430507 Files : * 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/
[jira] [Commented] (HDFS-4035) LightWeightGSet and LightWeightHashSet increment a volatile without synchronization
[ https://issues.apache.org/jira/browse/HDFS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548494#comment-13548494 ] Hudson commented on HDFS-4035: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) HDFS-4035. LightWeightGSet and LightWeightHashSet increment a volatile without synchronization. Contributed by Eli Collins (Revision 1430595) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430595 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml > LightWeightGSet and LightWeightHashSet increment a volatile without > synchronization > --- > > Key: HDFS-4035 > URL: https://issues.apache.org/jira/browse/HDFS-4035 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4035.txt > > > LightWeightGSet and LightWeightHashSet have a volatile modification field > that they use to detect updates while iterating so they can throw a > ConcurrentModificationException. Since these "LightWeight" classes are > explicitly "not thread safe" (eg access to their members is not synchronized) > then the current use is OK, we just need to update findbugsExcludeFile.xml to > exclude them. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
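The fail-fast pattern the HDFS-4035 description refers to can be sketched with a toy class (illustrative only, not the real LightWeightGSet): a volatile modification counter is bumped on every structural change, and each iterator compares it against a snapshot so a concurrent mutation throws ConcurrentModificationException instead of silently corrupting the iteration. The bare `modification++` is exactly the non-atomic volatile increment findbugs 2 flags, which is harmless here because the class is explicitly not thread safe.

```java
import java.util.ConcurrentModificationException;
import java.util.Iterator;

// Toy fail-fast collection illustrating the pattern described above.
class FailFastBag implements Iterable<Integer> {
  private int[] data = new int[4];
  private int size = 0;
  // Bumped on every structural change. The "++" is not atomic, which is
  // what VO_VOLATILE_INCREMENT warns about; acceptable because the class
  // (like the LightWeight sets) is explicitly not thread safe.
  private volatile int modification = 0;

  void add(int value) {
    if (size == data.length) {
      int[] grown = new int[data.length * 2];
      System.arraycopy(data, 0, grown, 0, size);
      data = grown;
    }
    data[size++] = value;
    modification++;
  }

  @Override
  public Iterator<Integer> iterator() {
    final int expected = modification;  // snapshot at iterator creation
    return new Iterator<Integer>() {
      private int pos = 0;

      @Override
      public boolean hasNext() {
        return pos < size;
      }

      @Override
      public Integer next() {
        if (modification != expected) {
          // The collection changed under us: fail fast rather than
          // return possibly-inconsistent data.
          throw new ConcurrentModificationException();
        }
        return data[pos++];
      }
    };
  }
}
```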
[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class
[ https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548492#comment-13548492 ] Hudson commented on HDFS-4352: -- Integrated in Hadoop-Mapreduce-trunk #1308 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1308/]) svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate arguments to BlockReaderFactory in a class (Revision 1430663) Result = FAILURE szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430663 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java > Encapsulate arguments to BlockReaderFactory in a class > -- > > Key: HDFS-4352 > URL: https://issues.apache.org/jira/browse/HDFS-4352 > Project: Hadoop HDFS > Issue Type: Sub-task 
> Components: hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe > Attachments: 01b.patch, 01.patch > > > Encapsulate the arguments to BlockReaderFactory in a class to avoid having to > pass around 10+ arguments to a few different functions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions
[ https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548488#comment-13548488 ] Hudson commented on HDFS-4031: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 exclusions. Contributed by Eli Collins (Revision 1430468) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430468 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml > Update findbugsExcludeFile.xml to include findbugs 2 exclusions > --- > > Key: HDFS-4031 > URL: https://issues.apache.org/jira/browse/HDFS-4031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4031.txt > > > Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that > unlike HDFS-4029 and HDFS-4030 are less problematic: > - numFailedVolumes is only incremented in one thread and that access is > synchronized > - pendingReceivedRequests in BPServiceActor is clearly synchronized > It would be reasonable to make these Atomics as well but I think their uses > are clearly correct, so I figured for these the warning was more obviously bogus > and so could be ignored. > There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's > anonymous class is serializable but LocalDatanodeInfo is not) in BPServiceActor that is OK to > ignore since we don't serialize LocalDatanodeInfo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs
[ https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548487#comment-13548487 ] Hudson commented on HDFS-4030: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) HDFS-4030. BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli Collins (Revision 1430462) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430462 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should > be AtomicLongs > -- > > Key: HDFS-4030 > URL: https://issues.apache.org/jira/browse/HDFS-4030 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4030.txt, hdfs-4030.txt > > > The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount > fields are currently volatile longs which are incremented, which isn't thread > safe. It looks like they're always incremented on paths that hold the NN > write lock but it would be easier and less error prone for future changes if > we made them AtomicLongs. The other volatile long members are just set in one > thread and read in another so they're fine as is. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes
[ https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548482#comment-13548482 ] Hudson commented on HDFS-4033: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) Updated CHANGES.txt to add HDFS-4033. (Revision 1430581) HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins (Revision 1430534) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430581 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430534 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/ReceivedDeletedBlockInfo.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java > Miscellaneous findbugs 2 fixes > -- > > Key: HDFS-4033 > URL: https://issues.apache.org/jira/browse/HDFS-4033 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4033.txt, hdfs-4033.txt > > > Fix some miscellaneous findbugs 2 warnings: > - Switch statements missing default cases > - Using \n instead of %n in format methods > - A socket close that should use IOUtils#closeSocket that we missed > - A use of SimpleDateFormat that is not threadsafe > - In ReplicaInputStreams it's not clear that we always close the streams we > allocate, moving the stream creation into the class where we close them makes > that more obvious > - A couple missing null checks -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4034) Remove redundant null checks
[ https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548480#comment-13548480 ] Hudson commented on HDFS-4034: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) HDFS-4034. Remove redundant null checks. Contributed by Eli Collins (Revision 1430585) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430585 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java > Remove redundant null checks > > > Key: HDFS-4034 > URL: https://issues.apache.org/jira/browse/HDFS-4034 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4034.txt, hdfs-4034.txt > > > Findbugs 2 catches a number of places where we're checking for null in cases > where the value will never be null. > We might need to wait until we switch to findbugs 2 to commit this as the > current findbugs may not be so smart. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548477#comment-13548477 ] Hudson commented on HDFS-4353: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes (Revision 1430662) HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes. Contributed by Colin Patrick McCabe. (Revision 1430507) Result = FAILURE szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430662 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientBlockVerification.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDisableConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430507 Files : * 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdf
[jira] [Commented] (HDFS-4035) LightWeightGSet and LightWeightHashSet increment a volatile without synchronization
[ https://issues.apache.org/jira/browse/HDFS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548478#comment-13548478 ] Hudson commented on HDFS-4035: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) HDFS-4035. LightWeightGSet and LightWeightHashSet increment a volatile without synchronization. Contributed by Eli Collins (Revision 1430595) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430595 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml > LightWeightGSet and LightWeightHashSet increment a volatile without > synchronization > --- > > Key: HDFS-4035 > URL: https://issues.apache.org/jira/browse/HDFS-4035 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4035.txt > > > LightWeightGSet and LightWeightHashSet have a volatile modification field > that they use to detect updates while iterating so they can throw a > ConcurrentModificationException. Since these "LightWeight" classes are > explicitly "not thread safe" (eg access to their members is not synchronized) > then the current use is OK, we just need to update findbugsExcludeFile.xml to > exclude them. > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class
[ https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548476#comment-13548476 ] Hudson commented on HDFS-4352: -- Integrated in Hadoop-Hdfs-trunk #1280 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1280/]) svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate arguments to BlockReaderFactory in a class (Revision 1430663) Result = FAILURE szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430663 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java > Encapsulate arguments to BlockReaderFactory in a class > -- > > Key: HDFS-4352 > URL: https://issues.apache.org/jira/browse/HDFS-4352 > Project: Hadoop HDFS > Issue Type: Sub-task > 
Components: hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe > Attachments: 01b.patch, 01.patch > > > Encapsulate the arguments to BlockReaderFactory in a class to avoid having to > pass around 10+ arguments to a few different functions. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions
[ https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548381#comment-13548381 ] Hudson commented on HDFS-4031: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 exclusions. Contributed by Eli Collins (Revision 1430468) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430468 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml > Update findbugsExcludeFile.xml to include findbugs 2 exclusions > --- > > Key: HDFS-4031 > URL: https://issues.apache.org/jira/browse/HDFS-4031 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4031.txt > > > Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that > unlike HDFS-4029 and HDFS-4030 are less problematic: > - numFailedVolumes is only incremented in one thread and that access is > synchronized > - pendingReceivedRequests in BPServiceActor is clearly synchronized > It would be reasonable to make these Atomics as well but I think their uses > are clearly correct, so I figured for these the warning was more obviously bogus > and so could be ignored. > There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's > anonymous class is serializable but LocalDatanodeInfo is not) in BPServiceActor that is OK to > ignore since we don't serialize LocalDatanodeInfo. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs
[ https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548380#comment-13548380 ] Hudson commented on HDFS-4030: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) HDFS-4030. BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli Collins (Revision 1430462) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430462 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should > be AtomicLongs > -- > > Key: HDFS-4030 > URL: https://issues.apache.org/jira/browse/HDFS-4030 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4030.txt, hdfs-4030.txt > > > The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount > fields are currently volatile longs which are incremented, which isn't thread > safe. It looks like they're always incremented on paths that hold the NN > write lock but it would be easier and less error prone for future changes if > we made them AtomicLongs. The other volatile long members are just set in one > thread and read in another so they're fine as is.
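The conversion described above replaces a {{volatile long}} that is incremented with an {{AtomicLong}}, whose {{incrementAndGet()}} is a single atomic read-modify-write. A minimal sketch (the wrapper class and method names are invented; only the field name comes from the issue):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the HDFS-4030 change, not the actual BlockManager
// code: an AtomicLong counter that is safe to bump without holding a lock.
class BlockCounters {
    // was: "private volatile long excessBlocksCount;" with "excessBlocksCount++"
    private final AtomicLong excessBlocksCount = new AtomicLong(0);

    void incrementExcessBlocks() {
        // atomic read-modify-write; no lock required around the increment
        excessBlocksCount.incrementAndGet();
    }

    long getExcessBlocksCount() {
        return excessBlocksCount.get();
    }
}
```

Even though the real increments happen under the NN write lock today, the AtomicLong keeps the code correct if a future change adds an unlocked path.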
[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes
[ https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548375#comment-13548375 ] Hudson commented on HDFS-4033: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) Updated CHANGES.txt to add HDFS-4033. (Revision 1430581) HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins (Revision 1430534) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430581 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430534 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/ReceivedDeletedBlockInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java > Miscellaneous findbugs 2 fixes > -- > > Key: HDFS-4033 > URL: https://issues.apache.org/jira/browse/HDFS-4033 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4033.txt, hdfs-4033.txt > > > Fix some miscellaneous findbugs 2 warnings: > - Switch statements missing default cases > - Using \n instead of %n in format methods > - A socket close that should use IOUtils#closeSocket that we missed > - A use of SimpleDateFormat that is not threadsafe > - In ReplicaInputStreams it's not clear that we always close the streams we > allocate, moving the stream creation into the class where we close them makes > that more obvious > - A couple missing null checks
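On the SimpleDateFormat item above: SimpleDateFormat keeps mutable internal state, so sharing one instance across threads can silently corrupt output. A common remedy (a generic sketch with invented names, not the actual HDFS-4033 fix) is to give each thread its own instance via ThreadLocal:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical sketch: one SimpleDateFormat per thread instead of one
// shared instance, sidestepping SimpleDateFormat's lack of thread safety.
class SafeDateFormatter {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> {
            SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd");
            // fixed zone so output does not depend on the host's default
            f.setTimeZone(TimeZone.getTimeZone("UTC"));
            return f;
        });

    static String format(Date d) {
        // each calling thread gets (and reuses) its own instance
        return FORMAT.get().format(d);
    }
}
```

The trade-off is one formatter object per thread; for long-lived worker threads that is usually cheaper than synchronizing every format call.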
[jira] [Commented] (HDFS-4034) Remove redundant null checks
[ https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548373#comment-13548373 ] Hudson commented on HDFS-4034: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) HDFS-4034. Remove redundant null checks. Contributed by Eli Collins (Revision 1430585) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430585 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java > Remove redundant null checks > > > Key: HDFS-4034 > URL: https://issues.apache.org/jira/browse/HDFS-4034 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.0.0-alpha >Reporter: Eli Collins >Assignee: Eli Collins > Fix For: 2.0.3-alpha > > Attachments: hdfs-4034.txt, hdfs-4034.txt > > > Findbugs 2 catches a number of places where we're checking for null in cases > where the value will never be null. > We might need to wait until we switch to findbugs 2 to commit this as the > current findbugs may not be so smart. 
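The redundant checks HDFS-4034 removes are ones where dataflow analysis proves the reference can never be null at the check. A minimal sketch (invented names, not HDFS code) of the shape findbugs 2 reports:

```java
// Hypothetical sketch of a redundant null check: s is dereferenced on the
// first line, so execution only reaches the check when s is non-null and
// the branch is dead code.
class NullCheckExample {
    static int describeLength(String s) {
        int len = s.length();   // throws NPE here if s is null
        if (s == null) {        // redundant: s cannot be null at this point
            return -1;
        }
        return len;
    }

    // the fix is simply to drop the unreachable branch
    static int describeLengthFixed(String s) {
        return s.length();
    }
}
```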
[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548370#comment-13548370 ] Hudson commented on HDFS-4353: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes (Revision 1430662) HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes. Contributed by Colin Patrick McCabe. (Revision 1430507) Result = SUCCESS szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430662 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientBlockVerification.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDisableConnCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430507 Files : * 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/sr
[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class
[ https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548369#comment-13548369 ] Hudson commented on HDFS-4352: -- Integrated in Hadoop-Yarn-trunk #91 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/91/]) svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate arguments to BlockReaderFactory in a class (Revision 1430663) Result = SUCCESS szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430663 Files : * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java > Encapsulate arguments to BlockReaderFactory in a class > -- > > Key: HDFS-4352 > URL: https://issues.apache.org/jira/browse/HDFS-4352 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: 
hdfs-client >Affects Versions: 2.0.3-alpha >Reporter: Colin Patrick McCabe > Attachments: 01b.patch, 01.patch > > > Encapsulate the arguments to BlockReaderFactory in a class to avoid having to > pass around 10+ arguments to a few different functions.
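The encapsulation HDFS-4352 describes is the parameter-object pattern: a 10+ argument signature becomes one class, often built with a builder so call sites name each value. A minimal sketch (all names invented; this is not the actual HDFS-4352 API):

```java
// Hypothetical parameter object replacing a long positional argument list
// like newBlockReader(file, blockId, bufSize, verify, ...).
class BlockReaderParams {
    final String fileName;
    final long blockId;
    final int bufferSize;
    final boolean verifyChecksum;

    private BlockReaderParams(Builder b) {
        this.fileName = b.fileName;
        this.blockId = b.blockId;
        this.bufferSize = b.bufferSize;
        this.verifyChecksum = b.verifyChecksum;
    }

    static class Builder {
        private String fileName;
        private long blockId;
        private int bufferSize = 4096;       // sensible defaults live here,
        private boolean verifyChecksum = true; // not at every call site

        Builder fileName(String f)        { this.fileName = f; return this; }
        Builder blockId(long id)          { this.blockId = id; return this; }
        Builder bufferSize(int s)         { this.bufferSize = s; return this; }
        Builder verifyChecksum(boolean v) { this.verifyChecksum = v; return this; }

        BlockReaderParams build() { return new BlockReaderParams(this); }
    }
}
```

Adding a new parameter then means one new builder method with a default, instead of editing every caller of every overload.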