Re: [SPAM] [jira] [Updated] (HDFS-4825) webhdfs / httpfs tests broken because of min block size change
hi all, how can I unsubscribe from this mailing list? Thanks in advance.

justin

On 05/26/2013 10:30 AM, Suresh Srinivas (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HDFS-4825:
----------------------------------
        Resolution: Fixed
     Fix Version/s: 2.0.5-beta
      Hadoop Flags: Reviewed
            Status: Resolved  (was: Patch Available)

I have committed the patch to trunk and branch-2. Thank you Andrew!

webhdfs / httpfs tests broken because of min block size change
--------------------------------------------------------------

                 Key: HDFS-4825
                 URL: https://issues.apache.org/jira/browse/HDFS-4825
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: webhdfs
    Affects Versions: 3.0.0, 2.0.5-beta
            Reporter: Andrew Wang
            Assignee: Andrew Wang
             Fix For: 2.0.5-beta
         Attachments: hdfs-4825-1.patch

As reported by Suresh on HDFS-4305, some of the webhdfs tests were broken by the min block size change.

{noformat}
Running org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.436 sec

Results :

Tests in error:
  testOperation[4](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperationDoAs[4](org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperation[4](org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
  testOperationDoAs[4](org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 1024 < 1048576(..)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
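The errors above come from clients asking for a 1024-byte block while the NameNode's configured minimum is 1 MB (1048576 bytes). As a hedged illustration of the knob involved (the property name is taken from the error message above; the value and the idea of lowering it in a test cluster's hdfs-site.xml are assumptions, not part of the committed patch):

```xml
<!-- hdfs-site.xml fragment (illustrative only): relax the NameNode's
     minimum block size so test code that creates files with tiny
     blocks (e.g. 1024 bytes) is not rejected. Not for production. -->
<property>
  <name>dfs.namenode.fs-limits.min-block-size</name>
  <value>1024</value>
</property>
```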
Not subscribed to hdfs-issues@hadoop.apache.org
On 05/24/2013 02:57 PM, Jing Zhao (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-4846:
----------------------------
    Labels: snapshot  (was: snapshot snapshots)
    Status: Patch Available  (was: Open)

Snapshot CLI commands output stacktrace for invalid arguments
-------------------------------------------------------------

                 Key: HDFS-4846
                 URL: https://issues.apache.org/jira/browse/HDFS-4846
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.0.0
            Reporter: Stephen Chu
            Assignee: Jing Zhao
            Priority: Minor
              Labels: snapshot
         Attachments: HDFS-4846.001.patch, HDFS-4846.002.patch

It'd be useful to clean up the stack traces output by the snapshot CLI commands when the commands are used incorrectly. This will make things more readable for operators and hopefully prevent confusion.

Allowing a snapshot on a directory that doesn't exist:

{code}
schu-mbp:~ schu$ hdfs dfsadmin -allowSnapshot adfasdf
2013-05-23 15:46:46.052 java[24580:1203] Unable to load realm info from SCDynamicStore
2013-05-23 15:46:46,066 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
allowSnapshot: Directory does not exist: /user/schu/adfasdf
	at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.valueOf(INodeDirectory.java:52)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.setSnapshottable(SnapshotManager.java:106)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allowSnapshot(FSNamesystem.java:5861)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.allowSnapshot(NameNodeRpcServer.java:1121)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.allowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:932)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
schu-mbp:~ schu$
{code}

Disallowing a snapshot on a directory that isn't snapshottable:

{code}
schu-mbp:~ schu$ hdfs dfsadmin -disallowSnapshot /user
2013-05-23 15:49:07.251 java[24687:1203] Unable to load realm info from SCDynamicStore
2013-05-23 15:49:07,265 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
disallowSnapshot: Directory is not a snapshottable directory: /user
	at org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.valueOf(INodeDirectorySnapshottable.java:68)
	at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.resetSnapshottable(SnapshotManager.java:151)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.disallowSnapshot(FSNamesystem.java:5889)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.disallowSnapshot(NameNodeRpcServer.java:1128)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.disallowSnapshot(ClientNamenodeProtocolServerSideTranslatorPB.java:943)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48089)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
{code}

Snapshot diffs with non-existent snapshot paths:

{code}
chu-mbp:~ schu$ hdfs snapshotDiff / gibberish1 gibberish2
2013-05-23 15:53:32.986 java[24877:1203] Unable to load realm info from SCDynamicStore
201
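The cleanup HDFS-4846 asks for boils down to catching the exception at the CLI layer and printing only its message, rather than the full server-side stack trace. A minimal sketch of that pattern in plain Java (class and method names here are hypothetical stand-ins, not the actual DFSAdmin code):

```java
// Hypothetical sketch of the kind of fix HDFS-4846 asks for: report CLI
// errors as a one-line "command: message" instead of a stack trace.
public class CliErrorDemo {
    // Simulates a command handler failing the way allowSnapshot does.
    static void allowSnapshot(String path) throws Exception {
        throw new java.io.FileNotFoundException(
            "Directory does not exist: " + path);
    }

    // Runs a command, converting any exception into a short,
    // operator-friendly line instead of calling printStackTrace().
    static String run(String commandName, String path) {
        try {
            allowSnapshot(path);
            return commandName + ": ok";
        } catch (Exception e) {
            // Only the message reaches the operator.
            return commandName + ": " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(run("allowSnapshot", "/user/schu/adfasdf"));
    }
}
```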
Re: [SPAM] [jira] [Updated] (HDFS-4840) ReplicationMonitor gets NPE during shutdown
On 05/24/2013 01:09 AM, Kihwal Lee (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-4840:
-----------------------------
    Attachment: HDFS-4840.patch.txt

ReplicationMonitor gets NPE during shutdown
-------------------------------------------

                 Key: HDFS-4840
                 URL: https://issues.apache.org/jira/browse/HDFS-4840
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 3.0.0
            Reporter: Kihwal Lee
         Attachments: HDFS-4840.patch.txt

TestBlocksWithNotEnoughRacks occasionally fails during test teardown because ReplicationMonitor gets an NPE. Seen at https://builds.apache.org/job/Hadoop-Hdfs-trunk/1406/.

hi all, please tell me how to unsubscribe from this mailing list? thanks in advance.
[jira] [Commented] (HDFS-3627) OfflineImageViewer oiv Indented processor prints out the Java class name in the DELEGATION_KEY field
[ https://issues.apache.org/jira/browse/HDFS-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13570835#comment-13570835 ]

gschen commented on HDFS-3627:
------------------------------

hi all, how can I unsubscribe from this mailing list?

> OfflineImageViewer oiv Indented processor prints out the Java class name in
> the DELEGATION_KEY field
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-3627
>                 URL: https://issues.apache.org/jira/browse/HDFS-3627
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.23.0
>            Reporter: Ravi Prakash
>            Priority: Minor
>         Attachments: HDFS-3627.patch, HDFS-3627.patch
>
> Instead of the contents of the delegation key, this is printed out:
> DELEGATION_KEY = org.apache.hadoop.security.token.delegation.DelegationKey@1e2ca7
> DELEGATION_KEY = org.apache.hadoop.security.token.delegation.DelegationKey@105bd58
> DELEGATION_KEY = org.apache.hadoop.security.token.delegation.DelegationKey@1d1e730
> DELEGATION_KEY = org.apache.hadoop.security.token.delegation.DelegationKey@1a116c9
> DELEGATION_KEY = org.apache.hadoop.security.token.delegation.DelegationKey@df1832
[jira] [Commented] (HDFS-4424) fsdataset Mkdirs failed cause nullpointexception and other bad consequence
[ https://issues.apache.org/jira/browse/HDFS-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558586#comment-13558586 ]

gschen commented on HDFS-4424:
------------------------------

how can I unsubscribe from this mailing list? thank you.

> fsdataset Mkdirs failed cause nullpointexception and other bad consequence
> --------------------------------------------------------------------------
>
>                 Key: HDFS-4424
>                 URL: https://issues.apache.org/jira/browse/HDFS-4424
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 1.0.1
>            Reporter: Li Junjun
>
> File: /hadoop-1.0.1/hdfs/org/apache/hadoop/hdfs/server/datanode/FSDataset.java, from line 205:
>
>     if (children == null || children.length == 0) {
>       children = new FSDir[maxBlocksPerDir];
>       for (int idx = 0; idx < maxBlocksPerDir; idx++) {
>         children[idx] = new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx));
>       }
>     }
>
> If the FSDir constructor fails (e.g. disk space is full, so mkdir fails), the children array is still used!
> Then when a write comes (after I ran the balancer) and an FSDir is chosen, line 192:
>
>     File file = children[idx].addBlock(b, src, false, resetIdx);
>
> causes exceptions like this:
>
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSDir.addBlock(FSDataset.java:158)
>
> It should instead be something like this:
>
>     if (children == null || children.length == 0) {
>       List<FSDir> childrenList = new ArrayList<FSDir>();
>       for (int idx = 0; idx < maxBlocksPerDir; idx++) {
>         try {
>           childrenList.add(new FSDir(new File(dir, DataStorage.BLOCK_SUBDIR_PREFIX + idx)));
>         } catch (Exception e) {
>           // skip the subdirectory that could not be created
>         }
>       }
>       children = childrenList.toArray(new FSDir[0]);
>     }
>
> Bad consequence: in my cluster, this datanode's block count dropped to 0.
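The suggested fix above collects only the children whose constructors succeeded and converts to an array once, after the loop, so a single failed mkdir cannot leave null slots for later callers to trip over. A self-contained sketch of that fail-safe construction pattern in plain Java (Dir is a hypothetical stand-in for FSDir):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fail-safe construction pattern suggested in HDFS-4424:
// keep only the children whose constructors succeeded, so the resulting
// array contains no null entries.
public class FailSafeChildren {
    // Stand-in for FSDir: construction can fail (e.g. mkdir on a full disk).
    static class Dir {
        final String name;
        Dir(String name, boolean mkdirFails) {
            if (mkdirFails) {
                throw new IllegalStateException("Mkdirs failed: " + name);
            }
            this.name = name;
        }
    }

    static Dir[] buildChildren(int maxBlocksPerDir, int failAt) {
        List<Dir> ok = new ArrayList<>();
        for (int idx = 0; idx < maxBlocksPerDir; idx++) {
            try {
                ok.add(new Dir("subdir" + idx, idx == failAt));
            } catch (IllegalStateException e) {
                // Skip the directory that could not be created.
            }
        }
        // Convert once, after the loop (the original suggestion called
        // toArray() inside the loop, which is wasteful).
        return ok.toArray(new Dir[0]);
    }

    public static void main(String[] args) {
        // 64 children with one failing constructor: 63 usable, no nulls.
        System.out.println(buildChildren(64, 10).length);
    }
}
```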
[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes
[ https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13542872#comment-13542872 ]

gschen commented on HDFS-4353:
------------------------------

hi, how can I unsubscribe from this mailing list? thank you very much.

justin

> Encapsulate connections to peers in Peer and PeerServer classes
> ---------------------------------------------------------------
>
>                 Key: HDFS-4353
>                 URL: https://issues.apache.org/jira/browse/HDFS-4353
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, hdfs-client
>    Affects Versions: 2.0.3-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: 02-cumulative.patch
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} classes. Since many Java classes may be involved with these connections, it makes sense to create a container for them. For example, a connection to a peer may have an input stream, output stream, ReadableByteChannel, encrypted output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use {{instanceof}} to manipulate socket and stream states based on the runtime type. It also paves the way to introduce UNIX domain sockets, which don't inherit from {{java.net.Socket}}.
[jira] [Commented] (HDFS-4337) Backport HDFS-4240 to branch-1: Make sure nodes are avoided to place replica if some replica are already under the same nodegroup.
[ https://issues.apache.org/jira/browse/HDFS-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13542114#comment-13542114 ]

gschen commented on HDFS-4337:
------------------------------

> Backport HDFS-4240 to branch-1: Make sure nodes are avoided to place replica
> if some replica are already under the same nodegroup.
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-4337
>                 URL: https://issues.apache.org/jira/browse/HDFS-4337
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.2.0
>            Reporter: Junping Du
>            Assignee: meng gong
>              Labels: patch
>             Fix For: 1.2.0, 1-win
>         Attachments: HDFS-4337-v2.patch, HDFS-4337-v3.patch
>
> Update affects version from 1.0.0 to 1.2.0.
[jira] [Commented] (HDFS-4019) FSShell should support creating symlinks
[ https://issues.apache.org/jira/browse/HDFS-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13472008#comment-13472008 ]

gschen commented on HDFS-4019:
------------------------------

how can I unsubscribe from this list? thank you!

> FSShell should support creating symlinks
> ----------------------------------------
>
>                 Key: HDFS-4019
>                 URL: https://issues.apache.org/jira/browse/HDFS-4019
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>    Affects Versions: 2.0.3-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Andy Isaacson
>            Priority: Minor
>
> FSShell should support creating symlinks. This would allow users to create symlinks from the shell without having to write a Java program.
> One thing that makes this complicated is that FSShell currently uses FileSystem internally, and symlinks are currently only supported by the FileContext API. So either FSShell would have to be ported to FileContext, or symlinks would have to be added to FileSystem. Or perhaps we could open a FileContext only when symlinks were necessary, but that seems messy.
Re: [jira] [Assigned] (HDFS-4019) FSShell should support creating symlinks
On 10/9/2012 8:16 AM, Andy Isaacson (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andy Isaacson reassigned HDFS-4019:
-----------------------------------
    Assignee: Andy Isaacson

FSShell should support creating symlinks
----------------------------------------

                 Key: HDFS-4019
                 URL: https://issues.apache.org/jira/browse/HDFS-4019
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: tools
    Affects Versions: 2.0.3-alpha
            Reporter: Colin Patrick McCabe
            Assignee: Andy Isaacson
            Priority: Minor

FSShell should support creating symlinks. This would allow users to create symlinks from the shell without having to write a Java program.

One thing that makes this complicated is that FSShell currently uses FileSystem internally, and symlinks are currently only supported by the FileContext API. So either FSShell would have to be ported to FileContext, or symlinks would have to be added to FileSystem. Or perhaps we could open a FileContext only when symlinks were necessary, but that seems messy.

how can I unsubscribe from this list? thank you!
[jira] [Commented] (HDFS-4016) back-port HDFS-3582 to branch-0.23
[ https://issues.apache.org/jira/browse/HDFS-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471521#comment-13471521 ]

gschen commented on HDFS-4016:
------------------------------

please tell me how to unsubscribe. thank you.

> back-port HDFS-3582 to branch-0.23
> ----------------------------------
>
>                 Key: HDFS-4016
>                 URL: https://issues.apache.org/jira/browse/HDFS-4016
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ivan A. Veselovsky
>            Assignee: Ivan A. Veselovsky
>            Priority: Minor
>         Attachments: HDFS-4016-branch-0.23.patch
>
> We suggest a patch that back-ports the change https://issues.apache.org/jira/browse/HDFS-3582 to branch 0.23.
Re: [jira] [Commented] (HDFS-4016) back-port HDFS-3582 to branch-0.23
On 10/8/2012 5:54 PM, Hadoop QA (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13471475#comment-13471475 ]

Hadoop QA commented on HDFS-4016:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12548219/HDFS-4016-branch-0.23.patch
against trunk revision .

{color:red}-1 patch{color}. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3289//console

This message is automatically generated.

back-port HDFS-3582 to branch-0.23
----------------------------------

                 Key: HDFS-4016
                 URL: https://issues.apache.org/jira/browse/HDFS-4016
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ivan A. Veselovsky
            Assignee: Ivan A. Veselovsky
            Priority: Minor
         Attachments: HDFS-4016-branch-0.23.patch

We suggest a patch that back-ports the change https://issues.apache.org/jira/browse/HDFS-3582 to branch 0.23.

please tell me how to unsubscribe. thank you.
[jira] [Commented] (HDFS-2127) Add a test that ensure AccessControlExceptions contain a full path
[ https://issues.apache.org/jira/browse/HDFS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469132#comment-13469132 ]

gschen commented on HDFS-2127:
------------------------------

unsubscribe

> Add a test that ensure AccessControlExceptions contain a full path
> ------------------------------------------------------------------
>
>                 Key: HDFS-2127
>                 URL: https://issues.apache.org/jira/browse/HDFS-2127
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: name-node
>            Reporter: Eli Collins
>            Assignee: Stephen Chu
>              Labels: newbie
>             Fix For: 3.0.0
>         Attachments: HDFS-2127.patch, HDFS-2127.patch
>
> HDFS-1628 added full paths to AccessControlExceptions; we should have a test that covers the cases that were done manually in [this comment|https://issues.apache.org/jira/browse/HDFS-1628?focusedCommentId=12996135&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996135].
Re: [jira] [Commented] (HDFS-2127) Add a test that ensure AccessControlExceptions contain a full path
On 10/4/2012 10:21 AM, Hudson (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469093#comment-13469093 ]

Hudson commented on HDFS-2127:
------------------------------

Integrated in Hadoop-Hdfs-trunk-Commit #2871 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2871/])
HDFS-2127. Add a test that ensure AccessControlExceptions contain a full path. Contributed by Stephen Chu (Revision 1393878)

Result = SUCCESS

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393878
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java

Add a test that ensure AccessControlExceptions contain a full path
------------------------------------------------------------------

                 Key: HDFS-2127
                 URL: https://issues.apache.org/jira/browse/HDFS-2127
             Project: Hadoop HDFS
          Issue Type: Test
          Components: name-node
            Reporter: Eli Collins
            Assignee: Stephen Chu
              Labels: newbie
             Fix For: 3.0.0
         Attachments: HDFS-2127.patch, HDFS-2127.patch

HDFS-1628 added full paths to AccessControlExceptions; we should have a test that covers the cases that were done manually in [this comment|https://issues.apache.org/jira/browse/HDFS-1628?focusedCommentId=12996135&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996135].

unsubscribe
[jira] [Commented] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out
[ https://issues.apache.org/jira/browse/HDFS-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468396#comment-13468396 ]

gschen commented on HDFS-4001:
------------------------------

unsubscribe

> TestSafeMode#testInitializeReplQueuesEarly may time out
> -------------------------------------------------------
>
>                 Key: HDFS-4001
>                 URL: https://issues.apache.org/jira/browse/HDFS-4001
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Eli Collins
>         Attachments: timeout.txt.gz
>
> Saw this failure on a recent branch-2 jenkins run; it has also been seen on trunk.
> {noformat}
> java.util.concurrent.TimeoutException: Timed out waiting for condition
>         at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
>         at org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
> {noformat}
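The TimeoutException in the trace comes from a poll-until-deadline helper: the test repeatedly checks a condition and gives up after a fixed wait. A generic sketch of that polling pattern in plain Java (this is an illustration of the idea, not Hadoop's actual GenericTestUtils source):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Generic sketch of the poll-with-timeout pattern behind helpers like
// GenericTestUtils.waitFor: re-check a condition every checkEveryMillis
// until it holds or waitForMillis elapses.
public class WaitFor {
    static void waitFor(BooleanSupplier check, long checkEveryMillis,
                        long waitForMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                // This is the failure mode seen in the test above.
                throw new TimeoutException("Timed out waiting for condition");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 50 ms; well inside the limit.
        waitFor(() -> System.currentTimeMillis() - start >= 50, 10, 1000);
        System.out.println("condition met");
    }
}
```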
Re: [jira] [Updated] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out
On 10/3/2012 2:58 PM, Eli Collins (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins updated HDFS-4001:
------------------------------
    Attachment: timeout.txt.gz

Full test log attached.

TestSafeMode#testInitializeReplQueuesEarly may time out
-------------------------------------------------------

                 Key: HDFS-4001
                 URL: https://issues.apache.org/jira/browse/HDFS-4001
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.0.0-alpha
            Reporter: Eli Collins
         Attachments: timeout.txt.gz

Saw this failure on a recent branch-2 jenkins run; it has also been seen on trunk.

{noformat}
java.util.concurrent.TimeoutException: Timed out waiting for condition
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
        at org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
{noformat}

unsubscribe
Re: [jira] [Created] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out
On 10/3/2012 2:56 PM, Eli Collins (JIRA) wrote:

Eli Collins created HDFS-4001:
------------------------------

Summary: TestSafeMode#testInitializeReplQueuesEarly may time out

                 Key: HDFS-4001
                 URL: https://issues.apache.org/jira/browse/HDFS-4001
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.0.0-alpha
            Reporter: Eli Collins

Saw this failure on a recent branch-2 jenkins run; it has also been seen on trunk.

{noformat}
java.util.concurrent.TimeoutException: Timed out waiting for condition
        at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
        at org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
{noformat}

unsubscribe
Re: [jira] [Commented] (HDFS-3373) FileContext HDFS implementation can leak socket caches
On 9/27/2012 10:00 PM, Hudson (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464749#comment-13464749 ]

Hudson commented on HDFS-3373:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1209 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1209/])
HDFS-3373. Change DFSClient input stream socket cache to global static and add a thread to cleanup expired cache entries. Contributed by John George (Revision 1390466)

Result = SUCCESS

szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390466
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java

FileContext HDFS implementation can leak socket caches
------------------------------------------------------

                 Key: HDFS-3373
                 URL: https://issues.apache.org/jira/browse/HDFS-3373
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
    Affects Versions: 2.0.0-alpha, 3.0.0
            Reporter: Todd Lipcon
            Assignee: John George
             Fix For: 2.0.3-alpha
         Attachments: HDFS-3373.branch-23.patch, HDFS-3373.trunk.patch, HDFS-3373.trunk.patch.1, HDFS-3373.trunk.patch.2, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.4

As noted by Nicholas in HDFS-3359, FileContext doesn't have a close() method, and thus never calls DFSClient.close(). This means that, until finalizers run, DFSClient will hold on to its SocketCache object and potentially have a lot of outstanding sockets/fds held on to.

unsubscribe
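The committed change makes the socket cache a shared static with a background thread that evicts expired entries, so leaked connections are bounded even when close() is never called. A generic sketch of an expiring cache with an eviction pass in plain Java (this illustrates the idea only; it is not the actual SocketCache code, and the real fix runs the sweep on a daemon thread, whereas here evictExpired() is called explicitly to keep the sketch deterministic):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic sketch of the HDFS-3373 approach: cache entries carry an
// expiry time, and a periodic sweep drops the expired ones.
public class ExpiringCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        entries.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Expired entries are treated as absent even before the sweep runs.
    public V get(K key) {
        Entry<V> e = entries.get(key);
        if (e == null || e.expiresAtMillis < System.currentTimeMillis()) {
            return null;
        }
        return e.value;
    }

    // One pass of the cleanup loop: drop entries past their expiry
    // (in the real fix, this is where cached sockets would be closed).
    public void evictExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<K, Entry<V>>> it = entries.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().expiresAtMillis < now) {
                it.remove();
            }
        }
    }

    public int size() { return entries.size(); }

    public static void main(String[] args) {
        ExpiringCache<String, Integer> sockets = new ExpiringCache<>(60_000);
        sockets.put("dn1:50010", 42); // hypothetical datanode address -> fd
        System.out.println(sockets.get("dn1:50010"));
    }
}
```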
Re: [jira] [Commented] (HDFS-3963) backport namenode/datanode serviceplugin to branch-1
On 9/21/2012 9:12 AM, Suresh Srinivas (JIRA) wrote:

[ https://issues.apache.org/jira/browse/HDFS-3963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460111#comment-13460111 ]

Suresh Srinivas commented on HDFS-3963:
---------------------------------------

+1 for the patch.

backport namenode/datanode serviceplugin to branch-1
----------------------------------------------------

                 Key: HDFS-3963
                 URL: https://issues.apache.org/jira/browse/HDFS-3963
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 1.2.0
            Reporter: Brandon Li
            Assignee: Brandon Li
         Attachments: HDFS-3963.branch-1.patch

backport namenode/datanode serviceplugin to branch-1

unsubscribe