[jira] [Commented] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
[ https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173909#comment-16173909 ] Hadoop QA commented on HDFS-12038:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 14m 53s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
|| || || || HDFS-7240 Compile Tests ||
| 0 | mvndep | 0m 44s | Maven dependency ordering for branch |
| -1 | mvninstall | 15m 4s | root in HDFS-7240 failed. |
| +1 | compile | 1m 39s | HDFS-7240 passed |
| +1 | checkstyle | 0m 47s | HDFS-7240 passed |
| +1 | mvnsite | 1m 40s | HDFS-7240 passed |
| -1 | findbugs | 1m 49s | hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warnings. |
| -1 | javadoc | 0m 25s | hadoop-hdfs-client in HDFS-7240 failed. |
| -1 | javadoc | 0m 42s | hadoop-hdfs in HDFS-7240 failed. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 33s | the patch passed |
| +1 | compile | 1m 37s | the patch passed |
| +1 | cc | 1m 37s | the patch passed |
| +1 | javac | 1m 37s | the patch passed |
| -0 | checkstyle | 0m 41s | hadoop-hdfs-project: The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) |
| +1 | mvnsite | 1m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 54s | the patch passed |
| -1 | javadoc | 0m 22s | hadoop-hdfs-client in the patch failed. |
| -1 | javadoc | 0m 40s | hadoop-hdfs in the patch failed. |
|| || || || Other Tests ||
| +1 | unit | 1m 39s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 101m 18s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 155m 8s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.namenode.TestReencryption |
| | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12038 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888110/HDFS-12038-HDFS-7240.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc |
| uname | Linux 3c901ce0ff4a
[jira] [Updated] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12496:
Attachment: HDFS-12496.04.patch

> Make QuorumJournalManager timeout properties configurable
> ---------------------------------------------------------
> Key: HDFS-12496
> URL: https://issues.apache.org/jira/browse/HDFS-12496
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, HDFS-12496.03.patch, HDFS-12496.04.patch
>
> Make QuorumJournalManager timeout properties configurable using a common key.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
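The "common key" idea above can be sketched with plain JDK types: each specific quorum timeout falls back to one shared key when it is not set explicitly. This is only an illustration of the fallback pattern, not the patch itself; the common key name `dfs.qjournal.common.timeout.ms` and the method names here are hypothetical placeholders, and a plain `Map` stands in for Hadoop's `Configuration`.

```java
import java.util.HashMap;
import java.util.Map;

public class TimeoutConfig {
    // Hypothetical common key; the actual key the patch adds may differ.
    static final String COMMON_KEY = "dfs.qjournal.common.timeout.ms";

    // Resolve a specific timeout key, falling back first to the common
    // key and then to a hard-coded default.
    static int getTimeout(Map<String, String> conf, String specificKey,
                          int hardDefault) {
        String v = conf.get(specificKey);
        if (v == null) {
            v = conf.get(COMMON_KEY);  // fall back to the common key
        }
        return v == null ? hardDefault : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(COMMON_KEY, "40000");
        // Specific key unset -> resolved from the common key: prints 40000.
        System.out.println(
            getTimeout(conf, "dfs.qjournal.write-txns.timeout.ms", 20000));
    }
}
```

With this shape, an operator who only cares about one global quorum timeout sets a single property, while individual timeouts can still be overridden per key.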
[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist
[ https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173887#comment-16173887 ] Hanisha Koneru commented on HDFS-12486:

Thanks for the update, [~bharatviswa].
bq. The below code snippet can be optimized using URI. The code in [Util#getAddressesList] does the same thing as below using URIs.
I meant we can take the following code from _Util#getAddressesList_; we don't need to create the InetSocketAddresses here (sorry, I was not clear). This way we can also make sure that the format of the URI is correct.
{code}
String authority = uri.getAuthority();
Preconditions.checkArgument(authority != null && !authority.isEmpty(),
    "URI has no authority: " + uri);
String[] parts = StringUtils.split(authority, ';');
for (int i = 0; i < parts.length; i++) {
  parts[i] = parts[i].trim();
}
{code}
- Not sure if we should check for the scheme to be _qjournal_ in _DFSUtil#getJournalNodeAddresses()_. This check is performed during _FSEditLog#initJournals_ anyway.
- In TestGetConf, for the _shared.edits.dir_ value, we should have a journal node address/hostname different from the namenodeIds. It is confusing if the namenodeIds (nn0 and nn1) and the journal node hostnames are the same.
- A nitpick: you can assign the journal URI scheme and authority to a variable and append the journalId as and when required.
{code}
String journalsBaseUri = "qjournal://node1:9820;node2:9820;node3:9820";
. . .
conf.set(DFS_NAMENODE_SHARED_EDITS_DIR_KEY + ".ns1", journalsBaseUri + "/ns1");
{code}
- Can you please rename _DFSUtil#journal_ to represent its intended function (for example, journalsUri)?

> GetConf to get journalnodeslist
> -------------------------------
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, HDFS-12486.03.patch, HDFS-12486.04.patch
>
> GetConf command to list journal nodes.
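The authority-splitting approach quoted in the comment above can be tried standalone with only the JDK: `java.net.URI` keeps a semicolon-separated `qjournal` authority intact, so the journal node list falls out of a split and trim. This sketch replaces Guava's `Preconditions` and Hadoop's `StringUtils` with plain Java; the class and method names are illustrative, not Hadoop's.

```java
import java.net.URI;

public class JournalNodeAddresses {
    // Split a qjournal-style URI authority ("host1:port;host2:port;...")
    // into its individual journal node addresses.
    static String[] splitAuthority(URI uri) {
        String authority = uri.getAuthority();
        if (authority == null || authority.isEmpty()) {
            throw new IllegalArgumentException("URI has no authority: " + uri);
        }
        String[] parts = authority.split(";");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }

    public static void main(String[] args) {
        // Prints node1:9820, node2:9820, node3:9820 on separate lines.
        for (String p : splitAuthority(
                URI.create("qjournal://node1:9820;node2:9820;node3:9820/ns1"))) {
            System.out.println(p);
        }
    }
}
```

A side benefit of going through `URI`, as the comment notes, is that a malformed string fails `URI` parsing up front instead of producing a garbage address list.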
[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173873#comment-16173873 ] Arpit Agarwal commented on HDFS-12499:

Thanks for reporting this [~bharatviswa]. {{dfs.namenode.shared.edits.dir}} should probably have been a nameservice-specific key. I am not sure whether there is any valid situation in which NN-specific values for shared.edits.dir will be necessary.

> dfs.namenode.shared.edits.dir property is currently namenode specific key
> -------------------------------------------------------------------------
> Key: HDFS-12499
> URL: https://issues.apache.org/jira/browse/HDFS-12499
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Labels: Incompatible
> Attachments: HDFS-12499.01.patch, HDFS-12499.02.patch
>
> HDFS + Federation cluster + QJM
> The dfs.namenode.shared.edits.dir property can be set as:
> 1. dfs.namenode.shared.edits.dir.<nameserviceId>
> 2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>
> Both ways of configuring are currently supported. Option 2 should not be supported, because for a particular nameservice the quorum of journal nodes should be the same. If option 2 is supported, then for a nameservice that has two namenodes, users can configure different values for the journal nodes, which is incorrect.
> Example:
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns1</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns2</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns1.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns1</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn1</name>
>   <value>qjournal://mycluster-node-1:8485;mycluster-node-2:8485;mycluster-node-3:8485/ns2</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir.ns2.nn2</name>
>   <value>qjournal://mycluster-node-3:8485;mycluster-node-4:8485;mycluster-node-5:8485/ns2</value>
> </property>
> This jira is to discuss whether we need to support the 2nd way of configuring or remove it.
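The difference between a nameservice-specific and a namenode-specific key comes down to suffix lookup order. The following sketch simulates (rather than calls) Hadoop's most-specific-suffix-first resolution with a plain `Map`; the `resolve` helper is illustrative, not the actual `DFSUtil` API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SuffixedKeys {
    // Try the most specific suffix first: key.<nsId>.<nnId> (option 2),
    // then key.<nsId> (option 1), then the bare key.
    static String resolve(Map<String, String> conf, String base,
                          String nsId, String nnId) {
        String[] candidates = {
            base + "." + nsId + "." + nnId,  // namenode-specific
            base + "." + nsId,               // nameservice-specific
            base                             // unsuffixed default
        };
        for (String k : candidates) {
            String v = conf.get(k);
            if (v != null) {
                return v;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        // Only the nameservice-level key is set, so both nn1 and nn2 of
        // ns1 resolve to the same journal quorum, as the JIRA argues.
        conf.set: conf.put("dfs.namenode.shared.edits.dir.ns1",
                 "qjournal://n1:8485;n2:8485;n3:8485/ns1");
        System.out.println(resolve(conf, "dfs.namenode.shared.edits.dir",
                                   "ns1", "nn1"));
    }
}
```

Under this order, allowing option 2 means a namenode-specific entry silently shadows the nameservice-wide one, which is exactly how two namenodes of one nameservice can end up pointed at different quorums.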
[jira] [Updated] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-12502:
Status: Patch Available (was: Open)

> nntop should support a category based on FilesInGetListingOps
> -------------------------------------------------------------
> Key: HDFS-12502
> URL: https://issues.apache.org/jira/browse/HDFS-12502
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: metrics
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-12502.00.patch
>
> Large listing ops can oftentimes be the main contributor to NameNode slowness. The aggregate cost of listing ops is proportional to the {{FilesInGetListingOps}} rather than the number of listing ops. Therefore it'd be very useful for nntop to support this category.
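The proposal amounts to weighting each listing op by the number of files it returned rather than counting it as 1. A toy sketch of such a counter follows; the real nntop keeps rolling-window top-N metrics, and the class and method names here are hypothetical, not the nntop API.

```java
import java.util.HashMap;
import java.util.Map;

public class ListingCostTop {
    // Charge each user the number of files returned by getListing,
    // not one unit per call. Running totals only; real nntop uses
    // time-windowed metrics.
    private final Map<String, Long> filesInGetListingOps = new HashMap<>();

    void recordListing(String user, int filesReturned) {
        filesInGetListingOps.merge(user, (long) filesReturned, Long::sum);
    }

    long costOf(String user) {
        return filesInGetListingOps.getOrDefault(user, 0L);
    }

    public static void main(String[] args) {
        ListingCostTop top = new ListingCostTop();
        top.recordListing("alice", 100_000);  // one huge directory listing
        top.recordListing("bob", 1);          // a tiny listing
        // alice dominates even though both users issued one op each.
        System.out.println(top.costOf("alice") > top.costOf("bob"));
    }
}
```

Counting ops alone would rank alice and bob equally here, which is why an op-count category can miss the actual source of NameNode load.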
[jira] [Assigned] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang reassigned HDFS-12502:
Assignee: Zhe Zhang
[jira] [Updated] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HDFS-12502:
Attachment: HDFS-12502.00.patch

Initial patch attached.
[jira] [Resolved] (HDFS-9126) namenode crash in fsimage download/transfer
[ https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal resolved HDFS-9126.
Resolution: Invalid

Hard to say what was causing this issue. The user/dev list is a better place to bring this up.

> namenode crash in fsimage download/transfer
> -------------------------------------------
> Key: HDFS-9126
> URL: https://issues.apache.org/jira/browse/HDFS-9126
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.6.0
> Environment: OS: CentOS 6.5 (final)
> Apache Hadoop 2.6.0
> namenode HA based on 5 journalnodes
> Reporter: zengyongping
> Priority: Critical
>
> In our production Hadoop cluster, when the active namenode begins to download/transfer the fsimage from the standby namenode, the zkfc health monitor of the NameNode sometimes hits a socket timeout. zkfc then judges the active namenode status to be SERVICE_NOT_RESPONDING, a namenode HA failover happens, and the old active namenode is fenced.
> zkfc logs:
> 2015-09-24 11:44:44,739 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at hostname1/192.168.10.11:8020: Call From hostname1/192.168.10.11 to hostname1:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.10.11:22614 remote=hostname1/192.168.10.11:8020]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.HealthMonitor: Entering state SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Local service NameNode at hostname1/192.168.10.11:8020 entered state: SERVICE_NOT_RESPONDING
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ZKFailoverController: Quitting master election for NameNode at hostname1/192.168.10.11:8020 and marking that fencing is necessary
> 2015-09-24 11:44:44,740 INFO org.apache.hadoop.ha.ActiveStandbyElector: Yielding from election
> 2015-09-24 11:44:44,761 INFO org.apache.zookeeper.ZooKeeper: Session: 0x54d81348fe503e3 closed
> 2015-09-24 11:44:44,761 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x54d81348fe503e3
> 2015-09-24 11:44:44,764 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
> namenode logs:
> 2015-09-24 11:43:34,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.10.12
> 2015-09-24 11:43:34,074 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
> 2015-09-24 11:43:34,075 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 2317430129
> 2015-09-24 11:43:34,253 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 272988 Total time for transactions(ms): 5502 Number of transactions batched in Syncs: 146274 Number of syncs: 32375 SyncTimes(ms): 274465 319599
> 2015-09-24 11:43:46,005 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 3 milliseconds
> 2015-09-24 11:44:21,054 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: PendingReplicationMonitor timed out blk_1185804191_112164210
> 2015-09-24 11:44:36,076 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /software/data/hadoop-data/hdfs/namenode/current/edits_inprogress_02317430129 -> /software/data/hadoop-data/hdfs/namenode/current/edits_02317430129-02317703116
> 2015-09-24 11:44:36,077 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 2317703117
> 2015-09-24 11:45:38,008 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 1 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 61585
> 2015-09-24 11:45:38,009 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 222.88s at 63510.29 KB/s
> 2015-09-24 11:45:38,009 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_02317430128 size 14495092105 bytes.
> 2015-09-24 11:45:38,416 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 192.168.10.13:8485 failed to write txns 2317703117-2317703117. Will try to write to this JN again after the next log roll.
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 44 is less than the last promised epoch 45
> at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
> at
[jira] [Closed] (HDFS-9126) namenode crash in fsimage download/transfer
[ https://issues.apache.org/jira/browse/HDFS-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal closed HDFS-9126.
[jira] [Commented] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173815#comment-16173815 ] Anu Engineer commented on HDFS-12515:

Just committed and pushed this change to HDFS-7240. Thanks to [~xyao] and [~ajayydv] for verifying that the patch solves the build break on local machines.

remote: hadoop git commit: HDFS-12515. Ozone: mvn package compilation fails on HDFS-7240. Contributed by Anu Engineer.
To https://git-wip-us.apache.org/repos/asf/hadoop.git
2a94ce9124c..244e7a5f65c HDFS-7240 -> HDFS-7240

Will resolve this JIRA in a day or two once we get some more Jenkins runs.

> Ozone: mvn package compilation fails on HDFS-7240
> -------------------------------------------------
> Key: HDFS-12515
> URL: https://issues.apache.org/jira/browse/HDFS-12515
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Priority: Blocker
> Labels: ozoneMerge
> Fix For: HDFS-7240
> Attachments: HDFS-12515-HDFS-7240.001.patch
>
> Creation of a package on ozone (HDFS-7240) fails:
> {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}}
> {code}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-mapreduce-examples: Compilation failure: Compilation failure:
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] cannot find symbol
> [ERROR] symbol: class Logger
> [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] cannot find symbol
> [ERROR] symbol: class Logger
> [ERROR] location: class org.apache.hadoop.examples.BaileyBorweinPlouffe
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] cannot find symbol
> [ERROR] symbol: class Logger
> [ERROR] location: class org.apache.hadoop.examples.DBCountPageView
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] cannot find symbol
> [ERROR] symbol: class Logger
> [ERROR] location: class org.apache.hadoop.examples.pi.DistSum
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24] cannot find symbol
> [ERROR] symbol: class Logger
> [ERROR] location: class org.apache.hadoop.examples.dancing.DancingLinks
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[40,17] package org.slf4j does not exist
> [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[50,24] cannot find symbol
> [ERROR] symbol: class
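Errors of the form "package org.slf4j does not exist" generally mean the failing module no longer sees slf4j-api on its compile classpath (often because a transitive dependency changed). One conventional fix is to declare the dependency explicitly in the module's pom.xml, as sketched below; this is only an illustration of that general remedy, and the actual committed patch may have resolved the break differently.

```xml
<!-- Hypothetical fragment for hadoop-mapreduce-examples/pom.xml:
     declare slf4j-api directly instead of relying on a transitive path.
     The version is normally inherited from dependencyManagement. -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
</dependency>
```

Declaring direct dependencies explicitly also keeps the build immune to upstream modules reshuffling their own dependency trees.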
[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12499:
Attachment: HDFS-12499.02.patch
[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12499:
Attachment: (was: HDFS-12499.02.patch)
[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12499:
--------------------------------------
    Status: In Progress  (was: Patch Available)
[jira] [Commented] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173786#comment-16173786 ] Bharat Viswanadham commented on HDFS-12499:
-------------------------------------------
Updated the patch to fix checkstyle issues. The test failures are not related to this patch. I ran tests locally, they are passing.
[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12499:
--------------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Updated] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key
[ https://issues.apache.org/jira/browse/HDFS-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12499:
--------------------------------------
    Attachment: HDFS-12499.02.patch
[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173762#comment-16173762 ] Ajay Kumar commented on HDFS-12496:
-----------------------------------
[~arpitagarwal], thanks for the review. Uploaded a new patch with the changed property name and its description in hdfs-default.xml.

> Make QuorumJournalManager timeout properties configurable
> ---------------------------------------------------------
>
>                 Key: HDFS-12496
>                 URL: https://issues.apache.org/jira/browse/HDFS-12496
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ajay Kumar
>            Assignee: Ajay Kumar
>         Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch, HDFS-12496.03.patch
>
> Make QuorumJournalManager timeout properties configurable using a common key.
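As a sketch of what such a common timeout key could look like in hdfs-default.xml: the property name and default value below are illustrative assumptions, not necessarily what the final patch settles on.

```xml
<!-- Hypothetical common timeout key for QuorumJournalManager RPC
     operations (start-segment, journal, finalize, etc.). The name
     "dfs.qjm.operations.timeout" and the 60s default are assumptions
     for illustration only. -->
<property>
  <name>dfs.qjm.operations.timeout</name>
  <value>60s</value>
  <description>
    Common timeout applied to QuorumJournalManager operations against
    the journal node quorum, instead of configuring each operation's
    timeout separately.
  </description>
</property>
```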
[jira] [Updated] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12496:
------------------------------
    Attachment: HDFS-12496.03.patch
[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist
[ https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12486:
--------------------------------------
    Status: Patch Available  (was: In Progress)

> GetConf to get journalnodeslist
> -------------------------------
>
>                 Key: HDFS-12486
>                 URL: https://issues.apache.org/jira/browse/HDFS-12486
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>         Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, HDFS-12486.03.patch, HDFS-12486.04.patch
>
> GetConf command to list journal nodes.
[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist
[ https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12486:
--------------------------------------
    Status: In Progress  (was: Patch Available)
[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist
[ https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173737#comment-16173737 ] Bharat Viswanadham commented on HDFS-12486:
-------------------------------------------
[~hanishakoneru] Thanks for review. Updated the patch to address the comments.
[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist
[ https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12486:
--------------------------------------
    Attachment: HDFS-12486.04.patch
[jira] [Commented] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
[ https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173725#comment-16173725 ] Lokesh Jain commented on HDFS-12038:
------------------------------------
The patch also changes the ozone shell commands. The -user option is now used only by the createVolume command, to specify the owner of a volume. For the rest of the commands the -user option is inactive: they simply use the login user who executes the command in the shell if -root is not specified, and use "hdfs" if the -root option is specified.

> Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-12038
>                 URL: https://issues.apache.org/jira/browse/HDFS-12038
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Lokesh Jain
>              Labels: ozoneMerge
>         Attachments: HDFS-12038-HDFS-7240.001.patch
>
> Reproduce steps
> 1. Create a volume with a non-admin user:
> {code}
> hdfs oz -createVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei -root -quota 2TB
> {code}
> 2. Run the infoVolume command to get this volume's info:
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei
> Command Failed : {"httpCode":400,"shortMessage":"badAuthorization","resource":null,"message":"Missing authorization or authorization has to be unique.","requestID":"221efb47-72b9-498d-ac19-907257428573","hostName":"ozone1.fyre.ibm.com"}
> {noformat}
> Adding {{-root}} to run as the admin user bypasses this issue:
> {noformat}
> hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei -root
> {
>   "owner" : { "name" : "wwei" },
>   "quota" : { "unit" : "TB", "size" : 2 },
>   "volumeName" : "volume-wwei-0",
>   "createdOn" : null,
>   "createdBy" : "hdfs"
> }
> {noformat}
> Expected: both the volume owner and the admin should be able to run the infoVolume command.
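The user-resolution rule described in the comment above can be sketched in a few lines of plain Java. This is an illustration of the described behavior only, not the actual Ozone shell code; the class, method, and constant names are assumptions.

```java
// Illustrative sketch of how the effective user for a non-createVolume
// ozone shell command could be resolved under the rules described above:
// the shell login user by default, escalating to the admin identity
// "hdfs" when -root is passed. The -user flag plays no role here, since
// it is only honored by createVolume. Names are hypothetical.
public class EffectiveUserSketch {
    static final String ADMIN_USER = "hdfs"; // assumed admin identity

    static String effectiveUser(boolean rootFlag, String loginUser) {
        // -root escalates to the admin user; otherwise the caller's
        // identity is whoever is logged into the shell.
        return rootFlag ? ADMIN_USER : loginUser;
    }

    public static void main(String[] args) {
        System.out.println(effectiveUser(false, "wwei")); // wwei
        System.out.println(effectiveUser(true, "wwei"));  // hdfs
    }
}
```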
[jira] [Commented] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173723#comment-16173723 ] Hadoop QA commented on HDFS-12515: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m 3s{color} | {color:red} root in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s{color} | {color:red} hadoop-mapreduce-examples in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 21s{color} | {color:red} hadoop-mapreduce-examples in HDFS-7240 failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s{color} | {color:red} hadoop-mapreduce-project_hadoop-mapreduce-examples generated 1 new + 2 unchanged - 36 fixed = 3 total (was 38) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} hadoop-mapreduce-project_hadoop-mapreduce-examples generated 0 new + 143 unchanged - 17 fixed = 143 total (was 160) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888108/HDFS-12515-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 457fc223d614 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 2a94ce9 | | Default Java | 1.8.0_144 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/artifact/patchprocess/branch-mvninstall-root.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/artifact/patchprocess/branch-compile-hadoop-mapreduce-project_hadoop-mapreduce-examples.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/artifact/patchprocess/branch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-examples.txt | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/artifact/patchprocess/diff-compile-javac-hadoop-mapreduce-project_hadoop-mapreduce-examples.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/testReport/ | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-examples U: hadoop-mapreduce-project/hadoop-mapreduce-examples | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21247/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: mvn package compilation fails on HDFS-7240
[jira] [Updated] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
[ https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-12038:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173709#comment-16173709 ] Anu Engineer commented on HDFS-12515: - Even with this fix, shading is still failing .. {noformat} [ERROR] Found artifact with unexpected contents: '/Users/aengineer/codereview/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.1.0-SNAPSHOT.jar' Please check the following and either correct the build or update the allowed list with reasoning. org/apache/log4j/ org/apache/log4j/MDCFriend.class org/slf4j/ org/slf4j/impl/ org/slf4j/impl/Log4jLoggerAdapter.class org/slf4j/impl/Log4jLoggerFactory.class org/slf4j/impl/Log4jMDCAdapter.class org/slf4j/impl/StaticLoggerBinder.class org/slf4j/impl/StaticMarkerBinder.class org/slf4j/impl/StaticMDCBinder.class org/slf4j/impl/VersionUtil.class [INFO] Artifact looks correct: 'hadoop-client-api-3.1.0-SNAPSHOT.jar' {noformat} > Ozone: mvn package compilation fails on HDFS-7240 > - > > Key: HDFS-12515 > URL: https://issues.apache.org/jira/browse/HDFS-12515 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12515-HDFS-7240.001.patch > > > Creation of a package on ozone(HDFS-7240) fails > {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}} > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project hadoop-mapreduce-examples: Compilation failure: Compilation > failure: > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] > package org.slf4j does not exist > [ERROR] > 
/Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.BaileyBorweinPlouffe > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.DBCountPageView > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] > cannot find symbol 
> [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.pi.DistSum > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class > org.apache.hadoop.examples.dancing.DancingLinks > [ERROR] >
[jira] [Updated] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
[ https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-12038:
-------------------------------
    Attachment: HDFS-12038-HDFS-7240.001.patch

This patch fixes the issue referred to in the bug. The cause was as pointed out by [~cheersyang]: the current KSM API does not check the user information when a getVolume call is made. I have changed the signature of the KSM API to include the userName as well, so that a volume access check can be made for the user in the KSM. I also had to change the client code which calls the KSM getVolumeInfo API.
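The access rule the patch aims for ("both volume owner and admin should be able to run infoVolume") can be sketched as a small, self-contained check. This mirrors the idea of passing userName into the getVolumeInfo path; the class and member names below are illustrative, not the real KSM API.

```java
// Minimal sketch of the owner-or-admin access check described above.
// Once the caller's userName reaches the volume-info path, the check
// is a simple disjunction: admins pass unconditionally, everyone else
// must match the volume's owner. Names are hypothetical.
public class VolumeAccessSketch {
    private final String owner;

    public VolumeAccessSketch(String owner) {
        this.owner = owner;
    }

    // Both the volume owner and an admin may run infoVolume.
    public boolean canReadInfo(String userName, boolean isAdmin) {
        return isAdmin || owner.equals(userName);
    }
}
```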
[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation
[ https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173700#comment-16173700 ] Surendra Singh Lilhore commented on HDFS-11968:
-----------------------------------------------
Please handle some more comments:
1. No need to add new methods in {{BlockStoragePolicySpi}}. Remove these methods:
{code}
   /**
+  byte getId();
+
+  byte getRealId();
{code}
2. Please don't change any code which is not related to this patch, like:
{code}
-    AdminHelper.printUsage(false, "storagepolicies", COMMANDS);
+    AdminHelper
+        .printUsage(false, "storagepolicies", COMMANDS);
{code}
3. The {{InodeTree.java}} changes are not required.

> ViewFS: StoragePolicies commands fail with HDFS federation
> ----------------------------------------------------------
>
>                 Key: HDFS-11968
>                 URL: https://issues.apache.org/jira/browse/HDFS-11968
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.7.1
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>         Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, HDFS-11968.006.patch, HDFS-11968.007.patch
>
> The hdfs storagepolicies command fails with HDFS federation. For the storage policies commands, a given user path should be resolved to an HDFS path, and the storage policy command should be applied to the resolved HDFS path.
> {code}
> static DistributedFileSystem getDFS(Configuration conf) throws IOException {
>   FileSystem fs = FileSystem.get(conf);
>   if (!(fs instanceof DistributedFileSystem)) {
>     throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>         " is not an HDFS file system");
>   }
>   return (DistributedFileSystem) fs;
> }
> {code}
[jira] [Commented] (HDFS-12447) Rename AddECPolicyResponse to AddErasureCodingPolicyResponse
[ https://issues.apache.org/jira/browse/HDFS-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173696#comment-16173696 ] Hudson commented on HDFS-12447:
---
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12928 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12928/])
HDFS-12447. Rename AddECPolicyResponse to AddErasureCodingPolicyResponse (wang: rev a12f09ba3c4a3aa4c4558090c5e1b7bcaebe3b94)
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (delete) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AddECPolicyResponse.java
* (add) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/AddErasureCodingPolicyResponse.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
> Rename AddECPolicyResponse to AddErasureCodingPolicyResponse
>
> Key: HDFS-12447
> URL: https://issues.apache.org/jira/browse/HDFS-12447
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: SammiChen
> Assignee: SammiChen
> Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12447.001.patch, HDFS-12447.002.patch, HDFS-12447.003.patch, HDFS-12447.004.patch
>
> As a follow-up to handle some issues discussed in HDFS-12395, this is a major refactor of the addErasureCodingPolicy API, renaming AddECPolicyResponse to AddErasureCodingPolicyResponse.
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173674#comment-16173674 ] Anu Engineer edited comment on HDFS-12515 at 9/20/17 7:07 PM:
---
Adding this dependency makes this work on ozone:
{code}
===
--- hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml (revision 2a94ce9124c7c96a6baa527c543575b74958afd9)
+++ hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml (revision )
@@ -34,6 +34,13 @@
+
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+    </dependency>
+
     <dependency>
       <groupId>commons-cli</groupId>
       <artifactId>commons-cli</artifactId>
{code}
I am going to commit this, but not resolve this JIRA in case we want to revert it. This dependency just makes it explicit that the mapreduce-examples module needs slf4j. So it might be ok to merge this change to trunk.
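For reference, the dependency block being added would read as follows once merged into the examples pom. This is a sketch reconstructed from the diff above, whose XML tags were stripped when the comment was rendered; the omission of a {{<version>}} element assumes, as is usual in Hadoop poms, that the version is pinned in the parent pom's dependencyManagement section.

```xml
<!-- hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml (sketch).
     No <version>: assumed to be inherited from the parent pom's
     dependencyManagement, per the usual Hadoop convention. -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
</dependency>
```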
> Ozone: mvn package compilation fails on HDFS-7240 > - > > Key: HDFS-12515 > URL: https://issues.apache.org/jira/browse/HDFS-12515 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12515-HDFS-7240.001.patch > > > Creation of a package on ozone(HDFS-7240) fails > {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}} > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project hadoop-mapreduce-examples: Compilation failure: Compilation > failure: > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class 
org.apache.hadoop.examples.BaileyBorweinPlouffe > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.DBCountPageView > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.pi.DistSum > [ERROR] >
[jira] [Updated] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12515: Attachment: HDFS-12515-HDFS-7240.001.patch
[jira] [Updated] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12515: Status: Patch Available (was: Open)
[jira] [Commented] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173686#comment-16173686 ] Hadoop QA commented on HDFS-12514:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| trunk Compile Tests ||
| +1 | mvninstall | 14m 54s | trunk passed |
| +1 | compile | 0m 56s | trunk passed |
| +1 | checkstyle | 0m 41s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | findbugs | 1m 54s | trunk passed |
| +1 | javadoc | 0m 45s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 0m 36s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 37 unchanged - 1 fixed = 37 total (was 38) |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 2s | the patch passed |
| +1 | javadoc | 0m 43s | the patch passed |
|| Other Tests ||
| -1 | unit | 93m 2s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 121m 47s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12514 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888086/HDFS-12514.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux bd1dd4759747 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce943eb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21246/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21246/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21246/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
> Cancelled HedgedReads cause block to be marked as suspect on Windows >
[jira] [Updated] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12515: Attachment: (was: HDFS-12515-HDFS-7240.001.patch)
[jira] [Updated] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12515: Attachment: HDFS-12515-HDFS-7240.001.patch
[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation
[ https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173678#comment-16173678 ] Surendra Singh Lilhore commented on HDFS-11968:
---
Thanks [~msingh] for the patch. The {{getStoragePolicy}} command depends on {{HdfsFileStatus}} and {{BlockStoragePolicy}} to get the policy ID, which is specific to HDFS. Let's not change any public interface ({{BlockStoragePolicySpi}}) to get the policy ID. Try this code for the {{getStoragePolicy}} command.
{code}
try {
  FileStatus status;
  try {
    status = fs.getFileStatus(new Path(path));
  } catch (FileNotFoundException e) {
    System.err.println("File/Directory does not exist: " + path);
    return 2;
  }
  if (status instanceof HdfsFileStatus) {
    byte storagePolicyId = ((HdfsFileStatus) status).getStoragePolicy();
    if (storagePolicyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
      System.out.println("The storage policy of " + path + " is unspecified");
      return 0;
    }
    Collection<? extends BlockStoragePolicySpi> policies =
        fs.getAllStoragePolicies();
    for (BlockStoragePolicySpi policy : policies) {
      if (policy instanceof BlockStoragePolicy) {
        if (((BlockStoragePolicy) policy).getId() == storagePolicyId) {
          System.out.println("The storage policy of " + path + ":\n" + policy);
          return 0;
        }
      }
    }
  }
  System.out.println(getName() + " is not supported for filesystem "
      + fs.getScheme() + " on path " + path);
  return 2;
} catch (Exception e) {
  System.err.println(AdminHelper.prettifyException(e));
  return 2;
}
{code}
> ViewFS: StoragePolicies commands fail with HDFS federation
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.7.1
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, HDFS-11968.006.patch, HDFS-11968.007.patch
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to an HDFS path, and the storage policy command should be applied to the resolved HDFS path.
> {code}
> static DistributedFileSystem getDFS(Configuration conf)
>     throws IOException {
>   FileSystem fs = FileSystem.get(conf);
>   if (!(fs instanceof DistributedFileSystem)) {
>     throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>         " is not an HDFS file system");
>   }
>   return (DistributedFileSystem) fs;
> }
> {code}
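The instanceof check in the getDFS helper quoted above is why the command fails under federation: with a viewfs default filesystem, FileSystem.get returns a ViewFileSystem, so the cast is rejected before any mount-point resolution can happen. The self-contained sketch below reproduces just that dispatch pattern; the FileSystem, DistributedFileSystem, and ViewFileSystem classes here are minimal stand-ins, not the real org.apache.hadoop.fs types.

```java
// Sketch of the instanceof-cast failure described in HDFS-11968.
// The classes below are hypothetical stand-ins for the Hadoop types.
public class GetDfsSketch {
    static class FileSystem {
        private final String uri;
        FileSystem(String uri) { this.uri = uri; }
        String getUri() { return uri; }
    }
    static class DistributedFileSystem extends FileSystem {
        DistributedFileSystem() { super("hdfs://ns1"); }
    }
    static class ViewFileSystem extends FileSystem {
        ViewFileSystem() { super("viewfs://cluster"); }
    }

    // Mirrors the getDFS helper from the issue description: it accepts
    // only a DistributedFileSystem and rejects any other implementation.
    static DistributedFileSystem getDFS(FileSystem fs) {
        if (!(fs instanceof DistributedFileSystem)) {
            throw new IllegalArgumentException("FileSystem " + fs.getUri()
                + " is not an HDFS file system");
        }
        return (DistributedFileSystem) fs;
    }

    public static void main(String[] args) {
        // Plain HDFS: the cast succeeds and the command can proceed.
        System.out.println(getDFS(new DistributedFileSystem()).getUri());
        // Federation via viewfs: the default FileSystem is a ViewFileSystem,
        // so the command aborts before resolving the path to a mount target.
        try {
            getDFS(new ViewFileSystem());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the suggested fix works at the FileStatus/BlockStoragePolicySpi level instead: those interfaces are served by ViewFileSystem as well, so no cast to DistributedFileSystem is needed.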
[jira] [Commented] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173677#comment-16173677 ] Anu Engineer commented on HDFS-12515: - [~xyao], [~msingh], [~cheersyang], [~vagarychen], [~jnp] Since Ozone is broken without this patch, I am going to commit this without a Jenkins build. Please let me know if you have any concerns. I am going to leave this JIRA open for time being so we can re-evaluate once the branch is building. > Ozone: mvn package compilation fails on HDFS-7240 > - > > Key: HDFS-12515 > URL: https://issues.apache.org/jira/browse/HDFS-12515 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > > Creation of a package on ozone(HDFS-7240) fails > {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}} > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project hadoop-mapreduce-examples: Compilation failure: Compilation > failure: > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen > [ERROR] > 
/Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.BaileyBorweinPlouffe > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.DBCountPageView > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.pi.DistSum > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24] > cannot find 
symbol > [ERROR] symbol: class Logger > [ERROR] location: class > org.apache.hadoop.examples.dancing.DancingLinks > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[40,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[50,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.terasort.TeraSort > [ERROR] >
[jira] [Commented] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
[ https://issues.apache.org/jira/browse/HDFS-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173674#comment-16173674 ] Anu Engineer commented on HDFS-12515: - Adding this dependency makes this work on ozone:
{code}
===
--- hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml (revision 2a94ce9124c7c96a6baa527c543575b74958afd9)
+++ hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml (revision )
@@ -34,6 +34,13 @@
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+    </dependency>
     <dependency>
       <groupId>commons-cli</groupId>
       <artifactId>commons-cli</artifactId>
{code}
I am going to commit this, but not resolve this JIRA in case we want to revert it. This dependency just makes it explicit that this module needs slf4j. So it might be ok to merge this change to trunk. > Ozone: mvn package compilation fails on HDFS-7240 > - > > Key: HDFS-12515 > URL: https://issues.apache.org/jira/browse/HDFS-12515 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > > Creation of a package on ozone(HDFS-7240) fails > {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}} > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) > on project hadoop-mapreduce-examples: Compilation failure: Compilation > failure: > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] > package org.slf4j does not exist > [ERROR] > 
/Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.BaileyBorweinPlouffe > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.DBCountPageView > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class org.apache.hadoop.examples.pi.DistSum > [ERROR] > 
/Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17] > package org.slf4j does not exist > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24] > cannot find symbol > [ERROR] symbol: class Logger > [ERROR] location: class > org.apache.hadoop.examples.dancing.DancingLinks > [ERROR] > /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[40,17] > package org.slf4j does not exist > [ERROR] >
[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173669#comment-16173669 ] Andrew Wang commented on HDFS-12497: Related, I saw this fail as follows in a precommit run:
{noformat}
java.lang.OutOfMemoryError: Java heap space
  at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:415)
  at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:550)
  at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTestWithMultipleFailure(TestDFSStripedOutputStreamWithFailure.java:380)
  at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56(TestDFSStripedOutputStreamWithFailure.java:246)
{noformat}
> Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: SammiChen > Labels: flaky-test, hdfs-ec-3.0-must-do > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Commented] (HDFS-12339) NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister with rpcbind Portmapper
[ https://issues.apache.org/jira/browse/HDFS-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173668#comment-16173668 ] Ajay Kumar commented on HDFS-12339: --- Thanks for the patch [~msingh]. For {{RpcProgramNfs3#stopDaemons}}, maybe we can change the log level for failure to error. LGTM otherwise. > NFS Gateway on Shutdown Gives Unregistration Failure. Does Not Unregister > with rpcbind Portmapper > - > > Key: HDFS-12339 > URL: https://issues.apache.org/jira/browse/HDFS-12339 > Project: Hadoop HDFS > Issue Type: Bug > Components: nfs >Affects Versions: 2.6.0 >Reporter: Sailesh Patel >Assignee: Mukul Kumar Singh > Attachments: HDFS-12339.001.patch > > > When stopping NFS Gateway the following error is thrown in the NFS gateway > role logs. > 2017-08-17 18:09:16,529 ERROR org.apache.hadoop.oncrpc.RpcProgram: > Unregistration failure with localhost:2049, portmap entry: > (PortmapMapping-13:3:6:2049) > 2017-08-17 18:09:16,531 WARN org.apache.hadoop.util.ShutdownHookManager: > ShutdownHook 'NfsShutdownHook' failed, java.lang.RuntimeException: > Unregistration failure > java.lang.RuntimeException: Unregistration failure > .. > Caused by: java.net.SocketException: Socket is closed > at java.net.DatagramSocket.send(DatagramSocket.java:641) > at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:62) > Checking rpcinfo -p : the following entry is still there: > " 13 3 tcp 2049 nfs"
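The {{Caused by: java.net.SocketException: Socket is closed}} frame in the trace above points at the root cause: the shutdown hook sends the portmap unregistration datagram after its UDP socket has already been closed. A minimal, self-contained sketch (plain JDK, not the Hadoop oncrpc code) reproduces that failure mode:

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;

public class ClosedSocketDemo {
  /** Returns true if sending a UDP datagram on the given socket fails. */
  public static boolean sendFails(DatagramSocket socket) {
    try {
      byte[] payload = new byte[] {0};
      // Unconnected UDP send is fire-and-forget: it succeeds on an open
      // socket even if nothing listens on the destination port.
      DatagramPacket packet = new DatagramPacket(
          payload, payload.length, InetAddress.getLoopbackAddress(), 9);
      socket.send(packet);
      return false;
    } catch (IOException e) {
      // On a closed socket, send() throws SocketException("Socket is closed").
      return true;
    }
  }

  /** Convenience check: an open socket sends fine, a closed one does not. */
  public static boolean openThenClosedBehaves() {
    try {
      DatagramSocket socket = new DatagramSocket();
      boolean openOk = !sendFails(socket);
      socket.close();
      return openOk && sendFails(socket);
    } catch (SocketException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(openThenClosedBehaves()); // prints true
  }
}
```

The fix direction implied by the bug report is ordering: send the portmap unregistration request while the RPC client's socket is still open, and only then tear the socket down in the shutdown hook.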
[jira] [Updated] (HDFS-12447) Rename AddECPolicyResponse to AddErasureCodingPolicyResponse
[ https://issues.apache.org/jira/browse/HDFS-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12447: --- Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) Thanks for the contribution Sammi, committed to trunk and branch-3.0. > Rename AddECPolicyResponse to AddErasureCodingPolicyResponse > > > Key: HDFS-12447 > URL: https://issues.apache.org/jira/browse/HDFS-12447 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12447.001.patch, HDFS-12447.002.patch, > HDFS-12447.003.patch, HDFS-12447.004.patch > > > As a follow-on to handle some issues discussed in HDFS-12395, this is to > majorly refactor the addErasureCodingPolicy API, changing AddECPolicyResponse => > AddErasureCodingPolicyResponse
[jira] [Updated] (HDFS-12447) Rename AddECPolicyResponse to AddErasureCodingPolicyResponse
[ https://issues.apache.org/jira/browse/HDFS-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12447: --- Summary: Rename AddECPolicyResponse to AddErasureCodingPolicyResponse (was: Refactor addErasureCodingPolicy) > Rename AddECPolicyResponse to AddErasureCodingPolicyResponse > > > Key: HDFS-12447 > URL: https://issues.apache.org/jira/browse/HDFS-12447 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12447.001.patch, HDFS-12447.002.patch, > HDFS-12447.003.patch, HDFS-12447.004.patch > > > As a follow-on to handle some issues discussed in HDFS-12395, this is to > majorly refactor the addErasureCodingPolicy API, changing AddECPolicyResponse => > AddErasureCodingPolicyResponse
[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173657#comment-16173657 ] Hadoop QA commented on HDFS-12496: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 50s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 6s{color} | {color:orange} root: The patch generated 1 new + 441 unchanged - 0 fixed = 442 total (was 441) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 3s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.conf.TestCommonConfigurationFields | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12496 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888076/HDFS-12496.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux d10ebc49c14d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7e58b24 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21243/artifact/patchprocess/diff-checkstyle-root.txt | | unit |
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173639#comment-16173639 ] Hadoop QA commented on HDFS-12386: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 53s{color} | {color:red} hadoop-hdfs-project generated 1 new + 450 unchanged - 0 fixed = 451 total (was 450) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 418 unchanged - 0 fixed = 419 total (was 418) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.hdfs.web.JsonUtilClient.toFsServerDefaults(Map) At JsonUtilClient.java:then immediately reboxed in org.apache.hadoop.hdfs.web.JsonUtilClient.toFsServerDefaults(Map) At JsonUtilClient.java:[line 666] | | Failed junit tests | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12386 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888075/HDFS-12386-2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 35483416a8a9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk /
[jira] [Created] (HDFS-12515) Ozone: mvn package compilation fails on HDFS-7240
Mukul Kumar Singh created HDFS-12515: Summary: Ozone: mvn package compilation fails on HDFS-7240 Key: HDFS-12515 URL: https://issues.apache.org/jira/browse/HDFS-12515 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Priority: Blocker Fix For: HDFS-7240 Creation of a package on ozone(HDFS-7240) fails {{mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true}} {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-mapreduce-examples: Compilation failure: Compilation failure: [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[50,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[51,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraGen.java:[69,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.terasort.TeraGen [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[53,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/BaileyBorweinPlouffe.java:[86,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.BaileyBorweinPlouffe [ERROR] 
/Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[51,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java:[80,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.DBCountPageView [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[57,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistSum.java:[69,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.pi.DistSum [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[23,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/dancing/DancingLinks.java:[38,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.dancing.DancingLinks [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[40,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java:[50,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class 
org.apache.hadoop.examples.terasort.TeraSort [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java:[40,17] package org.slf4j does not exist [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraOutputFormat.java:[46,24] cannot find symbol [ERROR] symbol: class Logger [ERROR] location: class org.apache.hadoop.examples.terasort.TeraOutputFormat [ERROR] /Users/msingh/code/work/apache/cblock/ozone_review2/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraScheduler.java:[29,17] package org.slf4j does not exist [ERROR]
[jira] [Commented] (HDFS-12490) Ozone: OzoneClient: OzoneBucket should have information about the bucket creation time
[ https://issues.apache.org/jira/browse/HDFS-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173624#comment-16173624 ] Nandakumar commented on HDFS-12490: --- Thanks [~msingh] for working on this. +1, The patch looks good to me. > Ozone: OzoneClient: OzoneBucket should have information about the bucket > creation time > -- > > Key: HDFS-12490 > URL: https://issues.apache.org/jira/browse/HDFS-12490 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12490-HDFS-7240.001.patch > > > OzoneBucket should have information about the bucket creation time. > OzoneFileSystem needs creation time to display the file status information > for the root of the filesystem.
[jira] [Commented] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
[ https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173607#comment-16173607 ] Hanisha Koneru commented on HDFS-12371: --- The test failures are unrelated. > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > - > > Key: HDFS-12371 > URL: https://issues.apache.org/jira/browse/HDFS-12371 > Project: Hadoop HDFS > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.1 >Reporter: Sai Nukavarapu >Assignee: Hanisha Koneru > Attachments: HDFS-12371.001.patch, HDFS-12371.002.patch > > > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > Looking at the code, i see below description. > {noformat} > `BlockVerificationFailures` | Total number of verifications failures | > `BlocksVerified` | Total number of blocks verified | > {noformat}
[jira] [Comment Edited] (HDFS-12487) FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do the callers
[ https://issues.apache.org/jira/browse/HDFS-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173432#comment-16173432 ] Anu Engineer edited comment on HDFS-12487 at 9/20/17 5:55 PM: -- Thanks for the patch. I know the first time it is a lot of work to set up the system. Thanks for doing that. One small review comment: Can we please add a Log statement before we return? I was thinking something like this. {code} LOG.info("NextBlock call returned null. No valid blocks to copy. {}", item.toJson()); {code} When you attach the next patch, just increment the 001. to be 002, That is, please name your new patch {{HDFS-12487.002.patch}} was (Author: anu): Thanks for the patch. I know the first time it is a lot of work to set up the system. Thanks for doing that. One small review comment: Can we please add a Log statement before we return? I was thinking something like this. {code} LOG.info("NextBlock call returned null. No valid blocks to copy. {}". item.toJson()); {code} When you attach the next patch, just increment the 001. to be 002, That is, please name your new patch {{HDFS-12487.002.patch}} > FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do > the callers > -- > > Key: HDFS-12487 > URL: https://issues.apache.org/jira/browse/HDFS-12487 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, diskbalancer >Affects Versions: 3.0.0 > Environment: CentOS 6.8 x64 > CPU:4 core > Memory:16GB > Hadoop: Release 3.0.0-alpha4 >Reporter: liumi >Assignee: liumi > Fix For: 3.1.0 > > Attachments: HDFS-12487.001.patch > > Original Estimate: 0h > Remaining Estimate: 0h > > BlockIteratorImpl.nextBlock() will look for the blocks in the source volume, > if there are no blocks any more, it will return null up to > DiskBalancer.getBlockToCopy(). However, the DiskBalancer.getBlockToCopy() > will check whether it's a valid block. 
> When I looked into FsDatasetSpi.isValidBlock(), I found that it doesn't > check for a null pointer. In fact, we first need to check whether the block is > null, or an exception will occur. > This bug is hard to find, because the DiskBalancer rarely copies all the data > of one volume to others. Even when it does, the copy process has usually > already finished by the time the bug would occur. > However, when we try to copy all the data of two or more volumes to other > volumes in more than one step, the thread is shut down, which is caused by > the bug above. > The bug can be fixed in two ways: > 1) Before the call to FsDatasetSpi.isValidBlock(), check for the null pointer > 2) Check for the null pointer inside the implementation of > FsDatasetSpi.isValidBlock() -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
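The two proposed fixes can be sketched as follows. This is an illustrative sketch only, with simplified stand-in types (a String instead of ExtendedBlock, and no real dataset), not the actual FsDatasetImpl/DiskBalancer code: the validity check treats null as invalid, and the caller also guards the null returned by nextBlock(), logging as suggested in the review.

```java
/**
 * Hypothetical sketch of the two fixes discussed in this thread.
 * The real code works with ExtendedBlock and FsDatasetSpi; String
 * stands in here so the null-handling pattern is self-contained.
 */
class NullSafeBlockCheck {
    // Fix 2): the validity check itself treats null as "not valid",
    // so every caller is safe even without its own guard.
    static boolean isValidBlock(String block) {
        if (block == null) {
            return false;
        }
        return !block.isEmpty(); // placeholder for the real dataset lookup
    }

    // Fix 1): the caller also guards the null that nextBlock() returns
    // when the source volume has no blocks left to copy.
    static String getBlockToCopy(String candidate) {
        if (candidate == null) {
            System.out.println(
                "NextBlock call returned null. No valid blocks to copy.");
            return null;
        }
        return isValidBlock(candidate) ? candidate : null;
    }
}
```

Either guard alone prevents the NullPointerException; applying both, as the reporter suggests, keeps the iterator's "no more blocks" signal from ever reaching the validity check unchecked.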
[jira] [Commented] (HDFS-12493) Correct javadoc for BackupNode#startActiveServices
[ https://issues.apache.org/jira/browse/HDFS-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173577#comment-16173577 ] Anu Engineer commented on HDFS-12493: - [~msingh] There are some checkstyle warnings. I know some editors and checkstyle can have a different opinion at times. Care to take a quick look? > Correct javadoc for BackupNode#startActiveServices > -- > > Key: HDFS-12493 > URL: https://issues.apache.org/jira/browse/HDFS-12493 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Trivial > Attachments: HDFS-12493.001.patch > > > Following javadoc warning needs to be fixed for > {{BackupNode#startActiveServices}} > Javadoc links are not linked correctly. > {code} > /** > * Start services for BackupNode. > * > * The following services should be muted > * (not run or not pass any control commands to DataNodes) > * on BackupNode: > * {@link LeaseManager.Monitor} protected by SafeMode. > * {@link BlockManager.RedundancyMonitor} protected by SafeMode. > * {@link HeartbeatManager.Monitor} protected by SafeMode. > * {@link DatanodeAdminManager.Monitor} need to prohibit refreshNodes(). > * {@link PendingReconstructionBlocks.PendingReconstructionMonitor} > * harmless, because RedundancyMonitor is muted. > */ > @Override > public void startActiveServices() throws IOException { > try { > namesystem.startActiveServices(); > } catch (Throwable t) { > doImmediateShutdown(t); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
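Warnings like these typically appear when javadoc cannot resolve a `{@link}` target from the compiling class's imports. One common way such warnings are resolved — shown here purely as an illustration, the actual patch may take a different approach — is to reference the nested classes by fully qualified name:

```java
/**
 * Start services for BackupNode.
 *
 * The following services should be muted on BackupNode:
 * {@link org.apache.hadoop.hdfs.server.namenode.LeaseManager.Monitor}
 *   protected by SafeMode.
 * {@link org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.RedundancyMonitor}
 *   protected by SafeMode.
 */
```

The remaining links in the original comment would be qualified the same way, each with the package that actually declares the nested monitor class.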
[jira] [Comment Edited] (HDFS-12475) Ozone : add document for using Datanode http address
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173568#comment-16173568 ] Chen Liang edited comment on HDFS-12475 at 9/20/17 5:47 PM: Hi [~cheersyang], what I meant by "sharing with WebHDFS" was that WebHDFS also uses this datanode http server (which listens on this port) for transferring data. And the point of this ticket is to make this clear in the doc, because all over the doc we use addresses like 127.0.0.1:9864, but this won't work for users who have changed the datanode http port setting. So we should just make this note clear in doc and let the user know that they need to use whatever port they set instead. You are right, it is more clear to say datanode http port, rather than mentioning WebHDFS for more confusion. Will update the title and description. was (Author: vagarychen): Hi [~cheersyang], what I meant by "sharing with WebHDFS" was that WebHDFS seems to be the main user of this datanode http server (which listens on this port) for transferring data. And the point of this ticket is to make this clear in the doc, because all over the doc we use addresses like 127.0.0.1:9864, but this won't work for users who have changed the datanode http port setting. So we should just make this note clear in doc and let the user know that they need to use whatever port they set instead. You are right, it is more clear to say datanode http port, rather than mentioning WebHDFS for more confusion. Will update the title and description. > Ozone : add document for using Datanode http address > > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. 
> This port was used by Datanode http server, which is now shared by Ozone. > Changing this config means user should be using the value of this setting > rather than localhost:9864 as in doc. The value is controlled by the config > key {{dfs.datanode.http.address}}. We should document this information in > {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12475) Ozone : add document for using Datanode http address
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12475: -- Description: Currently Ozone's REST API uses the port 9864, all commands mentioned in OzoneCommandShell.md use the address localhost:9864. This port was used by Datanode http server, which is now shared by Ozone. Changing this config means user should be using the value of this setting rather than localhost:9864 as in doc. The value is controlled by the config key {{dfs.datanode.http.address}}. We should document this information in {{OzoneCommandShell.md}}. was: Currently Ozone's REST API uses the port 9864, all commands mentioned in OzoneCommandShell.md use the address localhost:9864. This port was used by WebHDFS and is now shared by Ozone. The value is controlled by the config key {{dfs.datanode.http.address}}. We should document this information in {{OzoneCommandShell.md}}. > Ozone : add document for using Datanode http address > > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by Datanode http server, which is now shared by Ozone. > Changing this config means user should be using the value of this setting > rather than localhost:9864 as in doc. The value is controlled by the config > key {{dfs.datanode.http.address}}. We should document this information in > {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
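As a concrete illustration of the description above (the value below is an example only — the whole point of this ticket is that the port is site-specific), the Ozone REST examples must target whatever hdfs-site.xml sets here:

```xml
<!-- hdfs-site.xml: illustrative override of the datanode HTTP address.
     9864 is only the default; after a change like this, every
     OzoneCommandShell.md example must use port 19864 instead. -->
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:19864</value>
</property>
```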
[jira] [Comment Edited] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173570#comment-16173570 ] Arpit Agarwal edited comment on HDFS-12496 at 9/20/17 5:46 PM: --- Thanks for the updated patch [~ajayydv]. Couple of additional comments: # The new config key should be in hdfs-default.xml and it should be renamed to something like {{dfs.qjm.operations.timeout}}. # Can you please update the description to state that the value accepts suffixes standard units like ns, ms, s and m. And if no unit is specified then the default is milliseconds. Also you can specify the default as 60s or 1m instead of 6. There are other keys that accept the same suffixes, you can copy this sentence from their description. {code} + Common key to set timeout in milliseconds for related operations in + QuorumJournalManager. {code} was (Author: arpitagarwal): Thanks for the updated patch [~ajayydv]. Couple of additional comments: # The new config key should be in hdfs-default.xml and it should # Can you please update the description to state that the value accepts suffixes standard units like ns, ms, s and m. And if no unit is specified then the default is milliseconds. Also you can specify the default as 60s or 1m instead of 6. There are other keys that accept the same suffixes, you can copy this sentence from their description. {code} + Common key to set timeout in milliseconds for related operations in + QuorumJournalManager. {code} # The config key should be renamed to something like {{dfs.qjm.operations.timeout}}. > Make QuorumJournalManager timeout properties configurable > - > > Key: HDFS-12496 > URL: https://issues.apache.org/jira/browse/HDFS-12496 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch > > > Make QuorumJournalManager timeout properties configurable using a common key. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
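Putting the two review comments together, the proposed hdfs-default.xml entry might look like the sketch below. The key name and default were still under discussion in this thread, and the suffix sentence follows the wording Arpit suggests copying from existing keys:

```xml
<property>
  <name>dfs.qjm.operations.timeout</name>
  <value>60s</value>
  <description>
    Common key to set timeout for related operations in
    QuorumJournalManager. This setting supports multiple time unit
    suffixes as described in dfs.heartbeat.interval.
    If no suffix is specified then milliseconds is assumed.
  </description>
</property>
```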
[jira] [Commented] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173570#comment-16173570 ] Arpit Agarwal commented on HDFS-12496: -- Thanks for the updated patch [~ajayydv]. Couple of additional comments: # The new config key should be in hdfs-default.xml and it should # Can you please update the description to state that the value accepts suffixes standard units like ns, ms, s and m. And if no unit is specified then the default is milliseconds. Also you can specify the default as 60s or 1m instead of 6. There are other keys that accept the same suffixes, you can copy this sentence from their description. {code} + Common key to set timeout in milliseconds for related operations in + QuorumJournalManager. {code} # The config key should be renamed to something like {{dfs.qjm.operations.timeout}}. > Make QuorumJournalManager timeout properties configurable > - > > Key: HDFS-12496 > URL: https://issues.apache.org/jira/browse/HDFS-12496 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch > > > Make QuorumJournalManager timeout properties configurable using a common key. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12475) Ozone : add document for using Datanode http address
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12475: -- Summary: Ozone : add document for using Datanode http address (was: Ozone : add document for port sharing with WebHDFS) > Ozone : add document for using Datanode http address > > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by WebHDFS and is now shared by Ozone. The value is > controlled by the config key {{dfs.datanode.http.address}}. We should > document this information in {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12475) Ozone : add document for port sharing with WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173568#comment-16173568 ] Chen Liang commented on HDFS-12475: --- Hi [~cheersyang], what I meant by "sharing with WebHDFS" was that WebHDFS seems to be the main user of this datanode http server (which listens on this port) for transferring data. And the point of this ticket is to make this clear in the doc, because all over the doc we use addresses like 127.0.0.1:9864, but this won't work for users who have changed the datanode http port setting. So we should just make this note clear in doc and let the user know that they need to use whatever port they set instead. You are right, it is more clear to say datanode http port, rather than mentioning WebHDFS for more confusion. Will update the title and description. > Ozone : add document for port sharing with WebHDFS > -- > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Lokesh Jain > Labels: ozoneDoc > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by WebHDFS and is now shared by Ozone. The value is > controlled by the config key {{dfs.datanode.http.address}}. We should > document this information in {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12503) Ozone: some UX improvements to oz_debug
[ https://issues.apache.org/jira/browse/HDFS-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173542#comment-16173542 ] Hadoop QA commented on HDFS-12503: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 18m 1s{color} | {color:red} root in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 55s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12503 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888065/HDFS-12503-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1666982ca8e7 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 2a94ce9 | | Default Java | 1.8.0_144 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/21241/artifact/patchprocess/branch-mvninstall-root.txt | | findbugs | v3.1.0-RC1 | | javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/21241/artifact/patchprocess/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21241/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | javadoc | https://builds.apache.org/job/PreCommit-HDFS-Build/21241/artifact/patchprocess/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21241/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | |
[jira] [Commented] (HDFS-11035) Better documentation for maintenace mode and upgrade domain
[ https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173497#comment-16173497 ] Hudson commented on HDFS-11035: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12927 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12927/]) HDFS-11035. Better documentation for maintenace mode and upgrade domain. (mingma: rev ce943eb17a4218d8ac1f5293c6726122371d8442) * (add) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDataNodeAdminGuide.md * (edit) hadoop-project/src/site/site.xml * (add) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsUpgradeDomain.md > Better documentation for maintenace mode and upgrade domain > --- > > Key: HDFS-11035 > URL: https://issues.apache.org/jira/browse/HDFS-11035 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, documentation >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Ming Ma > Fix For: 2.9.0, 3.0.0-beta1, 3.1.0 > > Attachments: HDFS-11035-2.patch, HDFS-11035.patch > > > HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing > documentation about these two features are scarce and the implementation have > evolved from the original design doc. Looking at code and Javadoc and I still > don't quite get how I can get datanodes into maintenance mode/ set up a > upgrade domain. > File this jira to propose that we write an up-to-date description of these > two features. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
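For readers landing on this thread: the new HdfsDataNodeAdminGuide covers putting datanodes into maintenance via the combined host file. A minimal sketch of that JSON format — assuming `dfs.namenode.hosts.provider.classname` is set to `org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager`, with made-up hostnames and timestamp:

```json
[
  {"hostName": "dn1.example.com", "adminState": "IN_MAINTENANCE",
   "maintenanceExpireTimeInMS": 1510000000000},
  {"hostName": "dn2.example.com", "adminState": "NORMAL"}
]
```

See the committed HdfsDataNodeAdminGuide.md for the authoritative description of the admin states and expiry semantics.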
[jira] [Resolved] (HDFS-7877) Support maintenance state for datanodes
[ https://issues.apache.org/jira/browse/HDFS-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma resolved HDFS-7877. --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 3.0.0-beta1 2.9.0 All sub tasks have been resolved. Thanks [~ctrezzo] [~eddyxu] [~manojg] [~elek] [~linyiqun] and others for the contribution and discussion. > Support maintenance state for datanodes > --- > > Key: HDFS-7877 > URL: https://issues.apache.org/jira/browse/HDFS-7877 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, namenode >Reporter: Ming Ma >Assignee: Ming Ma > Fix For: 2.9.0, 3.0.0-beta1, 3.1.0 > > Attachments: HDFS-7877-2.patch, HDFS-7877.patch, > Supportmaintenancestatefordatanodes-2.pdf, > Supportmaintenancestatefordatanodes.pdf > > > This requirement came up during the design for HDFS-7541. Given this feature > is mostly independent of upgrade domain feature, it is better to track it > under a separate jira. The design and draft patch will be available soon. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11035) Better documentation for maintenace mode and upgrade domain
[ https://issues.apache.org/jira/browse/HDFS-11035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-11035: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Thanks [~ctrezzo]. I have committed the patch to trunk, branch-3.0 and branch-2. > Better documentation for maintenace mode and upgrade domain > --- > > Key: HDFS-11035 > URL: https://issues.apache.org/jira/browse/HDFS-11035 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, documentation >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Ming Ma > Fix For: 2.9.0, 3.0.0-beta1, 3.1.0 > > Attachments: HDFS-11035-2.patch, HDFS-11035.patch > > > HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing > documentation about these two features are scarce and the implementation have > evolved from the original design doc. Looking at code and Javadoc and I still > don't quite get how I can get datanodes into maintenance mode/ set up a > upgrade domain. > File this jira to propose that we write an up-to-date description of these > two features. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12514: -- Status: Patch Available (was: In Progress) > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12514.001.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-6450) Support non-positional hedged reads in HDFS
[ https://issues.apache.org/jira/browse/HDFS-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173490#comment-16173490 ] Hadoop QA commented on HDFS-6450: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 14s{color} | {color:red} HDFS-6450 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-6450 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12702940/HDFS-7782-001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21245/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support non-positional hedged reads in HDFS > --- > > Key: HDFS-6450 > URL: https://issues.apache.org/jira/browse/HDFS-6450 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.4.0 >Reporter: Colin P. McCabe >Assignee: Liang Xie > Attachments: HDFS-6450-like-pread.txt > > > HDFS-5776 added support for hedged positional reads. We should also support > hedged non-position reads (aka regular reads). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12514: -- Description: DFSClient with hedged reads enabled will often close previous spawned connections if it successfully reads from one of them. This can result in DataNode's BlockSender getting a socket exception and wrongly marking the block as suspect and to be rescanned for errors. This patch is aimed at adding windows specific network related exception messages to be ignored in BlockSender.sendPacket. > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12514.001.patch > > > DFSClient with hedged reads enabled will often close previous spawned > connections if it successfully reads from one of them. This can result in > DataNode's BlockSender getting a socket exception and wrongly marking the > block as suspect and to be rescanned for errors. > This patch is aimed at adding windows specific network related exception > messages to be ignored in BlockSender.sendPacket. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
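For context on the failure mode described above, hedged reads are a client-side feature (added in HDFS-5776) enabled roughly as follows; the values here are illustrative, not recommendations:

```xml
<!-- hdfs-site.xml (client side): enable hedged reads. -->
<property>
  <name>dfs.client.hedged.read.threadpool.size</name>
  <value>10</value> <!-- 0 (the default) disables hedged reads -->
</property>
<property>
  <name>dfs.client.hedged.read.threshold.millis</name>
  <value>250</value> <!-- wait this long before spawning a hedged read -->
</property>
```

With these set, the client races a second read against a slow datanode and closes the loser's connection — the abort that this patch teaches BlockSender to tolerate on Windows.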
[jira] [Updated] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Majercak updated HDFS-12514: -- Attachment: HDFS-12514.001.patch > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12514.001.patch
[jira] [Work started] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
[ https://issues.apache.org/jira/browse/HDFS-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12514 started by Lukas Majercak. - > Cancelled HedgedReads cause block to be marked as suspect on Windows > > > Key: HDFS-12514 > URL: https://issues.apache.org/jira/browse/HDFS-12514 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, hdfs-client >Reporter: Lukas Majercak >Assignee: Lukas Majercak > Attachments: HDFS-12514.001.patch
[jira] [Created] (HDFS-12514) Cancelled HedgedReads cause block to be marked as suspect on Windows
Lukas Majercak created HDFS-12514: - Summary: Cancelled HedgedReads cause block to be marked as suspect on Windows Key: HDFS-12514 URL: https://issues.apache.org/jira/browse/HDFS-12514 Project: Hadoop HDFS Issue Type: Bug Components: datanode, hdfs-client Reporter: Lukas Majercak Assignee: Lukas Majercak
[jira] [Commented] (HDFS-12464) Ozone: More detailed documentation about the ozone components
[ https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173474#comment-16173474 ] Hadoop QA commented on HDFS-12464: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 16m 6s{color} | {color:red} root in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12464 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888079/HDFS-12464-HDFS-7240.001.patch | | Optional Tests | asflicense mvnsite xml | | uname | Linux e70560f0a3fb 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 2a94ce9 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/21244/artifact/patchprocess/branch-mvninstall-root.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/21244/artifact/patchprocess/whitespace-eol.txt | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21244/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21244/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: More detailed documentation about the ozone components > - > > Key: HDFS-12464 > URL: https://issues.apache.org/jira/browse/HDFS-12464 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12464-HDFS-7240.001.patch, > HDFS-7240-HDFS-12464.001.patch > > > I started to write a more detailed introduction about the Ozone components. > The goal is to explain the basic responsibility of the components and the > basic network topology (which components sends messages and to where?). 
[jira] [Created] (HDFS-12513) Create UI page to show Ozone configs by tags
Ajay Kumar created HDFS-12513: - Summary: Create UI page to show Ozone configs by tags Key: HDFS-12513 URL: https://issues.apache.org/jira/browse/HDFS-12513 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: HDFS-7240 Reporter: Ajay Kumar Assignee: Ajay Kumar Fix For: HDFS-7240 Create UI page to show Ozone configs by tags
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173460#comment-16173460 ] Íñigo Goiri commented on HDFS-12273: I created HDFS-12510 for adding security to the web server and HDFS-12512 to add support for WebHDFS. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12512: --- Component/s: fs > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fs >Reporter: Íñigo Goiri > Labels: RBF > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}.
[jira] [Comment Edited] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173455#comment-16173455 ] Anu Engineer edited comment on HDFS-12506 at 9/20/17 4:34 PM: -- [~cheersyang], Very good find. You are right this approach won't scale. I am +1 on [~xyao] 's idea and the extension that [~nandakumar131] proposed. was (Author: anu): [~cheersyang], Very good find. You are right this approach won't scale. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10467) Router-based HDFS federation
[ https://issues.apache.org/jira/browse/HDFS-10467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-10467: --- Labels: RBF (was: ) > Router-based HDFS federation > > > Key: HDFS-10467 > URL: https://issues.apache.org/jira/browse/HDFS-10467 > Project: Hadoop HDFS > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.1 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Labels: RBF > Fix For: HDFS-10467 > > Attachments: HDFS-10467.002.patch, HDFS-10467.PoC.001.patch, > HDFS-10467.PoC.patch, HDFS Router Federation.pdf, > HDFS-Router-Federation-Prototype.patch > > > Add a Router to provide a federated view of multiple HDFS clusters. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12510) RBF: Add security to UI
[ https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12510: --- Labels: RBF (was: ) > RBF: Add security to UI > --- > > Key: HDFS-12510 > URL: https://issues.apache.org/jira/browse/HDFS-12510 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri > Labels: RBF > > HDFS-12273 implemented the UI for Router Based Federation without security.
[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12512: --- Labels: RBF (was: ) > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri > Labels: RBF > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}.
[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12512: --- Description: The Router currently does not support WebHDFS. It needs to implement something similar to {{NamenodeWebHdfsMethods}}. > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri > Labels: RBF > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}.
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173455#comment-16173455 ] Anu Engineer commented on HDFS-12506: - [~cheersyang], Very good find. You are right this approach won't scale. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173440#comment-16173440 ] Xiaoyu Yao edited comment on HDFS-12506 at 9/20/17 4:31 PM: We should keep a common prefix for all higher level containers like (volume, bucket) to avoid mixing the prefix with objects. Otherwise, list volume might still have the same overhead of iterating all the keys like we had for bucket case here unless you choose the right prefix like {{/v1/#}} to listVolume. Both proposal should work. was (Author: xyao): We should keep a common prefix for all higher level containers like (volume, bucket) to avoid mixing the prefix with objects. If we don't have /#v1/#b1, list volume might still have the same overhead of iterating all the keys like we had for bucket case here unless you want use /v1/# prefix to listvolume. Both should work. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys. 
[jira] [Updated] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-12473: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 2.8.3 2.8.2 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) Thanks [~manojg] and [~zhz]. I have committed patch to trunk, branch-3.0, branch-2, branch-2.8 and branch 2.8.2. Besides the branch-2 diff mentioned above, the patch for branch-2.8/branch.2.8.2 is slightly different in the unit tests as maintenance state only exists in branch-2 and above. > Change hosts JSON file format > - > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2, 2.8.3, 3.1.0 > > Attachments: HDFS-12473-2.patch, HDFS-12473-3.patch, > HDFS-12473-4.patch, HDFS-12473-5.patch, HDFS-12473-6.patch, > HDFS-12473-branch-2.patch, HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. 
> {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
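The format change above means a reader must cope with both layouts during a transition: the legacy file is one JSON object per line, while the new file is a single well-formed JSON array. A minimal sketch of such a dual-format reader follows; the function name is illustrative and not the actual CombinedHostsFileReader API.

```python
import json

def read_hosts_file(text):
    """Parse a hosts file in either the legacy format (one JSON object
    per line) or the new format (a single JSON array).

    Returns a list of dicts such as {"hostName": "host1"}.
    """
    stripped = text.strip()
    if not stripped:
        return []
    if stripped.startswith("["):
        # New format: the whole file is one JSON array.
        return json.loads(stripped)
    # Legacy format: parse each non-empty line as its own JSON object.
    return [json.loads(line) for line in stripped.splitlines() if line.strip()]

legacy = '{"hostName": "host1"}\n{"hostName": "host2", "upgradeDomain": "ud0"}'
new = '[{"hostName": "host1"}, {"hostName": "host2", "upgradeDomain": "ud0"}]'
assert read_hosts_file(legacy) == read_hosts_file(new)
```

Accepting both forms is what lets a cluster upgrade without rewriting its hosts file first.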
[jira] [Created] (HDFS-12512) RBF: Add WebHDFS
Íñigo Goiri created HDFS-12512: -- Summary: RBF: Add WebHDFS Key: HDFS-12512 URL: https://issues.apache.org/jira/browse/HDFS-12512 Project: Hadoop HDFS Issue Type: Improvement Reporter: Íñigo Goiri
[jira] [Commented] (HDFS-12510) RBF: Add security to UI
[ https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173446#comment-16173446 ] Íñigo Goiri commented on HDFS-12510: As pointed out by [~raviprak] in HDFS-12273, we should do something like: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java#L315 > RBF: Add security to UI > --- > > Key: HDFS-12510 > URL: https://issues.apache.org/jira/browse/HDFS-12510 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri > > HDFS-12273 implemented the UI for Router Based Federation without security.
[jira] [Commented] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173447#comment-16173447 ] Hadoop QA commented on HDFS-12473: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 57s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 1 unchanged - 5 fixed = 1 total (was 6) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}686m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 13m 58s{color} | {color:red} The patch generated 106 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}748m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.datanode.TestFsDatasetCache | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages | | | hadoop.hdfs.client.impl.TestBlockReaderFactory | | | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys | | | hadoop.hdfs.server.datanode.checker.TestStorageLocationChecker | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.client.impl.TestBlockReaderRemote | | | hadoop.hdfs.server.namenode.TestStorageRestore | | | hadoop.hdfs.server.namenode.TestFileLimit | | | hadoop.hdfs.server.blockmanagement.TestNodeCount | | | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots | | | hadoop.hdfs.server.namenode.TestFileContextAcl | | | hadoop.hdfs.TestFileCreationClient | | |
[jira] [Work started] (HDFS-12511) Add tags to ozone config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12511 started by Ajay Kumar. - > Add tags to ozone config > > > Key: HDFS-12511 > URL: https://issues.apache.org/jira/browse/HDFS-12511 > Project: Hadoop HDFS > Issue Type: New Feature >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > > Add tags to ozone config: > Example: > {code} > <property> > <name>ozone.ksm.handler.count.key</name> > <value>200</value> > <tag>OZONE,PERFORMANCE,KSM</tag> > <description> > The number of RPC handler threads for each KSM service endpoint. > </description> > </property> > {code}
[jira] [Created] (HDFS-12511) Add tags to ozone config
Ajay Kumar created HDFS-12511: - Summary: Add tags to ozone config Key: HDFS-12511 URL: https://issues.apache.org/jira/browse/HDFS-12511 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: HDFS-7240 Reporter: Ajay Kumar Assignee: Ajay Kumar Fix For: HDFS-7240 Add tags to ozone config: Example: {code} <property> <name>ozone.ksm.handler.count.key</name> <value>200</value> <tag>OZONE,PERFORMANCE,KSM</tag> <description> The number of RPC handler threads for each KSM service endpoint. </description> </property> {code}
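Once properties carry tags, the "show configs by tags" UI proposed in HDFS-12513 reduces to a tag-to-properties index. A minimal sketch is below; the second property name and the in-memory representation are illustrative assumptions, not the actual Ozone configuration API.

```python
from collections import defaultdict

# Hypothetical in-memory view of tagged properties. The first entry
# follows the example in the issue; the second is made up for illustration.
PROPERTIES = [
    {"name": "ozone.ksm.handler.count.key", "value": "200",
     "tags": ["OZONE", "PERFORMANCE", "KSM"]},
    {"name": "ozone.example.cache.size", "value": "64",
     "tags": ["OZONE", "PERFORMANCE"]},
]

def index_by_tag(props):
    """Build a tag -> [property name] index, the kind of lookup a
    'configs by tag' UI page would serve."""
    by_tag = defaultdict(list)
    for prop in props:
        for tag in prop["tags"]:
            by_tag[tag].append(prop["name"])
    return dict(by_tag)

index = index_by_tag(PROPERTIES)
assert index["KSM"] == ["ozone.ksm.handler.count.key"]
```

Grouping at load time keeps the UI query a dictionary lookup rather than a scan over every property.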
[jira] [Comment Edited] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173440#comment-16173440 ] Xiaoyu Yao edited comment on HDFS-12506 at 9/20/17 4:28 PM: We should keep a common prefix for all higher level containers like (volume, bucket) to avoid mixing the prefix with objects. If we don't have /#v1/#b1, list volume might still have the same overhead of iterating all the keys like we had for bucket case here unless you want use /v1/# prefix to listvolume. Both should work. was (Author: xyao): We should keep a common prefix for all higher level containers like (volume, bucket) to avoid mixing the prefix with objects. If we don't have /#v1/#b1, list volume will still have the same overhead of iterating all the keys like we had for bucket case here. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys. 
[jira] [Assigned] (HDFS-12507) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found
[ https://issues.apache.org/jira/browse/HDFS-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh reassigned HDFS-12507: Assignee: Mukul Kumar Singh > javadoc: error - class file for org.apache.http.annotation.ThreadSafe not > found > --- > > Key: HDFS-12507 > URL: https://issues.apache.org/jira/browse/HDFS-12507 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Tsz Wo Nicholas Sze >Assignee: Mukul Kumar Singh > > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on > project hadoop-hdfs-client: MavenReportException: Error while generating > Javadoc: > [ERROR] Exit code: 1 - > /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694: > warning - Tag @link: reference not found: StripingCell > [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe > not found > [ERROR] > [ERROR] Command line was: > /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc > -J-Xmx768m @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' > dir. > {code} > To reproduce the error above, run > {code} > mvn package -Pdist -DskipTests -DskipDocs -Dtar > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12510) RBF: Add security to UI
Íñigo Goiri created HDFS-12510: -- Summary: RBF: Add security to UI Key: HDFS-12510 URL: https://issues.apache.org/jira/browse/HDFS-12510 Project: Hadoop HDFS Issue Type: Improvement Reporter: Íñigo Goiri HDFS-12273 implemented the UI for Router Based Federation without security. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173440#comment-16173440 ] Xiaoyu Yao commented on HDFS-12506: --- We should keep a common prefix for all higher level containers like (volume, bucket) to avoid mixing the prefix with objects. If we don't have /#v1/#b1, list volume will still have the same overhead of iterating all the keys like we had for bucket case here. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. The problem with this design > is we don't have an efficient approach to locate all buckets without scanning > the keys. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
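The prefix idea discussed above can be sketched with a sorted key list: if bucket entries carry a marker (e.g. {{/v1/#b1}}) that sorts apart from object keys, listBucket can seek to the marker prefix and stop at the first non-matching key, touching O(#buckets) entries instead of O(#keys). The layout below is an illustration of the proposal, not the final KSM DB format.

```python
import bisect

# Illustrative key layout: '#'-marked bucket entries sort before the
# '/v1/b...' object keys, so all buckets for a volume are contiguous.
keys = sorted([
    "/v1/#b1", "/v1/#b2", "/v1/#b3",
    "/v1/b1/k1", "/v1/b1/k2", "/v1/b2/k1",
])

def list_buckets(volume):
    """Seek to the bucket prefix and iterate only while it matches,
    mimicking a prefix seek in a sorted key-value store."""
    prefix = volume + "/#"
    start = bisect.bisect_left(keys, prefix)  # binary-search seek
    out = []
    for key in keys[start:]:
        if not key.startswith(prefix):
            break  # past the bucket range; no full scan needed
        out.append(key[len(prefix):])
    return out

assert list_buckets("/v1") == ["b1", "b2", "b3"]
```

The same trick generalizes to volumes, which is why the thread suggests a common prefix for every container level rather than for buckets alone.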
[jira] [Commented] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173436#comment-16173436 ] Hudson commented on HDFS-12473: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12926 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12926/]) HDFS-12473. Change hosts JSON file format. (mingma: rev 230b85d5865b7e08fb7aaeab45295b5b966011ef) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestCombinedHostsFileReader.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileWriter.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CombinedHostFileManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/dfs.hosts.json * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/legacy.dfs.hosts.json * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/CombinedHostsFileReader.java > Change hosts JSON file format > - > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473-2.patch, HDFS-12473-3.patch, > HDFS-12473-4.patch, HDFS-12473-5.patch, HDFS-12473-6.patch, > HDFS-12473-branch-2.patch, HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. 
> {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
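Since the patch keeps a legacy.dfs.hosts.json test resource, a reader has to recognize both layouts. One plausible way to distinguish them, shown here as an illustrative sketch rather than the actual CombinedHostsFileReader logic, is to look at the first non-whitespace character, since only the new format starts with a top-level array:

```java
public class HostsFileFormat {
    /**
     * Returns true for the new array-wrapped format, false for the
     * legacy one-JSON-object-per-line format.
     */
    static boolean isArrayFormat(String content) {
        for (int i = 0; i < content.length(); i++) {
            char c = content.charAt(i);
            if (!Character.isWhitespace(c)) {
                // Only the standard-conforming format has a top-level '['.
                return c == '[';
            }
        }
        return false;  // empty file: treat as legacy
    }

    public static void main(String[] args) {
        String legacy = "{\"hostName\": \"host1\"}\n{\"hostName\": \"host2\"}";
        String current = "[\n {\"hostName\": \"host1\"},\n {\"hostName\": \"host2\"}\n]";
        System.out.println(isArrayFormat(legacy));   // false
        System.out.println(isArrayFormat(current));  // true
    }
}
```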
[jira] [Commented] (HDFS-12487) FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do the callers
[ https://issues.apache.org/jira/browse/HDFS-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173432#comment-16173432 ] Anu Engineer commented on HDFS-12487: - Thanks for the patch. I know the first time it is a lot of work to set up the system. Thanks for doing that. One small review comment: can we please add a log statement before we return? I was thinking of something like this: {code} LOG.info("NextBlock call returned null. No valid blocks to copy. {}", item.toJson()); {code} When you attach the next patch, just increment 001 to 002; that is, please name your new patch {{HDFS-12487.002.patch}} > FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do > the callers > -- > > Key: HDFS-12487 > URL: https://issues.apache.org/jira/browse/HDFS-12487 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, diskbalancer >Affects Versions: 3.0.0 > Environment: CentOS 6.8 x64 > CPU:4 core > Memory:16GB > Hadoop: Release 3.0.0-alpha4 >Reporter: liumi >Assignee: liumi > Fix For: 3.1.0 > > Attachments: HDFS-12487.001.patch > > Original Estimate: 0h > Remaining Estimate: 0h > > BlockIteratorImpl.nextBlock() will look for blocks in the source volume; > if there are no blocks any more, it returns null, which propagates up to > DiskBalancer.getBlockToCopy(). DiskBalancer.getBlockToCopy() then > checks whether it is a valid block. > When I look into FsDatasetSpi.isValidBlock(), I find that it doesn't > check for null! In fact, we first need to check whether the block is null, > or an exception will occur. > This bug is hard to find, because the DiskBalancer hardly ever copies all the data > of one volume to others. Even when we do copy all the data of one > volume to other volumes and the bug occurs, the copy process has already > finished.
> However, when we try to copy all the data of two or more volumes to other > volumes in more than one step, the thread will be shut down, which is caused > by the bug above. > The bug can be fixed in two ways: > 1) Before the call to FsDatasetSpi.isValidBlock(), check for null in the caller > 2) Check for null inside the implementation of > FsDatasetSpi.isValidBlock()
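Both proposed fixes reduce to a defensive null check. A minimal sketch with simplified, stand-in types (not the real FsDatasetSpi interface, whose isValidBlock() also checks on-disk state):

```java
public class NullCheckSketch {
    /** Stand-in for a block handed back by BlockIteratorImpl.nextBlock(). */
    interface Block {
        long getBlockId();
    }

    /** Option 2 from the report: guard inside isValidBlock() itself. */
    static boolean isValidBlock(Block b) {
        if (b == null) {
            return false;  // a null block is never valid; avoids the NPE
        }
        return b.getBlockId() >= 0;  // stand-in for the real validity check
    }

    public static void main(String[] args) {
        // Option 1 from the report: the caller checks null before the call,
        // matching the suggested "NextBlock call returned null" log point.
        Block next = null;  // simulates an exhausted source volume
        if (next == null || !isValidBlock(next)) {
            System.out.println("NextBlock call returned null. No valid blocks to copy.");
        }
        System.out.println(isValidBlock(() -> 42L));  // true
    }
}
```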
[jira] [Comment Edited] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173430#comment-16173430 ] Nandakumar edited comment on HDFS-12506 at 9/20/17 4:19 PM: +1 for [~xyao]'s idea, I was also thinking of the same. One small change though For Volume /#v1 For Bucket /v1/#b1 Keys can be stored as they are stored now With this we can iterate and get list of volumes without iterating over buckets, and get list of buckets without iterating over keys. Something like {code} /#v1 /#v2 /#v3 /v1/#b1 /v1/#b2 /v2/#b1 /v3/#b1 /v1/b1/k1 /v2/b2/k2 {code} was (Author: nandakumar131): +1 for [~xyao]'s idea, I was also thinking of the same. One small change though For Volume /#v1 For Bucket /v1/#b1 Keys can be stored as they are stored now With this we can iterate and get list of volumes without iterating over buckets, and get list of buckets without iterating over keys. Something lime {code} /#v1 /#v2 /#v3 /v1/#b1 /v1/#b2 /v2/#b1 /v3/#b1 /v1/b1/k1 /v2/b2/k2 {code} > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and run {{listBucket}} command to get a > list of buckets under a volume, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > this call spent over *15 seconds* to finish. The problem was caused by the > inflexible structure of KSM DB. Right now {{ksm.db}} stores keys like > following > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > keys are sorted in nature order so when we do list buckets under a volume e.g > /v1, we need to seek to /v1 point and start to iterate and filter keys, this > ends up with scanning all keys under volume /v1. 
The problem with this design > is that we don't have an efficient approach to locate all buckets without scanning > the keys. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173430#comment-16173430 ] Nandakumar commented on HDFS-12506: --- +1 for [~xyao]'s idea, I was also thinking of the same. One small change though: for a volume, /#v1; for a bucket, /v1/#b1. Keys can be stored as they are stored now. With this we can iterate and get the list of volumes without iterating over buckets, and get the list of buckets without iterating over keys. Something like {code} /#v1 /#v2 /#v3 /v1/#b1 /v1/#b2 /v2/#b1 /v3/#b1 /v1/b1/k1 /v2/b2/k2 {code} > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and ran the {{listBucket}} command to get a > list of buckets under a volume: > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > This call took over *15 seconds* to finish. The problem is caused by the > inflexible structure of the KSM DB. Right now {{ksm.db}} stores keys like the > following: > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > Keys are sorted in natural order, so when we list buckets under a volume, e.g. > /v1, we need to seek to the /v1 prefix and then iterate and filter keys; this > ends up scanning all keys under volume /v1. The problem with this design > is that we don't have an efficient approach to locate all buckets without scanning > the keys.
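Nandakumar's proposed layout can be sketched the same way (a TreeMap standing in for ksm.db; this illustrates the proposal, it is not actual Ozone code): because '#'-marked bucket keys sort into their own contiguous range, a bucket listing becomes a bounded range scan that never touches object keys:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class PrefixedLayout {
    /** Sample ksm.db with the proposed '#' markers on container keys. */
    static TreeMap<String, String> sampleDb() {
        TreeMap<String, String> db = new TreeMap<>();
        for (String k : new String[] {
                "/#v1", "/#v2", "/#v3",
                "/v1/#b1", "/v1/#b2", "/v2/#b1", "/v3/#b1",
                "/v1/b1/k1", "/v2/b2/k2"}) {
            db.put(k, "");
        }
        return db;
    }

    /** Lists buckets of a volume by scanning only the '#'-marked range. */
    static List<String> listBuckets(TreeMap<String, String> db, String volume) {
        String lo = "/" + volume + "/#";   // first possible bucket key
        String hi = "/" + volume + "/$";   // '$' is the byte after '#'
        // Bounded range scan: object keys like /v1/b1/k1 start with a
        // letter, sort after '$', and are never visited, no matter how
        // many objects the volume holds.
        return new ArrayList<>(db.subMap(lo, hi).keySet());
    }

    public static void main(String[] args) {
        System.out.println(listBuckets(sampleDb(), "v1"));  // [/v1/#b1, /v1/#b2]
    }
}
```

The same trick with the bare "/#" prefix yields the volume list without visiting any bucket or object keys.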
[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components
[ https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12464: Attachment: HDFS-12464-HDFS-7240.001.patch First usable version is uploaded. Please check the validity of the statements about Ozone (and feel free to fix the language problems, if you see any...) > Ozone: More detailed documentation about the ozone components > - > > Key: HDFS-12464 > URL: https://issues.apache.org/jira/browse/HDFS-12464 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12464-HDFS-7240.001.patch, > HDFS-7240-HDFS-12464.001.patch > > > I started to write a more detailed introduction about the Ozone components. > The goal is to explain the basic responsibility of the components and the > basic network topology (which components sends messages and to where?). > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12464) Ozone: More detailed documentation about the ozone components
[ https://issues.apache.org/jira/browse/HDFS-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12464: Status: Patch Available (was: Open) > Ozone: More detailed documentation about the ozone components > - > > Key: HDFS-12464 > URL: https://issues.apache.org/jira/browse/HDFS-12464 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: HDFS-7240 >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12464-HDFS-7240.001.patch, > HDFS-7240-HDFS-12464.001.patch > > > I started to write a more detailed introduction about the Ozone components. > The goal is to explain the basic responsibility of the components and the > basic network topology (which components sends messages and to where?). > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12506) Ozone: ListBucket is too slow
[ https://issues.apache.org/jira/browse/HDFS-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16173408#comment-16173408 ] Xiaoyu Yao commented on HDFS-12506: --- Thanks [~cheersyang] for reporting this. An easy fix might be assigning a different prefix to the volume and bucket keys themselves. For example, the volume in your example would be keyed as /#v1, and the bucket as /#v1/#b1. A regular key would be keyed as-is, as today, without the special prefix: /v1/b1/k1 This way, listing just volumes or buckets is not affected by how many objects are contained. With some minor changes in the KSM MetadataManager, we should be able to handle this with better performance. Let me know your thoughts. > Ozone: ListBucket is too slow > - > > Key: HDFS-12506 > URL: https://issues.apache.org/jira/browse/HDFS-12506 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Blocker > Labels: ozoneMerge > > Generated 3 million keys in ozone, and ran the {{listBucket}} command to get a > list of buckets under a volume: > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-15143 -user wwei > {code} > This call took over *15 seconds* to finish. The problem is caused by the > inflexible structure of the KSM DB. Right now {{ksm.db}} stores keys like the > following: > {code} > /v1/b1 > /v1/b1/k1 > /v1/b1/k2 > /v1/b1/k3 > /v1/b2 > /v1/b2/k1 > /v1/b2/k2 > /v1/b2/k3 > /v1/b3 > /v1/b4 > {code} > Keys are sorted in natural order, so when we list buckets under a volume, e.g. > /v1, we need to seek to the /v1 prefix and then iterate and filter keys; this > ends up scanning all keys under volume /v1. The problem with this design > is that we don't have an efficient approach to locate all buckets without scanning > the keys.
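Xiaoyu's scheme boils down to three key encodings. A tiny sketch of those encodings (the helper names are hypothetical, not the actual KSM MetadataManager API):

```java
public class KsmKeyEncoding {
    /** Volume keys carry the '#' marker: they sort into their own range. */
    static String volumeKey(String volume) {
        return "/#" + volume;                        // e.g. /#v1
    }

    /** Bucket keys carry the marker on both components. */
    static String bucketKey(String volume, String bucket) {
        return "/#" + volume + "/#" + bucket;        // e.g. /#v1/#b1
    }

    /** Object keys stay exactly as they are stored today. */
    static String objectKey(String volume, String bucket, String key) {
        return "/" + volume + "/" + bucket + "/" + key;  // e.g. /v1/b1/k1
    }

    public static void main(String[] args) {
        System.out.println(volumeKey("v1"));              // /#v1
        System.out.println(bucketKey("v1", "b1"));        // /#v1/#b1
        System.out.println(objectKey("v1", "b1", "k1"));  // /v1/b1/k1
    }
}
```

Since '#' sorts before any alphanumeric character, every container key lands in a range that a listing can seek to directly, without touching the unprefixed object keys.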
[jira] [Updated] (HDFS-12496) Make QuorumJournalManager timeout properties configurable
[ https://issues.apache.org/jira/browse/HDFS-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12496: -- Attachment: HDFS-12496.02.patch [~hkoneru],[~arpitagarwal] Thanks for review. Updated patch with suggested changes. > Make QuorumJournalManager timeout properties configurable > - > > Key: HDFS-12496 > URL: https://issues.apache.org/jira/browse/HDFS-12496 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12496.01.patch, HDFS-12496.02.patch > > > Make QuorumJournalManager timeout properties configurable using a common key. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Status: Patch Available (was: Open) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-2.patch Attaching same patch again to make jenkins happy. > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Status: Open (was: Patch Available) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: (was: HDFS-12386-2.patch) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: (was: HDFS-12386-2.patch) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-2.patch > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org