[jira] [Commented] (HDFS-13915) replace datanode failed because of NameNodeRpcServer#getAdditionalDatanode returning excessive datanodeInfo

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686177#comment-16686177
 ] 

Jiandan Yang  commented on HDFS-13915:
--

I added a test case in [^HDFS-13915.001.patch], based on trunk, to reproduce the issue. 
Hi, [~szetszwo], BlockStoragePolicy#chooseStorageTypes may return excessive 
storageTypes, and I do not understand why after looking through the related code. 
Can we remove the excess storageTypes?

{code:java}
if (storageTypes.size() < expectedSize) {
  LOG.warn("Failed to place enough replicas: expected size is {}"
      + " but only {} storage types can be selected (replication={},"
      + " selected={}, unavailable={}" + ", removed={}" + ", policy={}"
      + ")", expectedSize, storageTypes.size(), replication, storageTypes,
      unavailables, removed, this);
} else if (storageTypes.size() > expectedSize) {
  // should remove the excess storageTypes so that exactly expectedSize are returned
}
{code}
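
For illustration, a minimal sketch of what trimming in that else-if branch could 
look like, assuming the surplus entries can simply be dropped from the tail of the 
list (my assumption, not a reviewed fix):

{code:java}
// Hypothetical sketch, not a reviewed fix: drop surplus entries from the
// tail so callers always receive exactly expectedSize storage types.
import java.util.List;
import org.apache.hadoop.fs.StorageType;

class TrimExcessSketch {
  static void trimToExpected(List<StorageType> storageTypes, int expectedSize) {
    while (storageTypes.size() > expectedSize) {
      storageTypes.remove(storageTypes.size() - 1);
    }
  }
}
{code}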



> replace datanode failed because of  NameNodeRpcServer#getAdditionalDatanode 
> returning excessive datanodeInfo
> 
>
> Key: HDFS-13915
> URL: https://issues.apache.org/jira/browse/HDFS-13915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
> Environment: 
>Reporter: Jiandan Yang 
>Priority: Major
>
> Consider the following situation:
> 1. create a file with the ALLSSD policy
> 2. the chosen storage types are [SSD,SSD,DISK] due to lack of SSD space
> 3. the client calls NameNodeRpcServer#getAdditionalDatanode when recovering the 
> write pipeline and replacing a bad datanode
> 4. BlockPlacementPolicyDefault#chooseTarget calls 
> BlockStoragePolicy#chooseStorageTypes(3, [SSD,DISK], none, false), but 
> chooseStorageTypes returns [SSD,SSD]
> {code:java}
>   @Test
>   public void testAllSSDFallbackAndNonNewBlock() {
> final BlockStoragePolicy allSSD = POLICY_SUITE.getPolicy(ALLSSD);
> List<StorageType> storageTypes = allSSD.chooseStorageTypes((short) 3,
> Arrays.asList(StorageType.DISK, StorageType.SSD),
> EnumSet.noneOf(StorageType.class), false);
> assertEquals(2, storageTypes.size());
> assertEquals(StorageType.SSD, storageTypes.get(0));
> assertEquals(StorageType.SSD, storageTypes.get(1));
>   }
> {code}
> 5. chooseTarget then does numOfReplicas = requiredStorageTypes.size(), so 
> numOfReplicas is set to 2, and it chooses two additional datanodes
> 6. BlockPlacementPolicyDefault#chooseTarget returns four datanodes to the client
> 7. DataStreamer#findNewDatanode finds nodes.length != original.length + 1 and 
> throws an IOException, which finally causes the write to fail
> {code:java}
> private int findNewDatanode(final DatanodeInfo[] original
>   ) throws IOException {
> if (nodes.length != original.length + 1) {
>   throw new IOException(
>   "Failed to replace a bad datanode on the existing pipeline "
>   + "due to no more good datanodes being available to try. "
>   + "(Nodes: current=" + Arrays.asList(nodes)
>   + ", original=" + Arrays.asList(original) + "). "
>   + "The current failed datanode replacement policy is "
>   + dfsClient.dtpReplaceDatanodeOnFailure
>   + ", and a client may configure this via '"
>   + BlockWrite.ReplaceDatanodeOnFailure.POLICY_KEY
>   + "' in its configuration.");
> }
> for(int i = 0; i < nodes.length; i++) {
>   int j = 0;
>   for(; j < original.length && !nodes[i].equals(original[j]); j++);
>   if (j == original.length) {
> return i;
>   }
> }
> throw new IOException("Failed: new datanode not found: nodes="
> + Arrays.asList(nodes) + ", original=" + Arrays.asList(original));
>   }
> {code}
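> To make the failure arithmetic concrete, a standalone sketch of the length check 
> (the datanode names are made up; only the lengths matter):
> {code:java}
> import java.io.IOException;
> import java.util.Arrays;
> 
> public class FindNewDatanodeCheck {
>   static void check(String[] nodes, String[] original) throws IOException {
>     if (nodes.length != original.length + 1) {
>       throw new IOException("Failed to replace a bad datanode: current="
>           + Arrays.asList(nodes) + ", original=" + Arrays.asList(original));
>     }
>   }
> 
>   public static void main(String[] args) throws IOException {
>     String[] original = {"dn1", "dn4"};            // 2 surviving nodes
>     String[] nodes = {"dn1", "dn2", "dn3", "dn4"}; // 4 nodes from the NameNode
>     check(nodes, original);  // 4 != 2 + 1, so this throws as in the log below
>   }
> }
> {code}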
> The client warning log is:
>  {code:java}
> WARN [DataStreamer for file 
> /home/yarn/opensearch/in/data/120141286/0_65535/table/ucs_process/MANIFEST-093545
>  block BP-1742758844-11.138.8.184-1483707043031:blk_7086344902_6012765313] 
> org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[11.138.5.4:50010,DS-04826cfc-1885-4213-a58b-8606845c5c42,SSD],
>  
> DatanodeInfoWithStorage[11.138.5.9:50010,DS-f6d8eb8b-2550-474b-a692-c991d7a6f6b3,SSD],
>  
> DatanodeInfoWithStorage[11.138.5.153:50010,DS-f5d77ca0-6fe3-4523-8ca8-5af975f845b6,SSD],
>  
> DatanodeInfoWithStorage[11.138.9.156:50010,DS-0d15ea12-1bad--84f7-1a4917a1e194,DISK]],
>  
> original=[DatanodeInfoWithStorage[11.138.5.4:50010,DS-04826cfc-1885-4213-a58b-8606845c5c42,SSD],
>  
> DatanodeInfoWithStorage[11.138.9.156:50010,DS-0d15ea12-1bad--84f7-1a4917a1e194,DISK]]).
>  The current failed d

[jira] [Commented] (HDDS-774) Remove OpenContainerBlockMap from datanode

2018-11-13 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686174#comment-16686174
 ] 

Jitendra Nath Pandey commented on HDDS-774:
---

+1 for the patch.

> Remove OpenContainerBlockMap from datanode
> --
>
> Key: HDDS-774
> URL: https://issues.apache.org/jira/browse/HDDS-774
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-774.000.patch, HDDS-774.001.patch
>
>
> With HDDS-675, partial flush of uncommitted keys on datanodes is no longer 
> required, so OpenContainerBlockMap no longer serves any purpose.






[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2018-11-13 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686147#comment-16686147
 ] 

Weiwei Yang commented on HDFS-6874:
---

Hi [~elgoiri]

The change in WebHdfsFileSystem was for downward compatibility. Without it, 
both {{TestHttpFSFWithWebhdfsFileSystem}} and 
{{TestHttpFSFWithSWebhdfsFileSystem}} will fail when the operation is 
{{GETFILEBLOCKLOCATIONS}}, with the following error:
{noformat}
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): 
Unsupported Operation [GET_BLOCK_LOCATIONS]
at 
org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:89)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:509)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:135)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:745)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:820)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:648)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:686)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:682)
 at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileBlockLocations(WebHdfsFileSystem.java:1761)
 at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testGetFileBlockLocations(BaseTestHttpFSWith.java:1663)
 at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.operation(BaseTestHttpFSWith.java:1207)
 at 
org.apache.hadoop.fs.http.client.BaseTestHttpFSWith.testOperation(BaseTestHttpFSWith.java:1235)
 at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.apache.hadoop.test.TestHdfsHelper$HdfsStatement.evaluate(TestHdfsHelper.java:94)
 at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
 at 
org.apache.hadoop.test.TestExceptionHelper$1.evaluate(TestExceptionHelper.java:42)
 at org.apache.hadoop.test.TestJettyHelper$1.evaluate(TestJettyHelper.java:74)
 at org.apache.hadoop.test.TestDirHelper$1.evaluate(TestDirHelper.java:106)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at org.junit.runners.Suite.runChild(Suite.java:128)
 at org.junit.runners.Suite.runChild(Suite.java:27)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at org.junit.runners.Suite.runChild(Suite.java:128)
 at org.junit.runners.Suite.runChild(Suite.java:27)
 at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
 at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:4

[jira] [Comment Edited] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686140#comment-16686140
 ] 

Jiandan Yang  edited comment on HDFS-14045 at 11/14/18 6:49 AM:


There are many "[ERROR] Error occurred in starting fork, check output in log" 
entries in the test log, and I think something may be wrong with Jenkins.

Uploading [^HDFS-14045.010.patch] to trigger Jenkins.


was (Author: yangjiandan):
There are many "[ERROR] Error occurred in starting fork, check output in log" 
entries in the test log, and I think something may be wrong with Jenkins.

Uploading [^HDFS-14045] to trigger Jenkins.

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.
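> To illustrate the idea, a hypothetical sketch (not the attached patch) that 
> keeps one rate per NameNode so Active and Standby latencies are reported 
> separately; the class and metric names here are illustrative only:
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import org.apache.hadoop.metrics2.lib.MetricsRegistry;
> import org.apache.hadoop.metrics2.lib.MutableRate;
> 
> public class PerNameNodeRpcMetrics {
>   private final MetricsRegistry registry = new MetricsRegistry("DataNodeActivity");
>   private final Map<String, MutableRate> rates = new ConcurrentHashMap<>();
> 
>   /** E.g. addLatency("Heartbeats", "ns1", "nn1", 3) feeds "Heartbeats-ns1-nn1". */
>   public void addLatency(String op, String serviceId, String nnId, long millis) {
>     rates.computeIfAbsent(op + "-" + serviceId + "-" + nnId, registry::newRate)
>         .add(millis);
>   }
> }
> {code}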






[jira] [Commented] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686140#comment-16686140
 ] 

Jiandan Yang  commented on HDFS-14045:
--

There are many "[ERROR] Error occurred in starting fork, check output in log" 
entries in the test log, and I think something may be wrong with Jenkins.

Uploading [^HDFS-14045] to trigger Jenkins.

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.






[jira] [Created] (HDFS-14076) NameNodeResourceChecker#isResourceAvailable() should check the Linux filesystem inode usage.

2018-11-13 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-14076:
-

 Summary: NameNodeResourceChecker#isResourceAvailable() should 
check the Linux filesystem inode usage.
 Key: HDFS-14076
 URL: https://issues.apache.org/jira/browse/HDFS-14076
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


A Linux system gives the *"No space left on device"* exception in two cases:
 # Disk space is not available
 # The Linux inode limit is exceeded

NameNodeResourceChecker currently checks only the disk space; it should also 
check the inode usage.

We got the *"No space left on device"* exception even though disk space was 
available, and the roll-edit operation failed. After analysis we found that 
inode usage on the filesystem was 100%.
{noformat}
2018-11-10 18:59:37,913 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Error: starting log segment 796xx failed for (journal 
JournalAndStream(mgr=FileJournalManager(root=/opt/xx), stream=null))
java.io.FileNotFoundException: 
/opt/xxx/edits_inprogress_7964819 (No space left on device)
 at java.io.RandomAccessFile.open0(Native Method)
 at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
 at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
 at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.<init>(EditLogFileOutputStream.java:88){noformat}
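
A minimal sketch of how the inode usage could be probed from Java, assuming GNU 
df with --output support is available (the helper below is illustrative, not the 
proposed patch):

{code:java}
// Illustrative sketch, not the proposed patch: read the inode use% of the
// filesystem holding 'dir' via GNU df (assumes coreutils with --output and
// a filesystem that reports inode counts).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class InodeUsageProbe {
  public static int inodeUsePercent(String dir) throws IOException {
    Process p = new ProcessBuilder("df", "--output=ipcent", dir).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      r.readLine();                          // skip the "IUse%" header line
      return Integer.parseInt(r.readLine().trim().replace("%", ""));
    }
  }

  public static void main(String[] args) throws IOException {
    // A volume is effectively full once inodes run out, even with free bytes.
    System.out.println("inode use% = " + inodeUsePercent("/opt"));
  }
}
{code}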






[jira] [Updated] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-14045:
-
Attachment: HDFS-14045.010.patch

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch, HDFS-14045.010.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.






[jira] [Commented] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686136#comment-16686136
 ] 

Hadoop QA commented on HDFS-14045:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 7 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}251m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
|   | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.fs.viewfs.TestViewFileSystemWithTruncate |
|   | hadoop.hdfs.server.namenode.TestListOpenFiles |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotStatsMXBean |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestModTime |
|   | hadoop.hdfs.TestDFSClientFailover |
|   | hadoop.hdfs.TestDisableConnCache |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDataStream |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.server.nameno

[jira] [Comment Edited] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686134#comment-16686134
 ] 

Jiandan Yang  edited comment on HDFS-14045 at 11/14/18 6:41 AM:


Thanks [~elgoiri] for your comments.
{quote}TestDataNodeMetrics#testNNRpcMetricsWithFederationAndHA(), 
testNNRpcMetricsWithFederation() and testNNRpcMetricsWithHA(), no need to 
extract the suffix.
{quote}
I've removed the suffix in [^HDFS-14045.009.patch].
{quote}I'm not sure about the Unknown-Unknown behavior, if we cannot determine 
the id, we may want to just leave it as it was?
{quote}
Do you mean we should not emit the metrics when the suffix is Unknown-Unknown? 
I do not understand what you mean.
{quote}Which unit test makes sure that HeartbeatsNumOps and HeartbeatsAvgTime 
are still showing the old values? It looks good but just to verify.
{quote}
Good suggestion; I've added verification of HeartbeatsNumOps in 
[^HDFS-14045.009.patch].


was (Author: yangjiandan):
Thanks [~elgoiri] for you comments.
{quote}
TestDataNodeMetrics#testNNRpcMetricsWithFederationAndHA(), 
testNNRpcMetricsWithFederation() and testNNRpcMetricsWithHA(), no need to 
extract the suffix.
{quote}
I've remove suffix in [^HDFS-14045.009.patch]
{quote}
 I'm not sure about the Unknown-Unknown behavior, if we cannot determine the 
id, we may want to just leave it as it was?
{quote}
Do you mean do not make metrics when suffix is Unknown-Unknown?I do not 
understand what your mean.
{quote}
Which unit test makes sure that HeartbeatsNumOps and HeartbeatsAvgTime are 
still showing the old values? It looks good but just to verify.
{quote}
A good suggestion, I've add verification about HeartbeatsNumOps in  
[^HDFS-14045.009.patch]

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.






[jira] [Comment Edited] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686134#comment-16686134
 ] 

Jiandan Yang  edited comment on HDFS-14045 at 11/14/18 6:38 AM:


Thanks [~elgoiri] for your comments.
{quote}
TestDataNodeMetrics#testNNRpcMetricsWithFederationAndHA(), 
testNNRpcMetricsWithFederation() and testNNRpcMetricsWithHA(), no need to 
extract the suffix.
{quote}
I've removed the suffix in [^HDFS-14045.009.patch].
{quote}
I'm not sure about the Unknown-Unknown behavior, if we cannot determine the 
id, we may want to just leave it as it was?
{quote}
Do you mean we should not emit the metrics when the suffix is Unknown-Unknown? 
I do not understand what you mean.
{quote}
Which unit test makes sure that HeartbeatsNumOps and HeartbeatsAvgTime are 
still showing the old values? It looks good but just to verify.
{quote}
Good suggestion; I've added verification of HeartbeatsNumOps in 
[^HDFS-14045.009.patch].


was (Author: yangjiandan):
Thanks [~elgoiri] for you comments.
{quota}
TestDataNodeMetrics#testNNRpcMetricsWithFederationAndHA(), 
testNNRpcMetricsWithFederation() and testNNRpcMetricsWithHA(), no need to 
extract the suffix.
{quota}
I've remove suffix in [^HDFS-14045.009.patch]
{quota}
 I'm not sure about the Unknown-Unknown behavior, if we cannot determine the 
id, we may want to just leave it as it was?
{quota}
Do you mean do not make metrics when suffix is Unknown-Unknown?I do not 
understand what your mean.
{quota}
Which unit test makes sure that HeartbeatsNumOps and HeartbeatsAvgTime are 
still showing the old values? It looks good but just to verify.
{quota}
A good suggestion, I've add verification about HeartbeatsNumOps in  
[^HDFS-14045.009.patch]

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.






[jira] [Commented] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686134#comment-16686134
 ] 

Jiandan Yang  commented on HDFS-14045:
--

Thanks [~elgoiri] for your comments.
{quote}
TestDataNodeMetrics#testNNRpcMetricsWithFederationAndHA(), 
testNNRpcMetricsWithFederation() and testNNRpcMetricsWithHA(), no need to 
extract the suffix.
{quote}
I've removed the suffix in [^HDFS-14045.009.patch].
{quote}
I'm not sure about the Unknown-Unknown behavior, if we cannot determine the 
id, we may want to just leave it as it was?
{quote}
Do you mean we should not emit the metrics when the suffix is Unknown-Unknown? 
I do not understand what you mean.
{quote}
Which unit test makes sure that HeartbeatsNumOps and HeartbeatsAvgTime are 
still showing the old values? It looks good but just to verify.
{quote}
Good suggestion; I've added verification of HeartbeatsNumOps in 
[^HDFS-14045.009.patch].

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics for the RPC latency of every 
> NameNode, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby helps us obtain more precise metric data.






[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-13 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Priority: Critical  (was: Major)

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, we received the following exception:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-13 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686126#comment-16686126
 ] 

Ayush Saxena commented on HDFS-14075:
-

Solution: the dfs.namenode.edits.dir.minimum configuration shouldn't be considered 
in the HA case; effectively it should be 0 in HA mode.
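
As a stopgap until the check itself is fixed, the value could be pinned explicitly 
in hdfs-site.xml; this is my reading of the proposal, not a committed fix:

{noformat}
<!-- Hypothetical workaround for HA deployments: let the shared journals
     alone decide availability by not requiring any local edits dir. -->
<property>
  <name>dfs.namenode.edits.dir.minimum</name>
  <value>0</value>
</property>
{noformat}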

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, we received the following exception:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Updated] (HDFS-14075) NPE while Edit Logging (Minimum Redundant Count )

2018-11-13 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Summary: NPE while Edit Logging (Minimum Redundant Count )  (was: NPE while 
Edit Logging )

> NPE while Edit Logging (Minimum Redundant Count )
> -
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, we received the following exception:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-13 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Summary: NPE while Edit Logging  (was: NPE while Edit Logging (Minimum 
Redundant Count ))

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before the NPE, we received the following exception:
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Created] (HDFS-14075) NPE while Edit Logging

2018-11-13 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14075:
---

 Summary: NPE while Edit Logging 
 Key: HDFS-14075
 URL: https://issues.apache.org/jira/browse/HDFS-14075
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


{noformat}
2018-11-10 18:59:38,427 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Exception while edit logging: null
java.lang.NullPointerException
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
 at java.lang.Thread.run(Thread.java:745)
2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: Exception while edit logging: null
2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:

{noformat}

Before the NPE, we received the following exception:

{noformat}
INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
Call#23241 Retry#0 
org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 

java.io.IOException: Unable to start log segment 7964819: too few journals 
successfully started.
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
at 
org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
Caused by: java.io.IOException: starting log segment 7964819 failed for too 
many journals
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
... 15 more
{noformat}








[jira] [Commented] (HDDS-813) [JDK11] mvn javadoc:javadoc -Phdds fails

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686119#comment-16686119
 ] 

Hadoop QA commented on HDDS-813:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-hdds: The patch generated 6 new + 0 
unchanged - 3 fixed = 6 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-813 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948072/HDDS-813.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3c8488304aff 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/person

[jira] [Commented] (HDDS-801) Quasi close the container when close is not executed via Ratis

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686120#comment-16686120
 ] 

Hadoop QA commented on HDDS-801:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDDS-801 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-801 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948084/HDDS-801.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1705/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Quasi close the container when close is not executed via Ratis
> --
>
> Key: HDDS-801
> URL: https://issues.apache.org/jira/browse/HDDS-801
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-801.000.patch, HDDS-801.001.patch
>
>
> When a datanode receives a CloseContainerCommand and the replication type is 
> not RATIS, we should QUASI close the container. After quasi-closing the 
> container, an ICR has to be sent to SCM.
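
A hedged sketch of the datanode-side flow described above; the handler and
report names here are illustrative assumptions, not the actual code from the
HDDS-801 patch:

{code:java}
// Illustrative sketch only; handler and report names are assumptions.
void onCloseContainerCommand(CloseContainerCommand cmd, Container container) {
  if (cmd.getReplicationType() != ReplicationType.RATIS) {
    // The close was not ordered through Ratis: only QUASI close.
    container.quasiClose();
    // After quasi-closing, send an incremental container report (ICR) to SCM.
    sendIncrementalContainerReport(container);
  } else {
    container.close();
  }
}
{code}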



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-801) Quasi close the container when close is not executed via Ratis

2018-11-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-801:
-
Status: Patch Available  (was: In Progress)

> Quasi close the container when close is not executed via Ratis
> --
>
> Key: HDDS-801
> URL: https://issues.apache.org/jira/browse/HDDS-801
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-801.000.patch, HDDS-801.001.patch
>
>
> When a datanode receives a CloseContainerCommand and the replication type is 
> not RATIS, we should QUASI close the container. After quasi-closing the 
> container, an ICR has to be sent to SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-801) Quasi close the container when close is not executed via Ratis

2018-11-13 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-801:
-
Attachment: HDDS-801.001.patch

> Quasi close the container when close is not executed via Ratis
> --
>
> Key: HDDS-801
> URL: https://issues.apache.org/jira/browse/HDDS-801
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-801.000.patch, HDDS-801.001.patch
>
>
> When a datanode receives a CloseContainerCommand and the replication type is 
> not RATIS, we should QUASI close the container. After quasi-closing the 
> container, an ICR has to be sent to SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686105#comment-16686105
 ] 

Hadoop QA commented on HDDS-836:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
45s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
53s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
34s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 17s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
29s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-ozone/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-ozone/common |
|  |  Uninitialized read of maxKeyLen in new 
org.apache.hadoop.ozone.security.OzoneSecretKey(int, long, byte[], byte[])  At 
OzoneSecretKey.java:new org.apache.hadoop.ozone.security.OzoneSecretKey(int, 
long, byte[], byte[])  At OzoneSecretKey.java:[line 80] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-836 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948066/HDDS
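
The "Uninitialized read of maxKeyLen" FindBugs warning above is the standard
pattern where a constructor reads an instance field before anything has
assigned it. A hedged reconstruction of the pattern, not the actual
OzoneSecretKey source from the patch:

{code:java}
// Sketch of the FindBugs uninitialized-read pattern; not the real HDDS-836 code.
public class UninitializedReadSketch {
  private int maxKeyLen;

  public UninitializedReadSketch(byte[] publicKey) {
    // BUG pattern: maxKeyLen is read while still holding its default (0),
    // because no assignment has happened yet in this constructor.
    if (publicKey.length > maxKeyLen) {
      throw new IllegalArgumentException("key too long");
    }
    // FIX pattern: assign maxKeyLen (e.g. from configuration or a
    // constructor parameter) before the first read.
  }
}
{code}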

[jira] [Updated] (HDDS-813) [JDK11] mvn javadoc:javadoc -Phdds fails

2018-11-13 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-813:
---
Labels: javadoc  (was: )

> [JDK11] mvn javadoc:javadoc -Phdds fails
> 
>
> Key: HDDS-813
> URL: https://issues.apache.org/jira/browse/HDDS-813
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: javadoc
> Attachments: HDDS-813.001.patch
>
>
> {{mvn javadoc:javadoc -Phdds}} fails on Java 11
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java:107:
>  error: bad use of '>'
> [ERROR]* @param count count must be > 0.
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/LocatedContainer.java:85:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return Set nodes that currently host the container
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmLocatedBlock.java:71:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return List nodes that currently host the block
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: malformed HTML
> [ERROR]   * @return Map with values to be logged in audit.
> [ERROR]                 ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: bad use of '>'
> [ERROR]   * @return Map with values to be logged in audit.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-813) [JDK11] mvn javadoc:javadoc -Phdds fails

2018-11-13 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-813:
---
Summary: [JDK11] mvn javadoc:javadoc -Phdds fails  (was: [JDK11] mvn 
javadoc:javadoc fails)

> [JDK11] mvn javadoc:javadoc -Phdds fails
> 
>
> Key: HDDS-813
> URL: https://issues.apache.org/jira/browse/HDDS-813
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-813.001.patch
>
>
> {{mvn javadoc:javadoc -Phdds}} fails on Java 11
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java:107:
>  error: bad use of '>'
> [ERROR]* @param count count must be > 0.
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/LocatedContainer.java:85:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return Set nodes that currently host the container
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmLocatedBlock.java:71:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return List nodes that currently host the block
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: malformed HTML
> [ERROR]   * @return Map with values to be logged in audit.
> [ERROR]                 ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: bad use of '>'
> [ERROR]   * @return Map with values to be logged in audit.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-813) [JDK11] mvn javadoc:javadoc fails

2018-11-13 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-813:
---
Attachment: HDDS-813.001.patch
Status: Patch Available  (was: Open)

[~ajisakaa] san - Thank you for reporting the issue. Attached patch 001 for 
your review.

This will generate a few checkstyle issues for ContainerReader.java & 
HddsVolume.java.

Based on our previous Hadoop javadoc JIRAs, we can ignore these checkstyle 
violations, since they are on comments meant to ease the work of developers.
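
For reference, the usual remedy for these doclint failures is to stop letting
javadoc parse literal angle brackets as HTML; a hedged sketch of the
before/after shape (the actual patch may use different escapes, and the
DatanodeInfo type parameter below is only a stand-in to keep the sketch
self-contained):

{code:java}
import java.util.Set;

// Sketch only: JDK 11 doclint rejects raw '>' and bare generics in javadoc,
// so they are wrapped in {@literal}/{@code} instead of plain text.
interface JavadocFixSketch<DatanodeInfo> {
  /**
   * @param count count must be {@literal >} 0.
   * @return {@code Set<DatanodeInfo>} nodes that currently host the container
   */
  Set<DatanodeInfo> getNodes(int count);
}
{code}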

> [JDK11] mvn javadoc:javadoc fails
> -
>
> Key: HDDS-813
> URL: https://issues.apache.org/jira/browse/HDDS-813
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-813.001.patch
>
>
> {{mvn javadoc:javadoc -Phdds}} fails on Java 11
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java:107:
>  error: bad use of '>'
> [ERROR]* @param count count must be > 0.
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/LocatedContainer.java:85:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return Set nodes that currently host the container
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmLocatedBlock.java:71:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return List nodes that currently host the block
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: malformed HTML
> [ERROR]   * @return Map with values to be logged in audit.
> [ERROR]                 ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: bad use of '>'
> [ERROR]   * @return Map with values to be logged in audit.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-13 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686088#comment-16686088
 ] 

Bharat Viswanadham commented on HDDS-816:
-

I have attached a proposed approach and am currently working on the code 
changes; I will submit the patch soon.

 

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, Metrics for number of volumes, 
> buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys
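
For illustration, such counters are usually exposed through the Hadoop
metrics2 library; a minimal sketch under assumed names (the attached proposal
may structure this differently):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Sketch only: class and metric names are assumptions.
@Metrics(about = "Ozone Manager object counts", context = "ozone")
public class OMObjectCountMetricsSketch {
  @Metric private MutableGaugeLong numVolumes;
  @Metric private MutableGaugeLong numBuckets;
  @Metric private MutableGaugeLong numKeys;

  public static OMObjectCountMetricsSketch create() {
    return DefaultMetricsSystem.instance().register(
        "OMObjectCountMetricsSketch", "OM object counts",
        new OMObjectCountMetricsSketch());
  }

  // Bump the gauges from the volume/bucket/key create and delete paths.
  public void incNumVolumes() { numVolumes.incr(); }
  public void decNumVolumes() { numVolumes.decr(); }
  public void incNumBuckets() { numBuckets.incr(); }
  public void decNumBuckets() { numBuckets.decr(); }
  public void incNumKeys()    { numKeys.incr(); }
  public void decNumKeys()    { numKeys.decr(); }
}
{code}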



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-13 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-816:

Attachment: Proposed Approach.pdf

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, Metrics for number of volumes, 
> buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-813) [JDK11] mvn javadoc:javadoc fails

2018-11-13 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686083#comment-16686083
 ] 

Akira Ajisaka commented on HDDS-813:


[~ljain], this issue is not fixed in the hadoop-hdds-common module. That 
module is compiled only when the -Phdds option is used.

> [JDK11] mvn javadoc:javadoc fails
> -
>
> Key: HDDS-813
> URL: https://issues.apache.org/jira/browse/HDDS-813
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> {{mvn javadoc:javadoc -Phdds}} fails on Java 11
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java:107:
>  error: bad use of '>'
> [ERROR]* @param count count must be > 0.
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/LocatedContainer.java:85:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return Set nodes that currently host the container
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmLocatedBlock.java:71:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return List nodes that currently host the block
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: malformed HTML
> [ERROR]   * @return Map with values to be logged in audit.
> [ERROR]                 ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: bad use of '>'
> [ERROR]   * @return Map with values to be logged in audit.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-813) [JDK11] mvn javadoc:javadoc fails

2018-11-13 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686072#comment-16686072
 ] 

Lokesh Jain commented on HDDS-813:
--

[~dineshchitlangia] I think the issue has already been fixed by HADOOP-15904.

> [JDK11] mvn javadoc:javadoc fails
> -
>
> Key: HDDS-813
> URL: https://issues.apache.org/jira/browse/HDDS-813
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> {{mvn javadoc:javadoc -Phdds}} fails on Java 11
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java:107:
>  error: bad use of '>'
> [ERROR]* @param count count must be > 0.
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/LocatedContainer.java:85:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return Set nodes that currently host the container
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmLocatedBlock.java:71:
>  error: unknown tag: DatanodeInfo
> [ERROR]   * @return List nodes that currently host the block
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: malformed HTML
> [ERROR]   * @return Map with values to be logged in audit.
> [ERROR]                 ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/Auditable.java:28:
>  error: bad use of '>'
> [ERROR]   * @return Map with values to be logged in audit.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686055#comment-16686055
 ] 

Ajay Kumar commented on HDDS-836:
-

[~xyao] thanks for the review. Attached patch v1 to address the comments and 
Jenkins issues.
{quote}OzoneBlockTokenIdentifier.java

Line 67: I think we should build the user based on the blockid instead of using 
null or empty string here.
{quote}
 done.
{quote}OzoneSecretKey.java

Line 58: should we put this into security config or some dev only 
configuration? 
{quote}
 assigned it from the constructor; a client using it can pass this value from 
config.
{quote}Line 113-154: can we separate these key encode/decode stuff into a 
utility class or use the ca client interface?
{quote}
Moved it to SecurityUtil. (This class doesn't look like a CertificateClient.)
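
In other words, the former constant becomes a constructor argument that
callers source from configuration; a hedged sketch with assumed names (the
config key below is hypothetical, not from the patch):

{code:java}
// Sketch only: class, parameter, and config key names are assumptions.
public final class OzoneSecretKeySketch {
  private final int maxKeyLen;

  public OzoneSecretKeySketch(int keyId, long expiryDate,
      java.security.KeyPair keyPair, int maxKeyLen) {
    this.maxKeyLen = maxKeyLen;  // injected instead of hard-coded
    // ...
  }
}

// Caller side (assuming a Hadoop Configuration 'conf'):
//   int maxKeyLen = conf.getInt("ozone.secret.key.max.length", 1024);
//   new OzoneSecretKeySketch(id, expiry, pair, maxKeyLen);
{code}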

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch, HDDS-836-HDDS-4.01.patch
>
>
> Create Ozone identifier for delegation token and block token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-836:

Attachment: HDDS-836-HDDS-4.01.patch

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch, HDDS-836-HDDS-4.01.patch
>
>
> Create Ozone identifier for delegation token and block token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-13 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14064:

Attachment: HDFS-14064-04.patch

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch, HDFS-14064-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686028#comment-16686028
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
7s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
46s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test

[jira] [Commented] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686025#comment-16686025
 ] 

Hadoop QA commented on HDFS-14069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 650 unchanged - 0 fixed = 659 total (was 650) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
25s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.mover.TestMover |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocalMetrics |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.web.Tes

[jira] [Updated] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-13 Thread Jiandan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated HDFS-14045:
-
Attachment: HDFS-14045.009.patch

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch, 
> HDFS-14045.009.patch
>
>
> Currently the DataNode uses the same metrics to measure RPC latency to the 
> NameNodes, but the Active and Standby usually perform differently at the same 
> time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long while it is catching up on the editlog, so we may 
> misjudge the state of HDFS. Using different metrics for the Active and the 
> Standby can give us more precise metric data.
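
A minimal illustration of the proposed split, timing each RPC against the role
of the NameNode that served it; the metric names here are assumptions, not the
patch's actual names:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Sketch only: one MutableRate per NN role instead of a single shared rate.
public class PerRoleLatencySketch {
  @Metric private MutableRate heartbeatsToActive;
  @Metric private MutableRate heartbeatsToStandby;

  public void addHeartbeatLatency(long elapsedMillis, boolean servedByActive) {
    if (servedByActive) {
      heartbeatsToActive.add(elapsedMillis);
    } else {
      heartbeatsToStandby.add(elapsedMillis);
    }
  }
}
{code}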



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685997#comment-16685997
 ] 

Hadoop QA commented on HDFS-14017:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
44s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
26s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 9 new + 4 unchanged - 3 fixed = 13 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
31s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948029/HDFS-14017-HDFS-12943.010.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4676290b7fac 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25513/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25513/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit

[jira] [Commented] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685993#comment-16685993
 ] 

Hadoop QA commented on HDDS-836:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
51s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
29s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 15m 
30s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  7s{color} | {color:orange} root: The patch generated 7 new + 0 unchanged - 
0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m  
7s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-836 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948041/HDDS-836-HDDS-4.00.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c52c4ab396c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDDS-4 / 6

[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685992#comment-16685992
 ] 

Konstantin Shvachko commented on HDFS-14067:


OK, sounds like changing the server side is too much for this jira. Let's 
target it for later; for testing, using {{--forcemanual}} should be sufficient.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the 
> transition from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-11-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685983#comment-16685983
 ] 

Xiaoyu Yao commented on HDDS-8:
---

[~jnp], AbstractDelegationTokenSecretManager assumes a symmetric key scheme 
internally, e.g. the Map of masterKeyId->DelegationKey (a single shared key).

If we want to reuse it, we would have to introduce a dependency change in 
hadoop-common to fit the OzoneSecretKey (asymmetric key pair). This may not be 
desirable now that we no longer depend on a SNAPSHOT jar of hadoop-common.

Agree that we might be able to reuse some of the 
AbstractDelegationTokenSelector/Identifier code.
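
To make the mismatch concrete, a hedged sketch (not actual source) of the two
shapes:

{code:java}
// hadoop-common side (simplified): AbstractDelegationTokenSecretManager
// tracks one symmetric secret per key id, e.g.
//   Map<Integer, DelegationKey> allKeys;  // DelegationKey wraps one shared key
//
// Ozone side (sketch): an asymmetric pair per key id, which that value type
// cannot carry without a hadoop-common change.
class OzoneSecretKeyShapeSketch {
  int keyId;
  java.security.PublicKey publicKey;    // distributed for verification
  java.security.PrivateKey privateKey;  // held by OM for signing
}
{code}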

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch, HDDS-8-HDDS-4.13.patch
>
>
> Add delegation token functionality to Ozone layer. We will re-use hadoop rpc 
> layer TOKEN authentication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685976#comment-16685976
 ] 

Xiaoyu Yao commented on HDDS-836:
-

Thanks [~ajayydv] for the patch. It looks good to me overall. Just two minor 
issues:

OzoneBlockTokenIdentifier.java

Line 67: I think we should build the user based on the blockid instead of using 
null or empty string here.

OzoneSecretKey.java

Line 58: should we put this into security config or some dev only 
configuration?

Line 113-154: can we separate these key encode/decode stuff into a utility 
class or use the ca client interface?

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch
>
>
> Create Ozone identifier for delegation token and block token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685962#comment-16685962
 ] 

Konstantin Shvachko edited comment on HDFS-14017 at 11/14/18 1:06 AM:
--

For the record, my assumptions in the comment above were incorrect based on my 
recent evaluation of the state of the art. Here is how IPFailoverPP is 
configured:
{code:java}
// Client uses only these two lines from core-site.xml
fs.defaultFS = virtual-address-nn.in.com:8020
dfs.client.failover.proxy.provider.virtual-address-nn.in.com = 
o.a.h...IPFailoverProxyProvider

// Standard HA configuration for the NameNode in hdfs-site-xml
dfs.nameservices = my-cluster
dfs.ha.namenodes.my-cluster = nn1, nn2
dfs.namenode.rpc-address.my-cluster.nn1 = physical-address-ha1.in.com:8020
dfs.namenode.rpc-address.my-cluster.nn2 = physical-address-ha2.in.com:8020
{code}
From HDFS-6334 I understand IPFPP was intentionally made to look like it talks 
to a single NameNode, which looks hacky now. We have multiple NameNodes and 
the proxy provider is in control of which NN it should direct the call to, so 
using the NN's logical name (aka nameserviceID) seems the right way for newly 
developed proxy providers. We should still support the current way for IPFPP 
for backward compatibility, so be it.

For ORPPwithIPF we still need to know virtual address for NameNode failover. I 
suggest we add a new parameter for that, adding it to the config above:
{code:java}
dfs.client.failover.ipfailover.virtual-address.my-cluster = 
virtual-address-nn.in.com:8020
{code}
So the ORPP part will use {{dfs.nameservices}} to obtain physical addresses of 
NNs, and the IPF part will instantiate IPFPP based on 
{{dfs.client.failover.ipfailover.virtual-address}} parameter.
 And we can still support traditional IPFPP (without Observer) using current 
{{core-site.xml}} configuration.
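
As a rough illustration of the proposed wiring (the lookup shape below is an
assumption; only the new virtual-address key comes from the proposal above):

{code:java}
// Sketch only: ORPP resolves physical NN addresses from the standard HA
// config, while the IPF part builds its proxy from the new virtual address.
String nsId = conf.get("dfs.nameservices");               // e.g. my-cluster
String virtualAddrKey =
    "dfs.client.failover.ipfailover.virtual-address." + nsId;
java.net.InetSocketAddress virtualAddr =
    org.apache.hadoop.net.NetUtils.createSocketAddr(conf.get(virtualAddrKey));
// ORPP part: enumerate dfs.ha.namenodes.<nsId> and look up
// dfs.namenode.rpc-address.<nsId>.<nnId> for each physical NN.
// IPF part: fail over through virtualAddr only.
{code}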


was (Author: shv):
For the record, my assumptions in the comment above were incorrect based on my 
recent evaluation of the state of the art. Here is how IPFailoverPP is 
configured:
{code:java}
// Client uses only these two lines from core-site.xml
fs.defaultFS = virtual-address-nn.in.com:8020
dfs.client.failover.proxy.provider.virtual-address-nn.in.com = 
o.a.h...IPFailoverProxyProvider

// Standard HA configuration for the NameNode in hdfs-site-xml
dfs.nameservices = my-cluster
dfs.ha.namenodes.my-cluster = nn1, nn2
dfs.namenode.rpc-address.my-cluster.nn1 = physical-address-ha1.in.com:8020
dfs.namenode.rpc-address.my-cluster.nn2 = physical-address-ha2.in.com:8020
{code}
From HDFS-6334 I understand IPFPP was intentionally made to look like it talks 
to a single node, which looks hacky now. We have multiple NameNodes and 
the proxy provider is in control of which NN it should direct the call to, so 
using the NN's logical name (aka nameserviceID) seems the right way for newly 
developed proxy providers. We should still support the current way for IPFPP 
for backward compatibility, so be it.

For ORPPwithIPF we still need to know virtual address for NameNode failover. I 
suggest we add a new parameter for that, adding it to the config above:
{code:java}
dfs.client.failover.ipfailover.virtual-address.my-cluster = 
virtual-address-nn.in.com:8020
{code}
So the ORPP part will use {{dfs.nameservices}} to obtain physical addresses of 
NNs, and the IPF part will instantiate IPFPP based on 
{{dfs.client.failover.ipfailover.virtual-address}} parameter.
And we can still support traditional IPFPP (without Observer) using current 
{{core-site.xml}} configuration.

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch, HDFS-14017-HDFS-12943.010.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough, 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really app

[jira] [Comment Edited] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685962#comment-16685962
 ] 

Konstantin Shvachko edited comment on HDFS-14017 at 11/14/18 1:06 AM:
--

For the record, my assumptions in [the comment 
above|https://issues.apache.org/jira/browse/HDFS-14017?focusedCommentId=16682588&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16682588]
 were incorrect based on my recent evaluation of the state of the art. Here is 
how IPFailoverPP is configured:
{code:java}
// Client uses only these two lines from core-site.xml
fs.defaultFS = virtual-address-nn.in.com:8020
dfs.client.failover.proxy.provider.virtual-address-nn.in.com = 
o.a.h...IPFailoverProxyProvider

// Standard HA configuration for the NameNode in hdfs-site-xml
dfs.nameservices = my-cluster
dfs.ha.namenodes.my-cluster = nn1, nn2
dfs.namenode.rpc-address.my-cluster.nn1 = physical-address-ha1.in.com:8020
dfs.namenode.rpc-address.my-cluster.nn2 = physical-address-ha2.in.com:8020
{code}
From HDFS-6334 I understand IPFPP was intentionally made to look like it talks 
to a single NameNode, which looks hacky now. We have multiple NameNodes and 
the proxy provider is in control of which NN it should direct the call to, so 
using the NN's logical name (aka nameserviceID) seems the right way for newly 
developed proxy providers. We should still support the current way for IPFPP 
for backward compatibility, so be it.

For ORPPwithIPF we still need to know the virtual address for NameNode 
failover. I suggest we add a new parameter for that, adding it to the config 
above:
{code:java}
dfs.client.failover.ipfailover.virtual-address.my-cluster = virtual-address-nn.in.com:8020
{code}
So the ORPP part will use {{dfs.nameservices}} to obtain the physical 
addresses of the NNs, and the IPF part will instantiate IPFPP based on the 
{{dfs.client.failover.ipfailover.virtual-address}} parameter. And we can still 
support traditional IPFPP (without Observer) using the current 
{{core-site.xml}} configuration.


was (Author: shv):
For the record, my assumptions in the comment above were incorrect based on my 
recent evaluation of the state of the art. Here is how IPFailoverPP is 
configured:
{code:java}
// Client uses only these two lines from core-site.xml
fs.defaultFS = virtual-address-nn.in.com:8020
dfs.client.failover.proxy.provider.virtual-address-nn.in.com = o.a.h...IPFailoverProxyProvider

// Standard HA configuration for the NameNode in hdfs-site-xml
dfs.nameservices = my-cluster
dfs.ha.namenodes.my-cluster = nn1, nn2
dfs.namenode.rpc-address.my-cluster.nn1 = physical-address-ha1.in.com:8020
dfs.namenode.rpc-address.my-cluster.nn2 = physical-address-ha2.in.com:8020
{code}
From HDFS-6334 I understand IPFPP was intentionally made to look like it talks 
to a single NameNode, which looks hacky now. We have multiple NameNodes, and 
the proxy provider controls which NN it should direct the call to, so using 
the NN's logical name (aka nameserviceID) seems the right way for newly 
developed proxy providers. We should still support the current way for IPFPP 
for backward compatibility, so be it.

For ORPPwithIPF we still need to know the virtual address for NameNode 
failover. I suggest we add a new parameter for that, adding it to the config 
above:
{code:java}
dfs.client.failover.ipfailover.virtual-address.my-cluster = virtual-address-nn.in.com:8020
{code}
So the ORPP part will use {{dfs.nameservices}} to obtain the physical 
addresses of the NNs, and the IPF part will instantiate IPFPP based on the 
{{dfs.client.failover.ipfailover.virtual-address}} parameter. And we can still 
support traditional IPFPP (without Observer) using the current 
{{core-site.xml}} configuration.

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch, HDFS-14017-HDFS-12943.010.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}

[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685962#comment-16685962
 ] 

Konstantin Shvachko commented on HDFS-14017:


For the record, my assumptions in the comment above were incorrect based on my 
recent evaluation of the state of the art. Here is how IPFailoverPP is 
configured:
{code:java}
// Client uses only these two lines from core-site.xml
fs.defaultFS = virtual-address-nn.in.com:8020
dfs.client.failover.proxy.provider.virtual-address-nn.in.com = o.a.h...IPFailoverProxyProvider

// Standard HA configuration for the NameNode in hdfs-site-xml
dfs.nameservices = my-cluster
dfs.ha.namenodes.my-cluster = nn1, nn2
dfs.namenode.rpc-address.my-cluster.nn1 = physical-address-ha1.in.com:8020
dfs.namenode.rpc-address.my-cluster.nn2 = physical-address-ha2.in.com:8020
{code}
From HDFS-6334 I understand IPFPP was intentionally made to look like it talks 
to a single NameNode, which looks hacky now. We have multiple NameNodes, and 
the proxy provider controls which NN it should direct the call to, so using 
the NN's logical name (aka nameserviceID) seems the right way for newly 
developed proxy providers. We should still support the current way for IPFPP 
for backward compatibility, so be it.

For ORPPwithIPF we still need to know the virtual address for NameNode 
failover. I suggest we add a new parameter for that, adding it to the config 
above:
{code:java}
dfs.client.failover.ipfailover.virtual-address.my-cluster = virtual-address-nn.in.com:8020
{code}
So the ORPP part will use {{dfs.nameservices}} to obtain the physical 
addresses of the NNs, and the IPF part will instantiate IPFPP based on the 
{{dfs.client.failover.ipfailover.virtual-address}} parameter. And we can still 
support traditional IPFPP (without Observer) using the current 
{{core-site.xml}} configuration.
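
To make the proposed split concrete, here is a minimal sketch of how a 
combined provider could read the two halves of the configuration. It assumes 
the new {{dfs.client.failover.ipfailover.virtual-address}} key proposed above; 
the class and variable names are illustrative, and only standard 
{{Configuration}} accessors are used.
{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Sketch only, assuming the proposed virtual-address key; not a posted patch.
public class OrppWithIpfConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String nsId = "my-cluster";

    // ORPP side: enumerate physical NN addresses from the standard HA keys.
    List<String> physicalAddrs = new ArrayList<>();
    for (String nnId : conf.getTrimmedStrings("dfs.ha.namenodes." + nsId)) {
      physicalAddrs.add(
          conf.get("dfs.namenode.rpc-address." + nsId + "." + nnId));
    }

    // IPF side: the proposed key supplies the virtual address that the
    // wrapped IPFailoverProxyProvider would fail over through.
    String virtualAddr = conf.get(
        "dfs.client.failover.ipfailover.virtual-address." + nsId);

    System.out.println("physical=" + physicalAddrs + " virtual=" + virtualAddr);
  }
}
{code}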

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch, HDFS-14017-HDFS-12943.010.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However this is not enough 
> because when calling the constructor of {{ObserverReadProxyProvider}} in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second issue closely related is about delegation token. For example, in 
> current IPFailover setup, say we have a virtual host nn.xyz.com, which points 
> to either of two physical nodes nn1.xyz.com or nn2.xyz.com. In current HDFS, 
> there is always only one DT being exchanged, which has hostname nn.xyz.com. 
> The server only issues this DT, and the client only knows the host 
> nn.xyz.com, so all is good. But in Observer read, even with IPFailover, the 
> client will no longer contact nn.xyz.com, but will actively reach out to 
> nn1.xyz.com and nn2.xyz.com. During this process, the current code will look 
> for a DT associated with hostname nn1.xyz.com or nn2.xyz.com, which is 
> different from the DT given by the NN, causing token authentication to fail. 
> This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. New IPFailover proxy 
> provider will need to resolve this as well.
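
To illustrate the mismatch described above, here is a sketch with the 
hypothetical hostnames from the description. It only mirrors the service-name 
comparison that {{AbstractDelegationTokenSelector#selectToken}} performs, not 
its actual code.
{code:java}
import org.apache.hadoop.io.Text;

// The DT is issued with the virtual host as its service, but with observer
// reads the client looks tokens up by physical host, so no service matches.
public class DtServiceMismatchSketch {
  public static void main(String[] args) {
    Text issuedService = new Text("nn.xyz.com:8020");   // service in the DT
    Text lookupService = new Text("nn1.xyz.com:8020");  // host the client dials
    // selectToken effectively compares token.getService() with the requested
    // service; here they differ, so no token is selected and authentication
    // fails.
    System.out.println("match = " + issuedService.equals(lookupService));
  }
}
{code}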






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685956#comment-16685956
 ] 

Chao Sun commented on HDFS-14067:
-

bq. Chao, I was going to say that the .000 patch only fixes the client and 
does not address the server-side check, but it is sounding like this JIRA is 
tending towards a "Won't Fix".

Good find [~zero45]. Will keep this in mind in case we do another patch.

bq. Yes, keeping Observer as Standby in ZK is incorrect. Can we change the 
state in ZK for Observer? We can now change the state from Standby to Active 
and vice versa. Should we be able to change it the same way for Standby 
to/from Observer transitions?

Currently the active/standby status in ZK is managed by watching the ZK lock, 
and I'm not sure how the observer can fit into this picture. To populate the 
observer state into ZK, we'll need to change the {{ZKFCProtocol}} and use that 
in {{transitionToObserver}}.
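
For illustration only, such a change could look roughly like the sketch below. 
The interface and method names are invented here and are not from any posted 
patch.
{code:java}
import java.io.IOException;
import org.apache.hadoop.ha.ZKFCProtocol;

// Hypothetical sketch: extending ZKFCProtocol so transitionToObserver can
// tell the local ZKFC to reflect the observer state in ZK.
public interface ObserverAwareZkfcProtocol extends ZKFCProtocol {
  // Record that the local NN is (or is no longer) an observer, so the ZKFC
  // can exclude it from, or re-admit it to, leader election.
  void setObserverState(boolean isObserver) throws IOException;
}
{code}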

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685954#comment-16685954
 ] 

Chen Liang commented on HDFS-14035:
---

Somehow I missed Erik's comment; the suppress warning is not needed anymore, 
and it has been removed in the v016 patch.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch, 
> HDFS-14035-HDFS-12943.016.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.
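
As a rough sketch of the failing probe (assumed method shape, not the patch's 
code): the status call goes over {{HAServiceProtocol}}, a protocol to which an 
HDFS delegation token does not apply, so a token-only caller has no usable 
credential.
{code:java}
import java.io.IOException;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceStatus;

// Sketch of the per-NN status probe ObserverReadProxyProvider relies on.
// In a YARN container that authenticates only with an HDFS delegation
// token, this RPC has no matching credential and fails, as described above.
public class HaStatusProbeSketch {
  static HAServiceStatus probe(HAServiceProtocol rpcProxy) throws IOException {
    return rpcProxy.getServiceStatus();
  }
}
{code}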






[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.016.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch, 
> HDFS-14035-HDFS-12943.016.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.






[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: (was: HDFS-14035-HDFS-12943.016.patch)

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.






[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.016.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch, 
> HDFS-14035-HDFS-12943.016.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.






[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-11-13 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685943#comment-16685943
 ] 

Jitendra Nath Pandey commented on HDDS-8:
-

# Did you consider {{OzoneSecretManager}} extending 
{{AbstractDelegationTokenSecretManager}}? You might be able to re-use a bunch 
of code.
# {{OzoneDelegationTokenSelector}} should extend 
{{AbstractDelegationTokenSelector}}.
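
A minimal sketch of the suggested reuse, assuming a hypothetical 
{{OzoneTokenIdentifier}} type; the constructor parameters follow the 
{{AbstractDelegationTokenSecretManager}} signature, and the kind name is 
illustrative only.
{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;

// Sketch only; a real OzoneTokenIdentifier would also define its fields
// and serialization.
class OzoneTokenIdentifier extends AbstractDelegationTokenIdentifier {
  @Override
  public Text getKind() {
    return new Text("OZONE_DELEGATION_TOKEN"); // illustrative kind name
  }
}

class OzoneSecretManagerSketch
    extends AbstractDelegationTokenSecretManager<OzoneTokenIdentifier> {
  OzoneSecretManagerSketch(long keyUpdateInterval, long tokenMaxLifetime,
      long tokenRenewInterval, long removerScanInterval) {
    super(keyUpdateInterval, tokenMaxLifetime, tokenRenewInterval,
        removerScanInterval);
  }

  @Override
  public OzoneTokenIdentifier createIdentifier() {
    return new OzoneTokenIdentifier();
  }
}
{code}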

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch, HDDS-8-HDDS-4.13.patch
>
>
> Add delegation token functionality to Ozone layer. We will re-use hadoop rpc 
> layer TOKEN authentication.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685933#comment-16685933
 ] 

Konstantin Shvachko commented on HDFS-14067:


??if we manually transition a standby to observer, ZK will still keep the old 
state and even make the observer eligible for election, which is incorrect.??

Yes, keeping Observer as Standby in ZK is incorrect. Can we change the state 
in ZK for Observer? We can now change the state from Standby to Active and 
vice versa. Should we be able to change it the same way for Standby to/from 
Observer transitions?

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Plamen Jeliazkov (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685930#comment-16685930
 ] 

Plamen Jeliazkov commented on HDFS-14067:
-

Thanks [~csun] and [~xkrogen], I understand better now.

Chao, I was going to say that the .000 patch only fixes the client and does 
not address the server-side check, but it is sounding like this JIRA is 
tending towards a "Won't Fix".
I leave it up to you guys how to proceed here; if you end up posting another 
patch I will review it / test it.

For the sake of testing as a whole, though, I will simply use `--forcemanual` 
for now.

If we intend to support automatic failover then it seems we will need to 
complicate things.


> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685920#comment-16685920
 ] 

Ajay Kumar commented on HDDS-836:
-

cc: [~xyao]

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch
>
>
> Create Ozone identifier for delegation token and block token.






[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685924#comment-16685924
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
56s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 14m 
37s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
38s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 21s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test

[jira] [Updated] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-836:

Status: Patch Available  (was: Open)

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch
>
>
> Create Ozone identifier for delegation token and block token.






[jira] [Commented] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685917#comment-16685917
 ] 

Hadoop QA commented on HDFS-14015:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
6s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14015 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948036/HDFS-14015.010.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux af15756c9039 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a13be20 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25510/testReport/ |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25510/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch, 
> HDFS-14015.006.patch, HDFS-14015.007.patch, HDFS-14015.008.patch, 
> HDFS-14015.009.patch, HDFS-14015.010.patch

[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685912#comment-16685912
 ] 

Konstantin Shvachko commented on HDFS-14067:


??we are already calling {{HAServiceProtocol}} methods in the state transition, 
so whoever calls it should already be authenticated, is that correct???

I'm taking my comment about using {{ClientProtocol.getHAServiceState()}} back. 
[~csun] you are right, this is an admin command, so we should use 
{{HAServiceProtocol}}.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Updated] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-836:

Attachment: HDDS-836-HDDS-4.00.patch

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-836-HDDS-4.00.patch
>
>
> Create Ozone identifier for delegation token and block token.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685910#comment-16685910
 ] 

Chao Sun commented on HDFS-14067:
-

[~zero45] observer state is not currently an extension of standby; instead it 
is a separate state in {{HAServiceProtocol}}, which is used by the failover 
mechanism. Regarding the "double promotion", I think the original design is to 
favor standby over observer in failover. Otherwise, we may get unexpected 
behavior from the system (e.g., the only observer gets transitioned into 
active, and all traffic now goes to it). In this case, the failover controller 
will need to recognize the observer state as well, which complicates things.

[~xkrogen] yes agree. I also think we should start to look into HDFS-13182 
soon. 

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685901#comment-16685901
 ] 

Erik Krogen commented on HDFS-14067:


{quote}
In our case, I do see some potential inconsistency. Currently the 
ZKFailoverController only tracks active and standby state. And, if we manually 
transition a standby to observer, ZK will still keep the old state and even 
make the observer eligible for election, which is incorrect. This somewhat 
overlaps with HDFS-13182.
{quote}
Great, thanks for the explanation [~csun]! That sounds like exactly the type 
of concern I had, but I didn't know enough about ZKFailoverController to say 
where the issue might be.

I don't think we have an easy way to solve this until HDFS-13182 is completed. 
Personally, I feel it makes more sense for administrators to transition in/out 
of observer using {{--forcemanual}} for now to force them to consider the 
potential effects of this inconsistency.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Plamen Jeliazkov (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685894#comment-16685894
 ] 

Plamen Jeliazkov commented on HDFS-14067:
-

Hey guys,

Just want to throw in my thoughts here since I am testing this. Of course feel 
free to object / criticize.

I am working off the assumption that Observer state is just an extension of 
Standby state but allows reads as well.
With that assumption, I also assume that `transitionToStandby` on an already 
Standby node should be a no-op, even with automatic failover enabled.
This is why I am expecting manual transition from Standby to Observer to work 
even if I have automatic failover enabled.
Then the path to "automatic failover" with Observers should be, as I 
understand it from the design, just a matter of going from 
Observer -> Standby -> Active when the time comes.

Maybe we are just missing the treatment of Observer as a "double promotion" in 
the automatic failover scenario?

Hope I made sense there. Thanks.
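
A tiny sketch of the transition ordering described above, under the stated 
assumption that Observer is a read-enabled extension of Standby. This is 
illustrative only and is not how the state machine is actually encoded.
{code:java}
// Illustrative only: which manual transitions would be safe under the
// assumption above. Promotion to Active always goes through Standby,
// avoiding the "double promotion" from Observer.
enum NNState { OBSERVER, STANDBY, ACTIVE }

class TransitionSketch {
  static boolean allowed(NNState from, NNState to) {
    return (from == NNState.OBSERVER && to == NNState.STANDBY)
        || (from == NNState.STANDBY && to == NNState.OBSERVER)
        || (from == NNState.STANDBY && to == NNState.ACTIVE)
        || (from == NNState.ACTIVE && to == NNState.STANDBY);
  }
}
{code}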

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-13 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Attachment: HDFS-14069.001.patch

> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch, HDFS-14069.001.patch
>
>
> Currently, we don't provide any debugging info for a decommissioning DN, so 
> it is difficult to determine which blocks are on their last replica. We have 
> two design options:
>  # Add block info for blocks with low replication (configurable)
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new API for querying more detailed info about one DN (a rough 
> sketch follows this list)
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will actually be much more 
> expensive, because we will have to find all the blocks on that DN, and then 
> go through all the blocks again and count how many replicas we have, etc.
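
For option 2, a hypothetical sketch of what such a query API might look like; 
all names are invented for illustration and are not from any posted patch.
{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch of option 2: an on-demand query for one DN's
// decommissioning progress, extensible with more fields later.
interface DecommissionDebugInfo {
  // Blocks on this DN for which this is the last live replica.
  List<String> blocksOnLastReplica() throws IOException;

  // Blocks below their replication target, reported as "block: current/target".
  List<String> lowRedundancyBlocks() throws IOException;
}
{code}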






[jira] [Updated] (HDFS-14015) Improve error handling in hdfsThreadDestructor in native thread local storage

2018-11-13 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14015:

Attachment: HDFS-14015.010.patch

> Improve error handling in hdfsThreadDestructor in native thread local storage
> -
>
> Key: HDFS-14015
> URL: https://issues.apache.org/jira/browse/HDFS-14015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HDFS-14015.001.patch, HDFS-14015.002.patch, 
> HDFS-14015.003.patch, HDFS-14015.004.patch, HDFS-14015.005.patch, 
> HDFS-14015.006.patch, HDFS-14015.007.patch, HDFS-14015.008.patch, 
> HDFS-14015.009.patch, HDFS-14015.010.patch
>
>
> In the hdfsThreadDestructor() function, we ignore the return value from the 
> DetachCurrentThread() call.  We are seeing cases where a native thread dies 
> while holding a JVM monitor, and it doesn't release the monitor.  We're 
> hoping that logging this error instead of ignoring it will shed some light on 
> the issue.  In any case, it's good programming practice.






[jira] [Updated] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-836:

Description: Create Ozone identifier for delegation token and block token.  
(was: Add delegation token functionality to Ozone layer. We will re-use hadoop 
rpc layer TOKEN authentication.)

> Create Ozone identifier for delegation token and block token
> 
>
> Key: HDDS-836
> URL: https://issues.apache.org/jira/browse/HDDS-836
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
>
> Create Ozone identifier for delegation token and block token.






[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685870#comment-16685870
 ] 

Chao Sun commented on HDFS-14067:
-

Thanks for the clarification [~xkrogen]. I think the check was introduced in 
HADOOP-8247, and the reason was that if we allow manual and auto failover at 
the same time, the state tracked by ZK could be inconsistent with the actual 
state. For instance, in a HA cluster with 1 active and 1 standby, one could 
potentially manually transition the state of the NNs and swap their roles, 
but ZK will still track the old state.

In our case, I do see some potential inconsistency. Currently the 
ZKFailoverController only tracks active and standby state. And, if we manually 
transition a standby to observer, ZK will still keep the old state and even 
make the observer eligible for election, which is incorrect. This somewhat 
overlaps with HDFS-13182.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.






[jira] [Commented] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685869#comment-16685869
 ] 

Hadoop QA commented on HDFS-14064:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
245 unchanged - 0 fixed = 247 total (was 245) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}154m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
17s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}241m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSPermission |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.server.mover.TestSt

[jira] [Created] (HDDS-836) Create Ozone identifier for delegation token and block token

2018-11-13 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-836:
---

 Summary: Create Ozone identifier for delegation token and block 
token
 Key: HDDS-836
 URL: https://issues.apache.org/jira/browse/HDDS-836
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Security
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: 0.4.0


Add delegation token functionality to Ozone layer. We will re-use hadoop rpc 
layer TOKEN authentication.






[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685861#comment-16685861
 ] 

Hadoop QA commented on HDFS-14017:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  7m 
47s{color} | {color:red} root in HDFS-12943 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
55s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 9 new + 3 unchanged - 3 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
23s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948029/HDFS-14017-HDFS-12943.010.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36c18075 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25508/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25508/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25508/artifact/ou

[jira] [Issue Comment Deleted] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14067:

Comment: was deleted

(was: Thanks for the clarification [~xkrogen]. I think the check was introduced 
in HADOOP-8247, and the reason was that if we allow manual and auto failover at 
the same time, the state tracked by ZK could become inconsistent with the 
actual state. For instance, in an HA cluster with 1 active and 1 standby, one 
could manually transition the state of the NNs and swap their roles, but ZK 
would still track the old state.

In our case, we currently make the observer state a special case of the 
standby state, so to ZK they are the same. Therefore, I think allowing 
transitions between observer and standby is safe without the check. There are 
still some open questions around this design, such as how we can make the 
observer eligible for both manual and auto failover. This is tracked by 
HDFS-13182.)

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the 
> transition from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685843#comment-16685843
 ] 

Arpit Agarwal commented on HDDS-816:


bq. 2. Independent from the selected approach I hope that it can be implemented 
as part of the DBTable abstraction. Would help to adjust/switch over later 
without too much pain.
+1

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, Metrics for number of volumes, 
> buckets, keys.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685842#comment-16685842
 ] 

Chen Liang commented on HDFS-14035:
---

Thanks for the review, [~csun]; fixing the checkstyle issues in the v015 patch.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.015.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch, HDFS-14035-HDFS-12943.015.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685840#comment-16685840
 ] 

Chao Sun commented on HDFS-14067:
-

Thanks for the clarification [~xkrogen]. I think the check was introduced in 
HADOOP-8247, and the reason was that if we allow manual and auto failover at 
the same time, the state tracked by ZK could become inconsistent with the 
actual state. For instance, in an HA cluster with 1 active and 1 standby, one 
could manually transition the state of the NNs and swap their roles, but ZK 
would still track the old state.

In our case, we currently make the observer state a special case of the 
standby state, so to ZK they are the same. Therefore, I think allowing 
transitions between observer and standby is safe without the check. There are 
still some open questions around this design, such as how we can make the 
observer eligible for both manual and auto failover. This is tracked by 
HDFS-13182.
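
For reference, the guard in question boils down to a check of the following 
shape; this is a paraphrased sketch, not the exact {{HAAdmin}} source:

{code:java}
import java.io.IOException;

// Paraphrased sketch of the guard that produces the "Refusing to manually
// manage HA state" message quoted in the issue description below.
static void checkManualStateManagementOK(boolean autoFailoverEnabled,
    boolean forceManual) throws IOException {
  if (autoFailoverEnabled && !forceManual) {
    throw new IOException("Refusing to manually manage HA state, since it"
        + " may cause a split-brain scenario or other incorrect state.");
  }
  // The proposal above would additionally let a standby <-> observer
  // transition pass this check, since both states look the same to ZK.
}
{code}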

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the 
> transition from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685811#comment-16685811
 ] 

Chen Liang commented on HDFS-14017:
---

Thanks for the detailed suggestions and for sharing the post, [~xkrogen]! 
Posted the v010 patch. Will file another JIRA for the follow-up.

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch, HDFS-14017-HDFS-12943.010.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when the constructor of {{ObserverReadProxyProvider}} is called in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second, closely related issue is about delegation tokens. For example, in 
> the current IPFailover setup, say we have a virtual host nn.xyz.com, which 
> points to either of two physical nodes, nn1.xyz.com or nn2.xyz.com. In 
> current HDFS, there is always only one DT being exchanged, which has 
> hostname nn.xyz.com. The server only issues this DT, and the client only 
> knows the host nn.xyz.com, so all is good. But with Observer reads, even 
> with IPFailover, the client will no longer contact nn.xyz.com, but will 
> actively reach out to nn1.xyz.com and nn2.xyz.com. During this process, the 
> current code will look for a DT associated with hostname nn1.xyz.com or 
> nn2.xyz.com, which is different from the DT issued by the NN, causing token 
> authentication to fail. This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy 
> provider will need to resolve this as well.
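
For context, the failing lookup described above comes down to an exact match 
on the token's service name; here is a simplified sketch of the selector's 
behavior (paraphrasing {{AbstractDelegationTokenSelector}}, with illustrative 
names):

{code:java}
import java.util.Collection;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Paraphrased sketch of AbstractDelegationTokenSelector#selectToken: tokens
// are matched by exact (kind, service) equality. A DT issued with service
// "nn.xyz.com:8020" therefore never matches a lookup for "nn1.xyz.com:8020"
// or "nn2.xyz.com:8020".
static Token<? extends TokenIdentifier> selectToken(Text kind, Text service,
    Collection<Token<? extends TokenIdentifier>> tokens) {
  for (Token<? extends TokenIdentifier> token : tokens) {
    if (kind.equals(token.getKind()) && service.equals(token.getService())) {
      return token;
    }
  }
  return null; // no matching token -> token authentication fails
}
{code}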



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Attachment: HDFS-14017-HDFS-12943.010.patch

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch, HDFS-14017-HDFS-12943.010.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when the constructor of {{ObserverReadProxyProvider}} is called in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second, closely related issue is about delegation tokens. For example, in 
> the current IPFailover setup, say we have a virtual host nn.xyz.com, which 
> points to either of two physical nodes, nn1.xyz.com or nn2.xyz.com. In 
> current HDFS, there is always only one DT being exchanged, which has 
> hostname nn.xyz.com. The server only issues this DT, and the client only 
> knows the host nn.xyz.com, so all is good. But with Observer reads, even 
> with IPFailover, the client will no longer contact nn.xyz.com, but will 
> actively reach out to nn1.xyz.com and nn2.xyz.com. During this process, the 
> current code will look for a DT associated with hostname nn1.xyz.com or 
> nn2.xyz.com, which is different from the DT issued by the NN, causing token 
> authentication to fail. This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy 
> provider will need to resolve this as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685772#comment-16685772
 ] 

Chao Sun commented on HDFS-14035:
-

LGTM too. Some minor nits though:

1. There's an unused import in {{DFSClient}}.
2. There are many style issues; do we need to address them?

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685749#comment-16685749
 ] 

Hadoop QA commented on HDDS-835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 4 new + 4 unchanged - 
0 fixed = 8 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} ozonefs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense

[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2018-11-13 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685741#comment-16685741
 ] 

CR Hota commented on HDFS-13972:


[~elgoiri] [~brahmareddy] Could you help rebase the HDFS-13891 branch with 
trunk? We need the refactoring changes made to the NameNode to proceed on this.

Thanks!

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-13 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685740#comment-16685740
 ] 

Arpit Agarwal commented on HDDS-816:


Approach #4 (traverse all keys) cannot work in practice. It may take many hours 
to iterate over billions of keys.

I am also hesitant about approach #3 without knowing the implementation details 
of how the estimate is done. If RocksDB is simply counting the number of writes 
into the log, then the estimate could be very wrong, e.g. if there are frequent 
overwrites. LSMs typically do not know whether a putKey is an overwrite when 
the operation happens.

I like Bharat's proposed approach. Let's just persist the current key count 
periodically to a separate file (don't store it in RocksDB), and also on 
shutdown. This lets us limit the 'staleness' of the metric to a few 
seconds or minutes.
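
A minimal sketch of that idea (file name, interval, and wiring are all 
hypothetical, just to make the staleness trade-off concrete):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: keep the key count in memory, persist it to a small
// side file every few seconds and on shutdown, so a crash only loses a
// bounded window of updates (the "staleness" mentioned above).
public class KeyCountPersister {
  private final AtomicLong keyCount = new AtomicLong();
  private final Path countFile = Paths.get("om-key-count"); // illustrative
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start() {
    scheduler.scheduleWithFixedDelay(this::persist, 10, 10, TimeUnit.SECONDS);
  }

  public void incrementOnPutKey() {
    keyCount.incrementAndGet();
  }

  private synchronized void persist() {
    try {
      Files.write(countFile,
          Long.toString(keyCount.get()).getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
      // Losing one write only widens staleness; log and keep going.
    }
  }

  public void shutdown() {
    scheduler.shutdown();
    persist(); // final flush on shutdown
  }
}
{code}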



> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, Metrics for number of volumes, 
> buckets, keys.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-13 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685736#comment-16685736
 ] 

CR Hota commented on HDFS-14006:


[~elgoiri] Agree with you. Reusability is definitely something we will look 
into, but at this time it's better to decouple the two and make sure each works 
fine without refactoring the NameNode dependencies in this area. This refactor 
carries more risk, with respect to security, than the others we have been doing.

> RBF: Support to get Router object from web context instead of Namenode
> --
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":\{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, 
> similar to the current namenode's, but modified to return a router instance 
> instead of a namenode.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685726#comment-16685726
 ] 

Hudson commented on HDFS-13732:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15421 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15421/])
Revert "HDFS-13732. ECAdmin should print the policy name when an EC (xiao: rev 
9da6054ca4ff6f8bb19506d80685b17d2c79)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testErasureCodingConf.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/ECAdmin.java


> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, 
> the console message still reads "Set default erasure coding policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-13 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reopened HDFS-13732:
--

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, 
> the console message still reads "Set default erasure coding policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-13 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685731#comment-16685731
 ] 

Xiao Chen commented on HDFS-13732:
--

Also ping [~arpitagarwal] as an FYI since you helped on backporting.

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, 
> the console message still reads "Set default erasure coding policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13732) ECAdmin should print the policy name when an EC policy is set

2018-11-13 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13732:
-
 Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)
Fix Version/s: (was: 3.1.2)

See 
https://issues.apache.org/jira/browse/HDFS-13998?focusedCommentId=16685723&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16685723
. I have reverted this from trunk, branch-3.2, and branch-3.1 since it's 
considered incompatible.

[~sunilg] is it too late to revert from 3.2.0?

> ECAdmin should print the policy name when an EC policy is set
> -
>
> Key: HDFS-13732
> URL: https://issues.apache.org/jira/browse/HDFS-13732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, tools
>Affects Versions: 3.0.0
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Trivial
> Fix For: 3.2.0
>
> Attachments: EC_Policy.PNG, HDFS-13732.01.patch
>
>
> Scenario:
> If a policy other than the default EC policy is set on an HDFS directory, 
> the console message still reads "Set default erasure coding policy on "
> Expected output:
> It would be good if the EC policy name were displayed when the policy is set.
>  
> Actual output:
> Set default erasure coding policy on 
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13998) ECAdmin NPE with -setPolicy -replicate

2018-11-13 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685723#comment-16685723
 ] 

Xiao Chen commented on HDFS-13998:
--

bq.  admin commands.
-setPolicy is _not_ an admin command. Users who have write permission can set 
a policy, similar to setxattr. IMO this is the most confusing part: as a user 
you can set a policy, but if you set the default you don't know what the 
default is and have to explicitly get it.


bq. incompatible
That's an interesting question. I went to check the compat guideline, and it 
does mention:
{quote}
All Hadoop CLI paths, usage, and output SHALL be considered Public and Stable 
unless documented as experimental and subject to change.

Note that the CLI output SHALL be considered distinct from the log output 
generated by the Hadoop CLIs. The latter SHALL be governed by the policy on log 
output. Note also that for CLI output, all changes SHALL be considered 
incompatible changes.
{quote}
I was under the impression that the explicit {{InterfaceAudience.Private}} on 
the ECAdmin class means we can change it. However, it seems the compat 
guideline for the CLI should take precedence, since the CLI isn't marked 
experimental. So (see below).

bq. If you all agree, we should revert HDFS-13732 before we ship 3.2 release 
and Follow up for next release..?
I'll go ahead and revert HDFS-13732 entirely. Will leave it up for discussion 
on how we do this on trunk.

> ECAdmin NPE with -setPolicy -replicate
> --
>
> Key: HDFS-13998
> URL: https://issues.apache.org/jira/browse/HDFS-13998
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13998.01.patch, HDFS-13998.02.patch, 
> HDFS-13998.03.patch
>
>
> HDFS-13732 tried to improve the output of the console tool. But we missed the 
> fact that for replication, {{getErasureCodingPolicy}} would return null.
> This jira is to fix it in ECAdmin, and add a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685715#comment-16685715
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 13m 
57s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 29 new 
+ 227 unchanged - 0 fixed = 256 total (was 227) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
40s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 20s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 56s{color} | 

[jira] [Commented] (HDDS-774) Remove OpenContainerBlockMap from datanode

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685699#comment-16685699
 ] 

Hadoop QA commented on HDDS-774:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948012/HDDS-774.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5ce9c6a2f91d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685679#comment-16685679
 ] 

Erik Krogen commented on HDFS-14035:


v014 patch LGTM. Is the {{MagicConstant}} warning that you suppressed just 
complaining about the use of {{+ 1}} for the increment?

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-11-13 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685665#comment-16685665
 ] 

Dinesh Chitlangia commented on HDDS-120:


mvninstall and test failures are unrelated to the patch.

 

> Adding HDDS datanode Audit Log
> --
>
> Key: HDDS-120
> URL: https://issues.apache.org/jira/browse/HDDS-120
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-120.001.patch, HDDS-120.002.patch, 
> HDDS-120.003.patch, HDDS-120.004.patch, HDDS-120.005.patch
>
>
> This can be useful to find users who overload the DNs. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-13 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685633#comment-16685633
 ] 

Íñigo Goiri commented on HDFS-14006:


Eventually we may want to take some of JspHelper, make it static, and just use 
it from the new RouterJspHelper.
We can repurpose this JIRA for that once we have a clear understanding of what 
can be reused.
Anyway, in my mind this JIRA should eventually just be a refactor.
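
As a rough illustration of the direction (the class and attribute names here 
are hypothetical; the real key would be chosen in the patch), the router-side 
helper would pull a Router, rather than a NameNode, out of the servlet 
context, which is exactly where the ClassCastException in the description 
comes from:

{code:java}
import javax.servlet.ServletContext;
import org.apache.hadoop.hdfs.server.federation.router.Router;

// Hypothetical sketch: the router registers itself in its own web context
// under its own attribute key, and router-side resources fetch it back,
// instead of casting the context attribute to NameNode as the current
// JspHelper-based code does (the cast that fails in the quoted exception).
public final class RouterJspHelper {
  // Illustrative attribute name.
  public static final String ROUTER_ATTRIBUTE_KEY = "router";

  private RouterJspHelper() {
  }

  public static Router getRouterFromContext(ServletContext context) {
    return (Router) context.getAttribute(ROUTER_ATTRIBUTE_KEY);
  }
}
{code}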


> RBF: Support to get Router object from web context instead of Namenode
> --
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":\{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, 
> similar to the current namenode's, but modified to return a router instance 
> instead of a namenode.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-834) Datanode goes OOM based because of segment size

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685621#comment-16685621
 ] 

Hadoop QA commented on HDDS-834:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948005/HDDS-834.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 35c72d088749 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 671fd65 |
| maven 

[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685620#comment-16685620
 ] 

Chen Liang commented on HDFS-14035:
---

Discussed with [~xkrogen] offline; it seems we can also resolve the race condition in 
the unit test, while avoiding sleep, by making an uncoordinated call to the server 
early. This initializes the observer proxy and also sets the state id on the 
client side. Posted the v014 patch; it also adds a couple of missing javadocs.
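
A minimal sketch of the idea, assuming a MiniDFSCluster-style test (the call and names below are illustrative, not the actual v014 patch):

{code:java}
// Hypothetical test fragment: an early uncoordinated call (e.g. a write such
// as mkdirs) goes to the active NN, which initializes the client-side proxy
// and brings back the current state id, so the later coordinated read cannot
// race with proxy initialization -- no sleep needed.
DistributedFileSystem dfs = cluster.getFileSystem(0);
dfs.mkdirs(new Path("/warmup"));  // uncoordinated call: sets client state id
// ... the race-sensitive observer-read assertions of the test follow here ...
{code}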

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch
>
>
> Currently, ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when an 
> application runs on YARN and the YARN node manager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-835:
-
Description: As per [~msingh]'s review comments in HDDS-675, for the 
streamBufferFlushSize, streamBufferMaxSize, and blockSize configs we should use 
getStorageSize instead of a long value. This Jira aims to address that.  (was: 
As per [~msingh]'s review comments, for the streamBufferFlushSize, 
streamBufferMaxSize, and blockSize configs we should use getStorageSize instead 
of a long value. This Jira aims to address that.)
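
For illustration, a sketch of the suggested pattern (the key name and default below are placeholders, not the values from the patch):

{code:java}
// Sketch: read a buffer-size config as a storage size ("64MB", "4KB", ...)
// instead of a raw long. Configuration#getStorageSize parses the unit suffix
// and converts the value to the requested StorageUnit (HADOOP-15204).
Configuration conf = new OzoneConfiguration();
long streamBufferFlushSize = (long) conf.getStorageSize(
    "ozone.client.stream.buffer.flush.size",  // placeholder key name
    "64MB",                                   // placeholder default
    StorageUnit.BYTES);
{code}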

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize 
> instead of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-13 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.014.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch, 
> HDFS-14035-HDFS-12943.014.patch
>
>
> Currently, ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when an 
> application runs on YARN and the YARN node manager makes this 
> getServiceStatus call, token authentication will fail, causing the 
> application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-13 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685617#comment-16685617
 ] 

Shashikant Banerjee commented on HDDS-835:
--

[~msingh], please have a look.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize 
> instead of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-835:
-
Attachment: HDDS-835.000.patch

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize 
> instead of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-835:
-
Status: Patch Available  (was: Open)

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize 
> instead of a long value. This Jira aims to address that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-11-13 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685614#comment-16685614
 ] 

CR Hota commented on HDFS-14006:


[~elgoiri]  [~brahmareddy]

For this refactoring, after the analysis I could do, it may be good to just 
leave the current namenode code as is. JspHelper was designed as a static 
method holder, not to be extensible. Rather than changing JspHelper, all its 
dependencies, and also UserProvider (which depends on JspHelper methods), it's 
better to introduce RouterJspHelper and RouterUserProvider; a sketch follows 
below. There will be some duplicate code, but since this area doesn't change 
often and is mature, it seems like a good idea to just leave the current code 
as is.

In short, this ticket can be marked "Workaround". At some point in the future, 
once Router security is stabilized, we can open a new ticket to holistically 
look at how this area can be redesigned for both the namenode and the router 
collectively.
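
A rough sketch of the shape this could take (the class and method names follow the proposal above; the context-attribute key is an assumption, not taken from any patch):

{code:java}
// Hypothetical router-side helper mirroring JspHelper as a static method
// holder, but resolving a Router from the servlet context instead of casting
// the web-app attribute to NameNode (the cast that produces the
// ClassCastException quoted in the description).
public final class RouterJspHelper {
  private RouterJspHelper() {}  // static method holder, like JspHelper

  public static Router getRouterFromContext(ServletContext context) {
    // Attribute key is illustrative; the real key would come from the
    // router's HTTP server setup.
    return (Router) context.getAttribute("router");
  }
}
{code}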

 

 

> RBF: Support to get Router object from web context instead of Namenode
> --
>
> Key: HDFS-14006
> URL: https://issues.apache.org/jira/browse/HDFS-14006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> Router currently uses Namenode web resources to read and verify delegation 
> tokens. This model doesn't work when the router is deployed in secure mode. 
> This change will introduce the router's own UserProvider resource and its 
> dependencies.
> In the current deployment one can see this exception:
> {"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
>  cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}
> In the proposed change, the router will maintain its own web resource, 
> similar to the current namenode one, but modified to get back a router 
> instance instead of a namenode.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-120) Adding HDDS datanode Audit Log

2018-11-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685609#comment-16685609
 ] 

Hadoop QA commented on HDDS-120:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
30s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not 

[jira] [Updated] (HDFS-14064) WEBHDFS: Support Enable/Disable EC Policy

2018-11-13 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14064:

Attachment: HDFS-14064-04.patch

> WEBHDFS: Support Enable/Disable EC Policy
> -
>
> Key: HDFS-14064
> URL: https://issues.apache.org/jira/browse/HDFS-14064
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14064-01.patch, HDFS-14064-02.patch, 
> HDFS-14064-03.patch, HDFS-14064-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-13 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685603#comment-16685603
 ] 

Erik Krogen commented on HDFS-14017:


Hey [~vagarychen], the v009 patch LGTM as long as we have a follow-on JIRA to sort 
out the federation issues and reduce reliance on assumptions about the 
available nameservices. My only comments are regarding the Javadoc on 
ORPPWithIPF. The expanded Javadoc is great and I appreciate the example of how 
to configure it, but it could be improved a little:
* There are a few typos: "virtal address", "stanby"
* "Extends ObserverReadProxyProvider" -> Link ORPP here?
* For the list, I would prefer to see proper HTML {{<ul>}}, {{<li>}} tags
* For the in-line {{<code>}} tags, using the Javadoc version, {{@code}}, would 
be better
* For the {{<code>}} block used for the configs, I think {{<pre>}} would be 
more applicable here. I think [this 
post|https://reflectoring.io/howto-format-code-snippets-in-javadoc/] has a 
really good breakdown of the different options for formatting code in Javadoc.

Oh, also, you have a star-import in ORPPWithIPF that shouldn't be there.
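
To make the suggestions concrete, here is a small illustrative Javadoc fragment (the config keys and values are made up for the example, following the nn.xyz.com naming in the description below):

{code:java}
/**
 * Extends {@link ObserverReadProxyProvider} for IP-failover deployments.
 * <ul>
 *   <li>inline identifiers use {@code dfs.client.failover.proxy.provider}-style tags</li>
 *   <li>multi-line configuration examples sit in a pre block:</li>
 * </ul>
 * <pre>
 * dfs.nameservices = nn.xyz.com
 * dfs.namenode.rpc-address.nn.xyz.com = nn.xyz.com:8020
 * </pre>
 */
{code}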

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when the constructor of {{ObserverReadProxyProvider}} is called in 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second, closely related issue is about delegation tokens. For example, in 
> the current IPFailover setup, say we have a virtual host nn.xyz.com, which 
> points to either of two physical nodes, nn1.xyz.com or nn2.xyz.com. In 
> current HDFS, there is always only one DT being exchanged, which has the 
> hostname nn.xyz.com. The server only issues this DT, and the client only 
> knows the host nn.xyz.com, so all is good. But with observer reads, even 
> with IPFailover, the client will no longer contact nn.xyz.com, but will 
> actively reach out to nn1.xyz.com and nn2.xyz.com. During this process, the 
> current code will look for a DT associated with the hostname nn1.xyz.com or 
> nn2.xyz.com, which is different from the DT given by the NN, causing token 
> authentication to fail. This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy 
> provider will need to resolve this as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-774) Remove OpenContainerBlockMap from datanode

2018-11-13 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-774:
-
Status: Patch Available  (was: Open)

> Remove OpenContainerBlockMap from datanode
> --
>
> Key: HDDS-774
> URL: https://issues.apache.org/jira/browse/HDDS-774
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-774.000.patch, HDDS-774.001.patch
>
>
> With HDDS-675, partial flush of uncommitted keys on Datanodes is no longer 
> required, so OpenContainerBlockMap serves no purpose anymore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


