[jira] [Updated] (HDFS-12828) OIV ReverseXML Processor fails with escaped characters

2018-04-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12828:
-
   Resolution: Fixed
Fix Version/s: 2.8.5
   3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.9, and 
branch-2.8. Thanks [~xkrogen] for the contribution!

> OIV ReverseXML Processor fails with escaped characters
> --
>
> Key: HDFS-12828
> URL: https://issues.apache.org/jira/browse/HDFS-12828
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Critical
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.8.5
>
> Attachments: HDFS-12828.000.patch, fsimage_008.xml
>
>
> The HDFS OIV ReverseXML processor fails if the XML file contains escaped 
> characters:
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls /
> Found 4 items
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:48 /foo
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo"
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:50 /foo`
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo&
> {code}
> Then after doing {{saveNamespace}} on that NameNode...
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p 
> ReverseXML
> OfflineImageReconstructor failed: unterminated entity ref starting with &
> org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref 
> starting with &
> at 
> org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134)
> {code}
> See attachments for relevant fsimage XML file.
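The failure above is an escaping round-trip problem: the OIV XML writer emits a raw `&` from a path name, and the ReverseXML reader then sees an unterminated entity reference. A minimal illustration of the underlying XML rule, in plain Python (not Hadoop code — names here are illustrative only):

```python
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape, unescape

name = "/foo&"

# A raw '&' makes the document ill-formed: the parser expects an entity
# reference such as '&amp;' and fails, much like the "unterminated
# entity ref" error in the stack trace above.
try:
    ET.fromstring("<name>%s</name>" % name)
    parsed_raw = True
except ET.ParseError:
    parsed_raw = False

# Escaping on write and unescaping on read round-trips the name.
escaped = escape(name)                     # '/foo&amp;'
node = ET.fromstring("<name>%s</name>" % escaped)
roundtripped = node.text                   # parser unescapes entities
```

The same symmetry (escape when serializing, unescape when reconstructing) is what the ReverseXML path has to preserve for names containing `&`, `"`, and similar characters.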



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12828) OIV ReverseXML Processor fails with escaped characters

2018-04-17 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12828:
-
Priority: Critical  (was: Major)
Hadoop Flags: Reviewed
 Summary: OIV ReverseXML Processor fails with escaped characters  (was: 
OIV ReverseXML Processor Fails With Escaped Characters)

> OIV ReverseXML Processor fails with escaped characters
> --
>
> Key: HDFS-12828
> URL: https://issues.apache.org/jira/browse/HDFS-12828
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Critical
> Attachments: HDFS-12828.000.patch, fsimage_008.xml
>
>
> The HDFS OIV ReverseXML processor fails if the XML file contains escaped 
> characters:
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls /
> Found 4 items
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:48 /foo
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo"
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:50 /foo`
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo&
> {code}
> Then after doing {{saveNamespace}} on that NameNode...
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p 
> ReverseXML
> OfflineImageReconstructor failed: unterminated entity ref starting with &
> org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref 
> starting with &
> at 
> org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134)
> {code}
> See attachments for relevant fsimage XML file.






[jira] [Commented] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters

2018-04-17 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441925#comment-16441925
 ] 

Akira Ajisaka commented on HDFS-12828:
--

+1, nice catch!

> OIV ReverseXML Processor Fails With Escaped Characters
> --
>
> Key: HDFS-12828
> URL: https://issues.apache.org/jira/browse/HDFS-12828
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-12828.000.patch, fsimage_008.xml
>
>
> The HDFS OIV ReverseXML processor fails if the XML file contains escaped 
> characters:
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls /
> Found 4 items
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:48 /foo
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo"
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:50 /foo`
> drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo&
> {code}
> Then after doing {{saveNamespace}} on that NameNode...
> {code}
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML
> ekrogen at ekrogen-ld1 in 
> ~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
> ± $HADOOP_HOME/bin/hdfs oiv -i 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o 
> /tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p 
> ReverseXML
> OfflineImageReconstructor failed: unterminated entity ref starting with &
> org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref 
> starting with &
> at 
> org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191)
> at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134)
> {code}
> See attachments for relevant fsimage XML file.






[jira] [Updated] (HDFS-13472) Compilation error in trunk in hadoop-aws

2018-04-17 Thread Mohammad Arshad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HDFS-13472:
---
Description: 
*Problem:* hadoop trunk compilation is failing
 *Root Cause:*
 compilation error is coming from 
{{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation error 
is "The method getArgumentAt(int, Class) is undefined for 
the type InvocationOnMock".

StagingTestBase is using getArgumentAt(int, Class) method 
which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
Class) method is available only from version 2.0.0-beta

*Expectations:*
 Either mockito-all version to be upgraded or test case to be written only with 
available functions in 1.8.5.

  was:
*Problem:* hadoop trunk compilation is failing
*Root Cause:*
compilation error is coming from 
{{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation error 
is "The method getArgumentAt(int, Class) is undefined for 
the type InvocationOnMock".

StagingTestBase is using getArgumentAt(int, Class) method 
which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
Class)  method is available only from version 2.0.20-beta

*Expectations:*
Either mockito-all  version to be upgraded or test case to be written only with 
available functions in 1.8.5. 


> Compilation error in trunk in hadoop-aws 
> -
>
> Key: HDFS-13472
> URL: https://issues.apache.org/jira/browse/HDFS-13472
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mohammad Arshad
>Priority: Major
>
> *Problem:* hadoop trunk compilation is failing
>  *Root Cause:*
>  compilation error is coming from 
> {{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation 
> error is "The method getArgumentAt(int, Class) is 
> undefined for the type InvocationOnMock".
> StagingTestBase is using getArgumentAt(int, Class) method 
> which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
> Class) method is available only from version 2.0.0-beta
> *Expectations:*
>  Either mockito-all version to be upgraded or test case to be written only 
> with available functions in 1.8.5.
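The 1.8.5-compatible route the report asks for amounts to reading mock-call arguments by position instead of through a typed accessor that only newer mocking-library versions provide. As a language-neutral sketch of that pattern using Python's unittest.mock (an analogy only — not the Hadoop/Mockito test code, and the names are illustrative):

```python
from unittest import mock

# Hypothetical mocked collaborator; record a call with positional args.
client = mock.Mock()
client.upload("bucket-a", b"payload")

# Portable approach: index into the recorded positional arguments
# directly, rather than relying on a version-specific typed helper.
args, kwargs = client.upload.call_args
first_arg = args[0]
```

In the Java/Mockito setting, the analogous old-API-compatible move is to index into the invocation's argument array and cast, rather than call a helper introduced in a later release.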






[jira] [Updated] (HDFS-13472) Compilation error in trunk in hadoop-aws

2018-04-17 Thread Mohammad Arshad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HDFS-13472:
---
Description: 
*Problem:* hadoop trunk compilation is failing
*Root Cause:*
compilation error is coming from 
{{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation error 
is "The method getArgumentAt(int, Class) is undefined for 
the type InvocationOnMock".

StagingTestBase is using getArgumentAt(int, Class) method 
which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
Class)  method is available only from version 2.0.20-beta

*Expectations:*
Either mockito-all  version to be upgraded or test case to be written only with 
available functions in 1.8.5. 

  was:
*Problem: *hadoop trunk compilation is failing
*Root Cause:*
compilation error is coming from 
{{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation error 
is "The method getArgumentAt(int, Class) is undefined for 
the type InvocationOnMock".

StagingTestBase is using getArgumentAt(int, Class) method 
which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
Class)  method is available only from version 2.0.20-beta

*Expectations:*
Either mockito-all  version to be upgraded or test case to be written only with 
available functions in 1.8.5. 


> Compilation error in trunk in hadoop-aws 
> -
>
> Key: HDFS-13472
> URL: https://issues.apache.org/jira/browse/HDFS-13472
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mohammad Arshad
>Priority: Major
>
> *Problem:* hadoop trunk compilation is failing
> *Root Cause:*
> compilation error is coming from 
> {{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation 
> error is "The method getArgumentAt(int, Class) is 
> undefined for the type InvocationOnMock".
> StagingTestBase is using getArgumentAt(int, Class) method 
> which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
> Class)  method is available only from version 2.0.20-beta
> *Expectations:*
> Either mockito-all  version to be upgraded or test case to be written only 
> with available functions in 1.8.5. 






[jira] [Created] (HDFS-13472) Compilation error in trunk in hadoop-aws

2018-04-17 Thread Mohammad Arshad (JIRA)
Mohammad Arshad created HDFS-13472:
--

 Summary: Compilation error in trunk in hadoop-aws 
 Key: HDFS-13472
 URL: https://issues.apache.org/jira/browse/HDFS-13472
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mohammad Arshad


*Problem: *hadoop trunk compilation is failing
*Root Cause:*
compilation error is coming from 
{{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}. Compilation error 
is "The method getArgumentAt(int, Class) is undefined for 
the type InvocationOnMock".

StagingTestBase is using getArgumentAt(int, Class) method 
which is not available in mockito-all 1.8.5 version. getArgumentAt(int, 
Class)  method is available only from version 2.0.20-beta

*Expectations:*
Either mockito-all  version to be upgraded or test case to be written only with 
available functions in 1.8.5. 






[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-17 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441819#comment-16441819
 ] 

James Clampffer commented on HDFS-13403:


Hi [~pifta]

Thanks for pointing this out!  I'm not sure why the missing include isn't 
causing issues when I build on GCC and Clang in the docker container, but 
<functional> should be included in ioservice.h.  I do see some errors in the 
clang build due to -Winconsistent-missing-override that don't show up when 
using GCC.  Are those what you're seeing?  I'll get a patch up with the include 
and virtual override warning fixes tomorrow.

The post-commit jenkins build bailed out before it hit 
hadoop_hdfs_native_client so I don't think that's related.

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to use 
> have all components in the library to go through the hdfs::IoService rather 
> than directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able get rid of those remaining 
> instances once this work is in place.
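The "smarter wrapper" idea described above — counting queued work and watching throughput instead of handing callers the raw event loop — can be sketched in a few lines. This is a generic Python toy, not libhdfs++/asio code; the class and member names are illustrative:

```python
import queue

class InstrumentedIoService:
    """Toy wrapper around a task queue that tracks queued vs. completed
    work, in the spirit of the instrumented hdfs::IoService described
    above (structure is illustrative, not the libhdfs++ API)."""

    def __init__(self):
        self._tasks = queue.Queue()
        self.queued = 0
        self.completed = 0

    def post(self, fn):
        # Count work as it is accepted, so backlog is observable.
        self.queued += 1
        self._tasks.put(fn)

    def run(self):
        # Drain the queue, tracking completions for throughput metrics.
        while not self._tasks.empty():
            self._tasks.get()()
            self.completed += 1

svc = InstrumentedIoService()
results = []
svc.post(lambda: results.append(1))
svc.post(lambda: results.append(2))
svc.run()
```

The point of funneling all components through such a wrapper is that these counters exist in exactly one place, instead of every caller touching the underlying io_service directly.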






[jira] [Commented] (HDFS-13470) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441813#comment-16441813
 ] 

genericqa commented on HDFS-13470:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 35 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13470 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919518/HDFS-13470.000.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 0d2f1551f401 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4313e7 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23977/artifact/out/whitespace-tabs.txt
 |
| Max. process+thread count | 320 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23977/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add Browse the Filesystem button to the UI
> ---
>
> Key: HDFS-13470
> URL: https://issues.apache.org/jira/browse/HDFS-13470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13470.000.patch
>
>
> After HDFS-12512 added WebHDFS, we can add the support to browse the 
> filesystem to the UI.






[jira] [Updated] (HDFS-13470) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13470:
---
Assignee: Íñigo Goiri
  Status: Patch Available  (was: Open)

> RBF: Add Browse the Filesystem button to the UI
> ---
>
> Key: HDFS-13470
> URL: https://issues.apache.org/jira/browse/HDFS-13470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13470.000.patch
>
>
> After HDFS-12512 added WebHDFS, we can add the support to browse the 
> filesystem to the UI.






[jira] [Updated] (HDFS-13470) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13470:
---
Attachment: HDFS-13470.000.patch

> RBF: Add Browse the Filesystem button to the UI
> ---
>
> Key: HDFS-13470
> URL: https://issues.apache.org/jira/browse/HDFS-13470
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13470.000.patch
>
>
> After HDFS-12512 added WebHDFS, we can add the support to browse the 
> filesystem to the UI.






[jira] [Created] (HDFS-13471) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread JIRA
Íñigo Goiri created HDFS-13471:
--

 Summary: RBF: Add Browse the Filesystem button to the UI
 Key: HDFS-13471
 URL: https://issues.apache.org/jira/browse/HDFS-13471
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


After HDFS-12512 added WebHDFS, we can add the support to browse the filesystem 
to the UI.






[jira] [Deleted] (HDFS-13471) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri deleted HDFS-13471:
---


> RBF: Add Browse the Filesystem button to the UI
> ---
>
> Key: HDFS-13471
> URL: https://issues.apache.org/jira/browse/HDFS-13471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> After HDFS-12512 added WebHDFS, we can add the support to browse the 
> filesystem to the UI.






[jira] [Created] (HDFS-13470) RBF: Add Browse the Filesystem button to the UI

2018-04-17 Thread JIRA
Íñigo Goiri created HDFS-13470:
--

 Summary: RBF: Add Browse the Filesystem button to the UI
 Key: HDFS-13470
 URL: https://issues.apache.org/jira/browse/HDFS-13470
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


After HDFS-12512 added WebHDFS, we can add the support to browse the filesystem 
to the UI.






[jira] [Commented] (HDFS-13442) Ozone: Handle Datanode Registration failure

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441679#comment-16441679
 ] 

genericqa commented on HDFS-13442:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
17s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} server-scm in the patch failed. {color} |
|| || || || 

[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows

2018-04-17 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441670#comment-16441670
 ] 

Chris Douglas commented on HDFS-13311:
--

bq. Chris Douglas, should I move this to HADOOP?
Sorry, missed this. In the future, renaming the JIRA to describe the change 
(and citing the failure it's fixing) would probably be easier to work with, but 
it's not critical.

> RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
> ---
>
> Key: HDFS-13311
> URL: https://issues.apache.org/jira/browse/HDFS-13311
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: RBF, windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13311.000.patch
>
>
> The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail 
> with NPE:
> {code}
> [ERROR] 
> testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI)
>   Time elapsed: 0.008 s  <<< ERROR!
> java.lang.NullPointerException
> at 
> org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529)
> at 
> org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568)
> at 
> org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174)
> at 
> org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-17 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441657#comment-16441657
 ] 

Chris Douglas commented on HDFS-13272:
--

Ping [~kihwal], [~xkrogen]

> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
much higher than necessary in resource-constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.
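As a sketch of the idea above, a DataNode-specific key could feed the Jetty thread pool instead of the hard-coded value from HDFS-7279. The key name and helper below are illustrative stand-ins, not the actual configuration introduced by this change:

```java
// Hypothetical sketch: read a configurable thread count with a fallback
// default of 10 (the previously hard-coded value). The key name
// "dfs.datanode.http.internal-proxy.threads" is an illustration, not
// the real Hadoop configuration key.
import java.util.HashMap;
import java.util.Map;

public class ThreadPoolConfig {
    static final String KEY = "dfs.datanode.http.internal-proxy.threads";
    static final int DEFAULT_THREADS = 10;

    // Stand-in for Hadoop's Configuration#getInt(key, defaultValue).
    static int getThreads(Map<String, String> conf) {
        String v = conf.get(KEY);
        return v == null ? DEFAULT_THREADS : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(getThreads(conf)); // falls back to 10
        conf.put(KEY, "2");                   // small pool, e.g. MiniDFSCluster
        System.out.println(getThreads(conf)); // 2
    }
}
```

A MiniDFSCluster-based test could then set the key to a small value and assert the server starts with the reduced pool.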






[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-04-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441639#comment-16441639
 ] 

Xiao Chen commented on HDFS-13148:
--

Thanks [~hanishakoneru] for working on this and Xiaoyu / Rushabh for reviews.

What is the goal of the improved testing? It seems we're trying to make sure that if 
there are 2 HDFS clusters sharing the same KMS, they will each work correctly. 
Although in theory KMS isn't impacted by HDFS, I agree having a test to cover 
it is nice.

I think a more interesting test is with viewfs: have a federated cluster, and 
make sure all [CryptoAdmin 
CLIs|http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#crypto_command-line_interface]
 work as expected. For example, does -listZones return a union of the 2 
clusters? (We may end up with fewer lines of code if this is covered as a new 
TestCryptoAdminCLIWithFederation class and the test case in xml, but I don't feel 
strongly.)

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch, 
> HDFS-13148.003.patch
>
>
> It would be good to have some unit tests for testing KMS and EZ on a 
> federated cluster. We can start with basic EZ operations. For example, create 
> EZs on two namespaces with different keys using one KMS.






[jira] [Updated] (HDFS-13442) Ozone: Handle Datanode Registration failure

2018-04-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-13442:
-
Status: Patch Available  (was: Open)

> Ozone: Handle Datanode Registration failure
> ---
>
> Key: HDFS-13442
> URL: https://issues.apache.org/jira/browse/HDFS-13442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13442-HDFS-7240.001.patch
>
>
> If a datanode is not able to register itself, we need to handle that 
> correctly. 
> If the number of unsuccessful attempts to register with the SCM exceeds a 
> configurable max number, the datanode should not make any more attempts.






[jira] [Updated] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13462:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~lukmajercak] for   [^HDFS-13462_branch-2.000.patch].
Yetus for branch-2 has not been running lately, so I ran the previously failing 
unit tests locally and they passed.
I committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch, HDFS-13462_branch-2.000.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.
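For illustration, the resulting configuration would look roughly like the following hdfs-site.xml fragment. The key names follow the `dfs.journalnode.*-bind-host` pattern this patch introduces, but should be verified against the committed hdfs-default.xml:

```xml
<!-- Bind the JournalNode RPC and HTTP servers to all local interfaces,
     overriding the hostname-derived default. Verify the exact key names
     against the committed patch / hdfs-default.xml. -->
<property>
  <name>dfs.journalnode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.journalnode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```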






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441591#comment-16441591
 ] 

genericqa commented on HDFS-13462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-13462 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13462 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919493/HDFS-13462_branch-2.000.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23975/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch, HDFS-13462_branch-2.000.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441577#comment-16441577
 ] 

Lukas Majercak commented on HDFS-13462:
---

Added branch-2 patch.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch, HDFS-13462_branch-2.000.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Updated] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13462:
--
Attachment: HDFS-13462_branch-2.000.patch

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch, HDFS-13462_branch-2.000.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Updated] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13466:
---
   Resolution: Fixed
Fix Version/s: 3.0.4
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks for the review [~elgoiri]. Committed to trunk, branch-3.1, branch-3.0, 
branch-2 and branch-2.9.

> RBF: Add more router-related information to the UI
> --
>
> Key: HDFS-13466
> URL: https://issues.apache.org/jira/browse/HDFS-13466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13466.001.patch, pic.png
>
>
> Currently in the NameNode UI, the Summary section includes the following information:
> {noformat}
> Security is off.
> Safemode is off.
>  files and directories, * blocks =  total filesystem object(s).
> Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
> Non Heap Memory used  MB of  MB Commited Non Heap Memory. Max Non 
> Heap Memory is .
> {noformat}
> We could add similar information for router, for better visibility.






[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441568#comment-16441568
 ] 

Daryn Sharp commented on HDFS-13248:


Do realize this will cause security vulnerabilities if not carefully 
implemented.  You cannot pass the origin in the caller context (as Arpit said, 
it's opaque) or through optional NN RPC arguments.  You cannot trust the 
client, and attempting to verify the client in every single rpc method will be 
too expensive.

The best bet is probably adding a remoteAddr to the IpcConnectionContext.  At 
the IPC level, if the remoteAddr is defined and the peer is in a "trusted" host 
list, set that as the Connection's remote address.  The expense of verifying 
will only happen once per connection.  Now ACLs, proxy user authz, audit 
logging, etc. will/should all work seamlessly.  We do something similar with 
webhdfs routed through the call queue.
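A hedged sketch of that idea: honor a client-declared origin address only when the directly connected peer (e.g. the Router) is on a trusted host list, resolved once per connection. All names here are illustrative, not the actual Hadoop IPC classes:

```java
// Illustrative trusted-proxy check: decide, once per connection, which
// address ACLs, proxy-user authz, and audit logging should see.
import java.util.Set;

public class ConnectionOrigin {
    // Returns the effective remote address for this connection.
    static String effectiveRemoteAddr(String peerAddr,
                                      String declaredRemoteAddr,
                                      Set<String> trustedHosts) {
        if (declaredRemoteAddr != null && trustedHosts.contains(peerAddr)) {
            return declaredRemoteAddr; // trusted proxy: honor declared origin
        }
        return peerAddr;               // untrusted peer: ignore the claim
    }

    public static void main(String[] args) {
        Set<String> trusted = Set.of("10.0.0.5"); // hypothetical Router host
        // Trusted peer forwarding a client address: declared origin wins.
        System.out.println(effectiveRemoteAddr("10.0.0.5", "192.168.1.9", trusted));
        // Untrusted peer making the same claim: the peer address wins.
        System.out.println(effectiveRemoteAddr("10.0.0.7", "192.168.1.9", trusted));
    }
}
```

Because the lookup happens when the connection context is established rather than per RPC, the verification cost is paid once per connection, as described above.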


> RBF: Namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, 
> clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the block 
> location for the router, not for the real client. This will affect the file's 
> locality.
> I think on both NameNode and Router, we should add a new addBlock method, or 
> add a parameter for the current addBlock method, to pass the real client 
> information.






[jira] [Commented] (HDFS-13469) RBF: Support InodeID in the Router

2018-04-17 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441564#comment-16441564
 ] 

Chris Douglas commented on HDFS-13469:
--

HDFS-7878 didn't add an identifier to the payload that could be used for this, 
though [~jingzhao] 
[suggested|https://issues.apache.org/jira/browse/HDFS-7878?focusedCommentId=15468143=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15468143]
 the name service ID. It would be a good sanity check, even in clusters that 
don't run a router.

> RBF: Support InodeID in the Router
> --
>
> Key: HDFS-13469
> URL: https://issues.apache.org/jira/browse/HDFS-13469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The Namenode supports identifying files through inode identifiers.
> Currently the Router does not handle this properly; we need to add this 
> functionality.






[jira] [Commented] (HDFS-13469) RBF: Support InodeID in the Router

2018-04-17 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441543#comment-16441543
 ] 

Íñigo Goiri commented on HDFS-13469:


As [~daryn] pointed out in HDFS-12615, each namesystem uses a special path 
under "/.reserved/.inodes" to handle inodes.
I'm not really sure about the structure of this and if there is a way to map 
inodes to subclusters.
I remember [~chris.douglas] added some full identifier.
Is the namespace id available there?
Does anybody have a pointer to the documentation for the inodes?

> RBF: Support InodeID in the Router
> --
>
> Key: HDFS-13469
> URL: https://issues.apache.org/jira/browse/HDFS-13469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The Namenode supports identifying files through inode identifiers.
> Currently the Router does not handle this properly; we need to add this 
> functionality.






[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-04-17 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441539#comment-16441539
 ] 

Íñigo Goiri commented on HDFS-12615:


Thanks [~daryn], I created HDFS-13469 to discuss and track this part.
I'm not really sure how to handle this as we would need to know the location of 
all the ids.
Anyway, let's discuss there.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).






[jira] [Updated] (HDFS-13469) RBF: Support InodeID in the Router

2018-04-17 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13469:
---
Description: 
The Namenode supports identifying files through inode identifiers.
Currently the Router does not handle this properly; we need to add this 
functionality.

> RBF: Support InodeID in the Router
> --
>
> Key: HDFS-13469
> URL: https://issues.apache.org/jira/browse/HDFS-13469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> The Namenode supports identifying files through inode identifiers.
> Currently the Router does not handle this properly; we need to add this 
> functionality.






[jira] [Created] (HDFS-13469) RBF: Support InodeID in the Router

2018-04-17 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-13469:
--

 Summary: RBF: Support InodeID in the Router
 Key: HDFS-13469
 URL: https://issues.apache.org/jira/browse/HDFS-13469
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri









[jira] [Commented] (HDFS-13442) Ozone: Handle Datanode Registration failure

2018-04-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441531#comment-16441531
 ] 

Anu Engineer commented on HDFS-13442:
-

[~hanishakoneru] Thanks for pointing out the error in my understanding. You are 
right, we will continue retrying. I will commit this patch as soon as the merge 
is done. In case I forget, feel free to ping on this JIRA after the merge is 
done. Thanks for taking care of this issue.

> Ozone: Handle Datanode Registration failure
> ---
>
> Key: HDFS-13442
> URL: https://issues.apache.org/jira/browse/HDFS-13442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13442-HDFS-7240.001.patch
>
>
> If a datanode is not able to register itself, we need to handle that 
> correctly. 
> If the number of unsuccessful attempts to register with the SCM exceeds a 
> configurable max number, the datanode should not make any more attempts.






[jira] [Commented] (HDFS-13442) Ozone: Handle Datanode Registration failure

2018-04-17 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441526#comment-16441526
 ] 

Hanisha Koneru commented on HDFS-13442:
---

Thanks for the review [~anu].

This patch only modifies the case when we get _errorNodeNotPermitted_. This 
happens when the node is able to contact the SCM but SCM does not register the 
node. 
{quote}if the data nodes boot up earlier than SCM we would not want the data 
nodes to do silent after 10 tries
{quote}
In this case, the datanode keeps retrying as the EndPointTask state remains as 
{{HEARTBEAT}}. In the code snippet below, if the datanode does not get a 
response from SCM, it catches the exception and logs it, if needed.
{code:java}
try {
  SCMRegisteredCmdResponseProto response = rpcEndPoint.getEndPoint()
  .register(datanodeDetails.getProtoBufMessage(),
  conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
  ...
  ...
  processResponse(response);
} catch (IOException ex) {
  rpcEndPoint.logIfNeeded(ex);
}
{code}
{quote}also in the case, we get the error, errorNodeNotPermitted, should we 
shut down the data node and create some kind of error record on SCM so we can 
get that info back from SCM? I am also ok with the current approach where we 
will let the system slowly go time out.
{quote}
I think we should let the DN make a few retries before shutting it down.
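The bounded-retry behavior discussed here can be sketched as follows; the helper and its cap parameter are hypothetical illustrations, not the actual Ozone datanode state machine:

```java
// Illustrative bounded retry: keep attempting registration, and give up
// only after a configurable maximum number of failed attempts.
public class RegistrationRetry {
    // Returns how many attempts were made before success or giving up.
    static int attemptsUntilStop(int maxAttempts, boolean[] outcomes) {
        int attempts = 0;
        for (boolean registered : outcomes) {
            attempts++;
            if (registered) {
                return attempts;         // success: stop retrying
            }
            if (attempts >= maxAttempts) {
                break;                   // cap reached: stop making attempts
            }
        }
        return attempts;
    }

    public static void main(String[] args) {
        // Fails 3 times, succeeds on the 4th, cap of 10: prints 4.
        System.out.println(attemptsUntilStop(10,
                new boolean[]{false, false, false, true}));
        // Never succeeds, cap of 5: stops after 5 attempts.
        System.out.println(attemptsUntilStop(5,
                new boolean[]{false, false, false, false, false, false}));
    }
}
```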

> Ozone: Handle Datanode Registration failure
> ---
>
> Key: HDFS-13442
> URL: https://issues.apache.org/jira/browse/HDFS-13442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13442-HDFS-7240.001.patch
>
>
> If a datanode is not able to register itself, we need to handle that 
> correctly. 
> If the number of unsuccessful attempts to register with the SCM exceeds a 
> configurable max number, the datanode should not make any more attempts.






[jira] [Comment Edited] (HDFS-13442) Ozone: Handle Datanode Registration failure

2018-04-17 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441526#comment-16441526
 ] 

Hanisha Koneru edited comment on HDFS-13442 at 4/17/18 9:34 PM:


Thanks for the review [~anu].

This patch only modifies the case when we get _errorNodeNotPermitted_. This 
happens when the node is able to contact the SCM but SCM does not register the 
node. 
{quote}if the data nodes boot up earlier than SCM we would not want the data 
nodes to do silent after 10 tries
{quote}
In this case, the datanode keeps retrying as the EndPointTask state remains as 
{{REGISTER}}. In the code snippet below, if the datanode does not get a 
response from SCM, it catches the exception and logs it, if needed.
{code:java}
try {
  SCMRegisteredCmdResponseProto response = rpcEndPoint.getEndPoint()
  .register(datanodeDetails.getProtoBufMessage(),
  conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
  ...
  ...
  processResponse(response);
} catch (IOException ex) {
  rpcEndPoint.logIfNeeded(ex);
}
{code}
{quote}also in the case, we get the error, errorNodeNotPermitted, should we 
shut down the data node and create some kind of error record on SCM so we can 
get that info back from SCM? I am also ok with the current approach where we 
will let the system slowly go time out.
{quote}
I think we should let the DN make a few retries before shutting it down.


was (Author: hanishakoneru):
Thanks for the review [~anu].

This patch only modifies the case when we get _errorNodeNotPermitted_. This 
happens when the node is able to contact the SCM but SCM does not register the 
node. 
{quote}if the data nodes boot up earlier than SCM we would not want the data 
nodes to do silent after 10 tries
{quote}
In this case, the datanode keeps retrying as the EndPointTask state remains as 
{{HEARTBEAT}}. In the code snippet below, if the datanode does not get a 
response from SCM, it catches the exception and logs it, if needed.
{code:java}
try {
  SCMRegisteredCmdResponseProto response = rpcEndPoint.getEndPoint()
  .register(datanodeDetails.getProtoBufMessage(),
  conf.getStrings(ScmConfigKeys.OZONE_SCM_NAMES));
  ...
  ...
  processResponse(response);
} catch (IOException ex) {
  rpcEndPoint.logIfNeeded(ex);
}
{code}
{quote}also in the case, we get the error, errorNodeNotPermitted, should we 
shut down the data node and create some kind of error record on SCM so we can 
get that info back from SCM? I am also ok with the current approach where we 
will let the system slowly go time out.
{quote}
I think we should let the DN make a few retries before shutting it down.

> Ozone: Handle Datanode Registration failure
> ---
>
> Key: HDFS-13442
> URL: https://issues.apache.org/jira/browse/HDFS-13442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13442-HDFS-7240.001.patch
>
>
> If a datanode is not able to register itself, we need to handle that 
> correctly. 
> If the number of unsuccessful attempts to register with the SCM exceeds a 
> configurable max number, the datanode should not make any more attempts.






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441524#comment-16441524
 ] 

Íñigo Goiri commented on HDFS-13462:


Committed to branch-3.0, branch-3.1 and trunk.
It does not apply to branch-2.
[~lukmajercak] do you mind providing a patch for those?

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441514#comment-16441514
 ] 

Íñigo Goiri commented on HDFS-13462:


The run for  [^HDFS-13462.002.patch] from Yetus is a little weird as it has 
some unrelated javac issue.
The new unit tests run fine and the TestReconstructStripedFile failure seems 
unrelated.
The checkstyle warnings follow the same approach as the rest of the file, so 
I'm ignoring them.
+1
Committing all the way to 2.9.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441507#comment-16441507
 ] 

Lukas Majercak commented on HDFS-13462:
---

Do we want to fix the 80-char limit violations in DFSConfigKeys? Otherwise this seems clean; 
the unit test failure doesn't seem to be related to the change.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441471#comment-16441471
 ] 

genericqa commented on HDFS-13462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 390 unchanged - 
3 fixed = 393 total (was 393) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 441 unchanged - 0 fixed = 444 total (was 441) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13462 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919464/HDFS-13462.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux e72ae3f71c42 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bb92bfb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23974/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441470#comment-16441470
 ] 

Daryn Sharp commented on HDFS-12615:


Correct, [~ywskycn].  That's the jira that originally started the feature.

bq. What are the ClientProtocol methods that would rely on ids?

Some methods directly take a fileId, which the router currently ignores.  Every 
namesystem operation handles "/.reserved/.inodes/NNN".

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-04-17 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441466#comment-16441466
 ] 

Istvan Fajth commented on HDFS-13403:
-

Hello [~James C],

I ran into an error when building the project on my Mac with `mvn 
clean install -Pnative -DskipTests`. Compilation failed at ioservice.h with the 
following error:

{{ [exec] [ 30%] Building CXX object main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/ioservice_impl.cc.o}}
{{ [exec] In file included from /Users//IdeaProjects/hadoop-apache/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:}}
{{ [exec] In file included from /Users//IdeaProjects/hadoop-apache/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:}}
{{ [exec] /Users//IdeaProjects/hadoop-apache/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:30: error: no template named 'function' in namespace 'std'}}
{{ [exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;}}
{{ [exec] ~^}}

 

There are further compile errors as well; if needed I can provide the full output.

After adding #include <functional> to ioservice.h, the error goes away. I am 
not sure whether it is something in my environment, but the Jenkins build failed 
elsewhere, though as far as I can see it skipped the Apache Hadoop HDFS Native 
Client package.
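For context, a minimal sketch of the failure mode (the types below are a hypothetical reduction of hdfspp/ioservice.h, not the actual header): std::function is declared only in <functional>, so a translation unit that uses it without that include fails on toolchains such as Apple clang/libc++ whose other headers do not pull it in transitively.

```cpp
#include <functional>  // without this line, libc++ reports:
                       // "no template named 'function' in namespace 'std'"

// Hypothetical reduction of the PostTask interface discussed above.
struct IoService {
  virtual void PostTask(std::function<void(void)> asyncTask) = 0;
  virtual ~IoService() = default;
};

// Trivial implementation that runs each posted task inline.
struct InlineIoService : IoService {
  void PostTask(std::function<void(void)> asyncTask) override { asyncTask(); }
};

// Post two counting tasks and return how many ran.
int run_tasks() {
  InlineIoService svc;
  int count = 0;
  svc.PostTask([&count] { ++count; });
  svc.PostTask([&count] { ++count; });
  return count;
}
```

Here run_tasks() returns 2; deleting the <functional> include reproduces the compile error on strict standard-library implementations.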

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to use 
> have all components in the library to go through the hdfs::IoService rather 
> than directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able get rid of those remaining 
> instances once this work is in place.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441421#comment-16441421
 ] 

Lukas Majercak commented on HDFS-13462:
---

These are the Yetus results for patch 001; waiting for the patch 002 run.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch
>
>
> Allow a configurable bind-host for the JournalNode's HTTP and RPC servers so 
> that the hostname on which the servers accept connections can be overridden.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441410#comment-16441410
 ] 

genericqa commented on HDFS-13462:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 442 unchanged - 0 fixed = 445 total (was 442) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13462 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919451/HDFS-13462.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux fc4272f1f1f6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23972/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Created] (HDFS-13468) Add erasure coding metrics into ReadStatistics

2018-04-17 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-13468:


 Summary: Add erasure coding metrics into ReadStatistics
 Key: HDFS-13468
 URL: https://issues.apache.org/jira/browse/HDFS-13468
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.1, 3.1.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


Expose Erasure Coding related metrics for InputStream in ReadStatistics. 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13129:
--
   Resolution: Fixed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.
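For context, the configuration this refresh reloads is the standard proxy-user block in core-site.xml (the property names are the stock hadoop.proxyuser.* keys; the user, host, and group values are made up for illustration):

{code:xml}
<!-- Illustrative values: superuser "hue" may impersonate members of
     group "staff" when connecting from host gateway1. -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>gateway1</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>staff</value>
</property>
{code}

After editing these, {{hdfs dfsadmin -refreshSuperUserGroupsConfiguration}} applies them without a NameNode restart, which is the path the new test exercises.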



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441390#comment-16441390
 ] 

Bharat Viswanadham commented on HDFS-13129:
---

Thank you, [~msingh], for reporting and working on this, and [~ajayydv] for the review.

I have committed this to trunk and branch-3.1.

 

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441354#comment-16441354
 ] 

Ajay Kumar edited comment on HDFS-13129 at 4/17/18 7:22 PM:


[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider; created [HADOOP-15395] to track it. 
[~msingh], thanks for the root cause.

+1 for latest patch.


was (Author: ajayydv):
[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider. [~msingh], thanks for the root cause.

+1 for latest patch.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441354#comment-16441354
 ] 

Ajay Kumar edited comment on HDFS-13129 at 4/17/18 7:11 PM:


[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider. [~msingh], thanks for the root cause.

+1 for latest patch.


was (Author: ajayydv):
[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider. [~msingh], thanks for the root cause.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441354#comment-16441354
 ] 

Ajay Kumar commented on HDFS-13129:
---

[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441354#comment-16441354
 ] 

Ajay Kumar edited comment on HDFS-13129 at 4/17/18 7:07 PM:


[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider. [~msingh], thanks for the root cause.


was (Author: ajayydv):
[~bharatviswa] we can commit this. The exception I am getting is due to another 
bug in DefaultImpersonationProvider.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441351#comment-16441351
 ] 

Bharat Viswanadham edited comment on HDFS-13129 at 4/17/18 7:04 PM:


+1, LGTM. The test failures are not related to this patch.

[~ajayydv] review comments are addressed in patch v03. Do you want to have a 
look into this once again?

If there are no further comments, I will commit it.


was (Author: bharatviswa):
+1 LGTM.

[~ajayydv] review comments are addressed in patch v03. Do you want to have a 
look into this once again?

If there are no further comments, I will commit it.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441351#comment-16441351
 ] 

Bharat Viswanadham edited comment on HDFS-13129 at 4/17/18 7:04 PM:


+1 LGTM.

[~ajayydv] review comments are addressed in patch v03. Do you want to have a 
look into this once again?

If there are no further comments, I will commit it.


was (Author: bharatviswa):
+1 LGTM.

[~ajayydv] review comments are addressed in patch v03. 

Will commit this shortly.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441351#comment-16441351
 ] 

Bharat Viswanadham commented on HDFS-13129:
---

+1 LGTM.

[~ajayydv] review comments are addressed in patch v03. 

Will commit this shortly.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441345#comment-16441345
 ] 

genericqa commented on HDFS-13466:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13466 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919458/HDFS-13466.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 1c645409534c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23973/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Add more router-related information to the UI
> --
>
> Key: HDFS-13466
> URL: https://issues.apache.org/jira/browse/HDFS-13466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13466.001.patch, pic.png
>
>
> Currently in NameNode UI, the Summary part also includes information:
> {noformat}
> Security is off.
> Safemode is off.
>  files and directories, * blocks =  total filesystem object(s).
> Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
> Non Heap Memory used  MB of  MB Commited Non Heap Memory. Max Non 
> Heap Memory is .
> {noformat}
> We could add similar information for router, for better visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441341#comment-16441341
 ] 

Íñigo Goiri commented on HDFS-13466:


It makes sense to show this information.
For consistency with the NameNode, I think it makes sense to show the number of 
files and blocks where [^HDFS-13466.001.patch] currently shows them.
I have to say that those values look better in the table, but I'm fine with 
moving them there for consistency.

> RBF: Add more router-related information to the UI
> --
>
> Key: HDFS-13466
> URL: https://issues.apache.org/jira/browse/HDFS-13466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13466.001.patch, pic.png
>
>
> Currently in NameNode UI, the Summary part also includes information:
> {noformat}
> Security is off.
> Safemode is off.
>  files and directories, * blocks =  total filesystem object(s).
> Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
> Non Heap Memory used  MB of  MB Commited Non Heap Memory. Max Non 
> Heap Memory is .
> {noformat}
> We could add similar information for router, for better visibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441343#comment-16441343
 ] 

Íñigo Goiri commented on HDFS-13466:


+1 on [^HDFS-13466.001.patch] 







[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441337#comment-16441337
 ] 

Íñigo Goiri commented on HDFS-12615:


[~ywskycn] can you link the other JIRA for context?
I haven't made any active effort to support inode ids; we need to go through 
those cases.
What are the ClientProtocol methods that would rely on ids?

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441318#comment-16441318
 ] 

BELUGA BEHR commented on HDFS-13448:


Regarding the configuration, I don't love having yet another knob to turn, 
but there's always that one project out there that wants to disable this for 
security reasons (a bad actor purposely putting additional load on the network 
with sub-optimal block placement).

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch
>
>
> According to the HDFS Block Place Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or this {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
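The hot-spotting described above can be illustrated with a toy simulation. This is plain Java, not Hadoop code: the policy names, node labels, and `pick` helper are all invented for illustration of the three placement behaviors being compared (default local write, {{NO_LOCAL_WRITE}}, and the proposed ignore-locality flag).

```java
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

// Toy model of first-replica selection; the writer lives on rack 0, node 0.
public class FirstReplicaPlacement {
    static final Random RNG = new Random(42);

    static String pick(String policy, int racks, int nodesPerRack) {
        switch (policy) {
            case "default":            // always the writer's own DataNode
                return "r0-n0";
            case "no_local_write":     // a non-local node, but still on the local rack
                return "r0-n" + (1 + RNG.nextInt(nodesPerRack - 1));
            case "ignore_locality":    // any node anywhere in the cluster
                return "r" + RNG.nextInt(racks) + "-n" + RNG.nextInt(nodesPerRack);
            default:
                throw new IllegalArgumentException(policy);
        }
    }

    public static void main(String[] args) {
        for (String policy : new String[] {"default", "no_local_write", "ignore_locality"}) {
            Map<String, Integer> counts = new TreeMap<>();
            for (int i = 0; i < 10000; i++) {
                counts.merge(pick(policy, 4, 5), 1, Integer::sum);
            }
            // With 10000 samples on a 4x5 cluster this reports 1, 4, and 20
            // distinct targets respectively: only the last policy spreads the
            // first replica across the whole cluster.
            System.out.println(policy + " -> " + counts.size() + " distinct targets");
        }
    }
}
```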






[jira] [Comment Edited] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441313#comment-16441313
 ] 

Arpit Agarwal edited comment on HDFS-13433 at 4/17/18 6:33 PM:
---

bq. Just because the NN has an nsId doesn't mean it overrides the defaultFS 
authority.
[~daryn], this {{clientNamenodeAddress}} is exclusively used for webhdfs 
redirects. The redirect URL should use the nameservice that the NN belongs to, 
and not {{fs.defaultFS}}.

We've seen this in federated clusters, e.g. a webhdfs create request sent to a 
NameNode in ns2 uses a redirect url with fs.defaultFS=ns1.


was (Author: arpitagarwal):
bq. Just because the NN has an nsId doesn't mean it overrides the defaultFS 
authority.
[~daryn], this {{clientNamenodeAddress}} is used for webhdfs redirects. The 
redirect URL should use the nameservice that the NN belongs to, and not 
{{fs.defaultFS}}.

We've seen this in federated clusters, e.g. a webhdfs create request sent to a 
NameNode in ns2 uses a redirect url with fs.defaultFS=ns1.

> webhdfs requests can be routed incorrectly in federated cluster
> ---
>
> Key: HDFS-13433
> URL: https://issues.apache.org/jira/browse/HDFS-13433
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-13433.01.patch, HDFS-13433.02.patch, 
> HDFS-13433.03.patch, HDFS-13433.04.patch
>
>
> In the following HA+Federated setup with two nameservices ns1 and ns2:
> # ns1 -> namenodes nn1, nn2
> # ns2 -> namenodes nn3, nn4
> # fs.defaultFS is {{hdfs://ns1}}.
> A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because 
> {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} 
> using fs.defaultFS before the config is overridden.
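The intended precedence can be sketched as follows. This is illustrative only, not the actual NameNode code; the class and method names are invented. The point is that the authority of a webhdfs redirect URL should come from the nameservice the NameNode belongs to, falling back to fs.defaultFS only when no nameservice is configured.

```java
// Illustrative sketch of choosing the redirect authority in a federated cluster.
public class RedirectAuthority {
    // ownNsId: the nameservice this NameNode belongs to (e.g. "ns2"),
    // or null in a non-federated setup.
    static String redirectUrl(String defaultFs, String ownNsId, String path) {
        String authority = (ownNsId != null) ? ownNsId : defaultFs;
        return "hdfs://" + authority + path;
    }

    public static void main(String[] args) {
        // A create request handled by a NameNode in ns2 must redirect within
        // ns2, even though fs.defaultFS points at ns1.
        System.out.println(redirectUrl("ns1", "ns2", "/tmp/file"));  // hdfs://ns2/tmp/file
    }
}
```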






[jira] [Commented] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441313#comment-16441313
 ] 

Arpit Agarwal commented on HDFS-13433:
--

bq. Just because the NN has an nsId doesn't mean it overrides the defaultFS 
authority.
[~daryn], this {{clientNamenodeAddress}} is used for webhdfs redirects. The 
redirect URL should use the nameservice that the NN belongs to, and not 
{{fs.defaultFS}}.

We've seen this in federated clusters, e.g. a webhdfs create request sent to a 
NameNode in ns2 uses a redirect url with fs.defaultFS=ns1.







[jira] [Commented] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441312#comment-16441312
 ] 

Daryn Sharp commented on HDFS-13243:


Maybe I overlooked some details, but please summarize the problem.  As best I 
can tell:
* thread1 is writing and closes the stream
* thread2 is syncing the stream
* thread1 commits the block with size 141232
* thread2 fsyncs with size 2054413
* DNs report block with size 2054413, marked corrupt

Am I wrong?  If not, how on earth can the block be committed with a size _less_ 
than a racing fsync?  This sounds like a serious client-side issue.

Agree the server-side logic needs to be improved.  Note that 
{{FileUnderConstructionFeature#updateLengthOfLastBlock}} guards against some of 
the invalid/malicious cases but unfortunately via asserts.  They don't need to 
be quasi-duplicated in this patch.

We cannot simply return success in some invalid cases, i.e. fsync when the file 
has no blocks, the size is negative, or the size is less than the last 
synced/committed size.  That just masks bugs.

Also, we shouldn't need all the new factories.  The tests must verify that 
namesystem calls, in various specific orders, with specific arguments that are 
good/bad, either succeed or fail.  We can't rely on the behavior of writing to 
a stream to prove the correctness of the namesystem guarding against invalid 
input from the client.
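The kind of server-side validation being asked for can be sketched like this. The class and method names are invented, and this is not the FSNamesystem code: the point is to reject impossible fsync lengths with an exception rather than an assert or a silent success.

```java
// Invented names, for illustration: validate a length reported by a racing
// fsync against the state the NameNode already holds for the last block.
public class FsyncLengthGuard {
    static void checkSyncedLength(long newLength, long committedLength, boolean hasBlocks) {
        if (!hasBlocks) {
            throw new IllegalStateException("fsync on a file with no blocks");
        }
        if (newLength < 0) {
            throw new IllegalArgumentException("negative length: " + newLength);
        }
        if (newLength < committedLength) {
            // Returning success here would mask a client bug like the one above.
            throw new IllegalStateException("synced length " + newLength
                + " is smaller than the committed length " + committedLength);
        }
    }

    public static void main(String[] args) {
        checkSyncedLength(2054413, 141232, true);     // plausible ordering: accepted
        try {
            checkSyncedLength(141232, 2054413, true); // the racy case from the log
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```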

> Get CorruptBlock because of calling close and sync in same time
> ---
>
> Key: HDFS-13243
> URL: https://issues.apache.org/jira/browse/HDFS-13243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
>Priority: Critical
> Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch, 
> HDFS-13243-v3.patch, HDFS-13243-v4.patch, HDFS-13243-v5.patch, 
> HDFS-13243-v6.patch
>
>
> An HDFS file might get broken because of corrupt block(s) produced by calling 
> close and sync at the same time.
> When calling close was not successful, the UC block status would change to 
> COMMITTED, and if a sync request got popped from the queue and processed, the 
> sync operation would change the last block length.
> After that, the DataNode would report all received blocks to the NameNode, 
> which would check the block length of all COMMITTED blocks. But the block 
> length already differed between what was recorded in NameNode memory and what 
> was reported by the DataNode; consequently, the last block is marked as 
> corrupted because of the inconsistent length.
>  
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, 
> truncateBlock=null, primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  for 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
>  for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in 
> file 
> /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.0.0.220:50010 is added to 
> blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW],
>  
> ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW],
>  
> ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]}
>  size 2054413
> 2018-03-05 

[jira] [Created] (HDFS-13467) Validation of Encryption zones path should be done in Distcp class.

2018-04-17 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-13467:
-

 Summary: Validation of Encryption zones path should be done in 
Distcp class.
 Key: HDFS-13467
 URL: https://issues.apache.org/jira/browse/HDFS-13467
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


Currently, validation of EZ paths is done in the {{SimpleCopyListing}} class.
DistCp allows anyone to override the {{CopyListing}} class with their own 
version, but setting the 
{{DistCpConstants.CONF_LABEL_PRESERVE_RAWXATTRS}} conf is done in the 
{{SimpleCopyListing#validatePaths}} method.
If someone overrides the {{CopyListing}} class, they also need to include all 
this validation in their overridden class.
Ideally, validation of the EZ path and setting of the {{preserveRawXAttrs}} 
conf should be done at the DistCp class level, not at the CopyListing level.
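The proposed restructuring amounts to a template-method arrangement, sketched below. The class and method names are illustrative, not the real DistCp API; the point is that the encryption-zone check and the preserve-raw-xattrs decision live in the driver, so a custom listing implementation cannot accidentally skip them.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for org.apache.hadoop.tools.CopyListing.
abstract class CopyListing {
    abstract List<String> buildListing(String sourcePath);
}

class DistCpDriver {
    final CopyListing listing;
    boolean preserveRawXAttrs;  // stand-in for CONF_LABEL_PRESERVE_RAWXATTRS

    DistCpDriver(CopyListing listing) { this.listing = listing; }

    // EZ validation runs here, before any listing implementation executes.
    List<String> run(String sourcePath, boolean sourceInEncryptionZone) {
        if (sourceInEncryptionZone) {
            preserveRawXAttrs = true;
        }
        return listing.buildListing(sourcePath);
    }
}

public class DistCpValidationSketch {
    public static void main(String[] args) {
        // A user-supplied listing gets the validated conf "for free".
        CopyListing custom = new CopyListing() {
            @Override List<String> buildListing(String src) { return Arrays.asList(src + "/a"); }
        };
        DistCpDriver driver = new DistCpDriver(custom);
        driver.run("/ez/dir", true);
        System.out.println(driver.preserveRawXAttrs);  // true
    }
}
```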
 






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441291#comment-16441291
 ] 

Lukas Majercak commented on HDFS-13462:
---

Patch002 to fix the other test failures.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch, 
> HDFS-13462.002.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.
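For reference, an hdfs-site.xml fragment using bind-host keys of this kind. The key names below follow the naming convention of the analogous NameNode/DataNode settings and are assumptions about what this patch introduces; verify them against the committed hdfs-default.xml before relying on them.

```xml
<!-- Bind the JournalNode's RPC and HTTP servers to all interfaces while
     the advertised addresses stay unchanged. -->
<property>
  <name>dfs.journalnode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.journalnode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```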






[jira] [Updated] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13462:
--
Attachment: HDFS-13462.002.patch







[jira] [Comment Edited] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441280#comment-16441280
 ] 

BELUGA BEHR edited comment on HDFS-13448 at 4/17/18 6:09 PM:
-

[~daryn] Thank you for the review.

 
I chose to place it in this location because {{AddBlockFlag.NO_LOCAL_WRITE}} 
is there as well; I wanted to keep the logic in the same code area.  Perhaps 
there is a dialogue to be had about the best place to enforce these flags 
holistically, but it's not altogether unreasonable to think that the flags 
should be handled within the block placement logic... perhaps someone chooses to 
write an implementation that always ignores these flags or overloads them to 
mean different things.  In such a scenario, their implementation would never 
see the source node if the flags were enforced at the higher level.

If we want to move the location of these flags, I would suggest a new JIRA to 
perhaps move both of them.

Sorry about the extra stuff that sneaked in there.

The failed unit tests do not appear to me to be related.


was (Author: belugabehr):
[~daryn] Thank you for the review.

 
I choose to place it in this location because {{AddBlockFlag.NO_LOCAL_WRITE}} 
is there as well.  I wanted to keep the logic in the same code area.  Perhaps 
there is a dialogue to be had about the best place to enforce these flags 
holistically, but it's not altogether unreasonable to think that the flags 
should handled within the block placement logic... perhaps someone chooses to 
write an implementation that always ignores these flags or overloads them to 
mean different things.  In such a scenario, their implementation would never 
see the source node if the flags were enforced at the higher level.

Sorry about the extra stuff that sneaked in there.

The failed unit tests do not appear to me to be related.







[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441280#comment-16441280
 ] 

BELUGA BEHR commented on HDFS-13448:


[~daryn] Thank you for the review.

 
I chose to place it in this location because {{AddBlockFlag.NO_LOCAL_WRITE}} 
is there as well; I wanted to keep the logic in the same code area.  Perhaps 
there is a dialogue to be had about the best place to enforce these flags 
holistically, but it's not altogether unreasonable to think that the flags 
should be handled within the block placement logic... perhaps someone chooses to 
write an implementation that always ignores these flags or overloads them to 
mean different things.  In such a scenario, their implementation would never 
see the source node if the flags were enforced at the higher level.

Sorry about the extra stuff that sneaked in there.

The failed unit tests do not appear to me to be related.







[jira] [Updated] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch

2018-04-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13422:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~ljain] for the contribution and all for the reviews. I've tested it 
locally and committed the patch to the feature branch. 

> Ozone: Fix whitespaces and license issues in HDFS-7240 branch
> -
>
> Key: HDFS-13422
> URL: https://issues.apache.org/jira/browse/HDFS-13422
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13422-HDFS-7240.001.patch, 
> HDFS-13422-HDFS-7240.002.patch
>
>
> This jira will be used to fix various findbugs, javac, whitespace, and 
> license issues in the HDFS-7240 branch.






[jira] [Commented] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441260#comment-16441260
 ] 

Wei Yan commented on HDFS-13466:


Uploaded [^HDFS-13466.001.patch]; it will look like pic.png.

BTW, not sure whether there is a bug there or not, but "bin/hdfs dfsrouteradmin 
-safemode -enter" does not seem to work. I'm looking into it and will fix it in 
another JIRA if there is an issue.







[jira] [Updated] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13466:
---
Status: Patch Available  (was: Open)







[jira] [Updated] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13466:
---
Attachment: pic.png







[jira] [Updated] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HDFS-13466:
---
Attachment: HDFS-13466.001.patch

> RBF: Add more router-related information to the UI
> --
>
> Key: HDFS-13466
> URL: https://issues.apache.org/jira/browse/HDFS-13466
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13466.001.patch, pic.png
>
>
> Currently in NameNode UI, the Summary part also includes information:
> {noformat}
> Security is off.
> Safemode is off.
>  files and directories, * blocks =  total filesystem object(s).
> Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
> Non Heap Memory used  MB of  MB Committed Non Heap Memory. Max Non 
> Heap Memory is .
> {noformat}
> We could add similar information for the Router, for better visibility.






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441258#comment-16441258
 ] 

genericqa commented on HDFS-13448:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  8s{color} | {color:orange} root: The patch generated 3 new + 475 unchanged 
- 0 fixed = 478 total (was 475) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 14s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919421/HDFS-13448.3.patch |
| 

[jira] [Commented] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441218#comment-16441218
 ] 

Daryn Sharp commented on HDFS-13433:


This does not appear to be right.  {{testFederationWithoutHa}} illustrates the 
problem.  Just because the NN has an nsId doesn't mean it overrides the 
defaultFS authority.

I'm not aware of all the ways the defaultFS is used during NN startup, but its 
authority should probably be exactly what's sent in the redirect.

> webhdfs requests can be routed incorrectly in federated cluster
> ---
>
> Key: HDFS-13433
> URL: https://issues.apache.org/jira/browse/HDFS-13433
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HDFS-13433.01.patch, HDFS-13433.02.patch, 
> HDFS-13433.03.patch, HDFS-13433.04.patch
>
>
> In the following HA+Federated setup with two nameservices ns1 and ns2:
> # ns1 -> namenodes nn1, nn2
> # ns2 -> namenodes nn3, nn4
> # fs.defaultFS is {{hdfs://ns1}}.
> A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because 
> {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} 
> using fs.defaultFS before the config is overriden.






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441217#comment-16441217
 ] 

Íñigo Goiri commented on HDFS-13443:


Thanks [~arshad.mohammad], do you mind adding a trunk patch too?
Yetus is not very good with branch-2 lately.

Some comments:
* Make the logs use the logger approach, for example in {{Router}}, we should 
do: {{LOG.info("{} service is enabled", 
MountTableRefreshService.class.getSimpleName());}}
* Make MOUNT_TABLE_CACHE_IMMEDIATE_UPDATE_MAX_TIME use 
{{Configuration#getTimeDuration()}} instead of {{getInt()}}.
* Typo in {{MountTableStore}}, "Can not refresh munt table cache as state store 
is not available"; we can also make it shorter to fit on one line, something like: 
"Cannot refresh mount table: state store not available".
* {{MountTableStore#updateCache()}} could have a slightly more descriptive name, 
like {{updateCacheAllRouters()}}.
* As this is a new feature, I would like to have it disabled by default 
(MOUNT_TABLE_CACHE_IMMEDIATE_UPDATE_DEFAULT to be false).

For {{MountTableRefreshService}}: 
* The javadoc should be a little more descriptive and define the cycle: get the 
routers, start the updater, wait, and so on.
* We could make {{new HashMap()}} just {{new 
HashMap<>()}}.
* The name for {{addressClientMap}} is not very clear; something like 
{{routerClientsCache}}?
* Should we make this map a proper cache with expiration and so on? Use one of 
the Guava caches that creates the client if not available and cleans entries up 
after a while.
* Not sure what would happen if we have a client and then the Router goes down; 
we should test cases where Routers leave after we set up the addressClientMap.
* Aren't we having a cascading effect that will never stop here? We update, 
then we have R1 triggering all the updates, the same for R2 and so on. I would 
add a counter for the number of updates and track this in the unit test (it 
should have something like 4 routers).
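The client-cache suggestion above can be sketched in a minimal, self-contained form. All names here are illustrative (this is plain java.util, not the actual patch or Guava's LoadingCache): clients are created on first use, and entries idle longer than a maximum age are evicted, so clients for Routers that have left the cluster do not linger.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hedged sketch of a per-Router client cache with expiration.
 * Names are invented for illustration, not taken from the patch.
 */
class RouterClientCache {
  /** Stand-in for the real per-Router admin client. */
  static class Client {
    final String address;
    Client(String address) { this.address = address; }
  }

  private static class Entry {
    final Client client;
    volatile long lastAccessMs;
    Entry(Client c, long now) { client = c; lastAccessMs = now; }
  }

  private final Map<String, Entry> cache = new ConcurrentHashMap<>();
  private final long maxAgeMs;

  RouterClientCache(long maxAgeMs) { this.maxAgeMs = maxAgeMs; }

  /** Create-on-miss lookup that refreshes the entry's last-access time. */
  Client get(String address, long nowMs) {
    Entry e = cache.compute(address, (addr, old) -> {
      if (old == null) {
        return new Entry(new Client(addr), nowMs);
      }
      old.lastAccessMs = nowMs;
      return old;
    });
    return e.client;
  }

  /** Evict entries idle longer than maxAgeMs; returns how many were dropped. */
  int evictExpired(long nowMs) {
    int before = cache.size();
    cache.values().removeIf(e -> nowMs - e.lastAccessMs > maxAgeMs);
    return before - cache.size();
  }

  int size() { return cache.size(); }
}
```

A real implementation would likely delegate the expiration bookkeeping to a library cache (e.g. Guava, as suggested above) rather than hand-rolling it.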

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache is 
> updated every minute. After a change in the mount table, user operations may still 
> use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add refresh API in MountTableManager which will update mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server can 
> update its cache and ask the other routers to update their caches*. For example, if 
> there are three routers R1, R2, R3 in a cluster, then the add mount table entry API, 
> at the admin server side, will perform the following sequence of actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.
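The proposed fan-out can be sketched as follows. This is a hedged illustration with made-up names, not the real Router admin API: the admin persists the entry, refreshes the remote Routers, refreshes its own cache, and only then responds.

```java
import java.util.List;

/**
 * Hedged sketch of the proposed refresh sequence; the Router interface and
 * method names are invented for illustration.
 */
class MountTableRefreshSketch {
  /** Stand-in for the refresh RPC each Router would expose. */
  interface Router {
    void refreshMountTableCache();
  }

  /** Admin-side add: persist, fan out refreshes, refresh locally, respond. */
  static List<String> addEntry(String src, Router local, List<Router> others,
                               List<String> log) {
    log.add("store:" + src);            // step 2: write to the state store
    for (Router r : others) {           // steps 3-4: refresh R2, R3, ...
      r.refreshMountTableCache();
    }
    local.refreshMountTableCache();     // step 5: refresh own cache
    log.add("done:" + src);             // step 6: respond to the user
    return log;
  }
}
```

Note that because only the Router handling the admin request initiates refreshes, remote Routers refreshing their caches must not trigger further fan-out, which is exactly the cascading concern raised in the review comments above.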






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441210#comment-16441210
 ] 

Lukas Majercak commented on HDFS-13462:
---

Added patch001 to fix TestHdfsConfigFields + checkstyle.

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.
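Assuming this follows the NameNode's existing {{dfs.namenode.rpc-bind-host}} convention, the configuration might look like the fragment below. The exact JournalNode property names are an assumption modeled on the NameNode's analogous settings and should be checked against the committed patch.

```xml
<!-- hdfs-site.xml: accept JournalNode connections on all interfaces.
     Property names are assumptions based on dfs.namenode.*-bind-host. -->
<property>
  <name>dfs.journalnode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.journalnode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```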






[jira] [Updated] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread Lukas Majercak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HDFS-13462:
--
Attachment: HDFS-13462.001.patch

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch, HDFS-13462.001.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.






[jira] [Commented] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441178#comment-16441178
 ] 

genericqa commented on HDFS-13129:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 24 unchanged - 1 fixed = 24 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919427/HDFS-13129.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d09f8248576 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23971/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23971/testReport/ |
| Max. process+thread count | 3867 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441174#comment-16441174
 ] 

Daryn Sharp commented on HDFS-13448:


I don't think another placement policy is needed, nor do I think the placement 
policy is where the change should be made.  This client-requested feature 
should be independent of the actual placement policy, i.e. it shouldn't be 
necessary for every policy to understand this feature.

Not telling the placement policy "where you are" effectively removes the 
ability for any locality.  Why not change 
{{FSDirWriteFileOp#chooseTargetForNewBlock}} to pass no clientNode to 
{{chooseTarget4NewBlock}}?

I agree; I don't think a config is necessary.

(Aside: please resist the urge to reformat code beyond the scope of the change.)

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to uneven block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only so far as now the first block replica will always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
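The requested behavior can be illustrated with a small self-contained sketch. This is not Hadoop's actual {{BlockPlacementPolicy}} API; the flag and method names are invented for illustration. With the flag off, a writer colocated with a DataNode always wins the first replica; with it on, the first replica is drawn uniformly from the whole cluster.

```java
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch of the proposed "ignore locality for the first
 * replica" behavior; names do not match Hadoop internals.
 */
class FirstReplicaSketch {
  static String chooseFirstReplica(String clientNode, List<String> cluster,
                                   boolean ignoreClientLocality, Random rng) {
    if (!ignoreClientLocality && cluster.contains(clientNode)) {
      return clientNode;                              // default: local write
    }
    return cluster.get(rng.nextInt(cluster.size()));  // proposed: any node
  }
}
```

Subsequent replicas would still follow the normal placement rules; only the first choice changes.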






[jira] [Created] (HDFS-13466) RBF: Add more router-related information to the UI

2018-04-17 Thread Wei Yan (JIRA)
Wei Yan created HDFS-13466:
--

 Summary: RBF: Add more router-related information to the UI
 Key: HDFS-13466
 URL: https://issues.apache.org/jira/browse/HDFS-13466
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Wei Yan
Assignee: Wei Yan


Currently in the NameNode UI, the Summary section includes information such as:
{noformat}
Security is off.
Safemode is off.
 files and directories, * blocks =  total filesystem object(s).
Heap Memory used  GB of  GB Heap Memory. Max Heap Memory is  GB.
Non Heap Memory used  MB of  MB Committed Non Heap Memory. Max Non Heap 
Memory is .
{noformat}
We could add similar information for the Router, for better visibility.






[jira] [Commented] (HDFS-13441) DataNode missed BlockKey update from NameNode due to HeartbeatResponse was dropped

2018-04-17 Thread yunjiong zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441110#comment-16441110
 ] 

yunjiong zhao commented on HDFS-13441:
--

[~hexiaoqiao], DataNode can't use NamenodeProtocol.

> DataNode missed BlockKey update from NameNode due to HeartbeatResponse was 
> dropped
> --
>
> Key: HDFS-13441
> URL: https://issues.apache.org/jira/browse/HDFS-13441
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 2.7.1
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>Priority: Major
> Attachments: HDFS-13441.002.patch, HDFS-13441.patch
>
>
> After NameNode failover, lots of applications failed because some DataNodes 
> couldn't re-compute the password from the block token.
> {code:java}
> 2018-04-11 20:10:52,448 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> hdc3-lvs01-400-1701-048.stratus.lvs.ebay.com:50010:DataXceiver error 
> processing unknown operation  src: /10.142.74.116:57404 dst: 
> /10.142.77.45:50010
> javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password 
> [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't 
> re-compute password for block_token_identifier (expiryDate=1523538652448, 
> keyId=1762737944, userId=hadoop, 
> blockPoolId=BP-36315570-10.103.108.13-1423055488042, blockId=12142862700, 
> access modes=[WRITE]), since the required block key (keyID=1762737944) 
> doesn't exist.]
>         at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:598)
>         at 
> com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslParticipant.evaluateChallengeOrResponse(SaslParticipant.java:115)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:376)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:300)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:127)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:194)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: Can't 
> re-compute password for block_token_identifier (expiryDate=1523538652448, 
> keyId=1762737944, userId=hadoop, 
> blockPoolId=BP-36315570-10.103.108.13-1423055488042, blockId=12142862700, 
> access modes=[WRITE]), since the required block key (keyID=1762737944) 
> doesn't exist.
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.retrievePassword(BlockTokenSecretManager.java:382)
>         at 
> org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.retrievePassword(BlockPoolTokenSecretManager.java:79)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.buildServerPassword(SaslDataTransferServer.java:318)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.access$100(SaslDataTransferServer.java:73)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer$2.apply(SaslDataTransferServer.java:297)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer$SaslServerCallbackHandler.handle(SaslDataTransferServer.java:241)
>         at 
> com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:589)
>         ... 7 more
> {code}
>  
> In the DataNode log, we didn't see the DataNode update its block keys around 
> 2018-04-11 09:55:00 or around 2018-04-11 19:55:00.
> {code:java}
> 2018-04-10 14:51:36,424 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-10 23:55:38,420 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-11 00:51:34,792 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-11 10:51:39,403 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-11 20:51:44,422 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-12 02:54:47,855 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> 2018-04-12 05:55:44,456 INFO 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager: Setting 
> block keys
> {code}
> The reason is there is 

[jira] [Commented] (HDFS-13451) Fix Some Potential NPE

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441106#comment-16441106
 ] 

Daryn Sharp commented on HDFS-13451:


I disagree with removing the possibility of NPEs by simply ignoring invalid 
states.  It will only further mask bugs. It will introduce more insanely 
difficult-to-root-cause bugs that would never have happened if the code had 
blown up early due to an invalid state that should be impossible.

The UGI auth method _cannot_ be PROXY w/o having a real user.  The UGI is in an 
illegal state that cannot be ignored.

When {{BlockInfo#getDatanode(int)}} returns null it's semantically equivalent 
to ArrayIndexOutOfBoundsException – which it arguably should be.  Treating this 
condition as "ok" and skippable only masks bugs.  The existing callers are 
being bad.

The offline image change appears to simply ignore what is effectively a 
corrupted image format.
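The two styles under debate can be shown in a self-contained sketch with hypothetical names: silently skipping a null hides the illegal state, while failing fast surfaces it at the point of corruption.

```java
import java.util.Objects;

/**
 * Hedged illustration of the two null-handling styles discussed above;
 * names are invented, not from the HDFS code.
 */
class NullHandlingSketch {
  /** Patch style: treat a null element as skippable. */
  static int countSkipping(String[] nodes) {
    int n = 0;
    for (String node : nodes) {
      if (node == null) {
        continue;  // masks how the null got there
      }
      n++;
    }
    return n;
  }

  /** Fail-fast style: blow up at the point of the invalid state. */
  static int countFailFast(String[] nodes) {
    int n = 0;
    for (String node : nodes) {
      Objects.requireNonNull(node, "datanode must not be null");
      n++;
    }
    return n;
  }
}
```

The fail-fast version turns a latent inconsistency into an immediate, debuggable exception, which is the trade-off argued for in the comment above.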

> Fix Some Potential NPE
> --
>
> Key: HDFS-13451
> URL: https://issues.apache.org/jira/browse/HDFS-13451
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: lujie
>Priority: Major
> Attachments: HDFS-13451_1.patch
>
>
> We have developed a static analysis tool 
> [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential 
> NPE. Our analysis shows that some callees may return null in corner cases (e.g. 
> node crash, IO exception); some of their callers have a _!=null_ check but 
> some do not. In this issue we post a patch which adds !=null checks based 
> on the existing !=null checks. For example:
> callee BlockInfo#getDatanode may return null:
> {code:java}
> public DatanodeDescriptor getDatanode(int index) {
>   DatanodeStorageInfo storage = getStorageInfo(index);
>   return storage == null ? null : storage.getDatanodeDescriptor();
> }
> {code}
> it has 4 callers; 3 of them have a !=null check, as in 
> CacheReplicationMonitor#addNewPendingCached:
> {code:java}
> DatanodeDescriptor datanode = blockInfo.getDatanode(i);
> if (datanode == null) {
>   continue;
> }
> {code}
> but the caller NamenodeFsck#blockIdCK has no !=null check, so we add a check 
> just like in CacheReplicationMonitor#addNewPendingCached:
> {code:java}
> DatanodeDescriptor dn = blockInfo.getDatanode(idx);
> if (dn == null) {
>   continue;
> }
> {code}
> Since we are not very familiar with HDFS, we hope an expert can review 
> it.
> Thanks






[jira] [Updated] (HDFS-13407) Ozone: Use separated version schema for Hdds/Ozone projects

2018-04-17 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13407:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~elek] for the contribution. I've tested it locally and committed the 
patch to the feature branch. 

> Ozone: Use separated version schema for Hdds/Ozone projects
> ---
>
> Key: HDFS-13407
> URL: https://issues.apache.org/jira/browse/HDFS-13407
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13407-HDFS-7240.001.patch, 
> HDFS-13407-HDFS-7240.002.patch
>
>
> The community voted to manage Hdds/Ozone in-tree but with a different 
> release cycle. To achieve this we need to separate the versions of 
> the hdds/ozone projects from the mainline Hadoop version (currently 3.2.0).






[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441094#comment-16441094
 ] 

Íñigo Goiri commented on HDFS-13326:


In that case, I would prefer splitting this JIRA in two:
* The part that doesn't break compatibility (adding the update part).
* The part that removes functionality from the add behavior.

This way, new patches will be easier to maintain and we can keep 2.9 and the 
other maintenance branches up to date.

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch
>
>
> In the DFSRouterAdmin command, the update logic is currently implemented 
> inside the add operation, which has some limitations (e.g. it cannot update 
> "readonly" or remove a destination). Given that the RPC layer already 
> separates the add and update operations, it would be better to do the same 
> at the command level.






[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-04-17 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441093#comment-16441093
 ] 

Wei Yan commented on HDFS-12615:


[~daryn] asked in another discussion how RBF supports file access using inode 
IDs. I'm not sure I understand the question correctly; I guess it's something 
related to HDFS-4489? Daryn, please correct me if I'm wrong here :)

[~elgoiri], do you have context about supporting this?

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).






[jira] [Commented] (HDFS-13462) Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441086#comment-16441086
 ] 

Íñigo Goiri commented on HDFS-13462:


 [^HDFS-13462.000.patch] looks good.
* We need to fix TestHdfsConfigFields
* A couple of unit tests seem suspicious; let's see if they fail in the next 
round too
* Fix the checkstyle warnings

> Add BIND_HOST configuration for JournalNode's HTTP and RPC Servers
> --
>
> Key: HDFS-13462
> URL: https://issues.apache.org/jira/browse/HDFS-13462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, journal-node
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HDFS-13462.000.patch
>
>
> Allow configurable bind-host for JournalNode's HTTP and RPC servers to allow 
> overriding the hostname for which the server accepts connections.
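If the patch follows the existing NameNode/DataNode convention for bind-host keys, the JournalNode configuration might look like the sketch below. The property names here are assumptions based on that convention, not confirmed from the attached patch.

```xml
<!-- Hypothetical hdfs-site.xml fragment: listen on all interfaces while
     clients still address the JournalNode by its advertised hostname. -->
<property>
  <name>dfs.journalnode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.journalnode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
```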






[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441082#comment-16441082
 ] 

Íñigo Goiri commented on HDFS-13369:


Not sure what the best approach is.
In principle I would go for a fix to {{RequestHedgingProxyProvider}} instead of 
fixing FSCK.

In any case, [^HDFS-13369.001.patch] just bypasses the issue rather than 
solving it.
This is equivalent to disabling the {{RequestHedgingProxyProvider}}; you could 
just set the client at the beginning of the test to the configured failover 
one and you would get the same behavior.

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider.
> 2. Write some files into the file system.
> 3. Take an FSCK report for the above files.
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  






[jira] [Comment Edited] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-04-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441039#comment-16441039
 ] 

Mukul Kumar Singh edited comment on HDFS-13398 at 4/17/18 3:52 PM:
---

Thanks for working on this [~ajaysachdev] and [~jcwik]. Please find my 
comments below.

1) The current patch does not apply right now. Can you please rebase it on the 
latest trunk?
2) Also, can you please upload the patch with the filename 
"HDFS-13398.001.patch"? Please follow the patch-naming guidelines at 
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch.
3) Can the config "fs.threads" be replaced with a command-line argument? I 
feel that will help in controlling the parallelization for each command. We 
can certainly have a default value for when it is not specified.
4) It would also be great if some unit tests could be added for the patch.



was (Author: msingh):
Thanks for working on this [~ajaysachdev]. Please find my comments below.

1) The current patch does not apply right now. Can you please rebase it on the 
latest trunk?
2) Also, can you please upload the patch with the filename 
"HDFS-13398.001.patch"? Please follow the patch-naming guidelines at 
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch.
3) Can the config "fs.threads" be replaced with a command-line argument? I 
feel that will help in controlling the parallelization for each command. We 
can certainly have a default value for when it is not specified.
4) It would also be great if some unit tests could be added for the patch.


> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen it take around 6 minutes for a 40K directory/file 
> structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du, and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use Java Executor Service to improve performance 
> to run listing operation in multiple threads in parallel. This has 
> significantly reduced the time to 40 secs from 6 mins.
>  
>  






[jira] [Comment Edited] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441057#comment-16441057
 ] 

Daryn Sharp edited comment on HDFS-13165 at 4/17/18 3:47 PM:
-

I've asked a few times why a new command (i.e. the newest name in this patch 
is {{DNA_MOVEBLOCK}}) is required to move blocks across storages instead of 
using the existing {{DNA_TRANSFER}}. I've heard mention of delHint issues, but 
I'm unclear why they are a problem.

The delHint is needed to avoid split-brain fights between the NN and the 
balancer.  Both the old and new locations may equally satisfy the block 
placement policy, so the delHint ensures the NN won't delete the new replica 
created by the balancer.  For a storage movement, the placement policy takes 
the storage policy into account while computing invalidations so the new 
location won't be invalidated.

I'm struggling to understand why any datanode changes are needed to move blocks 
between storages when the DN already has all the necessary functionality.  
Sure, DNA_TRANSFER could be optimized to short-circuit for local moves, ala 
DNA_REPLACE, by using {{FsDatasetSpi#moveBlockAcrossStorage}} but that's just 
an optimization.  Using the existing commands also avoids version 
incompatibilities.

What am I missing?

 


was (Author: daryn):
I've asked a few times why a new command (i.e. the newest name in this patch 
is {{DNA_MOVEBLOCK}}) is required to move blocks across storages instead of 
using the existing {{DNA_TRANSFER}}. I've heard mention of delHint issues, but 
I'm unclear why they are a problem.

The delHint is needed to avoid split-brain fights between the NN and the 
balancer.  Both the old and new locations may equally satisfy the storage 
placement policy, so the delHint ensures the NN won't delete the new replica 
created by the balancer.  For a storage movement, the placement policy takes 
SPS into account while computing invalidations so the new location won't be 
invalidated.

I'm struggling to understand why any datanode changes are needed to move blocks 
between storages when the DN already has all the necessary functionality.  
Sure, DNA_TRANSFER could be optimized to short-circuit for local moves, ala 
DNA_REPLACE, by using {{FsDatasetSpi#moveBlockAcrossStorage}} but that's just 
an optimization.  Using the existing commands also avoids version 
incompatibilities.

What am I missing?

 

> [SPS]: Collects successfully moved block details via IBR
> 
>
> Key: HDFS-13165
> URL: https://issues.apache.org/jira/browse/HDFS-13165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-13165-HDFS-10285-00.patch, 
> HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, 
> HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, 
> HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, 
> HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, 
> HDFS-13166-HDFS-10285-07.patch
>
>
> This task is to make use of the existing IBRs to get moved-block details 
> and to remove the unwanted future-tracking logic in the 
> BlockStorageMovementTracker code; it is no longer needed since file-level 
> tracking is maintained at the NN itself.
> Following comments taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk 
> of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's 
> list of futures is synchronized. However the run loop does an unsynchronized 
> block get, unsynchronized future remove, unsynchronized isEmpty, possibly 
> another unsynchronized get, only then does it do a synchronized remove of the 
> block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures 
> per-block for seemingly no reason. Why track all the futures at all instead 
> of just relying on the completion service? As best I can tell:
> It's only used to determine if a future from the completion service should be 
> ignored during shutdown. Shutdown sets the running boolean to false and 
> clears the entire datastructure so why not use the running boolean like a 
> check just a little further down?
> As synchronization to sleep up to 2 seconds before performing a blocking 
> moverCompletionService.take, but only when it thinks there are no active 
> futures. I'll ignore the missed notify 

[jira] [Commented] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441057#comment-16441057
 ] 

Daryn Sharp commented on HDFS-13165:


I've asked a few times why a new command (i.e. the newest name in this patch 
is {{DNA_MOVEBLOCK}}) is required to move blocks across storages instead of 
using the existing {{DNA_TRANSFER}}. I've heard mention of delHint issues, but 
I'm unclear why they are a problem.

The delHint is needed to avoid split-brain fights between the NN and the 
balancer.  Both the old and new locations may equally satisfy the storage 
placement policy, so the delHint ensures the NN won't delete the new replica 
created by the balancer.  For a storage movement, the placement policy takes 
SPS into account while computing invalidations so the new location won't be 
invalidated.

I'm struggling to understand why any datanode changes are needed to move blocks 
between storages when the DN already has all the necessary functionality.  
Sure, DNA_TRANSFER could be optimized to short-circuit for local moves, ala 
DNA_REPLACE, by using {{FsDatasetSpi#moveBlockAcrossStorage}} but that's just 
an optimization.  Using the existing commands also avoids version 
incompatibilities.

What am I missing?

 

> [SPS]: Collects successfully moved block details via IBR
> 
>
> Key: HDFS-13165
> URL: https://issues.apache.org/jira/browse/HDFS-13165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Attachments: HDFS-13165-HDFS-10285-00.patch, 
> HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, 
> HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, 
> HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, 
> HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, 
> HDFS-13166-HDFS-10285-07.patch
>
>
> This task is to make use of the existing IBRs to get moved-block details 
> and to remove the unwanted future-tracking logic in the 
> BlockStorageMovementTracker code; it is no longer needed since file-level 
> tracking is maintained at the NN itself.
> Following comments taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk 
> of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's 
> list of futures is synchronized. However the run loop does an unsynchronized 
> block get, unsynchronized future remove, unsynchronized isEmpty, possibly 
> another unsynchronized get, only then does it do a synchronized remove of the 
> block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures 
> per-block for seemingly no reason. Why track all the futures at all instead 
> of just relying on the completion service? As best I can tell:
> It's only used to determine if a future from the completion service should be 
> ignored during shutdown. Shutdown sets the running boolean to false and 
> clears the entire datastructure so why not use the running boolean like a 
> check just a little further down?
> As synchronization to sleep up to 2 seconds before performing a blocking 
> moverCompletionService.take, but only when it thinks there are no active 
> futures. I'll ignore the missed notify race that the bounded wait masks, but 
> the real question is why not just do the blocking take?
> Why all the complexity? Am I missing something?
> BlocksMovementsStatusHandler
> Suffers same type of thread safety issues as StoragePolicySatisfyWorker. Ex. 
> blockIdVsMovementStatus is inconsistent synchronized. Does synchronize to 
> return an unmodifiable list which sadly does nothing to protect the caller 
> from CME.
> handle is iterating over a non-thread safe list.
> {quote}
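The reviewer's suggestion above (drop the per-block future map and rely on the completion service plus the running flag) can be sketched roughly as follows. Class and method names are illustrative stand-ins, not code from the patch:

```java
import java.util.concurrent.*;

public class MoverLoopSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // The completion service already queues finished tasks; no extra
    // per-block future bookkeeping is needed.
    private final CompletionService<String> completion =
        new ExecutorCompletionService<>(pool);
    private volatile boolean running = true;

    public void submit(Callable<String> moveTask) {
        completion.submit(moveTask);
    }

    // Blocking take guarded by the running flag, as suggested in the review.
    public String takeOneResult() throws InterruptedException, ExecutionException {
        Future<String> done = completion.take(); // blocks until a task finishes
        if (!running) {
            return null; // shutting down: discard the result
        }
        return done.get();
    }

    public void shutdown() {
        running = false;
        pool.shutdownNow();
    }
}
```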






[jira] [Commented] (HDFS-13326) RBF: Improve the interfaces to modify and view mount tables

2018-04-17 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441052#comment-16441052
 ] 

Wei Yan commented on HDFS-13326:


{quote}But one thing I'm concerned about is that this will be an incompatible 
change. That is to say, using the -add option to update mount tables, as some 
users are used to doing, won't work any more; they should use -update instead. 
We would like to add an incompatible tag for this JIRA and make this change 
only for branch-2 and trunk.
{quote}
Agree. Will do that when committing.

> RBF: Improve the interfaces to modify and view mount tables
> ---
>
> Key: HDFS-13326
> URL: https://issues.apache.org/jira/browse/HDFS-13326
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
> Attachments: HDFS-13326.000.patch, HDFS-13326.001.patch
>
>
> In the DFSRouterAdmin command, the update logic is currently implemented 
> inside the add operation, which has some limitations (e.g. it cannot update 
> "readonly" or remove a destination). Given that the RPC layer already 
> separates the add and update operations, it would be better to do the same 
> at the command level.






[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow

2018-04-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441039#comment-16441039
 ] 

Mukul Kumar Singh commented on HDFS-13398:
--

Thanks for working on this [~ajaysachdev]. Please find my comments below.

1) The current patch does not apply right now. Can you please rebase it on the 
latest trunk?
2) Also, can you please upload the patch with the filename 
"HDFS-13398.001.patch"? Please follow the patch-naming guidelines at 
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch.
3) Can the config "fs.threads" be replaced with a command-line argument? I 
feel that will help in controlling the parallelization for each command. We 
can certainly have a default value for when it is not specified.
4) It would also be great if some unit tests could be added for the patch.


> Hdfs recursive listing operation is very slow
> -
>
> Key: HDFS-13398
> URL: https://issues.apache.org/jira/browse/HDFS-13398
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen it take around 6 minutes for a 40K directory/file 
> structure.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du, and count operations.
> We have tried a ForkJoinPool implementation to improve performance for 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use Java Executor Service to improve performance 
> to run listing operation in multiple threads in parallel. This has 
> significantly reduced the time to 40 secs from 6 mins.
>  
>  
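As a rough illustration of the Executor Service approach described in this issue (applied here to a local java.nio.file tree rather than HDFS, so it is runnable standalone and is not the attached patch), one thread-pool task per directory can fan out the traversal; a Phaser tracks outstanding tasks:

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelLister {
    // Recursively lists all paths under root, submitting one task per directory.
    static List<Path> listRecursive(Path root, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Path> results = Collections.synchronizedList(new ArrayList<>());
        Phaser phaser = new Phaser(1); // main thread is one party
        walk(root, pool, results, phaser);
        phaser.arriveAndAwaitAdvance(); // wait for every directory task
        pool.shutdown();
        return results;
    }

    private static void walk(Path dir, ExecutorService pool,
                             List<Path> results, Phaser phaser) {
        phaser.register(); // one party per in-flight directory task
        pool.execute(() -> {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path p : stream) {
                    results.add(p);
                    if (Files.isDirectory(p)) {
                        walk(p, pool, results, phaser); // fan out per subdirectory
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            } finally {
                phaser.arriveAndDeregister();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("plist");
        Files.createDirectories(tmp.resolve("a/b"));
        Files.createFile(tmp.resolve("a/f1"));
        Files.createFile(tmp.resolve("a/b/f2"));
        List<Path> all = listRecursive(tmp, 4);
        System.out.println(all.size()); // prints 4: a, a/f1, a/b, a/b/f2
    }
}
```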






[jira] [Commented] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440926#comment-16440926
 ] 

Mukul Kumar Singh commented on HDFS-13129:
--

Thanks for the review [~ajayydv]. I have addressed the review comments in the 
v3 patch.

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.






[jira] [Updated] (HDFS-13129) Add a test for DfsAdmin refreshSuperUserGroupsConfiguration

2018-04-17 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-13129:
-
Attachment: HDFS-13129.003.patch

> Add a test for DfsAdmin refreshSuperUserGroupsConfiguration
> ---
>
> Key: HDFS-13129
> URL: https://issues.apache.org/jira/browse/HDFS-13129
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HDFS-13129.001.patch, HDFS-13129.002.patch, 
> HDFS-13129.003.patch
>
>
> UserGroup can be refreshed using -refreshSuperUserGroupsConfiguration. This 
> jira will add a test to verify that the user group information is updated 
> correctly.






[jira] [Created] (HDFS-13465) Overlapping lease recoveries cause NPE in NN

2018-04-17 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-13465:
--

 Summary: Overlapping lease recoveries cause NPE in NN
 Key: HDFS-13465
 URL: https://issues.apache.org/jira/browse/HDFS-13465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.0
Reporter: Daryn Sharp


Overlapping lease recoveries for the same file will NPE in the DatanodeManager 
while creating LeaseRecoveryCommands, possibly losing other recovery commands.
 * client1 calls recoverLease; the file is added to DN1's recovery queue
 * client2 calls recoverLease; the file is added to DN2's recovery queue
 * one DN heartbeats, gets the block recovery command, and completes the 
synchronization before the other DN heartbeats; i.e. the file is closed
 * the other DN heartbeats, takes the block from its recovery queue, assumes 
it's still under construction, and gets an NPE calling getExpectedLocations

{code:java}
//check lease recovery
BlockInfo[] blocks = nodeinfo.getLeaseRecoveryCommand(Integer.MAX_VALUE);
if (blocks != null) {
  BlockRecoveryCommand brCommand = new BlockRecoveryCommand(
  blocks.length);
  for (BlockInfo b : blocks) {
BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
assert uc != null;
final DatanodeStorageInfo[] storages = uc.getExpectedStorageLocations();
{code}
This is "ok" for the NN state if only one block was queued.  All recoveries 
are lost if multiple blocks were queued.  Recovery will not occur until the 
client explicitly retries or the lease monitor recovers the lease.
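A defensive fix, sketched below with minimal stand-in classes since this is not an actual patch for this issue, would be to skip blocks whose under-construction feature has already been cleared instead of asserting it is non-null:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for BlockInfo; the real class lives in the NameNode.
class BlockInfo {
    private final Object ucFeature; // null once the file has been closed
    BlockInfo(Object ucFeature) { this.ucFeature = ucFeature; }
    Object getUnderConstructionFeature() { return ucFeature; }
}

public class LeaseRecoverySketch {
    // Build the recovery list, skipping blocks already committed/closed
    // instead of asserting the feature is non-null (the NPE in this report).
    static List<BlockInfo> blocksToRecover(BlockInfo[] queued) {
        List<BlockInfo> recoveries = new ArrayList<>();
        for (BlockInfo b : queued) {
            if (b.getUnderConstructionFeature() == null) {
                continue; // file was closed by an earlier recovery; nothing to do
            }
            recoveries.add(b);
        }
        return recoveries;
    }
}
```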






[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Attachment: HDFS-13448.3.patch

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch
>
>
> According to the HDFS block placement rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then the 
> default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now 
> always be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
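The hot-spotting effect described above can be illustrated with a toy simulation (illustrative only; it uses none of the real placement code): under a local-first policy every first replica lands on the writer's node, while a random-first policy spreads them evenly.

```java
import java.util.Random;

public class PlacementSkewDemo {
    // Count first-replica placements per node for `blocks` writes from a
    // client pinned to node 0, under local-first vs random-first policies.
    static int[] place(int nodes, int blocks, boolean randomFirst, long seed) {
        Random rnd = new Random(seed);
        int[] perNode = new int[nodes];
        for (int i = 0; i < blocks; i++) {
            int target = randomFirst ? rnd.nextInt(nodes) : 0; // node 0 = writer
            perNode[target]++;
        }
        return perNode;
    }

    public static void main(String[] args) {
        int[] localFirst = place(10, 10_000, false, 42);
        int[] randomFirst = place(10, 10_000, true, 42);
        System.out.println("local-first, node 0:  " + localFirst[0]);  // 10000
        System.out.println("random-first, node 0: " + randomFirst[0]); // roughly 1000
    }
}
```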






[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Patch Available  (was: Open)

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch
>
>
> According to the HDFS block placement rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when the {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  Where this comes into play is where you have, for example, a flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default, the DataNode 
> local to the Flume agent will always get the first block replica and this 
> leads to un-even block placements, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
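As an illustrative aside, the hot-spotting described above is easy to reproduce with a toy simulation (this is not Hadoop code: the node names, the 10-node cluster, and the simplified policy below are all hypothetical, and rack awareness is ignored). When the first replica always goes to the writer's DataNode, that node receives every first replica; picking the first node at random spreads them out:

```python
import random
from collections import Counter

def place_block(nodes, writer, no_local_write=False, ignore_locality=False):
    """Pick 3 replica nodes for one block (greatly simplified policy)."""
    if ignore_locality:                      # proposed new behavior: fully random first replica
        first = random.choice(nodes)
    elif no_local_write:                     # NO_LOCAL_WRITE-like: skip only the writer's node
        first = random.choice([n for n in nodes if n != writer])
    else:                                    # default: writer's own DataNode
        first = writer
    # Remaining replicas go to two other distinct nodes (rack awareness ignored).
    rest = random.sample([n for n in nodes if n != first], 2)
    return [first] + rest

random.seed(42)
nodes = [f"dn{i}" for i in range(10)]
default_counts, random_counts = Counter(), Counter()
for _ in range(1000):
    default_counts[place_block(nodes, "dn0")[0]] += 1
    random_counts[place_block(nodes, "dn0", ignore_locality=True)[0]] += 1

print(default_counts["dn0"])          # 1000: the writer's node takes every first replica
print(sorted(random_counts.items()))  # first replicas spread across all nodes
```

With the default policy the writer's node is chosen for the first replica 100% of the time, which is exactly the fill-up imbalance the proposed flag is meant to avoid.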



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-13448:
---
Status: Open  (was: Patch Available)

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 3.0.1, 2.9.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch, 
> HDFS-13448.3.patch
>
>






[jira] [Comment Edited] (HDFS-13404) RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails

2018-04-17 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440310#comment-16440310
 ] 

Takanobu Asanuma edited comment on HDFS-13404 at 4/17/18 1:58 PM:
--

Hi, [~szetszwo]. Could you please advise me about WebHDFS?

From HDFS-13353 and this jira, {{WebHdfsFileSystem}} seems to have a little 
latency to complete some operations after executing the APIs. Is this the 
specification of {{WebHdfsFileSystem}}, or undesired behavior? I added 
{{fs.contract.create-visibility-delayed=true}} to the contract of WebHDFS in 
HDFS-13353. But on second thought, it might be wrong.


was (Author: tasanuma0829):
Hi, [~szetszwo]. Could you please advise me about WebHDFS?

From HDFS-13353 and this jira, WebHDFS seems to have a little latency to 
complete some operations after executing the APIs. Is this the specification 
of WebHDFS, or undesired behavior? I added 
{{fs.contract.create-visibility-delayed=true}} to the contract of WebHDFS in 
HDFS-13353. But on second thought, it might be wrong.
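For reference, the contract option mentioned above would be set in the filesystem contract options file using the standard Hadoop configuration XML format (the exact contract file for WebHDFS/router is not shown here):

```xml
<property>
  <name>fs.contract.create-visibility-delayed</name>
  <value>true</value>
</property>
```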

> RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails
> --
>
> Key: HDFS-13404
> URL: https://issues.apache.org/jira/browse/HDFS-13404
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: detailed_error.log
>
>
> This is reported by [~elgoiri].
> {noformat}
> java.io.FileNotFoundException: 
> Failed to append to non-existent file /test/test/target for client 127.0.0.1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:104)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2621)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ...
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:549)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:527)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:1013)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractAppendTest.testRenameFileBeingAppended(AbstractContractAppendTest.java:139)
> {noformat}






[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-04-17 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440877#comment-16440877
 ] 

BELUGA BEHR commented on HDFS-13448:


... not to mention that including this feature requires a client change to 
provide the flags.  It seems weird to provide flags in the default client that 
are ignored by the server.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.1.patch, HDFS-13448.2.patch
>
>






[jira] [Updated] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch

2018-04-17 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13422:
---
Attachment: HDFS-13422-HDFS-7240.002.patch

> Ozone: Fix whitespaces and license issues in HDFS-7240 branch
> -
>
> Key: HDFS-13422
> URL: https://issues.apache.org/jira/browse/HDFS-13422
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13422-HDFS-7240.001.patch, 
> HDFS-13422-HDFS-7240.002.patch
>
>
> This jira will be used to fix various findbugs, javac, whitespace, and license 
> issues in the HDFS-7240 branch.






[jira] [Updated] (HDFS-13422) Ozone: Fix whitespaces and license issues in HDFS-7240 branch

2018-04-17 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13422:
---
Attachment: (was: HDFS-13422-HDFS-7240.002.patch)

> Ozone: Fix whitespaces and license issues in HDFS-7240 branch
> -
>
> Key: HDFS-13422
> URL: https://issues.apache.org/jira/browse/HDFS-13422
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-13422-HDFS-7240.001.patch, 
> HDFS-13422-HDFS-7240.002.patch
>
>
> This jira will be used to fix various findbugs, javac, whitespace, and license 
> issues in the HDFS-7240 branch.






[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-17 Thread Mohammad Arshad (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440853#comment-16440853
 ] 

Mohammad Arshad commented on HDFS-13443:


Submitting new patch HDFS-13443-branch-2.002.patch with the following changes:
# removed the force parameter from the refresh API; the cache is now always 
updated forcefully
# added headers in MountTableRefreshService and MountTableRefreshThread

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache is 
> updated every minute. After a change in the mount table, user operations may still 
> use the old mount table. This is incorrect.
> To update the mount table cache, we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table cache.*
>  * *When there is a change in the mount table entries, the router admin server can 
> update its cache and ask the other routers to update their caches.* For example, if 
> there are three routers R1, R2, R3 in a cluster, then the add-mount-table-entry API, 
> at the admin server side, will perform the following sequence of actions:
>  ## user submits an add-mount-table-entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add-mount-table-entry response is sent back to the user
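The sequence above can be sketched with a toy Python model (purely illustrative: {{Router}}, {{refresh}}, and the in-memory state store here are hypothetical stand-ins, not the actual RBF classes, and the refresh calls that would be RPCs in practice are plain method calls):

```python
class Router:
    """Toy model of a router holding a local mount-table cache."""
    def __init__(self, name, state_store):
        self.name = name
        self.state_store = state_store       # shared, authoritative store
        self.cache = dict(state_store)       # local snapshot of the mount table

    def refresh(self):
        # Reload the local cache from the shared state store.
        self.cache = dict(self.state_store)

def add_mount_entry(admin, peers, src, dest):
    """Admin-side add: persist the entry, fan out refresh, refresh self."""
    admin.state_store[src] = dest            # 2. persist in the state store
    for peer in peers:                       # 3-4. ask the other routers to refresh
        peer.refresh()
    admin.refresh()                          # 5. admin refreshes its own cache
    return "OK"                              # 6. respond to the user

state_store = {}                             # shared by all routers
r1, r2, r3 = (Router(n, state_store) for n in ("R1", "R2", "R3"))
add_mount_entry(r1, [r2, r3], "/data", "hdfs://ns1/data")
print(r3.cache)  # {'/data': 'hdfs://ns1/data'}
```

In the real implementation the per-peer refresh calls would be remote invocations, so a failure to reach one router would need to be handled without failing the whole add operation.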






[jira] [Updated] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-04-17 Thread Mohammad Arshad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HDFS-13443:
---
Attachment: HDFS-13443-branch-2.002.patch

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-branch-2.001.patch, 
> HDFS-13443-branch-2.002.patch
>
>





