[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-07-27 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894619#comment-16894619
 ] 

Yiqun Lin commented on HDFS-14313:
--

Thanks for updating the patch, [~leosun08]. Some review comments for the patch:

*FSCachingGetSpaceUsed*
 Line 35: Can we update the annotation to {{@InterfaceAudience.Private}}, since 
this class is only intended for the HDFS module?
 Line 38: Remove the public keyword from the LOG instance.
 Line 53: Add the final keyword to the FsVolumeImpl variable.
 Line 54: Add the final keyword here too.
 Line 75: We no longer pass the config to use the threshold time, so do we still 
need to override this method? If not, the change made in
 GetSpaceUsed can also be reverted.

*FsDatasetImpl*
{noformat}
FsVolumeList#addBlockPool -> FsVolumeImpl#addBlockPool -> new BlockPoolSlice -> 
FsDatasetImpl#deepCopyReplica. If deepCopyReplica use datasetock, it appears 
deadlock.
{noformat}
This comment can be simplified to: 'The deepCopyReplica call doesn't use the 
dataset lock since it would lead to a potential deadlock with the
{@link FsVolumeList#addBlockPool} call.'

*ReplicaCachingGetSpaceUsed*
 Line 41:
{noformat}
To use set fs.getspaceused.classname to 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed
 in your core-site.xml.
{noformat}
can be updated to something more readable:
{noformat}
Set fs.getspaceused.classname to 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed
 in core-site.xml to enable this class.
{noformat}
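For reference, the resulting core-site.xml entry would look roughly like this (only the property name and class name come from the discussion above; the snippet itself is illustrative):
{noformat}
<property>
  <name>fs.getspaceused.classname</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed</value>
</property>
{noformat}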
Line 46: Update the annotation to {{@InterfaceAudience.Private}}.
 Line 49: Remove the public keyword.
 Line 51: Add the final keyword.
 Line 52: Add the final keyword.
 Line 76: I would prefer to define a static final variable instead of hard-coding 
the value in ReplicaCachingGetSpaceUsed.
 Line 77: Can we use parameterized logging? For example, 
LOG.debug("BlockPoolId: {}, replicas size: {}, copy replicas duration: {}ms.", 
obj1, obj2, ...); see the short sketch after this list.
 Line 95: The same comment as for Line 76.
 Line 96: The same comment as for Line 77.
 Line 101: Update replicaCachingGetSpaceUsed to ReplicaCachingGetSpaceUsed.
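As a minimal sketch of the parameterized logging style suggested for Lines 77 and 96 (the class, method, and variable names here are placeholders, not the actual code in the patch):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CopyTimingLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CopyTimingLogSketch.class);

  // bpid, replicaCount and durationMs stand in for the real values in the patch.
  static void logCopy(String bpid, int replicaCount, long durationMs) {
    // Parameterized form: no string concatenation happens when DEBUG is disabled.
    LOG.debug("BlockPoolId: {}, replicas size: {}, copy replicas duration: {}ms.",
        bpid, replicaCount, durationMs);
  }
}
{code}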

*TestReplicaCachingGetSpaceUsed*
 Line 42: Please add a class-level comment describing what this class tests.
 Line 51: We can use the Configuration#setClass call to set the implementation class:
{noformat}
conf.setClass("fs.getspaceused.classname", ReplicaCachingGetSpaceUsed.class, 
CachingGetSpaceUsed.class);
{noformat}
Line 61: It would be good to clean up the HDFS path we created for the 
test.
 Line 69: As I have mentioned before, can we add a comparison against the DU 
impl class? Most of the lines can be reused for the two GetSpaceUsed impl 
classes: just pass a different key value, restart the mini cluster, and 
compare the used space (see the sketch after this list).
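A rough sketch of the parameterized setup suggested for Line 69; the helper name and structure are assumptions, not the actual test code:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CachingGetSpaceUsed;
import org.apache.hadoop.fs.DU;

class SpaceUsedTestConfigSketch {
  // Build the configuration for one implementation under test. Running the same
  // test body once with DU.class and once with ReplicaCachingGetSpaceUsed.class
  // (restarting the mini cluster in between) lets the used-space values be compared.
  static Configuration confFor(Class<? extends CachingGetSpaceUsed> impl) {
    Configuration conf = new Configuration();
    conf.setClass("fs.getspaceused.classname", impl, CachingGetSpaceUsed.class);
    return conf;
  }

  static Configuration duConf() {
    return confFor(DU.class);
  }
}
{code}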

 

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch
>
>
> There are two existing ways (DU and DF) of getting the used space, and both are 
> insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting the HDFS used space from the ReplicaInfos in FsDatasetImpl#volumeMap in 
> memory has very low overhead and is accurate. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894614#comment-16894614
 ] 

Hudson commented on HDFS-14660:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16995 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16995/])
HDFS-14660. [SBN Read] ObserverNameNode should throw StandbyException 
(ayushsaxena: rev 02bd02b5af761b6b24fdc4e8e7ede72a51870d5b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GlobalStateIdContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConsistentReadsObserver.java


> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 
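For illustration only, the check described above could look roughly like the following; the helper, its parameters, and the message are assumptions and are not taken from the committed patch:
{code:java}
import org.apache.hadoop.ipc.StandbyException;

class ObserverStateIdCheckSketch {
  // Hypothetical helper: reject calls whose RPC header carries no client state id,
  // since only ObserverReadProxyProvider-based clients set one.
  static void checkClientStateId(boolean stateIdSet, boolean isObserver)
      throws StandbyException {
    if (isObserver && !stateIdSet) {
      throw new StandbyException(
          "Observer node received a request without a client state id; "
              + "requests not sent through ObserverReadProxyProvider are rejected.");
    }
  }
}
{code}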



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14660:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894613#comment-16894613
 ] 

Ayush Saxena commented on HDFS-14660:
-

Committed to trunk.
Thanx [~csun] for the contribution.
[~Harsha1206] and [~xkrogen] for the reviews!!!

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894603#comment-16894603
 ] 

Ayush Saxena commented on HDFS-14660:
-

v004 LGTM +1
Committing Shortly!!!

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14449) Expose total number of dt in jmx for Namenode

2019-07-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894597#comment-16894597
 ] 

Íñigo Goiri commented on HDFS-14449:


No need to change all the asserts, just the new ones; up to you.
In any case, if we go for this, we should switch {{assertTrue(null != 
dtSecretManager.retrievePassword(identifier));}} to {{assertNotNull}}.
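For illustration, the suggested switch would look like this; the retrieve() helper below is only a placeholder for dtSecretManager.retrievePassword(identifier):
{code:java}
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

class AssertStyleSketch {
  // Placeholder for dtSecretManager.retrievePassword(identifier).
  static byte[] retrieve() {
    return new byte[] {1, 2, 3};
  }

  static void oldStyle() {
    assertTrue(null != retrieve()); // harder to read, weaker failure message
  }

  static void preferredStyle() {
    assertNotNull(retrieve()); // the suggested replacement
  }
}
{code}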

> Expose total number of dt in jmx for Namenode
> -
>
> Key: HDFS-14449
> URL: https://issues.apache.org/jira/browse/HDFS-14449
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-14449.001.patch, HDFS-14449.002.patch, 
> HDFS-14449.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-07-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894596#comment-16894596
 ] 

Íñigo Goiri commented on HDFS-14461:


Correct, my only requirement is to not mix.
I had multiple JIRAs with comments in both threads and it's quite hard to 
follow.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  

[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894512#comment-16894512
 ] 

Hadoop QA commented on HDFS-14660:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14660 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976056/HDFS-14660.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 50bb8a6fffc4 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fe450c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27319/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27319/testReport/ |
| Max. process+thread count | 4345 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27319/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This 

[jira] [Commented] (HDFS-7868) Use proper blocksize to choose target for blocks

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894483#comment-16894483
 ] 

Hadoop QA commented on HDFS-7868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-7868 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-7868 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12702957/HDFS-7868-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27320/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use proper blocksize to choose target for blocks
> 
>
> Key: HDFS-7868
> URL: https://issues.apache.org/jira/browse/HDFS-7868
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7868-001.patch
>
>
> In BlockPlacementPolicyDefault.java:isGoodTarget, the passed-in blockSize is 
> used to determine if there is enough room for a new block on a data node. 
> However, in two conditions the blockSize might not be suitable for this purpose: 
> (a) the passed-in block size is just the size of the last block of a file, 
> which might be very small (e.g., when called from 
> BlockManager.ReplicationWork.chooseTargets); (b) the file might have been 
> created with a smaller blocksize.
> In these conditions, the calculated scheduledSize might be smaller than the 
> actual value, which might eventually lead to failures in writing or 
> replication.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-7868) Use proper blocksize to choose target for blocks

2019-07-27 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-7868:
-

Assignee: Lisheng Sun  (was: zhouyingchao)

> Use proper blocksize to choose target for blocks
> 
>
> Key: HDFS-7868
> URL: https://issues.apache.org/jira/browse/HDFS-7868
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: Lisheng Sun
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7868-001.patch
>
>
> In BlockPlacementPolicyDefault.java:isGoodTarget, the passed-in blockSize is 
> used to determine if there is enough room for a new block on a data node. 
> However, in two conditions the blockSize might not be suitable for this purpose: 
> (a) the passed-in block size is just the size of the last block of a file, 
> which might be very small (e.g., when called from 
> BlockManager.ReplicationWork.chooseTargets); (b) the file might have been 
> created with a smaller blocksize.
> In these conditions, the calculated scheduledSize might be smaller than the 
> actual value, which might eventually lead to failures in writing or 
> replication.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894474#comment-16894474
 ] 

Chao Sun commented on HDFS-14660:
-

Fixed and attached patch v4.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14660:

Attachment: HDFS-14660.004.patch

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch, HDFS-14660.004.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using either {{ObserverReadProxyProvider}}, 
> {{ConfiguredProxyProvider}}, or something else. Since an observer is just a 
> special type of SBN and we allow transitions between them, a client NOT using 
> {{ObserverReadProxyProvider}} will need to have 
> {{dfs.ha.namenodes.}} include all NameNodes in the cluster, and 
> therefore, it may send requests to an observer node.
> In this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-07-27 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894465#comment-16894465
 ] 

Lisheng Sun commented on HDFS-14313:


Hi [~linyiqun], [~jojochuang], could you find time to review this patch? Thank 
you a lot.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch
>
>
> There are two existing ways (DU and DF) of getting the used space, and both are 
> insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting the HDFS used space from the ReplicaInfos in FsDatasetImpl#volumeMap in 
> memory has very low overhead and is accurate. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894455#comment-16894455
 ] 

Hadoop QA commented on HDFS-12826:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897990/HDFS-12826.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux c0158fe2032c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fe450c |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27317/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12826.patch
>
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14669) TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails intermittently in trunk

2019-07-27 Thread qiang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894453#comment-16894453
 ] 

qiang Liu commented on HDFS-14669:
--

After doing some tests and analysis, I think that if we successfully create some 
files with content when there is only one datanode, the datanode blocks must 
have been created too, which means running the scan check multiple times is not 
necessary.

[~templedf], you introduced this multiple-check logic in HDFS-13819; if my 
analysis is wrong or missing something, just let me know.

If everything is OK, I will submit a patch later that removes the use of 
GenericTestUtils.waitFor (a rough sketch of the idea follows).
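As a sketch of the simplification described above; the scanMatchesExpectedBlocks() helper is a placeholder for the real scan-and-verify step, not the actual test code:
{code:java}
import static org.junit.Assert.assertTrue;

import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

class ScanCheckSketch {
  // Placeholder for the real "run the directory scanner and compare block counts" step.
  static boolean scanMatchesExpectedBlocks() {
    return true;
  }

  // Current pattern: retry the check until it passes or times out.
  static void retriedCheck() throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(ScanCheckSketch::scanMatchesExpectedBlocks, 100, 10000);
  }

  // Proposed simplification: if the blocks must already exist once the files are
  // written, a single direct check is enough.
  static void singleCheck() {
    assertTrue(scanMatchesExpectedBlocks());
  }
}
{code}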

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails 
> intermittently in trunk
> -
>
> Key: HDFS-14669
> URL: https://issues.apache.org/jira/browse/HDFS-14669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 3.2.0
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: scanner, test
> Attachments: HDFS-14669-trunk-001.patch, HDFS-14669-trunk.002.patch
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testDirectoryScannerInFederatedCluster
>  randomly fails because it writes files with the same name: the intent is to 
> write 2 files, but both files get the same name, which causes a race condition 
> between the datanode deleting the block and the scan action counting the block.
>  
> Ref :: 
> [https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1207/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testDirectoryScannerInFederatedCluster/]
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12125) Document the missing -removePolicy command of ec

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894434#comment-16894434
 ] 

Hadoop QA commented on HDFS-12125:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12125 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877031/HDFS-12125.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27318/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Document the missing -removePolicy command of ec
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md and regroup the ec commands to improve the user experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14608) DataNode$DataTransfer should be named

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894418#comment-16894418
 ] 

Ayush Saxena commented on HDFS-14608:
-

Thanx [~elgoiri].
{quote}We could also add the targets.
{quote}
Makes sense to me to have it.
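A rough sketch of the kind of name the thread could get, including the targets; this is illustrative only and not the actual patch (the real DataTransfer holds more state):
{code:java}
import java.util.Arrays;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

class DataTransferNameSketch {
  private final ExtendedBlock block;
  private final DatanodeInfo[] targets;

  DataTransferNameSketch(ExtendedBlock block, DatanodeInfo[] targets) {
    this.block = block;
    this.targets = targets;
  }

  @Override
  public String toString() {
    // Daemon uses runnable.toString() as the thread name, so include the block
    // being transferred and the target nodes instead of the default Object form.
    return "DataTransfer " + block + " to " + Arrays.asList(targets);
  }
}
{code}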

> DataNode$DataTransfer should be named
> -
>
> Key: HDFS-14608
> URL: https://issues.apache.org/jira/browse/HDFS-14608
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14608.000.patch
>
>
> Currently, the {{DataTransfer}} thread has no name and it just outputs the 
> default {{toString()}}.
> This shows in the logs in jstack as something like:
> {code}
> 2019-06-25 11:01:01,211 INFO 
> [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@609ed67a] 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at 
> CO4AEAPC1AF:10010: Transmitted 
> BP-1191059133-10.1.2.3-145702348:blk_1113379522_69745835 
> (numBytes=485214) to 10.1.2.3/10.1.2.3:10010
> {code}
> As this uses the {{Daemon}} class, the name is set based on:
> {code}
>   public Daemon(Runnable runnable) {
> super(runnable);
> this.runnable = runnable;
> this.setName(((Object)runnable).toString());
>   }
> {code}
> We should implement toString to at least include the name of the block being 
> transferred, or something similar to what DataXceiver does (e.g., HDFS-3375).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12125) Document the missing -removePolicy command of ec

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894408#comment-16894408
 ] 

Ayush Saxena commented on HDFS-12125:
-

Thanx [~vincent he] for the report. I guess there are a bunch of unrelated changes 
too; maybe just adding a line for removePolicy would be enough. We should 
refrain from tweaking other parts or changing the CLI part.

 

> Document the missing -removePolicy command of ec
> 
>
> Key: HDFS-12125
> URL: https://issues.apache.org/jira/browse/HDFS-12125
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HDFS-12125.001.patch
>
>
> Document the missing command -removePolicy in HDFSErasureCoding.md and 
> HDFSCommands.md and regroup the ec commands to improve the user experience.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894404#comment-16894404
 ] 

Ayush Saxena commented on HDFS-12826:
-

Thanx [~peruguusha] for the patch.

LGTM +1,

If there are no further comments, I shall push this in a couple of days!!!

> Document Saying the RPC port, But it's required IPC port in Balancer Document.
> --
>
> Key: HDFS-12826
> URL: https://issues.apache.org/jira/browse/HDFS-12826
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover, documentation
>Affects Versions: 3.0.0-beta1
>Reporter: Harshakiran Reddy
>Assignee: usharani
>Priority: Minor
> Attachments: HDFS-12826.patch
>
>
> In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
> command requires the IPC port, but the documentation says the RPC port.
> http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer
> {noformat} 
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:65110
> refreshNamenodes: Unknown protocol: 
> org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
> bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes
> Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
> -refreshNamenodes host-name:50077
> bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
> {noformat} 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14276) [SBN read] Reduce tailing overhead

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894400#comment-16894400
 ] 

Ayush Saxena commented on HDFS-14276:
-

This change in the test solves it in my local environment:

{code:java}
public void testNNDirectorySize() throws Exception{
 Configuration conf = new Configuration();
 conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1);
+conf.setInt(DFSConfigKeys.DFS_HA_LOGROLL_PERIOD_KEY, 1);
 MiniDFSCluster cluster = null;
@@ -700,8 +701,6 @@ public void testNNDirectorySize() throws Exception{
   FSNamesystem nn1 = cluster.getNamesystem(1);
-  checkNNDirSize(cluster.getNameDirs(0), nn0.getNameDirSize());
-  checkNNDirSize(cluster.getNameDirs(1), nn1.getNameDirSize());
   cluster.transitionToActive(0);
{code}

Just three lines.
[~jojochuang] would you like to check and update? If you are busy we can wait, 
or, if you prefer, I can add this part on your behalf, provided this works. :)

> [SBN read] Reduce tailing overhead
> --
>
> Key: HDFS-14276
> URL: https://issues.apache.org/jira/browse/HDFS-14276
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode
>Affects Versions: 3.3.0
> Environment: Hardware: 4-node cluster, each node has 4 core, Xeon 
> 2.5Ghz, 25GB memory.
> Software: CentOS 7.4, CDH 6.0 + Consistent Reads from Standby, Kerberos, SSL, 
> RPC encryption + Data Transfer Encryption.
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-14276.000.patch, Screen Shot 2019-02-12 at 10.51.41 
> PM.png, Screen Shot 2019-02-14 at 11.50.37 AM.png
>
>
> When Observer sets {{dfs.ha.tail-edits.period}} = {{0ms}}, it tails edit log 
> continuously in order to fetch the latest edits, but there is a lot of 
> overhead in doing so.
> Critically, edit log tailer should _not_ update NameDirSize metric every 
> time. It has nothing to do with fetching edits, and it involves lots of 
> directory space calculation.
> Profiler suggests a non-trivial chunk of time is spent for nothing.
> Other than this, the biggest overhead is in the communication to 
> serialize/deserialize messages to/from JNs. I am looking for ways to reduce 
> the cost because it's burning 30% of my CPU time even when the cluster is 
> idle.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14370) Edit log tailing fast-path should allow for backoff

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894392#comment-16894392
 ] 

Ayush Saxena edited comment on HDFS-14370 at 7/27/19 10:15 AM:
---

Thanx [~xkrogen] for the patch. Seems fair enough.

One doubt: is there any way to turn off this back-off mechanism, in case my 
requirement is to have no backoff? We usually configure a 0 interval for tailing 
edits on heavily loaded clusters, and it may easily reach a backoff stage if the 
load drops to nil for a fraction of the time.

Secondly, if I don't configure the back-off time, the default it takes is 1 
minute, so if my normal interval is 0, the next tail gets triggered after 1 
minute. I guess by default we should keep the back-off disabled, i.e. the value 
of max-backoff should be the same as sleepTimeMs if the max-backoff time isn't 
specified. (A rough sketch of that interpretation follows.)
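A minimal sketch of the behaviour described above, assuming an exponential back-off capped at a maximum; the class, field names, and defaults are assumptions, not the actual patch:
{code:java}
class EditTailBackoffSketch {
  private final long sleepTimeMs;   // configured tail interval, may be 0
  private final long maxBackoffMs;  // upper bound for the back-off
  private long currentSleepMs;

  EditTailBackoffSketch(long sleepTimeMs, long maxBackoffMs) {
    this.sleepTimeMs = sleepTimeMs;
    // Treating maxBackoffMs == sleepTimeMs as "back-off disabled", as suggested above.
    this.maxBackoffMs = Math.max(maxBackoffMs, sleepTimeMs);
    this.currentSleepMs = sleepTimeMs;
  }

  // Double the sleep after an empty response, reset once real edits arrive.
  long nextSleepMs(boolean lastResponseWasEmpty) {
    if (!lastResponseWasEmpty) {
      currentSleepMs = sleepTimeMs;
    } else {
      currentSleepMs = Math.min(maxBackoffMs, Math.max(1, currentSleepMs) * 2);
    }
    return currentSleepMs;
  }
}
{code}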


was (Author: ayushtkn):
Thanx [~xkrogen] for the patch, Seems fair enough,

A doubt, Is there any way to turn off this back-off mechanism? as if my 
requirement doesn't want me to have a backoff, usually we configure 0 interval 
for tailing edits for very loaded clusters, it may reach to a backoff stage may 
be easily, if for fractions the load is nil.

And secondly if I don't configure the back-off time, The default it shall take 
and 1 Min, so if my in general time is 0, the next shall get triggered at 1 
Min, I guess by default we shouldn't keep the back-off disabled and the value 
of max-backoff to be same as that of sleeptimeMs if the max-backoff time isn't 
specified.

> Edit log tailing fast-path should allow for backoff
> ---
>
> Key: HDFS-14370
> URL: https://issues.apache.org/jira/browse/HDFS-14370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, qjm
>Affects Versions: 3.3.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14370.000.patch
>
>
> As part of HDFS-13150, in-progress edit log tailing was changed to use an 
> RPC-based mechanism, thus allowing the edit log tailing frequency to be 
> turned way down, and allowing standby/observer NameNodes to be only a few 
> milliseconds stale as compared to the Active NameNode.
> When there is a high volume of transactions on the system, each RPC fetches 
> transactions and takes some time to process them, self-rate-limiting how 
> frequently an RPC is submitted. In a lightly loaded cluster, however, most of 
> these RPCs return an empty set of transactions, consuming a high 
> (de)serialization overhead for very little benefit. This was reported by 
> [~jojochuang] in HDFS-14276 and I have also seen it on a test cluster where 
> the SbNN was submitting 8000 RPCs per second that returned empty.
> I propose we add some sort of backoff to the tailing, so that if an empty 
> response is received, it will wait a longer period of time before submitting 
> a new RPC.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14370) Edit log tailing fast-path should allow for backoff

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894392#comment-16894392
 ] 

Ayush Saxena commented on HDFS-14370:
-

Thanx [~xkrogen] for the patch, Seems fair enough,

A doubt, Is there any way to turn off this back-off mechanism? as if my 
requirement doesn't want me to have a backoff, usually we configure 0 interval 
for tailing edits for very loaded clusters, it may reach to a backoff stage may 
be easily, if for fractions the load is nil.

And secondly if I don't configure the back-off time, The default it shall take 
and 1 Min, so if my in general time is 0, the next shall get triggered at 1 
Min, I guess by default we shouldn't keep the back-off disabled and the value 
of max-backoff to be same as that of sleeptimeMs if the max-backoff time isn't 
specified.

> Edit log tailing fast-path should allow for backoff
> ---
>
> Key: HDFS-14370
> URL: https://issues.apache.org/jira/browse/HDFS-14370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, qjm
>Affects Versions: 3.3.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14370.000.patch
>
>
> As part of HDFS-13150, in-progress edit log tailing was changed to use an 
> RPC-based mechanism, thus allowing the edit log tailing frequency to be 
> turned way down, and allowing standby/observer NameNodes to be only a few 
> milliseconds stale as compared to the Active NameNode.
> When there is a high volume of transactions on the system, each RPC fetches 
> transactions and takes some time to process them, self-rate-limiting how 
> frequently an RPC is submitted. In a lightly loaded cluster, however, most of 
> these RPCs return an empty set of transactions, consuming a high 
> (de)serialization overhead for very little benefit. This was reported by 
> [~jojochuang] in HDFS-14276 and I have also seen it on a test cluster where 
> the SbNN was submitting 8000 RPCs per second that returned empty.
> I propose we add some sort of backoff to the tailing, so that if an empty 
> response is received, it will wait a longer period of time before submitting 
> a new RPC.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894383#comment-16894383
 ] 

Ayush Saxena commented on HDFS-14461:
-

{quote}I think it is premature to start using PR. I have outlined a number of 
short coming using PR in the dev mailing list. We may want to wait for some of 
the out standing issues to close before recommending PR.
{quote}
[~eyang] I completely agree with this. There is not much awareness about the 
process either: I couldn't even find how to trigger Jenkins for the PR (it 
didn't start automatically), and, quite unrealistically, Yetus responded 
automatically only after a day. The JIRA summary doesn't track the comments on 
the PR either, so it is a little tough to follow everything that happens, and 
the mails come in differently for me too.
 [~crh] I guess we should refrain from announcing it on the main JIRA until a 
global consensus is reached; let the dev decide whichever way they are 
comfortable with. [~elgoiri] I guess I just recommended using one process, 
either patch or PR, on one JIRA, and didn't advocate any preference.

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14461.001.patch, HDFS-14461.002.patch
>
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to some race condition before using the keytab that's created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 

[jira] [Commented] (HDFS-14546) Document block placement policies

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894381#comment-16894381
 ] 

Ayush Saxena commented on HDFS-14546:
-

Thanx [~Amithsha] for the details. Can you update as per the discussion?

> Document block placement policies
> -
>
> Key: HDFS-14546
> URL: https://issues.apache.org/jira/browse/HDFS-14546
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Amithsha
>Priority: Major
>  Labels: documentation
> Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, 
> HDFS-14546-03.patch, HDFS-14546-04.patch, HdfsDesign.patch
>
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them explaining their 
> particularities and probably how to setup each one of them.
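
For reference, selecting one of these policies is typically just a configuration change; below is a minimal sketch, assuming the usual {{dfs.block.replicator.classname}} key (the same value can of course be set in hdfs-site.xml instead of in code):

{code:java}
// Sketch only, not part of any patch here: pick a non-default block placement
// policy by pointing "dfs.block.replicator.classname" at the policy class.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class BlockPlacementPolicyExample {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.block.replicator.classname",
        "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant");
    // Print the effective setting to confirm which policy the NameNode would load.
    System.out.println(conf.get("dfs.block.replicator.classname"));
  }
}
{code}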






[jira] [Commented] (HDFS-14616) Add the warn log when the volume available space isn't enough

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894379#comment-16894379
 ] 

Ayush Saxena commented on HDFS-14616:
-

Need to fix these warnings; apart from that, it seems OK:
https://builds.apache.org/job/PreCommit-HDFS-Build/27285/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt


> Add the warn log when the volume available space isn't enough
> -
>
> Key: HDFS-14616
> URL: https://issues.apache.org/jira/browse/HDFS-14616
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: liying
>Assignee: liying
>Priority: Minor
> Attachments: HDFS-14616.001.patch
>
>
> In the Hadoop 2 versions, there is no warning log when a disk is not 
> available at write time. Therefore, the datanode log cannot be used to check 
> whether the disk was unavailable at a certain time or to diagnose other 
> problems.
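
For illustration only, the kind of warning being asked for could look roughly like the sketch below; the names are placeholders and this is not the actual patch:

{code:java}
// Rough sketch only: warn when a volume does not have enough available space
// for an incoming block. "volume", "available" and "blockSize" stand in for
// whatever the real DataNode code has in hand at the point of the check.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AvailableSpaceWarnSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(AvailableSpaceWarnSketch.class);

  static void warnIfNotEnoughSpace(String volume, long available, long blockSize) {
    if (available < blockSize) {
      LOG.warn("Volume {} has only {} bytes available, less than the requested "
          + "block size {} bytes.", volume, available, blockSize);
    }
  }
}
{code}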






[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894374#comment-16894374
 ] 

Ayush Saxena commented on HDFS-14660:
-

Guess some unused imports have got added; Checkstyle has complaints. Please give it a check.

> [SBN Read] ObserverNameNode should throw StandbyException for requests not 
> from ObserverProxyProvider
> -
>
> Key: HDFS-14660
> URL: https://issues.apache.org/jira/browse/HDFS-14660
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14660.000.patch, HDFS-14660.001.patch, 
> HDFS-14660.002.patch, HDFS-14660.003.patch
>
>
> In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients 
> could be using {{ObserverReadProxyProvider}}, {{ConfiguredProxyProvider}}, or 
> something else. Since an observer is just a special type of SBN and we allow 
> transitions between them, a client NOT using {{ObserverReadProxyProvider}} 
> will need to have {{dfs.ha.namenodes.}} include all NameNodes in the 
> cluster, and therefore it may send requests to an observer node.
> For this case, we should check whether the {{stateId}} in the incoming RPC 
> header is set, and throw a {{StandbyException}} when it is not.
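
As a rough illustration of that idea (not the actual patch), the observer-side check could look like the sketch below; the sentinel value and method name are assumptions made purely for the example:

{code:java}
// Sketch only: reject a read on the observer when the incoming RPC carried no
// client stateId, which suggests the caller is not using ObserverReadProxyProvider.
// The NOT_SET sentinel and the method name are assumptions for illustration.
import org.apache.hadoop.ipc.StandbyException;

public class ObserverReadCheckSketch {
  private static final long NOT_SET = Long.MIN_VALUE;

  static void checkStateIdPresent(long clientStateId) throws StandbyException {
    if (clientStateId == NOT_SET) {
      // StandbyException makes failover-aware clients retry another NameNode.
      throw new StandbyException("Observer node received a request without a "
          + "client state id; redirecting the client to an active NameNode.");
    }
  }
}
{code}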






[jira] [Commented] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894359#comment-16894359
 ] 

Hadoop QA commented on HDFS-14660:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14660 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976026/HDFS-14660.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux be7a2786442d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2fe450c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27315/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27315/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27315/testReport/ |
| Max. process+thread count | 4039 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-14449) Expose total number of dt in jmx for Namenode

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894357#comment-16894357
 ] 

Hadoop QA commented on HDFS-14449:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 30s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14449 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976022/HDFS-14449.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 68dbe3f7a4c3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (HDFS-14669) TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails intermittently in trunk

2019-07-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894338#comment-16894338
 ] 

Ayush Saxena commented on HDFS-14669:
-

HDFS-13819 introduced that part; please check whether it has any pointers, or 
whether the people involved there could help in any way.

> TestDirectoryScanner#testDirectoryScannerInFederatedCluster fails 
> intermittently in trunk
> -
>
> Key: HDFS-14669
> URL: https://issues.apache.org/jira/browse/HDFS-14669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: datanode
>Affects Versions: 3.2.0
> Environment: env free
>Reporter: qiang Liu
>Assignee: qiang Liu
>Priority: Minor
>  Labels: scanner, test
> Attachments: HDFS-14669-trunk-001.patch, HDFS-14669-trunk.002.patch
>
>
> org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner#testDirectoryScannerInFederatedCluster
>  randomly fails because it writes files with the same name: the intent is to 
> write 2 files, but both end up with the same name, which causes a race 
> condition between the datanode deleting a block and the scan counting blocks.
>  
> Ref :: 
> [https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1207/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testDirectoryScannerInFederatedCluster/]
>  
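
A minimal sketch of the kind of fix described above, i.e. making sure every file the test writes gets a unique path; the path prefix and helper name are placeholders, not the actual patch:

{code:java}
// Sketch only: hand out a unique path for every file the test writes, so two
// writes can never collide on the same name and race with block deletion.
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.fs.Path;

public class UniqueTestPathSketch {
  private static final AtomicInteger FILE_ID = new AtomicInteger(0);

  static Path nextTestFile() {
    // The "/test/dirscanner" prefix is a placeholder for the test's base dir.
    return new Path("/test/dirscanner/file-" + FILE_ID.getAndIncrement());
  }
}
{code}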






[jira] [Commented] (HDFS-14672) Backport HDFS-12703 to branch-2

2019-07-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894290#comment-16894290
 ] 

Hadoop QA commented on HDFS-14672:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 26 unchanged - 0 fixed = 27 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HDFS-14672 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976021/HDFS-12703.branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebf26f58127c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |