[jira] [Commented] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-03-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394073#comment-16394073
 ] 

Bharat Viswanadham commented on HDFS-13239:
---

[~xiaochen]

Addressed the review comments in the v03 patch.

> Fix non-empty dir warning message when setting default EC policy
> 
>
> Key: HDFS-13239
> URL: https://issues.apache.org/jira/browse/HDFS-13239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, 
> HDFS-13239.02.patch, HDFS-13239.03.patch
>
>
> When EC policy is set on a non-empty directory, the following warning message 
> is given:
> {code}
> $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> {code}
> When we do not specify the -policy parameter when setting EC policy on a 
> directory, it takes the default EC policy. Setting default EC policy in this 
> way on a non-empty directory gives the following warning message:
> {code}
> $hdfs ec -setPolicy -path /ec2
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to null
> {code}
> Notice that the warning message in the 2nd case has the ecPolicy name shown 
> as null. We should instead give the default EC policy name in this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-03-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13239:
--
Attachment: HDFS-13239.03.patch

> Fix non-empty dir warning message when setting default EC policy
> 
>
> Key: HDFS-13239
> URL: https://issues.apache.org/jira/browse/HDFS-13239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, 
> HDFS-13239.02.patch, HDFS-13239.03.patch
>
>
> When EC policy is set on a non-empty directory, the following warning message 
> is given:
> {code}
> $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> {code}
> When we do not specify the -policy parameter when setting EC policy on a 
> directory, it takes the default EC policy. Setting default EC policy in this 
> way on a non-empty directory gives the following warning message:
> {code}
> $hdfs ec -setPolicy -path /ec2
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to null
> {code}
> Notice that the warning message in the 2nd case has the ecPolicy name shown 
> as null. We should instead give the default EC policy name in this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2018-03-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394067#comment-16394067
 ] 

Rakesh R commented on HDFS-11600:
-

Thanks [~Sammi] for the useful work. Overall the idea looks good to me. I have 
a few comments, please take a look at the points below.
# Please keep all the member variables together at the top of the class for 
better readability.
{code}
  private int[][] dnIndexSuite;
  protected List lengths;
  protected static final Random RANDOM = new Random();
  MiniDFSCluster cluster;
  DistributedFileSystem dfs;
  final Path dir = new Path("/"
  + TestDFSStripedOutputStreamWithFailureBase.class.getSimpleName());
{code}
# I assume you have named the class with "P" to represent the parameterized 
class. Can we give it a meaningful name instead of appending the letter "P" - 
{{TestDFSStripedOutputStreamWithFailureP}}, 
{{TestDFSStripedOutputStreamWithFailurePWithRandomECPolicy}}.
# Do we need both timeouts?
{code} 
+  @Rule
+  public Timeout globalTimeout = new Timeout(60);
+
+  @Test(timeout = 24)
+  public void run() {
{code}
# The 
{{TestDFSStripedOutputStreamWithFailureBase#testCloseWithExceptionsInStreamer}} 
function is not used anywhere. What's the purpose of this?
# Please give this test a proper name.
{code}
  @Test(timeout = 24)
  public void run() {
{code}

> Refactor TestDFSStripedOutputStreamWithFailure test classes
> ---
>
> Key: HDFS-11600
> URL: https://issues.apache.org/jira/browse/HDFS-11600
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Priority: Minor
> Attachments: HDFS-11600-1.patch, HDFS-11600.002.patch, 
> HDFS-11600.003.patch, HDFS-11600.004.patch
>
>
> TestDFSStripedOutputStreamWithFailure has a great number of subclasses. The 
> tests are parameterized based on the name of these subclasses.
> Seems like we could parameterize these tests with JUnit and then not need all 
> these separate test classes.
> Another note, the tests will randomly return instead of running the test. 
> Using {{Assume}} instead would make it more clear in the test output that 
> these tests were skipped.
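
For illustration, here is a minimal sketch of that idea using plain JUnit 4 {{Parameterized}} together with {{Assume}}; the class, method, and parameter values below are invented for the example and are not the actual Hadoop test code.
{code:java}
import static org.junit.Assume.assumeTrue;

import java.util.ArrayList;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Hypothetical example: each target file length becomes a JUnit parameter
// instead of a generated subclass, and Assume records skipped cases explicitly.
@RunWith(Parameterized.class)
public class StripedOutputFailureParamSketch {

  private final int length;

  public StripedOutputFailureParamSketch(int length) {
    this.length = length;
  }

  @Parameters(name = "length={0}")
  public static Collection<Object[]> lengths() {
    Collection<Object[]> params = new ArrayList<>();
    for (int len = 1; len <= 4; len++) {
      params.add(new Object[] {len * 1024});
    }
    return params;
  }

  @Test
  public void testWriteWithDnFailure() {
    // Instead of silently returning, mark the skip so it shows in the report.
    assumeTrue("requires a length divisible by 1024", length % 1024 == 0);
    // ... write a striped file of 'length' bytes while failing a DataNode ...
  }
}
{code}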



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13230) RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns

2018-03-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394065#comment-16394065
 ] 

ASF GitHub Bot commented on HDFS-13230:
---

Github user wuweiwei95 commented on the pull request:


https://github.com/apache/hadoop/commit/0c2b969e0161a068bf9ae013c4b95508dfb90a8a#commitcomment-28027526
  
There is a typo in the Jira ID; it should be HDFS-13230.


> RBF: ConnectionManager's cleanup task will compare each pool's own active 
> conns with its total conns
> 
>
> Key: HDFS-13230
> URL: https://issues.apache.org/jira/browse/HDFS-13230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Chao Sun
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13230.000.patch, HDFS-13230.001.patch
>
>
> In the cleanup task:
> {code:java}
> long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
> int total = pool.getNumConnections();
> int active = getNumActiveConnections();
> if (timeSinceLastActive > connectionCleanupPeriodMs ||
> {code}
> the 3rd line should be pool.getNumActiveConnections()
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394053#comment-16394053
 ] 

genericqa commented on HDFS-13198:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13198 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12912363/HDFS-13198.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c5b8734bc7c2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4743d4a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23389/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23389/testReport/ |
| Ma

[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client

2018-03-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003
 ] 

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 5:58 AM:
---

[~elgoiri] Do you mean clientMachine's IP? Take a look at 
[^clientMachine-call-path.jpeg]

The clientMachine's IP is obtained from the Server property InetAddress addr.

I think we can get the client's IP from clientName, but I'm not sure how to 
decode the clientName.


was (Author: wuweiwei):
Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg]

The clientMachine's IP is get from Server property InetAddress addr.

I think we can get client's IP from clientName, but I'm not sure how to decode 
clientname.

> RBF: namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the 
> block location for the router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.
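
As a purely illustrative sketch of that proposal (the names below are assumptions, not the actual ClientProtocol or Router API), the change amounts to threading the real client's host through the block-allocation call so the NameNode can place replicas near that machine:
{code:java}
/**
 * Hypothetical illustration only; these types do not exist in Hadoop.
 */
final class AddBlockProposalSketch {

  /** Placeholder for the located-block result used by this sketch. */
  static final class BlockLocations { }

  interface BlockAllocator {
    /** Current style: locality is computed for the caller (here, the Router). */
    BlockLocations addBlock(String src, String clientName);

    /**
     * Proposed style: the real client's machine is passed through explicitly,
     * either as a new parameter or via a new overload.
     */
    BlockLocations addBlock(String src, String clientName, String clientMachine);
  }
}
{code}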



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394048#comment-16394048
 ] 

genericqa commented on HDFS-13195:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} branch-2.7 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}175m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  2m 
13s{color} | {color:red} The patch generated 167 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:44 |
| Failed junit tests | hadoop.hdfs.server.datanode.TestBatchIbr |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestDFSShellGenericOptions |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.TestBlockReaderFactory |
|   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
|   | hadoop.hdfs.TestBlockMissingException |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestFileStatus |
| Timed out junit tests | org.apache.hadoop.hdfs.TestModTime |
|   | org.apache.hadoop.hdfs.TestEncryptionZonesWithHA |
|   | org.apache.hadoop.hdfs.TestSmallBlock |
|   | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.TestFileCreationEmpty |
|   | org.apache.hadoop.fs.TestEnhancedByteBufferAccess |
|   | org.apache.hadoop.hdfs.TestSetrepIncreasing |
|   | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | org.apache.hadoop.hdfs.TestQuota |
|   | org.apache.hadoop.hdfs.TestFileCreation |
|   | org.apache.hadoop.hdfs.TestDataTransferKeepalive |
|   | org.apache.hadoop.hdfs.TestFileAppend |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestFsShellPermission |
|   | org.apache.hadoop.hdfs.TestDFSFinalize |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade |
|   | org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs |
|   | org.apache.hadoop.hdfs.

[jira] [Commented] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394043#comment-16394043
 ] 

genericqa commented on HDFS-13257:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 216 unchanged - 14 fixed = 216 total (was 230) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}149m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913864/h13257_20170309b.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b00d02cea6bc 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8133cd5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23386/artifact/out/patc

[jira] [Commented] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-03-09 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394033#comment-16394033
 ] 

Xiao Chen commented on HDFS-13239:
--

Thanks for revving [~bharatviswa].

A minor comment:
In patch 2, we check {{ecPolicyName == null}} twice, once before the list status 
and once after, both for printing the default name.
Can we just do one check after {{setErasureCodingPolicy}}, and if it's null 
(since it's not used afterwards) set it to the string {{default erasure coding 
policy}}? This way, we can do a single println for each message, without the 
need for if-else blocks. We may need to change the 'Set xxx' message a little 
to make the sentence read well, which I think should be fine.
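
A minimal sketch of that suggested shape, assuming the standard {{DistributedFileSystem}} calls; the class name and message wording below are illustrative, not the actual ECAdmin code.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Hypothetical sketch of the single null-check approach described above.
final class SetPolicyMessageSketch {
  static void setPolicy(DistributedFileSystem dfs, Path path, String ecPolicyName)
      throws IOException {
    dfs.setErasureCodingPolicy(path, ecPolicyName);
    if (ecPolicyName == null) {
      // Only used for messages from here on, so repurpose it for printing.
      ecPolicyName = "the default erasure coding policy";
    }
    System.out.println("Set " + ecPolicyName + " on " + path);
    RemoteIterator<FileStatus> it = dfs.listStatusIterator(path);
    if (it.hasNext()) {
      System.out.println("Warning: setting erasure coding policy on a"
          + " non-empty directory will not automatically convert existing"
          + " files to " + ecPolicyName);
    }
  }
}
{code}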



> Fix non-empty dir warning message when setting default EC policy
> 
>
> Key: HDFS-13239
> URL: https://issues.apache.org/jira/browse/HDFS-13239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-13239.00.patch, HDFS-13239.01.patch, 
> HDFS-13239.02.patch
>
>
> When EC policy is set on a non-empty directory, the following warning message 
> is given:
> {code}
> $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> {code}
> When we do not specify the -policy parameter when setting EC policy on a 
> directory, it takes the default EC policy. Setting default EC policy in this 
> way on a non-empty directory gives the following warning message:
> {code}
> $hdfs ec -setPolicy -path /ec2
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to null
> {code}
> Notice that the warning message in the 2nd case has the ecPolicy name shown 
> as null. We should instead give the default EC policy name in this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394031#comment-16394031
 ] 

genericqa commented on HDFS-336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913869/HDFS-336.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux 5097a25bbad7 4.4.

[jira] [Commented] (HDFS-11481) hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories

2018-03-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394030#comment-16394030
 ] 

Yongjun Zhang commented on HDFS-11481:
--

Hi [~mavinmar...@gmail.com],

I did some study. Would you please share:
 # In which release did you observe the issue reported here?
 # Did you verify that the patch you created solves the problem reported here 
(snapshotDiff on /.reserved/raw)?

The reason I am asking is that the change in your patch simply allows 
setSnapshot to take a /.reserved/raw kind of path, but it does not seem to do 
anything for snapshotDiff. I suspect the issue is fixed by HDFS-10997, but I 
have yet to confirm. It seems to me that resolvePath might be the way to go.

Thanks.

--Yongjun

> hdfs snapshotDiff /.reserved/raw/... fails on snapshottable directories
> ---
>
> Key: HDFS-11481
> URL: https://issues.apache.org/jira/browse/HDFS-11481
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Assignee: Mavin Martin
>Priority: Minor
> Attachments: HDFS-11481-branch-2.6.0.001.patch, HDFS-11481.001.patch, 
> HDFS-11481.002.patch
>
>
> Successful command:
> {code}
> #> hdfs snapshotDiff /tmp/dir s1 s2
> Difference between snapshot s1 and snapshot s2 under directory /tmp/dir:
> M   .
> +   ./file1.txt
> {code}
> Unsuccessful command:
> {code}
> #> hdfs snapshotDiff /.reserved/raw/tmp/dir s1 s2
> snapshotDiff: Directory does not exist: /.reserved/raw/tmp/dir
> {code}
> Prefixing with raw path should run successfully and return same output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13240) RBF: Update some inaccurate document descriptions

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394016#comment-16394016
 ] 

Hudson commented on HDFS-13240:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13809 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13809/])
HDFS-13240. RBF: Update some inaccurate document descriptions. (yqlin: rev 
4743d4a2c70a213a41804a24c776e6db00e1b90d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md


> RBF: Update some inaccurate document descriptions
> -
>
> Key: HDFS-13240
> URL: https://issues.apache.org/jira/browse/HDFS-13240
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13240.001.patch, HDFS-13240.002.patch
>
>
> In the RBF doc, there are some places that are not described accurately 
> (https://issues.apache.org/jira/browse/HDFS-13214?focusedCommentId=16389026&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16389026).
>  This may mislead users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client

2018-03-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003
 ] 

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 3:58 AM:
---

Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg]

The clientMachine's IP is get from Server property InetAddress addr.

I think we can get client's IP from clientName, but I'm not sure how to decode 
clientname.


was (Author: wuweiwei):
Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg]

The clientMachine's IP is get from Server property InetAddress addr.

I think we can get client's IP from clintName, but I'm not sure how to decode 
clientname.

> RBF: namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the 
> block location for the router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13248) RBF: namenode need to choose block location for the client

2018-03-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003
 ] 

Weiwei Wu edited comment on HDFS-13248 at 3/10/18 3:58 AM:
---

Do you mean clientMachine's IP? Take a look at [^clientMachine-call-path.jpeg]

The clientMachine's IP is get from Server property InetAddress addr.

I think we can get client's IP from clintName, but I'm not sure how to decode 
clientname.


was (Author: wuweiwei):
Do you mean clientMachine's IP?

The clientMachine's IP is get from Server property InetAddress addr.

!clientMachine-call-path.jpeg!

> RBF: namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the 
> block location for the router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13248) RBF: namenode need to choose block location for the client

2018-03-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394003#comment-16394003
 ] 

Weiwei Wu commented on HDFS-13248:
--

Do you mean clientMachine's IP?

The clientMachine's IP is obtained from the Server property InetAddress addr.

!clientMachine-call-path.jpeg!

> RBF: namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the 
> block location for the router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13248) RBF: namenode need to choose block location for the client

2018-03-09 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13248:
-
Attachment: clientMachine-call-path.jpeg

> RBF: namenode need to choose block location for the client
> --
>
> Key: HDFS-13248
> URL: https://issues.apache.org/jira/browse/HDFS-13248
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: clientMachine-call-path.jpeg, debug-info-1.jpeg, 
> debug-info-2.jpeg
>
>
> When executing a put operation via the router, the NameNode will choose the 
> block location for the router, not for the real client. This will affect the 
> file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock 
> method, or add a parameter to the current addBlock method, to pass the real 
> client information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13240) RBF: Update some inaccurate document descriptions

2018-03-09 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13240:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.
Thanks [~elgoiri] and [~ywskycn] for the review!

> RBF: Update some inaccurate document descriptions
> -
>
> Key: HDFS-13240
> URL: https://issues.apache.org/jira/browse/HDFS-13240
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13240.001.patch, HDFS-13240.002.patch
>
>
> In the RBF doc, there are some places that are not described accurately 
> (https://issues.apache.org/jira/browse/HDFS-13214?focusedCommentId=16389026&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16389026).
>  This may mislead users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11950) Disable libhdfs zerocopy test on Mac

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393997#comment-16393997
 ] 

genericqa commented on HDFS-11950:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-11950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913868/HDFS-11950.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 3330c970de31 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8133cd5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23387/testReport/ |
| Max. process+thread count | 473 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23387/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Disable libhdfs zerocopy test on Mac
> 
>
> Key: HDFS-11950
> URL: https://issues.apache.org/jira/browse/HDFS-11950
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HDFS-11950.001.patch
>
>
> Since libhdfs zerocopy test is expected to fail on Mac, just disable it.
> {noformat}
>  [exec] Test project 
> /Users/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target
>  [exec] Start 1: test_test_libhdfs_threaded_hdfs_static
>  [exec] 1/3 Test #1

[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393994#comment-16393994
 ] 

Hudson commented on HDFS-13212:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13808 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13808/])
HDFS-13212. RBF: Fix router location cache issue. Contributed by Weiwei 
(inigoiri: rev afe1a3ccd56a12fec900360a8a2855c080728e65)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java


> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Assignee: Weiwei Wu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, 
> HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry which already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> Need to invalidate the location cache when adding the mount table entries.
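
A simplified sketch of that idea, assuming a plain map-based location cache; this is not the actual MountTableResolver implementation.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: when a mount entry is added or updated, drop any cached
// resolutions under that mount point so stale locations cannot be returned.
final class LocationCacheSketch {
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();

  void addMountEntry(String mountPoint, String destination) {
    // Invalidate cached paths covered by this mount point before installing it.
    locationCache.keySet().removeIf(
        path -> path.equals(mountPoint) || path.startsWith(mountPoint + "/"));
    // ... install the new mount table entry (e.g. in a path trie) ...
  }

  String resolve(String path) {
    return locationCache.computeIfAbsent(path, p -> {
      // ... walk the mount table to find the target namespace and path ...
      return "ns0->" + p;  // placeholder result for this sketch
    });
  }
}
{code}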



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393995#comment-16393995
 ] 

Hudson commented on HDFS-13232:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13808 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13808/])
HDFS-13232. RBF: ConnectionPool should return first usable connection. 
(inigoiri: rev 8133cd5305d7913453abb2d48da12f672c0ce334)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java


> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In the current ConnectionPool.getConnection(), it returns the first active 
> connection:
> {code:java}
> for (int i = 0; i < size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
> return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13254) RBF: Cannot mv/cp file cross namespace

2018-03-09 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393992#comment-16393992
 ] 

Weiwei Wu commented on HDFS-13254:
--

[~ywskycn] Thank you for your detailed explanation. The two solutions you 
provided can indeed avoid this problem.

[~elgoiri], I want to support the following scenario:
Use RBF as a file system for users and provide the same API interface as the 
original HDFS.
For example, give the Spark user a directory /spark with mount points
/spark ==> 1->/spark
/spark/pathA ==> 2->/spark
/spark/pathB ==> 3->/spark

The user does not need to know which mount ns corresponds to each directory. They 
only need to change the access path in the original program to hdfs://ns-fed/ 
and the program can run.

There are several advantages to this:
1. When the underlying ns changes, the user program does not need to be modified.
2. The user program does not need to be modified when it is migrated to other 
RBF clusters.

> RBF: Cannot mv/cp file cross namespace
> --
>
> Key: HDFS-13254
> URL: https://issues.apache.org/jira/browse/HDFS-13254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
>
> When I try to mv a file from a namespace to another, the client return an 
> error.
>  
> Do we have any plan to support cp/mv file cross namespace?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393989#comment-16393989
 ] 

genericqa commented on HDFS-13226:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 1 unchanged - 0 fixed = 10 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}140m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913847/HDFS-13226.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fcda2f9d2aa9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7b0dc31 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23382/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23382/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Resul

[jira] [Updated] (HDFS-13255) RBF: Fail when try to remove mount point paths

2018-03-09 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13255:
-
Description: 
When deleting a ns-fed path that includes mount point paths, an error is issued.

Each mount point path needs to be deleted independently.

Operation step:
{code:java}
[hadp@root]$ hdfs dfsrouteradmin -ls
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
SsQuota: -/-]
/rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
SsQuota: -/-]
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
Found 2 items
-rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
-rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
Found 2 items
-rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
-rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
Found 2 items
-rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
-rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
[hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
-skipTrash option
[hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
rm: `hdfs://ns-fed/rm-test-all': Input/output error
{code}

  was:
when delete a ns-fed path which include mount point paths, will issue a error.

Need to delete echo mount point path independently.

Operation step:
{code:java}
[hadp@root]$ hdfs dfsrouteradmin -ls
Mount Table Entries:
Source Destinations Owner Group Mode Quota/Usage
/rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
SsQuota: -/-]
/rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
SsQuota: -/-]
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
Found 2 items
-rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
-rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
Found 2 items
-rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
-rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
[hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
Found 2 items
-rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
-rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
[hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
-skipTrash option
[hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
rm: `hdfs://ns-fed/rm-test-all': Input/output error
{code}


> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
>
> When deleting a ns-fed path that includes mount point paths, an error is issued.
> Each mount point path needs to be deleted independently.
> Operation step:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 

[jira] [Assigned] (HDFS-13245) RBF: State store DBMS implementation

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13245:
--

Assignee: maobaolong

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13245:
---
Description: Add a DBMS implementation for the State Store.

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: maobaolong
>Priority: Major
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-13204:
--

Assignee: liuhongtong

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Assignee: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may lead users to believe the name service is 
> under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't perform write-related 
> operations. So I think the safe mode icon in Subclusters should be modified, 
> which would be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13198) RBF: RouterHeartbeatService throws out CachedStateStore related exceptions when starting router

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13198:
---
Status: Patch Available  (was: Open)

> RBF: RouterHeartbeatService throws out CachedStateStore related exceptions 
> when starting router
> ---
>
> Key: HDFS-13198
> URL: https://issues.apache.org/jira/browse/HDFS-13198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HDFS-13198.000.patch
>
>
> Exception looks like:
> {code:java}
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore: Cached State 
> Store not initialized, MembershipState records not valid
> 2018-02-23 19:04:56,341 ERROR router.RouterHeartbeatService: Cannot get 
> version for class 
> org.apache.hadoop.hdfs.server.federation.store.MountTableStore: Cached State 
> Store not initialized, MountTable records not valid
> Exception in thread "Router Heartbeat Async" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializableImpl.serialize(StateStoreSerializableImpl.java:60)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl.putAll(StateStoreZooKeeperImpl.java:191)
> at 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreBaseImpl.put(StateStoreBaseImpl.java:75)
> at 
> org.apache.hadoop.hdfs.server.federation.store.impl.RouterStoreImpl.routerHeartbeat(RouterStoreImpl.java:88)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:95)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.access$000(RouterHeartbeatService.java:43)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService$1.run(RouterHeartbeatService.java:68)
> at java.lang.Thread.run(Thread.java:748){code}
> This is because, while the Router is starting, the CachedStateStore hasn't been 
> initialized and cannot serve requests. Although the router will still 
> start, it would be better to fix the exceptions.
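
A minimal sketch of the kind of guard that would avoid the NullPointerException above; the readiness check and its wiring are assumptions for illustration, not the actual HDFS-13198 patch:
{code:java}
final class HeartbeatGuardSketch {
  /** Stand-in for the cached state store's readiness signal (assumed helper). */
  interface StoreReadiness {
    boolean isDriverReady();
  }

  // Skip the periodic state store update until the cached state store has
  // loaded its records, instead of failing while the Router is still starting.
  static void updateStateStore(StoreReadiness store, Runnable heartbeat) {
    if (store == null || !store.isDriverReady()) {
      return; // try again on the next heartbeat interval
    }
    heartbeat.run();
  }
}
{code}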



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393980#comment-16393980
 ] 

Íñigo Goiri commented on HDFS-13232:


I committed  [^HDFS-13232.003.patch] to trunk, branch-3.1, branch-3.0, 
branch-2, and branch-2.9.
Yetus had issues with the commit as it was concurrent.
Thanks [~ekanth] for the fix and [~ywskycn] and [~csun] for the review.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".
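
For reference, a minimal sketch of the corrected selection logic (an illustrative generic helper, not the exact committed patch; the real code lives in ConnectionPool.getConnection()):
{code:java}
import java.util.List;
import java.util.function.Predicate;

final class FirstUsableSketch {
  // Scan the pool starting at threadIndex and return the first entry that IS usable;
  // note the check is isUsable(conn), not !isUsable(conn) as in the buggy snippet above.
  static <T> T firstUsable(List<T> pool, int threadIndex, Predicate<T> isUsable) {
    int size = pool.size();
    for (int i = 0; i < size; i++) {
      int index = (threadIndex + i) % size;
      T conn = pool.get(index);
      if (conn != null && isUsable.test(conn)) {
        return conn;
      }
    }
    return null; // caller can fall back to creating a new connection
  }
}
{code}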



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13232:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393979#comment-16393979
 ] 

genericqa commented on HDFS-13232:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-13232 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913861/HDFS-13232.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23385/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Chao Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393969#comment-16393969
 ] 

Chao Sun commented on HDFS-13232:
-

[~elgoiri] Yes the missing case is covered in this patch. Thanks [~ekanth]!

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2018-03-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-336:

Status: Patch Available  (was: In Progress)

> dfsadmin -report should report number of blocks from datanode
> -
>
> Key: HDFS-336
> URL: https://issues.apache.org/jira/browse/HDFS-336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lohit Vijayarenu
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-336.00.patch
>
>
> _hadoop dfsadmin -report_ seems to miss the number of blocks from a datanode. 
> The number of blocks hosted by a datanode is useful information that should be 
> included in the report. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2018-03-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-336:

Attachment: HDFS-336.00.patch

> dfsadmin -report should report number of blocks from datanode
> -
>
> Key: HDFS-336
> URL: https://issues.apache.org/jira/browse/HDFS-336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lohit Vijayarenu
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-336.00.patch
>
>
> _hadoop dfsadmin -report_ seems to miss the number of blocks from a datanode. 
> The number of blocks hosted by a datanode is useful information that should be 
> included in the report. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2018-03-09 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-336 started by Bharat Viswanadham.
---
> dfsadmin -report should report number of blocks from datanode
> -
>
> Key: HDFS-336
> URL: https://issues.apache.org/jira/browse/HDFS-336
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lohit Vijayarenu
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> _hadoop dfsadmin -report_ seems to miss the number of blocks from a datanode. 
> The number of blocks hosted by a datanode is useful information that should be 
> included in the report. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393953#comment-16393953
 ] 

genericqa commented on HDFS-13232:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913841/HDFS-13232.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux afb937bcb5ce 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9a082fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23379/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23379/testReport/ |
| Max. process+thread count | 3136 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23379/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Updated] (HDFS-11950) Disable libhdfs zerocopy test on Mac

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11950:
-
Status: Patch Available  (was: Open)

Attached a patch to skip the test. Verified on my local MacBook Pro.

> Disable libhdfs zerocopy test on Mac
> 
>
> Key: HDFS-11950
> URL: https://issues.apache.org/jira/browse/HDFS-11950
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HDFS-11950.001.patch
>
>
> Since libhdfs zerocopy test is expected to fail on Mac, just disable it.
> {noformat}
>  [exec] Test project 
> /Users/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target
>  [exec] Start 1: test_test_libhdfs_threaded_hdfs_static
>  [exec] 1/3 Test #1: test_test_libhdfs_threaded_hdfs_static ...   Passed  
>   9.73 sec
>  [exec] Start 2: test_test_libhdfs_zerocopy_hdfs_static
>  [exec] 2/3 Test #2: test_test_libhdfs_zerocopy_hdfs_static ...***Failed  
>   6.56 sec
>  [exec] Start 3: test_test_native_mini_dfs
>  [exec] Errors while running CTest
>  [exec] 3/3 Test #3: test_test_native_mini_dfs    Passed  
>   7.45 sec
>  [exec]
>  [exec] 67% tests passed, 1 tests failed out of 3
>  [exec]
>  [exec] Total Test time (real) =  23.74 sec
>  [exec]
>  [exec] The following tests FAILED:
>  [exec] 2 - test_test_libhdfs_zerocopy_hdfs_static (Failed)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11950) Disable libhdfs zerocopy test on Mac

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-11950:
-
Attachment: HDFS-11950.001.patch

> Disable libhdfs zerocopy test on Mac
> 
>
> Key: HDFS-11950
> URL: https://issues.apache.org/jira/browse/HDFS-11950
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HDFS-11950.001.patch
>
>
> Since libhdfs zerocopy test is expected to fail on Mac, just disable it.
> {noformat}
>  [exec] Test project 
> /Users/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target
>  [exec] Start 1: test_test_libhdfs_threaded_hdfs_static
>  [exec] 1/3 Test #1: test_test_libhdfs_threaded_hdfs_static ...   Passed  
>   9.73 sec
>  [exec] Start 2: test_test_libhdfs_zerocopy_hdfs_static
>  [exec] 2/3 Test #2: test_test_libhdfs_zerocopy_hdfs_static ...***Failed  
>   6.56 sec
>  [exec] Start 3: test_test_native_mini_dfs
>  [exec] Errors while running CTest
>  [exec] 3/3 Test #3: test_test_native_mini_dfs    Passed  
>   7.45 sec
>  [exec]
>  [exec] 67% tests passed, 1 tests failed out of 3
>  [exec]
>  [exec] Total Test time (real) =  23.74 sec
>  [exec]
>  [exec] The following tests FAILED:
>  [exec] 2 - test_test_libhdfs_zerocopy_hdfs_static (Failed)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11950) Disable libhdfs zerocopy test on Mac

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HDFS-11950:


Assignee: Akira Ajisaka

> Disable libhdfs zerocopy test on Mac
> 
>
> Key: HDFS-11950
> URL: https://issues.apache.org/jira/browse/HDFS-11950
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Akira Ajisaka
>Priority: Minor
> Attachments: HDFS-11950.001.patch
>
>
> Since libhdfs zerocopy test is expected to fail on Mac, just disable it.
> {noformat}
>  [exec] Test project 
> /Users/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target
>  [exec] Start 1: test_test_libhdfs_threaded_hdfs_static
>  [exec] 1/3 Test #1: test_test_libhdfs_threaded_hdfs_static ...   Passed  
>   9.73 sec
>  [exec] Start 2: test_test_libhdfs_zerocopy_hdfs_static
>  [exec] 2/3 Test #2: test_test_libhdfs_zerocopy_hdfs_static ...***Failed  
>   6.56 sec
>  [exec] Start 3: test_test_native_mini_dfs
>  [exec] Errors while running CTest
>  [exec] 3/3 Test #3: test_test_native_mini_dfs    Passed  
>   7.45 sec
>  [exec]
>  [exec] 67% tests passed, 1 tests failed out of 3
>  [exec]
>  [exec] Total Test time (real) =  23.74 sec
>  [exec]
>  [exec] The following tests FAILED:
>  [exec] 2 - test_test_libhdfs_zerocopy_hdfs_static (Failed)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393935#comment-16393935
 ] 

Íñigo Goiri commented on HDFS-13212:


Thanks [~wuweiwei] for the contribution.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Assignee: Weiwei Wu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, 
> HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> Need to invalidate the location cache when adding the mount table entries.
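
A minimal sketch of the invalidation idea (the cache shape and method name are assumptions for illustration, not the committed HDFS-13212 patch):
{code:java}
import java.util.Map;

final class LocationCacheSketch {
  // When a mount table entry is added or refreshed, drop every cached resolution
  // at or under that mount point so the next lookup resolves against the new entry.
  static void invalidate(Map<String, String> locationCache, String mountPoint) {
    locationCache.keySet().removeIf(path ->
        path.equals(mountPoint) || path.startsWith(mountPoint + "/"));
  }
}
{code}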



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393936#comment-16393936
 ] 

Arpit Agarwal commented on HDFS-13195:
--

I manually triggered another build this time.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page's value is still the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0";))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance, so when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer; we should use the datanode's conf instead.
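
A small self-contained illustration of the snapshot-vs-live configuration problem described above (simplified; not the actual DatanodeHttpServer code path):
{code:java}
import org.apache.hadoop.conf.Configuration;

final class ConfSnapshotSketch {
  public static void main(String[] args) {
    Configuration datanodeConf = new Configuration(false);
    datanodeConf.set("dfs.datanode.data.dir", "/data1");

    // Copy taken at construction time, as in the constructor quoted above.
    Configuration confForInfoServer = new Configuration(datanodeConf);

    // Later, dfsadmin -reconfig updates the DataNode's own conf...
    datanodeConf.set("dfs.datanode.data.dir", "/data1,/data2");

    // ...but the copy still holds the stale value, which is what the /conf page shows.
    System.out.println(confForInfoServer.get("dfs.datanode.data.dir")); // /data1
    System.out.println(datanodeConf.get("dfs.datanode.data.dir"));      // /data1,/data2
  }
}
{code}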



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-03-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13212:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Assignee: Weiwei Wu
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0
>
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, 
> HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> Need to invalidate the location cache when adding the mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393932#comment-16393932
 ] 

Arpit Agarwal commented on HDFS-13195:
--

I don't think it works anymore. You can reattach a new patch file with the same 
contents and a different name to trigger another run.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page's value is still the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0";))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance, so when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer; we should use the datanode's conf instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13252) Code refactoring: Remove Diff.ListType

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393927#comment-16393927
 ] 

Hudson commented on HDFS-13252:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13807 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13807/])
HDFS-13252. Code refactoring: Remove Diff.ListType. (szetszwo: rev 
ba0da2785d251745969f88a50d33ce61876d91aa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSetQuotaWithSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/Diff.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java


> Code refactoring: Remove Diff.ListType
> --
>
> Key: HDFS-13252
> URL: https://issues.apache.org/jira/browse/HDFS-13252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: h13252_20170308.patch, h13252_20170309.patch
>
>
> In Diff, there are only two lists, created and deleted.  It is easier to 
> trace the code if the methods have the list type in the method name, instead 
> of passing a ListType parameter.
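
A minimal sketch of the refactoring direction (a hypothetical simplified Diff; the accessor names are illustrative):
{code:java}
import java.util.ArrayList;
import java.util.List;

final class DiffSketch<E> {
  private final List<E> created = new ArrayList<>();
  private final List<E> deleted = new ArrayList<>();

  // Dedicated accessors replace getList(ListType.CREATED) / getList(ListType.DELETED),
  // so each call site names the list it touches and no ListType parameter is passed.
  List<E> getCreatedList() {
    return created;
  }

  List<E> getDeletedList() {
    return deleted;
  }
}
{code}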



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393924#comment-16393924
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13257:


h13257_20170309b.patch: more code removal.

> Code cleanup: INode never throws QuotaExceededException
> ---
>
> Key: HDFS-13257
> URL: https://issues.apache.org/jira/browse/HDFS-13257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13257_20170309.patch, h13257_20170309b.patch
>
>
> The quota verification logic is changed in a way that INode never throws 
> QuotaExceededException.  The {{verify}} parameter is always false in 
> addSpaceConsumed(..) and addSpaceConsumed2Parent(..).
> This provides an opportunity for some code cleanup.
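
A simplified before/after sketch of the cleanup opportunity (a hypothetical class; the real signatures in INode differ):
{code:java}
final class QuotaCleanupSketch {
  private long spaceConsumed;

  // Before (simplified): addSpaceConsumed(long delta, boolean verify)
  //     throws QuotaExceededException
  // With verify always false, the flag, the dead verification branch, and the
  // checked exception can all be removed:
  void addSpaceConsumed(long delta) {
    spaceConsumed += delta;
  }

  long getSpaceConsumed() {
    return spaceConsumed;
  }
}
{code}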



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13257:
---
Attachment: h13257_20170309b.patch

> Code cleanup: INode never throws QuotaExceededException
> ---
>
> Key: HDFS-13257
> URL: https://issues.apache.org/jira/browse/HDFS-13257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13257_20170309.patch, h13257_20170309b.patch
>
>
> The quota verification logic is changed in a way that INode never throws 
> QuotaExceededException.  The {{verify}} parameter is always false in 
> addSpaceConsumed(..) and addSpaceConsumed2Parent(..).
> This provides an opportunity for some code cleanup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393923#comment-16393923
 ] 

Íñigo Goiri commented on HDFS-13232:


+1 on  [^HDFS-13232.003.patch].
I'll wait a day for Yetus and [~csun] for feedback.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393921#comment-16393921
 ] 

Íñigo Goiri commented on HDFS-13204:


I thought the agreement was to create a new set of icons in the CSS and use 
those.
We would use the same icons, though, just named properly (e.g., 
federationdfshealth-router-*).
We can change them to fancier ones in a new JIRA.

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may lead users to believe the name service is 
> under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't perform write-related 
> operations. So I think the safe mode icon in Subclusters should be modified, 
> which would be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393919#comment-16393919
 ] 

Íñigo Goiri commented on HDFS-13195:


{quote}
you mean that i can cancel the patch, and submit the patch again?
{quote}
Yes, similar to what I commented in HDFS-13241.
There is a sequence that works but I can never remember it...

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page's value is still the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0";))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance, so when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer; we should use the datanode's conf instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Ekanth S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393915#comment-16393915
 ] 

Ekanth S commented on HDFS-13232:
-

Sure. Fixed the blank-line change in the new patch.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393916#comment-16393916
 ] 

Íñigo Goiri commented on HDFS-13241:


I'm not super fluent with JIRA and I usually do it by trial and error, but there is 
a way to trigger Yetus by cancelling the patch and submitting again (or a 
similar sequence).

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13241.001.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13254) RBF: Cannot mv/cp file cross namespace

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393912#comment-16393912
 ] 

Íñigo Goiri commented on HDFS-13254:


HDFS-2139 goes along the lines of what we discussed in HDFS-13123.
We can eventually make it faster but I think we need to start with the 
Rebalancer approach and build from there.

> RBF: Cannot mv/cp file cross namespace
> --
>
> Key: HDFS-13254
> URL: https://issues.apache.org/jira/browse/HDFS-13254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
>
> When I try to mv a file from one namespace to another, the client returns an 
> error.
>  
> Do we have any plan to support cp/mv of files across namespaces?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Ekanth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth S updated HDFS-13232:

Attachment: HDFS-13232.003.patch

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch, 
> HDFS-13232.003.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393911#comment-16393911
 ] 

genericqa commented on HDFS-13232:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913831/HDFS-13232.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70adfebf8230 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9a082fb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23378/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23378/testReport/ |
| Max. process+thread count | 3797 (vs. ulimit of 1) |
| modules | C: hadoo

[jira] [Commented] (HDFS-13240) RBF: Update some inaccurate document descriptions

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393909#comment-16393909
 ] 

Íñigo Goiri commented on HDFS-13240:


+1 on [^HDFS-13240.002.patch].
I think this covers what [~Tao Jie] commented.

> RBF: Update some inaccurate document descriptions
> -
>
> Key: HDFS-13240
> URL: https://issues.apache.org/jira/browse/HDFS-13240
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13240.001.patch, HDFS-13240.002.patch
>
>
> In RBF doc, there are some places that are not described accurately 
> (https://issues.apache.org/jira/browse/HDFS-13214?focusedCommentId=16389026&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16389026).
>  This will mislead users sometimes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393905#comment-16393905
 ] 

Íñigo Goiri commented on HDFS-13232:


Avoid removing the extra line in {{TestConnectionManager}}.
Other than that it looks good to me.
[~csun] do you mind confirming the test that was missing in HDFS-12330 is 
enough here?

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i < size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
> return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13257:
---
Description: 
The quota verification logic is changed in a way that INode never throws 
QuotaExceededException.  The {{verify}} parameter is always false in 
addSpaceConsumed(..) and addSpaceConsumed2Parent(..).

This provides an opportunity for some code cleanup.

  was:
It seems the quota verification logic is changed in a way that INode never 
throws QuotaExceededException.  The {{verify}} parameter is always false in 
addSpaceConsumed(..) and addSpaceConsumed2Parent(..).

This provides an opportunity for some code cleanup.


> Code cleanup: INode never throws QuotaExceededException
> ---
>
> Key: HDFS-13257
> URL: https://issues.apache.org/jira/browse/HDFS-13257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13257_20170309.patch
>
>
> The quota verification logic is changed in a way that INode never throws 
> QuotaExceededException.  The {{verify}} parameter is always false in 
> addSpaceConsumed(..) and addSpaceConsumed2Parent(..).
> This provides an opportunity for some code cleanup.
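As a rough illustration of that kind of cleanup (a simplified, hypothetical class, 
not the actual INode code): a boolean that every caller passes as false can be 
removed together with the checked exception that only the true branch could throw.
{code:java}
// Hypothetical before/after sketch of removing a dead "verify" flag.
class SpaceTracker {
  private long consumed;

  // Before: addSpaceConsumed(long delta, boolean verify) throws QuotaExceededException,
  // with verify always false at every call site.
  // After: the flag and the checked exception are gone, and callers get simpler code.
  void addSpaceConsumed(long delta) {
    consumed += delta;
  }

  long getConsumed() {
    return consumed;
  }
}
{code}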



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13257:
---
Status: Patch Available  (was: Open)

h13257_20170309.patch: 1st patch.

> Code cleanup: INode never throws QuotaExceededException
> ---
>
> Key: HDFS-13257
> URL: https://issues.apache.org/jira/browse/HDFS-13257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13257_20170309.patch
>
>
> It seems the quota verification logic is changed in a way that INode never 
> throws QuotaExceededException.  The {{verify}} parameter is always false in 
> addSpaceConsumed(..) and addSpaceConsumed2Parent(..).
> This provides an opportunity for some code cleanup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393902#comment-16393902
 ] 

Yiqun Lin commented on HDFS-13226:
--

Thanks [~maobaolong] for updating the patch. Almost looks good now. Only one 
minor comment:

In {{Mountable#validate}} and {{MembershipState#validate}}, the "this" info is 
missing after the change; only the error message is printed.
{code}
 if (getNameserviceId() == null || getNameserviceId().length() == 0) {
-  //LOG.error("Invalid registration, no nameservice specified " + this);
-  ret = false;
+  throw new IllegalArgumentException(
+  ERROR_MSG_NO_NS_SPECIFIED);
 }
{code}
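A minimal sketch of how the registration details could be kept in the message 
(assuming {{ERROR_MSG_NO_NS_SPECIFIED}} and the record's {{toString()}} are 
available, as in the diff above):
{code:java}
// Hedged sketch: keep the "this" context in the exception so the caller's log
// still shows which registration was rejected.
if (getNameserviceId() == null || getNameserviceId().length() == 0) {
  throw new IllegalArgumentException(
      ERROR_MSG_NO_NS_SPECIFIED + " " + this);
}
{code}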


> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, 
> HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, 
> HDFS-13226.006.patch, HDFS-13226.007.patch
>
>
> one of the mount entry source path rules is that the source path must start 
> with '/'; somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that we successfully added this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13257:
---
Attachment: h13257_20170309.patch

> Code cleanup: INode never throws QuotaExceededException
> ---
>
> Key: HDFS-13257
> URL: https://issues.apache.org/jira/browse/HDFS-13257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13257_20170309.patch
>
>
> It seems the quota verification logic is changed in a way that INode never 
> throws QuotaExceededException.  The {{verify}} parameter is always false in 
> addSpaceConsumed(..) and addSpaceConsumed2Parent(..).
> This provides an opportunity for some code cleanup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13257) Code cleanup: INode never throws QuotaExceededException

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-13257:
--

 Summary: Code cleanup: INode never throws QuotaExceededException
 Key: HDFS-13257
 URL: https://issues.apache.org/jira/browse/HDFS-13257
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


It seems the quota verification logic is changed in a way that INode never 
throws QuotaExceededException.  The {{verify}} parameter is always false in 
addSpaceConsumed(..) and addSpaceConsumed2Parent(..).

This provides an opportunity for some code cleanup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13240) RBF: Update some inaccurate document descriptions

2018-03-09 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393883#comment-16393883
 ] 

Yiqun Lin commented on HDFS-13240:
--

[~elgoiri], do you have any other comments? I'd like to commit today if it 
also looks good to you. Thanks.

> RBF: Update some inaccurate document descriptions
> -
>
> Key: HDFS-13240
> URL: https://issues.apache.org/jira/browse/HDFS-13240
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13240.001.patch, HDFS-13240.002.patch
>
>
> In RBF doc, there are some places that are not described accurately 
> (https://issues.apache.org/jira/browse/HDFS-13214?focusedCommentId=16389026&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16389026).
>  This will mislead users sometimes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393860#comment-16393860
 ] 

Bharat Viswanadham commented on HDFS-13244:
---

Thank You [~hanishakoneru] and [~ajayydv] for review and committing the changes.

> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.0.1, 3.2.0
>
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13023) Journal Sync does not work on a secure cluster

2018-03-09 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13023:
--
Fix Version/s: 3.1.0

> Journal Sync does not work on a secure cluster
> --
>
> Key: HDFS-13023
> URL: https://issues.apache.org/jira/browse/HDFS-13023
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-13023.00.patch, HDFS-13023.01.patch, 
> HDFS-13023.02.patch, HDFS-13023.03.patch
>
>
> Fails with the following exception.
> {code}
> 2018-01-10 01:15:40,517 INFO server.JournalNodeSyncer 
> (JournalNodeSyncer.java:syncWithJournalAtIndex(235)) - Syncing Journal 
> /0.0.0.0:8485 with xxx, journal id: mycluster
>  2018-01-10 01:15:40,583 ERROR server.JournalNodeSyncer 
> (JournalNodeSyncer.java:syncWithJournalAtIndex(259)) - Could not sync with 
> Journal at xxx/xxx:8485
>  com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for 
> protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: 
> this service is only accessible by nn/x...@example.com
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>  at com.sun.proxy.$Proxy16.getEditLogManifest(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:254)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230)
>  at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for 
> protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: 
> this service is only accessible by nn/xxx
>  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1437)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>  ... 6 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393849#comment-16393849
 ] 

Hudson commented on HDFS-13244:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13806 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13806/])
HDFS-13244. Add stack, conf, metrics links to utilities dropdown in NN 
(hanishakoneru: rev 4eeff62f6925991bca725b1ede5308055817de80)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html


> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.0.1, 3.2.0
>
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13255) RBF: Fail when try to remove mount point paths

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393835#comment-16393835
 ] 

maobaolong commented on HDFS-13255:
---

[~wuweiwei] I see, it is really a problem.

> RBF: Fail when try to remove mount point paths
> --
>
> Key: HDFS-13255
> URL: https://issues.apache.org/jira/browse/HDFS-13255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
>
> When deleting a ns-fed path which includes mount point paths, an error is issued.
> Each mount point path needs to be deleted independently.
> Operation step:
> {code:java}
> [hadp@root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source Destinations Owner Group Mode Quota/Usage
> /rm-test-all/rm-test-ns10 ns10->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> /rm-test-all/rm-test-ns2 ns1->/rm-test hadp hadp rwxr-xr-x [NsQuota: -/-, 
> SsQuota: -/-]
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns2/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 101 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/NOTICE.txt
> -rw-r--r-- 3 hadp supergroup 1366 2018-03-07 16:57 
> hdfs://ns-fed/rm-test-all/rm-test-ns2/README.txt
> [hadp@root]$ hdfs dfs -ls hdfs://ns-fed/rm-test-all/rm-test-ns10/
> Found 2 items
> -rw-r--r-- 3 hadp supergroup 3118 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/core-site.xml
> -rw-r--r-- 3 hadp supergroup 7481 2018-03-07 21:52 
> hdfs://ns-fed/rm-test-all/rm-test-ns10/hdfs-site.xml
> [hadp@root]$ hdfs dfs -rm -r hdfs://ns-fed/rm-test-all/
> rm: Failed to move to trash: hdfs://ns-fed/rm-test-all. Consider using 
> -skipTrash option
> [hadp@root]$ hdfs dfs -rm -r -skipTrash hdfs://ns-fed/rm-test-all/
> rm: `hdfs://ns-fed/rm-test-all': Input/output error
> {code}
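For illustration, a hedged sketch of the workaround described in the quoted report 
(deleting each mounted subtree through its own mount point rather than the 
federated parent): the paths come from the report above, the rest is a plain 
FileSystem-API example and not part of any patch.
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: remove each mount point path independently instead of
// deleting the parent ns-fed directory that spans several subclusters.
public class DeleteMountPaths {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("hdfs://ns-fed/"), new Configuration());
    fs.delete(new Path("/rm-test-all/rm-test-ns10"), true);  // recursive delete
    fs.delete(new Path("/rm-test-all/rm-test-ns2"), true);
  }
}
{code}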



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13254) RBF: Cannot mv/cp file cross namespace

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393833#comment-16393833
 ] 

maobaolong commented on HDFS-13254:
---

Do you think this Jira makes sense? 
https://issues.apache.org/jira/browse/HDFS-2139

> RBF: Cannot mv/cp file cross namespace
> --
>
> Key: HDFS-13254
> URL: https://issues.apache.org/jira/browse/HDFS-13254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Weiwei Wu
>Priority: Major
>
> When I try to mv a file from one namespace to another, the client returns an 
> error.
>  
> Do we have any plan to support cp/mv files across namespaces?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13252) Code refactoring: Remove Diff.ListType

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13252:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Thanks Ajay and Arpit for reviewing the patches.

I have committed this.

> Code refactoring: Remove Diff.ListType
> --
>
> Key: HDFS-13252
> URL: https://issues.apache.org/jira/browse/HDFS-13252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: h13252_20170308.patch, h13252_20170309.patch
>
>
> In Diff, there are only two lists, created and deleted.  It is easier to 
> trace the code if the methods have the list type in the method name, instead 
> of passing a ListType parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393830#comment-16393830
 ] 

Hanisha Koneru commented on HDFS-13244:
---

Committed to trunk, branch-3.1 and branch-3.0.

Thanks for the contribution [~bharatviswa] and thanks for the review [~ajayydv].

> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.0.1, 3.2.0
>
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13244:
--
   Resolution: Fixed
Fix Version/s: 3.2.0
   3.0.1
   3.1.0
   Status: Resolved  (was: Patch Available)

> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.1.0, 3.0.1, 3.2.0
>
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-03-09 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13241:
--
Status: Patch Available  (was: Open)

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13241.001.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-03-09 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13241:
--
Status: Open  (was: Patch Available)

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13241.001.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393827#comment-16393827
 ] 

maobaolong commented on HDFS-13195:
---

[~elgoiri] Thank you for teaching me this trick. You mean that I can cancel the 
patch and submit it again?

If so, I have done that. Waiting for the next test.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.
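One hedged way to illustrate the fix direction (not necessarily the committed 
patch; the "hadoop.conf" attribute key and the use of {{datanode.getConf()}} are 
assumptions): expose the datanode's live configuration to the info server so that 
values changed through {{dfsadmin -reconfig}} show up on the conf page.
{code:java}
// Hedged sketch, not the committed change: back the /conf servlet with the
// datanode's live conf instead of the private copy built at construction time.
// "hadoop.conf" is assumed to be the context attribute the ConfServlet reads.
this.infoServer = builder.build();
this.infoServer.setAttribute("hadoop.conf", datanode.getConf());
{code}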



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13252) Code refactoring: Remove Diff.ListType

2018-03-09 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393825#comment-16393825
 ] 

Ajay Kumar commented on HDFS-13252:
---

+1

> Code refactoring: Remove Diff.ListType
> --
>
> Key: HDFS-13252
> URL: https://issues.apache.org/jira/browse/HDFS-13252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13252_20170308.patch, h13252_20170309.patch
>
>
> In Diff, there are only two lists, created and deleted.  It is easier to 
> trace the code if the methods have the list type in the method name, instead 
> of passing a ListType parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13195:
--
Status: Open  (was: Patch Available)

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13195:
--
Status: Patch Available  (was: Open)

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393823#comment-16393823
 ] 

maobaolong commented on HDFS-13204:
---

[~elgoiri] Maybe we can specify some icon now; I think we can fix the 
display bug here by using whatever icon is available for the moment.

I think maybe [~liuhongtong] is not the best person to choose the right icon; we 
can leave the icon-choosing task to a new Jira for the right person, maybe a UI 
designer.

> RBF: Optimize name service safe mode icon
> -
>
> Key: HDFS-13204
> URL: https://issues.apache.org/jira/browse/HDFS-13204
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13204.001.patch, image-2018-02-28-18-33-09-972.png, 
> image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png
>
>
> In the federation health webpage, the safe mode icons of Subclusters and Routers 
> are inconsistent.
> The safe mode icon of Subclusters may lead users to think the name service is 
> under maintenance.
> !image-2018-02-28-18-33-09-972.png!
> The safe mode icon of Routers:
> !image-2018-02-28-18-33-47-661.png!
> In fact, if the name service is in safe mode, users can't do write-related 
> operations. So I think the safe mode icon in Subclusters should be modified, 
> which may be more reasonable.
> !image-2018-02-28-18-35-35-708.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393818#comment-16393818
 ] 

Íñigo Goiri commented on HDFS-13195:


{quote}
[~kihwal] I cannot understand the relationship between the failed test and my 
patch, I just added a single line. Puzzling.
{quote}
The build with branch-2 has issues related to memory, too many processes, etc. 
Sometimes this happens.
March 12th there is a bug bash to try to make this functional again.
Meanwhile, try to submit again.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393815#comment-16393815
 ] 

maobaolong commented on HDFS-13212:
---

+1 [^HDFS-13212-008.patch] LGTM.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Assignee: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch, 
> HDFS-13212-006.patch, HDFS-13212-007.patch, HDFS-13212-008.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry which already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding the mount table entries.
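A hedged sketch of the invalidation idea described above (the cache structure and 
the helper name are assumptions for illustration, not the actual patch): when 
refreshEntries sees a changed or re-added entry, drop any cached path resolutions 
under that mount point.
{code:java}
// Hedged sketch: invalidate cached locations under a changed mount point.
// locationCache is assumed to behave like a Map<String, PathLocation>.
private void invalidateLocationCache(String mountPoint) {
  locationCache.keySet().removeIf(
      path -> path.equals(mountPoint) || path.startsWith(mountPoint + "/"));
}
{code}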



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13241) RBF: TestRouterSafemode failed if the port 8888 is in use

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393814#comment-16393814
 ] 

maobaolong commented on HDFS-13241:
---

[~elgoiri]
{noformat}
Do you mind posting again to let Yetus run?{noformat}
I don't get your instruction. I just don't know what you want me to post 
again:
 * patch?
 * test command?
 * test result?

> RBF: TestRouterSafemode failed if the port 8888 is in use
> -
>
> Key: HDFS-13241
> URL: https://issues.apache.org/jira/browse/HDFS-13241
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, test
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Attachments: HDFS-13241.001.patch
>
>
> TestRouterSafemode failed if the port 8888 is in use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393806#comment-16393806
 ] 

maobaolong commented on HDFS-13195:
---

[~kihwal] I cannot understand the relationship between the failed test and my 
patch, I just added a single line. Puzzling.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, 
> HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page still shows the old config value.
> The reason is that:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-09 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393800#comment-16393800
 ] 

maobaolong commented on HDFS-13226:
---

[~elgoiri] Yeah, it is easy to do, and I have updated the new patch. PTAL.

> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, 
> HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, 
> HDFS-13226.006.patch, HDFS-13226.007.patch
>
>
> one of the mount entry source path rules is that the source path must start 
> with '/'; somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that we successfully added this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-09 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong updated HDFS-13226:
--
Attachment: HDFS-13226.007.patch

> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch, HDFS-13226.002.patch, 
> HDFS-13226.003.patch, HDFS-13226.004.patch, HDFS-13226.005.patch, 
> HDFS-13226.006.patch, HDFS-13226.007.patch
>
>
> one of the mount entry source path rules is that the source path must start 
> with '/'; somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that we successfully added this entry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13190) Document WebHDFS support for snapshot diff

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393796#comment-16393796
 ] 

Hudson commented on HDFS-13190:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13805 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13805/])
HDFS-13190. Document WebHDFS support for snapshot diff (aajisaka: rev 
7b0dc310208ee5bc191c9accb3d1312513145653)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md


> Document WebHDFS support for snapshot diff
> --
>
> Key: HDFS-13190
> URL: https://issues.apache.org/jira/browse/HDFS-13190
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HDFS-13190.001.patch, HDFS-13190.002.patch
>
>
> This ticket is opened to document the WebHDFS: Add support for snapshot diff 
> from HDFS-13052 in WebHDFS.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13256) Make FileChecksumHelper propagate per-datanode exceptions as part of top-level exception when all datanode attempts fail

2018-03-09 Thread Dennis Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393794#comment-16393794
 ] 

Dennis Huo commented on HDFS-13256:
---

Uploaded [^HDFS-13256-poc.patch], which is a diff based off of patch 8 in 
https://issues.apache.org/jira/browse/HDFS-13056 - there are probably more 
places too where adding "setCause" or "addSuppressed" would make sense, but 
also currently LambdaTestUtils via GenericTestUtils only checks the message 
against toString() and not the entire contents of the exception including 
cause/suppressed. I verified that at least just using the same 
StringUtils.stringifyException(t) as used in the assertion message makes the 
test function as expected, but I don't know if there might be compelling 
reasons to keep GenericTestUtils.assertExceptionContains only checking against 
toString() instead of the verbose stringified exception. Another option is to 
pass through a boolean in LambdaTestUtils or something to indicate whether the 
expected message can be checked against the verbose representation or only 
against toString().

> Make FileChecksumHelper propagate per-datanode exceptions as part of 
> top-level exception when all datanode attempts fail
> 
>
> Key: HDFS-13256
> URL: https://issues.apache.org/jira/browse/HDFS-13256
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13256-poc.patch
>
>
> Make FileChecksumHelper propagate per-datanode exceptions as part of 
> top-level exception when all datanode attempts fail



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Resolvers to support mount points across multiple subclusters

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393770#comment-16393770
 ] 

genericqa commented on HDFS-13224:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 31s{color} | {color:orange} root: The patch generated 1 new + 19 unchanged - 
2 fixed = 20 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
56s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color

[jira] [Comment Edited] (HDFS-12773) RBF: Improve State Store FS implementation

2018-03-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393744#comment-16393744
 ] 

Íñigo Goiri edited comment on HDFS-12773 at 3/9/18 11:24 PM:
-

The failed unit tests are unrelated.
The related unit tests were also executed successfully 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/23373/testReport/org.apache.hadoop.hdfs.server.federation.store.driver/].
Anybody up for review? [~zhengxg3], [~ywskycn], [~linyiqun]?


was (Author: elgoiri):
The failed unit tests are unrelated.
The related unit tests were also executed successfully 
[here|http://example.com]https://builds.apache.org/job/PreCommit-HDFS-Build/23373/testReport/org.apache.hadoop.hdfs.server.federation.store.driver/].
Anybody up for review? [~zhengxg3], [~ywskycn], [~linyiqun]?

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, 
> HDFS-12773.002.patch, HDFS-12773.003.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13256) Make FileChecksumHelper propagate per-datanode exceptions as part of top-level exception when all datanode attempts fail

2018-03-09 Thread Dennis Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Huo updated HDFS-13256:
--
Attachment: HDFS-13256-poc.patch

> Make FileChecksumHelper propagate per-datanode exceptions as part of 
> top-level exception when all datanode attempts fail
> 
>
> Key: HDFS-13256
> URL: https://issues.apache.org/jira/browse/HDFS-13256
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13256-poc.patch
>
>
> Make FileChecksumHelper propagate per-datanode exceptions as part of 
> top-level exception when all datanode attempts fail



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13252) Code refactoring: Remove Diff.ListType

2018-03-09 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393758#comment-16393758
 ] 

Tsz Wo Nicholas Sze commented on HDFS-13252:


The test failure does not seem related.  All the checkstyle warnings are 
LineLength.  I will just fix them before committing the patch.

> Code refactoring: Remove Diff.ListType
> --
>
> Key: HDFS-13252
> URL: https://issues.apache.org/jira/browse/HDFS-13252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: h13252_20170308.patch, h13252_20170309.patch
>
>
> In Diff, there are only two lists, created and deleted.  It is easier to 
> trace the code if the methods have the list type in the method name, instead 
> of passing a ListType parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation

2018-03-09 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393744#comment-16393744
 ] 

Íñigo Goiri commented on HDFS-12773:


The failed unit tests are unrelated.
The related unit tests were also executed successfully 
[here|https://builds.apache.org/job/PreCommit-HDFS-Build/23373/testReport/org.apache.hadoop.hdfs.server.federation.store.driver/].
Anybody up for review? [~zhengxg3], [~ywskycn], [~linyiqun]?

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, 
> HDFS-12773.002.patch, HDFS-12773.003.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13190) Document WebHDFS support for snapshot diff

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13190:
-
   Resolution: Fixed
Fix Version/s: 3.0.2
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.1, and branch-3.0. Thank you, [~ljain] and [~xyao]!

> Document WebHDFS support for snapshot diff
> --
>
> Key: HDFS-13190
> URL: https://issues.apache.org/jira/browse/HDFS-13190
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HDFS-13190.001.patch, HDFS-13190.002.patch
>
>
> This ticket is opened to document the WebHDFS snapshot diff support added in 
> HDFS-13052 in WebHDFS.md.
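
For readers looking for the shape of the call being documented, snapshot diff over WebHDFS is a plain HTTP GET. The operation and parameter names below (GETSNAPSHOTDIFF, oldsnapshotname, snapshotname) are as introduced by HDFS-13052, so the committed WebHDFS.md should be treated as the authoritative reference; the host, port, path, and snapshot names are placeholders.

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class WebHdfsSnapshotDiffExample {
  public static void main(String[] args) throws Exception {
    // Placeholder NameNode HTTP address and snapshottable directory.
    String url = "http://namenode.example.com:9870/webhdfs/v1/user/alice/data"
        + "?op=GETSNAPSHOTDIFF&oldsnapshotname=s1&snapshotname=s2";
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // JSON-encoded SnapshotDiffReport
      }
    }
  }
}
{code}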



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13190) Document WebHDFS support for snapshot diff

2018-03-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-13190:
-
Hadoop Flags: Reviewed
 Summary: Document WebHDFS support for snapshot diff  (was: Document 
WebHDFS support for snasphot diff)

> Document WebHDFS support for snapshot diff
> --
>
> Key: HDFS-13190
> URL: https://issues.apache.org/jira/browse/HDFS-13190
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13190.001.patch, HDFS-13190.002.patch
>
>
> This ticket is opened to document the WebHDFS snapshot diff support added in 
> HDFS-13052 in WebHDFS.md.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13252) Code refactoring: Remove Diff.ListType

2018-03-09 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393735#comment-16393735
 ] 

genericqa commented on HDFS-13252:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 
397 unchanged - 6 fixed = 397 total (was 403) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 283 unchanged - 6 fixed = 288 total (was 289) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.federation.router.TestRouterSafemode |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13252 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913814/h13252_20170309.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0fa694d5130e 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 99ab511 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findb

[jira] [Comment Edited] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393728#comment-16393728
 ] 

Hanisha Koneru edited comment on HDFS-13244 at 3/9/18 11:01 PM:


Looks like Jenkins cannot process html changes. Thanks for pointing it out 
[~elgoiri].

Tested it on a test cluster. I will commit this shortly.


was (Author: hanishakoneru):
Looks like Jenkins cannot process html changes. Thanks for pointing it out 
[~elgoiri].

I will commit this shortly.

> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13244) Add stack, conf, metrics links to utilities dropdown in NN webUI

2018-03-09 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393728#comment-16393728
 ] 

Hanisha Koneru commented on HDFS-13244:
---

Looks like Jenkins cannot process html changes. Thanks for pointing it out 
[~elgoiri].

I will commit this shortly.

> Add stack, conf, metrics links to utilities dropdown in NN webUI
> 
>
> Key: HDFS-13244
> URL: https://issues.apache.org/jira/browse/HDFS-13244
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13244.00.patch, Screen Shot 2018-03-07 at 11.28.27 
> AM.png
>
>
> Add stack, conf, metrics links to utilities dropdown in NN webUI 
> cc [~arpitagarwal] for suggesting this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13056) Expose file-level composite CRCs in HDFS which are comparable across different instances/layouts

2018-03-09 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393722#comment-16393722
 ] 

Ajay Kumar commented on HDFS-13056:
---

+1 (non-binding)
[~dennishuo], agree, created [HDFS-13256] to track it separately.
 

> Expose file-level composite CRCs in HDFS which are comparable across 
> different instances/layouts
> 
>
> Key: HDFS-13056
> URL: https://issues.apache.org/jira/browse/HDFS-13056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, distcp, erasure-coding, federation, hdfs
>Affects Versions: 3.0.0
>Reporter: Dennis Huo
>Assignee: Dennis Huo
>Priority: Major
> Attachments: HDFS-13056-branch-2.8.001.patch, 
> HDFS-13056-branch-2.8.002.patch, HDFS-13056-branch-2.8.003.patch, 
> HDFS-13056-branch-2.8.004.patch, HDFS-13056-branch-2.8.005.patch, 
> HDFS-13056-branch-2.8.poc1.patch, HDFS-13056.001.patch, HDFS-13056.002.patch, 
> HDFS-13056.003.patch, HDFS-13056.003.patch, HDFS-13056.004.patch, 
> HDFS-13056.005.patch, HDFS-13056.006.patch, HDFS-13056.007.patch, 
> HDFS-13056.008.patch, Reference_only_zhen_PPOC_hadoop2.6.X.diff, 
> hdfs-file-composite-crc32-v1.pdf, hdfs-file-composite-crc32-v2.pdf, 
> hdfs-file-composite-crc32-v3.pdf
>
>
> FileChecksum was first introduced in 
> [https://issues-test.apache.org/jira/browse/HADOOP-3981] and ever since then 
> has remained defined as MD5-of-MD5-of-CRC, where per-512-byte chunk CRCs are 
> already stored as part of datanode metadata, and the MD5 approach is used to 
> compute an aggregate value in a distributed manner, with individual datanodes 
> computing the MD5-of-CRCs per-block in parallel, and the HDFS client 
> computing the second-level MD5.
>  
> A shortcoming of this approach which is often brought up is the fact that 
> this FileChecksum is sensitive to the internal block-size and chunk-size 
> configuration, and thus different HDFS files with different block/chunk 
> settings cannot be compared. More commonly, one might have different HDFS 
> clusters which use different block sizes, in which case any data migration 
> won't be able to use the FileChecksum for distcp's rsync functionality or for 
> verifying end-to-end data integrity (on top of low-level data integrity 
> checks applied at data transfer time).
>  
> This was also revisited in https://issues.apache.org/jira/browse/HDFS-8430 
> during the addition of checksum support for striped erasure-coded files; 
> while there was some discussion of using CRC composability, it still 
> ultimately settled on hierarchical MD5 approach, which also adds the problem 
> that checksums of basic replicated files are not comparable to striped files.
>  
> This feature proposes to add a "COMPOSITE-CRC" FileChecksum type which uses 
> CRC composition to remain completely chunk/block agnostic, and allows 
> comparison between striped vs replicated files, between different HDFS 
> instances, and possibly even between HDFS and other external storage systems. 
> This feature can also be added in-place to be compatible with existing block 
> metadata, and doesn't need to change the normal path of chunk verification, 
> so is minimally invasive. This also means even large preexisting HDFS 
> deployments could adopt this feature to retroactively sync data. A detailed 
> design document can be found here: 
> https://storage.googleapis.com/dennishuo/hdfs-file-composite-crc32-v1.pdf
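
To make "CRC composition" concrete for readers of this thread, the sketch below is the classic zlib-style combine: given crc(A), crc(B), and the length of B, it computes crc(A||B) without re-reading any data, which is the algebra that lets a file-level checksum be assembled from per-chunk or per-block CRCs regardless of layout. It is only an illustration of the math, not the COMPOSITE-CRC code in the attached patches (which also has to cover CRC32C and the HDFS chunk/block structure).

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Illustration of CRC composition (zlib-style crc32_combine), not the patch code. */
public final class Crc32ComposeDemo {

  private static final int POLY = 0xEDB88320;  // reflected CRC-32 polynomial

  // Multiply a GF(2) 32x32 matrix (one int per column) by a 32-bit vector.
  private static int times(int[] mat, int vec) {
    int sum = 0;
    for (int i = 0; vec != 0; vec >>>= 1, i++) {
      if ((vec & 1) != 0) {
        sum ^= mat[i];
      }
    }
    return sum;
  }

  private static void square(int[] dst, int[] src) {
    for (int n = 0; n < 32; n++) {
      dst[n] = times(src, src[n]);
    }
  }

  /** Returns crc32(A||B) given crc1 = crc32(A), crc2 = crc32(B) and len2 = |B|. */
  static int combine(int crc1, int crc2, long len2) {
    if (len2 <= 0) {
      return crc1;
    }
    int[] even = new int[32];
    int[] odd = new int[32];
    odd[0] = POLY;                      // operator advancing the CRC by one zero bit
    for (int n = 1, row = 1; n < 32; n++, row <<= 1) {
      odd[n] = row;
    }
    square(even, odd);                  // two zero bits
    square(odd, even);                  // four zero bits
    do {                                // apply len2 zero bytes by repeated squaring
      square(even, odd);
      if ((len2 & 1) != 0) {
        crc1 = times(even, crc1);
      }
      len2 >>= 1;
      if (len2 == 0) {
        break;
      }
      square(odd, even);
      if ((len2 & 1) != 0) {
        crc1 = times(odd, crc1);
      }
      len2 >>= 1;
    } while (len2 != 0);
    return crc1 ^ crc2;
  }

  public static void main(String[] args) {
    byte[] a = "first block ".getBytes(StandardCharsets.UTF_8);
    byte[] b = "second block".getBytes(StandardCharsets.UTF_8);
    CRC32 c = new CRC32();
    c.update(a);
    long crcA = c.getValue();
    c.reset();
    c.update(b);
    long crcB = c.getValue();
    c.reset();
    c.update(a);
    c.update(b);
    long crcAB = c.getValue();
    long composed = combine((int) crcA, (int) crcB, b.length) & 0xFFFFFFFFL;
    System.out.println(composed == crcAB);  // prints true: composition matches
  }
}
{code}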



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Ekanth S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393719#comment-16393719
 ] 

Ekanth S edited comment on HDFS-13232 at 3/9/18 10:51 PM:
--

Thanks [~ywskycn] and [~elgoiri] for the quick review. Updated the patch with 
the missing test, extra check for number of connections and formatting.


was (Author: ekanth):
Thanks Wei and Inigo for the quick review. Updated the patch with the missing 
test, extra check for number of connections and formatting.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
> return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Ekanth S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393719#comment-16393719
 ] 

Ekanth S commented on HDFS-13232:
-

Thanks Wei and Inigo for the quick review. Updated the patch with the missing 
test, extra check for number of connections and formatting.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
> return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13256) Make FileChecksumHelper propagate per-datanode exceptions as part of top-level exception when all datanode attempts fail

2018-03-09 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-13256:
-

 Summary: Make FileChecksumHelper propagate per-datanode exceptions 
as part of top-level exception when all datanode attempts fail
 Key: HDFS-13256
 URL: https://issues.apache.org/jira/browse/HDFS-13256
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ajay Kumar


Make FileChecksumHelper propagate per-datanode exceptions as part of top-level 
exception when all datanode attempts fail



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-09 Thread Ekanth S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth S updated HDFS-13232:

Attachment: HDFS-13232.002.patch

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Ekanth S
>Priority: Minor
> Attachments: HDFS-13232.001.patch, HDFS-13232.002.patch
>
>
> In current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i=0; i<size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
> return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13215) RBF: Move Router to its own module

2018-03-09 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393717#comment-16393717
 ] 

Íñigo Goiri commented on HDFS-13215:


Can we do our own extension of TestHdfsConfigFields here?

> RBF: Move Router to its own module
> --
>
> Key: HDFS-13215
> URL: https://issues.apache.org/jira/browse/HDFS-13215
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
> Attachments: HDFS-13215.000.patch
>
>
> We are splitting the HDFS client code base and potentially Router-based 
> Federation is also independent enough to be in its own package.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


