[jira] [Updated] (HDFS-12116) BlockReportTestBase#blockReport_08 and #blockReport_09 intermittently fail

2017-07-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12116:
-
Attachment: 
TEST-org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage.xml

Attaching a failure log

> BlockReportTestBase#blockReport_08 and #blockReport_09 intermittently fail
> --
>
> Key: HDFS-12116
> URL: https://issues.apache.org/jira/browse/HDFS-12116
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: 
> TEST-org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage.xml
>
>
> This seems to be long-standing, but the failure rate (~10%) is slightly 
> higher in dist-test runs using CDH.
> In both the _08 and _09 tests:
> # an attempt is made to create a replica in {{TEMPORARY}} state, via 
> {{waitForTempReplica}};
> # once that returns, the test goes on to verify that the block report shows 
> the correct number of pending replication blocks.
> But there's a race condition: if the replica is replicated between steps #1 
> and #2, {{getPendingReplicationBlocks}} could return 0 or 1, depending on how 
> many replicas have been replicated, hence failing the test.
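
One common way to close this kind of race in HDFS tests is to poll the metric 
until it reaches the expected value rather than asserting it once. A minimal 
sketch of that approach, assuming a MiniDFSCluster named {{cluster}} and an 
illustrative {{expectedPendingCount}} (this is not the eventual fix):

{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Hedged sketch: re-check the pending-replication count until it matches, so
// a replica that finishes replicating between steps #1 and #2 no longer fails
// the test. "cluster" and "expectedPendingCount" are illustrative names.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return cluster.getNamesystem().getPendingReplicationBlocks()
        == expectedPendingCount;
  }
}, 100, 10000);  // re-check every 100 ms, give up after 10 s
{code}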



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11264:

Attachment: HDFS-11264-HDFS-10285-02.patch

Thanks [~umamaheswararao] for the reviews. Attached a new patch addressing them.

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch, 
> HDFS-11264-HDFS-10285-02.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure that SPS and 
> the Mover are not running together; otherwise it may cause issues.
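
One plausible shape for such a double check, sketched under assumptions (not 
the attached patch): the Mover already creates a lock file at 
{{/system/mover.id}} while it runs, so the SPS startup path can refuse to 
proceed while that file exists.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: treat the Mover's existing lock file as the mutual-exclusion
// signal. "fs" is an illustrative FileSystem handle for the cluster.
Path moverIdPath = new Path("/system/mover.id");
if (fs.exists(moverIdPath)) {
  throw new IOException("Mover appears to be running (" + moverIdPath
      + " exists); refusing to start the StoragePolicySatisfier so the two "
      + "services do not move blocks concurrently.");
}
{code}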



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081684#comment-16081684
 ] 

Uma Maheswara Rao G commented on HDFS-11264:


nit: exist quickly. --> exit quickly. ?

Other than that patch looks good.

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure that SPS and 
> the Mover are not running together; otherwise it may cause issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11345) Document the configuration key for FSNamesystem lock fairness

2017-07-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081654#comment-16081654
 ] 

Akira Ajisaka commented on HDFS-11345:
--

+1 for branch-2.7. Thanks Brahma.

> Document the configuration key for FSNamesystem lock fairness
> -
>
> Key: HDFS-11345
> URL: https://issues.apache.org/jira/browse/HDFS-11345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-11345.000.patch, HADOOP-11345.001.patch, 
> HDFS-11345.002.patch
>
>
> Per [earlier | 
> https://issues.apache.org/jira/browse/HDFS-5239?focusedCommentId=15536471=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15536471]
>  discussion.
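
For reference, the key being documented is {{dfs.namenode.fslock.fair}}, 
introduced by HDFS-5239 with a default of true (fair, FIFO ordering of the 
FSNamesystem read-write lock). A small illustration via the Java Configuration 
API:

{code}
import org.apache.hadoop.conf.Configuration;

// Illustration only: this key controls whether the FSNamesystem lock is fair.
// Setting it to false is the commonly cited trade of fairness for throughput
// under read-heavy load.
Configuration conf = new Configuration();
conf.setBoolean("dfs.namenode.fslock.fair", false);
{code}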



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081613#comment-16081613
 ] 

Hadoop QA commented on HDFS-6874:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876552/HDFS-6874.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c6510808256 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f1efa14 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20222/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20222/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, though it is 
> already supported in WebHDFS.

[jira] [Updated] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11965:

Fix Version/s: (was: HDFS010285)
   HDFS-10285

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: HDFS-10285
>
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.
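
Conceptually, the fix needs the cleanup branch above to also consider whether 
the file still has low-redundancy blocks before dropping the xattr. A minimal 
sketch of that guard; {{hasLowRedundancyBlocks}} and the variable names are 
hypothetical, not the committed patch:

{code}
// Hedged sketch: re-queue the track instead of removing the xattr while some
// blocks are still under-replicated (e.g. a DataNode has not re-registered),
// giving a late DataNode a chance to satisfy the policy.
long trackId = storageMovementAttemptedResult.getTrackId();
if (itemInfo != null
    && (!itemInfo.isAllBlockLocsAttemptedToSatisfy()
        || hasLowRedundancyBlocks(trackId))) {      // hypothetical helper
  blockStorageMovementNeeded.add(trackId);          // retry later
} else {
  this.sps.postBlkStorageMovementCleanup(trackId);  // safe to drop the xattr
}
{code}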



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081605#comment-16081605
 ] 

Surendra Singh Lilhore commented on HDFS-11965:
---

Thanks [~umamaheswararao] for the review and commit.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: HDFS-10285
>
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-11874:
---

Assignee: Uma Maheswara Rao G

> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch
>
>
> This JIRA is for tracking the documentation of this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081586#comment-16081586
 ] 

Weiwei Yang commented on HDFS-6874:
---

Sure [~szetszwo], just uploaded a new version to add a test to verify files 
with multiple blocks. Thanks for the comments.

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, though it is 
> already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> {code}
> ...
> case GETFILEBLOCKLOCATIONS: {
>   response = Response.status(Response.Status.BAD_REQUEST).build();
>   break;
> }
> {code}
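
For illustration, one hedged sketch of what the case could do instead, 
mirroring WebHDFS: delegate to {{FileSystem#getFileBlockLocations}} and return 
the result as JSON. The {{offset}}, {{len}}, {{fs}} and {{path}} names and the 
{{toJson}} helper are assumptions for brevity, not the attached patch.

{code}
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.apache.hadoop.fs.BlockLocation;

// Hedged sketch only: fetch block locations for [offset, offset + len) and
// serialize them as JSON, as WebHDFS does for the same operation.
case GETFILEBLOCKLOCATIONS: {
  BlockLocation[] locations = fs.getFileBlockLocations(path, offset, len);
  response = Response.ok(toJson(locations), MediaType.APPLICATION_JSON)
      .build();
  break;
}
{code}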



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-6874:
--
Attachment: HDFS-6874.07.patch

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, though it is 
> already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> {code}
> ...
> case GETFILEBLOCKLOCATIONS: {
>   response = Response.status(Response.Status.BAD_REQUEST).build();
>   break;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081584#comment-16081584
 ] 

John Zhuge commented on HDFS-12052:
---

Thanks [~3opan]. Will commit tomorrow morning so that others have a chance to 
comment or review.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).
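
A minimal sketch of the change being described, assuming an {{sslEnabled}} 
flag and a {{token}} in scope where HttpFS creates the delegation token (an 
illustration, not the attached patch):

{code}
import org.apache.hadoop.hdfs.web.WebHdfsConstants;
import org.apache.hadoop.io.Text;

// Hedged sketch: choose the delegation token kind from the SSL setting so
// renewers such as the YARN RM connect with the matching scheme.
Text kind = sslEnabled
    ? WebHdfsConstants.SWEBHDFS_TOKEN_KIND   // renew over https
    : WebHdfsConstants.WEBHDFS_TOKEN_KIND;   // renew over http
token.setKind(kind);
{code}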



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12115) Ozone: SCM: Add queryNode RPC Call

2017-07-10 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12115:
---

 Summary: Ozone: SCM: Add queryNode RPC Call
 Key: HDFS-12115
 URL: https://issues.apache.org/jira/browse/HDFS-12115
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-7240


Add a queryNode RPC to the storage container location protocol. This allows 
applications such as the SCM CLI to get the list of nodes in various states, 
such as Healthy, Live, or Dead.
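
As a rough sketch of the intended client-side shape (every identifier below is 
an illustrative assumption; the JIRA defines the actual protocol method):

{code}
import java.util.EnumSet;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.DatanodeID;

// Hypothetical usage: the SCM CLI asks the storage container location
// protocol for all nodes currently in the requested state.
List<DatanodeID> deadNodes = scmClient.queryNode(EnumSet.of(NodeState.DEAD));
for (DatanodeID dn : deadNodes) {
  System.out.println("DEAD: " + dn.getHostName());
}
{code}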




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081560#comment-16081560
 ] 

Zoran Dimitrijevic commented on HDFS-12052:
---

[~jzhuge] thanks. Removed the unnecessary import, and it seems it's finally all 
good. Please submit the patch when you have time. Cheers.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081552#comment-16081552
 ] 

Hadoop QA commented on HDFS-12052:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-httpfs: The patch 
generated 0 new + 53 unchanged - 9 fixed = 53 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12052 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876550/HDFS-12052.07.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 052717cd9ef9 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f1efa14 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20221/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20221/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, 

[jira] [Updated] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoran Dimitrijevic updated HDFS-12052:
--
Attachment: HDFS-12052.07.patch

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081472#comment-16081472
 ] 

Zoran Dimitrijevic commented on HDFS-12052:
---

I ran dev-support/bin/test-patch --dirty-workspace --plugins=checkstyle,maven 
HDFS-12052.06.patch and it was +1. So how do I see which new checkstyle offense 
still exists in this patch?

| Vote |  Subsystem |  Runtime   | Comment

|  +1  |mvninstall  |   9m 48s   | trunk passed 
|  +1  |checkstyle  |   0m 13s   | trunk passed 
|  +1  |checkstyle  |   0m  9s   | the patch passed 
|  ||  10m 21s   | 



> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081494#comment-16081494
 ] 

John Zhuge edited comment on HDFS-12052 at 7/11/17 1:54 AM:


Clicked the link 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt:
{noformat}
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java:22:import
 org.apache.hadoop.fs.http.server.HttpFSServerWebServer;:1:
Redundant import from the same package - 
org.apache.hadoop.fs.http.server.HttpFSServerWebServer. [RedundantImport]
{noformat}



was (Author: jzhuge):
Clicked the link 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt:
{noformat}
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java:22:import
 org.apache.hadoop.fs.http.server.HttpFSServerWebServer;:1: Redundant import 
from the same package - org.apache.hadoop.fs.http.server.HttpFSServerWebServer. 
[RedundantImport]
{noformat}


> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081470#comment-16081470
 ] 

Uma Maheswara Rao G commented on HDFS-11965:


The test failure is about "address already in use".
I just ran the test; it is passing.

---
 T E S T S
---
Running 
org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.421 sec - 
in 
org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile

Results :

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11965:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS010285
   Status: Resolved  (was: Patch Available)

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: HDFS010285
>
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081494#comment-16081494
 ] 

John Zhuge commented on HDFS-12052:
---

Clicked the link 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt:
{noformat}
./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java:22:import
 org.apache.hadoop.fs.http.server.HttpFSServerWebServer;:1: Redundant import 
from the same package - org.apache.hadoop.fs.http.server.HttpFSServerWebServer. 
[RedundantImport]
{noformat}


> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081488#comment-16081488
 ] 

Uma Maheswara Rao G commented on HDFS-11965:


Thanks [~surendrasingh] for the contribution!
Thanks [~rakeshr] for the reviews.

I have just pushed it to the branch.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081482#comment-16081482
 ] 

Uma Maheswara Rao G commented on HDFS-11965:


+1 on latest patch

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved 
> to the expected storage. This happens because of a delay in DataNode 
> registration after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and get registered with the 
> NameNode (one DataNode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will 
> move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file, 
> because this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ..
> } else {
>   ..
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DataNode now registers with the NameNode and reports one more 
> DISK replica, so the NameNode has two DISK replicas and one ARCHIVE replica.
> The test case has a condition that checks the number of DISK replicas:
> {code}
> DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);
> {code}
> This condition never becomes true, so the test case times out.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12113) `hadoop fs -setrep` requires a huge amount of memory on the client side

2017-07-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081460#comment-16081460
 ] 

Brahma Reddy Battula commented on HDFS-12113:
-

Dupe of HADOOP-12502?

> `hadoop fs -setrep` requires a huge amount of memory on the client side
> -
>
> Key: HDFS-12113
> URL: https://issues.apache.org/jira/browse/HDFS-12113
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.6.5
> Environment: Java 7
>Reporter: Ruslan Dautkhanov
>
> {code}
> $ hadoop fs -setrep -w 3 /
> {code}
> was failing with 
> {noformat}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2367)
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
> at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
> at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
> at java.lang.StringBuilder.append(StringBuilder.java:132)
> at 
> org.apache.hadoop.fs.shell.PathData.getStringForChildPath(PathData.java:305)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:272)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> It only succeeded once the hadoop fs CLI command's Java heap was allowed to grow to 5 GB:
> {code}
> HADOOP_HEAPSIZE=5000 hadoop fs -setrep -w 3 /
> {code}
> Notice that this setrep change was done for the whole HDFS filesystem.
> So it looks like the amount of memory used by the `hadoop fs -setrep` 
> command depends on how many files HDFS has in total? This is not a huge 
> HDFS filesystem; I would say it is even "small" by current standards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081451#comment-16081451
 ] 

John Zhuge commented on HDFS-12052:
---

+1 LGTM pending the last minor checkstyle issue

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled, it should return SWEBHDFS 
> delegation tokens.
> Currently, httpfs returns the WEBHDFS delegation token "kind" regardless of 
> whether ssl is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works, because httpfs doesn't check whether 
> the token kind is for swebhdfs or webhdfs. However, this breaks when the 
> YARN RM needs to renew the token for a job (for example, when running hadoop 
> distcp): since the DT kind is WEBHDFS, the RM tries to establish a non-ssl 
> connection to httpfs and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12114) Ensure correct HttpFS property names

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081454#comment-16081454
 ] 

Hadoop QA commented on HDFS-12114:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12114 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876539/HDFS-12114.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  shellcheck  shelldocs  xml  |
| uname | Linux c2106e9452bf 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5496a34 |
| Default Java | 1.8.0_131 |
| shellcheck | v0.4.6 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20220/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20220/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ensure correct HttpFS property names
> 
>
> Key: HDFS-12114
> URL: 

[jira] [Commented] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-0?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081439#comment-16081439
 ] 

Hadoop QA commented on HDFS-0:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 32 unchanged - 0 fixed = 35 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-0 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837330/HDFS-0.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8663b7e3c8ad 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5496a34 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20218/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20218/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20218/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20218/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-07-10 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081348#comment-16081348
 ] 

Sean Mackrory commented on HDFS-11096:
--

That's a good idea, [~rchiang]. Adding some delays between each step should 
be trivial and would dramatically increase the surface area of the test.
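
For illustration only, a hypothetical helper along these lines could inject 
the delays (the name and bounds here are made up, not taken from the actual 
test):
{code:java}
// Hypothetical sketch: pause for a random interval between upgrade steps so
// that more interleavings of old and new daemons get exercised.
private static void pauseBetweenUpgradeSteps() throws InterruptedException {
  long delayMs = java.util.concurrent.ThreadLocalRandom.current().nextLong(0, 2000);
  Thread.sleep(delayMs);
}
{code}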

Also, regarding the YARN-3583 issue, it was fortunately addressed by YARN-6143. 
I have marked the JIRAs as related.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12114) Ensure correct HttpFS property names

2017-07-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12114:
--
Attachment: HDFS-12114.001.patch

Patch 001
* Rename “hadoop.httpfs.ssl.enabled” to “httpfs.ssl.enabled”
* Rename “hadoop.httpfs.http.host to “httpfs.http.hostname” inline with env var 
HTTPFS_HTTP_HOSTNAME
* Rename “hadoop.httpfs.http.port” to “httpfs.http.port”
* Rename “hadoop.httpfs.http.administrators” to “httpfs.http.administrators”
* Properly deprecate env var HTTPFS_HTTP_HOSTNAME
* Remove unnecessary code in hadoop-httpfs.sh

Testing Done
* HttpFS sanity tests in insecure, SSL, and SSL+Kerberos mode
{noformat}
 ✓ httpfs daemonlog
 ✓ httpfs servlet /jmx
 ✓ httpfs servlet /conf
 ✓ httpfs servlet /logLevel
 ✓ httpfs servlet /logs
 ✓ httpfs servlet /stacks
 ✓ httpfs ls
{noformat}


> Ensure correct HttpFS property names
> 
>
> Key: HDFS-12114
> URL: https://issues.apache.org/jira/browse/HDFS-12114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-12114.001.patch
>
>
> The patch for HDFS-10860 used 2 different property names to indicate SSL is 
> enabled for HttpFS: hadoop.httpfs.ssl.enabled and httpfs.ssl.enabled. The 
> correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12114) Ensure correct HttpFS property names

2017-07-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12114:
--
Status: Patch Available  (was: Open)

> Ensure correct HttpFS property names
> 
>
> Key: HDFS-12114
> URL: https://issues.apache.org/jira/browse/HDFS-12114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HDFS-12114.001.patch
>
>
> The patch for HDFS-10860 used 2 different property names to indicate SSL is 
> enabled for HttpFS: hadoop.httpfs.ssl.enabled and httpfs.ssl.enabled. The 
> correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12114) Ensure correct HttpFS property names

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081411#comment-16081411
 ] 

John Zhuge commented on HDFS-12114:
---

To be consistent, rename hadoop.httpfs.http.host, hadoop.httpfs.http.port, and 
hadoop.httpfs.http.administrators as well.

> Ensure correct HttpFS property names
> 
>
> Key: HDFS-12114
> URL: https://issues.apache.org/jira/browse/HDFS-12114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The patch for HDFS-10860 used 2 different property names to indicate SSL is 
> enabled for HttpFS: hadoop.httpfs.ssl.enabled and httpfs.ssl.enabled. The 
> correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081409#comment-16081409
 ] 

Hadoop QA commented on HDFS-12052:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 1 new + 52 unchanged - 9 fixed = 53 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12052 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876534/HDFS-12052.06.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux baafe598894f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5496a34 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20219/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran 

[jira] [Updated] (HDFS-12114) Ensure correct HttpFS property names

2017-07-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12114:
--
Summary: Ensure correct HttpFS property names  (was: Incorrect property 
name to indicate SSL is enabled for HttpFS)

> Ensure correct HttpFS property names
> 
>
> Key: HDFS-12114
> URL: https://issues.apache.org/jira/browse/HDFS-12114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The patch for HDFS-10860 used 2 different property names to indicate SSL is 
> enabled for HttpFS: hadoop.httpfs.ssl.enabled and httpfs.ssl.enabled. The 
> correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081394#comment-16081394
 ] 

John Zhuge commented on HDFS-12052:
---

Yetus only reports checkstyle issues on the modified lines.

I am also very eager to make everything I touch as pretty as possible. 
Unfortunately, too many gratuitous changes become unmanageable when you have to 
maintain dozens of branches and do backports constantly, often in different 
commit orders. Clean backports are very much appreciated. Thank you [~3opan] 
for removing the style fixes.

That said, style fixes on or very close to the modified lines are OK. Please do 
file refactoring JIRAs where we can anticipate conflicts and resolve them just 
once.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11989) Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081368#comment-16081368
 ] 

Hadoop QA commented on HDFS-11989:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 11 unchanged - 0 fixed = 19 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876508/HDFS-11989-HDFS-7240.20170710.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dbab2348ebb8 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 87154fc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20217/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20217/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20217/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20217/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoran Dimitrijevic updated HDFS-12052:
--
Attachment: HDFS-12052.06.patch

OK, I am now using the constant, which I made package private, and corrected it 
to httpfs.ssl.enabled.

[~jzhuge] I have removed the unrelated style fixes (about 50 of them) from this 
patch. However, we should still fix them, and I can do that in a separate 
patch; it is really hard to refactor code when even a simple test file contains 
60 trivial style issues.

All good now?
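
For anyone skimming the thread, the gist of the change can be sketched roughly 
like this (an illustration assuming the standard {{WebHdfsConstants}} token 
kinds; not the literal patch):
{code:java}
// Rough sketch: choose the delegation token kind based on whether HttpFS
// is serving over SSL, so renewers contact it with the right scheme.
boolean sslEnabled = conf.getBoolean(HttpFSServerWebServer.SSL_ENABLED_KEY, false);
Text kind = sslEnabled
    ? WebHdfsConstants.SWEBHDFS_TOKEN_KIND   // "SWEBHDFS delegation"
    : WebHdfsConstants.WEBHDFS_TOKEN_KIND;   // "WEBHDFS delegation"
token.setKind(kind);
{code}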

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081276#comment-16081276
 ] 

John Zhuge commented on HDFS-12052:
---

Sorry for the confusion. The correct property name is {{httpfs.ssl.enabled}}. 
The property names are documented in the docs and in httpfs-default.xml. The 
environment variables are deprecated.

HDFS-10860 mistakenly used both hadoop.httpfs.ssl.enabled and 
httpfs.ssl.enabled. Filed HDFS-12114 to fix that.

For your patch, please use the following code:
{code:java}
  conf.setBoolean(HttpFSServerWebServer.SSL_ENABLED_KEY, true);
{code}
You'd have to make {{HttpFSServerWebServer.SSL_ENABLED_KEY}} package private.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12114) Incorrect property name to indicate SSL is enabled for HttpFS

2017-07-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12114:
--
Description: The patch for HDFS-10860 used 2 different property names to 
indicate SSL is enabled for HttpFS: hadoop.httpfs.ssl.enabled and 
httpfs.ssl.enabled. The correct one is {{httpfs.ssl.enabled}}.  (was: The patch 
for HDFS-10860 used 2 different property names to indicate SSL is enabled for 
HttpFS: {{hadoop.httpfs.ssl.enabled}} and {{httpfs.ssl.enabled}}. The correct 
one is {{httpfs.ssl.enabled}}.)

> Incorrect property name to indicate SSL is enabled for HttpFS
> -
>
> Key: HDFS-12114
> URL: https://issues.apache.org/jira/browse/HDFS-12114
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The patch for HDFS-10860 used 2 different property names to indicate SSL is 
> enabled for HttpFS: hadoop.httpfs.ssl.enabled and httpfs.ssl.enabled. The 
> correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080985#comment-16080985
 ] 

Hadoop QA commented on HDFS-11965:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876453/HDFS-11965-HDFS-10285.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a21007e6fdca 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 258fdc6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20213/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20213/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20213/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
>

[jira] [Created] (HDFS-12114) Incorrect property name to indicate SSL is enabled for HttpFS

2017-07-10 Thread John Zhuge (JIRA)
John Zhuge created HDFS-12114:
-

 Summary: Incorrect property name to indicate SSL is enabled for 
HttpFS
 Key: HDFS-12114
 URL: https://issues.apache.org/jira/browse/HDFS-12114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge
Assignee: John Zhuge


The patch for HDFS-10860 used 2 different property names to indicate SSL is 
enabled for HttpFS: {{hadoop.httpfs.ssl.enabled}} and {{httpfs.ssl.enabled}}. 
The correct one is {{httpfs.ssl.enabled}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081240#comment-16081240
 ] 

Ravi Prakash edited comment on HDFS-12052 at 7/10/17 10:00 PM:
---

I tried understanding the whole sequence of configurations set in environment 
variables, xml files, properties being passed around between {{HttpFS*}} and 
{{HttpServer2}} and {{AuthenticationFilter}}, and threw up my hands. 

It makes sense to me that when HttpFS is configured with SSL, the delegation 
tokens returned by the server should be SWEBHDFS.

Just one change, could you please use {{HttpFSServerWebServer.SSL_ENABLED_KEY}} 
instead of the hard coded string {{"httpfs.ssl.enabled"}}?

I too am fine with the style fixes in this patch itself. 


was (Author: raviprak):
I tried understanding the whole sequence of configurations set in environment 
variables, xml files, properties being passed around between {{HttpFS*}} and 
{{HttpServer2}} and {{AuthenticationFilter}}, and threw up my hands. 

It makes sense to me that when HttpFS is configured with SSL, the delegation 
tokens returned by the server should be SWEBHDFS.

Just one change, could you please use {{HttpFSServerWebServer.SSL_ENABLED_KEY}} 
instead of the hard coded string {{"httpfs.ssl.enabled"}}?

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081240#comment-16081240
 ] 

Ravi Prakash commented on HDFS-12052:
-

I tried understanding the whole sequence of configurations set in environment 
variables, xml files, properties being passed around between {{HttpFS*}} and 
{{HttpServer2}} and {{AuthenticationFilter}}, and threw up my hands. 

It makes sense to me that when HttpFS is configured with SSL, the delegation 
tokens returned by the server should be SWEBHDFS.

Just one change, could you please use {{HttpFSServerWebServer.SSL_ENABLED_KEY}} 
instead of the hard coded string {{"httpfs.ssl.enabled"}}?

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11989) Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis

2017-07-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11989:
---
Attachment: HDFS-11989-HDFS-7240.20170710.patch

HDFS-11989-HDFS-7240.20170710.patch: sync'ed with the branch.

> Ozone: add TestKeysRatis, TestBucketsRatis and TestVolumeRatis
> --
>
> Key: HDFS-11989
> URL: https://issues.apache.org/jira/browse/HDFS-11989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11989-HDFS-7240.20170618.patch, 
> HDFS-11989-HDFS-7240.20170620b.patch, HDFS-11989-HDFS-7240.20170620c.patch, 
> HDFS-11989-HDFS-7240.20170620.patch, HDFS-11989-HDFS-7240.20170621b.patch, 
> HDFS-11989-HDFS-7240.20170621c.patch, HDFS-11989-HDFS-7240.20170621.patch, 
> HDFS-11989-HDFS-7240.20170710.patch
>
>
> Add Ratis tests similar to TestKeys, TestBuckets and TestVolume.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081209#comment-16081209
 ] 

Hadoop QA commented on HDFS-12112:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876485/HDFS-12112.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aea70341ed7d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09653ea |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20215/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20215/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20215/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20215/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
> ---
>
> Key: HDFS-12112
> URL: 

[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080981#comment-16080981
 ] 

Allen Wittenauer commented on HDFS-12026:
-

OK, it sounds like we're all on the same page. :)

I think adding clang to the Dockerfile is fine. I'm not sure which compiler 
cmake picks up by default, but we can add a Maven test-patch profile stanza to 
test the "other" compiler during CI. That's all more than reasonable, and I'm 
really happy you folks are thinking about it. I'm just more concerned about the 
burden this may put on non-CI use. :)

Doing a quick search, it looks like there is only one such profile left: 
hadoop-project/pom.xml, if you need an example. That profile setting turns on 
Java linting, FWIW.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12112:
---
Attachment: HDFS-12112.001.patch

Uploaded patch rev 001. Added a simple {{DFSTestUtil#waitForReplication()}} 
call to wait for replication.
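
A minimal sketch of the added wait, assuming the test's existing cluster, file 
path, and replication factor (not the exact diff):
{code:java}
// Block until the file reaches the expected replication instead of racing
// the replication monitor; fails with a timeout rather than an NPE.
DFSTestUtil.waitForReplication(cluster.getFileSystem(), filePath,
    (short) 2, 30000 /* timeout in ms */);
{code}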

> TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
> ---
>
> Key: HDFS-12112
> URL: https://issues.apache.org/jira/browse/HDFS-12112
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
> Environment: CDH5.12.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-12112.001.patch
>
>
> Found the following error:
> {quote}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
> {quote}
> The NPE suggests corruptStorageDataNode in the following code snippet could 
> be null.
> {code}
> for(int i=0; i<...
> {code}
> Looking at the code, the test does not wait for file replication to happen, 
> which is why corruptStorageDataNode (the DN of the second replica) is null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2017-07-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080931#comment-16080931
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6874:
---

Thanks for the update. The patch looks good.

For the test, could you add another test file that has multiple blocks and 
multiple replicas? The current test file has only one block and one replica.
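
For example, something along these lines could create such a file 
({{DFSTestUtil}} is the standard test utility; the path and sizes here are 
illustrative):
{code:java}
// Create a file spanning several small blocks with two replicas each, so
// GETFILEBLOCKLOCATIONS returns multiple blocks and multiple locations.
final long blockSize = 1024;
final Path multiBlockFile = new Path("/test/multiblock");
DFSTestUtil.createFile(fs, multiBlockFile, 4096 /* bufferLen */,
    4 * blockSize /* fileLen */, blockSize, (short) 2 /* repl */, 0xBEEFL /* seed */);
{code}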

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, although it is 
> already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081044#comment-16081044
 ] 

Hadoop QA commented on HDFS-12052:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-httpfs: The patch 
generated 0 new + 7 unchanged - 54 fixed = 7 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12052 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876483/HDFS-12052.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f7537a72e3c1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09653ea |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20214/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20214/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, 

[jira] [Created] (HDFS-12113) `hadoop fs -setrep` requires huge amount of memory on client side

2017-07-10 Thread Ruslan Dautkhanov (JIRA)
Ruslan Dautkhanov created HDFS-12113:


 Summary: `hadoop fs -setrep` requires huge amount of memory on 
client side
 Key: HDFS-12113
 URL: https://issues.apache.org/jira/browse/HDFS-12113
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.5, 2.6.0
 Environment: Java 7
Reporter: Ruslan Dautkhanov


{code}
$ hadoop fs -setrep -w 3 /
{code}

was failing with 
{noformat}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at 
java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuilder.append(StringBuilder.java:132)
at org.apache.hadoop.fs.shell.PathData.getStringForChildPath(PathData.java:305)
at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:272)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
{noformat}

It only succeeded once the hadoop fs CLI command's Java heap was allowed to grow to 5 GB:
{code}
HADOOP_HEAPSIZE=5000 hadoop fs -setrep -w 3 /
{code}

Notice that this setrep change was done for the whole HDFS filesystem.

So it looks like the amount of memory used by the `hadoop fs -setrep` command 
depends on how many files HDFS has in total? This is not a huge HDFS 
filesystem; I would say it is even "small" by current standards.
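
For illustration, here is a minimal sketch of the recursion pattern visible in 
the stack trace above (a simplification with stubbed methods, not the actual 
org.apache.hadoop.fs.shell code):
{code}
import java.util.Collections;
import java.util.List;

class RecursePathSketch {
  // Stand-in for org.apache.hadoop.fs.shell.PathData.
  static class PathData {
    final String path;
    PathData(String path) { this.path = path; }
    boolean isDirectory() { return false; }      // stubbed for the sketch
    List<PathData> getDirectoryContents() {      // materializes ALL children at once
      return Collections.emptyList();            // stubbed for the sketch
    }
  }

  // Each recursion level keeps its complete child list (plus the string paths
  // built for every child) alive on the heap until its whole subtree has been
  // processed, so client memory grows with the number of paths traversed.
  static void recursePath(PathData dir) {
    List<PathData> children = dir.getDirectoryContents();
    for (PathData child : children) {
      if (child.isDirectory()) {
        recursePath(child);
      }
      // process child here, e.g. issue the setReplication call
    }
  }
}
{code}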



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12112:
---
Status: Patch Available  (was: Open)

> TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE
> ---
>
> Key: HDFS-12112
> URL: https://issues.apache.org/jira/browse/HDFS-12112
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
> Environment: CDH5.12.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-12112.001.patch
>
>
> Found the following error:
> {quote}
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
> {quote}
> The NPE suggests corruptStorageDataNode in the following code snippet could 
> be null.
> {code}
> for(int i=0; i
> {code}
> Looking at the code, the test does not wait for file replication to happen, 
> which is why corruptStorageDataNode (the DN of the second replica) is null.
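
A typical fix pattern for this kind of race is to block until the expected 
replication is reached before inspecting per-replica state. A minimal sketch, 
assuming the test can simply wait (the actual HDFS-12112.001.patch may differ):
{code}
import java.io.IOException;
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;

class ReplicationWaitSketch {
  // Wait until the file really has the expected number of replicas, so the
  // DN of the second replica can no longer be null when the test reads it.
  static void waitForReplicas(FileSystem fs, Path file, short expected)
      throws IOException, InterruptedException, TimeoutException {
    DFSTestUtil.waitReplication(fs, file, expected);
  }
}
{code}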



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080956#comment-16080956
 ] 

Allen Wittenauer commented on HDFS-12052:
-

bq. They add noise to the code review and will complicate backport.

I don't think this is going to be easy to backport anyway, given the shifts in 
code between 2 and 3. Personally, I'd prefer we fix the style issues here so 
that future commits don't inherit them; "don't fix style issues" is just a 
guideline.

The much bigger issue is this one:

bq. Do we want to use httpfs.ssl.enabled or hadoop.httpfs.ssl.enabled ?

It looks like the shell profile code is wrong, but [~jzhuge] would know for 
sure. We should probably fix this at the same time while we are here.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).
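
For readers following along, the gist of the fix can be sketched as below. The 
token kind strings come from the issue itself; the class and method shape are 
illustrative assumptions, not the actual HttpFS code:
{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

class TokenKindSketch {
  // Choose the delegation token kind from the ssl setting, so that renewers
  // such as the YARN RM know to open an https connection back to HttpFS.
  static <T extends TokenIdentifier> void setKind(Token<T> token,
      boolean sslEnabled) {
    token.setKind(sslEnabled ? new Text("SWEBHDFS delegation")
                             : new Text("WEBHDFS delegation"));
  }
}
{code}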



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoran Dimitrijevic updated HDFS-12052:
--
Attachment: HDFS-12052.05.patch

Removed the unrelated style fixes from everything but the tests. If needed, 
I'll upload one more patch that doesn't have any unrelated style fixes even in 
that file.

Can you please comment on the hadoop prefix and whether we want to fix that in 
this patch or in a different one?

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12112) TestBlockManager#testBlockManagerMachinesArray sometimes fails with NPE

2017-07-10 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12112:
--

 Summary: TestBlockManager#testBlockManagerMachinesArray sometimes 
fails with NPE
 Key: HDFS-12112
 URL: https://issues.apache.org/jira/browse/HDFS-12112
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
 Environment: CDH5.12.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


Found the following error:
{quote}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockManagerMachinesArray(TestBlockManager.java:1202)
{quote}
The NPE suggests corruptStorageDataNode in the following code snippet could be 
null.
{code}
for(int i=0; i
{code}

[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread Zoran Dimitrijevic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080797#comment-16080797
 ] 

Zoran Dimitrijevic commented on HDFS-12052:
---

Sure [~jzhuge]. I had to reformat the tests, which are then detected as new code 
by the style checks - I'll remove all style fixes unrelated to the test 
refactoring. Could you then review the new patch, which will contain these 
fixes, so that we don't waste this opportunity to clean up the style as well?

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12103) libhdfs++: Provide workaround to support cancel on filesystem connect until HDFS-11437 is resolved

2017-07-10 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12103:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for reviewing [~xiaowei.zhu]!  Committed to HDFS-8707.

Manual testing was done by verifying that the steps in the workaround procedure 
can cancel a slow connection.  The fix also gets run with and without valgrind 
as part of another project on a regular basis - in that case it's too closely 
coupled to the project to isolate the test.  My hope is to fix the root issue 
and revert this in the next 3-4 weeks once I finish up HDFS-11807 and 
HDFS-12111.

> libhdfs++: Provide workaround to support cancel on filesystem connect until 
> HDFS-11437 is resolved
> --
>
> Key: HDFS-12103
> URL: https://issues.apache.org/jira/browse/HDFS-12103
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12103.HDFS-8707.000.patch
>
>
> HDFS-11437 is going to take a non-trivial amount of work to do right.  In the 
> meantime it'd be nice to have a way to cancel pending connections (even when 
> the FS claimed they are finished).  
> The proposed workaround is to relax the rules about when 
> FileSystem::CancelPending can be called during connect, since the FS isn't 
> able to properly determine when it's connected anyway.  In order to determine 
> when the FS has connected you can make some simple RPC call, since that will 
> wait on failover.  If CancelPending can be called during that first RPC call 
> then it will effectively be canceling FileSystem::Connect.
> Current cancel rules - asterisk on steps where CancelPending is allowed:
> # FileSystem::Connect called
> # FileSystem communicates with first NN *
> # FileSystem::Connect returns - even if it hasn't communicated with the active 
> NN
> Proposed relaxation:
> # FileSystem::Connect called
> # FileSystem communicates with first NN *
> # FileSystem::Connect returns *
> # FileSystem::GetFileInfo called * - any namenode RPC call will do, ignore 
> perm errors
> # RPC engine blocks until it hits the active or runs out of retries *
> # FileSystem::GetFileInfo returns
> It'd be up to the user to add in the dummy NN RPC call.  Once HDFS-11437 is 
> fixed this workaround can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-07-10 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080837#comment-16080837
 ] 

Ray Chiang commented on HDFS-11096:
---

Thanks for keeping this rolling [~mackrorysd] (no pun intended).

I was just thinking on this subject this morning and one thing occurred to me.  
Do we have the capability to freeze the rolling upgrade process?  If we can set 
up combinations like:

* 25% old/75% new
* 50% old/50% new
* 75% old/25% new

while jobs continue to run, we could probably speed up some of the 
error-finding process.

Or maybe there's a better approach that would give similar results?

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-07-10 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080798#comment-16080798
 ] 

Sean Mackrory commented on HDFS-11096:
--

For anyone following this thread, I've come back to this and pushed some 
updates to the tests at https://github.com/mackrorysd/hadoop-compatibility.
* Fixes for some idempotence / SSH automation problems that could've popped up 
before in the rolling upgrade test, plus actual validation of the sorted data.
* [~eddyxu] wrote a little framework for writing tests in Python against mini 
HDFS clusters of 2 different versions, and a test that you can cp between the 2 
clusters. I did a bit of refactoring and added tests that check for similar 
output for most of the "hdfs dfs" commands currently in Hadoop 2.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11908) libhdfs++: Authentication failure when first NN of kerberized HA cluster is standby

2017-07-10 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11908:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed this to HDFS-8707.  HDFS-12111 filed for CI testing.

The manual tests I did were pretty simple: set up an HA kerberized cluster and 
make sure dfs.namenodes has the standby NN listed first.  When it tries to fail 
over and connect to the active you'll get warnings about simple auth not being 
supported.  Apply this patch and those warnings go away.  Same thing with the 
first NN shut down.  Repeated the test with gdb attached to make sure that 
AuthInfo was actually being default-initialized to use simple auth in the 
failing case, and SASL auth with the patch.

> libhdfs++: Authentication failure when first NN of kerberized HA cluster is 
> standby
> ---
>
> Key: HDFS-11908
> URL: https://issues.apache.org/jira/browse/HDFS-11908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11908.HDFS-8707.000.patch
>
>
> Library won't properly authenticate to kerberized HA cluster if the first 
> namenode it tries to connect to is the standby.  RpcConnection ends up 
> attempting to use simple auth.
> Control flow to connect to NN for the first time:
> # RpcConnection constructed with a pointer to the RpcEngine as the only 
> argument
> # RpcConnection::Connect(server endpoints, auth_info, callback called)
> ** auth_info contains the SASL mechanism to use + the delegation token if we 
> already have one
> Control flow to connect to NN after failover:
> # RpcEngine::NewConnection called, allocates an RpcConnection exactly how 
> step 1 above would
> # RpcEngine::InitializeConnection called, sets event hooks and a string for 
> cluster name
> # Rpc calls sent using RpcConnection::PreEnqueueRequests called to add RPC 
> message that didn't make it on last call due to standby exception
> # RpcConnection::ConnectAndFlush called to send RPC packets. This only takes 
> server endpoints, no auth info
> To fix:
> RpcEngine::InitializeConnection just needs to set RpcConnection::auth_info_ 
> from the existing RpcEngine::auth_info_, even better would be setting this in 
> the constructor so if an RpcConnection exists it can be expected to be in a 
> usable state.  I'll get a diff up once I sort out CI build failures.
> Also really need to get CI test coverage for HA and kerberos because this 
> issue should not have been around for so long.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080738#comment-16080738
 ] 

John Zhuge commented on HDFS-12052:
---

Thank you [~3opan] for the excellent find and the submitted patch.

Could you please revert the unrelated format changes? They add noise to the 
code review and will complicate backport. Please read 
https://wiki.apache.org/hadoop/HowToContribute regarding "reformat":
{noformat}
Please do not:

* reformat code unrelated to the bug being fixed: formatting changes should be 
separate patches/commits.
{noformat}

Please note the reformat might have been done automatically by your IDE.

> Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080738#comment-16080738
 ] 

John Zhuge edited comment on HDFS-12052 at 7/10/17 5:58 PM:


Thank you [~3opan] for the excellent find and the submitted patch.

Could you please revert the unrelated format changes? They add noise to the 
code review and will complicate backport. Please read 
https://wiki.apache.org/hadoop/HowToContribute regarding "reformat":
{noformat}
Please do not:

* reformat code unrelated to the bug being fixed: formatting changes should be 
separate patches/commits.
{noformat}

Please note the reformat might have been done automatically by your IDE.


was (Author: jzhuge):
Thank you [~3opan] for the excellent find and the patch submitted.

Could you please revert the unrelated format changes? They add noise to the 
code review and will complicate backport. Please read 
https://wiki.apache.org/hadoop/HowToContribute regard "reformat" :
{noformat}
Please do not:

* reformat code unrelated to the bug being fixed: formatting changes should be 
separate patches/commits.
{noformat}

Please note the reformat might been done automatically by your IDE.

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-07-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-12052:
--
Summary: Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS  
(was: Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS)

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12082) BlockInvalidateLimit value is incorrectly set after namenode heartbeat interval reconfigured

2017-07-10 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080714#comment-16080714
 ] 

Chen Liang commented on HDFS-12082:
---

Thanks [~cheersyang] for the update! +1 on v003 patch 

> BlockInvalidateLimit value is incorrectly set after namenode heartbeat 
> interval reconfigured 
> -
>
> Key: HDFS-12082
> URL: https://issues.apache.org/jira/browse/HDFS-12082
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12082.001.patch, HDFS-12082.002.patch, 
> HDFS-12082.003.patch
>
>
> HDFS-1477 provides an option to reconfigure the namenode heartbeat interval 
> without restarting the namenode. When the heartbeat interval is reconfigured, 
> {{blockInvalidateLimit}} gets recomputed:
> {code}
>  this.blockInvalidateLimit = Math.max(20 * (int) (intervalSeconds),
> DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
> {code}
> This doesn't honor the existing value set by {{dfs.block.invalidate.limit}}.
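
A sketch of the direction such a fix could take (illustrative only; the actual 
v003 patch may differ): recompute the limit from the new interval while still 
honoring an explicitly configured {{dfs.block.invalidate.limit}}.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

class InvalidateLimitSketch {
  // Use the operator-configured limit (falling back to the default) as the
  // floor instead of hard-coding the default, so that reconfiguring the
  // heartbeat interval does not silently discard the configured value.
  static int recompute(Configuration conf, long intervalSeconds) {
    int configured = conf.getInt(
        DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
        DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);
    return Math.max(20 * (int) intervalSeconds, configured);
  }
}
{code}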



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11965:
--
Attachment: HDFS-11965-HDFS-10285.008.patch

Thanks [~rakeshr] for the review.
Attached the updated patch.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch, 
> HDFS-11965-HDFS-10285.008.patch
>
>
> The test case is failing because not all of the required replicas are moved to 
> the expected storage. This happens because of a delay in datanode registration 
> after a cluster restart.
> Scenario:
> 1. Start a cluster with 3 DataNodes.
> 2. Create a file and set its storage policy to WARM.
> 3. Restart the cluster.
> 4. The Namenode and two DataNodes start first and get registered with the 
> NameNode (one datanode is not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will move 
> one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the Xattr is removed from the file because 
> this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
> && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>   .add(storageMovementAttemptedResult.getTrackId());
> 
> ..
> } else {
> 
> ..
>   this.sps.postBlkStorageMovementCleanup(
>   storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. The third DN now registers with the namenode and reports one more DISK 
> replica. The Namenode now has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);{code}
> This condition never becomes true, so the test case times out.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080648#comment-16080648
 ] 

Hadoop QA commented on HDFS-11264:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler 
|
|   | org.apache.hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11264 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876419/HDFS-11264-HDFS-10285-01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e59dfde6c0c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 258fdc6 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20212/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20212/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20212/console |
| Powered by | 

[jira] [Comment Edited] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080591#comment-16080591
 ] 

Allen Wittenauer edited comment on HDFS-12109 at 7/10/17 4:20 PM:
--

The HADOOP_CONF_DIR environment variable is how the shell scripts find where 
hadoop-env.sh is located. Defining it inside hadoop-env.sh won't work. Given 
what I can imply from your description, hadoop 3.x would work fine because it 
can autodetermine where stuff is located based upon the executable location.  
But hadoop 2.x has a lot of bugs, so it needs to have (minimally) HADOOP_PREFIX 
defined outside of the shell script code.  If that is defined, it should know 
where everything is located, including auto-defining HADOOP_CONF_DIR to be 
HADOOP_PREFIX/etc/hadoop.


was (Author: aw):
The HADOOP_CONF_DIR environment variable is how the shell scripts find where 
hadoop-env.sh is located.  Given what I can imply from your description, hadoop 
3.x would work fine because it can autodetermine where stuff is located based 
upon the executable location.  But hadoop 2.x has a lot of bugs, so it needs to 
have (minimally) HADOOP_PREFIX defined outside of the shell script code.  If 
that is defined, it should know where everything is located, including 
auto-defining HADOOP_CONF_DIR to be HADOOP_PREFIX/etc/hadoop.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.
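
For what it's worth, the same client settings can also be expressed 
programmatically. A minimal sketch mirroring the -D flags above (the property 
values come from the reporter's configuration; the class shape is 
illustrative):
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

class HaClientConfSketch {
  static FileSystem connect() throws IOException {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://saccluster");
    conf.set("dfs.nameservices", "saccluster");
    conf.set("dfs.ha.namenodes.saccluster", "namenode01,namenode02");
    conf.set("dfs.namenode.rpc-address.saccluster.namenode01", "namenode01:8020");
    conf.set("dfs.namenode.rpc-address.saccluster.namenode02", "namenode02:8020");
    // Note: the proxy provider key must end with the nameservice id
    // ("saccluster"); the posted hdfs-site.xml uses "mycluster" here, which
    // would also produce the UnknownHostException symptom.
    conf.set("dfs.client.failover.proxy.provider.saccluster",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    return FileSystem.get(conf);
  }
}
{code}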



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080591#comment-16080591
 ] 

Allen Wittenauer commented on HDFS-12109:
-

The HADOOP_CONF_DIR environment variable is how the shell scripts find where 
hadoop-env.sh is located.  Given what I can imply from your description, hadoop 
3.x would work fine because it can autodetermine where stuff is located based 
upon the executable location.  But hadoop 2.x has a lot of bugs, so it needs to 
have (minimally) HADOOP_PREFIX defined outside of the shell script code.  If 
that is defined, it should know where everything is located, including 
auto-defining HADOOP_CONF_DIR to be HADOOP_PREFIX/etc/hadoop.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as 
> per below:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11915) Sync rbw dir on the first hsync() to avoid file lost on power failure

2017-07-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080579#comment-16080579
 ] 

Kihwal Lee commented on HDFS-11915:
---

The approach looks good.

> Sync rbw dir on the first hsync() to avoid file lost on power failure
> -
>
> Key: HDFS-11915
> URL: https://issues.apache.org/jira/browse/HDFS-11915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kanaka Kumar Avvaru
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-11915-01.patch
>
>
> As discussed in HDFS-5042, there is a chance to lose blocks on power failure 
> if the rbw file creation entry has not yet been synced to the device. Then the 
> created block exists nowhere on disk, neither in rbw nor in finalized. 
> As suggested by [~kihwal], will discuss and track it in this JIRA.
> As suggested by [~vinayrpet], maybe the first hsync() request on a block file 
> can call fsync on its parent (rbw) directory.
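
For background, syncing a directory entry from Java is possible with plain NIO. 
A minimal sketch of the idea (an illustration, not HDFS-11915-01.patch itself):
{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class DirSyncSketch {
  // Fsync a directory so a newly created file's directory entry (e.g. the
  // rbw entry for a block file) survives a power failure. Opening a
  // directory for read and forcing it works on Linux; some platforms
  // disallow opening directories.
  static void fsyncDirectory(Path dir) throws IOException {
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true);
    }
  }
}
{code}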



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-10 Thread Ashwin Ramesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080593#comment-16080593
 ] 

Ashwin Ramesh commented on HDFS-12102:
--

[~nroberts] Added config options, removed unnecessary debug statements, set 
fast scan to disabled by default (it is now enabled via a boolean in the 
config), and removed corruptBlockThreshold. 
[~arpitagarwal] The change is a volume scanner feature that starts a continuous, 
high-bandwidth scan in order to scan a volume much more quickly when corruption 
has been determined to be highly likely. A sketch of the config gating follows.
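
A sketch of what such config gating might look like (the key name here is a 
hypothetical placeholder, not necessarily the one in HDFS-12102-002.patch):
{code}
import org.apache.hadoop.conf.Configuration;

class FastScanConfigSketch {
  // Hypothetical key for illustration only.
  static final String FAST_SCAN_ENABLED_KEY = "dfs.datanode.scan.fast.enabled";

  // Disabled by default, as described above; operators opt in explicitly.
  static boolean isFastScanEnabled(Configuration conf) {
    return conf.getBoolean(FAST_SCAN_ENABLED_KEY, false);
  }
}
{code}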

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch, HDFS-12102-002.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so 
> that it doesn't take 3 weeks to report the blocks, since one corrupt block 
> means an increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12111) libhdfs++: Expose HA and Kerberos options for C++ minidfscluster bindings

2017-07-10 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12111:
--

 Summary: libhdfs++: Expose HA and Kerberos options for C++ 
minidfscluster bindings
 Key: HDFS-12111
 URL: https://issues.apache.org/jira/browse/HDFS-12111
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: James Clampffer
Assignee: James Clampffer


Provide an easy way to instantiate the hdfs::MiniCluster object with HA and/or 
Kerberos enabled.  The majority of the existing CI tests should be able to run 
in those environments and a few HA and Kerberos smoke tests can be added as 
part of this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12102) VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt block

2017-07-10 Thread Ashwin Ramesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Ramesh updated HDFS-12102:
-
Attachment: HDFS-12102-002.patch

> VolumeScanner throttle dropped (fast scan enabled) when there is a corrupt 
> block
> 
>
> Key: HDFS-12102
> URL: https://issues.apache.org/jira/browse/HDFS-12102
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Affects Versions: 2.8.2
>Reporter: Ashwin Ramesh
>Priority: Minor
> Fix For: 2.8.2
>
> Attachments: HDFS-12102-001.patch, HDFS-12102-002.patch
>
>
> When the Volume scanner sees a corrupt block, it restarts the scan and scans 
> the blocks at a much faster rate with a negligible scan period. This is so 
> that it doesn't take 3 weeks to report the blocks, since one corrupt block 
> means an increased likelihood that there are more corrupt blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11915) Sync rbw dir on the first hsync() to avoid file lost on power failure

2017-07-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-11915:
--
Description: 
As discussed in HDFS-5042, there is a chance to lose blocks on power failure if 
the rbw file creation entry has not yet been synced to the device. Then the 
created block exists nowhere on disk, neither in rbw nor in finalized. 

As suggested by [~kihwal], will discuss and track it in this JIRA.

As suggested by [~vinayrpet], maybe the first hsync() request on a block file 
can call fsync on its parent (rbw) directory.



  was:
As discussed in HDFS-5042, there is a chance to loose blocks on power failure 
if rbw file creation entry is not yet sync to device. Then the block created is 
nowhere exists on disk. Neither in rbw nor in finalized. 

As suggested by [~kihwal], will discuss and track it in this JIRA.

As suggested by [~vinayrpet], May be first hsync() request on block file can 
call fsync on its parent directory (rbw) directory.




> Sync rbw dir on the first hsync() to avoid file lost on power failure
> -
>
> Key: HDFS-11915
> URL: https://issues.apache.org/jira/browse/HDFS-11915
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kanaka Kumar Avvaru
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-11915-01.patch
>
>
> As discussed in HDFS-5042, there is a chance to lose blocks on power failure 
> if the rbw file creation entry has not yet been synced to the device. Then the 
> created block exists nowhere on disk, neither in rbw nor in finalized. 
> As suggested by [~kihwal], will discuss and track it in this JIRA.
> As suggested by [~vinayrpet], maybe the first hsync() request on a block file 
> can call fsync on its parent (rbw) directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12104) libhdfs++: Make sure all steps in SaslProtocol end up calling AuthComplete

2017-07-10 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12104:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for reviewing this, [~xiaowei.zhu].  I committed to HDFS-8707.  The 
reasons for not adding tests are listed in the previous comment.

> libhdfs++:  Make sure all steps in SaslProtocol end up calling AuthComplete
> ---
>
> Key: HDFS-12104
> URL: https://issues.apache.org/jira/browse/HDFS-12104
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12104.HDFS-8707.000.patch
>
>
> SaslProtocol provides an abstraction for stepping through the authentication 
> challenge and response stages in the Cyrus SASL library by chaining callbacks 
> together (next one is invoked when async io is done).
> To authenticate SaslProtocol::Authenticate is called, and when authentication 
> is finished SaslProtocol::AuthComplete is called which invokes an 
> authentication completion callback.  There are a couple of cases where the 
> intermediate callbacks return without calling AuthComplete, which breaks 
> applications that take advantage of that callback.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12110) libhdfs++: Rebase 8707 branch onto an up to date version of trunk

2017-07-10 Thread James Clampffer (JIRA)
James Clampffer created HDFS-12110:
--

 Summary: libhdfs++: Rebase 8707 branch onto an up to date version 
of trunk
 Key: HDFS-12110
 URL: https://issues.apache.org/jira/browse/HDFS-12110
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: James Clampffer
Assignee: James Clampffer


It's been way too long since this has been done and it's time to start knocking 
down blockers for merging into trunk.  Can most likely just copy/paste the 
libhdfs++ directory into a newer version of master.  Want to track it in a jira 
since it's likely to cause conflicts when pulling the updated branch for the 
first time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-07-10 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080458#comment-16080458
 ] 

Kuhu Shukla commented on HDFS-5040:
---

[~brahmareddy], sure. Will update the patch in a couple of days. Let me know if 
that would be ok. Thanks a lot!

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Raghu C Doppalapudi
>Assignee: Kuhu Shukla
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> Enable the audit log for all the admin commands, and also provide the ability 
> to log all the admin commands in a separate log file; at this point all the 
> logging is displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080492#comment-16080492
 ] 

Rakesh R commented on HDFS-11264:
-

Added an extra Mover status check after updating {{SPS#isRunning=true}}, which 
will ensure that the Mover will not start after SPS startup. With this, below is 
the startup flow at both ends (a sketch of the resulting double-check follows 
the lists).
+Mover Tool:+
1) creates the {{/system/mover.id}} path
2) checks the SPS running status
3) if SPS is running, the Mover exits by deleting the path

+SPS service:+
1) checks the Mover status via {{/system/mover.id}} path existence
2) updates {{SPS#isRunning=true}}
3) checks the Mover status again via {{/system/mover.id}} path existence
4) if a Mover instance exists, stops SPS by setting {{SPS#isRunning=false}}
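
A minimal sketch of that double-check (the class shape and helper names are 
illustrative, not the exact patch code):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SpsMoverCheckSketch {
  private final FileSystem fs;
  private final Path moverIdPath = new Path("/system/mover.id");
  private volatile boolean isRunning;

  SpsMoverCheckSketch(FileSystem fs) { this.fs = fs; }

  void startSps() throws IOException {
    if (fs.exists(moverIdPath)) {   // 1) Mover already running
      return;
    }
    isRunning = true;               // 2) publish the SPS running state
    if (fs.exists(moverIdPath)) {   // 3) re-check: a Mover may have started
      isRunning = false;            //    in between, so 4) back off
    }
  }
}
{code}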

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS/Mover are 
> not running together; otherwise it may cause some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-10 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080488#comment-16080488
 ] 

James Clampffer commented on HDFS-12026:


Thanks [~aw]!  So do you think clang should just be removed from the 
dockerfile?  I can always set up a VM with a cron job that kicks off clang 
builds to get some coverage for now - I imagine once people on OSX start 
building the library more often, any build regressions will be noticed quickly.

Filed HDFS-12110 to get this synced up to trunk.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11975) Provide a system-default EC policy

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080446#comment-16080446
 ] 

Hadoop QA commented on HDFS-11975:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876410/HDFS-11975-008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux a336ea5599f5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality 

[jira] [Updated] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia updated HDFS-12109:
--
Description: 
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per 
below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.

  was:
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per 
below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I a missing something obvious here.


> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" 

[jira] [Updated] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia updated HDFS-12109:
--
Description: 
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per 
below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.

  was:
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I a missing something obvious here.


> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are 

[jira] [Updated] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11264:

Assignee: Rakesh R  (was: Wei Zhou)
Target Version/s: HDFS-10285
  Status: Patch Available  (was: Open)

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Rakesh R
> Attachments: HDFS-11264-HDFS-10285-01.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS/Mover are 
> not running together; otherwise it may cause some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11264) [SPS]: Double checks to ensure that SPS/Mover are not running together

2017-07-10 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11264:

Attachment: HDFS-11264-HDFS-10285-01.patch

> [SPS]: Double checks to ensure that SPS/Mover are not running together
> --
>
> Key: HDFS-11264
> URL: https://issues.apache.org/jira/browse/HDFS-11264
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Attachments: HDFS-11264-HDFS-10285-01.patch
>
>
> As discussed in HDFS-10885, double checks are needed to ensure SPS/Mover are 
> not running together; otherwise it may cause some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080341#comment-16080341
 ] 

Hadoop QA commented on HDFS-11973:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m  8s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m  9s{color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  9s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5ae34ac |
| JIRA Issue | HDFS-11973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876412/HDFS-11973.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 3fd6e352dec4 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 821f971 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20211/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20211/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20211/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_131.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20211/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_131.txt
 |
| cc | 

[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-07-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080334#comment-16080334
 ] 

Brahma Reddy Battula commented on HDFS-5040:


[~kshukla] thanks for working on this.

It's good to have an audit log for all the admin commands. Can you update the 
patch...?

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Raghu C Doppalapudi
>Assignee: Kuhu Shukla
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.patch, HDFS-5040.patch, HDFS-5040.patch
>
>
> Enable audit logging for all the admin commands; also provide the ability to log 
> all the admin commands in a separate log file. At this point, all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)
Luigi Di Fraia created HDFS-12109:
-

 Summary: "fs" java.net.UnknownHostException when HA NameNode is 
used
 Key: HDFS-12109
 URL: https://issues.apache.org/jira/browse/HDFS-12109
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
 Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[hadoop@namenode01 ~]$ uname -a
Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Reporter: Luigi Di Fraia


After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:

<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.
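
For reference, a minimal sketch of how a client resolves the logical nameservice, 
assuming the standard Configuration/FileSystem flow; it only succeeds when the HA 
keys above are visible on the classpath built from HADOOP_CONF_DIR:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ResolveLogicalUri {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml (and, once the HDFS client classes load,
    // hdfs-site.xml) from the classpath derived from HADOOP_CONF_DIR.
    Configuration conf = new Configuration();
    // "saccluster" is a logical name, not a host. Without dfs.nameservices
    // and a matching dfs.client.failover.proxy.provider.<nameservice> key,
    // resolution falls through to DNS and fails with UnknownHostException.
    FileSystem fs = FileSystem.get(URI.create("hdfs://saccluster"), conf);
    System.out.println(fs.getUri());
  }
}
{code}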



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-10 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Assignee: Anatoli Shein
  Status: Patch Available  (was: Open)

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>
> To keep consistent with the tools and tests, I think we should remove one 
> level of directories in the examples folder. 
> E.g. this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> Should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our cmake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11973) libhdfs++: Remove redundant directories in examples

2017-07-10 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11973:
-
Attachment: HDFS-11973.HDFS-8707.000.patch

Quick patch to remove the redundant directories.

> libhdfs++: Remove redundant directories in examples
> ---
>
> Key: HDFS-11973
> URL: https://issues.apache.org/jira/browse/HDFS-11973
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-11973.HDFS-8707.000.patch
>
>
> To keep consistent with the tools and tests, I think we should remove one 
> level of directories in the examples folder. 
> E.g. this directory:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat/cat.c
> Should become this:
> /hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/c/cat.c
> Removing the redundant directories will also simplify our cmake file 
> maintenance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11975) Provide a system-default EC policy

2017-07-10 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun updated HDFS-11975:
-
Attachment: HDFS-11975-008.patch

Updated with a small change.

> Provide a system-default EC policy
> --
>
> Key: HDFS-11975
> URL: https://issues.apache.org/jira/browse/HDFS-11975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: luhuichun
> Attachments: HDFS-11975-001.patch, HDFS-11975-002.patch, 
> HDFS-11975-003.patch, HDFS-11975-004.patch, HDFS-11975-005.patch, 
> HDFS-11975-006.patch, HDFS-11975-007.patch, HDFS-11975-008.patch
>
>
> From the usability point of view, it'd be nice to be able to specify a 
> system-wide EC policy, i.e., in {{hdfs-site.xml}}. For most users / admins 
> / downstream projects, it is not necessary to know the tradeoffs of the EC 
> policy, considering that it requires knowledge of EC, the actual physical 
> topology of the clusters, and many other factors (e.g., network, cluster size, 
> etc.).
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12026) libhdfs++: Fix compilation errors and warnings when compiling with Clang

2017-07-10 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080283#comment-16080283
 ] 

Anatoli Shein commented on HDFS-12026:
--

Thank you for the feedback [~aw].

I agree that we should not require both gcc and clang for building the native 
components, and also agree that the hardcoded /usr/bin paths need to be fixed. 
This was just the first patch where I tried forcing the clang build and testing 
it to see where it would fail :)

I think the actual way we should try to do this is to detect the default 
compiler in the environment and use it for the build, and only use both 
compilers (gcc and clang) during CI.

I also definitely agree that we should try and get this branch up to speed with 
trunk as soon as possible since we have been putting this off for too long.

> libhdfs++: Fix compilation errors and warnings when compiling with Clang 
> -
>
> Key: HDFS-12026
> URL: https://issues.apache.org/jira/browse/HDFS-12026
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-12026.HDFS-8707.000.patch, 
> HDFS-12026.HDFS-8707.001.patch, HDFS-12026.HDFS-8707.002.patch, 
> HDFS-12026.HDFS-8707.003.patch, HDFS-12026.HDFS-8707.004.patch, 
> HDFS-12026.HDFS-8707.005.patch, HDFS-12026.HDFS-8707.006.patch
>
>
> Currently multiple errors and warnings prevent libhdfspp from being compiled 
> with clang. It should compile cleanly using flags:
> -std=c++11 -stdlib=libc++
> and also warning flags:
> -Weverything -Wno-c++98-compat -Wno-missing-prototypes 
> -Wno-c++98-compat-pedantic -Wno-padded -Wno-covered-switch-default 
> -Wno-missing-noreturn -Wno-unknown-pragmas -Wconversion -Werror



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr

2017-07-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080186#comment-16080186
 ] 

Rakesh R commented on HDFS-11965:
-

Apart from the below comments, the latest patch looks good. I will commit the 
changes to the branch once they are addressed.
 # uncomment the timeout part {{@Test//(timeout = 30)}}
 # minor typos: {{FEW_LOW_REDUNDENCY_BLOCKS}} to {{FEW_LOW_REDUNDANCY_BLOCKS}}

> [SPS]: Should give chance to satisfy the low redundant blocks before removing 
> the xattr
> ---
>
> Key: HDFS-11965
> URL: https://issues.apache.org/jira/browse/HDFS-11965
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11965-HDFS-10285.001.patch, 
> HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, 
> HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch, 
> HDFS-11965-HDFS-10285.006.patch, HDFS-11965-HDFS-10285.007.patch
>
>
> The test case is failing because not all the required replicas are moved to 
> the expected storage. This happens because of a delay in DataNode registration 
> after the cluster restart.
> Scenario :
> 1. Start cluster with 3 DataNodes.
> 2. Create file and set storage policy to WARM.
> 3. Restart the cluster.
> 4. Now Namenode and two DataNodes started first and  got registered with 
> NameNode. (one datanode  not yet registered)
> 5. SPS scheduled block movement based on available DataNodes (It will move 
> one replica in ARCHIVE based on policy).
> 6. Block movement also succeeds and the Xattr is removed from the file because 
> this condition is true: {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}}.
> {code}
> if (itemInfo != null
> && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>   .add(storageMovementAttemptedResult.getTrackId());
> 
> ..
> } else {
> 
> ..
>   this.sps.postBlkStorageMovementCleanup(
>   storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. Now the third DN registers with the NameNode and reports one more DISK 
> replica. Now the NameNode has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, 
> timeout, fs);{code}
> This condition never becomes true, and the test case times out.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11874) [SPS]: Document the SPS feature

2017-07-10 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080179#comment-16080179
 ] 

Rakesh R commented on HDFS-11874:
-

Thanks [~umamaheswararao] for the patch. I'm adding a few comments; please take 
care of them. A usage sketch of the {{satisfyStoragePolicy()}} API follows the list.

# bq. but it will move the blocks physically across storage medias
This has to be {{but it won't move the blocks physically across storage 
medias.}}.
# bq. one of the following option
Please change to {{one of the following options}}
# Following are few cases where we could highlight words:
(a) {{user can call HdfsAdmin API satisfyStoragePolicy}} to {{user can call 
`HdfsAdmin` API `satisfyStoragePolicy()`}}
(b) {{satisfyStoragePolicy API on a directory}} to {{`satisfyStoragePolicy()` 
API on a directory}}
(c) {{HdfsAdmin API: “public void satisfyStoragePolicy(final Path path) throws 
IOException”}} to {{HdfsAdmin API: `public void satisfyStoragePolicy(final Path 
path) throws IOException`}}
(d) {{| 'path' | A path which requires blocks storage movement. |}} to {{| 
`path` | A path which requires blocks storage movement. |}}
 # The {{Satisfy Storage Policy}} command section is not included in the patch; 
please include it when preparing the next patch.
 # Also, we can include a {{Configuration}} section, where we can capture how to 
{{activate}} or {{deactivate}} the SPS service.
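
For illustration, a hedged usage sketch of the {{HdfsAdmin}} API quoted in (c); 
the SPS feature lives on the HDFS-10285 branch, and the URI and path below are 
placeholder assumptions:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class SatisfyPolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://mycluster"), conf);
    // Queue the blocks of this path for movement so that their placement
    // matches the storage policy already set on the path.
    admin.satisfyStoragePolicy(new Path("/data/warm-dir"));
  }
}
{code}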


> [SPS]: Document the SPS feature
> ---
>
> Key: HDFS-11874
> URL: https://issues.apache.org/jira/browse/HDFS-11874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Uma Maheswara Rao G
> Attachments: ArchivalStorage.html, HDFS-11874-HDFS-10285-001.patch
>
>
> This JIRA is for tracking the documentation about the feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12108) Hdfs tail -f command keeps printing the last line in loop when more data is not available

2017-07-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080130#comment-16080130
 ] 

Weiwei Yang commented on HDFS-12108:


Hi [~nitiraj.rathore]

Thanks for reporting this issue. I am trying to reproduce it, but it seems I cannot:

{noformat}
[wwei@localhost]$ echo "this is a line without ending char" > /tmp/test.log
[wwei@localhost]$ tr -d '\n' < /tmp/test.log > /tmp/test1.log
[wwei@localhost]$ cat -A /tmp/test1.log
this is a line without ending char[wwei@localhost]$

// put to hdfs and then tail
[wwei@localhost]$ ./bin/hdfs dfs -tail -f /test1.log
this is a line without ending char
{noformat}

Can you provide more detailed steps on how to reproduce this? Thanks.


> Hdfs tail -f command keeps printing the last line in loop when more data is 
> not available
> -
>
> Key: HDFS-12108
> URL: https://issues.apache.org/jira/browse/HDFS-12108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Nitiraj Singh Rathore
>
> I tried a simple tail -f expecting that new data would keep appearing on the 
> console, but I found that in the absence of new data the last line of the file 
> keeps printing again and again. See the output below. For clarification, I have 
> also pasted the output of the cat command for the same file.
> bq. [hdfs@c6401 lib]$ hdfs dfs -tail -f 
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> bq. [hdfs@c6401 lib]$ hdfs dfs -cat  
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}[hdfs@c6401
>  lib]$



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12085) Reconfigure namenode heartbeat interval fails if the interval was set with time unit

2017-07-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080124#comment-16080124
 ] 

Weiwei Yang commented on HDFS-12085:


Sure, thanks [~linyiqun]. I thought this was a bug impacting basic user 
experience, so I triaged it as a critical bug. Thanks for correcting it; I agree 
the change is small.

> Reconfigure namenode heartbeat interval fails if the interval was set with 
> time unit
> 
>
> Key: HDFS-12085
> URL: https://issues.apache.org/jira/browse/HDFS-12085
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12085.001.patch, HDFS-12085.002.patch
>
>
> It fails when I set the duration with a time unit, e.g. 5s; error:
> {noformat}
> Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 
> 08:14:18 PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
> FAILED: Change property dfs.heartbeat.interval
>   From: "3s"
>   To: "5s"
>   Error: For input string: "5s".
> {noformat}
> time unit support was added via HDFS-9847.
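
For context, a minimal sketch of the mismatch, assuming the startup path uses the 
unit-aware {{Configuration#getTimeDuration}} while a reconfiguration path parses 
the raw string; this is an illustration under that assumption, not the actual 
patch:

{code}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class HeartbeatIntervalParsing {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.heartbeat.interval", "5s");

    // Unit-aware parsing accepts "5s" and returns 5 (seconds).
    long seconds = conf.getTimeDuration(
        "dfs.heartbeat.interval", 3L, TimeUnit.SECONDS);
    System.out.println("getTimeDuration -> " + seconds);

    // Parsing the raw string instead fails exactly as reported:
    // java.lang.NumberFormatException: For input string: "5s"
    long raw = Long.parseLong(conf.get("dfs.heartbeat.interval"));
    System.out.println(raw); // never reached for "5s"
  }
}
{code}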



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12105) Ozone: listVolumes doesn't work from ozone commandline

2017-07-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080126#comment-16080126
 ] 

Weiwei Yang commented on HDFS-12105:


Thanks [~linyiqun], much appreciated. Right now the ozone shell commands are 
not fully tested, so there might be more bugs; it makes sense to have 
comprehensive test cases to cover them. Thanks a lot.

> Ozone: listVolumes doesn't work from ozone commandline
> --
>
> Key: HDFS-12105
> URL: https://issues.apache.org/jira/browse/HDFS-12105
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12105-HDFS-7240.001.patch, 
> HDFS-12105-HDFS-7240.002.patch
>
>
> The server side of ozone listVolume was implemented in HDFS-11773, but the 
> ozone client side (CLI listVolume command) doesn't support the prefix, startKey 
> and maxKey arguments yet. This JIRA will implement this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12085) Reconfigure namenode heartbeat interval fails if the interval was set with time unit

2017-07-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080110#comment-16080110
 ] 

Yiqun Lin commented on HDFS-12085:
--

LGTM, +1. Will commit tomorrow in case others have further comments on this.
I changed the priority of this JIRA since this seems to be a minor change, not a 
critical bug. Thanks [~cheersyang].

> Reconfigure namenode heartbeat interval fails if the interval was set with 
> time unit
> 
>
> Key: HDFS-12085
> URL: https://issues.apache.org/jira/browse/HDFS-12085
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12085.001.patch, HDFS-12085.002.patch
>
>
> It fails when I set the duration with a time unit, e.g. 5s; error:
> {noformat}
> Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 
> 08:14:18 PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
> FAILED: Change property dfs.heartbeat.interval
>   From: "3s"
>   To: "5s"
>   Error: For input string: "5s".
> {noformat}
> time unit support was added via HDFS-9847.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12085) Reconfigure namenode heartbeat interval fails if the interval was set with time unit

2017-07-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12085:
-
Affects Version/s: 3.0.0-alpha4
 Target Version/s: 3.0.0-beta1
 Priority: Minor  (was: Critical)
  Summary: Reconfigure namenode heartbeat interval fails if the 
interval was set with time unit  (was: Reconfigure namenode interval fails if 
the interval was set with time unit)

> Reconfigure namenode heartbeat interval fails if the interval was set with 
> time unit
> 
>
> Key: HDFS-12085
> URL: https://issues.apache.org/jira/browse/HDFS-12085
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Affects Versions: 3.0.0-alpha4
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12085.001.patch, HDFS-12085.002.patch
>
>
> It fails when I set the duration with a time unit, e.g. 5s; error:
> {noformat}
> Reconfiguring status for node [localhost:8111]: started at Tue Jul 04 
> 08:14:18 PDT 2017 and finished at Tue Jul 04 08:14:18 PDT 2017.
> FAILED: Change property dfs.heartbeat.interval
>   From: "3s"
>   To: "5s"
>   Error: For input string: "5s".
> {noformat}
> time unit support was added via HDFS-9847.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12105) Ozone: listVolumes doesn't work from ozone commandline

2017-07-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080097#comment-16080097
 ] 

Yiqun Lin commented on HDFS-12105:
--

Thanks [~cheersyang] for the review and commit! I see now we are missing some 
unit tests for the ozone shell commands. I will take care of this soon and add 
more tests for them. Thanks again.

> Ozone: listVolumes doesn't work from ozone commandline
> --
>
> Key: HDFS-12105
> URL: https://issues.apache.org/jira/browse/HDFS-12105
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12105-HDFS-7240.001.patch, 
> HDFS-12105-HDFS-7240.002.patch
>
>
> Now ozone listVolume on server side was implemented in HDFS-11773, but ozone 
> client-side (CLI listVolume command) doesn't support prefix, startKey and 
> maxKey arguments yet. This JIRA will implement on this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12108) Hdfs tail -f command keeps printing the last line in loop when more data is not available

2017-07-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12108:
--

Assignee: (was: Weiwei Yang)

> Hdfs tail -f command keeps printing the last line in loop when more data is 
> not available
> -
>
> Key: HDFS-12108
> URL: https://issues.apache.org/jira/browse/HDFS-12108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Nitiraj Singh Rathore
>
> I tried a simple tail -f expecting that new data would keep appearing on the 
> console, but I found that in the absence of new data the last line of the file 
> keeps printing again and again. See the output below. For clarification, I have 
> also pasted the output of the cat command for the same file.
> bq. [hdfs@c6401 lib]$ hdfs dfs -tail -f 
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> bq. [hdfs@c6401 lib]$ hdfs dfs -cat  
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}[hdfs@c6401
>  lib]$



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12108) Hdfs tail -f command keeps printing the last line in loop when more data is not available

2017-07-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-12108:
--

Assignee: Weiwei Yang

> Hdfs tail -f command keeps printing the last line in loop when more data is 
> not available
> -
>
> Key: HDFS-12108
> URL: https://issues.apache.org/jira/browse/HDFS-12108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.3
>Reporter: Nitiraj Singh Rathore
>Assignee: Weiwei Yang
>
> I tried a simple tail -f expecting that new data would keep appearing on the 
> console, but I found that in the absence of new data the last line of the file 
> keeps printing again and again. See the output below. For clarification, I have 
> also pasted the output of the cat command for the same file.
> bq. [hdfs@c6401 lib]$ hdfs dfs -tail -f 
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
> bq. 
> bq. [hdfs@c6401 lib]$ hdfs dfs -cat  
> /ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
> bq. 
> {"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}[hdfs@c6401
>  lib]$



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11345) Document the configuration key for FSNamesystem lock fairness

2017-07-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16080045#comment-16080045
 ] 

Brahma Reddy Battula commented on HDFS-11345:
-

As this makes the lock fairness configurable, wouldn't it be good to have this in {{branch-2.7}} also?

> Document the configuration key for FSNamesystem lock fairness
> -
>
> Key: HDFS-11345
> URL: https://issues.apache.org/jira/browse/HDFS-11345
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, namenode
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-11345.000.patch, HADOOP-11345.001.patch, 
> HDFS-11345.002.patch
>
>
> Per [earlier | 
> https://issues.apache.org/jira/browse/HDFS-5239?focusedCommentId=15536471=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15536471]
>  discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12108) Hdfs tail -f command keeps printing the last line in loop when more data is not available

2017-07-10 Thread Nitiraj Singh Rathore (JIRA)
Nitiraj Singh Rathore created HDFS-12108:


 Summary: Hdfs tail -f command keeps printing the last line in loop 
when more data is not available
 Key: HDFS-12108
 URL: https://issues.apache.org/jira/browse/HDFS-12108
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.3
Reporter: Nitiraj Singh Rathore


I tried a simple tail -f expecting that new data would keep appearing on the 
console, but I found that in the absence of new data the last line of the file 
keeps printing again and again. See the output below. For clarification, I have 
also pasted the output of the cat command for the same file.

bq. [hdfs@c6401 lib]$ hdfs dfs -tail -f 
/ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
bq. 
{"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
bq. 
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
bq. 
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
bq. 
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
bq. 
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}
bq. 
bq. [hdfs@c6401 lib]$ hdfs dfs -cat  
/ats/active/application_1499594381431_0001/appattempt_1499594381431_0001_01/domainlog-appattempt_1499594381431_0001_01
bq. 
{"id":"Tez_ATS_application_1499594381431_0001","readers":"*","writers":"hive"}
bq. 
{"id":"Tez_ATS_application_1499594381431_0001_1","readers":"*","writers":"hive"}[hdfs@c6401
 lib]$
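
For illustration, a minimal offset-tracking follow loop; the reported behavior 
would occur if a poll loop re-read the tail window without advancing its offset. 
The class name and the 1 KB window are assumptions, not the actual FsShell code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FollowTail {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(new Configuration());
    // Start 1 KB before the end of the file, as tail does.
    long offset = Math.max(0, fs.getFileStatus(path).getLen() - 1024);
    byte[] buf = new byte[4096];
    while (true) {
      long len = fs.getFileStatus(path).getLen();
      if (len > offset) {
        try (FSDataInputStream in = fs.open(path)) {
          in.seek(offset);
          int n;
          while ((n = in.read(buf)) > 0) {
            System.out.write(buf, 0, n);
            offset += n; // advance so already-printed bytes never repeat
          }
          System.out.flush();
        }
      }
      Thread.sleep(5000); // poll interval, as -f does
    }
  }
}
{code}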



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS

2017-07-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16079947#comment-16079947
 ] 

Hadoop QA commented on HDFS-12052:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-httpfs: The patch 
generated 0 new + 3 unchanged - 59 fixed = 3 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12052 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876366/HDFS-12052.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 801416647b2e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3de47ab |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20209/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20209/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Set "SWEBHDFS delegation" as DT kind if ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, 
