[jira] [Commented] (HDDS-847) TestBlockDeletion is failing

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691360#comment-16691360
 ] 

Hadoop QA commented on HDDS-847:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-ozone/integration-test: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948669/HDDS-847.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0d47b96a9783 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cfb915f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1753/artifact/out/diff-checkstyle-hadoop-ozone_integration-test.txt
 |
| unit | 
https://builds.apache.org/job/PreCom

[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691398#comment-16691398
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}144m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgradeRollback |
|   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
|   | hadoop.hdfs.TestParallelUnixDomainRead

[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691400#comment-16691400
 ] 

Hadoop QA commented on HDDS-718:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 14m  
3s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 14m  3s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  3s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  2s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
30s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} tools in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
31s{color} | {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color}

[jira] [Commented] (HDDS-845) Create a new raftClient instance for every watch request for Ratis

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691431#comment-16691431
 ] 

Hudson commented on HDDS-845:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15459 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15459/])
HDDS-845. Create a new raftClient instance for every watch request for 
(shashikant: rev 10cf5773ba32566dd76730e32a3ccdf2b3bd4d09)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientSpi.java


> Create a new raftClient instance for every watch request for Ratis
> --
>
> Key: HDDS-845
> URL: https://issues.apache.org/jira/browse/HDDS-845
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-845.000.patch, HDDS-845.001.patch
>
>
> Currently, watch requests go through the sliding window in Ratis and hence block, 
> as well as get blocked by, other requests submitted before them. These are read-only 
> requests and do not necessarily need to go through the sliding window. Until 
> this gets addressed in Ratis, it is better and more efficient to create a new 
> RaftClient instance for watch requests in XceiverClientRatis.
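
As a rough illustration of the per-request client idea described above, a hedged sketch 
(not the attached patch; newRaftClient(), sendWatchAsync(...) and ReplicationLevel are 
assumed names, not verified against the XceiverClientRatis/Ratis APIs in use):
{code:java}
// Hedged sketch only: use a short-lived RaftClient for each watch request so it does
// not queue behind writes in the shared client's sliding window.
public CompletableFuture<RaftClientReply> watchForCommit(long index) throws IOException {
  final RaftClient watchClient = newRaftClient();   // fresh client for this watch only
  return watchClient.sendWatchAsync(index, ReplicationLevel.ALL_COMMITTED)
      .whenComplete((reply, err) -> {
        try {
          watchClient.close();                      // dispose of the per-request client
        } catch (IOException ioe) {
          // best-effort close of the short-lived client
        }
      });
}
{code}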



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-845) Create a new raftClient instance for every watch request for Ratis

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-845:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~msingh], [~jnp] for the review. I have committed this change to trunk.

> Create a new raftClient instance for every watch request for Ratis
> --
>
> Key: HDDS-845
> URL: https://issues.apache.org/jira/browse/HDDS-845
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-845.000.patch, HDDS-845.001.patch
>
>
> Currently, watch requests go through the sliding window in Ratis and hence block, 
> as well as get blocked by, other requests submitted before them. These are read-only 
> requests and do not necessarily need to go through the sliding window. Until 
> this gets addressed in Ratis, it is better and more efficient to create a new 
> RaftClient instance for watch requests in XceiverClientRatis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-837) Persist originNodeId as part of .container file in datanode

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691435#comment-16691435
 ] 

Nanda kumar commented on HDDS-837:
--

Thanks [~jnp] for the review; committed it to trunk.

> Persist originNodeId as part of .container file in datanode
> ---
>
> Key: HDDS-837
> URL: https://issues.apache.org/jira/browse/HDDS-837
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-837.000.patch, HDDS-837.wip.patch
>
>
> To differentiate the replicas of QUASI_CLOSED containers we need the 
> {{originNodeId}} field. With this field, we can uniquely identify a 
> QUASI_CLOSED container replica. This will be needed when we want to CLOSE a 
> QUASI_CLOSED container.
> This field will be set by the node where the container is created, stored as 
> part of the {{.container}} file, and sent as part of the ContainerReport to 
> SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691437#comment-16691437
 ] 

Lokesh Jain commented on HDDS-718:
--

[~msingh] Thanks for reviewing the patch! The v3 patch addresses your comments.

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings these commands into the ozone-0.3 branch; this Jira is for 
> porting them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-837) Persist originNodeId as part of .container file in datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-837:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

> Persist originNodeId as part of .container file in datanode
> ---
>
> Key: HDDS-837
> URL: https://issues.apache.org/jira/browse/HDDS-837
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-837.000.patch, HDDS-837.wip.patch
>
>
> To differentiate the replicas of QUASI_CLOSED containers we need the 
> {{originNodeId}} field. With this field, we can uniquely identify a 
> QUASI_CLOSED container replica. This will be needed when we want to CLOSE a 
> QUASI_CLOSED container.
> This field will be set by the node where the container is created, stored as 
> part of the {{.container}} file, and sent as part of the ContainerReport to 
> SCM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-718:
-
Attachment: HDDS-718.003.patch

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings these commands into the ozone-0.3 branch; this Jira is for 
> porting them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12862) CacheDirective may be invalidated when NN restarts or makes a transition to Active.

2018-11-19 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691458#comment-16691458
 ] 

He Xiaoqiao commented on HDFS-12862:


LGTM for [^HDFS-12862-trunk.003.patch]. Ping [~daryn], [~jojochuang]: do you 
mind another review?

> CacheDirective may be invalidated when NN restarts or makes a transition to Active.
> -
>
> Key: HDFS-12862
> URL: https://issues.apache.org/jira/browse/HDFS-12862
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, hdfs
>Affects Versions: 2.7.1
> Environment: 
>Reporter: Wang XL
>Priority: Major
>  Labels: patch
> Attachments: HDFS-12862-branch-2.7.1.001.patch, 
> HDFS-12862-trunk.002.patch, HDFS-12862-trunk.003.patch
>
>
> The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying a 
> cacheDirective, the expiration in the directive may be a relative expiry time, and 
> the EditLog will serialize a relative expiry time.
> {code:java}
> // Some comments here
> static void modifyCacheDirective(
>   FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo 
> directive,
>   EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
> final FSPermissionChecker pc = getFsPermissionChecker(fsn);
> cacheManager.modifyDirective(directive, pc, flags);
> fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
>   }
> {code}
> But when the SBN replays the log, it will invoke 
> FSImageSerialization#readCacheDirectiveInfo, which reads it as an absolute 
> expiry time. This results in an inconsistency.
> {code:java}
>   public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
>   throws IOException {
> CacheDirectiveInfo.Builder builder =
> new CacheDirectiveInfo.Builder();
> builder.setId(readLong(in));
> int flags = in.readInt();
> if ((flags & 0x1) != 0) {
>   builder.setPath(new Path(readString(in)));
> }
> if ((flags & 0x2) != 0) {
>   builder.setReplication(readShort(in));
> }
> if ((flags & 0x4) != 0) {
>   builder.setPool(readString(in));
> }
> if ((flags & 0x8) != 0) {
>   builder.setExpiration(
>   CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
> }
> if ((flags & ~0xF) != 0) {
>   throw new IOException("unknown flags set in " +
>   "ModifyCacheDirectiveInfoOp: " + flags);
> }
> return builder.build();
>   }
> {code}
> In other words, fsn.getEditLog().logModifyCacheDirectiveInfo(directive, 
> logRetryCache) may serialize a relative expiry time, but 
> builder.setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)))
> reads it as an absolute expiry time.
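
One hedged way to keep the two sides consistent (illustrative only, not any of the 
attached patches) is to normalize a relative expiration to an absolute one before the 
directive is written to the edit log:
{code:java}
// Hedged sketch: convert a relative expiration into an absolute timestamp before
// logging, so readCacheDirectiveInfo's newAbsolute(...) interpretation matches on replay.
static CacheDirectiveInfo toAbsoluteExpiration(CacheDirectiveInfo directive, long nowMs) {
  CacheDirectiveInfo.Expiration expiration = directive.getExpiration();
  if (expiration == null || !expiration.isRelative()) {
    return directive;                                  // nothing to normalize
  }
  long absoluteMs = nowMs + expiration.getMillis();    // relative offset -> wall-clock time
  return new CacheDirectiveInfo.Builder(directive)
      .setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(absoluteMs))
      .build();
}
{code}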



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-837) Persist originNodeId as part of .container file in datanode

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691467#comment-16691467
 ] 

Hudson commented on HDDS-837:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15460 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15460/])
HDDS-837. Persist originNodeId as part of .container file in datanode. (nanda: 
rev 5a7ca6ac3e9964c1fbd5bb654d39d5b8fb731701)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestKeyValueContainerData.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/interfaces/TestHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/DownloadAndImportReplicator.java
* (edit) hadoop-hdds/container-service/src/test/resources/incorrect.container
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestChunkManagerImpl.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestReplicationSupervisor.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerData.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDataYaml.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestBlockManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueBlockIterator.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerSet.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/test/resources/additionalfields.container
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/container-service/src/test/resources/incorrect.checksum.container
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestHddsDispatcher.java


> Persist originNodeId as part of .container file in datanode
> ---
>
> Key: HDDS-837
> URL: https://issues.apache.org/jira/browse/HDDS-837
> 

[jira] [Created] (HDDS-850) ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine

2018-11-19 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-850:


 Summary: ReadStateMachineData hits OverlappingFileLockException in 
ContainerStateMachine
 Key: HDDS-850
 URL: https://issues.apache.org/jira/browse/HDDS-850
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.4.0


{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:178)

        at 
org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:197)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:542)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:174)

        at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:178)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:290)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:404)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$6(ContainerStateMachine.java:462)

        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        ... 1 more

2018-11-16 09:54:41,597 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
e3e9a703-55bb-482b-a0a1-ce8000474ac2) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:2), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=2

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.container.keyvalue.help

[jira] [Created] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException during failover due to no lock on currentUsedProxy

2018-11-19 Thread Yuxuan Wang (JIRA)
Yuxuan Wang created HDFS-14088:
--

 Summary: RequestHedgingProxyProvider can throw 
NullPointerException during failover due to no lock on currentUsedProxy
 Key: HDFS-14088
 URL: https://issues.apache.org/jira/browse/HDFS-14088
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Yuxuan Wang



{code:java}
if (currentUsedProxy != null) {
try {
  Object retVal = method.invoke(currentUsedProxy.proxy, args);
  LOG.debug("Invocation successful on [{}]",
  currentUsedProxy.proxyInfo);
{code}
If one thread is executing the try block and another thread triggers a failover by 
calling the following method
{code:java}
@Override
  public synchronized void performFailover(T currentProxy) {
toIgnore = this.currentUsedProxy.proxyInfo;
this.currentUsedProxy = null;
  }
{code}
It will set currentUsedProxy to null, and the first thread can then throw a 
NullPointerException.
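
A hedged sketch of one possible fix direction (illustrative only, not the patch to be 
attached): read currentUsedProxy into a local variable once, so the null check and the 
invocation use the same reference even if performFailover runs concurrently; guarding 
both with the same lock, as the title suggests, would also work.
{code:java}
// Hedged sketch: snapshot the shared field so a concurrent performFailover(), which
// nulls it, cannot slip in between the null check and the invoke. The surrounding
// try/catch of the invocation handler is omitted here.
ProxyInfo<T> proxy = currentUsedProxy;            // single read of the shared field
if (proxy != null) {
  Object retVal = method.invoke(proxy.proxy, args);
  LOG.debug("Invocation successful on [{}]", proxy.proxyInfo);
  return retVal;
}
{code}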




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-850) ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-850:
-
Description: 
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:178)

        at 
org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:197)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:542)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:174)

        at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:178)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:290)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:404)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$6(ContainerStateMachine.java:462)

        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        ... 1 more

{code}
This happens on the Ratis leader when the stateMachineData is not in the cached 
segments in Ratis and it gets a ReadStateMachineData request while 
writeStateMachineData has not completed yet. The approach would be to cache the 
stateMachineData inside ContainerStateMachine rather than inside Ratis.

  was:
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.contain

[jira] [Updated] (HDDS-850) ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-850:
-
Description: 
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:178)

        at 
org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:197)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:542)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:174)

        at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:178)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:290)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:404)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$6(ContainerStateMachine.java:462)

        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        ... 1 more

{code}
This happens on the Ratis leader when the stateMachineData is not in the cached 
segments in Ratis and it gets a ReadStateMachineData request while 
writeStateMachineData has not completed yet. The approach would be to cache the 
stateMachineData inside ContainerStateMachine rather than inside Ratis.

  was:
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.c

[jira] [Updated] (HDDS-850) ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-850:
-
Description: 
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone.container.keyvalue.helpers.ChunkUtils.readData(ChunkUtils.java:178)

        at 
org.apache.hadoop.ozone.container.keyvalue.impl.ChunkManagerImpl.readChunk(ChunkManagerImpl.java:197)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleReadChunk(KeyValueHandler.java:542)

        at 
org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:174)

        at 
org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:178)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:290)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:404)

        at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$6(ContainerStateMachine.java:462)

        at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        ... 1 more

{code}
This happens on the Ratis leader when the stateMachineData is not in the cached 
segments in Ratis and it gets a ReadStateMachineData request while 
writeStateMachineData has not completed yet. The approach would be to cache the 
stateMachineData inside ContainerStateMachine rather than inside Ratis.

  was:
{code:java}
2018-11-16 09:54:41,599 ERROR org.apache.ratis.server.impl.LogAppender: 
GrpcLogAppender(0813f1a9-61be-4cab-aa05-d5640f4a8341 -> 
c6ad906f-7e71-4bac-bde3-d22bc1aa8c7d) hit IOException while loading raft log

org.apache.ratis.server.storage.RaftLogIOException: 
0813f1a9-61be-4cab-aa05-d5640f4a8341: Failed readStateMachineData for (t:2, 
i:1), STATEMACHINELOGENTRY, client-7D19FB803B1E, cid=0

        at 
org.apache.ratis.server.storage.RaftLog$EntryWithData.getEntry(RaftLog.java:370)

        at 
org.apache.ratis.server.impl.LogAppender$LogEntryBuffer.getAppendRequest(LogAppender.java:167)

        at 
org.apache.ratis.server.impl.LogAppender.createRequest(LogAppender.java:216)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.appendLog(GrpcLogAppender.java:152)

        at 
org.apache.ratis.grpc.server.GrpcLogAppender.runAppenderImpl(GrpcLogAppender.java:96)

        at 
org.apache.ratis.server.impl.LogAppender.runAppender(LogAppender.java:100)

        at java.lang.Thread.run(Thread.java:745)

Caused by: java.nio.channels.OverlappingFileLockException

        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

        at sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.addToFileLockTable(AsynchronousFileChannelImpl.java:178)

        at 
sun.nio.ch.SimpleAsynchronousFileChannelImpl.implLock(SimpleAsynchronousFileChannelImpl.java:185)

        at 
sun.nio.ch.AsynchronousFileChannelImpl.lock(AsynchronousFileChannelImpl.java:118)

        at 
org.apache.hadoop.ozone

[jira] [Commented] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException during failover due to no lock on currentUsedProxy

2018-11-19 Thread Yuxuan Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691488#comment-16691488
 ] 

Yuxuan Wang commented on HDFS-14088:


I'll attach a patch later.

> RequestHedgingProxyProvider can throw NullPointerException during failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Priority: Major
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread is executing the try block and another thread triggers a failover 
> by calling the following method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can then throw a 
> NullPointerException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-11-19 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691520#comment-16691520
 ] 

He Xiaoqiao commented on HDFS-12749:


[~kihwal], [~xkrogen], do you mind another review to push this issue forward?

> DN may not send block report to NN after NN restart
> ---
>
> Key: HDFS-12749
> URL: https://issues.apache.org/jira/browse/HDFS-12749
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1, 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: TanYuxin
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-12749-branch-2.7.002.patch, 
> HDFS-12749-trunk.003.patch, HDFS-12749-trunk.004.patch, 
> HDFS-12749-trunk.005.patch, HDFS-12749.001.patch
>
>
> Now our cluster has thousands of DNs and millions of files and blocks. When the 
> NN restarts, its load is very high.
> After the NN restart, the DN will call the BPServiceActor#reRegister method to 
> register. But the register RPC will get an IOException since the NN is busy 
> dealing with Block Reports. The exception is caught at 
> BPServiceActor#processCommand.
> Below is the caught IOException:
> {code:java}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing 
> datanode Command
> java.io.IOException: Failed on local exception: java.io.IOException: 
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/DataNode_IP:Port remote=NameNode_Host/IP:Port]; Host Details : local 
> host is: "DataNode_Host/Datanode_IP"; destination host is: 
> "NameNode_Host":Port;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
> at org.apache.hadoop.ipc.Client.call(Client.java:1474)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy13.registerDatanode(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.registerDatanode(DatanodeProtocolClientSideTranslatorPB.java:126)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:793)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reRegister(BPServiceActor.java:926)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:604)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:898)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:711)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:864)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The uncaught IOException breaks BPServiceActor#register, and the Block Report 
> cannot be sent immediately. 
> {code}
>   /**
>* Register one bp with the corresponding NameNode
>* 
>* The bpDatanode needs to register with the namenode on startup in order
>* 1) to report which storage it is serving now and 
>* 2) to receive a registrationID
>*  
>* issued by the namenode to recognize registered datanodes.
>* 
>* @param nsInfo current NamespaceInfo
>* @see FSNamesystem#registerDatanode(DatanodeRegistration)
>* @throws IOException
>*/
>   void register(NamespaceInfo nsInfo) throws IOException {
> // The handshake() phase loaded the block pool storage
> // off disk - so update the bpRegistration object from that info
> DatanodeRegistration newBpRegistration = bpos.createRegistration();
> LOG.info(this + " beginning handshake with NN");
> while (shouldRun()) {
>   try {
> // Use returned registration from namenode with updated fields
> newBpRegistration = bpNamenode.registerDatanode(newBpRegistration);
> newBpRegistration.setNamespaceInfo(nsInfo);
> bpRegistration = newBpRegistration;
> break;
>   } catch(EOFException e) {  // namenode might have just restarted
> LOG.info("Problem connecting to server: " + nnAddr + " :"
> + e.getLocalizedMessage());
> sleepAndLogInterrupts(1000, "connecting to server");
>   } catch(SocketTimeoutException e) {  // namenode is busy
> LOG.info("Problem connecting to server: " + nnAddr);
> sleepAndLogInterrupts(1000, "connecting to server");
>   }
> }
> 
> LOG.info("Block pool " + this + " successfully registered with NN");
> bpos.registrationSucceeded(this, bpRegistration);
> // random short delay - helps scatter the BR from all DNs
> scheduler.scheduleBlockReport(dnConf.ini

[jira] [Commented] (HDDS-284) CRC for ChunksData

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691536#comment-16691536
 ] 

Shashikant Banerjee commented on HDDS-284:
--

Thanks [~hanishakoneru] for updating the patch. The patch looks good to me 
overall. Some minor comments:
 # Checksum#longToBytes can be replaced with Longs.toByteArray() from the 
com.google.common.primitives package (see the sketch after these comments).
 # With the patch it always seems to compute the checksum in 
writeChunkToContainerCall. With HTTP headers, if the checksum is already 
available in a REST call, we might not need to recompute it. Are we going to 
address such cases later?
 # ChunkManagerImpl#writeChunk: while handling overwrites of a chunk file, we can 
just verify the checksum if it is already present and return accordingly 
without actually doing I/O (addressed as a TODO in the code). We can also add 
the checksum verification here, though these can be addressed in a separate 
patch as well.
 # ChunkInputStream.java, L213-215: why is this change specifically required? Is 
it just to make the newly added tests pass?
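
As a quick illustration of comment #1 (a hedged sketch, not part of the patch; the 
hand-rolled longToBytes below is an assumption about how such a helper is typically 
written), Guava's Longs.toByteArray produces the same big-endian 8-byte encoding:

{code:java}
import com.google.common.primitives.Longs;
import java.nio.ByteBuffer;
import java.util.Arrays;

public class LongToBytesExample {
  // Hand-rolled variant, roughly what a Checksum#longToBytes helper would do
  // (assumption: big-endian encoding, which is ByteBuffer's default).
  static byte[] longToBytes(long value) {
    return ByteBuffer.allocate(Long.BYTES).putLong(value).array();
  }

  public static void main(String[] args) {
    long checksum = 0x1234_5678_9ABC_DEF0L;
    byte[] handRolled = longToBytes(checksum);
    byte[] guava = Longs.toByteArray(checksum);      // Guava replacement
    System.out.println(Arrays.equals(handRolled, guava));   // true
  }
}
{code}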

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.01.patch, HDDS-284.02.patch, 
> HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and Error Detection 
> for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  
>  
> Right now a ChunkInfo structure looks like this:
>  
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>  
> The proposal is to change the ChunkInfo structure as below:
>  
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional bytes checksum = 4;
>   optional CRCType checksumType = 5;
>   optional string legacyMetadata = 6;
>   optional string legacyData = 7;
>   repeated KeyValue metadata = 8;
> }
>  
> Instead of changing the on-disk format, we put the checksum, checksumType and 
> legacy data fields into ChunkInfo.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-14089:


 Summary: RBF: Failed to specify server's Kerberos principal name in 
NamenodeHeartbeatService
 Key: HDFS-14089
 URL: https://issues.apache.org/jira/browse/HDFS-14089
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ranith Sardar
 Fix For: HDFS-13891


DFSZKFailoverController and DFSHAAdmin set the conf for 
"HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same configuration 
for NamenodeHeartbeatService as well.
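
A rough sketch of the kind of change this implies (the configuration key strings 
are the standard Hadoop/HDFS keys; the class and method names below are 
hypothetical, not the actual NamenodeHeartbeatService code):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HeartbeatSecurityConfExample {
  // Copies the NameNode's Kerberos principal into the key that the RPC client
  // consults when validating the server's principal, mirroring what
  // DFSZKFailoverController and DFSHAAdmin already do.
  static Configuration addSecurityConfiguration(Configuration conf) {
    Configuration localConf = new Configuration(conf);   // don't mutate the caller's conf
    localConf.set("hadoop.security.service.user.name.key",
        localConf.get("dfs.namenode.kerberos.principal", ""));
    return localConf;
  }
}
{code}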



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14089) RBF: Failed to specify server's Kerberos principal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-14089:


Assignee: Ranith Sardar

> RBF: Failed to specify server's Kerberos principal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same configuration 
> for NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Attachment: HDFS-14075-04.patch

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691580#comment-16691580
 ] 

Ayush Saxena commented on HDFS-14075:
-

Test failures are due to:

java.lang.OutOfMemoryError: unable to create new native thread.

Uploaded v4 once again to trigger a new build.

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691577#comment-16691577
 ] 

Hadoop QA commented on HDDS-718:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  1s{color} | 
{color:black} {colo

[jira] [Commented] (HDDS-847) TestBlockDeletion is failing

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691583#comment-16691583
 ] 

Nanda kumar commented on HDDS-847:
--

+1, will commit this shortly.

> TestBlockDeletion is failing
> 
>
> Key: HDDS-847
> URL: https://issues.apache.org/jira/browse/HDDS-847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-847.001.patch
>
>
> {{TestBlockDeletion}} is failing with the below exception
> {code}
> [ERROR] 
> testBlockDeletion(org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion)
>   Time elapsed: 28.017 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:165)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-847) TestBlockDeletion is failing

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-847:
-
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

> TestBlockDeletion is failing
> 
>
> Key: HDDS-847
> URL: https://issues.apache.org/jira/browse/HDDS-847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-847.001.patch
>
>
> {{TestBlockDeletion}} is failing with the below exception
> {code}
> [ERROR] 
> testBlockDeletion(org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion)
>   Time elapsed: 28.017 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:165)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-847) TestBlockDeletion is failing

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691584#comment-16691584
 ] 

Nanda kumar commented on HDDS-847:
--

Thanks [~ljain] for the contribution, committed it to trunk.

> TestBlockDeletion is failing
> 
>
> Key: HDDS-847
> URL: https://issues.apache.org/jira/browse/HDDS-847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-847.001.patch
>
>
> {{TestBlockDeletion}} is failing with the below exception
> {code}
> [ERROR] 
> testBlockDeletion(org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion)
>   Time elapsed: 28.017 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:165)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-839) Wait for other services in the started script of hadoop-runner base docker image

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-839:
--
Attachment: HDDS-839-docker-hadoop-runner.002.patch

> Wait for other services in the started script of hadoop-runner base docker 
> image
> 
>
> Key: HDDS-839
> URL: https://issues.apache.org/jira/browse/HDDS-839
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-839-docker-hadoop-runner.001.patch, 
> HDDS-839-docker-hadoop-runner.002.patch
>
>
> As described in the parent issue, we need a simple method to handle service 
> dependencies in kubernetes clusters (usually as a workaround when some 
> clients can't retry with renewed DNS information).
> It could also be useful to minimize the wait time in the docker-compose 
> clusters.
> The easiest implementation is to modify the starter script of the 
> apache/hadoop-runner base image and add a bash loop that checks the 
> availability of the TCP port (with netcat). 
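
The proposal itself is a bash/netcat loop in the starter script; as a 
language-neutral illustration of the same wait-for-a-TCP-port idea (a hedged 
sketch with a placeholder host and port, not the actual script), the equivalent 
logic looks like:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForPort {
  // Polls until a TCP connect to host:port succeeds or the overall timeout expires.
  static boolean waitForPort(String host, int port, long timeoutMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      try (Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(host, port), 1000);
        return true;                       // the service is accepting connections
      } catch (IOException e) {
        Thread.sleep(1000);                // not up yet, retry after a short pause
      }
    }
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    // Placeholder endpoint: wait up to 60 seconds for something listening on 9876.
    System.out.println(waitForPort("localhost", 9876, 60_000));
  }
}
{code}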



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-839) Wait for other services in the started script of hadoop-runner base docker image

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691595#comment-16691595
 ] 

Hadoop QA commented on HDDS-839:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
5s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
13s{color} | {color:red} Docker failed to build yetus/hadoop:date2018-11-19. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-839 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948701/HDDS-839-docker-hadoop-runner.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1755/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Wait for other services in the started script of hadoop-runner base docker 
> image
> 
>
> Key: HDDS-839
> URL: https://issues.apache.org/jira/browse/HDDS-839
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-839-docker-hadoop-runner.001.patch, 
> HDDS-839-docker-hadoop-runner.002.patch
>
>
> As described in the parent issue, we need a simple method to handle service 
> dependencies in kubernetes clusters (usually as a workaround when some 
> clients can't retry with renewed DNS information).
> It could also be useful to minimize the wait time in the docker-compose 
> clusters.
> The easiest implementation is to modify the starter script of the 
> apache/hadoop-runner base image and add a bash loop that checks the 
> availability of the TCP port (with netcat). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-851) Provide official apache docker image for Ozone

2018-11-19 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-851:
-

 Summary: Provide official apache docker image for Ozone 
 Key: HDDS-851
 URL: https://issues.apache.org/jira/browse/HDDS-851
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


Similar to the apache/hadoop:2 and apache/hadoop:3 images, I propose to provide 
apache/ozone docker images which include the voted release binaries.

The image can follow all the conventions from HADOOP-14898

1. BRANCHING

I propose to create new docker branches:

docker-ozone-0.3.0-alpha
docker-ozone-latest

And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
apache/ozone: images

2. RUNNING

I propose to create a default runner script which starts om + scm + datanode + 
s3g all together. With this approach you can start a full ozone cluster as easily 
as

{code}
docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
{code}

That's all. This is an all-in-one docker image which is ready to try out.

3. RUNNING with compose

I propose to include a default docker-compose + config file in the image. To 
start a multi-node pseudo cluster it will be enough to execute:

{code}
docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
docker run apache/ozone cat docker-config > docker-config
docker-compose up -d
{code}

That's all, and you have a multi-(pseudo)node ozone cluster which could be 
scaled up and down with ozone.

4. k8s

Later we can also provide k8s resource files with the same approach:

{code}
docker run apache/ozone cat k8s.yaml | kubectl apply -f -
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-847) TestBlockDeletion is failing

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691601#comment-16691601
 ] 

Hudson commented on HDDS-847:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15461/])
HDDS-847. TestBlockDeletion is failing. Contributed by Lokesh Jain. (nanda: rev 
93666087bc58a2b4b92147e475872030ae64c620)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/OzoneTestUtils.java


> TestBlockDeletion is failing
> 
>
> Key: HDDS-847
> URL: https://issues.apache.org/jira/browse/HDDS-847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-847.001.patch
>
>
> {{TestBlockDeletion}} is failing with the below exception
> {code}
> [ERROR] 
> testBlockDeletion(org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion)
>   Time elapsed: 28.017 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion.testBlockDeletion(TestBlockDeletion.java:165)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-851) Provide official apache docker image for Ozone

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-851:
--
Attachment: docker-ozone-latest.tar.gz

> Provide official apache docker image for Ozone 
> ---
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images I propose to 
> provide apache/ozone docker images which includes the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easy as
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12862) CacheDirective may invalidate when NN restarts or makes a transition to Active

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691613#comment-16691613
 ] 

Hadoop QA commented on HDFS-12862:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 65 unchanged - 0 fixed = 72 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935653/HDFS-12862-trunk.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f8413022f719 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a7ca6a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/2/testReport/ |

[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2018-11-19 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691612#comment-16691612
 ] 

Elek, Marton commented on HDDS-851:
---

As this will be the first commit on a new, empty branch, I included the patch as a 
tar file instead of a patch file.

To test it, uncompress it and build:

{code}
./build.sh
{code}

Now you have two options. You can start an all-in-one docker with:

{code}
docker run -d -p 9878:9878 -p 9876:9876 -p 9874:9874 apache/ozone 
{code}

Then check localhost:9878, localhost:9876 and localhost:9874.

You can also test the docker-compose file:

{code}
docker-compose down
docker-compose up -d
{code}

Then check the web UIs.

*IMPORTANT* This patch depends on HDDS-839 (the WAITFOR patch). I built the 
apache/hadoop-runner image with that proposed patch locally before this build.

*IMPORTANT2* This patch also depends on the 0.3.0-alpha release, as the recent 
hadoop-runner changes are not backward compatible (scm --init instead of 
-init). The tar file uses the RC artifact in the Dockerfile and the real final 
URL is commented out. It will be committed with the real final URL after a 
successful 0.3.0-alpha vote.

> Provide official apache docker image for Ozone 
> ---
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images I propose to 
> provide apache/ozone docker images which includes the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easy as
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-852) Remove unnecessary stdout printing from apache/hadoop-runner

2018-11-19 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-852:
-

 Summary: Remove unnecessary stdout printing from 
apache/hadoop-runner
 Key: HDDS-852
 URL: https://issues.apache.org/jira/browse/HDDS-852
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


The latest apache/hadoop-runner always prints out an informational line before 
executing anything:

{code}
docker run apache/hadoop-runner ls
Setting up environment!
...
{code}

This "Setting up environment!" line is always there.

Here I propose to delete this one line from the starter script.

REASONING:

As I proposed in HDDS-851, we can provide a very easy way to get started with 
Ozone by executing commands inside the official apache/ozone docker image:

For example:

{code}
docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
{code}

This pattern (executing a command inside the container and redirecting the output) 
doesn't work with this generic "Setting up environment!" line present.

I think it's safe to delete, as all the optional steps (kerberos init, waiting for 
additional services, generating configs, etc.) can have their own separate 
(non-default) logging lines. 





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-852) Remove unnecessary stdout printing from apache/hadoop-runner

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-852:
--
Status: Patch Available  (was: Open)

> Remove unnecessary stdout printing from apache/hadoop-runner
> 
>
> Key: HDDS-852
> URL: https://issues.apache.org/jira/browse/HDDS-852
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-852-docker-hadoop-runner.001.patch
>
>
> Latest apache/hadoop-runner always prints out an informal line before 
> executing anything:
> {code}
> docker run apache/hadoop-runner ls
> Setting up environment!
> ...
> {code}
> This "Setting up environment!" line is always there.
> Here I propose to delete this one line from the starter script.
> REASONING:
> As I proposed in HDDS-851 we can provide very easy way to getting started 
> with ozone with executing commands inside the official apache/ozone docker 
> image:
> For example:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> {code}
> This pattern (executing command inside + redirecting the output) can't be 
> done with this generic "Setting up environment!" line.
> I think it's safe to delete as all the optional steps (kerberos init, wait 
> for additional services, generate configs, etc) could have separated 
> (non-default) logging lines. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13369:
-
Attachment: HDFS-13369.006.patch

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch, 
> HDFS-13369.006.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxy.
> 2. Write some files in the file system.
> 3. Take an FSCK report for the above files.
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691636#comment-16691636
 ] 

Hadoop QA commented on HDFS-13369:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-13369 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948710/HDFS-13369.006.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25558/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch, 
> HDFS-13369.006.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider
> 2. Write some files to the file system
> 3. Take an FSCK report of the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-852) Remove unnecessary stdout printing from apache/hadoop-runner

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691640#comment-16691640
 ] 

Hadoop QA commented on HDDS-852:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} docker {color} | {color:blue}  0m  
4s{color} | {color:blue} Dockerfile 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/sourcedir/dev-support/docker/Dockerfile'
 not found, falling back to built-in. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
56s{color} | {color:red} Docker failed to build yetus/hadoop:date2018-11-19. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948707/HDDS-852-docker-hadoop-runner.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1756/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unnecessary stdout printing from apache/hadoop-runner
> 
>
> Key: HDDS-852
> URL: https://issues.apache.org/jira/browse/HDDS-852
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-852-docker-hadoop-runner.001.patch
>
>
> The latest apache/hadoop-runner image always prints an informational line 
> before executing anything:
> {code}
> docker run apache/hadoop-runner ls
> Setting up environment!
> ...
> {code}
> This "Setting up environment!" line is always there.
> I propose to delete this one line from the starter script.
> REASONING:
> As I proposed in HDDS-851, we can provide a very easy way to get started 
> with Ozone by executing commands inside the official apache/ozone docker 
> image. For example:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> {code}
> This pattern (executing a command inside the container and redirecting the 
> output) can't be used cleanly while this generic "Setting up environment!" 
> line is printed.
> I think it's safe to delete, as all the optional steps (kerberos init, wait 
> for additional services, generate configs, etc.) can have separate 
> (non-default) logging lines. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-852) Remove unnecessary stdout printing from apache/hadoop-runner

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-852:
--
Attachment: HDDS-852-docker-hadoop-runner.001.patch

> Remove unnecessary stdout printing from apache/hadoop-runner
> 
>
> Key: HDDS-852
> URL: https://issues.apache.org/jira/browse/HDDS-852
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-852-docker-hadoop-runner.001.patch
>
>
> The latest apache/hadoop-runner image always prints an informational line 
> before executing anything:
> {code}
> docker run apache/hadoop-runner ls
> Setting up environment!
> ...
> {code}
> This "Setting up environment!" line is always there.
> I propose to delete this one line from the starter script.
> REASONING:
> As I proposed in HDDS-851, we can provide a very easy way to get started 
> with Ozone by executing commands inside the official apache/ozone docker 
> image. For example:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> {code}
> This pattern (executing a command inside the container and redirecting the 
> output) can't be used cleanly while this generic "Setting up environment!" 
> line is printed.
> I think it's safe to delete, as all the optional steps (kerberos init, wait 
> for additional services, generate configs, etc.) can have separate 
> (non-default) logging lines. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14089:
-
Attachment: HDFS-14089.patch

> RBF: Failed to specify server's Kerberos pricipal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the configuration for 
> NamenodeHeartbeatService as well.
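As a rough illustration of the direction described above (a hedged sketch, not 
the attached patch): before the heartbeat service builds its NameNode proxy, 
the configuration can be pointed at the NameNode principal key, the same 
preparation DFSZKFailoverController and DFSHAAdmin perform. Key names are 
written as literals to keep the sketch self-contained.
{code}
// Hypothetical sketch only (not the attached patch): point the generic
// service-user-name key at the NameNode principal key before creating the
// NamenodeProtocol proxy. Key names are literals here; real code would use
// the corresponding *ConfigKeys constants.
import org.apache.hadoop.conf.Configuration;

public class HeartbeatConfSketch {
  static Configuration withNamenodePrincipalKey(Configuration base) {
    Configuration conf = new Configuration(base);
    conf.set("hadoop.security.service.user.name.key",
        "dfs.namenode.kerberos.principal");
    return conf;
  }
}
{code}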



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14089:
-
Status: Patch Available  (was: Open)

> RBF: Failed to specify server's Kerberos pricipal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the configuration for 
> NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691645#comment-16691645
 ] 

Ranith Sardar commented on HDFS-14089:
--

Attached the basic patch. Please review it once.

> RBF: Failed to specify server's Kerberos pricipal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the configuration for 
> NamenodeHeartbeatService as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14075:

Attachment: HDFS-14075-04.patch

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-795:
--
Attachment: HDDS-795.005.patch

> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch, HDDS-795.005.patch
>
>
> The org.apache.hadoop.utils.db interfaces (DBStore and Table) provide a 
> vendor-independent way to access any key-value store. 
> The default implementation uses RocksDb, but other implementations could also 
> be used (for example an InMemory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB-specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks an RockDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB-specific classes from the generic interfaces.
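One possible shape of the cleaned-up interfaces, sketched only to illustrate 
the point above (names and signatures are illustrative, not the patch):
{code}
// Illustrative sketch of the direction described above: the generic interfaces
// expose only vendor-neutral types, and anything RocksDB-specific stays inside
// the RocksDB-backed implementation.
import java.io.IOException;

interface Table extends AutoCloseable {
  void put(byte[] key, byte[] value) throws IOException;
  byte[] get(byte[] key) throws IOException;
  void delete(byte[] key) throws IOException;
}

interface DBStore extends AutoCloseable {
  // no ColumnFamilyHandle or other RocksDB types leak out of this interface
  Table getTable(String name) throws IOException;
}
{code}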



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13960:
---
Attachment: HDFS-13960.001.patch

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file will be broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command which would 
> display the block size along with the output; that could also be helpful in 
> other cases where this piece of information is needed.
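For reference, both pieces of information are already reachable from the Java 
API; the proposal above is about surfacing them together from the shell 
command. A small sketch (illustrative only):
{code}
// Illustrative only: both values the proposed option would print are already
// reachable from the FileSystem API; the request above is about surfacing them
// together from the shell command.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumWithBlockSize {
  public static void main(String[] args) throws Exception {
    Path file = new Path(args[0]);
    try (FileSystem fs = FileSystem.get(new Configuration())) {
      FileChecksum checksum = fs.getFileChecksum(file); // may be null on some filesystems
      FileStatus status = fs.getFileStatus(file);
      String sum = (checksum == null) ? "NONE" : checksum.toString();
      System.out.println(file + "\t" + sum + "\tblockSize=" + status.getBlockSize());
    }
  }
}
{code}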



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-19 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDFS-13960:
---
Status: Patch Available  (was: Open)

> hdfs dfs -checksum command should optionally show block size in output
> --
>
> Key: HDFS-13960
> URL: https://issues.apache.org/jira/browse/HDFS-13960
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Lokesh Jain
>Priority: Minor
> Attachments: HDFS-13960.001.patch
>
>
> The hdfs checksum command computes the checksum in a distributed manner, 
> which takes the block size into account. In other words, the block size 
> determines how the file will be broken up.
> Therefore the checksum command can produce different outputs for the exact 
> same file when only the block size differs: 
> checksum(fileABlock1) + checksum(fileABlock2) != checksum(fileABlock1 + 
> fileABlock2)
> I suggest adding an option to the hdfs dfs -checksum command which would 
> display the block size along with the output; that could also be helpful in 
> other cases where this piece of information is needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-816) Create OM metrics for bucket, volume, keys

2018-11-19 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691698#comment-16691698
 ] 

Elek, Marton commented on HDDS-816:
---

I am not sure if we can shut down the whole metrics system from one simple 
metric...

{code}
@@ -478,5 +540,6 @@ public long getNumListS3BucketsFails() {
   public void unRegister() {
 MetricsSystem ms = DefaultMetricsSystem.instance();
 ms.unregisterSource(SOURCE_NAME);
+ms.shutdown();
   }
 }
{code}
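A sketch of the alternative implied above, where unRegister() removes only this 
source and leaves the process-wide metrics system running (the SOURCE_NAME 
value is illustrative):
{code}
// Sketch of the alternative implied above (SOURCE_NAME value is illustrative):
// unRegister() removes only this source and leaves the process-wide metrics
// system running for every other registered component.
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

class OmMetricsSketch {
  static final String SOURCE_NAME = "OMMetrics"; // illustrative value

  public void unRegister() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    ms.unregisterSource(SOURCE_NAME);
    // No ms.shutdown() here: the MetricsSystem is a shared singleton, and
    // shutting it down would stop metrics for all other sources in the process.
  }
}
{code}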

> Create OM metrics for bucket, volume, keys
> --
>
> Key: HDDS-816
> URL: https://issues.apache.org/jira/browse/HDDS-816
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-816.00.patch, HDDS-816.01.patch, HDDS-816.03.patch, 
> HDDS-816.04.patch, HDDS-816.05.patch, HDDS-816.06.patch, Metrics for number 
> of volumes, buckets, keys.pdf, Proposed Approach.pdf
>
>
> This Jira is used to create the following metrics in Ozone manager.
>  # number of volumes 
>  # number of buckets
>  # number of keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10943) rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed

2018-11-19 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691701#comment-16691701
 ] 

He Xiaoqiao commented on HDFS-10943:


[~daryn],[~kihwal],[~zhz],[~yzhangal]
Unfortunately, I met this issue again recently. Another interesting note is 
that the I/O util of the NN stayed at 100 for 2~5 min before the NN crashed 
(the same observation as when I first met this issue); I am not sure whether 
it is related to this issue.
After digging, I did not find that {{FileJournalManager}} would block 
{{JournalSetOutputStream}} or lead to an NN crash, and the file journal is not 
required by default when HA uses QJM; in other words, even if the file journal 
does not write/sync successfully, the NN process will not terminate.
This is just additional information for reference and does not resolve the 
issue.

> rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed
> --
>
> Key: HDFS-10943
> URL: https://issues.apache.org/jira/browse/HDFS-10943
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Priority: Major
>
> Per the following trace stack:
> {code}
> FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: finalize log 
> segment 10562075963, 10562174157 failed for required journal 
> (JournalAndStream(mgr=QJM to [0.0.0.1:8485, 0.0.0.2:8485, 0.0.0.3:8485, 
> 0.0.0.4:8485, 0.0.0.5:8485], stream=QuorumOutputStream starting at txid 
> 10562075963))
> java.io.IOException: FSEditStream has 49708 bytes still to be flushed and 
> cannot be closed.
> at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.close(EditsDoubleBuffer.java:66)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.close(QuorumOutputStream.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.closeStream(JournalSet.java:115)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$4.apply(JournalSet.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:231)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6437)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1002)
> at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:142)
> at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12025)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> 2016-09-23 21:40:59,618 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting 
> QuorumOutputStream starting at txid 10562075963
> {code}
> The exception is from  EditsDoubleBuffer
> {code}
>  public void close() throws IOException {
> Preconditions.checkNotNull(bufCurrent);
> Preconditions.checkNotNull(bufReady);
> int bufSize = bufCurrent.size();
> if (bufSize != 0) {
>   throw new IOException("FSEditStream has " + bufSize
>   + " bytes still to be flushed and cannot be closed.");
> }
> IOUtils.cleanup(null, bufCurrent, bufReady);
> bufCurrent = bufReady = null;
>   }
> {code}
> We can see that FSNamesystem.rollEditLog expects  
> EditsDoubleBuffer.bufCurrent to be empty.
> Edits are recorded via FSEditLog$logSync, which does:
> {code}
>* The data is double-buffered within each edit log implementation so that
>* in-memory writing can occur in parallel with the on-disk writing.
>*
>* Each sync occurs in three steps:
>*   1. synchronized, it swaps the double buffer and sets the isSyncRunning
>*  flag.
>*   2. unsynchronized, it flushes the data t

[jira] [Commented] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691693#comment-16691693
 ] 

Elek, Marton commented on HDDS-795:
---

Thanks for the review, [~ajayydv].

bq. DBStore

I fixed all the javadocs. I also removed the throws tag from the javadoc 
instead of adding it to the method signature, as it's not necessary for 
creating a new batch.

bq. BatchOperation: Should this have api for commit op as well?

It's part of the DBStore interface (void commitBatchOperation(BatchOperation 
operation) throws IOException;). BatchOperation is just a generic holder which 
can include all the required operations to commit.

bq. Table / Rename new put operation to "addToBatch"

I can't use this name exactly, as I have both put and delete operations with 
and without batch support, so I need two new names.

But I understand your comment that the names of the methods with and without 
batch support should be more distinct. (I agree, it would make the code more 
understandable.)

I renamed them to deleteWithBatch and putWithBatch. Let me know if you have 
better name suggestions (but I need two new names which are operation-specific).

bq. Unused imports ...

It was not detected by jenkins nor by my IDE, but during a rebase to trunk I 
saw import-related conflicts. They should have disappeared with the rebase. 

bq. TestRDBTableStore L169 / L190

The additional assertions are added, thanks.
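Putting the quoted signatures and the new names together, the batch-aware 
surface might look roughly like this (key and value types are simplified to 
byte[] and the interface names are illustrative, not the patch):
{code}
// Sketch that puts the quoted signatures and the new names together; types
// and interface names are illustrative only.
import java.io.IOException;

interface BatchOperation extends AutoCloseable { }

interface BatchedTable {
  void put(byte[] key, byte[] value) throws IOException;
  void delete(byte[] key) throws IOException;
  // Batch variants: writes are collected into the BatchOperation holder and
  // become visible only when the store commits the whole batch.
  void putWithBatch(BatchOperation batch, byte[] key, byte[] value) throws IOException;
  void deleteWithBatch(BatchOperation batch, byte[] key) throws IOException;
}

interface BatchedDBStore {
  BatchOperation initBatchOperation();
  void commitBatchOperation(BatchOperation operation) throws IOException;
}
{code}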

> RocksDb specific classes leak from DBStore/Table interfaces
> ---
>
> Key: HDDS-795
> URL: https://issues.apache.org/jira/browse/HDDS-795
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-795.001.patch, HDDS-795.002.patch, 
> HDDS-795.003.patch, HDDS-795.004.patch
>
>
> The org.apache.hadoop.utils.db interfaces (DBStore and Table) provide a 
> vendor-independent way to access any key-value store. 
> The default implementation uses RocksDb, but other implementations could also 
> be used (for example an InMemory implementation for testing only).
> The current Table interface contains methods which depend on RocksDB-specific 
> classes. For example:
> {code}
> public interface DBStore extends AutoCloseable {
> //...
> /**
>* Return the Column Family handle. TODO: This leaks an RockDB abstraction
>* into Ozone code, cleanup later.
>*
>* @return ColumnFamilyHandle
>*/
>   ColumnFamilyHandle getHandle();
> //...
> {code}
> We need to remove the RocksDB-specific classes from the generic interfaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-853:


 Summary: Option to force close a container in Datanode
 Key: HDDS-853
 URL: https://issues.apache.org/jira/browse/HDDS-853
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Nanda kumar
Assignee: Nanda kumar


We need an option to force close a container in Datanode. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691707#comment-16691707
 ] 

Hadoop QA commented on HDFS-14089:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
59s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14089 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948711/HDFS-14089.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0381b69f6cc5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25559/testReport/ |
| Max. process+thread count | 1451 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25559/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Failed to specify server

[jira] [Updated] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13369:
-
Attachment: HDFS-13369.007.patch

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch, 
> HDFS-13369.006.patch, HDFS-13369.007.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxyProvider
> 2. Write some files to the file system
> 3. Take an FSCK report of the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14083) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691795#comment-16691795
 ] 

Steve Loughran commented on HDFS-14083:
---

Thanks, I'll keep an eye on that, though it's not an area of knowledge of mine. 
Remember to mark the JIRA with the Hadoop version and HDFS component, here 
"native" and "test". Thanks.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HDFS-14083
> URL: https://issues.apache.org/jira/browse/HDFS-14083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch, 
> HDFS-14083.003.patch, HDFS-14083.004.patch, HDFS-14083.005.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue is caused because byte-buffer 
> read is not supported in the S3 environment (see HADOOP-14603, "S3A input 
> stream to support ByteBufferReadable").
> The following message is printed repeatedly in the error log / to STDERR:
> {code}
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> {code}
> h3. Root cause
> After investigating the issue, it appears that the above exception is printed 
> because, when a file is opened, {{hdfsOpenFileImpl()}} calls {{readDirect()}}, 
> which hits this exception.
> h3. Fix:
> Since the byte-buffer read is not initiated by the hdfs client but happens 
> implicitly, we should not generate the error log when opening a file.
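A sketch of the behaviour the fix aims for, based on the description above: 
attempt the byte-buffer read and fall back quietly to a plain byte[] read when 
the wrapped stream does not support it (an illustration, not the libhdfs patch 
itself):
{code}
// Illustration of the behaviour the fix aims for (not the libhdfs patch
// itself): try the byte-buffer read and fall back quietly to a plain byte[]
// read when the wrapped stream does not implement ByteBufferReadable.
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

public class ReadFallbackSketch {
  static int readSomeBytes(FSDataInputStream in, ByteBuffer buf) throws IOException {
    try {
      return in.read(buf);                 // fast path: ByteBufferReadable streams
    } catch (UnsupportedOperationException e) {
      // Expected for streams (such as S3A here) that do not support
      // byte-buffer reads; not an error worth logging on every open.
      byte[] tmp = new byte[buf.remaining()];
      int n = in.read(tmp, 0, tmp.length); // fallback: ordinary read
      if (n > 0) {
        buf.put(tmp, 0, n);
      }
      return n;
    }
  }
}
{code}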



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12749) DN may not send block report to NN after NN restart

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691789#comment-16691789
 ] 

Hadoop QA commented on HDFS-12749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m  
3s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 47 unchanged - 0 fixed = 49 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}214m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestBackupNode |
|   | hadoop.hdfs.client.impl.TestClientBlockVerification |
|   | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdf

[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691807#comment-16691807
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948713/HDFS-14075-04.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b0922846ca3 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25560/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25560/testReport/ |
| Max. process+thread count | 4021 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console out

[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Description: 
We need an option to force close a container in Datanode. When the container is 
marked as QuasiClosed, SCM will decide the latest container replica based on 
the blockCommitSequenceId and will try to close the QuasiClosed container.
For this, we need force close support in Datanode which will close the 
QuasiClosed container.

  was:We need an option to force close a container in Datanode. 


> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> We need an option to force close a container in Datanode. When the container 
> is marked as QuasiClosed, SCM will decide the latest container replica based 
> on the blockCommitSequenceId and will try to close the QuasiClosed container.
> For this, we need force close support in Datanode which will close the 
> QuasiClosed container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14086) Failure in test_libhdfs_ops

2018-11-19 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14086:

Labels: test  (was: )

> Failure in test_libhdfs_ops
> ---
>
> Key: HDFS-14086
> URL: https://issues.apache.org/jira/browse/HDFS-14086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Reporter: Pranay Singh
>Priority: Minor
>  Labels: test
>
> The test_libhdfs_ops hdfs_static test was not being executed. The issue that 
> I fixed in HDFS-14083 was seen because this test program was not being run, 
> so I had to change the file below to execute this test binary as part of a 
> normal run. There are some failures when this test program is run; this jira 
> tracks those failures.
> Details of change to enable this test
> 
> hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
> add_libhdfs_test(test_libhdfs_ops hdfs_static) --->
> Failures that are seen when this test is run.
> -
> Name: file:/tmp/hsperfdata_root, Type: D, Replication: 1, BlockSize: 
> 33554432, Size: 0, LastMod: Tue Nov 13 18:03:20 2018
> Owner: root, Group: root, Permissions: 493 (rwxr-xr-x)
> hdfsGetHosts - SUCCESS! ... 
> hdfsChown(path=/tmp/testfile.txt, owner=(null), group=users): 
> FileSystem#setOwner error:
> Shell.ExitCodeException: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
> ExitCodeException exitCode=1: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
>   at org.apache.hadoop.util.Shell.run(Shell.java:901)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
>   at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1350)
>   at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:1152)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:851)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:520)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:523)
> hdfsChown: Failed!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14086) Failure in test_libhdfs_ops

2018-11-19 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14086:

   Priority: Minor  (was: Major)
Component/s: native

> Failure in test_libhdfs_ops
> ---
>
> Key: HDFS-14086
> URL: https://issues.apache.org/jira/browse/HDFS-14086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Reporter: Pranay Singh
>Priority: Minor
>  Labels: test
>
> The test_libhdfs_ops hdfs_static test was not being executed. The issue that 
> I fixed in HDFS-14083 was seen because this test program was not being run, 
> so I had to change the file below to execute this test binary as part of a 
> normal run. There are some failures when this test program is run; this jira 
> tracks those failures.
> Details of change to enable this test
> 
> hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
> add_libhdfs_test(test_libhdfs_ops hdfs_static) --->
> Failures that are seen when this test is run.
> -
> Name: file:/tmp/hsperfdata_root, Type: D, Replication: 1, BlockSize: 
> 33554432, Size: 0, LastMod: Tue Nov 13 18:03:20 2018
> Owner: root, Group: root, Permissions: 493 (rwxr-xr-x)
> hdfsGetHosts - SUCCESS! ... 
> hdfsChown(path=/tmp/testfile.txt, owner=(null), group=users): 
> FileSystem#setOwner error:
> Shell.ExitCodeException: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
> ExitCodeException exitCode=1: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
>   at org.apache.hadoop.util.Shell.run(Shell.java:901)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
>   at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1350)
>   at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:1152)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:851)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:520)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:523)
> hdfsChown: Failed!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14086) Failure in test_libhdfs_ops

2018-11-19 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14086:

Affects Version/s: 3.0.3

> Failure in test_libhdfs_ops
> ---
>
> Key: HDFS-14086
> URL: https://issues.apache.org/jira/browse/HDFS-14086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Priority: Minor
>  Labels: test
>
> The test_libhdfs_ops hdfs_static test was not being executed. The issue that 
> I fixed in HDFS-14083 was seen because this test program was not being run, 
> so I had to change the file below to execute this test binary as part of a 
> normal run. There are some failures when this test program is run; this jira 
> tracks those failures.
> Details of change to enable this test
> 
> hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
> add_libhdfs_test(test_libhdfs_ops hdfs_static) --->
> Failures that are seen when this test is run.
> -
> Name: file:/tmp/hsperfdata_root, Type: D, Replication: 1, BlockSize: 
> 33554432, Size: 0, LastMod: Tue Nov 13 18:03:20 2018
> Owner: root, Group: root, Permissions: 493 (rwxr-xr-x)
> hdfsGetHosts - SUCCESS! ... 
> hdfsChown(path=/tmp/testfile.txt, owner=(null), group=users): 
> FileSystem#setOwner error:
> Shell.ExitCodeException: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
> ExitCodeException exitCode=1: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
>   at org.apache.hadoop.util.Shell.run(Shell.java:901)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
>   at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1350)
>   at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:1152)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:851)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:520)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:523)
> hdfsChown: Failed!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14086) Failure in test_libhdfs_ops

2018-11-19 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-14086:

Issue Type: Bug  (was: Improvement)

> Failure in test_libhdfs_ops
> ---
>
> Key: HDFS-14086
> URL: https://issues.apache.org/jira/browse/HDFS-14086
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: native
>Affects Versions: 3.0.3
>Reporter: Pranay Singh
>Priority: Minor
>  Labels: test
>
> The test_libhdfs_ops hdfs_static test was not being executed; the issue I fixed 
> in HDFS-14083 arose because this test program never ran, so I changed the file 
> below to run the test binary as part of a normal test run. Some failures show up 
> once the test is executed, and this Jira tracks those failures.
> Details of the change to enable this test
> 
> hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
> add_libhdfs_test(test_libhdfs_ops hdfs_static) --->
> Failures that are seen when this test is run.
> -
> Name: file:/tmp/hsperfdata_root, Type: D, Replication: 1, BlockSize: 
> 33554432, Size: 0, LastMod: Tue Nov 13 18:03:20 2018
> Owner: root, Group: root, Permissions: 493 (rwxr-xr-x)
> hdfsGetHosts - SUCCESS! ... 
> hdfsChown(path=/tmp/testfile.txt, owner=(null), group=users): 
> FileSystem#setOwner error:
> Shell.ExitCodeException: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
> ExitCodeException exitCode=1: chown: changing group of '/tmp/testfile.txt': 
> Operation not permitted
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
>   at org.apache.hadoop.util.Shell.run(Shell.java:901)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
>   at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
>   at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1350)
>   at org.apache.hadoop.fs.FileUtil.setOwner(FileUtil.java:1152)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.setOwner(RawLocalFileSystem.java:851)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$2.apply(ChecksumFileSystem.java:520)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setOwner(ChecksumFileSystem.java:523)
> hdfsChown: Failed!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-849:
---
Component/s: (was: Ozone Datanode)
 test

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-849:
---
Attachment: HDDS-849.001.patch
Status: Patch Available  (was: Open)

[~msingh] Thank you for filing this issue. Attached patch 001 with proposed fix.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-795) RocksDb specific classes leak from DBStore/Table interfaces

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691869#comment-16691869
 ] 

Hadoop QA commented on HDDS-795:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 14s{color} | {color:orange} root: The patch generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-795 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948719/HDDS-795.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d8cdd07f1cdb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1757/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds

[jira] [Comment Edited] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691858#comment-16691858
 ] 

Dinesh Chitlangia edited comment on HDDS-849 at 11/19/18 3:30 PM:
--

[~msingh] Thank you for filing this issue. Attached patch 001 with proposed fix.

The failure happened because, with Mockito, we define when to call the real method 
and when to mock an object. For the audit-related methods in HddsDispatcher 
(buildAuditMessageForSuccess and buildAuditMessageForFailure) no rules were 
defined, so the message came back as null and caused the NPE.
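
A minimal sketch of that failure mode, using a made-up Dispatcher class instead of
the real HddsDispatcher (the class, method names, and signatures below are
illustrative assumptions): an un-stubbed Mockito mock returns null from its builder
methods, so real code that dereferences the returned message throws the NPE;
defining a rule such as thenCallRealMethod() avoids it.

{code}
// Hedged sketch: Dispatcher is a stand-in, not the real HddsDispatcher; the
// method names merely mirror the ones mentioned in the comment above.
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.doCallRealMethod;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class Dispatcher {
  String buildAuditMessageForFailure(String op) {
    return "FAILURE op=" + op;
  }
  void audit(String op) {
    // Dereferences the builder result: null (the default return of an
    // un-stubbed mock) produces exactly the kind of NPE described above.
    System.out.println(buildAuditMessageForFailure(op).trim());
  }
}

public class MockedAuditExample {
  public static void main(String[] args) {
    Dispatcher dispatcher = mock(Dispatcher.class);

    // Without these rules the mocked builder returns null and audit() throws
    // a NullPointerException; defining them lets the real code run safely.
    when(dispatcher.buildAuditMessageForFailure(anyString())).thenCallRealMethod();
    doCallRealMethod().when(dispatcher).audit(anyString());

    dispatcher.audit("CloseContainer");   // prints: FAILURE op=CloseContainer
  }
}
{code}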


was (Author: dineshchitlangia):
[~msingh] Thank you for filing this issue. Attached patch 001 with proposed fix.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Status: Patch Available  (was: Open)

> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-853.000.patch
>
>
> We need an option to force close a container in the Datanode. When a container 
> is marked as QuasiClosed, SCM decides the latest container replica based on the 
> blockCommitSequenceId and then tries to close the QuasiClosed container.
> For this, we need force-close support in the Datanode, which will close the 
> QuasiClosed container.
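
As a rough sketch of the selection rule described above (the replica type and its
fields below are made up for illustration; only the "highest blockCommitSequenceId
wins" rule comes from the issue itself):

{code}
// Hedged sketch only: a stand-in for whatever replica record SCM actually keeps.
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class LatestReplicaExample {
  static final class Replica {
    final String datanode;
    final long blockCommitSequenceId;
    Replica(String datanode, long blockCommitSequenceId) {
      this.datanode = datanode;
      this.blockCommitSequenceId = blockCommitSequenceId;
    }
  }

  // "Latest" replica = the one with the highest blockCommitSequenceId, per the
  // description above; that is the copy SCM would try to close.
  static Optional<Replica> latest(List<Replica> replicas) {
    return replicas.stream()
        .max(Comparator.comparingLong(r -> r.blockCommitSequenceId));
  }

  public static void main(String[] args) {
    List<Replica> replicas = Arrays.asList(
        new Replica("dn1", 118L), new Replica("dn2", 120L), new Replica("dn3", 119L));
    latest(replicas).ifPresent(r ->
        System.out.println("close QuasiClosed replica on " + r.datanode));
  }
}
{code}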



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Attachment: HDDS-853.000.patch

> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-853.000.patch
>
>
> We need an option to force close a container in the Datanode. When a container 
> is marked as QuasiClosed, SCM decides the latest container replica based on the 
> blockCommitSequenceId and then tries to close the QuasiClosed container.
> For this, we need force-close support in the Datanode, which will close the 
> QuasiClosed container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Status: Open  (was: Patch Available)

> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-853.000.patch
>
>
> We need an option to force close a container in the Datanode. When a container 
> is marked as QuasiClosed, SCM decides the latest container replica based on the 
> blockCommitSequenceId and then tries to close the QuasiClosed container.
> For this, we need force-close support in the Datanode, which will close the 
> QuasiClosed container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691921#comment-16691921
 ] 

Hadoop QA commented on HDFS-14075:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}223m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 38 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}283m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.TestDisableConnCache |
|   | hadoop.hdfs.TestSeekBug |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestHDFSServerPorts |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
|
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithK

[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691912#comment-16691912
 ] 

Nanda kumar commented on HDDS-849:
--

+1, pending Jenkins.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691929#comment-16691929
 ] 

Hadoop QA commented on HDDS-849:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-849 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948734/HDDS-849.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f05a5ce5411f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1758/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix NPE in TestKeyValueHandler because of audit log write

[jira] [Updated] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-853:
-
Status: Patch Available  (was: Open)

> Option to force close a container in Datanode
> -
>
> Key: HDDS-853
> URL: https://issues.apache.org/jira/browse/HDDS-853
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-853.000.patch
>
>
> We need an option to force close a container in the Datanode. When a container 
> is marked as QuasiClosed, SCM decides the latest container replica based on the 
> blockCommitSequenceId and then tries to close the QuasiClosed container.
> For this, we need force-close support in the Datanode, which will close the 
> QuasiClosed container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691950#comment-16691950
 ] 

Mukul Kumar Singh commented on HDDS-718:


Thanks for updating the patch [~ljain].
+1, v3 patch looks good to me. 

There are the following nitpicks in the patch; I will fix them while committing it.
1) ScmClient.java:176,178 PipelineID -> Pipeline
2) StorageContainerLocationProtocol:130,132 PipelineID -> Pipeline


> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3, this Jira is for porting 
> them to trunk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-854:


 Summary: 
TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
 Key: HDDS-854
 URL: https://issues.apache.org/jira/browse/HDDS-854
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Nanda kumar
Assignee: Nanda kumar


TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
times out while waiting for the mini cluster datanode to restart.

{code}
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
at 
org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
{code}
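
For context, the timeout above comes from a polling wait. The snippet below is a
minimal sketch of that pattern, not the actual GenericTestUtils.waitFor code; it
only shows where a "datanode not ready in time" failure turns into the timeout
reported by the test.

{code}
// Hedged sketch of the polling pattern behind the timeout in the stack trace.
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class PollingWait {
  static void waitFor(BooleanSupplier condition, long pollMillis, long timeoutMillis)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        // This is the point at which the flaky test gives up: the restarted
        // datanode has not reported ready within the allowed window.
        throw new TimeoutException("condition not met within " + timeoutMillis + " ms");
      }
      Thread.sleep(pollMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Stand-in condition: pretend the datanode becomes "ready" after ~2 seconds.
    waitFor(() -> System.currentTimeMillis() - start > 2000, 100, 10_000);
    System.out.println("cluster ready");
  }
}
{code}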



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13960) hdfs dfs -checksum command should optionally show block size in output

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16691974#comment-16691974
 ] 

Hadoop QA commented on HDFS-13960:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  4s{color} | {color:orange} root: The patch generated 1 new + 203 unchanged 
- 0 fixed = 204 total (was 203) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCLI |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948720/HDFS-13960.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 879d71014b0d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1

[jira] [Assigned] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-854:


Assignee: Shashikant Banerjee  (was: Nanda kumar)

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692016#comment-16692016
 ] 

Shashikant Banerjee commented on HDDS-854:
--

[~nandakumar131], I will take care of this.

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-849) fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692017#comment-16692017
 ] 

Dinesh Chitlangia commented on HDDS-849:


The failure is unrelated to the patch.

> fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-849:
-
Summary: Fix NPE in TestKeyValueHandler because of audit log write  (was: 
fix NPE in TestKeyValueHandler because of audit log write)

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Updated] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-849:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Commented] (HDDS-854) TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692026#comment-16692026
 ] 

Nanda kumar commented on HDDS-854:
--

Thanks [~shashikant]!

> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky
> ---
>
> Key: HDDS-854
> URL: https://issues.apache.org/jira/browse/HDDS-854
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures is flaky. It 
> times out while waiting for the mini cluster datanode to restart
> {code}
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.waitForClusterToBeReady(MiniOzoneClusterImpl.java:122)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:276)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:283)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testMultiBlockWritesWithDnFailures(TestFailureHandlingByClient.java:200)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}
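The timeout comes out of {{GenericTestUtils#waitFor}}, which simply polls a boolean condition until a deadline. A small sketch of that call pattern (the 100 ms interval, 60 s deadline, and the readiness probe are placeholders, not the values MiniOzoneClusterImpl actually uses):

{code:java}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForRestartSketch {
  // Hypothetical readiness probe; MiniOzoneClusterImpl has its own internal check.
  static boolean isDatanodeUp() {
    return true;
  }

  public static void main(String[] args) throws TimeoutException, InterruptedException {
    // Poll the condition every 100 ms and throw TimeoutException after 60 s.
    // Flakiness in a wait like this is usually addressed by tightening the
    // predicate or raising the deadline rather than rerunning the test.
    GenericTestUtils.waitFor(WaitForRestartSketch::isDatanodeUp, 100, 60_000);
  }
}
{code}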






[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692036#comment-16692036
 ] 

Dinesh Chitlangia commented on HDDS-849:


Thanks [~nandakumar131] for the review and commit.

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692025#comment-16692025
 ] 

Nanda kumar commented on HDDS-849:
--

Thanks [~dineshchitlangia] for the contribution and to [~msingh] for reporting 
this. I committed it to trunk.

> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Commented] (HDDS-853) Option to force close a container in Datanode

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692029#comment-16692029
 ] 

Hadoop QA commented on HDDS-853:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948738/HDDS-853.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 33221380922e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9366608 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/testReport/ |
| Max. process+thread count | 417 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1759/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   ht

[jira] [Commented] (HDDS-849) Fix NPE in TestKeyValueHandler because of audit log write

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692043#comment-16692043
 ] 

Hudson commented on HDDS-849:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15463 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15463/])
HDDS-849. Fix NPE in TestKeyValueHandler because of audit log write. (nanda: 
rev e7438a1b38ff1d2bb25aa9d849a227c6f354143b)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueHandler.java


> Fix NPE in TestKeyValueHandler because of audit log write
> -
>
> Key: HDDS-849
> URL: https://issues.apache.org/jira/browse/HDDS-849
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-849.001.patch
>
>
> TestKeyValueHandler#testCloseInvalidContainer and 
> TestKeyValueHandler#testHandlerCommandHandling are failing because of the 
> following exception.
> {code}
> [ERROR] 
> testCloseInvalidContainer(org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler)
>   Time elapsed: 0.006 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.audit.AuditLogger.logWriteFailure(AuditLogger.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.audit(HddsDispatcher.java:433)
>   at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:242)
>   at 
> org.apache.hadoop.ozone.container.keyvalue.TestKeyValueHandler.testCloseInvalidContainer(TestKeyValueHandler.java:282)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}






[jira] [Updated] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-718:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for working on this [~ljain]. I have committed this to trunk.

> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.






[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692055#comment-16692055
 ] 

Hadoop QA commented on HDFS-13369:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 7 new + 168 unchanged 
- 4 fixed = 175 total (was 172) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
46s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.hdfs.tools.TestDFSAdminWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13369 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948723/HDFS-13369.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux acd22b3d596e 4.4.0-138

[jira] [Comment Edited] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692063#comment-16692063
 ] 

Mukul Kumar Singh edited comment on HDDS-835 at 11/19/18 5:56 PM:
--

Thanks for working on this, [~shashikant]; the patch looks really good to me.
There are some checkstyle issues with the patch, and a few minor comments below.

1) ozone-default.xml:627, this value should be 256MB, I think.
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well.
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here.
4) XceiverServerRatis, can we also use the size config in newRaftProperties? 
This will help in cleaning up the config handling.


was (Author: msingh):
Thanks for working on this [~shashikant].
There are some checkstyle issues with the patch.

1) ozone-default.xml:627, this value should be 256MB i think
2) ScmConfigKeys:140, lets change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here
4) XceiverServerRatis, can we also use the size config in newRaftProperties ?, 
this will help in cleaning up config handling.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.
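For context on what the review asks for: {{Configuration#getStorageSize}} accepts human-readable values such as "256MB" and converts them to the requested unit, so callers no longer juggle raw long byte counts. A short sketch, assuming the overload that takes a string default; the key name and values below are made up, the real Ozone keys live in their own ConfigKeys classes:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

public class StorageSizeConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key, used only for this demo.
    conf.set("demo.stream.buffer.flush.size", "64MB");

    // Parses "64MB" and returns it in bytes; "32MB" is the default if unset.
    double flushSizeBytes =
        conf.getStorageSize("demo.stream.buffer.flush.size", "32MB", StorageUnit.BYTES);
    System.out.println((long) flushSizeBytes);   // 67108864
  }
}
{code}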






[jira] [Commented] (HDDS-718) Introduce new SCM Commands to list and close Pipelines

2018-11-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692067#comment-16692067
 ] 

Hudson commented on HDDS-718:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15464/])
HDDS-718. Introduce new SCM Commands to list and close Pipelines. (msingh: rev 
b5d7b292c988de6a8555d472a4448275522b7622)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateMap.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineManager.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SCMCLI.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ClosePipelineSubcommand.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (add) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/package-info.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-hdds/common/src/main/proto/StorageContainerLocationProtocol.proto
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/protocolPB/StorageContainerLocationProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java


> Introduce new SCM Commands to list and close Pipelines
> --
>
> Key: HDDS-718
> URL: https://issues.apache.org/jira/browse/HDDS-718
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-718.001.patch, HDDS-718.002.patch, 
> HDDS-718.003.patch
>
>
> We need to have a tear-down pipeline command in SCM so that an administrator 
> can close/destroy a pipeline in the cluster.
> HDDS-695 brings in the commands in branch ozone-0.3; this Jira is for porting 
> them to trunk.






[jira] [Commented] (HDFS-14011) RBF: Add more information to HdfsFileStatus for a mount point

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692068#comment-16692068
 ] 

Íñigo Goiri commented on HDFS-14011:


[~surendrasingh], I think this is a reasonable interface for the mount points.
In HDFS-14085 we may want to have some richer semantics.
I'll put my thoughts in that JIRA.

> RBF: Add more information to HdfsFileStatus for a mount point
> -
>
> Key: HDFS-14011
> URL: https://issues.apache.org/jira/browse/HDFS-14011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14011.01.patch, HDFS-14011.02.patch, 
> HDFS-14011.03.patch
>
>
> RouterClientProtocol#getMountPointStatus does not use the information of the 
> mount point; therefore, 'hdfs dfs -ls' on a directory including a mount point 
> returns incorrect information.






[jira] [Commented] (HDDS-835) Use storageSize instead of Long for buffer size configs in Ozone Client

2018-11-19 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692063#comment-16692063
 ] 

Mukul Kumar Singh commented on HDDS-835:


Thanks for working on this, [~shashikant].
There are some checkstyle issues with the patch.

1) ozone-default.xml:627, this value should be 256MB, I think.
2) ScmConfigKeys:140, let's change OZONE_SCM_CHUNK_MAX_SIZE to 32MB as well.
3) TestFailureHandlingByClient:91, the SCM_BLOCK size needs to be set here.
4) XceiverServerRatis, can we also use the size config in newRaftProperties? 
This will help in cleaning up the config handling.

> Use storageSize instead of Long for buffer size configs in Ozone Client
> ---
>
> Key: HDDS-835
> URL: https://issues.apache.org/jira/browse/HDDS-835
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-835.000.patch
>
>
> As per [~msingh]'s review comments in HDDS-675, for the streamBufferFlushSize, 
> streamBufferMaxSize, and blockSize configs we should use getStorageSize instead 
> of a long value. This Jira aims to address that.






[jira] [Commented] (HDFS-14087) RBF: In Router UI NameNode heartbeat printing the negative values

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692071#comment-16692071
 ] 

Íñigo Goiri commented on HDFS-14087:


Can you post a screenshot or give more details about where this happens?

> RBF: In Router UI NameNode heartbeat printing the negative values 
> --
>
> Key: HDFS-14087
> URL: https://issues.apache.org/jira/browse/HDFS-14087
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
>







[jira] [Updated] (HDFS-14087) RBF: In Router UI NameNode heartbeat printing the negative values

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14087:
---
Summary: RBF: In Router UI NameNode heartbeat printing the negative values  
 (was: RBF : In Router UI NameNode heartbeat printing the negative values )

> RBF: In Router UI NameNode heartbeat printing the negative values 
> --
>
> Key: HDFS-14087
> URL: https://issues.apache.org/jira/browse/HDFS-14087
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
>  Labels: RBF
>







[jira] [Updated] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14088:
---
Summary: RequestHedgingProxyProvider can throw NullPointerException when 
failover due to no lock on currentUsedProxy  (was: RequestHedgingProxyProvider 
can throw NullPointerException when failvoer due to no lock on currentUsedProxy)

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Priority: Major
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread runs the try block and another thread triggers a failover by 
> calling the method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.
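One common way to close a check-then-use race like this, sketched below with simplified stand-in types (an illustration only, not necessarily how HDFS-14088 will be fixed), is to snapshot the field into a local variable once and use only the local:

{code:java}
import java.lang.reflect.Method;

// Simplified stand-ins for the provider internals; names are hypothetical.
class ProxyInfo<T> {
  final T proxy;
  final String proxyInfo;
  ProxyInfo(T proxy, String proxyInfo) { this.proxy = proxy; this.proxyInfo = proxyInfo; }
}

class HedgingInvokerSketch<T> {
  private volatile ProxyInfo<T> currentUsedProxy;

  Object invokeOnCurrent(Method method, Object... args) throws Exception {
    // Read the field exactly once; a concurrent performFailover() that nulls
    // it out can no longer cause an NPE between the null check and the use.
    ProxyInfo<T> current = currentUsedProxy;
    if (current == null) {
      return null;   // the real provider would fall through to the hedging path
    }
    Object retVal = method.invoke(current.proxy, args);
    System.out.println("Invocation successful on [" + current.proxyInfo + "]");
    return retVal;
  }

  public synchronized void performFailover(T currentProxy) {
    this.currentUsedProxy = null;
  }
}
{code}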






[jira] [Commented] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692074#comment-16692074
 ] 

Íñigo Goiri commented on HDFS-14089:


Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

> RBF: Failed to specify server's Kerberos pricipal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same configuration 
> for NamenodeHeartbeatService as well.
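For readers unfamiliar with that key: the pattern being referenced copies the NameNode's Kerberos principal into the property the RPC client consults before it opens the connection. A sketch of that pattern, assuming the same approach DFSHAAdmin takes; whether NamenodeHeartbeatService ends up doing exactly this is up to the patch:

{code:java}
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HeartbeatSecurityConfSketch {
  // Returns a copy of the conf with the NameNode principal published under the
  // key the secured RPC layer reads when authenticating the server.
  static HdfsConfiguration withNamenodePrincipal(HdfsConfiguration conf) {
    HdfsConfiguration copy = new HdfsConfiguration(conf);
    copy.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        conf.get(DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, ""));
    return copy;
  }
}
{code}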






[jira] [Comment Edited] (HDFS-14089) RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692074#comment-16692074
 ] 

Íñigo Goiri edited comment on HDFS-14089 at 11/19/18 6:07 PM:
--

Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

Is there an easy way to test this? Would we need a secure mini ZK cluster?


was (Author: elgoiri):
Thanks [~RANith] for the patch.
Let's do this as part of HDFS-13532.

> RBF: Failed to specify server's Kerberos pricipal name in 
> NamenodeHeartbeatService
> --
>
> Key: HDFS-14089
> URL: https://issues.apache.org/jira/browse/HDFS-14089
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Fix For: HDFS-13891
>
> Attachments: HDFS-14089.patch
>
>
> DFSZKFailoverController and DFSHAAdmin set the conf for 
> "HADOOP_SECURITY_SERVICE_USER_NAME_KEY". We need to add the same configuration 
> for NamenodeHeartbeatService as well.






[jira] [Commented] (HDFS-14079) RBF: RouterAdmin should have failover concept for router

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692070#comment-16692070
 ] 

Íñigo Goiri commented on HDFS-14079:


[~surendrasingh], for the solution that [~crh] is talking about, there is no 
code change needed.
It would be a matter of putting the admin port behind a load balancer and 
setting the config to point to that endpoint.
Anyway, we probably want to set a full HA endpoint in addition.

> RBF: RouterAdmin should have failover concept for router
> 
>
> Key: HDFS-14079
> URL: https://issues.apache.org/jira/browse/HDFS-14079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> Currently {{RouterAdmin}} connects to only one router for admin operations; 
> if the configured router is down, the router admin command fails. It 
> should allow configuring all the router admin addresses.
> {code}
> // Initialize RouterClient
> try {
>   String address = getConf().getTrimmed(
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
>   InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
>   client = new RouterClient(routerSocket, getConf());
> } catch (RPC.VersionMismatch v) {
>   System.err.println(
>   "Version mismatch between client and server... command aborted");
>   return exitCode;
> }
> {code}
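A rough sketch of the failover idea being discussed, with a made-up multi-address key and a plain reachability probe standing in for the RouterClient construction (this is not a proposed patch):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetUtils;

public class RouterAdminFailoverSketch {
  // "dfs.federation.router.admin-addresses" is a hypothetical key listing all routers.
  static InetSocketAddress pickReachableAdmin(Configuration conf) throws IOException {
    IOException last = null;
    for (String address :
        conf.getTrimmedStrings("dfs.federation.router.admin-addresses")) {
      InetSocketAddress socket = NetUtils.createSocketAddr(address);
      try (Socket probe = NetUtils.getDefaultSocketFactory(conf).createSocket()) {
        probe.connect(socket, 1000);   // the real tool would build a RouterClient here
        return socket;
      } catch (IOException e) {
        last = e;                      // router is down, try the next configured one
      }
    }
    throw last != null ? last : new IOException("no router admin address configured");
  }
}
{code}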






[jira] [Commented] (HDFS-14075) NPE while Edit Logging

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692076#comment-16692076
 ] 

Íñigo Goiri commented on HDFS-14075:


I have to say that Whitebox is pretty convenient, and in the end spy ends up 
doing the same thing with more steps.
Anyway, let's avoid it if that's the call.
For  [^HDFS-14075-04.patch], can we use {{LambdaTestUtils#intercept}}?
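For reference, {{LambdaTestUtils#intercept}} replaces the usual try/fail/catch boilerplate: it runs a callable, asserts that it throws the given exception type containing the given text, and hands back the caught exception. A tiny sketch; the helper method and its message are placeholders, not the FSEditLog code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.test.LambdaTestUtils;

public class InterceptSketch {
  // Placeholder for the production call under test.
  static String startLogSegment(long txid) throws IOException {
    throw new IOException("too few journals successfully started");
  }

  public static void main(String[] args) throws Exception {
    // Passes because the callable throws an IOException containing the text;
    // it would fail with an AssertionError if nothing (or something else) was thrown.
    IOException caught = LambdaTestUtils.intercept(IOException.class,
        "too few journals", () -> startLogSegment(1));
    System.out.println(caught.getMessage());
  }
}
{code}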

> NPE while Edit Logging
> --
>
> Key: HDFS-14075
> URL: https://issues.apache.org/jira/browse/HDFS-14075
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
> Attachments: HDFS-14075-01.patch, HDFS-14075-02.patch, 
> HDFS-14075-03.patch, HDFS-14075-04.patch, HDFS-14075-04.patch, 
> HDFS-14075-04.patch
>
>
> {noformat}
> 2018-11-10 18:59:38,427 FATAL 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Exception while edit 
> logging: null
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.doEditTransaction(FSEditLog.java:481)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$Edit.logEdit(FSEditLogAsync.java:288)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.run(FSEditLogAsync.java:232)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-11-10 18:59:38,532 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Exception while edit logging: null
> 2018-11-10 18:59:38,552 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> {noformat}
> Before NPE Received the following Exception
> {noformat}
> INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 65110, call 
> Call#23241 Retry#0 
> org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 
> 
> java.io.IOException: Unable to start log segment 7964819: too few journals 
> successfully started.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1385)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegmentAndWriteHeaderTxn(FSEditLog.java:1395)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1319)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1352)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4669)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1293)
>   at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:146)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12974)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> Caused by: java.io.IOException: starting log segment 7964819 failed for too 
> many journals
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:412)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:207)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1383)
>   ... 15 more
> {noformat}






[jira] [Commented] (HDFS-13369) FSCK Report broken with RequestHedgingProxyProvider

2018-11-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16692079#comment-16692079
 ] 

Íñigo Goiri commented on HDFS-13369:


Thanks [~RANith] for rebasing.
Can we make the TODOs cleaner?

> FSCK Report broken with RequestHedgingProxyProvider 
> 
>
> Key: HDFS-13369
> URL: https://issues.apache.org/jira/browse/HDFS-13369
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.3, 3.0.0, 3.1.0
>Reporter: Harshakiran Reddy
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13369.001.patch, HDFS-13369.002.patch, 
> HDFS-13369.003.patch, HDFS-13369.004.patch, HDFS-13369.005.patch, 
> HDFS-13369.006.patch, HDFS-13369.007.patch
>
>
> Scenario:
> 1. Configure the RequestHedgingProxy
> 2. Write some files to the file system
> 3. Take an FSCK report for the above files
>  
> {noformat}
> bin> hdfs fsck /file1 -locations -files -blocks
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler
>  cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:626)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:438)
> at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:628)
> at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:611)
> at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:263)
> at 
> org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:257)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:319)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:153)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:152)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:385){noformat}
>  
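The cast blows up because {{RequestHedgingInvocationHandler}} is not an {{RpcInvocationHandler}}, while {{RPC.getConnectionIdForProxy}} assumes every proxy it sees is backed by one. A guarded version of that lookup, shown only to illustrate the mismatch and not the actual fix in the attached patches:

{code:java}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import org.apache.hadoop.ipc.RpcInvocationHandler;

public class ConnectionIdLookupSketch {
  // Returns the RPC handler only when the proxy is really backed by one,
  // instead of casting unconditionally and failing with ClassCastException.
  static RpcInvocationHandler rpcHandlerOrNull(Object proxy) {
    InvocationHandler handler = Proxy.getInvocationHandler(proxy);
    return handler instanceof RpcInvocationHandler
        ? (RpcInvocationHandler) handler
        : null;
  }
}
{code}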






[jira] [Assigned] (HDFS-14088) RequestHedgingProxyProvider can throw NullPointerException when failover due to no lock on currentUsedProxy

2018-11-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14088:
--

Assignee: Yuxuan Wang

> RequestHedgingProxyProvider can throw NullPointerException when failover due 
> to no lock on currentUsedProxy
> ---
>
> Key: HDFS-14088
> URL: https://issues.apache.org/jira/browse/HDFS-14088
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Yuxuan Wang
>Assignee: Yuxuan Wang
>Priority: Major
>
> {code:java}
> if (currentUsedProxy != null) {
> try {
>   Object retVal = method.invoke(currentUsedProxy.proxy, args);
>   LOG.debug("Invocation successful on [{}]",
>   currentUsedProxy.proxyInfo);
> {code}
> If one thread runs the try block and another thread triggers a failover by 
> calling the method
> {code:java}
> @Override
>   public synchronized void performFailover(T currentProxy) {
> toIgnore = this.currentUsedProxy.proxyInfo;
> this.currentUsedProxy = null;
>   }
> {code}
> It will set currentUsedProxy to null, and the first thread can throw a 
> NullPointerException.





