[jira] [Commented] (HDFS-14665) HttpFS: LISTSTATUS response is missing HDFS-specific fields

2019-08-14 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906961#comment-16906961
 ] 

Siyao Meng commented on HDFS-14665:
---

Thanks for reviewing/committing [~jojochuang]!

Rebased on branch-3.1, PR: https://github.com/apache/hadoop/pull/1291

> HttpFS: LISTSTATUS response is missing HDFS-specific fields
> ---
>
> Key: HDFS-14665
> URL: https://issues.apache.org/jira/browse/HDFS-14665
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0, 3.2.1
>
>
> WebHDFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS&user.name=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "accessTime": 0,
> "blockSize": 0,
> "childrenNum": 0,
> "fileId": 16395,
> "group": "hadoop",
> "length": 0,
> "modificationTime": 1563893395614,
> "owner": "mapred",
> "pathSuffix": "logs",
> "permission": "1777",
> "replication": 0,
> "storagePolicy": 0,
> "type": "DIRECTORY"
>   }
> ]
>   }
> }
> {code}
> HttpFS:
> {code:java}
> GET /webhdfs/v1/tmp/?op=LISTSTATUS&user.name=hdfs HTTP/1.1
> {code}
> {code}
> {
>   "FileStatuses": {
> "FileStatus": [
> ...
>   {
> "pathSuffix": "logs",
> "type": "DIRECTORY",
> "length": 0,
> "owner": "mapred",
> "group": "hadoop",
> "permission": "1777",
> "accessTime": 0,
> "modificationTime": 1563893395614,
> "blockSize": 0,
> "replication": 0
>   }
> ]
>   }
> }
> {code}
> You can see the same LISTSTATUS request to HttpFS is missing 3 fields:
> {code}
> "childrenNum" (should only be none 0 for directories)
> "fileId"
> "storagePolicy"
> {code}
> The same applies to LISTSTATUS_BATCH, which might be using the same 
> underlying calls to compose the response.
> Root cause:
> [toJsonInner|https://github.com/apache/hadoop/blob/17e8cf501b384af93726e4f2e6f5e28c6e3a8f65/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L120]
>  didn't serialize the HDFS-specific keys from FileStatus.
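> A minimal sketch of the kind of fix implied here (illustrative only; the
> actual FSOperations code differs, and the map/variable names below are
> assumptions):
> {code:java}
> // Hypothetical fragment: when the FileStatus is actually an HdfsFileStatus,
> // copy the HDFS-specific attributes into the JSON map with the common fields.
> if (fileStatus instanceof HdfsFileStatus) {
>   HdfsFileStatus hdfsStatus = (HdfsFileStatus) fileStatus;
>   json.put("childrenNum", hdfsStatus.getChildrenNum());
>   json.put("fileId", hdfsStatus.getFileId());
>   json.put("storagePolicy", hdfsStatus.getStoragePolicy());
> }
> {code}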
> I may also file another Jira to align the order of the keys in the responses.






[jira] [Work logged] (HDDS-1929) OM started on recon host in ozonesecure compose

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1929?focusedWorklogId=294536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294536
 ]

ASF GitHub Bot logged work on HDDS-1929:


Author: ASF GitHub Bot
Created on: 14/Aug/19 07:12
Start Date: 14/Aug/19 07:12
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1250: HDDS-1929. OM 
started on recon host in ozonesecure compose
URL: https://github.com/apache/hadoop/pull/1250#issuecomment-521129973
 
 
   Thanks @anuengineer for committing it.
 



Issue Time Tracking
---

Worklog Id: (was: 294536)
Time Spent: 1h 50m  (was: 1h 40m)

> OM started on recon host in ozonesecure compose 
> 
>
> Key: HDDS-1929
> URL: https://issues.apache.org/jira/browse/HDDS-1929
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> OM is started temporarily on {{recon}} host in {{ozonesecure}} compose:
> {noformat}
> recon_1 | 2019-08-07 19:41:46 INFO  OzoneManagerStarter:51 - STARTUP_MSG:
> recon_1 | /
> recon_1 | STARTUP_MSG: Starting OzoneManager
> recon_1 | STARTUP_MSG:   host = recon/192.168.16.4
> recon_1 | STARTUP_MSG:   args = [--init]
> ...
> recon_1 | SHUTDOWN_MSG: Shutting down OzoneManager at recon/192.168.16.4
> ...
> recon_1 | 2019-08-07 19:41:52 INFO  ReconServer:81 - Initializing Recon 
> server...
> {noformat}






[jira] [Created] (HDDS-1963) OM DB Schema definition in OmMetadataManagerImpl and OzoneConsts are not consistent

2019-08-14 Thread Sammi Chen (JIRA)
Sammi Chen created HDDS-1963:


 Summary: OM DB Schema definition in OmMetadataManagerImpl and 
OzoneConsts are not consistent
 Key: HDDS-1963
 URL: https://issues.apache.org/jira/browse/HDDS-1963
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Sammi Chen


OzoneConsts.java

   * OM DB Schema:
   *  ----------------------------------------------------------
   *  |  KEY                                     | VALUE        |
   *  ----------------------------------------------------------
   *  | $userName                                |  VolumeList  |
   *  ----------------------------------------------------------
   *  | /#volumeName                             |  VolumeInfo  |
   *  ----------------------------------------------------------
   *  | /#volumeName/#bucketName                 |  BucketInfo  |
   *  ----------------------------------------------------------
   *  | /volumeName/bucketName/keyName           |  KeyInfo     |
   *  ----------------------------------------------------------
   *  | #deleting#/volumeName/bucketName/keyName |  KeyInfo     |
   *  ----------------------------------------------------------

OmMetadataManagerImpl.java

/**
   * OM RocksDB Structure.
   *
   * OM DB stores metadata as KV pairs in different column families.
   *
   * OM DB Schema:
   * |-----------------------------------------------------------------|
   * | Column Family   | VALUE                                         |
   * |-----------------------------------------------------------------|
   * | userTable       | user->VolumeList                              |
   * |-----------------------------------------------------------------|
   * | volumeTable     | /volume->VolumeInfo                           |
   * |-----------------------------------------------------------------|
   * | bucketTable     | /volume/bucket-> BucketInfo                   |
   * |-----------------------------------------------------------------|
   * | keyTable        | /volumeName/bucketName/keyName->KeyInfo       |
   * |-----------------------------------------------------------------|
   * | deletedTable    | /volumeName/bucketName/keyName->KeyInfo       |
   * |-----------------------------------------------------------------|
   * | openKey         | /volumeName/bucketName/keyName/id->KeyInfo    |
   * |-----------------------------------------------------------------|
   * | s3Table         | s3BucketName -> /volumeName/bucketName        |
   * |-----------------------------------------------------------------|
   * | s3SecretTable   | s3g_access_key_id -> s3Secret                 |
   * |-----------------------------------------------------------------|
   * | dTokenTable     | s3g_access_key_id -> s3Secret                 |
   * |-----------------------------------------------------------------|
   * | prefixInfoTable | prefix -> PrefixInfo                          |
   * |-----------------------------------------------------------------|
   */

It's better to put the OM DB Schema definition in one place, to remove the 
information redundancy that causes this inconsistency.
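One possible shape for a single source of truth (a sketch only, with assumed 
names; the actual refactoring may differ):

{code:java}
// Hypothetical sketch: declare each column family name exactly once, so the
// schema documentation and OmMetadataManagerImpl cannot drift apart.
public final class OmDbSchema {
  public static final String USER_TABLE = "userTable";
  public static final String VOLUME_TABLE = "volumeTable";
  public static final String BUCKET_TABLE = "bucketTable";
  public static final String KEY_TABLE = "keyTable";
  public static final String DELETED_TABLE = "deletedTable";
  public static final String OPEN_KEY_TABLE = "openKeyTable";

  private OmDbSchema() {
    // constants holder, not instantiable
  }
}
{code}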








[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2019-08-14 Thread Istvan Fajth (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907005#comment-16907005
 ] 

Istvan Fajth commented on HDFS-13322:
-

Further information that we discovered and discussed with [~abukor] and 
[~wolfosis] after digging into this together a bit more:

The FUSE context struct does not expose anything from the caller's environment, 
mostly because that is not possible from the FUSE code's perspective. The struct 
we get in the calls coming into the fuse-dfs code contains the uid, gid, pid, 
and umask of the caller, the fuse struct (which mostly contains implementation 
details and mount args), and private data that the FS itself exposes. See the [code 
here|https://github.com/libfuse/libfuse/blob/master/include/fuse.h#L786-L804].

The limitation comes from deeper levels of the OS and of POSIX process handling, 
which is summarized pretty well in [this StackExchange 
question|https://unix.stackexchange.com/questions/29128/how-to-read-environment-variables-of-a-process]
 (see the answer from Jonathan Ben-Avraham edited by Toby Speight, currently 
the second answer).
 In a nutshell: when the kernel executes and starts a process, it puts the 
initial environment onto the stack of the process in a fixed-length structure; 
this area of the stack is exposed via the /proc/[pid]/environ system path. After 
the process starts, a POSIX process has a global __environ variable that is 
allocated and updated in the heap of the process by libc routines every time the 
environment changes. That area is not accessible to other processes or to the 
kernel; at least it is not easy to access, and access is restricted: you need 
ptrace, the symbol table of the caller process, and access permissions to the 
memory of the other process.
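A quick way to see the fixed initial snapshot the kernel exposes (a runnable 
sketch, not fuse-dfs code; it reads /proc/self/environ, which reflects only the 
environment at exec time, not later changes):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;

public class InitialEnv {
  public static void main(String[] args) throws Exception {
    // The kernel writes the initial environment as NUL-separated entries;
    // later setenv()/putenv() calls only update the heap copy (__environ).
    byte[] raw = Files.readAllBytes(Paths.get("/proc/self/environ"));
    for (String entry : new String(raw).split("\0")) {
      System.out.println(entry);
    }
  }
}
{code}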

Based on this research, the limitation is not something we can resolve from the 
fuse-dfs code. If you need to use this feature, you need to apply the workaround 
and ensure that all dfs access via the fuse mount has the proper initial 
environment (i.e. it is forked from a process that already has the environment 
variable set, so the forked process inherits it in its initial environment).

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Shoeb Sheyx
>Assignee: Istvan Fajth
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, TestFuse.java, TestFuse2.java, catter.sh, catter2.sh, 
> perftest_new_behaviour_10k_different_1KB.txt, perftest_new_behaviour_1B.txt, 
> perftest_new_behaviour_1KB.txt, perftest_new_behaviour_1MB.txt, 
> perftest_old_behaviour_10k_different_1KB.txt, perftest_old_behaviour_1B.txt, 
> perftest_old_behaviour_1KB.txt, perftest_old_behaviour_1MB.txt, 
> testHDFS-13322.sh, test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907040#comment-16907040
 ] 

Hadoop QA commented on HDFS-14423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
32s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
52s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 45s{color} | {color:orange} root: The patch generated 1 new + 276 unchanged 
- 0 fixed = 277 total (was 276) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker 

[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=294613&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294613
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 14/Aug/19 09:02
Start Date: 14/Aug/19 09:02
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313771945
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -265,6 +269,13 @@ public void persistContainerSet(OutputStream out) throws 
IOException {
   public long takeSnapshot() throws IOException {
 TermIndex ti = getLastAppliedTermIndex();
 long startTime = Time.monotonicNow();
+if (!isStateMachineHealthy.get()) {
+  String msg =
+  "Failed to take snapshot " + " for " + gid + " as the stateMachine"
+  + " is unhealthy. The last applied index is at " + ti;
 
 Review comment:
   Let's log this as well.
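  A minimal sketch of what the review asks for (a fragment, assuming the 
surrounding ContainerStateMachine fields; the exact failure path in the real 
patch may differ):

{code:java}
if (!isStateMachineHealthy.get()) {
  String msg = "Failed to take snapshot for " + gid + " as the stateMachine"
      + " is unhealthy. The last applied index is at " + ti;
  LOG.error(msg);              // log the failure, as requested in the review
  throw new IOException(msg);  // takeSnapshot() already declares IOException
}
{code}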
 



Issue Time Tracking
---

Worklog Id: (was: 294613)
Time Spent: 4h 40m  (was: 4.5h)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions






[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=294614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294614
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 14/Aug/19 09:02
Start Date: 14/Aug/19 09:02
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313772527
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -674,30 +681,60 @@ public void notifyIndexUpdate(long term, long index) {
   if (cmdType == Type.WriteChunk || cmdType ==Type.PutSmallFile) {
 builder.setCreateContainerSet(createContainerSet);
   }
+  CompletableFuture applyTransactionFuture =
+  new CompletableFuture<>();
   // Ensure the command gets executed in a separate thread than
   // stateMachineUpdater thread which is calling applyTransaction here.
-  CompletableFuture future = CompletableFuture
-  .supplyAsync(() -> runCommand(requestProto, builder.build()),
+  CompletableFuture future =
+  CompletableFuture.supplyAsync(
+  () -> runCommand(requestProto, builder.build()),
   getCommandExecutor(requestProto));
-
-  future.thenAccept(m -> {
+  future.thenApply(r -> {
 if (trx.getServerRole() == RaftPeerRole.LEADER) {
   long startTime = (long) trx.getStateMachineContext();
   metrics.incPipelineLatency(cmdType,
   Time.monotonicNowNanos() - startTime);
 }
-
-final Long previous =
-applyTransactionCompletionMap
-.put(index, trx.getLogEntry().getTerm());
-Preconditions.checkState(previous == null);
-if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
-  metrics.incNumBytesCommittedCount(
+if (r.getResult() != ContainerProtos.Result.SUCCESS) {
+  StorageContainerException sce =
+  new StorageContainerException(r.getMessage(), r.getResult());
+  LOG.error(
+  "gid {} : ApplyTransaction failed. cmd {} logIndex {} msg : "
+  + "{} Container Result: {}", gid, r.getCmdType(), index,
+  r.getMessage(), r.getResult());
+  metrics.incNumApplyTransactionsFails();
+  ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole());
+  // Since the applyTransaction now is completed exceptionally,
+  // before any further snapshot is taken , the exception will be
+  // caught in stateMachineUpdater in Ratis and ratis server will
+  // shutdown.
+  applyTransactionFuture.completeExceptionally(sce);
 
 Review comment:
   Let's move ratisServer.handleApplyTransactionFailure(gid, 
trx.getServerRole()); to the last line of the if block.
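  Sketch of the suggested ordering (a fragment only; assumes the variables from 
the diff above):

{code:java}
metrics.incNumApplyTransactionsFails();
applyTransactionFuture.completeExceptionally(sce);
// Trigger the Ratis-side failure handling last, after the future has been
// completed exceptionally.
ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole());
{code}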
 



Issue Time Tracking
---

Worklog Id: (was: 294614)
Time Spent: 4h 50m  (was: 4h 40m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions






[jira] [Created] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1964:
---

 Summary: TestOzoneClientProducer fails with ConnectException
 Key: HDDS-1964
 URL: https://issues.apache.org/jira/browse/HDDS-1964
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila


{code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
---
Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
---
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
Time elapsed: 111.036 s  <<< FAILURE!
java.lang.AssertionError: 
 Expected to find 'Couldn't create protocol ' but got unexpected exception: 
java.net.ConnectException: Your endpoint configuration is wrong; For more 
details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
{code}

Log output (with local log4j config) reveals that connection is attempted to 
0.0.0.0:9862:

{code:title=log output}
2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
(Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
{code}

The address 0.0.0.0:9862 was added as default in 
[HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].






[jira] [Commented] (HDFS-14719) Correct the safemode threshold value in BlockManagerSafeMode

2019-08-14 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907063#comment-16907063
 ] 

Surendra Singh Lilhore commented on HDFS-14719:
---

LGTM, +1

> Correct the safemode threshold value in BlockManagerSafeMode
> 
>
> Key: HDFS-14719
> URL: https://issues.apache.org/jira/browse/HDFS-14719
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14719.002.patch, HDFS-14719.patch
>
>
> BlockManagerSafeMode parses the safemode threshold incorrectly. It stores a 
> float value in a double, which can sometimes give a different result than 
> expected. If we store the value 0.999f in a double, it becomes 
> 0.9990000128746033.
> {code:java}
> this.threshold = conf.getFloat(DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
> DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT);{code}
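> A standalone demonstration of the widening issue (a sketch; not part of the 
> patch):
> {code:java}
> public class ThresholdDemo {
>   public static void main(String[] args) {
>     float f = 0.999f;
>     double widened = f;  // carries float's rounding error into the double
>     System.out.println(widened);                     // 0.9990000128746033
>     System.out.println(Double.parseDouble("0.999")); // 0.999
>   }
> }
> {code}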






[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=294635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294635
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 14/Aug/19 09:17
Start Date: 14/Aug/19 09:17
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313780023
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -265,6 +269,13 @@ public void persistContainerSet(OutputStream out) throws 
IOException {
   public long takeSnapshot() throws IOException {
 TermIndex ti = getLastAppliedTermIndex();
 long startTime = Time.monotonicNow();
+if (!isStateMachineHealthy.get()) {
+  String msg =
+  "Failed to take snapshot " + " for " + gid + " as the stateMachine"
+  + " is unhealthy. The last applied index is at " + ti;
 
 Review comment:
   Addressed in the latest patch.
 



Issue Time Tracking
---

Worklog Id: (was: 294635)
Time Spent: 5h 10m  (was: 5h)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions






[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=294634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294634
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 14/Aug/19 09:17
Start Date: 14/Aug/19 09:17
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r313780014
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -674,30 +681,60 @@ public void notifyIndexUpdate(long term, long index) {
   if (cmdType == Type.WriteChunk || cmdType ==Type.PutSmallFile) {
 builder.setCreateContainerSet(createContainerSet);
   }
+  CompletableFuture applyTransactionFuture =
+  new CompletableFuture<>();
   // Ensure the command gets executed in a separate thread than
   // stateMachineUpdater thread which is calling applyTransaction here.
-  CompletableFuture future = CompletableFuture
-  .supplyAsync(() -> runCommand(requestProto, builder.build()),
+  CompletableFuture future =
+  CompletableFuture.supplyAsync(
+  () -> runCommand(requestProto, builder.build()),
   getCommandExecutor(requestProto));
-
-  future.thenAccept(m -> {
+  future.thenApply(r -> {
 if (trx.getServerRole() == RaftPeerRole.LEADER) {
   long startTime = (long) trx.getStateMachineContext();
   metrics.incPipelineLatency(cmdType,
   Time.monotonicNowNanos() - startTime);
 }
-
-final Long previous =
-applyTransactionCompletionMap
-.put(index, trx.getLogEntry().getTerm());
-Preconditions.checkState(previous == null);
-if (cmdType == Type.WriteChunk || cmdType == Type.PutSmallFile) {
-  metrics.incNumBytesCommittedCount(
+if (r.getResult() != ContainerProtos.Result.SUCCESS) {
+  StorageContainerException sce =
+  new StorageContainerException(r.getMessage(), r.getResult());
+  LOG.error(
+  "gid {} : ApplyTransaction failed. cmd {} logIndex {} msg : "
+  + "{} Container Result: {}", gid, r.getCmdType(), index,
+  r.getMessage(), r.getResult());
+  metrics.incNumApplyTransactionsFails();
+  ratisServer.handleApplyTransactionFailure(gid, trx.getServerRole());
+  // Since the applyTransaction now is completed exceptionally,
+  // before any further snapshot is taken , the exception will be
+  // caught in stateMachineUpdater in Ratis and ratis server will
+  // shutdown.
+  applyTransactionFuture.completeExceptionally(sce);
 
 Review comment:
   Addressed in the latest patch.
 



Issue Time Tracking
---

Worklog Id: (was: 294634)
Time Spent: 5h  (was: 4h 50m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> If the applyTransaction fails in the containerStateMachine, then the 
> container should not accept new writes on restart.
> This can occur if
> # chunk write applyTransaction fails
> # container state update to UNHEALTHY also fails
> # Ratis snapshot is taken
> # Node restarts
> # container accepts new transactions






[jira] [Assigned] (HDDS-1898) GrpcReplicationService#download cannot replicate the container

2019-08-14 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1898:
---

Assignee: Lokesh Jain

> GrpcReplicationService#download cannot replicate the container
> --
>
> Key: HDDS-1898
> URL: https://issues.apache.org/jira/browse/HDDS-1898
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> Container replication is failing because RocksDB is unable to find the 
> underlying files.
> {code}
> 2019-08-02 14:07:26,670 INFO  replication.GrpcReplicationService 
> (GrpcReplicationService.java:close(124)) - 663284 bytes written to th
> e rpc stream from container 12
> 2019-08-02 14:07:26,670 ERROR replication.GrpcReplicationService 
> (GrpcReplicationService.java:download(65)) - Can't stream the contain
> er data
> java.io.FileNotFoundException: 
> /Users/msingh/code/apache/ozone/github/chaos_runs/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-403d87c2-5cbe-4511-8e14-dce727f10cf9/datanode-7/data/containers/hdds/9f2a75dc-3243-462a-a90e-c83f63ad0d55/current/containerDir0/12/metadata/12-dn-container.db/002084.log
>  (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.(FileInputStream.java:138)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.includeFile(TarContainerPacker.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.includePath(TarContainerPacker.java:233)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.pack(TarContainerPacker.java:164)
> at 
> org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource.copyData(OnDemandContainerReplicationSource.java:67)
> at 
> org.apache.hadoop.ozone.container.replication.GrpcReplicationService.download(GrpcReplicationService.java:63)
> at 
> org.apache.hadoop.hdds.protocol.datanode.proto.IntraDatanodeProtocolServiceGrpc$MethodHandlers.invoke(IntraDatanodeProtocolServiceGrpc.java:217)
> at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:710)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: java.io.IOException: This archives contains unclosed 
> entries.
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.finish(TarArchiveOutputStream.java:214)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.close(TarArchiveOutputStream.java:229)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.pack(TarContainerPacker.java:173)
> ... 11 more
> {code}






[jira] [Updated] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-14 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14674:
---
Attachment: HDFS-14674-007.patch

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> HDFS-14674-007.patch, image.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If the dfs.ha.tail-edits.max-txns-per-lock value is 500, then once the 
> namenode has loaded 500 transactions from the current editlog it will move on 
> to the next editlog, even though the current editlog contains more than 500 
> transactions. So the namenode gets an unexpected txid when tailing the editlog.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http:/

[jira] [Work started] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1964 started by Doroszlai, Attila.
---
> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].






[jira] [Assigned] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila reassigned HDDS-1964:
---

Assignee: Doroszlai, Attila

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].






[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-14 Thread wangzhaohui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907105#comment-16907105
 ] 

wangzhaohui commented on HDFS-14674:


[~Captainhzy] thanks for your comment. If I add the conf 
{{DFS_HA_TAILEDITS_MAX_TXNS_PER_LOCK_KEY}}, then in this for loop:
{code:java}
for (EditLogInputStream editIn : editStreams) {
...
}{code}
in the case of multiple editStreams, the range read by the first editStream is 
[0-100] but the loop never breaks, and the second editStream is set to read 
[200-300], so we get an unexpected txid when tailing the editlog.
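
A simplified sketch of the loop behavior being described (hypothetical names, 
not the actual FSImage/EditLogTailer code):
{code:java}
long remaining = maxTxnsPerLock;  // e.g. 500
for (EditLogInputStream editIn : editStreams) {
  long loaded = loadEdits(editIn, lastAppliedTxId + 1, remaining);
  lastAppliedTxId += loaded;
  remaining -= loaded;
  if (remaining <= 0) {
    // Without this break, the next stream starts at a much later segment
    // txid and the tailer sees a gap ("got an unexpected txid").
    break;
  }
}
{code}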

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> HDFS-14674-007.patch, image.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down Na

[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-14 Thread zy.jordan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907118#comment-16907118
 ] 

zy.jordan commented on HDFS-14674:
--

Hi [~wangzhaohui], thanks for your answer! Does it mean that when there is more 
than one editlog file in the JN, the exception will be thrown and the SBN will 
shut down? Am I right? I have also encountered this exception after applying 
HDFS-19278.

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, HDFS-14674-006.patch, 
> HDFS-14674-007.patch, image.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232056426162&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003&segmentTxId=232077264498&storageInfo=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> if dfs.ha.tail-edits.max-txns-per-lock value is 500,when the namenode load 

[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294654&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294654
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 09:59
Start Date: 14/Aug/19 09:59
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292
 
 
   ## What changes were proposed in this pull request?
   
   `TestOzoneClientProducer` verifies that `RpcClient` cannot be created 
because OM address is not configured.  The call to `producer.createClient()` is 
expected to fail with the message `Couldn't create protocol`, which is 
triggered by `IllegalArgumentException: Could not find any configured addresses 
for OM. Please configure the system with ozone.om.address`.  
bf457797f607f3aeeb2292e63f440cb13e15a2d9 added the default address as an 
explicitly configured value, so client creation now progresses further and 
fails when it cannot connect to OM (which is not started by the unit test).
   
   This change simply sets the OM address back to its previous empty value for this test.
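   
   A minimal sketch of the configuration change described (assuming the 
standard `OMConfigKeys.OZONE_OM_ADDRESS_KEY` constant; the actual test code may 
differ):
   
   ```java
   // Hypothetical: explicitly configure an empty OM address so that RpcClient
   // creation fails with "Couldn't create protocol" instead of retrying
   // against the 0.0.0.0:9862 default.
   OzoneConfiguration conf = new OzoneConfiguration();
   conf.set(OMConfigKeys.OZONE_OM_ADDRESS_KEY, "");
   ```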
   
   It also adds log4j config for `s3gateway` tests to produce better output 
next time, because 
[currently](https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer-output.txt)
 it is not very helpful.
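   
   For reference, a log4j.properties along these lines (illustrative only; 
   the committed file may differ) is enough to surface the retry messages 
   quoted in the Jira:
   
   ```
   log4j.rootLogger=INFO, stdout
   log4j.appender.stdout=org.apache.log4j.ConsoleAppender
   log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
   log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} (%F:%M(%L)) - %m%n
   ```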
   
   https://issues.apache.org/jira/browse/HDDS-1964
   
   ## How was this patch tested?
   
   ```
   $ mvn -Phdds -pl :hadoop-ozone-s3gateway test
   ...
   [INFO] Tests run: 77, Failures: 0, Errors: 0, Skipped: 0
   ...
   [INFO] BUILD SUCCESS
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294654)
Time Spent: 10m
Remaining Estimate: 0h

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1964:
-
Labels: pull-request-available  (was: )

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294655
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 10:00
Start Date: 14/Aug/19 10:00
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521183613
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294655)
Time Spent: 20m  (was: 10m)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1898) GrpcReplicationService#download cannot replicate the container

2019-08-14 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1898:
---

Assignee: (was: Lokesh Jain)

> GrpcReplicationService#download cannot replicate the container
> --
>
> Key: HDDS-1898
> URL: https://issues.apache.org/jira/browse/HDDS-1898
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> Container replication is failing because RocksDB is unable to find the 
> underlying files.
> {code}
> 2019-08-02 14:07:26,670 INFO  replication.GrpcReplicationService 
> (GrpcReplicationService.java:close(124)) - 663284 bytes written to the 
> rpc stream from container 12
> 2019-08-02 14:07:26,670 ERROR replication.GrpcReplicationService 
> (GrpcReplicationService.java:download(65)) - Can't stream the container 
> data
> java.io.FileNotFoundException: 
> /Users/msingh/code/apache/ozone/github/chaos_runs/hadoop-ozone/integration-test/target/test/data/MiniOzoneClusterImpl-403d87c2-5cbe-4511-8e14-dce727f10cf9/datanode-7/data/containers/hdds/9f2a75dc-3243-462a-a90e-c83f63ad0d55/current/containerDir0/12/metadata/12-dn-container.db/002084.log
>  (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.includeFile(TarContainerPacker.java:243)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.includePath(TarContainerPacker.java:233)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.pack(TarContainerPacker.java:164)
> at 
> org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource.copyData(OnDemandContainerReplicationSource.java:67)
> at 
> org.apache.hadoop.ozone.container.replication.GrpcReplicationService.download(GrpcReplicationService.java:63)
> at 
> org.apache.hadoop.hdds.protocol.datanode.proto.IntraDatanodeProtocolServiceGrpc$MethodHandlers.invoke(IntraDatanodeProtocolServiceGrpc.java:217)
> at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:710)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Suppressed: java.io.IOException: This archives contains unclosed 
> entries.
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.finish(TarArchiveOutputStream.java:214)
> at 
> org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.close(TarArchiveOutputStream.java:229)
> at 
> org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker.pack(TarContainerPacker.java:173)
> ... 11 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1561) Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove

2019-08-14 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1561:
---

Assignee: Lokesh Jain  (was: Nanda kumar)

> Mark OPEN containers as QUASI_CLOSED as part of Ratis groupRemove
> -
>
> Key: HDDS-1561
> URL: https://issues.apache.org/jira/browse/HDDS-1561
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
>
> Right now, if a pipeline is destroyed by SCM, all the containers on the 
> pipeline are marked as QUASI_CLOSED when the datanode receives the close 
> container command. SCM, while processing the reports for these containers, 
> marks them as CLOSED once a majority of the nodes are available.
> This is however not a sufficient condition in cases where the raft log 
> directory is missing or corrupted, as the containers will not have all the 
> applied transactions.
> To solve this problem, we should QUASI_CLOSE the containers in the datanode 
> as part of Ratis groupRemove. If a container is in OPEN state in the 
> datanode without any active pipeline, it will be marked as Unhealthy while 
> processing the close container command.
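> A hypothetical sketch of that datanode-side handling (illustrative only, 
> not from any posted patch; the names below are assumed):
> {code:java}
> // On Ratis groupRemove, quasi-close every OPEN container on the pipeline
> // so that SCM can decide the final state later from container reports.
> void onGroupRemove(PipelineID pipelineId) {
>   for (Container container : getContainersOnPipeline(pipelineId)) {
>     if (container.getState() == State.OPEN) {
>       container.quasiClose();
>     }
>   }
> }
> {code}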
> cc [~jnp], [~shashikant], [~sdeka], [~nandakumar131]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-14 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-14423:

Attachment: HDFS-14423-branch-2.006.patch

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14423-branch-2.005.patch, 
> HDFS-14423-branch-2.006.patch, HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-14 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907140#comment-16907140
 ] 

Masatake Iwasaki commented on HDFS-14423:
-

Attached 006 addressing checkstyle warnings.

> Percent (%) and plus (+) characters no longer work in WebHDFS
> -
>
> Key: HDFS-14423
> URL: https://issues.apache.org/jira/browse/HDFS-14423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0, 3.1.2
> Environment: Ubuntu 16.04, but I believe this is irrelevant.
>Reporter: Jing Wang
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14423-branch-2.005.patch, 
> HDFS-14423-branch-2.006.patch, HDFS-14423.001.patch, HDFS-14423.002.patch, 
> HDFS-14423.003.patch, HDFS-14423.004.patch
>
>
> The following commands with percent (%) no longer work starting with version 
> 3.1:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/%
> $ hadoop/bin/hdfs dfs -cat webhdfs://localhost/%
> cat: URLDecoder: Incomplete trailing escape (%) pattern
> {code}
> Also, plus (+) characters get turned into spaces when doing DN operations:
> {code:java}
> $ hadoop/bin/hdfs dfs -touchz webhdfs://localhost/a+b
> $ hadoop/bin/hdfs dfs -mkdir webhdfs://localhost/c+d
> $ hadoop/bin/hdfs dfs -ls /
> Found 4 items
> -rw-r--r--   1 jing supergroup  0 2019-04-12 11:20 /a b
> drwxr-xr-x   - jing supergroup  0 2019-04-12 11:21 /c+d
> {code}
> I can confirm that these commands work correctly on 2.9 and 3.0. Also, the 
> usual hdfs:// client works as expected.
> I suspect a relation with HDFS-13176 or HDFS-13582, but I'm not sure what the 
> right fix is. Note that Hive uses % to escape special characters in partition 
> values, so banning % might not be a good option. For example, Hive will 
> create paths like {{table_name/partition_key=%2F}} when 
> {{partition_key='/'}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1959) Decrement purge interval for Ratis logs

2019-08-14 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907142#comment-16907142
 ] 

Mukul Kumar Singh commented on HDDS-1959:
-

Thanks for taking this issue, [~pingsutw]. Can you please provide a patch for 
this jira?

> Decrement purge interval for Ratis logs
> ---
>
> Key: HDDS-1959
> URL: https://issues.apache.org/jira/browse/HDDS-1959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: kevin su
>Priority: Major
>
> Currently the purge interval for the Ratis log ("ozone.om.ratis.log.purge.gap") 
> is set at 100. This Jira aims to reduce the interval and set it to 10.
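> For illustration, the change amounts to something like this in 
> ozone-site.xml (hypothetical snippet; the patch may instead change the 
> default value in code):
> {code:xml}
> <property>
>   <name>ozone.om.ratis.log.purge.gap</name>
>   <value>10</value>
> </property>
> {code}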



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=294671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294671
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 14/Aug/19 10:31
Start Date: 14/Aug/19 10:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-521192456
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | -1 | mvninstall | 141 | hadoop-ozone in trunk failed. |
   | -1 | compile | 49 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 915 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 201 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 101 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 137 | hadoop-ozone in the patch failed. |
   | -1 | compile | 52 | hadoop-ozone in the patch failed. |
   | -1 | cc | 52 | hadoop-ozone in the patch failed. |
   | -1 | javac | 52 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | -1 | findbugs | 101 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdds in the patch passed. |
   | -1 | unit | 329 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4445 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1226 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 960f8c74122e 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | cc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1226/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   

[jira] [Created] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1965:
---

 Summary: Compile error due to leftover 
ScmBlockLocationTestIngClient file
 Key: HDDS-1965
 URL: https://issues.apache.org/jira/browse/HDDS-1965
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: build
Affects Versions: 0.5.0
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


{code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
 class ScmBlockLocationTestingClient is public, should be declared in a file 
named ScmBlockLocationTestingClient.java
[ERROR] 
/var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
 duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
[INFO] 2 errors 
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1965 started by Doroszlai, Attila.
---
> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294684
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 11:04
Start Date: 14/Aug/19 11:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521201591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 135 | hadoop-ozone in trunk failed. |
   | -1 | compile | 48 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 837 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 192 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 99 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 136 | hadoop-ozone in the patch failed. |
   | -1 | compile | 50 | hadoop-ozone in the patch failed. |
   | -1 | javac | 50 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   | -1 | findbugs | 100 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 295 | hadoop-hdds in the patch passed. |
   | -1 | unit | 104 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 3856 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ebf668536fa9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/testReport/ |
   | Max. process+thread count | 520 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294684)
Time Spent: 

[jira] [Updated] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1965:
-
Labels: pull-request-available  (was: )

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294690
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 11:13
Start Date: 14/Aug/19 11:13
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1293: HDDS-1965. 
Compile error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293
 
 
   ## What changes were proposed in this pull request?
   
   The typo in the class name of `ScmBlockLocationTestingClient` was fixed in 
5a248de5115, but the original file is still present in the repo, causing a 
compile error.
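   
   The fix presumably just removes the stale duplicate, e.g. (path taken 
   from the compile error above; assuming a git checkout):
   
   ```
   git rm hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java
   ```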
   
   https://issues.apache.org/jira/browse/HDDS-1965
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294690)
Time Spent: 10m
Remaining Estimate: 0h

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294694&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294694
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 11:17
Start Date: 14/Aug/19 11:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1293: HDDS-1965. 
Compile error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521205005
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | -1 | dupname | 0 | The patch has 1  duplicated filenames that differ only 
in case. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1293 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f65e4a4ae90b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | dupname | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/artifact/out/dupnames.txt
 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1293/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294694)
Time Spent: 20m  (was: 10m)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294696
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 11:23
Start Date: 14/Aug/19 11:23
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1293: HDDS-1965. Compile 
error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521206661
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294696)
Time Spent: 0.5h  (was: 20m)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-08-14 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-13762:
--
Attachment: (was: SCM_Cache_Perf_Results-v1.pdf)

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, 
> SCMCacheDesign-2019-07-12.pdf, SCMCacheDesign-2019-07-16.pdf, 
> SCMCacheDesign-2019-3-26.pdf, SCMCacheTestPlan-2019-3-27.pdf, 
> SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its 
> data content after a power failure or between power cycles. A non-volatile 
> storage class memory device usually has access speed close to that of a 
> memory DIMM while costing less than memory. So today it is usually used as 
> a supplement to memory to hold long-term persistent data, such as data in 
> a cache. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistent characteristics currently.  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1965:

Status: Patch Available  (was: In Progress)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1966:
---

 Summary: Wrong expected key ACL in acceptance test
 Key: HDDS-1966
 URL: https://issues.apache.org/jira/browse/HDDS-1966
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Affects Versions: 0.4.1
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Acceptance test fails at ACL checks:

{code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
[ {
  "type" : "USER",
  "name" : "testuser/s...@example.com",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "GROUP",
  "name" : "root",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "GROUP",
  "name" : "superuser1",
  "aclScope" : "ACCESS",
  "aclList" : [ "ALL" ]
}, {
  "type" : "USER",
  "name" : "superuser1",
  "aclScope" : "ACCESS",
  "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
} ]' does not match '"type" : "GROUP",
.*"name" : "superuser1*",
.*"aclScope" : "ACCESS",
.*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
{code}

The test [sets user 
ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
 but [checks group 
ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
  I think this passed previously due to a bug that was 
[fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
 by HDDS-1917.
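
A sketch of the presumable one-line fix to the expected pattern in 
ozone-shell.robot (illustrative diff; the actual patch may differ):

{code}
-"type" : "GROUP",
+"type" : "USER",
 .*"name" : "superuser1*",
 .*"aclScope" : "ACCESS",
 .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"
{code}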



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1966 started by Doroszlai, Attila.
---
> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294721&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294721
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 12:20
Start Date: 14/Aug/19 12:20
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1293: HDDS-1965. Compile 
error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521222860
 
 
   @nandakumar131 please review
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294721)
Time Spent: 40m  (was: 0.5h)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1955?focusedWorklogId=294725&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294725
 ]

ASF GitHub Bot logged work on HDDS-1955:


Author: ASF GitHub Bot
Created on: 14/Aug/19 12:31
Start Date: 14/Aug/19 12:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281#issuecomment-521226293
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1457 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 136 | hadoop-ozone in trunk failed. |
   | -1 | compile | 50 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 838 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 196 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 100 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 133 | hadoop-ozone in the patch failed. |
   | -1 | compile | 51 | hadoop-ozone in the patch failed. |
   | -1 | javac | 51 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | -1 | findbugs | 102 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-hdds in the patch passed. |
   | -1 | unit | 323 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 5512 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 759520ee2bf7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e4b757 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/testReport/ |
   | Max. process+thread count | 515 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1281/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to

[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2019-08-14 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907240#comment-16907240
 ] 

Feilong He commented on HDFS-13762:
---

[~aajisaka], thanks for your comment. We are preparing refreshed test results 
and a new performance report will be shared here in the near future. Thanks 
again!

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> HDFS-13762.008.patch, SCMCacheDesign-2018-11-08.pdf, 
> SCMCacheDesign-2019-07-12.pdf, SCMCacheDesign-2019-07-16.pdf, 
> SCMCacheDesign-2019-3-26.pdf, SCMCacheTestPlan-2019-3-27.pdf, 
> SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its data 
> content after a power failure or between power cycles. A non-volatile storage 
> class memory device usually has access speed close to that of a memory DIMM 
> while costing less than memory. So today it is usually used as a supplement 
> to memory to hold long-term persistent data, such as data in a cache. 
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistent characteristics currently.  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907242#comment-16907242
 ] 

Nanda kumar commented on HDDS-1965:
---

[~adoroszlai], I don't see the old file [the one with "TestIng" --> 
ScmBlockLocationTestIngClient] in my local checkout after HDDS-1947. 
I guess the problem is because of Mac OS; are you using Linux?
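
For context, the javac rule being tripped here: a public top-level class must 
be declared in a file whose name matches the class name exactly, and that 
match is case-sensitive at the language level even when the filesystem is 
not. A minimal illustration (hypothetical file, mirroring the quoted error):

{code:java}
// File: ScmBlockLocationTestIngClient.java -- note the capital "I" in "Ing".
// javac rejects this: "class ScmBlockLocationTestingClient is public,
// should be declared in a file named ScmBlockLocationTestingClient.java"
public class ScmBlockLocationTestingClient {
}
{code}

On a case-insensitive filesystem (the macOS default) both spellings resolve 
to the same file, so the mismatch can go unnoticed locally while breaking the 
build on Linux.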

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907243#comment-16907243
 ] 

Nanda kumar commented on HDDS-1965:
---

\cc [~anu]

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294737&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294737
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 13:09
Start Date: 14/Aug/19 13:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1293: 
HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294737)
Time Spent: 50m  (was: 40m)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1965:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks [~adoroszlai] for the quick fix. Committed it to trunk.

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14423) Percent (%) and plus (+) characters no longer work in WebHDFS

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907252#comment-16907252
 ] 

Hadoop QA commented on HDFS-14423:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
1s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
6s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\

[jira] [Work logged] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1965?focusedWorklogId=294747&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294747
 ]

ASF GitHub Bot logged work on HDDS-1965:


Author: ASF GitHub Bot
Created on: 14/Aug/19 13:26
Start Date: 14/Aug/19 13:26
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1293: HDDS-1965. Compile 
error due to leftover ScmBlockLocationTestIngClient file
URL: https://github.com/apache/hadoop/pull/1293#issuecomment-521244832
 
 
   Thanks @nandakumar131 for quick review and commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294747)
Time Spent: 1h  (was: 50m)

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907261#comment-16907261
 ] 

Hudson commented on HDDS-1965:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17118 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17118/])
HDDS-1965. Compile error due to leftover ScmBlockLocationTestIngClient (nanda: 
rev 83e452eceac63559c2f5146510ae3e89e310ac1e)
* (delete) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java


> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1955?focusedWorklogId=294752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294752
 ]

ASF GitHub Bot logged work on HDDS-1955:


Author: ASF GitHub Bot
Created on: 14/Aug/19 13:33
Start Date: 14/Aug/19 13:33
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1281: HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281#issuecomment-521247411
 
 
   Failures are not related. They are because of #1293
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294752)
Time Spent: 1h  (was: 50m)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The test is failing because the pipeline can be closed due to the datanode 
> shutdown. This can also cause a ContainerNotOpenException to be raised.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907265#comment-16907265
 ] 

Hadoop QA commented on HDFS-14674:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 37m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 175 unchanged - 0 fixed = 182 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}236m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockId |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | hadoop.hdfs.server.namenode.TestFSImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12977571/HDFS-14674-007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d7b4459eeb03 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e4b757 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27504/artifact/out/diff-checkstyle-hadoop

[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907268#comment-16907268
 ] 

Anu Engineer commented on HDDS-1965:


Thanks for catching this and fixing it. The case-insensitivity is a pain.

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1963) OM DB Schema definition in OmMetadataManagerImpl and OzoneConsts are not consistent

2019-08-14 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907273#comment-16907273
 ] 

Anu Engineer commented on HDDS-1963:


Agreed, thanks for flagging this. Perhaps the OmMetadataManagerImpl is the way 
to go?
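
One possible shape for that (a hypothetical sketch, not actual Ozone code): 
declare each table exactly once, together with its key/value mapping, and let 
both OzoneConsts and OmMetadataManagerImpl refer to that single definition.

{code:java}
/** Hypothetical single source of truth for the OM DB schema. */
public enum OmDbTableSchema {
  USER_TABLE("userTable", "user -> VolumeList"),
  VOLUME_TABLE("volumeTable", "/volume -> VolumeInfo"),
  BUCKET_TABLE("bucketTable", "/volume/bucket -> BucketInfo"),
  KEY_TABLE("keyTable", "/volumeName/bucketName/keyName -> KeyInfo"),
  DELETED_TABLE("deletedTable", "/volumeName/bucketName/keyName -> KeyInfo");

  private final String tableName;
  private final String mapping;

  OmDbTableSchema(String tableName, String mapping) {
    this.tableName = tableName;
    this.mapping = mapping;
  }

  /** Column family name used when opening the table. */
  public String getTableName() {
    return tableName;
  }

  /** Human-readable mapping, usable wherever the schema is documented. */
  public String getMapping() {
    return mapping;
  }
}
{code}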

> OM DB Schema definition in OmMetadataManagerImpl and OzoneConsts are not 
> consistent
> --
>
> Key: HDDS-1963
> URL: https://issues.apache.org/jira/browse/HDDS-1963
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Priority: Major
>
> OzoneConsts.java
> * OM DB Schema:
> *  ----------------------------------------------------------
> *  | KEY                                       | VALUE       |
> *  ----------------------------------------------------------
> *  | $userName                                 | VolumeList  |
> *  ----------------------------------------------------------
> *  | /#volumeName                              | VolumeInfo  |
> *  ----------------------------------------------------------
> *  | /#volumeName/#bucketName                  | BucketInfo  |
> *  ----------------------------------------------------------
> *  | /volumeName/bucketName/keyName            | KeyInfo     |
> *  ----------------------------------------------------------
> *  | #deleting#/volumeName/bucketName/keyName  | KeyInfo     |
> *  ----------------------------------------------------------
> OmMetadataManagerImpl.java
> /**
> * OM RocksDB Structure.
> *
> * OM DB stores metadata as KV pairs in different column families.
> *
> * OM DB Schema:
> * |-----------------|----------------------------------------------|
> * | Column Family   | VALUE                                        |
> * |-----------------|----------------------------------------------|
> * | userTable       | user -> VolumeList                           |
> * |-----------------|----------------------------------------------|
> * | volumeTable     | /volume -> VolumeInfo                        |
> * |-----------------|----------------------------------------------|
> * | bucketTable     | /volume/bucket -> BucketInfo                 |
> * |-----------------|----------------------------------------------|
> * | keyTable        | /volumeName/bucketName/keyName -> KeyInfo    |
> * |-----------------|----------------------------------------------|
> * | deletedTable    | /volumeName/bucketName/keyName -> KeyInfo    |
> * |-----------------|----------------------------------------------|
> * | openKey         | /volumeName/bucketName/keyName/id -> KeyInfo |
> * |-----------------|----------------------------------------------|
> * | s3Table         | s3BucketName -> /volumeName/bucketName       |
> * |-----------------|----------------------------------------------|
> * | s3SecretTable   | s3g_access_key_id -> s3Secret                |
> * |-----------------|----------------------------------------------|
> * | dTokenTable     | s3g_access_key_id -> s3Secret                |
> * |-----------------|----------------------------------------------|
> * | prefixInfoTable | prefix -> PrefixInfo                         |
> * |-----------------|----------------------------------------------|
> */
> It's better to put the OM DB Schema definition in one place to resolve the 
> inconsistency caused by this information redundancy. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1966:
-
Labels: pull-request-available  (was: )

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?focusedWorklogId=294756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294756
 ]

ASF GitHub Bot logged work on HDDS-1966:


Author: ASF GitHub Bot
Created on: 14/Aug/19 13:45
Start Date: 14/Aug/19 13:45
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1295: HDDS-1966. Wrong 
expected key ACL in acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521252124
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294756)
Time Spent: 20m  (was: 10m)

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?focusedWorklogId=294755&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294755
 ]

ASF GitHub Bot logged work on HDDS-1966:


Author: ASF GitHub Bot
Created on: 14/Aug/19 13:45
Start Date: 14/Aug/19 13:45
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1295: HDDS-1966. 
Wrong expected key ACL in acceptance test
URL: https://github.com/apache/hadoop/pull/1295
 
 
   ## What changes were proposed in this pull request?
   
   Acceptance test [fails at ACL 
checks](https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2):
   
   ```
   [ {
 "type" : "USER",
 "name" : "testuser/s...@example.com",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "GROUP",
 "name" : "root",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "GROUP",
 "name" : "superuser1",
 "aclScope" : "ACCESS",
 "aclList" : [ "ALL" ]
   }, {
 "type" : "USER",
 "name" : "superuser1",
 "aclScope" : "ACCESS",
 "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
   } ]' does not match '"type" : "GROUP",
   .*"name" : "superuser1*",
   .*"aclScope" : "ACCESS",
   .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
   ```
   
   The test [sets user 
ACL](https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123),
 but [checks group 
ACL](https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125).
  I think this passed previously due to a bug that was 
[fixed](https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31)
 by [HDDS-1917](https://issues.apache.org/jira/browse/HDDS-1917).
   
   https://issues.apache.org/jira/browse/HDDS-1966
   
   ## How was this patch tested?
   
   Ran `ozonesecure` acceptance test, verified that key ACL checks were passing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294755)
Time Spent: 10m
Remaining Estimate: 0h

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1965) Compile error due to leftover ScmBlockLocationTestIngClient file

2019-08-14 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907277#comment-16907277
 ] 

Doroszlai, Attila commented on HDDS-1965:
-

Actually it was caught by Jenkins, which runs on Linux.

I think after pulling this change, everyone on Mac will have the following 
local state:

{code}
Changes not staged for commit:
...
deleted:
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java
{code}

which can then be fixed by:

{code}
git checkout -- 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java
{code}
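
As an aside, a case-only rename is easy to get wrong on a case-insensitive 
filesystem, where a plain mv may not be recorded by git as a rename. A 
two-step rename (shown as a sketch, not commands taken from this PR) avoids 
that:

{code}
git mv ScmBlockLocationTestIngClient.java tmp-rename.java
git mv tmp-rename.java ScmBlockLocationTestingClient.java
{code}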

> Compile error due to leftover ScmBlockLocationTestIngClient file
> 
>
> Key: HDDS-1965
> URL: https://issues.apache.org/jira/browse/HDDS-1965
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://ci.anzix.net/job/ozone/17667/consoleText}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestIngClient.java:[65,8]
>  class ScmBlockLocationTestingClient is public, should be declared in a file 
> named ScmBlockLocationTestingClient.java
> [ERROR] 
> /var/jenkins_home/workspace/ozone@2/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ScmBlockLocationTestingClient.java:[65,8]
>  duplicate class: org.apache.hadoop.ozone.om.ScmBlockLocationTestingClient
> [INFO] 2 errors 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1966:

Status: Patch Available  (was: In Progress)

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14595:
---
Fix Version/s: 3.3.0

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, 
> HDFS-14595.006.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggestions:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check 
> for each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907321#comment-16907321
 ] 

Hudson commented on HDFS-14595:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17119 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17119/])
HDFS-14595. HDFS-11848 breaks API compatibility. Contributed by Siyao (weichiu: 
rev 3c0382f1b933b7acfe55081f5bad46f9fe05a14b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHdfsAdmin.java


> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, 
> HDFS-14595.006.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggestions:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check 
> for each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11848) Enhance dfsadmin listOpenFiles command to list files under a given path

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907322#comment-16907322
 ] 

Hudson commented on HDFS-11848:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17119 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17119/])
HDFS-14595. HDFS-11848 breaks API compatibility. Contributed by Siyao (weichiu: 
rev 3c0382f1b933b7acfe55081f5bad46f9fe05a14b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHdfsAdmin.java


> Enhance dfsadmin listOpenFiles command to list files under a given path
> ---
>
> Key: HDFS-11848
> URL: https://issues.apache.org/jira/browse/HDFS-11848
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Yiqun Lin
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HDFS-11848.001.patch, HDFS-11848.002.patch, 
> HDFS-11848.003.patch, HDFS-11848.004.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to 
> list all the open files in the system.
> One more thing that would be nice here is to filter the output on a passed 
> path or DataNode. Use case: an admin might already know a stale file by path 
> (perhaps from fsck's -openforwrite) and wants to figure out who the lease 
> holder is. The proposal here is to add suboptions to {{listOpenFiles}} to 
> list files filtered by path.
> {{LeaseManager#getINodeWithLeases(INodeDirectory)}} can be used to get the 
> open file list for any given ancestor directory.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14595:
---
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   Status: Resolved  (was: Patch Available)

Thanks [~smeng] for the patch and [~ayushtkn] for several iterations of review!
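
For reference, the compatibility-shim pattern the fix calls for looks roughly 
like the following (a sketch with hedged signatures; the committed patch is 
authoritative): keep the pre-HDFS-11848 method, deprecate it, and delegate to 
the new overload.

{code:java}
/**
 * Pre-HDFS-11848 API, kept for compatibility.
 * @deprecated use {@link #listOpenFiles(EnumSet)} instead.
 */
@Deprecated
public RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException {
  // Delegate to the newer overload, preserving the old default behavior.
  return listOpenFiles(EnumSet.of(OpenFilesType.ALL_OPEN_FILES));
}
{code}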

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14595.001.patch, HDFS-14595.002.patch, 
> HDFS-14595.003.patch, HDFS-14595.004.patch, HDFS-14595.005.patch, 
> HDFS-14595.006.patch, hadoop_ 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issues when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggestions:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update the release doc to enforce running an API compatibility check 
> for each release.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1955?focusedWorklogId=294802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294802
 ]

ASF GitHub Bot logged work on HDDS-1955:


Author: ASF GitHub Bot
Created on: 14/Aug/19 15:09
Start Date: 14/Aug/19 15:09
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1281: 
HDDS-1955. TestBlockOutputStreamWithFailures#test2DatanodesFailure failing 
because of assertion error. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1281
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294802)
Time Spent: 1h 10m  (was: 1h)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test is failing because the pipeline can be closed due to the datanode 
> shutdown. This can also cause a ContainerNotOpenException to be raised.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907338#comment-16907338
 ] 

Nanda kumar commented on HDDS-1955:
---

Thanks [~msingh] for the contribution. Committed it to trunk and ozone-0.4.1 
branch.
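
The shape of the fix, per the description, is for the test to accept both 
failure modes. A generic sketch of that pattern (hypothetical variable names, 
not the committed change; exact exception types and packages may differ):

{code:java}
try {
  key.write(data);
  key.flush();
} catch (IOException e) {
  // Depending on timing, shutting down two datanodes can fail the write
  // either through a raft retry failure (pipeline closed) or because the
  // container was already closed.
  Throwable cause = e.getCause() != null ? e.getCause() : e;
  Assert.assertTrue("unexpected failure: " + cause,
      cause instanceof RaftRetryFailureException
          || cause instanceof ContainerNotOpenException);
}
{code}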

> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test is failing because the pipeline can be closed due to the datanode 
> shutdown. This can also cause a ContainerNotOpenException to be raised.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1955:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test is failing because the pipeline can be closed due to the datanode 
> shutdown. This can also cause a ContainerNotOpenException to be raised.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14567) If kms-acls fails to load, it will never be reloaded

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907339#comment-16907339
 ] 

Wei-Chiu Chuang commented on HDFS-14567:


Thanks. I think the production code fix looks good to me.
I am still not sure about the test. Will take another look.
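
To make the race concrete: the window exists because lastReload is stamped 
before the file is parsed. A minimal sketch of one possible direction (not 
necessarily what the attached patch does) is to key the staleness check on 
the modification time of the file version that was actually parsed:

{code:java}
private long loadedAclsMtime; // mtime of the kms-acls file we last parsed

private Configuration loadACLsFromFile() {
  LOG.debug("Loading ACLs file");
  File f = new File(System.getProperty(KMS_CONFIG_DIR), KMS_ACLS_XML);
  long mtime = f.lastModified();           // capture before parsing
  Configuration conf = KMSConfiguration.getACLsConf();
  conf.get(Type.CREATE.getAclConfigKey()); // triggering the resource loading
  loadedAclsMtime = mtime;
  return conf;
}

public boolean isACLsFileNewer() {
  File f = new File(System.getProperty(KMS_CONFIG_DIR), KMS_ACLS_XML);
  // Reload whenever the on-disk mtime differs from the version we parsed;
  // with no fixed 100 ms buffer, a write that lands during or right after
  // a reload is picked up on the next check instead of being lost.
  return f.lastModified() != loadedAclsMtime;
}
{code}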

> If kms-acls fails to load, it will never be reloaded
> ---
>
> Key: HDFS-14567
> URL: https://issues.apache.org/jira/browse/HDFS-14567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14567.patch
>
>
> Scenario: an automation tool generates kms-acls; while the generation of 
> kms-acls is not yet complete, the system will detect a modification of 
> kms-acls and try to load it.
> Before getting the configuration we modify the last reload time, as shown 
> in the code below:
> {code:java}
> private Configuration loadACLsFromFile() {
>   LOG.debug("Loading ACLs file");
>   lastReload = System.currentTimeMillis();
>   Configuration conf = KMSConfiguration.getACLsConf();
>   // triggering the resource loading.
>   conf.get(Type.CREATE.getAclConfigKey());
>   return conf;
> }{code}
> If the kms-acls file is written within the next 100 ms, the changes will 
> not be loaded, as the condition "newer = f.lastModified() - time > 100" is 
> never met, because the last reload time was modified before the 
> configuration was read.
> {code:java}
> public static boolean isACLsFileNewer(long time) {
> boolean newer = false;
> String confDir = System.getProperty(KMS_CONFIG_DIR);
> if (confDir != null) {
> Path confPath = new Path(confDir);
> if (!confPath.isUriPathAbsolute()) {
> throw new RuntimeException("System property '" + KMS_CONFIG_DIR +
> "' must be an absolute path: " + confDir);
> }
> File f = new File(confDir, KMS_ACLS_XML);
> LOG.trace("Checking file {}, modification time is {}, last reload time is"
> + " {}", f.getPath(), f.lastModified(), time);
> // at least 100ms newer than time, we do this to ensure the file
> // has been properly closed/flushed
> newer = f.lastModified() - time > 100;
> }
> return newer;
> } {code}
>  
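For illustration, a minimal sketch of one possible fix (an assumption, not necessarily the attached patch): record the reload timestamp only after the configuration has been loaded, so a kms-acls.xml written during the load still looks newer on the next isACLsFileNewer check.

{code:java}
private Configuration loadACLsFromFile() {
  LOG.debug("Loading ACLs file");
  Configuration conf = KMSConfiguration.getACLsConf();
  // triggering the resource loading.
  conf.get(Type.CREATE.getAclConfigKey());
  // Take the timestamp *after* loading, closing the 100ms race above.
  lastReload = System.currentTimeMillis();
  return conf;
}
{code}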



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1955) TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of assertion error

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907341#comment-16907341
 ] 

Hudson commented on HDDS-1955:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17120/])
HDDS-1955. TestBlockOutputStreamWithFailures#test2DatanodesFailure (nanda: rev 
2432356570140ec7f55e1ab56e442c373ff05a16)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java


> TestBlockOutputStreamWithFailures#test2DatanodesFailure failing because of 
> assertion error
> --
>
> Key: HDDS-1955
> URL: https://issues.apache.org/jira/browse/HDDS-1955
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test is failing because the pipeline can be closed due to the datanode 
> shutdown. This can also cause a ContainerNotOpenException to be raised.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-14 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907342#comment-16907342
 ] 

Ayush Saxena commented on HDFS-14713:
-

v004 LGTM +1

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because there is one missing value in the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}
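For illustration, the fix implied by the issue title is a one-entry addition to that array (a sketch, not necessarily the committed patch):

{code:java}
String[] commands =
    {"-add", "-update", "-rm", "-ls", "-getDestination",
        "-setQuota", "-clrQuota",
        "-safemode", "-nameservice", "-getDisabledNameservices",
        "-refresh", "-refreshRouterArgs"};  // the previously missing entry
{code}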



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?focusedWorklogId=294803&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294803
 ]

ASF GitHub Bot logged work on HDDS-1966:


Author: ASF GitHub Bot
Created on: 14/Aug/19 15:12
Start Date: 14/Aug/19 15:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1295: HDDS-1966. Wrong 
expected key ACL in acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521289037
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 605 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 553 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 362 | hadoop-hdds in the patch passed. |
   | -1 | unit | 656 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5172 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.TestOzoneClientProducer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1295 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux f6466b75cfe5 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83e452e |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1295/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294803)
Time Spent: 0.5h  (was: 20m)

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "

[jira] [Updated] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-14 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14713:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because there is one missing value in the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-14 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907347#comment-16907347
 ] 

Ayush Saxena commented on HDFS-14713:
-

Committed to trunk.
Thanx [~wangzhaohui] for the contribution and [~elgoiri] for the review!!!

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because there is one missing value in the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-14 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14713:

Parent Issue: HDFS-14603  (was: HDFS-13891)

> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because there is one missing value in the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14713) RBF: RouterAdmin supports refreshRouterArgs command but not on display

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907356#comment-16907356
 ] 

Hudson commented on HDFS-14713:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17121 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17121/])
HDFS-14713. RBF: RouterAdmin supports refreshRouterArgs command but not 
(ayushsaxena: rev b06c2345efffde2955b8c2d5fd954ad73b5d8677)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java


> RBF: RouterAdmin supports refreshRouterArgs command but not on display
> --
>
> Key: HDFS-14713
> URL: https://issues.apache.org/jira/browse/HDFS-14713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14713-000.patch, HDFS-14713-001.patch, 
> HDFS-14713-002.patch, HDFS-14713-003.patch, HDFS-14713-004.patch, after.png, 
> before.png
>
>
> When the cmd command is null, the refreshRouterArgs command is not displayed, 
> because there is one missing value in the String[] commands:
> {code:java}
> //
> if (cmd == null) {
>   String[] commands =
>   {"-add", "-update", "-rm", "-ls", "-getDestination",
>   "-setQuota", "-clrQuota",
>   "-safemode", "-nameservice", "-getDisabledNameservices",
>   "-refresh"};
>   StringBuilder usage = new StringBuilder();
>   usage.append("Usage: hdfs dfsrouteradmin :\n");
>   for (int i = 0; i < commands.length; i++) {
> usage.append(getUsage(commands[i]));
> if (i + 1 < commands.length) {
>   usage.append("\n");
> }
>   }
>   
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294812&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294812
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 15:50
Start Date: 14/Aug/19 15:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521304482
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 595 | trunk passed |
   | +1 | compile | 368 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 935 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 633 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 559 | the patch passed |
   | +1 | compile | 375 | the patch passed |
   | +1 | javac | 375 | the patch passed |
   | +1 | checkstyle | 78 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 726 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 367 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2560 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 8616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1292 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 47d0c0356561 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 83e452e |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/testReport/ |
   | Max. process+thread count | 4873 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1292/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294812)
Time Spent: 40m  (was: 0.5h)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci

[jira] [Updated] (HDFS-14528) [SBN Read]Failover from Active to Standby Failed

2019-08-14 Thread Ravuri Sushma sree (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravuri Sushma sree updated HDFS-14528:
--
Attachment: HDFS-14528.2.Patch

> [SBN Read]Failover from Active to Standby Failed  
> --
>
> Key: HDFS-14528
> URL: https://issues.apache.org/jira/browse/HDFS-14528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-14528.2.Patch, ZKFC_issue.patch
>
>
> *Started an HA cluster with three nodes [ _Active, Standby, Observer_ ]*
> *When trying to execute the failover command from active to standby,*
> *_./hdfs haadmin -failover nn1 nn2_, the below exception is thrown:*
>   Operation failed: Call From X-X-X-X/X-X-X-X to Y-Y-Y-Y: failed on 
> connection exception: java.net.ConnectException: Connection refused
> This is encountered in two cases: when any other standby namenode is down, or 
> when any other ZKFC is down.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14687) Standby Namenode never come out of safemode when EC files are being written.

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907387#comment-16907387
 ] 

Wei-Chiu Chuang commented on HDFS-14687:


Ping [~surendrasingh], appreciate your report. Looking at the fix, this looks 
really bad. Could you help update the patch? Thank you

> Standby Namenode never come out of safemode when EC files are being written.
> 
>
> Key: HDFS-14687
> URL: https://issues.apache.org/jira/browse/HDFS-14687
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, namenode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14687.001.patch, HDFS-14687.002.patch
>
>
> When a huge number of EC files are being written and the SBN is restarted, it 
> will never come out of safe mode, and the required block count keeps increasing.
> {noformat}
> The reported blocks 16658401 needs additional 1702 blocks to reach the 
> threshold 0.9 of total blocks 16660120.
> The reported blocks 16658659 needs additional 2935 blocks to reach the 
> threshold 0.9 of total blocks 16661611.
> The reported blocks 16659947 needs additional 3868 blocks to reach the 
> threshold 0.9 of total blocks 16663832.
> The reported blocks 1335 needs additional 5116 blocks to reach the 
> threshold 0.9 of total blocks 16671468.
> The reported blocks 16669311 needs additional 6384 blocks to reach the 
> threshold 0.9 of total blocks 16675712.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14675) Increase Balancer Defaults Further

2019-08-14 Thread Stephen O'Donnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-14675:
-
Attachment: HDFS-14675.001.patch

> Increase Balancer Defaults Further
> --
>
> Key: HDFS-14675
> URL: https://issues.apache.org/jira/browse/HDFS-14675
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14675.001.patch
>
>
> HDFS-10297 increased the balancer defaults to 50 for 
> dfs.datanode.balance.max.concurrent.moves and to 10MB/s for 
> dfs.datanode.balance.bandwidthPerSec.
> We have found that these settings often have to be increased further as users 
> find the balancer operates too slowly with 50 and 10MB/s. We often recommend 
> moving concurrent moves to between 200 and 300 and setting the bandwidth to 
> 100 or even 1000MB/s, and these settings seem to work well in practice.
> I would like to suggest we increase the balancer defaults further. I would 
> suggest 100 for concurrent moves and 100MB/s for the bandwidth, but I would 
> like to know what others think on this topic too.
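For reference, the two properties under discussion can already be raised per cluster; a minimal sketch of the proposed values set programmatically (illustrative only, the usual route is hdfs-site.xml):

{code:java}
Configuration conf = new HdfsConfiguration();
// dfs.datanode.balance.max.concurrent.moves: proposed default 100
conf.setInt(
    DFSConfigKeys.DFS_DATANODE_BALANCE_MAX_NUM_CONCURRENT_MOVES_KEY, 100);
// dfs.datanode.balance.bandwidthPerSec: proposed default 100MB/s
conf.setLong(
    DFSConfigKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_KEY,
    100L * 1024 * 1024);
{code}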



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14283) DFSInputStream to prefer cached replica

2019-08-14 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-14283:
--

Assignee: Lisheng Sun

> DFSInputStream to prefer cached replica
> ---
>
> Key: HDFS-14283
> URL: https://issues.apache.org/jira/browse/HDFS-14283
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.0
> Environment: HDFS Caching
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
>
> HDFS Caching offers performance benefits. However, currently NameNode does 
> not treat cached replica with higher priority, so HDFS caching is only useful 
> when cache replication = 3, that is to say, all replicas are cached in 
> memory, so that a client doesn't randomly pick an uncached replica.
> HDFS-6846 proposed to let NameNode give higher priority to cached replica. 
> Changing a logic in NameNode is always tricky so that didn't get much 
> traction. Here I propose a different approach: let client (DFSInputStream) 
> prefer cached replica.
> A {{LocatedBlock}} object already contains cached replica location so a 
> client has the needed information. I think we can change 
> {{DFSInputStream#getBestNodeDNAddrPair()}} for this purpose.
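For illustration, a minimal sketch of the proposed preference (the helper name is hypothetical; the real change would live inside {{DFSInputStream#getBestNodeDNAddrPair()}}):

{code:java}
static DatanodeInfo chooseReplica(LocatedBlock block,
    Collection<DatanodeInfo> ignored) {
  // Prefer replicas cached in datanode memory, if any.
  for (DatanodeInfo dn : block.getCachedLocations()) {
    if (!ignored.contains(dn)) {
      return dn;
    }
  }
  // Otherwise fall back to the ordinary replica list, as today.
  for (DatanodeInfo dn : block.getLocations()) {
    if (!ignored.contains(dn)) {
      return dn;
    }
  }
  return null; // caller refetches block locations
}
{code}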



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14728) RBF: GetDatanodeReport causes a large GC pressure on the NameNodes

2019-08-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907418#comment-16907418
 ] 

Íñigo Goiri commented on HDFS-14728:


* As we are moving the cache out of NamenodeBeanMetrics into the 
RouterRpcServer, we should unify getNodes() and getNodesImpl().
* It is missing updates in hdfs-rbf-default.xml.
* Not sure about TestRouterRPCClientRetries.
* We can fix the checkstyles.

> RBF: GetDatanodeReport causes a large GC pressure on the NameNodes
> --
>
> Key: HDFS-14728
> URL: https://issues.apache.org/jira/browse/HDFS-14728
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14728-trunk-001.patch, HDFS-14728-trunk-002.patch, 
> HDFS-14728-trunk-003.patch
>
>
> When a cluster contains millions of DNs, *GetDatanodeReport* is pretty 
> expensive and causes large GC pressure on the NameNode.
> When multiple NSs share those millions of DNs via federation and the router 
> listens to the NSs, the problem is even more serious:
> all the NSs will GC at the same time.
> RBF should cache the datanode report information and have an option to 
> disable the cache.
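For illustration, a minimal sketch of the caching idea (field and helper names are hypothetical, not the attached patch): memoize the expensive fan-out for a TTL, with a non-positive TTL disabling the cache.

{code:java}
private volatile DatanodeInfo[] cachedReport;
private volatile long cachedReportTimeMs;

DatanodeInfo[] getCachedDatanodeReport(long ttlMs) throws IOException {
  if (ttlMs <= 0) {
    return getDatanodeReportImpl(); // cache disabled: always fan out
  }
  long now = Time.monotonicNow();
  if (cachedReport == null || now - cachedReportTimeMs > ttlMs) {
    // One expensive fan-out to every subcluster, amortized over ttlMs.
    cachedReport = getDatanodeReportImpl();
    cachedReportTimeMs = now;
  }
  return cachedReport;
}
{code}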



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294853
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 16:41
Start Date: 14/Aug/19 16:41
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521323896
 
 
   @smengcl @anuengineer please review
   
   Here are the fixed unit tests:
   
https://ci.anzix.net/job/ozone/17670/testReport/org.apache.hadoop.ozone.s3/TestOzoneClientProducer/
   
   Failed acceptance test is being fixed in #1295.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294853)
Time Spent: 50m  (was: 40m)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].
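For illustration only (the actual fix may differ), one way to keep the negative test deterministic is to accept either failure mode, since the new 0.0.0.0:9862 default turns the old protocol-creation failure into a retried ConnectException:

{code:java}
// Hedged sketch; producer.createClient() is a stand-in for the call
// under test in TestOzoneClientProducer.
try {
  producer.createClient();
  Assert.fail("Expected client creation to fail without a reachable OM");
} catch (IOException e) {
  Assert.assertTrue(
      e.getMessage().contains("Couldn't create protocol")
          || e instanceof ConnectException);
}
{code}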



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir

2019-08-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907425#comment-16907425
 ] 

Eric Yang commented on HDFS-2470:
-

[~swagle], defaulting to 700 is generally a good idea.  StorageDirectory is also 
used by the datanode, and there is a [legacy version of HDFS short-circuit 
read|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html]
 that allows the datanode storage directory permission to be controlled via the 
dfs.datanode.data.dir.perm config.  A 700 default may therefore be an 
incompatible change for applications that depend on the legacy HDFS 
short-circuit read defaults.  The directory permission can default to 700 once 
the code logic checks all permission configs per dirType, to ensure we don't regress.
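For illustration, a hedged sketch of the guard suggested above (method and key names are illustrative only): honor an explicitly configured permission for the directory type and fall back to 700 only when nothing is set.

{code:java}
private static FsPermission storageDirPermission(Configuration conf,
    String permKey) {
  // e.g. permKey = "dfs.datanode.data.dir.perm" for the legacy
  // short-circuit read case; an unset key falls back to 700.
  String configured = permKey == null ? null : conf.get(permKey);
  return new FsPermission(configured != null ? configured : "700");
}
{code}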

> NN should automatically set permissions on dfs.namenode.*.dir
> -
>
> Key: HDFS-2470
> URL: https://issues.apache.org/jira/browse/HDFS-2470
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Aaron T. Myers
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, 
> HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch
>
>
> Much as the DN currently sets the correct permissions for the 
> dfs.datanode.data.dir, the NN should do the same for the 
> dfs.namenode.(name|edit).dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?focusedWorklogId=294854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294854
 ]

ASF GitHub Bot logged work on HDDS-1966:


Author: ASF GitHub Bot
Created on: 14/Aug/19 16:42
Start Date: 14/Aug/19 16:42
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1295: HDDS-1966. 
Wrong expected key ACL in acceptance test
URL: https://github.com/apache/hadoop/pull/1295
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294854)
Time Spent: 40m  (was: 0.5h)

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1966:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thank you for catching and fixing the issue. Committed to the trunk. 
[~nandakumar131] you might want to cherry-pick this into ozone-0.4.1

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1966:
--
Fix Version/s: 0.4.1

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907429#comment-16907429
 ] 

Nanda kumar commented on HDDS-1966:
---

Committed it to ozone-0.4.1 as well. Thanks for the flag [~anu].

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1966?focusedWorklogId=294859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294859
 ]

ASF GitHub Bot logged work on HDDS-1966:


Author: ASF GitHub Bot
Created on: 14/Aug/19 16:47
Start Date: 14/Aug/19 16:47
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1295: HDDS-1966. Wrong 
expected key ACL in acceptance test
URL: https://github.com/apache/hadoop/pull/1295#issuecomment-521326052
 
 
   Thanks @anuengineer and @nandakumar131 for committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294859)
Time Spent: 50m  (was: 40m)

> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1966) Wrong expected key ACL in acceptance test

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907437#comment-16907437
 ] 

Hudson commented on HDDS-1966:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17123 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17123/])
HDDS-1966. Wrong expected key ACL in acceptance test (aengineer: rev 
06d8ac95226ef45aa810668f175a70a0ce9b7cb1)
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (edit) hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-fs.robot


> Wrong expected key ACL in acceptance test
> -
>
> Key: HDDS-1966
> URL: https://issues.apache.org/jira/browse/HDDS-1966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Acceptance test fails at ACL checks:
> {code:title=https://elek.github.io/ozone-ci/trunk/trunk-nightly-wxhxr/acceptance/smokeresult/log.html#s1-s16-s2-t4-k2}
> [ {
>   "type" : "USER",
>   "name" : "testuser/s...@example.com",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "root",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "GROUP",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "ALL" ]
> }, {
>   "type" : "USER",
>   "name" : "superuser1",
>   "aclScope" : "ACCESS",
>   "aclList" : [ "READ", "WRITE", "READ_ACL", "WRITE_ACL" ]
> } ]' does not match '"type" : "GROUP",
> .*"name" : "superuser1*",
> .*"aclScope" : "ACCESS",
> .*"aclList" : . "READ", "WRITE", "READ_ACL", "WRITE_ACL"'
> {code}
> The test [sets user 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L123],
>  but [checks group 
> ACL|https://github.com/apache/hadoop/blob/0e4b757955ae8da1651b870c12458e3344c0b613/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot#L125].
>   I think this passed previously due to a bug that was 
> [fixed|https://github.com/apache/hadoop/pull/1234/files#diff-2d061b57a9838854d07da9e0eca64f31]
>  by HDDS-1917.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294865
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 16:51
Start Date: 14/Aug/19 16:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313978226
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2999,23 +3016,36 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
*/
   @Override
   public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-if(isAclEnabled) {
-  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-}
-// TODO: Audit ACL operation.
-switch (obj.getResourceType()) {
-case VOLUME:
-  return volumeManager.addAcl(obj, acl);
-case BUCKET:
-  return bucketManager.addAcl(obj, acl);
-case KEY:
-  return keyManager.addAcl(obj, acl);
-case PREFIX:
-  return prefixManager.addAcl(obj, acl);
-default:
-  throw new OMException("Unexpected resource type: " +
-  obj.getResourceType(), INVALID_REQUEST);
+boolean auditSuccess = true;
+
+try{
+  if(isAclEnabled) {
+checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+  }
+  switch (obj.getResourceType()) {
+  case VOLUME:
+return volumeManager.addAcl(obj, acl);
+  case BUCKET:
+return bucketManager.addAcl(obj, acl);
+  case KEY:
+return keyManager.addAcl(obj, acl);
+  case PREFIX:
+return prefixManager.addAcl(obj, acl);
+  default:
+throw new OMException("Unexpected resource type: " +
+obj.getResourceType(), INVALID_REQUEST);
+  }
+} catch(Exception ex) {
+  auditSuccess = false;
+  auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL,
 
 Review comment:
   My comment is to only modify the auditAcl method, as below.
   
   private void auditAcl(OzoneObj ozoneObj, List<OzoneAcl> ozoneAcl,
       OMAction omAction, Exception ex) {
     Map<String, String> auditMap = ozoneObj.toAuditMap();
     if (ozoneAcl != null) {
       auditMap.put(OzoneConsts.ACL, ozoneAcl.toString());
     }

     if (ex == null) {
       AUDIT.logWriteSuccess(
           buildAuditMessageForSuccess(omAction, auditMap));
     } else {
       AUDIT.logWriteFailure(
           buildAuditMessageForFailure(omAction, auditMap, ex));
     }
   }
   
   

 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294865)
Time Spent: 4h  (was: 3h 50m)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1768) Audit xxxAcl methods in OzoneManager

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?focusedWorklogId=294867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294867
 ]

ASF GitHub Bot logged work on HDDS-1768:


Author: ASF GitHub Bot
Created on: 14/Aug/19 16:52
Start Date: 14/Aug/19 16:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1204: 
HDDS-1768. Audit xxxAcl methods in OzoneManager
URL: https://github.com/apache/hadoop/pull/1204#discussion_r313978226
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -2999,23 +3016,36 @@ public OmKeyInfo lookupFile(OmKeyArgs args) throws 
IOException {
*/
   @Override
   public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
-if(isAclEnabled) {
-  checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
-  obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
-}
-// TODO: Audit ACL operation.
-switch (obj.getResourceType()) {
-case VOLUME:
-  return volumeManager.addAcl(obj, acl);
-case BUCKET:
-  return bucketManager.addAcl(obj, acl);
-case KEY:
-  return keyManager.addAcl(obj, acl);
-case PREFIX:
-  return prefixManager.addAcl(obj, acl);
-default:
-  throw new OMException("Unexpected resource type: " +
-  obj.getResourceType(), INVALID_REQUEST);
+boolean auditSuccess = true;
+
+try{
+  if(isAclEnabled) {
+checkAcls(obj.getResourceType(), obj.getStoreType(), ACLType.WRITE_ACL,
+obj.getVolumeName(), obj.getBucketName(), obj.getKeyName());
+  }
+  switch (obj.getResourceType()) {
+  case VOLUME:
+return volumeManager.addAcl(obj, acl);
+  case BUCKET:
+return bucketManager.addAcl(obj, acl);
+  case KEY:
+return keyManager.addAcl(obj, acl);
+  case PREFIX:
+return prefixManager.addAcl(obj, acl);
+  default:
+throw new OMException("Unexpected resource type: " +
+obj.getResourceType(), INVALID_REQUEST);
+  }
+} catch(Exception ex) {
+  auditSuccess = false;
+  auditAcl(obj, Arrays.asList(acl), OMAction.ADD_ACL,
 
 Review comment:
   My comment is to only modify the auditAcl method, as below.
   
   ```
   private void auditAcl(OzoneObj ozoneObj, List<OzoneAcl> ozoneAcl,
       OMAction omAction, Exception ex) {
     Map<String, String> auditMap = ozoneObj.toAuditMap();
     if (ozoneAcl != null) {
       auditMap.put(OzoneConsts.ACL, ozoneAcl.toString());
     }

     if (ex == null) {
       AUDIT.logWriteSuccess(
           buildAuditMessageForSuccess(omAction, auditMap));
     } else {
       AUDIT.logWriteFailure(
           buildAuditMessageForFailure(omAction, auditMap, ex));
     }
   }
   ```
   
   

 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294867)
Time Spent: 4h 10m  (was: 4h)

> Audit xxxAcl methods in OzoneManager
> 
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1964:

Status: Patch Available  (was: In Progress)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].
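> For context, the ~111 s spent per test case is consistent with the retry
> policy shown in the log: up to 10 connect attempts with a fixed 1 s sleep in
> between, plus the per-attempt connect timeout. A minimal sketch of that
> policy using the standard Hadoop retry API:
> {code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class RetryPolicySketch {
  public static void main(String[] args) {
    // The policy named in the log output: retry up to 10 times,
    // sleeping 1000 ms between attempts before giving up.
    RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
        10, 1000, TimeUnit.MILLISECONDS);
    System.out.println(policy);
  }
}
> {code}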



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1957) MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load generator

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila resolved HDDS-1957.
-
Resolution: Duplicate

> MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load 
> generator
> ---
>
> Key: HDDS-1957
> URL: https://issues.apache.org/jira/browse/HDDS-1957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>
> MiniOzoneChaosCluster exits because of ArrayIndexOutOfBoundsException in load 
> generator.
> It is exiting because of the following exception.
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.readData(MiniOzoneLoadGenerator.java:153)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.startAgedFilesLoad(MiniOzoneLoadGenerator.java:216)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$1(MiniOzoneLoadGenerator.java:242)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
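> This issue was resolved as a duplicate, so the fix is tracked elsewhere. As a
> general pattern for such failures, a defensive sketch (all names assumed, not
> the actual fix) that bounds a random index by the current list size:
> {code:java}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public final class SafePick {
  private SafePick() {
  }

  // Hypothetical sketch, not the actual fix: derive the index from the
  // list's current size so index 1 is never used on a one-element list.
  public static <T> T pickRandom(List<T> items) {
    if (items.isEmpty()) {
      throw new IllegalStateException("no items to pick from");
    }
    return items.get(ThreadLocalRandom.current().nextInt(items.size()));
  }
}
> {code}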



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14706) Checksums are not checked if block meta file is less than 7 bytes

2019-08-14 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907447#comment-16907447
 ] 

Chen Zhang commented on HDFS-14706:
---

{quote}I can see advantages to pushing the suspect block to the scanner and to 
just handling it directly in these special cases, so I am happy to go either way.
{quote}
I also prefer to push the suspect block to the scanner. In my latest patch of 
HDFS-13709, {{DataNode.reportBadBlocks}} will try to call 
{{blockScanner.markSuspectBlock}} if the blockScanner is enabled, and only 
reports to the NameNode when it is disabled.

In our company, we have hundreds of HBase clusters that provide online service. 
These clusters are very latency-sensitive, so we disabled the blockScanner on 
them to reduce the impact of disk I/O; in this case, reporting bad blocks to 
the NN is necessary.
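
A rough sketch of the dispatch described above (the flow is from the comment; 
the method context and the helper name are assumptions, not the actual 
HDFS-13709 patch):
{code:java}
// Hypothetical sketch inside DataNode: prefer the block scanner when it is
// enabled, otherwise fall back to reporting the bad block to the NameNode.
public void reportBadBlocks(ExtendedBlock block, FsVolumeSpi volume)
    throws IOException {
  if (blockScanner.isEnabled()) {
    blockScanner.markSuspectBlock(volume.getStorageID(), block);
  } else {
    reportBadBlockToNameNode(block); // assumed helper
  }
}
{code}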

> Checksums are not checked if block meta file is less than 7 bytes
> -
>
> Key: HDFS-14706
> URL: https://issues.apache.org/jira/browse/HDFS-14706
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14706.001.patch, HDFS-14706.002.patch
>
>
> If a block and its meta file are corrupted in a certain way, the corruption 
> can go unnoticed by a client, causing it to return invalid data.
> The meta file is expected to always have a header of 7 bytes and then a 
> series of checksums depending on the length of the block.
> If the metafile gets corrupted in such a way that its length is greater than 
> zero but less than 7 bytes, then the header is incomplete. In 
> BlockSender.java the logic checks if the metafile length is at least the size 
> of the header and if it is not, it does not error, but instead returns a NULL 
> checksum type to the client.
> https://github.com/apache/hadoop/blob/b77761b0e37703beb2c033029e4c0d5ad1dce794/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L327-L357
> If the client receives a NULL checksum type, it will not validate checksums 
> at all, and even corrupted data will be returned to the reader. This means 
> the corruption will go unnoticed and HDFS will never repair it. Even the 
> Volume Scanner will not notice the corruption as the checksums are silently 
> ignored.
> Additionally, if the meta file does have enough bytes that it attempts to 
> load the header, and the header is corrupted such that it is not valid, it 
> can cause the datanode Volume Scanner to exit, with an exception like the 
> following:
> {code}
> 2019-08-06 18:16:39,151 ERROR datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting because of exception 
> java.lang.IllegalArgumentException: id=51 out of range [0, 5)
>   at 
> org.apache.hadoop.util.DataChecksum$Type.valueOf(DataChecksum.java:76)
>   at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:167)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1140)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:451)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:266)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:446)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>   at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2019-08-06 18:16:39,152 INFO datanode.VolumeScanner: 
> VolumeScanner(/tmp/hadoop-sodonnell/dfs/data, 
> DS-7f103313-61ba-4d37-b63d-e8cf7d2ed5f7) exiting.
> {code}
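> For reference, a sketch of the defensive check the description implies (the 
> constant and method are assumptions, not the attached patch):
> {code:java}
import java.io.File;
import java.io.IOException;

public final class MetaHeaderCheck {
  // A meta file header is a fixed 7 bytes: a 2-byte version, a 1-byte
  // checksum type id, and a 4-byte bytesPerChecksum.
  static final int HEADER_LEN = 7;

  // Hypothetical sketch, not the attached patch: a non-empty meta file
  // shorter than the header should be treated as corrupt instead of being
  // silently mapped to a NULL checksum type.
  static void checkMetaLength(File metaFile) throws IOException {
    long len = metaFile.length();
    if (len > 0 && len < HEADER_LEN) {
      throw new IOException("Corrupt meta file " + metaFile + ": length "
          + len + " is shorter than the " + HEADER_LEN + "-byte header");
    }
  }

  private MetaHeaderCheck() {
  }
}
> {code}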



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1923) static/docs/start.html page doesn't render correctly on Firefox

2019-08-14 Thread Doroszlai, Attila (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907449#comment-16907449
 ] 

Doroszlai, Attila commented on HDDS-1923:
-

[~msingh], can you please post a screenshot and your Firefox version?  
{{start.html}} looks OK to me in both Firefox and Chrome.

> static/docs/start.html page doesn't render correctly on Firefox
> ---
>
> Key: HDDS-1923
> URL: https://issues.apache.org/jira/browse/HDDS-1923
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Anu Engineer
>Priority: Blocker
>
> static/docs/start.html page doesn't render correctly on Firefox



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1964:
--
Target Version/s: 0.4.1  (was: 0.5.0)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907463#comment-16907463
 ] 

Hudson commented on HDDS-1964:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17124 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17124/])
HDDS-1964. TestOzoneClientProducer fails with ConnectException (aengineer: rev 
82420851645f1644f597e11e14a1d70bb8a7cc23)
* (add) hadoop-ozone/s3gateway/src/test/resources/log4j.properties
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestOzoneClientProducer.java


> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294886&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294886
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 17:26
Start Date: 14/Aug/19 17:26
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294886)
Time Spent: 1h 10m  (was: 1h)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?focusedWorklogId=294885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-294885
 ]

ASF GitHub Bot logged work on HDDS-1964:


Author: ASF GitHub Bot
Created on: 14/Aug/19 17:26
Start Date: 14/Aug/19 17:26
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1292: HDDS-1964. 
TestOzoneClientProducer fails with ConnectException
URL: https://github.com/apache/hadoop/pull/1292#issuecomment-521340758
 
 
   Thanks @anuengineer (82420851645f1644f597e11e14a1d70bb8a7cc23) and 
@nandakumar131 (b1e4eeef59632ca127f6dded46bde3af2ee8558b) for committing this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 294885)
Time Spent: 1h  (was: 50m)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1964) TestOzoneClientProducer fails with ConnectException

2019-08-14 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1964:

   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

> TestOzoneClientProducer fails with ConnectException
> ---
>
> Key: HDDS-1964
> URL: https://issues.apache.org/jira/browse/HDDS-1964
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/trunk/trunk-nightly-wxhxr/unit/hadoop-ozone/s3gateway/org.apache.hadoop.ozone.s3.TestOzoneClientProducer.txt}
> ---
> Test set: org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> ---
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 222.239 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.s3.TestOzoneClientProducer
> testGetClientFailure[0](org.apache.hadoop.ozone.s3.TestOzoneClientProducer)  
> Time elapsed: 111.036 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'Couldn't create protocol ' but got unexpected exception: 
> java.net.ConnectException: Your endpoint configuration is wrong; For more 
> details see:  http://wiki.apache.org/hadoop/UnsetHostnameOrPort
> {code}
> Log output (with local log4j config) reveals that connection is attempted to 
> 0.0.0.0:9862:
> {code:title=log output}
> 2019-08-14 10:49:14,225 [main] INFO  ipc.Client 
> (Client.java:handleConnectionFailure(948)) - Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> {code}
> The address 0.0.0.0:9862 was added as default in 
> [HDDS-1920|https://github.com/apache/hadoop/commit/bf457797f607f3aeeb2292e63f440cb13e15a2d9].



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-14 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907486#comment-16907486
 ] 

Chen Zhang commented on HDFS-14609:
---

Thanks [~tasanuma] for providing the old revision of HDFS-13891, it's very 
helpful.

I've fixed these 2 tests; here are some details:
h3. TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

HADOOP-16314 and HADOOP-16354 made some changes which break the test:
 # Added an AuthFilterInitializer, which uses 
{{hadoop.http.authentication.kerberos.*}} instead of 
{{dfs.web.authentication.kerberos.*}} to initialize Kerberos
 # {{hadoop.http.authentication.kerberos.principal}} has a default value, so 
even if we don't configure this key, the cluster will still start normally

h3. TestRouterHttpDelegationToken
 # HDFS-14434 ignores the user.name query parameter in secure WebHDFS, and the 
initial version of this test leveraged this parameter to bypass the Kerberos 
authentication, so after HDFS-14434 it no longer works. I added a set of 
methods that send requests over an HTTP connection instead of 
{{WebHdfsFileSystem}} to keep it working.
 # HADOOP-16314 changed the configuration key of the authentication filter from 
{{dfs.web.authentication.filter}} to {{hadoop.http.filter.initializers}}, so I 
added a {{NoAuthFilterInitializer}} to initialize {{NoAuthFilter}}
 # For the case {{testGetDelegationToken()}}, the server address is set by 
WebHdfsFileSystem after it gets the response; the original address is the 
address of the RouterRpcServer. Since we now send requests over an HTTP 
connection directly, it is unnecessary to reset the address, so I removed this 
assert
 # For the case {{testCancelDelegationToken()}}, the {{InvalidToken}} exception 
is also generated by WebHdfsFileSystem and the logic is very complex; I think 
it is also unnecessary to keep this assert, so I use the 403 detection instead.

In the trunk code, the config {{dfs.web.authentication.filter}} is not used 
anywhere; I propose to deprecate this config and will track it in another Jira.
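
To make the filter-initializer change concrete, a minimal sketch of such an 
initializer ({{NoAuthFilterInitializer}} is the class named above; the filter 
class name in the body is an assumption, not the actual test code):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;

// A FilterInitializer registers its servlet filter with the embedded HTTP
// server; it is wired in through hadoop.http.filter.initializers.
public class NoAuthFilterInitializer extends FilterInitializer {
  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    // The filter class name here is hypothetical, for illustration only.
    container.addFilter("noAuth", "org.example.NoAuthFilter", null);
  }
}
{code}
The test then sets {{hadoop.http.filter.initializers}} to this class instead of 
configuring {{dfs.web.authentication.filter}}.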

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HADOOP-16354 in trunk, auth filters seem to have been changed, causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14725) Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks until next report)

2019-08-14 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907485#comment-16907485
 ] 

Wei-Chiu Chuang commented on HDFS-14725:


I'll review later today.

> Backport HDFS-12914 to branch-2 (Block report leases cause missing blocks 
> until next report)
> 
>
> Key: HDFS-14725
> URL: https://issues.apache.org/jira/browse/HDFS-14725
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Wei-Chiu Chuang
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14725.branch-2.001.patch, 
> HDFS-14725.branch-2.002.patch, HDFS-14725.branch-2.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-14 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907486#comment-16907486
 ] 

Chen Zhang edited comment on HDFS-14609 at 8/14/19 5:45 PM:


Thanks [~tasanuma] for providing the old revision of HDFS-13891, it's very 
helpful.

I've fixed these 2 tests; here are some details:
h3. TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

HADOOP-16314 and HADOOP-16354 made some changes which break the test:
 # Added an AuthFilterInitializer, which uses 
{{hadoop.http.authentication.kerberos.*}} instead of 
{{dfs.web.authentication.kerberos.*}} to initialize Kerberos
 # {{hadoop.http.authentication.kerberos.principal}} has a default value, so 
even if we don't configure this key, the cluster will still start normally

h3. TestRouterHttpDelegationToken
 # HDFS-14434 ignores the user.name query parameter in secure WebHDFS, and the 
initial version of this test leveraged this parameter to bypass the Kerberos 
authentication, so after HDFS-14434 it no longer works. I added a set of 
methods that send requests over an HTTP connection instead of 
{{WebHdfsFileSystem}} to keep it working.
 # HADOOP-16314 changed the configuration key of the authentication filter from 
{{dfs.web.authentication.filter}} to {{hadoop.http.filter.initializers}}, so I 
added a {{NoAuthFilterInitializer}} to initialize {{NoAuthFilter}}
 # For the case {{testGetDelegationToken()}}, the server address is set by 
WebHdfsFileSystem after it gets the response; the original address is the 
address of the RouterRpcServer. Since we now send requests over an HTTP 
connection directly, it is unnecessary to reset the address, so I removed this 
assert
 # For the case {{testCancelDelegationToken()}}, the {{InvalidToken}} exception 
is also generated by WebHdfsFileSystem and the logic is very complex; I think 
it is also unnecessary to keep this assert, so I use the 403 detection instead.

In the trunk code, the config {{dfs.web.authentication.filter}} is not used 
anywhere; I propose to deprecate this config and will track it in another Jira.


was (Author: zhangchen):
Thanks [~tasanuma] for providing the old revision of HDFS-13891, it's very 
helpful.

I've fixed these 2 tests; here are some details:
h3. TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

HADOOP-16314 and HADOOP-16354 made some changes which break the test:
 # Added an AuthFilterInitializer, which uses 
{{hadoop.http.authentication.kerberos.*}} instead of 
{{dfs.web.authentication.kerberos.*}} to initialize Kerberos
 # {{hadoop.http.authentication.kerberos.principal}} has a default value, so 
even if we don't configure this key, the cluster will still start normally

h3. TestRouterHttpDelegationToken
 # HDFS-14434 ignores the user.name query parameter in secure WebHDFS, and the 
initial version of this test leveraged this parameter to bypass the Kerberos 
authentication, so after HDFS-14434 it no longer works. I added a set of 
methods that send requests over an HTTP connection instead of 
{{WebHdfsFileSystem}} to keep it working.
 # HADOOP-16314 changed the configuration key of the authentication filter from 
{{dfs.web.authentication.filter}} to {{hadoop.http.filter.initializers}}, so I 
added a {{NoAuthFilterInitializer}} to initialize {{NoAuthFilter}}
 # For the case {{testGetDelegationToken()}}, the server address is set by 
WebHdfsFileSystem after it gets the response; the original address is the 
address of the RouterRpcServer. Since we now send requests over an HTTP 
connection directly, it is unnecessary to reset the address, so I removed this 
assert
 # For the case {{testCancelDelegationToken()}}, the {{InvalidToken}} exception 
is also generated by WebHdfsFileSystem and the logic is very complex; I think 
it is also unnecessary to keep this assert, so I use the 403 detection instead.

In the trunk code, the config {{dfs.web.authentication.filter}} is not used 
anywhere; I propose to deprecate this config and will track it in another Jira.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HADOOP-16354 in trunk, auth filters seem to have been changed, causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Comment Edited] (HDFS-14609) RBF: Security should use common AuthenticationFilter

2019-08-14 Thread Chen Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907486#comment-16907486
 ] 

Chen Zhang edited comment on HDFS-14609 at 8/14/19 5:46 PM:


Thanks [~tasanuma] for providing the old revision of HDFS-13891, it's very 
helpful.

I've fixed these 2 tests; here are some details:
h3. TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

HADOOP-16314 and HADOOP-16354 made some changes which break the test:
 # Added an AuthFilterInitializer, which uses 
{{hadoop.http.authentication.kerberos.*}} instead of 
{{dfs.web.authentication.kerberos.*}} to initialize Kerberos
 # {{hadoop.http.authentication.kerberos.principal}} has a default value, so 
even if we don't configure this key, the cluster will still start normally

h3. TestRouterHttpDelegationToken
 # HDFS-14434 ignores the user.name query parameter in secure WebHDFS, and the 
initial version of this test leveraged this parameter to bypass the Kerberos 
authentication, so after HDFS-14434 it no longer works. I added a set of 
methods that send requests over an HTTP connection instead of 
{{WebHdfsFileSystem}} to keep it working.
 # HADOOP-16314 changed the configuration key of the authentication filter from 
{{dfs.web.authentication.filter}} to {{hadoop.http.filter.initializers}}, so I 
added a {{NoAuthFilterInitializer}} to initialize {{NoAuthFilter}}
 # For the case {{testGetDelegationToken()}}, the server address is set by 
WebHdfsFileSystem after it gets the response; the original address is the 
address of the RouterRpcServer. Since we now send requests over an HTTP 
connection directly, it is unnecessary to reset the address, so I removed this 
assert
 # For the case {{testCancelDelegationToken()}}, the {{InvalidToken}} exception 
is also generated by WebHdfsFileSystem and the logic is very complex; I think 
it is also unnecessary to keep this assert, so I use the 403 detection instead.

In the trunk code, the config {{dfs.web.authentication.filter}} is not used 
anywhere; I propose to deprecate this config and will track it in another Jira.


was (Author: zhangchen):
Thanks [~tasanuma] for providing the old revision of HDFS-13891, it's very 
helpful.

I've fixed these 2 tests; here are some details:
h3. TestRouterWithSecureStartup#testStartupWithoutSpnegoPrincipal

HADOOP-16314 and HADOOP-16354 made some changes which break the test:
 # Added an AuthFilterInitializer, which uses 
{{hadoop.http.authentication.kerberos.*}} instead of 
{{dfs.web.authentication.kerberos.*}} to initialize Kerberos
 # {{hadoop.http.authentication.kerberos.principal}} has a default value, so 
even if we don't configure this key, the cluster will still start normally

h3. TestRouterHttpDelegationToken
 # HDFS-14434 ignores the user.name query parameter in secure WebHDFS, and the 
initial version of this test leveraged this parameter to bypass the Kerberos 
authentication, so after HDFS-14434 it no longer works. I added a set of 
methods that send requests over an HTTP connection instead of 
{{WebHdfsFileSystem}} to keep it working.
 # HADOOP-16314 changed the configuration key of the authentication filter from 
{{dfs.web.authentication.filter}} to {{hadoop.http.filter.initializers}}, so I 
added a {{NoAuthFilterInitializer}} to initialize {{NoAuthFilter}}
 # For the case {{testGetDelegationToken()}}, the server address is set by 
WebHdfsFileSystem after it gets the response; the original address is the 
address of the RouterRpcServer. Since we now send requests over an HTTP 
connection directly, it is unnecessary to reset the address, so I removed this 
assert
 # For the case {{testCancelDelegationToken()}}, the {{InvalidToken}} exception 
is also generated by WebHdfsFileSystem and the logic is very complex; I think 
it is also unnecessary to keep this assert, so I use the 403 detection instead.

In the trunk code, the config {{dfs.web.authentication.filter}} is not used 
anywhere; I propose to deprecate this config and will track it in another Jira.

> RBF: Security should use common AuthenticationFilter
> 
>
> Key: HDFS-14609
> URL: https://issues.apache.org/jira/browse/HDFS-14609
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: CR Hota
>Assignee: Chen Zhang
>Priority: Major
>
> We worked on router based federation security as part of HDFS-13532. We kept 
> it compatible with the way namenode works. However with HADOOP-16314 and 
> HADOOP-16354 in trunk, auth filters seem to have been changed, causing tests to 
> fail.
> Changes are needed appropriately in RBF, mainly fixing broken tests.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional com
