[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=255261&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255261
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 06/Jun/19 17:13
Start Date: 06/Jun/19 17:13
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 255261)
Time Spent: 2h 40m  (was: 2.5h)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.
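For illustration only, here is a small, self-contained Java sketch of the idea above: the caller's user name and remote address travel inside the request object and are handed to the audit message builder, instead of being read from the RPC server's thread context (which is null when the request runs under the gRPC path). The class and method names below are simplified stand-ins, not the actual OMRequest or AuditLogger APIs.

```java
import java.util.HashMap;
import java.util.Map;

public final class RequestScopedAuditSketch {

  /** Caller identity carried inside the request (the role UserInfo plays after HDDS-1600). */
  static final class UserInfo {
    final String userName;
    final String remoteAddress;

    UserInfo(String userName, String remoteAddress) {
      this.userName = userName;
      this.remoteAddress = remoteAddress;
    }
  }

  /**
   * Builds an audit line from the request-scoped identity. The user and IP come
   * from the request itself, not from the RPC server's thread-local context,
   * which is null when the request is executed under the gRPC/Ratis path.
   */
  static String buildAuditMessage(String action, Map<String, String> auditMap,
      UserInfo userInfo, Throwable failure) {
    return String.format("user=%s | ip=%s | op=%s | params=%s | ret=%s",
        userInfo.userName, userInfo.remoteAddress, action, auditMap,
        failure == null ? "SUCCESS" : "FAILURE: " + failure.getMessage());
  }

  public static void main(String[] args) {
    UserInfo userInfo = new UserInfo("hadoop", "127.0.0.1");
    Map<String, String> auditMap = new HashMap<>();
    auditMap.put("volume", "vol1");
    auditMap.put("bucket", "bucket1");
    System.out.println(buildAuditMessage("CREATE_BUCKET", auditMap, userInfo, null));
  }
}
```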



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=255259&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255259
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 06/Jun/19 17:13
Start Date: 06/Jun/19 17:13
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499585766
 
 
   LGTM. +1. Thanks @bharatviswa504 
 



Issue Time Tracking
---

Worklog Id: (was: 255259)
Time Spent: 2.5h  (was: 2h 20m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254819
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 06/Jun/19 01:43
Start Date: 06/Jun/19 01:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499316838
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 801 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 344 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 541 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for patch |
   | +1 | mvninstall | 466 | the patch passed |
   | +1 | compile | 283 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 98 | hadoop-ozone generated 1 new + 8 unchanged - 0 fixed = 
9 total (was 8) |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1389 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 6519 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4809e1ca7ff6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 294695d |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/testReport/ |
   | Max. process+thread count | 5238 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 254819)
Time Spent: 2h 20m  (was: 2h 10m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: 

[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254793
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 06/Jun/19 00:36
Start Date: 06/Jun/19 00:36
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499305685
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 254793)
Time Spent: 2h 10m  (was: 2h)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254780
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:53
Start Date: 05/Jun/19 23:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290978411
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Done. Addressed it.
 



Issue Time Tracking
---

Worklog Id: (was: 254780)
Time Spent: 2h  (was: 1h 50m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254779
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:51
Start Date: 05/Jun/19 23:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290978023
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, it can be done. I will update it if you prefer it that way.
 



Issue Time Tracking
---

Worklog Id: (was: 254779)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254778&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254778
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:49
Start Date: 05/Jun/19 23:49
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290977673
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, I meant: why not move it outside the finally block, since we generally 
keep only the must-be-done things (like lock releases) in the finally block.
 



Issue Time Tracking
---

Worklog Id: (was: 254778)
Time Spent: 1h 40m  (was: 1.5h)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254754
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:13
Start Date: 05/Jun/19 23:13
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290969230
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, the idea is to avoid doing logging or other expensive work inside the 
lock. After adding the response to the cache, we can release the lock, so that 
other threads waiting for the lock can acquire it. (This way we avoid future 
performance issues caused by audit logging; we are already seeing some 
performance issues.)
   
   On a side note, doing the audit logging outside the lock will not cause any 
side effects in my view. Let me know if I am missing anything here.
   
   Edit:
   Now I understand: we could also do this outside the finally block. But I 
think it should be fine either way.
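For illustration, here is a standalone Java sketch of the pattern being discussed: the cache update happens under the lock, the lock release stays in the finally block, and the (potentially expensive) audit logging runs only after the lock has been released. The lock and audit calls are simplified stand-ins, not the actual Ozone Manager classes.

```java
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

public class AuditOutsideLockSketch {

  private final ReentrantLock bucketLock = new ReentrantLock();

  public void createBucket(String volumeName, String bucketName) {
    IOException exception = null;
    bucketLock.lock();
    try {
      // Fast, in-memory work only while holding the lock
      // (stand-in for metadataManager.getBucketTable().addCacheEntry(...)).
      updateTableCache(volumeName, bucketName);
    } catch (IOException ex) {
      exception = ex;       // remembered so the audit entry records the failure
    } finally {
      bucketLock.unlock();  // must-do cleanup stays in the finally block
    }
    // Audit logging happens outside the lock so other writers are not blocked
    // by log I/O; it still reports success or failure via 'exception'.
    audit("CREATE_BUCKET", volumeName + "/" + bucketName, exception);
  }

  private void updateTableCache(String volumeName, String bucketName)
      throws IOException {
    // Placeholder for the real cache update.
  }

  private void audit(String action, String resource, IOException failure) {
    System.out.println("AUDIT " + action + " " + resource
        + (failure == null ? " SUCCESS" : " FAILURE: " + failure.getMessage()));
  }
}
```

Whether the audit call sits just after the finally block (as above) or at the end of the finally block after the unlock, the ordering is the same: the lock is released before the audit entry is written.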
 



Issue Time Tracking
---

Worklog Id: (was: 254754)
Time Spent: 1.5h  (was: 1h 20m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254752
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:09
Start Date: 05/Jun/19 23:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290969230
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, the idea is to avoid doing logging or other expensive work inside the 
lock. After adding the response to the cache, we can release the lock, so that 
other threads waiting for the lock can acquire it. (This way we avoid future 
performance issues caused by audit logging; we are already seeing some 
performance issues.)
   
   On a side note, doing the audit logging outside the lock will not cause any 
side effects in my view. Let me know if I am missing anything here.
 



Issue Time Tracking
---

Worklog Id: (was: 254752)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254751
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:09
Start Date: 05/Jun/19 23:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290969230
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Yes, the idea is to avoid doing logging or other expensive work inside the 
lock. After adding the response to the cache, we can release the lock, so that 
other threads waiting for the lock can acquire it. (This way we avoid future 
performance issues caused by audit logging; we are already seeing some 
performance issues.)
   
   On a side note, doing the audit logging outside the lock will not cause any 
side effects in my view. Let me know if I am missing anything here.
 



Issue Time Tracking
---

Worklog Id: (was: 254751)
Time Spent: 1h 10m  (was: 1h)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254744&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254744
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 23:04
Start Date: 05/Jun/19 23:04
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#discussion_r290968001
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
 ##
 @@ -152,27 +160,35 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 OMException.ResultCodes.BUCKET_ALREADY_EXISTS);
   }
 
-  LOG.debug("created bucket: {} in volume: {}", bucketName, volumeName);
-  omMetrics.incNumBuckets();
-
   // Update table cache.
   metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
   new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
-  // return response.
+
+} catch (IOException ex) {
+  exception = ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volumeName, bucketName);
+  metadataManager.getLock().releaseVolumeLock(volumeName);
+
+  // Performing audit logging outside of the lock.
+  auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_BUCKET,
+  omBucketInfo.toAuditMap(), exception, userInfo));
+}
 
 Review comment:
   Any reason for having the auditLog in the finally block?
 



Issue Time Tracking
---

Worklog Id: (was: 254744)
Time Spent: 1h  (was: 50m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254638
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 05/Jun/19 20:53
Start Date: 05/Jun/19 20:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-499251087
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 480 | trunk passed |
   | +1 | compile | 266 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 785 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 322 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 610 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 84 | hadoop-ozone generated 1 new + 8 unchanged - 0 fixed = 
9 total (was 8) |
   | +1 | findbugs | 533 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 243 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2426 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 7337 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 690734c4a711 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b1e288 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/testReport/ |
   | Max. process+thread count | 4434 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 254638)
Time Spent: 50m  (was: 40m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
>  

[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254101&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254101
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 04/Jun/19 23:45
Start Date: 04/Jun/19 23:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-498883968
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 557 | trunk passed |
   | +1 | compile | 282 | trunk passed |
   | +1 | checkstyle | 87 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 860 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 356 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 549 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 285 | the patch passed |
   | +1 | javac | 285 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 623 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 101 | hadoop-ozone generated 1 new + 8 unchanged - 0 fixed 
= 9 total (was 8) |
   | +1 | findbugs | 540 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 212 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1149 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6417 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 77ac0edec967 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 580b639 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/3/testReport/ |
   | Max. process+thread count | 5308 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 254101)
Time Spent: 40m  (was: 0.5h)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time

[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254093&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254093
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 04/Jun/19 22:58
Start Date: 04/Jun/19 22:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-498873374
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 550 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 343 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 537 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 547 | the patch passed |
   | +1 | compile | 295 | the patch passed |
   | +1 | javac | 295 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 618 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   | +1 | findbugs | 537 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 231 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1752 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7080 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 612c0afe9e5f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 580b639 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/2/testReport/ |
   | Max. process+thread count | 5292 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-867/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 254093)
Time Spent: 0.5h  (was: 20m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
>

[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-06-04 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=254044&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-254044
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 04/Jun/19 21:40
Start Date: 04/Jun/19 21:40
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #867: HDDS-1605. 
Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867#issuecomment-498854117
 
 
   @bharatviswa504 Overall +1 (non-binding), pending Jenkins.
 



Issue Time Tracking
---

Worklog Id: (was: 254044)
Time Spent: 20m  (was: 10m)

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.






[jira] [Work logged] (HDDS-1605) Implement AuditLogging for OM HA Bucket write requests

2019-05-28 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1605?focusedWorklogId=249612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-249612
 ]

ASF GitHub Bot logged work on HDDS-1605:


Author: ASF GitHub Bot
Created on: 28/May/19 20:16
Start Date: 28/May/19 20:16
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #867: 
HDDS-1605. Implement AuditLogging for OM HA Bucket write requests.
URL: https://github.com/apache/hadoop/pull/867
 
 
   This patch depends on HDDS-1551 and HDDS-1600.
   The first two commits are from HDDS-1551 and HDDS-1600.
   The last commit contains the changes needed for this PR.
   Opened this PR to get a UT and CI run.
 



Issue Time Tracking
---

Worklog Id: (was: 249612)
Time Spent: 10m
Remaining Estimate: 0h

> Implement AuditLogging for OM HA Bucket write requests
> --
>
> Key: HDDS-1605
> URL: https://issues.apache.org/jira/browse/HDDS-1605
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall implement audit logging for OM HA Bucket write 
> requests.
> We can no longer use the userName and IpAddress from the Server APIs, as these
> will be null because the requests are executed under the gRPC context. So, in our
> AuditLogger APIs we need to pass the username and remoteAddress, which we can get
> from the OMRequest after HDDS-1600, and use these during audit logging.


