[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946308#comment-16946308
 ] 

Anu Engineer commented on HDDS-2245:


The normal Jenkins run is broken when a patch is submitted; if you do it via GitHub, 
we have a working version. I have tested the patch manually and confirmed that 
it works as expected.

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using the default SCM ports; we should use dynamic 
> ports.
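
A minimal sketch of one common way to pick dynamic ports in a test, assuming the
test configures SCM addresses through the configuration object (the config key
name here is illustrative; the committed patch may use a different mechanism):

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

// Bind to port 0 so the OS assigns a free ephemeral port, then release it
// and hand it to the test configuration.
private static int getFreePort() throws IOException {
  try (ServerSocket socket = new ServerSocket(0)) {
    socket.setReuseAddress(true);
    return socket.getLocalPort();
  }
}

// e.g. conf.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY,
//     "localhost:" + getFreePort());
{code}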






[jira] [Updated] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2245:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~pingsutw] Thank you for the contribution. I have committed this patch to the 
trunk.

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using the default SCM ports; we should use dynamic 
> ports.






[jira] [Updated] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2262:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk branch. Thanks for the contribution.

> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> E.g. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log






[jira] [Updated] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2259:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the trunk.

> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file. This is caused by 
> computing the checksum over the entire buffer, regardless of the actual size of 
> the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.
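
A minimal sketch of the kind of fix the description implies: feed the checksum
calculator only the bytes actually read, not the whole buffer (variable names
follow the snippet above; {{Arrays.copyOf}} is one possible way to truncate):

{code:java}
byte[] buffer = new byte[cData.getBytesPerChecksum()];
int v = fs.read(buffer);
bytesRead += v;
// Compute the checksum over the v bytes actually read; for the last chunk
// v is usually smaller than the buffer, which is what broke verification.
ByteString actual = cal.computeChecksum(Arrays.copyOf(buffer, v))
    .getChecksums().get(0);
{code}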






[jira] [Updated] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2264:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

I have committed this patch to the trunk. Thank you for the contribution.

> Improve output of TestOzoneContainer
> 
>
> Key: HDDS-2264
> URL: https://issues.apache.org/jira/browse/HDDS-2264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestOzoneContainer#testContainerCreateDiskFull fails intermittently 
> (HDDS-2263), but the test output does not reveal much about the reason. The 
> goal of this task is to improve the assertion/output to make it easier to fix 
> the failure.






[jira] [Updated] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2238:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk

> Container Data Scrubber spams log in empty cluster
> --
>
> Key: HDDS-2238
> URL: https://issues.apache.org/jira/browse/HDDS-2238
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In an empty cluster (without closed containers), the logs are filled with messages 
> from completed data scrubber iterations (~3600 per second for me) if the 
> Container Scanner is enabled ({{hdds.containerscrub.enabled=true}}), e.g.:
> {noformat}
> datanode_1  | 2019-10-03 15:43:57 INFO  ContainerDataScanner:114 - Completed 
> an iteration of container data scrubber in 0 minutes. Number of  iterations 
> (since the data-node restart) : 6763, Number of containers scanned in this 
> iteration : 0, Number of unhealthy containers found in this iteration : 0
> {noformat} 
> Also, CPU usage is quite high.
> I think:
> # there should be a small sleep between iterations
> # it should log only if any containers were scanned (a sketch of both changes 
> follows below)
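
A minimal sketch of both suggestions, assuming a hypothetical {{scanIteration()}}
helper that runs one pass and returns the number of containers scanned (only the
log-message wording comes from the report; everything else is illustrative):

{code:java}
// Illustrative scrubber loop; all names except the log text are assumptions.
private void runScrubberLoop() throws InterruptedException {
  while (!stopped) {
    int scanned = scanIteration(); // hypothetical: one iteration, returns count
    if (scanned > 0) {
      // Suggestion 2: log only when containers were actually scanned.
      LOG.info("Completed an iteration of container data scrubber: "
          + "{} container(s) scanned.", scanned);
    }
    // Suggestion 1: a small sleep so an empty cluster does not busy-loop.
    Thread.sleep(1000);
  }
}
{code}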






[jira] [Assigned] (HDDS-2070) Create insight point to debug one specific pipeline

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-2070:
--

Assignee: Marton Elek

> Create insight point to debug one specific pipeline
> ---
>
> Key: HDDS-2070
> URL: https://issues.apache.org/jira/browse/HDDS-2070
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the first demo of the ozone insight tool we had a demo insight point to 
> debug Ratis pipelines. It was not stable enough to include in the first patch, 
> but here we can add it.
> The goal is to implement a new insight point (e.g. datanode.pipeline) which 
> can show information about one pipeline.
> It can be done by retrieving the hosts of the pipeline and generating the 
> logger metrics (InsightPoint.getRelatedLoggers and InsightPoint.getMetrics) 
> based on the pipeline information (the same loggers should be displayed from 
> all three datanodes).
> The pipeline id can be defined as a filter parameter, which (in this case) 
> should be required.
>  






[jira] [Assigned] (HDDS-2261) Change readChunk methods to return ByteBuffer

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-2261:
--

Assignee: Istvan Fajth

> Change readChunk methods to return ByteBuffer
> -
>
> Key: HDDS-2261
> URL: https://issues.apache.org/jira/browse/HDDS-2261
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: pull-request-available
>
> During the refactoring for HDDS-2233 I realized the following:
> KeyValueHandler.handleReadChunk and handleGetSmallFile are using 
> ChunkManager.readChunk, which returns a byte[], but then both of them (the 
> only usage points) convert the returned byte[] to a ByteBuffer, and then to 
> a ByteString.
> ChunkManagerImpl, on the other hand, in readChunk utilizes 
> ChunkUtils.readChunk, which, in order to conform to the return type, converts a 
> ByteBuffer back to a byte[].
> I am opening this JIRA to change the internal logic to fully rely on ByteBuffers 
> instead of converting from ByteBuffer to byte[] and then to ByteBuffer again.
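
A minimal sketch of the target shape, assuming a hypothetical ByteBuffer-returning
{{readChunk}} signature; protobuf's {{UnsafeByteOperations.unsafeWrap}} can then
produce the {{ByteString}} without the extra byte[] round trip:

{code:java}
// Hypothetical ByteBuffer-based signature for ChunkManager.readChunk.
ByteBuffer data = chunkManager.readChunk(container, blockID, chunkInfo);
// Wrap the buffer directly; unsafeWrap avoids the defensive copy that
// ByteString.copyFrom would make (acceptable when the buffer is not reused).
ByteString chunkData = UnsafeByteOperations.unsafeWrap(data);
{code}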






[jira] [Assigned] (HDDS-2233) Remove ByteStringHelper and refactor the code to the place where it used

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-2233:
--

Assignee: Istvan Fajth

> Remove ByteStringHelper and refactor the code to the place where it used
> 
>
> Key: HDDS-2233
> URL: https://issues.apache.org/jira/browse/HDDS-2233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> See HDDS-2203, where I reported a race condition.
> Later in the discussion we agreed that it is better to refactor the code and 
> remove the class completely for now, which would also resolve the race 
> condition.






[jira] [Commented] (HDDS-2261) Change readChunk methods to return ByteBuffer

2019-10-07 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946015#comment-16946015
 ] 

Anu Engineer commented on HDDS-2261:


Thank you for finding this issue. Appreciate it.


> Change readChunk methods to return ByteBuffer
> -
>
> Key: HDDS-2261
> URL: https://issues.apache.org/jira/browse/HDDS-2261
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Priority: Major
>  Labels: pull-request-available
>
> During the refactoring for HDDS-2233 I realized the following:
> KeyValueHandler.handleReadChunk and handleGetSmallFile are using 
> ChunkManager.readChunk, which returns a byte[], but then both of them (the 
> only usage points) convert the returned byte[] to a ByteBuffer, and then to 
> a ByteString.
> ChunkManagerImpl, on the other hand, in readChunk utilizes 
> ChunkUtils.readChunk, which, in order to conform to the return type, converts a 
> ByteBuffer back to a byte[].
> I am opening this JIRA to change the internal logic to fully rely on ByteBuffers 
> instead of converting from ByteBuffer to byte[] and then to ByteBuffer again.






[jira] [Commented] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-10-04 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944814#comment-16944814
 ] 

Anu Engineer commented on HDDS-2247:


Perhaps we should always do GDPR, irrespective of what the encryption status is. 
The issue is that we don't control the lifetime of the encryption keys at all.

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> As part of HDDS-2174 we are deleting the GDPR encryption key on the delete-file 
> operation.
> However, if KMS is enabled, we skip the GDPR encryption key approach when 
> writing a file in a GDPR-enforced bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>       new CryptoOutputStream(keyOutputStream,
>           OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>           decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try {
>     GDPRSymmetricKey gk;
>     Map<String, String> openKeyMetadata =
>         openKey.getKeyInfo().getMetadata();
>     if (Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))) {
>       gk = new GDPRSymmetricKey(
>           openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>           openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>       );
>       gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>       return new OzoneOutputStream(
>           new CipherOutputStream(keyOutputStream, gk.getCipher()));
>     }
>   } catch (Exception ex) {
>     throw new IOException(ex);
>   }
> {code}
> In such a scenario, when KMS is enabled and GDPR is enforced on a bucket, if a 
> user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
> before moving it to the deletedTable; otherwise we cannot guarantee the Right 
> to Erasure.






[jira] [Created] (HDDS-2256) Checkstyle issues in CheckSumByteBuffer.java

2019-10-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2256:
--

 Summary: Checkstyle issues in CheckSumByteBuffer.java
 Key: HDDS-2256
 URL: https://issues.apache.org/jira/browse/HDDS-2256
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


HDDS- added some checkstyle failures in CheckSumByteBuffer.java. This JIRA 
tracks and fixes those checkstyle issues.

{code}
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/ChecksumByteBuffer.java
 84: Inner assignments should be avoided.
 85: Inner assignments should be avoided.
 101: child has incorrect indentation level 8, expected level should be 6.
 102: child has incorrect indentation level 8, expected level should be 6.
 103: child has incorrect indentation level 8, expected level should be 6.
 104: child has incorrect indentation level 8, expected level should be 6.
 105: child has incorrect indentation level 8, expected level should be 6.
 106: child has incorrect indentation level 8, expected level should be 6.
 107: child has incorrect indentation level 8, expected level should be 6.
 108: child has incorrect indentation level 8, expected level should be 6.
{code}






[jira] [Resolved] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2020.

Fix Version/s: 0.4.1
   Resolution: Fixed

Committed to both 0.4.1 and trunk

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Generic gRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server, and for wire encryption. 
> Removing the mTLS support also simplifies the gRPC server/client configuration.






[jira] [Resolved] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2200.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk. Thanks for the contribution.

> Recon does not handle the NULL snapshot from OM DB cleanly.
> ---
>
> Key: HDDS-2200
> URL: https://issues.apache.org/jira/browse/HDDS-2200
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code}
> 2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location 
> got from OM.
> 2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on 
> Recon tasks.
> 2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' 
> run of ContainerKeyMapperTask.
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609319840
> 2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon 
> Container DB at /tmp/recon/db/recon-container.db_1569609258721.
> 2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
> at 
> org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
> at 
> org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> {code}
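
A minimal sketch of the kind of guard the fix implies, with names taken from the
stack trace above (the exact control flow in the committed patch may differ):

{code:java}
// Illustrative guard in syncDataFromOM: skip reprocess when OM returns
// a null snapshot instead of letting reInitializeTasks hit an NPE.
DBCheckpoint checkpoint = getOzoneManagerDBSnapshot();
if (checkpoint == null || checkpoint.getCheckpointLocation() == null) {
  LOG.error("Null snapshot location got from OM; skipping this sync run.");
  return;
}
reconTaskController.reInitializeTasks(omMetadataManager);
{code}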






[jira] [Updated] (HDDS-1720) Add ability to configure RocksDB logs for Ozone Manager

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1720:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the trunk

> Add ability to configure RocksDB logs for Ozone Manager
> ---
>
> Key: HDDS-1720
> URL: https://issues.apache.org/jira/browse/HDDS-1720
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> While doing performance testing, it was seen that there was no way to get 
> RocksDB logs for Ozone Manager. Along with RocksDB metrics, this may be a 
> useful mechanism to understand the health of RocksDB while investigating 
> large clusters.
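
A minimal sketch of how RocksDB logging can be surfaced through the RocksJava
options API (the log directory path is illustrative, and the patch likely wires
these values to an Ozone configuration property rather than hard-coding them):

{code:java}
import org.rocksdb.DBOptions;
import org.rocksdb.InfoLogLevel;

// Illustrative RocksJava settings for capturing the RocksDB LOG output.
DBOptions options = new DBOptions()
    .setDbLogDir("/var/log/ozone/om-rocksdb") // assumed path
    .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
    .setMaxLogFileSize(32L * 1024 * 1024);
{code}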






[jira] [Commented] (HDDS-2241) Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the pipelines for a key

2019-10-03 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944059#comment-16944059
 ] 

Anu Engineer commented on HDDS-2241:


Agreed. I thought this was based on some bottleneck that we saw in testing, and I 
was trying to understand why this would cause a bottleneck. If this is something 
that you want to do for fun and to make the world a better place, be my guest. 
Post a patch and I will review it. Thanks for volunteering.

> Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the 
> pipelines for a key
> 
>
> Key: HDDS-2241
> URL: https://issues.apache.org/jira/browse/HDDS-2241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently, while looking up a key, the Ozone Manager gets the pipeline 
> information from SCM through an RPC for every block in the key. For large 
> files (> 1GB), we may end up making a lot of RPC calls for this. This can be 
> optimized in a couple of ways:
> * We can implement a batch getContainerWithPipeline API in SCM, using which we 
> can get the pipeline locations for all the blocks of a file. To cap the 
> number of containers passed to SCM in a single call, we can have a 
> fixed container batch size on the OM side. _Here, Number of calls = 1 (or k, 
> depending on batch size)_
> * Instead, a simpler change would be to have a map (method local) of 
> ContainerID -> Pipeline that we get from SCM, so that we don't need to make 
> repeated calls to SCM for the same containerID for a key (see the sketch 
> below). _Here, Number of calls = Number of unique containerIDs_
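
A minimal sketch of the second option, assuming the usual Ozone types
({{OmKeyLocationInfo}}, {{Pipeline}}) and an SCM client handle; the surrounding
loop structure is illustrative:

{code:java}
// Method-local cache: one SCM RPC per unique container rather than per block.
Map<Long, Pipeline> cache = new HashMap<>();
for (OmKeyLocationInfo loc : keyLocations) {
  long containerID = loc.getContainerID();
  Pipeline pipeline = cache.get(containerID);
  if (pipeline == null) {
    pipeline = scmClient.getContainerWithPipeline(containerID).getPipeline();
    cache.put(containerID, pipeline);
  }
  loc.setPipeline(pipeline);
}
{code}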






[jira] [Updated] (HDDS-2166) Some RPC metrics are missing from SCM prometheus endpoint

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2166:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Some RPC metrics are missing from SCM prometheus endpoint
> -
>
> Key: HDDS-2166
> URL: https://issues.apache.org/jira/browse/HDDS-2166
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In Hadoop metrics it's possible to register multiple metrics with the same 
> name but with different tags. For example, each RpcServer has its own metrics 
> instance in SCM.
> {code}
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9860",
> "name" : 
> "Hadoop:service=StorageContainerManager,name=RpcActivityForPort9863",
> {code}
> They are converted by PrometheusSink to a Prometheus metric line with the proper 
> name and tags. For example:
> {code}
> rpc_rpc_queue_time60s_num_ops{port="9860",servername="StorageContainerLocationProtocolService",context="rpc",hostname="72736061cbc5"}
>  0
> {code}
> The PrometheusSink uses a Map to cache all the recent values, but 
> unfortunately the key contains only the name (rpc_rpc_queue_time60s_num_ops 
> in our example) and not the tags (port=...).
> For this reason, if there are multiple metrics with the same name, only the 
> first one is displayed.
> As a result, in SCM only the metrics of the first RPC server can be exported 
> to the Prometheus endpoint. 
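
A minimal sketch of the fix direction, assuming the sink keeps a Map of rendered
metric lines: make the cache key include the tag set, not just the metric name
(all names here are illustrative, not the actual PrometheusSink fields):

{code:java}
// Illustrative: key the cache on name + tags so same-named metrics from
// different RPC servers (port="9860" vs port="9863") do not collide.
String tagPart = tags.entrySet().stream()
    .map(e -> e.getKey() + "=\"" + e.getValue() + "\"")
    .sorted()
    .collect(Collectors.joining(","));
String cacheKey = metricName + "{" + tagPart + "}";
metricLines.put(cacheKey, renderedLine);
{code}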






[jira] [Updated] (HDDS-2231) test-single.sh cannot copy results

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2231:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the trunk.

> test-single.sh cannot copy results
> --
>
> Key: HDDS-2231
> URL: https://issues.apache.org/jira/browse/HDDS-2231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Previously the {{result}} directory was created simply by {{source}}-ing 
> {{testlib.sh}}, but HDDS-2185 changed that to avoid losing results. 
> {{test-single.sh}} needs to be adjusted accordingly.
> {noformat}
> $ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
> $ docker-compose up -d --scale datanode=3
> $ ../test-single.sh scm basic/basic.robot
> ...
> invalid output path: directory 
> "hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone/result" does not 
> exist
> {noformat}






[jira] [Assigned] (HDDS-2221) Monitor datanodes in ozoneperf compose cluster

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-2221:
--

Assignee: Marton Elek

> Monitor datanodes in ozoneperf compose cluster
> --
>
> Key: HDDS-2221
> URL: https://issues.apache.org/jira/browse/HDDS-2221
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ozoneperf compose cluster contains a Prometheus instance, but as of now it 
> collects data only from SCM and OM.
> We don't know the exact number of datanodes (they can be scaled up and down), 
> so it's harder to configure the datanode host names. I would suggest 
> configuring the first 10 datanodes (which covers most of the use cases).
> How to test?
> {code:java}
> cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozoneperf
> docker-compose up -d
> firefox http://localhost:9090/targets
>   {code}
>  






[jira] [Updated] (HDDS-2234) rat.sh fails due to ozone-recon-web/build files

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2234:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the trunk.

> rat.sh fails due to ozone-recon-web/build files
> ---
>
> Key: HDDS-2234
> URL: https://issues.apache.org/jira/browse/HDDS-2234
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Anu Engineer
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR] mvn  -rf :hadoop-ozone-recon
> [INFO] Build failures were ignored.
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/index.html
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/main.96eebd44.chunk.css
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/main.5bb53989.chunk.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js.map
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/precache-manifest.1d05d7a103ee9d6b280ef7adfcab3c01.js
> hadoop-ozone/recon/target/rat.txt: !? 
> /Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/service-worker.js






[jira] [Commented] (HDDS-2241) Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the pipelines for a key

2019-10-03 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944042#comment-16944042
 ] 

Anu Engineer commented on HDDS-2241:


What is the block size? For a file size of 1 GB, if you assume that the 
largest block size is 256MB, then you have 4 RPC calls. Does that make any 
significant difference? For small files, you will have to make one call -- and 
hopefully that is it. I am curious to understand what the data really 
indicates; are these calls that expensive?

> Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the 
> pipelines for a key
> 
>
> Key: HDDS-2241
> URL: https://issues.apache.org/jira/browse/HDDS-2241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently, while looking up a key, the Ozone Manager gets the pipeline 
> information from SCM through an RPC for every block in the key. For large 
> files (> 1GB), we may end up making a lot of RPC calls for this. This can be 
> optimized in a couple of ways:
> * We can implement a batch getContainerWithPipeline API in SCM, using which we 
> can get the pipeline locations for all the blocks of a file. To cap the 
> number of containers passed to SCM in a single call, we can have a 
> fixed container batch size on the OM side. _Here, Number of calls = 1 (or k, 
> depending on batch size)_
> * Instead, a simpler change would be to have a map (method local) of 
> ContainerID -> Pipeline that we get from SCM, so that we don't need to make 
> repeated calls to SCM for the same containerID for a key. _Here, Number of 
> calls = Number of unique containerIDs_






[jira] [Resolved] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-03 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2226.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk.

> S3 Secrets should use a strong RNG
> --
>
> Key: HDDS-2226
> URL: https://issues.apache.org/jira/browse/HDDS-2226
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The S3 token generation under ozone should use a strong RNG. 
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
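
A minimal sketch of generating an S3 secret from a strong RNG with standard JDK
APIs (the secret length and encoding are assumptions, not necessarily what the
committed patch uses):

{code:java}
import java.security.SecureRandom;
import java.util.Base64;

// getInstanceStrong() may block while gathering entropy; a plain
// new SecureRandom() is the usual non-blocking alternative.
SecureRandom rng = SecureRandom.getInstanceStrong();
byte[] secretBytes = new byte[32]; // assumed length
rng.nextBytes(secretBytes);
String s3Secret = Base64.getEncoder().withoutPadding().encodeToString(secretBytes);
{code}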






[jira] [Updated] (HDDS-2072) Make StorageContainerLocationProtocolService message based

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2072:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> Make StorageContainerLocationProtocolService message based
> --
>
> Key: HDDS-2072
> URL: https://issues.apache.org/jira/browse/HDDS-2072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerLocationProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.






[jira] [Resolved] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2227.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk branch.

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not 
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.
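
A minimal sketch of seeding AES key generation for the GDPR symmetric key from
SecureRandom (the algorithm and key size follow common defaults and are
assumptions, not necessarily what the committed patch does):

{code:java}
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Use SecureRandom rather than a weaker RNG when deriving the GDPR key.
KeyGenerator keyGen = KeyGenerator.getInstance("AES"); // assumed algorithm
keyGen.init(128, new SecureRandom());                  // assumed key size
SecretKey gdprKey = keyGen.generateKey();
{code}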






[jira] [Updated] (HDDS-2073) Make SCMSecurityProtocol message based

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2073:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> Make SCMSecurityProtocol message based
> --
>
> Key: HDDS-2073
> URL: https://issues.apache.org/jira/browse/HDDS-2073
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> SCMSecurityProtocol.proto is not yet migrated to this approach. To make our 
> generic debug tool more powerful and to unify our protocols, I suggest 
> transforming this protocol as well.






[jira] [Updated] (HDDS-2068) Make StorageContainerDatanodeProtocolService message based

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2068:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> Make StorageContainerDatanodeProtocolService message based
> --
>
> Key: HDDS-2068
> URL: https://issues.apache.org/jira/browse/HDDS-2068
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We started to use a generic pattern where we have only one method in the gRPC 
> service and the main message contains all the required common information 
> (e.g. tracing).
> StorageContainerDatanodeProtocolService is not yet migrated to this approach. 
> To make our generic debug tool more powerful and to unify our protocols, I 
> suggest transforming this protocol as well.






[jira] [Created] (HDDS-2234) Running rat.sh without any parameter on mac fails due to the following files.

2019-10-02 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2234:
--

 Summary: Running rat.sh without any parameter on mac fails due to 
the following files.
 Key: HDDS-2234
 URL: https://issues.apache.org/jira/browse/HDDS-2234
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn  -rf :hadoop-ozone-recon
[INFO] Build failures were ignored.
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/index.html
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/2.8943d5a3.chunk.css.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/css/main.96eebd44.chunk.css
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/runtime~main.a8a9905a.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/main.5bb53989.chunk.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/static/js/2.ea549bfe.chunk.js.map
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/precache-manifest.1d05d7a103ee9d6b280ef7adfcab3c01.js
hadoop-ozone/recon/target/rat.txt: !? 
/Users/aengineer/apache/hadoop/hadoop-ozone/recon/src/main/resources/webapps/recon/ozone-recon-web/build/service-worker.js






[jira] [Resolved] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-10-02 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2201.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk.

> Rename VolumeList to UserVolumeInfo
> ---
>
> Key: HDDS-2201
> URL: https://issues.apache.org/jira/browse/HDDS-2201
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Under Ozone Manager, the Volume points to a structure called VolumeInfo, 
> Bucket points to BucketInfo, and Key points to KeyInfo. However, User points to 
> VolumeList. duh?
> This JIRA proposes to refactor VolumeList as UserVolumeInfo. Why not 
> UserInfo? Because that structure is already taken by the security work of 
> Ozone Manager.






[jira] [Commented] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-01 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16942353#comment-16942353
 ] 

Anu Engineer commented on HDDS-2227:


Yes, please.

> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not 
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.






[jira] [Updated] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-01 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2226:
---
Description: 
The S3 token generation under ozone should use a strong RNG. 

I want to thank Jonathan Leitschuh for originally noticing this issue and 
reporting it.


  was:The S3 token generation under ozone should use a strong RNG. 


> S3 Secrets should use a strong RNG
> --
>
> Key: HDDS-2226
> URL: https://issues.apache.org/jira/browse/HDDS-2226
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>
> The S3 token generation under ozone should use a strong RNG. 
> I want to thank Jonathan Leitschuh, for originally noticing this issue and 
> reporting it.






[jira] [Updated] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-01 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2227:
---
Description: 
SecureRandom can be used for the symmetric key for GDPR. While GDPR is not a 
security feature, this is a good-to-have optional feature.

I want to thank Jonathan Leitschuh for originally noticing this issue and 
reporting it.


  was:SecureRandom can be used for the symmetric key for GDPR. While GDPR is 
not a security feature, this is a good-to-have optional feature.


> GDPR key generation could benefit from secureRandom
> ---
>
> Key: HDDS-2227
> URL: https://issues.apache.org/jira/browse/HDDS-2227
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>
> SecureRandom can be used for the symmetric key for GDPR. While GDPR is not 
> a security feature, this is a good-to-have optional feature.
> I want to thank Jonathan Leitschuh for originally noticing this issue and 
> reporting it.






[jira] [Created] (HDDS-2227) GDPR key generation could benefit from secureRandom

2019-10-01 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2227:
--

 Summary: GDPR key generation could benefit from secureRandom
 Key: HDDS-2227
 URL: https://issues.apache.org/jira/browse/HDDS-2227
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


SecureRandom can be used for the symmetric key for GDPR. While GDPR is not a 
security feature, this is a good-to-have optional feature.






[jira] [Created] (HDDS-2226) S3 Secrets should use a strong RNG

2019-10-01 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2226:
--

 Summary: S3 Secrets should use a strong RNG
 Key: HDDS-2226
 URL: https://issues.apache.org/jira/browse/HDDS-2226
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: S3
Reporter: Anu Engineer
Assignee: Anu Engineer


The S3 token generation under ozone should use a strong RNG. 






[jira] [Comment Edited] (HDDS-2175) Propagate System Exceptions from the OzoneManager

2019-10-01 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941476#comment-16941476
 ] 

Anu Engineer edited comment on HDDS-2175 at 10/1/19 4:46 PM:
-

It is something that I disagree with. But if you feel strongly about this, 
please go ahead.

Also, since you seem passionate about this topic, I think I should point you to 
something better than a tweet. Here is a good analysis of the problems with 
exceptions:
https://pdfs.semanticscholar.org/5a36/bc568242439e9a4509fba63fb18a01ffdfc9.pdf

Google, among others, for the longest time argued that even C++ code should not 
use exceptions. 
Languages like Golang cannot handle them if you throw them.

I understand that the proposal here is not to propagate exceptions, in spite of 
what you argue, but to move the code to a Frankenstein state, where we send 
exceptions to the client as strings. 

So it is still error code plus message -- but sometimes you need to parse the 
error string to understand what it is; sometimes it is just human readable. Our 
current model is that these strings should always be human readable.

But as I said, I think the disagreement is a question of taste, so I do not 
want perfect to be the enemy of good; if we want to move to this 
Frankenstein model, where sometimes error strings are exceptions, I am willing 
to live with it.



was (Author: anu):
It is something that I disagree with. But if you feel strongly about this, 
please go ahead.

> Propagate System Exceptions from the OzoneManager
> -
>
> Key: HDDS-2175
> URL: https://issues.apache.org/jira/browse/HDDS-2175
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Exceptions encountered while processing requests on the OM are categorized as 
> business exceptions and system exceptions. All of the business exceptions are 
> captured as OMException and have an associated status code which is returned 
> to the client. The handling of these is not going to be changed.
> Currently, system exceptions are returned as INTERNAL ERROR to the client with 
> a one-line message string from the exception. The scope of this jira is to 
> capture system exceptions and propagate the related information (including the 
> complete stack trace) back to the client.
> There are 3 sub-tasks required to achieve this:
> 1. Separate capture and handling for OMException and the other 
> exceptions (IOException). For system exceptions, use the Hadoop IPC 
> ServiceException mechanism to send the stack trace to the client (a sketch 
> follows below).
> 2. Track and propagate exceptions inside the Ratis OzoneManagerStateMachine 
> and propagate them up to the OzoneManager layer (on the leader). Currently, 
> these exceptions are not being tracked.
> 3. Handle and propagate exceptions from Ratis.
> Will raise a jira for each sub-task.
>   
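
A minimal sketch of sub-task 1 as described: let OMException keep its
status-code path while wrapping other IOExceptions in the Hadoop IPC
ServiceException so the full stack trace reaches the client (the handler names
are illustrative):

{code:java}
// Illustrative server-side dispatch; only the exception handling is the point.
try {
  return handler.handle(request);
} catch (OMException e) {
  // Business exception: the status code is returned to the client as before.
  return createErrorResponse(request, e); // hypothetical helper
} catch (IOException e) {
  // System exception: ServiceException carries the complete stack trace
  // back over Hadoop IPC.
  throw new ServiceException(e);
}
{code}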






[jira] [Commented] (HDDS-2175) Propagate System Exceptions from the OzoneManager

2019-09-30 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941476#comment-16941476
 ] 

Anu Engineer commented on HDDS-2175:


It is something that I disagree with. But if you feel strongly about this, 
please go ahead.

> Propagate System Exceptions from the OzoneManager
> -
>
> Key: HDDS-2175
> URL: https://issues.apache.org/jira/browse/HDDS-2175
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Exceptions encountered while processing requests on the OM are categorized as 
> business exceptions and system exceptions. All of the business exceptions are 
> captured as OMException and have an associated status code which is returned 
> to the client. The handling of these is not going to be changed.
> Currently, system exceptions are returned as INTERNAL ERROR to the client with 
> a one-line message string from the exception. The scope of this jira is to 
> capture system exceptions and propagate the related information (including the 
> complete stack trace) back to the client.
> There are 3 sub-tasks required to achieve this:
> 1. Separate capture and handling for OMException and the other 
> exceptions (IOException). For system exceptions, use the Hadoop IPC 
> ServiceException mechanism to send the stack trace to the client.
> 2. Track and propagate exceptions inside the Ratis OzoneManagerStateMachine 
> and propagate them up to the OzoneManager layer (on the leader). Currently, 
> these exceptions are not being tracked.
> 3. Handle and propagate exceptions from Ratis.
> Will raise a jira for each sub-task.
>   






[jira] [Updated] (HDDS-2205) checkstyle.sh reports wrong failure count

2019-09-30 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2205:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> checkstyle.sh reports wrong failure count
> -
>
> Key: HDDS-2205
> URL: https://issues.apache.org/jira/browse/HDDS-2205
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{checkstyle.sh}} outputs files with checkstyle violations and the violations 
> themselves on separate lines. It then reports the line count as the number of 
> failures.
> {code:title=target/checkstyle/summary.txt}
> hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
>  49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager.
> {code}
> {code:title=target/checkstyle/failures}
> 2
> {code}
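> A sketch of one way to count only the indented violation lines rather than all 
> lines, based on the summary format shown above (the committed fix may differ):
> {code}
> # violation lines are indented; file name lines are not
> grep -c '^ ' target/checkstyle/summary.txt > target/checkstyle/failures
> {code}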



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2203) Race condition in ByteStringHelper.init()

2019-09-30 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941398#comment-16941398
 ] 

Anu Engineer commented on HDDS-2203:


Makes sense. Do you want this patch committed, or should we just move to the new model?


> Race condition in ByteStringHelper.init()
> -
>
> Key: HDDS-2203
> URL: https://issues.apache.org/jira/browse/HDDS-2203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, SCM
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Critical
>  Labels: pull-request-available, pull-requests-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The current init method:
> {code}
> public static void init(boolean isUnsafeByteOperation) {
>   final boolean set = INITIALIZED.compareAndSet(false, true);
>   if (set) {
>     ByteStringHelper.isUnsafeByteOperationsEnabled =
>         isUnsafeByteOperation;
>   } else {
>     // already initialized, check values
>     Preconditions.checkState(isUnsafeByteOperationsEnabled
>         == isUnsafeByteOperation);
>   }
> }
> {code}
> When two threads access this method and the execution order is the following, 
> the second thread runs into an exception from Preconditions.checkState() in the 
> else branch.
> Starting from an uninitialized state (the field defaults to false):
> - T1 arrives at the method with true as the parameter
> - T1 sets INITIALIZED to true, but has not yet assigned 
> isUnsafeByteOperationsEnabled
> - T2 arrives at the method with true as the parameter
> - T2 reads the INITIALIZED value and, as it is no longer false, goes to the else 
> branch
> - T2 checks whether the internal boolean property equals the true it wanted to 
> set, and since T1 has still not assigned the value, checkState throws an 
> IllegalStateException.
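> A minimal sketch of one way to close this window, by making the assignment and 
> the check mutually exclusive (a sketch, not necessarily the committed fix):
> {code}
> public static synchronized void init(boolean isUnsafeByteOperation) {
>   if (!INITIALIZED.getAndSet(true)) {
>     // first caller wins; the value is assigned before the lock is released
>     ByteStringHelper.isUnsafeByteOperationsEnabled = isUnsafeByteOperation;
>   } else {
>     // later callers validate under the same lock, so they always see the value
>     Preconditions.checkState(
>         isUnsafeByteOperationsEnabled == isUnsafeByteOperation);
>   }
> }
> {code}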
> This happens in certain Hive query cases (which is where it was found in 
> testing); the exception we see there is the following:
> {code}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, 
> vertexName=Map 2, vertexId=vertex_1569486223160_0334_1_02, 
> diagnostics=[Vertex vertex_1569486223160_0334_1_02 [Map 2] killed/failed
>  due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: item initializer failed, 
> vertex=vertex_1569486223160_0334_1_02 [Map 2], java.io.IOException: Couldn't 
> create RpcClient protocol
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:263)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
> at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:165)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:158)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3315)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3364)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3332)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:491)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1821)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:2002)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:781)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
>   

[jira] [Commented] (HDDS-2175) Propagate System Exceptions from the OzoneManager

2019-09-30 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941392#comment-16941392
 ] 

Anu Engineer commented on HDDS-2175:


bq. I feel that call stacks are invaluable when included in the bug report to 
the developer.

I completely agree. As I mentioned in my comment in the Github, they are very 
useful tools for debugging. But we have to weigh the pros and cons of the 
approach. Here are some downsides, so I will list them out.

1. Code and Style Consistency - Generally, errors are propagated via error code 
and message (Golang, C, etc.) or exceptions (Java, C++, etc.). When we developed 
this interface, we chose to go with the error code and message approach instead of 
exceptions. So mixing these different approaches creates very inconsistent code 
flows.

2. Prevent Java server abstractions from leaking to the client side - Java 
exceptions are very Java-specific; it is hard to parse these exceptions even 
when they are part of normal log files. It is difficult to read through a printed 
stack to even understand the issue. This gets compounded when exceptions stack. 
When we were writing this client interface, we wanted to make sure it is easy 
to write clients in other languages. A simple error code and message is 
universal: all languages understand it, which makes it easy to write 
clients in other languages that can speak this protocol.

3. The current code experience - There are several parts of this code where 
the clients print out these messages to the users. If we add exceptions to 
those strings, the human readability of those error messages goes down.

4. If we want to move to exceptions instead of error codes, it is possible 
(even though I think our future clients will suffer), but we need to move away 
from the error/message model. That is a lot of work, with very little benefit, 
other than the fact that we will have a consistent experience and exceptions 
will flow to the client side.

I had a chat with [~sdeka] and I said that I am all for increasing the fidelity 
of the error codes; that is, we can add more error codes if we want to fine-tune 
these messages. I am also all for logging more on the server side. So I am not 
against the patch; I just wanted to avoid *server-side Java exceptions crossing 
over to the client side*. I prefer a clear, simple contract between the server 
and client; I think it makes it easier for future clients to be developed.
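As an illustration of the contract being argued for here, the wire-level 
response stays a plain status-plus-message pair. A sketch in proto terms (field 
names and numbers are illustrative, not the exact OM proto):
{code}
message OMResponse {
  required Type cmdType = 1;    // which command this answers
  optional Status status = 2;   // machine-readable, language-neutral error code
  optional string message = 3;  // short human-readable summary, no stack trace
}
{code}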

> Propagate System Exceptions from the OzoneManager
> -
>
> Key: HDDS-2175
> URL: https://issues.apache.org/jira/browse/HDDS-2175
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Exceptions encountered while processing requests on the OM are categorized as 
> business exceptions and system exceptions. All of the business exceptions are 
> captured as OMException and have an associated status code which is returned 
> to the client. The handling of these is not going to be changed.
> Currently, system exceptions are returned as INTERNAL ERROR to the client with 
> a one-line message string from the exception. The scope of this jira is to 
> capture system exceptions and propagate the related information (including the 
> complete stack trace) back to the client.
> There are 3 sub-tasks required to achieve this:
> 1. Separate capture and handling for OMException and the other 
> exceptions (IOException). For system exceptions, use the Hadoop IPC 
> ServiceException mechanism to send the stack trace to the client.
> 2. Track and propagate exceptions inside the Ratis OzoneManagerStateMachine and 
> propagate them up to the OzoneManager layer (on the leader). Currently, these 
> exceptions are not being tracked.
> 3. Handle and propagate exceptions from Ratis.
> Will raise a jira for each sub-task.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-2175) Propagate System Exceptions from the OzoneManager

2019-09-30 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16941392#comment-16941392
 ] 

Anu Engineer edited comment on HDDS-2175 at 9/30/19 10:54 PM:
--

{quote}I feel that call stacks are invaluable when included in the bug report 
to the developer.
{quote}
I completely agree. As I mentioned in my comment in the Github, they are very 
useful tools for debugging. But we have to weigh the pros and cons of the 
approach. Here are some downsides, so I will list them out.

1. Code and Style Consistency - Generally, errors are propagated via error code 
and message (Golang, C, etc.) or exceptions (Java, C++, etc.). When we developed 
this interface, we chose to go with the error code and message approach instead of 
exceptions. So mixing these different approaches creates very inconsistent code 
flows.

2. Prevent Java server abstractions from leaking to the client side - Java 
exceptions are very Java-specific; it is hard to parse these exceptions even 
when they are part of normal log files. It is difficult to read through a printed 
stack to even understand the issue. This gets compounded when exceptions stack. 
When we were writing this client interface, we wanted to make sure it is easy 
to write clients in other languages. A simple error code and message is 
universal: all languages understand it, which makes it easy to write 
clients in other languages that can speak this protocol.

3. The current code experience - There are several parts of this code where 
the clients print out these messages to the users. If we add exceptions to 
those strings, the human readability of those error messages goes down.

4. If we want to move to exceptions instead of error codes, it is possible 
(even though I think our future clients will suffer), but we need to move away 
from the error/message model. That is a lot of work, with very little benefit, 
other than the fact that we will have a consistent experience and exceptions 
will flow to the client side.

I had a chat with [~sdeka] and I said that I am all for increasing the fidelity 
of the error codes; that is, we can add more error codes if we want to fine-tune 
these messages. I am also all for logging more on the server side. So I am not 
against the patch; I just wanted to avoid *server-side Java exceptions crossing 
over to the client side*. I prefer a clear, simple contract between the server 
and client; I think it makes it easier for future clients to be developed.


was (Author: anu):
bq. I feel that call stacks are invaluable when included in the bug report to 
the developer.

I completely agree. As I mentioned in my comment in the Github, they are very 
useful tools for debugging. But we have to weigh the pros and cons of the 
approach. Here are some downsides, so I will list them out.

1. Code and Style Consistency - Generally, Errors are propagated via Error code 
and Message (Goland, C, etc) or Exceptions (Java, C++ etc). When we developed 
this interface, we choose to go with Error code and Message approach instead of 
Exceptions. So mixing these different approaches creates very inconsistent code 
flows.

2. Prevent Java server abstractions from leaking to client side - Java 
exceptions are very java specific; it is hard to parse these exceptions even 
when they are part of normal log files. It is difficult to read thru a printed 
stack to even understand the issue. This gets compounded when Exceptions stack. 
When we were writing this client interface, we wanted to make sure it is easy 
to write clients in other languages. A simple, Error code and a message is 
universal, that all languages understand and easy to write other language 
clients which can speak this protocol.

3. The current code experience - There are several parts of this code, where 
the clients print out these messages to the users. If we add exceptions to 
those strings, the human readability of those error messages goes down. 

4. If we want to move to exceptions instead of  error codes , it is possible 
(even though I think our future clients will suffer), but we need to move away 
from the error/message model. That is lot of work,  with very little benefit, 
other than the fact that we will have a consistent experience and exceptions 
will flow to the client side.

I had a chat with [~sdeka] and I said that I am all for increasing the fidelity 
of the error codes, that is we can add more error codes if we want to fine tune 
these messages. I am also all for logging more on the server side. So I am not 
against the patch, just wanted to avoid *server side Java exceptions crossing 
over to the client side*. I prefer a clear, simple contract between the server 
and client, I think it makes it easier for future clients to be developed more 
easily. 

> Propagate System Exceptions from the OzoneManager
> -
>
>  

[jira] [Created] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-09-27 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2201:
--

 Summary: Rename VolumeList to UserVolumeInfo
 Key: HDDS-2201
 URL: https://issues.apache.org/jira/browse/HDDS-2201
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Anu Engineer
Assignee: Anu Engineer


Under Ozone Manager, Volume points to a structure called VolumeInfo, Bucket 
points to BucketInfo, and Key points to KeyInfo. However, User points to 
VolumeList. Duh?

This JIRA proposes to refactor the VolumeList as UserVolumeInfo. Why not 
UserInfo? Because that structure is already taken by the security work of Ozone 
Manager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939152#comment-16939152
 ] 

Anu Engineer commented on HDDS-2149:


Thank you for the contribution. I have committed this to the trunk. [~elek], 
thank you for the review.

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2149) Replace findbugs with spotbugs

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2149:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Replace findbugs with spotbugs
> --
>
> Key: HDDS-2149
> URL: https://issues.apache.org/jira/browse/HDDS-2149
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Findbugs has been marked deprecated and all future work is now happening 
> under the SpotBugs project.
> This Jira is to investigate and possibly transition to SpotBugs in Ozone.
>  
> Ref1 - 
> [https://mailman.cs.umd.edu/pipermail/findbugs-discuss/2017-September/004383.html]
> Ref2 - [https://spotbugs.github.io/]
>  
> A turn-off for developers is that IntelliJ does not yet have a plugin for 
> SpotBugs - [https://youtrack.jetbrains.com/issue/IDEA-201846]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2179:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}
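> A sketch of the shape of such a fix (method names are hypothetical; the point 
> is catching the NIO variant alongside the legacy one):
> {code}
> try (InputStream input = configFile.openInputStream()) {
>   appendToExistingConfig(input);
> } catch (FileNotFoundException | NoSuchFileException e) {
>   // the resource does not exist yet on either old or new JDKs: create it fresh
>   writeNewConfigFile();
> }
> {code}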



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2179) ConfigFileGenerator fails with Java 10 or newer

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939143#comment-16939143
 ] 

Anu Engineer commented on HDDS-2179:


I have committed this patch to the trunk. Thank you for the contribution

> ConfigFileGenerator fails with Java 10 or newer
> ---
>
> Key: HDDS-2179
> URL: https://issues.apache.org/jira/browse/HDDS-2179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:title=mvn -f pom.ozone.xml -DskipTests -am -pl :hadoop-hdds-config 
> clean package}
> ...
> [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
> hadoop-hdds-config ---
> [INFO] Compiling 3 source files to hadoop-hdds/config/target/test-classes
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-hdds-config: Compilation failure
> [ERROR] Can't generate the config file from annotation: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
> {code}
> The root cause is that new Java (I guess it's 9+, but tried only on 10+) 
> throws a different {{IOException}} subclass: {{NoSuchFileException}} instead 
> of {{FileNotFoundException}}.
> {code}
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/config/target/test-classes/ozone-default-generated.xml
>   at 
> java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>   at 
> java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>   at 
> java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:374)
>   at java.base/java.nio.file.Files.newByteChannel(Files.java:425)
>   at 
> java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
>   at java.base/java.nio.file.Files.newInputStream(Files.java:159)
>   at 
> jdk.compiler/com.sun.tools.javac.file.PathFileObject.openInputStream(PathFileObject.java:461)
>   at 
> java.compiler@13/javax.tools.ForwardingFileObject.openInputStream(ForwardingFileObject.java:74)
>   at 
> org.apache.hadoop.hdds.conf.ConfigFileGenerator.process(ConfigFileGenerator.java:62)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939119#comment-16939119
 ] 

Anu Engineer commented on HDDS-2193:


Thank you for the contribution. I have committed this patch to the trunk branch.

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2193.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-2193
> URL: https://issues.apache.org/jira/browse/HDDS-2193
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
>  * Number of list container ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2174) Delete GDPR Encryption Key from metadata when a Key is deleted

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2174:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~bharat] Thanks for the reviews. [~dineshchitlangia] Thank you for the 
contribution. I have committed this patch to the trunk.

> Delete GDPR Encryption Key from metadata when a Key is deleted
> --
>
> Key: HDDS-2174
> URL: https://issues.apache.org/jira/browse/HDDS-2174
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> As advised by [~arp] & [~aengineer], when a deleteKey command is executed, 
> delete the GDPR encryption key details from the key metadata before moving it to 
> the deletedTable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939013#comment-16939013
 ] 

Anu Engineer commented on HDDS-2019:


bq. the service name should be set with the addresses of all OMs.

I am not sure I understand this assertion. Can you please help me understand 
why we need this?

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is to 
> handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, UGI is 
> created with the S3 token, and the serviceName of the token is set with the OM 
> address; for the HA case, this should be set with all OM RPC addresses.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2180.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-26 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16938793#comment-16938793
 ] 

Anu Engineer commented on HDDS-2180:


[~xyao] and Nanda kumar, thank you for the reviews. I have committed this patch 
to the trunk branch.

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2067) Create generic service facade with tracing/metrics/logging support

2019-09-25 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2067:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. I have committed this patch to the 
trunk branch.

> Create generic service facade with tracing/metrics/logging support
> --
>
> Key: HDDS-2067
> URL: https://issues.apache.org/jira/browse/HDDS-2067
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We started to use a message-based GRPC approach. We have only one method and 
> the requests are routed based on a "type" field in the proto message. 
> For example in OM protocol:
> {code}
> /**
>  The OM service that takes care of Ozone namespace.
> */
> service OzoneManagerService {
> // A client-to-OM RPC to send client requests to OM Ratis server
> rpc submitRequest(OMRequest)
>   returns(OMResponse);
> }
> {code}
> And 
> {code}
> message OMRequest {
>   required Type cmdType = 1; // Type of the command
> ...
> {code}
> This approach makes it possible to use the same code to process incoming 
> messages on the server side.
> The ScmBlockLocationProtocolServerSideTranslatorPB.send method contains the 
> logic for:
>  * Logging the request/response message (can be displayed with ozone insight)
>  * Updating metrics
>  * Handling OpenTracing context propagation.
> These functions are generic. For example 
> OzoneManagerProtocolServerSideTranslatorPB use the same (=similar) code.
> The goal in this jira is to provide a generic utility and move the common 
> code for tracing/request logging/response logging/metrics calculation to a 
> common utility which can be used from all the ServerSide translators.
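> A stripped-down sketch of the facade idea (tracing omitted; class and method 
> names are illustrative, not the final API):
> {code}
> // assumes java.util.function.Function, java.util.concurrent.atomic.AtomicLong,
> // and an org.slf4j.Logger
> public final class RequestFacade<REQ, RESP> {
>   private final Function<REQ, RESP> handler;   // the real translator logic
>   private final Logger log;
>   private final AtomicLong requestCount = new AtomicLong();
>
>   public RequestFacade(Function<REQ, RESP> handler, Logger log) {
>     this.handler = handler;
>     this.log = log;
>   }
>
>   public RESP submit(REQ request) {
>     log.trace("request: {}", request);         // request logging
>     RESP response = handler.apply(request);    // dispatch on the cmdType field
>     log.trace("response: {}", response);       // response logging
>     requestCount.incrementAndGet();            // metrics hook
>     return response;
>   }
> }
> {code}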



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2180:
---
Target Version/s: 0.5.0

> Add Object ID and update ID on VolumeList Object
> 
>
> Key: HDDS-2180
> URL: https://issues.apache.org/jira/browse/HDDS-2180
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2180) Add Object ID and update ID on VolumeList Object

2019-09-25 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2180:
--

 Summary: Add Object ID and update ID on VolumeList Object
 Key: HDDS-2180
 URL: https://issues.apache.org/jira/browse/HDDS-2180
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This JIRA proposes to add Object ID and Update IDs to the Volume List Object.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2175) Propagate stack trace for OM Exceptions to the Client

2019-09-25 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937867#comment-16937867
 ] 

Anu Engineer commented on HDDS-2175:


Just a question: does this open up the server to easier attacks? Knowing 
where the server is failing, when it can be probed from a client, might 
make it easier to probe and attack the server. More of a question than a 
comment.

> Propagate stack trace for OM Exceptions to the Client
> -
>
> Key: HDDS-2175
> URL: https://issues.apache.org/jira/browse/HDDS-2175
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Ozone Manager responds with a Status code and the summary message when an 
> exception occurs while running the OM request handlers.
> The proposal is to respond to the client with the complete stack trace for 
> the exception, as part of the response message.
> This makes debugging more convenient without requiring a code change on the 
> client, because the status code is retained in the response message.
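> A sketch of how the trace could travel in the existing message field (the 
> builder calls are illustrative of the idea, not the final patch):
> {code}
> StringWriter trace = new StringWriter();
> exception.printStackTrace(new PrintWriter(trace, true));
> OMResponse response = OMResponse.newBuilder()
>     .setStatus(Status.INTERNAL_ERROR)  // status code is retained as before
>     .setMessage(trace.toString())      // full stack trace rides along as text
>     .build();
> {code}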



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2159.

Fix Version/s: 0.5.0
   Resolution: Fixed

Committed to the trunk

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The Servlet member field pid 
> should not be used for local assignment. It could lead to a race condition.
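> The usual remedy is to keep per-request state in locals, since a single servlet 
> instance is shared across request threads. A sketch (names are illustrative):
> {code}
> // racy: concurrent requests overwrite the shared member field
> //   this.pid = Integer.valueOf(req.getParameter("pid"));
>
> // safe: each request thread resolves its own copy
> Integer pid = defaultPid;               // default configured at servlet startup
> String pidParam = req.getParameter("pid");
> if (pidParam != null && !pidParam.isEmpty()) {
>   pid = Integer.valueOf(pidParam);      // request-scoped override
> }
> {code}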



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2170) Add Object IDs and Update ID to Volume Object

2019-09-23 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2170:
--

 Summary: Add Object IDs and Update ID to Volume Object
 Key: HDDS-2170
 URL: https://issues.apache.org/jira/browse/HDDS-2170
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer


This patch proposes to add object ID and update ID when a volume is created. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2160) Add acceptance test for ozonesecure-mr compose

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2160:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~xyao] Thank you for the contribution. I have committed this patch to the 
trunk.

> Add acceptance test for ozonesecure-mr compose
> --
>
> Key: HDDS-2160
> URL: https://issues.apache.org/jira/browse/HDDS-2160
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This will give us coverage of running basic MR jobs on a security-enabled Ozone 
> cluster against YARN.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936163#comment-16936163
 ] 

Anu Engineer commented on HDDS-2161:


[~dineshchitlangia]  Thank you for the contribution. I have committed this 
patch to the trunk.

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyName, KeyInfo>.
> When a user deletes a Key, its <keyName, KeyInfo> entry is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyName, KeyInfo> entry can get overwritten and we may 
> be left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and 
> preserve the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key is deleted from 
> deletedTable, and thus, given the above explanation, we may not get accurate 
> information; it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure, RepeatedKeyInfo, which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyName, RepeatedKeyInfo> (sketched below).
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance and add the 
> latest key to the list, then store it back to deletedTable; else create a new 
> instance and save it to the table.
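> The grouping structure, sketched in proto terms (the field name is 
> illustrative):
> {code}
> message RepeatedKeyInfo {
>   repeated KeyInfo keyInfoList = 1;
> }
> {code}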



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable

2019-09-23 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2161:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Create RepeatedKeyInfo structure to be saved in deletedTable
> 
>
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores <keyName, KeyInfo>.
> When a user deletes a Key, its <keyName, KeyInfo> entry is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession 
> repeatedly, then the old <keyName, KeyInfo> entry can get overwritten and we may 
> be left with dangling blocks.
> To address this, currently we append the delete timestamp to the keyname and 
> preserve the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key is deleted from 
> deletedTable, and thus, given the above explanation, we may not get accurate 
> information; it may also confuse users.
>  
> This Jira aims to:
>  # Create a new structure, RepeatedKeyInfo, which allows us to group multiple 
> KeyInfo entries that can be saved to deletedTable corresponding to a keyname as 
> <keyName, RepeatedKeyInfo>.
>  # Due to this, before we move a key to deletedTable, we need to check if a key 
> with the same name exists. If yes, fetch the existing instance and add the 
> latest key to the list, then store it back to deletedTable; else create a new 
> instance and save it to the table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2128.

Fix Version/s: 0.5.0
   Resolution: Fixed

> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2128) Make ozone sh command work with OM HA service ids

2019-09-20 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934774#comment-16934774
 ] 

Anu Engineer commented on HDDS-2128:


+1, I have committed this change to the trunk branch.


> Make ozone sh command work with OM HA service ids
> -
>
> Key: HDDS-2128
> URL: https://issues.apache.org/jira/browse/HDDS-2128
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Now that HDDS-2007 is committed, I can use some common helper functions to 
> make this work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2159) Fix Race condition in ProfileServlet#pid

2019-09-20 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934765#comment-16934765
 ] 

Anu Engineer commented on HDDS-2159:


{quote}
It could lead to a race condition.
{quote}
How?

> Fix Race condition in ProfileServlet#pid
> 
>
> Key: HDDS-2159
> URL: https://issues.apache.org/jira/browse/HDDS-2159
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a race condition in ProfileServlet. The Servlet member field pid 
> should not be used for local assignment. It could lead to a race condition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2157) checkstyle: print filenames relative to project root

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2157:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> checkstyle: print filenames relative to project root
> 
>
> Key: HDDS-2157
> URL: https://issues.apache.org/jira/browse/HDDS-2157
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently {{checkstyle.sh}} prints files with violations using the full path, e.g.:
> {noformat:title=https://github.com/elek/ozone-ci/blob/master/trunk/trunk-nightly-20190920-4x9x8/checkstyle/summary.txt}
> ...
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
>  23: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
>  24: Unused import - 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
>  23: Unused import - 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartKeyInfo.java
>  19: Unused import - org.apache.hadoop.hdds.client.ReplicationFactor.
>  20: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
>  26: Unused import - java.time.Instant.
> ...
> {noformat}
> {{/workdir}} is specific to the CI environment. Similarly, the local checkout 
> directory is specific to each developer.
> Printing only the path relative to the project root ({{/workdir}} here) would 
> make handling these paths easier (e.g. reporting errors in JIRA or opening files 
> locally for editing).
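> One way to strip the checkout prefix inside {{checkstyle.sh}} (a sketch; the 
> script may use a different mechanism):
> {code}
> # rewrite absolute paths relative to the directory the script runs from
> sed "s|^$(pwd)/||" target/checkstyle/summary.txt
> {code}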



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2157) checkstyle: print filenames relative to project root

2019-09-20 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934763#comment-16934763
 ] 

Anu Engineer commented on HDDS-2157:


+1. I have committed this change into the trunk branch. Thank you for the 
contribution.

> checkstyle: print filenames relative to project root
> 
>
> Key: HDDS-2157
> URL: https://issues.apache.org/jira/browse/HDDS-2157
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently {{checkstyle.sh}} prints files with violations using the full path, e.g.:
> {noformat:title=https://github.com/elek/ozone-ci/blob/master/trunk/trunk-nightly-20190920-4x9x8/checkstyle/summary.txt}
> ...
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadList.java
>  23: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
>  24: Unused import - 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor.
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartUploadListParts.java
>  23: Unused import - 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType.
> /workdir/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmMultipartKeyInfo.java
>  19: Unused import - org.apache.hadoop.hdds.client.ReplicationFactor.
>  20: Unused import - org.apache.hadoop.hdds.client.ReplicationType.
>  26: Unused import - java.time.Instant.
> ...
> {noformat}
> {{/workdir}} is specific to the CI environment. Similarly, the local checkout 
> directory is specific to each developer.
> Printing only the path relative to the project root ({{/workdir}} here) would 
> make handling these paths easier (e.g. reporting errors in JIRA or opening files 
> locally for editing).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1982) Extend SCMNodeManager to support decommission and maintenance states

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1982.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~sodonnell] Thank you for the contribution. I have committed this patch to the 
HDDS-1880-Decom branch.

> Extend SCMNodeManager to support decommission and maintenance states
> 
>
> Key: HDDS-1982
> URL: https://issues.apache.org/jira/browse/HDDS-1982
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Currently, within SCM a node can have the following states:
> HEALTHY
> STALE
> DEAD
> DECOMMISSIONING
> DECOMMISSIONED
> The last 2 are not currently used.
> In order to support decommissioning and maintenance mode, we need to extend 
> the set of states a node can have to include decommission and maintenance 
> states.
> It is also important to note that a node that is decommissioning or entering 
> maintenance can also be HEALTHY, STALE, or go DEAD.
> Therefore in this Jira I propose we should model a node state with two 
> different sets of values. The first is effectively the liveness of the 
> node, with the following states. This is largely what is in place now:
> HEALTHY
> STALE
> DEAD
> The second is the node operational state:
> IN_SERVICE
> DECOMMISSIONING
> DECOMMISSIONED
> ENTERING_MAINTENANCE
> IN_MAINTENANCE
> That means the overall total number of states for a node is the cross-product 
> of the two above lists; however, it probably makes sense to keep the two 
> states separate internally.
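> Sketched as two independent enums plus a pair (names are illustrative; the 
> patch may differ):
> {code}
> enum NodeLiveness { HEALTHY, STALE, DEAD }
>
> enum NodeOperationalState {
>   IN_SERVICE, DECOMMISSIONING, DECOMMISSIONED,
>   ENTERING_MAINTENANCE, IN_MAINTENANCE
> }
>
> // the effective node status is the (liveness, operational state) pair,
> // kept as two fields rather than a fifteen-value cross-product enum
> {code}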



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2001:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the trunk branch.

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1949) Missing or error-prone test cleanup

2019-09-20 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1949:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~adoroszlai] Thanks for the contribution. I have committed this patch into the 
trunk branch.

> Missing or error-prone test cleanup
> ---
>
> Key: HDDS-1949
> URL: https://issues.apache.org/jira/browse/HDDS-1949
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Some integration tests do not clean up after themselves.  Some only clean up 
> if the test is successful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2020) Remove mTLS from Ozone GRPC

2019-09-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2020:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~nandakumar131] Please cherry-pick when you get a chance.

[~xyao] Thanks for the contribution. I have committed this to the trunk.

> Remove mTLS from Ozone GRPC
> ---
>
> Key: HDDS-2020
> URL: https://issues.apache.org/jira/browse/HDDS-2020
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Generic GRPC supports mTLS for mutual authentication. However, Ozone has a 
> built-in block token mechanism for the server to authenticate the client. We 
> only need TLS for the client to authenticate the server and for wire encryption. 
> Removing the mTLS support also simplifies the GRPC server/client configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2156) Fix alignment issues in HDDS doc pages

2019-09-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2156:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix alignment issues in HDDS doc pages
> --
>
> Key: HDDS-2156
> URL: https://issues.apache.org/jira/browse/HDDS-2156
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The cards on the HDDS doc pages don't align properly and need to be fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2127) Detailed Tools doc not reachable

2019-09-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2127:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

I have committed this to the trunk. [~nandakumar131], we might need this in 
0.4.1; please cherry-pick if needed. [~elek] Thank you for the contribution. 
[~adoroszlai] Thanks for the review.

> Detailed Tools doc not reachable
> 
>
> Key: HDDS-2127
> URL: https://issues.apache.org/jira/browse/HDDS-2127
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are two doc pages for tools:
>  * docs/beyond/tools.html
>  * docs/tools.html
> The latter is more detailed (has subpages for several tools), but it is not 
> reachable (even indirectly) from the start page.  Not sure if this is 
> intentional.
> On a related note, it has two "Testing tools" sub-pages. One of them is empty 
> and should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2110) Arbitrary file can be downloaded with the help of ProfilerServlet

2019-09-19 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2110:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for the fix. I have committed this to the trunk branch.

[~adeo] Thank you for filing this JIRA.

> Arbitrary file can be downloaded with the help of ProfilerServlet
> -
>
> Key: HDDS-2110
> URL: https://issues.apache.org/jira/browse/HDDS-2110
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Line 324 in 
> [ProfileServlet.java|https://github.com/apache/hadoop/blob/217bdbd940a96986df3b96899b43caae2b5a9ed2/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  is prone to an arbitrary file download:
> {code:java}
> protected void doGetDownload(String fileName, final HttpServletRequest req,
>     final HttpServletResponse resp) throws IOException {
>   File requestedFile =
>       ProfileServlet.OUTPUT_DIR.resolve(fileName).toAbsolutePath().toFile();
> {code}
> The String fileName is used directly as the requested file.
>  
> It is called at line 180 with the HTTP request parameter passed in directly:
> {code:java}
> if (req.getParameter("file") != null) {
>   doGetDownload(req.getParameter("file"), req, resp);
>   return;
> }
> {code}
>  
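
The usual fix for this class of bug is to validate the name before resolving it: restrict it to a safe character set and verify that the normalized path still lives under the output directory. A hedged sketch, not the actual patch ({{OUTPUT_DIR}} and the pattern here are stand-ins):

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Pattern;

public class SafeDownload {

  // Stand-in for ProfileServlet.OUTPUT_DIR.
  private static final Path OUTPUT_DIR = Paths.get("/tmp/prof-output");

  // Only allow simple file names like those the profiler itself produces.
  private static final Pattern SAFE_NAME = Pattern.compile("[a-zA-Z0-9_.-]+");

  static File resolveRequestedFile(String fileName) throws IOException {
    if (fileName == null || !SAFE_NAME.matcher(fileName).matches()) {
      throw new IOException("Invalid file name: " + fileName);
    }
    Path resolved = OUTPUT_DIR.resolve(fileName).normalize();
    // Defense in depth: even a name that slipped past the pattern must
    // still resolve to a path inside the output directory.
    if (!resolved.startsWith(OUTPUT_DIR)) {
      throw new IOException("Path escapes output directory: " + fileName);
    }
    return resolved.toFile();
  }
}
{code}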



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14846) libhdfs tests are failing on trunk due to jni usage bugs

2019-09-17 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931909#comment-16931909
 ] 

Anu Engineer commented on HDFS-14846:
-

Thank you for your contribution. I have committed this patch to the trunk 
branch.

> libhdfs tests are failing on trunk due to jni usage bugs
> 
>
> Key: HDFS-14846
> URL: https://issues.apache.org/jira/browse/HDFS-14846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> While working on HDFS-14564, I noticed that the libhdfs tests are failing on 
> trunk (both on Hadoop QA and locally). I did some digging and found out that 
> the {{-Xcheck:jni}} flag is causing a bunch of crashes. I haven't been able 
> to pinpoint what caused this regression, but my best guess is that an upgrade 
> in the JDK we use in Hadoop QA started causing these failures. I looked back 
> at some old JIRAs and it looks like the tests work on Java 1.8.0_212, but 
> Hadoop QA is running 1.8.0_222 (as is my local env) (I couldn't confirm this 
> theory because I'm having trouble getting Java 1.8.0_212 installed next to 
> 1.8.0_222 on my Ubuntu machine) (even after re-winding the commit history 
> back to a known good commit where the libhdfs tests passed, the tests still fail, 
> so I don't think a code change caused the regressions).
> The failures are a bunch of "FATAL ERROR in native method: Bad global or 
> local ref passed to JNI" errors. After doing some debugging, it looks like 
> {{-Xcheck:jni}} now errors out if any code tries to pass a local ref to 
> {{DeleteLocalRef}} twice (previously it looked like it didn't complain) (we 
> have some checks to avoid this, but it looks like they don't work as 
> expected).
> There are a few places in the libhdfs code where this pattern causes a crash, 
> as well as one place in {{JniBasedUnixGroupsMapping}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2111:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both the "path" and the "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located at:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2111) XSS fragments can be injected to the S3g landing page

2019-09-16 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16930939#comment-16930939
 ] 

Anu Engineer commented on HDDS-2111:


Thank you [~adeo] for reporting this issue. [~elek] Thank you for fixing this 
issue. I have committed this patch to the trunk branch.

FYI: [~nandakumar131] You might want to cherry-pick this to 0.4.1

 

> XSS fragments can be injected to the S3g landing page  
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both the "path" and the "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located at:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2030) Generate simplified reports by the dev-support/checks/*.sh scripts

2019-09-16 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2030:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Generate simplified reports by the dev-support/checks/*.sh scripts
> -
>
> Key: HDDS-2030
> URL: https://issues.apache.org/jira/browse/HDDS-2030
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 16.5h
>  Remaining Estimate: 0h
>
> hadoop-ozone/dev-support/checks directory contains shell scripts to execute 
> different type of code checks (findbugs, checkstyle, etc.)
> Currently the contract is very simple. Every shell script executes one (and 
> only one) check and the shell response code is set according to the result 
> (non-zero code if failed).
> To have better reporting in the github pr build, it would be great to improve 
> the scripts to generate simple summary files and save the relevant files for 
> archiving.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2057:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  
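
A sketch of the suggested direction: build the message from the configured default instead of the literal. The constant name and value below are assumptions for illustration; the real default belongs with the {{ozone.om.address}} key definitions rather than being repeated in the message string.

{code:java}
public final class OmUriHelp {

  // Hypothetical constant; the real value should be pulled from the OM
  // address key definitions instead of being hard-coded at the call site.
  private static final int OZONE_OM_PORT_DEFAULT = 9862;

  static String uriErrorMessage() {
    return String.format(
        "Ozone file system URL should be one of the following formats: "
            + "o3fs://bucket.volume/key OR "
            + "o3fs://bucket.volume.om-host.example.com/key OR "
            + "o3fs://bucket.volume.om-host.example.com:%d/key",
        OZONE_OM_PORT_DEFAULT);
  }

  private OmUriHelp() { }
}
{code}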



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-09-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2057:
---
   Fix Version/s: 0.5.0
Target Version/s: 0.5.0

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2111) DOM XSS

2019-09-13 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929204#comment-16929204
 ] 

Anu Engineer commented on HDDS-2111:


Thank you for bringing attention to this issue. We are working on addressing 
this.

> DOM XSS
> ---
>
> Key: HDDS-2111
> URL: https://issues.apache.org/jira/browse/HDDS-2111
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Aayush
>Priority: Major
>
> VULNERABILITY DETAILS
> There is a way to bypass the anti-XSS filter for DOM XSS by exploiting 
> "window.location.href".
> Considering a typical URL:
> scheme://domain:port/path?query_string#fragment_id
> Browsers correctly encode both the "path" and the "query_string", but not the 
> "fragment_id". 
> So if the "fragment_id" is used, the vector is also not logged on the web server.
> VERSION
> Chrome Version: 10.0.648.134 (Official Build 77917) beta
> REPRODUCTION CASE
> This is an index.html page:
> {code:java}
> aws s3api --endpoint <script>document.write(window.location.href.replace("static/", ""))</script> create-bucket --bucket=wordcount
> {code}
> The attack vector is:
> index.html?#<script>alert('XSS');</script>
> * PoC:
> For your convenience, a minimalist PoC is located at:
> http://security.onofri.org/xss_location.html?#<script>alert('XSS');</script>
> * References
> - DOM Based Cross-Site Scripting or XSS of the Third Kind - 
> http://www.webappsec.org/projects/articles/071105.shtml
> reference:- 
> https://bugs.chromium.org/p/chromium/issues/detail?id=76796



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2124) Random next links

2019-09-13 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929198#comment-16929198
 ] 

Anu Engineer commented on HDDS-2124:


Having clean, mistake-free documentation is crucial. Thanks for finding this. 
[~nandakumar131], I have marked this as a blocker for the release.

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Priority: Blocker
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2124) Random next links

2019-09-13 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2124:
---
Priority: Blocker  (was: Minor)

> Random next links 
> --
>
> Key: HDDS-2124
> URL: https://issues.apache.org/jira/browse/HDDS-2124
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Priority: Blocker
>
> _Next>>_ links at the bottom of some documentation pages seem to be out of 
> order.
>  * _Simple Single Ozone_ ("easy start") should link to one of the 
> intermediate level pages, but has no _Next_ link
>  * _Building From Sources_ (ninja) should be the last (no _Next_ link), but 
> points to _Minikube_ (intermediate)
>  * _Pseudo-cluster_ (intermediate) should point to the ninja level, but leads 
> to _Simple Single Ozone_ (easy start)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2015) Encrypt/decrypt key using symmetric key while writing/reading

2019-09-06 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2015:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk branch.

> Encrypt/decrypt key using symmetric key while writing/reading
> -
>
> Key: HDDS-2015
> URL: https://issues.apache.org/jira/browse/HDDS-2015
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket's metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path (Decryption)*
> While reading the Key, we check for gdprEnabled=true and then get the 
> GDPRSymmetricKey based on the secret/algorithm fetched from the Key Metadata.
> We create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.
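
The stream wrapping described above can be pictured with plain {{javax.crypto}}. This is a simplified sketch, not the HDDS-2015 patch; it assumes an AES secret of a valid length (16, 24, or 32 bytes) carried in the key metadata, and glosses over the real cipher transformation Ozone chooses.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.SecretKeySpec;

public final class GdprStreams {

  // Write path: encrypt bytes as the client writes the key.
  static OutputStream wrapForWrite(OutputStream out, byte[] secret)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(secret, "AES"));
    return new CipherOutputStream(out, cipher);
  }

  // Read path: decrypt with the secret fetched from the key metadata.
  static InputStream wrapForRead(InputStream in, byte[] secret)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES");
    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(secret, "AES"));
    return new CipherInputStream(in, cipher);
  }

  private GdprStreams() { }
}
{code}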



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2093) Add Ranger specific information to documentation

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2093:
--

 Summary: Add Ranger specific information to documentation
 Key: HDDS-2093
 URL: https://issues.apache.org/jira/browse/HDDS-2093
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Apache Ranger version 2.0 supports an Ozone Manager plug-in, which allows Ozone 
policies to be controlled via Ranger. We need to update the Ozone documentation 
to explain how to configure and use Apache Ranger as Ozone's policy engine.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2092) Support groups in administrators in SCM

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2092:
--

 Summary: Support groups in administrators in SCM
 Key: HDDS-2092
 URL: https://issues.apache.org/jira/browse/HDDS-2092
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Today, SCM administrators are a set of users specified by a key in Ozone. We 
should add support for groups, so that groups, instead of individual users, can 
be specified as SCM administrators.
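
A sketch of what the check could look like with Hadoop's {{UserGroupInformation}}; the class and the idea of a separate admin-groups setting are assumptions for illustration, not the eventual implementation.

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.security.UserGroupInformation;

public class AdminCheck {

  private final Set<String> adminUsers;
  private final Set<String> adminGroups; // hypothetical new config value

  AdminCheck(Collection<String> users, Collection<String> groups) {
    this.adminUsers = new HashSet<>(users);
    this.adminGroups = new HashSet<>(groups);
  }

  boolean isAdmin(UserGroupInformation ugi) {
    if (adminUsers.contains(ugi.getShortUserName())) {
      return true;
    }
    // Accept the caller if any of its groups is configured as an admin group.
    return Arrays.stream(ugi.getGroupNames()).anyMatch(adminGroups::contains);
  }
}
{code}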



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2091) Document who the administrators are under Ozone

2019-09-05 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2091:
--

 Summary: Document who the administrators are under Ozone 
 Key: HDDS-2091
 URL: https://issues.apache.org/jira/browse/HDDS-2091
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


Ozone uses ozone.administrators as the key that indicates who the administrators 
are. This information is missing from the documentation. We need to add it to 
both the security pages and the CLI pages.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1708) Expose metrics for unhealthy containers

2019-09-05 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1708.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~hgadre] Thank you for the contribution. I have committed this patch to the 
trunk branch.

> Expose metrics for unhealthy containers
> ---
>
> Key: HDDS-1708
> URL: https://issues.apache.org/jira/browse/HDDS-1708
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDDS-1201 introduced a capability for datanode to report unhealthy containers 
> to SCM. This Jira is to expose this information as a metric for user 
> visibility.
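
With Hadoop's metrics2 library, exposing such a number is typically a small annotated source class. A hedged sketch (class, record name, and method names are hypothetical; the actual HDDS-1708 patch may differ):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Container state metrics", context = "dfs")
public class ContainerStateMetrics {

  @Metric("Number of containers reported unhealthy by datanodes")
  private MutableGaugeLong unhealthyContainers;

  static ContainerStateMetrics register() {
    return DefaultMetricsSystem.instance().register(
        "ContainerStateMetrics", "Container state metrics",
        new ContainerStateMetrics());
  }

  // Called when a datanode reports a container as unhealthy.
  void incUnhealthy() {
    unhealthyContainers.incr();
  }

  // Called when an unhealthy container is removed or recovers.
  void decUnhealthy() {
    unhealthyContainers.decr();
  }
}
{code}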



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-04 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2087:
--

 Summary: Remove the hard coded config key in ChunkManager
 Key: HDDS-2087
 URL: https://issues.apache.org/jira/browse/HDDS-2087
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Anu Engineer


We have a hard-coded config key in {{ChunkManagerFactory.java}}:

{code}
boolean scrubber = config.getBoolean(
    "hdds.containerscrub.enabled",
    false);
{code}
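
The usual cleanup is to move the literal next to a named default so every call site references the same constants. A sketch of the shape of the fix; the class and constant names here are hypothetical (the real home would be something like {{HddsConfigKeys}}):

{code:java}
public final class ScrubberConfigKeys {
  public static final String HDDS_CONTAINER_SCRUB_ENABLED =
      "hdds.containerscrub.enabled";
  public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;

  private ScrubberConfigKeys() { }
}

// The call site in ChunkManagerFactory then becomes:
// boolean scrubber = config.getBoolean(
//     ScrubberConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED,
//     ScrubberConfigKeys.HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT);
{code}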



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1200) Ozone Data Scrubbing : Checksum verification for chunks

2019-09-04 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1200.

Fix Version/s: 0.5.0
   Resolution: Fixed

[~hgadre] Thank you for the contribution. I have committed this patch to the 
trunk.

> Ozone Data Scrubbing : Checksum verification for chunks
> ---
>
> Key: HDDS-1200
> URL: https://issues.apache.org/jira/browse/HDDS-1200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Supratim Deka
>Assignee: Hrishikesh Gadre
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Background scrubber should read each chunk and verify the checksum.
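
The verification step itself reduces to recomputing a digest over the chunk bytes and comparing it with the stored checksum. A minimal sketch, assuming SHA-256 and a checksum held as raw bytes (Ozone supports several checksum types, and the real scrubber works per checksum window rather than per file):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChunkVerifier {

  // Recompute the chunk's digest and compare it with the stored checksum.
  static boolean verifyChunk(Path chunkFile, byte[] storedChecksum)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    byte[] buffer = new byte[8192];
    try (InputStream in = Files.newInputStream(chunkFile)) {
      int read;
      while ((read = in.read(buffer)) != -1) {
        digest.update(buffer, 0, read);
      }
    }
    // Constant-time comparison of the recomputed and stored digests.
    return MessageDigest.isEqual(digest.digest(), storedChecksum);
  }
}
{code}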



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2019-09-04 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922937#comment-16922937
 ] 

Anu Engineer commented on HDFS-14703:
-

[~shv] We ([~arp], [~xyao], [~jojochuang], [~szetszwo]) were looking at the 
patch, as well as the document, and came across some questions that we were not 
able to answer. I have been tasked with asking these.
 # The Block Partition - We understand that you are proposing that the block 
map be divided into GSets that match the Inode partitions. What we could not 
puzzle out was how to handle block reports. One suggestion we came up with was 
that, in the initial parts of the work, we leave the block map as a single 
monolith. It would be interesting to hear how you plan to partition the block 
map, especially where block reports are involved.
 # The locks in the Range Map and the Range Sets - It is not very clear what 
the semantics would be. If I hold a Range Map lock, does it mean that I can 
operate safely? What happens to the Range Set locks? Do I need to make sure 
that all users of a Range Set have released their locks? If I am holding the 
Range Map lock, can no other thread enter? Is it possible that the Range Map 
lock might have to wait a really long time for the Range Set locks to be 
released?

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, NameNode 
> Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2060) Create Ozone specific LICENSE file for the Ozone source and binary packages

2019-08-31 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2060:
---
Fix Version/s: 0.5.0
   0.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. I have committed this patch to trunk 
and ozone-0.4.1 branch. FYI, [~nandakumar131]

> Create Ozone specific LICENSE file for the Ozone source and binary packages
> ---
>
> Key: HDDS-2060
> URL: https://issues.apache.org/jira/browse/HDDS-2060
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> With HDDS-2058 the Ozone (source) release package doesn't contain the Hadoop 
> sources any more. We need to create an adjusted LICENSE file for the Ozone 
> source package (we already created a specific LICENSE file for the binary 
> package, which is not changed).
> In the new LICENSE file we should include entries only for the sources which 
> are part of the Ozone release.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1413) TestCloseContainerCommandHandler is flaky

2019-08-30 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1413:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> TestCloseContainerCommandHandler is flaky
> -
>
> Key: HDDS-1413
> URL: https://issues.apache.org/jira/browse/HDDS-1413
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Major
>  Labels: ozone-flaky-test, pull-request-available
> Fix For: 0.5.0
>
> Attachments: ci.log
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestCloseContainerCommandHandler.testCloseContainerViaStandalone is flaky; we 
> get the exception below when it fails.
> {code}
> org.apache.ratis.protocol.NotLeaderException: Server 
> a200dff7-f26d-4be3-addd-e8e0ca569ae0 is not the leader (null). Request must 
> be sent to leader.
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.generateNotLeaderException(RaftServerImpl.java:448)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.checkLeaderState(RaftServerImpl.java:419)
>   at 
> org.apache.ratis.server.impl.RaftServerImpl.submitClientRequestAsync(RaftServerImpl.java:514)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$submitClientRequestAsync$7(RaftServerProxy.java:333)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$null$5(RaftServerProxy.java:328)
>   at org.apache.ratis.util.JavaUtils.callAsUnchecked(JavaUtils.java:109)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.lambda$submitRequest$6(RaftServerProxy.java:328)
>   at 
> java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:981)
>   at 
> java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2124)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.submitRequest(RaftServerProxy.java:327)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.submitClientRequestAsync(RaftServerProxy.java:333)
>   at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.submitRequest(XceiverServerRatis.java:485)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler.createContainer(TestCloseContainerCommandHandler.java:310)
>   at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler.testCloseContainerViaStandalone(TestCloseContainerCommandHandler.java:111)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 

[jira] [Updated] (HDDS-2042) Avoid log on console with Ozone shell

2019-08-30 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2042:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Avoid log on console with Ozone shell
> -
>
> Key: HDDS-2042
> URL: https://issues.apache.org/jira/browse/HDDS-2042
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> HDDS-1489 fixed several sample docker compose configs to avoid unnecessary 
> messages on the console when running e.g. {{ozone sh key put}}.  The goal of 
> this task is to fix the remaining ones.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2057) Incorrect Default OM Port in Ozone FS URI Error Message

2019-08-29 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918774#comment-16918774
 ] 

Anu Engineer commented on HDDS-2057:


[~sdeka] Did you forget to attach the patch? Or, if it is a pull request, can 
you please link it here?

 

> Incorrect Default OM Port in Ozone FS URI Error Message
> ---
>
> Key: HDDS-2057
> URL: https://issues.apache.org/jira/browse/HDDS-2057
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Minor
>
> The error message displayed from BasicOzoneFilesystem.initialize specifies 
> 5678 as the OM port. This is not the default port.
> "Ozone file system URL " +
>  "should be one of the following formats: " +
>  "o3fs://bucket.volume/key OR " +
>  "o3fs://bucket.volume.om-host.example.com/key OR " +
>  "o3fs://bucket.volume.om-host.example.com:5678/key";
>  
> This should be fixed to pull the default value from the configuration 
> parameter, instead of a hard-coded value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1950:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk.

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call throws HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in the 
> KeyManagerImpl.listParts the  ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first entry is not yet available in this use case.
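
A minimal guard along the lines of the description, mirroring the fragment quoted above; the fallback accessor is an assumption, and the actual HDDS-1950 fix may take a different shape:

{code:java}
HddsProtos.ReplicationType replicationType;
if (partKeyInfoMap.isEmpty()) {
  // No part has been uploaded yet, so there is no first entry to read.
  // Fall back to the type recorded when the upload was initiated.
  replicationType = multipartKeyInfo.getReplicationType(); // assumed accessor
} else {
  replicationType =
      partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
}
{code}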



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1942:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1937) Acceptance tests fail if scm webui shows invalid json

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1937:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Acceptance tests fail if scm webui shows invalid json
> -
>
> Key: HDDS-1937
> URL: https://issues.apache.org/jira/browse/HDDS-1937
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Acceptance test of a nightly build is failed with the following error:
> {code}
> Creating ozonesecure_datanode_3 ... 
> 
> Creating ozonesecure_kdc_1  ... done
> 
> Creating ozonesecure_om_1   ... done
> 
> Creating ozonesecure_scm_1  ... done
> 
> Creating ozonesecure_datanode_3 ... done
> 
> Creating ozonesecure_kms_1  ... done
> 
> Creating ozonesecure_s3g_1  ... done
> 
> Creating ozonesecure_datanode_2 ... done
> 
> Creating ozonesecure_datanode_1 ... done
> parse error: Invalid numeric literal at line 2, column 0
> {code}
> https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-5b87q/acceptance/output.log
> The problem is in the script which checks the number of available datanodes.
> If the HTTP endpoint of the SCM is already started but not yet ready, it may 
> return a simple HTML error message instead of JSON, which cannot be parsed by 
> jq:
> In testlib.sh:
> {code}
>   37   │   if [[ "${SECURITY_ENABLED}" == 'true' ]]; then
>   38   │ docker-compose -f "${compose_file}" exec -T scm bash -c "kinit 
> -k HTTP/scm@EXAMPL
>│ E.COM -t /etc/security/keytabs/HTTP.keytab && curl --negotiate -u : 
> -s '${jmx_url}'"
>   39   │   else
>   40   │ docker-compose -f "${compose_file}" exec -T scm curl -s 
> "${jmx_url}"
>   41   │   fi \
>   42   │ | jq -r '.beans[0].NodeCount[] | select(.key=="HEALTHY") | 
> .value'
> {code}
> One possible fix is to adjust the error handling (set +x / set -x) per method 
> instead of using a generic set -x at the beginning. It would provide more 
> predictable behavior. In our case count_datanode should never fail (as the 
> caller method, wait_for_datanodes, can retry anyway).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-168) Add ScmGroupID to Datanode Version File

2019-08-28 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917995#comment-16917995
 ] 

Anu Engineer commented on HDDS-168:
---

No. Needs to be added.

> Add ScmGroupID to Datanode Version File
> ---
>
> Key: HDDS-168
> URL: https://issues.apache.org/jira/browse/HDDS-168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> Add the field {{ScmGroupID}} to Datanode Version file. This field identifies 
> the set of SCMs that this datanode talks to, or takes commands from.
> This value is not same as Cluster ID – since a cluster can technically have 
> more than one SCM group.
> Refer to [~anu]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-156?focusedCommentId=16511903=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511903]
>  in HDDS-156.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1881) Design doc: decommissioning in Ozone

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1881.

Fix Version/s: 0.5.0
   Resolution: Fixed

Thank you for all the comments, discussions, and contributions to this design. 
I have committed this design doc, since we have not had any more comments in 
the last 30 days.

> Design doc: decommissioning in Ozone
> 
>
> Key: HDDS-1881
> URL: https://issues.apache.org/jira/browse/HDDS-1881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: design, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 43h
>  Remaining Estimate: 0h
>
> Design doc can be attached to the documentation. In this jira the design doc 
> will be attached and merged to the documentation page.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-738) Removing REST protocol support from OzoneClient

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-738:
--
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

I have committed this to the trunk. I will follow up with 2 more JIRAs for this 
work item and make them blockers for the 0.5.0 release.

> Removing REST protocol support from OzoneClient
> ---
>
> Key: HDDS-738
> URL: https://issues.apache.org/jira/browse/HDDS-738
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Nanda kumar
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Since we have a functional {{S3Gateway}} for Ozone, which works on the REST 
> protocol, having REST protocol support in OzoneClient feels redundant, and it 
> will take a lot of effort to keep it up to date.
> As S3Gateway is in a functional state now, I propose to remove REST protocol 
> support from OzoneClient.
> Once we remove REST support from OzoneClient, the following will be the 
> interface to access Ozone cluster
>  * OzoneClient (RPC Protocol)
>  * OzoneFS (RPC Protocol)
>  * S3Gateway (REST Protocol)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2049) Fix Ozone REST client documentation

2019-08-28 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2049:
--

 Summary: Fix Ozone REST client documentation
 Key: HDDS-2049
 URL: https://issues.apache.org/jira/browse/HDDS-2049
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 0.5.0


We have removed Ozone REST protocol support and moved to using S3 as the 
standard REST protocol. The Ozone documentation needs to be updated for the 
0.5.0 release.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1596) Create service endpoint to download configuration from SCM

2019-08-28 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1596:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Create service endpoint to download configuration from SCM
> --
>
> Key: HDDS-1596
> URL: https://issues.apache.org/jira/browse/HDDS-1596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> As written in the design doc (see the parent issue), it was proposed that the 
> other services download the configuration from the SCM.
> I propose to create a separate endpoint to provide the Ozone configuration. 
> /conf can't be used, as it contains *all* the configuration and we need only 
> the modified configuration.
> The easiest way to implement this feature is:
>  * Create a simple rest endpoint which publishes all the configuration
>  * Download the configurations to $HADOOP_CONF_DIR/ozone-global.xml during 
> the service startup.
>  * Add ozone-global.xml as an additional config source (before ozone-site.xml 
> but after ozone-default.xml)
>  * The download can be optional
> With this approach we keep the support of the existing manual configuration 
> (ozone-site.xml has higher priority) but we can download the configuration to 
> a separated file during the startup, which will be loaded.
> There is no magic: the configuration file is saved and it's easy to debug 
> what's going on as the OzoneConfiguration is loaded from the $HADOOP_CONF_DIR 
> as before.
> Possible follow-up steps:
>  * Migrate all the other services (recon, s3g) to the new approach. (possible 
> newbie jiras)
>  * Improve the CLI to define the SCM address. (As of now we use 
> ozone.scm.names)
>  * Create a service/hostname registration mechanism and autofill some of the 
> configuration based on the topology information.
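
A rough sketch of the startup step described above; the endpoint path, helper name, and assumption that HADOOP_CONF_DIR is set are all placeholders, not the merged implementation:

{code:java}
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.apache.hadoop.conf.Configuration;

public class ConfigBootstrap {

  // Download the SCM-published configuration during service startup and
  // register it as an extra config source, as the description outlines.
  static Configuration loadWithGlobalDefaults(String scmHttpAddress)
      throws Exception {
    // Assumes HADOOP_CONF_DIR is set in the environment.
    Path target = Paths.get(System.getenv("HADOOP_CONF_DIR"),
        "ozone-global.xml");
    // Hypothetical endpoint path serving only the modified configuration.
    URL url = new URL("http://" + scmHttpAddress + "/serviceconfiguration");
    try (InputStream in = url.openStream()) {
      Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
    Configuration conf = new Configuration();
    // Resources added later override earlier ones, so adding ozone-site.xml
    // after ozone-global.xml keeps manual configuration at higher priority.
    conf.addResource(target.toUri().toURL());
    conf.addResource("ozone-site.xml");
    return conf;
  }
}
{code}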



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


