[ 
https://issues.apache.org/jira/browse/HDDS-2015?focusedWorklogId=307327&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-307327
 ]

ASF GitHub Bot logged work on HDDS-2015:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Sep/19 16:56
            Start Date: 05/Sep/19 16:56
    Worklog Time Spent: 10m 
      Work Description: anuengineer commented on pull request #1386: HDDS-2015. 
Encrypt/decrypt key using symmetric key while writing/reading
URL: https://github.com/apache/hadoop/pull/1386#discussion_r321374583
 
 

 ##########
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##########
 @@ -601,6 +605,16 @@ public OzoneOutputStream createKey(
     HddsClientUtils.verifyResourceName(volumeName, bucketName);
     HddsClientUtils.checkNotNull(keyName, type, factor);
     String requestId = UUID.randomUUID().toString();
+
+    if(Boolean.valueOf(metadata.get(OzoneConsts.GDPR_FLAG))){
 
 Review comment:
   @ajayydv Yes, it can be done; but then the cost of this operation would be 
borne by the data node. Today the EC path in Hadoop/Ozone pushes that cost to 
the client, so the data nodes never see it. Also, we currently have no mechanism 
to deliver the key to the data node, which means that even if we lose an HDD 
disk, whoever finds it cannot decode the data, since the data node never sees 
the key.
   
   > From security side that will be more secure as we will not be sharing it 
over the wire on client side.
   
   Completely agree; we will need to make sure that the RPC is secure on the 
wire when we send this key. But remember, GDPR is not about security; it is 
about forgetting the key when we delete a block, so that we can write and sign 
a document stating that the file has been deleted. Since it is not a security 
feature, it is okay even if the key is leaked; all we are saying, or promising, 
is that you cannot get to the key via the Ozone Manager. If someone has made a 
copy of that file, that is a problem, but one that we do not know of.
   
   @dineshchitlangia has attached a design document, where he discusses some of 
these issues, at https://issues.apache.org/jira/browse/HDDS-2012
   
   
 


Issue Time Tracking
-------------------

    Worklog Id:     (was: 307327)
    Time Spent: 4.5h  (was: 4h 20m)

> Encrypt/decrypt key using symmetric key while writing/reading
> -------------------------------------------------------------
>
>                 Key: HDDS-2015
>                 URL: https://issues.apache.org/jira/browse/HDDS-2015
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Dinesh Chitlangia
>            Assignee: Dinesh Chitlangia
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> *Key Write Path (Encryption)*
> When a bucket's metadata has gdprEnabled=true, we generate the GDPRSymmetricKey 
> and add it to the Key Metadata before we create the Key.
> This ensures that the key is encrypted before writing.
> *Key Read Path(Decryption)*
> While reading the Key, we check for gdprEnabled=true and then get the 
> GDPRSymmetricKey based on the secret/algorithm fetched from the Key Metadata.
> We then create a stream to decrypt the key and pass it on to the client.
> *Test*
> Create Key in GDPR Enabled Bucket -> Read Key -> Verify content is as 
> expected -> Update Key Metadata to remove the gdprEnabled flag -> Read Key -> 
> Confirm the content is not as expected.
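
For reference, the write/read flow described above can be sketched end to end 
with plain javax.crypto. Everything in this sketch except the gdprEnabled flag 
(OzoneConsts.GDPR_FLAG in the patch) is illustrative: the metadata key names, 
the class name, and the in-memory byte-array "storage" are assumptions, not the 
Ozone implementation.

// Hypothetical end-to-end sketch of the described flow (not the Ozone code):
// the write path stores the secret/algorithm in key metadata and encrypts,
// the read path fetches them back and wraps the data in a decrypting stream.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.SecretKeySpec;

public final class GdprPathSketch {

  public static void main(String[] args) throws Exception {
    Map<String, String> keyMetadata = new HashMap<>();
    keyMetadata.put("gdprEnabled", "true");   // stand-in for OzoneConsts.GDPR_FLAG

    // Write path: generate the symmetric secret, record it in key metadata,
    // and encrypt the value before it is written.
    byte[] secret = new byte[16];
    new SecureRandom().nextBytes(secret);
    keyMetadata.put("gdprSecret", Base64.getEncoder().encodeToString(secret)); // illustrative name
    keyMetadata.put("gdprAlgorithm", "AES");                                   // illustrative name

    Cipher enc = Cipher.getInstance(keyMetadata.get("gdprAlgorithm"));
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(secret, "AES"));
    ByteArrayOutputStream stored = new ByteArrayOutputStream();
    try (CipherOutputStream out = new CipherOutputStream(stored, enc)) {
      out.write("hello gdpr".getBytes(StandardCharsets.UTF_8));
    }

    // Read path: if gdprEnabled=true, rebuild the cipher from the metadata
    // and hand the caller a decrypting stream.
    if (Boolean.parseBoolean(keyMetadata.get("gdprEnabled"))) {
      byte[] fetched = Base64.getDecoder().decode(keyMetadata.get("gdprSecret"));
      Cipher dec = Cipher.getInstance(keyMetadata.get("gdprAlgorithm"));
      dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(fetched, "AES"));
      try (InputStream in = new CipherInputStream(
          new ByteArrayInputStream(stored.toByteArray()), dec)) {
        System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
      }
    }
  }
}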



