Istvan Fajth created HDDS-2390:
----------------------------------

             Summary: SCMSecurityClient and mixing of abstraction layers
                 Key: HDDS-2390
                 URL: https://issues.apache.org/jira/browse/HDDS-2390
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Istvan Fajth
During the work on HDDS-2362, I ran into a bigger topic around security and how the code is organized for internal certificate handling.

What I see is that we have two methods, HDDSUtils.getScmSecurityClient and HDDSClientUtils.getSCMSecurityClient, which do essentially the same thing; the latter just has no infinite retries and is used only from one test class. Besides these, DefaultCertificateClient replicates the same logic as well.

Evaluating the first one in HDDSUtils further, I see that it is used in three places: ContainerOperationClient, HddsDataNodeService, and OzoneManager, and the latter two also have their own DefaultCertificateClient descendants. ContainerOperationClient uses the client to get the CA certificate. HddsDataNodeService uses it to obtain a signed certificate for the DataNode, and OzoneManager uses it for the same purpose. In the HddsDataNodeService and OzoneManager classes we are effectively tinkering with protobuf requests, responses, and other protobuf-related code, which is a pretty low-level communication concern to bubble up that high.

Based on the above, I would propose encapsulating this logic in a client class that handles all the protobuf-related work below DefaultCertificateClient, and letting all the code that currently depends on SCMSecurityProtocolClientSideTranslatorPB depend on this new class or on the already existing DefaultCertificateClient methods and their descendants.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
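To illustrate the proposed layering, here is a minimal, self-contained sketch (all class and method names below are hypothetical stand-ins, not the real Ozone API): one client class owns the protobuf translation formerly done by the SCMSecurityProtocolClientSideTranslatorPB callers, so callers such as DefaultCertificateClient and its descendants only see plain Java types.

```java
// Hypothetical stand-in for the protobuf translation layer
// (SCMSecurityProtocolClientSideTranslatorPB in the real code base).
interface ScmSecurityProtocolPB {
  String getCACertificatePem();                  // would unwrap a protobuf response
  String getSignedCertificatePem(String csrPem); // would build a protobuf request
}

// The proposed single point of protobuf handling, sitting below
// DefaultCertificateClient. Callers never touch protobuf types.
class ScmSecurityClient {
  private final ScmSecurityProtocolPB rpcProxy;

  ScmSecurityClient(ScmSecurityProtocolPB rpcProxy) {
    this.rpcProxy = rpcProxy;
  }

  /** What ContainerOperationClient needs: the CA certificate. */
  String getCACertificate() {
    return rpcProxy.getCACertificatePem();
  }

  /** What HddsDataNodeService and OzoneManager need: a signed certificate. */
  String getSignedCertificate(String csrPem) {
    return rpcProxy.getSignedCertificatePem(csrPem);
  }
}

public class ScmSecurityClientSketch {
  public static void main(String[] args) {
    // Stubbed protobuf layer, just for the sketch.
    ScmSecurityProtocolPB stub = new ScmSecurityProtocolPB() {
      public String getCACertificatePem() {
        return "-----BEGIN CERTIFICATE-----...";
      }
      public String getSignedCertificatePem(String csrPem) {
        return "signed:" + csrPem;
      }
    };
    ScmSecurityClient client = new ScmSecurityClient(stub);
    System.out.println(client.getCACertificate());
    System.out.println(client.getSignedCertificate("csr-1"));
  }
}
```

With such a class, the duplicated construction logic in HDDSUtils, HDDSClientUtils, and DefaultCertificateClient would collapse into one place, and the protobuf dependency would stop leaking into HddsDataNodeService and OzoneManager.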