[ https://issues.apache.org/jira/browse/HADOOP-10693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046256#comment-14046256 ]

Colin Patrick McCabe commented on HADOOP-10693:
-----------------------------------------------

bq. Right, I also noticed them, actually \[the warnings\] were caused by the 
fix of one previous comment

I was just asking you to change the type of the variables to unsigned, so that 
you won't need any casts.  It seems that you dropped the cast but didn't 
change the type to unsigned.

If you just apply this patch to v5, you will see what I mean.

{code}
diff --git a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpenSSLCipher.c b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpenSSLCipher.c
index 08cb5e8..39a71c6 100644
--- a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpenSSLCipher.c
+++ b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpenSSLCipher.c
@@ -263,8 +263,8 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_crypto_OpenSSLCipher_update
         "Output buffer is not sufficient.");
     return 0;
   }
-  char *input_bytes = (*env)->GetDirectBufferAddress(env, input);
-  char *output_bytes = (*env)->GetDirectBufferAddress(env, output);
+  unsigned char *input_bytes = (*env)->GetDirectBufferAddress(env, input);
+  unsigned char *output_bytes = (*env)->GetDirectBufferAddress(env, output);
   if (input_bytes == NULL || output_bytes == NULL) {
     THROW(env, "java/lang/InternalError", "Cannot get buffer address.");
     return 0;
@@ -273,8 +273,8 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_crypto_OpenSSLCipher_update
   output_bytes = output_bytes + output_offset;
   
   int output_len = 0;
-  if (!dlsym_EVP_CipherUpdate(context, (unsigned char *)output_bytes,  \
-      &output_len, (unsigned char *)input_bytes, input_len)) {
+  if (!dlsym_EVP_CipherUpdate(context, output_bytes,  \
+      &output_len, input_bytes, input_len)) {
     dlsym_EVP_CIPHER_CTX_cleanup(context);
     THROW(env, "java/lang/InternalError", "Error in EVP_CipherUpdate.");
     return 0;
@@ -308,7 +308,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_crypto_OpenSSLCipher_doFinal
         "Output buffer is not sufficient.");
     return 0;
   }
-  char *output_bytes = (*env)->GetDirectBufferAddress(env, output);
+  unsigned char *output_bytes = (*env)->GetDirectBufferAddress(env, output);
   if (output_bytes == NULL) {
     THROW(env, "java/lang/InternalError", "Cannot get buffer address.");
     return 0;
@@ -316,8 +316,7 @@ JNIEXPORT jint JNICALL Java_org_apache_hadoop_crypto_OpenSSLCipher_doFinal
   output_bytes = output_bytes + offset;
   
   int output_len = 0;
-  if (!dlsym_EVP_CipherFinal_ex(context, (unsigned char *)output_bytes,  \
-      &output_len)) {
+  if (!dlsym_EVP_CipherFinal_ex(context, output_bytes, &output_len)) {
     dlsym_EVP_CIPHER_CTX_cleanup(context);
     THROW(env, "java/lang/InternalError", "Error in EVP_CipherFinal_ex.");
     return 0;
{code}

There is no need for casts here.

bq. Agree, let’s change the name \[of the test\].

It's good that you changed the name of the test, but there are still some other 
class names we should fix.  Since those names show up in config XMLs, it's 
probably even more important to make them readable.

Example:

{code}
cmccabe@keter:~/hadoop3> find | grep -i aesctrcrypto
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpenSSLAESCTRCryptoCodec.java
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JCEAESCTRCryptoCodec.java
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AESCTRCryptoCodec.java
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpenSSLAESCTRCryptoCodec.java
{code}

Probably want {{JceAesCtrCryptoCodec}}, {{AesCtrCryptoCodec}}, etc. etc.

bq. I found it’s a bit inconvenient to do this. 1. -Drequire.openssl is only 
for build, we can’t get this in hadoop code. 2. If user specify 
-Drequire.openssl but there is no openssl available, then build will fail.

The intended behavior of {{\-Drequire.openssl}} is to fail the build if 
{{libcrypto.so}} is not there.  This is by design, since otherwise it is very 
hard to get reliable builds.  By the way, I can see that you got this right in 
your patch (I just tested it)... specifying {{\-Drequire.openssl}} makes the 
build fail on a system without OpenSSL.  I added the "require" flags because 
previously we were having so much trouble ensuring our builds contained what 
they were supposed to contain.

There is a way to pass Java system properties on to Surefire, but it's finicky.  A 
simpler approach might be to do something like this:
{code}
    Preconditions.checkState(NativeCodeLoader.buildSupportsOpenssl());
    Assert.assertTrue(OpenSSLCipher.isNativeCodeLoaded());
{code}

We need to make sure that the build fails when there is an issue loading 
{{libcrypto.so}} on Jenkins.  Otherwise, we can never be sure about our test 
coverage.
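The idea behind that snippet can be sketched in isolation.  Everything below is a hypothetical stand-in (none of it is Hadoop code; {{buildSupportsOpenssl()}} here only mimics {{NativeCodeLoader.buildSupportsOpenssl()}}), but it shows why throwing beats silently skipping:

```java
// A minimal, self-contained sketch of the fail-fast pattern.
// buildSupportsOpenssl() is a hypothetical stand-in; in a real Hadoop
// test it would query the flags the native build was compiled with.
public class NativeCryptoPrecondition {

  static boolean buildSupportsOpenssl() {
    return true;  // stand-in; the real check consults the native build
  }

  // Throw (rather than skip) when native support is missing, so a
  // misconfigured Jenkins slave fails the build instead of silently
  // shrinking test coverage.
  static void checkNativeCrypto() {
    if (!buildSupportsOpenssl()) {
      throw new IllegalStateException(
          "build was not compiled with OpenSSL support");
    }
  }

  public static void main(String[] args) {
    checkNativeCrypto();
    System.out.println("native crypto precondition satisfied");
  }
}
```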

> Implementation of AES-CTR CryptoCodec using JNI to OpenSSL
> ----------------------------------------------------------
>
>                 Key: HADOOP-10693
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10693
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: security
>    Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>             Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
>         Attachments: HADOOP-10693.1.patch, HADOOP-10693.2.patch, 
> HADOOP-10693.3.patch, HADOOP-10693.4.patch, HADOOP-10693.5.patch, 
> HADOOP-10693.patch
>
>
> In HADOOP-10603, we have an implementation of AES-CTR CryptoCodec using the 
> Java JCE provider. 
> To get high performance, the configured JCE provider should utilize native 
> code and AES-NI, but in JDK 6 and 7 the embedded Java provider doesn't 
> support it.
>  
> Considering that not all Hadoop users will use a provider like Diceros or be 
> able to get a signed certificate from Oracle to develop a custom provider, 
> this JIRA will provide an implementation of AES-CTR CryptoCodec using JNI to 
> call OpenSSL directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
