[ https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13844347#comment-13844347 ]
Yi Liu commented on HADOOP-10150:
---------------------------------

Created a sub-task, HADOOP-10156: that JIRA defines Encryptor and Decryptor, which are buffer-based interfaces for encryption and decryption. The standard javax.crypto.Cipher interface is used to provide the AES/CTR encryption/decryption implementation, so the default implementation can be replaced by plugging in another JCE provider such as Diceros. Diceros is an open-source project under the Rhino project; it implements the Cipher interface and provides high-performance encryption/decryption compared to the default JCE provider. Initial performance tests show a 20x speedup in CTR mode over the default JCE provider in JDK 1.7_u45. Moreover, the Encryptor/Decryptor interfaces implement an internal buffer to further improve performance over javax.crypto.Cipher. The hadoop-crypto component was removed from the latest patch as a result of Diceros emerging. One can use "cfs.cipher.provider" to specify the JCE provider, for example, .... (a minimal usage sketch is appended after the quoted issue description below).

Diceros project link: https://github.com/intel-hadoop/diceros

> Hadoop cryptographic file system
> --------------------------------
>
>                 Key: HADOOP-10150
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10150
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 3.0.0
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>              Labels: rhino
>             Fix For: 3.0.0
>
>         Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file system.pdf
>
>
> There is an increasing need for securing data when Hadoop customers use various upper-layer applications, such as Map-Reduce, Hive, Pig, HBase and so on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data. It is based on the HADOOP "FilterFileSystem" decorating DFS or other file systems, and is transparent to upper-layer applications. It's configurable, scalable and fast.
> High-level requirements:
> 1. Transparent to, and requiring no modification of, upper-layer applications.
> 2. "Seek" and "PositionedReadable" are supported for the CFS input stream if the wrapped file system supports them.
> 3. Very high performance for encryption and decryption; they will not become a bottleneck.
> 4. Can decorate HDFS and all other file systems in Hadoop without modifying the existing structure of the file system, such as the namenode and datanode structure if the wrapped file system is HDFS.
> 5. Admins can configure encryption policies, such as which directories will be encrypted.
> 6. A robust key management framework.
> 7. Support for Pread and append operations if the wrapped file system supports them.
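The sketch referenced above: a minimal, assumed illustration (not the actual Encryptor/Decryptor API from the attached patch) of buffer-based AES/CTR encryption through the standard javax.crypto.Cipher interface with a pluggable JCE provider. The class name, constructor shape, and the way the provider name is passed in are illustrative assumptions; the provider name itself would come from a setting such as "cfs.cipher.provider".

{code:java}
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch only: not the HADOOP-10156 Encryptor/Decryptor API.
public class CtrEncryptorSketch {
  private final Cipher cipher;

  public CtrEncryptorSketch(byte[] key, byte[] iv, String provider)
      throws GeneralSecurityException {
    // "AES/CTR/NoPadding" is the CTR-mode transformation. If a provider
    // name is supplied, that JCE provider is used instead of the JDK default.
    cipher = (provider == null || provider.isEmpty())
        ? Cipher.getInstance("AES/CTR/NoPadding")
        : Cipher.getInstance("AES/CTR/NoPadding", provider);
    cipher.init(Cipher.ENCRYPT_MODE,
        new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
  }

  // Encrypts everything remaining in 'in' into 'out'. CTR is a stream mode,
  // so the output length equals the input length.
  public void encrypt(ByteBuffer in, ByteBuffer out)
      throws GeneralSecurityException {
    cipher.update(in, out);
  }
}
{code}

With the Diceros provider registered, passing its provider name here would route encryption through Diceros; passing null or an empty string keeps the default JCE provider.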