[jira] [Resolved] (HADOOP-10154) Provide cryptographic filesystem implementation and its data IO.
[ https://issues.apache.org/jira/browse/HADOOP-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10154.
Resolution: Won't Fix

Provide cryptographic filesystem implementation and its data IO.

Key: HADOOP-10154
URL: https://issues.apache.org/jira/browse/HADOOP-10154
Project: Hadoop Common
Issue Type: Sub-task
Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Labels: rhino
Fix For: 3.0.0

The JIRA includes a cryptographic filesystem data InputStream which extends FSDataInputStream and an OutputStream which extends FSDataOutputStream. An implementation of the cryptographic filesystem is also included in this JIRA.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10605) CryptoFileSystem decorator documentation
Alejandro Abdelnur created HADOOP-10605:

Summary: CryptoFileSystem decorator documentation
Key: HADOOP-10605
URL: https://issues.apache.org/jira/browse/HADOOP-10605
Project: Hadoop Common
Issue Type: Sub-task
Components: fs
Reporter: Alejandro Abdelnur
Assignee: Yi Liu

Documentation explaining how the Crypto filesystem works and how it is configured.
[jira] [Resolved] (HADOOP-7935) Fix the scope of the dependencies across subprojects
[ https://issues.apache.org/jira/browse/HADOOP-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HADOOP-7935.
Resolution: Duplicate

[doing self-clean up of JIRAs]

Fix the scope of the dependencies across subprojects

Key: HADOOP-7935
URL: https://issues.apache.org/jira/browse/HADOOP-7935
Project: Hadoop Common
Issue Type: Improvement
Components: build
Affects Versions: 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

As a follow-up of HADOOP-7934, we should fix the scope of dependencies; many dependencies meant for testing have compile scope and end up in the binary distribution (e.g. junit).
[jira] [Resolved] (HADOOP-7412) Mavenization Umbrella
[ https://issues.apache.org/jira/browse/HADOOP-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HADOOP-7412.
Resolution: Invalid

[doing self-clean up of JIRAs] Closing as invalid, as this has been done in different JIRAs.

Mavenization Umbrella

Key: HADOOP-7412
URL: https://issues.apache.org/jira/browse/HADOOP-7412
Project: Hadoop Common
Issue Type: Task
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

Umbrella JIRA for all Mavenization JIRAs.
[jira] [Resolved] (HADOOP-10153) Define Crypto policy interfaces and provide its default implementation.
[ https://issues.apache.org/jira/browse/HADOOP-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10153.
Resolution: Won't Fix

Define Crypto policy interfaces and provide its default implementation.

Key: HADOOP-10153
URL: https://issues.apache.org/jira/browse/HADOOP-10153
Project: Hadoop Common
Issue Type: Sub-task
Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Labels: rhino
Fix For: 3.0.0

The JIRA defines a crypto policy interface; developers/users can implement their own crypto policy to decide how files/directories are encrypted. This JIRA also includes a default implementation.
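Since HADOOP-10153 was resolved as Won't Fix, the interface described above was never merged; the following is only a minimal sketch of the idea, with hypothetical names (CryptoPolicy, PrefixPolicy) invented for illustration:

```java
public class CryptoPolicyDemo {
    // Hypothetical policy interface: decides which files/directories get encrypted.
    interface CryptoPolicy {
        boolean isEncrypted(String path);
    }

    // A simple default implementation: encrypt everything under a configured prefix.
    static final class PrefixPolicy implements CryptoPolicy {
        private final String prefix;
        PrefixPolicy(String prefix) { this.prefix = prefix; }
        public boolean isEncrypted(String path) { return path.startsWith(prefix); }
    }

    public static void main(String[] args) {
        CryptoPolicy policy = new PrefixPolicy("/secure/");
        // Paths under the prefix are encrypted; others are not.
        System.out.println(policy.isEncrypted("/secure/data.txt") + " "
            + policy.isEncrypted("/public/data.txt"));
    }
}
```

A filesystem layer could consult such a policy on create/open to decide whether to wrap the stream in a cipher stream.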
[jira] [Resolved] (HADOOP-10151) Implement a Buffer-Based Cipher InputStream and OutputStream
[ https://issues.apache.org/jira/browse/HADOOP-10151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu resolved HADOOP-10151.
Resolution: Won't Fix

Implement a Buffer-Based Cipher InputStream and OutputStream

Key: HADOOP-10151
URL: https://issues.apache.org/jira/browse/HADOOP-10151
Project: Hadoop Common
Issue Type: Sub-task
Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
Labels: rhino
Fix For: 3.0.0
Attachments: HADOOP-10151.patch

The Cipher InputStream and OutputStream are buffer-based, and the buffer is used to cache the encrypted data or result. The Cipher InputStream is used to read encrypted data, and the result is plain text. The Cipher OutputStream is used to write plain data, and the result is encrypted data.
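The read-encrypted/write-plain split described above mirrors the JDK's own javax.crypto stream wrappers. As a rough illustration (plain JDK code, not the HADOOP-10151 patch; the key and cipher mode here are arbitrary choices for the demo):

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class CipherStreamDemo {
    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 128-bit demo key
        SecretKeySpec spec = new SecretKeySpec(key, "AES");

        // Writing plain data through a CipherOutputStream yields encrypted bytes.
        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, spec);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (CipherOutputStream out = new CipherOutputStream(sink, enc)) {
            out.write("plain text".getBytes(StandardCharsets.UTF_8));
        }
        byte[] encrypted = sink.toByteArray();

        // Reading the encrypted bytes back through a CipherInputStream yields plain text.
        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, spec);
        ByteArrayOutputStream plain = new ByteArrayOutputStream();
        try (CipherInputStream in =
                 new CipherInputStream(new ByteArrayInputStream(encrypted), dec)) {
            int b;
            while ((b = in.read()) != -1) {
                plain.write(b);
            }
        }
        System.out.println(new String(plain.toByteArray(), StandardCharsets.UTF_8));
    }
}
```

A filesystem-level implementation would additionally need positioned reads and seek support, which is what made the HDFS-side design non-trivial.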
Re: #Contributors on JIRA
Last time we cleaned up the names of people who had not contributed in a long time. That could be an option.

On Mon, May 12, 2014 at 12:03 PM, Karthik Kambatla ka...@cloudera.com wrote:

Hi devs,

Looks like we ran over the maximum contributors allowed for a project, again. I don't remember what we did last time and can't find it in my email either. Can we bump up the number of contributors allowed? Otherwise, we might have to remove some of the currently inactive contributors from the list?

Thanks,
Karthik
[jira] [Resolved] (HADOOP-8076) hadoop (1.x) ant build fetches ivy JAR every time
[ https://issues.apache.org/jira/browse/HADOOP-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HADOOP-8076.
Resolution: Won't Fix

[doing self-clean up of JIRAs] Seems there is no interest in this.

hadoop (1.x) ant build fetches ivy JAR every time

Key: HADOOP-8076
URL: https://issues.apache.org/jira/browse/HADOOP-8076
Project: Hadoop Common
Issue Type: Improvement
Components: build
Affects Versions: 1.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Attachments: HADOOP-8076.patch

The get .. task does a timestamp check; as the ivy JAR is a final release, it should use the skip-if-already-downloaded check instead.
[jira] [Created] (HADOOP-10606) NodeManager cannot launch container when using RawLocalFileSystem for fs.file.impl
BoYang created HADOOP-10606:

Summary: NodeManager cannot launch container when using RawLocalFileSystem for fs.file.impl
Key: HADOOP-10606
URL: https://issues.apache.org/jira/browse/HADOOP-10606
Project: Hadoop Common
Issue Type: Bug
Components: fs, io, util
Affects Versions: 2.4.0
Environment: The environment does not matter for this issue, but I use Windows 8 64-bit, OpenJDK 7.0.
Reporter: BoYang
Priority: Critical
Fix For: 2.4.0

The NodeManager failed to launch a container when I set fs.file.impl to org.apache.hadoop.fs.RawLocalFileSystem in core-site.xml. The log is:

WARN ContainersLauncher #11 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch - Failed to launch container.
java.lang.ClassCastException: org.apache.hadoop.fs.RawLocalFileSystem cannot be cast to org.apache.hadoop.fs.LocalFileSystem
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:339)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:270)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLogPathForWrite(LocalDirsHandlerService.java:307)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:185)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

The issue is in hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\fs\LocalDirAllocator.java. It invokes FileSystem.getLocal(), which tries to cast the FileSystem to LocalFileSystem and will fail when the FileSystem object is a RawLocalFileSystem (RawLocalFileSystem is not a subclass of LocalFileSystem).

public static LocalFileSystem getLocal(Configuration conf) throws IOException {
  return (LocalFileSystem)get(LocalFileSystem.NAME, conf);
}

The fix for LocalDirAllocator.java seems to be invoking FileSystem.get() instead of FileSystem.getLocal()?
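The cast failure described above can be reproduced in miniature with stand-in classes. The class names below deliberately mirror the Hadoop hierarchy, but these are illustrative local stubs, not the real org.apache.hadoop.fs sources:

```java
// Stand-in classes: in Hadoop, RawLocalFileSystem extends FileSystem directly,
// while LocalFileSystem is a separate checksummed wrapper, so the two are siblings.
class FileSystem {}
class RawLocalFileSystem extends FileSystem {}  // NOT a LocalFileSystem
class LocalFileSystem extends FileSystem {}

public class CastDemo {
    // Mimics FileSystem.getLocal(conf): a blind downcast to LocalFileSystem.
    static LocalFileSystem getLocal(FileSystem configured) {
        return (LocalFileSystem) configured; // throws if configured is a RawLocalFileSystem
    }

    public static void main(String[] args) {
        try {
            getLocal(new RawLocalFileSystem());
            System.out.println("no exception");
        } catch (ClassCastException e) {
            // Same failure mode LocalDirAllocator hits when fs.file.impl
            // is set to RawLocalFileSystem.
            System.out.println("ClassCastException");
        }
    }
}
```

The sketch makes the root cause visible: the cast can only succeed for classes in the LocalFileSystem branch, so any caller that blindly assumes getLocal() will work breaks once a sibling implementation is configured.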
[jira] [Created] (HADOOP-10607) Create an API to separate Credentials/Password Storage from Applications
Larry McCay created HADOOP-10607:

Summary: Create an API to separate Credentials/Password Storage from Applications
Key: HADOOP-10607
URL: https://issues.apache.org/jira/browse/HADOOP-10607
Project: Hadoop Common
Issue Type: Bug
Components: security
Reporter: Larry McCay
Assignee: Owen O'Malley
Fix For: 3.0.0

As with the filesystem API, we need to provide a generic mechanism to support multiple key storage mechanisms that are potentially from third parties. An additional requirement for long-term data lakes is to keep multiple versions of each key so that keys can be rolled periodically without requiring the entire data set to be re-written. Rolling keys provides containment in the event of keys being leaked.

Toward that end, I propose an API that is configured using a list of URLs of KeyProviders. The implementation will look for implementations using the ServiceLoader interface and thus support third-party libraries. Two providers will be included in this patch: one using the credentials cache in MapReduce jobs and the other using Java KeyStores from either HDFS or the local file system.
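The ServiceLoader-based discovery mentioned above can be sketched as follows. The KeyProvider interface here is a hypothetical stand-in (the real HADOOP-10607 API may differ); only the java.util.ServiceLoader mechanics are shown:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ProviderDiscovery {
    // Hypothetical provider interface, invented for this sketch.
    public interface KeyProvider {
        byte[] getKeyVersion(String name, int version);
    }

    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services/<interface-name> entries on the
        // classpath; third-party JARs register implementations by shipping such
        // a file, with no code changes needed in the framework.
        List<KeyProvider> providers = new ArrayList<>();
        for (KeyProvider p : ServiceLoader.load(KeyProvider.class)) {
            providers.add(p);
        }
        // No provider JARs are registered in this demo, so discovery yields nothing.
        System.out.println("providers found: " + providers.size());
    }
}
```

In the proposed design, the configured list of provider URLs would then be matched against the discovered implementations to pick which store serves each key.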
[jira] [Created] (HADOOP-10593) Concurrency Improvements
Benoy Antony created HADOOP-10593:

Summary: Concurrency Improvements
Key: HADOOP-10593
URL: https://issues.apache.org/jira/browse/HADOOP-10593
Project: Hadoop Common
Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony

This is an umbrella JIRA to improve the concurrency of a few classes by making use of safe publication idioms. Most of the improvements are based on the following:

{panel}
To publish an object safely, both the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by:
* Initializing an object reference from a static initializer;
* Storing a reference to it into a volatile field or AtomicReference;
* Storing a reference to it into a final field of a properly constructed object; or
* Storing a reference to it into a field that is properly guarded by a lock.
{panel}
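The first three idioms in the quoted list can be sketched in one small class (the class and field names are illustrative, not from the Hadoop patches):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SafePublication {
    static final class Config {
        final int port; // final field: visible to any thread that sees the Config reference
        Config(int port) { this.port = port; }
    }

    // Idiom 1: reference initialized from a static initializer.
    static final Config DEFAULT = new Config(8020);

    // Idiom 2: reference stored in a volatile field.
    private volatile Config current = DEFAULT;

    // Idiom 3: reference stored in an AtomicReference.
    private final AtomicReference<Config> ref = new AtomicReference<>(DEFAULT);

    void update(int port) {
        Config next = new Config(port);
        current = next;  // the volatile write publishes next safely
        ref.set(next);   // so does the AtomicReference store
    }

    public static void main(String[] args) {
        SafePublication s = new SafePublication();
        s.update(9000);
        System.out.println(s.current.port + " " + s.ref.get().port);
    }
}
```

Without the volatile/final/AtomicReference guarantees, a reader thread could observe the new reference while the object's fields still hold their default values; these idioms rule that out under the Java memory model.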
[jira] [Created] (HADOOP-10585) Retry policies ignore interrupted exceptions
Daryn Sharp created HADOOP-10585:

Summary: Retry policies ignore interrupted exceptions
Key: HADOOP-10585
URL: https://issues.apache.org/jira/browse/HADOOP-10585
Project: Hadoop Common
Issue Type: Bug
Components: ipc
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

Retry policies should not use {{ThreadUtil.sleepAtLeastIgnoreInterrupts}}. This prevents {{FsShell}} commands from being aborted during retries. It also causes orphaned webhdfs DN DFSClients to keep running after the webhdfs client closes the connection. Jetty goes into a loop constantly sending interrupts to the handler thread. Webhdfs retries cause multiple nodes to have these orphaned clients. The DN cannot shut down until the orphaned clients complete.
[jira] [Created] (HADOOP-10587) Use a thread-local cache in TokenIdentifier#getBytes to avoid creating many DataOutputBuffer objects
Colin Patrick McCabe created HADOOP-10587:

Summary: Use a thread-local cache in TokenIdentifier#getBytes to avoid creating many DataOutputBuffer objects
Key: HADOOP-10587
URL: https://issues.apache.org/jira/browse/HADOOP-10587
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
Attachments: HADOOP-10587.001.patch

We can use a thread-local cache in TokenIdentifier#getBytes to avoid creating many DataOutputBuffer objects. This will reduce our memory usage (for example, when loading edit logs) and help prevent OOMs.
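The thread-local buffer pattern proposed above looks roughly like this. ByteArrayOutputStream stands in for Hadoop's DataOutputBuffer (which offers a similar reset-and-reuse API), and the method name mirrors TokenIdentifier#getBytes without being the actual patch:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ThreadLocalBufferDemo {
    // One reusable buffer per thread: repeated serializations on the same thread
    // reuse the same backing array instead of allocating a fresh buffer each call.
    private static final ThreadLocal<ByteArrayOutputStream> BUFFER =
        ThreadLocal.withInitial(ByteArrayOutputStream::new);

    static byte[] getBytes(String identifier) throws IOException {
        ByteArrayOutputStream buf = BUFFER.get();
        buf.reset(); // clear contents but keep the allocated capacity
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(identifier);
        out.flush();
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] first = getBytes("token-1");
        byte[] second = getBytes("token-2");
        // Both calls serialized through the same per-thread buffer.
        System.out.println(first.length + " " + second.length);
    }
}
```

The trade-off of a thread-local cache is that each live thread pins one buffer for its lifetime, which is a win only when the same threads serialize many identifiers, as during edit-log loading.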