[jira] [Created] (HADOOP-9384) Update S3 native fs implementation to use AWS SDK to support authorization through roles

2013-03-08 Thread D. Granit (JIRA)
D. Granit created HADOOP-9384:
-------------------------------------

 Summary: Update S3 native fs implementation to use AWS SDK to 
support authorization through roles
 Key: HADOOP-9384
 URL: https://issues.apache.org/jira/browse/HADOOP-9384
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
 Environment: Locally: RHEL 6, AWS S3
Remotely: AWS EC2 (RHEL 6), AWS S3
Reporter: D. Granit


Currently the S3 native implementation 
{{org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore}} requires 
credentials to be set explicitly. Amazon also allows credentials to be 
assigned to instances rather than users, via IAM roles. Such credentials are 
rotated frequently and kept in a local cache, all of which is handled by the 
AWS SDK, in this case by the {{AmazonS3Client}}. The SDK follows a specific 
order to establish whether credentials are set explicitly or via a role:
- Environment Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
- Java System Properties: aws.accessKeyId and aws.secretKey
- Instance Metadata Service, which provides the credentials associated with the 
IAM role for the EC2 instance
as seen in 
http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html
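
As a rough illustration, the same chain can be built explicitly from the 
SDK's bundled providers (a minimal sketch against the AWS SDK for Java 1.x; 
the factory class below is hypothetical, not part of any proposed patch):

{code:java}
import com.amazonaws.auth.AWSCredentialsProviderChain;
import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3ClientFactory {
  public static AmazonS3Client create() {
    // Providers are consulted in order; the first that yields credentials
    // wins: environment variables, then Java system properties, then the
    // EC2 instance metadata service (i.e. the IAM role).
    AWSCredentialsProviderChain chain = new AWSCredentialsProviderChain(
        new EnvironmentVariableCredentialsProvider(),
        new SystemPropertiesCredentialsProvider(),
        new InstanceProfileCredentialsProvider());
    return new AmazonS3Client(chain);
  }
}
{code}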

To support this feature the current {{NativeFileSystemStore}} implementation 
needs to be altered to use the AWS SDK instead of the JetS3t S3 libraries.
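
For instance, a store method backed by the SDK might look roughly like the 
sketch below (illustrative only; the class and method signatures here are 
simplified and do not match the real {{NativeFileSystemStore}} interface):

{code:java}
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

class SdkBackedStore {
  // The no-argument constructor makes the SDK resolve credentials through
  // the chain described above, instead of requiring explicit keys.
  private final AmazonS3 s3 = new AmazonS3Client();
  private final String bucket;

  SdkBackedStore(String bucket) {
    this.bucket = bucket;
  }

  void storeFile(String key, File file) {
    // Stands in for the JetS3t putObject call made today.
    s3.putObject(bucket, key, file);
  }
}
{code}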

A request for this feature has previously been raised in the Flume project 
(FLUME-1691), where the HDFS-on-S3 implementation is used as a way of 
writing logs to S3 via an HDFS Sink.




[jira] [Created] (HADOOP-9385) create hadoop-common-project/hadoop-filesystem-clients subprojects for blobstore & other clients

2013-03-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9385:
--------------------------------------

 Summary: create hadoop-common-project/hadoop-filesystem-clients 
subprojects for blobstore & other clients
 Key: HADOOP-9385
 URL: https://issues.apache.org/jira/browse/HADOOP-9385
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
Reporter: Steve Loughran


As discussed on hadoop-general, we need somewhere to host the non-HDFS 
filesystem clients. S3/S3N and FTP are all in hadoop-common, with their JAR 
dependencies there too. This doesn't scale to OpenStack or Azure, and it 
doesn't handle changes in the S3 dependencies.

With a project of {{hadoop-common/hadoop-filesystem-clients}}, we could add 
separate FS clients: {{hadoop-filesystem-client-aws}}, 
{{hadoop-filesystem-client-openstack}}, etc., each with its own tests, JARs 
and POM file dependencies. This would translate into separate Bigtop 
RPMs/JARs.
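
A possible layout (a sketch only; the exact module set and naming would be 
settled in the patch):

{code}
hadoop-common-project/
  hadoop-filesystem-clients/            <- aggregator POM
    hadoop-filesystem-client-aws/       <- S3/S3N client + AWS JAR deps
    hadoop-filesystem-client-openstack/ <- OpenStack client + its deps
{code}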



[jira] [Resolved] (HADOOP-8796) commands_manual.html link is broken

2013-03-08 Thread Suresh Srinivas (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HADOOP-8796.
-------------------------------------

Resolution: Not A Problem

Resolving the bug as Not A Problem.

 commands_manual.html link is broken
 -----------------------------------

 Key: HADOOP-8796
 URL: https://issues.apache.org/jira/browse/HADOOP-8796
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.1-alpha
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor
 Attachments: screenshot-1.jpg


 If you go to http://hadoop.apache.org/docs/r2.0.0-alpha/ and click on Hadoop 
 Commands, you get a broken link: 
 http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-project-dist/hadoop-common/commands_manual.html



[jira] [Resolved] (HADOOP-9326) maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: There are test failures.

2013-03-08 Thread Suresh Srinivas (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HADOOP-9326.
-------------------------------------

Resolution: Invalid

 maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: 
 There are test failures.
 -------------------------------------------------------------------------

 Key: HADOOP-9326
 URL: https://issues.apache.org/jira/browse/HADOOP-9326
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, test
 Environment: For information, I took Hadoop from Git and I run it on 
 Mac OS 
Reporter: JLASSI Aymen
   Original Estimate: 336h
  Remaining Estimate: 336h

 I'd like to compile Hadoop from source code. When I launch the test step I 
 get the failures described below, and when I skip the test step and go 
 straight to the package step I get the same problem, with the same 
 description of the bug:
 Results :
 Failed tests:   testFailFullyDelete(org.apache.hadoop.fs.TestFileUtil): The 
 directory xSubDir *should* not have been deleted. expected:<true> but 
 was:<false>
   testFailFullyDeleteContents(org.apache.hadoop.fs.TestFileUtil): The 
 directory xSubDir *should* not have been deleted. expected:<true> but 
 was:<false>
   
 testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem):
  Should throw IOException
   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block4197707426846287299.tmp - FAILED!
   
 testROBufferDirAndRWBufferDir[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE2 in 
 build/test/temp/RELATIVE1/block138767728739012230.tmp - FAILED!
   testRWBufferDirBecomesRO[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE3 in 
 build/test/temp/RELATIVE4/block4888615109050601773.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block4663369813226761504.tmp
  - FAILED!
   
 testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block2846944239985650460.tmp
  - FAILED!
   testRWBufferDirBecomesRO[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block4367331619344952181.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5687619346377173125.tmp
  - FAILED!
   
 testROBufferDirAndRWBufferDir[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block2235209534902942511.tmp
  - FAILED!
   testRWBufferDirBecomesRO[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6994640486900109274.tmp
  - FAILED!
   testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)
   
 testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem):
  Should throw IOException
   testCount(org.apache.hadoop.metrics2.util.TestSampleQuantiles): 
 expected:<50[.00 %ile +/- 5.00%: 1337(..)
   testCheckDir_notDir_local(org.apache.hadoop.util.TestDiskChecker): checkDir 
 success
   testCheckDir_notReadable_local(org.apache.hadoop.util.TestDiskChecker): 
 checkDir success
   testCheckDir_notWritable_local(org.apache.hadoop.util.TestDiskChecker):