[jira] [Updated] (HADOOP-7298) Add test utility for writing multi-threaded tests
[ https://issues.apache.org/jira/browse/HADOOP-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HADOOP-7298:
Attachment: hadoop-7298.txt

Cleaned up the HBase utility a bit and added test cases for the test utility. I think this will make a nice building block for some stress/fuzz tests of the namenode for HDFS-988. But there's no need to commit it until such a test is ready.

> Add test utility for writing multi-threaded tests
> Key: HADOOP-7298
> URL: https://issues.apache.org/jira/browse/HADOOP-7298
> Project: Hadoop Common
> Issue Type: Test
> Components: test
> Affects Versions: 0.22.0
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Fix For: 0.22.0
> Attachments: hadoop-7298.txt
>
> A lot of our tests spawn multiple threads in order to check various synchronization issues, and these tests are often tedious to write because you have to manually propagate exceptions back to the main thread. In HBase we have developed a testing utility which makes writing these kinds of tests much easier. I'd like to copy that utility into Hadoop so we can use it here as well.

-- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-7298) Add test utility for writing multi-threaded tests
[ https://issues.apache.org/jira/browse/HADOOP-7298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035202#comment-13035202 ]

Todd Lipcon commented on HADOOP-7298:
The HBase class lives here: http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/MultithreadedTestUtil.java
For an example usage, see testLABThreading in the following test: http://svn.apache.org/repos/asf/hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java
[jira] [Created] (HADOOP-7298) Add test utility for writing multi-threaded tests
Add test utility for writing multi-threaded tests

Key: HADOOP-7298
URL: https://issues.apache.org/jira/browse/HADOOP-7298
Project: Hadoop Common
Issue Type: Test
Components: test
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Fix For: 0.22.0

A lot of our tests spawn multiple threads in order to check various synchronization issues, and these tests are often tedious to write because you have to manually propagate exceptions back to the main thread. In HBase we have developed a testing utility which makes writing these kinds of tests much easier. I'd like to copy that utility into Hadoop so we can use it here as well.
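The core pattern such a utility captures, collecting failures from worker threads so the main thread can rethrow them, can be sketched roughly as follows. This is a hypothetical simplification for illustration, not the actual MultithreadedTestUtil API; all class and method names here are made up.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TestContextSketch {
    // Failures recorded by worker threads, read later by the main thread.
    private final List<Throwable> errors =
        Collections.synchronizedList(new ArrayList<Throwable>());
    private final List<Thread> threads = new ArrayList<Thread>();

    // Start a worker whose exception is recorded instead of being lost.
    public void addThread(final Runnable work) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    work.run();
                } catch (Throwable e) {
                    errors.add(e);
                }
            }
        });
        threads.add(t);
        t.start();
    }

    // Join all workers, then rethrow the first failure on the caller,
    // so a failing worker fails the test on the main thread.
    public void waitAndCheck() throws InterruptedException {
        for (Thread t : threads) {
            t.join();
        }
        if (!errors.isEmpty()) {
            throw new AssertionError("worker thread failed: " + errors.get(0));
        }
    }
}
```

With a helper like this, a test spawns its workers via addThread() and ends with a single waitAndCheck() call instead of hand-rolled exception plumbing.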
[jira] [Commented] (HADOOP-7296) The FsPermission(FsPermission) constructor does not use the sticky bit
[ https://issues.apache.org/jira/browse/HADOOP-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035081#comment-13035081 ]

Hudson commented on HADOOP-7296:
Integrated in Hadoop-Common-22-branch #50 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/50/])
Merge -r 1104373:1104374 from trunk to branch-0.22. Fixes: HADOOP-7296
tomwhite : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1104377
Files :
* /hadoop/common/branches/branch-0.22/src/test/core/org/apache/hadoop/fs/permission/TestFsPermission.java
* /hadoop/common/branches/branch-0.22/CHANGES.txt
* /hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/fs/permission/FsPermission.java

> The FsPermission(FsPermission) constructor does not use the sticky bit
> Key: HADOOP-7296
> URL: https://issues.apache.org/jira/browse/HADOOP-7296
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.21.0, 0.22.0, 0.23.0
> Reporter: Siddharth Seth
> Assignee: Siddharth Seth
> Priority: Minor
> Fix For: 0.22.0
> Attachments: HADOOP7296.patch, HADOOP7296_2.patch
>
> The FsPermission(FsPermission) constructor copies u, g, o from the supplied FsPermission object but ignores the sticky bit.
[jira] [Commented] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035080#comment-13035080 ]

Sanjay Radia commented on HADOOP-7284:
Good question. Yes and no. If you recall, when I did FileContext and AbstractFileSystem, the home directory came from the server side. Our current default of /user/ is used for FileSystem since it has no notion of server defaults. Enter viewfs: there is no notion of a server side. Also, a viewfs could have mount points to different file systems, so I can't pick the server-side default home directory from any one of them. This hit home when I tried to run the tests, because Mac, Linux, and HDFS have different home directory pathnames. So the tests exposed a real problem in the system design, by virtue of running on different platforms and hence on different file systems. I haven't figured out the cleanest way to fix this (but I give a suggestion below that I will file as a JIRA). The intermediate solution is that the person configuring the view file system will create a mount point for /user or /Users etc. depending on what he wants as the default home dir. So for now I simply coded a rule in the ViewFileSystem impl to check whether it has a mount point of /user or /Users etc., until we figure out a better solution. Here is the proposal, on which I will shortly file a JIRA: let the viewfs config indicate the home directory. In a sense it is like a server-side default. Should this simply be a key in the config, or something else (like picking it up from an HDFS)? I could have made the tests work by simply not using a mount point and having the test paths start with "/user" or "/Users", but then it would not test our most common use case. BTW, I plan to add a test to HDFS also.

> Trash and shell's rm does not work for viewfs
> Key: HADOOP-7284
> URL: https://issues.apache.org/jira/browse/HADOOP-7284
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Sanjay Radia
> Assignee: Sanjay Radia
> Fix For: 0.23.0
> Attachments: trash1.patch, trash2.patch
[jira] [Commented] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035009#comment-13035009 ]

Todd Lipcon commented on HADOOP-7284:
Hey Sanjay. I'm trying to follow this patch. Is the change to getHomeDirectory in ViewFileSystem just for the sake of the tests?
[jira] [Commented] (HADOOP-7282) getRemoteIp could return null in cases where the call is ongoing but the ip went away.
[ https://issues.apache.org/jira/browse/HADOOP-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034923#comment-13034923 ]

Hudson commented on HADOOP-7282:
Integrated in Hadoop-Common-trunk-Commit #606 (See [https://builds.apache.org/hudson/job/Hadoop-Common-trunk-Commit/606/])
HADOOP-7282. ipc.Server.getRemoteIp() may return null. Contributed by John George

> getRemoteIp could return null in cases where the call is ongoing but the ip went away.
> Key: HADOOP-7282
> URL: https://issues.apache.org/jira/browse/HADOOP-7282
> Project: Hadoop Common
> Issue Type: Bug
> Components: ipc
> Reporter: John George
> Assignee: John George
> Fix For: 0.23.0
> Attachments: HADOOP-7282-1.patch, HADOOP-7282.patch, diffs.txt
>
> getRemoteIp gets the IP from the socket instead of the stored IP in the Connection object. Thus calls to this function could return null when a client has disconnected but the RPC call is still ongoing.
[jira] [Updated] (HADOOP-7282) getRemoteIp could return null in cases where the call is ongoing but the ip went away.
[ https://issues.apache.org/jira/browse/HADOOP-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HADOOP-7282:
Resolution: Fixed
Status: Resolved (was: Patch Available)

I have committed this. Thanks, John!
[jira] [Updated] (HADOOP-7282) getRemoteIp could return null in cases where the call is ongoing but the ip went away.
[ https://issues.apache.org/jira/browse/HADOOP-7282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HADOOP-7282:
Component/s: ipc
Hadoop Flags: [Reviewed]

+1 patch looks good.
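The idea behind the fix, capturing the client address once at connection setup instead of re-reading it from the live socket, can be sketched as below. The names are illustrative stand-ins, not the actual ipc.Server code; the "live socket" is simulated with a nullable field.

```java
import java.net.InetAddress;

public class ConnectionSketch {
    private final InetAddress cachedAddr; // captured once at connection setup
    private InetAddress liveAddr;         // stands in for the socket's current view

    public ConnectionSketch(InetAddress peer) {
        this.cachedAddr = peer;
        this.liveAddr = peer;
    }

    // Simulate the client going away: the socket no longer reports a peer.
    public void disconnect() {
        liveAddr = null;
    }

    // Buggy behavior: reading from the socket may return null mid-call.
    public InetAddress getRemoteIpBuggy() {
        return liveAddr;
    }

    // Fixed behavior: return the address stored when the connection arrived,
    // which stays valid for the lifetime of the RPC call.
    public InetAddress getRemoteIpFixed() {
        return cachedAddr;
    }
}
```

The design point is that the address is an attribute of the call's originating connection, so it belongs in the Connection object's state, not in a lookup against a socket that may already be gone.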
[jira] [Commented] (HADOOP-7296) The FsPermission(FsPermission) constructor does not use the sticky bit
[ https://issues.apache.org/jira/browse/HADOOP-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034862#comment-13034862 ]

Hudson commented on HADOOP-7296:
Integrated in Hadoop-Common-trunk-Commit #605 (See [https://builds.apache.org/hudson/job/Hadoop-Common-trunk-Commit/605/])
HADOOP-7296. The FsPermission(FsPermission) constructor does not use the sticky bit. Contributed by Siddharth Seth
[jira] [Updated] (HADOOP-7296) The FsPermission(FsPermission) constructor does not use the sticky bit
[ https://issues.apache.org/jira/browse/HADOOP-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-7296:
Resolution: Fixed
Assignee: Siddharth Seth
Hadoop Flags: [Reviewed]
Status: Resolved (was: Patch Available)

I've just committed this. Thanks, Siddharth!
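The bug pattern the issue describes, a copy constructor that copies the user/group/other actions but drops the sticky bit, can be illustrated with a simplified sketch (not the real FsPermission source; field and method names are made up):

```java
public class PermSketch {
    public final int u, g, o;        // user/group/other action bits
    public final boolean stickyBit;

    public PermSketch(int u, int g, int o, boolean sticky) {
        this.u = u;
        this.g = g;
        this.o = o;
        this.stickyBit = sticky;
    }

    // Buggy copy: u, g, o are carried over but the sticky bit silently
    // defaults to false, so copying a permission loses that bit.
    public static PermSketch copyBuggy(PermSketch other) {
        return new PermSketch(other.u, other.g, other.o, false);
    }

    // Fixed copy: all four fields of the source permission are preserved.
    public static PermSketch copyFixed(PermSketch other) {
        return new PermSketch(other.u, other.g, other.o, other.stickyBit);
    }
}
```

This class of bug is easy to miss because the copied object is correct for the common case; only permissions that actually set the sticky bit round-trip incorrectly, which is exactly what a targeted unit test should cover.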
[jira] [Commented] (HADOOP-7206) Integrate Snappy compression
[ https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034829#comment-13034829 ]

Tom White commented on HADOOP-7206:
I actually think that, long-term, Snappy compression belongs in Hadoop, along with the other compression codecs. The hadoop-snappy project is a useful stopgap until we get regular releases going again, and it allows other projects like HBase to use Snappy in the meantime.

> Integrate Snappy compression
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
> Issue Type: New Feature
> Affects Versions: 0.21.0
> Reporter: Eli Collins
> Attachments: HADOOP-7206.patch
>
> Google released Zippy as an open source (APLv2) project called Snappy (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.
> {quote}
[jira] [Commented] (HADOOP-7291) Update Hudson job not to run test-contrib
[ https://issues.apache.org/jira/browse/HADOOP-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034822#comment-13034822 ]

Hudson commented on HADOOP-7291:
Integrated in Hadoop-Mapreduce-trunk #682 (See [https://builds.apache.org/hudson/job/Hadoop-Mapreduce-trunk/682/])
HADOOP-7291. Remove spurious call to runTestContrib. Contributed by Eli Collins
MAPREDUCE-2499. MR part of HADOOP-7291. Contributed by Eli Collins
HADOOP-7291. Update Hudson job not to run test-contrib. Contributed by Nigel Daley
Reverting the change r1102914 for HADOOP-7291 to fix build issues.

> Update Hudson job not to run test-contrib
> Key: HADOOP-7291
> URL: https://issues.apache.org/jira/browse/HADOOP-7291
> Project: Hadoop Common
> Issue Type: Task
> Reporter: Eli Collins
> Assignee: Eli Collins
> Fix For: 0.22.0
> Attachments: HADOOP-7291-2.patch, HADOOP-7291-3.patch, HADOOP-7291.patch
>
> The test-contrib target was removed in HADOOP-7137, which causes the Hudson job to fail. The build file doesn't execute test-contrib, so I suspect the Hudson job needs to be updated to not call ant with the test-contrib target.
[jira] [Created] (HADOOP-7297) Error in the documentation regarding Checkpoint/Backup Node
Error in the documentation regarding Checkpoint/Backup Node

Key: HADOOP-7297
URL: https://issues.apache.org/jira/browse/HADOOP-7297
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Affects Versions: 0.20.203.0
Reporter: arnaud p
Priority: Trivial

On http://hadoop.apache.org/common/docs/r0.20.203.0/hdfs_user_guide.html#Checkpoint+Node: the command bin/hdfs namenode -checkpoint, required to launch the backup/checkpoint node, does not exist.
[jira] [Commented] (HADOOP-7291) Update Hudson job not to run test-contrib
[ https://issues.apache.org/jira/browse/HADOOP-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034747#comment-13034747 ]

Hudson commented on HADOOP-7291:
Integrated in Hadoop-Mapreduce-22-branch #52 (See [https://builds.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/52/])
HADOOP-7291. svn merge -c 1103931 from trunk
[jira] [Updated] (HADOOP-7256) Resource leak during failure scenario of closing of resources.
[ https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HADOOP-7256:
Description:

Problem Statement:
There are chances of a resource leak when a stream does not get closed. Take the case where, after copying data, we try to close the input and output streams followed by closing the socket. Suppose an exception occurs while closing the input stream (due to a runtime exception): the subsequent operations of closing the output stream and the socket may not happen, and there is a chance of a resource leak.

Scenario:
During a long run of map reduce jobs, the copyFromLocalFile() API is getting called. Here we found some exceptions happening. As a result we found the lsof value rising, leading to a resource leak.

Solution:
While doing a close operation on any resource, catch RuntimeException as well, rather than catching IOException alone. Additionally, there are places where we try to close a resource in the catch block. If this close fails, we just throw and come out of the current flow. To avoid this, we can carry out the close operation in the finally block.

Probable reasons for getting RuntimeExceptions:
We may get a runtime exception from customised Hadoop streams like FSDataOutputStream.close(), so it is better to handle RuntimeExceptions also.

was: (the same description, except the "Probable reasons" section previously read: We have many wrapped streams for writing and reading. These wrappers are prone to errors.)

> Resource leak during failure scenario of closing of resources.
> Key: HADOOP-7256
> URL: https://issues.apache.org/jira/browse/HADOOP-7256
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.2, 0.21.0
> Reporter: ramkrishna.s.vasudevan
> Priority: Minor
> Original Estimate: 8h
> Remaining Estimate: 8h
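The close pattern the report proposes can be sketched as follows. This is a minimal illustration of the idea, assuming a hypothetical closeQuietly helper; it is not code from the patch.

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseSketch {
    // Close quietly: swallow both IOException and RuntimeException so a
    // failure closing one resource cannot prevent closing the others.
    // Returns whether the close succeeded, so callers can log if needed.
    public static boolean closeQuietly(Closeable c) {
        if (c == null) {
            return false;
        }
        try {
            c.close();
            return true;
        } catch (IOException | RuntimeException e) {
            return false;
        }
    }

    // Usage pattern: all three closes run in the finally block, each one
    // guarded, so every resource is released even if an earlier close throws.
    public static void copyAndClose(Closeable in, Closeable out, Closeable socket) {
        try {
            // ... copy data from in to out ...
        } finally {
            closeQuietly(in);
            closeQuietly(out);
            closeQuietly(socket);
        }
    }
}
```

The key point is that each close() call is independently guarded; a plain sequence of close() calls in a finally block would still stop at the first one that throws.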
[jira] [Commented] (HADOOP-7269) S3 Native should allow customizable file meta-data (headers)
[ https://issues.apache.org/jira/browse/HADOOP-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034725#comment-13034725 ]

Hadoop QA commented on HADOOP-7269:
-1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12479451/HADOOP-7269-S3-metadata-003.diff against trunk revision 1103971.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 9 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
-1 javac. The applied patch generated 1070 javac compiler warnings (more than the trunk's current 1068 warnings).
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
+1 core tests. The patch passed core unit tests.
+1 system test framework. The patch passed system test framework compile.
Test results: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/465//testReport/
Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/465//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/465//console
This message is automatically generated.

> S3 Native should allow customizable file meta-data (headers)
> Key: HADOOP-7269
> URL: https://issues.apache.org/jira/browse/HADOOP-7269
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Nicholas Telford
> Assignee: Nicholas Telford
> Priority: Minor
> Attachments: HADOOP-7269-S3-metadata-001.diff, HADOOP-7269-S3-metadata-002.diff, HADOOP-7269-S3-metadata-003.diff
>
> The S3 Native FileSystem currently writes all files with a set of default headers:
> * Content-Type: binary/octet-stream
> * Content-Length:
> * Content-MD5:
> This is a good start; however, many applications would benefit from the ability to customize (for example) the Content-Type and Expires headers for the file. Ideally the implementation should be abstract enough to customize all of the available S3 headers and provide a facility for other FileSystems to specify optional file metadata.
[jira] [Updated] (HADOOP-7269) S3 Native should allow customizable file meta-data (headers)
[ https://issues.apache.org/jira/browse/HADOOP-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nicholas Telford updated HADOOP-7269:
Attachment: HADOOP-7269-S3-metadata-003.diff

Removed abstraction of metadata at the top-level FileSystem. Metadata is now only supported by the NativeS3FileSystem, and you'll need to cast explicitly in order to use it.
[jira] [Updated] (HADOOP-7269) S3 Native should allow customizable file meta-data (headers)
[ https://issues.apache.org/jira/browse/HADOOP-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nicholas Telford updated HADOOP-7269:
Release Note: Added support for arbitrary file metadata when creating a new file on the NativeS3FileSystem. (was: Added support for arbitrary file metadata when creating a new file on a FileSystem that supports metadata.)
Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-7292) Metrics 2 TestSinkQueue is racy
[ https://issues.apache.org/jira/browse/HADOOP-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034716#comment-13034716 ]

Hudson commented on HADOOP-7292:
Integrated in Hadoop-Common-trunk #691 (See [https://builds.apache.org/hudson/job/Hadoop-Common-trunk/691/])
HADOOP-7292. Fix racy test case TestSinkQueue. Contributed by Luke Lu.

> Metrics 2 TestSinkQueue is racy
> Key: HADOOP-7292
> URL: https://issues.apache.org/jira/browse/HADOOP-7292
> Project: Hadoop Common
> Issue Type: Bug
> Components: metrics
> Affects Versions: 0.23.0
> Reporter: Luke Lu
> Assignee: Luke Lu
> Priority: Minor
> Fix For: 0.23.0
> Attachments: hadoop-7292-test-race-v1.patch, hadoop-7292-test-race-v2.patch
>
> The TestSinkQueue is racy (Thread.yield is not enough to guarantee that the other intended thread gets run), though it's the first time (from HADOOP-7289) I saw it manifest here.
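The general failure mode behind this kind of race, and a standard way to avoid it, can be sketched as below. This is an illustrative example, not the actual TestSinkQueue code: Thread.yield() is only a scheduling hint, so a test that yields and then checks a worker's result may run the check before the worker has executed; waiting on an explicit signal such as a CountDownLatch makes the hand-off deterministic.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchSketch {
    static volatile boolean workerRan = false;

    public static boolean runWithLatch() throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        Thread worker = new Thread(new Runnable() {
            public void run() {
                workerRan = true;
                done.countDown();   // signal completion explicitly
            }
        });
        worker.start();
        // Wait for the signal (with a timeout so a broken test still fails
        // promptly) instead of hoping a yield() lets the worker run first.
        boolean finished = done.await(5, TimeUnit.SECONDS);
        worker.join();
        return finished && workerRan;
    }
}
```

The same reasoning applies to any test synchronization built on sleep() or yield(): both depend on scheduler timing, which is exactly what an intermittently failing test like this one exposes.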
[jira] [Commented] (HADOOP-7291) Update Hudson job not to run test-contrib
[ https://issues.apache.org/jira/browse/HADOOP-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034715#comment-13034715 ]

Hudson commented on HADOOP-7291:
Integrated in Hadoop-Common-trunk #691 (See [https://builds.apache.org/hudson/job/Hadoop-Common-trunk/691/])
HADOOP-7291. Remove spurious call to runTestContrib. Contributed by Eli Collins
HADOOP-7291. Update Hudson job not to run test-contrib. Contributed by Nigel Daley
Reverting the change r1102914 for HADOOP-7291 to fix build issues.
[jira] [Commented] (HADOOP-7286) Refactor FsShell's du/dus/df
[ https://issues.apache.org/jira/browse/HADOOP-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034717#comment-13034717 ]

Hudson commented on HADOOP-7286:
Integrated in Hadoop-Common-trunk #691 (See [https://builds.apache.org/hudson/job/Hadoop-Common-trunk/691/])
HADOOP-7286. Refactor the du/dus/df commands to conform to new FsCommand class. Contributed by Daryn Sharp.

> Refactor FsShell's du/dus/df
> Key: HADOOP-7286
> URL: https://issues.apache.org/jira/browse/HADOOP-7286
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 0.23.0
> Reporter: Daryn Sharp
> Assignee: Daryn Sharp
> Fix For: 0.23.0
> Attachments: HADOOP-7286-2.patch, HADOOP-7286-3.patch, HADOOP-7286-4.patch, HADOOP-7286.patch
>
> Need to refactor to conform to the FsCommand subclass.
[jira] [Commented] (HADOOP-7291) Update Hudson job not to run test-contrib
[ https://issues.apache.org/jira/browse/HADOOP-7291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034649#comment-13034649 ] Hudson commented on HADOOP-7291: Integrated in Hadoop-Common-22-branch #49 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/49/]) HADOOP-7291. svn merge -c 1103931 from trunk
[jira] [Commented] (HADOOP-7137) Remove hod contrib
[ https://issues.apache.org/jira/browse/HADOOP-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034651#comment-13034651 ] Hudson commented on HADOOP-7137: Integrated in Hadoop-Common-22-branch #49 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/49/]) > Remove hod contrib > -- > > Key: HADOOP-7137 > URL: https://issues.apache.org/jira/browse/HADOOP-7137 > Project: Hadoop Common > Issue Type: Task >Reporter: Nigel Daley >Assignee: Nigel Daley > Fix For: 0.22.0 > > Attachments: HADOOP-7137.patch, HADOOP-7137.patch > > > As per vote on general@ > (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cac35a7ef-1d68-4055-8d47-eda2fcf8c...@mac.com%3E) > I will > svn remove common/trunk/src/contrib/hod > using this Jira.
[jira] [Commented] (HADOOP-6846) Scripts for building Hadoop 0.22.0 release
[ https://issues.apache.org/jira/browse/HADOOP-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034652#comment-13034652 ] Hudson commented on HADOOP-6846: Integrated in Hadoop-Common-22-branch #49 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/49/]) > Scripts for building Hadoop 0.22.0 release > -- > > Key: HADOOP-6846 > URL: https://issues.apache.org/jira/browse/HADOOP-6846 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 0.22.0 >Reporter: Tom White >Assignee: Tom White > Fix For: 0.22.0 > > Attachments: HADOOP-6846.patch, release-scripts.tar.gz > >
[jira] [Commented] (HADOOP-7192) fs -stat docs aren't updated to reflect the format features
[ https://issues.apache.org/jira/browse/HADOOP-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034650#comment-13034650 ] Hudson commented on HADOOP-7192: Integrated in Hadoop-Common-22-branch #49 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/49/]) > fs -stat docs aren't updated to reflect the format features > --- > > Key: HADOOP-7192 > URL: https://issues.apache.org/jira/browse/HADOOP-7192 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 0.21.0 > Environment: Linux / 0.20 >Reporter: Harsh J Chouraria >Assignee: Harsh J Chouraria >Priority: Trivial > Labels: documentation > Fix For: 0.22.0 > > Attachments: hadoop.common.fsstatdoc.r1.diff > > Original Estimate: 1m > Remaining Estimate: 1m > > The html docs of the 'fs -stat' command (listed in the File System Shell Guide) do not seem to have the formatting abilities of -stat explained (along with the options). > Like 'fs -help', the docs must also reflect the latest available features. > I shall attach a doc-fix patch shortly. > If anyone has other discrepancies to point out in the web version of the guide, please do so :)
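The format feature the doc patch covers can be sketched with a hedged example (the path is a placeholder; the specifier meanings here follow the 'fs -help stat' output of that era: %n file name, %o block size, %r replication factor, %y modification time):

```
hadoop fs -stat "%n %o %r %y" /user/alice/data.txt
```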
[jira] [Commented] (HADOOP-7189) Add ability to enable 'debug' property in JAAS configuration
[ https://issues.apache.org/jira/browse/HADOOP-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034648#comment-13034648 ] Hudson commented on HADOOP-7189: Integrated in Hadoop-Common-22-branch #49 (See [https://builds.apache.org/hudson/job/Hadoop-Common-22-branch/49/]) > Add ability to enable 'debug' property in JAAS configuration > > > Key: HADOOP-7189 > URL: https://issues.apache.org/jira/browse/HADOOP-7189 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 0.22.0 >Reporter: Todd Lipcon >Assignee: Ted Yu >Priority: Minor > Labels: newbie > Fix For: 0.22.0 > > Attachments: HADOOP-7189.patch, HADOOP-7189.txt, > enable-UGI-debug-example.txt > > > Occasionally users have run into weird "Unable to login" messages. > Unfortunately, JAAS obscures the underlying exception message in many cases > because it thinks leaking the exception might be insecure in itself. Enabling > the "debug" option in the JAAS configuration gets it to dump the underlying > issue and makes troubleshooting this kind of issue easier.
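The 'debug' option the issue describes is set per login-module entry in a JAAS configuration file. A minimal hedged sketch (the entry name, keytab path, principal, and realm are placeholders; com.sun.security.auth.module.Krb5LoginModule is the JDK's standard Kerberos login module):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/hadoop/conf/nn.keytab"
  principal="nn/namenode.example.com@EXAMPLE.COM"
  debug=true;
};
```

With debug=true the login module prints the underlying Kerberos failure to stdout, rather than leaving only the generic "Unable to login" message.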
[jira] [Commented] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034636#comment-13034636 ] Hadoop QA commented on HADOOP-7284: --- +1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12479439/trash2.patch against trunk revision 1103971. +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 15 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed core unit tests. +1 system test framework. The patch passed system test framework compile. Test results: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/464//testReport/ Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/464//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/464//console This message is automatically generated. > Trash and shell's rm does not work for viewfs > - > > Key: HADOOP-7284 > URL: https://issues.apache.org/jira/browse/HADOOP-7284 > Project: Hadoop Common > Issue Type: Bug >Reporter: Sanjay Radia >Assignee: Sanjay Radia > Fix For: 0.23.0 > > Attachments: trash1.patch, trash2.patch > >
[jira] [Updated] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sanjay Radia updated HADOOP-7284: - Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sanjay Radia updated HADOOP-7284: - Attachment: trash2.patch
[jira] [Commented] (HADOOP-7284) Trash and shell's rm does not work for viewfs
[ https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034632#comment-13034632 ] Sanjay Radia commented on HADOOP-7284: -- trash.isEnabled() was redundant: #moveToTrash() checks that first, and #moveToAppropriateTrash() calls #moveToTrash(). The unit test ended up being a small amount of code but was harder than it looked (partly because the home directory is /Users on some systems and /user on others, and we want the test to work on both Linux and Mac). Did some further cleanup of the tests and fixed a bug in ChRootedFs(). The FileSystem cache causes some odd problems: one appears to get independent FileSystem instances, but that is not true because they share a working directory.
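The redundancy Sanjay describes can be illustrated with a small self-contained sketch (these are not the real Hadoop classes; the names merely mirror the Trash API to show why a caller-side isEnabled() guard is unnecessary when moveToTrash() performs the check itself):

```java
// Illustrative only: a minimal stand-in for the Hadoop Trash API showing
// that moveToTrash() performs the isEnabled() check internally, so callers
// do not need their own guard.
public class TrashSketch {

    static final class Trash {
        private final boolean enabled;

        Trash(boolean enabled) {
            this.enabled = enabled;
        }

        boolean isEnabled() {
            return enabled;
        }

        /** Returns false when trash is disabled; callers need no separate check. */
        boolean moveToTrash(String path) {
            if (!isEnabled()) {
                return false; // the internal check that makes a caller-side guard redundant
            }
            System.out.println("moved " + path + " to trash");
            return true;
        }
    }

    public static void main(String[] args) {
        Trash disabled = new Trash(false);
        Trash enabled = new Trash(true);
        // No "if (trash.isEnabled())" wrapper needed at either call site:
        System.out.println(disabled.moveToTrash("/user/alice/f1")); // prints false
        System.out.println(enabled.moveToTrash("/user/alice/f1"));  // prints the move, then true
    }
}
```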