[jira] Resolved: (HADOOP-5623) Streaming: process provided status messages are overwritten every 10 seconds
[ https://issues.apache.org/jira/browse/HADOOP-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-5623. --- Resolution: Fixed I've just committed this to the 0.20 branch. > Streaming: process provided status messages are overwritten every 10 seconds > > > Key: HADOOP-5623 > URL: https://issues.apache.org/jira/browse/HADOOP-5623 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 0.19.0, 0.19.1, 0.20.0 >Reporter: Rick Cox >Assignee: Rick Cox > Fix For: 0.20.2, 0.21.0 > > Attachments: HADOOP-5623-streaming-status.patch, > HADOOP-5623-streaming-status.patch, hadoop-5623-v1.patch > > > Every 10 seconds (if the streaming process is producing output key/values on > stdout), PipeMapRed sets the task's status string to "Records R/W=N/N". This > replaces any custom task status that the streaming process may have specified > using the "reporter:status:" stderr lines. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
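For context, streaming tasks set their own status by writing lines of the form "reporter:status:<message>" to stderr, which is what the bug was clobbering every 10 seconds. Below is a minimal sketch of such a mapper (a hypothetical uppercase-key example with invented names, not code from the patch):

```python
import sys

def map_record(line):
    """Toy map function (hypothetical): upper-cases the key of a
    tab-separated key/value record."""
    key, _, value = line.rstrip("\n").partition("\t")
    return "%s\t%s" % (key.upper(), value)

def run(stdin=sys.stdin, stdout=sys.stdout, stderr=sys.stderr,
        status_every=1000):
    for n, line in enumerate(stdin, 1):
        stdout.write(map_record(line) + "\n")
        if n % status_every == 0:
            # Streaming turns "reporter:status:" stderr lines into the
            # task's status string; this fix stops PipeMapRed's periodic
            # "Records R/W=N/N" update from overwriting it.
            stderr.write("reporter:status:processed %d records\n" % n)

if __name__ == "__main__":
    run()
```

The key/value output on stdout and the status report on stderr are independent channels, which is why the framework-side overwrite was surprising to streaming authors.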
[jira] Updated: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6466: -- Resolution: Fixed Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. I tested this change manually, by starting and stopping a ZooKeeper cluster. > Add a ZooKeeper service to the cloud scripts > > > Key: HADOOP-6466 > URL: https://issues.apache.org/jira/browse/HADOOP-6466 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6466.patch > > > It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-3205) Read multiple chunks directly from FSInputChecker subclass into user buffers
[ https://issues.apache.org/jira/browse/HADOOP-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-3205: -- Resolution: Fixed Fix Version/s: 0.22.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Todd! > Read multiple chunks directly from FSInputChecker subclass into user buffers > > > Key: HADOOP-3205 > URL: https://issues.apache.org/jira/browse/HADOOP-3205 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 0.22.0 >Reporter: Raghu Angadi >Assignee: Todd Lipcon > Fix For: 0.22.0 > > Attachments: hadoop-3205.txt, hadoop-3205.txt, hadoop-3205.txt, > hadoop-3205.txt, hadoop-3205.txt > > > Implementations of FSInputChecker and FSOutputSummer like DFS do not have > access to the full user buffer. At any time DFS can access only up to 512 bytes > even though the user usually reads with a much larger buffer (often controlled by > io.file.buffer.size). This requires implementations to double buffer data if > they want to read or write larger chunks of data from the > underlying storage. > We could separate changes for FSInputChecker and FSOutputSummer into two > separate jiras. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
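The double buffering described above can be sketched outside Hadoop. When the checksum layer only hands out one verified chunk at a time, filling a larger user buffer forces a copy per chunk; the following is a simplified Python illustration with hypothetical names (the real code is Java, in FSInputChecker and its DFS subclass):

```python
BYTES_PER_CHECKSUM = 512  # bytes verified per call, as in the description

def read_fully(user_buf_size, read_verified_chunk):
    """Fill a user-sized buffer from a source that can only hand out one
    verified chunk (<= BYTES_PER_CHECKSUM bytes) per call, forcing an
    extra copy per chunk -- the double buffering this issue removes."""
    buf = bytearray()
    while len(buf) < user_buf_size:
        chunk = read_verified_chunk()
        if not chunk:
            break  # end of stream
        buf.extend(chunk[:user_buf_size - len(buf)])  # per-chunk copy
    return bytes(buf)
```

With the fix, a subclass can checksum several chunks directly against the caller's buffer instead of staging each 512-byte chunk separately.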
[jira] Updated: (HADOOP-6407) Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy
[ https://issues.apache.org/jira/browse/HADOOP-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6407: -- Status: Patch Available (was: Open) > Have a way to automatically update Eclipse .classpath file when new libs are > added to the classpath through Ivy > --- > > Key: HADOOP-6407 > URL: https://issues.apache.org/jira/browse/HADOOP-6407 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 0.21.0, 0.22.0 >Reporter: Konstantin Boudnik >Assignee: Tom White >Priority: Minor > Attachments: HADOOP-6407.patch, HADOOP-6407.patch > > > Currently Eclipse configuration (namely .classpath) isn't synchronized > automatically when lib versions are changed. This causes great inconvenience > so people have to change their project settings manually, etc. > It'd be great if these configs could be updated automatically every time such > a change takes place, e.g. whenever ivy is pulling in new version of a jar. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6407) Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy
[ https://issues.apache.org/jira/browse/HADOOP-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6407: -- Status: Open (was: Patch Available) > Have a way to automatically update Eclipse .classpath file when new libs are > added to the classpath through Ivy > --- > > Key: HADOOP-6407 > URL: https://issues.apache.org/jira/browse/HADOOP-6407 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 0.21.0, 0.22.0 >Reporter: Konstantin Boudnik >Assignee: Tom White >Priority: Minor > Attachments: HADOOP-6407.patch, HADOOP-6407.patch > > > Currently Eclipse configuration (namely .classpath) isn't synchronized > automatically when lib versions are changed. This causes great inconvenience > so people have to change their project settings manually, etc. > It'd be great if these configs could be updated automatically every time such > a change takes place, e.g. whenever ivy is pulling in new version of a jar. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6407) Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy
[ https://issues.apache.org/jira/browse/HADOOP-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6407: -- Attachment: HADOOP-6407.patch Minor change to correct the deletion of the downloaded Ant-Eclipse package. Also, I tested this with Eclipse Galileo. After generating the Eclipse files with "ant eclipse" you should clean the project from within Eclipse. bq. Just a piece of suggestion - can that task be refactored appropriately into a separate file - eclipse-targets.xml with comments mentioning the variables/properties that need to be set for the same. I agree it would be good to modularize the build files between projects: this should be a separate set of JIRA issues. For the present issue, there's not a lot of scope for sharing ant code, since the "eclipse" targets, for example, will differ between projects because the source paths are different (different contrib projects). > Have a way to automatically update Eclipse .classpath file when new libs are > added to the classpath through Ivy > --- > > Key: HADOOP-6407 > URL: https://issues.apache.org/jira/browse/HADOOP-6407 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 0.21.0, 0.22.0 >Reporter: Konstantin Boudnik >Assignee: Tom White >Priority: Minor > Attachments: HADOOP-6407.patch, HADOOP-6407.patch > > > Currently Eclipse configuration (namely .classpath) isn't synchronized > automatically when lib versions are changed. This causes great inconvenience > so people have to change their project settings manually, etc. > It'd be great if these configs could be updated automatically every time such > a change takes place, e.g. whenever ivy is pulling in new version of a jar. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6466: -- Status: Patch Available (was: Open) > Add a ZooKeeper service to the cloud scripts > > > Key: HADOOP-6466 > URL: https://issues.apache.org/jira/browse/HADOOP-6466 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6466.patch > > > It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6370) Contrib project ivy dependencies are not included in binary target
[ https://issues.apache.org/jira/browse/HADOOP-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12796768#action_12796768 ] Tom White commented on HADOOP-6370: --- Not sure there is anything extra to do with the classpath, since users wishing to use a particular contrib module can either copy all the jars to the top-level lib, or set HADOOP_CLASSPATH to include the extra jars. > Contrib project ivy dependencies are not included in binary target > -- > > Key: HADOOP-6370 > URL: https://issues.apache.org/jira/browse/HADOOP-6370 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Aaron Kimball >Assignee: Aaron Kimball >Priority: Critical > Attachments: HADOOP-6370.patch > > > Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any > libraries required by contribs are not redistributed. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5973) "map.input.file" is not set
[ https://issues.apache.org/jira/browse/HADOOP-5973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12796689#action_12796689 ] Tom White commented on HADOOP-5973: --- The "map.input.file" parameter is only available in 0.20.0 when using the old (deprecated) MapReduce API. In the new API the parameter is not set: the equivalent call is {code} ((FileSplit) context.getInputSplit()).getPath() {code} This is as designed, but I think we should update the documentation to explain this. > "map.input.file" is not set > --- > > Key: HADOOP-5973 > URL: https://issues.apache.org/jira/browse/HADOOP-5973 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 0.20.0 >Reporter: Rares Vernica >Priority: Minor > > Hadoop does not set the "map.input.file" variable. I tried the following and > all I get is "null".
> public class Map extends Mapper {
>     public void map(Object key, Text value, Context context)
>             throws IOException, InterruptedException {
>         Configuration conf = context.getConfiguration();
>         System.out.println(conf.get("map.input.file"));
>     }
>
>     protected void setup(Context context) throws IOException,
>             InterruptedException {
>         Configuration conf = context.getConfiguration();
>         System.out.println(conf.get("map.input.file"));
>     }
> }
-- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6155) deprecate Record IO
[ https://issues.apache.org/jira/browse/HADOOP-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6155: -- Fix Version/s: 0.22.0 Status: Patch Available (was: Open) > deprecate Record IO > --- > > Key: HADOOP-6155 > URL: https://issues.apache.org/jira/browse/HADOOP-6155 > Project: Hadoop Common > Issue Type: Task > Components: record >Reporter: Owen O'Malley > Fix For: 0.22.0 > > Attachments: HADOOP-6155.patch > > > With the advent of Avro, I think we should deprecate Record IO. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6155) deprecate Record IO
[ https://issues.apache.org/jira/browse/HADOOP-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6155: -- Attachment: HADOOP-6155.patch Here's a patch which deprecates all the public Record IO classes. > deprecate Record IO > --- > > Key: HADOOP-6155 > URL: https://issues.apache.org/jira/browse/HADOOP-6155 > Project: Hadoop Common > Issue Type: Task > Components: record >Reporter: Owen O'Malley > Attachments: HADOOP-6155.patch > > > With the advent of Avro, I think we should deprecate Record IO. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6478) 0.21 - .eclipse-templates/.classpath out of sync with file system
[ https://issues.apache.org/jira/browse/HADOOP-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6478: -- Status: Open (was: Patch Available) Does HADOOP-6407 solve this for you? > 0.21 - .eclipse-templates/.classpath out of sync with file system > > > Key: HADOOP-6478 > URL: https://issues.apache.org/jira/browse/HADOOP-6478 > Project: Hadoop Common > Issue Type: Bug >Reporter: Kay Kay > Fix For: 0.21.0 > > Attachments: HADOOP-6478.patch > > > some of the jars in .classpath of branch-0.21 are out of sync with the file > system retrieved by ivy. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6407) Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy
[ https://issues.apache.org/jira/browse/HADOOP-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6407: -- Assignee: Tom White Status: Patch Available (was: Open) > Have a way to automatically update Eclipse .classpath file when new libs are > added to the classpath through Ivy > --- > > Key: HADOOP-6407 > URL: https://issues.apache.org/jira/browse/HADOOP-6407 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 0.22.0 >Reporter: Konstantin Boudnik >Assignee: Tom White >Priority: Minor > Attachments: HADOOP-6407.patch > > > Currently Eclipse configuration (namely .classpath) isn't synchronized > automatically when lib versions are changed. This causes great inconvenience > so people have to change their project settings manually, etc. > It'd be great if these configs could be updated automatically every time such > a change takes place, e.g. whenever ivy is pulling in new version of a jar. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5489) hadoop-env.sh still refers to java1.5
[ https://issues.apache.org/jira/browse/HADOOP-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-5489: -- Resolution: Fixed Status: Resolved (was: Patch Available) > hadoop-env.sh still refers to java1.5 > - > > Key: HADOOP-5489 > URL: https://issues.apache.org/jira/browse/HADOOP-5489 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.22.0 > > Attachments: HADOOP-5849.patch > > > The example JAVA_HOME in conf/hadoop-env.sh still points to > /usr/lib/j2sdk1.5-sun > better to have it set to point to wherever the sun java 6 RPM sticks Java -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6443) Serialization classes accept invalid metadata
[ https://issues.apache.org/jira/browse/HADOOP-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6443: -- Resolution: Fixed Fix Version/s: 0.22.0 Status: Resolved (was: Patch Available) I've just committed this. Thanks Aaron! > Serialization classes accept invalid metadata > - > > Key: HADOOP-6443 > URL: https://issues.apache.org/jira/browse/HADOOP-6443 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Fix For: 0.22.0 > > Attachments: HADOOP-6443.2.patch, HADOOP-6443.3.patch, > HADOOP-6443.patch > > > The {{SerializationBase.accept()}} methods of several serialization > implementations use incorrect metadata when determining whether they are the > correct serializer for the user's metadata. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6370) Contrib project ivy dependencies are not included in binary target
[ https://issues.apache.org/jira/browse/HADOOP-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6370: -- Status: Open (was: Patch Available) It looks like the contrib dependencies have never been distributed (e.g. index depends on Lucene, but the Lucene jar has not been bundled). I think it probably makes sense to distribute the dependencies though. To put a particular contrib module's dependencies into contrib//lib we could copy all of its dependencies there, then remove duplicates using a present selector (http://ant.apache.org/manual/CoreTypes/selectors.html#presentselect). Could this work? > Contrib project ivy dependencies are not included in binary target > -- > > Key: HADOOP-6370 > URL: https://issues.apache.org/jira/browse/HADOOP-6370 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Aaron Kimball >Assignee: Aaron Kimball >Priority: Critical > Attachments: HADOOP-6370.patch > > > Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any > libraries required by contribs are not redistributed. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6403) Deprecate EC2 bash scripts
[ https://issues.apache.org/jira/browse/HADOOP-6403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6403: -- Fix Version/s: 0.21.0 > Deprecate EC2 bash scripts > -- > > Key: HADOOP-6403 > URL: https://issues.apache.org/jira/browse/HADOOP-6403 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Fix For: 0.21.0 > > > With the addition of python-based EC2 scripts introduced in HADOOP-6108, the > bash scripts in src/contrib/ec2 should be deprecated. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6451) Contrib tests are not being run
[ https://issues.apache.org/jira/browse/HADOOP-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6451: -- Status: Patch Available (was: Open) > Contrib tests are not being run > --- > > Key: HADOOP-6451 > URL: https://issues.apache.org/jira/browse/HADOOP-6451 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Tom White >Assignee: Tom White >Priority: Blocker > Fix For: 0.21.0, 0.22.0 > > Attachments: HADOOP-6451.patch > > > The test target in src/contrib/build.xml references contrib modules that are > no longer there post project split. This was discovered in HADOOP-6426. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6451) Contrib tests are not being run
[ https://issues.apache.org/jira/browse/HADOOP-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6451: -- Attachment: HADOOP-6451.patch When calling out to contrib targets using trunk the following message appears (e.g. for "ant clean-contrib"): {code} [subant] No sub-builds to iterate on {code} This seems to have been happening since HADOOP-5107 which added {{inheritall="true"}} to the subant call. I notice that {{inheritall="true"}} was only added to common's top-level build file; HDFS and MapReduce didn't have it added. Anyone know why? Anyway, this patch removes {{inheritall="true"}}, which causes the contrib targets to work again. I've also fixed the list of contrib test modules that are included, since it mistakenly referred to HDFS and MapReduce modules (a vestige of the project split?). > Contrib tests are not being run > --- > > Key: HADOOP-6451 > URL: https://issues.apache.org/jira/browse/HADOOP-6451 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Tom White > Fix For: 0.21.0, 0.22.0 > > Attachments: HADOOP-6451.patch > > > The test target in src/contrib/build.xml references contrib modules that are > no longer there post project split. This was discovered in HADOOP-6426. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Assigned: (HADOOP-6451) Contrib tests are not being run
[ https://issues.apache.org/jira/browse/HADOOP-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White reassigned HADOOP-6451: - Assignee: Tom White > Contrib tests are not being run > --- > > Key: HADOOP-6451 > URL: https://issues.apache.org/jira/browse/HADOOP-6451 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Tom White >Assignee: Tom White > Fix For: 0.21.0, 0.22.0 > > Attachments: HADOOP-6451.patch > > > The test target in src/contrib/build.xml references contrib modules that are > no longer there post project split. This was discovered in HADOOP-6426. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6451) Contrib tests are not being run
[ https://issues.apache.org/jira/browse/HADOOP-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6451: -- Priority: Blocker (was: Major) Fix Version/s: 0.22.0 0.21.0 This should go into 0.21.0 too since HADOOP-5107 went in there, so I'm changing it to be a blocker. > Contrib tests are not being run > --- > > Key: HADOOP-6451 > URL: https://issues.apache.org/jira/browse/HADOOP-6451 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Tom White >Assignee: Tom White >Priority: Blocker > Fix For: 0.21.0, 0.22.0 > > Attachments: HADOOP-6451.patch > > > The test target in src/contrib/build.xml references contrib modules that are > no longer there post project split. This was discovered in HADOOP-6426. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-2409) Make EC2 image independent of Hadoop version
[ https://issues.apache.org/jira/browse/HADOOP-2409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-2409. --- Resolution: Won't Fix Fixed as a by-product of HADOOP-6108. > Make EC2 image independent of Hadoop version > > > Key: HADOOP-2409 > URL: https://issues.apache.org/jira/browse/HADOOP-2409 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: hadoop-2409.patch, HADOOP-2409.patch > > > Instead of building a new image for each released version of Hadoop, install > Hadoop on instance start up. Since it is a small download this would not add > significantly to startup time. Hadoop releases would be mirrored on S3 for > scalability (and to avoid bandwidth costs). The version to install would be > found from the instance metadata - this would be a download URL. > More generally, the instance could retrieve a script to run on start up from > a URL specified in the metadata. The script would install and configure > Hadoop, but it could be extended to do cluster-specific set up. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-4762) Fix Eclipse classpath following introduction of gridmix 2
[ https://issues.apache.org/jira/browse/HADOOP-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-4762. --- Resolution: Won't Fix Superseded by HADOOP-6407. > Fix Eclipse classpath following introduction of gridmix 2 > - > > Key: HADOOP-4762 > URL: https://issues.apache.org/jira/browse/HADOOP-4762 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Tom White >Assignee: Tom White > Attachments: hadoop-4762.patch > > > Need to add src/benchmarks/gridmix2/src/java to classpath. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-5813) TeraRecordWriter doesn't override parent close() method
[ https://issues.apache.org/jira/browse/HADOOP-5813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-5813. --- Resolution: Won't Fix Fixed in MAPREDUCE-639. > TeraRecordWriter doesn't override parent close() method > --- > > Key: HADOOP-5813 > URL: https://issues.apache.org/jira/browse/HADOOP-5813 > Project: Hadoop Common > Issue Type: Bug >Reporter: Tom White >Assignee: Owen O'Malley > > The signature should be > {{public void close(Reporter reporter) throws IOException}} > not > {{public void close() throws IOException}} > Using {{@Override}} would enforce this. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-4882) Fix Eclipse configuration to work with ivy
[ https://issues.apache.org/jira/browse/HADOOP-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-4882. --- Resolution: Duplicate Duplicate of HADOOP-6407. > Fix Eclipse configuration to work with ivy > -- > > Key: HADOOP-4882 > URL: https://issues.apache.org/jira/browse/HADOOP-4882 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 0.20.0 >Reporter: Tom White > > Following HADOOP-3305, not all third party jars reside in lib, instead they > are managed by ivy. The Eclipse configuration needs updating so it finds the > jars. The IvyDE Plugin may be able to do this, see > http://ant.apache.org/ivy/ivyde/index.html. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-6128) Serializer and Deserializer should extend java.io.Closeable
[ https://issues.apache.org/jira/browse/HADOOP-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-6128. --- Resolution: Duplicate Fixed in HADOOP-6165. > Serializer and Deserializer should extend java.io.Closeable > --- > > Key: HADOOP-6128 > URL: https://issues.apache.org/jira/browse/HADOOP-6128 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Reporter: Tom White > > This change wouldn't change behaviour or the API, but would make it possible > to use such utilities as IOUtils#closeStream() on Serializers and > Deserializers. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HADOOP-5959) Support availability zone in EC2 scripts
[ https://issues.apache.org/jira/browse/HADOOP-5959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-5959. --- Resolution: Duplicate Fixed as a part of HADOOP-6108. > Support availability zone in EC2 scripts > > > Key: HADOOP-5959 > URL: https://issues.apache.org/jira/browse/HADOOP-5959 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/cloud >Reporter: Tom White > > It would be convenient to be able to control which availability zone to > launch instances in. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795535#action_12795535 ] Tom White commented on HADOOP-6466: --- Thanks for the review, Henry. If we need to target Python 2.4 (for RHEL 5 compatibility, for instance) then it should be done as a part of another issue, since there are other places in the scripts where there are Python 2.5-isms. I'll commit this in the next few days, unless there are objections. > Add a ZooKeeper service to the cloud scripts > > > Key: HADOOP-6466 > URL: https://issues.apache.org/jira/browse/HADOOP-6466 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6466.patch > > > It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
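As a hypothetical illustration of the kind of "Python 2.5-ism" mentioned above (this is not code from the cloud scripts themselves): conditional expressions were added in Python 2.5 and are a SyntaxError under 2.4, so targeting 2.4 would mean rewriting them as explicit statements.

```python
def zone_or_default(zone, default="us-east-1a"):
    # Conditional expressions ("x if c else y") are new in Python 2.5;
    # under Python 2.4 this line would be a SyntaxError.
    return zone if zone else default

def zone_or_default_py24(zone, default="us-east-1a"):
    # A Python 2.4-compatible equivalent, written as explicit statements.
    if zone:
        return zone
    return default
```

The zone names and function names here are invented for illustration; the point is only that a 2.4 port touches syntax throughout the scripts, not just the ZooKeeper service.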
[jira] Resolved: (HADOOP-4745) EC2 scripts should configure Hadoop to use all available disks on large instances
[ https://issues.apache.org/jira/browse/HADOOP-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White resolved HADOOP-4745. --- Resolution: Duplicate This was fixed as a part of HADOOP-6108. > EC2 scripts should configure Hadoop to use all available disks on large > instances > - > > Key: HADOOP-4745 > URL: https://issues.apache.org/jira/browse/HADOOP-4745 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/cloud >Affects Versions: 0.19.0 >Reporter: Tom White >Assignee: Tom White > Attachments: hadoop-4745.patch > > > The Hadoop configuration on EC2 currently always uses a single disk, even > when more are available > (http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/instance-storage.html). > Performance is significantly boosted by using all the available disks, so we > should configure Hadoop to use them automatically. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6407) Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy
[ https://issues.apache.org/jira/browse/HADOOP-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6407: -- Attachment: HADOOP-6407.patch Here's a patch for review that adds an "eclipse" target (like the one in Avro and ZooKeeper) that generates the .classpath and .project files for Eclipse using Ant-Eclipse. Note that it doesn't keep the Hadoop Ant builder that ran the code generator for RecordIO and Avro records: this is easy (and rare) enough to do from the command line with "ant compile-core-test". I've tested this on Eclipse 3.4.1. > Have a way to automatically update Eclipse .classpath file when new libs are > added to the classpath through Ivy > --- > > Key: HADOOP-6407 > URL: https://issues.apache.org/jira/browse/HADOOP-6407 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 0.22.0 >Reporter: Konstantin Boudnik >Priority: Minor > Attachments: HADOOP-6407.patch > > > Currently Eclipse configuration (namely .classpath) isn't synchronized > automatically when lib versions are changed. This causes great inconvenience > so people have to change their project settings manually, etc. > It'd be great if these configs could be updated automatically every time such > a change takes place, e.g. whenever ivy is pulling in new version of a jar. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6408) Add a /conf servlet to dump running configuration
[ https://issues.apache.org/jira/browse/HADOOP-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795443#action_12795443 ] Tom White commented on HADOOP-6408: --- * The catch block in writeXml() can actually be removed without changing behaviour, since the method already declares that it throws IOException, and any RuntimeException is re-thrown. This would remove the FindBugs warning. * The javadoc on the writeXml(Writer) method is incorrect since it refers to OutputStream. > Add a /conf servlet to dump running configuration > - > > Key: HADOOP-6408 > URL: https://issues.apache.org/jira/browse/HADOOP-6408 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 0.22.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon > Fix For: 0.22.0 > > Attachments: hadoop-6408.txt, hadoop-6408.txt, hadoop-6408.txt, > hadoop-6408.txt, hadoop-6408.txt > > > HADOOP-6184 added a command line flag to dump the running configuration. It > would be great for cluster troubleshooting to provide access to this as a > servlet, preferably in both JSON and XML formats. But really, any format > would be better than nothing. This should/could go into all of the daemons. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
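The first point above can be illustrated with a minimal stand-in (this is a sketch, not the actual ConfServlet code): a method that already declares IOException needs no catch block whose only job is to re-throw, since a RuntimeException escapes on its own.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.Map;

// Hypothetical stand-in for the servlet helper under review. Because the
// method declares "throws IOException" and a RuntimeException propagates
// unchanged, no try/catch wrapper is needed -- which also removes the
// FindBugs warning about a caught-and-rethrown exception.
class ConfDumper {
    /** Writes the properties as XML to the given Writer (not an OutputStream). */
    static void writeXml(Map<String, String> props, Writer out) throws IOException {
        out.write("<configuration>");
        for (Map.Entry<String, String> e : props.entrySet()) {
            out.write("<property><name>" + e.getKey() + "</name><value>"
                + e.getValue() + "</value></property>");
        }
        out.write("</configuration>");
    }
}
```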
[jira] Updated: (HADOOP-6402) testConf.xsl is not well-formed XML
[ https://issues.apache.org/jira/browse/HADOOP-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6402: -- Resolution: Fixed Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Steve! > testConf.xsl is not well-formed XML > --- > > Key: HADOOP-6402 > URL: https://issues.apache.org/jira/browse/HADOOP-6402 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.22.0 > > Attachments: HADOOP-6402.patch > > > File {{/org/apache/hadoop/cli/testConf.xsl}} is not valid XML, as the > directive comes after the comment. XML requires this to be the first thing in > a file, so it can be used to determine the encoding. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-5489) hadoop-env.sh still refers to java1.5
[ https://issues.apache.org/jira/browse/HADOOP-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795429#action_12795429 ] Tom White commented on HADOOP-5489: --- +1 > hadoop-env.sh still refers to java1.5 > - > > Key: HADOOP-5489 > URL: https://issues.apache.org/jira/browse/HADOOP-5489 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Fix For: 0.22.0 > > Attachments: HADOOP-5849.patch > > > The example JAVA_HOME in conf/hadoop-env.sh still points to > /usr/lib/j2sdk1.5-sun > better to have it set to point to wherever the sun java 6 RPM sticks Java -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6408) Add a /conf servlet to dump running configuration
[ https://issues.apache.org/jira/browse/HADOOP-6408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794249#action_12794249 ] Tom White commented on HADOOP-6408: --- +1 Looks good to me. A couple of minor comments: * It would be nice to make the string literals in ConfServlet constants. * Make TestConfServlet check that the output is as expected, rather than just non-null. > Add a /conf servlet to dump running configuration > - > > Key: HADOOP-6408 > URL: https://issues.apache.org/jira/browse/HADOOP-6408 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 0.22.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon > Fix For: 0.22.0 > > Attachments: hadoop-6408.txt, hadoop-6408.txt, hadoop-6408.txt, > hadoop-6408.txt > > > HADOOP-6184 added a command line flag to dump the running configuration. It > would be great for cluster troubleshooting to provide access to this as a > servlet, preferably in both JSON and XML formats. But really, any format > would be better than nothing. This should/could go into all of the daemons. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-4998) Implement a native OS runtime for Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794210#action_12794210 ] Tom White commented on HADOOP-4998: --- Having getUserName() for the Java method and getUsername() for the native method is confusing, so I would introduce another class with the native methods (JniPlatformCall) and delegate to it from PlatformCall. > Implement a native OS runtime for Hadoop > > > Key: HADOOP-4998 > URL: https://issues.apache.org/jira/browse/HADOOP-4998 > Project: Hadoop Common > Issue Type: New Feature > Components: native >Reporter: Arun C Murthy >Assignee: Arun C Murthy > Fix For: 0.21.0 > > Attachments: hadoop-4998-1.patch > > > It would be useful to implement a JNI-based runtime for Hadoop to get access > to the native OS runtime. This would allow us to stop relying on exec'ing > bash to get access to information such as user-groups, process limits etc. > and for features such as chown/chgrp (org.apache.hadoop.util.Shell). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
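The suggested delegation can be sketched as follows (the class and method names come from the comment; the stub body is an assumption standing in for the real 'native' declaration):

```java
// JniPlatformCall keeps the native naming convention; PlatformCall keeps
// the Java convention and delegates, so the two near-identical spellings
// never appear in the same class.
class JniPlatformCall {
    /** In the real patch this would be declared 'native'; stubbed here. */
    String getUsername() {
        return System.getProperty("user.name");
    }
}

public class PlatformCall {
    private final JniPlatformCall jni = new JniPlatformCall();

    /** Public Java-convention API, delegating to the native-convention call. */
    public String getUserName() {
        return jni.getUsername();
    }
}
```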
[jira] Updated: (HADOOP-6056) Use java.net.preferIPv4Stack to force IPv4
[ https://issues.apache.org/jira/browse/HADOOP-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6056: -- Status: Open (was: Patch Available) Moving to open while this is verified manually. What would need to be done to allow Hadoop to work with IPv6? > Use java.net.preferIPv4Stack to force IPv4 > -- > > Key: HADOOP-6056 > URL: https://issues.apache.org/jira/browse/HADOOP-6056 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 0.21.0, 0.22.0 >Reporter: Steve Loughran >Assignee: Todd Lipcon > Fix For: 0.21.0, 0.22.0 > > Attachments: hadoop-6056.txt > > > This was mentioned on HADOOP-3427, there is a property, > java.net.preferIPv4Stack, which you set to true for the java net process to > switch to IPv4 everywhere. > As Hadoop doesn't work on IPv6, this should be set to true in the startup > scripts. Hopefully this will ensure that Jetty will also pick it up. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
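For context, a minimal illustration (not Hadoop code) of the property being discussed. The flag only takes effect if set before the JVM's networking classes initialize, which is why the startup scripts, rather than application code, are the right place for it.

```java
// Reads back the flag; in practice it is passed at JVM launch, e.g. by
// appending -Djava.net.preferIPv4Stack=true to HADOOP_OPTS, since setting
// it after networking has initialized is too late.
public class Ipv4StackCheck {
    static boolean prefersIPv4() {
        return Boolean.parseBoolean(
            System.getProperty("java.net.preferIPv4Stack", "false"));
    }

    public static void main(String[] args) {
        System.out.println("prefer IPv4: " + prefersIPv4());
    }
}
```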
[jira] Updated: (HADOOP-6435) Make RPC.waitForProxy with timeout public
[ https://issues.apache.org/jira/browse/HADOOP-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6435: -- Resolution: Fixed Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Steve! > Make RPC.waitForProxy with timeout public > - > > Key: HADOOP-6435 > URL: https://issues.apache.org/jira/browse/HADOOP-6435 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 0.22.0 > > Attachments: HADOOP-6435-1.patch, HADOOP-6435-2.patch > > > The public RPC.waitForProxy() method waits for Long.MAX_VALUE before giving > up, ignores all interrupt requests. This is excessive. > The version of the method that is package scoped should be made public. > Interrupt swallowing is covered in HADOOP-6221 and can be done as a separate > patch -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework
[ https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794134#action_12794134 ] Tom White commented on HADOOP-6332: --- I would prefer to see a role-based approach in ClusterProcessManager (and other classes) since having explicit master/slave roles makes it difficult to support clusters with a separate namenode and jobtracker, or ZooKeeper (where all nodes are peers). > Large-scale Automated Test Framework > > > Key: HADOOP-6332 > URL: https://issues.apache.org/jira/browse/HADOOP-6332 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Reporter: Arun C Murthy >Assignee: Arun C Murthy > Fix For: 0.21.0 > > Attachments: 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, > HADOOP-6332-MR.patch, HADOOP-6332.patch, HADOOP-6332.patch > > > Hadoop would benefit from having a large-scale, automated, test-framework. > This jira is meant to be a master-jira to track relevant work. > > The proposal is a junit-based, large-scale test framework which would run > against _real_ clusters. > There are several pieces we need to achieve this goal: > # A set of utilities we can use in junit-based tests to work with real, > large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & > stop clusters, bring down tasktrackers, datanodes, entire racks of both etc. > # Enhanced control-ability and inspect-ability of the various components in > the system e.g. daemons such as namenode, jobtracker should expose their > data-structures for query/manipulation etc. Tests would be much more relevant > if we could for e.g. query for specific states of the jobtracker, scheduler > etc. Clearly these apis should _not_ be part of the production clusters - > hence the proposal is to use aspectj to weave these new apis to > debug-deployments. > > Related note: we should break up our tests into at least 3 categories: > # src/test/unit -> Real unit tests using mock objects (e.g. 
HDFS-669 & > MAPREDUCE-1050). > # src/test/integration -> Current junit tests with Mini* clusters etc. > # src/test/system -> HADOOP-6332 and it's children -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794108#action_12794108 ] Tom White commented on HADOOP-6466: --- > Are you in danger of creating a dependency loop here? Not really. There is no compile-time dependency here. The scripts already run HDFS and MapReduce clusters, which are not contained in Hadoop Common. HOD does this too. Having said that, if people think this is a problem then we can move the scripts. > Add a ZooKeeper service to the cloud scripts > > > Key: HADOOP-6466 > URL: https://issues.apache.org/jira/browse/HADOOP-6466 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6466.patch > > > It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6466: -- Attachment: HADOOP-6466.patch Patch implementing a basic ZooKeeper service. > Add a ZooKeeper service to the cloud scripts > > > Key: HADOOP-6466 > URL: https://issues.apache.org/jira/browse/HADOOP-6466 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6466.patch > > > It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6466) Add a ZooKeeper service to the cloud scripts
Add a ZooKeeper service to the cloud scripts Key: HADOOP-6466 URL: https://issues.apache.org/jira/browse/HADOOP-6466 Project: Hadoop Common Issue Type: New Feature Components: contrib/cloud Reporter: Tom White Assignee: Tom White It would be good to add other Hadoop services to the cloud scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6465) Write a Terremark cloud provider
[ https://issues.apache.org/jira/browse/HADOOP-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6465: -- Attachment: HADOOP-6465.patch This patch uses libcloud to communicate with the Terremark vCloud API. > Write a Terremark cloud provider > > > Key: HADOOP-6465 > URL: https://issues.apache.org/jira/browse/HADOOP-6465 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6465.patch > > > The scripts in contrib/cloud currently only support running on EC2. This > issue is to add support for running Hadoop clusters on Terremark's vCloud > Express platform. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6464) Write a Rackspace cloud provider
[ https://issues.apache.org/jira/browse/HADOOP-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6464: -- Attachment: HADOOP-6464.patch This patch uses the Python libcloud API (http://incubator.apache.org/libcloud/) to communicate with Rackspace's API. Instructions are provided in the README. > Write a Rackspace cloud provider > > > Key: HADOOP-6464 > URL: https://issues.apache.org/jira/browse/HADOOP-6464 > Project: Hadoop Common > Issue Type: New Feature > Components: contrib/cloud >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6464.patch > > > The scripts in contrib/cloud currently only support running on EC2. This > issue is to add support for running Hadoop clusters on Rackspace Cloud > Servers. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6465) Write a Terremark cloud provider
Write a Terremark cloud provider Key: HADOOP-6465 URL: https://issues.apache.org/jira/browse/HADOOP-6465 Project: Hadoop Common Issue Type: New Feature Components: contrib/cloud Reporter: Tom White Assignee: Tom White The scripts in contrib/cloud currently only support running on EC2. This issue is to add support for running Hadoop clusters on Terremark's vCloud Express platform. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6464) Write a Rackspace cloud provider
Write a Rackspace cloud provider Key: HADOOP-6464 URL: https://issues.apache.org/jira/browse/HADOOP-6464 Project: Hadoop Common Issue Type: New Feature Components: contrib/cloud Reporter: Tom White Assignee: Tom White The scripts in contrib/cloud currently only support running on EC2. This issue is to add support for running Hadoop clusters on Rackspace Cloud Servers. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Resolution: Fixed Fix Version/s: 0.22.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Fix For: 0.22.0 > > Attachments: HADOOP-6454.patch, HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6424) Port FsShell to FileContext
[ https://issues.apache.org/jira/browse/HADOOP-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6424: -- Status: Open (was: Patch Available) > Port FsShell to FileContext > > > Key: HADOOP-6424 > URL: https://issues.apache.org/jira/browse/HADOOP-6424 > Project: Hadoop Common > Issue Type: Task >Reporter: Eli Collins >Assignee: Eli Collins > Attachments: HADOOP-6424.patch > > > FsShell currently uses FileSystem, needs to be moved over to FileContext. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6444) Support additional security group option in hadoop-ec2 script
[ https://issues.apache.org/jira/browse/HADOOP-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6444: -- Resolution: Fixed Fix Version/s: 0.22.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Paul! > Support additional security group option in hadoop-ec2 script > - > > Key: HADOOP-6444 > URL: https://issues.apache.org/jira/browse/HADOOP-6444 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Paul Egan >Assignee: Paul Egan >Priority: Minor > Fix For: 0.22.0 > > Attachments: hadoop-ec2-py-0.3.0.patch, > hadoop-trunk-contrib-cloud.patch, hadoop-trunk-contrib-cloud.patch, > hadoop-trunk-contrib-cloud.patch > > > When deploying a hadoop cluster on ec2 alongside other services it is very > useful to be able to specify additional (pre-existing) security groups to > facilitate access control. For example one could use this feature to add a > cluster to a generic "hadoop" group, which authorizes hdfs access from > instances outside the cluster. Without such an option the access control for > the security groups created by the script need to manually updated after > cluster launch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6462) contrib/cloud failing, target "compile" does not exist
[ https://issues.apache.org/jira/browse/HADOOP-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6462: -- Resolution: Fixed Assignee: Tom White Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. > contrib/cloud failing, target "compile" does not exist > -- > > Key: HADOOP-6462 > URL: https://issues.apache.org/jira/browse/HADOOP-6462 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Assignee: Tom White > Fix For: 0.22.0 > > Attachments: HADOOP-6462.patch > > > I'm not seeing this mentioned in hudson or other bugreports, which confuses > me. With the addition of a src/contrib/cloud/build.xml from HADOOP-6426, > contrib/build.xml won't build no more: > hadoop-common/src/contrib/build.xml:30: The following error occurred while > executing this line: > Target "compile" does not exist in the project "hadoop-cloud". > What is odd is this: the final patch of HADOOP-6426 does include the stub > files needed, yet they aren't in SVN_HEAD. Which implies that a > different version may have gone in than intended. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6452) Hadoop JSP pages don't work under a security manager
[ https://issues.apache.org/jira/browse/HADOOP-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793448#action_12793448 ] Tom White commented on HADOOP-6452: --- +1 A nit: there are a couple of places where the indentation introduced by the patch isn't correct. > Hadoop JSP pages don't work under a security manager > > > Key: HADOOP-6452 > URL: https://issues.apache.org/jira/browse/HADOOP-6452 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 0.22.0 > > Attachments: hadoop-5740.patch, HADOOP-6452-2.patch, > mapreduce-439-2.patch > > > When you run Hadoop under a security manager that says "yes" to all security > checks, you get stack traces when Jetty tries to initialise the JSP engine. > Which implies you can't use Jasper under a security manager -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6434) Make HttpServer slightly easier to manage/diagnose faults with
[ https://issues.apache.org/jira/browse/HADOOP-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12793440#action_12793440 ] Tom White commented on HADOOP-6434: --- +1 > Make HttpServer slightly easier to manage/diagnose faults with > -- > > Key: HADOOP-6434 > URL: https://issues.apache.org/jira/browse/HADOOP-6434 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 0.22.0 > > Attachments: HADOOP-6434-1.patch, HADOOP-6434-2.patch > > > It would be easier to work with HttpServer if > # webServer.isStarted() was exported > # the toString() method included the (hostname,port) in use > # Bind Exceptions raised in startup included the (hostname, port) requested -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
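The three points quoted in the issue can be sketched in isolation (the class name and message format here are assumptions, not the committed patch): expose started state, report the (hostname, port) in toString(), and include them when a bind fails.

```java
import java.net.BindException;

// Sketch of the diagnosability improvements: callers can ask whether the
// server started, logs show which endpoint a server instance is on, and a
// bind failure names the conflicting (hostname, port).
class DiagnosableHttpServer {
    private final String hostname;
    private final int port;
    private boolean started;

    DiagnosableHttpServer(String hostname, int port) {
        this.hostname = hostname;
        this.port = port;
    }

    public boolean isStarted() {
        return started;
    }

    @Override
    public String toString() {
        return "HttpServer at " + hostname + ":" + port
            + (started ? " (started)" : " (stopped)");
    }

    public void start() throws BindException {
        try {
            bind();          // a real server would open its listener here
            started = true;
        } catch (BindException e) {
            // Re-throw with the requested endpoint so logs pinpoint the clash.
            BindException withAddress = new BindException(
                "Unable to bind " + hostname + ":" + port + ": " + e.getMessage());
            withAddress.initCause(e);
            throw withAddress;
        }
    }

    private void bind() throws BindException { /* stub: always succeeds */ }
}
```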
[jira] Updated: (HADOOP-6444) Support additional security group option in hadoop-ec2 script
[ https://issues.apache.org/jira/browse/HADOOP-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6444: -- Assignee: Paul Egan (was: Tom White) Status: Open (was: Patch Available) Paul, thanks for the updated patch. Hudson can't apply it since it needs to be relative to Hadoop Common's root - i.e. paths starting with "src/contrib/cloud/src/py/hadoop/..." Could you regenerate it please? (BTW you can leave this issue assigned to yourself since you are working on it.) > Support additional security group option in hadoop-ec2 script > - > > Key: HADOOP-6444 > URL: https://issues.apache.org/jira/browse/HADOOP-6444 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Paul Egan >Assignee: Paul Egan >Priority: Minor > Attachments: hadoop-ec2-py-0.3.0.patch, > hadoop-trunk-contrib-cloud.patch, hadoop-trunk-contrib-cloud.patch > > > When deploying a hadoop cluster on ec2 alongside other services it is very > useful to be able to specify additional (pre-existing) security groups to > facilitate access control. For example one could use this feature to add a > cluster to a generic "hadoop" group, which authorizes hdfs access from > instances outside the cluster. Without such an option the access control for > the security groups created by the script need to manually updated after > cluster launch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6462) contrib/cloud failing, target "compile" does not exist
[ https://issues.apache.org/jira/browse/HADOOP-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6462: -- Attachment: HADOOP-6462.patch Patch to fix the missing targets. The final patch from HADOOP-6426 was not the one committed, since there were problems getting Hudson to run the new unit tests, so this was broken off into another issue (HADOOP-6451). Sorry for the confusion. > contrib/cloud failing, target "compile" does not exist > -- > > Key: HADOOP-6462 > URL: https://issues.apache.org/jira/browse/HADOOP-6462 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 0.21.0 >Reporter: Steve Loughran > Attachments: HADOOP-6462.patch > > > I'm not seeing this mentioned in hudson or other bugreports, which confuses > me. With the addition of a src/contrib/cloud/build.xml from HADOOP-6426, > contrib/build.xml won't build no more: > hadoop-common/src/contrib/build.xml:30: The following error occurred while > executing this line: > Target "compile" does not exist in the project "hadoop-cloud". > What is odd is this: the final patch of HADOOP-6426 does include the stub > files needed, yet they aren't in SVN_HEAD. Which implies that a > different version may have gone in than intended. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6452) Hadoop JSP pages don't work under a security manager
[ https://issues.apache.org/jira/browse/HADOOP-6452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792742#action_12792742 ] Tom White commented on HADOOP-6452: --- This looks reasonable to me. Is it possible to write a test for it? If not, can you describe the manual test you ran please? > Hadoop JSP pages don't work under a security manager > > > Key: HADOOP-6452 > URL: https://issues.apache.org/jira/browse/HADOOP-6452 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 0.22.0 > > Attachments: hadoop-5740.patch, mapreduce-439-2.patch > > > When you run Hadoop under a security manager that says "yes" to all security > checks, you get stack traces when Jetty tries to initialise the JSP engine. > Which implies you can't use Jasper under a security manager -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6443) Serialization classes accept invalid metadata
[ https://issues.apache.org/jira/browse/HADOOP-6443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792741#action_12792741 ] Tom White commented on HADOOP-6443: --- +1 A nit: the logic for checking for SERIALIZATION_KEY is repeated in three places. Can you factor it out into a method? > Serialization classes accept invalid metadata > - > > Key: HADOOP-6443 > URL: https://issues.apache.org/jira/browse/HADOOP-6443 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: HADOOP-6443.2.patch, HADOOP-6443.patch > > > The {{SerializationBase.accept()}} methods of several serialization > implementations use incorrect metadata when determining whether they are the > correct serializer for the user's metadata. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
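The suggested factoring might look like this (the key name and matching rule are assumptions for illustration; the real SerializationBase differs):

```java
import java.util.Map;

// One shared helper replaces the three duplicated checks: each accept()
// implementation asks whether the user's metadata names its class, instead
// of repeating the lookup-and-compare logic inline.
class SerializationChecks {
    static final String SERIALIZATION_KEY = "Serialization-Class"; // assumed name

    /** True iff the metadata's serialization entry names the given class. */
    static boolean matchesSerializationKey(Map<String, String> metadata,
                                           Class<?> serialization) {
        String requested = metadata.get(SERIALIZATION_KEY);
        return requested != null && requested.equals(serialization.getName());
    }
}
```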
[jira] Commented: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792733#action_12792733 ] Tom White commented on HADOOP-6454: --- I ran 'sudo python setup.py install' then launched and terminated a cluster successfully. > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch, HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6434) Make HttpServer slightly easier to manage/diagnose faults with
[ https://issues.apache.org/jira/browse/HADOOP-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6434: -- Status: Open (was: Patch Available) Looks good. Marking open, pending tests. > Make HttpServer slightly easier to manage/diagnose faults with > -- > > Key: HADOOP-6434 > URL: https://issues.apache.org/jira/browse/HADOOP-6434 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 0.22.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 0.22.0 > > Attachments: HADOOP-6434-1.patch > > > It would be easier to work with HttpServer if > # webServer.isStarted() was exported > # the toString() method included the (hostname,port) in use > # Bind Exceptions raised in startup included the (hostname, port) requested -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6370) Contrib project ivy dependencies are not included in binary target
[ https://issues.apache.org/jira/browse/HADOOP-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792505#action_12792505 ] Tom White commented on HADOOP-6370: --- bq. Do you know of a straightforward way to do this? No, I don't. If we put the contrib dependency jars into the top-level lib then we'd have to be sure there aren't version inconsistencies, which has happened before (see HADOOP-6395). > Contrib project ivy dependencies are not included in binary target > -- > > Key: HADOOP-6370 > URL: https://issues.apache.org/jira/browse/HADOOP-6370 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Aaron Kimball >Assignee: Aaron Kimball >Priority: Critical > Attachments: HADOOP-6370.patch > > > Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any > libraries required by contribs are not redistributed. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Status: Patch Available (was: Open) > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch, HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Attachment: HADOOP-6454.patch Release audit failure was caused by the manifest file, which is actually unnecessary since it doesn't specify anything other than the defaults. This patch removes it. > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch, HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Status: Open (was: Patch Available) > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch, HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Attachment: HADOOP-6454.patch Patch for an installation script. As a part of this change the reliance on loading VERSION and hadoop-ec2-init-remote.sh files relative to sys.path[0] has been removed. (When this is committed hadoop-ec2-init-remote.sh should be moved using svn mv.) > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
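The patch note above mentions removing the reliance on loading VERSION and hadoop-ec2-init-remote.sh relative to sys.path[0]. A common way to achieve this (a sketch of the general pattern, not the actual patch; the helper name and the base_dir parameter are illustrative) is to resolve data files against the module's own location:

```python
import os

def load_data_file(name, base_dir=None):
    """Read a data file shipped alongside the scripts.

    Resolving against the module's own directory rather than
    sys.path[0] keeps the lookup working after the scripts have been
    installed by setup.py. base_dir is overridable for testing.
    """
    if base_dir is None:
        base_dir = os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(base_dir, name)) as f:
        return f.read()
```

sys.path[0] is the directory of the script used to start Python, so it points at the wrong place as soon as the package is imported from site-packages; anchoring on the module file avoids that.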
[jira] Updated: (HADOOP-6454) Create setup.py for EC2 cloud scripts
[ https://issues.apache.org/jira/browse/HADOOP-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6454: -- Status: Patch Available (was: Open) > Create setup.py for EC2 cloud scripts > - > > Key: HADOOP-6454 > URL: https://issues.apache.org/jira/browse/HADOOP-6454 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6454.patch > > > This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6454) Create setup.py for EC2 cloud scripts
Create setup.py for EC2 cloud scripts - Key: HADOOP-6454 URL: https://issues.apache.org/jira/browse/HADOOP-6454 Project: Hadoop Common Issue Type: Improvement Components: contrib/ec2 Reporter: Tom White Assignee: Tom White This would make it easier to install the scripts. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6370) Contrib project ivy dependencies are not included in binary target
[ https://issues.apache.org/jira/browse/HADOOP-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12792180#action_12792180 ] Tom White commented on HADOOP-6370: --- I wonder if contrib-specific dependencies should go into contrib/<module>/lib. The contrib jar files are not on the classpath by default, and putting their dependencies in a separate directory would be equivalent. > Contrib project ivy dependencies are not included in binary target > -- > > Key: HADOOP-6370 > URL: https://issues.apache.org/jira/browse/HADOOP-6370 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Aaron Kimball >Assignee: Aaron Kimball >Priority: Critical > Attachments: HADOOP-6370.patch > > > Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any > libraries required by contribs are not redistributed. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Resolution: Fixed Fix Version/s: 0.22.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed the original version of this. I've opened HADOOP-6451 to address the problem of contrib tests not being run. > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Fix For: 0.22.0 > > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6451) Contrib tests are not being run
Contrib tests are not being run --- Key: HADOOP-6451 URL: https://issues.apache.org/jira/browse/HADOOP-6451 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Tom White The test target in src/contrib/build.xml references contrib modules that are no longer there post project split. This was discovered in HADOOP-6426. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Assigned: (HADOOP-6444) Support additional security group option in hadoop-ec2 script
[ https://issues.apache.org/jira/browse/HADOOP-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White reassigned HADOOP-6444: - Assignee: Paul Egan > Support additional security group option in hadoop-ec2 script > - > > Key: HADOOP-6444 > URL: https://issues.apache.org/jira/browse/HADOOP-6444 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Paul Egan >Assignee: Paul Egan >Priority: Minor > Attachments: hadoop-ec2-py-0.3.0.patch, > hadoop-trunk-contrib-cloud.patch > > > When deploying a hadoop cluster on ec2 alongside other services it is very > useful to be able to specify additional (pre-existing) security groups to > facilitate access control. For example one could use this feature to add a > cluster to a generic "hadoop" group, which authorizes hdfs access from > instances outside the cluster. Without such an option the access control for > the security groups created by the script needs to be manually updated after > cluster launch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6444) Support additional security group option in hadoop-ec2 script
[ https://issues.apache.org/jira/browse/HADOOP-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12791511#action_12791511 ] Tom White commented on HADOOP-6444: --- Security groups are an EC2-only feature so it would be good to document that in the option help (just like --key-name, for example). Otherwise, this looks good. > Support additional security group option in hadoop-ec2 script > - > > Key: HADOOP-6444 > URL: https://issues.apache.org/jira/browse/HADOOP-6444 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Paul Egan >Priority: Minor > Attachments: hadoop-ec2-py-0.3.0.patch, > hadoop-trunk-contrib-cloud.patch > > > When deploying a hadoop cluster on ec2 alongside other services it is very > useful to be able to specify additional (pre-existing) security groups to > facilitate access control. For example one could use this feature to add a > cluster to a generic "hadoop" group, which authorizes hdfs access from > instances outside the cluster. Without such an option the access control for > the security groups created by the script needs to be manually updated after > cluster launch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6315) GzipCodec should not represent BuiltInZlibInflater as decompressorType
[ https://issues.apache.org/jira/browse/HADOOP-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12791495#action_12791495 ] Tom White commented on HADOOP-6315: --- It would be good to have a contributor from HADOOP-5281 comment on this patch, since HADOOP-5281 was marked as a blocker for 0.20. Does the current fix need to go into 0.20 too? > GzipCodec should not represent BuiltInZlibInflater as decompressorType > -- > > Key: HADOOP-6315 > URL: https://issues.apache.org/jira/browse/HADOOP-6315 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: HADOOP-6315.2.patch, HADOOP-6315.3.patch, > HADOOP-6315.patch > > > It is possible to pollute CodecPool in such a way that Hadoop cannot read > gzip-compressed data. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790966#action_12790966 ] Tom White commented on HADOOP-6426: --- This is failing because Hudson doesn't have the necessary python dependencies installed (simplejson, boto). I would like to go with the original version of this patch and solve the Hudson build integration in another JIRA. > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6315) GzipCodec should not represent BuiltInZlibInflater as decompressorType
[ https://issues.apache.org/jira/browse/HADOOP-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790962#action_12790962 ] Tom White commented on HADOOP-6315: --- Does getCompressorType() need to change too? Is this related to the change that went in to HADOOP-5281? > GzipCodec should not represent BuiltInZlibInflater as decompressorType > -- > > Key: HADOOP-6315 > URL: https://issues.apache.org/jira/browse/HADOOP-6315 > Project: Hadoop Common > Issue Type: Bug > Components: io >Reporter: Aaron Kimball >Assignee: Aaron Kimball > Attachments: HADOOP-6315.2.patch, HADOOP-6315.patch > > > It is possible to pollute CodecPool in such a way that Hadoop cannot read > gzip-compressed data. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Reopened: (HADOOP-5901) FileSystem.fixName() has unexpected behaviour
[ https://issues.apache.org/jira/browse/HADOOP-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White reopened HADOOP-5901: --- I've just reverted this change, since FileSystem.setDefaultUri(conf, "file:///") fails. > FileSystem.fixName() has unexpected behaviour > - > > Key: HADOOP-5901 > URL: https://issues.apache.org/jira/browse/HADOOP-5901 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Aaron Kimball >Priority: Minor > Fix For: 0.22.0 > > Attachments: HADOOP-5901.2.patch, HADOOP-5901.3.patch, > HADOOP-5901.patch > > > {{FileSystem.fixName()}} tries to patch up fs.default.name values, but I'm > not sure it helps that well. > Has it been warning about deprecated values for long enough for it to be > turned off? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-3659) Patch to allow hadoop native to compile on Mac OS X
[ https://issues.apache.org/jira/browse/HADOOP-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12790852#action_12790852 ] Tom White commented on HADOOP-3659: --- I can confirm that this patch works for me on Mac OS X 10.5.8 if I do: {code} export LDFLAGS=-L/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Libraries ant compile-native {code} > Patch to allow hadoop native to compile on Mac OS X > --- > > Key: HADOOP-3659 > URL: https://issues.apache.org/jira/browse/HADOOP-3659 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 0.20.0 > Environment: Mac OS X 10.5.3 >Reporter: Colin Evans >Assignee: Colin Evans >Priority: Minor > Fix For: 0.20.2 > > Attachments: HADOOP-3659.patch, hadoop-native-mac.patch, > hadoop-native-mac.patch > > > This patch makes the autoconf script work on Mac OS X. LZO needs to be > installed (including the optional shared libraries) for the compile to > succeed. You'll want to regenerate the configure script using autoconf after > applying this patch. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6328) Hadoop 0.20 Docs - backport changes for streaming and m/r tutorial docs
[ https://issues.apache.org/jira/browse/HADOOP-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6328: -- Status: Open (was: Patch Available) Patch no longer applies. > Hadoop 0.20 Docs - backport changes for streaming and m/r tutorial docs > --- > > Key: HADOOP-6328 > URL: https://issues.apache.org/jira/browse/HADOOP-6328 > Project: Hadoop Common > Issue Type: Task > Components: documentation >Reporter: Corinne Chandel >Assignee: Amareshwari Sriramadasu >Priority: Blocker > Fix For: 0.20.2 > > Attachments: hadoop-6328.patch > > > Doc changes added to the Hadoop/mapreduce/trunk (for Hadoop 0.21 release) > need to be backported to Hadoop-0.20 branch (for Hadoop 0.20.2 release). > Doc files affected: > > streaming.xml > > mapred_tutorial.xml > Changes include: > 1. During the execution of a streaming job, the names of the "mapred" > parameters are transformed. The dots ( . ) become underscores ( _ ). -- > (affects streaming doc and m/r tutorial doc) > 2. For -files and -archives options, Hadoop now creates symlink with same > name as file (user-defined symlinks, #mysymlink, currently not supported) -- > (affects streaming doc) > 3. Streaming supports streaming command options and generic command options. > Generic options must be placed before streaming options, otherwise command > fails. -- (affects streaming doc) -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
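The first doc change above concerns how streaming exposes "mapred" configuration parameters to the user's process: since dots are not legal in environment variable names, each dot is replaced with an underscore. The transformation itself is trivial (env_var_name is an illustrative helper, not Hadoop's own API):

```python
def env_var_name(param):
    # Hadoop streaming passes job configuration to the streaming
    # process as environment variables; '.' in a parameter name is
    # replaced with '_' to form a legal variable name.
    return param.replace('.', '_')
```

So a streaming script would read mapred.task.partition from the environment variable mapred_task_partition.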
[jira] Updated: (HADOOP-5901) FileSystem.fixName() has unexpected behaviour
[ https://issues.apache.org/jira/browse/HADOOP-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-5901: -- Resolution: Fixed Fix Version/s: 0.22.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Aaron! > FileSystem.fixName() has unexpected behaviour > - > > Key: HADOOP-5901 > URL: https://issues.apache.org/jira/browse/HADOOP-5901 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Aaron Kimball >Priority: Minor > Fix For: 0.22.0 > > Attachments: HADOOP-5901.2.patch, HADOOP-5901.3.patch, > HADOOP-5901.patch > > > {{FileSystem.fixName()}} tries to patch up fs.default.name values, but I'm > not sure it helps that well. > Has it been warning about deprecated values for long enough for it to be > turned off? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6413) Move TestReflectionUtils to Common
[ https://issues.apache.org/jira/browse/HADOOP-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6413: -- Fix Version/s: 0.21.0 > Move TestReflectionUtils to Common > -- > > Key: HADOOP-6413 > URL: https://issues.apache.org/jira/browse/HADOOP-6413 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Todd Lipcon >Assignee: Todd Lipcon > Fix For: 0.21.0, 0.22.0 > > Attachments: hadoop-6413.txt, hadoop-6413.txt > > > The common half of MAPREDUCE-1209 -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6413) Move TestReflectionUtils to Common
[ https://issues.apache.org/jira/browse/HADOOP-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6413: -- Resolution: Fixed Fix Version/s: (was: 0.21.0) Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Todd! > Move TestReflectionUtils to Common > -- > > Key: HADOOP-6413 > URL: https://issues.apache.org/jira/browse/HADOOP-6413 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Todd Lipcon >Assignee: Todd Lipcon > Fix For: 0.22.0 > > Attachments: hadoop-6413.txt, hadoop-6413.txt > > > The common half of MAPREDUCE-1209 -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5958) Use JDK 1.6 File APIs in DF.java wherever possible
[ https://issues.apache.org/jira/browse/HADOOP-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-5958: -- Resolution: Fixed Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Aaron! > Use JDK 1.6 File APIs in DF.java wherever possible > -- > > Key: HADOOP-5958 > URL: https://issues.apache.org/jira/browse/HADOOP-5958 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Devaraj Das >Assignee: Aaron Kimball > Fix For: 0.22.0 > > Attachments: HADOOP-5958-hdfs.patch, HADOOP-5958-mapred.patch, > HADOOP-5958.2.patch, HADOOP-5958.3.patch, HADOOP-5958.4.patch, > HADOOP-5958.5.patch, HADOOP-5958.6.patch, HADOOP-5958.patch > > > JDK 1.6 has File APIs like File.getFreeSpace() which should be used instead > of spawning a command process for getting the various disk/partition related > attributes. This would avoid spikes in memory consumption by tasks when > things like LocalDirAllocator are used for creating paths on the filesystem. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6391) Classpath should not be part of command line arguments
[ https://issues.apache.org/jira/browse/HADOOP-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6391: -- Resolution: Fixed Fix Version/s: (was: 0.21.0) Assignee: Cristian Ivascu Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Cristian! > Classpath should not be part of command line arguments > -- > > Key: HADOOP-6391 > URL: https://issues.apache.org/jira/browse/HADOOP-6391 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 0.21.0, 0.22.0 >Reporter: Cristian Ivascu >Assignee: Cristian Ivascu > Fix For: 0.22.0 > > Attachments: HADOOP-6391.patch > > > Because bin/hadoop and bin/hdfs put the entire CLASSPATH in the command line > arguments, it exceeds 4096 bytes, which is the maximum size that ps (or > /proc) can work with. This makes looking for the processes difficult, since > the output gets truncated for all components at the same point (e.g. > NameNode, SecondaryNameNode, DataNode). > The mapred sub-project does not have this problem, because it calls "export > CLASSPATH" before the final exec. bin/hadoop and bin/hdfs should do the same > thing -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-5901) FileSystem.fixName() has unexpected behaviour
[ https://issues.apache.org/jira/browse/HADOOP-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-5901: -- Status: Open (was: Patch Available) This patch no longer applies cleanly. > FileSystem.fixName() has unexpected behaviour > - > > Key: HADOOP-5901 > URL: https://issues.apache.org/jira/browse/HADOOP-5901 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 0.21.0 >Reporter: Steve Loughran >Assignee: Aaron Kimball >Priority: Minor > Attachments: HADOOP-5901.2.patch, HADOOP-5901.patch > > > {{FileSystem.fixName()}} tries to patch up fs.default.name values, but I'm > not sure it helps that well. > Has it been warning about deprecated values for long enough for it to be > turned off? -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6414) Add command line help for -expunge command.
[ https://issues.apache.org/jira/browse/HADOOP-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6414: -- Resolution: Fixed Fix Version/s: (was: 0.21.0) Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) I've just committed this. Thanks Ravi! > Add command line help for -expunge command. > --- > > Key: HADOOP-6414 > URL: https://issues.apache.org/jira/browse/HADOOP-6414 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ravi Phulari >Assignee: Ravi Phulari >Priority: Trivial > Fix For: 0.22.0 > > Attachments: HDFS-809.patch > > > Command line help for *hadoop fs -expunge* command is missing. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6409) TestHDFSCLI has to check if it's running any testcases at all
[ https://issues.apache.org/jira/browse/HADOOP-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6409: -- Status: Open (was: Patch Available) Removing from queue while Konstantin's suggestion is addressed. > TestHDFSCLI has to check if it's running any testcases at all > - > > Key: HADOOP-6409 > URL: https://issues.apache.org/jira/browse/HADOOP-6409 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 0.21.0, 0.22.0 >Reporter: Konstantin Boudnik >Assignee: Todd Lipcon >Priority: Blocker > Attachments: hadoop-6409.txt, hadoop-6409.txt > > > There's a number of occasions when TestHDFSCLI reports a successful execution > however doesn't run any tests at all. > For a typical case please take a [look > here|http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Patch-h4.grid.sp2.yahoo.net/162/testReport/org.apache.hadoop.cli/TestCLI/testAll/] -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Patch Available (was: Open) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Open (was: Patch Available) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6440) Use boot from EBS in EC2
Use boot from EBS in EC2 Key: HADOOP-6440 URL: https://issues.apache.org/jira/browse/HADOOP-6440 Project: Hadoop Common Issue Type: Improvement Components: contrib/ec2 Reporter: Tom White Amazon now supports the ability to boot from EBS snapshots (http://aws.amazon.com/about-aws/whats-new/2009/12/03/amazon-ec2-instances-now-can-boot-from-amazon-ebs/). We can use this feature to simplify the EBS support in the contrib scripts. The code that manages the instance/volume mapping can be retired, since EC2 itself now manages this relationship. This change would add "stop-cluster" and "start-cluster" commands to stop/start the instances in a cluster while keeping the EBS volumes attached. Also, deprecate the "create-storage" and "attach-storage" commands, since both happen as a part of launch-cluster. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Open (was: Patch Available) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Patch Available (was: Open) Trying again with pyAntTasks jar now committed (it's not in any Maven repos). > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Open (was: Patch Available) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Patch Available (was: Open) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Attachment: HADOOP-6426.patch Contrib tests failed because Hudson couldn't pick up the PyAnt task. New patch to alleviate this. > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch, > HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6422) permit RPC protocols to be implemented by Avro
[ https://issues.apache.org/jira/browse/HADOOP-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12789482#action_12789482 ] Tom White commented on HADOOP-6422: --- +1 Looks good to me. A couple of minor things: * AvroRpc should really be called AvroRpcEngine or, perhaps, TunnelingAvroRpcEngine. * Change Class to Class<?> in RpcEngine and implementations. > permit RPC protocols to be implemented by Avro > -- > > Key: HADOOP-6422 > URL: https://issues.apache.org/jira/browse/HADOOP-6422 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: Doug Cutting >Assignee: Doug Cutting > Fix For: 0.22.0 > > Attachments: HADOOP-6422.patch, HADOOP-6422.patch, HADOOP-6422.patch, > HADOOP-6422.patch > > > To more easily permit Hadoop to evolve to use Avro RPC, I propose to change > RPC to use different implementations for clients and servers based on the > configuration. This is not intended as an end-user configuration: only a > single RPC implementation will be supported in a given release, but rather a > tool to permit us to more easily develop and test new RPC implementations. > As such, the configuration parameters used would not be documented. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Patch Available (was: Open) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Open (was: Patch Available) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Attachment: HADOOP-6426.patch Properly formatted patch. > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Created: (HADOOP-6437) HOD tests are not being run by 'test-contrib'
HOD tests are not being run by 'test-contrib' - Key: HADOOP-6437 URL: https://issues.apache.org/jira/browse/HADOOP-6437 Project: Hadoop Common Issue Type: Bug Reporter: Tom White The contrib projects' build files were not being referenced correctly (post project split, see HADOOP-6426), so HOD tests are not being run. HOD tests need python.home to be set, but we should probably not run them if it is not set (Hudson sets this variable). -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Patch Available (was: Open) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Status: Open (was: Patch Available) > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Updated: (HADOOP-6426) Create ant build for running EC2 unit tests
[ https://issues.apache.org/jira/browse/HADOOP-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tom White updated HADOOP-6426: -- Attachment: HADOOP-6426.patch Modified patch to change src/contrib/build.xml. This file was incorrectly referencing some projects that have been moved to MapReduce as a part of the project split, and hence Common contrib tests were not being run. I have excluded HOD since it needs more work to get it to run tests - I'll open a separate JIRA for that. > Create ant build for running EC2 unit tests > --- > > Key: HADOOP-6426 > URL: https://issues.apache.org/jira/browse/HADOOP-6426 > Project: Hadoop Common > Issue Type: Improvement > Components: contrib/ec2 >Reporter: Tom White >Assignee: Tom White > Attachments: HADOOP-6426.patch, HADOOP-6426.patch > > > There is no easy way currently to run the Python unit tests for the cloud > contrib. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-6323) Serialization should provide comparators
[ https://issues.apache.org/jira/browse/HADOOP-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12788503#action_12788503 ] Tom White commented on HADOOP-6323: --- Could we do it using reflection in the serializer's constructor? Construct an instance of the SpecificRecord class specified in the metadata, then call getSchema() on it. > Serialization should provide comparators > > > Key: HADOOP-6323 > URL: https://issues.apache.org/jira/browse/HADOOP-6323 > Project: Hadoop Common > Issue Type: New Feature > Components: io >Reporter: Doug Cutting >Assignee: Aaron Kimball > Attachments: HADOOP-6323.2.patch, HADOOP-6323.3.patch, > HADOOP-6323.4.patch, HADOOP-6323.5.patch, HADOOP-6323.6.patch, > HADOOP-6323.patch > > > The Serialization interface should permit one to create raw comparators. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
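The reflection approach suggested in the HADOOP-6323 comment above can be sketched as follows. To keep the example self-contained, a tiny interface stands in for Avro's SpecificRecord (whose real getSchema() returns an org.apache.avro.Schema, not a String); the record class and its schema string are hypothetical:

```java
// Hypothetical stand-in for org.apache.avro.specific.SpecificRecord.
interface SpecificRecordLike {
    String getSchema();
}

// Hypothetical generated record class carrying its own schema.
class UserRecord implements SpecificRecordLike {
    public String getSchema() {
        return "{\"type\":\"record\",\"name\":\"UserRecord\",\"fields\":[]}";
    }
}

public class SchemaByReflection {
    // Recover the schema from a class name carried in serialization metadata:
    // load the class, instantiate it reflectively, and ask it for its schema.
    static String schemaFor(String className) {
        try {
            Object record = Class.forName(className)
                                 .getDeclaredConstructor().newInstance();
            return ((SpecificRecordLike) record).getSchema();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot load record class " + className, e);
        }
    }

    public static void main(String[] args) {
        // In a serializer's constructor, className would come from the metadata.
        System.out.println(schemaFor(UserRecord.class.getName()));
    }
}
```

The appeal of doing this in the serializer's constructor is that the schema never needs to travel separately: the class name in the metadata is enough to reconstruct it on the deserializing side.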