Re: AbandonBlockRequestProto cannot be resolved

2012-06-14 Thread Gourav Sengupta
Hi Harsh, I found that the errors in one of the files are due to dependencies not being resolved for the following statements: import org.apache.hadoop.ha.proto.HAServiceProtocolProtos.GetServiceStatusRequestProto; import org.apache.hadoop.ha.proto.HAServiceProtocolProtos.GetServiceStatusResponseProto; imp

Re: AbandonBlockRequestProto cannot be resolved

2012-06-14 Thread Gourav Sengupta
Hi Harsh, I have installed protocol buffers and protoc is in the command path as mentioned in the links you had forwarded me before. I will try the command mentioned and let you know the details. Regards, Gourav On 14/06/12 18:01, Harsh J wrote: Hi Gourav, As mentioned on http://wiki.apach

Re: AbandonBlockRequestProto cannot be resolved

2012-06-14 Thread Harsh J
Hi Gourav, As mentioned on http://wiki.apache.org/hadoop/HowToContribute and http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk (I still need to update the latter for branch-2, the rebranding of branch-0.23 going forward, etc.), did you install the protocol buffers dependencies on your machi

AbandonBlockRequestProto cannot be resolved

2012-06-14 Thread Gourav Sengupta
Hi, I downloaded the source code from the Apache Git repository using the command git clone git://git.apache.org/hadoop-common.git, then installed Eclipse with the Maven and EGit plugins and imported the project into Eclipse by adding the base path in EGit. While building the project I am getting around 100 errors an

Re: mapreduce.job.max.split.locations just a warning in hadoop 1.0.3 but not in 2.0.1-alpha?

2012-06-14 Thread Harsh J
Hey Jim, These are limits on the locations of a single split (locations for a regular File would mean where all the file split's blocks may reside). They do not control or cap inputs, just cap the maximum number of locations shippable per InputSplit object. For a 'regular' job on a 'regular' clust
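Harsh's point is easier to see with a small sketch. The code below is illustrative only (the path and host names are invented, and it assumes the Hadoop client jars are on the classpath): the mapreduce.job.max.split.locations limit applies to the location hints carried by each InputSplit, not to the amount of input or the number of splits.

    // Minimal sketch: the limit counts location hints per split, nothing else.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    public class SplitLocationsExample {
      public static void main(String[] args) throws Exception {
        String[] hosts = {"node1", "node2", "node3", "node4"};  // replica hosts (made up)
        FileSplit split = new FileSplit(
            new Path("/data/part-00000"), 0, 128L * 1024 * 1024, hosts);
        // Only hosts.length for this one split is subject to the limit.
        System.out.println("locations for this split: "
            + split.getLocations().length);
      }
    }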

[jira] [Created] (MAPREDUCE-4341) add types to capacity scheduler properties

2012-06-14 Thread Thomas Graves (JIRA)
Thomas Graves created MAPREDUCE-4341: Summary: add types to capacity scheduler properties Key: MAPREDUCE-4341 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4341 Project: Hadoop Map/Reduce

Hadoop-Mapreduce-trunk - Build # 1109 - Still Failing

2012-06-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1109/ ### LAST 60 LINES OF THE CONSOLE ### [...truncated 30148 lines...] Tests run: 1, Failures: 0, Errors: 0

Re: try to fix hadoop streaming bug

2012-06-14 Thread Robert Evans
It looks like your jar's MANIFEST file is missing the Main-Class attribute. It may have something to do with how you created the updated jar you are using. Hadoop is trying to run the jar, and because it did not find the Main-Class in the jar's manifest it thinks you are supplying it as the nex
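A quick way to confirm this diagnosis is to inspect the jar's manifest directly. The sketch below is not from the thread and the jar name is a placeholder; it simply prints whatever Main-Class attribute the manifest carries (null means the attribute is missing, in which case hadoop jar treats the next command-line argument as the class name).

    // Print the Main-Class attribute of a jar's manifest, if any.
    import java.util.jar.Attributes;
    import java.util.jar.JarFile;
    import java.util.jar.Manifest;

    public class ManifestCheck {
      public static void main(String[] args) throws Exception {
        String jarPath = args.length > 0 ? args[0] : "streaming.jar";  // placeholder
        try (JarFile jar = new JarFile(jarPath)) {
          Manifest mf = jar.getManifest();
          String mainClass = (mf == null) ? null
              : mf.getMainAttributes().getValue(Attributes.Name.MAIN_CLASS);
          System.out.println("Main-Class: " + mainClass);  // null means missing
        }
      }
    }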

mapreduce.job.max.split.locations just a warning in hadoop 1.0.3 but not in 2.0.1-alpha?

2012-06-14 Thread Jim Donofrio
I didn't hear anything from common-user about this; maybe that was the wrong list, because this is more of a development issue. final int max_loc = conf.getInt(MAX_SPLIT_LOCATIONS, 10); if (locations.length > max_loc) { LOG.warn("Max block location exceeded for split: "
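For context, here is a self-contained sketch contrasting the two behaviors this thread compares. The names follow the quoted snippet, and the 2.0.1-alpha side reflects what the subject line reports (a hard failure instead of a warning); nothing here is copied from the actual JobSplitWriter source.

    import java.io.IOException;
    import java.util.logging.Logger;

    public class MaxSplitLocationsCheck {
      private static final Logger LOG =
          Logger.getLogger(MaxSplitLocationsCheck.class.getName());

      // 1.0.3-style: exceeding the limit is only logged.
      static void warnIfExceeded(String[] locations, int maxLoc) {
        if (locations.length > maxLoc) {
          LOG.warning("Max block location exceeded for split: splitsize: "
              + locations.length + " maxsize: " + maxLoc);
        }
      }

      // 2.0.1-alpha-style (per the subject line): the same condition fails hard.
      static void failIfExceeded(String[] locations, int maxLoc) throws IOException {
        if (locations.length > maxLoc) {
          throw new IOException("Max block location exceeded for split: splitsize: "
              + locations.length + " maxsize: " + maxLoc);
        }
      }
    }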

try to fix hadoop streaming bug

2012-06-14 Thread HU Wenjing A
Hi all, I tried to fix the Hadoop streaming bug for version 0.21.0 (streaming overrides user-given output key and value types). I saw some useful messages about this issue on https://issues.apache.org/jira/browse/MAPREDUCE-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabp
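For reference, a minimal sketch of the user-side output type settings that such a fix is meant to honor rather than override. It uses only the plain JobConf API, needs the Hadoop client jars on the classpath, and is not taken from the MAPREDUCE-1888 patch.

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    public class StreamingOutputTypes {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // The user asks for Text keys and LongWritable values...
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);
        // ...and the reported bug is that streaming then resets these to its
        // own defaults instead of honoring the user's choice.
        System.out.println("output value class: "
            + conf.getOutputValueClass().getName());
      }
    }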