RE: FUSE CRASHING

2011-10-16 Thread Banka, Deepti
Hi Brian,
Thanks for your response. My Java version is 6, and I am running Hadoop 0.20.2.

Regards,
Deepti

-Original Message-
From: Brian Bockelman [mailto:bbock...@cse.unl.edu] 
Sent: Friday, October 14, 2011 8:48 PM
To: common-user@hadoop.apache.org
Subject: Re: FUSE CRASHING

Hi Deepti,

That appears to crash deep in pthread, which would scare me a bit.  Are
you using a strange/non-standard platform?  What Java version?  What
HDFS version?

Brian

On Oct 14, 2011, at 3:59 AM, Banka, Deepti wrote:

> Hi,
> 
> I am trying to run FUSE, and it's crashing randomly in the middle with
> the following error:
> 
> 
> 
> fuse_dfs:  tpp.c:66: __pthread_tpp_change_priority: Assertion
> `previous_prio == -1 || (previous_prio >= __sched_fifo_min_prio &&
> previous_prio <= __sched_fifo_max_prio)' failed.
> 
> 
> 
> Does anyone know the possible reason for such an error? And is it a
> known bug in FUSE? The FUSE version I am using is 2.8.5.
> 
> Kindly help.
> 
> Thanks,
> 
> Deepti
> 
> 
> 
> 
> 
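For anyone hitting the same assertion, a quick way to gather the details Brian asks about (platform, Java version, HDFS version, and the C library the pthread assertion comes from) is sketched below; exact command availability varies by distribution.

  java -version                 # JVM vendor and version
  hadoop version                # prints the Hadoop/HDFS release in use
  uname -a                      # kernel and platform
  getconf GNU_LIBC_VERSION      # glibc version (tpp.c is part of glibc's pthread code)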



Re: Unrecognized option: -jvm

2011-10-16 Thread Harsh J
As a quick workaround, you can try not starting as user 'root' but as
another user instead. You can revisit this issue later when you
require security.

On Sun, Oct 16, 2011 at 11:02 PM, Majid Azimi  wrote:
> I have tested both 0.20.204.0 and 0.20.203.0. But problem still not solved.
> I'm going to test another jvm. I'm using openjdk now.
>
> On Sun, Oct 16, 2011 at 2:53 PM, Uma Maheswara Rao G 72686 <
> mahesw...@huawei.com> wrote:
>
>> You are using Which version of Hadoop ?
>>
>> Please check the recent discussion, which will help you related to this
>> problem.
>> http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode
>>
>> Regards,
>> Uma
>>
>



-- 
Harsh J
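To make Harsh's workaround concrete, a minimal sketch follows, assuming a non-root account (here called 'hadoop') that already owns the configured data and log directories; the account name and paths are illustrative, not from the thread.

  # start the datanode as a regular user instead of root
  sudo -u hadoop $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
  # confirm the datanode registered with the namenode
  sudo -u hadoop $HADOOP_HOME/bin/hadoop dfsadmin -report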


Re: implementing comparable

2011-10-16 Thread Keith Thompson
Ahh, OK... I think I understand now. I added the default constructor (just
initializing all values to 0) and now it seems to be running. :-) Thanks
for your help.

On Sun, Oct 16, 2011 at 9:43 PM, Brock Noland  wrote:

> Hi,
>
> Inline..
>
> On Sun, Oct 16, 2011 at 9:40 PM, Keith Thompson  >wrote:
>
> > Thanks.  I went back and changed to WritableComparable instead of just
> > Comparable.  So, I added the readFields and write methods.   I also took
> > care of the typo in the constructor. :P
> >
> > Now I am getting this error:
> >
> > 11/10/16 21:34:08 INFO mapred.JobClient: Task Id :
> > attempt_201110162105_0002_m_01_1, Status : FAILED
> > java.lang.RuntimeException: java.lang.NoSuchMethodException:
> > edu.bing.vfi5.KeyList.<init>()
> > at
> >
> >
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
> > at
> >
> org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:84)
> > at
> >
> org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:70)
> > at
> org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:44)
> > at
> > org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:599)
> > at
> > org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:791)
> > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
> > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> > at org.apache.hadoop.mapred.Child.main(Child.java:170)
> > Caused by: java.lang.NoSuchMethodException:
> edu.bing.vfi5.KeyList.<init>()
> > at java.lang.Class.getConstructor0(Class.java:2706)
> > at java.lang.Class.getDeclaredConstructor(Class.java:1985)
> > at
> >
> >
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:109)
> >
> > Is it saying it can't find the constructor?
> >
> >
> Writables and by extension WritableComparables need a default Constructor.
> This makes logical sense. If hadoop is going to call the readFields()
> method, it needs  a previously constructed object.
>
> Brock
>



-- 
*Keith Thompson*
Graduate Research Associate, Xerox Corporation
SUNY Research Foundation
Dept. of Systems Science and Industrial Engineering
Binghamton University
work: 585-422-6587


Re: implementing comparable

2011-10-16 Thread Brock Noland
Hi,

Inline..

On Sun, Oct 16, 2011 at 9:40 PM, Keith Thompson wrote:

> Thanks.  I went back and changed to WritableComparable instead of just
> Comparable.  So, I added the readFields and write methods.   I also took
> care of the typo in the constructor. :P
>
> Now I am getting this error:
>
> 11/10/16 21:34:08 INFO mapred.JobClient: Task Id :
> attempt_201110162105_0002_m_01_1, Status : FAILED
> java.lang.RuntimeException: java.lang.NoSuchMethodException:
> edu.bing.vfi5.KeyList.<init>()
> at
>
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
> at
> org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:84)
> at
> org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:70)
> at org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:44)
> at
> org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:599)
> at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:791)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: java.lang.NoSuchMethodException: edu.bing.vfi5.KeyList.<init>()
> at java.lang.Class.getConstructor0(Class.java:2706)
> at java.lang.Class.getDeclaredConstructor(Class.java:1985)
> at
>
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:109)
>
> Is it saying it can't find the constructor?
>
>
Writables, and by extension WritableComparables, need a default constructor.
This makes logical sense: if Hadoop is going to call the readFields()
method, it needs a previously constructed object.

Brock
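A minimal sketch of a key class along the lines Brock describes, reusing the three-int layout from Keith's posting; the package and class names follow his code, while the serialization order and the hashCode/equals choices are illustrative additions, not something the thread specifies.

  package edu.bing.vfi5;

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import org.apache.hadoop.io.WritableComparable;

  public class KeyList implements WritableComparable<KeyList> {

      private int[] keys = new int[3];

      // Required no-arg constructor: Hadoop instantiates the key reflectively
      // and then calls readFields() to populate it.
      public KeyList() {
      }

      public KeyList(int i, int j, int k) {
          keys[0] = i;
          keys[1] = j;
          keys[2] = k;
      }

      @Override
      public void write(DataOutput out) throws IOException {
          for (int v : keys) {
              out.writeInt(v);
          }
      }

      @Override
      public void readFields(DataInput in) throws IOException {
          for (int x = 0; x < keys.length; x++) {
              keys[x] = in.readInt();
          }
      }

      @Override
      public int compareTo(KeyList other) {
          for (int x = 0; x < keys.length; x++) {
              if (keys[x] != other.keys[x]) {
                  return keys[x] < other.keys[x] ? -1 : 1;
              }
          }
          return 0;
      }

      // hashCode/equals keep partitioning and grouping consistent with compareTo().
      @Override
      public int hashCode() {
          return 31 * (31 * keys[0] + keys[1]) + keys[2];
      }

      @Override
      public boolean equals(Object o) {
          return (o instanceof KeyList) && compareTo((KeyList) o) == 0;
      }
  }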


Re: implementing comparable

2011-10-16 Thread Keith Thompson
Thanks.  I went back and changed to WritableComparable instead of just
Comparable.  So, I added the readFields and write methods.   I also took
care of the typo in the constructor. :P

Now I am getting this error:

11/10/16 21:34:08 INFO mapred.JobClient: Task Id :
attempt_201110162105_0002_m_01_1, Status : FAILED
java.lang.RuntimeException: java.lang.NoSuchMethodException:
edu.bing.vfi5.KeyList.<init>()
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
at
org.apache.hadoop.io.WritableComparator.newKey(WritableComparator.java:84)
at
org.apache.hadoop.io.WritableComparator.<init>(WritableComparator.java:70)
at org.apache.hadoop.io.WritableComparator.get(WritableComparator.java:44)
at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:599)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:791)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.NoSuchMethodException: edu.bing.vfi5.KeyList.<init>()
at java.lang.Class.getConstructor0(Class.java:2706)
at java.lang.Class.getDeclaredConstructor(Class.java:1985)
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:109)

Is it saying it can't find the constructor?

On Sat, Oct 15, 2011 at 5:26 PM, Keith Thompson wrote:

> Hello,
> I am trying to write my very first MapReduce code.  When I try to run the
> jar, I get this error:
>
> 11/10/15 17:17:30 INFO mapred.JobClient: Task Id :
> attempt_201110151636_0003_m_01_2, Status : FAILED
> java.lang.ClassCastException: class edu.bing.vfi5.KeyList
> at java.lang.Class.asSubclass(Class.java:3018)
> at
> org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:599)
>  at
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:791)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:350)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> I assume this means that it has something to do with my implementation of
> comparable.  KeyList is a class for a 3-tuple key.  The code is listed
> below.  Any hints would be greatly appreciated as I am trying to understand
> how comparable is supposed to work.  Also, do I need to implement Writable
> as well?  If so, should this be code for how the output is written to a file
> in HDFS?
>
> Thanks,
> Keith
>
> package edu.bing.vfi5;
>
> public class KeyList implements Comparable<KeyList> {
>
>     private int[] keys;
>
>     public KeyList(int i, int j, int k) {
>         keys = new int[3];
>         keys[0] = i;
>         keys[1] = j;
>         keys[2] = k;
>     }
>
>     @Override
>     public int compareTo(KeyList k) {
>         // TODO Auto-generated method stub
>         if (this.keys[0] == k.keys[0] && this.keys[1] == k.keys[1]
>                 && this.keys[2] == k.keys[2])
>             return 0;
>         else if ((this.keys[0] > k.keys[0])
>                 || (this.keys[0] == k.keys[0] && this.keys[1] > k.keys[1])
>                 || (this.keys[0] == k.keys[0] && this.keys[1] == k.keys[1]
>                         && this.keys[2] > k.keys[2]))
>             return 1;
>         else
>             return -1;
>     }
> }
>
>


-- 
*Keith Thompson*
Graduate Research Associate, Xerox Corporation
SUNY Research Foundation
Dept. of Systems Science and Industrial Engineering
Binghamton University
work: 585-422-6587


Re: cannot find DeprecatedLzoTextInputFormat

2011-10-16 Thread Joey Echeverria
Hi Jessica,

Sorry for the delay. I don't know of a pre-built version of the LZO
libraries that has the fix. I also couldn't quite tell which source
versions might have it. The easiest thing to do would be to pull the
source from github, make any changes, and build it locally:

https://github.com/kevinweil/hadoop-lzo

-Joey

On Mon, Oct 10, 2011 at 7:54 PM, Jessica Owensby
 wrote:
> I understood the comments in the JIRA ticket to say that hadoop-lzo
> 0.4.8.jar from gerrit had the fix for HIVE-2395.
> I wasn't able to find a good version of 0.4.8 already built (I found
> this, but there appears to be some issues with it:
> http://hadoop-gpl-packing.googlecode.com/svn-history/r18/trunk/src/main/resources/lib/hadoop-lzo-0.4.8.jar).
> And hadoop-lzo-0.4.13.jar (
> http://hadoop-gpl-packing.googlecode.com/svn-history/r39/trunk/hadoop/src/main/resources/lib/hadoop-lzo-0.4.13.jar)
> doesn't contain the fix.  Is there a version of the jar built with the
> HIVE-2395 fix?  I thought I would ask before I build it myself.
>
> Lastly, I didn't mention before that this issue appears in only one of our 2
> environments - both running cdh3u1.  I've done a number of comparisons
> between the environments and am still unable to find a dissimilarity that
> might be resulting in the 'No LZO codec found' error.  So, it
> would surprise me if we required the fix in one environment and did not in
> another -- but that may just show my lack of understanding about hadoop. :-)
>
> Jessica
>
> On Wed, Oct 5, 2011 at 4:27 PM, Jessica Owensby
> wrote:
>
>> Great.  Thanks!  Will give that a try.
>> Jessica
>>
>>
>> On Wed, Oct 5, 2011 at 4:22 PM, Joey Echeverria  wrote:
>>
>>> It sounds like you're hitting this:
>>>
>>> https://issues.apache.org/jira/browse/HIVE-2395
>>>
>>> You might need to patch your version of DeprecatedLzoLineRecordReader
>>> to ignore the .lzo.index files.
>>>
>>> -Joey
>>>
>>> On Wed, Oct 5, 2011 at 4:13 PM, Jessica Owensby
>>>  wrote:
>>> > Alex,
>>> > The task trackers have been restarted many times across the cluster
>>> since
>>> > this issue was first seen.
>>> >
>>> > Hmmm, I hadn't tried to explicitly add the lzo jar to my classpath in
>>> the
>>> > hive shell, but I just tried it and got the same errors.
>>> >
>>> > Do you see
>>> >
>>> > /usr/lib/hadoop-0.20/lib/hadoop-lzo-20110217.jar in the child classpath
>>> when
>>> >
>>> > the task is executed (use 'ps aux' on the node)?
>>> >
>>> >
>>> > While the job wasn't running, I did this and I got back the tasktracker
>>> > process:  ps aux | grep java | grep lzo.
>>> > Do I have to run this while the task is running on that node?
>>> >
>>> > Joey,
>>> > Yes, the lzo files are indexed.  They are indexed using the following
>>> > command:
>>> >
>>> > hadoop jar /usr/lib/hadoop/lib/hadoop-lzo-20110217.jar
>>> > com.hadoop.compression.lzo.LzoIndexer /user/hive/warehouse/foo/bar.lzo
>>> >
>>> > Jessica
>>> >
>>> > On Wed, Oct 5, 2011 at 3:52 PM, Joey Echeverria 
>>> wrote:
>>> >> Are your LZO files indexed?
>>> >>
>>> >> -Joey
>>> >>
>>> >> On Wed, Oct 5, 2011 at 3:35 PM, Jessica Owensby
>>> >>  wrote:
>>> >>> Hi Joey,
>>> >>> Thanks. I forgot to say that; yes, the lzocodec class is listed in
>>> >>> core-site.xml under the io.compression.codecs property:
>>> >>>
>>> >>> <property>
>>> >>>   <name>io.compression.codecs</name>
>>> >>>   <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
>>> >>> </property>
>>> >>>
>>> >>> I also added the mapred.child.env property to mapred site:
>>> >>>
>>> >>>  <property>
>>> >>>    <name>mapred.child.env</name>
>>> >>>    <value>JAVA_LIBRARY_PATH=/usr/lib/hadoop-0.20/lib</value>
>>> >>>  </property>
>>> >>>
>>> >>> per these instructions:
>>> >>>
>>> >
>>> http://www.cloudera.com/blog/2009/11/hadoop-at-twitter-part-1-splittable-lzo-compression/
>>> >>>
>>> >>> After making each of these changes I have restarted the cluster --
>>> >>> just to be sure that the new changes were being picked up.
>>> >>>
>>> >>> Jessica
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Joseph Echeverria
>>> >> Cloudera, Inc.
>>> >> 443.305.9434
>>> >>
>>> >
>>> >
>>> > Adding back the email history:
>>> >
>>> > Hello Everyone,
>>> > I've been having an issue in a hadoop environment (running cdh3u1)
>>> > where any table declared in hive
>>> > with the "STORED AS INPUTFORMAT
>>> > "com.hadoop.mapred.DeprecatedLzoTextInputFormat"" directive has the
>>> > following errors when running any query against it.
>>> >
>>> > For instance, running "select count(*) from foo;" gives the following
>>> error:
>>> >
>>> > java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>>> >      at
>>> >
>>> org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileRecordReader.initNextRecordReader(Hadoop20SShims.java:306)
>>> >      at
>>> >
>>> org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileRecordReader.next(Hadoop20SShims.ja
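For anyone following Joey's suggestion, a local build of the github fork usually looks roughly like the sketch below; the ant targets and output locations are recalled from that fork's README, so treat them as assumptions and check the README in your checkout.

  # prerequisites: a JDK, ant, and the LZO development headers (liblzo2-dev / lzo-devel)
  git clone https://github.com/kevinweil/hadoop-lzo.git
  cd hadoop-lzo
  ant compile-native tar
  # copy the built jar (under build/) and the native libraries (under build/native/)
  # onto each node, e.g. into /usr/lib/hadoop-0.20/lib/ and its native/ subdirectory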

Re: Unrecognized option: -jvm

2011-10-16 Thread Shrinivas Joshi
Check the bin/hadoop script and search for the -jvm option in there that is getting
passed to the datanode launch command. Removing it should get around this issue.
I am not aware of the significance of this flag, though.
On Oct 16, 2011 12:32 PM, "Majid Azimi"  wrote:

> I have tested both 0.20.204.0 and 0.20.203.0. But problem still not solved.
> I'm going to test another jvm. I'm using openjdk now.
>
> On Sun, Oct 16, 2011 at 2:53 PM, Uma Maheswara Rao G 72686 <
> mahesw...@huawei.com> wrote:
>
> > You are using Which version of Hadoop ?
> >
> > Please check the recent discussion, which will help you related to this
> > problem.
> > http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode
> >
> > Regards,
> > Uma
> >
>
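To find the flag Shrinivas mentions, something like the following works; the commented excerpt is a paraphrase of what the 0.20.20x script roughly does, not a verbatim copy, and it explains why a root start without the jsvc-based secure datanode trips over -jvm.

  grep -n 'jvm' $HADOOP_HOME/bin/hadoop
  # the datanode case adds, approximately:
  #   if [[ $EUID -eq 0 ]]; then
  #     HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
  #   fi
  # '-jvm server' is meant for jsvc; a plain 'java' launch as root therefore
  # fails with "Unrecognized option: -jvm"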


Re: Unrecognized option: -jvm

2011-10-16 Thread Majid Azimi
I have tested both 0.20.204.0 and 0.20.203.0, but the problem is still not solved.
I'm going to test another JVM; I'm using OpenJDK now.

On Sun, Oct 16, 2011 at 2:53 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:

> You are using Which version of Hadoop ?
>
> Please check the recent discussion, which will help you related to this
> problem.
> http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode
>
> Regards,
> Uma
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Yes, the two nodes work as tasktrackers.

On 16 October 2011 22:20, Uma Maheswara Rao G 72686 wrote:

> I mean, two nodes here is tasktrackers.
>
> - Original Message -
> From: Humayun gmail 
> Date: Sunday, October 16, 2011 7:38 pm
> Subject: Re: Too much fetch failure
> To: common-user@hadoop.apache.org
>
> > yes we can ping every node (both master and slave).
> >
> > On 16 October 2011 19:52, Uma Maheswara Rao G 72686
> > wrote:
> > > Are you able to ping the other node with the configured hostnames?
> > >
> > > Make sure that you should be able to ping to the other machine
> > with the
> > > configured hostname in ect/hosts files.
> > >
> > > Regards,
> > > Uma
> > > - Original Message -
> > > From: praveenesh kumar 
> > > Date: Sunday, October 16, 2011 6:46 pm
> > > Subject: Re: Too much fetch failure
> > > To: common-user@hadoop.apache.org
> > >
> > > > try commenting 127.0.0.1 localhost line in your /etc/hosts and
> > then> > restartthe cluster and then try again.
> > > >
> > > > Thanks,
> > > > Praveenesh
> > > >
> > > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > > > wrote:
> > > > > we are using hadoop on virtual box. when it is a single node
> > then> > it works
> > > > > fine for big dataset larger than the default block size. but in
> > > > case of
> > > > > multinode cluster (2 nodes) we are facing some problems.
> > > > > Like when the input dataset is smaller than the default block
> > > > size(64 MB)
> > > > > then it works fine. but when the input dataset is larger than
> > the> > default> block size then it shows ‘too much fetch failure’ in
> > > > reduce state.
> > > > > here is the output link
> > > > > http://paste.ubuntu.com/707517/
> > > > >
> > > > > From the above comments , there are many users who faced this
> > > > problem.> different users suggested to modify the /etc/hosts file
> > > > in different manner
> > > > > to fix the problem. but there is no ultimate solution.we need
> > the> > actual> solution thats why we are writing here.
> > > > >
> > > > > this is our /etc/hosts file
> > > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > > 127.0.0.1 localhost.localdomain localhost
> > > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > > 127.0.1.1 humayun
> > > > >
> > > > > # The following lines are desirable for IPv6 capable hosts
> > > > > ::1 localhost ip6-localhost ip6-loopback
> > > > > fe00::0 ip6-localnet
> > > > > ff00::0 ip6-mcastprefix
> > > > > ff02::1 ip6-allnodes
> > > > > ff02::2 ip6-allrouters
> > > > > ff02::3 ip6-allhosts
> > > > >
> > > > > 192.168.60.1 master
> > > > > 192.168.60.2 slave
> > > > >
> > > >
> > >
> >
>


Re: Jira Assignment

2011-10-16 Thread Mahadev Konar
Arun,
 This was fixed a week ago or so. Here's the infra ticket.

https://issues.apache.org/jira/browse/INFRA-3960

You should be able to add new contributors now.

thanks
mahadev

On Sun, Oct 16, 2011 at 9:36 AM, Arun C Murthy  wrote:
> I've tried, and failed, many times recently to add 'contributors' to the 
> Hadoop projects - something to do with the new UI they rolled out.
>
> Let me try and follow-up with the ASF folks, thanks for being patient!
>
> Arun
>
> On Oct 16, 2011, at 9:32 AM, Jon Allen wrote:
>
>> I've been doing some work on a Jira and want to assign it to myself but
>> there doesn't seem to be an option to do this.  I believe I need to be
>> assigned the contributor role before I can have issues assigned to me.  Is
>> this correct and if so how do I get this role?
>>
>> Thanks,
>> Jon
>
>


Re: Jira Assignment

2011-10-16 Thread Arun C Murthy
I've tried, and failed, many times recently to add 'contributors' to the Hadoop 
projects - something to do with the new UI they rolled out.

Let me try and follow-up with the ASF folks, thanks for being patient!

Arun

On Oct 16, 2011, at 9:32 AM, Jon Allen wrote:

> I've been doing some work on a Jira and want to assign it to myself but
> there doesn't seem to be an option to do this.  I believe I need to be
> assigned the contributor role before I can have issues assigned to me.  Is
> this correct and if so how do I get this role?
> 
> Thanks,
> Jon



Jira Assignment

2011-10-16 Thread Jon Allen
I've been doing some work on a Jira and want to assign it to myself but
there doesn't seem to be an option to do this.  I believe I need to be
assigned the contributor role before I can have issues assigned to me.  Is
this correct and if so how do I get this role?

Thanks,
Jon


Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
I mean, the two nodes here are the tasktrackers.

- Original Message -
From: Humayun gmail 
Date: Sunday, October 16, 2011 7:38 pm
Subject: Re: Too much fetch failure
To: common-user@hadoop.apache.org

> yes we can ping every node (both master and slave).
> 
> On 16 October 2011 19:52, Uma Maheswara Rao G 72686 
> wrote:
> > Are you able to ping the other node with the configured hostnames?
> >
> > Make sure that you should be able to ping to the other machine 
> with the
> > configured hostname in ect/hosts files.
> >
> > Regards,
> > Uma
> > - Original Message -
> > From: praveenesh kumar 
> > Date: Sunday, October 16, 2011 6:46 pm
> > Subject: Re: Too much fetch failure
> > To: common-user@hadoop.apache.org
> >
> > > try commenting 127.0.0.1 localhost line in your /etc/hosts and 
> then> > restartthe cluster and then try again.
> > >
> > > Thanks,
> > > Praveenesh
> > >
> > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > > wrote:
> > > > we are using hadoop on virtual box. when it is a single node 
> then> > it works
> > > > fine for big dataset larger than the default block size. but in
> > > case of
> > > > multinode cluster (2 nodes) we are facing some problems.
> > > > Like when the input dataset is smaller than the default block
> > > size(64 MB)
> > > > then it works fine. but when the input dataset is larger than 
> the> > default> block size then it shows ‘too much fetch failure’ in
> > > reduce state.
> > > > here is the output link
> > > > http://paste.ubuntu.com/707517/
> > > >
> > > > From the above comments , there are many users who faced this
> > > problem.> different users suggested to modify the /etc/hosts file
> > > in different manner
> > > > to fix the problem. but there is no ultimate solution.we need 
> the> > actual> solution thats why we are writing here.
> > > >
> > > > this is our /etc/hosts file
> > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > 127.0.0.1 localhost.localdomain localhost
> > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > 127.0.1.1 humayun
> > > >
> > > > # The following lines are desirable for IPv6 capable hosts
> > > > ::1 localhost ip6-localhost ip6-loopback
> > > > fe00::0 ip6-localnet
> > > > ff00::0 ip6-mcastprefix
> > > > ff02::1 ip6-allnodes
> > > > ff02::2 ip6-allrouters
> > > > ff02::3 ip6-allhosts
> > > >
> > > > 192.168.60.1 master
> > > > 192.168.60.2 slave
> > > >
> > >
> >
>


Re: capacity scheduler

2011-10-16 Thread Arun C Murthy
You are welcome. *smile*

One of the greatest advantages of open-source software is that you can look at the code
while scratching your head in the corner - this way you gain a better
understanding of the system and we, the project, will hopefully gain another
valuable contributor... hint, hint. ;-)

Good luck.

Arun

On Oct 16, 2011, at 1:27 AM, patrick sang wrote:

> Hi Arun,
> 
> Your answer sheds extra bright light while I am scratching head in the corner.
> 1 million thanks for answer and document. I will post back the result.
> 
> Thanks again,
> P
> 
> On Sat, Oct 15, 2011 at 10:32 PM, Arun C Murthy  wrote:
>> 
>> Hi Patrick,
>> 
>> It's hard to diagnose CDH since I don't know what patch-sets they have for 
>> the CapacityScheduler - afaik they only support FairScheduler, but that 
>> might have changed.
>> 
>> On Oct 15, 2011, at 4:45 PM, patrick sang wrote:
>> 
>>> 4. from webUI, scheduling  information of orange queue.
>>> 
>>> It said "Used capacity: 12 (100.0% of Capacity)"
>>> while next line said "Maximum capacity: 16 slots"
>>> So what's going on with other 4 slots ? why they are not get used.
>>> 
>>> Is capacity-scheduler supposed to start using extra slots until it hit the
>>> Max capacity ?
>>> (from the variable of
>>> mapred.capacity-scheduler.queue.<queue-name>.maximum-capacity)
>>> (there are no other jobs at all in the cluster)
>>> 
>>> I am really thankful for reading up to this point.
>>> Truly hope someone can shed some light on this.
>>> 
>> 
>> However, if you were using Apache Hadoop 0.20.203 or 0.20.204 (or upcoming 
>> 0.20.205 with security + append) you would still see this behaviour because 
>> you are hitting 'user limits' where the CS will not allow a single user to 
>> take more than the queue 'configured' capacity (12 slots here). You will 
>> need more than one user in the 'orange' queue  to go over the queue's 
>> capacity. This is to prevent a single user from hogging the system's 
>> resources.
>> 
>> If you really want one user to acquire more resources in 'orange' queue, you 
>> need to tweak mapred.capacity-scheduler.queue.orange.user-limit-factor.
>> 
>> More details here:
>> http://hadoop.apache.org/common/docs/stable/capacity_scheduler.html
>> 
>> Arun
>> 
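A minimal sketch of the tweak Arun points to, assuming the queue is named 'orange' as in this thread; the value of 2 is only an example, letting a single user's jobs grow to roughly twice the queue's configured capacity while still being capped by maximum-capacity.

  <!-- capacity-scheduler.xml (illustrative value) -->
  <property>
    <name>mapred.capacity-scheduler.queue.orange.user-limit-factor</name>
    <value>2</value>
  </property>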



Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
No. In my config files I mention it as 'master'.

On 16 October 2011 20:20, praveenesh kumar  wrote:

> why are you formatting the namenode again ?
> 1. Just stop the cluster..
> 2. Just comment out the 127.0.0.1 localhost line
> 3. Restart the cluster.
>
> How have you defined your hadoop config files..?
> Have  you mentioned localhost there ?
>
> Thanks,
> Praveenesh
>
> On Sun, Oct 16, 2011 at 7:42 PM, Humayun gmail  >wrote:
>
> > commenting the line 127.0.0.1 in /etc/hosts is not working. if i format
> the
> > namenode then automatically this line is added.
> > any other solution?
> >
> > On 16 October 2011 19:13, praveenesh kumar  wrote:
> >
> > > try commenting 127.0.0.1 localhost line in your /etc/hosts and then
> > restart
> > > the cluster and then try again.
> > >
> > > Thanks,
> > > Praveenesh
> > >
> > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail  > > >wrote:
> > >
> > > > we are using hadoop on virtual box. when it is a single node then it
> > > works
> > > > fine for big dataset larger than the default block size. but in case
> of
> > > > multinode cluster (2 nodes) we are facing some problems.
> > > > Like when the input dataset is smaller than the default block size(64
> > MB)
> > > > then it works fine. but when the input dataset is larger than the
> > default
> > > > block size then it shows ‘too much fetch failure’ in reduce state.
> > > > here is the output link
> > > > http://paste.ubuntu.com/707517/
> > > >
> > > > From the above comments , there are many users who faced this
> problem.
> > > > different users suggested to modify the /etc/hosts file in different
> > > manner
> > > > to fix the problem. but there is no ultimate solution.we need the
> > actual
> > > > solution thats why we are writing here.
> > > >
> > > > this is our /etc/hosts file
> > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > 127.0.0.1 localhost.localdomain localhost
> > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > 127.0.1.1 humayun
> > > >
> > > > # The following lines are desirable for IPv6 capable hosts
> > > > ::1 localhost ip6-localhost ip6-loopback
> > > > fe00::0 ip6-localnet
> > > > ff00::0 ip6-mcastprefix
> > > > ff02::1 ip6-allnodes
> > > > ff02::2 ip6-allrouters
> > > > ff02::3 ip6-allhosts
> > > >
> > > > 192.168.60.1 master
> > > > 192.168.60.2 slave
> > > >
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread praveenesh kumar
Why are you formatting the namenode again?
1. Just stop the cluster.
2. Comment out the 127.0.0.1 localhost line.
3. Restart the cluster.

How have you defined your Hadoop config files?
Have you mentioned localhost there?

Thanks,
Praveenesh

On Sun, Oct 16, 2011 at 7:42 PM, Humayun gmail wrote:

> commenting the line 127.0.0.1 in /etc/hosts is not working. if i format the
> namenode then automatically this line is added.
> any other solution?
>
> On 16 October 2011 19:13, praveenesh kumar  wrote:
>
> > try commenting 127.0.0.1 localhost line in your /etc/hosts and then
> restart
> > the cluster and then try again.
> >
> > Thanks,
> > Praveenesh
> >
> > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail  > >wrote:
> >
> > > we are using hadoop on virtual box. when it is a single node then it
> > works
> > > fine for big dataset larger than the default block size. but in case of
> > > multinode cluster (2 nodes) we are facing some problems.
> > > Like when the input dataset is smaller than the default block size(64
> MB)
> > > then it works fine. but when the input dataset is larger than the
> default
> > > block size then it shows ‘too much fetch failure’ in reduce state.
> > > here is the output link
> > > http://paste.ubuntu.com/707517/
> > >
> > > From the above comments , there are many users who faced this problem.
> > > different users suggested to modify the /etc/hosts file in different
> > manner
> > > to fix the problem. but there is no ultimate solution.we need the
> actual
> > > solution thats why we are writing here.
> > >
> > > this is our /etc/hosts file
> > > 192.168.60.147 humayun # Added by NetworkManager
> > > 127.0.0.1 localhost.localdomain localhost
> > > ::1 humayun localhost6.localdomain6 localhost6
> > > 127.0.1.1 humayun
> > >
> > > # The following lines are desirable for IPv6 capable hosts
> > > ::1 localhost ip6-localhost ip6-loopback
> > > fe00::0 ip6-localnet
> > > ff00::0 ip6-mcastprefix
> > > ff02::1 ip6-allnodes
> > > ff02::2 ip6-allrouters
> > > ff02::3 ip6-allhosts
> > >
> > > 192.168.60.1 master
> > > 192.168.60.2 slave
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Commenting out the 127.0.0.1 line in /etc/hosts is not working. If I format the
namenode, then this line is automatically added back.
Any other solution?

On 16 October 2011 19:13, praveenesh kumar  wrote:

> try commenting 127.0.0.1 localhost line in your /etc/hosts and then restart
> the cluster and then try again.
>
> Thanks,
> Praveenesh
>
> On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail  >wrote:
>
> > we are using hadoop on virtual box. when it is a single node then it
> works
> > fine for big dataset larger than the default block size. but in case of
> > multinode cluster (2 nodes) we are facing some problems.
> > Like when the input dataset is smaller than the default block size(64 MB)
> > then it works fine. but when the input dataset is larger than the default
> > block size then it shows ‘too much fetch failure’ in reduce state.
> > here is the output link
> > http://paste.ubuntu.com/707517/
> >
> > From the above comments , there are many users who faced this problem.
> > different users suggested to modify the /etc/hosts file in different
> manner
> > to fix the problem. but there is no ultimate solution.we need the actual
> > solution thats why we are writing here.
> >
> > this is our /etc/hosts file
> > 192.168.60.147 humayun # Added by NetworkManager
> > 127.0.0.1 localhost.localdomain localhost
> > ::1 humayun localhost6.localdomain6 localhost6
> > 127.0.1.1 humayun
> >
> > # The following lines are desirable for IPv6 capable hosts
> > ::1 localhost ip6-localhost ip6-loopback
> > fe00::0 ip6-localnet
> > ff00::0 ip6-mcastprefix
> > ff02::1 ip6-allnodes
> > ff02::2 ip6-allrouters
> > ff02::3 ip6-allhosts
> >
> > 192.168.60.1 master
> > 192.168.60.2 slave
> >
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Yes, we can ping every node (both master and slave).

On 16 October 2011 19:52, Uma Maheswara Rao G 72686 wrote:

> Are you able to ping the other node with the configured hostnames?
>
> Make sure that you should be able to ping to the other machine with the
> configured hostname in ect/hosts files.
>
> Regards,
> Uma
> - Original Message -
> From: praveenesh kumar 
> Date: Sunday, October 16, 2011 6:46 pm
> Subject: Re: Too much fetch failure
> To: common-user@hadoop.apache.org
>
> > try commenting 127.0.0.1 localhost line in your /etc/hosts and then
> > restartthe cluster and then try again.
> >
> > Thanks,
> > Praveenesh
> >
> > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > wrote:
> > > we are using hadoop on virtual box. when it is a single node then
> > it works
> > > fine for big dataset larger than the default block size. but in
> > case of
> > > multinode cluster (2 nodes) we are facing some problems.
> > > Like when the input dataset is smaller than the default block
> > size(64 MB)
> > > then it works fine. but when the input dataset is larger than the
> > default> block size then it shows ‘too much fetch failure’ in
> > reduce state.
> > > here is the output link
> > > http://paste.ubuntu.com/707517/
> > >
> > > From the above comments , there are many users who faced this
> > problem.> different users suggested to modify the /etc/hosts file
> > in different manner
> > > to fix the problem. but there is no ultimate solution.we need the
> > actual> solution thats why we are writing here.
> > >
> > > this is our /etc/hosts file
> > > 192.168.60.147 humayun # Added by NetworkManager
> > > 127.0.0.1 localhost.localdomain localhost
> > > ::1 humayun localhost6.localdomain6 localhost6
> > > 127.0.1.1 humayun
> > >
> > > # The following lines are desirable for IPv6 capable hosts
> > > ::1 localhost ip6-localhost ip6-loopback
> > > fe00::0 ip6-localnet
> > > ff00::0 ip6-mcastprefix
> > > ff02::1 ip6-allnodes
> > > ff02::2 ip6-allrouters
> > > ff02::3 ip6-allhosts
> > >
> > > 192.168.60.1 master
> > > 192.168.60.2 slave
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
Are you able to ping the other node with the configured hostnames?

Make sure that you are able to ping the other machine using the hostname
configured in the /etc/hosts files.

Regards,
Uma
- Original Message -
From: praveenesh kumar 
Date: Sunday, October 16, 2011 6:46 pm
Subject: Re: Too much fetch failure
To: common-user@hadoop.apache.org

> try commenting 127.0.0.1 localhost line in your /etc/hosts and then 
> restartthe cluster and then try again.
> 
> Thanks,
> Praveenesh
> 
> On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail 
> wrote:
> > we are using hadoop on virtual box. when it is a single node then 
> it works
> > fine for big dataset larger than the default block size. but in 
> case of
> > multinode cluster (2 nodes) we are facing some problems.
> > Like when the input dataset is smaller than the default block 
> size(64 MB)
> > then it works fine. but when the input dataset is larger than the 
> default> block size then it shows ‘too much fetch failure’ in 
> reduce state.
> > here is the output link
> > http://paste.ubuntu.com/707517/
> >
> > From the above comments , there are many users who faced this 
> problem.> different users suggested to modify the /etc/hosts file 
> in different manner
> > to fix the problem. but there is no ultimate solution.we need the 
> actual> solution thats why we are writing here.
> >
> > this is our /etc/hosts file
> > 192.168.60.147 humayun # Added by NetworkManager
> > 127.0.0.1 localhost.localdomain localhost
> > ::1 humayun localhost6.localdomain6 localhost6
> > 127.0.1.1 humayun
> >
> > # The following lines are desirable for IPv6 capable hosts
> > ::1 localhost ip6-localhost ip6-loopback
> > fe00::0 ip6-localnet
> > ff00::0 ip6-mcastprefix
> > ff02::1 ip6-allnodes
> > ff02::2 ip6-allrouters
> > ff02::3 ip6-allhosts
> >
> > 192.168.60.1 master
> > 192.168.60.2 slave
> >
>


Re: Too much fetch failure

2011-10-16 Thread praveenesh kumar
Try commenting out the 127.0.0.1 localhost line in your /etc/hosts, then restart
the cluster and try again.

Thanks,
Praveenesh

On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:

> we are using hadoop on virtual box. when it is a single node then it works
> fine for big dataset larger than the default block size. but in case of
> multinode cluster (2 nodes) we are facing some problems.
> Like when the input dataset is smaller than the default block size(64 MB)
> then it works fine. but when the input dataset is larger than the default
> block size then it shows ‘too much fetch failure’ in reduce state.
> here is the output link
> http://paste.ubuntu.com/707517/
>
> From the above comments , there are many users who faced this problem.
> different users suggested to modify the /etc/hosts file in different manner
> to fix the problem. but there is no ultimate solution.we need the actual
> solution thats why we are writing here.
>
> this is our /etc/hosts file
> 192.168.60.147 humayun # Added by NetworkManager
> 127.0.0.1 localhost.localdomain localhost
> ::1 humayun localhost6.localdomain6 localhost6
> 127.0.1.1 humayun
>
> # The following lines are desirable for IPv6 capable hosts
> ::1 localhost ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> ff02::3 ip6-allhosts
>
> 192.168.60.1 master
> 192.168.60.2 slave
>


Hadoop 0.20.205

2011-10-16 Thread praveenesh kumar
Hi all,

Any idea when Hadoop 0.20.205 is officially going to be released?
Is the Hadoop 0.20.205 rc2 stable enough to put into production?
I am using hadoop-0.20-append now with HBase 0.90.3 and want to switch to 205,
but I am looking for some valuable suggestions/recommendations.

Thanks,
Praveenesh


Re: Unrecognized option: -jvm

2011-10-16 Thread Uma Maheswara Rao G 72686
Which version of Hadoop are you using?

Please check the recent discussion, which should help you with this problem:
http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode

Regards,
Uma

- Original Message -
From: Majid Azimi 
Date: Sunday, October 16, 2011 2:22 am
Subject: Unrecognized option: -jvm
To: common-user@hadoop.apache.org

> Hi guys,
> 
> I'm really new to Hadoop. I have configured a single-node Hadoop cluster, but it
> seems that my datanode is not working. The jobtracker log file shows this
> message (a lot of them every 10 seconds):
> 
> 2011-10-16 00:01:15,558 WARN org.apache.hadoop.mapred.JobTracker:
> Retrying...
> 2011-10-16 00:01:15,589 WARN org.apache.hadoop.hdfs.DFSClient: 
> DataStreamerException: org.apache.hadoop.ipc.RemoteException: 
> java.io.IOException: File
> /tmp/hadoop-root/mapred/system/jobtracker.info could only be 
> replicated to 0
> nodes, instead of 1
>at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
>at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
>at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:616)
>at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>at java.security.AccessController.doPrivileged(Native Method)
>at javax.security.auth.Subject.doAs(Subject.java:416)
>at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
> 
>at org.apache.hadoop.ipc.Client.call(Client.java:1030)
>at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
>at $Proxy5.addBlock(Unknown Source)
>at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:616)
>at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>at $Proxy5.addBlock(Unknown Source)
>at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104)
>at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975)
>at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255)
>at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)
> 
> 2011-10-16 00:01:15,589 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block null bad datanode[0] nodes == null
> 2011-10-16 00:01:15,589 WARN org.apache.hadoop.hdfs.DFSClient: 
> Could not get
> block locations. Source file "/tmp/hadoop-
> root/mapred/system/jobtracker.info"- Aborting...
> 2011-10-16 00:01:15,590 WARN org.apache.hadoop.mapred.JobTracker: 
> Writing to
> file hdfs://localhost/tmp/hadoop-root/mapred/system/jobtracker.info 
> failed!2011-10-16 00:01:15,593 WARN 
> org.apache.hadoop.mapred.JobTracker: FileSystem
> is not ready yet!
> 2011-10-16 00:01:15,603 WARN org.apache.hadoop.mapred.JobTracker: 
> Failed to
> initialize recovery manager.
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /tmp/hadoop-root/mapred/system/jobtracker.info could only be 
> replicated to 0
> nodes, instead of 1
>at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
>at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596)
>at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at java.lang.reflect.Method.invoke(Method.java:616)
>at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>at java.security.AccessController.doPrivileged(Native Method)
>at javax.security.auth.Subject.doAs(Subject.java:416)
>at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
> 
>at org.apache.hadoop.ipc.Client.call(Client.java:1030)
>at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224)
>at $Proxy5.addBlock(Unknown Source)
>at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>at jav

Too much fetch failure

2011-10-16 Thread Humayun gmail
We are using Hadoop on VirtualBox. When it is a single node, it works
fine for big datasets larger than the default block size, but in the case of a
multinode cluster (2 nodes) we are facing some problems.
When the input dataset is smaller than the default block size (64 MB),
it works fine, but when the input dataset is larger than the default
block size, it shows ‘too much fetch failure’ in the reduce phase.
Here is the output link:
http://paste.ubuntu.com/707517/

From the above comments, there are many users who have faced this problem.
Different users suggested modifying the /etc/hosts file in different ways
to fix the problem, but there is no ultimate solution. We need the actual
solution; that's why we are writing here.

this is our /etc/hosts file
192.168.60.147 humayun # Added by NetworkManager
127.0.0.1 localhost.localdomain localhost
::1 humayun localhost6.localdomain6 localhost6
127.0.1.1 humayun

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

192.168.60.1 master
192.168.60.2 slave
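For what it's worth, the arrangement that most replies in this thread converge on is sketched below, using the poster's own IPs; whether it resolves the failures on this particular cluster is not confirmed here, so treat it as one configuration to try rather than the definitive answer.

  # on every node: the machine's own hostname must resolve to its LAN IP,
  # and the installer-added "127.0.1.1 <hostname>" line is removed
  127.0.0.1      localhost.localdomain localhost
  192.168.60.1   master
  192.168.60.2   slave
  # the IPv6 lines can stay as they are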


Re: capacity scheduler

2011-10-16 Thread patrick sang
Hi Arun,

Your answer sheds extra bright light while I am scratching my head in the corner.
A million thanks for the answer and the document. I will post back the result.

Thanks again,
P

On Sat, Oct 15, 2011 at 10:32 PM, Arun C Murthy  wrote:
>
> Hi Patrick,
>
> It's hard to diagnose CDH since I don't know what patch-sets they have for 
> the CapacityScheduler - afaik they only support FairScheduler, but that might 
> have changed.
>
> On Oct 15, 2011, at 4:45 PM, patrick sang wrote:
>
> > 4. from webUI, scheduling  information of orange queue.
> >
> > It said "Used capacity: 12 (100.0% of Capacity)"
> > while next line said "Maximum capacity: 16 slots"
> > So what's going on with other 4 slots ? why they are not get used.
> >
> > Is capacity-scheduler supposed to start using extra slots until it hit the
> > Max capacity ?
> > (from the variable of
> > mapred.capacity-scheduler.queue.<queue-name>.maximum-capacity)
> > (there are no other jobs at all in the cluster)
> >
> > I am really thankful for reading up to this point.
> > Truly hope someone can shed some light on this.
> >
>
> However, if you were using Apache Hadoop 0.20.203 or 0.20.204 (or upcoming 
> 0.20.205 with security + append) you would still see this behaviour because 
> you are hitting 'user limits' where the CS will not allow a single user to 
> take more than the queue 'configured' capacity (12 slots here). You will need 
> more than one user in the 'orange' queue  to go over the queue's capacity. 
> This is to prevent a single user from hogging the system's resources.
>
> If you really want one user to acquire more resources in 'orange' queue, you 
> need to tweak mapred.capacity-scheduler.queue.orange.user-limit-factor.
>
> More details here:
> http://hadoop.apache.org/common/docs/stable/capacity_scheduler.html
>
> Arun
>