I believe the job gets submitted to the "default" queue if you don't specify one, and you don't have a queue named "default" in your mapred.queue.names list. So add -Dmapred.job.queue.name=myqueue1 (or another queue you have defined) to the wordcount command, like:

bin/hadoop jar hadoop*examples*.jar wordcount -Dmapred.job.queue.name=myqueue1 /user/hduser/wcinput /user/hduser/wcoutput5
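Alternatively, you could define a queue named "default" (and give it its own capacity) so that jobs submitted without an explicit queue name still work. As a rough, untested sketch of the settings being discussed here, assuming the usual conf/mapred-site.xml and conf/capacity-scheduler.xml files:

In conf/mapred-site.xml:

  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
  </property>
  <property>
    <name>mapred.queue.names</name>
    <!-- add "default" here too if you want unqualified submissions to work,
         and give it its own capacity below -->
    <value>myqueue1,myqueue2</value>
  </property>

In conf/capacity-scheduler.xml (the queue capacities should add up to 100):

  <property>
    <name>mapred.capacity-scheduler.queue.myqueue1.capacity</name>
    <value>30</value>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.myqueue2.capacity</name>
    <value>70</value>
  </property>

You'll most likely need to restart the JobTracker after changing the queue list for it to take effect.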

Tom


On 9/14/11 5:57 AM, "arun k" <arunk...@gmail.com> wrote:

> Hi !
> 
> I have set up a single-node cluster using
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
> and could run the wordcount example application.
> I was trying to run this application using the capacity scheduler.
> As per http://hadoop.apache.org/common/docs/current/capacity_scheduler.html
> I have done:
> 1. Copied hadoop-capacity-scheduler-*.jar from the contrib/capacity-scheduler directory to HADOOP_HOME/lib
> 2. Set mapred.jobtracker.taskScheduler
> 3. Set mapred.queue.names to myqueue1,myqueue2
> 4. Set mapred.capacity-scheduler.queue.<queue-name>.capacity to 30 and 70 for the two queues
> 
> When I run it I get this error:
> hduser@arun-Presario-C500-RU914PA-ACJ:/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/wcinput /user/hduser/wcoutput5
> 11/09/14 16:00:56 INFO input.FileInputFormat: Total input paths to process : 4
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: Queue "default" does not exist
>     at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:2998)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 
>     at org.apache.hadoop.ipc.Client.call(Client.java:740)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at org.apache.hadoop.mapred.$Proxy0.submitJob(Unknown Source)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>     at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>     at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> 
> I didn't submit the job to a particular queue as such. Do I need to? If so, how do I do it?
> Any help would be appreciated.
> 
> Thanks,
> Arun
> 
> 
