Hi Sitakant,

Have you added the configuration below in your setup?
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

You can refer to this link:
https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html

Thanks,
Bilwa

On Thu, Sep 10, 2020, 10:08 PM Sitakant Mishra <sitakanta.mis...@gmail.com> wrote:

> Hi,
>
> I have been struggling with the issue above for some time now. I have
> added all the mailing lists because of my desperation. Any help or
> suggestions would be greatly appreciated.
>
> Thanks and Regards,
> Sitakanta Mishra
>
> On Wed, Sep 9, 2020 at 4:24 PM Sitakant Mishra <sitakanta.mis...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I have set up a new Hadoop cluster with Hadoop version 3.3.0. I have a
> > 5-node setup where the NameNode and Hive run on one server, YARN and the
> > secondary NameNode on a second server, and the last three nodes are
> > datanodes only. The cluster is up and running. However, when I run an
> > example wordcount map-reduce job, it throws the following exception:
> >
> > 2020-09-09 21:35:06,656 INFO mapreduce.Job: Task Id :
> > attempt_1599672538759_0004_m_000000_2, Status : FAILED
> > Container launch failed for container_1599672538759_0004_01_000004 :
> > org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The
> > auxService:mapreduce_shuffle does not exist
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:83)
> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:57)
> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:437)
> >         at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
> >         at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
> >         at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
> >         at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:163)
> >         at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:394)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> >         at java.lang.Thread.run(Thread.java:820)
> >
> > After digging through the web, I found a lot of suggestions for adding
> > properties. I did follow all the suggestions from the official Hadoop
> > documentation and other links. Right now, I am using the following
> > property in yarn-site.xml on all the nodes, and I restarted DFS/YARN:
> >
> > <property>
> >   <name>yarn.nodemanager.aux-services</name>
> >   <value>mapreduce_shuffle</value>
> > </property>
> >
> > It works once in 10 times, but most of the time the mapper fails. I have
> > no clue how to fix this, and I am badly stuck on this problem. Any help
> > is greatly appreciated. I am looking forward to getting some help.
> >
> > Thanks and Regards,
> > Sitakanta Mishra
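[Editor's note: putting the two settings discussed in this thread together, a minimal yarn-site.xml fragment would look like the sketch below. Both properties come from the messages above; the fragment must be present on every NodeManager host, followed by a NodeManager restart.]

```xml
<!-- Minimal aux-service configuration for the MapReduce shuffle.
     Must appear in yarn-site.xml on EVERY NodeManager node. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```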
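[Editor's note: a job that fails only intermittently is consistent with the aux-service being configured on some NodeManagers but not others — maps scheduled on a correctly configured node succeed, the rest fail. A rough sanity check (the helper name and the default config path here are illustrative, not from the thread) is to grep each node's yarn-site.xml for both values:]

```shell
# Report whether a given yarn-site.xml declares the mapreduce_shuffle
# aux-service and its handler class. A NodeManager whose config lacks
# these will reject shuffle containers with InvalidAuxServiceException.
check_shuffle_config() {
  conf="$1"
  if grep -q "mapreduce_shuffle" "$conf" \
     && grep -q "org.apache.hadoop.mapred.ShuffleHandler" "$conf"; then
    echo "OK: $conf"
  else
    echo "MISSING: $conf"
  fi
}

# Check the local config; repeat on every NodeManager host, e.g.
#   for h in dn1 dn2 dn3; do ssh "$h" 'bash -s' < check.sh; done
check_shuffle_config "${HADOOP_CONF_DIR:-/etc/hadoop/conf}/yarn-site.xml"
```

Any node reporting MISSING needs the properties added and its NodeManager restarted.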