Re: Running Distributed shell in hadoop0.23

2011-12-14 Thread raghavendhra rahul
I get the following error when running the given command to launch the
distributed shell:
hadoop1@master:~/hadoop/bin$ ./hadoop jar
../modules/hadoop-yarn-applications-distributedshell-0.23.0.jar
org.apache.hadoop.yarn.applications.distributedshell.Client --jar
../modules/hadoop-yarn-applications-distributedshell-0.23.0.jar
--shell_command ls --num_containers 5 --debug
2011-12-15 10:04:41,605 FATAL distributedshell.Client
(Client.java:main(190)) - Error running CLient
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/ipc/YarnRPC
at
org.apache.hadoop.yarn.applications.distributedshell.Client.<init>(Client.java:206)
at
org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:182)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:189)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.yarn.ipc.YarnRPC
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 7 more


On Thu, Dec 15, 2011 at 12:09 AM, Hitesh Shah wrote:

> Assuming you have a non-secure cluster setup ( the code does not handle
> security properly yet ), the following command would run the ls command on
> 5 allocated containers.
>
> $HADOOP_COMMON_HOME/bin/hadoop jar <path to
> hadoop-yarn-applications-distributedshell-0.24.0-SNAPSHOT.jar>
> org.apache.hadoop.yarn.applications.distributedshell.Client --jar <path to
> hadoop-yarn-applications-distributedshell-0.24.0-SNAPSHOT.jar>
> --shell_command ls --num_containers 5 --debug
>
> What the above does is upload the jar that contains the AppMaster class to
> HDFS, submit a new application request to launch the distributed shell app
> master in a container, which in turn runs the shell command on the number
> of containers specified.
>
> -- Hitesh
>
> On Dec 14, 2011, at 1:06 AM, sri ram wrote:
>
> > Hi,
> > Can anyone give the procedure for running the Distributed Shell
> > example in Hadoop YARN, so that I can try to understand how the
> > application master really works?
>
>
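A minimal client-side sketch of the submission flow Hitesh describes,
assuming the 0.23-era ClientRMProtocol API shown in the
WritingYarnApplications guide; the applicationsManager proxy and the
localResources map are assumed to be set up already, and the AM command
line is illustrative:

    // imports assumed from org.apache.hadoop.yarn.api.protocolrecords.*,
    // org.apache.hadoop.yarn.api.records.*, and org.apache.hadoop.yarn.util.Records

    // Ask the ResourceManager for a new application id
    GetNewApplicationRequest newAppRequest =
        Records.newRecord(GetNewApplicationRequest.class);
    ApplicationId appId =
        applicationsManager.getNewApplication(newAppRequest).getApplicationId();

    // Describe the AM container: the jar uploaded to HDFS (as a LocalResource)
    // plus the command that starts the ApplicationMaster class
    ApplicationSubmissionContext appContext =
        Records.newRecord(ApplicationSubmissionContext.class);
    appContext.setApplicationId(appId);
    ContainerLaunchContext amContainer =
        Records.newRecord(ContainerLaunchContext.class);
    amContainer.setLocalResources(localResources);   // assumed: map built from the HDFS jar
    amContainer.setCommands(Collections.singletonList(
        "${JAVA_HOME}/bin/java ApplicationMaster")); // illustrative command
    appContext.setAMContainerSpec(amContainer);

    // Submit; the RM launches the AM, which in turn asks for worker containers
    SubmitApplicationRequest submitRequest =
        Records.newRecord(SubmitApplicationRequest.class);
    submitRequest.setApplicationSubmissionContext(appContext);
    applicationsManager.submitApplication(submitRequest);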


Re: Running Distributed shell in hadoop0.23

2011-12-14 Thread raghavendhra rahul
Should I link the hadoop-yarn-applications-distributedshell-0.23.0.jar as
well? Without linking this jar it throws the same error.
If linked, it shows:
at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
at java.util.jar.JarFile.<init>(JarFile.java:150)
at java.util.jar.JarFile.<init>(JarFile.java:87)
at org.apache.hadoop.util.RunJar.main(RunJar.java:128)


Re: Running Distributed shell in hadoop0.23

2011-12-14 Thread raghavendhra rahul
Thanks for the help. I made a mistake when creating symlinks within
modules. Now everything is fine.


On Thu, Dec 15, 2011 at 11:18 AM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> Should I link the hadoop-yarn-applications-distributedshell-0.23.0.jar
> as well?
> Without linking this jar it throws the same error.
> If linked, it shows:
> at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
> Caused by: java.util.zip.ZipException: error in opening zip file
> at java.util.zip.ZipFile.open(Native Method)
> at java.util.zip.ZipFile.<init>(ZipFile.java:131)
> at java.util.jar.JarFile.<init>(JarFile.java:150)
> at java.util.jar.JarFile.<init>(JarFile.java:87)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:128)
>
>
>


Re: Running Distributed shell in hadoop0.23

2011-12-14 Thread raghavendhra rahul
How do I run an arbitrary script using this? When I tried, it shows the
final status as failed.

On Thu, Dec 15, 2011 at 11:48 AM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> Thanks for the help. I made a mistake when creating symlinks within
> modules. Now everything is fine.
>
>
>
> On Thu, Dec 15, 2011 at 11:18 AM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
>
>> Should I link the hadoop-yarn-applications-distributedshell-0.23.0.jar
>> as well?
>> Without linking this jar it throws the same error.
>> If linked, it shows:
>> at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
>> Caused by: java.util.zip.ZipException: error in opening zip file
>> at java.util.zip.ZipFile.open(Native Method)
>> at java.util.zip.ZipFile.<init>(ZipFile.java:131)
>> at java.util.jar.JarFile.<init>(JarFile.java:150)
>> at java.util.jar.JarFile.<init>(JarFile.java:87)
>> at org.apache.hadoop.util.RunJar.main(RunJar.java:128)
>>
>>
>>
>


Re: Running Distributed shell in hadoop0.23

2011-12-14 Thread raghavendhra rahul
When we create a directory using the distributed shell, any idea where it is
created?

On Thu, Dec 15, 2011 at 11:57 AM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> How do I run an arbitrary script using this? When I tried, it shows the
> final status as failed.
>
>
> On Thu, Dec 15, 2011 at 11:48 AM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
>
>> Thanks for the help. I made a mistake when creating symlinks within
>> modules. Now everything is fine.
>>
>>
>>
>> On Thu, Dec 15, 2011 at 11:18 AM, raghavendhra rahul <
>> raghavendhrara...@gmail.com> wrote:
>>
>>> Should I link the hadoop-yarn-applications-distributedshell-0.23.0.jar
>>> as well?
>>> Without linking this jar it throws the same error.
>>> If linked, it shows:
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
>>> Caused by: java.util.zip.ZipException: error in opening zip file
>>> at java.util.zip.ZipFile.open(Native Method)
>>> at java.util.zip.ZipFile.<init>(ZipFile.java:131)
>>> at java.util.jar.JarFile.<init>(JarFile.java:150)
>>> at java.util.jar.JarFile.<init>(JarFile.java:87)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:128)
>>>
>>>
>>>
>>
>


Re: Running Distributed shell in hadoop0.23

2011-12-15 Thread raghavendhra rahul
Thanks for the reply.
Also, is there any application master example other than
DistributedShell?

On Fri, Dec 16, 2011 at 4:23 AM, Hitesh Shah  wrote:

> The shell script is invoked within the context of a container launched by
> the NodeManager. If you are creating a directory using a relative path, it
> will be created relative to the container's working directory and cleaned
> up when the container completes.
>
> If you really want to see some output, one option could be to have your
> script create some data on hdfs or echo output to stdout which will be
> captured in the container logs.  The stdout/stderr logs generated by your
> script should be available wherever you have configured the node-manager's
> log dirs to point to.
>
> -- Hitesh
>
> On Dec 14, 2011, at 10:52 PM, raghavendhra rahul wrote:
>
> > When we create a directory using the distributed shell, any idea where
> > it is created?
> >
> > On Thu, Dec 15, 2011 at 11:57 AM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
> > How do I run an arbitrary script using this? When I tried, it shows the
> > final status as failed.
> >
> >
> > On Thu, Dec 15, 2011 at 11:48 AM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
> > Thanks for the help. I made a mistake when creating symlinks within
> > modules. Now everything is fine.
> >
> >
> >
> > On Thu, Dec 15, 2011 at 11:18 AM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
> > Should I link the hadoop-yarn-applications-distributedshell-0.23.0.jar
> > as well?
> > Without linking this jar it throws the same error.
> > If linked, it shows:
> > at org.apache.hadoop.util.RunJar.main(RunJar.java:130)
> > Caused by: java.util.zip.ZipException: error in opening zip file
> > at java.util.zip.ZipFile.open(Native Method)
> > at java.util.zip.ZipFile.<init>(ZipFile.java:131)
> > at java.util.jar.JarFile.<init>(JarFile.java:150)
> > at java.util.jar.JarFile.<init>(JarFile.java:87)
> > at org.apache.hadoop.util.RunJar.main(RunJar.java:128)
> >
> >
> >
> >
> >
>
>


Re: Yarn related questions:

2012-01-04 Thread raghavendhra rahul
Hi,

I tried to set the client node for launching the container within the
application master. I set the parameter as
request.setHostName("client");
but the containers are not launched on the designated host. Instead, the loop
goes on continuously:
2012-01-04 15:11:48,535 INFO appmaster.ApplicationMaster
(ApplicationMaster.java:run(204)) - Current application state: loop=95,
appDone=false, total=2, requested=2, completed=0, failed=0,
currentAllocated=0

On Wed, Jan 4, 2012 at 11:24 PM, Robert Evans  wrote:

>  Ann,
>
> A container more or less corresponds to a task in MRV1.  There is one
> exception to this, as the ApplicationMaster also runs in a container.  The
> ApplicationMaster will request new containers for each mapper or reducer
> task that it wants to launch.  There is separate code from the container
> that will serve up the intermediate mapper output and is run as part of the
> NodeManager (Similar to the TaskTracker from before).  When the
> ApplicationMaster requests a container it also includes with it a hint as
> to where it would like the container placed.  In fact it actually makes
> three requests: one for the exact node, one for the rack the node is on, and
> one that is generic and could be anywhere.  The scheduler will try to honor
> those requests in the same order so data locality is still considered and
> generally honored.  Yes there is the possibility of back and forth to get a
> container, but the ApplicationMaster generally will try to use all of the
> containers that it is given, even if they are not optimal.
>
> --Bobby Evans
>
>
> On 1/4/12 10:23 AM, "Ann Pal"  wrote:
>
> Hi,
> I am trying to understand more about Hadoop Next Gen Map Reduce and had
> the following questions based on the following post:
>
>
> http://developer.yahoo.com/blogs/hadoop/posts/2011/03/mapreduce-nextgen-scheduler/
>
> [1] How does the application decide how many containers it needs? Are the
> containers used to store the intermediate results at the map nodes?
>
> [2] During resource allocation, if the resource manager has no mapping
> between map tasks and the resources allocated, how can it properly allocate
> the right resources? It might end up allocating resources on a node that
> does not have data for the map task, and hence is not optimal. In this case
> the Application Master will have to reject it and request again. There
> could be considerable back-and-forth between application master and
> resource manager before it could converge. Is this right?
>
> Thanks!
>
>


Launching containers in specific host

2012-01-04 Thread raghavendhra rahul
Hi,

I tried to set the client node for launching the container within the
application master. I set the parameter as
request.setHostName("client");
but the containers are not launched on the designated host. Instead, the loop
goes on continuously:
2012-01-04 15:11:48,535 INFO appmaster.ApplicationMaster
(ApplicationMaster.java:run(204)) - Current application state: loop=95,
appDone=false, total=2, requested=2, completed=0, failed=0,
currentAllocated=0


Appmaster error

2012-01-04 Thread raghavendhra rahul
Hi,
   I am trying to start a server within the application
master's container alone, using Runtime.getRuntime().exec("command"),
but it throws the following exception:
Application application_1325738010393_0003 failed 1 times due to AM
Container for appattempt_1325738010393_0003_01 exited with exitCode:
143 due to: Container
[pid=7212,containerID=container_1325738010393_0003_01_01] is running
beyond virtual memory limits. Current usage: 118.4mb of 1.0gb physical
memory used; 2.7gb of 2.1gb virtual memory used. Killing container. Dump of
the process-tree for container_1325738010393_0003_01_0

When I tried using a single-node YARN cluster, everything worked fine, but on
a multi-node cluster it throws this exception. Should I increase the size of
/tmp in Linux?
Any ideas?
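The container here was killed for exceeding its virtual memory allowance
(2.7 GB used against a 2.1 GB cap, i.e. 2.1x the 1.0 GB physical allocation),
not for a full /tmp. A hedged option, assuming the
yarn.nodemanager.vmem-pmem-ratio property is available in this release line,
is to raise the ratio in yarn-site.xml (the value 5 is just an example):

  <property>
    <!-- virtual memory allowed per unit of physical container memory;
         the default ratio is 2.1 -->
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>5</value>
  </property>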


Container launch failure

2012-01-05 Thread raghavendhra rahul
Hi,
   I tried to launch a container from the application master, but it
throws the following error:

java.io.FileNotFoundException: File
/local1/yarn/.yarn/local/usercache/rahul_2011/appcache/application_1325760852770_0001
does not exist
at
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:431)
at
org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:815)
at
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:700)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
at
org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:122)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
Any ideas on how to solve it?


Re: Yarn related questions:

2012-01-05 Thread raghavendhra rahul
Yes, I am writing my own application master. Is there a way to specify
node1: 10 containers
node2: 10 containers
Can we specify this kind of list using the application master?

Also, I set request.setHostName("client");, where client is the hostname of
a node.
I checked the log and found the following error:
java.io.FileNotFoundException: File /local1/yarn/.yarn/local/
usercache/rahul_2011/appcache/application_1325760852770_0001 does not exist
at
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:431)
at
org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:815)
at
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:700)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
at
org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:122)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

Any ideas.


On Fri, Jan 6, 2012 at 12:41 AM, Arun C Murthy  wrote:

> Are you writing your own application i.e. custom ApplicationMaster?
>
> You need to pass a ResourceRequest (RR) with a valid hostname, along with
> (optionally) an RR with the rack, and also a mandatory RR with * as the
> resource-name.
>
> Arun
>
> On Jan 4, 2012, at 8:04 PM, raghavendhra rahul wrote:
>
> Hi,
>
> I tried to set the client node for launching the container within the
> application master. I set the parameter as
> request.setHostName("client");
> but the containers are not launched on the designated host. Instead, the
> loop goes on continuously:
> 2012-01-04 15:11:48,535 INFO appmaster.ApplicationMaster
> (ApplicationMaster.java:run(204)) - Current application state: loop=95,
> appDone=false, total=2, requested=2, completed=0, failed=0,
> currentAllocated=0
>
> On Wed, Jan 4, 2012 at 11:24 PM, Robert Evans  wrote:
>
>>  Ann,
>>
>> A container more or less corresponds to a task in MRV1.  There is one
>> exception to this, as the ApplicationMaster also runs in a container.  The
>> ApplicationMaster will request new containers for each mapper or reducer
>> task that it wants to launch.  There is separate code from the container
>> that will serve up the intermediate mapper output and is run as part of the
>> NodeManager (Similar to the TaskTracker from before).  When the
>> ApplicationMaster requests a container it also includes with it a hint as
>> to where it would like the container placed.  In fact it actually makes
>> three requests: one for the exact node, one for the rack the node is on, and
>> one that is generic and could be anywhere.  The scheduler will try to honor
>> those requests in the same order so data locality is still considered and
>> generally honored.  Yes there is the possibility of back and forth to get a
>> container, but the ApplicationMaster generally will try to use all of the
>> containers that it is given, even if they are not optimal.
>>
>> --Bobby Evans
>>
>>
>> On 1/4/12 10:23 AM, "Ann Pal"  wrote:
>>
>> Hi,
>> I am trying to understand more about Hadoop Next Gen Map Reduce and had
>> the following questions based on the following post:
>>
>>
>> http://developer.yahoo.com/blogs/hadoop/posts/2011/03/mapreduce-nextgen-scheduler/
>>
>> [1] How does the application decide how many containers it needs? Are the
>> containers used to store the intermediate results at the map nodes?
>>
>> [2] During resource allocation, if the resource manager has no mapping
>> between map tasks and the resources allocated, how can it properly
>> allocate the right resources? It might end up allocating resources on a
>> node that does not have data for the map task, and hence is not optimal.
>> In this case the Application Master will have to reject it and request
>> again. There could be considerable back-and-forth between application
>> master and resource manager before it could converge. Is this right?
>>
>> Thanks!
>>
>>
>
>
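A minimal sketch of the requests Arun describes, assuming the 0.23-era
AMRMProtocol/ResourceRequest API; the newRequest helper and the capability,
priority, and rack values are illustrative, not from this thread:

    // imports assumed from org.apache.hadoop.yarn.api.records.* and
    // org.apache.hadoop.yarn.util.Records

    // Hypothetical helper: build one ResourceRequest per resource-name
    ResourceRequest newRequest(String name, int containers,
                               Resource capability, Priority priority) {
      ResourceRequest req = Records.newRecord(ResourceRequest.class);
      req.setHostName(name);            // a node name, a rack name, or "*"
      req.setNumContainers(containers);
      req.setCapability(capability);
      req.setPriority(priority);
      return req;
    }

    // Ask for 10 containers each on two specific nodes, plus the rack-level
    // and mandatory "*" requests that let the scheduler fall back
    List<ResourceRequest> ask = new ArrayList<ResourceRequest>();
    ask.add(newRequest("node1", 10, capability, priority));
    ask.add(newRequest("node2", 10, capability, priority));
    ask.add(newRequest("/default-rack", 20, capability, priority)); // assumed rack name
    ask.add(newRequest("*", 20, capability, priority));
    // the ask list goes out on the next AMRMProtocol.allocate() call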


Re: Exception from Yarn Launch Container

2012-01-06 Thread raghavendhra rahul
Hi,
Can I know where to get the hadoop 0.23.1 release?

2012/1/5 Bing Jiang 

> Arun,
>
> To figure out what is happening, I traced back through the source code, and
> found this in *org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor*:
>
> @Override
>   public int launchContainer(Container container,
>   Path nmPrivateContainerScriptPath, Path nmPrivateTokensPath,
>   String userName, String appId, Path containerWorkDir)
>   throws IOException {
>   
>String[] sLocalDirs = getConf().getStrings(
> YarnConfiguration.NM_LOCAL_DIRS,
> YarnConfiguration.DEFAULT_NM_LOCAL_DIRS);
> for (String sLocalDir : sLocalDirs) {
>   Path usersdir = new Path(sLocalDir, ContainerLocalizer.USERCACHE);
>   Path userdir = new Path(usersdir, userName);
>   Path appCacheDir = new Path(userdir, ContainerLocalizer.APPCACHE);
>   Path appDir = new Path(appCacheDir, appIdStr);
>   Path containerDir = new Path(appDir, containerIdStr);
>   lfs.mkdir(containerDir, null, false);
>}
>   
>
> Referring to the mkdir API, the false in lfs.mkdir(containerDir, null,
> false); means it cannot create the parent path if it does not exist.
> In my hadoop project, I revised lfs.mkdir(containerDir, null, false); to
> lfs.mkdir(containerDir, null, true);, and then my program works fine.
>
> I fetched the hadoop source code from git just now, and I find the same
> issue as before.
>
> I want to ask why you set false here, or have I missed something important?
>
> Thanks!
>
>
> On Jan 4, 2012, at 3:42 PM, Arun C Murthy wrote:
>
> Bing,
>>
>>  Are you using the released version of hadoop-0.23? If so, you might want
>> to upgrade to latest build off branch-0.23 (i.e. hadoop-0.23.1-SNAPSHOT)
>> which has the fix for MAPREDUCE-3537.
>>
>> Arun
>>
>> On Dec 29, 2011, at 12:27 AM, Bing Jiang wrote:
>>
>> Hi, I use YARN as resource management to deploy my run-time computing
>> system. I followed
>> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/YARN.html
>> and
>> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
>> as guides, and I found the issues below.
>>
>> yarn-nodemanager-**.log:
>> 
>> 2011-12-29 15:49:16,250 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>> Adding container_1325062142731_0006_01_01 to application
>> application_1325062142731_0006
>> 2011-12-29 15:49:16,250 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ApplicationLocalizationEvent.EventType:
>> INIT_APPLICATION_RESOURCES
>> 2011-12-29 15:49:16,250 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationInitedEvent.EventType:
>> APPLICATION_INITED
>> 2011-12-29 15:49:16,250 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>> Processing application_1325062142731_0006 of type APPLICATION_INITED
>> 2011-12-29 15:49:16,250 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
>> Application application_1325062142731_0006 transitioned from INITING to
>> RUNNING
>> 2011-12-29 15:49:16,250 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerAppStartedEvent.EventType:
>> APPLICATION_STARTED
>> 2011-12-29 15:49:16,250 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerInitEvent.EventType:
>> INIT_CONTAINER
>> 2011-12-29 15:49:16,250 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>> Processing container_1325062142731_0006_01_01 of type INIT_CONTAINER
>> 2011-12-29 15:49:16,250 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>> Container container_1325062142731_0006_01_01 transitioned from NEW to
>> LOCALIZED
>> 2011-12-29 15:49:16,250 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType:
>> LAUNCH_CONTAINER
>> 2011-12-29 15:49:16,287 DEBUG
>> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent.EventType:
>> CONTAINER_LAUNCHED
>> 2011-12-29 15:49:16,287 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>> Processing container_1325062142731_0006_01_01 of type CONTAINER_LAUNCHED
>> 2011-12-29 15:49:16,287 INFO
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>> Container container_1325062142731_0006_01_01 t

Container Launching

2012-01-06 Thread raghavendhra rahul
Hi all,
I am trying to write an application master. Is there a way to specify
node1: 10 containers
node2: 10 containers
Can we specify this kind of list using the application master?

Also, I set request.setHostName("client");, where client is the hostname of
a node.
I checked the log and found the following error:
java.io.FileNotFoundException: File
/local1/yarn/.yarn/local/usercache/rahul_2011/appcache/
application_1325760852770_0001 does not exist
at
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:431)
at
org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:815)
at
org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:700)
at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
at
org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:122)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

That is, containers are launched only on the host where the application
master runs, while the other nodes always remain free.
Any ideas?


Hadoop 0.23 build error

2012-01-06 Thread raghavendhra rahul
Hi,
  I tried to build the hadoop 0.23.1 version, but it throws the
following error during the build:

Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run
(tar) on project hadoop-mapreduce: An Ant BuildException has occured: exec
returned: 2

Any help..


Cannot start yarn daemons

2012-01-09 Thread raghavendhra rahul
Hi,
  I am trying to install hadoop 0.23.1-SNAPSHOT. While starting
yarn-daemons.sh, it shows the following error:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/hadoop/conf/Configuration
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
Could not find the main class:
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager. Program will
exit.


Re: Container Launching

2012-01-09 Thread raghavendhra rahul
Any solutions.

On Fri, Jan 6, 2012 at 5:15 PM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> Hi all,
> I am trying to write an application master. Is there a way to specify
> node1: 10 containers
> node2: 10 containers
> Can we specify this kind of list using the application master?
>
> Also, I set request.setHostName("client");, where client is the hostname
> of a node.
> I checked the log and found the following error:
> java.io.FileNotFoundException: File
> /local1/yarn/.yarn/local/usercache/rahul_2011/appcache/
> application_1325760852770_0001 does not exist
> at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:431)
> at
> org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:815)
> at
> org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
> at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:700)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
> at
> org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
> at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:122)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
> at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:636)
>
> That is, containers are launched only on the host where the application
> master runs, while the other nodes always remain free.
> Any ideas?
>


Re: Container Launching

2012-01-09 Thread raghavendhra rahul
org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
>at
> org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
> at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:123)
>
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> 2011-12-29 15:49:16,290 DEBUG
> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerExitEvent.EventType:
> CONTAINER_EXITED_WITH_FAILURE
> 2011-12-29 15:49:16,290 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Processing container_1325062142731_0006_01_01 of type
> CONTAINER_EXITED_WITH_FAILURE
> 2011-12-29 15:49:16,290 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Container container_1325062142731_0006_01_01 transitioned from RUNNING
> to EXITED_WITH_FAILURE
> 2011-12-29 15:49:16,290 DEBUG
> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEvent.EventType:
> CLEANUP_CONTAINER
> 2011-12-29 15:49:16,290 INFO
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Cleaning up container container_1325062142731_0006_01_01
> 2011-12-29 15:49:16,290 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Marking container container_1325062142731_0006_01_01 as inactive
> 2011-12-29 15:49:16,290 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Getting pid for container container_1325062142731_0006_01_01 to kill
> from pid file
> /tmp/nm-local-dir/nmPrivate/container_1325062142731_0006_01_01.pid
> 2011-12-29 15:49:16,290 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
> Accessing pid for container container_1325062142731_0006_01_01 from pid
> file /tmp/nm-local-dir/nmPrivate/container_1325062142731_0006_01_01.pid
> 2011-12-29 15:49:16,307 DEBUG
> org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.ContainerLocalizationCleanupEvent.EventType:
> CLEANUP_CONTAINER_RESOURCES
>
> To figure out what is happening, I traced back through the source code, and
> found this in *org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor*:
>
> @Override
>   public int launchContainer(Container container,
>   Path nmPrivateContainerScriptPath, Path nmPrivateTokensPath,
>   String userName, String appId, Path containerWorkDir)
>   throws IOException {
>   
>String[] sLocalDirs = getConf().getStrings(
> YarnConfiguration.NM_LOCAL_DIRS,
> YarnConfiguration.DEFAULT_NM_LOCAL_DIRS);
> for (String sLocalDir : sLocalDirs) {
>   Path usersdir = new Path(sLocalDir, ContainerLocalizer.USERCACHE);
>   Path userdir = new Path(usersdir, userName);
>   Path appCacheDir = new Path(userdir, ContainerLocalizer.APPCACHE);
>   Path appDir = new Path(appCacheDir, appIdStr);
>   Path containerDir = new Path(appDir, containerIdStr);
>   lfs.mkdir(containerDir, null, false);
>}
>   
>
> Referring to the mkdir API, the false in lfs.mkdir(containerDir, null,
> false); means it cannot create the parent path if it does not exist.
> In my hadoop project, I revised lfs.mkdir(containerDir, null, false); to
> lfs.mkdir(containerDir, null, true);, and then my program works fine.
>
>
>
> 2012/1/9 raghavendhra rahul 
>
>> Any solutions.
>>
>>
>> On Fri, Jan 6, 2012 at 5:15 PM, raghavendhra rahul <
>> raghavendhrara...@gmail.com> wrote:
>>
>>> Hi all,
>>> I am trying to write an application master. Is there a way to specify
>>> node1: 10 containers
>>> node2: 10 containers
>>> Can we specify this kind of list using the application master?
>>>
>>> Also i set

Container launch from appmaster

2012-01-09 Thread raghavendhra rahul
Hi all,
I am trying to write an application master. Is there a way to specify
node1: 10 containers
node2: 10 containers
Can we specify this kind of list using the application master?


Yarn Container Limit

2012-01-10 Thread raghavendhra rahul
Hi,
        How do I set the maximum number of containers to be executed on
each node, so that at any time only that many containers will be running
on that node?
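There is no direct per-node container count in this release line; the
NodeManager caps containers through the memory it advertises to the
ResourceManager. A hedged yarn-site.xml sketch, assuming the
yarn.nodemanager.resource.memory-mb property: with 2000 MB advertised and
1000 MB containers, at most 2 containers run on that node at a time.

  <property>
    <!-- total memory the NodeManager offers to containers, in MB -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2000</value>
  </property>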


Yarn Container Memory

2012-01-11 Thread raghavendhra rahul
Hi,
 I formed a hadoop cluster with 3 nodes, with 3500mb allotted for
containers on each node.

In the appmaster I set the resource capability to 1000 and total containers
to 20. Now only 2 containers are running on each node.
The second time I reduced the resource capability to 100, with total
containers still 20.
Even now only 2 containers are running on each node.

Any reason why the remaining containers are not allotted even though
resource memory is available?

Thanks..
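For reference, a minimal sketch of the capability setup being described,
assuming the 0.23-era ResourceRequest API (request is the ResourceRequest
being built, assumed in scope); note that the RM normalizes each ask up to a
multiple of the scheduler's minimum allocation, so a 100mb request may still
reserve a larger slot (an assumption worth checking against the scheduler
configuration):

    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(1000);       // MB per container
    request.setCapability(capability);
    request.setNumContainers(20);     // total across the cluster, not per node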


Re: Container launch from appmaster

2012-01-11 Thread raghavendhra rahul
When I provide a hostname instead of "*", containers are never allotted to
the application until it ends.
Is there a format for specifying the hostname?

2012/1/11 Bing Jiang 

> I think you can take control of the allocated container, and check whether
> it meets your requirements.
>
>
>
> 2012/1/11 Vinod Kumar Vavilapalli 
>
>> Yes, you can.
>>
>>
>> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Writing_an_ApplicationMaster
>> should give you a very good idea and example code about this.
>>
>> But, the requirements are not hard-fixed. If the scheduler cannot find
>> free resources on the nodes you mention, it will try to get containers
>> on a different node. Despite that, the total number of containers can
>> be controlled by limiting the count against the entry for "*".
>>
>> HTH,
>> +Vinod
>>
>> On Mon, Jan 9, 2012 at 10:12 PM, raghavendhra rahul
>>  wrote:
>> > Hi all,
>> > I am trying to write an application master. Is there a way to specify
>> > node1: 10 containers
>> > node2: 10 containers
>> > Can we specify this kind of list using the application master?
>> >
>>
>
>
>
> --
> Bing Jiang
> Tel:(86)134-2619-1361
> weibo: http://weibo.com/jiangbinglover
> BLOG: http://blog.sina.com.cn/jiangbinglover
> National Research Center for Intelligent Computing Systems
> Institute of Computing technology
> Graduate University of Chinese Academy of Science
>
>


Re: Container launch from appmaster

2012-01-11 Thread raghavendhra rahul
How can I find the rack name, and where do I set it?
Also, is there a specific way to stop the application master other than the
timeout option in the client?
Is there a command like
./hadoop job -kill jobid

2012/1/11 Bing Jiang 

> I looked through the source code, and have not yet found a format for the
> hostname besides "*".
> But it should contain "rackname" + "hostname".
>
> On Jan 11, 2012, at 4:47 PM, raghavendhra rahul wrote:
>
>> When I provide a hostname instead of "*", containers are never allotted
>> to the application until it ends.
>> Is there a format for specifying the hostname?
>>
>>
>> 2012/1/11 Bing Jiang 
>>
>>> I think you can take control of the allocated container, and check
>>> whether it meets your requirements.
>>>
>>>
>>>
>>> 2012/1/11 Vinod Kumar Vavilapalli 
>>>
>>>> Yes, you can.
>>>>
>>>>
>>>> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Writing_an_ApplicationMaster
>>>> should give you a very good idea and example code about this.
>>>>
>>>> But, the requirements are not hard-fixed. If the scheduler cannot find
>>>> free resources on the nodes you mention, it will try to get containers
>>>> on a different node. Despite that, the total number of containers can
>>>> be controlled by limiting the count against the entry for "*".
>>>>
>>>> HTH,
>>>> +Vinod
>>>>
>>>> On Mon, Jan 9, 2012 at 10:12 PM, raghavendhra rahul
>>>>  wrote:
>>>> > Hi all,
>>>> > I am trying to write an application master. Is there a way to specify
>>>> > node1: 10 containers
>>>> > node2: 10 containers
>>>> > Can we specify this kind of list using the application master?
>>>> >
>>>>
>>>
>>>
>>>
>>> --
>>> Bing Jiang
>>> Tel:(86)134-2619-1361
>>> weibo: http://weibo.com/jiangbinglover
>>> BLOG: http://blog.sina.com.cn/jiangbinglover
>>> National Research Center for Intelligent Computing Systems
>>> Institute of Computing technology
>>> Graduate University of Chinese Academy of Science
>>>
>>>
>>
>
>
> --
> Bing Jiang
> Tel:(86)134-2619-1361
> weibo: http://weibo.com/jiangbinglover
> BLOG: http://blog.sina.com.cn/jiangbinglover
> National Research Center for Intelligent Computing Systems
> Institute of Computing technology
> Graduate University of Chinese Academy of Science
>
>


Application start stop

2012-01-11 Thread raghavendhra rahul
Hi,
 Is there a specific way to stop the application master other than the
timeout option in the client?
Is there a command like
./hadoop job -kill jobid for application masters?
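A hedged sketch of killing a running application from a client, assuming the
0.23-era ClientRMProtocol API; the applicationsManager proxy setup (as done
in the distributed-shell Client) is elided:

    // Ask the ResourceManager to kill the application by its id
    KillApplicationRequest killRequest =
        Records.newRecord(KillApplicationRequest.class);
    killRequest.setApplicationId(appId);  // the ApplicationId from submission
    applicationsManager.forceKillApplication(killRequest);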


Re: Yarn Container Memory

2012-01-11 Thread raghavendhra rahul
Any suggestions..

On Wed, Jan 11, 2012 at 2:09 PM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> Hi,
>  I formed a hadoop cluster with 3 nodes, with 3500mb allotted for
> containers on each node.
>
> In the appmaster I set the resource capability to 1000 and total containers
> to 20. Now only 2 containers are running on each node.
> The second time I reduced the resource capability to 100, with total
> containers still 20.
> Even now only 2 containers are running on each node.
>
> Any reason why the remaining containers are not allotted even though
> resource memory is available?
>
> Thanks..
>


Re: Yarn Container Memory

2012-01-11 Thread raghavendhra rahul
min container size 100mb
AM size is 1000mb

On Thu, Jan 12, 2012 at 1:06 PM, Arun C Murthy  wrote:

> What is your min container size?
>
> How much did you allocate to AM itself?
>
> On Jan 11, 2012, at 9:51 PM, raghavendhra rahul wrote:
>
> Any suggestions..
>
> On Wed, Jan 11, 2012 at 2:09 PM, raghavendhra rahul <
> raghavendhrara...@gmail.com> wrote:
>
>> Hi,
>>  I formed a hadoop cluster with 3 nodes, with 3500mb allotted for
>> containers on each node.
>>
>> In the appmaster I set the resource capability to 1000 and total
>> containers to 20. Now only 2 containers are running on each node.
>> The second time I reduced the resource capability to 100, with total
>> containers still 20.
>> Even now only 2 containers are running on each node.
>>
>> Any reason why the remaining containers are not allotted even though
>> resource memory is available?
>>
>> Thanks..
>>
>
>
>


Re: Yarn Container Memory

2012-01-12 Thread raghavendhra rahul
FIFO

On Thu, Jan 12, 2012 at 1:45 PM, Arun C Murthy  wrote:

> What scheduler are you using?
>
> On Jan 11, 2012, at 11:48 PM, raghavendhra rahul wrote:
>
> min container size 100mb
> AM size is 1000mb
>
> On Thu, Jan 12, 2012 at 1:06 PM, Arun C Murthy wrote:
>
>> What is your min container size?
>>
>> How much did you allocate to AM itself?
>>
>> On Jan 11, 2012, at 9:51 PM, raghavendhra rahul wrote:
>>
>> Any suggestions..
>>
>> On Wed, Jan 11, 2012 at 2:09 PM, raghavendhra rahul <
>> raghavendhrara...@gmail.com> wrote:
>>
>>> Hi,
>>>  I formed a hadoop cluster with 3 nodes, with 3500mb allotted for
>>> containers on each node.
>>>
>>> In the appmaster I set the resource capability to 1000 and total
>>> containers to 20. Now only 2 containers are running on each node.
>>> The second time I reduced the resource capability to 100, with total
>>> containers still 20.
>>> Even now only 2 containers are running on each node.
>>>
>>> Any reason why the remaining containers are not allotted even though
>>> resource memory is available?
>>>
>>> Thanks..
>>>
>>
>>
>>
>
>


Container size

2012-01-17 Thread raghavendhra rahul
Hi,

 What is the minimum size of a container in hadoop yarn, i.e. for
capability.setMemory(xx);?


Re: Container size

2012-01-17 Thread raghavendhra rahul
What could the minimum size be? My cluster size is 2.5GB.
When I set 100 MB for each container, only 2 instances are launched on each
node.
When I set 1000 MB for each container, even then only 2 instances are
launched on each node.
What could be the problem?

Sorry for the cross posting...

On Wed, Jan 18, 2012 at 1:01 PM, Arun C Murthy  wrote:

> Removing common-user@, please do not cross-post.
>
> On Jan 17, 2012, at 11:24 PM, raghavendhra rahul wrote:
>
> > Hi,
> >
> > What is the minimum size of a container in hadoop yarn, i.e. for
> > capability.setMemory(xx);?
>
> The AM gets this information from RM via the return value for
> AMRMProtocol.registerApplicationMaster.
>
> Arun
>
>
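A short sketch of reading those limits inside the AM, assuming the 0.23-era
AMRMProtocol registration response exposes minimum and maximum capabilities
(hedged; the registerRequest setup is elided):

    RegisterApplicationMasterResponse response =
        resourceManager.registerApplicationMaster(registerRequest);
    // Scheduler-enforced bounds; container asks are normalized into this range
    int minMemory = response.getMinimumResourceCapability().getMemory();
    int maxMemory = response.getMaximumResourceCapability().getMemory();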


Re: Hbase compatible for Hadoop Yarn

2012-02-07 Thread raghavendhra rahul
I tried installing hbase on top of hadoop 0.23 and get the following error.
Any suggestions?
client1: Exception in thread "main" org.apache.hadoop.ipc.RemoteException:
Server IPC version 5 cannot communicate with client version 3
client1: at org.apache.hadoop.ipc.Client.call(Client.java:740)
client1: at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
client1: at $Proxy5.getProtocolVersion(Unknown Source)
client1: at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
client1: at
org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
client1: at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
client1: at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
client1: at
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
client1: at
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)


On Tue, Feb 7, 2012 at 6:26 PM, Harsh J  wrote:

> HBase isn't an MR-dependent application, if you mean to ask that.
>
> If your question is generally "What version of HBase can I use with
> 0.23", then 0.92 and 0.90 would both work against 0.23's HDFS, as
> HBase is merely a HDFS client.
>
> Qs:
> - Are you facing issues using HBase with 0.23?
> - Which component throws up the issue? MR libs of HBase or the daemons
> itself?
> - Are you instead looking for
> https://issues.apache.org/jira/browse/HBASE-4329?
>
> On Tue, Feb 7, 2012 at 9:54 AM, raghavendhra rahul
>  wrote:
> > Hi,
> >  What is the suitable version of hbase that can be tested with
> > hadoop yarn.
>
>
>
> --
> Harsh J
> Customer Ops. Engineer
> Cloudera | http://tiny.cloudera.com/about
>


Re: Hadoop 0.23.1 installation

2012-03-01 Thread raghavendhra rahul
I have made simple configurations for testing

core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>


hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
export HADOOP_DEV_HOME=/home/hadoop1/hadoop-0.23.1
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/conf/
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/conf/


I tried to format using the command bin/hadoop namenode -format



On Fri, Mar 2, 2012 at 7:56 AM, Wellington Chevreuil <
wellington.chevre...@gmail.com> wrote:

> Hi,
>
> can you tell us how you are trying to format your hdfs? As it's a
> NoClassDefFoundError, your hadoop lib is probably not on your
> classpath.
>
> Thanks,
> Wellington.
>
> 2012/2/29 Marcos Ortiz :
> > On 03/01/2012 04:48 AM, raghavendhra rahul wrote:
> >>
> >> Hi,
> >>   I tried to configure hadoop 0.23.1. I added all the libs from the
> >> share folder to the lib directory, but I still get the following error
> >> while formatting the namenode:
> >>
> >>
> >> Exception in thread "main" java.lang.NoClassDefFoundError:
> >> org/apache/hadoop/hdfs/server/namenode/NameNode
> >> Caused by: java.lang.ClassNotFoundException:
> >> org.apache.hadoop.hdfs.server.namenode.NameNode
> >>at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
> >>at java.security.AccessController.doPrivileged(Native Method)
> >>at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
> >>at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
> >>at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
> >>at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
> >> Could not find the main class:
> >> org.apache.hadoop.hdfs.server.namenode.NameNode. Program will exit.
> >>
> >>
> >> Any help???
> >
> > Can you show us here your .conf files?
> > core-site.xml
> > mapred-site.xml
> > hdfs-site.xml
> >
> > Which is your configuration for your conf/hadoop-env.sh?
> >
> > Regards
> >
> > --
> > Marcos Luis Ortíz Valmaseda
> >  Sr. Software Engineer (UCI)
> >  http://marcosluis2186.posterous.com
> >  http://postgresql.uci.cu/blog/38
> >
> >
> >
>


Re: Distributed Shell example in capacity Scheduler

2012-04-01 Thread raghavendhra rahul
Any Ideas?

On Tue, Mar 27, 2012 at 2:52 PM, raghavendhra rahul <
raghavendhrara...@gmail.com> wrote:

> Hi,
> When I tried to run the randomwriter example under the capacity scheduler,
> it works fine. But when I run the distributed shell example under the
> capacity scheduler, it shows the following exception:
>
> RemoteTrace:
> java.lang.NullPointerException
> at
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:796)
> at
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getQueueInfo(CapacityScheduler.java:507)
> at
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getQueueInfo(ClientRMService.java:383)
> at
> org.apache.hadoop.yarn.api.impl.pb.service.ClientRMProtocolPBServiceImpl.getQueueInfo(ClientRMProtocolPBServiceImpl.java:181)
> at
> org.apache.hadoop.yarn.proto.ClientRMProtocol$ClientRMProtocolService$2.callBlockingMethod(ClientRMProtocol.java:188)
> at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Server.call(ProtoOverHadoopRpcEngine.java:352)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1608)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1604)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:416)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1167)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1602)
>  at LocalTrace:
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl:
> at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:160)
> at $Proxy6.getQueueInfo(Unknown Source)
> at
> org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getQueueInfo(ClientRMProtocolPBClientImpl.java:223)
> at
> org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:356)
> at
> org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:188)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
>
>
> Thank You
>