Can I get just the last 4 jobs from the job history in YARN?

2013-06-18 Thread Pedro Sá da Costa
Is it possible to ask for just the last 4 jobs in the job history in YARN? -- Best regards,

RE: Can I get just the last 4 jobs from the job history in YARN?

2013-06-18 Thread Devaraj k
I don't think there is anything like this based on the number of jobs. Can you give me the use case for this? If you want to delete old history files, there is a provision for that in the history server. You can use this configuration and change its value: mapreduce.jobhistory.max-age-ms :
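For illustration, a minimal mapred-site.xml entry for the retention age (the value shown is one week in milliseconds, which I believe matches the default; verify against mapred-default.xml for your version):

    <property>
      <name>mapreduce.jobhistory.max-age-ms</name>
      <value>604800000</value> <!-- 7 days; older history files are deleted -->
    </property>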

Re: hprof profiler output location

2013-06-18 Thread yypvsxf19870706
Hi Rahul, I even searched for the files using find / -name attemp*.profile, but still nothing was found. Can you indicate the format of the file name? Thanks. Sent from my iPhone. On 2013-6-18, 20:27, Rahul Bhattacharjee rahul.rec@gmail.com wrote: In the same directory from which the job has
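If profiling ran at all, the output files are named after the task attempt; a sketch of a search matching that convention, assuming (per Rahul's hint) the files land in the directory the job was submitted from rather than in / (the attempt ID in the comment is hypothetical):

    # files look like attempt_201306180012_0001_m_000000_0.profile
    find . -name 'attempt_*.profile'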

DFS Permissions on Hadoop 2.x

2013-06-18 Thread Prashant Kommireddi
Hello, We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a question around disabling dfs permissions on the latter version. For some reason, setting the following config does not seem to work:

    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>

Re: Namenode memory usage

2013-06-18 Thread Patai Sangbutsarakum
Thanks Brahma, I am kind of afraid to run the command; I had an issue on the jobtracker early this year. I launched the command and it caused the jobtracker to stop responding long enough that we needed to roll the jobtracker instead. So I am kind of afraid to run it on the production namenode. Any

Re: DFS Permissions on Hadoop 2.x

2013-06-18 Thread Jean-Baptiste Onofré
It sounds like a change in the behavior. Regards JB On 06/18/2013 09:04 PM, Prashant Kommireddi wrote: Thanks for the reply, Chris. Yes, I am certain this worked with 0.20.2. It used a slightly different property, and I have checked that setting it to false actually disables checking for perms.

Re: Assignment of data splits to mappers

2013-06-18 Thread Bertrand Dechoux
1) The tradeoff is between reducing the overhead of distributed computing and reducing the cost of failure. Fewer tasks means less overhead, but the cost of failure will be bigger, mainly because the distribution will be coarser. One of the reasons was outlined before. A (failed) task is related to an
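To make the tradeoff concrete, here is one knob that coarsens splits in Hadoop 1.x (renamed to mapreduce.input.fileinputformat.split.minsize in 2.x; the 256 MB value is illustrative only):

    <property>
      <name>mapred.min.split.size</name>
      <value>268435456</value> <!-- fewer, larger map tasks; a failed task redoes more work -->
    </property>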

Re: DFS Permissions on Hadoop 2.x

2013-06-18 Thread Prashant Kommireddi
Hi Chris, This is while running an MR job. Please note the job is able to write files to the /mapred directory but fails on EXECUTE permissions. On digging in some more, it looks like the failure occurs after writing to /mapred/history/done_intermediate. Here is a more detailed stacktrace. INFO: Job

Re: DFS Permissions on Hadoop 2.x

2013-06-18 Thread Prashant Kommireddi
Looks like the jobs fail only on the first attempt and pass thereafter. Failure occurs while setting perms on the intermediate done directory. Here is what I think is happening:
1. The intermediate done dir is (ideally) created as part of deployment (e.g., /mapred/history/done_intermediate)
2. When a
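A hedged sketch of pre-creating that directory, assuming the /mapred/history layout from this thread and the commonly recommended 1777 (world-writable, sticky) mode; the owner and group shown are deployment-specific assumptions:

    hadoop fs -mkdir -p /mapred/history/done_intermediate
    hadoop fs -chmod 1777 /mapred/history/done_intermediate
    hadoop fs -chown -R mapred:hadoop /mapred/history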

Hadoop 1.0.3 join

2013-06-18 Thread Ahmed Elgohary
Hello, I am using Hadoop 1.0.3 and trying to join multiple input files using CompositeInputFormat. It seems to me that I have to use the old API to write the join job, since the new API does not support joins in Hadoop 1.0.3. Is that correct? Thanks, --ahmed

Re: Hadoop 1.0.3 join

2013-06-18 Thread Harsh J
Yes, it doesn't exist in the new API in 1.0.3. On Wed, Jun 19, 2013 at 6:45 AM, Ahmed Elgohary aagoh...@gmail.com wrote: Hello, I am using hadoop 1.0.3 and trying to join multiple input files using CompositeInputFormat. It seems to me that I have to use the old api to write the join job
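For reference, a minimal sketch of configuring the map-side join with the old org.apache.hadoop.mapred API (the paths and the choice of KeyValueTextInputFormat are assumptions for illustration):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;
    import org.apache.hadoop.mapred.join.CompositeInputFormat;

    public class JoinSketch {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(JoinSketch.class);
        conf.setInputFormat(CompositeInputFormat.class);
        // "inner" keeps only keys present in both inputs; the mapper then
        // receives (key, TupleWritable) pairs, one slot per source.
        conf.set("mapred.join.expr", CompositeInputFormat.compose(
            "inner", KeyValueTextInputFormat.class,
            new Path("/data/left"), new Path("/data/right")));
        // ...set mapper/reducer and output path, then JobClient.runJob(conf)
      }
    }

Note that both inputs must be sorted and identically partitioned for CompositeInputFormat to pair records correctly.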

Re: DFS Permissions on Hadoop 2.x

2013-06-18 Thread Harsh J
This is an HDFS bug. Like all other methods that check for permissions being enabled, the client call setPermission should check it as well. It does not do that currently, and I believe it should be a NOP in such a case. Please do file a JIRA (and reference the ID here to close the loop)! On
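A sketch of the guard being described (the names here are hypothetical illustrations, not actual HDFS source):

    // hypothetical namenode-side handler, not real HDFS code
    void setPermission(String src, FsPermission perm) throws IOException {
      if (!permissionsEnabled) {
        return; // dfs.permissions.enabled=false: succeed as a no-op
      }
      applyPermission(src, perm); // hypothetical helper for the normal path
    }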

Mounting HDFS as Local File System using FUSE

2013-06-18 Thread Mohammad Mustaqeem
I want to mount HDFS as a local file system using FUSE, but I don't know how to install FUSE. I am using Ubuntu 12.04. I found these instructions http://xmodulo.com/2012/06/how-to-mount-hdfs-using-fuse.html but when I run sudo apt-get install hadoop-0.20-fuse I get the following error: Reading
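Note that hadoop-0.20-fuse is a Cloudera (CDH) package, so plain Ubuntu apt sources will not find it until the CDH repository is added. Once it is installed, the mount usually looks like this sketch (host, port, and mount point are illustrative):

    sudo mkdir -p /mnt/hdfs
    hadoop-fuse-dfs dfs://namenode-host:8020 /mnt/hdfs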

RE: How does YARN execute an MRv1 job?

2013-06-18 Thread Devaraj k
Hi Sam, Please find the answers to your queries. - YARN can run multiple kinds of jobs (MR, MPI, ...), but an MRv1 job has a special execution process (map, shuffle, reduce) in Hadoop 1.x, so how does YARN execute an MRv1 job? Does it still include some of the special MR steps from Hadoop 1.x, like map, sort, merge,

Re: How does YARN execute an MRv1 job?

2013-06-18 Thread Rahul Bhattacharjee
Hi Devaraj, As for the container request for a YARN container, currently only memory is considered as a resource, not CPU. Please correct. Thanks, Rahul On Wed, Jun 19, 2013 at 11:05 AM, Devaraj k devara...@huawei.com wrote: Hi Sam, Please find the answers to your queries.

Re: How does YARN execute an MRv1 job?

2013-06-18 Thread Rahul Bhattacharjee
By "please correct", I meant: please correct me if my statement is wrong. On Wed, Jun 19, 2013 at 11:11 AM, Rahul Bhattacharjee rahul.rec@gmail.com wrote: Hi Devaraj, As for the container request for a YARN container, currently only memory is considered as a resource, not CPU.

Re: How does YARN execute an MRv1 job?

2013-06-18 Thread Arun C Murthy
Not true, the CapacityScheduler has support for both CPU and memory now. On Jun 18, 2013, at 10:41 PM, Rahul Bhattacharjee rahul.rec@gmail.com wrote: Hi Devaraj, As for the container request for a YARN container, currently only memory is considered as a resource, not CPU. Please
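For context, a minimal sketch of an AM-side request that carries both dimensions with the Hadoop 2.x client API (the values, and the assumption of an already-initialized AMRMClient passed in, are illustrative):

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    // assumes amRMClient was created and started during the AM's setup
    static void requestContainer(AMRMClient<ContainerRequest> amRMClient) {
      // ask for 1024 MB of memory and 2 virtual cores
      Resource capability = Resource.newInstance(1024, 2);
      ContainerRequest ask =
          new ContainerRequest(capability, null, null, Priority.newInstance(0));
      amRMClient.addContainerRequest(ask);
    }

Whether the vcore dimension is actually enforced depends on scheduler configuration (for the CapacityScheduler, on which resource calculator is in use).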