Is it possible to say that I just want the last 4 jobs in the job history
in Yarn?
--
Best regards,
I don't think there is anything like this based on the number of jobs. Can you
give me the use case for this?
If you want to delete the old history files, the history server has a provision
for that. You can use this configuration and change its value:
mapreduce.jobhistory.max-age-ms
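For reference, a minimal sketch of how this could look in mapred-site.xml; the one-week retention shown here is only an illustrative value, not a recommendation:

```xml
<!-- mapred-site.xml: let the job history server purge old history files.
     604800000 ms = 7 days; pick whatever retention suits your cluster. -->
<property>
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>604800000</value>
</property>
```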
Hi Rahul
I even searched for the files using find / -name 'attemp*.profile', but still
nothing was found.
Can you indicate the format of the file name?
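As typed, the command has a stray space (`- name`) and an unquoted glob that the shell may expand before find ever sees it. A sketch of a working invocation, using a scratch directory and a made-up file name following the attempt_<jobID>_<taskID>_<attemptID>.profile pattern that profiler output typically uses:

```shell
# Create a scratch dir with a file matching the usual profiler-output
# naming pattern; the specific IDs here are invented for illustration.
mkdir -p /tmp/profile-demo
touch /tmp/profile-demo/attempt_201306180001_0001_m_000000_0.profile

# Quote the glob so the shell passes it to find verbatim,
# and write -name with no space after the dash.
find /tmp/profile-demo -name 'attempt*.profile'
```

Searching from `/` as in the original command also works, but is slow and needs root to avoid permission noise; start from the directory the job was submitted in.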
Thanks
Sent from my iPhone
On 2013-6-18, at 20:27, Rahul Bhattacharjee rahul.rec@gmail.com wrote:
In the same directory from which the job has
Hello,
We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
question around disabling dfs permissions on the latter version. For some
reason, setting the following config does not seem to work
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
Thanks Brahma,
I am kind of afraid to run the command; I had an issue on the jobtracker earlier
this year. I launched the command and it caused the jobtracker to stop
responding for long enough that we needed to roll the jobtracker instead. So I
am kind of afraid to run it on the production namenode.
Any
It sounds like a change in the behavior.
Regards
JB
On 06/18/2013 09:04 PM, Prashant Kommireddi wrote:
Thanks for the reply, Chris.
Yes, I am certain this worked with 0.20.2. It used a slightly different
property, and I have checked that setting it to false actually disables
permission checking.
1) The tradeoff is between reducing the overhead of distributed computing
and reducing the cost of failure.
Fewer tasks mean less overhead, but the cost of failure will be bigger, mainly
because the distribution will be coarser. One of the reasons was outlined
before. A (failed) task is related to an
Hi Chris,
This is while running an MR job. Please note the job is able to write files
to the /mapred directory but fails on EXECUTE permissions. On digging in some
more, it looks like the failure occurs after writing to
/mapred/history/done_intermediate.
Here is a more detailed stacktrace.
INFO: Job
Looks like the jobs fail only on the first attempt and pass thereafter.
Failure occurs while setting perms on intermediate done directory. Here
is what I think is happening:
1. The intermediate done dir is (ideally) created as part of deployment (e.g.,
/mapred/history/done_intermediate)
2. When a
Hello,
I am using hadoop 1.0.3 and trying to join multiple input files using
CompositeInputFormat. It seems to me that I have to use the old API to
write the join job since the new API does not support joins in hadoop 1.0.3.
Is that correct?
thanks,
--ahmed
Yes, it doesn't exist in the new API in 1.0.3.
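For anyone landing here, a sketch of the old-API setup. The string below follows the tbl(...) expression grammar that org.apache.hadoop.mapred.join.CompositeInputFormat parses from the mapred.join.expr property; normally you would build it with CompositeInputFormat.compose() rather than by hand. The tbl() helper and the paths here are illustrative, not part of the Hadoop API:

```java
// Sketch: assembling the join expression that the old-API
// CompositeInputFormat reads from "mapred.join.expr".
// The tbl() helper and the /data paths are hypothetical;
// CompositeInputFormat.compose() builds this string for you.
public class JoinExprSketch {

    // Render one input as a tbl(<input format class>,"<path>") term.
    static String tbl(String inputFormatClass, String path) {
        return "tbl(" + inputFormatClass + ",\"" + path + "\")";
    }

    public static void main(String[] args) {
        String expr = "inner("
                + tbl("org.apache.hadoop.mapred.KeyValueTextInputFormat", "/data/left")
                + ","
                + tbl("org.apache.hadoop.mapred.KeyValueTextInputFormat", "/data/right")
                + ")";
        // In the actual (old-API) job driver you would then do roughly:
        //   conf.setInputFormat(CompositeInputFormat.class);
        //   conf.set("mapred.join.expr", expr);
        System.out.println(expr);
    }
}
```

Note this requires all inputs to be sorted and identically partitioned on the join key, which is the usual precondition for a map-side join.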
On Wed, Jun 19, 2013 at 6:45 AM, Ahmed Elgohary aagoh...@gmail.com wrote:
Hello,
I am using hadoop 1.0.3 and trying to join multiple input files using
CompositeInputFormat. It seems to me that I have to use the old api to write
the join job
This is a HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!
On
I want to mount HDFS as a local file system using FUSE, but I don't know
how to install FUSE.
I am using Ubuntu 12.04.
I found these instructions
http://xmodulo.com/2012/06/how-to-mount-hdfs-using-fuse.html but when
I run sudo
apt-get install hadoop-0.20-fuse
I got the following error:
Reading
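One likely cause, stated as an assumption since the error is truncated: hadoop-0.20-fuse is a CDH package from Cloudera's repository, not stock Ubuntu, so apt-get cannot find it until that repository is configured. Once installed, the package provides a hadoop-fuse-dfs helper; a sketch of an /etc/fstab entry for a persistent mount (the namenode host, port, and mount point here are placeholders):

```
hadoop-fuse-dfs#dfs://namenode-host:8020 /mnt/hdfs fuse allow_other,usetrash,rw 2 0
```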
Hi Sam,
Please find the answers for your queries.
- Yarn can run multiple kinds of jobs (MR, MPI, ...), but an MRv1 job has a
special execution process (map, shuffle, reduce) in Hadoop 1.x, so how does Yarn
execute an MRv1 job? Does it still include some of the special MR steps from
Hadoop 1.x, like map, sort, merge,
Hi Devaraj,
As for the container request for a yarn container, currently only
memory is considered as a resource, not CPU. Please correct.
Thanks,
Rahul
On Wed, Jun 19, 2013 at 11:05 AM, Devaraj k devara...@huawei.com wrote:
Hi Sam,
Please find the answers for your queries.
By "please correct", I meant: please correct me if my statement is wrong.
On Wed, Jun 19, 2013 at 11:11 AM, Rahul Bhattacharjee
rahul.rec@gmail.com wrote:
Hi Devaraj,
As for the container request for a yarn container, currently only
memory is considered as a resource, not CPU.
Not true, the CapacityScheduler has support for both CPU and memory now.
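For completeness, a sketch of how CPU-aware scheduling is switched on for the CapacityScheduler; this assumes a 2.x release where the DominantResourceCalculator ships with YARN:

```xml
<!-- capacity-scheduler.xml: make the CapacityScheduler account for
     CPU (vcores) as well as memory when placing containers. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```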
On Jun 18, 2013, at 10:41 PM, Rahul Bhattacharjee rahul.rec@gmail.com
wrote:
Hi Devaraj,
As for the container request for a yarn container, currently only
memory is considered as a resource, not CPU. Please