Hi Philip,
The cluster metrics in the attached screenshot show that there are no active
nodes.
Have you started any NodeManager process?
-Varun Saxena.
From: Phillip Wu [mailto:phillip...@unsw.edu.au]
Sent: 15 June 2016 15:28
To: user@hadoop.apache.org
Cc: Sunil Govind; Varun saxena
Subject
Can you open the ResourceManager (RM) UI and share a screenshot of the main RM
page? We can check the cluster resources there. Most probably the cluster does
not have enough resources.
How much memory and how many vcores does your AM need?
RM UI can be accessed at http://localhost:8088/
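If the AM is asking for more than the scheduler can grant, the MapReduce AM's requested resources can be tuned in mapred-site.xml. A minimal sketch (the values are illustrative, not recommendations):

```xml
<!-- mapred-site.xml: resources requested by the MapReduce AM container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value> <!-- illustrative; must fit within the scheduler maximums -->
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>
  <value>1</value>
</property>
```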
- Varun Saxena.
From
Hi Silnov,
Can you check your AM logs and compare them with the MAPREDUCE-6513 scenario?
I suspect it is the same issue.
MAPREDUCE-6513 is marked to go into 2.7.3.
Regards,
Varun Saxena.
From: Silnov [mailto:sil...@sina.com]
Sent: 24 February 2016 14:52
To: user
Subject: MapReduce job doesn't make any pro
Regards,
Varun Saxena
On Mon, Feb 22, 2016 at 12:01 PM, kumar r wrote:
> Hi,
>
> I have configured a hadoop-2.7.2 pseudo-node cluster on Windows. When I
> submit an MR job, it works fine. But if I submit multiple MR jobs, then only
> one job runs at a time.
>
> First job is in
There is already a JIRA for it - MAPREDUCE-5938 (and other related JIRAs,
MAPREDUCE-6402 and YARN-4119).
Regards,
Varun Saxena
On Tue, Dec 8, 2015 at 9:56 PM, Harsh J wrote:
> Hello,
>
> Could you file a JIRA for this please? Currently the ShuffleHandler will
> always bind to
ger's logs for this
node (10.0.0.5:30050)? Something like deactivating node, unhealthy, etc.
Regards,
Varun Saxena.
On Thu, Nov 12, 2015 at 12:32 AM, Sergey wrote:
> Hi Varun,
> thank you!
>
> But the main question is the reason...
>
> I did check log files with yarn lo
ILLED and a new attempt is launched.
Regards,
Varun Saxena.
On Wed, Nov 11, 2015 at 11:44 PM, Sergey wrote:
> Hi,
>
> yes, there are several "failed" maps, because of the 600 sec timeout.
>
> I also found a lot messages like this in the log:
>
> 2015-11-09 22:00:35,882
Hi Istabrak,
Sorry, I had misunderstood your query a little.
The code as written is a little bit wrong: you are not really using the Tool
interface.
You can change the code as follows:
public static void main(String[] argv) throws Exception {
  // ToolRunner parses the generic options and then calls AvgSix.run()
  int ret = ToolRunner.run(null, new AvgSix(), argv);
  System.exit(ret);
}
Two configurations are used to pass command-line opts to mappers and
reducers.
These are mapreduce.map.java.opts and mapreduce.reduce.java.opts, for
mappers and reducers respectively.
There is a configuration named mapred.child.java.opts if you want the same
arguments to be passed to both mappers and reducers.
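As a sketch, the per-task settings above go in mapred-site.xml (the heap sizes here are purely illustrative):

```xml
<!-- mapred-site.xml: JVM options for map and reduce task containers -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value> <!-- illustrative heap for mappers -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value> <!-- illustrative heap for reducers -->
</property>
```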
This configuration is used for redirection by the NM web UI.
The NM will append the container ID, node ID and application owner depending
on the container whose logs you want to see.
Regards,
Varun Saxena.
On Thu, Oct 1, 2015 at 7:33 PM, Boyu Zhang wrote:
> Hello users,
>
> I have the job history serv
The Job History Server runs its cleaner 30 seconds after restart and then once
every day, if the cleaner is enabled. That is why jobs older than 7 days would
have been deleted.
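For reference, the cleaner behaviour described above is controlled by two settings (the values shown are, to the best of my knowledge, the mapred-default.xml defaults):

```xml
<property>
  <name>mapreduce.jobhistory.cleaner.enable</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.jobhistory.cleaner.interval-ms</name>
  <value>86400000</value> <!-- one day -->
</property>
```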
Regarding your second question.
No, you cannot recover deleted files.
Regards,
Varun Saxena
On Tue, Sep 22, 2015 at 7:08 AM, Boyu Zhang
/done_intermediate
and ${yarn.app.mapreduce.am.staging-dir}/history/done respectively.
7 days is also configurable (the config being mapreduce.jobhistory.max-age-ms).
You can set this value according to your cluster.
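A sketch of overriding that retention in mapred-site.xml (604800000 ms is 7 days, the default mentioned above):

```xml
<property>
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>604800000</value> <!-- 7 days; raise or lower to suit your cluster -->
</property>
```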
I hope this answers your question.
Regards,
Varun Saxena.
On Tue, Sep 22, 2015 at 1:39 AM, Boyu
to
configure them.
Just to clarify though, the apps themselves are not lost, as in, the output
is not lost. It is just the information about them which is no longer present
after an RM restart.
Regards,
Varun Saxena.
On Mon, Sep 21, 2015 at 10:31 PM, Boyu Zhang wrote:
> Thanks for the answer Va
Hi Boyu,
The RM stores apps in the state store if recovery is enabled; only then will
they be available after a restart.
Otherwise they are kept in memory and hence lost on restart.
You may not have it enabled. Check the value of the config below. By default
it is false.
yarn.resourcemanager.recovery.enabled
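A minimal yarn-site.xml sketch to turn recovery on (FileSystemRMStateStore is just one of the available store implementations; a ZooKeeper-based store also exists):

```xml
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <!-- one common choice of state store; pick the one that fits your deployment -->
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
</property>
```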
R
job.
Regards,
Varun Saxena
On Mon, Aug 24, 2015 at 1:26 AM, Varun Saxena
wrote:
> This configuration is read and used by NodeManager, on whichever node its
> running.
> If it is not configured, default value will be taken.
>
> Regards,
> Varun Saxena.
>
> On Mon, Aug
This configuration is read and used by the NodeManager on whichever node it is
running.
If it is not configured, the default value will be taken.
Regards,
Varun Saxena.
On Mon, Aug 24, 2015 at 1:21 AM, Pedro Magalhaes
wrote:
> Thanks Varun! Like we say in Brazil. "U are the guy!" (
-default.xml)
may not be available AFAIK.
However, this XML (yarn-default.xml) can be checked online in the git
repository.
The associated JIRA which fixes this is
https://issues.apache.org/jira/browse/YARN-3823
Regards,
Varun Saxena.
On Mon, Aug 24, 2015 at 12:53 AM, Pedro Magalhaes
wrote:
> Tha
Hi Pedro,
The real default value of yarn.scheduler.maximum-allocation-vcores is 4.
The value of 32 was actually a documentation issue and has been fixed
recently.
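For illustration, the setting (with its actual default value) as it would appear in yarn-site.xml:

```xml
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value> <!-- the actual default, despite what older docs said -->
</property>
```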
Regards,
Varun Saxena.
On Sun, Aug 23, 2015 at 10:39 PM, Pedro Magalhaes
wrote:
> Varun,
> Thanks for the reply. I undesta
ing Linux and Windows) if
configured to do so. This change will be available in 2.8.
I hope this answers your question.
Regards,
Varun Saxena.
On Sun, Aug 23, 2015 at 9:40 PM, Pedro Magalhaes
wrote:
> I was looking at default parameters for:
>
> yarn.nodemanager.resource.cp