It should be normal. If you check the diagnostics of the container, you will
likely see "Container killed by the ApplicationMaster". The MR
AM stops the container when a task is finished.
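For instance, the diagnostics and logs of a finished application can be pulled afterwards with the yarn CLI (the application id below is a placeholder; log aggregation must be enabled):

```shell
# requires yarn.log-aggregation-enable=true in yarn-site.xml;
# replace the id with your own application's id from the RM web UI
yarn logs -applicationId application_1396400000000_0001
```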
Thanks,
Zhijie
On Wed, Apr 2, 2014 at 7:22 PM, Fengyun RAO raofeng...@gmail.com wrote:
same
So you are on machine n2, right? Run the command "hostname n2", then run
"hostname -f" and see what the output is.
--
Cheers
-MJ
Hi Zhijie,
Yes, I opened debug level, and looked through RM, AM, container's logs,
container always receive SIGTERM(15) after transition to FINISHED, webUI
just shows the diagnostics message.
Thanks Zhijie.
On Thu, Apr 3, 2014 at 2:00 PM, Zhijie Shen zs...@hortonworks.com wrote:
It should be
Thanks Ravi, I am using Graphviz as Jeff said. It's enough for me.
On Thu, Apr 3, 2014 at 5:12 AM, Ravi Prakash ravi...@ymail.com wrote:
Hi Azuryy!
You have to use dot to convert it to a PNG
On Tuesday, April 1, 2014 6:38 PM, Azuryy Yu azury...@gmail.com wrote:
Hi,
I compiled Yarn
Hi,
Please help with sample code for
getting files from the SharePoint API using MapReduce.
How do I download files from the SharePoint API using Java in MapReduce?
Thanks in advance
Ranjini
Just trying to read a txt file with 3 simple words on it.
On 03.04.2014 02:15, Shumin Guo wrote:
what is your input data like?
On Apr 2, 2014 10:16 AM, ei09072 ei09...@fe.up.pt wrote:
After installing Hadoop 2.3.0 on Windows 8, I tried to run the given
wordcount example. However, I get
2g
On Thu, Apr 3, 2014 at 1:30 PM, Stanley Shi s...@gopivotal.com wrote:
This doesn't seem to be related to the data size.
How much memory do you use for the reducer?
Regards,
*Stanley Shi,*
On Thu, Apr 3, 2014 at 8:04 AM, Li Li fancye...@gmail.com wrote:
I have a map reduce
*mapred.child.java.opts=-Xmx2g*
On Thu, Apr 3, 2014 at 5:10 PM, Li Li fancye...@gmail.com wrote:
2g
On Thu, Apr 3, 2014 at 1:30 PM, Stanley Shi s...@gopivotal.com wrote:
This doesn't seem to be related to the data size.
How much memory do you use for the reducer?
Regards,
*Stanley
you can think of each TrainingWeights as a very large double[] whose
length is about 10,000,000
TrainingWeights result = null;
int total = 0;
for (TrainingWeights weights : values) {
    if (result == null) {
        result = weights;
    } else {
        addWeights(result, weights);
    }
    total++;
}
if (total > 1) {
    divideWeights(result, total);
}
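A minimal, self-contained sketch of what the two helpers presumably do, assuming TrainingWeights wraps a plain double[] (the class and method names here just mirror the snippet above; the implementations are my guess at element-wise sum and average):

```java
public class WeightsOps {
    // result[i] += weights[i] for every component (element-wise sum)
    public static void addWeights(double[] result, double[] weights) {
        for (int i = 0; i < result.length; i++) {
            result[i] += weights[i];
        }
    }

    // result[i] /= total, turning the running sum into an average
    public static void divideWeights(double[] result, int total) {
        for (int i = 0; i < result.length; i++) {
            result[i] /= total;
        }
    }

    public static void main(String[] args) {
        double[] sum = {2.0, 4.0};
        addWeights(sum, new double[] {1.0, 2.0});
        divideWeights(sum, 2);
        System.out.println(java.util.Arrays.toString(sum));
    }
}
```

Note that with a ~10,000,000-element array, each TrainingWeights is on the order of 80 MB on its own, which is why the reducer's heap size matters here.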
Yes, thanks a lot, it is almost done.
From: m...@gopivotal.com
Date: Thu, 3 Apr 2014 14:07:24 +0800
Subject: Re: DNS pre request for hadoop install
To: user@hadoop.apache.org
So you are on machine n2, right? Run the command "hostname n2", then run
"hostname -f" and see what the output is.
--
Cheers
-MJ
You'd better change /etc/sysconfig/network as well so the hostname
is persisted after a system reboot:
HOSTNAME=n2.myhdp.com --> HOSTNAME=n2
On Thu, Apr 3, 2014 at 6:05 PM, Alex Lee eliy...@hotmail.com wrote:
Yes, thanks a lot, it is almost done.
--
From:
I know the default replication is 3, which ensures reliability even when 2 nodes
crash at the same time.
However, for a small cluster, e.g. 10~20 nodes, the probability that 2
nodes crash at the same time is quite small.
Can we simply set the replication to 2, or are there other drawbacks?
Hi All,
I have a query with respect to JBOD configuration . Suppose in my
data node if one of the
disk crashes in JBOD configuration will the entire data node be shown as
dead/unavailable ?
--
Warm Regards,
*Bharath Kumar *
AFAIK, the data node will be marked dead, since it cannot handle even one disk
failure (if it cannot write data to a disk, it will fail).
But I'm not sure if this situation has changed.
Regards,
*Stanley Shi,*
On Thu, Apr 3, 2014 at 7:29 PM, Bharath Kumar bharath...@gmail.com wrote:
Hi All,
I have a
Use dfs.datanode.failed.volumes.tolerated
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
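For example, to let a datanode survive one failed disk before shutting down, hdfs-site.xml could carry something like this (the value 1 is illustrative; the default is 0):

```xml
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- number of volumes that may fail before the datanode stops offering service -->
  <value>1</value>
</property>
```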
On Thu, Apr 3, 2014 at 6:07 PM, Stanley Shi s...@gopivotal.com wrote:
AFAIK, data node will be dead since it cannot handle one disk failure ( if
it cannot write data
The reason for replication also has to do with data locality when running
MapReduce jobs on a larger cluster. You can reduce the replication;
that's why it's a configurable parameter.
On Thu, Apr 3, 2014 at 7:10 AM, Fengyun RAO raofeng...@gmail.com wrote:
I know the default replication is 3,
unsubscribe
2014-04-03 20:54 GMT+08:00 anuj maurice anuj.maur...@gmail.com:
Use dfs.datanode.failed.volumes.tolerated
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
On Thu, Apr 3, 2014 at 6:07 PM, Stanley Shi s...@gopivotal.com wrote:
AFAIK, data
Hi,
This is regarding a single node cluster setup
If I have a value of 0.0.0.0:8050 for yarn.nodemanager.address in the
configuration file yarn-site.xml/yarn-default.xml, is it a mandatory
requirement that "ssh 0.0.0.0" works on my machine to be able to
start YARN? Or will I be
There are several issues that could come together here; since only you know your
data, we can only guess:
1) The mapred.child.java.opts=-Xmx2g setting only works IF you didn't set
mapred.map.child.java.opts or mapred.reduce.child.java.opts; otherwise, the
latter ones will override mapred.child.java.opts.
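As an illustration of point 1), with both properties present in mapred-site.xml the reduce-specific one wins for reducers, so the generic 2g would silently not apply there (the values below are only examples, using the old-style property names from this thread):

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2g</value>
</property>
<property>
  <!-- if present, this overrides mapred.child.java.opts for reduce tasks -->
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```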
I was able to address item (2) below.
Looking through the logs, I noticed that the node manager initiated
shutdown but was killed before it could finish. So I increased the value
of YARN_STOP_TIMEOUT from the default 5 secs to 10 secs, and in some cases 30
secs. Is it normal to have longer than
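For reference, the change amounts to something like the following in yarn-env.sh (assuming your scripts source that file; 30 is the value used in the worst cases above):

```shell
# give YARN daemons up to 30 seconds to shut down cleanly before being killed
export YARN_STOP_TIMEOUT=30
```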
please send the mail to user-unsubscr...@hadoop.apache.org
Send From My Macbook
On Apr 3, 2014, at 9:52 PM, Levin ding dingweiqi...@gmail.com wrote:
unsubscribe
2014-04-03 20:54 GMT+08:00 anuj maurice anuj.maur...@gmail.com:
Use dfs.datanode.failed.volumes.tolerated
you can use yarn-daemon.sh to start nodemanager without ssh
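The invocation is roughly the following, run on the node itself (the path depends on your install; no ssh is involved):

```shell
# starts only the NodeManager daemon on the local machine
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
```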
On Thu, Apr 3, 2014 at 10:36 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
This is regarding a single node cluster setup
If I have a value of 0.0.0.0:8050 for yarn.nodemanager.address in the
I've no idea what your program is doing.
But you'd better estimate the memory consumption of your reducer, and then
pick a proper Xmx size for it.
It looks like your reducer needs a lot of memory to cache the TrainingWeights.
On Thu, Apr 3, 2014 at 5:53 PM, Li Li fancye...@gmail.com wrote:
you can
Hi Krishna,
Don't worry about that; there is no "ssh 0.0.0.0" when starting the NM, it's a
local service.
On Fri, Apr 4, 2014 at 9:12 AM, Shengjun Xin s...@gopivotal.com wrote:
you can use yarn-daemon.sh to start nodemanager without ssh
On Thu, Apr 3, 2014 at 10:36 PM, Krishna Kishore Bonagiri
thanks, Peyman!
I know it's configurable; what I don't know is whether it's typical to reduce it
in a small cluster,
or whether there is any recommended setting, such as 2 for a 10-node cluster, 3
for 100 nodes, 4 for 1000 nodes,
or, no matter how big the cluster is, just set it to 3.
2014-04-03 21:13 GMT+08:00