Hi all,
I ran into an error when running fsck -move, and I'm wondering whether I did
something wrong or whether this is an HDFS bug.
The Hadoop version is 2.4.0.
My steps are as follows:
*1. Set up a pseudo cluster*
*2. Copy a file to hdfs*
*3. Corrupt a block of the file*
*4. Run fsck to check:*
Connecting
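The steps above can be sketched as follows (the file name, block file name, and data-directory path are all hypothetical; on 2.4.0, fsck is invoked through the hdfs command):

```shell
# 2. Copy a file into HDFS
hdfs dfs -put test.txt /test.txt
# 3. Corrupt one replica on disk; blk_... is a hypothetical block file
#    found under the directory configured by dfs.datanode.data.dir
echo garbage > /data/dfs/dn/current/.../blk_1073741825
# 4. Check, asking fsck to move the corrupt file's remains to /lost+found
hdfs fsck /test.txt -files -blocks -move
```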
Did you configure the environment variables, such as $HADOOP_YARN_HOME?
On Tue, May 20, 2014 at 11:32 AM, Ted Yu yuzhih...@gmail.com wrote:
Which user did you use for the yarn-daemon.sh command ?
See the following:
bq. mkdir: cannot create directory `/logs': Permission denied
Cheers
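One likely cause, sketched under the assumption that the daemon script derives its default log directory from $HADOOP_YARN_HOME: if that variable is empty, $HADOOP_YARN_HOME/logs collapses to /logs at the filesystem root, which a non-root user cannot create.

```shell
# If HADOOP_YARN_HOME is unset, the default log dir degenerates to /logs
echo "HADOOP_YARN_HOME=$HADOOP_YARN_HOME"
# Fix by exporting the variable (install path is illustrative),
# or by pointing YARN_LOG_DIR at a directory the yarn user can write
export HADOOP_YARN_HOME=/opt/hadoop-2.2.0
export YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
```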
Do you get any error log?
On Tue, May 20, 2014 at 4:10 AM, Anfernee Xu anfernee...@gmail.com wrote:
Hi,
I'm running Hadoop 2.2.0, and I have one node that runs only the NodeManager
(it is not a DataNode). Now I want to decommission it from the MR cluster, and
I followed the steps below:
1. add node
.
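A sketch of the usual YARN decommission recipe, assuming yarn-site.xml already points yarn.resourcemanager.nodes.exclude-path at an exclude file (the file path and hostname below are illustrative):

```shell
# Add the NodeManager's hostname to the exclude file ...
echo "nm-host.example.com" >> /etc/hadoop/conf/yarn.exclude
# ... then tell the ResourceManager to re-read its node lists
yarn rmadmin -refreshNodes
```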
--
Chandra
On Mon, May 5, 2014 at 11:31 AM, chandra kant
chandralakshmikan...@gmail.com wrote:
Thanks..
I set it accordingly..
On Monday, 5 May 2014, Shengjun Xin s...@gopivotal.com wrote:
I think you need to set two different sets of environment variables for hadoop1
and hadoop2, such as HADOOP_HOME and HADOOP_CONF_DIR, and before you run a
hadoop command you need to make sure the correct environment variables are in
effect.
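A minimal sketch of switching between the two installs in one shell session; the install prefixes are hypothetical, and hadoop2 is assumed to keep its configs under etc/hadoop:

```shell
# Hypothetical install prefixes -- adjust to your own layout.
HADOOP1_HOME=/opt/hadoop-1.2.1
HADOOP2_HOME=/opt/hadoop-2.2.0

# Point HADOOP_HOME and HADOOP_CONF_DIR at one release before running
# any hadoop command; hadoop1 uses conf/, hadoop2 uses etc/hadoop/.
use_hadoop1() {
  export HADOOP_HOME="$HADOOP1_HOME"
  export HADOOP_CONF_DIR="$HADOOP_HOME/conf"
  export PATH="$HADOOP_HOME/bin:$PATH"
}

use_hadoop2() {
  export HADOOP_HOME="$HADOOP2_HOME"
  export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
  export PATH="$HADOOP_HOME/bin:$PATH"
}

use_hadoop2
echo "$HADOOP_CONF_DIR"
```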
On Mon, May 5, 2014 at 12:36 PM, chandra kant
How to do it now?
- Original Message -
*From:* Shengjun Xin s...@gopivotal.com
*To:* user@hadoop.apache.org
*Sent:* Thursday, April 17, 2014 12:42 PM
*Subject:* Re: question about hive under hadoop
Did you start datanode service?
On Thu, Apr 17, 2014 at 9:23 PM, Karim Awara karim.aw...@kaust.edu.sa wrote:
Hi,
Whenever I start Hadoop on 24 machines, the following exception appears in
the JobTracker log file on the NameNode. I would appreciate any help. Thank
you.
2014-04-17
Maybe it's a configuration problem; what is the content of your configuration?
On Thu, Apr 17, 2014 at 10:40 AM, 易剑 eyj...@gmail.com wrote:
*How to solve the following problem?*
*hadoop-hadoop-secondarynamenode-Tencent_VM_39_166_sles10_64.out:*
Java HotSpot(TM) 64-Bit Server VM warning: You have
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing zhan...@neusoft.com wrote:
I use hive-0.11.0 under hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation:
It is probably /tmp/$username/hive.log; you can check the parameter
'hive.log.dir' in hive-log4j.properties.
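For example, you can read the setting straight out of the conf file rather than guessing (the paths assume a default install):

```shell
grep 'hive.log.dir' "$HIVE_HOME/conf/hive-log4j.properties"
# the stock default expands to /tmp/<your user name>
ls -l /tmp/$USER/hive.log
```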
On Thu, Apr 17, 2014 at 1:18 PM, EdwardKing zhan...@neusoft.com wrote:
Where is hive.log? Thanks.
- Original Message -
*From:* Shengjun Xin s...@gopivotal.com
*To:* user
Try 'bin/hadoop classpath' to check whether the classpath is what you
set.
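For example, printing one entry per line makes it easy to spot a missing or wrong path:

```shell
bin/hadoop classpath | tr ':' '\n'
```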
On Tue, Apr 15, 2014 at 4:16 PM, Anacristing 99403...@qq.com wrote:
Hi,
I'm trying to set up Hadoop (version 2.2.0) on Windows (32-bit) with
cygwin (version 1.7.5).
I export
I think you can use the command 'mvn package -Pnative,dist -DskipTests' in
the source code root directory to build the binaries.
On Wed, Apr 16, 2014 at 2:31 AM, Justin Mrkva m...@justinmrkva.com wrote:
I'm using the guide at
You can check the nodemanager log
On Mon, Apr 14, 2014 at 2:38 PM, Rahul Singh smart.rahul.i...@gmail.com wrote:
Hi,
I am running a job (the wordcount example) on a 3-node cluster (1 master and 2
slaves). Sometimes the job passes, but sometimes it fails (the reduce fails;
the input data is only a few KBs).
I
If you want to use start-all.sh, you need to configure SSH keys; otherwise
you can log in to each target machine and start the services manually.
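A sketch of the usual passwordless-SSH setup (run as the user that launches the daemons; the hostname is illustrative):

```shell
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # key with an empty passphrase
ssh-copy-id user@slave1                    # repeat for every host in the slaves file
ssh user@slave1 true                       # should complete without a password prompt
```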
On Tue, Apr 15, 2014 at 11:56 AM, Shashidhar Rao raoshashidhar...@gmail.com
wrote:
Hi ,
Can somebody please clarify: in a real production environment with multiple
nodes in
Check the ResourceManager log and the NodeManager log.
On Mon, Apr 14, 2014 at 11:37 AM, Rahul Singh smart.rahul.i...@gmail.com wrote:
Hi,
I am trying to run the wordcount example (the input file contains just a few
words), but the job seems to be stuck. How do I debug what went wrong?
[hduser@poc-hadoop04
Add '-Dmapreduce.user.classpath.first=true' to your command and try again.
On Wed, Apr 9, 2014 at 6:27 AM, Kim Chew kchew...@gmail.com wrote:
It seems to me that in Hadoop 2.2.1, the -libjars option does not look for
the jars on the local file system but on HDFS. For example,
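A hedged example of combining the flag with -libjars (the jar names, class name, and paths are all hypothetical):

```shell
hadoop jar myjob.jar com.example.MyJob \
  -Dmapreduce.user.classpath.first=true \
  -libjars /local/lib/dep1.jar,/local/lib/dep2.jar \
  /input /output
```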
You can use yarn-daemon.sh to start the NodeManager without SSH.
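For example, on the node itself (no SSH fan-out involved):

```shell
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh start nodemanager
$HADOOP_YARN_HOME/sbin/yarn-daemon.sh stop nodemanager
```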
On Thu, Apr 3, 2014 at 10:36 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
This is regarding a single node cluster setup
If I have a value of 0.0.0.0:8050 for yarn.nodemanager.address in the