- Original Message -
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
Pull the logs from the DataNode log file;
they will tell you why it stopped.
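A minimal sketch of that advice, assuming the default log directory and the usual hadoop-&lt;user&gt;-datanode-&lt;host&gt;.log naming (the fallback path is taken from the jar path used elsewhere in this thread; adjust both for your install):

```shell
# Sketch: scan the tail of the DataNode log for the shutdown reason.
# Path and file naming are assumptions; adjust for your installation.
LOG="${HADOOP_HOME:-/opt/yarn/hadoop-2.2.0}/logs/hadoop-hdfs-datanode-$(hostname).log"
if [ -f "$LOG" ]; then
  tail -n 200 "$LOG" | grep -iE 'fatal|error|exception|shutdown'
fi
```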
On Wed, Jul 2, 2014 at 2:05 PM, Edwar
I use Hadoop 2.2.0 and start the hadoop-daemon services as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270 SecondaryNameN
I want to remove a hadoop directory, so I use hadoop fs -rmr, but it can't
remove it. Why?
[hdfs@localhost hadoop-2.2.0]$ hadoop fs -ls
Found 2 items
drwxr-xr-x - hdfs supergroup 0 2014-07-01 17:52
QuasiMonteCarlo_1404262305436_855154103
drwxr-xr-x - hdfs supergroup 0 2014-07-01 18
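One thing worth checking, sketched below: in Hadoop 2.x the -rmr switch is deprecated in favour of -rm -r, and when fs.trash.interval is enabled a delete only moves the files to the trash unless -skipTrash is given. The directory name is taken from the listing above:

```shell
# Sketch: Hadoop 2.x replacement for "hadoop fs -rmr".
# With fs.trash.interval enabled, add -skipTrash to delete immediately.
DIR="QuasiMonteCarlo_1404262305436_855154103"   # name taken from the listing above
RM_CMD="hadoop fs -rm -r $DIR"
if command -v hadoop >/dev/null 2>&1; then
  $RM_CMD
else
  echo "$RM_CMD"   # run on a node with a configured hadoop client
fi
```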
I run hadoop-mapreduce-examples-2.2.0.jar and it gets the correct result, but it
raises the error "exitCode: 1 due to: Exception from container-launch". Why? How
do I solve it? Thanks.
[yarn@localhost sbin]$ hadoop jar
/opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
pi
-D
I use hadoop 2.2.0 under Red Hat Linux. I run the hadoop mapreduce examples as
follows:
[yarn@localhost sbin]$ hadoop jar
/opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
pi
-Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
-libjars
I try hadoop-mapreduce-examples-2.2.0.jar, and the screen output is:
mapred.ClientServiceDelegate: Application state is completed.
FinalApplicationStatus=SUCCEEDED
The final result should be as follows, but I don't find it. Why doesn't it show,
and where is the result?
Estimated value of Pi is 3.1425
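A sketch of one way to surface the missing line: the pi example takes two arguments, the number of maps and the samples per map, and the estimate is printed by the submitting client when the job finishes, so filtering the client output makes it easy to spot. The jar path is the one used in this thread; the argument values are arbitrary:

```shell
# Sketch: run the pi example with explicit arguments and filter for the result.
# Jar path and argument values are assumptions taken from / added to the thread.
JAR=/opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar "$JAR" pi 16 1000 2>&1 | grep 'Estimated value of Pi'
fi
```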
I want to try a hadoop example, but it raises the following error. Where is it
wrong, and how do I correct it? Thanks.
[root@localhost ~]# useradd -g hadoop yarn
useradd: user 'yarn' already exists
[root@localhost ~]# gpasswd -a hdfs hadoop
Adding user hdfs to group hadoop
[root@localhost ~]# su - hdfs
[hdfs@localhost
I use Hadoop 2.2.0 and flume-1.4.0. I updated protobuf from
protobuf-java-2.4.1.jar to protobuf-java-2.5.0.jar under flume1.4.0/lib, but
when I run flume with the following command it raises errors:
[hadoop@master flume]$ flume-ng agent --conf conf --conf-file agent4.conf
--name agent4
Info: Sourcin
When I use sqoop to import data from a mysql database into HDFS with the
following command:
$ sqoop import --connect jdbc:mysql://172.11.12.6/hadooptest --username
hadoopuser --password password --table employees
it raises the following error in yarn-hadoop-resourcemanager-master.log and
hadoop-hadoo
I use Hadoop 2.2.0. I know hadoop executes map first; when map reaches 100% it
executes reduce, and after reduce reaches 100% the job ends. I executed a job,
and the map went from 0% to 100%, then from 0% to 100% again. Why did map
execute twice? Thanks.
Hadoop job information for Stage-1: number of mapper
"Stage-1_REDUCE","taskType":"REDUCE","taskAttributes":"null","taskCounters":"null","operatorGraph":{"nodeType":"OPERATOR","roots":"null","adjacencyList":[{
, 2014 at 11:06 AM, EdwardKing wrote:
I use hive-0.11.0 under hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is
deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.ma
I want to use hive with hadoop 2.2.0, so I execute the following steps:
[hadoop@master /]$ tar -xzf hive-0.11.0.tar.gz
[hadoop@master /]$ export HIVE_HOME=/home/software/hive
[hadoop@master /]$ export PATH=${HIVE_HOME}/bin:${PATH}
[hadoop@master /]$ hadoop fs -mkdir /tmp
[hadoop@master /]$ hadoop fs -m
I want to verify the fsimage with the following command:
$ md5sum /var/hadoop/dfs/name/image/fsimage
but I don't know where the fsimage is under Hadoop 2.2.0. There is no
/var/hadoop/dfs/name/image/fsimage directory. Where is it? Thanks
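For reference, in Hadoop 2.x the fsimage moved: it lives under the directory set by dfs.namenode.name.dir, in the current/ subdirectory, named fsimage_&lt;txid&gt;; the 1.x .../image/fsimage path is gone. A sketch, assuming hdfs is on the PATH and a single file:// name directory:

```shell
# Sketch: locate and checksum the 2.x fsimage files on the NameNode.
# Assumes one file:// entry in dfs.namenode.name.dir.
if command -v hdfs >/dev/null 2>&1; then
  NAME_DIR=$(hdfs getconf -confKey dfs.namenode.name.dir | sed 's|^file://||')
  md5sum "$NAME_DIR"/current/fsimage_*
fi
```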
--
I want to look at the default rack configuration, so I use the following command
as the book says:
[hadoop@master sbin]$ hadoop fsck -rack
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/04/08 01:22:56 WARN util.NativeCodeLoader: Unable to
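A sketch of the 2.x form of this check: use the hdfs entry point (which avoids the deprecation warning above), and note the flag is -racks, plural; it is typically run as the HDFS superuser:

```shell
# Sketch: 2.x rack check via the hdfs command; the flag is -racks (plural).
FSCK_CMD="hdfs fsck / -racks"
if command -v hdfs >/dev/null 2>&1; then
  $FSCK_CMD
else
  echo "$FSCK_CMD"   # run on a node with a configured hdfs client
fi
```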
I looked through http://hadoop.apache.org/docs/r2.2.0/ but I can't find any url
for the documentation. Where can I download the hadoop 2.2.0 documentation?
Thanks
---
Confidentiality Notice: The information contained
I use Hadoop 2.2 and I want to open the MapReduce web UI, so I visit the
following url:
http://172.11.12.6:50030/jobtracker.jsp
Unable to connect: Firefox can't establish a connection to the server at
172.11.12.6:50030.
What is wrong?
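One likely explanation for the connection failure: Hadoop 2 removed the JobTracker, so nothing listens on 50030; YARN took over job tracking. A sketch of the URLs to try instead, assuming the default web ports (overridable in yarn-site.xml / mapred-site.xml):

```shell
# Sketch: the YARN ResourceManager UI and the MapReduce JobHistory server
# replace the old JobTracker page. Ports shown are the defaults.
RM_UI="http://172.11.12.6:8088/cluster"          # running and finished applications
HISTORY_UI="http://172.11.12.6:19888/jobhistory" # completed MapReduce jobs
echo "$RM_UI"
echo "$HISTORY_UI"
```

The JobHistory UI only works if the history server was started (mr-jobhistory-daemon.sh start historyserver, as shown later in this digest).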
---
Two nodes: one is master, the other is slave. I kill the DataNode on the slave,
then create a directory with a dfs command on the master machine:
[hadoop@master]$./start-all.sh
[hadoop@slave]$ jps
9917 DataNode
10152 Jps
[hadoop@slave]$ kill -9 9917
[hadoop@master]$ hadoop dfs -mkdir test
[hadoop@master]$ hadoop d
Also see the parameter
'dfs.namenode.stale.datanode.interval'.
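For what it's worth, the heartbeat-related knobs live in hdfs-site.xml on the NameNode and DataNodes. A sketch of the relevant properties; the values shown are my understanding of the 2.2.0 defaults, so verify them against hdfs-default.xml before relying on them:

```xml
<!-- hdfs-site.xml: heartbeat-related settings (sketch; values are the
     assumed 2.2.0 defaults; verify against hdfs-default.xml) -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>            <!-- seconds between DataNode heartbeats -->
</property>
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>300000</value>       <!-- ms; affects when a DataNode is marked dead -->
</property>
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>        <!-- ms; marks a DataNode stale for reads -->
</property>
```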
On Thu, Feb 27, 2014 at 7:08 AM, EdwardKing wrote:
I have two machines, one master and one slave. I want to know how to configure
the heartbeat in hadoop 2.2.0; which file should be modified?
Thanks.
---
Hadoop 2.2.0, two computers: one is master, the other is node1. I want to
understand the following scenario:
If node1 is down for some reason, but I don't know that node1 can't work, and I
then use a hadoop command to put a file, such as:
$ hadoop fs -put graph.txt graphin/graph.txt
I know the graph.txt file will be put on the master m
Sent: Monday, February 17, 2014 11:11 AM
Subject: Re: Aggregation service start
Please set yarn.log-aggregation-enable to true
in yarn-site.xml to enable log aggregation.
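The setting above would look like this in yarn-site.xml (a sketch; as I understand it, the NodeManagers need a restart for it to take effect):

```xml
<!-- yarn-site.xml (sketch) -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```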
-Zhijie
On Feb 16, 2014 6:15 PM, "EdwardKing" wrote:
hadoop 2.2.0: I want to view the Tracking UI, so I visit
http://172.11.12.6:8088/cluster,
then I click the History of a completed job, as follows:
MapReduce Job job_1392601388579_0001
Attempt Number  Start Time  Node  Logs
1 Sun Feb 16 17:44:57 PST 2014 master:804
No, you don't need to.
The master will start all required daemons on the slave.
Check all daemons using the jps command.
Sent from my iPhone
On Feb 16, 2014, at 7:03 PM, EdwardKing wrote:
I have installed hadoop-2.2.0 on two machines, one master and the other slave;
then I start the hadoop services on the master machine.
[hadoop@master ~]$./start-dfs.sh
[hadoop@master ~]$./start-yarn.sh
[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver
My question is whether I need start h
I use hadoop-2.2.0,
[hadoop@node1 ~]$ cd /home/software/hadoop-2.2.0/sbin/
[hadoop@node1 sbin]$ ./start-dfs.sh
[hadoop@node1 sbin]$ ./start-yarn.sh
then I use http://172.11.12.6:8088/cluster/apps to view All Applications; then
I submit a job and log some information as follows:
public void m
I have two nodes, one master (172.11.12.6) and one node1 (172.11.12.7). I use
the following command to run a program on 172.11.12.6:
[hadoop@master ~]$ hadoop jar wc2.jar WordCountPredefined abc.txt output
14/02/10 00:23:23 WA
th and block location) is on the master; the file
data itself is on the datanodes.
On Sun, Jan 26, 2014 at 2:22 PM, EdwardKing wrote:
I use Hadoop2.2.0 to create a master node and a sub node, as follows:
Live Datanodes : 2
Node  Transferring Address  Last Contact  Admin State  Configured Capacity (GB)
Used (GB)  Non DFS Used (GB)  Remaining (GB)  Used (%)
master  172.11.12.6:50010  1  In Service  1
I am a newbie to hadoop. I have downloaded hadoop-2.2.0 and want to learn it,
but I don't know which books cover hadoop version 2; I can only find books on
hadoop 1.x. Could anyone recommend a hadoop book for version 2?
Thanks.