I want to remove a Hadoop directory, so I use hadoop fs -rmr, but it can't remove it. Why?
[hdfs@localhost hadoop-2.2.0]$ hadoop fs -ls
Found 2 items
drwxr-xr-x - hdfs supergroup 0 2014-07-01 17:52
QuasiMonteCarlo_1404262305436_855154103
drwxr-xr-x - hdfs supergroup 0 2014-07-01
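In Hadoop 2.x, -rmr is deprecated in favor of -rm -r. A minimal sketch, assuming the directory name from the listing above and a user with write permission on it:

```shell
# -rmr is deprecated in Hadoop 2.x; use -rm -r instead.
# Add -skipTrash to bypass the trash and delete immediately
# (only if fs.trash.interval is enabled on this cluster).
hadoop fs -rm -r QuasiMonteCarlo_1404262305436_855154103
```

Note the directory is owned by hdfs with mode drwxr-xr-x, so the command must run as the hdfs user (or another user with write access to the parent directory).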
I use Hadoop 2.2.0. I start the hadoop-daemon services as follows:
[hdfs@localhost logs]$ hadoop-daemon.sh start namenode
[hdfs@localhost logs]$ hadoop-daemon.sh start secondarynamenode
[hdfs@localhost logs]$ hadoop-daemon.sh start datanode
[hdfs@localhost logs]$ jps
4135 NameNode
4270
- Original Message -
From: Nitin Pawar
To: user@hadoop.apache.org
Sent: Wednesday, July 02, 2014 4:49 PM
Subject: Re: why hadoop-daemon.sh stop itself
pull out the logs from datanode log file
it will tell why it stopped
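A sketch of pulling those logs, assuming the default log directory $HADOOP_HOME/logs; the exact file name follows the pattern hadoop-<user>-datanode-<hostname>.log, so adjust it for your machine:

```shell
# Show the tail of the datanode log (path and hostname are assumed examples):
tail -n 100 $HADOOP_HOME/logs/hadoop-hdfs-datanode-localhost.log

# Look for the reason it stopped:
grep -i -E "fatal|error|shutdown" \
  $HADOOP_HOME/logs/hadoop-hdfs-datanode-localhost.log
```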
On Wed, Jul 2, 2014 at 2:05 PM, EdwardKing
I run hadoop-mapreduce-examples-2.2.0.jar and it gets the correct result, but it raises "exitCode: 1 due to: Exception from container-launch". Why? How do I solve it? Thanks.
[yarn@localhost sbin]$ hadoop jar
/opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
pi
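One common cause of a container-launch failure with the pi example is missing arguments: pi requires the number of map tasks and the number of samples per map. A sketch with illustrative values (16 and 1000 are assumptions, not from the original post):

```shell
# pi <number of maps> <samples per map>; both numbers are examples.
hadoop jar /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
  pi 16 1000
```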
I use Hadoop 2.2.0 under Red Hat Linux. I run the Hadoop MapReduce examples as follows:
[yarn@localhost sbin]$ hadoop jar
/opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
pi
-Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
-libjars
I want to try a Hadoop example, but it raises the following error. What is wrong, and how do I correct it? Thanks.
[root@localhost ~]# useradd -g hadoop yarn
useradd: user 'yarn' already exists
[root@localhost ~]# gpasswd -a hdfs hadoop
Adding user hdfs to group hadoop
[root@localhost ~]# su - hdfs
[hdfs@localhost
I try hadoop-mapreduce-examples-2.2.0.jar, and the screen information is:
mapred.ClientServiceDelegate: Application state is completed.
FinalApplicationStatus=SUCCEEDED
The final result should be as follows, but I don't see it. Why doesn't it show, and where is the result?
Estimated value of Pi is
When I use Hadoop to import data from HDFS to a MySQL database with the following command:
$ sqoop import --connect jdbc:mysql://172.11.12.6/hadooptest --username
hadoopuser --password password --table employees
But it raises the following error in yarn-hadoop-resourcemanager-master.log and
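Note the direction of the command: sqoop import copies data from a database into HDFS, while moving data from HDFS into MySQL is sqoop export. A sketch using the connection details above; --export-dir is an assumed example path, not from the original post:

```shell
# HDFS -> MySQL is an export, not an import.
sqoop export \
  --connect jdbc:mysql://172.11.12.6/hadooptest \
  --username hadoopuser --password password \
  --table employees \
  --export-dir /user/hdfs/employees   # assumed HDFS source directory
```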
I use Hadoop 2.2.0. I know Hadoop executes map first; when map is at 100%, it executes reduce, and after reduce is at 100% the job ends. I executed a job, and map went from 0% to 100%, then from 0% to 100% again. Why did map execute twice? Thanks.
Hadoop job information for Stage-1: number of
For the first problem, you need to check the hive.log for the details
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing zhan...@neusoft.com wrote:
I use hive-0.11.0 under Hadoop 2.2.0, as follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation: mapred.input.dir.recursive is
deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/04/16 19:11:02 INFO Configuration.deprecation:
On Thu, Apr 17, 2014 at 11:06 AM, EdwardKing zhan...@neusoft.com wrote:
I use hive-0.11.0 under hadoop 2.2.0, like follows:
[hadoop@node1 software]$ hive
14/04/16 19:11:02 INFO Configuration.deprecation:
mapred.input.dir.recursive is deprecated. Instead, use
I want to use Hive in Hadoop 2.2.0, so I execute the following steps:
[hadoop@master /]$ tar -xzf hive-0.11.0.tar.gz
[hadoop@master /]$ export HIVE_HOME=/home/software/hive
[hadoop@master /]$ export PATH=${HIVE_HOME}/bin:${PATH}
[hadoop@master /]$ hadoop fs -mkdir /tmp
[hadoop@master /]$ hadoop fs
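The usual Hive setup (as described in the Hive getting-started documentation) also creates the warehouse directory and makes both HDFS directories group-writable; a sketch, assuming the default warehouse location:

```shell
# Default warehouse location is controlled by hive.metastore.warehouse.dir
# (default: /user/hive/warehouse).
hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse
```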
I want to verify the fsimage with the following command:
$ md5sum /var/hadoop/dfs/name/image/fsimage
but I don't know where the fsimage is under Hadoop 2.2.0. There is no /var/hadoop/dfs/name/image/fsimage directory. Where is it? Thanks
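In Hadoop 2.x the fsimage moved: it lives under the directory configured by dfs.namenode.name.dir, in a current/ subdirectory, and the files carry a transaction-id suffix (fsimage_<txid>) with a matching fsimage_<txid>.md5 checksum file written alongside. A sketch; the paths below are assumed examples:

```shell
# Find the configured NameNode metadata directory:
hdfs getconf -confKey dfs.namenode.name.dir

# List the checkpoint images (directory is an assumed example):
ls /var/hadoop/dfs/name/current/fsimage_*

# Hadoop 2 already writes a .md5 sidecar next to each image,
# so the checksum can be compared against it.
```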
I want to look at the default rack configuration, so I use the following command as the book says:
[hadoop@master sbin]$ hadoop fsck -rack
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/04/08 01:22:56 WARN util.NativeCodeLoader: Unable to
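As the deprecation warning says, in Hadoop 2.x fsck is invoked through the hdfs command; it also needs a path argument, and the rack flag is -racks (plural). A sketch:

```shell
# Check the whole filesystem and print the rack of each replica:
hdfs fsck / -racks
```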
I looked through http://hadoop.apache.org/docs/r2.2.0/ but I don't find any URL for the documentation. Where can I download the Hadoop 2.2.0 documentation?
Thanks
---
Confidentiality Notice: The information
I use Hadoop 2.2 and I want to view the MapReduce web UI, so I visit the following URL:
http://172.11.12.6:50030/jobtracker.jsp
Unable to connect: Firefox can't establish a connection to the server at
172.11.12.6:50030.
What is wrong?
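In Hadoop 2.x there is no JobTracker, so nothing listens on port 50030; MapReduce runs on YARN, and the equivalent UI is the ResourceManager's web page, by default on port 8088 (completed jobs appear in the JobHistory Server UI, default port 19888). A sketch, assuming default ports:

```shell
# ResourceManager web UI (replaces the JobTracker page):
curl -s http://172.11.12.6:8088/cluster | head

# JobHistory Server UI for completed jobs:
# http://172.11.12.6:19888/jobhistory
```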
I have two machines, one master and one slave. I want to know how to configure the heartbeat in Hadoop 2.2.0. Which file should be modified?
Thanks.
The ' parameter determines the datanode heartbeat interval in seconds.
Also have a look at the 'dfs.namenode.stale.datanode.interval' parameter.
On Thu, Feb 27, 2014 at 7:08 AM, EdwardKing zhan...@neusoft.com wrote:
I have two machine,one is master and another is slave, I want
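For reference, the datanode heartbeat interval is the dfs.heartbeat.interval property (value in seconds), which belongs in hdfs-site.xml; a sketch for checking the effective values on a running cluster:

```shell
# Print the effective heartbeat-related settings
# (dfs.heartbeat.interval defaults to 3 seconds):
hdfs getconf -confKey dfs.heartbeat.interval
hdfs getconf -confKey dfs.namenode.stale.datanode.interval
```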
Hadoop 2.2.0, two computers: one is master, another is node1. I want to know about the following scenario:
If node1 is down for some reason, but I don't know that node1 can't work, and then I use a hadoop command to put a file, such as:
$ hadoop fs -put graph.txt graphin/graph.txt
I know the graph.txt file will be put master
I have installed hadoop-2.2.0 on two machines, one master and one slave; then I start the Hadoop services on the master machine:
[hadoop@master ~]$./start-dfs.sh
[hadoop@master ~]$./start-yarn.sh
[hadoop@master ~]$./mr-jobhistory-daemon.sh start historyserver
My question is whether I need to start these services on the slave machine as well.
No, you don't need to.
The master will start all required daemons on the slave.
Check all daemons using the jps command.
Sent from my iPhone
On Feb 16, 2014, at 7:03 PM, EdwardKing zhan...@neusoft.com wrote:
I have install hadoop-2.2.0 under two machine,one is master and other is
slave,then I
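A sketch of that jps check on a two-node setup like this one; which daemons appear depends on configuration, so the listings below are illustrative, not definitive:

```shell
# On the master (pids will differ; list depends on what is configured there):
jps
# Typically: NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, Jps

# On the slave:
jps
# Typically: DataNode, NodeManager, Jps
```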
I use Hadoop 2.2.0 and I want to view the Tracking UI, so I visit
http://172.11.12.6:8088/cluster,
then I click History for a completed job, such as the following:
MapReduce Job job_1392601388579_0001
Attempt Number Start Time NodeLogs
1 Sun Feb 16 17:44:57 PST 2014
aggregation.
-Zhijie
On Feb 16, 2014 6:15 PM, EdwardKing zhan...@neusoft.com wrote:
hadoop 2.2.0, I want to view Tracking UI,so I visit
http://172.11.12.6:8088/cluster,
then I click History of Completed Job,such as follows:
MapReduce Job job_1392601388579_0001
Attempt Number Start Time
I use hadoop-2.2.0:
[hadoop@node1 ~]$ cd /home/software/hadoop-2.2.0/sbin/
[hadoop@node1 sbin]$ ./start-dfs.sh
[hadoop@node1 sbin]$ ./start-yarn.sh
Then I use http://172.11.12.6:8088/cluster/apps to view All Applications; then I submit a job and log some information as follows:
public void
I have two nodes, one is master (172.11.12.6) and one is node1 (172.11.12.7). I use the following command to run a program on 172.11.12.6:
$ hadoop jar wc2.jar WordCountPredefined abc.txt output
[hadoop@master ~]$ hadoop jar wc2.jar WordCountPredefined abc.txt output
14/02/10 00:23:23
I am a newbie to Hadoop. I have downloaded hadoop-2.2.0 and want to learn it, but I don't know which books cover Hadoop version 2; I can only find books on Hadoop 1.x. Could anyone recommend a Hadoop book for version 2? Thanks.
I use Hadoop 2.2.0 to create a master node and a sub node, as follows:
Live Datanodes : 2
Node | Transferring Address | Last Contact | Admin State | Configured Capacity (GB) | Used (GB) | Non DFS Used (GB) | Remaining (GB) | Used (%)
master | 172.11.12.6:50010 | 1 | In Service