I have a CDH4.1 cluster with 30 TB of HDFS space across 12 nodes. I now
want to uninstall CDH and move the cluster to HDP. Nothing wrong with CDH,
but I want to try moving between distros without losing the data on the datanodes.
Is it possible to re-map the same datanodes and pre-populated HDFS data
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/project/hadoop-data</value>
</property>
On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu azury...@gmail.com wrote:
I don't think this is the case, because there is:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/project/hadoop-data</value>
</property>
On Tue, Mar 18, 2014 at 1:55 PM, Stanley Shi s...@gopivotal.com wrote:
one possible reason is that you didn't set the namenode working directory,
Ah yes, I overlooked this. Then please check whether the files are there or not: ls /home/hadoop/project/hadoop-data/dfs/name
Regards,
*Stanley Shi*
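For context on the check above: with only hadoop.tmp.dir set, the namenode metadata lands under ${hadoop.tmp.dir}/dfs/name by default, which is why that path is the one to inspect. A minimal hdfs-site.xml sketch that pins it explicitly (the path is the one from this thread; a Hadoop 1.x layout is assumed):

<property>
  <!-- pin the namenode metadata dir instead of deriving it from hadoop.tmp.dir -->
  <name>dfs.name.dir</name>
  <value>/home/hadoop/project/hadoop-data/dfs/name</value>
</property>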
hi, maillist:
I tried to look at an application log using the following process:
# yarn application -list
Application-Id                  Application-Name  User  Queue  State  Final-State  Tracking-URL
application_1395126130647_0014  select user_id
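For reference, once log aggregation is working, the aggregated logs of a finished application can be pulled with the yarn logs CLI; a minimal sketch using the application id from the listing above:

# fetch the aggregated logs for the application listed above
yarn logs -applicationId application_1395126130647_0014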
Just for confirmation,
1. Was the NodeManager restarted after enabling LogAggregation? If yes, check the NodeManager startup logs to confirm that the Log Aggregation Service started successfully.
Thanks & Regards
Rohith Sharma K S
From: ch huang [mailto:justlo...@gmail.com]
Sent: 18 March 2014 13:09
To:
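As a quick way to do the startup-log check described above, a grep like the following works; the log location and filename pattern are assumptions for a typical 2.x install:

# confirm the aggregation service came up in the NodeManager startup log
# (log path and filename pattern are assumptions; adjust for your install)
grep -i "LogAggregationService" /var/log/hadoop-yarn/*nodemanager*.log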
hi,
How to solve this problem.
[cloudera@localhost ~]$ hadoop job -history ~/1
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.
Exception in thread "main" java.io.IOException: Not able to initialize History viewer
at
check the error:
Caused by: java.io.IOException: History directory /home/cloudera/1/_logs/history does not exist
create that directory and change the ownership to the user running the history server
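A sketch of that fix, assuming the history server runs as mapred:hadoop (the user and group are assumptions; the path comes from the error above):

# create the directory the history viewer expects and hand it to the
# history server user (mapred:hadoop is an assumption; adjust as needed)
sudo mkdir -p /home/cloudera/1/_logs/history
sudo chown -R mapred:hadoop /home/cloudera/1/_logs/history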
On Tue, Mar 18, 2014 at 5:16 PM, Avinash Kujur avin...@gmail.com wrote:
hi,
How to solve this
you need to give the location of the history file... Please find the following for the same...
user@host-10-18-40-132:~ mapred job -history
Please remove me from the user distribution list.
Thanks.
Please send an email to user-unsubscr...@hadoop.apache.org.
On Tue, Mar 18, 2014 at 6:57 PM, Rananavare, Sunil
sunil.rananav...@unify.com wrote:
Please remove me from the user distribution list.
Thanks.
--
Thanks and Regards,
Vimal Jain
I think I found the issue. The ZKFC on the standby NN server tried, and failed,
to connect to the standby NN when I shut down the network on the Active NN
server. I'm getting an exception from the HealthMonitor in the ZKFC log:
WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception
Found this:
http://grokbase.com/t/cloudera/cdh-user/12anhyr8ht/cdh4-failover-controllers
Then I configured dfs.ha.fencing.methods to contain both sshfence and
shell(/bin/true). Note that the docs for core-default.xml say that the value is
a list. I tried a comma with no luck. Had to look in the
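For anyone hitting the same thing: the list is newline-separated, not comma-separated. A minimal hdfs-site.xml sketch of the form described above (an assumption, since the sender's exact config is truncated here):

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>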
Hi Stanley,
Thanks for your response, but I still have some problems. Could you give me further instructions?
I am now using hadoop 1.0.3. Does that mean I have to upgrade to 1.2.0? Or can I directly override the original code, and with what command?
Another question: you said I can refer to
OTOH, if the application is still running, the logs have not yet been uploaded, so you certainly cannot see them.
Jian
On Tue, Mar 18, 2014 at 1:57 AM, Rohith Sharma K S
rohithsharm...@huawei.com wrote:
Just for confirmation,
1. Was the NodeManager restarted after enabling
Hi,
In older releases of Hadoop, the web admin page was able to show me the number of failed tasks for each node, so I could get a clue that a certain node with a higher number was not healthy, a disk issue for example. But after I upgraded to the 2.2.0 release, I cannot find any equivalent page, so my
Hi,
I want to create a Cluster and add services through the Apache Ambari RESTful APIs.
I am unable to call POST, PUT and DELETE web services successfully.
I am using a RESTful API client and trying to use the below URL with a POST request, but it is not working.
*POST REQUEST*
Hi,
I want to create a Cluster by calling the Apache Ambari RESTful APIs. I am using a RESTful API client named Postman REST Client. GET requests are working fine, but POST, PUT and DELETE are not working. Please help me in this regard.
*Example:*
I want to add a Cluster by the below URL, but it's not
Hi Asif,
What is the exact call that you are trying (with all the headers and
parameters), and the response you are getting back from the server?
Yusaku
On Tue, Mar 18, 2014 at 2:46 AM, asif sajjad asif.sajjad@gmail.com wrote:
Hi,
I want to create Cluster and want to add services
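One thing worth checking while gathering those details: Ambari's CSRF protection rejects POST/PUT/DELETE requests that lack an X-Requested-By header, which matches the symptom of GETs working while mutating calls fail. A hypothetical curl sketch of a cluster-creation POST (host, credentials, cluster name, and stack version are all assumptions):

# hypothetical cluster-creation call; the X-Requested-By header is required
# on POST/PUT/DELETE when CSRF protection is enabled
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X POST -d '{"Clusters": {"version": "HDP-2.0"}}' \
  http://ambari-server:8080/api/v1/clusters/c1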
Correct, David.
sshfence does not handle network unavailability.
Since the JournalNodes ensure that only one NN can write, fencing of the old active is handled automatically. So configuring the fence method to shell(/bin/true) should be fine.
Regards,
Vinayakumar B.
From: david marion
Hello,
I'm running MR with the 2.2.0 release. I noticed we can configure yarn.nodemanager.health-checker.script.path in yarn-site.xml to customize NM health checking, so I added the below properties to yarn-site.xml:
<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
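For comparison, a minimal sketch of the usual health-checker wiring; the property names are the standard 2.x ones, while the script path and interval value are assumptions:

<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/etc/hadoop/nm-health-check.sh</value>  <!-- assumed path -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.interval-ms</name>
  <value>600000</value>  <!-- run every 10 minutes; assumed value -->
</property>
<!-- the NM marks the node unhealthy when the script prints a line starting with ERROR -->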
Hi Eric,
Are you running a prod environment on Hadoop 1.0.3? If yes, then you have to upgrade to Hadoop 1.2.0 or Hadoop 2.2.0.
If you don't want to change to another Hadoop version, you need to backport the patch to your code base (I'm not sure the patch provided in HDFS-385 can be applied to
Hello
When I run the following command on Mahout-0.9 and Hadoop-1.2.1, I get multiple errors and I cannot figure out what the problem is. Sorry for the long post.
[hadoop@solaris ~]$ mahout wikipediaDataSetCreator -i wikipedia/chunks -o
wikipediainput -c ~/categories.txt
Running on hadoop,