Team,
I used Ambari to install a cluster which now needs to be deleted and
re-installed.
Is there a clean way to uninstall the cluster, clean up all the binaries
on all the nodes, and do a fresh install?
There is no data on the cluster, so there is nothing to worry about.
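In case it helps frame the question, the brute-force route I know of is
something like this; the package list and paths are guesses and would need
adjusting for the stack version:

    # on the Ambari host: stop the server and wipe its database
    ambari-server stop
    ambari-server reset

    # on every cluster node: remove the stack packages and leftover state
    yum erase -y "hadoop*" "hbase*" "zookeeper*"
    rm -rf /etc/hadoop /var/log/hadoop* /var/lib/hadoop*

After that the hosts could be re-registered and installed fresh from the
Ambari UI. I am hoping there is something less manual than this.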
Thanks in advance
I have a CDH4.1 cluster with 30 TB of HDFS space across 12 nodes. I now
want to uninstall CDH and move the cluster to HDP. There is nothing wrong
with CDH; I just want to try moving between distros without losing the data
on the datanodes.
Is it possible to re-map the same datanodes and their pre-populated HDFS
data?
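For what it's worth, my hedged understanding of why this is tricky: the
blocks on each datanode are tied on disk to the namenode's namespace, and a
freshly formatted HDP namenode would create a new one. The data dir path
below is a placeholder for whatever dfs.datanode.data.dir is set to:

    # on any datanode; /data/1/dfs/dn is a hypothetical data dir
    cat /data/1/dfs/dn/current/VERSION
    # the clusterID here (and the namespaceID under the BP-* block pool
    # directories) is what a new namenode would have to match before it
    # recognizes the existing blocks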
I have a 10-node cluster and I want to reclaim 5 nodes without moving the
data over the network. I keep only 1 copy of each block (replication count 1)
in my cluster. Don't ask me why!
What I want to do: take out the 3 TB data disks /disk1 and /disk2 from the 5
to-be-decommissioned nodes, and move the disks physically to the remaining
nodes (see the sketch below).
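A hedged sketch of the datanode side, assuming the moved disks get mounted
on the surviving nodes (the mount points are made up):

    <!-- hdfs-site.xml on a surviving node; /disk3 and /disk4 are the
         disks moved over from a decommissioned node -->
    <property>
      <name>dfs.datanode.data.dir</name>   <!-- dfs.data.dir on 1.x -->
      <value>/disk1/dfs/dn,/disk2/dfs/dn,/disk3/dfs/dn,/disk4/dfs/dn</value>
    </property>

After a datanode restart, the node should report the blocks it finds on the
moved disks; with replication 1 this avoids any copy over the network, though
I have not tested the procedure end to end.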
I have a couple of questions about HDFS federation:
Can I specify different block store directories for each namespace on a
datanode?
Can I have some datanodes dedicated to a particular namespace only?
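For reference, a minimal federation sketch with two namespaces (the
nameservice names and hosts are placeholders):

    <!-- hdfs-site.xml: two federated namespaces -->
    <property>
      <name>dfs.nameservices</name>
      <value>ns1,ns2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns1</name>
      <value>nn1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.ns2</name>
      <value>nn2.example.com:8020</value>
    </property>

As I understand it, dfs.datanode.data.dir is a single per-datanode setting
shared by all namespaces (block pools are kept apart in BP-* subdirectories
on disk), and a datanode registers with every nameservice listed in its own
dfs.nameservices, which is what would control dedicating nodes to one
namespace.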
This seems quite interesting. Way to go!
Hadoop cluster and do regular distcp.
On the tape side, make sure you have a backup program which can back up
streams, so you don't have to materialize your TB files outside of your
Hadoop cluster first... (I know Simpana can't do that :-().
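For the cluster-to-cluster leg, a standard distcp run looks like this (the
cluster URIs and path are placeholders):

    # copy /data from the production cluster to the backup cluster
    hadoop distcp hdfs://prod-nn:8020/data hdfs://backup-nn:8020/data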
I just started using Cloudera Manager to build my new Hadoop cluster; it's
neat!
Here are the issues I am facing.
Using the default install, everything went fine. But when I do a hadoop fs
-ls /, I see the root file system of the datanodes.
Did I miss anything?
How do I modify the hdfs-site.xml to fix this?
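If hadoop fs -ls / is really listing the local root, the usual suspect is
the client configuration still pointing at the default file:/// filesystem,
which lives in core-site.xml rather than hdfs-site.xml; a sketch, with the
namenode host as a placeholder:

    <!-- core-site.xml on the client machine -->
    <property>
      <name>fs.default.name</name>   <!-- fs.defaultFS on newer releases -->
      <value>hdfs://namenode.example.com:8020</value>
    </property>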
I was exploring .har based Hadoop archive files for a similar small log file
scenario I have. I have millions of log files which are less than 64 MB each
and I want to put them into HDFS and run analysis. I am still exploring
whether HDFS is a good option; traditionally what I have learnt is that HDFS
isn't well suited to large numbers of small files.
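For the HAR route, the archiving step itself is short (the paths are
placeholders):

    # pack everything under /user/me/logs into a single archive
    hadoop archive -archiveName logs.har -p /user/me logs /user/me/archives

    # the files stay readable through the har:// scheme
    hadoop fs -ls har:///user/me/archives/logs.har

As far as I know, the archive cuts the number of namenode objects, but the
files inside are still read individually, so it helps the namespace more
than it helps job startup time.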