You can run it with a single node.
On Wed, Jul 29, 2015 at 3:56 PM, Kanaka avvaru kanaka.avv...@huawei.com
wrote:
Hi Siva,
Please try posting this question on the Cloudera forum
http://community.cloudera.com
Thanks & Regards,
Kanaka Kumar Avvaru
--
Regards,
Sandeep
What is the size of your HBase table?
A copy of the snapshot will be stored in the archive directory.
hadoop fs -du -s -h /apps/hbase/data/data/default/table-name
hadoop fs -du -s -h /apps/hbase/data/archive/data/default/table-name
Check this directory size.
Thanks
Sandeep Nemuri
On Thu, Jul 30,
Please take a look at HDFS-6133, which aims to help with HBase data locality.
It was integrated into the Hadoop 2.7.0 release.
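If HDFS-6133 here refers to the block-pinning feature (replicas written to favored nodes are pinned so the Balancer/Mover will not relocate them, preserving HBase region locality), it is switched on with a datanode setting. A minimal sketch, assuming the standard property name in Hadoop 2.7.0; please verify against your release's hdfs-default.xml:

```xml
<!-- hdfs-site.xml: keep the Balancer from moving replicas that HBase
     placed on favored nodes, so region servers retain local reads. -->
<property>
  <name>dfs.datanode.block-pinning.enabled</name>
  <value>true</value>
</property>
```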
FYI
On Thu, Jul 30, 2015 at 3:06 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
I was running an HBase snapshot export, but I stopped it, and still the
capacity used is
Hi Gera,
Thanks for your input. I have a fairly large amount of data, and if I go with
the -cat option followed by an md5sum calculation, it becomes a
time-consuming process.
I could understand from the code that the Hadoop checksum is essentially an
MD5 of MD5s of CRC32C values, which is then returned as output. I would be
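To illustrate the "MD5 of MD5 of CRC" structure mentioned above: HDFS computes a CRC per checksum chunk, an MD5 over each block's concatenated CRCs, and a final MD5 over the per-block MD5s. The sketch below is illustrative only, with toy chunk/block sizes and zlib's plain CRC32 standing in for CRC32C, so its output will not match a real `hadoop fs -checksum` result:

```python
import hashlib
import struct
import zlib

# Toy sizes; real HDFS uses bytes.per.checksum (default 512) and dfs.blocksize.
BYTES_PER_CHECKSUM = 512
BLOCK_SIZE = 4096

def hdfs_style_checksum(data: bytes) -> str:
    """Sketch of the MD5-of-MD5-of-CRC composition used by HDFS file checksums."""
    block_md5s = b""
    for b in range(0, len(data), BLOCK_SIZE):
        block = data[b:b + BLOCK_SIZE]
        crcs = b""
        for c in range(0, len(block), BYTES_PER_CHECKSUM):
            chunk = block[c:c + BYTES_PER_CHECKSUM]
            # Each chunk contributes a 4-byte big-endian CRC
            # (plain CRC32 here; HDFS uses CRC32C).
            crcs += struct.pack(">I", zlib.crc32(chunk) & 0xFFFFFFFF)
        # Per block: MD5 over that block's concatenated chunk CRCs.
        block_md5s += hashlib.md5(crcs).digest()
    # File level: MD5 over the concatenated per-block MD5 digests.
    return hashlib.md5(block_md5s).hexdigest()

print(hdfs_style_checksum(b"x" * 10000))
```

Because the final value depends only on the small per-block digests, this composition lets HDFS compare large files without streaming their full contents through `-cat` and `md5sum`.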
In our HA setup, the active namenode keeps crashing once a week or so. The
cluster is quite idle, without many jobs running and not much user activity.
Below are the logs from the journal nodes. Can someone help us with this, please?
2015-08-04 13:00:20,054 INFO server.Journal
From the log below, hbase-rs4 was writing to the datanode.
Can you take a look at the region server log and see if there is some clue?
Thanks
On Jul 28, 2015, at 9:41 AM, Akmal Abbasov akmal.abba...@icloud.com wrote:
Hi, I’m observing strange behaviour in HDFS/HBase cluster.
The disk space of