Can you please check the heap usage in the UI? Then we can confirm whether
the Java heap is growing or not.
Note that top counts native memory usage as well, and NIO uses direct
ByteBuffers internally.
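To illustrate the distinction, here is a minimal sketch (not from the original thread): direct ByteBuffers are allocated outside the Java heap, so process RSS as seen by top can grow while the JVM's own heap counters stay flat.

```java
// Sketch: JVM heap usage vs. off-heap direct buffers.
// Direct ByteBuffers live outside the Java heap, which is why `top`
// (process RSS) can grow while heap usage barely moves.
import java.nio.ByteBuffer;

public class HeapVsDirect {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // 64 MB allocated off-heap; invisible to heap counters.
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("heap delta (bytes): " + (heapAfter - heapBefore));
        System.out.println("direct capacity:    " + direct.capacity());
    }
}
```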
This is a good write-up from Jonathan:
There is no BackupNode in Apache Hadoop 1.x (a rename of the 0.20.20x
series). The documentation was a mistake, which we've fixed for the next
release: https://issues.apache.org/jira/browse/HADOOP-7297
BackupNode was introduced in 0.21 and is available in the 0.22 release today,
but it has not been
Hi Harsh,
Thanks for the update. I will wait for the stable releases of 0.22 and 0.23
to test BackupNode.
Currently I want to check all the NameNode failover scenarios and their
recovery for Hadoop 1.0.0. I tried one conventional way: keeping the fsimage
and edit log safe and using them in case
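For reference, the usual way to keep the fsimage and edit log safe in Hadoop 1.x is to list multiple storage directories for the NameNode metadata, typically including an NFS mount, so a copy survives a local disk failure. A hedged sketch of the relevant hdfs-site.xml fragment (the paths are illustrative, not from the original thread):

```
<property>
  <name>dfs.name.dir</name>
  <!-- Comma-separated list; the NameNode writes fsimage and edits to
       every directory, so a remote NFS copy survives local disk loss. -->
  <value>/data/dfs/name,/mnt/nfs/dfs/name</value>
</property>
```

On NameNode loss, the saved fsimage and edits from the surviving directory can seed a replacement NameNode.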
Does anyone know of any work/ideas to encrypt data stored on HDFS?
Ideally both temporary files and final files would be encrypted, or there
would have to be a mechanism in HDFS to securely wipe temporary files, like
shred in Linux.
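One approach that needs no HDFS support is client-side encryption: encrypt the stream before it ever reaches the filesystem, so blocks on disk hold only ciphertext. A minimal sketch using javax.crypto (key management, which is the hard part, is omitted; the in-memory stream below stands in for an HDFS output stream):

```java
// Hypothetical sketch: encrypt data client-side before writing it out,
// so what lands on DataNode disks is ciphertext only.
// Key distribution/storage is NOT addressed here.
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.io.ByteArrayOutputStream;

public class EncryptBeforeWrite {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        // Stand-in for an output stream obtained from the HDFS client API.
        ByteArrayOutputStream hdfsStream = new ByteArrayOutputStream();
        try (CipherOutputStream out = new CipherOutputStream(hdfsStream, cipher)) {
            out.write("sensitive record".getBytes("UTF-8"));
        }
        System.out.println("ciphertext bytes: " + hdfsStream.size());
    }
}
```

This protects final files against disk theft, but temporary/spill files written by the framework itself would still need separate handling.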
So far this is what I found:
Agreed. Many forms of data require encryption to be stored on any system. I
don't know the exact motivation(s) for that, but I do know we have to conform
to this.
My assumption was that I want to protect against access to the data by
someone stealing the hard drives or the servers, so physical
Protecting against the guy who has physical access to the servers and all the
time in the world is the nightmare case, because he has the keys in his
possession.
That's where you start buying expensive FIPS-140 cryptomodules that keep the
keys in a tight little box that self-destructs when