Hi Genius,
I just configured HDFS Federation and tried to use it (2 namenodes: one is
for /my, the other is for /your). When I run the command:
hdfs dfs -ls /
I get:
-r-xr-xr-x - hadoop hadoop 0 2016-06-05 20:05 /my
-r-xr-xr-x - hadoop hadoop 0 2016-06-05 20:05 /your
It is very common practice to back up the metadata to some SAN store, so a
complete loss of all the metadata is preventable. You could lose a day's
worth of data if, e.g., you back up the metadata once a day, but you could
do it more frequently. I'm not saying S3 or Azure Blob are bad ideas.
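As an illustration, the latest checkpoint can be pulled off the active namenode with `hdfs dfsadmin -fetchImage`. A minimal sketch of such a daily backup follows; the SAN mount point is a placeholder, and the command must run from a node with HDFS client configuration:

```shell
# Hypothetical SAN mount point; substitute your own backup location.
BACKUP_DIR=/mnt/san/nn-backup/$(date +%F)
mkdir -p "$BACKUP_DIR"

# Download the most recent fsimage from the active namenode.
# Scheduling this via cron gives the once-a-day (or more frequent)
# metadata backup described above.
hdfs dfsadmin -fetchImage "$BACKUP_DIR"
```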
The namenode architecture is a source of fragility in HDFS. While a high
availability deployment (with two namenodes, and a failover mechanism)
means you're unlikely to see service interruption, it is still possible to
have a complete loss of filesystem metadata with the loss of two machines.
Another correction about the terminology needs to be made.
I said 1 GB = 1 million blocks. Pay attention to the term block: it is not
a file. A file may contain more than one block. The default block size is
64 MB, so a 640 MB file will hold 10 blocks. Each file also has its name,
permissions, path, creation date, and so on.
I wrote 128 000 000 million in my previous post; that was incorrect
(million million). What I meant is 128 million.
1 GB is roughly 1 million.
On 5 Jun 2016 at 16:58, "Ascot Moss" wrote:
> HDFS2 "Limit to 50-200 million files", is it really true like what MapR
> says?
>
No, it is not true. It totally depends on the server's RAM.
Assume that each file holds 1 KB in RAM and your server has 128 GB of RAM;
then you can have about 128 million files. But 1 KB is just an
approximation. Roughly, 1 GB holds 1 million blocks, so if your server has
512 GB of RAM then you can hold approximately 512 million.
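The back-of-envelope arithmetic above can be sketched as follows, assuming roughly 1 KB of namenode heap per object (an approximation, not a hard limit):

```shell
heap_gb=128
objects_per_gb=1000000                 # ~1 million namenode objects per GB of heap
echo $(( heap_gb * objects_per_gb ))   # 128000000, i.e. ~128 million objects
```

With 512 GB the same formula gives roughly 512 million objects, which is why the "50-200 million files" figure is a sizing consequence, not a hard HDFS limit.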
HDFS2 "Limit to 50-200 million files", is it really true like what MapR
says?
On Sun, Jun 5, 2016 at 7:55 PM, Hayati Gonultas wrote:
> I forgot to mention the file system limit.
I forgot to mention the file system limit.
Yes, HDFS has a limit, because for performance reasons the HDFS filesystem
image is read from disk into RAM and the rest of the work is done in RAM.
So RAM should be big enough to fit the filesystem image. But HDFS has
configuration options like HAR files (Hadoop Archives).
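To illustrate that option, many small files can be packed into a single Hadoop Archive so they consume far fewer namenode objects. The paths below are hypothetical:

```shell
# Pack everything under /user/hadoop/smallfiles into logs.har
# (paths are examples only). This launches a MapReduce job.
hadoop archive -archiveName logs.har -p /user/hadoop/smallfiles /user/hadoop/archives

# Files inside the archive remain readable through the har:// scheme.
hdfs dfs -ls har:///user/hadoop/archives/logs.har
```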
Hi,
In most cases I think one cluster is enough. HDFS is a file system, and
with federation you may have multiple namenodes for different mount
points. So you may mount /images/facebook on namenode1 and
/images/instagram on namenode2, similar to Linux file system mounts.
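A client-side mount table of that shape might look like the following core-site.xml fragment. This is a sketch: the cluster name, ports, and namenode hostnames are assumptions, not values from the thread.

```xml
<!-- core-site.xml: ViewFs mount table (hostnames and ports are examples) -->
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./images/facebook</name>
  <value>hdfs://namenode1:8020/images/facebook</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./images/instagram</name>
  <value>hdfs://namenode2:8020/images/instagram</value>
</property>
```

Clients then see a single namespace under `viewfs://clusterX/`, with each subtree served by its own namenode.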
Will the common pool of datanodes and namenode federation be a more
effective alternative in HDFS2 than multiple clusters?
On Sun, Jun 5, 2016 at 12:19 PM, daemeon reiydelle wrote:
> There are indeed many tuning points here. If the name nodes and journal
> nodes can be