From: Demai Ni [nid...@gmail.com]
Sent: Wednesday, August 12, 2015 02:05
To: user@hadoop.apache.org
Subject: Re: hadoop/hdfs cache question, do client processes share cache?
Ritesh,
many thanks for your response. I just read through the centralized cache
document. Thanks for the pointer. A couple of follow-up questions.
First, the centralized cache requires 'explicit' configuration, so by
default there is no HDFS-managed cache? Will the caching then occur only
at the local filesystem level (e.g., the Linux page cache)?
Let's assume that HDFS maintains 3 replicas of the 256MB block; then each
of these 3 datanodes will have only one copy of the block in its memory
cache, thus avoiding repeated I/O reads. This goes with the centralized
cache management policy of HDFS, which also gives you an option to
specify exactly which paths should be cached.
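For example, pinning that file with the cacheadmin CLI would look roughly
like this (the pool name and path here are just placeholders for your
setup):

  # create a cache pool, then ask HDFS to cache the file in it
  hdfs cacheadmin -addPool demoPool
  hdfs cacheadmin -addDirective -path /user/demai/file256mb -pool demoPool
  # check the directive and how many bytes are actually cached
  hdfs cacheadmin -listDirectives -stats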
hi, folks,
I have a quick question about how HDFS handles caching. In this lab
experiment, I have a 4-node Hadoop cluster (2.x), and each node has fairly
large memory (96GB). I have a single 256MB HDFS file, which also fits in
one HDFS block. The local filesystem is Linux.
Now from one of the DataNodes, if several client processes read the same
file, do they each end up with their own copy in cache, or do they share
it?
You need to raise the hard limit for max locked memory in
/etc/security/limits.conf and then restart sshd once again so that new
login sessions pick up the change.
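For example, if the DataNode runs as the "hdfs" user, an entry like the
following should work (note that limits.conf values are in KB, so 33554432
is 32GB; adjust the user and size to your setup):

  # /etc/security/limits.conf
  hdfs    soft    memlock    33554432
  hdfs    hard    memlock    33554432

After restarting sshd, log in again and verify with "ulimit -l".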
Regards,
Akira
(2014/04/21 18:51), lei liu wrote:
I use hadoop-2.4 and I want to use the HDFS cache function.
I used the "ulimit -l 32212254720" Linux command to set the size of max
locked memory, but I get the error below:

ulimit -l 32212254720
-bash: ulimit: max locked memory: cannot modify limit: Operation not
permitted

How can I set the size of max locked memory?
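For reference, the HDFS side of this limit is dfs.datanode.max.locked.memory
in hdfs-site.xml (given in bytes); the DataNode will refuse to start if it
is set higher than the "ulimit -l" of the user it runs as. A minimal sketch:

  <property>
    <name>dfs.datanode.max.locked.memory</name>
    <!-- 32GB here; must not exceed the memlock ulimit of the DataNode user -->
    <value>34359738368</value>
  </property>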