Thanks. I believe this would be a good feature for clients, especially if
you are reading the same large file over and over.
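A minimal sketch of what such a read-through cache could look like on the
client side (hypothetical code, not an actual HDFS API; the class name and
the fetch callback are made up for illustration): check a local cache
directory first, and only go to the network on a miss.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Function;

/**
 * Hypothetical client-side read-through cache, in the spirit of the
 * proposed dfs.client.cachedirectory setting: files fetched over the
 * network are written to a local cache directory so repeat reads of
 * the same large file hit local disk instead of the network.
 */
public class ClientFileCache {
    private final Path cacheDir;

    public ClientFileCache(Path cacheDir) throws IOException {
        // e.g. /var/cache/hdfs from the proposed config
        this.cacheDir = Files.createDirectories(cacheDir);
    }

    /**
     * Return the bytes for `name`, calling `fetch` (the network read)
     * only on a cache miss and storing the result for later reads.
     */
    public byte[] read(String name, Function<String, byte[]> fetch)
            throws IOException {
        Path cached = cacheDir.resolve(name);
        if (Files.exists(cached)) {
            return Files.readAllBytes(cached); // hit: local disk
        }
        byte[] data = fetch.apply(name);       // miss: go to the network
        Files.write(cached, data);             // populate the local cache
        return data;
    }
}
```

A real implementation would also need eviction once the cache grows past
the proposed dfs.client.cachesize limit, plus some way to invalidate
entries when the file changes in HDFS.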


On Sun, Jan 15, 2012 at 7:33 PM, Todd Lipcon <t...@cloudera.com> wrote:

> There is some work being done in this area by some folks over at UC
> Berkeley's AMP Lab in coordination with Facebook. I don't believe it
> has been published quite yet, but the title of the project is "PACMan"
> -- I expect it will be published soon.
>
> -Todd
>
> On Sat, Jan 14, 2012 at 5:30 PM, Rita <rmorgan...@gmail.com> wrote:
> > After reading this article,
> > http://www.cloudera.com/blog/2012/01/caching-in-hbase-slabcache/ , I was
> > wondering if there was a filesystem cache for HDFS. For example, if a
> > large file (10 gigabytes) keeps getting accessed on the cluster, instead
> > of fetching it from the network each time, why not store the contents of
> > the file locally on the client itself? A use case on the client would be
> > like this:
> >
> >
> >
> > <property>
> >  <name>dfs.client.cachedirectory</name>
> >  <value>/var/cache/hdfs</value>
> > </property>
> >
> >
> > <property>
> >  <name>dfs.client.cachesize</name>
> > <description>in megabytes</description>
> > <value>100000</value>
> > </property>
> >
> >
> > Any thoughts of a feature like this?
> >
> >
> > --
> > --- Get your facts first, then you can distort them as you please.--
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>



-- 
--- Get your facts first, then you can distort them as you please.--
