Dechao bu,

Note that when running Hadoop on NFS you can run into problems with file locks.

And if you're looking to process large files, your network will probably become
a bottleneck, since all I/O goes to the NFS server instead of local disks.
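
If you do run HDFS on such a cluster, one way to keep most of the block I/O
off the shared mount is to point the DataNode storage directories at each
machine's local disk. A minimal hdfs-site.xml sketch for a 0.20-style setup
(the paths below are just examples, adjust them to your nodes):

  <configuration>
    <property>
      <name>dfs.data.dir</name>
      <!-- a directory on each node's local disk, not on the NFS mount -->
      <value>/local/disk/hdfs/data</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>/local/disk/hdfs/name</value>
    </property>
  </configuration>

With that, the map and reduce tasks read and write blocks on the local disks,
and the NFS mount is only used for whatever you explicitly put there.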


--
Edson Ramiro Lucas Filho
http://www.inf.ufpr.br/erlf07/


On 12 May 2010 12:47, abhishek sharma <absha...@usc.edu> wrote:

> If HDFS uses the NFS to store files, then all I/O during the execution of
> map and reduce tasks will go to the NFS instead of the local disks on each
> machine in the cluster (if they have any). This can become a bottleneck if
> you have lots of tasks running simultaneously.
>
> However, even with the NFS, you can still use Hadoop to run multiple
> map and reduce tasks in parallel.
>
> Abhishek
>
> On Wed, May 12, 2010 at 8:31 AM, dechao bu <dechao...@gmail.com> wrote:
> > Hello,
> >     I want to deploy Hadoop on a cluster. In this cluster, the different
> > nodes share the same file system. If I make changes to files on node1,
> > then the other nodes see the same changes. (The file system of this
> > cluster is probably NFS.)
> >     I don't know whether this cluster is suitable for deploying Hadoop.
> >
> >
> >     Looking forward to your reply. Thank you.
> >
>
