Sandra:

No, not necessarily.

By default, PVFS2 uses a strip size of 64K and a Round Robin data
distribution.  So, data is written in the following manner:

first 64K is written to server 1
second 64K is written to server 2
third 64K is written to server 3
fourth 64K is written to server 4
fifth 64K wraps back to server 1
...and so on.
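
If it helps, the arithmetic is simple enough to sketch.  Below is a
minimal Python illustration; the 64K strip size is the PVFS2 default,
but the 4-server count and 0-based numbering are assumptions for the
example (the real server order comes from pvfs2-viewdist, shown below):

    # Round-robin striping, as in PVFS2's default simple_stripe
    # distribution.  64K is the default strip size; 4 servers and
    # 0-based numbering are illustrative assumptions.
    STRIP_SIZE = 64 * 1024
    NUM_SERVERS = 4

    def server_for_offset(offset, strip_size=STRIP_SIZE,
                          num_servers=NUM_SERVERS):
        # Each strip-sized chunk goes to the next server, wrapping
        # around once every server has received a strip.
        return (offset // strip_size) % num_servers

    for strip in range(5):
        print(strip, server_for_offset(strip * STRIP_SIZE))
    # strips 0-3 land on servers 0-3; strip 4 wraps back to server 0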

The server order can be seen by running this command:

pvfs2-viewdist -f /path/to/filename

If your compute node #1 reads a file, each of your storage nodes is
contacted, and the strips are returned to compute node #1 and reassembled
in the proper order.  So if some of your data happens to live on compute
node #1, because it is also a storage node, then yes, part of the file's
storage will be served by the same node that is doing the computing.
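
To make that concrete, here is a sketch of which servers one contiguous
read has to contact, using the same illustrative 64K/4-server layout as
above:

    # Servers contacted by a single contiguous read, assuming the
    # illustrative 64K strip size and 4-server round-robin layout.
    def servers_touched(offset, length, strip_size=64 * 1024,
                        num_servers=4):
        first_strip = offset // strip_size
        last_strip = (offset + length - 1) // strip_size
        return sorted({s % num_servers
                       for s in range(first_strip, last_strip + 1)})

    # A 1 MB read from offset 0 spans 16 strips, so it contacts all
    # four servers, including the local one if the reading compute
    # node is also a storage node.
    print(servers_touched(0, 1024 * 1024))   # [0, 1, 2, 3]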

The idea of PVFS2 (OrangeFS) is to distribute your data across nodes,
providing parallel reads/writes of large data sets.  Having said all
that, there are ways to change the default behavior, but that is a
complex topic in and of itself.

Becky

On Fri, May 22, 2015 at 4:47 AM, Sandra A. Mendez <[email protected]> wrote:

> Dear Users,
>
> I have a question about data distribution in PVFS2 when each node is
> both a compute node and a storage node.
> We have configured PVFS2 on Amazon AWS EC2 using different numbers of
> datafiles, backed by the ephemeral (instance) storage.
> For example: for a small cluster on Amazon EC2 with 9 compute nodes and
> an NFS server, we configure PVFS2 with 9 datafiles (on the ephemeral
> disks of the compute nodes) and 1 metadata server (on the node that is
> also the NFS server). We would like to know whether the I/O requests
> from a compute node are processed by the datafile configured on that
> same compute node.
>
> I would appreciate your help.
> Regards.
>
> Sandra.-