Tu, Tiankai wrote:
> Hi,
>
> I have been evaluating the parallel read performance of PVFS2 and wonder
> how I can get better performance.
> Hardware configuration:
>
> - 100 nodes, each with two quad-core processors and 16 GB memory
> - 1 GigE connection from each node to a 1-GigE switch in its rack (3 racks in total)
> - Each rack switch has 4 10-GigE connections to a backbone
> full-bandwidth 10-GigE switch
As far as I understand, all traffic ultimately goes through one 10-GigE backbone switch, and 10 GigE translates to 1250 MB/s. Or am I missing something?
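
(Back-of-envelope arithmetic from the topology above: a single 10-GigE link carries 10 Gbit/s / 8 = 1250 MB/s, while 3 racks x 4 uplinks x 1250 MB/s would be roughly 15,000 MB/s if the core switch really is non-blocking, as the original post states. Which of the two caps applies depends on where the traffic actually funnels.)
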
> - Software (md) RAID0 on 4 SATA disks, with a capacity of 500 GB per
> node
> - Raw RAID0 bulk data transfer rate of around 200 MB/s (measured by dd-ing a 4 GB file after dropping the Linux VFS cache)
>
> Software configuration:
>
> - 2.6.26-10smp kernel
> - PVFS 2.8.1
> - All 100 nodes running both data servers and metadata servers
> - All 100 nodes running application programs (MPI codes) and accessing data via the PVFS Linux kernel interface
> - PVFS mounted on each node via the local host (since all of them are also metadata servers)
> - Data striping across all the nodes
>
> Experiment setup:
>
> - Four datasets consisting of files of 1 GB, 256 MB, 64 MB, and 2 MB,
> respectively
> - Each dataset has 1.6 terabytes of data (that is, 1600 1 GB files, 6400 256 MB files, etc.)
> - A simple parallel (MPI) program that reads all the files of each
> dataset
>      * Read tasks split evenly among the MPI processes (standard block decomposition algorithm)
>      * Used all 100 nodes
>      * Varied the number of cores used per node (1, 2, 4)
>      * MPI_Barrier() called before and after the reading of all the
> files assigned to a process
> - Experiments conducted with no interference from other applications
>
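Just to make sure I understand the access pattern, it boils down to something like the sketch below, right? (A minimal MPI/C reconstruction on my part, not your code; the mount point, file-name scheme, file count, and the 64 MB read buffer are all assumptions.)

/* Minimal sketch of the read benchmark as I understand it.
 * Build with e.g.: mpicc -O2 readbench.c -o readbench        */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define NFILES   1600                  /* e.g. the 1 GB-file dataset      */
#define BUFSIZE  (64 * 1024 * 1024)    /* read buffer size (assumed)      */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Standard block decomposition: rank r reads files [lo, hi). */
    int base = NFILES / nprocs, rem = NFILES % nprocs;
    int lo = rank * base + (rank < rem ? rank : rem);
    int hi = lo + base + (rank < rem ? 1 : 0);

    char *buf = malloc(BUFSIZE);
    if (!buf) MPI_Abort(MPI_COMM_WORLD, 1);

    MPI_Barrier(MPI_COMM_WORLD);                /* barrier before reading  */
    double t0 = MPI_Wtime();

    for (int i = lo; i < hi; i++) {
        char path[256];
        /* path is purely illustrative */
        snprintf(path, sizeof(path), "/mnt/pvfs2/dataset/file%05d", i);
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror(path); continue; }
        ssize_t n;
        while ((n = read(fd, buf, BUFSIZE)) > 0)
            ;                                   /* discard data, time only */
        close(fd);
    }

    MPI_Barrier(MPI_COMM_WORLD);                /* barrier after reading   */
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("elapsed: %.2f s\n", t1 - t0);

    free(buf);
    MPI_Finalize();
    return 0;
}

In particular I want to confirm that each process just does sequential POSIX reads through the kernel mount (as opposed to MPI-IO), since that affects what throughput one should expect.
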
> Below is the sustained read performance I measured. During the
> experiments, the network traffic peaked at around 2 GB/s as shown by
> Ganglia running on the cluster.
>
>                       1 reader per node   2 readers per node   4 readers per node
> 1 GB file dataset           660 MB/s           1218 MB/s            1106 MB/s
> 256 MB file dataset         663 MB/s           1205 MB/s            1153 MB/s
> 64 MB file dataset          677 MB/s           1302 MB/s            1377 MB/s
> 2 MB file dataset           502 MB/s            549 MB/s             587 MB/s

>
> Given the hardware/software configuration, are these reasonable performance results, or should I expect a better outcome? Thanks.
> Tiankai
>
>

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
