I will be out of town this coming week, so I won't have a chance to
review the numbers.  In the meantime, you might try 2.8.4; I think you
will get better performance numbers.

Becky
-- 
Becky Ligon
HPC Admin Staff
PVFS/OrangeFS Developer
Clemson University
864-650-4065

> Thanks, Becky. Please let me know if you find anything. Have a good
> weekend.
>
> Wantao
>
>
> ------------------ Original ------------------
> From:  "ligon"<li...@clemson.edu>;
> Date:  Sat, May 14, 2011 01:33 AM
> To:  "Wantao"<liu_wan...@qq.com>;
> Cc:  "ligon"<li...@clemson.edu>;
> "pvfs2-users"<pvfs2-users@beowulf-underground.org>;
> Subject:  Re: [Pvfs2-users] Questions for IOZone performance test results
>
>
>  You may not be seeing a bottleneck right now, but it will cause one in a
> high-I/O environment... just for future reference.
>
> Let me take a closer look at the numbers and compare them to some numbers
> that I have for OrangeFS-2.8.4.
>
> Becky
> --
> Becky Ligon
> HPC Admin Staff
> PVFS/OrangeFS Developer
> Clemson University
> 864-650-4065
>
>> Hi Becky,
>>
>> Thanks for your reply. I am using PVFS 2.8.2. I agree that multiple
>> metadata servers would boost performance, but I don't think these
>> questions arise from a metadata-server bottleneck. Even when only one
>> IOZone process is started, my second question still holds; in that
>> case, though, the re-write is slightly faster than the write.
>>
>> BTW, I just measured the read/write performance of each disk with the
>> dd command: each disk can write at about 90 MB/s and read at about
>> 120 MB/s.
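>>
>> (For reference, the dd invocations were along these lines; the test file
>> path, size, and flags below are illustrative rather than the exact
>> commands I ran:
>>
>>   # raw write speed, bypassing the page cache via O_DIRECT
>>   dd if=/dev/zero of=/mnt/disk/ddtest bs=1M count=4096 oflag=direct
>>   # raw read speed of the same file
>>   dd if=/mnt/disk/ddtest of=/dev/null bs=1M iflag=direct
>> )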
>>
>> Wantao
>>
>>
>> ------------------ Original ------------------
>> From:  "Becky Ligon"<li...@clemson.edu>;
>> Date:  Fri, May 13, 2011 10:04 PM
>> To:  "Wantao"<liu_wan...@qq.com>;
>> Cc:  "pvfs2-users"<pvfs2-users@beowulf-underground.org>;
>> Subject:  Re: [Pvfs2-users] Questions for IOZone performance test
>> results
>>
>>
>>  Which version of PVFS are you using?
>>
>> Your setup will work better if each of your 16 servers is both a
>> metadata and an I/O server.  Your current configuration creates a
>> bottleneck at the single metadata server.
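>>
>> (A rough sketch of that change, assuming the stock tools; paths below are
>> placeholders. Re-run the config generator and list all 16 hosts at both
>> the metadata-server and I/O-server prompts, then on each server recreate
>> the storage space and restart the daemon:
>>
>>   pvfs2-genconfig /etc/pvfs2-fs.conf
>>   pvfs2-server /etc/pvfs2-fs.conf -f    # one-time storage-space creation
>>   pvfs2-server /etc/pvfs2-fs.conf       # start the daemon
>> )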
>>
>> Becky
>> --
>> Becky Ligon
>> HPC Admin Staff
>> PVFS/OrangeFS Developer
>> Clemson University/Omnibond.com
>> 864-650-4065
>>
>>> Hi guys,
>>>
>>> I am a PVFS2 newbie and ran some performance tests with IOZone, but the
>>> results puzzle me. I have 16 machines: one is the metadata server, and
>>> the other 15 are both PVFS2 I/O servers and clients.  Each client
>>> machine runs one IOZone process, so the aggregate performance is
>>> measured. The machines are configured as follows: one Intel i7-860
>>> processor, 16GB DDR3 memory, and a 1TB SATA hard disk. They are
>>> connected through a gigabit Ethernet switch. The OS is Debian Lenny
>>> (2.6.26 kernel). PVFS2 is version 2.8.2 with the default configuration.
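>>>
>>> (Assuming the standard PVFS2 kernel-module client here: the mount looks
>>> roughly like this, where the host name and mount point are placeholders
>>> and 3334 is the default TCP port:
>>>
>>>   mount -t pvfs2 tcp://meta01:3334/pvfs2-fs /mnt/pvfs2
>>> )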
>>>
>>> The IOZone command used is: ./iozone -i 0 -i 1 -i 2 -r 4m -s 32g -t 15
>>> -+m pvfs_client_list. Since the memory capacity of each machine is
>>> 16GB, I set the test file size to 32GB so that each file is larger than
>>> a machine's memory and PVFS2 itself is exercised heavily. The results
>>> are listed below:
>>>
>>> Record Size 4096 KB
>>>     File size set to 33554432 KB
>>>     Network distribution mode enabled.
>>>     Command line used: ./iozone -i 0 -i 1 -i 2 -r 4m -s 32g -t 15 -+m pvfs_client_list
>>>     Output is in Kbytes/sec
>>>     Time Resolution = 0.000001 seconds.
>>>     Processor cache size set to 1024 Kbytes.
>>>     Processor cache line size set to 32 bytes.
>>>     File stride size set to 17 * record size.
>>>     Throughput test with 15 processes
>>>     Each process writes a 33554432 Kbyte file in 4096 Kbyte records
>>>
>>>     Test running:
>>>     Children see throughput for 15 initial writers     =  785775.56 KB/sec
>>>     Min throughput per process             =   50273.01 KB/sec
>>>     Max throughput per process             =   53785.79 KB/sec
>>>     Avg throughput per process             =   52385.04 KB/sec
>>>     Min xfer                     = 31375360.00 KB
>>>
>>>     Test running:
>>>     Children see throughput for 15 rewriters     =  612876.38 KB/sec
>>>     Min throughput per process             =   39466.78 KB/sec
>>>     Max throughput per process             =   41843.63 KB/sec
>>>     Avg throughput per process             =   40858.43 KB/sec
>>>     Min xfer                     = 31649792.00 KB
>>>
>>>     Test running:
>>>     Children see throughput for 15 readers         =  366397.27 KB/sec
>>>     Min throughput per process             =    9371.45 KB/sec
>>>     Max throughput per process             =   29229.74 KB/sec
>>>     Avg throughput per process             =   24426.48 KB/sec
>>>     Min xfer                     = 10760192.00 KB
>>>
>>>     Test running:
>>>     Children see throughput for 15 re-readers     =  370985.14 KB/sec
>>>     Min throughput per process             =    9850.98 KB/sec
>>>     Max throughput per process             =   29660.86 KB/sec
>>>     Avg throughput per process             =   24732.34 KB/sec
>>>     Min xfer                     = 11145216.00 KB
>>>
>>>     Test running:
>>>     Children see throughput for 15 random readers     =  257970.32 KB/sec
>>>     Min throughput per process             =    8147.65 KB/sec
>>>     Max throughput per process             =   20084.32 KB/sec
>>>     Avg throughput per process             =   17198.02 KB/sec
>>>     Min xfer                     = 13615104.00 KB
>>>
>>>     Test running:
>>>     Children see throughput for 15 random writers     =  376059.73 KB/sec
>>>     Min throughput per process             =   24060.38 KB/sec
>>>     Max throughput per process             =   26446.96 KB/sec
>>>     Avg throughput per process             =   25070.65 KB/sec
>>>     Min xfer                     = 30527488.00 KB
>>>
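>>> For reference, pvfs_client_list uses IOZone's standard -+m format: one
>>> line per client with three fields (client hostname, working directory
>>> on the client, path to the iozone executable on the client). The
>>> entries below are placeholders:
>>>
>>>   node01 /mnt/pvfs2/iozone_work /usr/local/bin/iozone
>>>   node02 /mnt/pvfs2/iozone_work /usr/local/bin/iozone
>>>   ...
>>>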
>>> I have three questions:
>>>  1. Why does write outperform rewrite so significantly? According to
>>> IOZone's documentation, rewrite is supposed to perform better, since it
>>> writes to a file that already exists and whose metadata is already in
>>> place.
>>>  2. Why is write/random-write so much faster than read/random-read?
>>> This result is really unexpected; I would expect read to be faster.
>>> Is there anything wrong with my result numbers?
>>>  3. Looking at the max and min throughput per process in each test,
>>> the difference between max and min is acceptable for
>>> write/re-write/random-write, while for read/re-read/random-read the
>>> max throughput is about two or three times the min. How can I explain
>>> this result? Is it normal?
>>>
>>> These results are not what I expected. Could they be caused by faulty
>>> hardware (network or disk) or by a configuration problem?
>>>
>>> Any advice is appreciated.
>>>
>>> Sincerely,
>>> Wantao
>>>


_______________________________________________
Pvfs2-users mailing list
Pvfs2-users@beowulf-underground.org
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
