Thx....

James Peach sent something similar to me too...
darn vfs objects :)  Thx James :)
I remember now that on Irix we had to do some stuff with these...

I do not see the "cacheprime" vfs object on FC13, but I
do see "readahead", so I guess I will start with that...
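
For anyone curious, here is roughly what I plan to try. This is
a sketch only -- the share name and paths are placeholders for my
setup, and the offset/length values are just starting guesses
(the module's options are documented in the vfs_readahead man
page, so check the version shipped with your build):

```
[video]
    path = /raid/video
    vfs objects = readahead
    ; tell the kernel to read ahead of the client's request
    readahead:offset = 0x80000
    readahead:length = 0x100000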

I also heard there is some other I/O tuning I can do
in the Linux/FC kernel to maybe increase the
read/write sizes...

The kicker here is that I am using an Areca 1880-ix-16
as well as an ATTO R680 card in my boxes, and they do not
work very well with small I/Os... They are both 6Gbit
technology, and I have 16 x 2TB fast Hitachi drives
in the connected raid... No one has issues
reading/writing one file; it is when one wants to read
or write, say, 20-40 files simultaneously that it all
goes to heck with small I/Os to/from the raid cards
on Linux.

I use "fio" as a test vehicle.  It runs over a 10Gbit
port on a client to a 10Gbit port on my FC13
system, using samba or netatalk, simulating 20-40
video users, to see how many read/pread's
I can get under 38ms/1MByte.  It is a requirement we
have for video editing.  I have tuned Mac OS using the
ATTO R680 card and it has no problems with low latency
for 20-30 sessions/users.  The most I can get out of
FC13 is maybe 8-10.
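
For reference, the fio job I run is along these lines. The
directory, job count, and file sizes are illustrative, not the
exact job file from my tests:

```
; 30 simulated video readers, sequential 1MB reads
[global]
ioengine=sync
rw=read
bs=1M
runtime=60
time_based

[video-stream]
directory=/mnt/raid
numjobs=30
size=4G
```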

So it looks like I have homework to do to get Linux/FC13
to handle bigger I/Os :)

Thx
chris


On 1/18/11 12:34 AM, Volker Lendecke wrote:
> On Mon, Jan 17, 2011 at 10:14:04AM -0600, Chris Duffy wrote:
>> We are testing Samba 3 (and 4) on Fedora Core 13,
>> 10Gbit connection with a Mac OS 10.6.4 system
>> as the client.  We will be adding some Windows
>> machines sooner or later with 10Gbit interfaces.
>>
>> We are seeing 100-150MBytes/sec read or write
>> performance between the Mac and the FC13 system
>> over 10Gbit interface but it should be capable of
>> 400-500MBytes/sec.  We have a local raid
>> on the FC13 system that runs 1GByte/sec locally
>> using an Areca 1880-ix-16 raid card (6Gbit version).
>> It has 16 fast Hitachi disks in a Raid5 format
>> using xfs filesystem.
>>
>> The problem here is that samba is poking the Areca
>> at 128KByte I/O's on preads and writes, i.e.
>> shown to us using strace on the smbd daemons
>> that are running.  Using vmstat/iostat/sar utilities,
>> we see 100% utilization of the Areca card because the
>> average wait time is real high and the average
>> queue length to it is also high......too many
>> small I/O's.....
>> This is not the case if I run "fio" or "dd" locally
>> to/from the Areca's raid using 1-4MByte I/O's.
>> I see fast I/O...
>>
>> I do not see any way to increase the size of Samba's
>> pread/write's in the smb.conf documentation.  I
>> am sure it may be just a matter of getting the source
>> code and making some changes to allow larger
>> sized IO's but........and of course I suppose Windows
>> clients may complain but....
>>
>> I remember that back in the old Irix days with the
>> group of engineers in Australia I worked with,
>> we had Samba screaming fast but not sure if they
>> tweaked the version of Samba on Irix to do this.
>>
>> Can you guys come up with a way for us to allow
>> the reads/writes to/from the disks to be tunable
>> up to say 4MB in size?
> You might want to play with "write cache size". This will
> only tune writes though. And it will only work for oplocked
> files. I'm not 100% sure that OS/X plays nicely wrt oplocks.
>
> For reads, we need to take a much closer look at your real
> workload and see if we need to use some kind of preopen,
> prefetch or so module. We need to closely work with the
> kernel buffer cache and potential readahead kernel
> algorithms.
>
> With best regards,
>
> Volker Lendecke
>
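
For anyone following along: the "write cache size" option Volker
mentions is a per-share smb.conf setting, in bytes. A sketch of
what I will try first (the share name and the 4MB value are just
my guesses, not a recommendation):

```
[video]
    path = /raid/video
    ; buffer oplocked writes in 4MB chunks before hitting the raid
    write cache size = 4194304
```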


-- 
Chris Duffy
Technical Support
Small Tree
www.small-tree.com
Direct 651-209-6509 X305
Mobile 651-303-9613
Yahoo:chris_duffy6288
AIM:chris.j.du...@comcast.net
