Re: [Samba] Maximum samba file transfer speed on gigabit...

2006-06-14 Thread bjquinn
 Try the following:
   socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536 IPTOS_LOWDELAY
   use sendfile = no
   lock spin time = 15
   lock spin count = 100
   oplocks = no
   level2 oplocks = no
 You may have to tune your smb settings to get FoxPro to perform properly
 on your server hardware.  Have a look at:
 http://www.drouillard.ca/TipsTricks/Samba/Oplocks.htm

I tried the above settings although I modified the lock spin count to 30,
as suggested by the website you pointed me to, since our hardware was
comparable to the machine used as an example for when 30 would be a good
choice.  I have to say that this did make a difference - and basically
nothing else I have ever tried has made a difference - but it wasn't as
significant as I would have hoped.  Our 65 second FoxPro query shortened
to 55 seconds.  That doesn't really help much, but the fact that these
settings actually had a marginally positive effect on the speed of the
query is promising.  Maybe these are the correct settings to modify, but
they're not tweaked correctly.  That being said, could you explain what
these settings actually do, so that I can have a better idea of how to
tune them to best match my equipment?  It's basically a Pentium D 3.2 GHz
with 2 GB memory, gigabit network, and a RAID 10 array of 4 15K RPM SAS
drives.  (Although I don't think that RAID array is performing up to par -
likely some sort of driver issue - I'm still getting 50-60 MB/s sustained
from the disks and across the network.)

Thanks for the help!

-BJ Quinn


[Samba] Maximum samba file transfer speed on gigabit...

2006-06-14 Thread bjquinn
Oplocks tell the Windows client that it can cache the requested file on
the local machine.
If the client changes the file (or another client wants to access it), the
lock must be released by the first client, or Samba breaks the lock after
a certain time if it doesn't get the lock back.

If you put these settings in the share section that contains the database
files, then they apply only to that share.

So, did this help?

No, this setting alone didn't seem to make any difference, although, as
suggested by Gerald, the following settings created about a 15% speedup
(down to about 55 seconds from 65 seconds on a baseline FoxPro query that
we've been using to test speed).

socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536 IPTOS_LOWDELAY
use sendfile = no
lock spin time = 15
lock spin count = 30
oplocks = no
level2 oplocks = no
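
For reference, here's a minimal sketch of how these options could be split
between [global] and the share that actually holds the .dbf files - the
[foxpro] share name and path are placeholders, and which options are global
vs. per-share is going by the smb.conf man page:

  [global]
      # socket options and the lock spin settings are global-only
      socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536 IPTOS_LOWDELAY
      lock spin time = 15
      lock spin count = 30

  [foxpro]
      path = /srv/foxpro
      # these can be set per share, so they only affect the database files
      use sendfile = no
      oplocks = no
      level2 oplocks = no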

That's an improvement, but still nowhere near the speeds I think I ought
to be getting, and still nowhere near the 10-15 seconds the same query
takes if the .dbf files reside on a Windows server with similar or worse
hardware.

 Ok well along those lines, here's another thing that I've noticed since I
 first posted.  I had been getting ~940Mb/s in iperf, so I didn't think it
 was a network or NIC specific issue.  I was using mount -t cifs and
 rsync -a --stats --progress to gauge my speed, which is where I was

Sorry, I didn't understand you.
Did you mount this share from a different Linux workstation, or did you
mount a share from the Windows workstation?

From my Linux server where I'm doing these tests, I mounted a Windows
share through cifs (also tried smbfs) and copied files to it from the
server's hard drive.  That was surprisingly slow, never above 20 MB/s, and
rarely above 15 MB/s.  Although that's a bit disconcerting (and maybe it
has something to do with my problem), if I don't worry about mounting a
Windows share and just copy files from the server to the Windows machine
through Windows Explorer on the Windows machine, I get 50-60 MB/s, which
is plenty fast for now, and I think the hard drive on the server is the
bottleneck at this point.
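
For what it's worth, this is roughly the shape of the commands I've been
using to gauge the mounted-share speed - hostname, share name, username and
file paths below are just placeholders:

  # mount the Windows share from the Linux server
  mkdir -p /mnt/winshare
  mount -t cifs //winbox/share /mnt/winshare -o username=testuser

  # copy a large file onto it and watch the reported transfer rate
  rsync -a --stats --progress /data/bigfile.dat /mnt/winshare/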

 Have you tested your disk throughput with bonnie (or similar tools)?


 Yes, and I'm getting at least 50-60 MB/s (probably now my bottleneck),
 although I've set up a SAS RAID array that ought to be much faster than
 that, but isn't - however, that's a question for another mailing list!


And without a RAID array, only a single disk?

Yes, it's a 10K RPM SATA WD Raptor drive.  As single disks come, they're
pretty fast.

Maybe a problem with the RAID controller or your bus system?

Very possibly, although I think it's a problem with the AIC94xx driver in
the kernel.  Since my RAID array is actually running slightly slower than
my single disk, it's probably either the driver, or possibly the
controller card itself, as you suggested.

What kind of mainboard?

Asus P5WDG2-WS

What bus system, PCI?  (PCI-X should be much better for performance in a
gigabit environment.)

PCI-E x4 actually, for the onboard dual gigabit network card.  Iperf
reports plenty of speed (~940 Mb/s).

How long does a 'time dd count=100 bs=1024 if=/dev/zero
of=/tmp/testfile' take?

Regardless of file size, this test fairly consistently shows about
55 MB/s on the single drive and 45-50 MB/s on the RAID array.
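
Side note: with bs=1024 and count=100 that dd only writes about 100 KB, so
a much larger run gives a more representative number - something along
these lines, with the test file path as a placeholder:

  # sequential write test, roughly 1 GB of zeroes
  time dd if=/dev/zero of=/tmp/testfile bs=1048576 count=1000

  # read it back; use a file larger than RAM so the read isn't just
  # served out of the page cache
  time dd if=/tmp/testfile of=/dev/null bs=1048576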

-BJ Quinn


Re: [Samba] Maximum samba file transfer speed on gigabit...

2006-06-06 Thread bjquinn
Whoops, I guess I didn't reply correctly and accidentally created a new
thread with my response, so here's to hoping I get it right this time...

 What version of Samba is running?

Various versions of 3.0 on multiple servers.

 Is it a kind of locking problem?

Ooh, good question, I'm not sure, and I'll try your oplocks settings.
What exactly am I turning off, however, if I do that?  Am I turning off
file locking altogether?

 What speed do you get for a file transfer with FTP?
 What speed did you have with a Windows server?

Ok well along those lines, here's another thing that I've noticed since I
first posted.  I had been getting ~940Mb/s in iperf, so I didn't think it
was a network or NIC specific issue.  I was using mount -t cifs and
rsync -a --stats --progress to gauge my speed, which is where I was
getting the 20 MB/s speed statistics.  However, copying large files
through Windows Explorer from the Samba share results in 55-60 MB/s.  So,
I don't know if there's a problem with rsync, smbfs, or cifs or whatever,
but it looks like actual file transfer speeds (whether on one large file
or an entire directory) are pretty good.  I wouldn't mind seeing closer to
100+ MB/s, but I guess at around 60 MB/s, that's a great start.

NOW the problem is that whenever I actually OPEN a file from any of the
Samba servers, it opens MUCH slower than on a comparable Windows server.  A
large Excel file, for example, takes 15 seconds to load instead of 6
seconds when loaded from the Windows server.  A given FoxPro query takes
45-55 seconds to run over the Samba share as opposed to around 10-12
seconds over the network from the Windows server.  Could this be related
to the oplocks stuff you were talking about, or would this point to a
completely different problem?  What are the downsides to turning off these
oplocks settings?
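
For reference, the ~940Mb/s figure comes from a plain iperf TCP run between
the two machines, roughly like this (the hostname is a placeholder):

  # on the Samba server
  iperf -s

  # on the client, a 30-second TCP test against the server
  iperf -c sambaserver -t 30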

Have you tested your disk throughput with bonnie (or similar tools)?

Yes, and I'm getting at least 50-60 MB/s (probably now my bottleneck),
although I've set up a SAS RAID array that ought to be much faster than
that, but isn't - however, that's a question for another mailing list!

Thanks for your help!

-BJ Quinn


Re: [Samba] Maximum samba file transfer speed on gigabit...

2006-06-06 Thread bjquinn
 You should be able to do a crude test by creating a large file (dd
 if=/dev/random of=test.dat bs=1048576 count=100 will create a 100MB
 test file) and then timing how long it takes to read the file back
 (time dd if=test.dat of=/dev/null)  That'll tell you if your hard
 drives are configured properly and reading at full speed.  Use a larger
 file for a more accurate test.

Well, my 4-drive 15K RPM SAS RAID 10 configuration is performing slightly
worse than my single 10K RPM SATA drive (~50 MB/s vs. ~55 MB/s in both
Bonnie and the dd test you suggested), but I guess that's the least of my
concerns right now.  (Besides, this is the wrong list for that concern
anyway - but thanks for your suggestions!)  Although my file transfer speed
seems to max out at about 50 MB/s (it looks like hard drive transfer speed
is now the bottleneck), which is almost exactly the speed I'm getting from
the Windows server, these FoxPro queries still run in around 10-12 seconds
from the Windows server and around 55 seconds from the Samba server.  A
large Excel file (~45MB) opens in around 6-7 seconds over the Windows share
and in 15 or so seconds over the Samba share, and it looks like there's a
big pause before it actually starts loading the file into Excel.  Does this
shed any light on the issue?
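
One way to separate the raw SMB transfer time from whatever the client
application does on open might be to time a plain fetch with smbclient -
the server, share, user and file name below are placeholders:

  # time pulling the same Excel file straight off the share,
  # discarding the data locally
  time smbclient //sambaserver/data -U someuser -c 'get bigfile.xls /dev/null'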

 I wouldn't think there'd be a huge overhead, but in my own experience
 it's certainly noticeable (as compared to say FTP.)  Don't forget that
 if the PC on the other end isn't capable of receiving the data at full
 speed, then it doesn't matter how fast the server is.

I've already noticed significant differences between client computers, but
the machines I'm currently testing as clients are comparable to the server
in hardware, differing only in that they run Windows 2000.  One odd thing
is that the computers that can only transfer files from the server at a
significantly slower rate (whether from Windows or Samba) don't seem to
show a corresponding degradation in FoxPro query time or Excel spreadsheet
loading.

-BJ Quinn


[Samba] Maximum samba file transfer speed on gigabit...

2006-06-04 Thread bjquinn
Ok so maybe someone can explain this to me.  I've been banging my head
against the wall on this one for several weeks now and the powers that be
are starting to get a little impatient.  What we've got is an old FoxPro
application, with the FoxPro .dbf's stored on a Linux fileserver using
Samba (Fedora 3 currently, using Fedora 5 on the new test server).  We're
having speed problems (don't mention the fact that we should be using a
real SQL server - I know, I know).  So I'm thinking what I need to do is
to increase the speed at which the server can distribute those .dbf files
across the network.  We'd been getting somewhere between 10-20 MB/s,
depending on file size, etc.  We've already got a gigabit network.  So,
I'm thinking to myself, a gigabit is 125 MB/s, so we should be going a
LOT faster.  Ok, so I know it's only really about 119 MB/s (darn 1000 B =
1KB vs 1024 B = 1KB marketing crap).  Whatever.  That's a lot faster than
10-20 MB/s.

I've got a bottleneck, I tell myself.  The hard drive light on the old
server is blood red all the time and top reports high (~10-40%) iowait.
Must be the hard drive.  So we upgrade from 2x 10K RPM SATA 1.5Gbps drives
in RAID-0 to 4x 15K RPM SAS 3.0Gbps drives in RAID-10.  That should do it.
Nope.  No difference, no change whatsoever (that was an expensive mistake).
Then it must be that the network card is the bottleneck.  So we get PCI-E
gigabit NICs, I learn all about rmem and wmem and tcp window sizes, and set
a bunch of those settings (rmem & wmem = 2500, tcp window size on Windows =
262800, as well as so_sndbuf, so_rcvbuf, max xmit, and read size in
smb.conf = 262800).  Still no change.

No change!  I can run 944 Mb/s or higher in iperf.  Why can't I even get a
FRACTION of that transferring files through Samba?  I mean, hard drive
speed shouldn't be the issue - a single one of these SAS drives is supposed
to sustain 90+ MB/s, and I have four of them raided together.  The NICs are
testing out at nearly 1 Gb/s.  Is there REALLY that much overhead for
Samba?  Isn't there something I can do to increase the efficiency of the
file transfers?  It doesn't seem to matter which settings I use in Samba,
the best I ever get is about 22 MB/s, and it sometimes bogs down to around
12 MB/s.  Assuming nothing else is the bottleneck, that's about 100-175
Mb/s, or 10-18% of the theoretical limit of gigabit ethernet.

The Windows clients never write the data received over the network to the
hard drive; they load it into memory, which should be fairly fast, as are
all the clients - 2.8+ GHz, 800MHz FSB, 10K RPM SATA drives, etc.  Besides
that, these fast SATA drives ought to be able to write more than 10-15 MB/s
for a file transfer anyway.  What am I missing here?  Is the overhead for
Samba really that significant, or is there some setting I can change, or am
I overlooking something else?
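
To be concrete, the kernel-side tuning was roughly along these lines - the
sysctl names are the standard Linux ones, and the 262800 figure above is
reused here purely as an example value:

  # TCP send/receive buffer limits
  sysctl -w net.core.rmem_max=262800
  sysctl -w net.core.wmem_max=262800

and the matching smb.conf [global] entries:

  socket options = SO_SNDBUF=262800 SO_RCVBUF=262800
  max xmit = 262800
  read size = 262800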

Thanks for your help, and maybe you guys can spare my head any more injury
from the banging it has been getting over the past few weeks.

-BJ Quinn