On Tue, 19 Apr 2005, Bosko Milekic wrote:
My experience with 6.0-CURRENT has been that I am able to push at
least about 400kpps INTO THE KERNEL from a gigE em card on its own
64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's
A 64-bit bus doesn't seem to be essential for reasonable
I would try to transfer from /dev/zero to /dev/null via the
network interface.
It might be interesting,
1. if it is a switched network,
2. if there is a lot of concurrency between the network nodes, and
3. if there are really a lot of PCI cards fighting for the bus
(btw. when I multiply 33e6, 8 an
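A minimal way to run the /dev/zero-to-/dev/null test suggested above (a sketch only: host 10.0.0.2 and port 5001 are placeholders, and the listen syntax differs between nc(1) variants):

  # on the receiving host: discard everything arriving on TCP port 5001
  nc -l 5001 > /dev/null

  # on the sending host: push 1 GB of zeroes across the wire
  dd if=/dev/zero bs=64k count=16384 | nc 10.0.0.2 5001

dd prints a bytes/sec figure on exit, which gives a rough number for what the NIC, bus and network stack can move with no disks involved.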
My experience with 6.0-CURRENT has been that I am able to push at
least about 400kpps INTO THE KERNEL from a gigE em card on its own
64-bit PCI-X 133MHz bus (i.e., the bus is uncontested) and that's
basically out of the box GENERIC on a dual-CPU box with HTT disabled
and no debugging options
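For context, a -CURRENT GENERIC of that era carries diagnostic options that cost real performance; "no debugging options" usually means a kernel config without lines like these (an illustrative list, not Bosko's actual config):

  makeoptions     DEBUG=-g          # build kernel with debug symbols
  options         INVARIANTS        # runtime consistency checks
  options         INVARIANT_SUPPORT
  options         WITNESS           # lock order verification
  options         WITNESS_SKIPSPIN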
On Tue, Apr 19, 2005 at 11:04:10PM +0200, Eivind Hestnes wrote:
> It's correct that the card is plugged into a 32-bit 33 MHz PCI slot. If
> I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133 MByte/s.
> However, when pulling 180 Mbit/s without polling enabled the system
> is barely responsive due to the interrupt load.
Eivind Hestnes wrote:
It's correct that the card is plugged into a 32-bit 33 MHz PCI slot.
If I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133
MByte/s. However, when pulling 180 Mbit/s without polling enabled
the system is barely responsive due to the interrupt load. I'll
It sounds sensible, but I have also learned that throwing hardware at a
problem is not always right. Compared to shiny boxes from Cisco, HP,
etc., a 500 MHz router is fit for heavy-duty networks. I would try some
more tweaking before replacing the box with more spectacular hardware.
- E.
Michael
It's correct that the card is plugged into a 32-bit 33 MHz PCI slot. If
I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133 MByte/s.
However, when pulling 180 Mbit/s without polling enabled the system
is barely responsive due to the interrupt load. I'll try to
increase the
Thanks for the advice. Didn't make any difference, though. Perhaps I
should try to increase the polling frequency.
- E.
Jerald Von Dipple wrote:
Hey man
You need to bump
kern.polling.burst: 150
up to at least 15
Regards,
Jerald Von D.
On 4/19/05, Eivind Hestnes <[EMAIL PROTECTED]> wrote:
Hi
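For anyone following along, a sketch of the knobs device polling involves on a 5.x box (names are from polling(4); the values are illustrative, not anyone's actual settings, and note that kern.polling.burst itself is the dynamically adjusted current burst, while kern.polling.burst_max is the tunable cap):

  # kernel config: polling support plus a higher tick rate,
  # since the polling frequency follows HZ
  options DEVICE_POLLING
  options HZ=1000

  # at runtime
  sysctl kern.polling.enable=1        # global switch on 5.x
  sysctl kern.polling.burst_max=300   # cap on packets handled per tick
  sysctl kern.polling.user_frac=50    # % of each tick reserved for userland

Raising HZ is what increases the polling frequency mentioned above; burst_max bounds how much work each tick may do.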
On 4/19/2005 1:32 PM, Eivind Hestnes wrote:
I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) installed
in a Pentium III 500 MHz with 512 MB RAM (100 MHz) running FreeBSD 5.4-RC3.
The machine is routing traffic between multiple VLANs. Recently I did a
benchmark with/without device
Hi,
I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35) installed
in a Pentium III 500 MHz with 512 MB RAM (100 MHz) running FreeBSD 5.4-RC3.
The machine is routing traffic between multiple VLANs. Recently I did a
benchmark with/without device polling enabled. Without device polling
Claus Guttesen wrote:
What state is nfsd in? Can you send the output of this:
ps -auxw|grep nfsd
while the server is slammed?
elin~%>ps -auxw|grep nfsd
root 378 3,7 0,0 1412 732 ?? D Tor07am 4:08,82 nfsd:
server (nfsd)
root 380 3,5 0,0 1412 732 ?? D Tor07am 1:56,
> What state is nfsd in? Can you send the output of this:
> ps -auxw|grep nfsd
> while the server is slammed?
elin~%>ps -auxw|grep nfsd
root 378 3,7 0,0 1412 732 ?? D Tor07am 4:08,82 nfsd:
server (nfsd)
root 380 3,5 0,0 1412 732 ?? D Tor07am 1:56,52 nfsd:
server
Claus Guttesen wrote:
What does gstat look like on the server when you are doing this?
Also - does a dd locally on the server give the same results? I would
estimate you should get about double that locally, direct to disk. What
about a dd over NFS?
dd-command:
dd if=/dev/zero of=/nfssrv/dd.tst b
> What does gstat look like on the server when you are doing this?
> Also - does a dd locally on the server give the same results? I would
> estimate you should get about double that locally, direct to disk. What
> about a dd over NFS?
dd-command:
dd if=/dev/zero of=/nfssrv/dd.tst bs=1024 cou
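A sketch of the comparison being asked for (paths and sizes are placeholders rather than the poster's; the original used bs=1024 with an unknown count):

  # on the server: per-disk load, refreshed continuously, while the tests run
  gstat

  # local write on the server, bypassing NFS entirely
  dd if=/dev/zero of=/local/dd.tst bs=64k count=16384

  # the same write from a client onto the NFS mount
  dd if=/dev/zero of=/nfssrv/dd.tst bs=64k count=16384

Comparing the two dd rates, and watching %busy in gstat during each run, separates raw array throughput from NFS and network overhead.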
Claus Guttesen wrote:
When you say 'ide->fiber' that could mean a lot of things. Is this a single
drive, or a RAID subsystem?
Yes, I do read it differently now ;-)
It's a RAID 5 with twelve 400 GB drives split into two volumes (I
performed the test on one of them).
What does gstat look like on th
> When you say 'ide->fiber' that could mean a lot of things. Is this a single
> drive, or a RAID subsystem?
Yes, I do read it differently now ;-)
It's a RAID 5 with twelve 400 GB drives split into two volumes (I
performed the test on one of them).
regards
Claus
Claus Guttesen wrote:
Q:
Will I get better performance upgrading the server from dual PIII to dual Xeon?
A:
rsync is CPU intensive, so depending on how much CPU you were using for
this, you may or may not gain. How busy was the server during that time?
Is this to a single IDE disk? If so, you a
> > Q:
> > Will I get better performance upgrading the server from dual PIII to dual
> > Xeon?
> > A:
>
> rsync is CPU intensive, so depending on how much CPU you were using for
> this, you may or may not gain. How busy was the server during that time?
> Is this to a single IDE disk? If so,
Claus Guttesen wrote:
Hi.
Sorry for x-posting, but the thread was originally meant for
freebsd-stable and then a performance-related question slowly crept
into the message ;-)
Inspired by the NFS benchmarks by Willem Jan Withagen I ran some
simple benchmarks against a FreeBSD 5.4 RC2 server. My s
Hi.
Sorry for x-posting, but the thread was originally meant for
freebsd-stable and then a performance-related question slowly crept
into the message ;-)
Inspired by the NFS benchmarks by Willem Jan Withagen I ran some
simple benchmarks against a FreeBSD 5.4 RC2 server. My seven clients
are RC1