Regression in /etc/rc.conf.d support

2007-06-14 Thread Sean McNeil
I don't know why this was done, but we can no longer place firewall
configuration in /etc/rc.conf.d/ipfw as was once possible.  I had


firewall_enable=YES
firewall_type=/etc/fw/rc.firewall.rules
firewall_quiet=YES

and now the last two variables no longer make it into /etc/rc.firewall.
They have to be placed in /etc/rc.conf or /etc/rc.conf.local, which is
exactly what /etc/rc.conf.d was meant to avoid.


I see revision 1.15 of src/etc/rc.d/ipfw
(http://www.freebsd.org/cgi/cvsweb.cgi/src/etc/rc.d/ipfw?rev=1.15),
committed Mon Apr 2 15:38:53 2007 UTC (2 months, 1 week ago) by mtm on
MAIN (CVS tag HEAD), with the log message:


Instead of directly sourcing the firewall script, run it in a separate shell.
If the firewall script is sourced directly from the script, then any
exit statements in it will also terminate the rc.d script prematurely.
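
To illustrate the trade-off (hypothetical paths, not the actual
/etc/rc.d/ipfw code):

#!/bin/sh
# Old behavior: source the rules in the current shell.  Variables set
# by /etc/rc.conf.d/ipfw are visible to the rules, but an "exit" in
# the rules file terminates this rc.d script too.
. /etc/fw/rc.firewall.rules

# New behavior (rev 1.15): run the rules in a child shell.  An "exit"
# is now harmless, but shell variables such as firewall_type and
# firewall_quiet are no longer visible unless they are exported first.
/bin/sh /etc/rc.firewall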

I think this should be reverted and anyone using exit statements in 
their firewall_script should be told to remove them.  It certainly 
should not have been MFCd.


Cheers,
Sean



build fails on amd64 machine

2006-06-15 Thread Sean McNeil
I get the following:

===> ipmi (depend)
make: don't know how to make ipmi.c. Stop
*** Error code 2




kernel build failure with bce

2006-04-13 Thread Sean McNeil
The following error occurs building -stable on amd64:

make -sj2 buildworld && make -sj2 buildkernel
...
===> bce (all)
/usr/src/sys/modules/bce/../../dev/bce/if_bce.c: In function
`bce_rx_intr':
/usr/src/sys/modules/bce/../../dev/bce/if_bce.c:4093: error: structure
has no member named `rxcycles'
/usr/src/sys/modules/bce/../../dev/bce/if_bce.c: In function
`bce_ioctl':
/usr/src/sys/modules/bce/../../dev/bce/if_bce.c:4897: error: label
`bce_ioctl_exit' used but not defined

I have nothing in my kernel config for device bce, so it would appear
the change was never test-built as a module.
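
The errors look like the classic pattern where code guarded by one
kernel option is referenced unconditionally.  A toy reconstruction of
the shape of the problem (not the actual if_bce.c source):

/*
 * mismatch.c: compile with and without -DDEVICE_POLLING:
 *   cc -c mismatch.c                  -> "no member named 'rxcycles'"
 *   cc -c -DDEVICE_POLLING mismatch.c -> compiles fine
 */
struct softc {
#ifdef DEVICE_POLLING
        int rxcycles;           /* exists only when polling is configured */
#endif
        int other_state;
};

int
rx_intr(struct softc *sc)
{
        return (sc->rxcycles--);        /* referenced unconditionally */
}

A kernel config that defines the relevant options builds fine; a module
build that doesn't trips exactly this class of error (the
undefined-label error has the same flavor, with a goto target hidden
behind an #ifdef).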

Cheers,
Sean




multiple IPv6 addressing

2006-02-22 Thread Sean McNeil
I seem to remember this behaving differently, but perhaps not.  Is this
the intended behavior?...

triton# host ferrari
ferrari has address 10.1.0.50
ferrari has IPv6 address ::::xx0e:xbxx:xxca:77cf
ferrari has IPv6 address ::::xxc0:xfxx:xxa7:aea

If I run ping6 ferrari repeatedly, it alternates between the two
addresses.  I thought the resolver tried to figure out which one was
reachable.  Systems will often have multiple IPv6 addresses, especially
something like a laptop with both wireless and wired interfaces.

triton# ping6 ferrari
PING6(56=40+8+8 bytes) ::::::: -->
::::xxc0:fxx:xxa7:aea

triton# ping6 ferrari
PING6(56=40+8+8 bytes) ::::::: -->
::::xx0e:xbxx:xxca:77cf

triton# ping6 ferrari
PING6(56=40+8+8 bytes) ::::::: -->
::::xxc0:fxx:xxa7:aea

...
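
For what it's worth, the alternation matches what the resolver hands
back: getaddrinfo(3) returns both AAAA records, and a client that does
no reachability probing just uses whichever entry comes first.  A
minimal sketch of that lookup (not the actual ping6 source):

/* aaaa.c: print every IPv6 address the resolver returns for a host. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>

int
main(int argc, char **argv)
{
        struct addrinfo hints = { 0 }, *res, *ai;
        char buf[NI_MAXHOST];

        if (argc != 2)
                return (1);
        hints.ai_family = AF_INET6;     /* AAAA records only */
        hints.ai_socktype = SOCK_DGRAM; /* one entry per address */
        if (getaddrinfo(argv[1], NULL, &hints, &res) != 0)
                return (1);
        /* The order of this list decides which address a naive
         * client tries first. */
        for (ai = res; ai != NULL; ai = ai->ai_next)
                if (getnameinfo(ai->ai_addr, ai->ai_addrlen, buf,
                    sizeof(buf), NULL, 0, NI_NUMERICHOST) == 0)
                        printf("%s\n", buf);
        freeaddrinfo(res);
        return (0);
}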




6-STABLE TCP nfs Linux client hangs

2005-11-11 Thread Sean McNeil
I have a dual Athlon running an amd64 kernel, and I can hang two Linux
clients by writing large amounts of data over NFS on TCP.  If I
restrict nfsd to UDP only, the problem never occurs.
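
For reference, restricting the server to UDP amounts to dropping the
TCP flag from nfsd (a sketch; check nfsd(8) for your release's default
flags):

# /etc/rc.conf: serve NFS over UDP only (-u), instead of the usual
# "-u -t" that serves both transports
nfs_server_enable="YES"
nfs_server_flags="-u -n 4"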

Please let me know what I can do to help track this problem down.

Cheers,
Sean




Re: 6-STABLE TCP nfs Linux client hangs (UPDATE)

2005-11-11 Thread Sean McNeil
On Fri, 2005-11-11 at 11:24 -0800, Sean McNeil wrote:
 I have a 2x Athlon running with amd64 kernel and I can hang 2 Linux
 clients by attempting to write large amounts of data over TCP nfs.  If I
 set the nfsd to be UDP only, the problem never occurs.
 
 Please let me know what I can do to help track this problem down.

It appears that the above isn't the whole story.  I managed to get an
NFS failure in UDP mode as well.  When it happened, I got some messages
about jumbo frames that made no sense coming from this NIC, along with
some corruption of files.  I believe there is an MPSAFE issue in the
sk(4) driver.

Turning off debug.mpsafenet (i.e. setting to 0) seems to have resolved
the issue for me at the moment.
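
Since debug.mpsafenet is a boot-time tunable, the workaround goes in
loader.conf (a sketch):

# /boot/loader.conf: run the network stack under Giant as a workaround
debug.mpsafenet="0"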

Sean




multicast join flood messes up sk0

2005-11-08 Thread Sean McNeil
My sk0 is rendered useless when flooded with multicast join requests.

Here is my setup:

FreeBSD server.mcneil.com 6.0-STABLE FreeBSD 6.0-STABLE #94: Mon Nov  7
23:51:05 PST 2005 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/AMD64
amd64

CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2009.79-MHz
K8-class CPU)
  Origin = "AuthenticAMD"  Id = 0x20f32  Stepping = 2
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x1<SSE3>
  AMD Features=0xe2500800<SYSCALL,NX,MMX+,b25,LM,3DNow!+,3DNow!>
real memory  = 2147418112 (2047 MB)
avail memory = 2064441344 (1968 MB)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
skc0: <Marvell Gigabit Ethernet> port 0xa800-0xa8ff mem
0xf500-0xf5003fff irq 19 at device 11.0 on pci2
skc0: Marvell Yukon Lite Gigabit Ethernet rev. (0x9)
sk0: <Marvell Semiconductor, Inc. Yukon> on skc0
sk0: Ethernet address: 00:14:85:85:27:b3
miibus1: <MII bus> on sk0
e1000phy0: <Marvell 88E1000 Gigabit PHY> on miibus1
e1000phy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX,
1000baseTX-FDX, auto

I'm streaming MPEG with vls to a target device running Linux.
Something goes wrong (unrelated) and I stop the streaming, but the
Linux target then appears to start flooding multicast join requests.
When this happens, my sk0 NIC suddenly becomes useless.  I get
messages like:

sk0: watchdog timeout
sk0: link state changed to DOWN
sk0: watchdog timeout

dhcpd: send_packet: No buffer space available

If I kill my Linux target, the interface recovers just fine.

sk0 is attached to a gigE Linksys switch, which is attached to a 100BT
switch, which is attached to the Linux target.  There should be no way
the target can exhaust all the resources on my machine.
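
For anyone trying to reproduce this, the join flood should be visible
with a stock tcpdump (nothing custom):

# watch IGMP membership reports arriving on sk0
tcpdump -n -i sk0 igmp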

Sean




6-stable and mount_autofs

2005-11-02 Thread Sean McNeil
This is very confusing:

I have a mount_autofs man page that is installed.  I do not know when it
was placed in there, but it has a date of Nov 9, 2004
on /usr/share/man/man8/mount_autofs.8.gz.

The history section, however, says "The mount_autofs utility first
appeared in FreeBSD 6.0," which is not true: mount_autofs is not being
built in 6.0-stable.  Looking at /usr/src/sbin/Makefile, its directory
is not traversed.

The dates in there seem a little wacky too:

ls -l /usr/src/sbin/mount_autofs/

-rw-r--r--  1 root  wheel   213 Sep  8  2004 Makefile
-rw-r--r--  1 root  wheel  2351 Jan 24  2005 mount_autofs.8
-rw-r--r--  1 root  wheel  2883 Sep 12  2004 mount_autofs.c

There are also references to libautofs, as well as a man page
and /usr/src/lib/libautofs, which isn't built either.

There doesn't appear to be enough information anywhere on how one
would set up support if it were indeed there.  Googling "freebsd
autofs" indicates that FreeBSD 6 integrated autofs support (part of
Google Summer of Code?).

Was this pulled from the release?  Was autofs support supposed to go in
(and actually built at one point), then removed from 6-stable?

Sean




RE: undefined reference to `memset'

2005-03-24 Thread Sean McNeil
Vinod,

On Thu, 2005-03-24 at 19:01 -0800, Vinod Kashyap wrote:
 Just like the problem is not seen when I build only the module, it's
 not seen if I simply write a foo.c (with the example code) and compile it.
 That's the reason I posted the patch to /sys/dev/twa/twa.c, which would
 cause the problem if applied, and then followed with a kernel build.
 I can send the result of running nm on twa.o tomorrow.

Please take a look at the other messages in this thread, including some
of the ones I have posted.  They clearly show your problem in a small
example: in the -O2 case the local memset is optimized away.  -O
appears to do the right thing, and adding -minline-all-stringops (at
either optimization level) produces even better code.

Cheers,
Sean




RE: undefined reference to `memset'

2005-03-24 Thread Sean McNeil
On Thu, 2005-03-24 at 19:51 -0800, Vinod Kashyap wrote:
 I did look at your posting Sean, thanks.  But did you see the
 undefined reference to `memset' linker error when you built it?
 It's obvious that a reference to memset is being generated by
 the initialization of an array of 100 bytes to 0.  The linker is
 getting the needed memset if you build a stand-alone program, or even
 a stand-alone kernel module, but is not able to find it when building
 the kernel itself.  This implies to me that a difference in the use
 of flags, or linking/not linking with particular libraries, is
 causing the problem.

Here is what I believe is happening:

There exists an inline function called memset.  This inline function
_should_ replace any use of memset within a function when compiled
with optimization (-O or -O2).  Whenever a call is not inlined, a
local copy of memset should be emitted and used.  This is what happens
with -O, but not with -O2, and an objdump at each optimization level
shows it clearly: with -O, a local copy of memset is emitted and used;
with -O2, memset is still called, but the local memset code is
optimized away.  This is a bug, IMHO, in two ways:

1) -O2 is too aggressive and eliminates memset when it shouldn't.
2) Neither optimization level replaces the call to memset with the
inline code.

This is one of several issues with the amd64 compiler at -O2 vs. -O;
there are others as well.  (Note: this comment is inserted for the sole
purpose of adding flame-bait. :)

You do not need to link to show this.  In fact, since the test case is
a standard program and memset is available in libc, you will not see
the problem at link time.  You need to look at the nm output and the
objdump to understand what is happening.
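
A minimal repro along the lines discussed in this thread (my
reconstruction, not the exact code posted earlier):

/* foo.c: zero-initializing a 100-byte local array makes the compilers
 * discussed here emit a call to memset rather than inline stores. */
void consume(char *);

void
foo(void)
{
        char buf[100] = { 0 };  /* typically becomes memset(buf, 0, 100) */
        consume(buf);
}

Compile with cc -O2 -c foo.c and run nm foo.o: a "U memset" entry means
the call was emitted.  A userland link quietly resolves it from libc,
which is why only the kernel, which links against no libc, trips over
it.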

 I am also confused as to how an explicit call to memset works,
 when compiler generated call doesn't!  Are we talking 2 different
 memset's here?  Maybe a memset and an __memset?

The problem is that the compiler is inserting the memset call itself.
Either that happens too late for inlining to occur, or it is done in
some fashion that doesn't lend itself to inlining.  An explicit call
inside the function is handled like all other code and is properly
identified and inlined.  So no, we are not talking about two different
memset functions; we are talking about two different mechanisms for
calling memset, where one is properly inlined and the other is not.

HTH,
Sean




Re: Re[4]: serious networking (em) performance (ggate and NFS) problem

2004-11-22 Thread Sean McNeil
On Mon, 2004-11-22 at 11:34 +, Robert Watson wrote:
 On Sun, 21 Nov 2004, Sean McNeil wrote:
 
  I have to disagree.  Packet loss is likely according to some of my
  tests.  With the re driver, no change except placing a 100BT setup with
  no packet loss to a gigE setup (both linksys switches) will cause
  serious packet loss at 20Mbps data rates.  I have discovered the only
  way to get good performance with no packet loss was to
  
  1) Remove interrupt moderation
  2) defrag each mbuf that comes in to the driver.
 
 Sounds like you're bumping into a queue limit that is made worse by
 interrupting less frequently, resulting in bursts of packets that are
 relatively large, rather than a trickle of packets at a higher rate.
 Perhaps a limit on the number of outstanding descriptors in the driver or
 hardware and/or a limit in the netisr/ifqueue queue depth.  You might try
 changing the default IFQ_MAXLEN from 50 to 128 to increase the size of the
 ifnet and netisr queues.  You could also try setting net.isr.enable=1 to
 enable direct dispatch, which in the in-bound direction would reduce the
 number of context switches and queueing.  It sounds like the device driver
 has a limit of 256 receive and transmit descriptors, which one supposes is
 probably derived from the hardware limit, but I have no documentation on
 hand so can't confirm that.

I've tried bumping IFQ_MAXLEN and it made no difference.  I could rerun
the test to be 100% certain, I suppose; it was done a while back.  I
haven't tried net.isr.enable=1, but the packet loss is in the transmit
direction.  The device driver has been modified to have 1024 transmit
and 1024 receive descriptors, which is the hardware limit.  That didn't
matter either: with 1024 descriptors I still lost packets without the
m_defrag.

The most difficult thing for me to understand is: if this is some sort
of resource limitation, why does it work perfectly with a slower PHY
but not with gigE?  The only thing I could think of was that the old
driver issued m_defrag calls once the transmit descriptor queue filled
past a certain point.  Understanding the effects of m_defrag would be
helpful in figuring this out, I suppose.
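
For concreteness, the m_defrag change under discussion amounts to
collapsing each outgoing chain before it is loaded into the transmit
descriptors, roughly like this (a sketch of the pattern, not the
actual driver diff):

/* In the driver's encap path: compact a long mbuf chain into as few
 * contiguous mbufs as possible before DMA-mapping it. */
struct mbuf *m;

m = m_defrag(*m_headp, M_DONTWAIT);
if (m == NULL)
        return (ENOBUFS);       /* original chain untouched; caller may requeue */
*m_headp = m;                   /* m_defrag freed the old chain on success */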

 It would be interesting on the send and receive sides to inspect the
 counters for drops at various points in the network stack; i.e., are we
 dropping packets at the ifq handoff because we're overfilling the
 descriptors in the driver, are packets dropped on the inbound path going
 into the netisr due to over-filling before the netisr is scheduled, etc. 
 And, it's probably interesting to look at stats on filling the socket
 buffers for the same reason: if bursts of packets come up the stack, the
 socket buffers could well be being over-filled before the user thread can
 run.

Yes, this would be very interesting and should point out the problem.  I
would do such a thing if I had enough knowledge of the network pathways.
Alas, I am very green in this area.  The receive side has no issues,
though, so I would focus on transmit counters (with assistance).





Re: Re[4]: serious networking (em) performance (ggate and NFS) problem

2004-11-22 Thread Sean McNeil
Hi John-Mark,

On Mon, 2004-11-22 at 13:31 -0800, John-Mark Gurney wrote:
 Sean McNeil wrote this message on Mon, Nov 22, 2004 at 12:14 -0800:
  On Mon, 2004-11-22 at 11:34 +, Robert Watson wrote:
   On Sun, 21 Nov 2004, Sean McNeil wrote:
   
I have to disagree.  Packet loss is likely according to some of my
tests.  With the re driver, no change except placing a 100BT setup with
no packet loss to a gigE setup (both linksys switches) will cause
serious packet loss at 20Mbps data rates.  I have discovered the only
way to get good performance with no packet loss was to

1) Remove interrupt moderation
2) defrag each mbuf that comes in to the driver.
   
   Sounds like you're bumping into a queue limit that is made worse by
   interrupting less frequently, resulting in bursts of packets that are
   relatively large, rather than a trickle of packets at a higher rate.
   Perhaps a limit on the number of outstanding descriptors in the driver or
   hardware and/or a limit in the netisr/ifqueue queue depth.  You might try
   changing the default IFQ_MAXLEN from 50 to 128 to increase the size of the
   ifnet and netisr queues.  You could also try setting net.isr.enable=1 to
   enable direct dispatch, which in the in-bound direction would reduce the
   number of context switches and queueing.  It sounds like the device driver
   has a limit of 256 receive and transmit descriptors, which one supposes is
   probably derived from the hardware limit, but I have no documentation on
   hand so can't confirm that.
  
  I've tried bumping IFQ_MAXLEN and it made no difference.  I could rerun
 
 And the default for if_re is RL_IFQ_MAXLEN, which is already 512...  As
 is mentioned below, the card can do 64 segments (which usually means 32
 packets, since each packet usually has its header and payload in
 separate mbufs)...

It sounds like you believe this is an if_re-only problem.  I had the
feeling that the if_em driver performance problems were related in some
way.  I noticed that if_em does not do anything with m_defrag and
thought it might be a little more than coincidence.

  this test to be 100% certain I suppose.  It was done a while back.  I
  haven't tried net.isr.enable=1, but packet loss is in the transmission
  direction.  The device driver has been modified to have 1024 transmit
  and receive descriptors each as that is the hardware limitation.  That
  didn't matter either.  With 1024 descriptors I still lost packets
  without the m_defrag.
 
 hmmm...  you know, I wonder if this is a problem with the if_re not
 pulling enough data from memory before starting the transmit...  Though
 we currently have it set for unlimited... so, that doesn't seem like it
 would be it..

Right.  Plus it now has 1024 descriptors on my machine and, like I said,
made little difference.

  The most difficult thing for me to understand is:  if this is some sort
  of resource limitation why will it work with a slower phy layer
  perfectly and not with the gigE?  The only thing I could think of was
  that the old driver was doing m_defrag calls when it filled the transmit
  descriptor queues up to a certain point.  Understanding the effects of
  m_defrag would be helpful in figuring this out I suppose.
 
 maybe the chip just can't keep the transmit FIFO loaded at the higher
 speeds...  is it possible vls is doing a writev for a multi-segment
 UDP packet?  I'll have to look at this again...

I suppose.  As I understand it, though, it should be sending out
1316-byte data packets at a metered pace.  Also, wouldn't it behave the
same for 100BT vs. gigE?  Shouldn't I see packet loss with 100BT if this
is the case?

   It would be interesting on the send and receive sides to inspect the
   counters for drops at various points in the network stack; i.e., are we
   dropping packets at the ifq handoff because we're overfilling the
   descriptors in the driver, are packets dropped on the inbound path going
   into the netisr due to over-filling before the netisr is scheduled, etc. 
   And, it's probably interesting to look at stats on filling the socket
   buffers for the same reason: if bursts of packets come up the stack, the
   socket buffers could well be being over-filled before the user thread can
   run.
  
  Yes, this would be very interesting and should point out the problem.  I
  would do such a thing if I had enough knowledge of the network pathways.
  Alas, I am very green in this area.  The receive side has no issues,
  though, so I would focus on transmit counters (with assistance).
 




Re: Re[2]: serious networking (em) performance (ggate and NFS) problem

2004-11-21 Thread Sean McNeil
On Sun, 2004-11-21 at 21:27 +0900, Shunsuke SHINOMIYA wrote:
  Jeremie, thank you for your comment.
 
  I did simple benchmark at some settings.
 
 I used two boxes, each a single Xeon 2.4 GHz with on-board em.
 I measured TCP throughput with iperf.

 These results show that TCP throughput increased when Interrupt
 Moderation was turned off.  At least, adjusting these parameters
 affected TCP performance.  Other appropriate combinations of
 parameters may exist.

I have found interrupt moderation to seriously hurt gigE performance.
Another test you can make is to have the driver always defrag packets
in em_encap().  Something like:

m_head = m_defrag(*m_headp, M_DONTWAIT);
if (m_head == NULL)
        return (ENOBUFS);
*m_headp = m_head;      /* m_defrag frees the old chain on success */
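
(For reference, m_defrag(9) frees the original chain on success and
leaves it untouched on failure, so the caller can still requeue or free
the packet when ENOBUFS comes back.)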





Re: Re[4]: serious networking (em) performance (ggate and NFS) problem

2004-11-21 Thread Sean McNeil
On Sun, 2004-11-21 at 20:42 -0800, Matthew Dillon wrote:
 : Yes, I knew that adjusting the TCP window size is important to fill
 : a link.  However, I wanted to show that adjusting the Interrupt
 : Moderation parameters affects network performance.
 :
 : And I think packet loss occurred because Interrupt Moderation was
 : enabled.  The mechanism of the packet loss in this case is not clear,
 : but I think an inappropriate TCP window size is not the only reason.
 
 Packet loss is not likely, at least not for the contrived tests we
 are doing because GiGE links have hardware flow control (I'm fairly
 sure).

I have to disagree.  Packet loss is likely according to some of my
tests.  With the re driver, no change except placing a 100BT setup with
no packet loss to a gigE setup (both linksys switches) will cause
serious packet loss at 20Mbps data rates.  I have discovered the only
way to get good performance with no packet loss was to

1) Remove interrupt moderation
2) defrag each mbuf that comes in to the driver.

Doing both of these, I get excellent performance without any packet
loss.  All my testing has been with UDP packets, however, and nothing
was checked for TCP.

 One could calculate the worst-case small-packet build-up in the receive
 ring.  I'm not sure what the minimum pad for GigE is, but let's say it's
 64 bytes.  Then the packet rate would be around 1.9M pps, or 244 packets
 per interrupt at a moderation frequency of 8000 Hz.  The ring is 256
 packets.  But don't forget the hardware flow control!  The switch
 has some buffering too.
 
 hmm... me thinks I now understand why 8000 was chosen as the default :-)
 
 I would say that this means packet loss due to the interrupt moderation
 is highly unlikely, at least in theory, but if one were paranoid one
 might want to use a higher moderation frequency, say 16000 Hz, to be sure.

Your calculations are based on the mbufs being a particular size, no?
What happens if the chains are seriously fragmented?  Is this what you
mean by small-packet?  Are you assuming the mbufs are as small as they
get?  How small can they go?  1 byte?  1 MTU?

Increasing the interrupt moderation frequency helped on the re driver,
but only marginally.  Even without moderation, however, I could lose
packets without m_defrag.  I suspect something in the higher-level
layers is causing the packet loss.  I have no explanation why m_defrag
makes such a big difference for me, but it does.  I also have no idea
why a 20Mbps UDP stream can lose data over a gigE PHY and lose nothing
over 100BT... without the above-mentioned changes, that is.
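
For reference, the arithmetic behind the quoted 1.9M pps figure,
assuming 64-byte minimum frames and ignoring preamble and inter-frame
gap:

    1e9 bits/s / (64 bytes * 8 bits/byte) ~= 1.95M packets/s
    1.95M packets/s / 8000 interrupts/s   ~= 244 packets per interrupt

against a 256-entry receive ring.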





Re: serious networking (em) performance (ggate and NFS) problem

2004-11-17 Thread Sean McNeil
On Wed, 2004-11-17 at 23:57 +0100, Emanuel Strobl wrote:
 Dear best guys,
 
 I really love 5.3 in many ways but here're some unbelievable transfer rates, 
 after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve 
 my performance problem (*laugh*):
 
 (In short, see *** below)
 
 Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI 
 Desktop adapter MT) connected directly without a switch/hub and device 
 polling compiled into a custom kernel with HZ set to 256 and 
 kern.polling.enabled set to 1:
 
 LOCAL:
 (/samsung is ufs2 on /dev/ad4p1, a SAMSUNG SP080N2)
  test3:~#7: dd if=/dev/zero of=/samsung/testfile bs=16k
  ^C10524+0 records in
  10524+0 records out
  172425216 bytes transferred in 3.284735 secs (52492882 bytes/sec)
 => ~52 MB/s
 NFS(udp,polling):
 (/samsung is nfs on test3:/samsung, via em0, x-over, polling enabled)
  test2:/#21: dd if=/dev/zero of=/samsung/testfile bs=16k
  ^C1858+0 records in
  1857+0 records out
  30425088 bytes transferred in 8.758475 secs (3473788 bytes/sec)
 => ~3.4 MB/s
 
 This example shows that using NFS over GigaBit Ethernet cuts
 performance by a factor of 15, in words: fifteen!
 
 GGATE with MTU 16114 and polling:
  test2:/dev#28: ggatec create 10.0.0.2 /dev/ad4p1
  ggate0
  test2:/dev#29: mount /dev/ggate0 /samsung/
  test2:/dev#30: dd if=/dev/zero of=/samsung/testfile bs=16k
  ^C2564+0 records in
  2563+0 records out
  41992192 bytes transferred in 15.908581 secs (2639594 bytes/sec)
 => ~2.6 MB/s
 
 GGATE without polling and MTU 16114:
  test2:~#12: ggatec create 10.0.0.2 /dev/ad4p1
  ggate0
  test2:~#13: mount /dev/ggate0 /samsung/
  test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=128k
  ^C1282+0 records in
  1281+0 records out
  167903232 bytes transferred in 11.274768 secs (14891945 bytes/sec)
 => ~15 MB/s
 ...and with 1m blocksize:
  test2:~#17: dd if=/dev/zero of=/samsung/testfile bs=1m
  ^C61+0 records in
  60+0 records out
  62914560 bytes transferred in 4.608726 secs (13651182 bytes/sec)
 => ~13.6 MB/s
 
 I can't imagine why there seems to be an absolute limit of 15 MB/s that
 can be transferred over the network.  But it's even worse; here are two
 excerpts of NFS (udp) with jumbo frames (mtu=16114):
  test2:~#23: mount 10.0.0.2:/samsung /samsung/
  test2:~#24: dd if=/dev/zero of=/samsung/testfile bs=1m
  ^C89+0 records in
  88+0 records out
  92274688 bytes transferred in 13.294708 secs (6940708 bytes/sec)
 => ~7 MB/s
 ...and with 64k blocksize:
  test2:~#25: dd if=/dev/zero of=/samsung/testfile bs=64k
  ^C848+0 records in
  847+0 records out
  55508992 bytes transferred in 8.063415 secs (6884055 bytes/sec)
 
 And with TCP-NFS (and Jumbo Frames):
  test2:~#30: mount_nfs -T 10.0.0.2:/samsung /samsung/
  test2:~#31: dd if=/dev/zero of=/samsung/testfile bs=64k
  ^C1921+0 records in
  1920+0 records out
  125829120 bytes transferred in 7.461226 secs (16864403 bytes/sec)
 => ~17 MB/s
 
 Again NFS (udp) but with MTU 1500:
  test2:~#9: mount_nfs 10.0.0.2:/samsung /samsung/
  test2:~#10: dd if=/dev/zero of=/samsung/testfile bs=8k
  ^C12020+0 records in
  12019+0 records out
  98459648 bytes transferred in 10.687460 secs (9212633 bytes/sec)
 => ~10 MB/s
 And TCP-NFS with MTU 1500:
  test2:~#12: mount_nfs -T 10.0.0.2:/samsung /samsung/
  test2:~#13: dd if=/dev/zero of=/samsung/testfile bs=8k
  ^C19352+0 records in
  19352+0 records out
  158531584 bytes transferred in 12.093529 secs (13108794 bytes/sec)
 => ~13 MB/s
 
 GGATE with default MTU of 1500, polling disabled:
  test2:~#14: dd if=/dev/zero of=/samsung/testfile bs=64k
  ^C971+0 records in
  970+0 records out
  63569920 bytes transferred in 6.274578 secs (10131346 bytes/sec)
 => ~10 MB/s
 
 
 Conclusion:
 
 ***
 
 - It seems that GEOM_GATE is less efficient with GigaBit (em) than NFS via 
 TCP 
 is.
 
 - em seems to have problems with MTU greater than 1500
 
 - UDP seems to have performance disadvantages compared to TCP for NFS,
 which should be the other way around AFAIK
 
 - polling and em (GbE) with HZ=256 is definitely no good idea; even
 10Base-2 can compete
 
 - NFS over TCP with an MTU of 16114 gives the maximum transfer rate for
 large files over GigaBit Ethernet, at 17 MB/s, a quarter of what I'd
 expect with my test equipment.
 
 - overall network performance (regarding large file transfers) is horrible
 
 Please, if anybody has the knowledge to dig into these problems, let me know 
 if I can do any tests to 

kernel crashes during boot

2002-10-10 Thread Sean McNeil

I just cvsup'd my -STABLE sources and recompiled.  My new kernel now
panics on bootup.  I couldn't capture the details, but I think it was a
page fault (trap 12) or something like that.  AMD processor.  Anyone
else experiencing this?  If not, I will try to capture all the relevant
info.

Sean






new dhcp client causing problems

2002-05-31 Thread Sean McNeil

Hi,

I've noticed that I occasionally lose connections through my computer
to the Internet.  This has been happening because the DHCP client tries
to change my IP address to something bogus and resets the Ethernet
interface:
May 30 23:44:17 blue dhclient: New Network Number: 66.75.176.0
May 30 23:44:17 blue dhclient: New Broadcast Address: 255.255.255.255
May 31 02:38:07 blue su: sean to root on /dev/ttyp0

The previous DHCP client in 4.5 did not do this.  It only happens with
the 4.6-RC.
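
One possible band-aid while this gets sorted out: pin the values the
bogus lease keeps clobbering in dhclient.conf.  A sketch, with a
hypothetical interface name and mask:

# /etc/dhclient.conf: override suspect lease options ("fxp0" and the
# mask are placeholders for the real interface and subnet)
interface "fxp0" {
        supersede subnet-mask 255.255.240.0;
}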

Cheers,
Sean


