I chose pf+altq for the traffic shaper because it seems to match my needs
better. I use ipfw where a firewall is needed, but from the point of view of
easy administration pf+altq is nicer for traffic shaping, so I have not
tested shaping performance with ipfw this time. If it turns out that good
enough results cannot be achieved this way, I'll try ipfw+dummynet.
The server is i386-based.
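For reference, the kind of pf.conf ALTQ setup I am testing looks roughly
like this - a minimal sketch only; the em0 interface, queue names, bandwidths
and the 192.0.2.0/24 customer net are placeholders, not my real config:

altq on em0 cbq bandwidth 100Mb queue { q_def, q_cust }
queue q_def  bandwidth 20Mb cbq(default borrow)
queue q_cust bandwidth 80Mb cbq(borrow)

# on a bridge pf still filters/queues on the member interfaces,
# so shaping is attached to traffic leaving em0; the catch-all rule
# comes first because pf is last-match-wins
pass out on em0 all queue q_def
pass out on em0 from any to 192.0.2.0/24 queue q_cust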

Still, I am quite sure it is possible to tune this configuration further,
but I need to find the bottleneck first...
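The things I plan to check while looking for it are the usual counters:

vmstat -i        # interrupt rate on the em interfaces
top -SH          # CPU time eaten by kernel/interrupt threads
netstat -m       # mbuf/cluster usage and denied requests
pfctl -si        # pf state table size and packet/rule counters
pfctl -vvsq      # per-queue ALTQ statistics (drops, borrows)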

Ugis
>Hi,
>Just for my information, what performance do you get if you replace pf with ipfw?
>and which FreeBSD 7.0 version? i386 or amd64?
>Regards
>Rmkml
 On Fri, 14 Mar 2008, [EMAIL PROTECTED] wrote:

> Date: Fri, 14 Mar 2008 12:51:50 +0200
> From: [EMAIL PROTECTED]
> To: freebsd-performance@freebsd.org
> Subject: FreeBSD 7.0 bridge tuning
> 
> Hello!
>
> I'm trying to tune FreeBSD 7.0 bridge.
>
> Environment:
> Server - 2 x Xeon 3GHz, 2 x Gb LAN (em driver) + 1 LAN for management,
> 1GB RAM.
> Testers - 2 x Sunrise Telecom 100Mbit Ethernet testers for traffic
> generation.
>
> What I intend to achieve is to replace the proprietary traffic shaper
> Allot with a FreeBSD traffic shaper (bridge + PF + ALTQ).
> The minimum task is to make the FreeBSD shaper perform perfectly with
> 100Mbit traffic across the whole spectrum of packet lengths (from 64 bytes
> to at least 1518 bytes).
>
> The situation now:
> with pf turned off there is no problem: bridge throughput is
> 100Mbit/s with no packet loss (starting from 64-byte packets)
>
> With pf enabled I get these results:
> packet length -> Mbit/s without packet loss
> 64 -> 46
> 100 -> 66
> 150 -> 94
> >200 -> 100
>
> The kernel and sysctl configuration is shown below.
>
> I don't know what else I could tune.
>
> It seems to me that the bottleneck is somewhere around the pf/kernel
> buffers for packet headers. I read somewhere that when bridging, the
> packet payload does not travel through the whole stack - just the header
> is evaluated.
> With 64-byte packets there are more packets per time unit for the same
> bandwidth on the interfaces, and since the plain layer 2 bridge handles
> 100Mbit/s with no problem, the problem must be above layer 2 :)
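(Just to put numbers on that: at 64-byte frames, 100 Mbit/s on the wire is
100,000,000 / ((64 + 20) * 8) ~= 148,800 packets/s - the extra 20 bytes are
preamble and inter-frame gap - while at 1518 bytes it is only about 8,100
packets/s, so pf has roughly 18 times more headers per second to evaluate
at the small end.)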
>
> btw: kern.polling.enable=1 does not help - at a packet length of 64 bytes
> performance is 2x worse than with interrupts.
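(If I retest polling I will also try switching it per interface, e.g.
"ifconfig em0 polling" and "ifconfig em1 polling", in case that behaves
differently from the global sysctl on 7.0.)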
> kernel:
> ---------------------------
>
> cpu             I686_CPU
> ident           ALLOT 
>
> # To statically compile in device wiring instead of /boot/device.hints
> #hints          "GENERIC.hints"         # Default places to look for devices.
>
> makeoptions     DEBUG=-g                # Build kernel with gdb(1) debug symbols
>
> options         SCHED_ULE               # ULE scheduler
> #options        SCHED_4BSD              # 4BSD scheduler
> options         PREEMPTION              # Enable kernel thread preemption
> options         INET                    # InterNETworking
> #options        INET6                   # IPv6 communications protocols
> #options        SCTP                    # Stream Control Transmission Protocol
> options         FFS                     # Berkeley Fast Filesystem
> options         SOFTUPDATES             # Enable FFS soft updates support
> options         UFS_ACL                 # Support for access control lists
> options         UFS_DIRHASH             # Improve performance on big directories
> options         UFS_GJOURNAL            # Enable gjournal-based UFS journaling
> options         MD_ROOT                 # MD is a potential root device
> options         NFSCLIENT               # Network Filesystem Client
> options         NFSSERVER               # Network Filesystem Server
> options         NFS_ROOT                # NFS usable as /, requires NFSCLIENT
> options         MSDOSFS                 # MSDOS Filesystem
> options         CD9660                  # ISO 9660 Filesystem
> options         PROCFS                  # Process filesystem (requires PSEUDOFS)
> options         PSEUDOFS                # Pseudo-filesystem framework
> options         GEOM_PART_GPT           # GUID Partition Tables.
> options         GEOM_LABEL              # Provides labelization
> options         COMPAT_43TTY            # BSD 4.3 TTY compat [KEEP THIS!]
> options         COMPAT_FREEBSD4         # Compatible with FreeBSD4
> options         COMPAT_FREEBSD5         # Compatible with FreeBSD5
> options         COMPAT_FREEBSD6         # Compatible with FreeBSD6
> options         SCSI_DELAY=5000         # Delay (in ms) before probing SCSI
> options         KTRACE                  # ktrace(1) support
> options         SYSVSHM                 # SYSV-style shared memory
> options         SYSVMSG                 # SYSV-style message queues
> options         SYSVSEM                 # SYSV-style semaphores
> options         _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
> options         KBD_INSTALL_CDEV        # install a CDEV entry in /dev
> options         ADAPTIVE_GIANT          # Giant mutex is adaptive.
> options         STOP_NMI                # Stop CPUS using NMI instead of IPI
> options         AUDIT                   # Security event auditing
>
> options ALTQ
> options ALTQ_CBQ
> options ALTQ_RED
> options ALTQ_RIO
> options ALTQ_HFSC
> options ALTQ_CDNR
> options ALTQ_PRIQ
> options ALTQ_NOPCC
> options HZ=1000
> options DEVICE_POLLING
> options IPSTEALTH
> options ZERO_COPY_SOCKETS
> options MPTABLE_FORCE_HTT       # Enable HTT CPUs with the MP Table
> options IPI_PREEMPTION
>
> # To make an SMP kernel, the next two lines are needed
> options         SMP                     # Symmetric MultiProcessor Kernel
> device          apic                    # I/O APIC
> --------------------------------
>
> /etc/sysctl.conf
> #kern.polling.enable=1
> kern.ipc.nmbclusters=32768
> kern.ipc.maxsockbuf=2097152
> kern.ipc.somaxconn=8192
> kern.maxfiles=65536
> kern.maxfilesperproc=32768
> net.inet.tcp.delayed_ack=0
> net.inet.tcp.sendspace=65535
> net.inet.udp.recvspace=65535
> net.inet.udp.maxdgram=57344
> net.local.stream.recvspace=65535
> net.local.stream.sendspace=65535
> kern.polling.user_frac=20
> net.isr.direct=0
> net.inet.ip.forwarding=1
> -------------------------------
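One more thing still on my list is growing the em descriptor rings and the
mbuf cluster pool from /boot/loader.conf - I have not yet checked which of
these tunables this em driver version actually honours, so the names and
values are tentative:

hw.em.rxd=4096              # larger RX descriptor ring (if supported)
hw.em.txd=4096              # larger TX descriptor ring (if supported)
kern.ipc.nmbclusters=65536  # more mbuf clusters reserved at boot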
>
> P.S. I tried pfSense, but since we have used Allot before, we need to
> see per-queue statistics as graphs; pfSense just offers numbers.
> It seems to me that pfSense is good for many things, but not for
> bridge + traffic shaping - correct me if I'm wrong.
>
> Best regards,
> Ugis
_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[EMAIL PROTECTED]"