Re: 4.2-current throughput with pf enabled

2008-01-19 Thread Chris Cohen
On Tuesday 15 January 2008 21:06:51 Chris Cohen wrote:
> On Tuesday 15 January 2008 18:13:15 Chris Cappuccio wrote:
> > Chris Cohen [EMAIL PROTECTED] wrote:
> > > I think my CPU is way too slow to be able to handle the GigE link and
> > > the filter. Aren't there any tweaks for pf.conf/sysctl?
> >
> > Your CPU only gets used for packets that you actually receive.  Your
> > performance between a gig card and a 100m card is probably not going to
> > be any different, unless your problem is related to the em driver.  It's
> > time to figure out what is fucking up your configuration.
> >
> > Have you tried disabling apm? pcibios? What does your dmesg look like?
>
> No, I haven't. I can try it at the weekend, but since the "problem" only
> appears when I enable pf I am not sure if that will buy me anything?
> Nevertheless will try to disable apm and pcibios this weekend.
>

Replying to myself... I tried both, but it didn't help :(

I think I will just upgrade to a new mini-ITX system like 
http://cgi.ebay.de/ws/eBayISAPI.dll?ViewItem&rd=1&item=260202085551&ssPageName=STRK:MEWA:IT&ih=016.
Are there any numbers (bps, ~1500-byte packets) for this CPU/NIC combination?

-- 
Thanks
Chris



Re: 4.2-current throughput with pf enabled

2008-01-15 Thread Chris Kuethe
On Jan 15, 2008 12:06 PM, Chris Cohen <[EMAIL PROTECTED]> wrote:
> On Tuesday 15 January 2008 18:13:15 Chris Cappuccio wrote:
> > Have you tried disabling apm? pcibios? What does your dmesg look like?
>
> No, I haven't. I can try it at the weekend, but since the "problem" only
> appears when I enable pf I am not sure if that will buy me anything?
> Nevertheless will try to disable apm and pcibios this weekend.

I had an old Toshiba machine that would aggressively force the PCI bus
to sleep, and that would play merry havoc with... all kinds of things.
Maybe, just maybe, you're suffering from a buggy power-management BIOS too.

-- 
GDB has a 'break' feature; why doesn't it have 'fix' too?



Re: 4.2-current throughput with pf enabled

2008-01-15 Thread Chris Cohen
On Tuesday 15 January 2008 18:13:15 Chris Cappuccio wrote:
> Chris Cohen [EMAIL PROTECTED] wrote:
> > I think my CPU is way too slow to be able to handle the GigE link and the
> > filter. Aren't there any tweaks for pf.conf/sysctl?
>
> Your CPU only gets used for packets that you actually receive.  Your
> performance between a gig card and a 100m card is probably not going to be
> any different, unless your problem is related to the em driver.  It's time
> to figure out what is fucking up your configuration.
>
> Have you tried disabling apm? pcibios? What does your dmesg look like?
>

No, I haven't. I can try it at the weekend, but since the "problem" only 
appears when I enable pf, I'm not sure that will buy me anything. 
Nevertheless, I will try disabling apm and pcibios this weekend.


This is the dmesg with a dual fxp card. (By the way, I can only get 9Mbyte/s 
through the trunk port with trunkproto loadbalance or roundrobin.)
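
For reference, such a loadbalance trunk over the two ports of the dual card
would be set up roughly like this (fxp1/fxp2 are the port names from the dmesg
below; the address is only a placeholder):

# bring up both ports of the dual card, then build the trunk on top of them
ifconfig fxp1 up
ifconfig fxp2 up
ifconfig trunk0 create
ifconfig trunk0 trunkproto loadbalance trunkport fxp1 trunkport fxp2
ifconfig trunk0 inet 192.0.2.1 netmask 255.255.255.0 up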

OpenBSD 4.2-current (GENERIC) #642: Tue Jan  8 17:06:33 MST 2008
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel Pentium III ("GenuineIntel" 686-class, 512KB L2 cache) 498 MHz
cpu0: 
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE
real mem  = 268005376 (255MB)
avail mem = 251240448 (239MB)
mainbus0 at root
bios0 at mainbus0: AT/286+ BIOS, date 02/10/99, BIOS32 rev. 0 @ 0xec700, 
SMBIOS rev. 2.1 @ 0xf15e2 (54 entries)
bios0: vendor Compaq version "686T3" date 02/10/99
bios0: Compaq Deskpro EN Series
apm0 at bios0: Power Management spec V1.2 (BIOS managing devices)
apm0: AC on, battery charge unknown
acpi at bios0 function 0x0 not configured
pcibios0 at bios0: rev 2.1 @ 0xec700/0x3900
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xf6f30/176 (9 entries)
pcibios0: PCI Interrupt Router at 000:20:0 ("Intel 82371AB PIIX4 ISA" rev 
0x00)
pcibios0: PCI bus #2 is the last bus
bios0: ROM list: 0xc0000/0x8000 0xc8000/0x1000 0xe0000/0x8000!
cpu0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x03
agp0 at pchb0: aperture at 0x4800, size 0x400
ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x03
pci1 at ppb0 bus 1
vga1 at pci0 dev 13 function 0 "S3 Trio64V2/DX" rev 0x14
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
fxp0 at pci0 dev 14 function 0 "Intel 8255x" rev 0x08, i82559: irq 11, address 
00:d0:b7:0b:97:6f
inphy0 at fxp0 phy 1: i82555 10/100 PHY, rev. 4
ppb1 at pci0 dev 15 function 0 "DEC 21154 PCI-PCI" rev 0x02
pci2 at ppb1 bus 2
fxp1 at pci2 dev 4 function 0 "Intel 8255x" rev 0x05, i82558: irq 11, address 
00:50:8b:95:a4:d2
inphy1 at fxp1 phy 1: i82555 10/100 PHY, rev. 0
fxp2 at pci2 dev 5 function 0 "Intel 8255x" rev 0x05, i82558: irq 11, address 
00:50:8b:95:a4:d3
inphy2 at fxp2 phy 1: i82555 10/100 PHY, rev. 0
piixpcib0 at pci0 dev 20 function 0 "Intel 82371AB PIIX4 ISA" rev 0x02
pciide0 at pci0 dev 20 function 1 "Intel 82371AB IDE" rev 0x01: DMA, channel 0 
wired to compatibility, channel 1 wired to compatibility
wd0 at pciide0 channel 0 drive 0: 
wd0: 1-sector PIO, LBA, 976MB, 2000880 sectors
wd0(pciide0:0:0): using PIO mode 4
atapiscsi0 at pciide0 channel 1 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0:  SCSI0 5/cdrom 
removable
cd0(pciide0:1:0): using PIO mode 4, DMA mode 2
uhci0 at pci0 dev 20 function 2 "Intel 82371AB USB" rev 0x01: irq 11
piixpm0 at pci0 dev 20 function 3 "Intel 82371AB Power" rev 0x02: SMI
iic0 at piixpm0
spdmem0 at iic0 addr 0x50: 128MB SDRAM non-parity PC133CL2
spdmem1 at iic0 addr 0x51: 128MB SDRAM non-parity PC133CL3
isa0 at piixpcib0
isadma0 at isa0
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
sb0 at isa0 port 0x220/24 irq 5 drq 1: dsp v3.01
midi0 at sb0: 
audio0 at sb0
opl0 at sb0: model OPL3
midi1 at opl0: 
pcppi0 at isa0 port 0x61
midi2 at pcppi0: 
spkr0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
npx0 at isa0 port 0xf0/16: reported by CPUID; using exception 16
pccom0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pccom1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
usb0 at uhci0: USB revision 1.0
uhub0 at usb0 "Intel UHCI root hub" rev 1.00/1.00 addr 1
biomask ff45 netmask ff45 ttymask ffc7
mtrr: Pentium Pro MTRR support
softraid0 at root
root on wd0a swap on wd0b dump on wd0b


-- 
Thanks
Chris



Re: 4.2-current throughput with pf enabled

2008-01-15 Thread Chris Cappuccio
Chris Cohen [EMAIL PROTECTED] wrote:

> I think my CPU is way too slow to be able to handle the GigE link and the
> filter. Aren't there any tweaks for pf.conf/sysctl?
> 

Your CPU only gets used for packets that you actually receive.  Your
performance between a gig card and a 100m card is probably not going to be any
different, unless your problem is related to the em driver.  It's time to
figure out what is fucking up your configuration.

Have you tried disabling apm? pcibios? What does your dmesg look like?
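
A sketch of one way to do that without building a custom kernel (standard
config(8)/UKC usage, nothing specific to this box):

boot> boot -c        # at the bootloader prompt, drop into the kernel config
disable apm
disable pcibios
quit                 # quit saves the changes and continues booting

# or, to make it permanent in the installed kernel, run "config -ef /bsd"
# and issue the same disable/quit commands there.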

> I think I will switch back to a dual 100baseTX card since I don't really need
> the extra speed, and will try to get a new low-power machine by summer...
> 
> --
> Thanks
> Chris

-- 
"You were about to change the channel when God healed you" -- Benny Hinn



Re: 4.2-current throughput with pf enabled

2008-01-12 Thread Chris Cohen
On Saturday 12 January 2008 03:44:48 scott wrote:
> I use both fxp and em NICs and have great throughput.  You may want to
> check the full-half duplex settings/agreements -- configured and
> actual-operation -- with the pf box AND EACH adjacent device.
> Disagreements can provoke a lot of re-sends.
>
Did that, all fine :)

> Also, with the slower link, you may want to try implementing queuing so
> that --at a minimum-- the tos lowlatency packets are prioritized over
> the bulk large packet traffic. Queue is assigned on the PASS OUT
> rule(s).
>
> Something like...
>
> ---pf.conf frag---
> altq on $ext_if priq bandwidth 640Kb queue { Q1, Q7 }
> queue Q7 priority 7
> queue Q1 priority 1 priq(default)
> #
> #...
> #
> pass out on $ext_if ... queue (Q1, Q7)
> #

Thank you scott, I already set up queuing.


Re: 4.2-current throughput with pf enabled

2008-01-11 Thread scott
I use both fxp and em NICs and have great throughput.  You may want to
check the full-half duplex settings/agreements -- configured and
actual-operation -- with the pf box AND EACH adjacent device.
Disagreements can provoke a lot of re-sends.
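
Concretely: compare the "media:" line of ifconfig on the pf box with the port
settings on each switch/modem it plugs into, and if autonegotiation disagrees,
force both ends the same way.  For example (interface names are just the ones
used elsewhere in this thread):

ifconfig em0                                         # check the media: line
ifconfig fxp0 media 100baseTX mediaopt full-duplex   # force it if needed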

Also, with the slower link, you may want to try implementing queuing so
that --at a minimum-- the tos lowlatency packets are prioritized over
the bulk large packet traffic. Queue is assigned on the PASS OUT
rule(s).

Something like...

---pf.conf frag---
altq on $ext_if priq bandwidth 640Kb queue { Q1, Q7 }
queue Q7 priority 7
queue Q1 priority 1 priq(default)
#
#...
#
pass out on $ext_if ... queue (Q1, Q7)
#
---pf.conf frag---
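
For what it's worth, the reason two queues appear on the pass rule is that pf
assigns packets with a TOS of lowdelay, and TCP ACKs with no payload, to the
second queue listed.  A slightly more filled-in sketch of the same frag,
assuming pppoe0 is the slow outbound link as elsewhere in this thread and
keeping the 640Kb figure from above:

---pf.conf frag---
altq on pppoe0 priq bandwidth 640Kb queue { Q1, Q7 }
queue Q7 priority 7
queue Q1 priority 1 priq(default)
#
pass out on pppoe0 inet from any to any keep state queue (Q1, Q7)
---pf.conf frag---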

/S

-Original Message-
From: Chris Cohen <[EMAIL PROTECTED]>
To: misc@openbsd.org
Subject: Re: 4.2-current throughput with pf enabled
Date: Fri, 11 Jan 2008 19:38:59 +0100
Mailer: KMail/1.9.7
Delivered-To: [EMAIL PROTECTED]

On Friday 11 January 2008 18:36:54 scott wrote:
> re-test and post with the following in your ruleset
>
> pass in quick on fxp0 inet from any to any keep state
> pass out quick on $ext_if inet from any to any  keep state
>
Did that, didn't change anything. Maybe I should add some details:
I generated the traffic by simply dding from /dev/zero from one machine in my 
lan to a machine in my dmz (but i got almost the same results with ftp/http). 
They are in two different vlans which are both attached to em0. fxp0 is the 
interface to my adsl modem.



Re: 4.2-current throughput with pf enabled

2008-01-11 Thread James Records
Try using something like iperf or netperf to get more results than just
icmp.
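
For example, with iperf from ports installed on a host in the LAN and one in
the DMZ (the host name below is a placeholder):

iperf -s                          # on the DMZ host
iperf -c dmz-host -t 30 -i 5      # on the LAN host: 30s TCP test through pf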

J

On Jan 11, 2008 9:36 AM, scott <[EMAIL PROTECTED]> wrote:

> re-test and post with in your ruleset
>
> pass in quick on fxp0 inet from any to any keep state
> pass out quick on $ext_if inet from any to any  keep state
>
> /S
>
> -Original Message-
> From: Chris Cohen <[EMAIL PROTECTED]>
> To: misc@openbsd.org
> Subject: 4.2-current throughput with pf enabled
> Date: Fri, 11 Jan 2008 17:45:37 +0100
> Mailer: KMail/1.9.7
> Delivered-To: [EMAIL PROTECTED]
>
> Hi,
>
> I just upgraded my home firewall/router from 4.1 to a current snapshot
> from
> 9th January. I also changed the NIC which is connected to my core switch
> from
> fxp to em and upgraded the memory from 128Mb to 256Mb.
> With PF disabled I can route about 40Mbyte/s (sorry, don't have pps but
> the
> traffic should mostly be large packets) and the system still responds very
> well. (To get some numbers I just pinged the machine...):
>
> PING 10.1.0.254 (10.1.0.254) 56(84) bytes of data.
> 64 bytes from 10.1.0.254: icmp_seq=1 ttl=255 time=2.39 ms
> 64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=0.078 ms
> 64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.077 ms
> 64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=0.258 ms
> 64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.63 ms
> 64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=2.03 ms
> 64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.87 ms
> 64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=0.954 ms
> 64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=2.65 ms
> 64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=0.315 ms
>
> --- 10.1.0.254 ping statistics ---
> 10 packets transmitted, 10 received, 0% packet loss, time 9007ms
> rtt min/avg/max/mdev = 0.077/1.228/2.657/0.955 ms
>
> With pf enabled and a very short ruleset (see pf.conf below) the system
> doesn't respond to many of the dns queries (bind9 is also enabled on this
> system) and the throughput is decreased to about 10Mbyte/s with the same
> kind
> of traffic as above. See my stupid pingtest:
>
> PING 10.1.0.254 56(84) bytes of data.
> 64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=5.39 ms
> 64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.206 ms
> 64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=9.87 ms
> 64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.35 ms
> 64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=10.1 ms
> 64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.47 ms
> 64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=11.1 ms
> 64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=11.8 ms
> 64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=12.1 ms
> 64 bytes from 10.1.0.254: icmp_seq=11 ttl=255 time=11.7 ms
> 64 bytes from 10.1.0.254: icmp_seq=12 ttl=255 time=12.7 ms
> 64 bytes from 10.1.0.254: icmp_seq=13 ttl=255 time=11.3 ms
> 64 bytes from 10.1.0.254: icmp_seq=14 ttl=255 time=14.0 ms
> 64 bytes from 10.1.0.254: icmp_seq=15 ttl=255 time=12.2 ms
> 64 bytes from 10.1.0.254: icmp_seq=16 ttl=255 time=11.7 ms
> 64 bytes from 10.1.0.254: icmp_seq=17 ttl=255 time=14.7 ms
> 64 bytes from 10.1.0.254: icmp_seq=18 ttl=255 time=11.1 ms
> 64 bytes from 10.1.0.254: icmp_seq=19 ttl=255 time=3.01 ms
>
> --- 10.1.0.254 ping statistics ---
> 19 packets transmitted, 18 received, 5% packet loss, time 18026ms
> rtt min/avg/max/mdev = 0.206/9.239/14.713/4.549 ms
>
> With openbsd 4.1 and an fxp NIC instead of the em one the system was able
> to
> handle full 12Mbyte/s with a pretty complex pf.conf (more than 200 lines).
> The system is an old Compaq Deskpro EN with a P3/500 and 256Mb of ram.
>
>
> pf.conf (already played with scrub, skip and pass with no success...)
> -
> ext_if="pppoe0"
> set skip on lo
> set skip on em0
> #scrub in
> scrub out on pppoe0 max-mss 1440 no-df random-id fragment reassemble
> nat-anchor "ftp-proxy/*"
> rdr-anchor "ftp-proxy/*"
> nat on $ext_if from !($ext_if) -> ($ext_if:0)
> nat on fxp0 from any to 10.1.0.253 -> 10.1.0.254
> rdr pass on vlan10 proto tcp to port ftp -> 127.0.0.1 port 8021
> anchor "ftp-proxy/*"
> #block in on pppoe0
> #pass out
>
> Is there anything I can tune in pf?
> Should I provide a dmesg?



Re: 4.2-current throughput with pf enabled

2008-01-11 Thread Chris Cohen
On Friday 11 January 2008 18:36:54 scott wrote:
> re-test and post with the following in your ruleset
>
> pass in quick on fxp0 inet from any to any keep state
> pass out quick on $ext_if inet from any to any  keep state
>
Did that, it didn't change anything. Maybe I should add some details:
I generated the traffic by simply dd'ing from /dev/zero from one machine in my 
LAN to a machine in my DMZ (but I got almost the same results with ftp/http). 
They are in two different VLANs, both attached to em0. fxp0 is the 
interface to my ADSL modem.
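
For anyone who wants to reproduce it, that kind of dd test is typically just
something like the following (port number and host name are arbitrary):

nc -l 2222 > /dev/null                                 # on the DMZ machine
dd if=/dev/zero bs=64k count=10000 | nc dmz-host 2222  # on the LAN machine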

-- 
Thanks
Chris



Re: 4.2-current throughput with pf enabled

2008-01-11 Thread scott
re-test and post with the following in your ruleset

pass in quick on fxp0 inet from any to any keep state
pass out quick on $ext_if inet from any to any  keep state

/S

-Original Message-
From: Chris Cohen <[EMAIL PROTECTED]>
To: misc@openbsd.org
Subject: 4.2-current throughput with pf enabled
Date: Fri, 11 Jan 2008 17:45:37 +0100
Mailer: KMail/1.9.7
Delivered-To: [EMAIL PROTECTED]

Hi,

I just upgraded my home firewall/router from 4.1 to a current snapshot from
9th January. I also changed the NIC which is connected to my core switch from 
fxp to em and upgraded the memory from 128Mb to 256Mb.
With PF disabled I can route about 40Mbyte/s (sorry, don't have pps but the 
traffic should mostly be large packets) and the system still responds very 
well. (To get some numbers I just pinged the machine...):

PING 10.1.0.254 (10.1.0.254) 56(84) bytes of data.
64 bytes from 10.1.0.254: icmp_seq=1 ttl=255 time=2.39 ms
64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=0.078 ms
64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.077 ms
64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=0.258 ms
64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.63 ms
64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=2.03 ms
64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.87 ms
64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=0.954 ms
64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=2.65 ms
64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=0.315 ms

--- 10.1.0.254 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9007ms
rtt min/avg/max/mdev = 0.077/1.228/2.657/0.955 ms

With pf enabled and a very short ruleset (see pf.conf below) the system 
doesn't respond to many of the dns queries (bind9 is also enabled on this 
system) and the throughput is decreased to about 10Mbyte/s with the same kind 
of traffic as above. See my stupid pingtest:

PING 10.1.0.254 56(84) bytes of data.
64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=5.39 ms
64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.206 ms
64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=9.87 ms
64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.35 ms
64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=10.1 ms
64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.47 ms
64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=11.1 ms
64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=11.8 ms
64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=12.1 ms
64 bytes from 10.1.0.254: icmp_seq=11 ttl=255 time=11.7 ms
64 bytes from 10.1.0.254: icmp_seq=12 ttl=255 time=12.7 ms
64 bytes from 10.1.0.254: icmp_seq=13 ttl=255 time=11.3 ms
64 bytes from 10.1.0.254: icmp_seq=14 ttl=255 time=14.0 ms
64 bytes from 10.1.0.254: icmp_seq=15 ttl=255 time=12.2 ms
64 bytes from 10.1.0.254: icmp_seq=16 ttl=255 time=11.7 ms
64 bytes from 10.1.0.254: icmp_seq=17 ttl=255 time=14.7 ms
64 bytes from 10.1.0.254: icmp_seq=18 ttl=255 time=11.1 ms
64 bytes from 10.1.0.254: icmp_seq=19 ttl=255 time=3.01 ms

--- 10.1.0.254 ping statistics ---
19 packets transmitted, 18 received, 5% packet loss, time 18026ms
rtt min/avg/max/mdev = 0.206/9.239/14.713/4.549 ms

With openbsd 4.1 and an fxp NIC instead of the em one the system was able to 
handle full 12Mbyte/s with a pretty complex pf.conf (more than 200 lines).
The system is an old Compaq Deskpro EN with a P3/500 and 256Mb of ram.


pf.conf (already played with scrub, skip and pass with no success...)
-
ext_if="pppoe0"
set skip on lo
set skip on em0
#scrub in
scrub out on pppoe0 max-mss 1440 no-df random-id fragment reassemble
nat-anchor "ftp-proxy/*"
rdr-anchor "ftp-proxy/*"
nat on $ext_if from !($ext_if) -> ($ext_if:0)
nat on fxp0 from any to 10.1.0.253 -> 10.1.0.254
rdr pass on vlan10 proto tcp to port ftp -> 127.0.0.1 port 8021
anchor "ftp-proxy/*"
#block in on pppoe0
#pass out

Is there anything I can tune in pf?
Should I provide a dmesg?



4.2-current throughput with pf enabled

2008-01-11 Thread Chris Cohen
Hi,

I just upgraded my home firewall/router from 4.1 to a current snapshot from 
9th January. I also changed the NIC which is connected to my core switch from 
fxp to em and upgraded the memory from 128MB to 256MB.
With PF disabled I can route about 40Mbyte/s (sorry, I don't have pps, but the 
traffic should mostly be large packets) and the system still responds very 
well. (To get some numbers I just pinged the machine...):

PING 10.1.0.254 (10.1.0.254) 56(84) bytes of data.
64 bytes from 10.1.0.254: icmp_seq=1 ttl=255 time=2.39 ms
64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=0.078 ms
64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.077 ms
64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=0.258 ms
64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.63 ms
64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=2.03 ms
64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.87 ms
64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=0.954 ms
64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=2.65 ms
64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=0.315 ms

--- 10.1.0.254 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9007ms
rtt min/avg/max/mdev = 0.077/1.228/2.657/0.955 ms

With pf enabled and a very short ruleset (see pf.conf below), the system 
fails to answer many of the DNS queries (bind9 is also enabled on this 
system) and the throughput drops to about 10Mbyte/s with the same kind 
of traffic as above. See my stupid ping test:

PING 10.1.0.254 56(84) bytes of data.
64 bytes from 10.1.0.254: icmp_seq=2 ttl=255 time=5.39 ms
64 bytes from 10.1.0.254: icmp_seq=3 ttl=255 time=0.206 ms
64 bytes from 10.1.0.254: icmp_seq=4 ttl=255 time=9.87 ms
64 bytes from 10.1.0.254: icmp_seq=5 ttl=255 time=1.35 ms
64 bytes from 10.1.0.254: icmp_seq=6 ttl=255 time=10.1 ms
64 bytes from 10.1.0.254: icmp_seq=7 ttl=255 time=1.47 ms
64 bytes from 10.1.0.254: icmp_seq=8 ttl=255 time=11.1 ms
64 bytes from 10.1.0.254: icmp_seq=9 ttl=255 time=11.8 ms
64 bytes from 10.1.0.254: icmp_seq=10 ttl=255 time=12.1 ms
64 bytes from 10.1.0.254: icmp_seq=11 ttl=255 time=11.7 ms
64 bytes from 10.1.0.254: icmp_seq=12 ttl=255 time=12.7 ms
64 bytes from 10.1.0.254: icmp_seq=13 ttl=255 time=11.3 ms
64 bytes from 10.1.0.254: icmp_seq=14 ttl=255 time=14.0 ms
64 bytes from 10.1.0.254: icmp_seq=15 ttl=255 time=12.2 ms
64 bytes from 10.1.0.254: icmp_seq=16 ttl=255 time=11.7 ms
64 bytes from 10.1.0.254: icmp_seq=17 ttl=255 time=14.7 ms
64 bytes from 10.1.0.254: icmp_seq=18 ttl=255 time=11.1 ms
64 bytes from 10.1.0.254: icmp_seq=19 ttl=255 time=3.01 ms

--- 10.1.0.254 ping statistics ---
19 packets transmitted, 18 received, 5% packet loss, time 18026ms
rtt min/avg/max/mdev = 0.206/9.239/14.713/4.549 ms

With OpenBSD 4.1 and an fxp NIC instead of the em one, the system was able to 
handle the full 12Mbyte/s with a pretty complex pf.conf (more than 200 lines).
The system is an old Compaq Deskpro EN with a P3/500 and 256MB of RAM.


pf.conf (already played with scrub, skip and pass with no success...)
-
ext_if="pppoe0"
set skip on lo
set skip on em0
#scrub in
scrub out on pppoe0 max-mss 1440 no-df random-id fragment reassemble
nat-anchor "ftp-proxy/*"
rdr-anchor "ftp-proxy/*"
nat on $ext_if from !($ext_if) -> ($ext_if:0)
nat on fxp0 from any to 10.1.0.253 -> 10.1.0.254
rdr pass on vlan10 proto tcp to port ftp -> 127.0.0.1 port 8021
anchor "ftp-proxy/*"
#block in on pppoe0
#pass out

Is there anything I can tune in pf?
Should I provide a dmesg?
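
(The standard places to look while such a transfer is running, to see where
the time goes once pf is enabled, would be something like the following --
commands only, nothing box-specific assumed:)

pfctl -si        # state table size, searches, and the congestion counter
pfctl -vsr       # per-rule evaluation and packet counters
sysctl net.inet.ip.ifq.maxlen net.inet.ip.ifq.drops    # IP input queue drops
top -S           # how much CPU goes to interrupt/system time under load

If net.inet.ip.ifq.drops keeps climbing during a test, raising
net.inet.ip.ifq.maxlen is one of the few generic knobs to try.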

-- 
Thanks
Chris