Re: packet loss when > 1000 clients connect

2019-04-16 Thread R0me0 ***
+1

On Tue, 16 Apr 2019 at 09:44, Torsten wrote:

> [...]


Re: packet loss when > 1000 clients connect

2019-04-16 Thread Torsten
> Check with pfctl -si whether you are reaching a limit

Thanks, will do.

Marc Peters also suggested checking the pf state limit; digging into
that, I found

  https://serverascode.com/2011/09/12/openbsd-pf-set-limit-states.html

and therefore added

  set limit states 20

to pf.conf.
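
As a sketch of applying and verifying a change like the one above (the
limit value here is an illustrative placeholder, not the value from this
mail):

  # pf.conf -- raise the state-table limit; size it for your workload
  set limit states 100000

  # reload the ruleset and confirm the new limit is active
  pfctl -f /etc/pf.conf
  pfctl -sm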



Re: packet loss when > 1000 clients connect

2019-04-16 Thread Denis Fondras
On Tue, Apr 16, 2019 at 11:07:47AM +0200, Torsten wrote:
> Hi!
> 
> Problem description:
> In a customer's network, more than 2k clients connect to a server and
> perform HTTPS requests. When more and more clients become active in the
> morning, the number of connections rises until more and more clients fail
> to connect to the server. The reason appears to be packet loss.
> 
> 
> Question:
> Are we hitting system limits, or exhausting resources that we should have
> configured higher? Any other ideas on what to look for?
> 

Check with pfctl -si whether you are reaching a limit



Re: Packet loss with latest snapshot

2019-03-04 Thread Tony Sarendal
On Mon, 4 Mar 2019, 13:29 David Gwynne,  wrote:

> [...]

Re: Packet loss with latest snapshot

2019-03-04 Thread David Gwynne
On Mon, Mar 04, 2019 at 10:36:23AM +0100, Tony Sarendal wrote:
> [...]

Re: Packet loss with latest snapshot

2019-03-04 Thread Tony Sarendal
On Mon, 4 Mar 2019, 09:43 Tony Sarendal,  wrote:

> [...]

Re: Packet loss with latest snapshot

2019-03-04 Thread Tony Sarendal
On Mon, 4 Mar 2019 at 09:26, Tony Sarendal wrote:

> [...]

Re: Packet loss with latest snapshot

2019-03-04 Thread Tony Sarendal
On Sun, 3 Mar 2019 at 21:35, Theo de Raadt wrote:

> [...]

Re: Packet loss with latest snapshot

2019-03-03 Thread Theo de Raadt
Tony,

Are you out of your mind?  You didn't provide even a rough hint about
what your firewall configuration looks like.  You recognize that's
pathetic, right?

> Earlier in the week I could run parallel ping-pong tests through my test
> firewalls at 300kpps without any packet loss. I updated to the latest
> snapshot today and started to see packet loss at around 80kpps.
> 
> /T
> 
> OpenBSD 6.5-beta (GENERIC.MP) #764: Sun Mar  3 10:24:08 MST 2019
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> real mem = 34300891136 (32711MB)
> avail mem = 33251393536 (31711MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xec170 (34 entries)
> bios0: vendor American Megatrends Inc. version "3.0" date 04/24/2015
> bios0: Supermicro X10SLD
> acpi0 at bios0: rev 2
> acpi0: sleep states S0 S4 S5
> acpi0: tables DSDT FACP APIC FPDT FIDT SSDT SSDT MCFG PRAD HPET SSDT SSDT
> SPMI DMAR EINJ ERST HEST BERT
> acpi0: wakeup devices PEGP(S4) PEG0(S4) PEGP(S4) PEG1(S4) PEGP(S4) PEG2(S4)
> PXSX(S4) RP01(S4) PXSX(S4) RP02(S4) PXSX(S4) RP03(S4) PXSX(S4) RP04(S4)
> PXSX(S4) RP05(S4) [...]
> acpitimer0 at acpi0: 3579545 Hz, 24 bits
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> cpu0 at mainbus0: apid 0 (boot processor)
> cpu0: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz, 3500.68 MHz, 06-3c-03
> cpu0:
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu0: 256KB 64b/line 8-way L2 cache
> cpu0: smt 0, core 0, package 0
> mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
> cpu0: apic clock running at 99MHz
> cpu0: mwait min=64, max=64, C-substates=0.2.1.2.4, IBE
> cpu1 at mainbus0: apid 2 (application processor)
> cpu1: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz, 3500.01 MHz, 06-3c-03
> cpu1:
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu1: 256KB 64b/line 8-way L2 cache
> cpu1: smt 0, core 1, package 0
> cpu2 at mainbus0: apid 4 (application processor)
> cpu2: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz, 3500.01 MHz, 06-3c-03
> cpu2:
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu2: 256KB 64b/line 8-way L2 cache
> cpu2: smt 0, core 2, package 0
> cpu3 at mainbus0: apid 6 (application processor)
> cpu3: Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz, 3500.01 MHz, 06-3c-03
> cpu3:
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN
> cpu3: 256KB 64b/line 8-way L2 cache
> cpu3: smt 0, core 3, package 0
> ioapic0 at mainbus0: apid 8 pa 0xfec0, version 20, 24 pins
> acpimcfg0 at acpi0
> acpimcfg0: addr 0xf800, bus 0-63
> acpihpet0 at acpi0: 14318179 Hz
> acpiprt0 at acpi0: bus 0 (PCI0)
> acpiprt1 at acpi0: bus 1 (PEG0)
> acpiprt2 at acpi0: bus 2 (PEG1)
> acpiprt3 at acpi0: bus -1 (PEG2)
> acpiprt4 at acpi0: bus 3 (RP01)
> acpiprt5 at acpi0: bus -1 (RP02)
> acpiprt6 at acpi0: bus -1 (RP03)
> acpiprt7 at acpi0: bus -1 (RP04)
> acpiprt8 at acpi0: bus -1 (RP05)
> acpiprt9 at acpi0: bus -1 (RP06)
> acpiprt10 at acpi0: bus -1 (RP07)
> acpiprt11 at acpi0: bus -1 (RP08)
> acpiec0 at acpi0: not present
> acpicpu0 at acpi0: C1(@1 halt!)
> acpicpu1 at acpi0: C1(@1 halt!)
> acpicpu2 at acpi0: C1(@1 halt!)
> acpicpu3 at acpi0: C1(@1 halt!)
> acpipwrres0 at acpi0: PG00, resource for PEG0
> acpipwrres1 at acpi0: PG01, resource for PEG1
> acpipwrres2 at acpi0: PG02, resource for PEG2
> acpipwrres3 at acpi0: FN00, resource for FAN0
> acpipwrres4 at acpi0: FN01, resource for FAN1
> acpipwrres5 at acpi0: FN02, resource for FAN2
> acpipwrres6 at acpi0: FN03, resource for FAN3
> acpipwrres7 at acpi0: FN04, 

Re: Packet loss on traffic flowing between VLANs

2016-06-02 Thread Evgeniy Sudyr
Good to know it helped.

You probably also want to look at "set optimization aggressive"; it will
also reduce the number of states, if it works for your use cases.
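
A minimal sketch combining the checks and tuning discussed in this thread
(run the pfctl commands while the loss is reproducible; the pf.conf line
is illustrative, not taken from the original mails):

  # observe state-table usage and the configured hard limits under load
  pfctl -si      # look at "current entries" under States
  pfctl -sm      # look at the "states hard limit" line

  # pf.conf -- expire idle states sooner so the table drains faster
  set optimization aggressive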

--
Evgeniy

On Thu, Jun 2, 2016 at 2:40 PM, Tim Korn  wrote:
> Hi Evgeniy,
> Thank you for your reply.  The states hard limit was the problem.  The
> default limit is quite low :)
>
>
> --
> Tim Korn
> Network Ninja
>
>
> On Thu, Jun 2, 2016 at 3:48 AM, Evgeniy Sudyr  wrote:
>>
>> Tim,
>>
>> From your problem description I can suggest checking whether you are
>> hitting the
>>
>> states hard limit (note: during load, when you can reproduce the issue):
>>
>> pfctl -si
>> pfctl -sm
>>
>> Default limit is: states    hard limit    1
>>
>> --
>> Evgeniy
>>
>> On Thu, Jun 2, 2016 at 3:29 AM, Tim Korn  wrote:
>> > Hi.  I have a pair of OpenBSD boxes (5.8) set up as a core/firewall.  I
>> > have
>> > ten VLANs tied to a physical NIC (Intel 82599).  This is a new setup and
>> > it
>> > was just recently put in service.  Traffic was fine (or at least we
>> > didn't
>> > notice any issues) until a large job was run which roughly doubled
>> > traffic
>> > going thru the firewall.  Traffic rate is still extremely low... roughly
>> > 2k
>> > packets per second on the interface in question and around 20Mb.  I have
>> > other identical openBSD boxes that don't use VLANs, and they pass
>> > multiple
>> > gigs of traffic per second, so I'm having a hard time not leaning
>> > towards
>> > it being a VLAN issue, however I don't know where to look to prove it.
>> >
>> > If a host in vlan100 pings a host in vlan101 I see packet loss on the
>> > first
>> > few packets, then all subsequent packets pass.  Stopping and restarting
>> > the
>> > ping results in the same thing... first few pings lost, then responses
>> > and
>> > never fail again until the ping is stopped and restarted.  We see this
>> > behavior with pretty much any new connection.  I can replicate it
>> > consistently with ICMP, TCP, and UDP traffic.
>> >
>> > PF ruleset is quite basic.  Simple *pass in* rules on the VLANs and
>> > *pass
>> > out* is allowed on all interfaces.  icmp has a rule at the top saying
>> > "pass
>> > log quick proto icmp".  I really don't think there's a pf issue of any
>> > kind.
>> >
>> > I've run a tcpdump to confirm that packets come in on vlan100, and never
>> > leave vlan101.  Here is an example:
>> >
>> > Ping from host in vlan100 (you can see the seq start at 9.  first 8
>> > never left the firewall):
>> > [root@pakkit ~]# ping 10.95.1.50
>> > PING 10.95.1.50 (10.95.1.50) 56(84) bytes of data.
>> > 64 bytes from 10.95.1.50: icmp_seq=9 ttl=63 time=0.263 ms
>> > 64 bytes from 10.95.1.50: icmp_seq=10 ttl=63 time=0.341 ms
>> > 64 bytes from 10.95.1.50: icmp_seq=11 ttl=63 time=0.335 ms
>> > 64 bytes from 10.95.1.50: icmp_seq=12 ttl=63 time=0.348 ms
>> > 64 bytes from 10.95.1.50: icmp_seq=13 ttl=63 time=0.348 ms
>> >
>> >
>> >
>> > tcpdump on vlan100 showing 13 echo requests:
>> > [root@pci-ny2-fw1:~ (master)] tcpdump -neti vlan100 host 10.95.0.5 and
>> > host 10.95.1.50
>> > tcpdump: listening on vlan100, link-type EN10MB
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
>> > icmp: echo reply
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
>> > icmp: echo reply
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
>> > icmp: echo reply
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
>> > icmp: echo reply
>> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
>> > icmp: echo request (DF)
>> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
>> > icmp: echo reply
>> > ^C

Re: Packet loss on traffic flowing between VLANs

2016-06-02 Thread Tim Korn
Hi Evgeniy,
Thank you for your reply.  The states hard limit was the problem.  The
default limit is quite low :)


--
Tim Korn
Network Ninja


On Thu, Jun 2, 2016 at 3:48 AM, Evgeniy Sudyr  wrote:

> Tim,
>
> From your problem description I can suggest checking whether you are
> hitting the
>
> states hard limit (note: during load, when you can reproduce the issue):
>
> pfctl -si
> pfctl -sm
>
> Default limit is: states    hard limit    1
>
> --
> Evgeniy
>
> On Thu, Jun 2, 2016 at 3:29 AM, Tim Korn  wrote:
> > Hi.  I have a pair of OpenBSD boxes (5.8) set up as a core/firewall.  I
> have
> > ten VLANs tied to a physical NIC (Intel 82599).  This is a new setup and
> it
> > was just recently put in service.  Traffic was fine (or at least we
> didn't
> > notice any issues) until a large job was run which roughly doubled
> traffic
> > going thru the firewall.  Traffic rate is still extremely low... roughly
> 2k
> > packets per second on the interface in question and around 20Mb.  I have
> > other identical openBSD boxes that don't use VLANs, and they pass
> multiple
> > gigs of traffic per second, so I'm having a hard time not leaning towards
> > it being a VLAN issue, however I don't know where to look to prove it.
> >
> > If a host in vlan100 pings a host in vlan101 I see packet loss on the
> first
> > few packets, then all subsequent packets pass.  Stopping and restarting
> the
> > ping results in the same thing... first few pings lost, then responses
> and
> > never fail again until the ping is stopped and restarted.  We see this
> > behavior with pretty much any new connection.  I can replicate it
> > consistently with ICMP, TCP, and UDP traffic.
> >
> > PF ruleset is quite basic.  Simple *pass in* rules on the VLANs and *pass
> > out* is allowed on all interfaces.  icmp has a rule at the top saying
> "pass
> > log quick proto icmp".  I really don't think there's a pf issue of any
> kind.
> >
> > I've run a tcpdump to confirm that packets come in on vlan100, and never
> > leave vlan101.  Here is an example:
> >
> > Ping from host in vlan100 (you can see the seq start at 9.  first 8
> > never left the firewall):
> > [root@pakkit ~]# ping 10.95.1.50
> > PING 10.95.1.50 (10.95.1.50) 56(84) bytes of data.
> > 64 bytes from 10.95.1.50: icmp_seq=9 ttl=63 time=0.263 ms
> > 64 bytes from 10.95.1.50: icmp_seq=10 ttl=63 time=0.341 ms
> > 64 bytes from 10.95.1.50: icmp_seq=11 ttl=63 time=0.335 ms
> > 64 bytes from 10.95.1.50: icmp_seq=12 ttl=63 time=0.348 ms
> > 64 bytes from 10.95.1.50: icmp_seq=13 ttl=63 time=0.348 ms
> >
> >
> >
> > tcpdump on vlan100 showing 13 echo requests:
> > [root@pci-ny2-fw1:~ (master)] tcpdump -neti vlan100 host 10.95.0.5 and
> > host 10.95.1.50
> > tcpdump: listening on vlan100, link-type EN10MB
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> > icmp: echo reply
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> > icmp: echo reply
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> > icmp: echo reply
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> > icmp: echo reply
> > 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> > icmp: echo reply
> > ^C
> > 1049 packets received by filter
> > 0 packets dropped by kernel
> >
> >
> > tcpdump on vlan101 showing only 5 echo requests:
> > [root@pci-ny2-fw1:/etc/ (master)] tcpdump -neti vlan101 host 10.95.0.5
> > and host 10.95.1.50
> > tcpdump: listening on vlan101, link-type EN10MB
> > 24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
> > icmp: echo request (DF)
> > 

Re: Packet loss on traffic flowing between VLANs

2016-06-02 Thread Evgeniy Sudyr
Tim,

From your problem description I can suggest checking whether you are hitting the

states hard limit (note: during load, when you can reproduce the issue):

pfctl -si
pfctl -sm

Default limit is: states    hard limit    1
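Checking how full the state table actually is takes two numbers: "current
entries" from pfctl -si and the states hard limit from pfctl -sm. A minimal
shell sketch of that comparison (the sample output and the 10000 limit here
are hypothetical stand-ins for what the live commands print):

```shell
# Hypothetical `pfctl -si` output; on a live firewall you would instead run:
#   current=$(pfctl -si | awk '/current entries/ {print $3}')
sample='State Table                          Total             Rate
  current entries                     9800
  searches                       123456789          950.0/s'

current=$(printf '%s\n' "$sample" | awk '/current entries/ {print $3}')
limit=10000   # hypothetical hard limit, as `pfctl -sm` would report it
pct=$(( current * 100 / limit ))
echo "state table at ${pct}% of the hard limit"
```

If the percentage sits near 100 whenever new connections start failing,
raising `set limit states` in pf.conf is the usual fix.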

--
Evgeniy

On Thu, Jun 2, 2016 at 3:29 AM, Tim Korn  wrote:
> Hi.  I have a pair of OpenBSD boxes (5.8) set up as a core/firewall.  I have
> ten VLANs tied to a physical NIC (Intel 82599).  This is a new setup and it
> was just recently put in service.  Traffic was fine (or at least we didn't
> notice any issues) until a large job was run which roughly doubled traffic
> going thru the firewall.  Traffic rate is still extremely low... roughly 2k
> packets per second on the interface in question and around 20Mb.  I have
> other identical openBSD boxes that don't use VLANs, and they pass multiple
> gigs of traffic per second, so I'm having a hard time not leaning towards
> it being a VLAN issue, however I don't know where to look to prove it.
>
> If a host in vlan100 pings a host in vlan101 I see packet loss on the first
> few packets, then all subsequent packets pass.  Stopping and restarting the
> ping results in the same thing... first few pings lost, then responses and
> never fail again until the ping is stopped and restarted.  We see this
> behavior with pretty much any new connection.  I can replicate it
> consistently with ICMP, TCP, and UDP traffic.
>
> PF ruleset is quite basic.  Simple *pass in* rules on the VLANs and *pass
> out* is allowed on all interfaces.  icmp has a rule at the top saying "pass
> log quick proto icmp".  I really don't think there's a pf issue of any kind.
>
> I've run a tcpdump to confirm that packets come in on vlan100, and never
> leave vlan101.  Here is an example:
>
> Ping from host in vlan100 (you can see the seq start at 9.  first 8
> never left the firewall):
> [root@pakkit ~]# ping 10.95.1.50
> PING 10.95.1.50 (10.95.1.50) 56(84) bytes of data.
> 64 bytes from 10.95.1.50: icmp_seq=9 ttl=63 time=0.263 ms
> 64 bytes from 10.95.1.50: icmp_seq=10 ttl=63 time=0.341 ms
> 64 bytes from 10.95.1.50: icmp_seq=11 ttl=63 time=0.335 ms
> 64 bytes from 10.95.1.50: icmp_seq=12 ttl=63 time=0.348 ms
> 64 bytes from 10.95.1.50: icmp_seq=13 ttl=63 time=0.348 ms
>
>
>
> tcpdump on vlan100 showing 13 echo requests:
> [root@pci-ny2-fw1:~ (master)] tcpdump -neti vlan100 host 10.95.0.5 and
> host 10.95.1.50
> tcpdump: listening on vlan100, link-type EN10MB
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> ^C
> 1049 packets received by filter
> 0 packets dropped by kernel
>
>
> tcpdump on vlan101 showing only 5 echo requests:
> [root@pci-ny2-fw1:/etc/ (master)] tcpdump -neti vlan101 host 10.95.0.5
> and host 10.95.1.50
> tcpdump: listening on vlan101, link-type EN10MB
> 24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply
> 24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
> icmp: echo request (DF)
> 24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
> icmp: echo reply

Re: Packet loss on traffic flowing between VLANs

2016-06-02 Thread Kapetanakis Giannis

On 02/06/16 04:29, Tim Korn wrote:

Hi.  I have a pair of OpenBSD boxes (5.8) set up as a core/firewall.  I have
ten VLANs tied to a physical NIC (Intel 82599).  This is a new setup and it
was just recently put in service.  Traffic was fine (or at least we didn't
notice any issues) until a large job was run which roughly doubled traffic
going thru the firewall.  Traffic rate is still extremely low... roughly 2k
packets per second on the interface in question and around 20Mb.  I have
other identical openBSD boxes that don't use VLANs, and they pass multiple
gigs of traffic per second, so I'm having a hard time not leaning towards
it being a VLAN issue, however I don't know where to look to prove it.

If a host in vlan100 pings a host in vlan101 I see packet loss on the first
few packets, then all subsequent packets pass.  Stopping and restarting the
ping results in the same thing... first few pings lost, then responses and
never fail again until the ping is stopped and restarted.  We see this
behavior with pretty much any new connection.  I can replicate it
consistently with ICMP, TCP, and UDP traffic.

PF ruleset is quite basic.  Simple *pass in* rules on the VLANs and *pass
out* is allowed on all interfaces.  icmp has a rule at the top saying "pass
log quick proto icmp".  I really don't think there's a pf issue of any kind.

I've run a tcpdump to confirm that packets come in on vlan100, and never
leave vlan101.  Here is an example:

Ping from host in vlan100 (you can see the seq start at 9.  first 8
never left the firewall):
[root@pakkit ~]# ping 10.95.1.50
PING 10.95.1.50 (10.95.1.50) 56(84) bytes of data.
64 bytes from 10.95.1.50: icmp_seq=9 ttl=63 time=0.263 ms
64 bytes from 10.95.1.50: icmp_seq=10 ttl=63 time=0.341 ms
64 bytes from 10.95.1.50: icmp_seq=11 ttl=63 time=0.335 ms
64 bytes from 10.95.1.50: icmp_seq=12 ttl=63 time=0.348 ms
64 bytes from 10.95.1.50: icmp_seq=13 ttl=63 time=0.348 ms



tcpdump on vlan100 showing 13 echo requests:
[root@pci-ny2-fw1:~ (master)] tcpdump -neti vlan100 host 10.95.0.5 and
host 10.95.1.50
tcpdump: listening on vlan100, link-type EN10MB
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
00:0c:29:16:f7:bf 00:00:5e:00:01:64 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1b:d8 00:0c:29:16:f7:bf 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
^C
1049 packets received by filter
0 packets dropped by kernel


tcpdump on vlan101 showing only 5 echo requests:
[root@pci-ny2-fw1:/etc/ (master)] tcpdump -neti vlan101 host 10.95.0.5
and host 10.95.1.50
tcpdump: listening on vlan101, link-type EN10MB
24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
24:6e:96:04:1b:d8 24:6e:96:04:1c:84 0800 98: 10.95.0.5 > 10.95.1.50:
icmp: echo request (DF)
24:6e:96:04:1c:84 00:00:5e:00:01:65 0800 98: 10.95.1.50 > 10.95.0.5:
icmp: echo reply
^C
1975 packets received by filter
0 packets dropped by kernel

Any help would be greatly appreciated.

Re: packet loss in larger packets

2012-09-21 Thread Camiel Dobbelaar
On Fri, 21 Sep 2012, Erwin Lubbers wrote:
 I'm using OpenBSD 5.1 and an Intel 10GbE SR (82598AF) ethernet card as a
 router/firewall and it's working almost perfectly. It is routing around 2 Gbps
 of traffic.
 
 On the ix0 interface there are several vlans configured with an MTU of 1500.
 When I'm pinging a switch connected to the system (with 10 gbps) there is no
 packet loss while sending packets of 1472 bytes. From 1473 bytes and up
 there is somewhere between 15 and 40% loss.
 
 I first thought the switch was busy, but from another (Linux) system,
 connected with 10 gbps and the same network interface there is no loss on
 larger packets.
 
 Does someone have an idea on how to solve this?

Can you show from both systems with tcpdump what the packets look like?

You are using normal (no flood) ping and the systems and switch are not 
loaded with other traffic?



Re: packet loss in larger packets

2012-09-21 Thread Erwin Lubbers
On 21 Sep 2012, at 09:43, Camiel Dobbelaar c...@sentia.nl wrote:


 Can you show from both systems with tcpdump what the packets look like?

 You are using normal (no flood) ping and the systems and switch are not
 loaded with other traffic?




No flood ping, just normal ping packets. I will create a tcpdump later, but
the output of a 1472- and a 1473-byte ping looks like this. Even if I
disable PF the problem stays the same. The switch is handling around 350 Mbps
of traffic at the moment of these pings.

# ping -s 1472 -c 20 10.0.1.239
PING 10.0.1.239 (10.0.1.239): 1472 data bytes
1480 bytes from 10.0.1.239: icmp_seq=0 ttl=255 time=1.782 ms
1480 bytes from 10.0.1.239: icmp_seq=1 ttl=255 time=1.499 ms
1480 bytes from 10.0.1.239: icmp_seq=2 ttl=255 time=1.244 ms
1480 bytes from 10.0.1.239: icmp_seq=3 ttl=255 time=1.339 ms
1480 bytes from 10.0.1.239: icmp_seq=4 ttl=255 time=1.453 ms
1480 bytes from 10.0.1.239: icmp_seq=5 ttl=255 time=1.486 ms
1480 bytes from 10.0.1.239: icmp_seq=6 ttl=255 time=1.627 ms
1480 bytes from 10.0.1.239: icmp_seq=7 ttl=255 time=2.323 ms
1480 bytes from 10.0.1.239: icmp_seq=8 ttl=255 time=1.386 ms
1480 bytes from 10.0.1.239: icmp_seq=9 ttl=255 time=1.511 ms
1480 bytes from 10.0.1.239: icmp_seq=10 ttl=255 time=1.578 ms
1480 bytes from 10.0.1.239: icmp_seq=11 ttl=255 time=1.552 ms
1480 bytes from 10.0.1.239: icmp_seq=12 ttl=255 time=1.732 ms
1480 bytes from 10.0.1.239: icmp_seq=13 ttl=255 time=1.279 ms
1480 bytes from 10.0.1.239: icmp_seq=14 ttl=255 time=1.369 ms
1480 bytes from 10.0.1.239: icmp_seq=15 ttl=255 time=1.399 ms
1480 bytes from 10.0.1.239: icmp_seq=16 ttl=255 time=1.513 ms
1480 bytes from 10.0.1.239: icmp_seq=17 ttl=255 time=1.546 ms
1480 bytes from 10.0.1.239: icmp_seq=18 ttl=255 time=1.551 ms
1480 bytes from 10.0.1.239: icmp_seq=19 ttl=255 time=1.483 ms
--- 10.0.1.239 ping statistics ---
20 packets transmitted, 20 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 1.244/1.532/2.323/0.227 ms


# ping -s 1473 -c 20 10.0.1.239
PING 10.0.1.239 (10.0.1.239): 1473 data bytes
1481 bytes from 10.0.1.239: icmp_seq=1 ttl=255 time=2.107 ms
1481 bytes from 10.0.1.239: icmp_seq=2 ttl=255 time=2.035 ms
1481 bytes from 10.0.1.239: icmp_seq=3 ttl=255 time=2.045 ms
1481 bytes from 10.0.1.239: icmp_seq=4 ttl=255 time=2.048 ms
1481 bytes from 10.0.1.239: icmp_seq=6 ttl=255 time=2.708 ms
1481 bytes from 10.0.1.239: icmp_seq=7 ttl=255 time=1.768 ms
1481 bytes from 10.0.1.239: icmp_seq=8 ttl=255 time=2.274 ms
1481 bytes from 10.0.1.239: icmp_seq=9 ttl=255 time=1.775 ms
1481 bytes from 10.0.1.239: icmp_seq=11 ttl=255 time=3.969 ms
1481 bytes from 10.0.1.239: icmp_seq=13 ttl=255 time=5.679 ms
1481 bytes from 10.0.1.239: icmp_seq=14 ttl=255 time=2.012 ms
1481 bytes from 10.0.1.239: icmp_seq=15 ttl=255 time=2.148 ms
1481 bytes from 10.0.1.239: icmp_seq=17 ttl=255 time=2.179 ms
1481 bytes from 10.0.1.239: icmp_seq=18 ttl=255 time=1.796 ms
1481 bytes from 10.0.1.239: icmp_seq=19 ttl=255 time=3.433 ms
--- 10.0.1.239 ping statistics ---
20 packets transmitted, 15 packets received, 25.0% packet loss
round-trip min/avg/max/std-dev = 1.768/2.531/5.679/1.035 ms
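The boundary between those two runs is exactly the fragmentation threshold;
a quick sanity check of the arithmetic:

```shell
# A 1500-byte MTU holds a 20-byte IP header, an 8-byte ICMP header and the
# payload, so 1472 bytes is the largest payload that fits in one frame.
mtu=1500
ip_hdr=20
icmp_hdr=8
max_payload=$(( mtu - ip_hdr - icmp_hdr ))
echo "largest unfragmented ICMP payload: ${max_payload} bytes"
```

That the loss starts at exactly 1473 bytes therefore points at how fragmented
datagrams are handled on the 10GbE path (driver, switch, or reassembly)
rather than at packet size as such.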



Re: packet loss

2011-12-02 Thread rik
We've solved the problem by increasing net.inet.ip.ifq.maxlen from the default
of our version (50) to the default of more recent versions (250). Does
that make sense to you?
How far do you think we can go with that value, considering that we have 3
physical interfaces (int 100 Mbit, ext 100 Mbit and pfsync 10 Mbit) and that
the servers have only 512 MB of RAM?  Would something like Henning's rule of
256*3 (number of physical interfaces) be a good and safe choice with
our hardware (of course we're planning an upgrade of both the servers and the
OpenBSD version)?
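Henning's rule mentioned above works out as plain arithmetic; a sketch of the
candidate value (purely illustrative; the result would go into
/etc/sysctl.conf):

```shell
# Rule of thumb from the thread: roughly 256 input-queue slots per
# physical interface.
nics=3        # int (100 Mbit), ext (100 Mbit) and pfsync (10 Mbit)
per_nic=256
maxlen=$(( nics * per_nic ))
echo "net.inet.ip.ifq.maxlen=${maxlen}"   # candidate sysctl.conf line
```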
Thanks for your help
Alessandro


On Tue, Nov 29, 2011 at 7:49 PM, rik rikc...@gmail.com wrote:

 Thanks for the suggestion, I'll try with the GENERIC kernel
 Is it possible that this problem is due to a hardware limitation (it's
 quite an old server)?  Apparently when the traffic decreases, the packet loss
 decreases as well and disappears, just like the odd ping results
 Thanks!
 Alessandro



 On Tue, Nov 29, 2011 at 12:15 AM, Stuart Henderson 
 s...@spacehopper.orgwrote:

 On 2011-11-28, James Shupe jsh...@osre.org wrote:
  Your dmesg doesn't show the version you're running. Can you provide
  that,

 Yep, seconded. If people ask for a dmesg, they mean a complete one.
 I would also try a GENERIC kernel (not GENERIC.MP).

  along with ifconfig output from both machines? You may want to
  check the physical connectivity (cable/ NIC/ switch) for the internal
  interface of the carp master... Or just fail over to the secondary box
  to see if the issue goes away.

 Well there appears to be something very odd going on with timers there
 so who knows what else might follow from that.

  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
  64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms



Re: packet loss

2011-11-29 Thread rik
Sorry, I've missed the top 2 rows of the dmesg:
OpenBSD 3.9 (FIREWALL) #0: Sun Sep 17 15:49:07 CEST 2006
r...@fw1.domain.com:/usr/src/sys/arch/i386/compile/FIREWALL

FIREWALL is just GENERIC.MP with a device (the CPU temperature monitor)
removed because it was not working.
This is my netstat -i from the master

Name     Mtu    Network      Address            Ipkts       Ierrs  Opkts       Oerrs  Colls
lo0      33224  <Link>                          2170        0      2170        0      0
lo0      33224  loopback     localhost          2170        0      2170        0      0
lo0      33224  localhost.n  ::1                2170        0      2170        0      0
fxp0     1500   <Link>       xx:xx:xx:xx:xx:xx  4080602979  5814   3643673264  0      0
fxp1     1500   <Link>       xx:xx:xx:xx:xx:xx  3990056491  256    4226316164  0      0
fxp1     1500   x.x.x.0      fw1                3990056491  256    4226316164  0      0
rl0      1500   <Link>       xx:xx:xx:xx:xx:xx  4757956     0      16291765    0      0
rl0      1500   10.1.0/24    10.1.0.3           4757956     0      16291765    0      0
pflog0   33224  <Link>                          0           0      0           0      0
pfsync0  1460   <Link>                          0           0      0           0      0
enc0*    1536   <Link>                          0           0      0           0      0
carp0    1500   <Link>       xx:xx:xx:xx:xx:xx  4077521045  0      4450639     0      0
carp0    1500   xx.xx.ww.2   xx.xx.ww.30        4077521045  0      4450639     0      0
carp1    1500   <Link>       xx:xx:xx:xx:xx:xx  3978337099  35     4450637     2      0
carp1    1500   xx.xx.xx.0   xx.xx.xx.1         3978337099  35     4450637     2      0
carp1    1500   xx.xx.xx.1   xx.xx.xx.17        3978337099  35     4450637     2      0
carp1    1500   xx.xx.xx.3   xx.xx.xx.33        3978337099  35     4450637     2      0
carp1    1500   xx.xx.xx.4   xx.xx.xx.49        3978337099  35     4450637     2      0
carp1    1500   xx.xx.zz.1   xx.xx.zz.129       3978337099  35     4450637     2      0
carp1    1500   xx.xx.zz.1   xx.xx.zz.145       3978337099  35     4450637     2      0
carp1    1500   xx.xx.zz.1   xx.xx.zz.161       3978337099  35     4450637     2      0
carp1    1500   xx.xx.zz.1   xx.xx.zz.177       3978337099  35     4450637     2      0
carp1    1500   xx.xx.yy.1   xx.xx.yy.129       3978337099  35     4450637     2      0

I've tried switching to the backup with no difference. The cable and the
port on the switch have also been changed.
Thanks!
alessandro


On Mon, Nov 28, 2011 at 8:58 PM, James Shupe jsh...@osre.org wrote:

 Your dmesg doesn't show the version you're running. Can you provide
 that, along with ifconfig output from both machines? You may want to
 check the physical connectivity (cable/ NIC/ switch) for the internal
 interface of the carp master... Or just fail over to the secondary box
 to see if the issue goes away.

 Also, provide the netstat -i output.

 On 11/28/11 1:37 PM, rik wrote:
  Hi James,
  both carp on the master firewall are in master status (one on the
 external
   side, one on the internal side), but as far as I know they've always
  been
   like this; on the backup firewall they both are in backup status (and the
   backup, using the physical interface, can ping without any packet loss).
  Thanks
  Alessandro
 
 
  On Mon, Nov 28, 2011 at 8:08 PM, James Shupe jsh...@osre.org wrote:
 
  Run
 
  ifconfig carp | grep status
 
  on both machines... If they're pre 4.8, do:
 
  ifconfig carp | grep 'carp: '
 
  .
 
  If both think they're masters, they'll do what you're seeing.
 
  Thank you,
  James Shupe
 
  On 11/28/11 12:53 PM, Stuart Henderson wrote:
  dmesg?
 
  On 2011-11-28, rik rikc...@gmail.com wrote:
  Good day,
  I'm using 2 OpenBSD boxes as router/firewall with carp in a colo-like
  setup.
  In the last few days we saw the packet loss percentage increase up to
  8-10%, and from outside it doesn't look like a problem.  If I ping from
  the
  master firewall one of the servers inside I can see something like
 this:
 
  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
  64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms
 
  No errors in syslog.
  Any idea?
  Thanks
  Alessandro
 
 
 
  --
  James Shupe, OSRE
  developer/ engineer
  BSD/ Linux support  hosting
  jsh...@osre.org | www.osre.org
  O 9032530140 | F 9032530150 | M 9035223425
 


 --
 James Shupe, OSRE
 developer/ engineer
 BSD/ Linux support  hosting
 jsh...@osre.org | www.osre.org
 O 9032530140 | F 9032530150 | M 9035223425



Re: packet loss

2011-11-29 Thread rik
Thanks for the suggestion, I'll try with the GENERIC kernel
Is it possible that this problem is due to a hardware limitation (it's
quite an old server)?  Apparently when the traffic decreases, the packet loss
decreases as well and disappears, just like the odd ping results
Thanks!
Alessandro


On Tue, Nov 29, 2011 at 12:15 AM, Stuart Henderson s...@spacehopper.orgwrote:

 On 2011-11-28, James Shupe jsh...@osre.org wrote:
  Your dmesg doesn't show the version you're running. Can you provide
  that,

 Yep, seconded. If people ask for a dmesg, they mean a complete one.
 I would also try a GENERIC kernel (not GENERIC.MP).

  along with ifconfig output from both machines? You may want to
  check the physical connectivity (cable/ NIC/ switch) for the internal
  interface of the carp master... Or just fail over to the secondary box
  to see if the issue goes away.

 Well there appears to be something very odd going on with timers there
 so who knows what else might follow from that.

  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
  64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms



Re: packet loss

2011-11-29 Thread Daniel Melameth
On Tue, Nov 29, 2011 at 11:47 AM, rik rikc...@gmail.com wrote:
 Sorry, I've missed the top 2 rows of the dmesg:
 OpenBSD 3.9 (FIREWALL) #0: Sun Sep 17 15:49:07 CEST 2006
r...@fw1.domain.com:/usr/src/sys/arch/i386/compile/FIREWALL

 FIREWALL is just GENERIC.MP with a device (CPU temp monitor) removed
 because it wasn't working.

3.9?  Really?  Last I checked, OpenBSD was still free.  Why don't you
try the latest version, which will likely resolve your issue, and then
make a donation.



Re: packet loss

2011-11-28 Thread Peter N. M. Hansteen
rik rikc...@gmail.com writes:

 I'm using 2 openbsd boxes as router firewall with carp in a colo-like setup.
 In the last few days we saw the packet loss percentage increase up to
 8-10% and it doesn't look like a problem from outside.

I take this to mean that the CARP setup provided the needed redundancy.

 If I ping from the master firewall to one of the servers inside I can see
 something like this:

 64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
 64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
 64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
 64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms

 No errors in syslog.
 Any idea?

This is what it looks like when your link goes down, then comes back
again. I'd check with the upstream if they know of any specific incident
that matches your disruption.

- P
-- 
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
Remember to set the evil bit on all malicious network traffic
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.



Re: packet loss

2011-11-28 Thread rik
Hi,


On Mon, Nov 28, 2011 at 5:59 PM, Peter N. M. Hansteen pe...@bsdly.netwrote:

 rik rikc...@gmail.com writes:

  I'm using 2 openbsd boxes as router firewall with carp in a colo-like
 setup.
  In the last few days we saw the packet loss percentage increase up to
  8-10% and it doesn't look like a problem from outside.

 I take this to mean that the CARP setup provided the needed redundancy.


Yes exactly, we have 2 carp interfaces, one for the internal interface and
one for the external interface; the setup has been working with no major
issues for 3 years or so


   If I ping from the master firewall to one of the servers inside I can see
  something like this:
 
  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
  64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms
 
  No errors in syslog.
  Any idea?

 This is what it looks like when your link goes down, then comes back
 again. I'd check with the upstream if they know of any specific incident
 that matches your disruption.


 The ping I've tried is from the master firewall to a server inside the
network:
firewall - switch - xx.xx.xx.12

The switch works OK; if I ping from one server to another in the same
subnet no packets are lost, so it looks like something on the firewall.
Both machines are 99.9% idle, with no high interrupt load or mbuf cluster
count.
Thanks!
Alessandro



Re: packet loss

2011-11-28 Thread Stuart Henderson
dmesg?

On 2011-11-28, rik rikc...@gmail.com wrote:
 Good day,
 I'm using 2 openbsd boxes as router firewall with carp in a colo-like setup.
 In the last few days we saw the packet loss percentage increase up to
 8-10% and it doesn't look like a problem from outside.  If I ping from the
 master firewall to one of the servers inside I can see something like this:

 64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
 64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
 64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
 64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms

 No errors in syslog.
 Any idea?
 Thanks
 Alessandro



Re: packet loss

2011-11-28 Thread James Shupe
Run

ifconfig carp | grep status

on both machines... If they're pre 4.8, do:

ifconfig carp | grep 'carp: '

.

If both think they're masters, they'll do what you're seeing.
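The split-brain check above can be scripted. The `ifconfig` output below is a
hypothetical sample, but the `carp: MASTER` line format matches pre-4.8
releases like the 3.9 box in this thread:

```shell
# Hypothetical pre-4.8 `ifconfig carp` output; on a healthy pair only one
# box should report MASTER for each vhid.
ifconfig_out='carp0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	carp: MASTER carpdev fxp0 vhid 1 advbase 1 advskew 0
carp1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	carp: MASTER carpdev fxp1 vhid 2 advbase 1 advskew 0'
masters=$(printf '%s\n' "$ifconfig_out" | grep -c 'carp: MASTER')
echo "MASTER interfaces on this box: $masters"
# Run the same count on the peer; if both boxes report the same vhids as
# MASTER, you have the dueling-masters condition described above.
```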

Thank you,
James Shupe

On 11/28/11 12:53 PM, Stuart Henderson wrote:
 dmesg?
 
 On 2011-11-28, rik rikc...@gmail.com wrote:
 Good day,
 I'm using 2 openbsd boxes as router firewall with carp in a colo-like setup.
  In the last few days we saw the packet loss percentage increase up to
  8-10% and it doesn't look like a problem from outside.  If I ping from the
  master firewall to one of the servers inside I can see something like this:

 64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
 64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
 64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
 64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms

 No errors in syslog.
 Any idea?
 Thanks
 Alessandro
 


-- 
James Shupe, OSRE
developer/ engineer
BSD/ Linux support  hosting
jsh...@osre.org | www.osre.org
O 9032530140 | F 9032530150 | M 9035223425



Re: packet loss

2011-11-28 Thread rik
Hi,
this is the dmesg:

cpu0: Intel Pentium III (GenuineIntel 686-class) 745 MHz
cpu0:
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,SER,MMX,FXSR,SSE
real mem  = 536449024 (523876K)
avail mem = 482430976 (471124K)
using 4278 buffers containing 26927104 bytes (26296K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+(00) BIOS, date 03/17/00, BIOS32 rev. 0 @ 0xfd6b1
pcibios0 at bios0: rev 2.1 @ 0xf/0x
pcibios0: PCI BIOS has 10 Interrupt Routing table entries
pcibios0: PCI Interrupt Router at 000:07:0 (Intel 82371AB PIIX4 ISA rev
0x00)
pcibios0: PCI bus #2 is the last bus
bios0: ROM list: 0xc/0x9a00 0xc9a00/0xd800 0xd7200/0x4800
mainbus0: Intel MP Specification (Version 1.1) (IBM ENSW Kiowa SMP   )
cpu0 at mainbus0: apid 1 (boot processor)
cpu0: apic clock running at 99 MHz
cpu1 at mainbus0: apid 0 (application processor)
cpu1: Intel Pentium III (GenuineIntel 686-class)
cpu1: FPU,CX8,APIC
mainbus0: bus 0 is type PCI
mainbus0: bus 1 is type PCI
mainbus0: bus 2 is type PCI
mainbus0: bus 3 is type ISA
ioapic0 at mainbus0: apid 14 pa 0xfec0, version 11, 24 pins
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 Intel 82440BX AGP rev 0x00
ppb0 at pci0 dev 1 function 0 Intel 82440BX AGP rev 0x00
pci1 at ppb0 bus 1
vga1 at pci1 dev 0 function 0 Chips and Technologies 69000 rev 0x64
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
pcib0 at pci0 dev 7 function 0 Intel 82371AB PIIX4 ISA rev 0x02
pciide0 at pci0 dev 7 function 1 Intel 82371AB IDE rev 0x01: DMA, channel
0 wired to compatibility, channel 1 wired to compatibility
pciide0: channel 0 ignored (disabled)
atapiscsi0 at pciide0 channel 1 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0: LG, CD-ROM CRN-8241B, 1.24 SCSI0 5/cdrom
removable
cd0(pciide0:1:0): using PIO mode 4, DMA mode 2
Intel 82371AB USB rev 0x01 at pci0 dev 7 function 2 not configured
piixpm0 at pci0 dev 7 function 3 Intel 82371AB Power rev 0x02: SMI
iic0 at piixpm0
admtemp0 at iic0 addr 0x18: max1617
admtemp1 at iic0 addr 0x1a: max1617
unknown at iic0 addr 0x2d not configured
admtemp2 at iic0 addr 0x4c: max1617
admtemp3 at iic0 addr 0x4e: max1617
fxp0 at pci0 dev 17 function 0 Intel 8255x rev 0x08, i82559: apic 14 int
18 (irq 10), address xx:xx:xx:xx:xx:xx
inphy0 at fxp0 phy 1: i82555 10/100 PHY, rev. 4
fxp1 at pci0 dev 18 function 0 Intel 8255x rev 0x08, i82559: apic 14 int
17 (irq 11), address xx:xx:xx:xx:xx:xx
inphy1 at fxp1 phy 1: i82555 10/100 PHY, rev. 4
ppb1 at pci0 dev 20 function 0 DEC 21152 PCI-PCI rev 0x03
pci2 at ppb1 bus 2
rl0 at pci2 dev 14 function 0 D-Link Systems 530TX+ rev 0x10: apic 14 int
17 (irq 11), address xx:xx:xx:xx:xx:xx
rlphy0 at rl0 phy 0: RTL internal PHY
ahc0 at pci2 dev 15 function 0 Adaptec AHA-2940U rev 0x01: apic 14 int 16
(irq 9)
scsibus1 at ahc0: 16 targets
ahc0: target 0 using 8bit transfers
ahc0: target 0 using asynchronous transfers
sd0 at scsibus1 targ 0 lun 0: IBM-PSG, ST39175LW !#, 0350 SCSI2 0/direct
fixed
sd0: 8678MB, 11721 cyl, 5 head, 303 sec, 512 bytes/sec, 17774160 sec total
ahc0: target 1 using 8bit transfers
ahc0: target 1 using asynchronous transfers
sd1 at scsibus1 targ 1 lun 0: IBM-PSG, ST39175LW !#, 0350 SCSI2 0/direct
fixed
sd1: 8678MB, 11721 cyl, 5 head, 303 sec, 512 bytes/sec, 17774160 sec total
isa0 at pcib0
isadma0 at isa0
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pcppi0 at isa0 port 0x61
midi0 at pcppi0: PC speaker
spkr0 at pcppi0
npx0 at isa0 port 0xf0/16: using exception 16
pccom0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pccom1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
biomask 0 netmask 0 ttymask 0
pctr: 686-class user-level performance counters enabled
mtrr: Pentium Pro MTRR support
ahc0: target 0 using 16bit transfers
ahc0: target 0 synchronous at 10.0MHz, offset = 0x8
dkcsum: sd0 matches BIOS drive 0x80
ahc0: target 1 using 16bit transfers
ahc0: target 1 synchronous at 10.0MHz, offset = 0x8
dkcsum: sd1 matches BIOS drive 0x81
root on sd0a
rootdev=0x400 rrootdev=0xd00 rawdev=0xd02

Thanks!
Alessandro

On Mon, Nov 28, 2011 at 7:53 PM, Stuart Henderson s...@spacehopper.orgwrote:

 dmesg?

 On 2011-11-28, rik rikc...@gmail.com wrote:
  Good day,
  I'm using 2 openbsd boxes as router firewall with carp in a colo-like
 setup.
  In the last few days we saw the packet loss percentage increase up to
  8-10% and it doesn't look like a problem from outside.  If I ping from the
  master firewall to one of the servers inside I can see something like this:
 
  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No 

Re: packet loss

2011-11-28 Thread rik
Hi James,
both carp interfaces on the master firewall are in master status (one on the
external side, one on the internal side), but as far as I know they've always
been like this; on the backup firewall they are both in backup status (and the
backup, using the physical interface, can ping without any packet loss).
Thanks
Alessandro


On Mon, Nov 28, 2011 at 8:08 PM, James Shupe jsh...@osre.org wrote:

 Run

 ifconfig carp | grep status

 on both machines... If they're pre 4.8, do:

 ifconfig carp | grep 'carp: '

 .

 If both think they're masters, they'll do what you're seeing.

 Thank you,
 James Shupe

 On 11/28/11 12:53 PM, Stuart Henderson wrote:
  dmesg?
 
  On 2011-11-28, rik rikc...@gmail.com wrote:
  Good day,
  I'm using 2 openbsd boxes as router firewall with carp in a colo-like
 setup.
   In the last few days we saw the packet loss percentage increase up to
   8-10% and it doesn't look like a problem from outside.  If I ping from the
   master firewall to one of the servers inside I can see something like this:
 
  64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
  64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
  64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  ping: sendto: No route to host
  ping: wrote xx.xx.xx.12 64 chars, ret=-1
  64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
  64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms
 
  No errors in syslog.
  Any idea?
  Thanks
  Alessandro
 


 --
 James Shupe, OSRE
 developer/ engineer
 BSD/ Linux support  hosting
 jsh...@osre.org | www.osre.org
 O 9032530140 | F 9032530150 | M 9035223425



Re: packet loss

2011-11-28 Thread James Shupe
Your dmesg doesn't show the version you're running. Can you provide
that, along with ifconfig output from both machines? You may want to
check the physical connectivity (cable/ NIC/ switch) for the internal
interface of the carp master... Or just fail over to the secondary box
to see if the issue goes away.

Also, provide the netstat -i output.
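Non-zero Ierrs/Oerrs in that output would point straight at the cable/NIC/switch
check suggested above. A sketch against a hypothetical `netstat -i` sample (the
column layout is assumed; verify the field positions against your actual header
line):

```shell
# Flag interfaces with input or output errors in a `netstat -i` dump
# (hypothetical sample data, hypothetical MAC addresses).
ni='Name  Mtu   Network     Address            Ipkts Ierrs    Opkts Oerrs Colls
fxp0  1500  <Link>      00:02:b3:aa:bb:cc 123456   812   120044     0     0
fxp1  1500  <Link>      00:02:b3:aa:bb:cd  98765     0    97000     0     0'
errif=$(printf '%s\n' "$ni" | awk 'NR > 1 && ($6 > 0 || $8 > 0) {print $1, "Ierrs=" $6, "Oerrs=" $8}')
echo "$errif"
```

Rising Ierrs on the internal interface of the carp master would be consistent
with a bad cable or switch port rather than a pf or carp problem.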

On 11/28/11 1:37 PM, rik wrote:
 Hi James,
 both carp interfaces on the master firewall are in master status (one on the
 external side, one on the internal side), but as far as I know they've always
 been like this; on the backup firewall they are both in backup status (and the
 backup, using the physical interface, can ping without any packet loss).
 Thanks
 Alessandro
 
 
 On Mon, Nov 28, 2011 at 8:08 PM, James Shupe jsh...@osre.org wrote:
 
 Run

 ifconfig carp | grep status

 on both machines... If they're pre 4.8, do:

 ifconfig carp | grep 'carp: '

 .

 If both think they're masters, they'll do what you're seeing.

 Thank you,
 James Shupe

 On 11/28/11 12:53 PM, Stuart Henderson wrote:
 dmesg?

 On 2011-11-28, rik rikc...@gmail.com wrote:
 Good day,
 I'm using 2 openbsd boxes as router firewall with carp in a colo-like
 setup.
 In the last few days we saw the packet loss percentage increase up to
 8-10% and it doesn't look like a problem from outside.  If I ping from the
 master firewall to one of the servers inside I can see something like this:

 64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
 64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
 64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
 64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms

 No errors in syslog.
 Any idea?
 Thanks
 Alessandro





-- 
James Shupe, OSRE
developer/ engineer
BSD/ Linux support  hosting
jsh...@osre.org | www.osre.org
O 9032530140 | F 9032530150 | M 9035223425



Re: packet loss

2011-11-28 Thread Stuart Henderson
On 2011-11-28, James Shupe jsh...@osre.org wrote:
 Your dmesg doesn't show the version you're running. Can you provide
 that,

Yep, seconded. If people ask for a dmesg, they mean a complete one.
I would also try a GENERIC kernel (not GENERIC.MP).

 along with ifconfig output from both machines? You may want to
 check the physical connectivity (cable/ NIC/ switch) for the internal
 interface of the carp master... Or just fail over to the secondary box
 to see if the issue goes away.

Well there appears to be something very odd going on with timers there
so who knows what else might follow from that.

 64 bytes from xx.xx.xx.12: icmp_seq=4 ttl=64 time=-3.-656 ms
 64 bytes from xx.xx.xx.12: icmp_seq=5 ttl=64 time=0.794 ms
 64 bytes from xx.xx.xx.12: icmp_seq=6 ttl=64 time=0.-491 ms
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 ping: sendto: No route to host
 ping: wrote xx.xx.xx.12 64 chars, ret=-1
 64 bytes from xx.xx.xx.12: icmp_seq=9 ttl=64 time=0.526 ms
 64 bytes from xx.xx.xx.12: icmp_seq=10 ttl=64 time=1.415 ms



Re: Packet Loss on Wireless (RAL and WI)

2010-11-02 Thread Joachim Schipper
On Tue, Nov 02, 2010 at 02:23:23AM +1300, Jammer wrote:
 I'm experiencing problems setting up an OpenBSD box as a
 firewall/Wireless Access Point(...)

 Firstly my setup:
 * I've tried this using OpenBSD v4.1, v4.6 and a 4.8 snapshot from
 29/10/20 all with similar results.

Just install 4.8 or -current.

 * I've tried various different wireless cards based on either the
 Prism (wi0) or Ralink 2561 (ral0) chipsets.

There are a lot of caveats about Host AP mode in wi(4) (from -current):

(...)
 Host AP    In this mode the driver acts as an access point (base
            station) for other cards.  Only cards based on the
            Intersil chipsets support this mode.  Furthermore, this
            mode is not supported on USB devices.
(...)
HARDWARE
 Cards supported by the wi driver come in a variety of packages, though
 the most common are of the PCMCIA type.  In many cases, the PCI version
 of a wireless card is simply a PCMCIA card bundled with a PCI adapter.
(...)
 USB support is still experimental and the device may stop functioning
 during normal use.  Resetting the device by configuring the interface
 down and back up again will normally reactivate it.
(...)
CAVEATS
 Not all 3.3V wi PCMCIA cards work.

 IBSS creation does not currently work with Symbol cards.

 The host-based access point mode on the Intersil PRISM cards has bugs
 when used with firmware versions prior to 0.8.3 and is completely
 unusable with firmware versions prior to 0.8.0 and 1.4.0-1.4.2.

 Software WEP is currently only supported in Host AP and BSS modes.
 Furthermore, software WEP is currently incapable of decrypting fragmented
 frames.  Lucent-based cards using firmware 8.10 and above fragment
 encrypted frames sent at 11Mbps.  To work around this, Lucent clients
 with this firmware revision connecting to a Host AP server should use a
 2Mbps connection or upgrade their firmware to version 8.72.

 Host AP mode doesn't support WDS or power saving.  Clients attempting to
 use power saving mode may experience significant packet loss (disabling
 power saving on the client will fix this).

 Support for USB devices is buggy.  Host AP mode and AP scanning are not
 currently supported with USB devices.

From ral(4):

(...)
CAVEATS
(...)
 Host AP mode doesn't support power saving.  Clients attempting to use
 power saving mode may experience significant packet loss (disabling power
 saving on the client will fix this).

 Some PCI ral adapters seem to strictly require a system supporting PCI
 2.2 or greater and will likely not work in systems based on older
 revisions of the PCI specification.  Check the board's PCI version before
 purchasing the card.

I've never set up an AP myself, but it's not clear that you are aware of
these possible issues from your message.

 * I've used 4 different machines, admittedly all low horsepower
 machines, from 400MHz PII to 1.2GHz Athlon

 * I've tried configuring the interface in both ibss and hostap
 mode. I'm aware of the caveat regarding hostap mode and power saving
  mode in the client and have ensured that the clients (two WinXP
  machines, and a Brother wireless-enabled printer) have this disabled
 but the packet loss occurs in both ad-hoc and hostap modes anyway.

 On each occasion I get anywhere up to 75% packet loss or long
 periods of several tens of seconds where the wireless link is down.
 Often the clients are completely unable to associate with the access
 point/peer and the link is most unstable. I have tried this with the
 two machines side by side and at a distance of 10m but even with a
 link of only a few feet I still get packet loss.
 
 I've tested by pinging both ends both individually, and
 simultaneously, and the packet loss occurs in both directions.
 At the same time, I can use the same wireless cards in a Windows XP
 machine and get zero packet loss and a completely stable link in an
 ad-hoc network so I'm sure that the hardware is OK and the wireless
 radio does work.

 I'm afraid I don't have my dmesg handy (...)

*Always* include a dmesg if you're having hardware issues.

Joachim

-- 
PotD: x11/lupe - real-time magnifying glass for X11
http://www.joachimschipper.nl/



Re: packet loss over nat

2005-08-05 Thread Håkan Olsson

Try increasing PF max number of states.

It is currently limited to 1, so when you reach this no new  
traffic (that would create a state) is permitted until some of the  
old ones expire. The 1 limit is ok for most machines, but  
definitely not for a busy server / firewall. (Same goes for the  
default httpd.conf, btw, which also requires tweaking for higher  
performance.)


Use pfctl -s info and check the memory counter, it indicates the  
number of states that could not be created due to the limit  
(presumably other mem failures too). You want to see 0 (zero) here.


See pf.conf(5), try set limit states 5 or so.

/H
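The check described above can be sketched as a one-liner pair. The `pfctl -si`
excerpt below is hypothetical (run the real `pfctl -si` on the firewall), but
the `current entries` and `memory` counter names are the ones referenced above:

```shell
# Hypothetical `pfctl -si` excerpt; `current entries` is the live state
# count and the `memory` counter shows states refused at the limit.
si='State Table                          Total             Rate
  current entries                     9987
Counters
  memory                              1243            0.1/s'
cur=$(printf '%s\n' "$si" | awk '/current entries/ {print $3}')
memfail=$(printf '%s\n' "$si" | awk '$1 == "memory" {print $2}')
echo "states: $cur, refused by limit: $memfail"
# A non-zero memory counter means the state limit is being hit; raise it
# with a `set limit states N` line in pf.conf, sized for your load, then
# reload the ruleset.
```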

On 2 aug 2005, at 00.07, Bc. Radek Krejca wrote:


Hi,

  thank you for the response. It was my idea too but pfctl -ss shows about
  1 lines. Where can I get better information about ports over NAT?

  Thank you
  Radek

On 1 August 2005, 23:02:15, you wrote:
SKQ On Mon, 2005-08-01 at 21:21 +0200, Bc. Radek Krejca wrote:

  I have a problem with packet loss over NAT. I don't know where the
  mistake could be. If I stop half the IPs I have no problem. What can I
  change to resolve the problem? About 1300 IPs run over this NAT.

SKQ My gut instinct says that you're simply running out of ports on the one
SKQ external address. That is definitely something you want to look into at
SKQ some point.



--
Regards,
 Bc. Radek Krejca
 [EMAIL PROTECTED]
 http://www.ceskedomeny.cz
 http://www.skdomeny.com
 http://www.starnet.cz




Re: packet loss over nat

2005-08-01 Thread Bc. Radek Krejca
Hi,

  thank you for the response. It was my idea too but pfctl -ss shows about
  1 lines. Where can I get better information about ports over NAT?

  Thank you
  Radek

On 1 August 2005, 23:02:15, you wrote:
SKQ On Mon, 2005-08-01 at 21:21 +0200, Bc. Radek Krejca wrote:
   I have a problem with packet loss over NAT. I don't know where the
   mistake could be. If I stop half the IPs I have no problem. What can I
   change to resolve the problem? About 1300 IPs run over this NAT.

SKQ My gut instinct says that you're simply running out of ports on the one
SKQ external address. That is definitely something you want to look into at
SKQ some point.
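The port-exhaustion theory above can be checked from `pfctl -ss`: each NAT state
consumes one source port on the translated address, and connections from one
external IP to the same destination ip:port share a single ~64k port space. A
sketch, with a hypothetical two-line `pfctl -ss` excerpt (the real state-line
format varies by pf version, so adjust the field number to your output):

```shell
# Count NAT states per translated (external) source address; field 3 is
# assumed to hold the translated addr:port.
ss='all tcp 203.0.113.1:61234 (10.0.0.5:4312) -> 198.51.100.7:80  ESTABLISHED:ESTABLISHED
all tcp 203.0.113.1:61235 (10.0.0.6:1044) -> 198.51.100.7:80  ESTABLISHED:ESTABLISHED'
counts=$(printf '%s\n' "$ss" | awk '{split($3, a, ":"); n[a[1]]++}
                                    END {for (ip in n) print ip, n[ip]}')
echo "$counts"
# If one external address carries tens of thousands of states toward the
# same destination, spreading the NAT over several external addresses is
# the usual remedy.
```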



-- 
Regards,
 Bc. Radek Krejca
 [EMAIL PROTECTED]
 http://www.ceskedomeny.cz
 http://www.skdomeny.com
 http://www.starnet.cz