On 05 Mar 2015, at 10:14, Владимир Друзенко <v...@unislabs.com> wrote:
> 
> On 05.03.2015 at 11:38, Golub Mikhail wrote:
>> It all comes down to vmxnet3.
>> I switched back to e1000 (em0 in the guest) and it no longer dumps core.
>> 
>> All that's left now is to configure altq properly.
>> 
>> I'll stop here for now.
>> Thanks to everyone for the help.
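>> (For reference, switching back to e1000 should only require changing the
>> interface macro at the top of pf.conf-test; the queue and pass rules quoted
>> below stay the same. A minimal sketch, assuming the guest NIC shows up as em0:)
>> 
>> ext_if="em0"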
>> 
>>> -----Original Message-----
>>> From: owner-free...@uafug.org.ua [mailto:owner-free...@uafug.org.ua]
>>> On Behalf Of Golub Mikhail
>>> Sent: Thursday, March 05, 2015 10:16 AM
>>> To: freebsd@uafug.org.ua
>>> Subject: RE: [freebsd] FreeBSD 10.1: PF bug?
>>> 
>>> On a test VM under VMware ESX 5.5u2, guest OS FreeBSD 10.1 x64.
>>> The network adapter is vmx3f0.
>>> VMware Tools are installed.
>>> The system is set up so that neither the pf rules nor squid are loaded at
>>> boot (for testing).
>>> 
>>> I load the following rules manually (based on
>>> https://calomel.org/pf_hfsc.html, just a test for now):
>>> pfctl -f /etc/pf.conf-test
>>> 
>>> ext_if="vmx3f0"
>>> altq on $ext_if bandwidth 90Mb hfsc queue { ack, dns, ssh, web_high, web_low, bulk }
>>>    queue ack bandwidth 30% priority 8 qlimit 500 hfsc (realtime 20%)
>>>    queue dns bandwidth 5% priority 7 qlimit 500 hfsc (realtime 5%)
>>>    queue ssh bandwidth 5% priority 6 qlimit 500 hfsc (realtime 5%) {ssh_login, ssh_bulk}
>>>        queue ssh_login bandwidth 50% priority 6 qlimit 500 hfsc
>>>        queue ssh_bulk bandwidth 50% priority 5 qlimit 500 hfsc
>>>    queue bulk bandwidth 10% priority 5 qlimit 500 hfsc (realtime 10% default)
>>>    queue web_high bandwidth 25% priority 4 qlimit 500 hfsc (realtime 20%)
>>>    queue web_low  bandwidth 25% priority 3 qlimit 500 hfsc (realtime 20%)
>>> pass out on $ext_if inet proto tcp from ($ext_if) to any flags S/SA modulate state queue (ack, bulk)
>>> pass out on $ext_if inet proto tcp from ($ext_if) to any port ssh flags S/SA modulate state queue (ssh_login, ssh_bulk)
>>> pass on $ext_if inet proto udp from any to any modulate state queue (dns)
>>> pass on $ext_if inet proto tcp from any to any port {80,443} tos 0x31 flags S/SA modulate state queue (web_high, ack)
>>> pass on $ext_if inet proto tcp from any to any port {80,443} tos 0x32 flags S/SA modulate state queue (web_low, ack)
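>>> 
>>> (For reference, the ruleset can be syntax-checked before loading and the HFSC
>>> queues inspected once it is active; both are standard pfctl invocations:)
>>> 
>>> pfctl -nf /etc/pf.conf-test   # parse only, do not load
>>> pfctl -vvs queue              # show queue definitions and counters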
>>> 
>>> I start squid: service squid start
>>> Everything works ... until the first request hits the proxy, at which point I get a core dump.
>>> 
>>> Mar  5 10:01:12 vm2 savecore: reboot after panic: page fault
>>> Mar  5 10:01:12 vm2 savecore: writing core to /var/crash/vmcore.2
>>> 
>>> kgdb kernel.debug /var/crash/vmcore.2
>>> 
>>> Fatal trap 12: page fault while in kernel mode
>>> cpuid = 0; apic id = 00
>>> fault virtual address   = 0x38
>>> fault code              = supervisor read data, page not present
>>> instruction pointer     = 0x20:0xffffffff81b3cfa7
>>> stack pointer           = 0x28:0xfffffe004e52f250
>>> frame pointer           = 0x28:0xfffffe004e52f2e0
>>> code segment            = base 0x0, limit 0xfffff, type 0x1b
>>>                        = DPL 0, pres 1, long 1, def32 0, gran 1
>>> processor eflags        = interrupt enabled, resume, IOPL = 0
>>> current process         = 0 (vmx3f0 taskq)
>>> trap number             = 12
>>> panic: page fault
>>> cpuid = 0
>>> KDB: stack backtrace:
>>> #0 0xffffffff809202f0 at kdb_backtrace+0x60
>>> #1 0xffffffff808e5415 at panic+0x155
>>> #2 0xffffffff80ce13bf at trap_fatal+0x38f
>>> #3 0xffffffff80ce16d8 at trap_pfault+0x308
>>> #4 0xffffffff80ce0d3a at trap+0x47a
>>> #5 0xffffffff80cc6c22 at calltrap+0x8
>>> #6 0xffffffff809a5f20 at if_transmit+0x130
>>> #7 0xffffffff809a7c8d at ether_output+0x58d
>>> #8 0xffffffff80a154db at ip_output+0x115b
>>> #9 0xffffffff80a85cbc at tcp_output+0x191c
>>> #10 0xffffffff80a82f55 at tcp_do_segment+0x3045
>>> #11 0xffffffff80a7f2c4 at tcp_input+0xd04
>>> #12 0xffffffff80a114b7 at ip_input+0x97
>>> #13 0xffffffff809b09b2 at netisr_dispatch_src+0x62
>>> #14 0xffffffff809a7e26 at ether_demux+0x126
>>> #15 0xffffffff809a8ace at ether_nh_input+0x35e
>>> #16 0xffffffff809b09b2 at netisr_dispatch_src+0x62
>>> #17 0xffffffff81b3c820 at vmxnet3_rq_rx_complete+0x3d0
>>> Uptime: 2m1s
>>> Dumping 123 out of 998 MB:..13%..26%..39%..52%..65%..78%..91%
>>> 
>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/zfs.ko.symbols
>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols
>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols
>>> Reading symbols from /boot/kernel/accf_http.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/accf_http.ko.symbols
>>> Reading symbols from /boot/kernel/crypto.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/crypto.ko.symbols
>>> Reading symbols from /boot/modules/vmxnet3.ko...done.
>>> Loaded symbols for /boot/modules/vmxnet3.ko
>>> Reading symbols from /boot/kernel/pflog.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/pflog.ko.symbols
>>> Reading symbols from /boot/kernel/pf.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/pf.ko.symbols
>>> Reading symbols from /boot/modules/vmmemctl.ko...done.
>>> Loaded symbols for /boot/modules/vmmemctl.ko
>>> #0  doadump (textdump=<value optimized out>) at pcpu.h:219
>>> 219             __asm("movq %%gs:%1,%0" : "=r" (td)
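>>> 
>>> (From here the usual next step in kgdb is to pull the full backtrace out of
>>> the dump and look at the faulting frame; a minimal sketch of the session:)
>>> 
>>> (kgdb) bt
>>> (kgdb) frame N     # N = the frame of interest from the bt output
>>> (kgdb) list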
> Not quite clear: if the kernel is custom anyway, why leave so much of this
> in modules?

Not quite clear: why are you hinting at the "cram everything into a monolithic
kernel" option? What would that gain? Especially given that everything cannot
be built in without kludges anyway.
Keeping things as modules, for example, gives you more flexibility.
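
(To illustrate the flexibility point: each of the modules visible in the kgdb
output above can be loaded or left out per host from /boot/loader.conf, without
rebuilding the kernel. A minimal sketch; the exact set is whatever a given
machine actually needs:)

# /boot/loader.conf
zfs_load="YES"      # ZFS support
pf_load="YES"       # pf packet filter (pf_enable="YES" in rc.conf also loads it)
pflog_load="YES"    # pflog logging interface
ipfw_load="YES"     # ipfw, if used alongside pf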
