SO_BINTIME has been ENOTSUPP with IPv6 from day one. In general I think it's
safe to advise using SO_TIMESTAMP instead, which is also the more flexible
interface. I'd suggest marking SO_BINTIME as obsolete instead.
-Max
On Tue, Jun 6, 2017 at 8:39 AM, Andrey V. Elsukov
JFYI. I've opened a follow-up differential for this potential regression:
https://reviews.freebsd.org/D10351
Thanks!
-Max
On Mon, Apr 10, 2017 at 7:43 AM, Maxim Sobolev <sobo...@freebsd.org> wrote:
> Hi Guys, I am sorry to bring this old thread up, but I think Ed's
> comparison wit
ed perhaps due to alignment and such,
hence "".
On Wed, Nov 16, 2016 at 5:38 PM, Maxim Sobolev <sobo...@freebsd.org> wrote:
> Hi folks, we are working on some extra socket options for getting
> monotonic timestamps instead of realtime ones. That might be extremely useful in
Hi folks, we are working on some extra socket options for getting
monotonic timestamps instead of realtime ones. That might be extremely useful in
Real World[tm] situations.
Before looking at the code closely I had considered SO_TIMESTAMP_MT, but
then I found that we are out of space
Andrey,
1. Sockets are unconnected
2. Datagrams are unicast
-Max
On Mon, Sep 5, 2016 at 6:46 AM, Andrey V. Elsukov <bu7c...@yandex.ru> wrote:
> On 05.09.16 15:42, Maxim Sobolev wrote:
> > Suppose we have two threads in the system both bound to a same specific
> UDP
P.S. Destination UDP address of the message in question is obviously
X.Y.Z.W:5071.
On Mon, Sep 5, 2016 at 5:42 AM, Maxim Sobolev <sobo...@freebsd.org> wrote:
> Hi Netters,
>
> Suppose we have two threads in the system both bound to a same specific
> UDP port, one using INAD
Hi Netters,
Suppose we have two threads in the system both bound to the same specific UDP
port, one using INADDR_ANY and the other one using an actual IP. When an incoming
message arrives at that port, is there any guarantee as to which of those
two threads is going to see the message? The question has arisen
Larry, thanks for the pointer. The patch in the PR fixed the issue for me.
I've added a comment in there.
-Max
On Wed, Jul 13, 2016 at 2:00 PM, Larry Rosenman <l...@lerctr.org> wrote:
> NOTE: I get an insta-panic on boot :(
>
> I'm waiting for Gleb to respond.
>
>
>
> On 2016
Thanks, looks like the same issue. I'll try the patch from the ticket.
-Max
On Wed, Jul 13, 2016 at 1:46 PM, Larry Rosenman <l...@lerctr.org> wrote:
> On 2016-07-13 15:32, Maxim Sobolev wrote:
>
>> Hi, we are seeing consistent crash doing 'ifconfig igb0 media auto' after
>
Hi, we are seeing a consistent crash doing 'ifconfig igb0 media auto' after
the interface has been provisioned by the DHCP client. This is stable/11 sources
from svn revision 302593.
That problem did not happen to us before the upgrade from 11.0-ALPHA3,
svn revision 301898 from head.
Screenshot of
<j...@freebsd.org> wrote:
> On Tuesday, October 20, 2015 06:31:47 PM Maxim Sobolev wrote:
> > Here you go:
> >
> > $ sudo procstat -S 11
> > PID TID COMM TDNAME CPU CSID CPU MASK
> >11 100082 intr irq269: ig
the upstream switch is not doing proper load
balancing between the two ports. We'll take it to the DC to fix.
Thanks, John, for helping to drill that down!
On Wed, Oct 21, 2015 at 11:31 AM, John Baldwin <j...@freebsd.org> wrote:
> On Wednesday, October 21, 2015 11:29:17 AM Maxim Sobolev wrote:
1 0-23
11 100142 intr swi0: uart uart01 0-23
On Mon, Oct 19, 2015 at 2:03 PM, John Baldwin <j...@freebsd.org> wrote:
> On Thursday, October 08, 2015 07:33:27 AM Maxim Sobolev wrote:
> > Hi John & others,
> >
> > We've come across a weird
Hi John & others,
We've come across a weird MSI routing issue on one of our newest dual
E5-2690v3 (Haswell) Supermicro X10DRL-i boxes running the latest 10.2-p4. It is
fitted with a dual-port Intel I350 card, in addition to the built-in I210
chip that is not used. The hw.igb.num_queues is set to 4, and
Yes, we've confirmed it's IXGBE_FDIR. It's good that it comes disabled in 10.2.
Thanks everyone for constructive input!
-Max
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to
I think we are getting better performance today with IXGBE_FDIR
switched off. It's not 100% conclusive though, since we've only pushed it to a
little below 200 kpps. We'll push more traffic tomorrow and see how it
goes.
-Maxim
On Fri, Aug 14, 2015 at 10:29 AM, Maxim Sobolev sobo...@freebsd.org wrote:
Hi guys, unfortunately no: neither reducing the number of queues from
8 to 6 nor pinning the interrupt rate at 2 per queue has made any
difference. The card still goes kaboom at about 200 kpps no matter
Thanks, we'll try that as well. We haven't had as much traffic in the past
2 days, so we were running at about 140 kpps, well below the level that used
to cause issues before. I'll try to redistribute traffic tomorrow so that
we get it tested.
-Max
On Wed, Aug 12, 2015 at 11:47 PM, Adrian Chadd
wrote:
12.08.2015, 02:28, Maxim Sobolev sobo...@freebsd.org:
Olivier, keep in mind that we are not kernel-forwarding packets, but
app-forwarding, i.e. the packet goes the full way
net-kernel-recvfrom-app-sendto-kernel-net, which is why we have much
lower PPS limits and which is why I think we
rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I210 Gigabit Network Connection'
class = network
subclass = ethernet
On Wed, Aug 12, 2015 at 8:03 AM, Maxim Sobolev sobo...@sippysoft.com
wrote:
Ok, so my current settings are:
hw.ix.max_interrupt_rate
-AT2'
class = network
subclass = ethernet
On Wed, Aug 12, 2015 at 8:23 AM, Adrian Chadd adrian.ch...@gmail.com
wrote:
Right, and for the ixgbe hardware?
-a
On 12 August 2015 at 08:05, Maxim Sobolev sobo...@freebsd.org wrote:
igb0@pci0:7:0:0:class=0x02 card
Hi folks,
We've been trying to migrate some of our high-PPS systems to new hardware that
has four X540-AT2 10G NICs and observed that interrupt time goes through the
roof after we cross around 200K PPS in and 200K out (two ports in LACP).
The previous hardware was stable up to about 350K PPS in and 350K
Thanks, we will try, however I don't think it's going to make a huge
difference, because we run almost 2x the PPS on otherwise identical (as
far as FreeBSD version/lagg code goes) hardware with I350/igb(9) and an inferior
CPU. That kind of suggests that whatever the problem is, it is below lagg.
.
Cheers,
Hiren
-adrian
On 11 August 2015 at 14:18, Maxim Sobolev sobo...@freebsd.org wrote:
Hi folks,
We've been trying to migrate some of our high-PPS systems to new hardware
that
has four X540-AT2 10G NICs and observed that interrupt time goes
through the roof after we
something that worked with the old-gen hardware/driver almost out of the
box now needs some extra tuning. Thanks everyone for the useful suggestions in
any case!
On Tue, Aug 11, 2015 at 4:04 PM, Luigi Rizzo ri...@iet.unipi.it wrote:
On Wed, Aug 12, 2015 at 12:46 AM, Maxim Sobolev sobo...@sippysoft.com
wrote
...@cochard.me wrote:
On Tue, Aug 11, 2015 at 11:18 PM, Maxim Sobolev sobo...@freebsd.org
wrote:
Hi folks,
Hi,
We've been trying to migrate some of our high-PPS systems to new hardware
that
has four X540-AT2 10G NICs and observed that interrupt time goes through
the roof after we cross
8/4096 rx
8/4096 queues/slots
On Tue, Aug 11, 2015 at 4:14 PM, Olivier Cochard-Labbé oliv...@cochard.me
wrote:
On Tue, Aug 11, 2015 at 11:18 PM, Maxim Sobolev sobo...@freebsd.org
wrote:
Hi folks,
Hi,
We've been trying to migrate some of our high-PPS systems to new hardware
that
has
On 11/24/2011 11:24 PM, Robert N. M. Watson wrote:
There was recently a commit to fix a race condition in 10-CURRENT which
I think is not slated to be merged for 9.0. You might check the commit
logs there and see if that fixes the problems you have -- if so, we
might want to reconsider the
On 12/30/2011 3:31 PM, Robert N. M. Watson wrote:
Looking back at a recent post from you, it appears that you are on 8.x and not 9.x, as I
had assumed from your original e-mail. The patch I was referring to in 9-CURRENT has long
since been merged for 9.0 and will appear in that release.
On 12/30/2011 4:46 PM, Maxim Sobolev wrote:
I see. Would you guys mind if I put that NULL pointer check into the
code for the time being and turned it into some kind of big nasty warning
in the 8-stable branch only?
I could also open a ticket and put all the debug information collected to date
wrote:
On Mon, 7 Nov 2011, Maxim Sobolev wrote:
On 11/7/2011 2:57 PM, Maxim Sobolev wrote:
On 11/7/2011 10:24 AM, Bjoern A. Zeeb wrote:
Unlikely; the inp is properly locked there and the udp info attach
better still be valid there; your problem is most likely elsewhere;
try to see if you have
Hi Gang,
We are seeing repeatable panics under high PPS load on our production
systems. It happens when the traffic gets into the range of 200 MBps and
150-200K PPS. We have managed to track it down to the following
piece of code:
(gdb) l *udp_input+0x5d2
0x806f6202 is in
Panic screenshot is here:
http://sobomax.sippysoft.com/ScreenShot859.png
-Maxim
On 11/7/2011 10:24 AM, Bjoern A. Zeeb wrote:
Unlikely; the inp is properly locked there and the udp info attach
better still be valid there; your problem is most likely elsewhere;
try to see if you have other threads and see what they do at the same
time, etc. You would need to race with
On 11/7/2011 2:57 PM, Maxim Sobolev wrote:
On 11/7/2011 10:24 AM, Bjoern A. Zeeb wrote:
Unlikely; the inp is properly locked there and the udp info attach
better still be valid there; your problem is most likely elsewhere;
try to see if you have other threads and see what they do at the same
On 11/7/2011 3:25 PM, Bjoern A. Zeeb wrote:
Now if you are clever you'd also log the inp there as the above will
only prove the case that something is wrong but still not help us in
anything to figure out what.
Good point, thank you Sir.
Would that be good enough?
printf("BZZT! Something is
Gary Jennejohn wrote:
On Fri, 30 Apr 2010 13:23:13 -0700
Maxim Sobolev sobo...@sippysoft.com wrote:
Hi,
Many network drivers in the FreeBSD kernel use the IFQ_MAXLEN value to
set length of the outgoing packets queue. The default value for that
parameter is only 50, which is pretty low
Hi,
Many network drivers in the FreeBSD kernel use the IFQ_MAXLEN value to
set the length of the outgoing packet queue. The default value for that
parameter is only 50, which is pretty low, especially for cases when
the system handles a lot of small packets, and can cause ENOBUFS in
Julian Elischer wrote:
On 4/30/10 1:23 PM, Maxim Sobolev wrote:
Hi,
Many network drivers in the FreeBSD kernel use the IFQ_MAXLEN value to
set length of the outgoing packets queue. The default value for that
parameter is only 50, which is pretty low especially for the cases when
the system
Jack Vogel wrote:
This thread is confusing: first he says it's an igb problem, then you
offer an em patch :)
I suspect it could be the patch for kern/140326.
-Maxim
Folks,
Indeed, it looks like an igb(4) issue. Replacing the card with a
desktop-grade em(4)-supported card has fixed the problem for us. The
system has been happily pushing 110 Mbps worth of RTP traffic and 2000
concurrent calls without any problems for two days now.
e...@pci0:7:0:0:
Miroslav Lachman wrote:
Can it be related to this issue somehow?
http://lists.freebsd.org/pipermail/freebsd-current/2009-August/011013.html
http://lists.freebsd.org/pipermail/freebsd-current/2009-August/010740.html
It was tested on FreeBSD 8 and high UDP traffic on igb interfaces emits
OK, here is some new data that I think rules out any issues with the
applications. Following Alfred's suggestion I have made a script to run
every second and output some system statistics:
date
netstat -m
vmstat -i
ps -axl
pstat -T
vmstat -z
sysctl -a
The problem had hit us again today
Hi,
Our company has a FreeBSD-based product that consists of numerous
interconnected processes and does some high-PPS UDP processing
(30-50K PPS is not uncommon). We are seeing some strange periodic
failures under load in several such systems, which usually manifest
themselves in
Sergey Babkin wrote:
Maxim Sobolev wrote:
Hi,
Our company has a FreeBSD-based product that consists of numerous
interconnected processes and does some high-PPS UDP processing
(30-50K PPS is not uncommon). We are seeing some strange periodic
failures under load in several
The following reply was made to PR kern/140326; it has been noted by GNATS.
From: Maxim Sobolev sobo...@freebsd.org
To: Jack Vogel jfvo...@gmail.com
Cc: freebsd-gnats-sub...@freebsd.org
Subject: Re: kern/140326: em0: watchdog timeout when communicating to windows
using 9K MTU
Date: Fri
Hi,
My em0 interface repeatedly hangs up with watchdog timeout when
communicating to the windows host at MTU 9K.
[sobo...@pioneer ~]$ grep em0 /var/run/dmesg.boot
em0: Intel(R) PRO/1000 Network Connection 6.9.6 port 0xecc0-0xecdf mem
0xfe6e-0xfe6f,0xfe6d9000-0xfe6d9fff irq 21 at
Hi,
I've noticed that in some cases you have removed the bcopy()
into arpcom.ac_enaddr completely, while in some others you
have modified it to use IFP2AC(). I wonder if it's a mistake
or if there is some logic behind that.
Also, it looks like in the cdce(4) driver you are referencing
if_softc before it's
On Thu, Jun 09, 2005 at 10:25:30AM +0200, Maxim Sobolev wrote:
Also, it looks like in cdce(4) driver you are referencing
if_softc before it's been assigned by if_alloc():
- cdce_ifp
-Maxim
On Fri, Oct 04, 2002 at 05:51:46PM +0400, Vladimir B. Grebenschikov wrote:
Hi
I have tried to install a fresh zebra (from ports) on 4.7-RC2 and
have a problem: zebra turns on promiscuous mode on the interface,
which is completely unacceptable when the interface is connected to a hub (not
a switch) - the router
it immediately.
-Maxim
Vladimir B. Grebenschikov wrote:
On Fri, 04.10.2002 at 18:06, Maxim Sobolev wrote:
On Fri, Oct 04, 2002 at 05:51:46PM +0400, Vladimir B. Grebenschikov wrote:
Hi
I have tried to install a fresh zebra (from ports) on 4.7-RC2 and
have a problem: zebra turns
Hi,
I noticed that after upgrading to 4-BETA something goes wrong with IP
forwarding via ppp(8). I have a FreeBSD box (A) connected to the Internet
via a network interface, and this system also has a modem for dial-in and
backup dial-up connections. Sometimes I need to route traffic through this
modem