On Thu, 2017-01-26 at 00:56 +0100, Hans-Kristian Bakke wrote:
> These are just the fq settings as they get applied when fq is the
> default qdisc. I guess there is room for improvement in those
> default settings depending on the use case.
>
>
> For future reference: should I increase the
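(For context: fq becomes the root qdisc automatically when it is set as the kernel default, and the parameters it was instantiated with can be inspected and overridden per interface. A minimal sketch; the interface name and values below are only illustrative:)
# fq is picked up as the root qdisc because of this sysctl:
sysctl net.core.default_qdisc
# show the parameters it was instantiated with on a given interface:
tc qdisc show dev eth0
# override the defaults if they do not fit the workload, e.g.:
tc qdisc replace dev eth0 root fq limit 10000 flow_limit 100 maxrate 900mbit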
I did record the qdisc settings, but I didn't capture the stats; throttling
is definitely active when I watch the tc -s stats in real time while testing
(looking at tun1).
tc -s qdisc show
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog
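(The throttling referred to above can be watched live; a sketch, assuming the tunnel interface is tun1 as in this thread:)
# refresh the fq statistics every second while the transfer runs;
# a growing "throttled" counter means pacing is actively delaying packets
watch -n 1 'tc -s qdisc show dev tun1'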
On Thu, 2017-01-26 at 00:41 +0100, Hans-Kristian Bakke wrote:
> I listed the qdiscs and put them with the captures, but the setup is:
>
> KVM VM:
> tun1 (qdisc: fq, OpenVPN UDP to dst 443)
> eth0 (qdisc: fq, local connection to internet)
I am not sure that it will work properly.
My concern is
I listed the qdiscs and put them with the captures, but the setup is:
KVM VM:
tun1 (qdisc: fq, OpenVPN UDP to dst 443)
eth0 (qdisc: fq, local connection to internet)
BBR is always enabled as the TCP congestion control.
The NICs use virtio and are connected to an Open vSwitch.
Physical host (newest Proxmox VE with kernel 4.4):
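(For reference, the VM setup described above would typically be applied roughly like this; a sketch only, using the interface names from the thread:)
# attach fq to both interfaces the TCP traffic traverses
tc qdisc replace dev tun1 root fq
tc qdisc replace dev eth0 root fq
# select BBR as the congestion control for new TCP connections
sysctl -w net.ipv4.tcp_congestion_control=bbr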
On Thu, 2017-01-26 at 00:04 +0100, Hans-Kristian Bakke wrote:
> I can do that. I guess I should do the capture from tun1 as that is
> the place where the TCP traffic is visible? My non-virtual NIC only
> sees OpenVPN-encapsulated UDP traffic.
>
But is FQ installed at the point TCP sockets
I can do that. I guess I should do the capture from tun1 as that is the
place where the TCP traffic is visible? My non-virtual NIC only sees
OpenVPN-encapsulated UDP traffic.
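(Capturing on the tunnel interface would look roughly like the following; the snap length and file name are only illustrative:)
# capture the TCP flow where it is still unencapsulated, i.e. inside the tunnel
tcpdump -i tun1 -s 128 -w tun1-bbr-test.pcap tcp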
Actually, the 1-4 mbit/s results with fq sporadically appear again as I keep
testing, but that is most likely caused by all the unknowns between me and
my test server. Still, changing to the pfifo_fast qdisc seems to normalize the
throughput again with BBR; could this be one of those times where BBR and
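(Switching the tunnel between the two qdiscs for an A/B comparison is a one-liner each way; a sketch:)
# test with the simple FIFO qdisc (no pacing) ...
tc qdisc replace dev tun1 root pfifo_fast
# ... rerun the throughput test, then switch back to fq pacing
tc qdisc replace dev tun1 root fq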
On Wed, Jan 25, 2017 at 02:12:05PM -0800, Eric Dumazet wrote:
> Well, pacing is optional in sch_fq.
>
> Only the FQ part is not optional.
Yeah, but who cares about the FQ part for an end host =)
> So sch_fq is actually a proper name ;)
Hah, kernel devs ;-)
/* Steinar */
Thank you for your comments. It makes sense to me now!
I am not a member of that mailing list, so feel free to repost or quote my
experiences if you feel they are relevant. I will keep it in mind for the
future though.
On 25 January 2017 at 23:02, Neal Cardwell wrote:
On Wed, Jan 25, 2017 at 05:01:04PM -0500, Neal Cardwell wrote:
> Nope, BBR needs pacing to work correctly, and currently fq is the only
> Linux qdisc that implements pacing.
I really wish sch_fq was renamed sch_pacing :-) And of course that we had a
single qdisc that was ideal for both end hosts
Perhaps the mail didn't arrive properly, but the fq performance is okay
now. I don't know why it was completely out for a couple of tests. It was
most likely my mistake or some very bad timing for testing.
I see that on my physical hosts tsc is also the default, with hpet in the
list of available clocksources.
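(The clocksource in use and the alternatives can be checked directly from sysfs:)
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource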
kvm-clock is a paravirtualized clock that seems to use the CPU's TSC
capabilities if they exist. But it may not be perfect:
Actually I think that is because it may be using the newer TSC:
dmesg | grep clocksource
[0.00] clocksource: kvm-clock: mask: 0x max_cycles:
0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[0.00] clocksource: refined-jiffies: mask: 0x max_cycles:
0x,
dmesg | grep HPET
[0.00] ACPI: HPET 0xBFFE274F 38 (v01 BOCHS BXPCHPET
0001 BXPC 0001)
[0.00] ACPI: HPET id: 0x8086a201 base: 0xfed0
I seem to indeed have a HPET in my VM. Does that mean that I should be able
to use fq as intended or could the HPET be some
I did some more testing with fq as a replacement for pfifo_fast and it now
behaves just as well. It must have been some strange artifact. My questions
are still standing, however.
Regards,
Hans-Kristian
> On 25 Jan, 2017, at 22:54, Hans-Kristian Bakke wrote:
>
> 4. Does BBR _only_ work with fq pacing, or could fq_codel be used as a
> replacement?
Without pacing, it is not BBR as specified. Currently, only the fq qdisc
implements pacing.
AFAIK, you need a working HPET for
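(Whether pacing is actually in effect on a live connection can be checked with ss; a sketch, where the destination address is a placeholder:)
# show TCP internals for the test connection; with BBR over fq the output
# includes the congestion control name ("bbr") and a pacing_rate figure
ss -tin dst 198.51.100.1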
Hi
Kernel 4.9 finally landed in Debian testing, so I could test BBR in a
real-life environment that I have struggled to get any kind of performance
out of.
The challenge at hand is UDP-based OpenVPN through Europe at around 35 ms
RTT to my VPN provider, with plenty of available