.deb
libudev1_231-10_amd64.deb
systemd_231-10_amd64.deb
udev_231-10_amd64.deb
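A minimal sketch of what such a downgrade could look like, assuming the three .deb files above have been fetched into the current directory (installed in one dpkg transaction so their mutual dependencies resolve):

    # Install the older systemd 231-10 packages together, then hold them
    # so apt does not upgrade them again on the next dist-upgrade.
    sudo dpkg -i libudev1_231-10_amd64.deb systemd_231-10_amd64.deb udev_231-10_amd64.deb
    sudo apt-mark hold libudev1 systemd udev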
This is ...fun. No wonder my file server performance got more unreliable
over the last couple of months.
On 13 February 2017 at 18:39, Hans-Kristian Bakke wrote:
> After some further testing I see that this is not
everything enabled.
As downgrading the kernel to a known-good version did not do the trick, there
must be something else system-wide in Debian Stretch. Perhaps I should really
try to get all the dependencies together to downgrade systemd.
Regards,
Hans-Kristian
On 13 February 2017 at 17:54, Hans-Kristian Bakke wrote:
In a previous thread on this mailing list (Excessive throttling with fq) it
was concluded that the reason for the bad performance was that tso was
somewhat inconsistent between the bond and the physical interfaces. I never
really knew why, but I knew I had been experimenting with traffic shaping
an
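A hedged sketch of how such an inconsistency could be spotted, assuming a bond0 device with hypothetical slaves eth0 and eth1; the offload flags should agree between the bond and its physical slaves:

    # Compare segmentation-offload flags across the bond and its slaves.
    for dev in bond0 eth0 eth1; do
        echo "== $dev =="
        ethtool -k "$dev" | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
    done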
On 27 January 2017 at 15:40, Eric Dumazet wrote:
> On Thu, 2017-01-26 at 23:55 -0800, Dave Täht wrote:
> >
> > On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> > > Hi
> > >
> > > After having had some issues with inconsistent tso/gso configuration
Thank you for answering!
On 27 January 2017 at 08:55, Dave Täht wrote:
>
>
> On 1/26/17 11:21 PM, Hans-Kristian Bakke wrote:
> > Hi
> >
> > After having had some issues with inconsistent tso/gso configuration
> > causing performance issues for sch_fq with p
Hi
After having had some issues with inconsistent tso/gso configuration
causing performance issues for sch_fq with pacing in one of my systems, I
wonder if it is still recommended to disable gso/tso for interfaces used
with fq_codel qdiscs and shaping using HTB etc.
If there is a trade-off, at wh
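For reference, disabling the offloads and attaching fq_codel under an HTB shaper might look like the following sketch (the interface name and rate are placeholders, not from the thread):

    # Turn off offloads so the shaper sees real wire-size packets.
    ethtool -K eth0 tso off gso off gro off
    # HTB root with a single shaped class, fq_codel as its leaf qdisc.
    tc qdisc replace dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 900mbit
    tc qdisc add dev eth0 parent 1:10 fq_codel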
it, from
> 359168 to 1328192
>
> DRS is not working as expected. Again maybe related to HZ value.
>
>
> On Thu, 2017-01-26 at 21:19 +0100, Hans-Kristian Bakke wrote:
> > There are two packet captures from fq with and without pacing here:
> >
> >
> > https://ownc
> grep HZ /boot/config (what is the HZ value of your kernel)
>
> I suspect a possible problem with TSO autodefer when/if HZ < 1000
>
> Thanks.
>
> On Thu, 2017-01-26 at 21:19 +0100, Hans-Kristian Bakke wrote:
> > There are two packet captures from fq
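On Debian the kernel config lives in a per-kernel file, so the HZ check suggested above could be spelled out as follows (the path follows the usual Debian convention):

    # CONFIG_HZ=1000 avoids the suspected TSO autodefer problem.
    grep -E '^CONFIG_HZ' /boot/config-$(uname -r)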
y ss
>> command
>>
>>
>> ss -temoi dst
>>
>>
>> pacing rate is shown. You might have some issues, but it is hard to say.
>>
>>
>> On Thu, 2017-01-26 at 19:55 +0100, Hans-Kristian Bakke wrote:
>>
>>> After some more testing
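A usage sketch of the ss invocation above, with a placeholder destination address; when fq pacing is active, a pacing_rate field shows up in the extended socket info:

    # -t TCP, -e extended, -m memory, -o timer, -i internal TCP info.
    ss -temoi dst 192.0.2.1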
On 26 January 2017 at 19:38, Hans-Kristian Bakke wrote:
> I can add that this is without BBR, just plain old kernel 4.8 cubic.
>
> On 26 January 2017 at 19:36, Hans-Kristian Bakke
> wrote:
>
>> Another day, another fq issue (or user error).
>>
>> I try to do the s
I can add that this is without BBR, just plain old kernel 4.8 cubic.
On 26 January 2017 at 19:36, Hans-Kristian Bakke wrote:
> Another day, another fq issue (or user error).
>
> I try to do the seemingly simple task of downloading a single large file
> over local gigabit LAN fro
Another day, another fq issue (or user error).
I try to do the seemingly simple task of downloading a single large file
over local gigabit LAN from a physical server running kernel 4.8 and
sch_fq on intel server NICs.
For some reason it wouldn't go past around 25 MB/s. After having replaced
SSL
Dumazet wrote:
> On Thu, 2017-01-26 at 00:47 +0100, Hans-Kristian Bakke wrote:
> >
> >
> > I did record the qdisc settings, but I didn't capture the stats;
> > throttling is definitely active when I watch the tc -s stats in
> > real time when testing (looking
ues 0)
backlog 101616b 3p requeues 0
16 flows (15 inactive, 1 throttled), next packet delay 351937 ns
0 gc, 0 highprio, 58377 throttled, 12761 ns latency
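Counters like the throttled and latency values above come from the fq qdisc statistics; a sketch of how to read them (the device name is assumed):

    # Show per-qdisc statistics, including fq's throttle counters.
    tc -s qdisc show dev eth0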
On 26 January 2017 at 00:33, Eric Dumazet wrote:
>
> On Thu, 2017-01-26 at 00:04 +0100, Hans-Kristian Bakke wrote:
> > I can d
e VPN connection is terminated in the Netherlands on a gigabit VPN server
with around 35 ms RTT.
On 26 January 2017 at 00:33, Eric Dumazet wrote:
>
> On Thu, 2017-01-26 at 00:04 +0100, Hans-Kristian Bakke wrote:
> > I can do that. I guess I should do the capture from tun1 as that is
> >
2017 at 00:04, Hans-Kristian Bakke wrote:
> I can do that. I guess I should do the capture from tun1 as that is the
> place where the TCP traffic is visible? My non-virtual NIC only sees
> OpenVPN-encapsulated UDP traffic.
>
> On 25 January 2017 at 23:48, Neal Cardwell wrote:
I can do that. I guess I should do the capture from tun1 as that is the
place where the TCP traffic is visible? My non-virtual NIC only sees
OpenVPN-encapsulated UDP traffic.
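A capture on the tun device could be taken along these lines (a sketch; snap length and file name are arbitrary):

    # Capture the decrypted TCP flow on tun1, keeping only headers.
    tcpdump -i tun1 -s 128 -w tun1-capture.pcap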
On 25 January 2017 at 23:48, Neal Cardwell wrote:
> On Wed, Jan 25, 2017 at 5:38 PM, Hans-Kristian Bakke
>
pacing actually is getting hurt for playing nice with some very variable
bottleneck on the way?
On 25 January 2017 at 23:01, Neal Cardwell wrote:
> On Wed, Jan 25, 2017 at 3:54 PM, Hans-Kristian Bakke
> wrote:
>
>> Hi
>>
>> Kernel 4.9 finally landed in Debian testing
at 5:01 PM, Neal Cardwell
> wrote:
>
>> On Wed, Jan 25, 2017 at 3:54 PM, Hans-Kristian Bakke
>> wrote:
>>
>>> Hi
>>>
>>> Kernel 4.9 finally landed in Debian testing so I could finally test BBR
>>> in a real life environment that I have str
| grep latency
> 0 gc, 0 highprio, 32490767 throttled, 2382 ns latency
>
>
> On Wed, 2017-01-25 at 22:31 +0100, Hans-Kristian Bakke wrote:
> > kvm-clock is a paravirtualized clock that seems to use the CPU's TSC
> > capabilities if they exist. But it may not be perfect:
-Virtualization_Host_Configuration_and_Guest_Installation_Guide-KVM_guest_timing_management.html
On 25 January 2017 at 22:29, Hans-Kristian Bakke wrote:
> Actually I think that is because it may be using the newer TSC:
> dmesg | grep clocksource
> [0.00] clocksource: kvm-clock: mask: 0x
>
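The active and available clocksources can also be read directly from sysfs, which may be easier than grepping dmesg:

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource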
January 2017 at 22:26, Jonathan Morton wrote:
>
> > On 25 Jan, 2017, at 23:20, Hans-Kristian Bakke
> wrote:
> >
> > [0.00] ACPI: HPET 0xBFFE274F 38 (v01 BOCHS
> BXPCHPET 0001 BXPC 0001)
> > [0.00] ACPI: HPET id: 0x8086a201 b
7 at 22:17, Jonathan Morton wrote:
>
> > On 25 Jan, 2017, at 23:13, Hans-Kristian Bakke
> wrote:
> >
> > dmesg | grep HPET
> > [0.00] ACPI: HPET 0xBFFE274F 38 (v01 BOCHS BXPCHPET
> 0001 BXPC 0001)
> > [0.00] ACPI: HPET id: 0x8
Thank you.
Do I understand correctly that fq is really just hit and miss within a VM
in general, then? Is there no advantage to the fair queuing part even with a
low-precision clock?
On 25 January 2017 at 22:00, Jonathan Morton wrote:
>
> > On 25 Jan, 2017, at 22:54, Hans-Kristian Bakke
kind of virtualized device?
Regards,
Hans-Kristian
On 25 January 2017 at 22:09, Jonathan Morton wrote:
>
> > On 25 Jan, 2017, at 23:05, Hans-Kristian Bakke
> wrote:
> >
> > Do I understand correctly that fq is really just hit and miss within a
> VM in general then? I
I did some more testing with fq as a replacement for pfifo_fast and it now
behaves just as well. It must have been some strange artifact. My questions
still stand, however.
Regards,
Hans-Kristian
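Swapping pfifo_fast for fq as described could be done per interface or system-wide (a sketch; the interface name is assumed):

    # Per interface:
    tc qdisc replace dev eth0 root fq
    # Or as the default for newly created interfaces:
    sysctl -w net.core.default_qdisc=fq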
On 25 January 2017 at 21:54, Hans-Kristian Bakke wrote:
> Hi
>
> Kernel 4.9 finally
be the reason that it still works?
4. Does BBR _only_ work with fq pacing, or could fq_codel be used as a
replacement?
5. Is BBR perhaps modified to do the right thing without having to change
the qdisc in the current kernel 4.9?
Sorry for the long post, but this is an interest
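For context on questions 4 and 5: on kernel 4.9, BBR still relies on the fq qdisc for pacing, so a minimal setup would combine the two (a sketch):

    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr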