On Fri, Oct 12, 2018 at 07:03:41PM -0400, Len Brown wrote:
> > Why would the cpu topology report 0 cpus? I added a debug entry to
> > cpu_usage_stat and /proc/stat showed it as an extra column. Then
> > fscanf parsing in for_all_cpus() failed, causing the SIGFPE.
> >
> > This is not an issue.
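For reference, the column-count check can be reproduced outside turbostat. A minimal sketch, assuming the stock /proc/stat layout (which the custom debug entry described above would extend by one column):

```shell
# Mainline kernels print 10 time columns per cpuN line in /proc/stat
# (user nice system idle iowait irq softirq steal guest guest_nice).
# An extra debug column breaks parsers that use a fixed fscanf format,
# as turbostat's for_all_cpus() does.
awk '/^cpu[0-9]/ { print $1, NF - 1 }' /proc/stat
```

On a stock kernel every row reports 10; the extra cpu_usage_stat entry makes it 11, which is why the fixed-format fscanf stopped matching.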
On Fri, Oct 12, 2018 at 11:26:30AM -0700, Solio Sarabia wrote:
> Hi --
>
> turbostat 17.06.23 is throwing an exception on a custom linux-4.16.12
> kernel, on Xeon E5-2699 v4 Broadwell EP, 2S, 22C/S, 44C total, HT off,
> VTx off.
>
> Initially the system had 4.4.0-137. Then
Under high IO activity (storage or network), the kernel fails to account
some CPU cycles, as seen when comparing sar against emon (a tool that
reads the hardware PMU directly). The difference is larger on cores that
spend most of their time in idle states and constantly wake up to handle
interrupts. It happens even with fine
as to why this happens:
What could be the reason for this issue?
Any pointers to the kernel subsystem/code performing time accounting?
Thanks,
-Solio
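For anyone reproducing the comparison, this is roughly the arithmetic sar and friends apply on top of /proc/stat. A minimal sketch, assuming the documented procfs column order (guest columns folded into the trailing read):

```shell
# What sar/mpstat derive from /proc/stat: sample the aggregate "cpu"
# line twice and compute busy time from the tick deltas.
# Columns: user nice system idle iowait irq softirq steal [guest ...]
read -r _ u1 n1 s1 i1 w1 q1 x1 st1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 x2 st2 _ < /proc/stat
total=$(( (u2+n2+s2+i2+w2+q2+x2+st2) - (u1+n1+s1+i1+w1+q1+x1+st1) ))
idle=$(( (i2+w2) - (i1+w1) ))
echo "busy%: $(( 100 * (total - idle) / total ))"
```

The delta arithmetic itself cannot create cycles: whatever the kernel never charged to a cpu_usage_stat bucket is invisible to every /proc/stat consumer, which is why all of sar, mpstat, and top disagree with the PMU in the same direction.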
On Wed, Jun 20, 2018 at 04:41:40PM -0700, Solio Sarabia wrote:
> Thanks Andi, Stephen, for your help/insights.
>
> TICK_CPU_ACCOUNTING
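To check which of these accounting schemes a given kernel was built with, a sketch (the config file path varies by distro; some kernels expose /proc/config.gz instead):

```shell
# CPU-time accounting build options: TICK_CPU_ACCOUNTING charges whole
# ticks to whoever happens to be running when the tick fires;
# IRQ_TIME_ACCOUNTING and VIRT_CPU_ACCOUNTING_GEN are the finer-grained
# (and costlier) alternatives.
grep -E 'CONFIG_(TICK_CPU_ACCOUNTING|VIRT_CPU_ACCOUNTING_GEN|IRQ_TIME_ACCOUNTING)' \
  "/boot/config-$(uname -r)"
```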
transition between softirq and hardirq state,
so there can be a performance impact.
-Solio
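To see which cores are doing that softirq work, a sketch using the standard /proc/softirqs table (row names as printed by the kernel):

```shell
# Per-CPU softirq counts; the CPUs with the largest NET_RX/NET_TX counts
# are the ones waking most often, i.e. where the accounting drift
# between /proc/stat and the PMU concentrates.
head -n 1 /proc/softirqs
grep -E 'NET_RX|NET_TX|TIMER' /proc/softirqs
```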
On Thu, Jun 14, 2018 at 08:41:33PM -0700, Solio Sarabia wrote:
> Hello --
>
> I'm running into an issue where sar, mpstat, top, and other tools show
> less cpu utilization compared to emon [1]
Hello --
I'm running into an issue where sar, mpstat, top, and other tools show
less cpu utilization compared to emon [1]. Sar uses /proc/stat as its
source, and was configured to collect in 1s intervals. Emon reads
hardware counter MSRs in the PMU in timer intervals, 0.1s for this
scenario.
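One structural difference between the two sources is worth noting: /proc/stat counters are whole ticks, so their resolution is bounded by USER_HZ regardless of the sampling interval:

```shell
# USER_HZ is the unit of the /proc/stat counters -- typically 100,
# i.e. 10 ms per tick. A core that wakes for a few hundred microseconds
# to service an interrupt can run through many 1 s sampling windows
# without ever being charged a full tick, while a PMU-based tool like
# emon still counts those cycles.
getconf CLK_TCK
```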
On Fri, Nov 24, 2017 at 10:32:49AM -0800, Eric Dumazet wrote:
> On Fri, 2017-11-24 at 10:14 -0700, David Ahern wrote:
> >
> > This should be added to rtnetlink rather than sysfs.
>
> This is already exposed by rtnetlink [1]
>
> Please lets not add yet another net-sysfs knob.
>
> [1]
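The rtnetlink exposure being referred to is visible from userspace via iproute2 (a sketch; the exact output format varies by iproute2 version, recent builds print the attribute in detailed link dumps):

```shell
# IFLA_GSO_MAX_SIZE is carried in rtnetlink link dumps; `ip -d link`
# renders it, so no sysfs file is needed just to read the value.
ip -d link show dev lo | grep -o 'gso_max_size [0-9]*'
```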
The GSO buffer size supported by underlying devices is not propagated to
veth. In high-speed connections with hardware TSO enabled, veth sends
buffers bigger than the lower device's maximum GSO size, forcing
software TSO and increasing system CPU usage.
Signed-off-by: Solio Sarabia <solio.sara...@intel.com>
---
Exposing gso_max_size via sysfs
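With the patch applied, the value would be readable per device under the usual sysfs layout. This is hypothetical on unpatched kernels -- the attribute file only exists if this series is merged:

```shell
# Walk /sys/class/net and print gso_max_size where the (patched-in)
# attribute exists; on a mainline kernel the test simply never matches.
for d in /sys/class/net/*; do
  [ -r "$d/gso_max_size" ] && printf '%s %s\n' "${d##*/}" "$(cat "$d/gso_max_size")"
done
true  # keep a clean exit status even when no device exposes the file
```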
On Wed, Nov 22, 2017 at 04:30:41PM -0800, Solio Sarabia wrote:
> The netdevice gso_max_size is exposed to give users fine-grained
> control on systems with multiple NICs with different GSO buffer sizes,
> and where virtual devices like bridge and veth need to be aware of the G
Sebastian <shiny.sebast...@intel.com>
Signed-off-by: Solio Sarabia <solio.sara...@intel.com>
---
In one test scenario with Hyper-V host, Ubuntu 16.04 VM, with Docker
inside VM, and NTttcp sending 40 Gbps from one container, setting the
right gso_max_size values for all network devices in the chain reduces
CPU overhead by about 3x (for the sender
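A sketch of the tuning that 3x figure refers to. Device names here are examples from a Docker-in-VM setup, and setting gso_max_size from userspace requires either this sysfs knob or a kernel/iproute2 combination that accepts it via `ip link set`:

```shell
# Clamp every virtual device on the container path (bridge, veth pair,
# synthetic NIC) to the physical device's GSO cap so segmentation stays
# in hardware instead of falling back to software TSO.
GSO=65536                      # example: cap advertised by the lower device
for dev in docker0 veth0 eth0; do
  ip link set dev "$dev" gso_max_size "$GSO" 2>/dev/null ||
    echo "could not set gso_max_size on $dev"
done
```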