Re: turbostat-17.06.23 floating point exception

2018-10-18 Thread Solio Sarabia
On Fri, Oct 12, 2018 at 07:03:41PM -0400, Len Brown wrote:
> > Why would the cpu topology report 0 cpus? I added a debug entry to
> > cpu_usage_stat and /proc/stat showed it as an extra column. Then
> > fscanf parsing in for_all_cpus() failed, causing the SIGFPE.
> >
> > This is not an issue.
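
The failure mode described in this thread can be reproduced with a minimal user-space sketch (this is not turbostat's source; the 10-column format string and the final division are illustrative assumptions, while for_all_cpus(), the extra /proc/stat column, and the SIGFPE come from the report): a fixed-column fscanf leaves the extra number unconsumed, the next "cpu%u" literal fails to match, zero CPUs get counted, and an unguarded division by the CPU count traps.

/*
 * Minimal sketch, not turbostat's code: count cpus by parsing /proc/stat
 * with a fixed 10-column format, as the thread describes for_all_cpus()
 * doing.  An extra column (e.g. a debug entry in cpu_usage_stat) leaves
 * an unconsumed number in the stream, the next "cpu%u" literal fails to
 * match, num_cpus stays 0, and an unguarded division raises SIGFPE.
 */
#include <stdio.h>

int main(void)
{
	FILE *fp = fopen("/proc/stat", "r");
	unsigned int cpu_num;
	int num_cpus = 0;

	if (!fp) {
		perror("/proc/stat");
		return 1;
	}

	/* skip the aggregate "cpu " line; still returns 0 if a column was added */
	if (fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n") != 0)
		fprintf(stderr, "unexpected /proc/stat format\n");

	/* one "cpuN" line per online cpu, same 10-field assumption */
	while (fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n",
		      &cpu_num) == 1)
		num_cpus++;
	fclose(fp);

	if (num_cpus == 0) {	/* the guard that avoids the divide-by-zero */
		fprintf(stderr, "parsed 0 cpus from /proc/stat\n");
		return 1;
	}
	printf("cpus: %d, share per cpu: %d%%\n", num_cpus, 100 / num_cpus);
	return 0;
}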

Re: turbostat-17.06.23 floating point exception

2018-10-12 Thread Solio Sarabia
On Fri, Oct 12, 2018 at 11:26:30AM -0700, Solio Sarabia wrote:
> Hi --
>
> turbostat 17.06.23 is throwing an exception on a custom linux-4.16.12
> kernel, on Xeon E5-2699 v4 Broadwell EP, 2S, 22C/S, 44C total, HT off,
> VTx off.
>
> Initially the system had 4.4.0-137. Then

Time accounting difference under high IO interrupts

2018-08-14 Thread Solio Sarabia
Under high IO activity (storage or network), the kernel is not accounting some cpu cycles when comparing sar vs emon (a tool that accesses the hw PMU directly). The difference is higher on cores that spend most of their time in an idle state and are constantly waking up to handle interrupts. It happens even with fine
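
The sketch below shows roughly how the sar side of this comparison is computed from /proc/stat: sample the per-cpu jiffies counters twice and report busy% as the non-idle delta over the total delta. It is a hedged approximation, not sar's source; the field layout (idle = idle + iowait), the MAX_CPUS bound, and the 1s sleep are example choices. With plain tick-based accounting (and without CONFIG_IRQ_TIME_ACCOUNTING), these counters advance in tick-sized chunks charged to whatever the tick interrupts, so short interrupt bursts on an otherwise idle core can be charged entirely to idle, which is consistent with the gap described above.

/*
 * Hedged sketch of sar-style accounting (not sar's source): sample the
 * per-cpu jiffies counters in /proc/stat twice and report
 * busy% = 1 - (idle delta / total delta).  These counters advance in
 * tick-sized chunks charged to whatever context the tick interrupts,
 * so sub-tick interrupt work on a mostly idle core may never show up.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_CPUS 512

struct cpu_sample {
	unsigned int id;
	unsigned long long idle, total;	/* idle+iowait, sum of all fields */
};

static int read_cpus(struct cpu_sample *s, int max)
{
	FILE *fp = fopen("/proc/stat", "r");
	char line[512];
	int n = 0;

	if (!fp)
		return -1;
	while (n < max && fgets(line, sizeof(line), fp)) {
		unsigned long long v[10] = { 0 };

		/* only "cpuN ..." lines; skip the aggregate "cpu " line */
		if (strncmp(line, "cpu", 3) || line[3] < '0' || line[3] > '9')
			continue;
		if (sscanf(line + 3,
			   "%u %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
			   &s[n].id, &v[0], &v[1], &v[2], &v[3], &v[4],
			   &v[5], &v[6], &v[7], &v[8], &v[9]) < 5)
			continue;
		s[n].idle = v[3] + v[4];
		s[n].total = 0;
		for (int i = 0; i < 10; i++)
			s[n].total += v[i];
		n++;
	}
	fclose(fp);
	return n;
}

int main(void)
{
	struct cpu_sample a[MAX_CPUS], b[MAX_CPUS];
	int n = read_cpus(a, MAX_CPUS);

	sleep(1);			/* 1s interval, as sar was configured */
	if (n <= 0 || read_cpus(b, MAX_CPUS) != n)
		return 1;

	for (int i = 0; i < n; i++) {
		unsigned long long dt = b[i].total - a[i].total;
		unsigned long long di = b[i].idle - a[i].idle;

		if (dt)
			printf("cpu%u busy %.1f%%\n", a[i].id,
			       100.0 * (dt - di) / dt);
	}
	return 0;
}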

Re: Differences in cpu utilization reported by sar, emon

2018-07-10 Thread Solio Sarabia
as to why this happens: What could be the reason for this issue? Any pointers to the kernel subsystem/code performing time accounting? Thanks, -Solio

On Wed, Jun 20, 2018 at 04:41:40PM -0700, Solio Sarabia wrote:
> Thanks Andi, Stephen, for your help/insights.
>
> TICK_CPU_ACCOUNTING

Re: Differences in cpu utilization reported by sar, emon

2018-06-20 Thread Solio Sarabia
transition between softirq and hardirq state, so there can be a performance impact. -Solio

On Thu, Jun 14, 2018 at 08:41:33PM -0700, Solio Sarabia wrote:
> Hello --
>
> I'm running into an issue where sar, mpstat, top, and other tools show
> less cpu utilization compared to emon [1]

Differences in cpu utilization reported by sar, emon

2018-06-14 Thread Solio Sarabia
Hello --

I'm running into an issue where sar, mpstat, top, and other tools show less cpu utilization compared to emon [1]. Sar uses /proc/stat as its source and was configured to collect at 1s intervals. Emon reads hardware counter MSRs in the PMU at timer intervals, 0.1s for this scenario. The
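
As a rough cross-check of the PMU side of this comparison without emon (which reads the MSRs directly), the hedged sketch below counts cycles on one CPU through the kernel's perf_event_open(2) interface to the same PMU. CPU 0 and the 1s window are arbitrary example choices, and the call needs root or a permissive perf_event_paranoid setting. Unlike the tick-based /proc/stat numbers, the hardware counter accumulates all non-halted cycles on that CPU, including short interrupt bursts.

/*
 * Hedged sketch: count CPU cycles on one cpu via perf_event_open(2),
 * the kernel's interface to the same PMU counters emon reads via MSRs.
 * Unlike /proc/stat, the hardware counter accumulates all non-halted
 * cycles, including short interrupt bursts between timer ticks.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	unsigned long long cycles;
	int cpu = 0;			/* arbitrary: measure cpu0 */
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	/* pid = -1, cpu = 0: count everything that runs on that cpu */
	fd = perf_event_open(&attr, -1, cpu, -1, 0);
	if (fd < 0) {
		perror("perf_event_open (needs root or low perf_event_paranoid)");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	sleep(1);			/* same 1s window as the sar samples */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &cycles, sizeof(cycles)) != sizeof(cycles))
		return 1;
	printf("cpu%d cycles in 1s: %llu\n", cpu, cycles);
	close(fd);
	return 0;
}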

Re: [PATCH] net-sysfs: export gso_max_size attribute

2017-11-27 Thread Solio Sarabia
On Fri, Nov 24, 2017 at 10:32:49AM -0800, Eric Dumazet wrote:
> On Fri, 2017-11-24 at 10:14 -0700, David Ahern wrote:
> >
> > This should be added to rtnetlink rather than sysfs.
>
> This is already exposed by rtnetlink [1]
>
> Please lets not add yet another net-sysfs knob.
>
> [1]
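
For reference, gso_max_size is carried in the IFLA_GSO_MAX_SIZE attribute of an RTM_GETLINK reply (present since the 4.6 uapi headers), so it can be read from user space over rtnetlink as suggested above. Below is a hedged, minimal sketch of such a query; the default interface name "eth0", the buffer size, and the pared-down error handling are example choices only.

/*
 * Hedged sketch: read gso_max_size over rtnetlink (IFLA_GSO_MAX_SIZE in
 * an RTM_GETLINK reply) instead of adding a sysfs knob.  Assumes uapi
 * headers that define IFLA_GSO_MAX_SIZE (kernel >= 4.6).
 */
#include <stdio.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/rtnetlink.h>

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0";
	struct {
		struct nlmsghdr nh;
		struct ifinfomsg ifi;
	} req = {
		.nh.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
		.nh.nlmsg_type  = RTM_GETLINK,
		.nh.nlmsg_flags = NLM_F_REQUEST,
		.ifi.ifi_family = AF_UNSPEC,
		.ifi.ifi_index  = if_nametoindex(ifname),
	};
	char buf[16384];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	int len;

	if (fd < 0 || req.ifi.ifi_index == 0) {
		perror("socket/if_nametoindex");
		return 1;
	}
	send(fd, &req, req.nh.nlmsg_len, 0);
	len = recv(fd, buf, sizeof(buf), 0);

	for (struct nlmsghdr *nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
	     nh = NLMSG_NEXT(nh, len)) {
		if (nh->nlmsg_type != RTM_NEWLINK)
			continue;
		struct ifinfomsg *ifi = NLMSG_DATA(nh);
		int alen = nh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifi));

		for (struct rtattr *rta = IFLA_RTA(ifi); RTA_OK(rta, alen);
		     rta = RTA_NEXT(rta, alen))
			if (rta->rta_type == IFLA_GSO_MAX_SIZE)
				printf("%s gso_max_size: %u\n", ifname,
				       *(unsigned int *)RTA_DATA(rta));
	}
	close(fd);
	return 0;
}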

[PATCH RFC] veth: make veth aware of gso buffer size

2017-11-25 Thread Solio Sarabia
GSO buffer size supported by underlying devices is not propagated to veth. In high-speed connections with hw TSO enabled, veth sends buffers bigger than the lower device's maximum GSO size, forcing sw TSO and increasing system CPU usage.

Signed-off-by: Solio Sarabia <solio.sara...@intel.com>
---
Exposing gso_max_size via sysfs
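
The underlying idea is for the virtual device to inherit the smaller GSO limit of the device below it. A hedged kernel-side sketch of that clamping follows; it is illustrative only and not the RFC patch itself. netif_set_gso_max_size() is the existing helper in include/linux/netdevice.h, while clamp_gso_to_lower_dev() and its call site (whatever notifier or setup path the real patch hooks) are hypothetical.

#include <linux/netdevice.h>

/*
 * Illustrative only, not the posted RFC: cap a virtual device's GSO
 * limits to what the lower device advertises, so the stack stops
 * building skbs that the lower device would have to segment in
 * software anyway.
 */
static void clamp_gso_to_lower_dev(struct net_device *dev,
				   const struct net_device *lower)
{
	if (lower->gso_max_size < dev->gso_max_size)
		netif_set_gso_max_size(dev, lower->gso_max_size);
	if (lower->gso_max_segs < dev->gso_max_segs)
		dev->gso_max_segs = lower->gso_max_segs;
}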

Re: [PATCH] net-sysfs: export gso_max_size attribute

2017-11-23 Thread Solio Sarabia
On Wed, Nov 22, 2017 at 04:30:41PM -0800, Solio Sarabia wrote:
> The netdevice gso_max_size is exposed to allow users fine-control on
> systems with multiple NICs with different GSO buffer sizes, and where
> the virtual devices like bridge and veth, need to be aware of the G
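
For context on what "exporting" means here, the sketch below shows the usual shape of such an export inside net/core/net-sysfs.c, using the file's existing NETDEVICE_SHOW_RO() helper, fmt_dec, and the net_class_attrs[] table, as best recalled. It is written read-only for brevity and is not the patch that was actually posted (a tunable knob would also need a store/validate path).

/*
 * Rough shape of a net-sysfs export (illustrative, not the posted patch).
 * Inside net/core/net-sysfs.c, integer netdev fields are typically exposed
 * with the NETDEVICE_SHOW_* helpers and listed in net_class_attrs[] so
 * they appear under /sys/class/net/<dev>/.
 */

/* generates gso_max_size_show(), printing dev->gso_max_size with "%d\n" */
NETDEVICE_SHOW_RO(gso_max_size, fmt_dec);

static struct attribute *net_class_attrs[] = {
	/* ... existing attributes ... */
	&dev_attr_gso_max_size.attr,
	NULL,
};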

[PATCH] net-sysfs: export gso_max_size attribute

2017-11-22 Thread Solio Sarabia
Sebastian <shiny.sebast...@intel.com>
Signed-off-by: Solio Sarabia <solio.sara...@intel.com>
---
In one test scenario with a Hyper-V host, an Ubuntu 16.04 VM, Docker inside the VM, and NTttcp sending 40 Gbps from one container, setting the right gso_max_size values for all network devices in the chain reduces CPU overhead about 3x (for the sender
