[Kernel-packages] [Bug 1297522] Re: Since Trusty /proc/diskstats shows weird values

2014-09-08 Thread Holger Mauermann
** Tags removed: verification-needed-trusty
** Tags added: verification-done-trusty

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1297522

Title:
  Since Trusty /proc/diskstats shows weird values

Status in “linux” package in Ubuntu:
  Fix Released
Status in “linux” source package in Trusty:
  Fix Committed

Bug description:
  SRU Justification:

  Impact: Tools that rely on diskstats may report incorrect data in
  certain conditions. In particular diskstats in a VM may report
  incorrect statistics.

  Fix: 0fec08b4ecfc36fd8a64432343b2964fb86d2675 ( in 3.14-rc1 )

  Testcase:
  - Install a VM with the affected kernel.
  - Run: cat /proc/diskstats | awk '$3 == "vda" { print $7/$4, $11/$8 }'
  - If the two values are much larger than with the v3.14-rc1 kernel in the
    same VM, the fix has failed. For example, a failing case shows
    132.44 5458.34, while a passing case shows 0.19334 5.90476.

  --

  After upgrading some virtual machines (KVM) to Trusty I noticed really
  high I/O wait times, e.g. Munin graphs now show up to 200 seconds(!)
  read I/O wait time. See attached image. Of course real latency isn't
  higher than before, it's only /proc/diskstats that shows totally wrong
  numbers...

  $ cat /proc/diskstats | awk '$3 == "vda" { print $7/$4, $11/$8 }'
  1375.44 13825.1

  From the documentation for /proc/diskstats field 4 is total number of
  reads completed, field 7 is the total time spent reading in
  milliseconds, and fields 8 and 11 are the same for writes. So above
  numbers are the average read and write latency in milliseconds.
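The field arithmetic can be written out explicitly. This is a minimal sketch for one diskstats line; the field numbering follows the kernel's Documentation/iostats.txt, and the sample line below is fabricated for illustration:

```python
def avg_latencies(diskstats_line):
    """Return (avg_read_ms, avg_write_ms) for one /proc/diskstats line.

    1-based fields (counting from the major number): field 4 = reads
    completed, field 7 = ms spent reading, field 8 = writes completed,
    field 11 = ms spent writing.
    """
    f = diskstats_line.split()
    reads, read_ms = int(f[3]), int(f[6])
    writes, write_ms = int(f[7]), int(f[10])
    return (read_ms / reads if reads else 0.0,
            write_ms / writes if writes else 0.0)

# Fabricated sample line for a vda device:
line = "253 0 vda 1000 0 81920 193 420 0 65536 2480 0 600 2673"
print(avg_latencies(line))  # healthy values are a few ms per request
```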

  The same weird numbers show up in iostat. Note the await column (average
  time in milliseconds for I/O requests):

  $ iostat -dx 1 60
  Linux 3.13.0-19-generic (munin)   03/25/14   _x86_64_   (2 CPU)

  Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz    await r_await  w_await  svctm  %util
  vda        2.30   16.75  72.45  24.52  572.79  778.37    27.87     1.57   620.00  450.20  1121.83   1.71  16.54

  Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz    await r_await  w_await  svctm  %util
  vda        0.00    0.00   0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00     0.00   0.00   0.00

  Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz    await r_await  w_await  svctm  %util
  vda        0.00    0.00   0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00     0.00   0.00   0.00

  Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s   wkB/s avgrq-sz avgqu-sz    await r_await  w_await  svctm  %util
  vda        0.00   52.00   0.00  25.00    0.00  308.00    24.64     0.30 27813.92    0.00 27813.92   0.48   1.20

  I upgraded the host system to Trusty too, however there
  /proc/diskstats output is normal as before.

  $ uname -r
  3.13.0-19-generic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1297522/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1297522] Re: Since Trusty /proc/diskstats shows weird values

2014-03-28 Thread Holger Mauermann
No issues with 3.14 kernel

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

** Tags added: kernel-fixed-upstream



[Kernel-packages] [Bug 1297522] [NEW] Since Trusty /proc/diskstats shows weird values

2014-03-25 Thread Holger Mauermann
Public bug reported:


** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: disk latency graph after upgrading to trusty
   
https://bugs.launchpad.net/bugs/1297522/+attachment/4043602/+files/vda-month.png



[Kernel-packages] [Bug 1297522] Re: Since Trusty /proc/diskstats shows weird values

2014-03-25 Thread Holger Mauermann
** Attachment added: apport.linux.WwkgUu.apport
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1297522/+attachment/4043636/+files/apport.linux.WwkgUu.apport

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed



[Kernel-packages] [Bug 956479] Re: analog audio output is changed to HDMI, after display standby with kernel 3.0.0-17.30

2013-09-13 Thread Holger Mauermann
No issue anymore. I can't say when it was fixed because I had disabled
HDMI in BIOS until now.

** Changed in: linux (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/956479

Title:
  analog audio output is changed to HDMI, after display standby with
  kernel 3.0.0-17.30

Status in “linux” package in Ubuntu:
  Fix Released

Bug description:
  I have a display connected to the HDMI port and speakers connected to the
  analog output. In gnome-control-center the sound device is set to Analog
  Stereo Duplex. However, after the display has been in standby mode, audio
  is always sent to the HDMI output, so I have to change it back to Analog
  Stereo Duplex in gnome-control-center. This started after installing
  linux-image-3.0.0-17.30 from oneiric-proposed, and I think it is related
  to the "drm/i915: HDMI hot remove notification to audio driver" patch.
  The old 3.0.0-16 kernel works fine.

  This is a regression introduced by commit
  2deed761188d7480eb5f7efbfe7aa77f09322ed8.

  WORKAROUND: disable HDMI audio in the BIOS.

  ProblemType: Bug
  DistroRelease: Ubuntu 11.10
  Package: linux-image-3.0.0-17-generic 3.0.0-17.30
  ProcVersionSignature: Ubuntu 3.0.0-17.30-generic 3.0.22
  Uname: Linux 3.0.0-17-generic x86_64
  AlsaVersion: Advanced Linux Sound Architecture Driver Version 1.0.24.
  ApportVersion: 1.23-0ubuntu4
  Architecture: amd64
  ArecordDevices:
    **** List of CAPTURE Hardware Devices ****
   card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
     Subdevices: 1/1
     Subdevice #0: subdevice #0
  AudioDevicesInUse:
   USER        PID ACCESS COMMAND
   /dev/snd/controlC0:  holger 3541 F pulseaudio
  CRDA: Error: [Errno 2] No such file or directory
  Card0.Amixer.info:
   Card hw:0 'PCH'/'HDA Intel PCH at 0xfe62 irq 51'
     Mixer name : 'Intel CougarPoint HDMI'
     Components : 'HDA:10ec0892,8086201b,00100302 HDA:80862805,80862805,0010'
     Controls  : 41
     Simple ctrls  : 21
  Date: Thu Mar 15 22:22:32 2012
  InstallationMedia: Ubuntu 11.10 Oneiric Ocelot - Release amd64 (20111012)
  IwConfig:
   lo        no wireless extensions.

   eth0      no wireless extensions.

   virbr0    no wireless extensions.
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.0.0-17-generic root=UUID=ecad89b7-f7cc-4ab3-8017-089af5334ee1 ro quiet splash vt.handoff=7
  RelatedPackageVersions:
   linux-restricted-modules-3.0.0-17-generic N/A
   linux-backports-modules-3.0.0-17-generic  N/A
   linux-firmware1.60.1
  RfKill:
   0: hci0: Bluetooth
    Soft blocked: no
    Hard blocked: no
  SourcePackage: linux
  StagingDrivers: mei
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 09/28/2011
  dmi.bios.vendor: Intel Corp.
  dmi.bios.version: DBZ6810H.86A.0032.2011.0928.1502
  dmi.board.asset.tag: To be filled by O.E.M.
  dmi.board.name: DZ68DB
  dmi.board.vendor: Intel Corporation
  dmi.board.version: AAG27985-104
  dmi.chassis.type: 3
  dmi.modalias: dmi:bvnIntelCorp.:bvrDBZ6810H.86A.0032.2011.0928.1502:bd09/28/2011:svn:pn:pvr:rvnIntelCorporation:rnDZ68DB:rvrAAG27985-104:cvn:ct3:cvr:

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/956479/+subscriptions
