You have been subscribed to a public bug:

-- Problem Description --

The Firestone system given to the DASD group failed an HTX overnight test with a
miscompare error. HTX mdt.hdbuster was running on the secondary drive and failed
about 12 hours into the test.

HTX miscompare analysis:
========================

Device under test: /dev/sdb
Stanza running: rule_3
Miscompare offset: 0x40
Transfer size: random
LBA number: 0x70fc
Miscompare length: all the blocks in the transfer size

*- STANZA 3: Creates a number of threads equal to twice the queue depth.  -*
*- Each thread does 20000 num_oper with an RC operation, with xfer size   -*
*- between 1 block and 256K.                                              -*

This miscompare shows that the read operation did not return the expected data
from the disk. The re-read buffer contains the same data as the first read.
Since the first read and the subsequent re-read return the same data, there
could be a failure in a write operation on the disk (from the previous rule
stanza, which initializes the disk with pattern 007). The same miscompare
behavior appears for all the blocks in the transfer size.

/dev/sdb          Jun  2 02:29:43 2015 err=000003b6 sev=2 hxestorage      <<=== device name (/dev/sdb)
rule_3_13  numopers=     20000  loop=       767  blk=0x70fc len=89088
 min_blkno=0 max_blkno=0x74706daf, RANDOM access
Seed Values= 37303, 290, 23235
Data Pattern Seed Values = 37303, 291, 23235
BWRC LBA fencepost Detail:
th_num                min_lba                  max_lba      status
     0                 0            1c9be3ff    R
     1          1d1c1b6c            3a3836d7    F
     2          3a3836d8            57545243    F
     3          57545244            74706daf    F
Miscompare at buffer offset 64 (0x40)                             <<=== miscompare offset (0x40)
(Flags: badsig=0; cksum=0x60000)  Maximum LBA = 0x74706daf
wbuf (baseaddr 0x3ffe1c0e6600) b0ffffffffffffffffffffffffffffffffffffff
rbuf (baseaddr 0x3ffe1c0fc400) 850100fc700200fd700300fe700400ff70050000
Write buffer saved in /tmp/htxsdb.wbuf1
Read buffer saved in /tmp/htxsdb.rbuf1
Re-read fails compare at offset 64; buffer saved in /tmp/htxsdb.rerd1
errno: 950(Unknown error 950)
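
For reference, the saved buffers above can be compared directly with standard
tools (a sketch; the file names are the ones reported in the log above):

# dump the first 128 bytes of the expected (write) and actual (read) data
xxd -l 128 /tmp/htxsdb.wbuf1
xxd -l 128 /tmp/htxsdb.rbuf1
# report the first byte at which the write buffer and the re-read buffer differ
cmp /tmp/htxsdb.wbuf1 /tmp/htxsdb.rerd1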

Asghar reproduced the HTX hang he has been seeing. Looking in the kernel logs,
I see messages indicating that user threads are blocked waiting for reads to
be serviced, so HTX is likely hitting the same thing. I've asked Asghar to try
the deadline I/O scheduler rather than CFQ to see if that makes any
difference. If it does not, the next thing to try is reducing the queue depth
of the device; right now it's 31, which I think is pretty high.
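
The blocked-thread reports can be confirmed from the kernel log (a sketch; the
exact wording comes from the kernel hung-task detector):

# list hung-task reports such as "INFO: task ... blocked for more than 120 seconds"
dmesg | grep "blocked for more than"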

Step 1:

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

If the issue still reproduces with deadline, go to step 2:

echo cfq > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sdb/queue/scheduler
echo 8 > /sys/block/sda/device/queue_depth
echo 8 > /sys/block/sdb/device/queue_depth
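
After either step, the settings can be verified before re-running the test
(a sketch; repeat for sda as needed):

# active scheduler is shown in [brackets]
cat /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/device/queue_depth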

Breno - it looks like the default I/O scheduler and default queue depth for
the SATA disks in Firestone are not optimal: when running a heavy I/O
workload, we see read starvation, which makes the system nearly unusable.

Once we changed the I/O scheduler from cfq to deadline, all the issues went
away and the system was able to run the same workload while remaining
responsive. I suggest we either encourage Canonical to change the default I/O
scheduler to deadline, or at the very least provide documentation encouraging
our customers to make this change themselves.
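
If customers make the change themselves, it can be made persistent across
reboots (a sketch, assuming the legacy block-layer schedulers in this kernel;
the udev rule file name is only an example):

# option 1: udev rule applied to all SATA/SCSI disks
# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"

# option 2: kernel boot parameter applied to all devices
# add elevator=deadline to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then run update-grub and reboot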

** Affects: linux (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: architecture-ppc64le bugnameltc-125862 severity-critical targetmilestone-inin1410
-- 
Firestone system I/O hang
https://bugs.launchpad.net/bugs/1469829