Pierre Labastie wrote:

>>>>> pierre@debian32-virt:~$ cat /proc/diskstats
>>>>>          2       0 fd0 0 0 0 0 0 0 0 0 0 0 0
>>>>>         11       0 sr0 19 0 152 136 0 0 0 0 0 136 136
>>>>>          8       0 sda 32783 8723 2567928 84792 336771 8561249 71767606 11478240 0 1477316 11607988
>>>>>          8       1 sda1 559 2108 19320 1148 4 0 20 0 0 956 1148
>>>>>          8       2 sda2 161 31 1536 172 0 0 0 0 0 172 172
>>>>> [...]
>>>>> -------------------------------
>>>>> And the test failed with:
>>>>> Running ./vmstat.test/vmstat.exp ...
>>>>> FAIL: vmstat partition (using sr0)
>>>>>
>>>>>                       === vmstat Summary ===
>>>>>
>>>>> # of expected passes            5
>>>>> # of unexpected failures        1
>>>>> /sources/procps-ng-3.3.7/vmstat version 3.3.7
>>>>> -------------------------------
>>>>> The problem is that '  11       0 sr0 19 0 152 136 0 0 0 0 0 136 136'
>>>>> matches '\\s+\\d+\\s+\\d+\\s+(\[a-z\]+\\d+)\\s+(\[0-9\]\[0-9\]+)'
>>>>> (in vmstat.exp).
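
A minimal sketch of why that line matches, with the Tcl/expect pattern translated into Python regex syntax (the translation is my assumption, not the actual test code): the pattern only requires "major minor name+digit" followed by a read count of two or more digits, so any device with a handful of reads qualifies.

```python
import re

# The vmstat.exp pattern with Tcl escaping removed, rewritten as a Python
# regex: major, minor, a device name ending in a digit, then a read count
# of at least two digits.
pattern = re.compile(r'\s+\d+\s+\d+\s+([a-z]+\d+)\s+([0-9][0-9]+)')

line = "   11       0 sr0 19 0 152 136 0 0 0 0 0 136 136"
m = pattern.search(line)
print(m.group(1), m.group(2))  # sr0 19
```

Because sr0 has 19 reads, it is the first line in this /proc/diskstats that satisfies the pattern, so the test picks it up as a "partition".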

>>>> I guess they were not expecting you to have done reads from the cdrom.

>>> I haven't. Of course, I could disable the CDROM on the virtual machine.
>>> But when it is present, there are always a few reads, even if I boot
>>> from disk. I guess the kernel makes a few reads at init time.

>> That seems specific to your virtual system (which one?).

> Qemu-kvm (1.1.2). Among the options I have:
> -drive file=/mnt/virtualfs/aqemu/debian32.qcow2,cache=writeback \
> -cdrom /mnt/virtualfs/debian-6.0.4-i386-businesscard.iso
>
> So the virtual CDROM is always in the virtual drive, which explains the
> few reads, although I do not mount it.

>>    My non-virtual
>> system has:
>>
>>      11       0 sr0 0 0 0 0 0 0 0 0 0 0 0
>>
>> But it is after sd{a,b,c}, so it is a race condition also.
>>
>> Perhaps the search should be for [s|h]d[a-z]\d\s+\d\d+

> Aren't there cases where the naming is different (for example SSD
> drives)? Just guessing here.

No, I have an SSD drive and it shows up as just sdc. It plugs into the
SATA system like a regular drive.
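
The stricter search suggested above can be sketched like this (again in Python syntax as an illustration, not the actual test code; note that inside a character class the alternation should be written [sh] rather than [s|h], since '|' would otherwise be matched literally):

```python
import re

# Require an sd*/hd* partition name (device letter plus partition number)
# followed by a read count of two or more digits, so whole disks (sda),
# CD-ROMs (sr0), and loop devices never match.
part = re.compile(r'\b[sh]d[a-z]\d+\s+\d\d+')

lines = [
    "   11       0 sr0 19 0 152 136 0 0 0 0 0 136 136",
    "    7       0 loop0 0 0 0 0 0 0 0 0 0 0 0",
    "    8       1 sda1 559 2108 19320 1148 4 0 20 0 0 956 1148",
]
matches = [l for l in lines if part.search(l)]
print(matches)  # only the sda1 line
```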

I suppose it could be a problem with kvm or vmstat.

Checking with qemu-1.4:

ARGS="-enable-kvm -hda lfs73.img"
MEM="-m 2G"
CDROM="-cdrom lfslivecd-x86_64-6.3-r2160-updated-nosrc.iso"
NIC="-net nic -net tap"
sudo qemu $ARGS $CDROM $NIC $MEM $DRIVE2 $REMOTE

I get:

$ cat /proc/diskstats
    7       0 loop0 0 0 0 0 0 0 0 0 0 0 0
    7       1 loop1 0 0 0 0 0 0 0 0 0 0 0
    8       0 sda 1307 2686 42824 8102 141 176 2626 5559 0 5786 13658
    8       1 sda1 21 0 168 26 0 0 0 0 0 26 26
    8       2 sda2 246 50 2152 325 1 0 2 0 0 323 325
    8       3 sda3 621 766 36918 6794 137 176 2624 5426 0 5294 12217
    8       4 sda4 165 1386 1558 331 0 0 0 0 0 320 331
    8       5 sda5 167 484 1332 357 0 0 0 0 0 357 357
   11       0 sr0 33 0 264 123 0 0 0 0 0 123 123

Note that here sr0 is last.  In any case, sr0, loop?, and sda are not 
partitions.  It still looks to me like either a race condition (what 
determines the order of entries in /proc/diskstats?) or a logic error in 
the test.
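
To make the order-independence point concrete, here is a hypothetical sketch (not the actual test code) that selects entries by partition name rather than by first match, so the result is the same no matter where the kernel lists sr0:

```python
import re

# Match only names like sda1, hdb2 -- a device letter plus a partition
# number.  Whole disks (sda), CD-ROMs (sr0) and loop devices don't match.
PARTITION = re.compile(r'^[sh]d[a-z]\d+$')

def partitions(diskstats_text):
    """Return the partition names found in /proc/diskstats-style text,
    regardless of the order of entries."""
    names = []
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and PARTITION.match(fields[2]):
            names.append(fields[2])
    return names

sample = """\
    7       0 loop0 0 0 0 0 0 0 0 0 0 0 0
    8       0 sda 1307 2686 42824 8102 141 176 2626 5559 0 5786 13658
    8       1 sda1 21 0 168 26 0 0 0 0 0 26 26
   11       0 sr0 33 0 264 123 0 0 0 0 0 123 123
"""
print(partitions(sample))  # ['sda1']
```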

   -- Bruce
-- 
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
