Let me amend my previous comment: The blocking shows up during more
stressful tests, even with the virtual kernel.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1023755
Title:
Precise kernel locks up while dd to /dev/mapper files > 1Gb (was:
Unable to delete volume)
We had the same issue with devstack running on XenServer domU. Now, we
changed to the -virtual kernel, and these problems have gone. See:
https://review.openstack.org/#/c/24297/
Thank you for your explanation Stefan!
Thiago, this plays a bit into the issue, but mainly it is a result of changes
in the writeback code, which now does some complex throttling based on memory
limits and estimated drive speeds. That causes problems with the setup used
here, because there are now two backing devices that are not independent
Just out of curiosity... isn't this related to the following bugs?
Ubuntu server uses CFQ scheduler instead of deadline:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1008400
Kernel I/O scheduling writes starving reads, local DoS:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1064521
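Whether CFQ versus deadline is a factor here is easy to check. As a quick, read-only sketch (device names vary per machine; "sda" below is just a placeholder), the active scheduler for each block device can be listed from sysfs:

```shell
# Print the active I/O scheduler (shown in [brackets]) for each block device.
# Read-only; no root needed.
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] || continue
    printf '%s: %s\n' "$q" "$(cat "$q")"
done
# Switching a device (hypothetical name sda) to deadline requires root, e.g.:
#   echo deadline | sudo tee /sys/block/sda/queue/scheduler
```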
This bug was marked as "Fix Committed" against Cinder, which is INCORRECT.
The fact is the root cause of the issue is an Ubuntu bug and NOT a
Cinder bug.
I'm changing it back to Invalid. If we need a new bug for a workaround,
so be it, but this particular bug should be left clean for the Ubuntu issue.
IM
Why oh why reopen Pandora's box? A comment would have been nice :)
** Changed in: cinder
Status: Fix Committed => In Progress
Reviewed: https://review.openstack.org/16798
Committed:
http://github.com/openstack/cinder/commit/1405a6440d646524d41adfed4fc1344948e2871f
Submitter: Jenkins
Branch: master
commit 1405a6440d646524d41adfed4fc1344948e2871f
Author: Pádraig Brady
Date: Fri Nov 23 11:24:44 2012 +
use O
** Changed in: cinder
Status: In Progress => Invalid
** Changed in: cinder
Assignee: Pádraig Brady (p-draigbrady) => (unassigned)
Fix proposed to branch: master
Review: https://review.openstack.org/16798
** Changed in: cinder
Status: Won't Fix => In Progress
** Changed in: cinder
Assignee: John Griffith (john-griffith) => Pádraig Brady (p-draigbrady)
A bit of a status update: unfortunately I had to realize that single
test runs are by far not sufficient here. With several runs of the vol-
test the problem will trigger even with 3.6 kernels. A 3.7 kernel could
not be tested because the Precise userspace had issues with the iscsitarget
setup. Anyw
I am now doing the tests on real hardware, too. And I see some issues
with the Quantal kernel (at least) if it is set to cfq. So I think
there are at least two issues: one which I am currently trying to bisect
between 3.3-rc7 and 3.3, which would make things work at least slow(er),
and the other
same crummy performance on quantal, so the bad performance may be
unrelated:
vishvananda@nebstack006:~$ sudo dd if=/dev/zero of=/dev/mapper/clean1 bs=1M
count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.2082 s, 70.6 MB/s
vishvananda@nebstack006:~$ time sudo dd i
I tried this on real hardware and it appears to work although it is
abysmally slow:
vishvananda@nebstack006:~$ echo "0 2097152 zero" | sudo dmsetup create zero1
vishvananda@nebstack006:~$ echo "0 2097152 snapshot /dev/mapper/zero1 /dev/sde n 128" | sudo dmsetup create clean1
vishvananda@nebstack0
From what I got so far from the dump I took with the devstack setup on a
loop block device, it seems there are many outstanding I/O requests for
the snapshot-cow device (expected, as dd is busy), but digging down to
loop0 (which is backing the VG), its worker thread got into a situation
where it trie
FYI, running the same command on quantal (with kernel 3.5.0-18-generic)
works fine.
FYI i could reproduce a hang in dd doing the following:
truncate -s 1G foo
sudo losetup -f --show foo
echo "0 2097152 zero" | sudo dmsetup create zero1
echo "0 2097152 snapshot /dev/mapper/zero1 /dev/loop0 n 128" | sudo dmsetup create clean1
sudo dd if=/dev/zero of=/dev/mapper/clean1 bs=1M count=
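For reference, the sizes in the dm tables above line up with the 1G backing file: dmsetup tables count 512-byte sectors, so 2097152 sectors is exactly 1 GiB. A quick sanity check:

```shell
# dm tables are in 512-byte sectors: 2097152 sectors == 1 GiB,
# matching the file created with "truncate -s 1G foo".
sectors=2097152
bytes=$((sectors * 512))
echo "$bytes bytes"   # 1073741824
```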
As noted in lib/cinder (or the devstack docs at
http://devstack.org/lib/cinder.html), if you create a volume group called
stack-volumes it will use that instead of creating a loopback device.
You can even use a different volume group by putting VOLUME_GROUP=xxx in
your localrc.
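For example, a minimal localrc fragment might look like the following (the VG name "cinder-volumes" is just a placeholder; use whatever volume group exists on your box):

```shell
# localrc (devstack) -- point volume storage at an existing volume group
# instead of the loopback-file-backed "stack-volumes" default.
VOLUME_GROUP=cinder-volumes
```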
I'll see if I can find some time to spin this up on a physical LVM and
let you know how it goes.
I will be back working on this (though realistically not before next
week). From the various dumps I have looked at so far, it looks like the
problem is somewhere in the way the storage stack is set up, plus
possibly a slightly higher memory footprint.
Basically CPUs are in sc
@Stefan: this is actually making "OpenStack on LTS" look quite bad. I was
wondering if there is a way to increase the priority of this bug?
Now that is interesting: after getting the system into a similar hang on
the second execution of the vol-test script, I took a dump and was
looking at it. The system would be suspended for the duration of the
dump process, that is maybe 1 minute. After a while I suddenly noticed
disk activity on th
Ah, it seems that starting the script a second time, after a manual
cleanup that seemed successful, will cause a hang. Apparently it hangs on
deleting the snapshot, but there is no activity on the disk.
With emulated devices under Xen, I still get the same behaviour as before:
not locking up, but taking long and leaving the COW volume behind. To
put some numbers on "long": from issuing the snapshot delete until the
hdd stops writing heavily, it takes about 6 minutes; for the volume delete
it is about 1
Some thoughts on the previous data:
1. With devstack running, it is likely that some portions of the 1G of memory
are used by executables.
2. Having the VG on a loop-mounted file can add some overhead on the I/O, as
everything has to go through an additional layer and it is not guaranteed that cons
I have been experimenting with the devstack setup on a Xen guest (if
possible I want to use Xen because I can take dumps there). The guest
setup is 2 VCPUs, 1G of memory, and 8G and 5G virtual disks (the second is
mounted to /opt/stack to be used by the volume group of the test). This
is a remainder of t
Hi Stefan,
The best way to set this up and reproduce is using VirtualBox with a clean
12.04 server image installed.
Personally at this point I typically do a snapshot or clone so I can easily
start over again.
After that you can follow the steps outlined on the devstack page to get
things up and
John, would you be able to guide me through the steps to set devstack
up to the point where I can use it to reproduce the problem?
Hey Stefan,
Actually the volume is NOT deleted/zeroed if the snapshot exists. Snapshots
must be deleted first.
My setup is devstack in a VirtualBox VM with 1G of memory allocated to it.
Really this seems to be the most effective way to reproduce. I
have also been successful using dev
Oh, is there a certain amount of memory related to the failures?
I tried over the weekend with a Xen guest and 1, 2, and 4 CPUs. No luck
in making it fail.
Just a general question for understanding: it is the original volume
that gets zeroed. And out of curiosity: am I understanding this
correctly, that the snapshot is kept as is and the origin is handed out
to be use
John, I could *not* reproduce it. For me the dd would (expectedly) abort
with an error.
Hi Stefan, can you confirm whether you did or did not successfully
reproduce this issue?
I was unable to reproduce this with the steps shown in comment #32 (with
two changes as below). The used kernel was 3.2.0-27.43-generic running
as a VM. I tried both the VG on a partition and the VG on a loop-mounted
backing file.
To make the testcase work, the snapshot lvcreate needed "--snapshot
Just while thinking about it (I have not yet run a reproduction), I can guess
roughly where the problem might be. Both the original volume and the snapshot
are set to 2G (at least in the example). The way a snapshot volume works is
that it contains a mapping table. Sectors that have not changed
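To put rough numbers on that guess: assuming a 64 KiB chunk size (128 sectors, as in the dmsetup example earlier in the thread), rewriting the whole 2G origin forces every chunk to be copied out, so the COW data alone already equals the snapshot's size before any exception-table metadata is counted:

```shell
# Hypothetical worked numbers: 2 GiB origin, 64 KiB snapshot chunks.
origin_kib=$((2 * 1024 * 1024))      # 2 GiB expressed in KiB
chunk_kib=64
chunks=$((origin_kib / chunk_kib))   # chunks copied out if the origin is fully rewritten
echo "$chunks chunks -> $((chunks * chunk_kib / 1024)) MiB of COW data, before metadata"
```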
Pseudo script to reproduce outside of openstack/devstack:
*This omits the iscsi target creation/deletion steps.*
DATA_DIR=~/DATA
VOLUME_BACKING_FILE_SIZE=${VOLUME_BACKING_FILE_SIZE:-5130M}
VOLUME_GROUP=${VOLUME_GROUP:-stack-volumes}
VOLUME_BACKING_FILE=${VOLUME_BACKING_FILE:-$DATA_DIR/${VOLUME_G
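The snippet above is cut off. Independent of it, an LVM-only reproduction along the same lines might look like the following sketch. The names vg_test/lv_test and all sizes are assumptions, not the exact devstack values; it needs root and the lvm2 tools, and on an affected Precise kernel the dd step is where the hang would show up:

```shell
# Hypothetical minimal reproduction outside devstack: loopback-backed VG,
# 2G volume plus 2G snapshot, then zero the volume with dd.
if [ "$(id -u)" -eq 0 ] && command -v lvcreate >/dev/null 2>&1; then
    truncate -s 5G /tmp/backing.img
    loopdev=$(losetup -f --show /tmp/backing.img)
    vgcreate vg_test "$loopdev"
    lvcreate -L 2G -n lv_test vg_test
    lvcreate -L 2G -s -n lv_snap vg_test/lv_test
    # On affected kernels this dd is where the lockup occurs:
    dd if=/dev/zero of=/dev/vg_test/lv_test bs=1M count=2048
    lvremove -f vg_test/lv_snap vg_test/lv_test   # snapshot first, then origin
    vgremove -f vg_test
    losetup -d "$loopdev"
    rm -f /tmp/backing.img
    status=ran
else
    status=skipped
    echo "skipping: needs root and the lvm2 tools"
fi
```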
** Changed in: cinder
Milestone: folsom-rc1 => None
** Changed in: nova
Milestone: folsom-rc1 => None
This should be fixed at the kernel level.
** Summary changed:
- Unable to delete the volume snapshot
+ Precise kernel locks up while dd to /dev/mapper files > 1Gb (was: Unable to
delete volume)
** Changed in: nova
Status: In Progress => Won't Fix
** Changed in: cinder
Status: In Progress =