Martin,
After a lot more testing, both synthetic and normal day-to-day use of
my tools, I haven't been able to reproduce the disconnect problem, so
I'm writing it off as a fluke or some silly error on my part.
As far as I can tell, the original qemu-nbd mounting bug has been
solidly fixed.
Hmm, maybe something else was going on. In an isolated test script, I
haven't reproduced the disconnect problem again yet.
I attached the script I'm using in case anyone else wants to give it
a go.
** Attachment added: "qemu-nbd-test.py"
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/143
Hmm, and one more thing: qemu-nbd --disconnect (at least sometimes)
doesn't seem to be working when booting with systemd:
$ ls /dev/nbd0*
/dev/nbd0 /dev/nbd0p1 /dev/nbd0p2 /dev/nbd0p5
$ sudo qemu-nbd --disconnect /dev/nbd0
/dev/nbd0 disconnected
$ echo $?
0
$ ls /dev/nbd0*
/dev/nbd0 /dev/nbd0p
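If the partition nodes are being removed asynchronously (by udev, presumably), a script can at least guard against reusing the device too early. A minimal sketch; the helper name `wait_nbd_clear` and the timeout are mine, not anything from qemu:

```shell
# Sketch: after `qemu-nbd --disconnect` returns, /dev/nbd0p* may linger
# briefly. Poll until the partition nodes are gone before reconnecting.
# wait_nbd_clear is a hypothetical helper name.
wait_nbd_clear() {
    dev="$1"                                  # e.g. /dev/nbd0
    for _ in $(seq 1 10); do
        # succeed as soon as no partition nodes remain
        ls "${dev}"p* >/dev/null 2>&1 || return 0
        sleep 0.1
    done
    echo "partition nodes for ${dev} still present" >&2
    return 1
}
```

This doesn't fix the lingering nodes, but it makes the race visible instead of silently corrupting the next run.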
Hmm, there may still be an issue, as I didn't encounter this yesterday
when doing my task multiple times after booting with Upstart.
I'm mounting these qcow2 disk images in order to export a tarball of the
filesystem. The first three tarballs exported swimmingly, but the fourth
time it seemed to hang.
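For context, the mount-and-tar workflow described above boils down to something like this (a sketch; the function name and paths are illustrative, not the actual tooling):

```shell
# Rough shape of the export workflow. --snapshot keeps the qcow2 itself
# read-only, so the tarball is taken from a throwaway overlay.
export_rootfs_tar() {
    img="$1"; out="$2"
    sudo qemu-nbd --snapshot -c /dev/nbd0 "$img"
    sudo mount /dev/nbd0p1 /mnt
    sudo tar --xattrs -cpf "$out" -C /mnt .
    sudo umount /mnt
    sudo qemu-nbd --disconnect /dev/nbd0
}
```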
@didrocks - yup, it's working now! Thank you!
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1435428
Title:
vivid: systemd breaks qemu-nbd mounting
** Summary changed:
- vivid: mounting with qemu-nbd fails
+ vivid: systemd breaks qemu-nbd mounting
Public bug reported:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
But on Vivid, even though the mount command exits with 0, something
goes awry and the mount point gets unmounted immediately.
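One way to make the failure visible in a script is to check, after a short delay, whether the mount point survived; a sketch using util-linux's `mountpoint` (the helper name is mine):

```shell
# After mounting, wait a moment and report whether the mount point is
# still active. On Vivid something (presumably systemd) appears to
# unmount it almost immediately.
check_stays_mounted() {
    mp="$1"
    sleep 1                 # give the spurious unmount a chance to happen
    if mountpoint -q "$mp"; then
        echo "still mounted"
    else
        echo "was unmounted"
    fi
}
```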
Same problem when running `reboot`, which I'd say is even more important
for automation. Port 2204 is forwarding to a qemu VM running Utopic,
port 2207 is running Vivid:
jderose@jgd-kudp1:~$ ssh root@localhost -p 2204 reboot
jderose@jgd-kudp1:~$ echo $?
0
jderose@jgd-kudp1:~$ ssh root@localhost -p
Okay, here's a simple way to reproduce:
$ ssh root@whatever shutdown -h now
$ echo $?
On Vivid, the exit status from the ssh client will be 255. On Trusty
and Utopic it will be 0.
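Until this is sorted out, automation can at least tolerate the dropped connection. A sketch of a workaround, not a fix (the wrapper name is mine, and treating 255 as success is deliberately lax):

```shell
# Run the remote shutdown and accept exit 255 (connection closed by the
# remote host, the behaviour seen on Vivid) as well as a clean 0.
ssh_shutdown() {
    ssh "$@" shutdown -h now
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 255 ]
}
```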
Also, on Vivid there will be this error: "Connection to localhost closed
by remote host."
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openssh in Ubuntu.
https://bugs.launchpad.net/bugs/1429938
Title:
stopping ssh.service closes e
Hmm, now I'm thinking this has nothing to do with openssh-server.
I think the problem is actually that when I run this over SSH:
# shutdown -h now
My ssh client exits with status 255... whereas running the same thing
prior to the flip-over to systemd would exit with status 0.
So interestingly, this isn't happening when I just type these commands
into an SSH session. But if you create a script like this in say
/tmp/test.sh:
#!/bin/bash
apt-get -y purge openssh-server ssh-import-id
apt-get -y autoremove
shutdown -h now
And then execute this through an ssh call like this
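One workaround that might help here, though I haven't confirmed it: detach the shutdown from the session so the script (and therefore the ssh connection) can exit first. A sketch of what that variant of the script could look like:

```shell
# Write a variant of /tmp/test.sh that detaches the shutdown, so sshd
# can close the connection cleanly before the halt begins.
cat > /tmp/test.sh <<'EOF'
#!/bin/bash
apt-get -y purge openssh-server ssh-import-id
apt-get -y autoremove
# detach: shutdown runs after this script (and the ssh session) exit
setsid bash -c 'sleep 2; shutdown -h now' </dev/null >/dev/null 2>&1 &
EOF
chmod +x /tmp/test.sh
```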
Also, just to clarify, this is definitely a change (or in my mind
regression) introduced by systemd. Yesterday, the System76 image master
tool worked fine and dandy with an up-to-date Vivid VM, as it has
throughout the rest of the previous Vivid dev cycle.
Today things broke.
Being able to run a script like this over SSH:
apt-get -y remove openssh-server
shutdown -h now
can be extremely useful in automation tooling, but the switch to systemd
breaks this:
https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1429938
Public bug reported:
On Trusty and Utopic, when you run `apt-get remove openssh-server` over
an SSH connection, your existing SSH connection remains open, so it's
possible to run additional commands afterward.
However, on Vivid now that the switch to systemd has been made, `apt-
get remove opens
Clint,
Ah, thanks for bringing up --xattrs-include=*, I hadn't noticed this
option!
I agree this is really a bug/misfeature in tar... if I use --xattrs both
when creating and unpacking a tarball, I expect it to just work.
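So for anyone else hitting this, the pairing that should round-trip xattrs is to pass the include filter Clint pointed out on both sides (quote the glob so the shell doesn't expand it; paths here are illustrative):

```shell
# Round-trip a directory through a tarball while asking tar to preserve
# extended attributes on both create and extract.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file"
tar --xattrs --xattrs-include='*' -cpf /tmp/demo.tar -C "$src" .
tar --xattrs --xattrs-include='*' -xpf /tmp/demo.tar -C "$dst"
cat "$dst/file"    # → hello
```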
Stéphane,
Gotcha, thanks for the feedback! So am I correct in thinking that the
--xattrs option is currently broken in tar on 14.04? If so, is there any
chance this could be fixed in an SRU?
This also affects the `gnome-keyring` package. The System76 imaging
system (Tribble) uses a tar-based approach similar to the MAAS fast-path
installer, and we've had to add a work-around for /usr/bin/gnome-
keyring-daemon on our desktop images:
$ getcap /usr/bin/gnome-keyring-daemon
/usr/bin/gnom
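For reference, our work-around amounts to re-applying the file capability after unpacking. A sketch; cap_ipc_lock+ep is my assumption of what the package sets (the daemon uses it to mlock secrets into non-swappable memory), so verify with getcap on a pristine install first:

```shell
# Re-apply the file capability lost when the tarball round-trip dropped
# the security.capability xattr. cap_ipc_lock+ep is an assumption;
# confirm with `getcap /usr/bin/gnome-keyring-daemon` before relying on it.
restore_keyring_caps() {
    sudo setcap cap_ipc_lock+ep /usr/bin/gnome-keyring-daemon
}
```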