One more interesting tidbit: it seems that when booting with systemd,
encrypted swap is never enabled; instead, normal swap is enabled directly
on the underlying physical swap partition.
After booting with systemd:
$ sudo swapon --summary
Filename				Type		Size
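To double-check whether the active swap really is the dm-crypt mapping rather than the raw partition, the `swapon --summary` output can be inspected programmatically. This is a minimal sketch; the function name and the `/dev/mapper` heuristic are my own, not from the bug report:

```shell
# swap_is_encrypted: read `swapon --summary` output on stdin and succeed
# only if every active swap device is a device-mapper node (such as the
# /dev/mapper/cryptswap1 mapping); fail if any raw partition is in use.
swap_is_encrypted() {
    awk 'NR > 1 && $1 !~ /^\/dev\/mapper\// { bad = 1 } END { exit bad }'
}
```

Usage, assuming this bug's symptom: `sudo swapon --summary | swap_is_encrypted || echo "swap is NOT on a dm-crypt mapping"`.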
Yesterday I didn't have access to a UEFI system or a 2TiB drive to test
with... but I just did a normal install on a UEFI system, and I'm
experiencing this same bug.
As before, booting with init=/sbin/upstart fixes it.
I just wanted to rule out this being a problem introduced by the
System76
Also attached a tarball with everything from generator/
** Attachment added: generator.tgz
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1447282/+attachment/4382430/+files/generator.tgz
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
I attached a tarball with everything from generator.late/, just in case
any other files are useful.
** Attachment added: generator.late.tgz
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1447282/+attachment/4382428/+files/generator.late.tgz
Public bug reported:
I'm still sorting out the details and eliminating variables, but as far
as I can tell:
Steps to reproduce
===
1) Install Ubuntu using GPT partitioning for the OS drive[*]
2) Choose require my password to login, and check encrypt my home
directory
Expected
As far as I can tell, lp:1447282 is a different bug, but it would be helpful to
have input from the other ecryptfs users who are following this bug:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1447282
Info from fstab, crypttab, and journalctl:
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed.
** Summary changed:
- Prompted for cryptoswap passphrase when using GPT partitioning + encrypted
home directory (ecrptfs)
+ Prompted for cryptoswap passphrase when using GPT partitioning + encrypted
home directory (ecryptfs)
Oops, when I copy+pasted my fstab earlier, I accidentally left out the
final line, but the cryptswap1 line is actually there.
This is from a different install, so the UUIDs are different. Also, I
forgot that Martin Pitt asked me to include the output from blkid:
$ cat /etc/fstab
# /etc/fstab:
Just tested the truly blank drive scenario on the 20150417.1 daily:
http://cdimage.ubuntu.com/ubuntu/daily-live/20150417.1/
(sha1sum 0f1bdbc623df6816f1d058277811d8191408aeb9)
And it worked okay (although from others it sounds like things
definitely aren't fixed for all scenarios).
BTW, the
Martin,
Thanks for looking into this! So from what you're saying, this could
potentially happen on a bare-metal install also if, say, the
installation media was fast enough?
When I'm doing qemu installs, often most or all of the ISO is already
cached in RAM, so it makes sense that such a race
Hmm, and to absolutely avoid confusion:
$ sha1sum vivid-desktop-amd64.iso
e54c7cb9cc54613f44af84c7dce796a729b74c94 vivid-desktop-amd64.iso
:D
Oh, and to potentially avoid confusion, when I said I tested the
16-Apr-2015 daily ISO, I'm going by the dates that Apache is showing
here:
http://cdimage.ubuntu.com/daily-live/current/
(Which for me at the moment is still 16-Apr-2015.)
@Paulo: no, I didn't think that ISO was the RC, I was just checking in
on the latest daily.
In case booting with Upstart was the variable, I just tested with
systemd (with the same daily ISO), and this EFI partitioning bug (in
terms of my original scenario) seems to be fixed. Although in the
Public bug reported:
I'm running Vivid for the host, installing a Vivid qemu guest from the
latest daily desktop ISO (sha1sum
e8e4f19d4017bec2785aab62894355afb67bfce1). 64bit host and guest.
The install seemingly completes correctly, but at the end when you get
to the Installation Complete
I just did a UEFI-mode install in a kvm VM using the latest Vivid
desktop ISO (16-Apr-2015), and it worked!
I booted the ISO with init=/sbin/upstart to work around the installer
not being able to reboot after the install has finished, but otherwise
things worked perfectly!
Yup, I can confirm the above when it comes to the daily Vivid server
ISOs... I regularly test them for UEFI installs, and at no time since I
filed this bug have I encountered any problems when doing good ol' text-
based d-i server installs.
Martin,
After a lot more testing, both synthetic and normally using my day-to-
day tools, I haven't been able to reproduce the disconnect problem, so
I'm writing that off as a fluke or as some silly error on my part.
As far as I can tell, the original qemu-nbd mounting bug has been
solidly
@g-philip - in my testing, i've never chosen the 'download updates while
installing' option, yet uefi mode installs haven't worked for me since i
originally filed this bug.
can you provide more details?
fyi, my testing has been done using a qemu/kvm virtual machine, but
others have confirmed this
@didrocks - yup, it's working now! Thank you!
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1435428
Title:
vivid: systemd breaks qemu-nbd mounting
Hmm, and one more thing: qemu-nbd --disconnect (at least sometimes)
doesn't seem to be working when booting with systemd:
$ ls /dev/nbd0*
/dev/nbd0 /dev/nbd0p1 /dev/nbd0p2 /dev/nbd0p5
$ sudo qemu-nbd --disconnect /dev/nbd0
/dev/nbd0 disconnected
$ echo $?
0
$ ls /dev/nbd0*
/dev/nbd0
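Since the partition nodes sometimes linger after `--disconnect`, automation can poll until they are gone instead of assuming the exit status tells the whole story. A sketch under my own naming (`wait_gone` is a hypothetical helper, not part of qemu-nbd):

```shell
# wait_gone GLOB [TIMEOUT]: poll (once per 0.1 s) until no paths matching
# GLOB remain, or give up after TIMEOUT seconds (default 5). Returns 0 if
# the paths vanished, 1 on timeout.
wait_gone() {
    glob=$1; timeout=${2:-5}
    tries=$((timeout * 10))
    while [ "$tries" -gt 0 ]; do
        # the glob is expanded unquoted on purpose: if nothing matches,
        # `ls` fails on the literal pattern and we are done
        if ! ls $glob >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
        tries=$((tries - 1))
    done
    return 1
}
```

Usage in the scenario above might be: `sudo qemu-nbd --disconnect /dev/nbd0; wait_gone '/dev/nbd0p*' || echo "partition nodes still present"`.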
Hmmm, there may still be an issue, as I didn't encounter this yesterday
when doing my task multiple times after booting with Upstart.
I'm mounting these qcow2 disk images in order to export a tarball of the
filesystem. First three tarballs exported swimmingly, but the fourth
time it seemed to
Hmm, maybe something else was going on. In an isolated test script, I
haven't reproduced the disconnect problem again yet.
I attached the script I'm using in case anyone else wants to give it a
go.
** Attachment added: qemu-nbd-test.py
Except that previously this wasn't racy behavior in practice.
I have automation tooling that has executed tens of thousands of reboot
and shutdown commands over SSH in this way with perfect consistency over
the last two years. The moment the switchover to systemd happened in
Vivid, this tooling
** Summary changed:
- vivid: mounting with qemu-nbd fails
+ vivid: systemd breaks qemu-nbd mounting
** Description changed:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
Public bug reported:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
But on Vivid, even though the mount command exits with 0, something
goes awry and the mount point gets unmounted
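Because mount exits 0 even though the mount point is gone moments later, a script has to verify the mount after the fact. A minimal sketch (`mounted_at` is my own hypothetical helper name; it reads `/proc/mounts`-formatted lines on stdin so the check itself is testable):

```shell
# mounted_at MOUNTPOINT: read /proc/mounts-formatted lines on stdin and
# succeed only if some filesystem is mounted at MOUNTPOINT (field 2).
mounted_at() {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }'
}
```

Usage, assuming the symptom described above: `sudo mount /dev/nbd0p1 /mnt; sleep 1; mounted_at /mnt < /proc/mounts || echo "/mnt was silently unmounted"`.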
** Description changed:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
- But on Vivid, even though the mount command exists with 0, something
- goes awry and the mount point
I reworked the description as my original assessment was quite off.
But after more thought, I think this behavior change is something that
really needs mentioning in the Vivid release notes.
After all, the perceived correct behavior of a system strongly tends
toward what the actual behavior has
** Description changed:
- On Trusty and Utopic, when you run `apt-get remove openssh-server` over
- an SSH connection, your existing SSH connection remains open, so it's
- possible to run additional commands afterward.
+ If you send a shutdown or reboot command over SSH to a Trusty or Utopic
+
Martin,
Okay, much thanks!
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1429938
Title:
reboot does not return under systemd
Hmm, now I'm thinking this has nothing to do with openssh-server.
I think the problem is actually that when I run this over SSH:
# shutdown -h now
My ssh client exits with status 255... whereas running the same thing
prior to the flip-over to systemd would exit with status 0.
So interestingly, this isn't happening when I just type these commands
into an SSH session. But if you create a script like this in, say,
/tmp/test.sh:
#!/bin/bash
apt-get -y purge openssh-server ssh-import-id
apt-get -y autoremove
shutdown -h now
And then execute this through an ssh call like
Okay, here's a simple way to reproduce:
$ ssh root@whatever shutdown -h now
$ echo $?
On Vivid, the exit status from the ssh client will be 255. On Trusty
and Utopic it will be 0.
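For automation that has to survive this behavior change, one workaround is to treat status 255 as expected when the remote command is a shutdown or reboot. A hedged sketch (the helper name is mine, not from the bug report):

```shell
# shutdown_status_ok STATUS: succeed if STATUS is a value that sending
# shutdown/reboot over ssh can legitimately return: 0 (the pre-systemd
# behavior) or 255 (systemd tears the connection down first, so the ssh
# client reports "connection closed by remote host").
shutdown_status_ok() {
    case "$1" in
        0|255) return 0 ;;
        *)     return 1 ;;
    esac
}
```

Usage: `ssh root@whatever shutdown -h now; shutdown_status_ok $? || echo "shutdown really failed"`.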
Also, on Vivid there will be this error: Connection to localhost closed
by remote host.
Same problem when running `reboot`, which I'd say is even more important
for automation. Port 2204 is forwarding to a qemu VM running Utopic,
port 2207 is running Vivid:
jderose@jgd-kudp1:~$ ssh root@localhost -p 2204 reboot
jderose@jgd-kudp1:~$ echo $?
0
jderose@jgd-kudp1:~$ ssh root@localhost
Public bug reported:
On Trusty and Utopic, when you run `apt-get remove openssh-server` over
an SSH connection, your existing SSH connection remains open, so it's
possible to run additional commands afterward.
However, on Vivid now that the switch to systemd has been made, `apt-
get remove
Public bug reported:
On Trusty and Utopic, when you run `apt-get remove openssh-server` over
an SSH connection, your existing SSH connection remains open, so it's
possible to run additional commands afterward.
However, on Vivid now that the switch to systemd has been made, `apt-
get remove
Being able to run a script like this over SSH:
apt-get -y remove openssh-server
shutdown -h now
can be extremely useful in automation tooling, but the switch to systemd
breaks this:
https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1429938
Also, just to clarify, this is definitely a change (or in my mind
regression) introduced by systemd. Yesterday, the System76 image master
tool worked fine and dandy with an up-to-date Vivid VM, as it has
throughout the rest of the previous Vivid dev cycle.
Today things broke.
Public bug reported:
I got this error after I upgraded Ubuntu MATE 15.04 today.
ProblemType: Package
DistroRelease: Ubuntu 15.04
Package: nvidia-340-uvm 340.76-0ubuntu1
ProcVersionSignature: Ubuntu 3.18.0-13.14-generic 3.18.5
Uname: Linux 3.18.0-13-generic x86_64
NonfreeKernelModules: nvidia
I hadn't checked in on this for a bit... but for what it's worth, this
bug is still present in the latest daily ISO (02-Mar-2015).
I have exactly the same problem. Actions taken so far (to no avail):
- Switched from apache default to mpm_prefork
- Removed php-apc
Now checking if removing php5-apcu does indeed work. It crashes rather
often so I should know within a day or so...
This bug is still present in the 18-Feb-2015 Vivid desktop amd64 daily
ISO, but I imagine the fix just hasn't made it into the daily yet.
Confirmed: oem-config-gtk is likewise not present after doing a BIOS-mode
OEM install.
But with a BIOS-mode install, nothing crazy happens when I:
$ sudo apt-get install oem-config-gtk ubiquity-frontend-gtk
See screenshot.
** Attachment added: Screenshot from 2015-02-18 09:59:05.png
gotcha, thanks! i misunderstood the status change :)
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1418706
Title:
Vivid: UEFI: blank drive incorrectly detected as existing BIOS-mode
install
Hmm, possible regression introduced by this fix: after I complete a UEFI
mode install, and then run:
$ sudo apt-get update
$ sudo apt-get dist-upgrade
I get a message that linux-signed-image-generic-lts-utopic was
automatically installed and is no longer required, can be removed with
Public bug reported:
After doing an OEM install (in UEFI-mode at least), oem-config-gtk isn't
installed.
I have a hunch it's not being installed in BIOS-mode as well (will
confirm that shortly). But the interesting UEFI-mode quirk is that if I
try to:
$ sudo apt-get install oem-config-gtk
** Attachment added: Screenshot from 2015-02-18 09:35:48.png
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1423254/+attachment/4321924/+files/Screenshot%20from%202015-02-18%2009%3A35%3A48.png
Just confirmed that this is indeed fixed in the 20150218 desktop amd64
daily ISO.
Also confirmed this fix works when doing an OEM install, which I didn't
test previously for lack of an easy-to-use live environment.
Public bug reported:
A UEFI mode install will download and install the latest signed kernel,
when there is a newer version available than what's provided on the ISO.
However, the 14.04.2 daily ISO (20150217 desktop amd64) downloads and
installs the 3.13.0-45 kernel, even though 3.16.0-30
Added a screenshot from during the install.
** Attachment added: Screenshot from 2015-02-17 11:44:33.png
https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1422864/+attachment/4321306/+files/Screenshot%20from%202015-02-17%2011%3A44%3A33.png
Same problem with the latest daily ISO (Tue Feb 10).
I'm launching qemu with -cdrom vivid-desktop-amd64.iso. I just
double-checked from the 'Try Ubuntu without installing' live session, and
the CDROM is already showing up as /dev/sr0.
ah, gotcha, that makes sense. it did cross my mind that perhaps it was
looking at the wrong block device when deciding that an existing BIOS-
mode install was present.
for what it's worth, this is a very recent regression. I did a
successful UEFI-mode install using the Vivid daily ISO from about a
okay, latest daily is in a bit better shape: I still get this erroneous
warning dialog, but now clicking Continue on the dialog works, and the
install completes successfully (grub is installed correctly, system
boots fine).
er, scratch that... system isn't bootable. i ended up back in the live
ISO without noticing it :P
Okay, just double checked with `parted -l`... the target drive is
totally uninitialized:
Error: /dev/vda: unrecognised disk label
Model: Virtio Block Device (virtblk)
Disk: /dev/vda 17.2GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
Totally blank... I'm using qemu-img to create a new, empty virtual disk
before doing the install. So every sector should be 0x00 repeated.
I'm having trouble getting a terminal open to run parted. When I try
'Try Ubuntu without installing', I get a Nautilus error and Unity isn't
starting correctly.
I just tested the latest vivid *server* daily ISO in UEFI mode, and it
installs fine. So seems the problem is likely in Ubiquity, not in the
underlying debian installer.
Public bug reported:
When doing a UEFI-mode install using the latest daily Vivid ISO (Thu Feb
5), Ubiquity incorrectly concludes that a blank drive contains an
existing BIOS-mode install (see error in attached screenshot).
The resulting error dialog is also broken: none of the buttons do any
Thanks, Chad!
Note that I'm almost ready with a revised patch, so my current patch
should be ignored for now.
My revised patch will incorporate your suggestion for the preinst
script.
To hopefully make this fix easier for the nvidia driver maintainers to
integrate across all affected versions (including those in the xorg-
edgers PPA), here's a github pull request for the nvidia-331 branch:
https://github.com/tseliot/nvidia-graphics-drivers/pull/7
debdiff coming shortly.
Okay, on vivid, unity-greeter 15.04.2-0ubuntu1 re-introduces this bug.
I have a vivid image that was up-to-date as of Fri 5 Dec. In this
snapshot, without applying any updates, I can connect to protected WiFi
just fine.
However, today something that landed from proposed re-introduced this bug,
and I
Er, I mean an ABI mismatch in policykit-1, not network-manager.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1388130
Title:
Cannot connect to WiFi with Nvidia GPU using nvidia-331, SSD
Another possible hint: something in this set of packages currently in
vivid-proposed breaks the WiFi password dialog:
Calculating upgrade... Done
The following NEW packages will be installed:
libisl13
The following packages will be upgraded:
apport apport-gtk btrfs-tools fontconfig
To eliminate more variables, I just tried xubuntu 14.10 (with nvidia-343
from ppa:system76-dev/stable)... and I can connect to password-protected
WiFi just fine.
As Xubuntu and Ubuntu are using most of the same lower-level stack, this
kinda suggests the problem is fairly high-level, potentially
More variables eliminated: this bug does *not* occur on:
- Ubuntu GNOME 14.10
- Ubuntu MATE 14.10
- Kubuntu 14.10
- Xubuntu 14.10 (as mentioned above)
Still no solution, but I've at least (hopefully) eliminated some more
variables.
As I know this problem doesn't currently exist on Vivid, I tried back-
porting `network-manager` and `network-manager-applet` from Vivid, but
no luck... same problem still exists.
And on the off chance that this is
Hmm, after more careful investigation, I think my hunch about
differences in the DBus related process was a dead end.
Also tried back-porting `policykit-1` (which needed a back-port of
`glib2.0` and `gobject-introspection`)... still no luck.
But at this point, the delta between Utopic and Vivid is still pretty
small, so I feel this is a promising avenue, at least as far as
shotgunning goes.
This fix seems to be working well. I have test packages here if anyone
wants to test them:
https://launchpad.net/~jderose/+archive/ubuntu/nvidia-test
** Patch added: possible-fix.diff
You'd think it wouldn't be related to the nvidia driver... but it
definitely is.
At System76, we've frequently encountered scattered problems like this.
The nvidia proprietary driver affects the boot sequence enough (for
example, no KMS) that it frequently exposes subtle problems in the overall
Oh, and a little more detail on why I'm certain this is related to the
nvidia proprietary driver...
Part of our QA process after we image a system (before it's shipped to
the customer) is to test WiFi. Since we started shipping Utopic, we've
had 0% failure on systems with Intel graphics.
On
Okay, think I just found a lead in /var/log/upstart/lightdm.log:
/etc/modprobe.d is not a file
/etc/modprobe.d is not a file
/etc/modprobe.d is not a file
/etc/modprobe.d is not a file
update-alternatives: error: no alternatives for x86_64-linux-gnu_gfxcore_conf
Failed to get D-Bus connection
BTW, it was the Failed to get D-Bus connection bit above that seems
problematic.
Also, looking in syslog, there are some interesting tidbits:
Nov 21 13:17:45 system76-pc NetworkManager[977]: info (wlan0): device state
change: config - need-auth (reason 'none') [50 60 0]
Nov 21 13:17:45
Another update: I think I've ruled out nvidia-persistenced being started
via udev as the possible culprit.
I tried Trusty with the nvidia-343 driver from the System76 PPA (which
starts nvidia-persistenced with udev)... and I can connect to WiFi just
fine.
I also tried an up-to-date Vivid install
Public bug reported:
nvidia-graphics-drivers-331 (331.89-0ubuntu5) will fail to remove/purge
nvidia-331 when /usr/bin/nvidia-persistenced is running.
The reason is the nvidia-331.postrm script hasn't been updated to
reflect that nvidia-persistenced is now started via udev rather than
Upstart
Ah, one thing I had wrong: /usr/bin/stop-nvidia-persistenced needs to
be called in nvidia-331.prerm, not nvidia-331.postrm.
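The prerm fix boils down to "stop the daemon only if it is actually running, before its files go away". A sketch of that guard, with hypothetical naming (the real packaging scripts may well differ):

```shell
# stop_if_running NAME CMD...: run CMD only when a process called NAME is
# alive; otherwise do nothing and still succeed, so a maintainer script
# using this never fails just because the daemon wasn't started.
stop_if_running() {
    name=$1; shift
    if pidof "$name" >/dev/null 2>&1; then
        "$@" || true
    fi
    return 0
}
```

In a prerm this might be invoked as: `stop_if_running nvidia-persistenced /usr/bin/stop-nvidia-persistenced`.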
** Affects: system76
Importance: Critical
Assignee: Jason Gerard DeRose (jderose)
Status: Triaged
** Affects: network-manager (Ubuntu)
Importance: Undecided
Status: New
** Tags: amd64 apport-bug utopic
** Attachment added: Screenshot from 2014-10
Note I confirmed that WiFi works fine when using nouveau on the same
hardware.
Also I tried a minimal nvidia-331 install with --no-install-recommends,
just in case the problem is related to any of the optimus stuff, which
isn't needed for System76 hardware... and still no dice.
Installing
Okay, figured this out, more or less.
I was mocking DMI information in qemu using:
-smbios file=smbios_type_1.bin
The dump was from actual hardware, but under the seemingly more strict
parsing done either by the newer kernel and/or dmidecode, the system-
product-name ends up with a hugely long
apport information
** Attachment added: CurrentDmesg.txt
https://bugs.launchpad.net/bugs/1366351/+attachment/4234542/+files/CurrentDmesg.txt
apport information
** Attachment added: BootDmesg.txt
https://bugs.launchpad.net/bugs/1366351/+attachment/4234540/+files/BootDmesg.txt
.tmp.unity.support.test.0:
ApportVersion: 2.14.7-0ubuntu5
Architecture: amd64
CompizPlugins: No value set for
`/apps/compiz-1/general/screen0/options/active_plugins'
CompositorRunning: compiz
CompositorUnredirectDriverBlacklist: '(nouveau|Intel).*Mesa 8.0'
CompositorUnredirectFSW: true
apport information
** Attachment added: DpkgLog.txt
https://bugs.launchpad.net/bugs/1366351/+attachment/4234544/+files/DpkgLog.txt