Bug#905392: openvpn: systemd generator ignores overrides in /etc/systemd/system
On 06/08/18 14:16, Bernhard Schmidt wrote:
> I have changed the script to test if a service file exists at
> /etc/systemd/system.
>
> Are you both sure this is necessary? To my knowledge the symlink tells
> systemd to start openvpn@.service with the service definition in memory.
> It does not tell it to start openvpn@.service with the service definition
> in the file returned by readlink(). AFAIK, unless it's /dev/null, the
> target of the symlink is irrelevant.

Unfortunately it is necessary. I debugged this issue on two separate devices, and unless the symlink created under /run/systemd/generator links to the custom file in /etc/systemd/system, systemd would start the openvpn@foo tunnels with the service file from /lib. This was apparent in systemctl status openvpn@foo. No amount of deleting and recreating the /etc/systemd/system/openvpn@.service file, nor (re)enabling the openvpn@foo service, would fix this.

It may be that this does not happen all the time - I've used this config before and did not run into it then - but I could not figure out why exactly. Perhaps only in some cases systemd looks at /run/systemd/generator/*.target.wants/* over /etc/systemd/system/*.target.wants/*.

Note though that this only happens if the generator is being activated, which depends on /etc/default/openvpn existing and AUTOSTART being unset, or being set to "all" or to some subset of VPN configs.

-- 
Met vriendelijke groet,
Gerben Meijer
Day by Day
Bug#905392: openvpn: systemd generator ignores overrides in /etc/systemd/system
Package: openvpn
Version: 2.4.5-1
Severity: normal

If AUTOSTART=all, or if it is set to specific config files, the systemd openvpn-generator will symlink those config files to /lib/systemd/system/openvpn@.service. This ignores any customisation done by users in /etc/systemd/system/openvpn@.service.

The generator should test whether /etc/systemd/system/openvpn@.service exists, and if so, use that as the symlink target instead of /lib/systemd/system/openvpn@.service.
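A minimal sketch of the proposed override check; the function name, argument order, and surrounding structure are illustrative and not taken from the actual openvpn-generator:

```shell
#!/bin/sh
# Illustrative helper (hypothetical, not generator code): prefer an
# admin-supplied unit in /etc/systemd/system over the packaged one in
# /lib/systemd/system when choosing the symlink target.
pick_unit_source() {
    etc_dir=$1; lib_dir=$2; unit=$3
    if [ -e "$etc_dir/$unit" ]; then
        echo "$etc_dir/$unit"    # user override wins
    else
        echo "$lib_dir/$unit"    # fall back to the distro unit
    fi
}
```

The generator could then symlink to "$(pick_unit_source /etc/systemd/system /lib/systemd/system openvpn@.service)" rather than hard-coding the /lib path.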
Bug#897693: lldpad: Segfault in get_saddr6
Source: lldpad
Version: 1.0.1+git20150824.036e314-2
Severity: important

Running lldpad on some systems causes a segfault. The issue is described here: https://bugzilla.redhat.com/show_bug.cgi?id=1513337

A patch that fixes the issue is attached to that bug report. Please apply it in the next package version.

-- System Information:
Debian Release: buster/sid
  APT prefers unstable
  APT policy: (100, 'unstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 4.16.2 (SMP w/8 CPU cores; PREEMPT)
Locale: LANG=en_GB.UTF-8, LC_CTYPE=en_GB.UTF-8 (charmap=UTF-8), LANGUAGE=en_GB.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
Bug#856964: Option search in dnssec-trigger.conf is ignored
Unfortunately it seems that setting set_search_domains=yes in dnssec.conf is not enough. Even with that set, the code in dnssec-trigger-script does not look at the contents of "search:" in /etc/dnssec-trigger/dnssec-trigger.conf. Instead, it seems to query NetworkManager for search domains, but even that fails on current Debian releases, since the search domains configured there do not even show up in the debug log:

Sep 15 14:47:19 believe dnssec-triggerd[29297]: Search domains:

The reason for that is that the script looks at NetworkManager's connections like this:

    self.zones += connection.get_ip4_config().get_domains()

Instead, or additionally, it should call get_searches(). As far as I understand it, zones is what is passed through in a DHCP request as the local domain for a DHCP client, but additional DNS search domains configured for a NM connection only show up in get_searches().

So this is broken in multiple ways, and I imagine it's not just on Debian.

-- 
Met vriendelijke groet,
Gerben Meijer
Day by Day
Bug#840070: cloud.debian.org: ext4 feature too new for jessie e2fsprogs
Package: cloud.debian.org
Severity: important

The ext4 filesystem for at least the libvirt box on Atlas is too new for the version of e2fsprogs in jessie. Trying to label the root partition results in:

tune2fs 1.42.12 (29-Aug-2014)
tune2fs: Filesystem has unsupported read-only feature(s) while trying to open /dev/vda1
Couldn't find valid filesystem superblock.

The probable cause is an unsupported feature:

# dumpe2fs -h /dev/vda1 | grep -i feature
dumpe2fs 1.42.12 (29-Aug-2014)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Journal features:         journal_incompat_revoke journal_64bit FEATURE_I4

FEATURE_I4 is "journal_checksum_v3", which is probably set by the host's mkfs.ext4; since the jessie e2fsprogs don't support it, it should not be set. It may even be the case that fsck will fail, but I have not tested this.

To avoid this, "mkfs.ext4 -O ^metadata_csum" should suffice until e2fsprogs reaches 1.43 or higher in jessie.
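As a sketch, an image-build script could catch this before shipping by checking the feature list from `dumpe2fs -h` for metadata_csum; the helper name and exact feature list handling below are hypothetical, for illustration only:

```shell
# Hypothetical check for image build scripts: flag metadata_csum in a
# dumpe2fs feature list, since e2fsprogs 1.42.x (jessie) cannot open
# filesystems that carry it.
has_jessie_incompatible_features() {
    features=$1    # e.g. "has_journal ext_attr ... metadata_csum"
    case " $features " in
        *" metadata_csum "*) return 0 ;;   # incompatible with 1.42.x
        *) return 1 ;;                     # readable by jessie tools
    esac
}
```

Building the image with "mkfs.ext4 -O ^metadata_csum" in the first place, as suggested above, avoids the flag entirely.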
Bug#806255: Nut 2.7.4
Source: nut
Followup-For: Bug #806255

The 2.7.4 release fixes a bug with new kernels where bcmxcp_usb is unable to initialize:

usb 4-1.1: usbfs: process 13644 (bcmxcp_usb) did not claim interface 0 before use

I've done an NMU backport myself and it fixed this issue, so an upgrade would be very welcome.
Bug#765175: systemd init script integration: status action should use --lines=0
On 28/04/16 23:14, Michael Biebl wrote:
> Do we? If I run /etc/init.d/foo status on a terminal having a pager
> might be useful. I was under the impression that the systemctl command
> detects if you are running it from a terminal or not and disables the
> pager in the latter case.

It may be useful, but sysv init scripts are never expected to invoke a pager. If a user wants paged status, 'systemctl status foo' is adequate. What's worse, the pager can't be skipped: passing arguments after '/etc/init.d/foo status' isn't supported by the lsb init-functions wrapper; only when invoking systemctl directly can this be done.

As far as terminal detection goes, if I run some program or script that does a sysv-style '/etc/init.d/foo status' as part of a service check (re Vagrant in my report), it will block - and in this case it blocks 5 times, once for each /etc/init.d/foo status call. There is no way around this other than to hack the source.

IMHO it's a matter of reasonable defaults: for 'systemctl status foo' a pager is expected, for '/etc/init.d/foo status' historically it is not, and I feel this should be honoured for compatibility reasons.
Bug#765175: systemd init script integration: status action should use --lines=0
I'd like to escalate this to a higher priority. There are existing applications that expect init scripts to return a string or an RC value. The current code in /lib/lsb/init-functions.d/40-systemd looks like this:

    [ "$command" = status ] || log_daemon_msg "$s" "$service"
    /bin/systemctl $sctl_args $command "$service"
    rc=$?
    [ "$command" = status ] || log_end_msg $rc

A problem arises whenever a script executes the status command and systemctl then invokes a pager. For example, on a 'vagrant up' with a Vagrantfile using an nfs synced_folder, vagrant will invoke '/etc/init.d/nfs-kernel-server status' for each VM it is bringing up with such a config. systemctl then invokes a pager for every VM, effectively halting the process until the user exits the pager - and it isn't even clear that a pager was invoked.

So please raise the priority on this; it seems only logical that traditional init scripts do not invoke a pager when passed the 'status' argument. Using --lines=0 is a viable solution, and likely better than --no-pager.
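A sketch of how the hook could compute its extra status arguments; the helper name is mine, and the real fix would simply be to add --lines=0 to the systemctl invocation in 40-systemd when $command is status:

```shell
# Illustrative sketch, not the actual 40-systemd code: compute extra
# systemctl arguments so a sysv-style "status" call never tails the
# journal or blocks on a pager.
systemctl_status_args() {
    command=$1
    if [ "$command" = status ]; then
        echo "--lines=0"    # the report's proposal; --no-pager is an alternative
    fi
}
```

The hook would then run: /bin/systemctl $sctl_args $(systemctl_status_args "$command") $command "$service"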
Bug#820438: cloud.debian.org: Vagrant images for libvirt provider
Package: cloud.debian.org
Severity: wishlist

Hello cloud team,

Would it be possible to start uploading official Debian images for the libvirt provider to atlas.hashicorp.com? It is already possible to convert the Vagrant Virtualbox images to qcow2 format and to then simply repackage and upload them, but it may be preferable to remove the virtualbox packages that are installed.

If there's a build system for these images that I need to submit patches against, please point me in the right direction. Thanks!
Bug#816388: knockd 0.5 using too much cpu with libpcap 1.5.3 and up
Package: knockd
Version: 0.5-3
Severity: important

In libpcap 1.5.3 a change was introduced that causes knockd to use an unreasonably large amount of CPU time. Previously, CPU usage was at 0%; with newer libpcap it's at 5% (on an E5-1620v2 3.7GHz CPU). Jessie uses libpcap 1.6 and is therefore affected.

This has been fixed upstream in 0.7; see https://github.com/jvinet/knock/issues/9 for details. Perhaps the proposed package in #761853 can be used for a new upload?
Bug#771523: systemd-journal-upload
Although it is true that syslog can be set up to send logs to other machines, there are use cases with systemd-journal-* that are not easily achievable by syslog alone.

For one, syslog does not by itself guarantee delivery of messages. The rsyslog-relp package can provide this, but it is a separate package and hence does not strictly qualify as trivial; moreover, the configuration to reliably transmit messages even when the syslog client-server connection is not working is not documented except on the rsyslog website. With systemd-journal-upload, the --save-state=/some/file flag on clients together with systemd-journal-remote --listen on a server will guarantee the delivery of any journal entries that have not been uploaded before. That is a lot more trivial than configuring rsyslog-relp.

Secondly, the feature set of systemd-journal-gatewayd is unique in that it allows remote clients to connect to a running machine and retrieve or listen for events from its journal, such as core dumps. Such functionality is simply unavailable with just syslog. In fact it allows remote journal viewing without a running server to which the journal is being sent. Even if its maturity is unknown, it is a maintained feature and was mentioned at FOSDEM 2015. By not packaging it, users aren't able to use it and there won't be any reports on its use at all.

Though admittedly the dependency on libmicrohttpd is not ideal, rsyslog-relp is a separate package from rsyslog as well, so it would not seem unreasonable to add the set of systemd-journal-gatewayd|upload|remote tools as a separately available package.

-- 
Regards,
Gerben Meijer
Day by Day

-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Bug#765540: grub2: grub fails with error: 'grub_term_highlight_color' not found
I just hit this bug while upgrading from 2.00-22 to 2.02~beta2-15. I believe #743477 is a duplicate of this bug.

The current behaviour breaks an existing functional boot simply by noninteractively upgrading grub-pc, which happens in an apt-get upgrade and/or dist-upgrade. If this happens on a noninteractive upgrade from wheezy to jessie, it would be a very serious release blocker. By its definition, this should be a critical bug.

Steps to reproduce:
- Deploy this jessie vagrant box: https://downloads.sourceforge.net/project/vagrantboxjessie/debian80.box (or any box with grub 2.00-22)
- apt-get update
- DEBIAN_FRONTEND=noninteractive apt-get -y install grub-pc
- reboot

The DEBIAN_FRONTEND=noninteractive is there to reproduce what happens during provisioning automation (ansible/puppet/salt/chef etc). An example Vagrantfile is here: http://paste.debian.net/130854/

The workaround is to do a grub-install --recheck /dev/sda after upgrading the package. Due to the nature of deployable images like this, the install_devices value from debconf will likely never match the newly created drive; even in cases where it is using /dev/sda, an image may get deployed with virtio and the booted drive would end up being /dev/vda.

For reference, the output of debconf-get-selections | grep -i grub before upgrading the grub-pc packages:

/usr/bin/debconf-get-selections | grep -i grub
grub-pc grub-pc/chainload_from_menu.lst boolean true
# Writing GRUB to boot device failed - continue?
grub-pc grub-pc/install_devices_failed boolean false
grub-pc grub-pc/kopt_extracted boolean false
# Hide the GRUB timeout
grub-pc grub-pc/hidden_timeout boolean false
# GRUB install devices:
grub-pc grub-pc/install_devices multiselect /dev/disk/by-id/ata-VBOX_HARDDISK_VB4950a3eb-80192436
grub-pc grub2/linux_cmdline_default string quiet
# Remove GRUB 2 from /boot/grub?
grub-pc grub-pc/postrm_purge_boot_grub boolean false
# Writing GRUB to boot device failed - try again?
grub-pc grub-pc/install_devices_failed_upgrade boolean true
grub-pc grub2/linux_cmdline string debian-installer=en_US
grub-pc grub2/kfreebsd_cmdline string
# GRUB timeout
grub-pc grub-pc/timeout string 5
# /boot/grub/device.map has been regenerated
grub-pc grub2/device_map_regenerated note
# Continue without installing GRUB?
grub-pc grub-pc/install_devices_empty boolean false
# GRUB install devices:
grub-pc grub-pc/install_devices_disks_changed multiselect
grub-pc grub2/kfreebsd_cmdline_default string quiet
# Finish conversion to GRUB 2 now?
grub-pc grub-pc/mixed_legacy_and_grub2 boolean true

And after upgrading:

grub-pc grub-pc/kopt_extracted boolean false
# GRUB install devices:
grub-pc grub-pc/install_devices multiselect /dev/disk/by-id/ata-VBOX_HARDDISK_VB4950a3eb-80192436
grub-pc grub2/linux_cmdline_default string quiet
# GRUB install devices:
grub-pc grub-pc/install_devices_disks_changed multiselect /dev/disk/by-id/ata-VBOX_HARDDISK_VB4950a3eb-80192436
# /boot/grub/device.map has been regenerated
grub-pc grub2/device_map_regenerated note
# Remove GRUB 2 from /boot/grub?
grub-pc grub-pc/postrm_purge_boot_grub boolean false
grub-pc grub2/kfreebsd_cmdline string
grub-pc grub-pc/chainload_from_menu.lst boolean true
# Continue without installing GRUB?
grub-pc grub-pc/install_devices_empty boolean false
# Finish conversion to GRUB 2 now?
grub-pc grub-pc/mixed_legacy_and_grub2 boolean true
grub-pc grub2/kfreebsd_cmdline_default string quiet
grub-pc grub2/linux_cmdline string debian-installer=en_US
# Writing GRUB to boot device failed - try again?
grub-pc grub-pc/install_devices_failed_upgrade boolean true
# What do you want to do about modified configuration file grub?
# GRUB timeout; for internal use
grub-pc grub-pc/timeout string 0
# Hide the GRUB timeout; for internal use
grub-pc grub-pc/hidden_timeout boolean false
# Writing GRUB to boot device failed - continue?
grub-pc grub-pc/install_devices_failed boolean false
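The workaround above can be scripted for provisioning. This is a sketch under the assumption that the boot disk is passed in explicitly (device names vary between ata and virtio deployments); the function name and the DRY_RUN flag are illustration aids, not part of any package:

```shell
# Sketch of the manual workaround: re-run grub-install with --recheck so
# the device map is rebuilt for the disk the image actually booted from,
# then regenerate grub.cfg. DRY_RUN=1 only prints the commands.
fix_grub_boot_device() {
    disk=$1    # e.g. /dev/sda, or /dev/vda under virtio
    for cmd in "grub-install --recheck $disk" "update-grub"; do
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "$cmd"
        else
            $cmd || return 1
        fi
    done
}
```

Running this once after the grub-pc upgrade (e.g. from ansible/puppet/salt/chef) avoids depending on the stale install_devices debconf value.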
Bug#768384: /etc/default/openvpn parameters are ignored
Package: openvpn
Version: 2.3.4-3
Severity: normal

Since the upgrade to systemd init, OPTARGS and possibly other variables in /etc/default/openvpn are ignored. Either those variables should be incorporated under systemd init, or /etc/default/openvpn should be removed to avoid confusion.

-- System Information:
Debian Release: jessie/sid
  APT prefers testing
  APT policy: (900, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 3.16-3-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash

Versions of packages openvpn depends on:
ii  debconf [debconf-2.0]  1.5.53
ii  initscripts            2.88dsf-53.2
ii  iproute2               3.16.0-1
ii  libc6                  2.19-7
ii  liblzo2-2              2.08-1
ii  libpam0g               1.1.8-3
ii  libpkcs11-helper1      1.11-1
ii  libssl1.0.0            1.0.1i-1

Versions of packages openvpn recommends:
pn  easy-rsa  <none>

Versions of packages openvpn suggests:
ii  openssl     1.0.1i-1
pn  resolvconf  <none>

-- Configuration Files:
/etc/default/openvpn changed [not included]

-- debconf information excluded
Bug#757822: openssh-server: systemd start-limit reached after bootup, causing sshd to be stopped
Package: openssh-server
Version: 1:6.6p1-6
Severity: important

On a fresh install of Jessie with systemd and NetworkManager in a VM with 3 interfaces, the if-up.d/openssh-server script restarts sshd for every network interface that comes up at boot time. When these interfaces come up very quickly, this triggers the default start-limit in systemd, causing sshd to be terminated and the machine in question to become unreachable over ssh. I am able to reproduce this behaviour with 2 ethernet (1 dhcp and 1 static) and 1 tun (openvpn) interface.

Log excerpt:

Aug 11 14:53:20 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
-- Unit ssh.service has begun starting up.
Aug 11 14:53:20 vagrant systemd[1]: Started OpenBSD Secure Shell server.
-- Unit ssh.service has finished starting up.
Aug 11 14:53:20 vagrant sshd[471]: Server listening on 0.0.0.0 port 22.
Aug 11 14:53:20 vagrant sshd[471]: Server listening on :: port 22.
Aug 11 14:53:20 vagrant NetworkManager[475]: <info> (eth1): device state change: secondaries -> activated (reason 'none') [90 100 0]
Aug 11 14:53:20 vagrant NetworkManager[475]: <info> Activation (eth1) successful, device activated.
Aug 11 14:53:20 vagrant NetworkManager[475]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0]
Aug 11 14:53:20 vagrant NetworkManager[475]: <info> Activation (eth0) successful, device activated.
Aug 11 14:53:20 vagrant systemd[1]: Stopping OpenBSD Secure Shell server...
-- Unit ssh.service has begun shutting down.
Aug 11 14:53:20 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
-- Unit ssh.service has begun starting up.
Aug 11 14:53:20 vagrant systemd[1]: Started OpenBSD Secure Shell server.
-- Unit ssh.service has finished starting up.
Aug 11 14:53:20 vagrant sshd[471]: Received signal 15; terminating.
Aug 11 14:53:20 vagrant sshd[989]: Server listening on 0.0.0.0 port 22.
Aug 11 14:53:20 vagrant sshd[989]: Server listening on :: port 22.
Aug 11 14:53:21 vagrant systemd[1]: Stopping OpenBSD Secure Shell server...
-- Unit ssh.service has begun shutting down.
Aug 11 14:53:21 vagrant sshd[989]: Received signal 15; terminating.
Aug 11 14:53:21 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
-- Unit ssh.service has begun starting up.
Aug 11 14:53:21 vagrant systemd[1]: Started OpenBSD Secure Shell server.
-- Unit ssh.service has finished starting up.
Aug 11 14:53:21 vagrant sshd[1059]: Server listening on 0.0.0.0 port 22.
Aug 11 14:53:21 vagrant sshd[1059]: Server listening on :: port 22.
Aug 11 14:53:21 vagrant systemd[1]: Stopping OpenBSD Secure Shell server...
-- Unit ssh.service has begun shutting down.
Aug 11 14:53:21 vagrant sshd[1059]: Received signal 15; terminating.
Aug 11 14:53:21 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
-- Unit ssh.service has begun starting up.
Aug 11 14:53:21 vagrant systemd[1]: Started OpenBSD Secure Shell server.
-- Unit ssh.service has finished starting up.
Aug 11 14:53:21 vagrant sshd[1127]: Server listening on 0.0.0.0 port 22.
Aug 11 14:53:21 vagrant sshd[1127]: Server listening on :: port 22.
Aug 11 14:53:22 vagrant ifup[269]: bound to 10.0.2.15 -- renewal in 35227 seconds.
Aug 11 14:53:22 vagrant systemd[1]: Stopping OpenBSD Secure Shell server...
-- Unit ssh.service has begun shutting down.
Aug 11 14:53:22 vagrant sshd[1127]: Received signal 15; terminating.
Aug 11 14:53:22 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
-- Unit ssh.service has begun starting up.
Aug 11 14:53:22 vagrant systemd[1]: Started OpenBSD Secure Shell server.
-- Unit ssh.service has finished starting up.
Aug 11 14:53:22 vagrant sshd[1290]: Server listening on 0.0.0.0 port 22.
Aug 11 14:53:22 vagrant sshd[1290]: Server listening on :: port 22.
Aug 11 14:53:25 vagrant NetworkManager[475]: <info> (tun0): device state change: secondaries -> activated (reason 'none') [90 100 0]
Aug 11 14:53:25 vagrant NetworkManager[475]: <info> Activation (tun0) successful, device activated.
Aug 11 14:53:25 vagrant systemd[1]: Stopping OpenBSD Secure Shell server...
-- Unit ssh.service has begun shutting down.
Aug 11 14:53:25 vagrant sshd[1290]: Received signal 15; terminating.
Aug 11 14:53:25 vagrant systemd[1]: Starting OpenBSD Secure Shell server...
Aug 11 14:53:25 vagrant systemd[1]: ssh.service start request repeated too quickly, refusing to start.
Aug 11 14:53:25 vagrant systemd[1]: Failed to start OpenBSD Secure Shell server.
-- Unit ssh.service has failed.
Aug 11 14:53:25 vagrant systemd[1]: Unit ssh.service entered failed state.

-- System Information:
Debian Release: jessie/sid
  APT prefers testing
  APT policy: (900, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 3.14-2-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash

Versions of packages openssh-server depends on:
ii  adduser                3.113+nmu3
ii  debconf [debconf-2.0]  1.5.53
Bug#757822: Acknowledgement (openssh-server: systemd start-limit reached after bootup, causing sshd to be stopped)
Some more investigation on IRC by valdyn shows that this problem was introduced by the fix for bug #502444, where the 'reload' behaviour in /etc/network/if-up.d/openssh-server was changed to 'restart' due to a race condition. I've tested changing 'restart' back to 'reload' and that does resolve the issue; it may be the best solution when ssh is controlled by systemd.
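A sketch of what the if-up.d hook change could look like. The systemd detection via the /run/systemd/system directory is my assumption (it is the conventional check), parameterised here so the logic can be exercised without systemd; the function name is illustrative:

```shell
# Illustrative: choose 'reload' when running under systemd, so repeated
# interface events do not count as new starts against the start-limit,
# and keep the historical 'restart' under sysvinit.
ssh_ifup_action() {
    systemd_marker=${1:-/run/systemd/system}
    if [ -d "$systemd_marker" ]; then
        echo reload     # SIGHUPs the running sshd; no new start is counted
    else
        echo restart    # sysvinit: behaviour introduced for #502444
    fi
}
```

The hook would then invoke "invoke-rc.d ssh $(ssh_ifup_action)" or equivalent.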