[Touch-packages] [Bug 1916773] Re: ua disable fips doesn't work in ua client 27

2021-02-24 Thread David Coronel
wrong project

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to ifupdown in Ubuntu.
https://bugs.launchpad.net/bugs/1916773

Title:
  ua disable fips doesn't work in ua client 27

Status in ifupdown package in Ubuntu:
  Invalid

Bug description:
  I'm trying to disable FIPS on an Ubuntu Pro FIPS 18.04 image in AWS.
  I updated to the latest ua client from the daily PPA. I get a prompt
  to disable it, but the operation fails:

  ubuntu@ip-172-31-60-238:~$ sudo add-apt-repository ppa:canonical-
  server/ua-client-daily

  ubuntu@ip-172-31-60-238:~$ sudo apt install ubuntu-advantage-pro
  ubuntu-advantage-tools

  ubuntu@ip-172-31-60-238:~$ ua version
  27.0-945~gedf4a7e~ubuntu18.04.1

  ubuntu@ip-172-31-60-238:~$ ua status
  SERVICE       ENTITLED  STATUS    DESCRIPTION
  cis-audit     no        —         Center for Internet Security Audit Tools
  esm-infra     yes       enabled   UA Infra: Extended Security Maintenance
  fips          yes       enabled   NIST-certified FIPS modules
  fips-updates  no        —         Uncertified security updates to FIPS modules
  livepatch     yes       n/a       Canonical Livepatch service
  [...]

  ubuntu@ip-172-31-60-238:~$ sudo ua disable fips
  This will disable access to certified FIPS packages.
  Are you sure? (y/N) y
  Could not enable FIPS.

  ubuntu@ip-172-31-60-238:~$ ua status
  SERVICE       ENTITLED  STATUS    DESCRIPTION
  cis-audit     no        —         Center for Internet Security Audit Tools
  esm-infra     yes       enabled   UA Infra: Extended Security Maintenance
  fips          yes       enabled   NIST-certified FIPS modules
  fips-updates  no        —         Uncertified security updates to FIPS modules
  livepatch     yes       n/a       Canonical Livepatch service
  [...]

  I tried rebooting after but I'm still running the fips kernel and fips
  is enabled:

  ubuntu@ip-172-31-60-238:~$ uname -a
  Linux ip-172-31-60-238 4.15.0-2000-aws-fips #4-Ubuntu SMP Tue Jan 28 12:41:43 
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

  ubuntu@ip-172-31-60-238:~$ cat /proc/sys/crypto/fips_enabled
  1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1916773/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1916773] [NEW] ua disable fips doesn't work in ua client 27

2021-02-24 Thread David Coronel
Public bug reported:

I'm trying to disable FIPS on an Ubuntu Pro FIPS 18.04 image in AWS. I
updated to the latest ua client from the daily PPA. I get a prompt to
disable it, but the operation fails:

ubuntu@ip-172-31-60-238:~$ sudo add-apt-repository ppa:canonical-server
/ua-client-daily

ubuntu@ip-172-31-60-238:~$ sudo apt install ubuntu-advantage-pro ubuntu-
advantage-tools

ubuntu@ip-172-31-60-238:~$ ua version
27.0-945~gedf4a7e~ubuntu18.04.1

ubuntu@ip-172-31-60-238:~$ ua status
SERVICE       ENTITLED  STATUS    DESCRIPTION
cis-audit     no        —         Center for Internet Security Audit Tools
esm-infra     yes       enabled   UA Infra: Extended Security Maintenance
fips          yes       enabled   NIST-certified FIPS modules
fips-updates  no        —         Uncertified security updates to FIPS modules
livepatch     yes       n/a       Canonical Livepatch service
[...]

ubuntu@ip-172-31-60-238:~$ sudo ua disable fips
This will disable access to certified FIPS packages.
Are you sure? (y/N) y
Could not enable FIPS.

ubuntu@ip-172-31-60-238:~$ ua status
SERVICE       ENTITLED  STATUS    DESCRIPTION
cis-audit     no        —         Center for Internet Security Audit Tools
esm-infra     yes       enabled   UA Infra: Extended Security Maintenance
fips          yes       enabled   NIST-certified FIPS modules
fips-updates  no        —         Uncertified security updates to FIPS modules
livepatch     yes       n/a       Canonical Livepatch service
[...]

I tried rebooting after but I'm still running the fips kernel and fips
is enabled:

ubuntu@ip-172-31-60-238:~$ uname -a
Linux ip-172-31-60-238 4.15.0-2000-aws-fips #4-Ubuntu SMP Tue Jan 28 12:41:43 
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

ubuntu@ip-172-31-60-238:~$ cat /proc/sys/crypto/fips_enabled
1
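
A quick way to confirm the machine is still in FIPS mode after the failed
disable (a sketch; fips=1 on the kernel command line is how the FIPS
kernel is normally enabled, and the -fips kernel flavour should go away
once a disable succeeds and the machine is rebooted):

ubuntu@ip-172-31-60-238:~$ cat /proc/sys/crypto/fips_enabled   # 1 = FIPS active
ubuntu@ip-172-31-60-238:~$ grep -o 'fips=1' /proc/cmdline      # FIPS boot parameter
ubuntu@ip-172-31-60-238:~$ uname -r                            # still the *-aws-fips flavour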

** Affects: ifupdown (Ubuntu)
 Importance: Undecided
 Status: Invalid

** Changed in: ifupdown (Ubuntu)
   Status: New => Invalid



[Touch-packages] [Bug 1795764] Re: systemd: core: Fix edge case when processing /proc/self/mountinfo

2018-12-17 Thread David Coronel
I tested the systemd 229-4ubuntu21.11 package from xenial-proposed in
Ubuntu 16.04 in KVM and also confirm I cannot reproduce the issue with
this updated package:

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial

# dpkg -l | grep 229-4ubuntu21.11
ii  libpam-systemd:amd64  229-4ubuntu21.11  amd64  system and service manager - PAM module
ii  libsystemd0:amd64     229-4ubuntu21.11  amd64  systemd utility library
ii  systemd               229-4ubuntu21.11  amd64  system and service manager

# mkdir -p bind-test/abc 
# mount --bind bind-test bind-test 
# mount -t tmpfs tmpfs bind-test/abc 
# umount bind-test/abc 
# systemctl list-units --all | grep bind-test
  root-bind\x2dtest.mount  loaded  active  mounted  /root/bind-test

+1, LGTM

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1795764

Title:
  systemd: core: Fix edge case when processing /proc/self/mountinfo

Status in systemd package in Ubuntu:
  Fix Released
Status in systemd source package in Xenial:
  Fix Committed

Bug description:
  [Impact]

  kubernetes loaded inactive dead transient mount points grows
  https://github.com/kubernetes/kubernetes/issues/57345

  [Test Case]

  # cd /tmp
  # mkdir -p bind-test/abc
  # mount --bind bind-test bind-test
  # mount -t tmpfs tmpfs bind-test/abc
  # umount bind-test/abc
  # systemctl list-units --all | grep bind-test
  tmp-bind\x2dtest-abc.mount loaded inactive dead /tmp/bind-test/abc
  tmp-bind\x2dtest.mount loaded active mounted /tmp/bind-test

  Expected outcome (with the fix):

  # cd /tmp
  # mkdir -p bind-test/abc
  # mount --bind bind-test bind-test
  # mount -t tmpfs tmpfs bind-test/abc
  # umount bind-test/abc
  # systemctl list-units --all | grep bind-test
  tmp-bind\x2dtest.mount loaded active mounted /tmp/bind-test
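
  To clean up after either run (assuming nothing else is mounted under
  bind-test):

  # umount bind-test
  # rm -r bind-test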

  [Regression Potential]

  This is an adapted version of two upstream fixes, as the original
  upstream commit was made on top of two functions,
  mount_setup_new_unit() and mount_setup_existing_unit(), that do not yet
  exist in systemd 229. It is easily adaptable because the current
  function mount_setup_unit() handles both cases at the moment, instead
  of them being separated into two distinct functions.

  It is an adaptation of commits:
  [65d36b495] core: Fix edge case when processing /proc/self/mountinfo
  [03b8cfede] core: make sure to init mount params before calling 
mount_is_extrinsic()

  This patch changes mount_setup_unit() to prevent the just_mounted
  mount setup flag from being overwritten if it is set to true. This
  will allow all mount units created from /proc/self/mountinfo entries
  to be initialised properly.

  Additionally, the patch got the blessing of 'xnox', who looked at it
  and mentioned it looks fine to him.

  [Pending SRU]

  Note: no autopkgtest regressions have been reported since systemd
  (21.5); between 21.5 and now (21.11), everything released has been
  security fixes:

  systemd (229-4ubuntu21.11) xenial; urgency=medium ==> Current SRU
  systemd (229-4ubuntu21.10) xenial-security; urgency=medium
  systemd (229-4ubuntu21.9) xenial-security; urgency=medium
  systemd (229-4ubuntu21.8) xenial-security; urgency=medium
  systemd (229-4ubuntu21.6) xenial-security; urgency=medium
  systemd (229-4ubuntu21.5) xenial; urgency=medium ==> Previous SRU

  
  * Regression in autopkgtest for nplan (s390x): test log
  
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial/xenial/s390x/n/nplan/20181023_132448_031b9@/log.gz

  Error:
  modprobe: FATAL: Module cfg80211 not found in directory 
/lib/modules/4.4.0-138-generic

  Justification:
  The above seems to be a recurrent failure for a couple of releases
  already; it wasn't introduced by this particular SRU.

  I don't think having a wifi module is relevant on s390x anyway, so
  most likely the module is intentionally absent from the s390x kernel.

  * Regression in autopkgtest for nplan (amd64): test log
  
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial/xenial/amd64/n/nplan/20181217_010129_e07e2@/log.gz

  Error: (Ran on autopkgtest Ubuntu infra)
  test_bond_mode_balance_rr_pps (__main__.TestNetworkManager) ... Error: Could 
not create NMClient object: Cannot invoke method; proxy is for a well-known 
name without an owner and proxy was constructed with the 
G_DBUS_PROXY_FLAGS_DO_NOT_AUTO_START flag.FAIL

  test_bridge_priority (__main__.TestNetworkManager) ... Error: Could
  not create NMClient object: Cannot invoke method; proxy is for a well-
  known name without an owner and 

[Touch-packages] [Bug 1766872] Re: 'Enable Network' in recovery mode not working properly.

2018-10-22 Thread David Coronel
I tested friendly-recovery 0.2.31ubuntu2 in Xenial/16.04.3 LTS.

Selecting "network  Enable networking" from the Recovery Menu now enables
the network and DNS successfully. I can now "cat /etc/resolv.conf",
whereas before I couldn't. I don't see any issues with dependencies,
tty-ask-password prompts, or anything else that was reported previously.

Selecting "resume  Resume normal boot" works well too and returns to the
normal OS.


** Tags removed: verification-needed-xenial
** Tags added: verification-done-xenial

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1766872

Title:
  'Enable Network' in recovery mode not working properly.

Status in friendly-recovery package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Won't Fix
Status in friendly-recovery source package in Xenial:
  Fix Committed
Status in systemd source package in Xenial:
  Won't Fix
Status in friendly-recovery source package in Bionic:
  Fix Released
Status in systemd source package in Bionic:
  Won't Fix
Status in friendly-recovery source package in Cosmic:
  Fix Released
Status in systemd source package in Cosmic:
  Won't Fix
Status in friendly-recovery source package in DD-Series:
  Invalid
Status in systemd source package in DD-Series:
  Won't Fix

Bug description:
  [Impact]

   * The network menu in recovery mode doesn't work correctly; it blocks
  while starting the systemd services needed to enable networking.

  [Test Case]

   * Boot w/ Xenial or Bionic in recovery mode via grub
   * Choose "network" in friendly-recovery menu

   The network won't be activated and it'll be stuck at
  systemd-tty-ask-password:

  # pstree
  systemd-+-bash---pstree
  |-recovery-menu---network---systemctl---systemd-tty-ask
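
  From a debug shell (boot with systemd.debug-shell=1; it is available on
  vtty9), the hang can be confirmed with the commands from the analysis
  quoted below:

  # systemd-analyze blame   # reports "Bootup is not yet finished..."
  # systemctl list-jobs     # ~100 jobs 'waiting'; only friendly-recovery.service 'running'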

  [Regression Potential]

  * Low.
  * All options work fine.
  * Cosmic already has the same changes in place.

  * According to xnox, the resume option fails to boot now.
  After verification, 'resume' behaves the same before and after the
  change: it boots up but still seems to stay in the 'recovery' option
  according to /proc/cmdline, so I don't see any obvious behaviour change.

  [Other Info]

   * Upstream :
  
https://bazaar.launchpad.net/~ubuntu-core-dev/friendly-recovery/ubuntu/changes/161?start_revid=161
  Revision  154 to 161

  [Original Description]

  This bug was noticed after the introduction of the fix for (LP:
  #1682637) in Bionic.

  I have noticed a hang in Bionic when choosing the 'Enable Network'
  option in recovery mode on several vanilla Bionic systems, and I can
  reproduce it every time.

  I also asked colleagues to give it a try (for a second pair of eyes on
  this) and they got the same result as me.

  Basically, choosing 'Enable Network' blocks or locks up.
  If we hit 'ctrl-c', a shell arrives and the system has network
  connectivity.

  Here's what I found while enabling "systemd.debug-shell=1", from vtty9:

  # pstree
  systemd-+-bash---pstree
  |-recovery-menu---network---systemctl---systemd-tty-ask
  |-systemd-journal
  

  # ps
  root 486 473 0 08:29 tty1 00:00:00 /bin/systemd-tty-ask-password-agent

  root 473 486 0 08:29 tty1 00:00:00 systemctl start dbus.socket

  root 486 283 0 08:29 tty1 00:00:00 /bin/sh /lib/recovery-
  mode/options/network

  Additionally,

  systemd-analyze blame:
  "Bootup is not yet finished. Please try again later"

  "systemctl list-jobs" is showing a 100 jobs in 'waiting' state

  The only 'running' unit is friendly-recovery.service :
  52 friendly-recovery.service start running

  The rest are all "waiting". My understanding is that "waiting" units
  are executed only after those which are "running" have completed,
  which explains why "ctrl-c" allows the boot to continue.

  All the systemd special units important at boot-up are waiting:
  7 sysinit.target start waiting
  3 basic.target   start waiting
  .

  It seems systemd is not fully initialised in 'Recovery Mode' and
  doesn't allow any 'systemctl start' operation without a
  password/passphrase request, which I suspect is hidden by the
  recovery-mode menu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/friendly-recovery/+bug/1766872/+subscriptions



[Touch-packages] [Bug 1788643] Re: zombies pile up, system becomes unresponsive

2018-09-26 Thread David Coronel
Hi Ivan, the best way to engage Canonical Support to get assistance with
this issue will be to file a support case on support.canonical.com and
attach an sosreport of the affected system that is collected when the
issue happens. See my previous comment #5 for the details of sosreport.
Please check with Stephen Zarkos if you need access to the Canonical
Support Portal.

One other idea that may help if your system is not responsive is to log
serial console output inside a GNU screen or tmux session. In this
console session, you can enable the maximum log level ("echo 9 >
/proc/sysrq-trigger"; you might have to run "sysctl -w kernel.sysrq=1"
first to enable sysrq) and run "dmesg -w", which dumps dmesg and
continuously appends new kernel log entries. This way, you won't depend
on saving logs to disk to see what's going on, since disk access could
freeze at the moment of the failure.
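
A consolidated version of the above, to run inside the screen/tmux
session attached to the serial console:

# enable all sysrq functions, then raise the console log level to maximum
sysctl -w kernel.sysrq=1
echo 9 > /proc/sysrq-trigger
# follow the kernel ring buffer so messages survive even if the disk freezes
dmesg -w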

You can also enable kdump and the various "panic_on_X" sysctl settings
(see the section "Enabling various types of panics" in the
CrashdumpRecipe article[1]). If the system is locking up so hard that it
freezes, it may then capture a dump so that we can see what's going on.
Refer to the CrashdumpRecipe article[1] for more information.
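
The panic-related sysctls described in the CrashdumpRecipe article can
be enabled roughly like this (a sketch; kdump itself must already be set
up, e.g. via the linux-crashdump package, and the exact list of settings
is in the article):

sysctl -w kernel.panic_on_oops=1      # panic (and dump) on a kernel oops
sysctl -w kernel.softlockup_panic=1   # panic on detected soft lockups
sysctl -w kernel.hung_task_panic=1    # panic when a task hangs too long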

[1] https://wiki.ubuntu.com/Kernel/CrashdumpRecipe

Thank you,
David

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1788643

Title:
  zombies pile up, system becomes unresponsive

Status in systemd package in Ubuntu:
  New

Bug description:
  Description: Ubuntu 16.04.5 LTS
  Release: 16.04

  systemd:
Installed: 229-4ubuntu21.4
Candidate: 229-4ubuntu21.4
Version table:
   *** 229-4ubuntu21.4 500
  500 http://azure.archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   229-4ubuntu21.1 500
  500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
Packages
   229-4ubuntu4 500
  500 http://azure.archive.ubuntu.com/ubuntu xenial/main amd64 Packages

  This problem is in Azure. We are seeing these problems on different
  systems. Worker nodes (Ubuntu 16.04) in a hadoop cluster start piling
  up zombies and become unresponsive. The syslog and the kernel logs
  don't provide much information.

  The only error we could correlate with what we are seeing was in the
  audit logs. See at the end of this message, the "Connection timed out"
  and the "Cannot create session: Already running in a session"
  messages.

  Our first suspect was memory pressure on the machines. We added
  logging and settings to reboot on out of memory, but all these turned
  to be red herrings.

  Aug 18 19:11:08 wn2-d3ncsp su[112600]: Successful su for root by root
  Aug 18 19:11:08 wn2-d3ncsp su[112600]: + ??? root:root
  Aug 18 19:11:08 wn2-d3ncsp su[112600]: pam_unix(su:session): session opened 
for user root by (uid=0)
  Aug 18 19:11:08 wn2-d3ncsp systemd-logind[1486]: New session c8 of user root.
  Aug 18 19:11:26 wn2-d3ncsp sshd[112690]: Did not receive identification 
string from 10.84.93.35
  Aug 18 19:11:34 wn2-d3ncsp su[112600]: pam_systemd(su:session): Failed to 
create session: Connection timed out
  Aug 18 19:11:34 wn2-d3ncsp su[112600]: pam_unix(su:session): session closed 
for user root
  Aug 18 19:11:34 wn2-d3ncsp systemd-logind[1486]: Removed session c8.

   
  Aug 18 19:12:03 wn2-d3ncsp sudo: ehiadmin : TTY=pts/1 ; PWD=/home/ehiadmin ; 
USER=root ; COMMAND=/bin/su -
  Aug 18 19:12:03 wn2-d3ncsp sudo: pam_unix(sudo:session): session opened for 
user root by ehiadmin(uid=0)
  Aug 18 19:12:03 wn2-d3ncsp su[113085]: Successful su for root by root
  Aug 18 19:12:03 wn2-d3ncsp su[113085]: + /dev/pts/1 root:root
  Aug 18 19:12:03 wn2-d3ncsp su[113085]: pam_unix(su:session): session opened 
for user root by ehiadmin(uid=0)
  Aug 18 19:12:03 wn2-d3ncsp su[113085]: pam_systemd(su:session): Cannot create 
session: Already running in a session
  Aug 18 19:12:42 wn2-d3ncsp sshd[113274]: Did not receive identification 
string from 10.84.93.42
  Aug 18 19:13:37 wn2-d3ncsp su[113085]: pam_unix(su:session): session closed 
for user root
  Aug 18 19:13:37 wn2-d3ncsp sudo: pam_unix(sudo:session): session closed for 
user root
  Aug 18 19:13:37 wn2-d3ncsp sshd[112285]: pam_unix(sshd:session): session 
closed for user ehiadmin
  Aug 18 19:13:37 wn2-d3ncsp systemd-logind[1486]: Removed session 1291.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1788643/+subscriptions



[Touch-packages] [Bug 1788486] Re: apt behaviour when package with strict dependencies rules and version -gt in -updates than -security.

2018-09-04 Thread David Coronel
There is a similar situation where apt-get wants to remove packages when
strict dependencies (equal version number) cannot be met. For example
dovecot-core/dovecot-pop3d/dovecot-imapd.

After installing Ubuntu 16.04.3 LTS from the ISO, install the 3 packages
from the release pocket:

$ sudo apt install dovecot-imapd=1:2.2.22-1ubuntu2 \
dovecot-core=1:2.2.22-1ubuntu2 \
dovecot-pop3d=1:2.2.22-1ubuntu2 

USN-3587-1 [1] wants dovecot-core to be upgraded to 1:2.2.22-1ubuntu2.7

Trying to upgrade dovecot-core to 1:2.2.22-1ubuntu2.7 will want to
remove dovecot-imapd and dovecot-pop3d:

$ sudo apt-get install --dry-run dovecot-core=1:2.2.22-1ubuntu2.7

Reading package lists... Done
Building dependency tree   
Reading state information... Done
Suggested packages:
  ntp dovecot-gssapi dovecot-sieve dovecot-pgsql dovecot-mysql dovecot-sqlite 
dovecot-ldap dovecot-imapd dovecot-pop3d dovecot-lmtpd dovecot-managesieved 
dovecot-solr
The following packages will be REMOVED:
  dovecot-imapd dovecot-pop3d
The following packages will be upgraded:
  dovecot-core
1 upgraded, 0 newly installed, 2 to remove and 171 not upgraded.
Remv dovecot-imapd [1:2.2.22-1ubuntu2]
Remv dovecot-pop3d [1:2.2.22-1ubuntu2]
Inst dovecot-core [1:2.2.22-1ubuntu2] (1:2.2.22-1ubuntu2.7 
Ubuntu:16.04/xenial-security [amd64])
Conf dovecot-core (1:2.2.22-1ubuntu2.7 Ubuntu:16.04/xenial-security [amd64])


Why? The only package that is *required* to be upgraded (because the USN 
specifies it, and because we currently treat USN package specs as binary 
packages), is dovecot-core. Apt's (simple) dependency solver figures it can 
upgrade that, but in doing so it breaks dovecot-imapd and dovecot-pop3d which 
need removing as a consequence. The workaround is to specify all 3 packages: 

$ sudo apt-get install --dry-run dovecot-core=1:2.2.22-1ubuntu2.7 \
dovecot-imapd=1:2.2.22-1ubuntu2.7 \
dovecot-pop3d=1:2.2.22-1ubuntu2.7

Reading package lists... Done
Building dependency tree   
Reading state information... Done
Suggested packages:
  ntp dovecot-gssapi dovecot-sieve dovecot-pgsql dovecot-mysql dovecot-sqlite 
dovecot-ldap dovecot-lmtpd dovecot-managesieved dovecot-solr
The following packages will be upgraded:
  dovecot-core dovecot-imapd dovecot-pop3d
3 upgraded, 0 newly installed, 0 to remove and 171 not upgraded.
Inst dovecot-pop3d [1:2.2.22-1ubuntu2] (1:2.2.22-1ubuntu2.7 
Ubuntu:16.04/xenial-security [amd64]) []
Inst dovecot-imapd [1:2.2.22-1ubuntu2] (1:2.2.22-1ubuntu2.7 
Ubuntu:16.04/xenial-security [amd64]) []
Inst dovecot-core [1:2.2.22-1ubuntu2] (1:2.2.22-1ubuntu2.7 
Ubuntu:16.04/xenial-security [amd64])
Conf dovecot-core (1:2.2.22-1ubuntu2.7 Ubuntu:16.04/xenial-security [amd64])
Conf dovecot-pop3d (1:2.2.22-1ubuntu2.7 Ubuntu:16.04/xenial-security [amd64])
Conf dovecot-imapd (1:2.2.22-1ubuntu2.7 Ubuntu:16.04/xenial-security [amd64])


[1] https://usn.ubuntu.com/3587-1/

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apt in Ubuntu.
https://bugs.launchpad.net/bugs/1788486

Title:
  apt behaviour when package with strict dependencies rules and version
  -gt in -updates than -security.

Status in apt package in Ubuntu:
  Won't Fix
Status in landscape-client package in Ubuntu:
  Won't Fix
Status in apt source package in Xenial:
  Won't Fix
Status in landscape-client source package in Xenial:
  Won't Fix
Status in apt source package in Bionic:
  Won't Fix
Status in landscape-client source package in Bionic:
  Won't Fix

Bug description:
  [Impact]

  We notice that situation while investigating a security update using
  Landscape, but it also applies to 'apt' outside the Landscape context.

  'apt' should be smarter about detecting/installing packages with
  strict dependencies, such as systemd[1], when a version is specified
  for upgrade (e.g. $ apt-get install systemd=229-4ubuntu21.1).

  It should automatically install the dependencies (if any) at that same
  version as well, instead of failing while trying to install the highest
  available version of the dependency alongside the specified version of
  the named package:

  $ apt-get install systemd=229-4ubuntu21.1

  "systemd : Depends: libsystemd0 (= 229-4ubuntu21.1) but 229-4ubuntu21.4
  is to be installed"

  To reproduce the problem:
  - a package with a lower version must exist in -security (e.g.
systemd/229-4ubuntu21.1)
  - a package with a higher version must exist in -updates (e.g.
systemd/229-4ubuntu21.4)
  - the package must have strict dependencies (e.g. libsystemd0 (=
${binary:Version}))
  - the upgrade must only specify a version for the package itself, not
its dependencies (e.g. $ apt-get install systemd=229-4ubuntu21.1
# systemd without the libsystemd0 dependency)

  systemd is a good reproducer; I'm sure finding other packages in the
  same situation is easy.

  It has been easily reproduced with systemd on Xenial and Bionic so
  far.
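
  By analogy with the dovecot example in the comment above, the failure
  can likely be avoided by pinning the strict dependency to the same
  version explicitly (a sketch using the versions quoted in this
  description):

  $ apt-get install systemd=229-4ubuntu21.1 libsystemd0=229-4ubuntu21.1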

  [1] 

[Touch-packages] [Bug 1788643] Re: zombies pile up, system becomes unresponsive

2018-08-29 Thread David Coronel
Hi Ivan,

I launched a Hadoop HDInsight cluster in Azure and I see it runs on 6
nodes (2 head nodes and 4 worker nodes). Are those the kind of nodes
this issue is about? If so, is there a way I can get access to those
nodes, or is access intentionally blocked for Azure users?

In any case, I think the first step in troubleshooting this issue would
be to run an sosreport while the system is experiencing the issue.

Install sosreport:

sudo apt update
sudo apt install sosreport


Run sosreport:

sudo sosreport -a

A root-only owned archive and a checksum file will be created in /tmp.

If you need help sending us the sosreport privately, you can talk to
Stephen Zarkos who has access to our support portal and FTP server.

Thanks,
David

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1788643

Title:
  zombies pile up, system becomes unresponsive

Status in systemd package in Ubuntu:
  New



[Touch-packages] [Bug 1788643] Re: zombies pile up, system becomes unresponsive

2018-08-28 Thread David Coronel
Hi Ivan. I'll have to learn about HDInsight then. I was hoping to get a
list of steps I can follow to fast track the learning process. I'll see
what I can do this week. Thanks.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1788643

Title:
  zombies pile up, system becomes unresponsive

Status in systemd package in Ubuntu:
  New



[Touch-packages] [Bug 1788643] Re: zombies pile up, system becomes unresponsive

2018-08-28 Thread David Coronel
Hi Ivan. Can you post steps to reproduce this issue? Or at least steps
to get a similar environment running to get familiar with this bug? (for
those who are not familiar with HDInsight). Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1788643

Title:
  zombies pile up, system becomes unresponsive

Status in systemd package in Ubuntu:
  New



[Touch-packages] [Bug 1755863] Re: netbooting the bionic live CD over NFS goes straight to maintenance mode :

2018-05-28 Thread David Coronel
I confirm the "toram" workaround from Woodrow allows me to PXE netboot
the most recent Ubuntu 18.04 Desktop amd64 ISO image.
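
For reference, "toram" is just an extra casper kernel parameter; in a
pxelinux setup it goes on the APPEND line, roughly like this (a sketch;
the paths, NFS server address and export are assumptions for
illustration):

LABEL bionic-live
  KERNEL casper/vmlinuz
  APPEND initrd=casper/initrd boot=casper netboot=nfs nfsroot=192.168.0.1:/srv/nfs/bionic toram ---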

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1755863

Title:
  netbooting the bionic live CD over NFS goes straight to maintenance
  mode :

Status in casper package in Ubuntu:
  Confirmed
Status in systemd package in Ubuntu:
  Confirmed

Bug description:
  netbooting the bionic live CD[1] over NFS goes straight to maintenance
  mode :

  [1] http://cdimage.ubuntu.com/daily-live/current/

  # casper.log
  Begin: Adding live session user... ... dbus-daemon[568]: [session uid=999 
pid=568] Activating service name='org.gtk.vfs.Daemon' requested by ':1.0' 
(uid=999 pid=569 comm="" label="unconfined")
  dbus-daemon[568]: [session uid=999 pid=568] Successfully activated service 
'org.gtk.vfs.Daemon'
  dbus-daemon[568]: [session uid=999 pid=568] Activating service 
name='org.gtk.vfs.Metadata' requested by ':1.0' (uid=999 pid=569 comm="" 
label="unconfined")
  fuse: device not found, try 'modprobe fuse' first
  dbus-daemon[568]: [session uid=999 pid=568] Successfully activated service 
'org.gtk.vfs.Metadata'

  (gvfsd-metadata:580): GUdev-CRITICAL **: 16:28:56.270:
  g_udev_device_has_property: assertion 'G_UDEV_IS_DEVICE (device)'
  failed

  (gvfsd-metadata:580): GUdev-CRITICAL **: 16:28:56.270: 
g_udev_device_has_property: assertion 'G_UDEV_IS_DEVICE (device)' failed
  A connection to the bus can't be made
  done.
  Begin: Setting up init... ... done.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/casper/+bug/1755863/+subscriptions



[Touch-packages] [Bug 1718966] Re: Cannot install snaps on Ubuntu 14.04 with /var on its own partition

2017-10-03 Thread David Coronel
I tested the fix from ppa:inaddy/lp1718966 and I am now able to install
snaps on my two test systems without using the /etc/fstab workaround,
one with LVM and one without. Both systems have /var isolated. Thanks!

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1718966

Title:
  Cannot install snaps on Ubuntu 14.04 with /var on its own partition

Status in snapd:
  Invalid
Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Trusty:
  In Progress

Bug description:
  [Impact]

  Installing snaps is not possible on Ubuntu 14.04 when /var is on its
  own partition, whether it's LVM or not.

  The issue is with the core snap being unable to mount:

  The error with /var isolated and using LVM:

  root@ubuntu:~# snap install canonical-livepatch
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
  )

  The error with /var isolated but without using LVM:

  root@ubuntu:~# snap install canonical-livepatch
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-disk-by\x2duuid-7383abd2\x2d019c\x2d46c2\x2d8b36\x2d34633cc8f3ca.service
 failed to load: No such file or directory. See system logs and 'systemctl 
status 
systemd-fsck@dev-disk-by\x2duuid-7383abd2\x2d019c\x2d46c2\x2d8b36\x2d34633cc8f3ca.service'
 for details.
  )

  The same error happens if I try to install the hello-world snap (with
  LVM in this example):

  root@ubuntu:~# snap install hello-world
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
  )

  I cannot reproduce the issue in Ubuntu 16.04.

  I couldn't reproduce this issue using the Ubuntu 14.04 cloud image,
  which doesn't isolate /var on its own partition. I tried adding a
  secondary disk to that cloud image VM and creating a dummy VG and LV,
  but couldn't reproduce the issue.

  I also could not reproduce it on Ubuntu 14.04 (with LVM or not) with
  only a / partition and swap.

  [Test Case]

  # Install Ubuntu 14.04 in KVM (I used the 14.04.4 server iso) and
  configure /, /var and swap on their own partitions (with LVM or not,
  the issue happens in both situations).

  root@ubuntu:~# lvs
  LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
  rootvol vg00 -wi-ao--- 3.72g
  swap vg00 -wi-ao--- 3.72g
  varvol vg00 -wi-ao--- 3.72g

  root@ubuntu:~# df -h
  Filesystem               Size  Used Avail Use% Mounted on
  udev                     484M  4.0K  484M   1% /dev
  tmpfs                    100M  988K   99M   1% /run
  /dev/dm-0                3.7G  1.7G  1.8G  49% /
  none                     4.0K     0  4.0K   0% /sys/fs/cgroup
  none                     5.0M     0  5.0M   0% /run/lock
  none                     497M     0  497M   0% /run/shm
  none                     100M     0  100M   0% /run/user
  /dev/mapper/vg00-varvol  3.7G  716M  2.8G  21% /var

  # Upgrade system, install snapd and reboot

  root@ubuntu:~# apt update
  root@ubuntu:~# apt upgrade -y
  root@ubuntu:~# apt install -y snapd
  root@ubuntu:~# reboot

  # After reboot, check kernel version and try to install the canonical-
  livepatch snap:

  root@ubuntu:~# uname -a
  Linux ubuntu 4.4.0-96-generic #119~14.04.1-Ubuntu SMP Wed Sep 13 08:40:48 UTC 
2017 x86_64 x86_64 x86_64 GNU/Linux

  root@ubuntu:~# snap list
  No snaps are installed yet. Try "snap install hello-world".

  root@ubuntu:~# snap install canonical-livepatch
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
  )

  [Regression Potential]

  - A unit file has been added to systemd; it could cause an error in
some units' initialization. Since systemd is not used as the init system
on Trusty, this is a minor regression.
  - It could break systemd units that depend (After=/Wants=) on
systemd-fsck@.service, but those are already broken.

  [Other Info]

  [Original Description]

  Installing snaps is not possible on Ubuntu 14.04 when /var is on its
  own partition, whether it's LVM or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/snapd/+bug/1718966/+subscriptions


[Touch-packages] [Bug 1718966] Re: Cannot install snaps on Ubuntu 14.04 with /var on its own partition

2017-09-29 Thread David Coronel
I confirm the workaround works for me:

Before:

root@ubuntu:~# cat /etc/fstab
/dev/mapper/vg00-rootvol  /     ext4  errors=remount-ro  0  1
/dev/mapper/vg00-varvol   /var  ext4  defaults           0  2
/dev/mapper/vg00-swap     none  swap  sw                 0  0

root@ubuntu:~# snap install hello-world
error: cannot perform the following tasks:
- Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
)


After:

root@ubuntu:~# cat /etc/fstab
/dev/mapper/vg00-rootvol  /     ext4  errors=remount-ro  0  1
/dev/mapper/vg00-varvol   /var  ext4  defaults           0  0
/dev/mapper/vg00-swap     none  swap  sw                 0  0

root@ubuntu:~# snap install hello-world
hello-world 6.3 from 'canonical' installed

root@ubuntu:~# snap install canonical-livepatch
canonical-livepatch 7 from 'canonical' installed

root@ubuntu:~# sudo canonical-livepatch enable 

root@ubuntu:~# canonical-livepatch status 
kernel: 4.4.0-96.119~14.04.1-generic
fully-patched: true
version: ""


Thanks Rafael!

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1718966

Title:
  Cannot install snaps on Ubuntu 14.04 with /var on its own partition

Status in snapd:
  In Progress
Status in systemd package in Ubuntu:
  In Progress
Status in systemd source package in Trusty:
  In Progress

Bug description:
  Installing snaps is not possible on Ubuntu 14.04 when /var is on its
  own partition, whether it's LVM or not.

  The issue is with the core snap being unable to mount:

  The error with /var isolated and using LVM:

  root@ubuntu:~# snap install canonical-livepatch
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
  )

  The error with /var isolated but without using LVM:

  root@ubuntu:~# snap install canonical-livepatch
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-disk-by\x2duuid-7383abd2\x2d019c\x2d46c2\x2d8b36\x2d34633cc8f3ca.service
 failed to load: No such file or directory. See system logs and 'systemctl 
status 
systemd-fsck@dev-disk-by\x2duuid-7383abd2\x2d019c\x2d46c2\x2d8b36\x2d34633cc8f3ca.service'
 for details.
  )

  The same error happens if I try to install the hello-world snap (with
  LVM in this example):

  root@ubuntu:~# snap install hello-world
  error: cannot perform the following tasks:
  - Mount snap "core" (2898) ([start snap-core-2898.mount] failed with exit 
status 6: Failed to issue method call: Unit 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service failed to load: No such file or 
directory. See system logs and 'systemctl status 
systemd-fsck@dev-mapper-vg00\x2dvarvol.service' for details.
  )

  I cannot reproduce the issue in Ubuntu 16.04.

  I couldn't reproduce this issue using the Ubuntu 14.04 cloud image,
  which doesn't isolate /var on its own partition. I tried adding a
  secondary disk to that cloud image VM and creating a dummy VG and LV,
  but couldn't reproduce the issue.

  I also could not reproduce it on Ubuntu 14.04 (with LVM or not) with
  only a / partition and swap.

  
  Steps to reproduce:
  ===

  # Install Ubuntu 14.04 in KVM (I used the 14.04.4 server iso) and
  configure /, /var and swap on their own partitions (with LVM or not,
  the issue happens in both situations).

  root@ubuntu:~# lvs
  LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
  rootvol vg00 -wi-ao--- 3.72g 
  swap vg00 -wi-ao--- 3.72g 
  varvol vg00 -wi-ao--- 3.72g 

  root@ubuntu:~# df -h
  Filesystem   Size  Used Avail Use% Mounted on
  udev 484M  4.0K  484M   1% /dev
  tmpfs100M  988K   99M   1% /run
  /dev/dm-03.7G  1.7G  1.8G  49% /
  none 4.0K 0  4.0K   0% /sys/fs/cgroup
  none 5.0M 0  5.0M   0% /run/lock
  none 497M 0  497M   0% /run/shm
  none 100M 0  100M   0% /run/user
  /dev/mapper/vg00-varvol  3.7G  716M  2.8G  21% /var

  
  # Upgrade system, install snapd and reboot

  root@ubuntu:~# apt update
  root@ubuntu:~# apt upgrade -y
  root@ubuntu:~# apt install -y snapd
  root@ubuntu:~# reboot

  
  # After reboot, check kernel version and try to 

[Touch-packages] [Bug 1176046] Re: isc-dhcp dhclient listens on extra random ports

2017-05-25 Thread David Coronel
I installed isc-dhcp-client-noddns from trusty-proposed and I don't get
the two random extra high ports anymore:

Before:

# dpkg -l | grep isc-dhcp-
ii  isc-dhcp-client  4.2.4-7ubuntu12.9 amd64
ISC DHCP client
ii  isc-dhcp-common  4.2.4-7ubuntu12.9 amd64
common files used by all the isc-dhcp* packages

# netstat -anputa | grep -i dhclient
udp0  0 0.0.0.0:68  0.0.0.0:*   
716/dhclient
udp0  0 0.0.0.0:42496   0.0.0.0:*   
716/dhclient
udp6   0  0 :::7781 :::*
716/dhclient

After:

# dpkg -l | grep isc-dhcp-
ii  isc-dhcp-client  4.2.4-7ubuntu12.10 
amd64ISC DHCP client
ii  isc-dhcp-client-noddns   4.2.4-7ubuntu12.10 
amd64Dynamic DNS (DDNS) disabled DHCP client
ii  isc-dhcp-common  4.2.4-7ubuntu12.10 
amd64common files used by all the isc-dhcp* packages

# netstat -anputa | grep -i dhclient
udp0  0 0.0.0.0:68  0.0.0.0:*   
13411/dhclient  


And if I remove the isc-dhcp-client-noddns package, I get the 2 random ports 
again:

# dpkg -l | grep isc-dhcp-
ii  isc-dhcp-client  4.2.4-7ubuntu12.10 
amd64ISC DHCP client
ii  isc-dhcp-common  4.2.4-7ubuntu12.10 
amd64common files used by all the isc-dhcp* packages

# netstat -anputa | grep -i dhclient
udp0  0 0.0.0.0:68  0.0.0.0:*   
565/dhclient
udp0  0 0.0.0.0:63510   0.0.0.0:*   
565/dhclient
udp6   0  0 :::44100:::*
565/dhclient


Looks good to me.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to isc-dhcp in Ubuntu.
https://bugs.launchpad.net/bugs/1176046

Title:
  isc-dhcp dhclient listens on extra random ports

Status in isc-dhcp package in Ubuntu:
  Fix Released
Status in isc-dhcp source package in Trusty:
  Fix Committed
Status in isc-dhcp source package in Xenial:
  Fix Committed
Status in isc-dhcp source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

  In Trusty there is only one version of dhclient, built with #define
NSUPDATE, which introduces DDNS functionality. The DDNS functionality
opens 2 extra random ports between 1024 and 65535.

  Impact reported by users:

  "One impact of these random ports is that security hardening becomes more 
difficult. The purpose of these random ports and security implications are 
unknown."
  "We have software that was using one of the lower udp ports but it happened 
to collide with dhclient which seems to allocate 2 random ports."

  There is a randomization mechanism in libdns that prevents dhclient
  from taking the sysctl values (net.ipv4.ip_local_port_range &
  net.ipv4.ip_local_reserved_ports) into account to work around this,
  and after discussion, isc-dhcp upstream doesn't want to rely on the
  kernel for randomization.

  There is no runtime configuration to disable the feature or work
  around it; the only possible way is at compile time.

  I also talked with the upstream maintainers, and there is no way they
  will agree to reduce the range (1024-65535), for security reasons:
  reducing the port range may facilitate spoofing.

  Xenial has dhclient separated into two packages:

  isc-dhcp-client: dhclient with DDNS functionality disabled (no extra
random ports)
  isc-dhcp-client-ddns: dhclient with DDNS functionality enabled (with
extra random ports)

  The goal here is to reproduce the same situation in Trusty, so that
  this bug is less painful, at least for users who don't require DDNS
  functionality.

  [Test Case]

  Run a Trusty image with following package :
  isc-dhcp-client
  isc-dhcp-common

  ```
  dhclient 1110 root 6u IPv4 11535 0t0 UDP *:bootpc
  dhclient 1110 root 20u IPv4 11516 0t0 UDP *:64589 # <--- extra random 
port
  dhclient 1110 root 21u IPv6 11517 0t0 UDP *:7749  # <--- extra random 
port
  ```

  [Regression Potential]

  I did the split such that Trusty users will automatically get "isc-
  dhcp-client-ddns" installed but users bothered by this bug will have
  the option to switch to "isc-dhcp-client-noddns".

  Existing Trusty users can continue to use this DDNS functionality
  after the SRU without any necessary intervention.

  With isc-dhcp-client:
  dhclient 1110 root 6u IPv4 11535 0t0 UDP *:bootpc
  dhclient 1110 root 20u IPv4 11516 0t0 UDP *:64589 # <--- extra random 
port
  dhclient 1110 root 21u IPv6 11517 0t0 UDP *:7749  # <--- extra random 
port

  With isc-dhcp-client-noddns :
  

[Touch-packages] [Bug 1176046] Re: isc-dhcp dhclient listens on extra random ports

2016-11-24 Thread David Coronel
I confirm I have the same behavior in Ubuntu 14.04.5 LTS with isc-dhcp-
client version 4.2.4-7ubuntu12.7

I recompiled the isc-dhcp-client with the fix from
http://forums.debian.net/viewtopic.php?f=10&t=95273 and I no longer have
the issue:

BEFORE:

root@trustydhclient:~# lsof -i udp 
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 
dhclient 521 root 5u IPv4 7553 0t0 UDP *:bootpc 
dhclient 521 root 20u IPv4 7479 0t0 UDP *:50522 
dhclient 521 root 21u IPv6 7480 0t0 UDP *:17754 

AFTER:

root@trustydhclient:~# lsof -i udp 
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME 
dhclient 524 root 5u IPv4 7613 0t0 UDP *:bootpc
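
The rebuild steps were roughly the following (a sketch; the exact source
file the forum patch touches is whatever disables the NSUPDATE/DDNS code
before compiling):

# apt-get source isc-dhcp-client
# apt-get build-dep isc-dhcp
# cd isc-dhcp-4.2.4*   # apply the forum patch here, then rebuild:
# debuild -us -uc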

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to isc-dhcp in Ubuntu.
https://bugs.launchpad.net/bugs/1176046

Title:
  isc-dhcp dhclient listens on extra random ports

Status in isc-dhcp package in Ubuntu:
  Triaged

Bug description:
  Ubuntu 13.04 Server 64-bit.  Fresh install.  Only one network adapter.

  dhclient process is listening on two randomly chosen UDP ports in
  addition to the usual port 68. This appears to be a bug in the
  discovery code for probing information on interfaces in the system.

  Initial research of the code also suggested omapi, but adding an omapi
  port to /etc/dhcp/dhclient.conf only opened a fourth port, with the
  two random UDP ports still enabled.

  Version of included distro dhclient was 4.2.4.  I also tested with the
  latest isc-dhclient-4.2.5-P1 and got the same results.

  Debian has the same bug:
  http://forums.debian.net/viewtopic.php?f=10&t=95273&p=495605#p495605

  One impact of these random ports is that security hardening becomes
  more difficult.  The purpose of these random ports and security
  implications are unknown.

  
  Example netstat -lnp  output:

  udp0  0 0.0.0.0:21117   0.0.0.0:* 
  2659/dhclient   
  udp0  0 0.0.0.0:68  0.0.0.0:* 
  2659/dhclient   
  udp6   0  0 :::45664:::*  
  2659/dhclient

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/1176046/+subscriptions



[Touch-packages] [Bug 1303769] Re: xubuntu 14.04 service start, stop, does not work in terminal emulation

2016-11-09 Thread David Coronel
I just hit this bug with a fresh install of Ubuntu 14.04.4 Desktop.

Inside Unity, if I open a terminal and run "service ssh status" I get:
status: Unknown job: ssh

But if I ssh to that box as the same user, the same command works
fine.

The workarounds are to run "sudo service ssh status" or to "unset
UPSTART_SESSION" before running "service ssh status".

Can anyone look at this?

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upstart in Ubuntu.
https://bugs.launchpad.net/bugs/1303769

Title:
  xubuntu 14.04 service start,stop, does not work in terminal emulation

Status in upstart package in Ubuntu:
  Confirmed

Bug description:
  xubuntu from installer-amd64/20101020ubuntu313: terminal emulation
  cannot change service status.

  e.g.

  > service smbd restart
  stop: Unknown job: smbd
  start: Unknown job: smbd


  However, on tty1 this works as expected, i.e. smbd is stopped and
  then started again, with a process number given in the output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/1303769/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1303769] Re: xubuntu 14.04 service start, stop, does not work in terminal emulation

2016-11-09 Thread David Coronel
** Changed in: upstart (Ubuntu)
 Assignee: Ahmad Bin Musa (ahmadbinmusa) => (unassigned)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to upstart in Ubuntu.
https://bugs.launchpad.net/bugs/1303769

Title:
  xubuntu 14.04 service start,stop, does not work in terminal emulation

Status in upstart package in Ubuntu:
  Confirmed

Bug description:
  xubuntu from installer-amd64/20101020ubuntu313: terminal emulation
  cannot change service status.

  e.g.

  > service smbd restart
  stop: Unknown job: smbd
  start: Unknown job: smbd


  However, on tty1 this works as expected, i.e. smbd is stopped and
  then started again, with a process number given in the output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/1303769/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1607815] [NEW] After suspend, connected to wifi but wifi indicator not showing signal strength

2016-07-29 Thread David Coronel
Public bug reported:

I have an issue that is similar to the following bugs:

https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1574347
https://bugs.launchpad.net/ubuntu/+source/wpa/+bug/1422143

I am running Xenial 16.04.1. I did an apt-get dist-upgrade on July 29th
2016, rebooted after the upgrade, and put the laptop in suspend mode
before going to bed. This morning (July 30th), I resumed my laptop and
saw up-down arrows instead of a wifi icon in the system tray at the top
right. After about 15 seconds, the wifi icon appeared in place of the
arrows and did a little scan animation. After that, my Internet works,
but I can't see the strength bars on the icon (as if the signal
strength were zero bars). When I click on the icon, I can't see my wifi
network anywhere in the list of networks.

My wifi connection works, but the icon does not show it.

A "sudo service network-manager restart" fixes this issue. I can then
see my wifi network in the Wi-Fi Networks list.
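
A possibly lighter workaround, assuming nmcli from network-manager 1.2
behaves as documented, is to force a rescan instead of restarting the
whole daemon:

```
nmcli device wifi rescan   # ask NetworkManager to rescan access points
nmcli device wifi list     # the Wi-Fi list should repopulate after the rescan
```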

---

$ lsb_release -rd
Description:Ubuntu 16.04.1 LTS
Release:16.04

$ apt-cache policy network-manager
network-manager:
  Installed: 1.2.0-0ubuntu0.16.04.3
  Candidate: 1.2.0-0ubuntu0.16.04.3
  Version table:
 *** 1.2.0-0ubuntu0.16.04.3 500
500 http://ca.archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
100 /var/lib/dpkg/status
 1.1.93-0ubuntu4 500
500 http://ca.archive.ubuntu.com/ubuntu xenial/main amd64 Packages


Attached is a screenshot of the issue.

** Affects: network-manager (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot of the problem"
   
https://bugs.launchpad.net/bugs/1607815/+attachment/4709521/+files/wifi-issue.jpg

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1607815

Title:
  After suspend, connected to wifi but wifi indicator not showing signal
  strength

Status in network-manager package in Ubuntu:
  New

Bug description:
  I have an issue that is similar to the following bugs:

  https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1574347
  https://bugs.launchpad.net/ubuntu/+source/wpa/+bug/1422143

  I am running Xenial 16.04.1. I did an apt-get dist-upgrade on July
  29th 2016, rebooted after the upgrade, and put the laptop in suspend
  mode before going to bed. This morning (July 30th), I resumed my
  laptop and saw up-down arrows instead of a wifi icon in the system
  tray at the top right. After about 15 seconds, the wifi icon appeared
  in place of the arrows and did a little scan animation. After that,
  my Internet works, but I can't see the strength bars on the icon (as
  if the signal strength were zero bars). When I click on the icon, I
  can't see my wifi network anywhere in the list of networks.

  My wifi connection works, but the icon does not show it.

  A "sudo service network-manager restart" fixes this issue. I can then
  see my wifi network in the Wi-Fi Networks list.

  ---

  $ lsb_release -rd
  Description:  Ubuntu 16.04.1 LTS
  Release:  16.04

  $ apt-cache policy network-manager
  network-manager:
Installed: 1.2.0-0ubuntu0.16.04.3
Candidate: 1.2.0-0ubuntu0.16.04.3
Version table:
   *** 1.2.0-0ubuntu0.16.04.3 500
  500 http://ca.archive.ubuntu.com/ubuntu xenial-updates/main amd64 
Packages
  100 /var/lib/dpkg/status
   1.1.93-0ubuntu4 500
  500 http://ca.archive.ubuntu.com/ubuntu xenial/main amd64 Packages

  
  Attached is a screenshot of the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1607815/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1590838] [NEW] shutdown -r + does not work without dbus in xenial LXC container

2016-06-09 Thread David Coronel
Public bug reported:

The command "shutdown -r +5" doesn't work in a xenial lxc container. I
found this out when I tried to use Landscape to restart a xenial lxc
container and the operation failed. The Landscape team told me that the
restart button simply runs "shutdown -r +5".

The problem seems to be that the dbus package is missing in the xenial
lxc image.

This is what happens:

root@xenialtest:/# shutdown -r +5
Failed to connect to bus: No such file or directory
Failed to connect to bus: No such file or directory

And if I install dbus:

root@xenialtest:/# shutdown -r +5
Shutdown scheduled for Wed 2016-06-08 19:28:44 UTC, use 'shutdown -c' to cancel.
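
So the workaround inside an affected container boils down to the
following; whether the image should ship dbus by default is the open
question (the comment about why dbus is needed is my assumption):

```
apt-get update && apt-get install -y dbus  # shutdown talks to PID 1 over the D-Bus socket
shutdown -r +5                             # now schedules the reboot
shutdown -c                                # cancel again if this was only a test
```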

The issue happens whether I use the download template or the ubuntu
template when creating the LXC container:

root@davecorelaptop:/var/cache/lxc# lxc-create -t ubuntu -n test2
root@davecorelaptop:/var/cache/lxc# lxc-start -d -n test2
root@davecorelaptop:/var/cache/lxc# lxc-attach -n test2

root@test2:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 16.04 LTS
Release:16.04
Codename:   xenial

root@test2:/# shutdown -r +5
Failed to connect to bus: No such file or directory
Failed to connect to bus: No such file or directory


or


root@davecorelaptop:~# lxc-create -t download -n test
Setting up the GPG keyring
Downloading the image index

---
DIST    RELEASE ARCH    VARIANT BUILD
---
alpine  3.0 amd64   default 20160608_18:03
alpine  3.0 i386default 20160608_17:50
alpine  3.1 amd64   default 20160608_17:50
alpine  3.1 i386default 20160608_18:03
alpine  3.2 amd64   default 20160608_17:50
alpine  3.2 i386default 20160608_17:50
alpine  3.3 amd64   default 20160608_17:50
alpine  3.3 i386default 20160608_17:50
alpine  edgeamd64   default 20160608_17:50
alpine  edgei386default 20160608_17:50
centos  6   amd64   default 20160609_02:16
centos  6   i386default 20160609_02:16
centos  7   amd64   default 20160609_02:16
debian  jessie  amd64   default 20160608_22:42
debian  jessie  arm64   default 20160609_02:38
debian  jessie  armel   default 20160608_22:42
debian  jessie  armhf   default 20160608_22:42
debian  jessie  i386default 20160608_22:42
debian  jessie  powerpc default 20160608_22:42
debian  jessie  ppc64el default 20160608_22:42
debian  jessie  s390x   default 20160608_22:42
debian  sid amd64   default 20160608_22:42
debian  sid arm64   default 20160608_22:42
debian  sid armel   default 20160608_22:42
debian  sid armhf   default 20160608_22:42
debian  sid i386default 20160608_22:42
debian  sid powerpc default 20160608_22:42
debian  sid ppc64el default 20160608_22:42
debian  sid s390x   default 20160608_22:42
debian  stretch amd64   default 20160608_22:42
debian  stretch arm64   default 20160608_22:42
debian  stretch armel   default 20160608_22:42
debian  stretch armhf   default 20160608_22:42
debian  stretch i386default 20160608_22:42
debian  stretch powerpc default 20160608_22:42
debian  stretch ppc64el default 20160608_22:42
debian  stretch s390x   default 20160608_22:42
debian  wheezy  amd64   default 20160608_22:42
debian  wheezy  armel   default 20160608_22:42
debian  wheezy  armhf   default 20160608_22:42
debian  wheezy  i386default 20160608_22:42
debian  wheezy  powerpc default 20160609_02:38
debian  wheezy  s390x   default 20160608_22:42
fedora  22  amd64   default 20160609_01:27
fedora  22  armhf   default 20160112_01:27
fedora  22  i386default 20160609_01:27
fedora  23  amd64   default 20160609_01:27
fedora  23  i386default 20160609_01:27
gentoo  current amd64   default 20160608_14:12
gentoo  current i386default 20160608_14:12
opensuse13.2amd64   default 20160609_00:53
oracle  6   amd64   default 20160609_11:40
oracle  6   i386default 20160609_11:40
oracle  7   amd64   default 20160609_11:40
plamo   5.x amd64   default 20160608_21:36
plamo   5.x i386default 20160608_21:36
plamo   6.x amd64   default 20160608_21:36
plamo   6.x i386default 20160608_21:36
ubuntu  precise amd64   default 20160609_03:49
ubuntu  precise armel   default 20160609_03:49
ubuntu  precise armhf   default 20160609_03:49
ubuntu  precise i386default 20160609_03:49
ubuntu  precise powerpc default 20160609_03:49
ubuntu  trusty  amd64   default 20160609_03:49
ubuntu  trusty  arm64   default 20160609_03:49
ubuntu  trusty  armhf   default 20160609_03:49
ubuntu  trusty  i386default 20160609_03:49
ubuntu  trusty  powerpc default 20160609_03:49
ubuntu  trusty  ppc64el default 20160609_03:49
ubuntu  wilyamd64   default 20160609_03:49
ubuntu  wilyarm64   default 20160609_03:49
ubuntu  wilyarmhf   default 20160609_03:49
ubuntu  wilyi386default 20160609_03:49
ubuntu  wilypowerpc default 20160609_03:49
ubuntu  wilyppc64el default 20160609_07:06
ubuntu  xenial  amd64   default 20160609_03:49