[Bug 1927078] Re: Don't allow useradd to use fully numeric names

2021-05-04 Thread Victor Tapia
I don't have a strong opinion either, but given that scripts would
ignore the warnings and the resulting numeric users are going to face
random, seemingly unrelated issues thanks to the interaction with
systemd, I think I prefer the failure.

FWIW, I've prepared a test version in a PPA[1] which keeps the rules
from Debian[2] but prevents the fully numeric names. This is what it
looks like:

$ useradd 0
useradd: invalid user name '0'

$ echo $?
3

$ sudo useradd 0c0

$ sudo useradd 0 --badnames

$ cat /etc/passwd | grep ^0
0c0:x:1001:1001::/home/0c0:/bin/sh
0:x:1002:1002::/home/0:/bin/sh
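
For scripts that create users non-interactively, the stricter default is
easier to handle explicitly than by parsing warnings; a minimal sketch of a
guard (hypothetical wrapper, not part of the PPA; exit status 3 matches the
useradd failure shown above):

name="$1"
if printf '%s' "$name" | grep -Eq '^[0-9]+$'; then
    echo "refusing fully numeric user name '$name'" >&2
    exit 3
fi
sudo useradd -m "$name"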



[1] https://launchpad.net/~vtapia/+archive/ubuntu/sf305373
[2] 
https://salsa.debian.org/debian/shadow/-/blob/master/debian/patches/506_relaxed_usernames


[Bug 1927078] [NEW] Don't allow useradd to use fully numeric names

2021-05-04 Thread Victor Tapia
Public bug reported:

[Description]

Support for fully numeric user names in Ubuntu is inconsistent from Focal
onwards: systemd does not accept them[1], but useradd still allows them by
default, leaving the session behavior in the hands of the running
applications. Two examples:

1. After creating a user named "0", the user can log in via ssh or
console but loginctl won't create a session for it:

root@focal:/home/ubuntu# useradd -m 0
root@focal:/home/ubuntu# id 0
uid=1005(0) gid=1005(0) groups=1005(0)

..

0@192.168.122.6's password:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.8.0-48-generic x86_64)

Last login: Thu Apr  8 16:17:06 2021 from 192.168.122.1
$ loginctl
No sessions.
$ w
 16:20:09 up 4 min,  1 user,  load average: 0.03, 0.14, 0.08
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
0        pts/0    192.168.122.1    16:17    0.00s  0.00s  0.00s w

And pam_systemd logs the following messages:

Apr 08 16:17:06 focal sshd[1584]: pam_unix(sshd:session): session opened for user 0 by (uid=0)
Apr 08 16:17:06 focal sshd[1584]: pam_systemd(sshd:session): pam-systemd initializing
Apr 08 16:17:06 focal sshd[1584]: pam_systemd(sshd:session): Failed to get user record: Invalid argument


2. With that same username, every successful authentication in gdm will loop
back to gdm again instead of starting GNOME, making the user unable to log in.


Making useradd fail (unless --badnames is set) when a fully numeric name is 
used will make the default OS behavior consistent.
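
Existing systems can also be audited for names that will already misbehave
under systemd; a read-only sketch (assumes the standard passwd(5) layout):

$ awk -F: '$1 ~ /^[0-9]+$/ {print $1}' /etc/passwd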


[Other info]

- Upstream does not support fully numeric usernames
- useradd has a --badnames parameter that still allows the use of this type
of name

** Affects: shadow (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: shadow (Ubuntu Focal)
 Importance: Undecided
 Status: New

** Affects: shadow (Ubuntu Groovy)
 Importance: Undecided
 Status: New

** Affects: shadow (Ubuntu Hirsute)
 Importance: Undecided
 Status: New

** Affects: shadow (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Also affects: shadow (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: shadow (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: shadow (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: shadow (Ubuntu Impish)
   Importance: Undecided
   Status: New


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-12 Thread Victor Tapia
#VERIFICATION GROOVY

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l | grep libvirt
ii  libvirt-clients                6.6.0-1ubuntu3.3  amd64  Programs for the libvirt library
ii  libvirt-daemon                 6.6.0-1ubuntu3.3  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu     6.6.0-1ubuntu3.3  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system          6.6.0-1ubuntu3.3  amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd  6.6.0-1ubuntu3.3  amd64  Libvirt daemon configuration files (systemd)
ii  libvirt0:amd64                 6.6.0-1ubuntu3.3  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='0'>
  </cells>
</topology>

The package in -proposed fixes the issue (output shortened):

$ dpkg -l | grep libvirt
ii  libvirt-clients                6.6.0-1ubuntu3.4  amd64  Programs for the libvirt library
ii  libvirt-daemon                 6.6.0-1ubuntu3.4  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu     6.6.0-1ubuntu3.4  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system          6.6.0-1ubuntu3.4  amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd  6.6.0-1ubuntu3.4  amd64  Libvirt daemon configuration files (systemd)
ii  libvirt0:amd64                 6.6.0-1ubuntu3.4  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5023436</memory>
      <pages unit='KiB' size='4'>1255859</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>

** Tags removed: verification-needed verification-needed-groovy 
verification-needed-xenial
** Tags added: verification-done verification-done-groovy 
verification-done-xenial


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-12 Thread Victor Tapia
#VERIFICATION XENIAL

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l | grep libvirt
ii  libvirt-bin     1.3.1-1ubuntu10.30  amd64  programs for the libvirt library
ii  libvirt0:amd64  1.3.1-1ubuntu10.30  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
XPath set is empty

The package in -proposed fixes the issue (output shortened):

$ dpkg -l | grep libvirt
ii  libvirt-bin     1.3.1-1ubuntu10.31  amd64  programs for the libvirt library
ii  libvirt0:amd64  1.3.1-1ubuntu10.31  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>4998464</memory>
      <pages unit='KiB' size='4'>1249616</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <pages unit='KiB' size='1048576'>0</pages>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>

NOTE: if the machine is running a 4.4 kernel, numa_all_cpus_ptr->size
(used to set max_n_cpus in libvirt) is 512 instead of 128 and the issue
cannot be triggered (libvirt max vcpu is 255). Any newer kernel, such as
HWE, sets the value to 128, triggering the issue.


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-10 Thread Victor Tapia
#VERIFICATION USSURI

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l |grep libvirt
ii  libvirt-clients                    6.0.0-0ubuntu8.7~cloud0  amd64  Programs for the libvirt library
ii  libvirt-daemon                     6.0.0-0ubuntu8.7~cloud0  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.7~cloud0  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.7~cloud0  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              6.0.0-0ubuntu8.7~cloud0  amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd      6.0.0-0ubuntu8.7~cloud0  amd64  Libvirt daemon configuration files (systemd)
ii  libvirt0:amd64                     6.0.0-0ubuntu8.7~cloud0  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='0'>
  </cells>
</topology>

The package in -proposed fixes the issue (output shortened):

$ dpkg -l |grep libvirt
ii  libvirt-clients                    6.0.0-0ubuntu8.8~cloud0  amd64  Programs for the libvirt library
ii  libvirt-daemon                     6.0.0-0ubuntu8.8~cloud0  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.8~cloud0  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.8~cloud0  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              6.0.0-0ubuntu8.8~cloud0  amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd      6.0.0-0ubuntu8.8~cloud0  amd64  Libvirt daemon configuration files (systemd)
ii  libvirt0:amd64                     6.0.0-0ubuntu8.8~cloud0  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5047560</memory>
      <pages unit='KiB' size='4'>1261890</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>

** Tags removed: verification-stein-needed verification-train-needed 
verification-ussuri-needed
** Tags added: verification-stein-done verification-train-done 
verification-ussuri-done


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-10 Thread Victor Tapia
#VERIFICATION TRAIN

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l |grep libvirt
ii  libvirt-clients                    5.4.0-0ubuntu5.4~cloud0  amd64  Programs for the libvirt library
ii  libvirt-daemon                     5.4.0-0ubuntu5.4~cloud0  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  5.4.0-0ubuntu5.4~cloud0  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              5.4.0-0ubuntu5.4~cloud0  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     5.4.0-0ubuntu5.4~cloud0  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
XPath set is empty

The package in -proposed fixes the issue (output shortened):

$ dpkg -l |grep libvirt
ii  libvirt-clients                    5.4.0-0ubuntu5.4~cloud1.1  amd64  Programs for the libvirt library
ii  libvirt-daemon                     5.4.0-0ubuntu5.4~cloud1.1  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  5.4.0-0ubuntu5.4~cloud1.1  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              5.4.0-0ubuntu5.4~cloud1.1  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     5.4.0-0ubuntu5.4~cloud1.1  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5047560</memory>
      <pages unit='KiB' size='4'>1261890</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-10 Thread Victor Tapia
#VERIFICATION STEIN

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l |grep libvirt
ii  libvirt-clients                    5.0.0-1ubuntu2.6~cloud1  amd64  Programs for the libvirt library
ii  libvirt-daemon                     5.0.0-1ubuntu2.6~cloud1  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  5.0.0-1ubuntu2.6~cloud1  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              5.0.0-1ubuntu2.6~cloud1  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     5.0.0-1ubuntu2.6~cloud1  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
XPath set is empty

The package in -proposed fixes the issue (output shortened):

$ dpkg -l |grep libvirt
ii  libvirt-clients                    5.0.0-1ubuntu2.6~cloud2.1  amd64  Programs for the libvirt library
ii  libvirt-daemon                     5.0.0-1ubuntu2.6~cloud2.1  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  5.0.0-1ubuntu2.6~cloud2.1  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              5.0.0-1ubuntu2.6~cloud2.1  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     5.0.0-1ubuntu2.6~cloud2.1  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5047560</memory>
      <pages unit='KiB' size='4'>1261890</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-10 Thread Victor Tapia
#VERIFICATION BIONIC

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l |grep libvirt
ii  libvirt-bin                        4.0.0-1ubuntu8.17  amd64  programs for the libvirt library
ii  libvirt-clients                    4.0.0-1ubuntu8.17  amd64  Programs for the libvirt library
ii  libvirt-daemon                     4.0.0-1ubuntu8.17  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  4.0.0-1ubuntu8.17  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              4.0.0-1ubuntu8.17  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     4.0.0-1ubuntu8.17  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
XPath set is empty

The package in -proposed fixes the issue (output shortened):

$ dpkg -l |grep libvirt
ii  libvirt-bin                        4.0.0-1ubuntu8.19  amd64  programs for the libvirt library
ii  libvirt-clients                    4.0.0-1ubuntu8.19  amd64  Programs for the libvirt library
ii  libvirt-daemon                     4.0.0-1ubuntu8.19  amd64  Virtualization daemon
ii  libvirt-daemon-driver-storage-rbd  4.0.0-1ubuntu8.19  amd64  Virtualization daemon RBD storage driver
ii  libvirt-daemon-system              4.0.0-1ubuntu8.19  amd64  Libvirt daemon configuration files
ii  libvirt0:amd64                     4.0.0-1ubuntu8.19  amd64  library for interfacing with different virtualization systems


$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5047560</memory>
      <pages unit='KiB' size='4'>1261890</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>


** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-09 Thread Victor Tapia
** Patch removed: "bionic-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5468130/+files/bionic-stein.debdiff

** Patch added: "bionic-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5475028/+files/bionic-stein.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-03-08 Thread Victor Tapia
#VERIFICATION FOCAL

Using the test case described in the description, where a VM has 128
vcpus assigned, the version in -updates does not list the topology:

$ dpkg -l | grep libvirt
ii  libvirt-clients                    6.0.0-0ubuntu8.7  amd64  Programs for the libvirt library
ii  libvirt-daemon                     6.0.0-0ubuntu8.7  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.7  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.7  amd64  Virtualization daemon RBD storage driver
ii  libvirt0:amd64                     6.0.0-0ubuntu8.7  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='0'>
  </cells>
</topology>

The package in -proposed fixes the issue (output shortened):

$ dpkg -l | grep libvirt
ii  libvirt-clients                    6.0.0-0ubuntu8.8  amd64  Programs for the libvirt library
ii  libvirt-daemon                     6.0.0-0ubuntu8.8  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.8  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.8  amd64  Virtualization daemon RBD storage driver
ii  libvirt0:amd64                     6.0.0-0ubuntu8.8  amd64  library for interfacing with different virtualization systems

$ virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5027836</memory>
      <pages unit='KiB' size='4'>1256959</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-27 Thread Victor Tapia
** Patch added: "bionic-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5468130/+files/bionic-stein.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-27 Thread Victor Tapia
** Patch added: "bionic-train.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5468129/+files/bionic-train.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-26 Thread Victor Tapia
** Patch added: "groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5467627/+files/groovy.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-26 Thread Victor Tapia
** Patch added: "focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5467626/+files/focal.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-26 Thread Victor Tapia
** Patch added: "bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5467625/+files/bionic.debdiff


[Bug 1915811] Re: Empty NUMA topology in machines with high number of CPUs

2021-02-26 Thread Victor Tapia
** Patch added: "xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1915811/+attachment/5467624/+files/xenial.debdiff


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-26 Thread Victor Tapia
#VERIFICATION-DONE-FOCAL

Using the webserver.py/client.py scripts defined in the test case
section in the description:

$ dpkg -l | grep twisted
ii  python3-twisted            18.9.0-11ubuntu0.20.04.1  all    Event-based framework for internet applications
ii  python3-twisted-bin:amd64  18.9.0-11ubuntu0.20.04.1  amd64  Event-based framework for internet applications

$ twistd3 -y ./webserver.py

$ ./client.py   
== BODY: --8825899812428059282

--8825899812428059282--

The multipart request works fine (it's logged at the end) and there's no
exception thrown to the logs:

$ tail ./twistd.log
2021-02-26T13:56:41+0100 [-] Loading ./webserver.py...
2021-02-26T13:56:42+0100 [-] Loaded.
2021-02-26T13:56:42+0100 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 18.9.0 (/usr/bin/python3 3.8.5) starting up.
2021-02-26T13:56:42+0100 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
2021-02-26T13:56:42+0100 [-] Site starting on 8080
2021-02-26T13:56:42+0100 [twisted.web.server.Site#info] Starting factory
2021-02-26T13:56:46+0100 [twisted.python.log#info] 127.0.0.1 - - [26/Feb/2021:12:56:46 +0000] "POST / HTTP/1.1" 404 153 "-" "Python-httplib2/0.14.0 (gzip)"

# AUTOPKGTEST NOTES
Autopkgtest failures in automat are due to a missing package in Ubuntu:
python-twisted-core is missing in Focal (like the rest of the python2
packages) but is required by the automat tests. This has been reported in
the following bug:

https://bugs.launchpad.net/ubuntu/+source/automat/+bug/1917041


** Tags removed: verification-needed verification-needed-focal 
verification-needed-groovy
** Tags added: verification-done verification-done-focal 
verification-done-groovy


[Bug 1917041] [NEW] Autopkgtest fails to run because python-twisted-core is not available (Focal)

2021-02-26 Thread Victor Tapia
Public bug reported:

Automat 0.8.0-1ubuntu1 defines the following test dependencies in
debian/tests/control:

Tests: unit-tests-2 unit-tests-3
Depends: @,
 graphviz,
 python-twisted-core,
 python3-twisted,
 python3-graphviz,

But Focal dropped python2 support, including python-twisted-core:
https://launchpad.net/ubuntu/+source/twisted/18.9.0-11

This results in autopkgtest errors such as this one:
https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-focal/focal/amd64/a/automat/20210222_180749_ce476@/log.gz

Newer releases do not depend on this package and do not face this issue.

** Affects: automat (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-26 Thread Victor Tapia
#VERIFICATION-DONE-GROOVY

Using the webserver.py/client.py scripts defined in the test case
section in the description:

$ dpkg -l | grep twisted
ii  python3-twisted            18.9.0-11ubuntu0.20.10.1  all    Event-based framework for internet applications
ii  python3-twisted-bin:amd64  18.9.0-11ubuntu0.20.10.1  amd64  Event-based framework for internet applications

$ twistd3 -y webserver.py

$ ./client.py
== BODY: --8825899812428059282

--8825899812428059282--

The multipart request works fine (it's logged at the end) and there's no
exception thrown to the logs:

$ tail -n10 twistd.log
2021-02-26T12:22:39+0100 [-] Loading webserver.py...
2021-02-26T12:22:39+0100 [-] Loaded.
2021-02-26T12:22:39+0100 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 18.9.0 (/usr/bin/python3 3.8.6) starting up.
2021-02-26T12:22:39+0100 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
2021-02-26T12:22:39+0100 [-] Site starting on 8080
2021-02-26T12:22:39+0100 [twisted.web.server.Site#info] Starting factory
2021-02-26T12:22:45+0100 [twisted.python.log#info] 127.0.0.1 - - [26/Feb/2021:11:22:44 +0000] "POST / HTTP/1.1" 404 153 "-" "Python-httplib2/0.18.1 (gzip)"


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-18 Thread Victor Tapia
** Patch added: "focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/twisted/+bug/1915819/+attachment/5464779/+files/focal.debdiff


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-18 Thread Victor Tapia
** Patch added: "groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/twisted/+bug/1915819/+attachment/5464778/+files/groovy.debdiff


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-18 Thread Victor Tapia
** Patch added: "hirsute.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/twisted/+bug/1915819/+attachment/5464777/+files/hirsute.debdiff


[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-18 Thread Victor Tapia
** Description changed:

  [impact]
  
  python-twisted errors out with "'NoneType' object has no attribute
  'encode' in requestReceived()" when it tries to parse a multipart mime
  message and python3.7+ is used. This happens because before commit
  cc3fa20 in cpython, cgi.parse_multipart ignored parts without a name
  defined in "content-disposition" (or parts without headers for that
  matter) but after 3.7+ the return of the function can now contain
  NoneType keys, which fail to encode.
  
  [scope]
  
- This bug affects all releases
+ Even though this bug affects all python3-twisted releases, I'll backport
+ the fix just to Focal, Groovy and Hirsute. Bionic and Xenial do not have
+ Python 3.7 as the default interpreter (required to trigger the issue),
+ and the delta in python3-twisted might be too big to backport as the
+ current packages do not contemplate _PY37PLUS at all.
  
  Fixed upstream with commit 310496249, available since 21.2.0rc1
  
  [test case]
  
  1. Save the following code as webserver.py
  
  from twisted.application.internet import TCPServer
  from twisted.application.service import Application
  from twisted.web.resource import Resource
  from twisted.web.server import Site
  
- 
  class Foo(Resource):
- def render_POST(self, request):
- newdata = request.content.getvalue()
- print(newdata)
- return ''
- 
+ def render_POST(self, request):
+ newdata = request.content.getvalue()
+ print(newdata)
+ return ''
  
  root = Resource()
  root.putChild("foo", Foo())
  application = Application("cgi.parse_multipart test")
  TCPServer(8080, Site(root)).setServiceParent(application)
  
  2. Save the following code as client.py (python3-httplib2 is required)
  
  #!/usr/bin/env python
  import httplib2
  
- 
  def http_request(url, method, body=None, headers=None, insecure=False):
- """Issue an http request."""
- http = httplib2.Http(disable_ssl_certificate_validation=insecure)
- if isinstance(url, bytes):
- url = url.decode("ascii")
- return http.request(url, method, body=body, headers=headers)
- 
+ """Issue an http request."""
+ http = httplib2.Http(disable_ssl_certificate_validation=insecure)
+ if isinstance(url, bytes):
+ url = url.decode("ascii")
+ return http.request(url, method, body=body, headers=headers)
  
  url = "http://localhost:8080;
  method = "POST"
  headers = {'Content-Type': 'multipart/form-data; 
boundary="8825899812428059282"'}
  emptyh = '--8825899812428059282\n\n--8825899812428059282--'
  
  print("== BODY: " + emptyh + "\n")
  response, content = http_request(url, method, emptyh, headers)
  
  3. Run the server with "twistd3 -y webserver.py"
  4. Run the client
  5. twistd will fail to encode the key and will drop this traceback in the log file (twistd.log)
  
  2021-02-16T13:41:39+0100 [_GenericHTTPChannelProtocol,7,127.0.0.1] Unhandled 
Error
- Traceback (most recent call last):
-   File "/usr/lib/python3/dist-packages/twisted/python/log.py", line 
103, in callWithLogger
- return callWithContext({"system": lp}, func, *args, **kw)
-   File "/usr/lib/python3/dist-packages/twisted/python/log.py", line 
86, in callWithContext
- return context.call({ILogContext: newCtx}, func, *args, **kw)
-   File "/usr/lib/python3/dist-packages/twisted/python/context.py", 
line 122, in callWithContext
- return self.currentContext().callWithContext(ctx, func, *args, 
**kw)
-   File "/usr/lib/python3/dist-packages/twisted/python/context.py", 
line 85, in callWithContext
- return func(*args,**kw)
- ---  ---
-   File 
"/usr/lib/python3/dist-packages/twisted/internet/posixbase.py", line 614, in 
_doReadOrWrite
- why = selectable.doRead()
-   File "/usr/lib/python3/dist-packages/twisted/internet/tcp.py", line 
243, in doRead
- return self._dataReceived(data)
-   File "/usr/lib/python3/dist-packages/twisted/internet/tcp.py", line 
249, in _dataReceived
- rval = self.protocol.dataReceived(data)
-   File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 
2952, in dataReceived
- return self._channel.dataReceived(data)
-   File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 
2245, in dataReceived
- return basic.LineReceiver.dataReceived(self, data)
-   File "/usr/lib/python3/dist-packages/twisted/protocols/basic.py", 
line 579, in dataReceived
- why = self.rawDataReceived(data)
-   File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 
2252, in rawDataReceived
- self._transferDecoder.dataReceived(data)
-   File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 
1699, in dataReceived
- finishCallback(data[contentLength:])
-   File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 
2115, 

[Bug 1915819] Re: 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-18 Thread Victor Tapia
** Also affects: twisted (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: twisted (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: twisted (Ubuntu Groovy)
   Importance: Undecided
   Status: New


[Bug 1915819] [NEW] 'NoneType' object has no attribute 'encode' in requestReceived() when multipart body doesn't include content-disposition

2021-02-16 Thread Victor Tapia
Public bug reported:

[impact]

python-twisted errors out with "'NoneType' object has no attribute
'encode'" in requestReceived() when it tries to parse a multipart MIME
message under Python 3.7+. This happens because, before commit cc3fa20 in
cpython, cgi.parse_multipart ignored parts without a name defined in
"content-disposition" (or parts without headers, for that matter); from 3.7
onwards the function's return value can contain None keys, which fail to
encode.
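
The cpython side of this is visible without twisted at all; a minimal
sketch (assuming Python 3.8; the boundary value is arbitrary and the single
part deliberately carries no Content-Disposition header):

$ python3 -c '
import cgi, io
body = b"--B\r\n\r\n\r\n--B--\r\n"
print(cgi.parse_multipart(io.BytesIO(body), {"boundary": b"B"}))
'

On 3.6 and earlier the nameless part was silently dropped; on 3.7+ the
printed dict can carry a None key, which is what requestReceived() later
fails to encode.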

[scope]

This bug affects all releases

Fixed upstream with commit 310496249, available since 21.2.0rc1

[test case]

1. Save the following code as webserver.py

from twisted.application.internet import TCPServer
from twisted.application.service import Application
from twisted.web.resource import Resource
from twisted.web.server import Site


class Foo(Resource):
    def render_POST(self, request):
        newdata = request.content.getvalue()
        print(newdata)
        return ''


root = Resource()
root.putChild("foo", Foo())
application = Application("cgi.parse_multipart test")
TCPServer(8080, Site(root)).setServiceParent(application)

2. Save the following code as client.py (python3-httplib2 is required)

#!/usr/bin/env python
import httplib2


def http_request(url, method, body=None, headers=None, insecure=False):
    """Issue an http request."""
    http = httplib2.Http(disable_ssl_certificate_validation=insecure)
    if isinstance(url, bytes):
        url = url.decode("ascii")
    return http.request(url, method, body=body, headers=headers)


url = "http://localhost:8080;
method = "POST"
headers = {'Content-Type': 'multipart/form-data; 
boundary="8825899812428059282"'}
emptyh = '--8825899812428059282\n\n--8825899812428059282--'

print("== BODY: " + emptyh + "\n")
response, content = http_request(url, method, emptyh, headers)

3. Run the server with "twistd3 -y webserver.py"
4. Run the client
5. twistd will fail to encode the key and will drop this traceback in the log file (twistd.log)

2021-02-16T13:41:39+0100 [_GenericHTTPChannelProtocol,7,127.0.0.1] Unhandled Error
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/python/log.py", line 103, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
  File "/usr/lib/python3/dist-packages/twisted/python/log.py", line 86, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 122, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/usr/lib/python3/dist-packages/twisted/python/context.py", line 85, in callWithContext
    return func(*args,**kw)
--- <exception caught here> ---
  File "/usr/lib/python3/dist-packages/twisted/internet/posixbase.py", line 614, in _doReadOrWrite
    why = selectable.doRead()
  File "/usr/lib/python3/dist-packages/twisted/internet/tcp.py", line 243, in doRead
    return self._dataReceived(data)
  File "/usr/lib/python3/dist-packages/twisted/internet/tcp.py", line 249, in _dataReceived
    rval = self.protocol.dataReceived(data)
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 2952, in dataReceived
    return self._channel.dataReceived(data)
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 2245, in dataReceived
    return basic.LineReceiver.dataReceived(self, data)
  File "/usr/lib/python3/dist-packages/twisted/protocols/basic.py", line 579, in dataReceived
    why = self.rawDataReceived(data)
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 2252, in rawDataReceived
    self._transferDecoder.dataReceived(data)
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 1699, in dataReceived
    finishCallback(data[contentLength:])
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 2115, in _finishRequestBody
    self.allContentReceived()
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 2224, in allContentReceived
    req.requestReceived(command, path, version)
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 898, in requestReceived
    self.args.update({
  File "/usr/lib/python3/dist-packages/twisted/web/http.py", line 899, in <dictcomp>
    x.encode('iso-8859-1'): \
builtins.AttributeError: 'NoneType' object has no attribute 'encode'

[regression potential]

The change only affects returned dictionaries with non-str keys, which in
python3.6 and earlier were discarded before they reached twisted, so
patching this makes the behavior consistent across interpreter versions.

** Affects: twisted (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts


[Bug 1915811] [NEW] Empty NUMA topology in machines with high number of CPUs

2021-02-16 Thread Victor Tapia
Public bug reported:

[impact]

libvirt fails to populate its NUMA topology when the machine has a large
number of CPUs assigned to a single node. This happens when the number
of CPUs fills the bitmask (all to one), hitting a workaround introduced
to build the NUMA topology on machines that have non contiguous node
ids. This has been already fixed upstream in the commits listed below.

[scope]

The fix is needed for Xenial, Bionic, Focal and Groovy.

It's fixed upstream with commits 24d7d85208 and 551fb778f5 which are
included in v6.8, so both are already in hirsute.

[test case]

On a machine like the EPYC 7702P, after setting the NUMA config to NPS1
(single node per processor), or just a VM with 128 CPUs, "virsh
capabilities" does not show the NUMA topology:

# virsh capabilities | xmllint --xpath '/capabilities/host/topology' -
<topology>
  <cells num='0'>
  </cells>
</topology>

When it should show (edited to shorten the description):

<topology>
  <cells num='1'>
    <cell id='0'>
      <memory unit='KiB'>5027820</memory>
      <pages unit='KiB' size='4'>1256955</pages>
      <pages unit='KiB' size='2048'>0</pages>
      <distances>
        <sibling id='0' value='10'/>
      </distances>
      <cpus num='128'>
...
      </cpus>
    </cell>
  </cells>
</topology>
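
The same check can be scripted; a sketch using XPath count(), which xmllint
supports (expect 128 on a fixed host and 0 on an affected one):

# virsh capabilities | xmllint --xpath 'count(/capabilities/host/topology/cells/cell/cpus/cpu)' -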

[Where problems could occur]

Any regression would likely involve a misconstruction of the NUMA
topology, in particular for machines with non-contiguous node ids.

** Affects: libvirt (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-22 Thread Victor Tapia
# VERIFICATION BIONIC

$ cat /etc/lsb-release   
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.5 LTS"

$ apt-cache policy lshw  
lshw:
  Installed: 02.18-0.1ubuntu6.18.04.2
  Candidate: 02.18-0.1ubuntu6.18.04.2
  Version table:
 *** 02.18-0.1ubuntu6.18.04.2 500
500 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 02.18-0.1ubuntu6.18.04.1 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 02.18-0.1ubuntu6 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

$ sudo lshw -short -c storage,disk
H/W path        Device        Class    Description
=========================================================
/0/100/1.1                    storage  82371SB PIIX3 IDE [Natoma/Triton II]
/0/100/5                      storage  Virtio block device
/0/100/5/0      /dev/vda      disk     32GB Virtual I/O device
/0/100/7                      storage  QEMU NVM Express Controller
/0/100/7/0      /dev/nvme0    storage  QEMU NVMe Ctrl
/0/100/7/0/1    /dev/nvme0n1  disk     10GB NVMe namespace

Long output: https://pastebin.ubuntu.com/p/bk2Q8JJxzS/
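
The same check can be scripted for a quick pass/fail; a sketch that greps
the short listing above for the NVMe disk node:

$ sudo lshw -short -c disk | grep nvme0n1 && echo "NVMe disk node present"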

** Tags removed: verification-needed verification-needed-bionic 
verification-needed-focal verification-needed-groovy
** Tags added: verification-done verification-done-bionic 
verification-done-focal verification-done-groovy


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-22 Thread Victor Tapia
# VERIFICATION FOCAL

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"

$ apt-cache policy lshw
lshw:
  Installed: 02.18.85-0.3ubuntu2.20.04.1
  Candidate: 02.18.85-0.3ubuntu2.20.04.1
  Version table:
 *** 02.18.85-0.3ubuntu2.20.04.1 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 02.18.85-0.3ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

$ sudo lshw -short -c storage,disk
H/W path        Device        Class    Description
=========================================================
/0/100/1        /dev/fb0      storage  QEMU NVM Express Controller
/0/100/1/0      /dev/nvme0    storage  QEMU NVMe Ctrl
/0/100/1/0/1    /dev/nvme0n1  disk     10GB NVMe namespace
/0/100/5                      storage  Virtio block device
/0/100/5/0      /dev/vda      disk     32GB Virtual I/O device
/0/100/1f.2                   storage  82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]

Long output: https://pastebin.ubuntu.com/p/xGSbbVWw23/


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-22 Thread Victor Tapia
# VERIFICATION GROOVY

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.10
DISTRIB_CODENAME=groovy
DISTRIB_DESCRIPTION="Ubuntu 20.10"

$ apt-cache policy lshw
lshw:
  Installed: 02.18.85-0.3ubuntu2.20.10.1
  Candidate: 02.18.85-0.3ubuntu2.20.10.1
  Version table:
 *** 02.18.85-0.3ubuntu2.20.10.1 500
500 http://archive.ubuntu.com/ubuntu groovy-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 02.18.85-0.3ubuntu2 500
500 http://archive.ubuntu.com/ubuntu groovy/main amd64 Packages

$ sudo lshw -short -c storage,disk
H/W path        Device        Class    Description
=========================================================
/0/100/1        /dev/fb0      storage  QEMU NVM Express Controller
/0/100/1/0      /dev/nvme0    storage  QEMU NVMe Ctrl
/0/100/1/0/1    /dev/nvme0n1  disk     10GB NVMe namespace
/0/100/5                      storage  Virtio block device
/0/100/5/0      /dev/vda      disk     32GB Virtual I/O device
/0/100/1f.2                   storage  82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]

Long output: https://pastebin.ubuntu.com/p/jXcqZM4JQQ/


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-21 Thread Victor Tapia
Hi Stephen,

I haven't been able to reproduce the issue you are seeing. Is this
happening with the version in -proposed (02.18.85-0.3ubuntu2.20.10.1)?
Would you mind attaching the generated core dump to this bug so I can
take a look at it?


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch added: "lshw-bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452916/+files/lshw-bionic.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch added: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452915/+files/lshw-focal.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-14 Thread Victor Tapia
** Patch removed: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452323/+files/lshw-groovy.debdiff

** Patch removed: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452324/+files/lshw-focal.debdiff

** Patch added: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452914/+files/lshw-groovy.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-12 Thread Victor Tapia
** Patch added: "lshw-focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452324/+files/lshw-focal.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes

2021-01-12 Thread Victor Tapia
** Patch added: "lshw-groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5452323/+files/lshw-groovy.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes ( RedHat Bug 1695343 )

2020-12-17 Thread Victor Tapia
** Patch added: "lshw-hirsute.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lshw/+bug/1826737/+attachment/5444545/+files/lshw-hirsute.debdiff


[Bug 1826737] Re: lshw does not list NVMe storage devices as "disk" nodes ( RedHat Bug 1695343 )

2020-12-17 Thread Victor Tapia
** Description changed:

+ 
+ [Impact]
+ 
+  * NVMe devices are not recognized by lshw in Ubuntu
+ 
+ [Test Case]
+ 
+  * Running "lshw -C disk" or "lshw -C storage" does not show NVMe devices.
+Example: https://pastebin.ubuntu.com/p/FfKGNc7W6M/   
+  
+ [Where problems could occur]
+ 
+  * This upload consists of four cherry-picked patches and the feature is
+ self-contained, so the regression potential is quite low. If anything
+ were to break, it would be in the network device scan, where the
+ structure was altered by the main NVMe patch.
+ 
+ [Original description]
+ 
  Ubuntu MATE 19.04, updated 2019-04-28
  
- sudo lshw -class disk
+ sudo lshw -class disk
  
- Expected : info on SSD 
+ Expected : info on SSD
  Actual result : info on USB drive only.
  
  Note this is already reported to RedHat
  
  ProblemType: Bug
  DistroRelease: Ubuntu 19.04
  Package: lshw 02.18-0.1ubuntu7
  ProcVersionSignature: Ubuntu 5.0.0-13.14-generic 5.0.6
  Uname: Linux 5.0.0-13-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.10-0ubuntu27
  Architecture: amd64
  CurrentDesktop: MATE
  Date: Sun Apr 28 07:11:45 2019
  InstallationDate: Installed on 2019-04-25 (3 days ago)
  InstallationMedia: Ubuntu-MATE 19.04 "Disco Dingo" - Release amd64 (20190416)
  SourcePackage: lshw
  UpgradeStatus: No upgrade log present (probably fresh install)

** Also affects: lshw (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: lshw (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: lshw (Ubuntu Hirsute)
   Importance: Undecided
   Status: Confirmed

** Also affects: lshw (Ubuntu Groovy)
   Importance: Undecided
   Status: New


[Bug 1896614] Re: Race condition when starting dbus services

2020-10-28 Thread Victor Tapia
# VERIFICATION

Note: As a reminder, the issue here is a race condition between any DBUS
service and "systemctl daemon-reload", where systemd installs the DBUS
filter (AddMatch) that watches for the service's name change after that
change has already happened. I'll be using systemd-logind as the DBUS
service in my reproducer.

Using the following reproducer:

for i in $(seq 1 1000); do echo $i; ssh $SERVER 'sudo systemctl daemon-reload & sudo systemctl restart systemd-logind'; done

- With systemd=237-3ubuntu10.42 (-updates), after a few runs systemd-logind
is stuck as a running job and ssh is not responsive. DBUS messages[1] show
that the AddMatch filter is set by systemd after systemd-logind has
acquired its final name (systemd-login1).
- With systemd=237-3ubuntu10.43 (-proposed), systemd-logind does not get
stuck and everything continues to work. In a scenario[2] where the systemd
DBUS AddMatch message arrives after the final systemd-logind
NameOwnerChanged, systemd is able to catch up thanks to the GetNameOwner
call introduced in the patch.

[1] https://pastebin.ubuntu.com/p/NxRNX9bwCP/
[2] https://pastebin.ubuntu.com/p/jpKpW3g2bK/
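
For reference, the DBUS traffic in [1] and [2] can be captured with a
monitor similar to the following (a generic sketch, not necessarily the
exact invocation used for those captures; both the AddMatch calls and the
NameOwnerChanged signals target the org.freedesktop.DBus interface):

$ sudo dbus-monitor --system "interface='org.freedesktop.DBus'"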

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896614

Title:
  Race condition when starting dbus services

To manage notifications about this bug go to:
https://bugs.launchpad.net/systemd/+bug/1896614/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-10-27 Thread Victor Tapia
** Tags removed: verification-stein-needed
** Tags added: verification-stein-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-10-27 Thread Victor Tapia
=== Verification ===

1. Start qemu using the proposed package:

$ sudo qemu-system-x86_64 -machine none -S -nographic -monitor stdio -serial null
QEMU 3.1.0 monitor - type 'help' for more information
(qemu) info version
3.1.0Debian 1:3.1+dfsg-2ubuntu3.7~cloud1


2. Run "apt install --reinstall" so the prerm scripts copy the modules from 
/usr/lib/x86_64-linux-gnu/qemu/ to /var/run/qemu/$VERSION   
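
For example, assuming the optional block modules such as block-curl.so
ship in qemu-block-extra (the exact package name may differ per release):

$ sudo apt install --reinstall qemu-block-extra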
 

$ md5sum /var/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so 
/usr/lib/x86_64-linux-gnu/qemu/block-curl.so   
9424706c3ea3f1b3845fd3defbf6879c  
/var/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so
9424706c3ea3f1b3845fd3defbf6879c  /usr/lib/x86_64-linux-gnu/qemu/block-curl.so

3. Remove the module from /usr/lib/x86_64-linux-gnu/qemu/

$ sudo rm /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
$ ll /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
ls: cannot access '/usr/lib/x86_64-linux-gnu/qemu/block-curl.so': No such file 
or directory

4. Add a curl device so it pulls the module

(qemu) drive_add 0 
readonly=on,file=http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/mini.iso
OK
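
The monitor's generic "info block" command can be used to confirm that the
curl-backed drive registered before inspecting the mappings:

(qemu) info block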

5. Confirm it's falling back to /var/run/qemu/$VERSION when
/usr/lib/x86_64-linux-gnu/qemu/ does not work

$ pidof qemu-system-x86_64; sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep -e curl
8687
7fad1ee33000-7fad1eead000 r-xp 00000000 08:02 3937904
/usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7fad1eead000-7fad1f0ac000 ---p 0007a000 08:02 3937904
/usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7fad1f0ac000-7fad1f0af000 r--p 00079000 08:02 3937904
/usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7fad1f0af000-7fad1f0b0000 rw-p 0007c000 08:02 3937904
/usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7fad1f0b0000-7fad1f0b5000 r-xp 00000000 00:16 993
/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so
7fad1f0b5000-7fad1f2b4000 ---p 00005000 00:16 993
/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so
7fad1f2b4000-7fad1f2b5000 r--p 00004000 00:16 993
/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so
7fad1f2b5000-7fad1f2b6000 rw-p 00005000 00:16 993
/run/qemu/Debian_1_3.1+dfsg-2ubuntu3.7~cloud1/block-curl.so


** Note: if /run is mounted as noexec, step 4 will fail with the following 
message: 

(qemu) drive_add 0 
readonly=on,file=http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/mini.iso
Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-curl.so
Note: only modules from the same build can be loaded.
Failed to open module: 
/var/run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.25_/block-curl.so: failed to map 
segment from shared object
Unknown protocol 'http'

This affects all fixed releases (B/E+); the workaround is to remount /run
without the noexec option (sudo mount -o remount,exec /run).
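
Whether /run is currently mounted noexec can be checked with findmnt
(generic usage; options abridged):

$ findmnt -no OPTIONS /run
rw,nosuid,noexec,relatime,size=...,mode=755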

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1848497] Re: virtio-balloon change breaks migration from qemu prior to 4.0

2020-10-07 Thread Victor Tapia
Attached backported fix to bug 1847361. Fixes live migrations from
1:2.11+dfsg-1ubuntu7.32 (Queens/Rocky) and 1:3.1+dfsg-2ubuntu3.3 or
previous (Stein) to latest Stein. I also tested the migration from the
patched Stein to Train and works as expected.
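
For reference, a monitor-driven migration between two such qemu instances
can be kicked off with the generic commands below (host and port are
placeholders, not the exact test setup):

(qemu) migrate -d tcp:DEST_HOST:4444
(qemu) info migrate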

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1848497

Title:
  virtio-balloon change breaks migration from qemu prior to 4.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1848497/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-10-07 Thread Victor Tapia
Updated the qemu debdiff with the backported fix for bug 1848497

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-10-07 Thread Victor Tapia
** Patch removed: "qemu-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415315/+files/qemu-stein.debdiff

** Patch added: "qemu-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5418856/+files/qemu-stein.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-09-29 Thread Victor Tapia
I've backported Christian's patches to stein (see attached debdiffs) and
after testing the described reproducer I can confirm that they fix the
issue. Feel free to test them and let me know if there's any issue.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-09-29 Thread Victor Tapia
** Patch added: "libvirt-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415316/+files/libvirt-stein.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1847361] Re: Upgrade of qemu binaries causes running instances not able to dynamically load modules

2020-09-29 Thread Victor Tapia
** Patch added: "qemu-stein.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1847361/+attachment/5415315/+files/qemu-stein.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1847361

Title:
  Upgrade of qemu binaries causes running instances not able to
  dynamically load modules

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1847361/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896614] Re: Race condition when starting dbus services

2020-09-22 Thread Victor Tapia
In the original report, the issue happened randomly on boot when a
service[1] was triggering a reload while systemd-logind was starting,
resulting in a list of queued jobs that were never executed.

The issue can also happen under high load, as reported upstream:
https://github.com/systemd/systemd/issues/12956

To simplify the reproducer I went with systemd-logind+daemon-reload, but
it can be done with any other dbus service.
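
As an illustration, the same pattern can be attempted against any other
dbus-activated unit, e.g. (service chosen arbitrarily):

sudo systemctl daemon-reload & sudo systemctl restart systemd-timedated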


[1]

[Unit]
Description=Disable unattended upgrades
After=network-online.target local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c "/bin/chmod 644 /etc/cron.daily/apt-compat ; /bin/systemctl disable apt-daily-upgrade.timer apt-daily.timer ; /bin/systemctl stop apt-daily-upgrade.timer apt-daily.timer"

[Install]
WantedBy=multi-user.target

** Bug watch added: github.com/systemd/systemd/issues #12956
   https://github.com/systemd/systemd/issues/12956

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896614

Title:
  Race condition when starting dbus services

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1896614/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1896614] [NEW] Race condition when starting dbus services

2020-09-22 Thread Victor Tapia
Public bug reported:

In certain scenarios, such as high load environments or when "systemctl
daemon-reload" runs at the same time a dbus service is starting (e.g.
systemd-logind), systemd is not able to track properly when the service
has started, keeping the job 'running' forever.

The issue appears when systemd issues the "AddMatch" dbus method call to
track the service's "NameOwnerChanged" signal once that signal has already
fired. A working instance would look like this:

https://pastebin.ubuntu.com/p/868J6WBRQx/

A failing instance would be:

https://pastebin.ubuntu.com/p/HhJZ4p8dT5/

I've been able to reproduce the issue on Bionic (237-3ubuntu10.42)
running:

sudo systemctl daemon-reload & sudo systemctl restart systemd-logind
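
When the race hits, the stuck start job stays visible indefinitely
(illustrative output; the job number will differ):

$ systemctl list-jobs
JOB UNIT                   TYPE  STATE
 84 systemd-logind.service start running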

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: systemd (Ubuntu Bionic)
 Importance: Undecided
 Status: New


** Tags: sts

** Tags added: sts

** Also affects: systemd (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896614

Title:
  Race condition when starting dbus services

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1896614/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1865145] Re: Ubuntu Bionic freezes on Supermicro hardware when console redirection is configured in kernel parameters

2020-04-02 Thread Victor Tapia
** Tags added: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1865145

Title:
  Ubuntu Bionic freezes on Supermicro hardware when console redirection
  is configured in kernel parameters

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1865145/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1843044]

2020-02-06 Thread Victor Tapia
Sure, I'm not familiar with the process but will give it a try. Sorry
for the late response btw, I've been afk :)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1843044

Title:
  firefox crashes on a FIPS enabled machine

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/1843044/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1843044]

2020-02-06 Thread Victor Tapia
Created attachment 9123528
Bug 1582169 - Disable reading /proc/sys/crypto/fips_enabled if FIPS is not 
enabled on build

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1843044

Title:
  firefox crashes on a FIPS enabled machine

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/1843044/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1843044]

2020-01-14 Thread Victor Tapia
Created attachment 9120251
nss-stop-fips-query-when-disabled.patch

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1843044

Title:
  firefox crashes on a FIPS enabled machine

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/1843044/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1843044]

2020-01-14 Thread Victor Tapia
Created attachment 9120250
nss-stop-fips-query-when-disabled.patch

I'm attaching a patch that uses NSS_FIPS_DISABLED so
/proc/sys/crypto/fips_enabled won't be checked when NSS is not built in
FIPS mode (without --enable-fips).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1843044

Title:
  firefox crashes on a FIPS enabled machine

To manage notifications about this bug go to:
https://bugs.launchpad.net/firefox/+bug/1843044/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-12 Thread Victor Tapia
#VERIFICATION EOAN

Running the following script:

$ cat repro.lua
#!/usr/bin/env lua
lpeg = require "lpeg"

p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
p:match("xx")


- With the current version:

$ dpkg -l|grep lua-lpeg
ii  lua-lpeg:amd64   1.0.0-2 amd64
LPeg library for the Lua language

$ ./repro.lua
[1]4168 segmentation fault (core dumped)  ./repro.lua

nmap segfaults too after a few runs (removing the service fingerprint
first as mentioned in the comment #11):

$ count=1; while true; do nmap -sV 192.168.1.114 -Pn > /dev/null && ((count+=1)) || break; done; echo $count
7


- With the version in -proposed the script works as expected:

$ dpkg -l | grep lua-lpeg
ii  lua-lpeg:amd64   1.0.0-2ubuntu0.19.10.1  amd64
LPeg library for the Lua language

$ ./repro.lua

$ echo $?
0

nmap works too (was manually stopped after 300+ runs):

$ count=1; while true; do nmap -sV 192.168.1.114 -Pn > /dev/null && ((count+=1)) || break; done; echo $count

$ echo $count
383

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-12 Thread Victor Tapia
#VERIFICATION XENIAL

Running the following script:

$ cat repro.lua
#!/usr/bin/env lua
lpeg = require "lpeg"

p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
p:match("xx")


- With the current version:

$ dpkg -l | grep lua-lpeg
ii  lua-lpeg:amd64   0.12.2-1   
amd64LPeg library for the Lua language

$ ./repro.lua
Segmentation fault (core dumped)

nmap segfaults too after a few runs:

$ count=1; while true; do nmap -sV 192.168.1.114 -Pn > /dev/null && ((count+=1)) || break; done; echo $count
Segmentation fault (core dumped)
2


- With the version in -proposed the script works as expected:

$ dpkg -l | grep lua-lpeg
ii  lua-lpeg:amd64   0.12.2-1ubuntu1
amd64LPeg library for the Lua language

$ ./repro.lua
$ echo $?
0

nmap works (manually stopped):

$ count=1; while true; do nmap -sV 192.168.1.114 -Pn > /dev/null && ((count+=1)) || break; done; echo $count
$ echo $count
179

** Tags removed: verification-needed verification-needed-bionic 
verification-needed-disco verification-needed-eoan verification-needed-xenial
** Tags added: verification-done verification-done-bionic 
verification-done-disco verification-done-eoan verification-done-xenial

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-12 Thread Victor Tapia
#VERIFICATION BIONIC

Running the following script:

$ cat repro.lua
#!/usr/bin/env lua
lpeg = require "lpeg"

p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
p:match("xx")


- With the current version:

$ dpkg -l|grep lua-lpeg
ii  lua-lpeg:amd64   1.0.0-2 amd64
LPeg library for the Lua language

$ ./repro.lua
Segmentation fault (core dumped)

In this case nmap works because it's using an old internal version of
lua-lpeg (see comment #11).


- With the version in -proposed the script works as expected:

$ dpkg -l | grep lua-lpeg
ii  lua-lpeg:amd64 1.0.0-2ubuntu0.18.04.1 amd64 
   LPeg library for the Lua language
$ ./repro.lua
$ echo $?
0

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-12 Thread Victor Tapia
#VERIFICATION DISCO

Running the following script:

$ cat repro.lua
#!/usr/bin/env lua
lpeg = require "lpeg"

p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
p:match("xx")


- With the current version:

$ dpkg -l|grep lua-lpeg
ii  lua-lpeg:amd64   1.0.0-2 amd64
LPeg library for the Lua language

$ ./repro.lua
[1]1119 segmentation fault (core dumped)  ./repro.lua

In this case nmap works because it's using an old internal version of
lua-lpeg (see comment #11).


- With the version in -proposed the script works as expected:

$ dpkg -l | grep lua-lpeg  
ii  lua-lpeg:amd64   1.0.0-2ubuntu0.19.04.1 amd64   
 LPeg library for the Lua language

$ ./repro.lua

$ echo $?
0

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "xenial.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303500/+files/xenial.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303499/+files/bionic.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "eoan.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303496/+files/eoan.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-07 Thread Victor Tapia
** Patch added: "disco.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303497/+files/disco.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-06 Thread Victor Tapia
** Bug watch added: Debian Bug tracker #942031
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=942031

** Also affects: lua-lpeg (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=942031
   Importance: Unknown
   Status: Unknown

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-06 Thread Victor Tapia
** Patch added: "focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+attachment/5303261/+files/focal.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-05 Thread Victor Tapia
** No longer affects: nmap (Ubuntu)

** No longer affects: nmap (Ubuntu Xenial)

** Also affects: lua-lpeg (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: lua-lpeg (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: lua-lpeg (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: lua-lpeg (Ubuntu Disco)
   Importance: Undecided
   Status: New

** No longer affects: lua-lpeg (Ubuntu Focal)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-11-05 Thread Victor Tapia
** Description changed:

+ [Impact]
+ 
+ Under certain conditions, lpeg will crash while walking the pattern tree
+ looking for TCapture nodes.
+ 
+ [Test Case]
+ 
+ The reproducer, taken from an upstream discussion (link in "Other
+ info"), is:
+ 
+ $ cat repro.lua
+ #!/usr/bin/env lua
+ lpeg = require "lpeg"
+ 
+ p = lpeg.C(-lpeg.P{lpeg.P'x' * lpeg.V(1) + lpeg.P'y'})
+ p:match("xx")
+ 
+ The program crashes due to a hascaptures() infinite recursion:
+ 
+ $ ./repro.lua
+ Segmentation fault (core dumped)
+ 
+ (gdb) bt -25
+ #523984 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523985 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523986 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523987 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523988 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523989 0x77a3743c in hascaptures () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523990 0x77a3815c in ?? () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523991 0x77a388e3 in compile () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523992 0x77a36fab in ?? () from 
/usr/lib/x86_64-linux-gnu/lua/5.2/lpeg.so
+ #523993 0xfd1e in ?? ()
+ #523994 0x5556a5fc in ?? ()
+ #523995 0x555600c8 in ?? ()
+ #523996 0xf63f in ?? ()
+ #523997 0x5556030f in ?? ()
+ #523998 0xdc91 in lua_pcallk ()
+ #523999 0xb896 in ?? ()
+ #524000 0xc54b in ?? ()
+ #524001 0xfd1e in ?? ()
+ #524002 0x55560092 in ?? ()
+ #524003 0xf63f in ?? ()
+ #524004 0x5556030f in ?? ()
+ #524005 0xdc91 in lua_pcallk ()
+ #524006 0xb64b in ?? ()
+ #524007 0x77c94bbb in __libc_start_main (main=0xb5f0, argc=2, 
argv=0x7fffe6d8, init=, fini=, 
rtld_fini=, stack_end=0x7fffe6c8)
+ at ../csu/libc-start.c:308
+ #524008 0xb70a in ?? ()
+ 
+ The expected behavior is for the program to finish normally.
+ 
+ [Regression potential]
+ 
Low, this is a backport from upstream and only limits the infinite
+ recursion in a scenario where it shouldn't happen to begin with (TCapture
+ node search).
+ 
+ [Other info]
+ 
+ This was fixed upstream in 1.0.1 by stopping the recursion in TCall
+ nodes and ensuring that TRule nodes do not follow siblings (sib2).
+ 
+ The upstream discussion can be found here:
+ http://lua.2524044.n2.nabble.com/LPeg-intermittent-stack-exhaustion-td7674831.html
+ 
+ My analysis can be found here:
+ http://pastebin.ubuntu.com/p/n4824ftZt9/plain/
+ 
+ [Original description]
+ 
  The Ubuntu Error Tracker has been receiving reports about a problem
  regarding nmap.  This problem was most recently seen with version
  7.01-2ubuntu2, the problem page at
  https://errors.ubuntu.com/problem/5e852236a443bab0279d47c8a9b7e55802bfb46f
  contains more details.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-10-09 Thread Victor Tapia
** Also affects: lua-lpeg (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-10-09 Thread Victor Tapia
I've been able to finish the analysis of the bug, this is the summary:

- nmap includes an old version of lpeg (0.12 ~Trusty/oldoldstable) in all 
releases (all files merged in lpeg.c)
- Debian introduced a patch that links nmap's build against an external 
lua-lpeg lib because lpeg is properly packaged (a hygiene measure according to 
Debian's maintainer)
- Upstream introduced a patch, available in B+, that fixed a FTBFS regarding 
lpeg (undefined reference for luaopen_lpeg())
- The version of lua-lpeg in X/B/E has a recursion error
- When both the upstream commit and the external linking patch are available, 
local lpeg is used

This results in:

- X fails because it uses lua-lpeg (no upstream commit in the build)
- B works because it uses local lpeg (upstream commit available)
- E is a special case in my reproducer: the Debian patch removes #include
"lpeg.c" so it uses the external lua-lpeg, but it works because the scanned
service has a fingerprint and avoids the crash. Removing the fingerprint
from /usr/share/nmap/nmap-service-probes makes it crash as expected.

The best way to fix this bug is to fix the recursion error in lua-lpeg, so
that nmap works regardless of which version of lua-lpeg it uses.
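
For reference, which packaged lua-lpeg version is in play can be checked
from the interpreter (in these releases lpeg.version is a function; output
illustrative):

$ lua -e 'lpeg = require "lpeg"; print(lpeg.version())'
1.0.0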

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lua-lpeg/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1796501] Re: systemd-resolved tries to mitigate DVE-2018-0001 even if DNSSEC=yes

2019-10-04 Thread Victor Tapia
** Patch removed: "eoan.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+attachment/5294401/+files/eoan.debdiff

** Patch added: "eoan.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+attachment/5294438/+files/eoan.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1796501

Title:
  systemd-resolved tries to mitigate DVE-2018-0001 even if DNSSEC=yes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1796501] Re: systemd-resolved tries to mitigate DVE-2018-0001 even if DNSSEC=yes

2019-10-04 Thread Victor Tapia
** Patch removed: "eoan debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+attachment/5288416/+files/systemd_241-7ubuntu2.debdiff

** Patch added: "eoan.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+attachment/5294401/+files/eoan.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1796501

Title:
  systemd-resolved tries to mitigate DVE-2018-0001 even if DNSSEC=yes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1796501/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-10-02 Thread Victor Tapia
# REPRODUCER: Install LXD, make it available over the network and run
nmap against its IP:

# lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? no
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]: 
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Again: 
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
LXD has been successfully configured.

$ sudo netstat -anp | grep 8443
tcp6   0  0 :::8443 :::*LISTEN  
817/lxd 

$ while true; do nmap -sV localhost -Pn -d 2>&1 || break; done

Starting Nmap 7.01 ( https://nmap.org ) at 2019-10-02 18:33 CEST
PORTS: Using top 1000 ports found open (TCP:1000, UDP:0, SCTP:0)
--- Timing report ---
  hostgroups: min 1, max 10
  rtt-timeouts: init 1000, min 100, max 1
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 0
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
-
NSE: Using Lua 5.2.
NSE: Arguments from CLI: 
NSE: Loaded 35 scripts for scanning.
mass_rdns: Using DNS server 192.168.11.1
Initiating Connect Scan at 18:33
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 8443/tcp on 127.0.0.1
Completed Connect Scan at 18:33, 0.01s elapsed (1000 total ports)
Overall sending rates: 67769.04 packets / s.
Initiating Service scan at 18:33
Scanning 2 services on localhost (127.0.0.1)
Completed Service scan at 18:35, 82.59s elapsed (2 services on 1 host)
NSE: Script scanning 127.0.0.1.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 18:35
NSE: Starting rpc-grind against localhost (127.0.0.1:8443).
NSE: Starting http-server-header against localhost (127.0.0.1:8443).
Segmentation fault (core dumped)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nmap/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-09-27 Thread Victor Tapia
** No longer affects: nmap (Ubuntu Eoan)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nmap/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-09-27 Thread Victor Tapia
** Also affects: nmap (Ubuntu Eoan)
   Importance: High
   Status: Confirmed

** Changed in: nmap (Ubuntu Eoan)
 Assignee: (unassigned) => Victor Tapia (vtapia)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nmap/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-09-26 Thread Victor Tapia
I think I found the root cause of this issue.
0003-Link-against-lua-lpeg.patch tries to fix a linking error related to
the function named luaopen_lpeg(), making nmap use the luaopen_lpeg() in
lua-lpeg instead of the local function declared in lpeg.c (the two are
slightly different). The original linking issue was addressed upstream[1]
by declaring the function extern[2] somewhere between 7.01 and 7.10, which
is why newer releases do not see this bug. I have a PPA[3] prepared to
start testing, replacing 0003-Link-against-lua-lpeg.patch with the
upstream fix.

[1] https://github.com/nmap/nmap/issues/237
[2] https://github.com/nmap/nmap/commit/9bcc6c09e22e3a32e8f89a13afee5a9a77b92b62
[3] https://launchpad.net/~vtapia/+archive/ubuntu/sf238253
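
A quick way to see which luaopen_lpeg() a given nmap binary resolves is to
look at its dynamic symbol table (generic binutils usage; with the Debian
linking patch in effect the symbol should appear as undefined, i.e.
provided by the external lua-lpeg):

$ nm -D /usr/bin/nmap | grep luaopen_lpeg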


** Bug watch added: github.com/nmap/nmap/issues #237
   https://github.com/nmap/nmap/issues/237

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nmap/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1580385] Re: /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

2019-09-26 Thread Victor Tapia
** Changed in: nmap (Ubuntu Xenial)
 Assignee: Dan Streetman (ddstreet) => Victor Tapia (vtapia)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1580385

Title:
  /usr/bin/nmap:11:hascaptures:hascaptures:hascaptures:hascaptures:hascaptures

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/nmap/+bug/1580385/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-05-07 Thread Victor Tapia
** Tags removed: verification-needed verification-needed-disco
** Tags added: verification-done verification-done-disco

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-05-07 Thread Victor Tapia
# VERIFICATION: DISCO
- Using the reproducer defined in the test case and the version in -updates:

ubuntu@disco-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.10-4amd64 
   Common UNIX Printing System(tm) - PPD/driver support, web interface
ii  cups-common   2.2.10-4all   
   Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.10-4amd64 
   Common UNIX Printing System(tm) - daemon

ubuntu@disco-sssd-ad:~$ grep -i systemgroup /etc/cups/cups-files.conf 
SystemGroup lpadmins@TESTS.LOCAL
ubuntu@disco-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service +161ms
└─cups.socket @46.229s
  └─sysinit.target @42.682s
└─cloud-init.service @37.411s +5.239s
  └─systemd-networkd-wait-online.service @35.640s +1.727s
└─systemd-networkd.service @35.419s +189ms
  └─network-pre.target @35.415s
└─cloud-init-local.service @21.419s +13.992s
  └─systemd-remount-fs.service @7.277s +570ms
└─systemd-journald.socket @7.070s
  └─system.slice @6.915s
└─-.slice @6.915s

- After reboot, cups fails to start:

ubuntu@disco-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: failed (Result: exit-code) since Tue 2019-05-07 11:12:09 UTC; 16min 
ago
 Docs: man:cupsd(8)
  Process: 747 ExecStart=/usr/sbin/cupsd -l (code=exited, status=1/FAILURE)
 Main PID: 747 (code=exited, status=1/FAILURE)

May 07 11:12:09 disco-sssd-ad systemd[1]: Stopped CUPS Scheduler.
May 07 11:12:09 disco-sssd-ad systemd[1]: Started CUPS Scheduler.
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Main process exited, 
code=exited, status=1/FAILURE
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Service 
RestartSec=100ms expired, scheduling restart.
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Scheduled restart job, 
restart counter is at 5.
May 07 11:12:09 disco-sssd-ad systemd[1]: Stopped CUPS Scheduler.
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Start request repeated 
too quickly.
May 07 11:12:09 disco-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 11:12:09 disco-sssd-ad systemd[1]: Failed to start CUPS Scheduler.

ubuntu@disco-sssd-ad:~$ grep cupsd /var/log/syslog | grep -v kernel
May  7 11:12:10 disco-sssd-ad cupsd[692]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/cups/cups-files.conf.
May  7 11:12:10 disco-sssd-ad cupsd[692]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
May  7 11:12:10 disco-sssd-ad cupsd[721]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/cups/cups-files.conf.
...


- Using the version in -proposed, after rebooting, cups works fine:

ubuntu@disco-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.10-4ubuntu1 amd64 
   Common UNIX Printing System(tm) - PPD/driver support, web interface
ii  cups-common   2.2.10-4ubuntu1 all   
   Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.10-4ubuntu1 amd64 
   Common UNIX Printing System(tm) - daemon

ubuntu@disco-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2019-05-07 11:32:52 UTC; 33s ago
 Docs: man:cupsd(8)
 Main PID: 812 (cupsd)
Tasks: 1 (limit: 2356)
   Memory: 2.5M
   CGroup: /system.slice/cups.service
   └─812 /usr/sbin/cupsd -l

May 07 11:32:52 disco-sssd-ad systemd[1]: Started CUPS Scheduler.
ubuntu@disco-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @49.422s
└─sssd.service @41.473s +7.943s
  └─basic.target @41.321s
└─sockets.target @41.318s
  └─snapd.socket @41.111s +184ms
└─sysinit.target @40.800s
  └─cloud-init.service @37.899s +2.895s
└─systemd-networkd-wait-online.service @36.713s +1.141s
  └─systemd-networkd.service @36.346s +360ms
└─network-pre.target @36.341s
  └─cloud-init-local.service @21.748s +14.588s
└─systemd-remount-fs.service @8.932s +140ms
  └─systemd-journald.socket @8.844s
  

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-05-07 Thread Victor Tapia
# VERIFICATION: COSMIC
- Using the reproducer defined in the test case and the version in -updates:

ubuntu@cosmic-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.8-5ubuntu1.2amd64 
   Common UNIX Printing System(tm) - PPD/driver support, web interface
ii  cups-common   2.2.8-5ubuntu1.2all   
   Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.8-5ubuntu1.2amd64 
   Common UNIX Printing System(tm) - daemon

ubuntu@cosmic-sssd-ad:~$ grep -i systemgroup /etc/cups/cups-files.conf 
SystemGroup lpadmins@TESTS.LOCAL
ubuntu@cosmic-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @5d 44min 19.075s
└─basic.target @32.610s
  └─sockets.target @32.602s
└─snap.lxd.daemon.unix.socket @2min 24.862s
  └─snap-lxd-10601.mount @2min 1.485s +39ms
└─local-fs-pre.target @8.493s
  └─systemd-tmpfiles-setup-dev.service @8.103s +386ms
└─systemd-sysusers.service @7.546s +550ms
  └─systemd-remount-fs.service @7.143s +373ms
└─systemd-journald.socket @7.033s
  └─-.mount @6.938s
└─system.slice @6.938s
  └─-.slice @6.938s

- After reboot, cups fails to start:

ubuntu@cosmic-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: failed (Result: exit-code) since Tue 2019-05-07 10:06:49 UTC; 57min 
ago
 Docs: man:cupsd(8)
  Process: 1173 ExecStart=/usr/sbin/cupsd -l (code=exited, status=1/FAILURE)
 Main PID: 1173 (code=exited, status=1/FAILURE)

May 07 10:06:49 cosmic-sssd-ad systemd[1]: cups.service: Service 
RestartSec=100ms expired, scheduling restart.
May 07 10:06:49 cosmic-sssd-ad systemd[1]: cups.service: Scheduled restart job, 
restart counter is at 5.
May 07 10:06:49 cosmic-sssd-ad systemd[1]: Stopped CUPS Scheduler.
May 07 10:06:49 cosmic-sssd-ad systemd[1]: cups.service: Start request repeated 
too quickly.
May 07 10:06:49 cosmic-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 10:06:49 cosmic-sssd-ad systemd[1]: Failed to start CUPS Scheduler.
ubuntu@cosmic-sssd-ad:~$ grep cupsd /var/log/syslog | grep -v kernel
May  7 10:06:45 cosmic-sssd-ad cupsd[1033]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/cups/cups-files.conf.
May  7 10:06:45 cosmic-sssd-ad cupsd[1033]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
May  7 10:06:47 cosmic-sssd-ad cupsd[1122]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/cups/cups-files.conf.
May  7 10:06:47 cosmic-sssd-ad cupsd[1122]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
...


- Using the version in -proposed, after rebooting, cups works fine:

ubuntu@cosmic-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.8-5ubuntu1.3amd64 
   Common UNIX Printing System(tm) - PPD/driver support, web interface
ii  cups-common   2.2.8-5ubuntu1.3all   
   Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.8-5ubuntu1.3amd64 
   Common UNIX Printing System(tm) - daemon

ubuntu@cosmic-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2019-05-07 11:13:20 UTC; 58s ago
 Docs: man:cupsd(8)
 Main PID: 1297 (cupsd)
Tasks: 1 (limit: 2361)
   Memory: 2.7M
   CGroup: /system.slice/cups.service
   └─1297 /usr/sbin/cupsd -l

May 07 11:13:20 cosmic-sssd-ad systemd[1]: Started CUPS Scheduler.

ubuntu@cosmic-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @1min 6.619s
└─sssd.service @54.111s +12.499s
  └─basic.target @54.032s
└─sockets.target @54.030s
  └─snapd.socket @53.965s +61ms
└─sysinit.target @53.361s
  └─cloud-init.service @48.760s +4.493s
└─systemd-networkd-wait-online.service @46.946s +1.809s
  └─systemd-networkd.service @46.237s +675ms
└─network-pre.target @46.230s
  └─cloud-init-local.service @22.765s +23.458s
└─systemd-remount-fs.service @10.923s +199ms
  └─systemd-journald.socket @10.574s
└─system.slice @10.466s
  └─-.slice @10.466s


- Using the version in -proposed, with sssd not installed in the machine
(and setting SystemGroup to the original local group "lpadmin"), cups
still starts.

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-05-07 Thread Victor Tapia
# VERIFICATION: BIONIC
- Using the reproducer defined in the test case and the version in -updates:

ubuntu@bionic-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.7-1ubuntu2.4  
  amd64Common UNIX Printing System(tm) - PPD/driver support, web 
interface
ii  cups-common   2.2.7-1ubuntu2.4  
  all  Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.7-1ubuntu2.4  
  amd64Common UNIX Printing System(tm) - daemon

ubuntu@bionic-sssd-ad:~$ grep -i systemgroup /etc/cups/cups-files.conf 
SystemGroup lpadmins@TESTS.LOCAL 

ubuntu@bionic-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @5d 44min 17.034s
└─basic.target @41.538s
  └─sockets.target @41.534s
└─lxd.socket @41.422s +104ms
  └─sysinit.target @41.320s
└─systemd-update-utmp.service @40.757s +99ms
  └─systemd-tmpfiles-setup.service @39.550s +1.181s
└─local-fs.target @13.659s
  └─var-lib-lxcfs.mount @43.131s
└─local-fs-pre.target @9.991s
  └─systemd-tmpfiles-setup-dev.service @8.859s +1.127s
└─kmod-static-nodes.service @8.510s +303ms
  └─systemd-journald.socket @8.460s
└─system.slice @8.334s
  └─-.slice @8.326s

- After reboot, cups fails to start:

ubuntu@bionic-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: failed (Result: exit-code) since Tue 2019-05-07 10:06:32 UTC; 27min 
ago
 Docs: man:cupsd(8)
  Process: 969 ExecStart=/usr/sbin/cupsd -l (code=exited, status=1/FAILURE)
 Main PID: 969 (code=exited, status=1/FAILURE)

May 07 10:06:32 bionic-sssd-ad systemd[1]: cups.service: Service hold-off time 
over, scheduling restart.
May 07 10:06:32 bionic-sssd-ad systemd[1]: cups.service: Scheduled restart job, 
restart counter is at 5.
May 07 10:06:32 bionic-sssd-ad systemd[1]: Stopped CUPS Scheduler.
May 07 10:06:32 bionic-sssd-ad systemd[1]: cups.service: Start request repeated 
too quickly.
May 07 10:06:32 bionic-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 10:06:32 bionic-sssd-ad systemd[1]: Failed to start CUPS Scheduler.

ubuntu@bionic-sssd-ad:~$ grep cupsd /var/log/syslog | grep -v kernel
May  7 10:06:30 bionic-sssd-ad cupsd[860]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/cups/cups-files.conf.
May  7 10:06:30 bionic-sssd-ad cupsd[860]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
...


- Using the version in -proposed, after rebooting, cups works fine:

ubuntu@bionic-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.2.7-1ubuntu2.5  
  amd64Common UNIX Printing System(tm) - PPD/driver support, web 
interface
ii  cups-common   2.2.7-1ubuntu2.5  
  all  Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.2.7-1ubuntu2.5  
  amd64Common UNIX Printing System(tm) - daemon

ubuntu@bionic-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2019-05-07 10:36:49 UTC; 9min ago
 Docs: man:cupsd(8)
 Main PID: 1036 (cupsd)
Tasks: 1 (limit: 2361)
   CGroup: /system.slice/cups.service
   └─1036 /usr/sbin/cupsd -l

May 07 10:36:49 bionic-sssd-ad systemd[1]: Started CUPS Scheduler.
ubuntu@bionic-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @46.601s
└─sssd.service @39.137s +7.411s
  └─basic.target @39.068s
└─sockets.target @39.062s
  └─snapd.socket @38.991s +62ms
└─sysinit.target @38.817s
  └─cloud-init.service @35.077s +3.695s
└─systemd-networkd-wait-online.service @33.910s +1.151s
  └─systemd-networkd.service @33.667s +205ms
└─network-pre.target @33.654s
  └─cloud-init-local.service @19.639s +14.007s
└─systemd-remount-fs.service @6.538s +851ms
  └─systemd-journald.socket @6.460s
└─system.slice @6.408s
  └─-.slice @6.129s

- Using the version in -proposed, with sssd not installed in the machine
(and setting SystemGroup to the original local group "lpadmin"), cups
still starts.

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-05-07 Thread Victor Tapia
# VERIFICATION: XENIAL
- Using the reproducer defined in the test case and the version in -updates:

ubuntu@xenial-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups  2.1.3-4ubuntu0.7  
 amd64Common UNIX Printing System(tm) - PPD/driver support, web 
interface
ii  cups-common   2.1.3-4ubuntu0.7  
 all  Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.1.3-4ubuntu0.7  
 amd64Common UNIX Printing System(tm) - daemon  

ubuntu@xenial-sssd-ad:~$ grep -i systemgroup /etc/cups/cups-files.conf 
SystemGroup lpadmins@TESTS.LOCAL 

ubuntu@xenial-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @5d 44min 12.341s
└─basic.target @35.619s
  └─sockets.target @35.617s
└─lxd.socket @35.592s +11ms
  └─sysinit.target @35.463s
└─cloud-init.service @31.929s +3.152s
  └─networking.service @15.375s +16.549s
└─network-pre.target @15.326s
  └─cloud-init-local.service @6.646s +8.677s
└─systemd-remount-fs.service @5.484s +342ms
  └─system.slice @5.461s
└─-.slice @5.389s

- After reboot, cups fails to start:

ubuntu@xenial-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: failed (Result: start-limit-hit) since Tue 2019-05-07 10:06:07 UTC; 
1min 57s ago
 Docs: man:cupsd(8)
  Process: 1152 ExecStart=/usr/sbin/cupsd -l (code=exited, status=1/FAILURE)
 Main PID: 1152 (code=exited, status=1/FAILURE)

May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 10:06:07 xenial-sssd-ad systemd[1]: Started CUPS Scheduler.
May 07 10:06:07 xenial-sssd-ad cupsd[1152]: Unknown SystemGroup 
"lpadmins@TESTS.LOCAL" on line 19 of /etc/
May 07 10:06:07 xenial-sssd-ad cupsd[1152]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Main process exited, 
code=exited, status=1/FAILURE
May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Unit entered failed 
state.
May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Failed with result 
'exit-code'.
May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Start request repeated 
too quickly.
May 07 10:06:07 xenial-sssd-ad systemd[1]: Failed to start CUPS Scheduler.
May 07 10:06:07 xenial-sssd-ad systemd[1]: cups.service: Failed with result 
'start-limit-hit'.

- Using the version in -proposed, after rebooting:

ubuntu@xenial-sssd-ad:~$ dpkg -l | grep -E "cups-daemon| cups |cups-common"
ii  cups          2.1.3-4ubuntu0.8  amd64  Common UNIX Printing System(tm) - PPD/driver support, web interface
ii  cups-common   2.1.3-4ubuntu0.8  all    Common UNIX Printing System(tm) - common files
ii  cups-daemon   2.1.3-4ubuntu0.8  amd64  Common UNIX Printing System(tm) - daemon


ubuntu@xenial-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2019-05-07 10:14:10 UTC; 2min 20s ago
 Docs: man:cupsd(8)
 Main PID: 1276 (cupsd)
Tasks: 1
   Memory: 2.1M
  CPU: 12ms
   CGroup: /system.slice/cups.service
   └─1276 /usr/sbin/cupsd -l

May 07 10:14:10 xenial-sssd-ad systemd[1]: Started CUPS Scheduler.
ubuntu@xenial-sssd-ad:~$ systemd-analyze critical-chain cups.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

cups.service @32.661s
└─sssd.service @29.252s +3.393s
  └─basic.target @29.247s
└─sockets.target @29.245s
  └─lxd.socket @29.225s +10ms
└─sysinit.target @29.117s
  └─cloud-init.service @26.685s +2.416s
└─networking.service @11.315s +15.364s
  └─network-pre.target @11.301s
└─cloud-init-local.service @3.841s +7.457s
  └─systemd-remount-fs.service @3.084s +278ms
└─systemd-journald.socket @3.036s
  └─-.slice @2.984s

- Using the version in -proposed, with sssd not installed in the machine
(and setting SystemGroup to the original local group "lpadmin"), cups
still starts:

ubuntu@xenial-sssd-ad:~$ systemctl status cups
● cups.service - CUPS Scheduler
   Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2019-05-07 10:18:50 UTC; 

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-05-06 Thread Victor Tapia
# VERIFICATION: XENIAL
- Before the upgrade, the cron job does not run:

ubuntu@xenial-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd              1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- metapackage
ii  sssd-ad           1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common    1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- PAC responder
ii  sssd-common       1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- common files
ii  sssd-ipa          1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- IPA back end
ii  sssd-krb5         1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap         1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- LDAP back end
ii  sssd-proxy        1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- proxy back end
ii  sssd-tools        1.13.4-1ubuntu1.13  amd64  System Security Services Daemon -- tools

ubuntu@xenial-sssd-ad:~$ tail /var/log/syslog -n20 | grep -i cron | tail -n2
May  6 14:54:01 xenial-sssd-ad cron[1048]: Permission denied
May  6 14:54:01 xenial-sssd-ad CRON[24800]: Permission denied
ubuntu@xenial-sssd-ad:~$ date
Mon May  6 14:54:33 UTC 2019
ubuntu@xenial-sssd-ad:~$ sudo tail 
/var/spool/cron/crontabs/logonuser\@tests.local | grep -v ^#
* * * * * touch /tmp/crontest

- Using the version in -proposed, the cron job works:
 
ubuntu@xenial-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd              1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- metapackage
ii  sssd-ad           1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common    1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- PAC responder
ii  sssd-common       1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- common files
ii  sssd-ipa          1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- IPA back end
ii  sssd-krb5         1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap         1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- LDAP back end
ii  sssd-proxy        1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- proxy back end
ii  sssd-tools        1.13.4-1ubuntu1.14  amd64  System Security Services Daemon -- tools

ubuntu@xenial-sssd-ad:~$ tail /var/log/syslog -n20 | grep -i cron  | tail -n1
May  6 15:08:01 xenial-sssd-ad CRON[2706]: (logonuser@tests.local) CMD (touch 
/tmp/crontest)
ubuntu@xenial-sssd-ad:~$ ll /tmp/crontest 
-rw-r--r-- 1 logonuser@tests.local domain users@tests.local 0 May  6 15:08 
/tmp/crontest
ubuntu@xenial-sssd-ad:~$ date
Mon May  6 15:08:18 UTC 2019


** Tags removed: verification-needed verification-needed-bionic 
verification-needed-cosmic verification-needed-disco verification-needed-xenial
** Tags added: verification-done verification-done-bionic 
verification-done-cosmic verification-done-disco verification-done-xenial

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-05-06 Thread Victor Tapia
# VERIFICATION: DISCO
- Before the upgrade, the cron job does not run:

ubuntu@disco-sssd-ad:~$ date
Mon May  6 11:30:29 UTC 2019
ubuntu@disco-sssd-ad:~$ tail /var/log/syslog | grep -i cron
May  6 11:30:02 disco-sssd-ad cron[690]: Permission denied
May  6 11:30:02 disco-sssd-ad CRON[14325]: Permission denied
ubuntu@disco-sssd-ad:~$ sudo tail 
/var/spool/cron/crontabs/logonuser\@tests.local | grep -v ^#
* * * * * touch /tmp/crontest
ubuntu@disco-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd  1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- metapackage
ii  sssd-ad   1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- PAC responder
ii  sssd-common   1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- common files
ii  sssd-dbus 1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- D-Bus responder
ii  sssd-ipa  1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- IPA back end
ii  sssd-kcm  1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- Kerberos KCM server implementation
ii  sssd-krb5 1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap 1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- LDAP back end
ii  sssd-proxy1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- proxy back end
ii  sssd-tools1.16.3-3ubuntu1 amd64 
   System Security Services Daemon -- tools 

- Using the version in -proposed, the cron job works:

ubuntu@disco-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd  1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- metapackage
ii  sssd-ad   1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- PAC responder
ii  sssd-common   1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- common files
ii  sssd-dbus 1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- D-Bus responder
ii  sssd-ipa  1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- IPA back end
ii  sssd-kcm  1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- Kerberos KCM server implementation
ii  sssd-krb5 1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap 1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- LDAP back end
ii  sssd-proxy1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- proxy back end
ii  sssd-tools1.16.3-3ubuntu1.1   amd64 
   System Security Services Daemon -- tools 

ubuntu@disco-sssd-ad:~$ date
Mon May  6 14:35:03 UTC 2019
ubuntu@disco-sssd-ad:~$ ll /tmp/crontest 
-rw-r--r-- 1 logonuser@tests.local domain users@tests.local 0 May  6 14:35 
/tmp/crontest
ubuntu@disco-sssd-ad:~$ tail /var/log/syslog | grep -i cron | tail -n1
May  6 14:35:01 disco-sssd-ad CRON[16197]: (logonuser@tests.local) CMD (touch 
/tmp/crontest)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-05-06 Thread Victor Tapia
# VERIFICATION: COSMIC
- Before the upgrade, the cron job does not run:

ubuntu@cosmic-sssd-ad:~$ tail /var/log/syslog  | grep -i cron 
May  6 12:02:01 cosmic-sssd-ad cron[18740]: Permission denied
May  6 12:02:01 cosmic-sssd-ad CRON[18771]: Permission denied

ubuntu@cosmic-sssd-ad:~$ date
Mon May  6 12:02:23 UTC 2019

ubuntu@cosmic-sssd-ad:~$ sudo tail 
/var/spool/cron/crontabs/logonuser\@tests.local | grep -v ^#
* * * * * touch /tmp/crontest 

ubuntu@cosmic-sssd-ad:~$ dpkg -l |grep sssd
ii  sssd  1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- metapackage
ii  sssd-ad   1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- PAC responder
ii  sssd-common   1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- common files
ii  sssd-dbus 1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- D-Bus responder
ii  sssd-ipa  1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- IPA back end
ii  sssd-krb5 1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap 1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- LDAP back end
ii  sssd-proxy1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- proxy back end
ii  sssd-tools1.16.3-1ubuntu2 amd64 
   System Security Services Daemon -- tools 

- Using the version in -proposed, the cron job works:

ubuntu@cosmic-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd  1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- metapackage
ii  sssd-ad   1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- PAC responder
ii  sssd-common   1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- common files
ii  sssd-dbus 1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- D-Bus responder
ii  sssd-ipa  1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- IPA back end
ii  sssd-krb5 1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap 1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- LDAP back end
ii  sssd-proxy1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- proxy back end
ii  sssd-tools1.16.3-1ubuntu2.1   amd64 
   System Security Services Daemon -- tools

ubuntu@cosmic-sssd-ad:~$ tail /var/log/syslog | grep -i cron | tail -n1
May  6 14:14:01 cosmic-sssd-ad CRON[8985]: (logonuser@tests.local) CMD (touch 
/tmp/crontest)

ubuntu@cosmic-sssd-ad:~$ date
Mon May  6 14:14:14 UTC 2019

ubuntu@cosmic-sssd-ad:~$ ll /tmp/crontest 
-rw-r--r-- 1 logonuser@tests.local domain users@tests.local 0 May  6 14:14 
/tmp/crontest

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-05-06 Thread Victor Tapia
# VERIFICATION: BIONIC
- Before the upgrade, the cron job does not run:
ubuntu@bionic-sssd-ad:~$ dpkg -l|grep sssd
ii  sssd              1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- metapackage
ii  sssd-ad           1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common    1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- PAC responder
ii  sssd-common       1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- common files
ii  sssd-dbus         1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- D-Bus responder
ii  sssd-ipa          1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- IPA back end
ii  sssd-krb5         1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap         1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- LDAP back end
ii  sssd-proxy        1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- proxy back end
ii  sssd-tools        1.16.1-1ubuntu1.1  amd64  System Security Services Daemon -- tools

ubuntu@bionic-sssd-ad:~$ sudo tail 
/var/spool/cron/crontabs/logonuser\@tests.local | grep -v ^#
* * * * * touch /tmp/crontest

ubuntu@bionic-sssd-ad:~$ tail -n20 /var/log/syslog | grep -i CRON
May  6 11:04:01 bionic-sssd-ad cron[933]: Permission denied
May  6 11:04:01 bionic-sssd-ad CRON[4605]: Permission denied

ubuntu@bionic-sssd-ad:~$ date
Mon May  6 11:04:22 UTC 2019

- Using the version in -proposed, the cron job works:

ubuntu@bionic-sssd-ad:~$ dpkg -l | grep sssd
ii  sssd              1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- metapackage
ii  sssd-ad           1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- Active Directory back end
ii  sssd-ad-common    1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- PAC responder
ii  sssd-common       1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- common files
ii  sssd-dbus         1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- D-Bus responder
ii  sssd-ipa          1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- IPA back end
ii  sssd-krb5         1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- Kerberos back end
ii  sssd-krb5-common  1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- Kerberos helpers
ii  sssd-ldap         1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- LDAP back end
ii  sssd-proxy        1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- proxy back end
ii  sssd-tools        1.16.1-1ubuntu1.2  amd64  System Security Services Daemon -- tools

ubuntu@bionic-sssd-ad:~$ ll /tmp/crontest
-rw-r--r-- 1 logonuser@tests.local domain users@tests.local 0 May  6 14:07 
/tmp/crontest

ubuntu@bionic-sssd-ad:~$ date
Mon May  6 14:07:07 UTC 2019

ubuntu@bionic-sssd-ad:~$ tail /var/log/syslog | grep -i cron
...
May  6 14:07:01 bionic-sssd-ad CRON[27167]: (logonuser@tests.local) CMD (touch 
/tmp/crontest)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-04-24 Thread Victor Tapia
The fix is included in sssd 1.16.4, currently in debian experimental

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-04-24 Thread Victor Tapia
** Also affects: sssd (Ubuntu Eoan)
   Importance: Medium
 Assignee: Victor Tapia (vtapia)
   Status: In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Patch added: "xenial-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258671/+files/xenial-cups.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Patch added: "disco-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258668/+files/disco-cups.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Patch added: "bionic-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258670/+files/bionic-cups.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Patch added: "cosmic-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258669/+files/cosmic-cups.debdiff

** Patch removed: "disco-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258668/+files/disco-cups.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Patch added: "disco-cups.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+attachment/5258667/+files/disco-cups.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] Re: Race condition on boot between cups and sssd

2019-04-24 Thread Victor Tapia
** Description changed:

+ [Impact]
+ 
+  * When cups has set the "SystemGroup" directive to an external group
+ provided through sss and cups starts before sssd has finished booting,
+ cups will crash because the group does not exist.
+ 
+  * The patch adds an "After=sssd.service" clause to the service unit
+ file.
+ 
+ [Test Case]
+ 
+  * Configure an external authentication service (LDAP, AD...) and create
+ a group, for instance "lpadmins@tests.local"
+ 
+  * Set SystemGroup to match that group (SystemGroup =
+ "lpadmins@tests.local")
+ 
+  * Reboot
+ 
+  * If cups has started before sssd has finished booting, cups will crash:
+ Mar 27 10:10:33 cups-sssd cupsd[21463]: Unknown SystemGroup 
"lpadmins@tests.local" on line 19 of /etc/cups/cups-files.conf.
+ 
+  * If cups starts after sssd, it will work fine.
+ 
+ [Regression Potential]
+ 
+  * Minimal: this patch affects just the ordering of the service unit
+ file.
+ 
+ [Other Info]
+  
+  * Upstream: 
https://github.com/apple/cups/commit/4d0f1959a3f46973caec2cd41828c59674fe195d
+ 
+ [Original description]
+ 
  When cups has set the "SystemGroup" directive to an external group
  provided through sss and cups starts before sssd has finished booting,
  cups will crash because the group does not exist. For instance, with a
  group named lpadmins@tests.local served from Active Directory through
  sssd, if the sssd service hasn't booted before cups:
  
  Mar 27 10:10:33 cups-sssd systemd[1]: Started CUPS Scheduler.
  Mar 27 10:10:33 cups-sssd systemd[1]: Started CUPS Scheduler.
  Mar 27 10:10:33 cups-sssd systemd[1]: Started Make remote CUPS printers 
available locally.
  Mar 27 10:10:33 cups-sssd cupsd[21463]: Unknown SystemGroup 
"lpadmins@tests.local" on line 19 of /etc/cups/cups-files.conf.
  Mar 27 10:10:33 cups-sssd cupsd[21463]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
  Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Main process exited, 
code=exited, status=1/FAILURE
  Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Failed with result 
'exit-code'.
  Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Service hold-off time 
over, scheduling restart.
  Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Scheduled restart job, 
restart counter is at 2.
  Mar 27 10:10:33 cups-sssd systemd[1]: Stopping Make remote CUPS printers 
available locally...
  Mar 27 10:10:33 cups-sssd systemd[1]: Stopped Make remote CUPS printers 
available locally.
  Mar 27 10:10:33 cups-sssd systemd[1]: Stopped CUPS Scheduler.
  
  If sssd is running before cups starts, everything works as expected.
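
For anyone needing the ordering before the updated package lands, the same
effect can be sketched as a local systemd drop-in (the file name here is
illustrative, not part of the patch):

# /etc/systemd/system/cups.service.d/sssd-order.conf
[Unit]
After=sssd.service

$ sudo systemctl daemon-reload

Note that After= only orders the two units, it does not pull sssd in, so
machines without sssd installed keep starting cups as before (consistent with
the "sssd not installed" checks in the verifications above).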

** Also affects: cups (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: cups (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cups (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cups (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: cups (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Description changed:

  [Impact]
  
-  * When cups has set the "SystemGroup" directive to an external group
+  * When cups has set the "SystemGroup" directive to an external group
  provided through sss and cups starts before sssd has finished booting,
  cups will crash because the group does not exist.
  
-  * The patch adds an "After=sssd.service" clause to the service unit
+  * The patch adds an "After=sssd.service" clause to the service unit
  file.
  
  [Test Case]
  
-  * Configure an external authentication service (LDAP, AD...) and create
+  * Configure an external authentication service (LDAP, AD...) and create
  a group, for instance "lpadmins@tests.local"
  
-  * Set SystemGroup to match that group (SystemGroup =
- "lpadmins@tests.local")
+  * Set SystemGroup to match that group in /etc/cups/cups-files.conf:
+ SystemGroup lpadmins@tests.local
  
-  * Reboot
+  * Reboot
  
-  * If cups has started before sssd has finished booting, cups will crash:
+  * If cups has started before sssd has finished booting, cups will crash:
  Mar 27 10:10:33 cups-sssd cupsd[21463]: Unknown SystemGroup 
"lpadmins@tests.local" on line 19 of /etc/cups/cups-files.conf.
  
-  * If cups starts after sssd, it will work fine.
+  * If cups starts after sssd, it will work fine.
  
  [Regression Potential]
  
-  * Minimal: this patch affects just the ordering of the service unit
+  * Minimal: this patch affects just the ordering of the service unit
  file.
  
  [Other Info]
-  
-  * Upstream: 
https://github.com/apple/cups/commit/4d0f1959a3f46973caec2cd41828c59674fe195d
+ 
+  * Upstream:
+ https://github.com/apple/cups/commit/4d0f1959a3f46973caec2cd41828c59674fe195d
  
  [Original description]
  
  When cups has set the "SystemGroup" directive to an external group
  provided through sss and cups starts before sssd has finished booting,
  cups will crash because the group does not exist. For instance, with a
group named lpadmins@tests.local served from Active Directory through sssd.

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-04-23 Thread Victor Tapia
** Patch added: "eoan-sssd-gpo.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+attachment/5258263/+files/eoan-sssd-gpo.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1368411] Re: Cannot insert IPV6 rule before IPV4 rules

2019-03-29 Thread Victor Tapia
The fix works as expected in B/C:

#COSMIC

ubuntu@c-ufw:~$ dpkg -l | grep ufw
ii  ufw 0.36-0ubuntu0.18.10.1   all 
 program for managing a Netfilter firewall

ubuntu@c-ufw:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] Anywhere                   ALLOW IN    1.2.3.4
[ 3] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 4] Anywhere (v6)              ALLOW IN    2001:db8::/32

ubuntu@c-ufw:~$ sudo ufw prepend deny from 6.7.8.9
Rule inserted
ubuntu@c-ufw:~$ sudo ufw prepend deny from 2a02:2210:12:a:b820:fff:fea2:25d1
Rule inserted (v6)
ubuntu@c-ufw:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] Anywhere                   DENY IN     6.7.8.9
[ 2] 22/tcp                     ALLOW IN    Anywhere
[ 3] Anywhere                   ALLOW IN    1.2.3.4
[ 4] Anywhere (v6)              DENY IN     2a02:2210:12:a:b820:fff:fea2:25d1
[ 5] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 6] Anywhere (v6)              ALLOW IN    2001:db8::/32

#BIONIC

ubuntu@b-ufw:~$ dpkg -l | grep ufw
ii  ufw 0.36-0ubuntu0.18.04.1   
all  program for managing a Netfilter firewall

ubuntu@b-ufw:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] Anywhere                   ALLOW IN    1.2.3.4
[ 3] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 4] Anywhere (v6)              ALLOW IN    2001:db8::/32

ubuntu@b-ufw:~$ sudo ufw prepend allow from 2001:db8::/32
Skipping inserting existing rule (v6)
ubuntu@b-ufw:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] Anywhere                   ALLOW IN    1.2.3.4
[ 3] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 4] Anywhere (v6)              ALLOW IN    2001:db8::/32

ubuntu@b-ufw:~$ sudo ufw prepend deny from 6.7.8.9
Rule inserted
ubuntu@b-ufw:~$ sudo ufw prepend deny from 2a02:2210:12:a:b820:fff:fea2:25d1
Rule inserted (v6)

ubuntu@b-ufw:~$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] Anywhere                   DENY IN     6.7.8.9
[ 2] 22/tcp                     ALLOW IN    Anywhere
[ 3] Anywhere                   ALLOW IN    1.2.3.4
[ 4] Anywhere (v6)              DENY IN     2a02:2210:12:a:b820:fff:fea2:25d1
[ 5] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
[ 6] Anywhere (v6)              ALLOW IN    2001:db8::/32
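
As a usage note, "prepend" (new in ufw 0.36) inserts the rule at the head of
the matching address-family block. For IPv4 the old numbered form was roughly
equivalent (position assumed from the listing above):

$ sudo ufw insert 1 deny from 6.7.8.9

For IPv6 there was no equivalent, because "ufw insert" rejected positions
ahead of the IPv4 rules, which is exactly what this bug tracked.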

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1368411

Title:
  Cannot insert IPV6 rule before IPV4 rules

To manage notifications about this bug go to:
https://bugs.launchpad.net/ufw/+bug/1368411/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1822062] [NEW] Race condition on boot between cups and sssd

2019-03-28 Thread Victor Tapia
Public bug reported:

When cups has set the "SystemGroup" directive to an external group
provided through sss and cups starts before sssd has finished booting,
cups will crash because the group does not exist. For instance, with a
group named lpadmins@tests.local served from Active Directory through
sssd, if the sssd service hasn't booted before cups:

Mar 27 10:10:33 cups-sssd systemd[1]: Started CUPS Scheduler.
Mar 27 10:10:33 cups-sssd systemd[1]: Started CUPS Scheduler.
Mar 27 10:10:33 cups-sssd systemd[1]: Started Make remote CUPS printers 
available locally.
Mar 27 10:10:33 cups-sssd cupsd[21463]: Unknown SystemGroup 
"lpadmins@tests.local" on line 19 of /etc/cups/cups-files.conf.
Mar 27 10:10:33 cups-sssd cupsd[21463]: Unable to read 
"/etc/cups/cups-files.conf" due to errors.
Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Main process exited, 
code=exited, status=1/FAILURE
Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Failed with result 
'exit-code'.
Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Service hold-off time over, 
scheduling restart.
Mar 27 10:10:33 cups-sssd systemd[1]: cups.service: Scheduled restart job, 
restart counter is at 2.
Mar 27 10:10:33 cups-sssd systemd[1]: Stopping Make remote CUPS printers 
available locally...
Mar 27 10:10:33 cups-sssd systemd[1]: Stopped Make remote CUPS printers 
available locally.
Mar 27 10:10:33 cups-sssd systemd[1]: Stopped CUPS Scheduler.

If sssd is running before cups starts, everything works as expected.
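
The race can usually be approximated without rebooting (a sketch, assuming the
group resolves only through sssd's NSS responder, so lookups fail while it is
down):

$ sudo systemctl stop sssd
$ sudo systemctl restart cups    # group lookup fails; cupsd exits as in the log above
$ sudo systemctl start sssd
$ sudo systemctl restart cups    # starts cleanly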

** Affects: cups (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: sts

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1822062

Title:
  Race condition on boot between cups and sssd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1822062/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-03-11 Thread Victor Tapia
** Patch added: "bionic-sssd-gpo.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+attachment/5245457/+files/bionic-sssd-gpo.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-03-11 Thread Victor Tapia
** Tags added: sts

** Patch added: "disco-sssd-gpo.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+attachment/5245455/+files/disco-sssd-gpo.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-03-11 Thread Victor Tapia
** Patch added: "cosmic-sssd-gpo.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+attachment/5245456/+files/cosmic-sssd-gpo.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-03-11 Thread Victor Tapia
** Patch added: "xenial-sssd-gpo.debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+attachment/5245458/+files/xenial-sssd-gpo.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-02-28 Thread Victor Tapia
** Description changed:

  [Impact]
  
  SSSD has GPO_CROND set to "crond" in its code while Debian/Ubuntu use
  "cron" as a PAM service. This difference makes AD users have cron
  blocked by default, instead of having it enabled.
  
  [Test Case]
  
  - With an Active Directory user created (e.g. logonuser@TESTS.LOCAL),
  set a cron task:
  
  logonuser@tests.local@xenial-sssd-ad:~$ crontab -l | grep -v ^#
  * * * * * touch /tmp/crontest
  
  - If the default is set to "crond" the task is blocked:
  
  # ag pam /var/log/ | grep -i denied | head -n 2
  /var/log/auth.log.1:772:Feb 21 11:00:01 xenial-sssd-ad CRON[2387]: 
pam_sss(cron:account): Access denied for user logonuser@tests.local: 6 
(Permission denied)
  /var/log/auth.log.1:773:Feb 21 11:01:01 xenial-sssd-ad CRON[2390]: 
pam_sss(cron:account): Access denied for user logonuser@tests.local: 6 
(Permission denied)
  
  - Setting GPO_CROND to "cron" or adding "ad_gpo_map_batch = +cron" to
  the configuration file solves the issue.
  
  [Regression potential]
  
+ Minimal. The default value does not apply to Debian/Ubuntu, and those
+ who added a configuration option to circumvent the issue
+ ("ad_gpo_map_batch = +cron") will continue working after this patch is
+ applied.
+ 
  [Other Info]
+ 
+ Upstream commit: 
+ https://github.com/SSSD/sssd/commit/bc65ba9a07a924a58b13a0d5a935114ab72b7524
  
  [Original description]
  
  User cron jobs fail with "Access denied for user" (the logs below are from a
  German locale; "Zugriff verweigert" means "Permission denied"):
  
  Apr 21 11:05:02 edvlw08 CRON[6848]: pam_sss(cron:account): Access denied for 
user : 6 (Zugriff verweigert)
  Apr 21 11:05:02 edvlw08 CRON[6848]: Zugriff verweigert
  Apr 21 11:05:02 edvlw08 cron[965]: Zugriff verweigert
  
  SSSD-AD Login works, i see also my AD groups
  
  Description:Ubuntu 16.04 LTS
  Release:16.04
  
  sssd:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status
  sssd-ad:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status
  libpam-sss:
    Installed: 1.13.4-1ubuntu1
    Candidate: 1.13.4-1ubuntu1
    Version table:
   *** 1.13.4-1ubuntu1 500
  500 http://at.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status
  
  /etc/sssd/sssd.conf
  [sssd]
  services = nss, pam
  config_file_version = 2
  domains = test.at
  
  [nss]
  default_shell = /bin/false
  
  [domain/test.at]
  description = TEST - ActiveDirectory
  enumerate = false
  cache_credentials = true
  id_provider = ad
  auth_provider = ad
  chpass_provider = ad
  ad_domain = test.at
  access_provider = ad
  subdomains_provider = none
  ldap_use_tokengroups = false
  dyndns_update = true
  krb5_realm = TEST.AT
  krb5_store_password_if_offline = true
  ldap_id_mapping = false
  krb5_keytab = /etc/krb5.host.keytab
  ldap_krb5_keytab = /etc/krb5.host.keytab
  ldap_use_tokengroups = false
  ldap_referrals = false

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1572908] Re: sssd-ad pam_sss(cron:account): Access denied for user

2019-02-28 Thread Victor Tapia
** Changed in: sssd (Ubuntu Disco)
   Status: Expired => Confirmed

** Changed in: sssd (Ubuntu Disco)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1572908

Title:
  sssd-ad pam_sss(cron:account): Access denied for user

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1572908/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
