[Bug 1865340] Re: "secret" parameter not available in mod_proxy_ajp on focal

2020-03-04 Thread Andreas Hasenack
https://httpd.apache.org/docs/2.4/mod/mod_proxy_ajp.html seems to
indicate "secret" will be available in 2.4.42:

?secret  0x0C  String  Supported since 2.4.42

From https://bugzilla.redhat.com/show_bug.cgi?id=1397241 it looks like
Red Hat has had "secret" support for quite a while. That bug report links
to this changeset:

https://svn.apache.org/viewvc?view=revision&revision=1738878

Looks like this is the 2.4.42 commit:
https://github.com/apache/httpd/commit/d8b6d798c177dfdb90cef1a29395afcc043f3c86

With a follow-up doc update:
https://github.com/apache/httpd/commit/4de7604dd086c7bebdcab4ae9dbbec24b59edabc

I grabbed the above from the 2.4.x branch.
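
For reference, once a build with the fix is available, wiring up the shared
secret should look roughly like this (a sketch with hypothetical hostname and
secret value; the same secret has to be configured on both ends):

Apache side (mod_proxy_ajp, httpd >= 2.4.42):

  ProxyPass "/app" "ajp://tomcat.example.com:8009" secret=changeme

Tomcat side (AJP connector in server.xml, recent Tomcat releases):

  <Connector protocol="AJP/1.3" port="8009" secret="changeme" />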

** Bug watch added: Red Hat Bugzilla #1397241
   https://bugzilla.redhat.com/show_bug.cgi?id=1397241

** Changed in: apache2 (Ubuntu)
   Status: New => Triaged

** Changed in: apache2 (Ubuntu)
   Importance: Undecided => High

** Tags added: server-next

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to apache2 in Ubuntu.
https://bugs.launchpad.net/bugs/1865340

Title:
  "secret" parameter not available in mod_proxy_ajp on focal

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1865340/+subscriptions



[Bug 1865152] Re: SoftRoCE device disappears

2020-03-04 Thread Andreas Hasenack
Hello,

I just tried this on an eoan VM, but didn't see the same behavior you did. I ran:
rxe_cfg start
rxe_cfg add ens2 (my nic)

Now I have:
root@eoan:~# rxe_cfg status
  Name  Link  Driver      Speed  NMTU  IPv4_addr      RDEV  RMTU
  ens2  yes   virtio_net         1458  10.48.132.219  rxe0  1024  (3)

root@eoan:~# rxe_cfg persistent
ens2

Could you please double-check your steps and try to reproduce this on a clean
eoan VM? Please also attach logs, such as dmesg and /var/log/syslog. For
example, my dmesg recorded this:
[   59.829728] rdma_rxe: loaded
[   88.526483] infiniband rxe0: set active
[   88.526487] infiniband rxe0: added ens2
[   88.659926] Loading iSCSI transport class v2.0-870.
[   88.676175] iscsi: registered transport (iser)
[   88.711353] RPC: Registered named UNIX socket transport module.
[   88.711355] RPC: Registered udp transport module.
[   88.711356] RPC: Registered tcp transport module.
[   88.711357] RPC: Registered tcp NFSv4.1 backchannel transport module.
[   88.732917] RPC: Registered rdma transport module.
[   88.732919] RPC: Registered rdma backchannel transport module.
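
If it helps, this is roughly what I would collect after a failed run (a
sketch; adjust the interface name to match your system):

  # current soft-RoCE device state and persistence configuration
  rxe_cfg status
  rxe_cfg persistent
  # kernel messages from the rdma_rxe module and the rxe device
  dmesg | grep -Ei 'rxe|infiniband'
  # anything logged around the time the device disappeared
  tail -n 100 /var/log/syslog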


** Changed in: rdma-core (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to rdma-core in Ubuntu.
https://bugs.launchpad.net/bugs/1865152

Title:
  SoftRoCE device disappears

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rdma-core/+bug/1865152/+subscriptions



[Bug 1862602] Re: GPSD does not serve network clients even -G is used

2020-03-04 Thread Andreas Hasenack
That's a good suggestion, I'll reopen this bug with Wishlist importance.
Thanks!
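
For the documentation, the setup to cover would be something along these
lines (a sketch assuming the Debian/Ubuntu packaging; with systemd socket
activation, -G alone is not enough because gpsd.socket only binds 127.0.0.1
by default):

  # /etc/default/gpsd
  GPSD_OPTIONS="-G"

  # make the systemd socket listen on all interfaces
  sudo systemctl edit gpsd.socket
  #   [Socket]
  #   ListenStream=
  #   ListenStream=0.0.0.0:2947
  sudo systemctl restart gpsd.socket gpsd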

** Changed in: gpsd (Ubuntu)
   Status: Invalid => Triaged

** Changed in: gpsd (Ubuntu)
   Importance: Undecided => Wishlist

** Summary changed:

- GPSD does not serve network clients even -G is used
+ Better document how to listen on the network

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to gpsd in Ubuntu.
https://bugs.launchpad.net/bugs/1862602

Title:
  Better document how to listen on the network

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gpsd/+bug/1862602/+subscriptions



[Bug 1859773] Re: Apache DBD Auth not working with mysql Focal 20.04

2020-03-04 Thread Andreas Hasenack
PPA with builds for eoan and focal:
https://launchpad.net/~ahasenack/+archive/ubuntu/mysql8-my-init/

Note: the focal build does not have -proposed enabled.
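
To exercise the driver with these packages, a minimal mod_dbd snippet along
these lines should be enough (hypothetical connection parameters; any
reachable MySQL server will do):

  # e.g. /etc/apache2/conf-available/dbd-test.conf (requires "a2enmod dbd")
  DBDriver mysql
  DBDParams "host=localhost,user=apache,pass=secret,dbname=test"

With the broken package apache2 fails to start with "Can't load driver file
apr_dbd_mysql.so"; with the fixed build it should start cleanly.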

** Changed in: apr-util (Ubuntu Focal)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: apr-util (Ubuntu Focal)
   Status: Triaged => In Progress

** Changed in: apr-util (Ubuntu Eoan)
 Assignee: (unassigned) => Andreas Hasenack (ahasenack)

** Changed in: apr-util (Ubuntu Eoan)
   Status: Triaged => In Progress

** Description changed:

+ [Impact]
+ 
+  * An explanation of the effects of the bug on users and
+ 
+  * justification for backporting the fix to the stable release.
+ 
+  * In addition, it is helpful, but not required, to include an
+explanation of how the upload fixes this bug.
+ 
+ [Test Case]
+ 
+  * detailed instructions how to reproduce the bug
+ 
+  * these should allow someone who is not familiar with the affected
+package to reproduce the bug and verify that the updated package fixes
+the problem.
+ 
+ [Regression Potential]
+ 
+  * discussion of how regressions are most likely to manifest as a result
+ of this change.
+ 
+  * It is assumed that any SRU candidate patch is well-tested before
+upload and has a low overall risk of regression, but it's important
+to make the effort to think about what ''could'' happen in the
+event of a regression.
+ 
+  * This both shows the SRU team that the risks have been considered,
+and provides guidance to testers in regression-testing the SRU.
+ 
+ [Other Info]
+  
+  * Anything else you think is useful to include
+  * Anticipate questions from users, SRU, +1 maintenance, security teams
+    and the Technical Board
+  * and address these questions in advance
+ 
+ [Original Description]
+ 
  Using Apache DBD auth for MySQL server 8 causes an error in Focal 20.04
  Same setup is working fine in 16.04 and 18.04
  
  The following line in Apache2/conf-enabled/ causes the error:
  DBDriver mysql
  
  apache start failed:
  apachectl[1188]: Can't load driver file apr_dbd_mysql.so
  
  The file exists in
  /usr/lib/x86_64-linux-gnu/apr-util-1/apr_dbd_mysql.so
  
  linking the file to
  /lib/apache2/modules/
  doesn't change the error message

** Summary changed:

- Apache DBD Auth not working with mysql  Focal 20.04
+ Apache DBD Auth not working with mysql

** Description changed:

  [Impact]
+ The MySQL dbd driver fails to load into apache. This causes apache2 to
+ fail to start if it's configured in this way.
  
-  * An explanation of the effects of the bug on users and
+ Since MySQL 8.0.2[1], the my_init() function is no longer exported, and
+ it's not expected that clients will call it.
  
-  * justification for backporting the fix to the stable release.
+ Confirming that, the current build logs for apr-util show:
+ /home/ubuntu/deb/apr-util/apr-util-1.6.1/dbd/apr_dbd_mysql.c:1267:5:
+ warning: implicit declaration of function ‘my_init’; did you mean
+ ‘mysql_init’? [-Wimplicit-function-declaration]
+  1267 |     my_init();
+       |     ^~~~~~~
+       |     mysql_init
  
-  * In addition, it is helpful, but not required, to include an
-explanation of how the upload fixes this bug.
+ Furthermore, they also show[2] that loading the mysql driver failed, but
+ that doesn't cause the build to fail (unknown reason: not addressed in
+ this update):
+ 
+ (...)
+ Loaded pgsql driver OK.
+ Failed to open pgsql[]
+ Failed to load driver file apr_dbd_mysql.so <<<
+ Loaded sqlite3 driver OK.
+ (...)
+ 
+ The fix is to not call my_init(). This was confirmed with MySQL
+ upstream.
+ 
  
  [Test Case]
  
-  * detailed instructions how to reproduce the bug
+  * detailed instructions how to reproduce the bug
  
-  * these should allow someone who is not familiar with the affected
-package to reproduce the bug and verify that the updated package fixes
-the problem.
+  * these should allow someone who is not familiar with the affected
+    package to reproduce the bug and verify that the updated package fixes
+    the problem.
  
  [Regression Potential]
  
-  * discussion of how regressions are most likely to manifest as a result
+  * discussion of how regressions are most likely to manifest as a result
  of this change.
  
-  * It is assumed that any SRU candidate patch is well-tested before
-upload and has a low overall risk of regression, but it's important
-to make the effort to think about what ''could'' happen in the
-event of a regression.
+  * It is assumed that any SRU candidate patch is well-tested before
+    upload and has a low overall risk of regression, but it's important
+    to make the effort to think about what ''could'' happen in the
+    event of a regression.
  
-  * This both shows the SRU team that the risks have been considered,
-and provides guidance to testers in regression-testing the SRU.
+  * This both shows the SRU team that the risks have been considered,
+    and provides guidance to testers in regression-testing the SRU.

[Bug 1866119] Re: [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-04 Thread Rafael David Tinoco
** Description changed:

+ Note: this bug was originally filed as part of LP: #1865523 but was split out.
+ 
+  SRU: pacemaker
+ 
+ [Impact]
+ 
+  * fence_scsi is not currently working in a shared-disk environment
+ 
+  * all clusters relying on fence_scsi and/or fence_scsi + watchdog won't
+ be able to start the fencing agents or, in the worst case, the
+ fence_scsi agent might start but won't make SCSI reservations on the
+ shared disk.
+ 
+  * this bug takes care of the pacemaker 1.1.18 issues with fence_scsi,
+ since the latter was fixed in LP: #1865523.
+ 
+ [Test Case]
+ 
+  * having a 3-node setup, nodes called "clubionic01, clubionic02,
+ clubionic03", with a shared scsi disk (fully supporting persistent
+ reservations) /dev/sda, with corosync and pacemaker operational and
+ running, one might try:
+ 
+ rafaeldtinoco@clubionic01:~$ crm configure
+ crm(live)configure# property stonith-enabled=on
+ crm(live)configure# property stonith-action=off
+ crm(live)configure# property no-quorum-policy=stop
+ crm(live)configure# property have-watchdog=true
+ crm(live)configure# commit
+ crm(live)configure# end
+ crm(live)# end
+ 
+ rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
+ stonith:fence_scsi params \
+ pcmk_host_list="clubionic01 clubionic02 clubionic03" \
+ devices="/dev/sda" \
+ meta provides=unfencing
+ 
+ And see the following errors:
+ 
+ Failed Actions:
+ * fence_clubionic_start_0 on clubionic02 'unknown error' (1): call=6, status=Error, exitreason='',
+     last-rc-change='Wed Mar  4 19:53:12 2020', queued=0ms, exec=1105ms
+ * fence_clubionic_start_0 on clubionic03 'unknown error' (1): call=6, status=Error, exitreason='',
+     last-rc-change='Wed Mar  4 19:53:13 2020', queued=0ms, exec=1109ms
+ * fence_clubionic_start_0 on clubionic01 'unknown error' (1): call=6, status=Error, exitreason='',
+     last-rc-change='Wed Mar  4 19:53:11 2020', queued=0ms, exec=1108ms
+ 
+ and corosync.log will show:
+ 
+ warning: unpack_rsc_op_failure: Processing failed op start for
+ fence_clubionic on clubionic01: unknown error (1)
+ 
+ [Regression Potential]
+ 
+  * LP: #1865523 shows fence_scsi fully operational after the SRU for
+ that bug is done.
+ 
+  * LP: #1865523 used pacemaker 1.1.19 (vanilla) in order to fix
+ fence_scsi.
+ 
+  * TODO
+ 
+ [Other Info]
+ 
+  * Original Description:
+ 
  Trying to set up a cluster with an iscsi shared disk, using fence_scsi as
  the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment (LP:
  #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing the
  shared scsi disk properly.
  
  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit has fixed the behaviour needed by the
  fence_scsi agent.
  
  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
- params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
- meta provides=unfencing
+ params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
+ meta provides=unfencing
  property cib-bootstrap-options: \
- have-watchdog=false \
- dc-version=1.1.18-2b07d5c5a9 \
- cluster-infrastructure=corosync \
- cluster-name=clubionic \
- stonith-enabled=on \
- stonith-action=off \
- no-quorum-policy=stop \
- symmetric-cluster=true
+ have-watchdog=false \
+ dc-version=1.1.18-2b07d5c5a9 \
+ cluster-infrastructure=corosync \
+ cluster-name=clubionic \
+ stonith-enabled=on \
+ stonith-action=off \
+ no-quorum-policy=stop \
+ symmetric-cluster=true
  
  
  
  (k)rafaeldtinoco@clubionic02:~$ sudo crm_mon -1
  Stack: corosync
  Current DC: clubionic01.private (version 1.1.18-2b07d5c5a9) - partition with quorum
  Last updated: Mon Mar 2 15:55:30 2020
  Last change: Mon Mar 2 15:45:33 2020 by root via cibadmin on clubionic01.private
  
  3 nodes configured
  1 resource configured
  
  Online: [ clubionic01.private clubionic02.private clubionic03.private ]
  
  Active resources:
  
-  fence_clubionic (stonith:fence_scsi): Started clubionic01.private
+  fence_clubionic (stonith:fence_scsi): Started clubionic01.private
  
  
  
  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys --device=/dev/sda
-   LIO-ORG cluster.bionic. 4.0
-   Peripheral device type: disk
-   PR generation=0x0, there are NO registered reservation keys
+   LIO-ORG cluster.bionic. 4.0
+   Peripheral device type: disk
+   PR generation=0x0, there are NO registered reservation keys

[Bug 1865523] Re: [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-04 Thread Rafael David Tinoco
** No longer affects: pacemaker (Ubuntu)

** No longer affects: pacemaker (Ubuntu Bionic)

** Description changed:

+ Note: I have split this bug into two bugs:
+  - fence-agents (this one) and pacemaker (LP: #1866119)
+ 
   SRU: fence-agents
  
  [Impact]
  
   * fence_scsi is not currently working in a shared-disk environment
  
   * all clusters relying on fence_scsi and/or fence_scsi + watchdog won't
  be able to start the fencing agents or, in the worst case, the
  fence_scsi agent might start but won't make SCSI reservations on the
  shared disk.
  
  [Test Case]
  
   * having a 3-node setup, nodes called "clubionic01, clubionic02,
  clubionic03", with a shared scsi disk (fully supporting persistent
  reservations) /dev/sda, one might try the following command:
  
  sudo fence_scsi --verbose -n clubionic01 -d /dev/sda -k 3abe -o off
  
  from nodes "clubionic02 or clubionic03" and check if the reservation
  worked:
  
  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys --device=/dev/sda
    LIO-ORG   cluster.bionic.   4.0
    Peripheral device type: disk
    PR generation=0x0, there are NO registered reservation keys
  
  (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist -r /dev/sda
    LIO-ORG   cluster.bionic.   4.0
    Peripheral device type: disk
    PR generation=0x0, there is NO reservation held
  
   * having a 3-node setup, nodes called "clubionic01, clubionic02,
  clubionic03", with a shared scsi disk (fully supporting persistent
  reservations) /dev/sda, with corosync and pacemaker operational and
  running, one might try:
  
  rafaeldtinoco@clubionic01:~$ crm configure
  crm(live)configure# property stonith-enabled=on
  crm(live)configure# property stonith-action=off
  crm(live)configure# property no-quorum-policy=stop
  crm(live)configure# property have-watchdog=true
  crm(live)configure# property symmetric-cluster=true
  crm(live)configure# commit
  crm(live)configure# end
  crm(live)# end
  
  rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
  stonith:fence_scsi params \
  pcmk_host_list="clubionic01 clubionic02 clubionic03" \
  devices="/dev/sda" \
  meta provides=unfencing
  
  And see that crm_mon won't show fence_clubionic resource operational.
  
  [Regression Potential]
  
-  * Comments #3 and #4 show this new version fully working.
-  
-  * This fix has a potential of breaking other "nowadays working" fencing
- agent. If that happens, I suggest that ones affected revert previous to
- previous package AND open a bug against either pacemaker and/or fence-agents.
+  * Comments #3 and #4 show this new version fully working.
+ 
+  * This fix has the potential of breaking other currently-working fencing
+ agents. If that happens, I suggest that those affected revert to the
+ previous package AND open a bug against either pacemaker and/or
+ fence-agents.
  
   * Judging by this issue, it is very likely that any Ubuntu user who
  tried fence_scsi has migrated to a newer release, because the
  fence_scsi agent has been broken since its release.
  
   * The way I fixed fence_scsi was this:
  
  I packaged pacemaker at the latest 1.1.x version and kept it "vanilla" so I
  could bisect fence-agents. At that moment I realized that bisecting was
  going to be hard because there were multiple issues, not only one. I
  backported the latest fence-agents together with Pacemaker 1.1.19-0 and
  saw that it worked.
  
  From then on, I bisected the following intervals:
  
  4.3.0 .. 4.4.0 (eoan - working)
  4.2.0 .. 4.3.0
  4.1.0 .. 4.2.0
  4.0.25 .. 4.1.0 (bionic - not working)
  
  In each of those intervals I discovered issues. For example, using 4.3.0
  I faced problems so I had to backport fixes that were in between 4.4.0
  and 4.3.0. Then, backporting 4.2.0, I faced issues so I had to backport
  fixes from the 4.3.0 <-> 4.2.0 interval. I did this until I was at
  4.0.25 version, current Bionic fence-agents version.
  
  [Other Info]
  
   * Original Description:
  
  Trying to set up a cluster with an iscsi shared disk, using fence_scsi as
  the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment (LP:
  #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing the
  shared scsi disk properly.
  
  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit has fixed the behaviour needed by the
  fence_scsi agent.
  
  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
  params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
  meta provides=unfencing

[Bug 1866119] [NEW] [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-04 Thread Rafael David Tinoco
Public bug reported:

Note: this bug was originally filed as part of LP: #1865523 but was split out.

 SRU: pacemaker

[Impact]

 * fence_scsi is not currently working in a shared-disk environment

 * all clusters relying on fence_scsi and/or fence_scsi + watchdog won't
be able to start the fencing agents or, in the worst case, the
fence_scsi agent might start but won't make SCSI reservations on the
shared disk.

 * this bug takes care of the pacemaker 1.1.18 issues with fence_scsi,
since the latter was fixed in LP: #1865523.

[Test Case]

 * having a 3-node setup, nodes called "clubionic01, clubionic02,
clubionic03", with a shared scsi disk (fully supporting persistent
reservations) /dev/sda, with corosync and pacemaker operational and
running, one might try:

rafaeldtinoco@clubionic01:~$ crm configure
crm(live)configure# property stonith-enabled=on
crm(live)configure# property stonith-action=off
crm(live)configure# property no-quorum-policy=stop
crm(live)configure# property have-watchdog=true
crm(live)configure# commit
crm(live)configure# end
crm(live)# end

rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
stonith:fence_scsi params \
pcmk_host_list="clubionic01 clubionic02 clubionic03" \
devices="/dev/sda" \
meta provides=unfencing

And see the following errors:

Failed Actions:
* fence_clubionic_start_0 on clubionic02 'unknown error' (1): call=6, status=Error, exitreason='',
    last-rc-change='Wed Mar  4 19:53:12 2020', queued=0ms, exec=1105ms
* fence_clubionic_start_0 on clubionic03 'unknown error' (1): call=6, status=Error, exitreason='',
    last-rc-change='Wed Mar  4 19:53:13 2020', queued=0ms, exec=1109ms
* fence_clubionic_start_0 on clubionic01 'unknown error' (1): call=6, status=Error, exitreason='',
    last-rc-change='Wed Mar  4 19:53:11 2020', queued=0ms, exec=1108ms

and corosync.log will show:

warning: unpack_rsc_op_failure: Processing failed op start for
fence_clubionic on clubionic01: unknown error (1)
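
A quick way to double-check whether the agent actually made its reservations
is to reuse the sg_persist commands from the original description below; on a
healthy cluster each node's key should show up as registered:

  # list registered reservation keys on the shared disk
  sudo sg_persist --in --read-keys --device=/dev/sda
  # show the current reservation holder
  sudo sg_persist -r /dev/sda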

[Regression Potential]

 * LP: #1865523 shows fence_scsi fully operational after the SRU for that
bug is done.

 * LP: #1865523 used pacemaker 1.1.19 (vanilla) in order to fix
fence_scsi.

 * TODO

[Other Info]

 * Original Description:

Trying to set up a cluster with an iscsi shared disk, using fence_scsi as
the fencing mechanism, I realized that fence_scsi is not working in
Ubuntu Bionic. I first thought it was related to Azure environment (LP:
#1864419), where I was trying this environment, but then, trying
locally, I figured out that somehow pacemaker 1.1.18 is not fencing the
shared scsi disk properly.

Note: I was able to "backport" vanilla 1.1.19 from upstream and
fence_scsi worked. I then tried 1.1.18 without all the quilt patches
and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
might tell us which commit has fixed the behaviour needed by the
fence_scsi agent.

(k)rafaeldtinoco@clubionic01:~$ crm conf show
node 1: clubionic01.private
node 2: clubionic02.private
node 3: clubionic03.private
primitive fence_clubionic stonith:fence_scsi \
params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
meta provides=unfencing
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.18-2b07d5c5a9 \
cluster-infrastructure=corosync \
cluster-name=clubionic \
stonith-enabled=on \
stonith-action=off \
no-quorum-policy=stop \
symmetric-cluster=true



(k)rafaeldtinoco@clubionic02:~$ sudo crm_mon -1
Stack: corosync
Current DC: clubionic01.private (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Mon Mar 2 15:55:30 2020
Last change: Mon Mar 2 15:45:33 2020 by root via cibadmin on clubionic01.private

3 nodes configured
1 resource configured

Online: [ clubionic01.private clubionic02.private clubionic03.private ]

Active resources:

 fence_clubionic (stonith:fence_scsi): Started clubionic01.private



(k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys --device=/dev/sda
  LIO-ORG cluster.bionic. 4.0
  Peripheral device type: disk
  PR generation=0x0, there are NO registered reservation keys

(k)rafaeldtinoco@clubionic02:~$ sudo sg_persist -r /dev/sda
  LIO-ORG cluster.bionic. 4.0
  Peripheral device type: disk
  PR generation=0x0, there is NO reservation held

** Affects: pacemaker (Ubuntu)
 Importance: Undecided
 Status: Fix Released

** Affects: pacemaker (Ubuntu Bionic)
 Importance: Undecided
 Status: In Progress

** Also affects: pacemaker (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: pacemaker (Ubuntu)
   Status: New => Fix Released

** Changed in: pacemaker (Ubuntu Bionic)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1866119

Title:
  [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

[Bug 1859773] Re: Apache DBD Auth not working with mysql Focal 20.04

2020-03-04 Thread Andreas Hasenack
** Also affects: apr-util (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: apache2 (Ubuntu)

** Changed in: apr-util (Ubuntu Eoan)
   Status: New => Triaged

** Changed in: apr-util (Ubuntu Focal)
   Status: New => Triaged

** Changed in: apr-util (Ubuntu Eoan)
   Importance: Undecided => High

** Changed in: apr-util (Ubuntu Focal)
   Importance: Undecided => High

** No longer affects: apache2 (Ubuntu Eoan)

** No longer affects: apache2 (Ubuntu Focal)

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1859773

Title:
  Apache DBD Auth not working with mysql  Focal 20.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apr-util/+bug/1859773/+subscriptions



[Bug 1859773] Re: Apache DBD Auth not working with mysql Focal 20.04

2020-03-04 Thread Andreas Hasenack
The build logs show this:
/home/ubuntu/deb/apr-util/apr-util-1.6.1/dbd/apr_dbd_mysql.c:1267:5: warning:
implicit declaration of function ‘my_init’; did you mean ‘mysql_init’?
[-Wimplicit-function-declaration]
 1267 |     my_init();
      |     ^~~~~~~
      |     mysql_init

Checking the mysql 8 release notes at 
https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-2.html we see:
"""
The my_init() function is no longer included in the list of symbols exported 
from libmysqlclient. It need not be called explicitly by client programs 
because it is called implicitly by other C API initialization functions. 
"""

In the dbd/apr_dbd_mysql.c code, it is called like this:
static void dbd_mysql_init(apr_pool_t *pool)
{
#if MYSQL_VERSION_ID < 100000
    my_init();
#endif
    mysql_thread_init();
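
With MySQL 8 headers the guard above is still true (their version id is below
100000), so the call gets compiled in against a library that no longer
exports the symbol. A minimal sketch of the fix described here (not
necessarily the exact patch that will be uploaded) is to simply drop the
call:

static void dbd_mysql_init(apr_pool_t *pool)
{
    /* my_init() is no longer exported by libmysqlclient since MySQL
     * 8.0.2 and is called implicitly by the other C API initialization
     * functions, so it can be dropped here */
    mysql_thread_init();
}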

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1859773

Title:
  Apache DBD Auth not working with mysql  Focal 20.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1859773/+subscriptions



[Bug 1859773] Re: Apache DBD Auth not working with mysql Focal 20.04

2020-03-04 Thread Andreas Hasenack
Dropping that my_init() call fixed the loading of the module for me.
I'll publish a PPA for people to test.

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1859773

Title:
  Apache DBD Auth not working with mysql  Focal 20.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1859773/+subscriptions



[Bug 1865523] Re: [bionic] fence_scsi not working properly with 1.1.18-2ubuntu1.1

2020-03-04 Thread Rafael David Tinoco
** Description changed:

+  SRU: fence-agents
+ 
+ [Impact]
+ 
+  * fence_scsi is not currently working in a shared-disk environment
+ 
+  * all clusters relying on fence_scsi and/or fence_scsi + watchdog won't
+ be able to start the fencing agents or, in the worst case, the
+ fence_scsi agent might start but won't make SCSI reservations on the
+ shared disk.
+ 
+ [Test Case]
+ 
+  * having a 3-node setup, nodes called "clubionic01, clubionic02,
+ clubionic03", with a shared scsi disk (fully supporting persistent
+ reservations) /dev/sda, one might try the following command:
+ 
+ sudo fence_scsi --verbose -n clubionic01 -d /dev/sda -k 3abe -o off
+ 
+ from nodes "clubionic02 or clubionic03" and check if the reservation
+ worked:
+ 
+ (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist --in --read-keys --device=/dev/sda
+   LIO-ORG   cluster.bionic.   4.0
+   Peripheral device type: disk
+   PR generation=0x0, there are NO registered reservation keys
+ 
+ (k)rafaeldtinoco@clubionic02:~$ sudo sg_persist -r /dev/sda
+   LIO-ORG   cluster.bionic.   4.0
+   Peripheral device type: disk
+   PR generation=0x0, there is NO reservation held
+ 
+  * having a 3-node setup, nodes called "clubionic01, clubionic02,
+ clubionic03", with a shared scsi disk (fully supporting persistent
+ reservations) /dev/sda, with corosync and pacemaker operational and
+ running, one might try:
+ 
+ rafaeldtinoco@clubionic01:~$ crm configure
+ crm(live)configure# property stonith-enabled=on
+ crm(live)configure# property stonith-action=off
+ crm(live)configure# property no-quorum-policy=stop
+ crm(live)configure# property have-watchdog=true
+ crm(live)configure# property symmetric-cluster=true
+ crm(live)configure# commit
+ crm(live)configure# end
+ crm(live)# end
+ 
+ rafaeldtinoco@clubionic01:~$ crm configure primitive fence_clubionic \
+ stonith:fence_scsi params \
+ pcmk_host_list="clubionic01 clubionic02 clubionic03" \
+ devices="/dev/sda" \
+ meta provides=unfencing
+ 
+ And see that crm_mon won't show fence_clubionic resource operational.
+ 
+ [Regression Potential]
+ 
+  * Judging by this issue, it is very likely that any Ubuntu user who
+ tried fence_scsi has migrated to a newer release, because the
+ fence_scsi agent has been broken since its release.
+ 
+  * The way I fixed fence_scsi was this:
+ 
+ I packaged pacemaker at the latest 1.1.x version and kept it "vanilla" so I
+ could bisect fence-agents. At that moment I realized that bisecting was
+ going to be hard because there were multiple issues, not only one. I
+ backported the latest fence-agents together with Pacemaker 1.1.19-0 and
+ saw that it worked.
+ 
+ From then on, I bisected the following intervals:
+ 
+ 4.3.0 .. 4.4.0 (eoan - working)
+ 4.2.0 .. 4.3.0
+ 4.1.0 .. 4.2.0
+ 4.0.25 .. 4.1.0 (bionic - not working)
+ 
+ In each of those intervals I discovered issues. For example, using 4.3.0
+ I faced problems so I had to backport fixes that were in between 4.4.0
+ and 4.3.0. Then, backporting 4.2.0, I faced issues so I had to backport
+ fixes from the 4.3.0 <-> 4.2.0 interval. I did this until I was at
+ 4.0.25 version, current Bionic fence-agents version.
+ 
+ [Other Info]
+  
+  * Original Description:
+ 
  Trying to set up a cluster with an iscsi shared disk, using fence_scsi as
  the fencing mechanism, I realized that fence_scsi is not working in
  Ubuntu Bionic. I first thought it was related to Azure environment (LP:
  #1864419), where I was trying this environment, but then, trying
  locally, I figured out that somehow pacemaker 1.1.18 is not fencing the
  shared scsi disk properly.
  
  Note: I was able to "backport" vanilla 1.1.19 from upstream and
  fence_scsi worked. I then tried 1.1.18 without all the quilt patches
  and it didn't work either. I think that bisecting 1.1.18 <-> 1.1.19
  might tell us which commit has fixed the behaviour needed by the
  fence_scsi agent.
  
  (k)rafaeldtinoco@clubionic01:~$ crm conf show
  node 1: clubionic01.private
  node 2: clubionic02.private
  node 3: clubionic03.private
  primitive fence_clubionic stonith:fence_scsi \
- params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
- meta provides=unfencing
+ params pcmk_host_list="10.250.3.10 10.250.3.11 10.250.3.12" devices="/dev/sda" \
+ meta provides=unfencing
  property cib-bootstrap-options: \
- have-watchdog=false \
- dc-version=1.1.18-2b07d5c5a9 \
- cluster-infrastructure=corosync \
- cluster-name=clubionic \
- stonith-enabled=on \
- stonith-action=off \
- no-quorum-policy=stop \
- symmetric-cluster=true
+ have-watchdog=false \
+ dc-version=1.1.18-2b07d5c5a9 \
+ cluster-infrastructure=corosync \
+ cluster-name=clubionic \
+ stonith-enabled=on \
+ stonith-action=off \
+ no-quorum-policy=stop \
+ symmetric-cluster=true
  
  
  
  (k)rafaeldtinoco@clubionic02:~$ sudo crm_mon -1

[Bug 1865218] Re: mod_php gets disabled during do-release-upgrade

2020-03-04 Thread Andreas Hasenack
Thanks for the reproduction steps, marking as triaged.

** Changed in: php-defaults (Ubuntu)
   Status: New => Triaged

** Changed in: php-defaults (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to php-defaults in Ubuntu.
https://bugs.launchpad.net/bugs/1865218

Title:
  mod_php gets disabled during do-release-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/php-defaults/+bug/1865218/+subscriptions



[Bug 1864555] Re: focal: upgrade if multipath-tools results in modprobe error messages

2020-03-04 Thread Andreas Hasenack
*** This bug is a duplicate of bug 1864992 ***
https://bugs.launchpad.net/bugs/1864992

I believe this is a duplicate of
https://bugs.launchpad.net/curtin/+bug/1864992 and is not related to
multipath.

** This bug has been marked a duplicate of bug 1864992
   depmod: ERROR: ../libkmod/libkmod.c:515 lookup_builtin_file() could not open builtin file '/lib/modules/5.4.0-14-generic/modules.builtin.bin'

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to multipath-tools in Ubuntu.
https://bugs.launchpad.net/bugs/1864555

Title:
  focal: upgrade if multipath-tools results in modprobe error messages

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1864555/+subscriptions



[Bug 1864338] Re: samba does not install cleanly on machine with no IPv4 address

2020-03-04 Thread Andreas Hasenack
*** This bug is a duplicate of bug 1731502 ***
https://bugs.launchpad.net/bugs/1731502

This is bug #1731502, also filed upstream at
https://bugzilla.samba.org/show_bug.cgi?id=13111

** Bug watch added: Samba Bugzilla #13111
   https://bugzilla.samba.org/show_bug.cgi?id=13111

** This bug has been marked a duplicate of bug 1731502
   nmbd starts fine with no interfaces, but doesn't notify systemd that it started

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1864338

Title:
  samba does not install cleanly on machine with no IPv4 address

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1864338/+subscriptions



[Bug 1825712] Re: bind9 is compiled without support for EdDSA DNSSEC keys

2020-03-04 Thread Andreas Hasenack
It's a valid request; I'm just not sure whether the version of bind9 in
bionic is good enough for this support. I vaguely remember reading
somewhere that certain key algorithms were not working well in certain
versions of bind9 (sorry, very vague, I know). Because of that I'm
confirming the bug, but this would have to be investigated.
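
A quick way to check whether a given bind9 build supports EdDSA would be
something like this (a sketch; key generation should only succeed if the
build was compiled against an OpenSSL with Ed25519 support):

  # only succeeds on builds with Ed25519 (RFC 8080) support
  dnssec-keygen -a ED25519 example.com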

** Changed in: bind9 (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: bind9 (Ubuntu Bionic)
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1825712

Title:
  bind9 is compiled without support for EdDSA DNSSEC keys

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bind9/+bug/1825712/+subscriptions



[Bug 1613423] Re: Mitaka + Trusty (kernel 3.13) not using apparmor capability by default, when it does, live migration doesn't work (/tmp/memfd-XXX can't be created)

2020-03-04 Thread Rafael David Tinoco
The preferred way to fix this was:

commit 0d34fbabc13891da41582b0823867dc5733fffef
Author: Rafael David Tinoco 
Date:   Mon Oct 24 15:35:03 2016

vhost: migration blocker only if shared log is used

Commit 31190ed7 added a migration blocker in vhost_dev_init() to
check if memfd would succeed. It is better if this blocker first
checks if vhost backend requires shared log. This will avoid a
situation where a blocker is added inappropriately (e.g. shared
log allocation fails when vhost backend doesn't support it).

Signed-off-by: Rafael David Tinoco 
Reviewed-by: Marc-André Lureau 
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Michael S. Tsirkin 

It was accepted upstream, so I'm closing this bug.

** Changed in: libvirt (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to libvirt in Ubuntu.
https://bugs.launchpad.net/bugs/1613423

Title:
  Mitaka + Trusty (kernel 3.13) not using apparmor capability by
  default, when it does, live migration doesn't work (/tmp/memfd-XXX
  can't be created)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1613423/+subscriptions



[Bug 1613423] Re: Mitaka + Trusty (kernel 3.13) not using apparmor capability by default, when it does, live migration doesn't work (/tmp/memfd-XXX can't be created)

2020-03-04 Thread Rafael David Tinoco
QEMU BUG: https://bugs.launchpad.net/qemu/+bug/1626972

** Changed in: libvirt (Ubuntu)
   Status: Fix Released => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Server, which is subscribed to libvirt in Ubuntu.
https://bugs.launchpad.net/bugs/1613423

Title:
  Mitaka + Trusty (kernel 3.13) not using apparmor capability by
  default, when it does, live migration doesn't work (/tmp/memfd-XXX
  can't be created)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1613423/+subscriptions
