[Group.of.nepali.translators] [Bug 520546] Re: [SRU]Alt+KEY incorrectly behaves like Ctrl+Alt+KEY, and/or unwanted VT switch from Alt+Left/Right

2020-12-09 Thread Rafael David Tinoco
I have just faced this, and kbd_mode -s also fixed the issue for me. The
behavior started out of nowhere (most likely something I did without
noticing). I'm using:

[rafaeldtinoco@fujitsu ~]$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 20.10
Release:20.10
Codename:   groovy

apt-history shows the last apt command was run yesterday (on my host), and
everything was working fine yesterday. Unfortunately I have no time to dig
into this now, but I wanted to state that this still happens in 20.10 (and
possibly 20.04, according to @ribalkin's report).
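
For reference, the workaround mentioned above is simply switching the VT
keyboard back to raw (scancode) mode; a minimal sketch (the console device
below is only an example, point it at the affected VT):

  sudo kbd_mode -s -C /dev/tty1    # set raw (scancode) mode
  sudo kbd_mode -C /dev/tty1       # print the current mode to verify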

** Also affects: kbd (Ubuntu Lucid)
   Importance: Undecided
   Status: New

** Also affects: xorg-server (Ubuntu Lucid)
   Importance: Undecided
   Status: New

** Also affects: console-setup (Ubuntu Lucid)
   Importance: Undecided
   Status: New

** Also affects: kbd (Ubuntu Maverick)
   Importance: Undecided
   Status: New

** Also affects: xorg-server (Ubuntu Maverick)
   Importance: Undecided
   Status: New

** Also affects: console-setup (Ubuntu Maverick)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/520546

Title:
  [SRU]Alt+KEY incorrectly behaves like Ctrl+Alt+KEY, and/or unwanted VT
  switch from Alt+Left/Right

Status in console-setup package in Ubuntu:
  Fix Released
Status in kbd package in Ubuntu:
  Fix Released
Status in xorg-server package in Ubuntu:
  Invalid
Status in console-setup source package in Lucid:
  New
Status in kbd source package in Lucid:
  New
Status in xorg-server source package in Lucid:
  New
Status in console-setup source package in Maverick:
  New
Status in kbd source package in Maverick:
  New
Status in xorg-server source package in Maverick:
  New
Status in console-setup source package in Xenial:
  Confirmed
Status in kbd source package in Xenial:
  Won't Fix
Status in xorg-server source package in Xenial:
  Confirmed
Status in console-setup source package in Bionic:
  Fix Released
Status in kbd source package in Bionic:
  Won't Fix
Status in linux source package in Bionic:
  Confirmed
Status in xorg-server source package in Bionic:
  Invalid
Status in console-setup source package in Cosmic:
  Fix Released
Status in kbd source package in Cosmic:
  Won't Fix
Status in linux source package in Cosmic:
  Won't Fix
Status in xorg-server source package in Cosmic:
  Invalid
Status in console-setup source package in Disco:
  Fix Released
Status in kbd source package in Disco:
  Won't Fix
Status in linux source package in Disco:
  Won't Fix
Status in xorg-server source package in Disco:
  Invalid
Status in console-setup source package in Eoan:
  Fix Released
Status in kbd source package in Eoan:
  Fix Released
Status in linux source package in Eoan:
  Confirmed
Status in xorg-server source package in Eoan:
  Invalid

Bug description:
  (kbd)
  [Impact]

   * kbd_mode -u is documented to break keyboards that are in modes other than
xlate and unicode, while it is still called by some scripts. Those scripts are
called transitively by maintainer scripts such as the one already fixed in
console-setup.
   * To avoid accidentally breaking keyboards, a -f option is added to force
such breaking mode changes. Without -f, only the safe mode changes are
performed and an error is printed when the requested mode change is not safe.
The next upstream version will also exit with an error, but the cherry-picked
fix makes kbd_mode return success even when the mode switch is not performed,
to avoid regressions in scripts.

  [Test case]

   * Verify that safe mode switches work and dangerous ones are skipped
  without -f. Please note that the test will temporarily break the
  system's keyboard and it is recommended to run the test in a VM.

  rbalint@MacBookAir-test:~$ sudo kbd_mode -C /dev/tty4; echo $?
  The keyboard is in Unicode (UTF-8) mode
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -a -C /dev/tty4; echo $?
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -a -C /dev/tty4; echo $?
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -C /dev/tty4
  The keyboard is in xlate (8-bit) mode
  rbalint@MacBookAir-test:~$ sudo kbd_mode -u -C /dev/tty4; echo $?
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -C /dev/tty4
  The keyboard is in Unicode (UTF-8) mode
  rbalint@MacBookAir-test:~$ sudo kbd_mode -u -C /dev/tty0; echo $?
  The keyboard is in some unknown mode
  Changing to the requested mode may make your keyboard unusable, please use -f 
to force the change.
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -f -u -C /dev/tty0; echo $?
  0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -C /dev/tty0
  The keyboard is in Unicode (UTF-8) mode
  rbalint@MacBookAir-test:~$ sudo kbd_mode -s -C /dev/tty0
  rbalint@MacBookAir-test:~$ sudo kbd_mode -C /dev/tty0
  The keyboard is in raw (scancode) mode
  rbalint@MacBookAir-test:

[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2020-08-20 Thread Rafael David Tinoco
This issue also does NOT affect Bionic:

[rafaeldtinoco@bnfstests ~]$ systemctl status nfs-kernel-server.service 
● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
   Active: active (exited) since Thu 2020-08-20 17:46:54 UTC; 29s ago
  Process: 1537 ExecStopPost=/usr/sbin/exportfs -f (code=exited, 
status=0/SUCCESS)
  Process: 1536 ExecStopPost=/usr/sbin/exportfs -au (code=exited, 
status=0/SUCCESS)
  Process: 1535 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 1561 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, 
status=0/SUCCESS)
  Process: 1560 ExecStartPre=/usr/sbin/exportfs -r (code=exited, 
status=0/SUCCESS)
 Main PID: 1561 (code=exited, status=0/SUCCESS)

Aug 20 17:46:54 bnfstests systemd[1]: Starting NFS server and services...
Aug 20 17:46:54 bnfstests systemd[1]: Started NFS server and services.
[rafaeldtinoco@bnfstests ~]$ systemctl status rpcbind.socket
Failed to dump process list, ignoring: No such file or directory
● rpcbind.socket - RPCbind Server Activation Socket
   Loaded: loaded (/lib/systemd/system/rpcbind.socket; enabled; vendor preset: 
enabled)
   Active: active (running) since Thu 2020-08-20 17:44:25 UTC; 3min 6s ago
   Listen: /run/rpcbind.sock (Stream)
   CGroup: /system.slice/rpcbind.socket

Warning: Journal has been rotated since unit was started. Log output is 
incomplete or unavailable.
[rafaeldtinoco@bnfstests ~]$ systemctl status rpcbind.service
● rpcbind.service - RPC bind portmap service
   Loaded: loaded (/lib/systemd/system/rpcbind.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Thu 2020-08-20 17:44:26 UTC; 3min 14s ago
 Docs: man:rpcbind(8)
 Main PID: 382 (rpcbind)
Tasks: 1 (limit: 2338)
   CGroup: /system.slice/rpcbind.service
   └─382 /sbin/rpcbind -f -w

Aug 20 17:44:26 bnfstests systemd[1]: Starting RPC bind portmap service...
Aug 20 17:44:26 bnfstests systemd[1]: Started RPC bind portmap service.
[rafaeldtinoco@bnfstests ~]$ systemctl status nfs-
nfs-blkmap.service nfs-config.service nfs-mountd.service
nfs-client.target  nfs-idmapd.service nfs-server.service
nfs-common.service nfs-kernel-server.service  nfs-utils.service
[rafaeldtinoco@bnfstests ~]$ systemctl status nfs-mountd.service 
● nfs-mountd.service - NFS Mount Daemon
   Loaded: loaded (/lib/systemd/system/nfs-mountd.service; static; vendor 
preset: enabled)
   Active: active (running) since Thu 2020-08-20 17:46:54 UTC; 54s ago
  Process: 1556 ExecStart=/usr/sbin/rpc.mountd $RPCMOUNTDARGS (code=exited, 
status=0/SUCCESS)
 Main PID: 1559 (rpc.mountd)
Tasks: 1 (limit: 2338)
   CGroup: /system.slice/nfs-mountd.service
   └─1559 /usr/sbin/rpc.mountd --manage-gids

Aug 20 17:46:54 bnfstests systemd[1]: Starting NFS Mount Daemon...
Aug 20 17:46:54 bnfstests rpc.mountd[1559]: Version 1.3.3 starting
Aug 20 17:46:54 bnfstests systemd[1]: Started NFS Mount Daemon.
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-blkmap.service 
disabled
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-client.target 
enabled
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-common.service 
masked
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-config.service 
static
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-idmapd.service 
static
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-kernel-server.service 
enabled
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-mountd.service 
static
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-server.service 
enabled
[rafaeldtinoco@bnfstests ~]$ systemctl is-enabled nfs-utils.service 
static
[rafaeldtinoco@bnfstests ~]$ systemctl is-active rpcbind
active
[rafaeldtinoco@bnfstests ~]$ systemctl is-active rpcbind.service 
active
[rafaeldtinoco@bnfstests ~]$ systemctl is-active rpcbind.socket 
active
[rafaeldtinoco@bnfstests ~]$ systemctl is-active nfs-kernel-server.service 
active
[rafaeldtinoco@bnfstests ~]$ systemctl is-active nfs-mountd.service 
active
[rafaeldtinoco@bnfstests ~]$ systemctl is-active nfs-client.target 
active

Note: if this affects your Bionic system, it is very likely that you are using:

[rafaeldtinoco@bnfstests ~]$ ls /etc/init.d/*nfs*
/etc/init.d/nfs-common  /etc/init.d/nfs-kernel-server

[rafaeldtinoco@bnfstests ~]$ ls /etc/init.d/*rpc*
/etc/init.d/rpcbind

i.e. systemd generators creating units from the scripts in /etc/init.d/ (you
can check that with "systemctl status <unit>"). If you are using only the
native systemd NFS service units, then you should NOT get "DEPENDENCY ERRORS"
(the original situation described by this bug).
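
A minimal sketch of how to check this and how to move back to the systemd
units only (package names as discussed above; adjust to what is installed
locally):

# a unit generated from a sysv script shows the /etc/init.d path in "Loaded:"
systemctl status nfs-kernel-server | grep Loaded

# purge and reinstall so that only the packaged systemd units end up enabled
sudo apt-get purge nfs-kernel-server nfs-common
sudo apt-get install nfs-kernel-server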

Of course there are OTHER issues that can happen during NFS service
initialization (or any of its dependent services). Feel free to open NEW
bugs if you think you faced a bug (and not a misconfiguration issue).

For local configuration issues, you can find assistance here:
http://www.ubuntu.com/support/community
(or in the existing m

[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2020-08-20 Thread Rafael David Tinoco
This does NOT affect Groovy:

[rafaeldtinoco@nfstests ~]$ systemctl status nfs-kernel-server.service 
● nfs-server.service - NFS server and services
 Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
 Active: active (exited) since Thu 2020-08-20 17:28:05 UTC; 48s ago
   Main PID: 1185 (code=exited, status=0/SUCCESS)
  Tasks: 0 (limit: 2311)
 Memory: 0B
 CGroup: /system.slice/nfs-server.service

Aug 20 17:28:04 nfstests systemd[1]: Starting NFS server and services...
Aug 20 17:28:05 nfstests systemd[1]: Finished NFS server and services.
[rafaeldtinoco@nfstests ~]$ systemctl status rpcbind.socket 
● rpcbind.socket - RPCbind Server Activation Socket
 Loaded: loaded (/lib/systemd/system/rpcbind.socket; enabled; vendor 
preset: enabled)
 Active: active (running) since Thu 2020-08-20 17:27:21 UTC; 1min 40s ago
   Triggers: ● rpcbind.service
 Listen: /run/rpcbind.sock (Stream)
 0.0.0.0:111 (Stream)
 0.0.0.0:111 (Datagram)
 [::]:111 (Stream)
 [::]:111 (Datagram)
  Tasks: 0 (limit: 2311)
 Memory: 60.0K
 CGroup: /system.slice/rpcbind.socket

Warning: journal has been rotated since unit was started, output may be 
incomplete.
[rafaeldtinoco@nfstests ~]$ systemctl status rpcbind.service 
● rpcbind.service - RPC bind portmap service
 Loaded: loaded (/lib/systemd/system/rpcbind.service; enabled; vendor 
preset: enabled)
 Active: active (running) since Thu 2020-08-20 17:27:21 UTC; 1min 43s ago
TriggeredBy: ● rpcbind.socket
   Docs: man:rpcbind(8)
   Main PID: 289 (rpcbind)
  Tasks: 1 (limit: 2311)
 Memory: 2.7M
 CGroup: /system.slice/rpcbind.service
 └─289 /sbin/rpcbind -f -w


[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-blkmap.service 
disabled
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-client.target 
enabled
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-common.service 
masked
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-config.service 
static
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-idmapd.service 
static
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-kernel-server.service 
alias
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-mountd.service 
static
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-server.service 
enabled
[rafaeldtinoco@nfstests ~]$ systemctl is-enabled nfs-utils.service 
static
[rafaeldtinoco@nfstests ~]$ systemctl is-active rpcbind.
rpcbind.service  rpcbind.socket   rpcbind.target   
[rafaeldtinoco@nfstests ~]$ systemctl is-active rpcbind.service 
active
[rafaeldtinoco@nfstests ~]$ systemctl is-active rpcbind.socket
active
[rafaeldtinoco@nfstests ~]$ systemctl is-active rpcbind.target
active
[rafaeldtinoco@nfstests ~]$ systemctl is-active nfs-server.service 
active
[rafaeldtinoco@nfstests ~]$ systemctl is-active nfs-client.target 
active


** Description changed:

+ NOTE FOR THIS BUG:
+ 
+ Whoever finds this and thinks they are facing this same problem: please be
+ aware that disabling all NFS-related sysv init scripts is advised before
+ posting here that you are also suffering from this same issue.
+ 
+ The original issue was with the *systemd units*, not with the units that
+ are created automatically by systemd from the sysv (/etc/init.d) files.
+ Ending up with enabled units that were automatically generated by systemd
+ on behalf of /etc/init.d (especially after upgrades) can indeed happen,
+ and the fix for this is to have only the systemd units enabled.
+ 
+ If that is not fully understood: completely uninstall nfs-kernel-server and
+ related packages, purge configs, and install the nfs-kernel-server package
+ again; that will make sure the systemd units are used by default.
+ 
  [Impact]
  
-  * nfs-mountd doesn't get started because of a race condition happening when 
rpcbind.socket is not specified as a needed service for it to start.
-  * nfs-server using rpcbind.target instead of using rpcbind.socket. Target 
should not be used (Comment #24)
+  * nfs-mountd doesn't get started because of a race condition happening when 
rpcbind.socket is not specified as a needed service for it to start.
+  * nfs-server using rpcbind.target instead of using rpcbind.socket. Target 
should not be used (Comment #24)
  
  [Test Case]
  
-  * Install nfs-kernel-server inside a xenial lxc guest and restart it until 
nfs-mountd doesn't start complaining on rpc error.
-  * Comment #25
+  * Install nfs-kernel-server inside a xenial lxc guest and restart it until 
nfs-mountd doesn't start complaining on rpc error.
+  * Comment #25
  
  [Regression Potential]
  
-  * Cons: Systemd dependencies could brake for nfs-server and nfs-mountd.
-  * Pros: Patches have been accepted upstream (and tested).
+  * Cons: Systemd dependencies could brake for nfs-server and nfs-mountd.
+  * Pros: Patches have been accepted upstream (and tested).
  
  [Other Info]
-  
+ 
  # Original Bug Description
  
  Immediately after boot:
  
  root@feynmann

[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2020-08-20 Thread Rafael David Tinoco
** No longer affects: nfs-utils (Ubuntu Trusty)

** No longer affects: nfs-utils (Ubuntu Yakkety)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1590799

Title:
  nfs-kernel-server does not start because of dependency failure

Status in nfs-utils package in Ubuntu:
  Confirmed
Status in nfs-utils source package in Xenial:
  Fix Released
Status in nfs-utils source package in Zesty:
  Fix Released
Status in nfs-utils source package in Bionic:
  Confirmed

Bug description:
  [Impact]

   * nfs-mountd doesn't get started because of a race condition that happens
when rpcbind.socket is not specified as a service it needs in order to start.
   * nfs-server uses rpcbind.target instead of rpcbind.socket. The target
should not be used (Comment #24)
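
  A minimal sketch of the kind of unit dependency the fix introduces (unit
  names are the ones discussed in this report; the exact upstream change may
  be worded differently):

    # nfs-mountd.service (and nfs-server.service): wait for rpcbind's
    # socket instead of the rpcbind.target placeholder
    [Unit]
    Requires=rpcbind.socket
    After=rpcbind.socket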

  [Test Case]

   * Install nfs-kernel-server inside a xenial lxc guest and restart it until
nfs-mountd fails to start, complaining about an rpc error.
   * Comment #25

  [Regression Potential]

   * Cons: Systemd dependencies could break for nfs-server and nfs-mountd.
   * Pros: Patches have been accepted upstream (and tested).

  [Other Info]
   
  # Original Bug Description

  Immediately after boot:

  root@feynmann:~# systemctl status nfs-kernel-server
  ● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
     Active: inactive (dead)

  Jun 09 14:35:47 feynmann systemd[1]: Dependency failed for NFS server and 
services.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-server.service: Job 
nfs-server.service/start failed

  root@feynmann:~# systemctl status nfs-mountd.service
  ● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/lib/systemd/system/nfs-mountd.service; static; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Thu 2016-06-09 14:35:47 BST; 7min 
ago
    Process: 1321 ExecStart=/usr/sbin/rpc.mountd $RPCMOUNTDARGS (code=exited, 
status=1/FAILURE)

  Jun 09 14:35:47 feynmann systemd[1]: Starting NFS Mount Daemon...
  Jun 09 14:35:47 feynmann rpc.mountd[1321]: mountd: could not create listeners
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Control process 
exited, code=exited
  Jun 09 14:35:47 feynmann systemd[1]: Failed to start NFS Mount Daemon.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Unit entered failed 
state.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Failed with result 
'exit-code'.

  root@feynmann:~# systemctl list-dependencies nfs-kernel-server
  nfs-kernel-server.service
  ● ├─auth-rpcgss-module.service
  ● ├─nfs-config.service
  ● ├─nfs-idmapd.service
  ● ├─nfs-mountd.service
  ● ├─proc-fs-nfsd.mount
  ● ├─rpc-svcgssd.service
  ● ├─system.slice
  ● ├─network.target
  ● └─rpcbind.target
  ●   └─rpcbind.service

  root@feynmann:~# systemctl list-dependencies nfs-mountd.service
  nfs-mountd.service
  ● ├─nfs-config.service
  ● ├─nfs-server.service
  ● ├─proc-fs-nfsd.mount
  ● └─system.slice
  root@feynmann:~#

  root@feynmann:~# lsb_release -rd
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

  root@feynmann:~# apt-cache policy nfs-kernel-server
  nfs-kernel-server:
    Installed: 1:1.2.8-9ubuntu12
    Candidate: 1:1.2.8-9ubuntu12
    Version table:
   *** 1:1.2.8-9ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  Additional comments:

  1. There seems to be a circular dependency between nfs-mountd and 
nfs-kernel-server
  2. I can get it working by changing the After/Requires lines in the
/lib/systemd/system/nfs-{mountd|server}.service files. I have managed to get
nfs-kernel-server to start but not nfs-mountd.
  3. /usr/lib/systemd/scripts/nfs-utils_env.sh references /etc/sysconfig/nfs,
which is the CentOS/RedHat location of this file. Also /etc/default/nfs does
not exist. (possibly unrelated to this bug)
  4. A file "/lib/systemd/system/-.slice" exists. This file prevents execution
of 'ls *' or 'grep xxx *' commands in that directory. I am unsure whether this
is intended by the systemd developers, but it is unfriendly when investigating
this bug.

  Attempted solution:

  1. Edit /lib/systemd/system/nfs-server.service (original lines are
  commented out):

  [Unit]
  Description=NFS server and services
  DefaultDependencies=no
  Requires=network.target proc-fs-nfsd.mount rpcbind.target
  # Requires=nfs-mountd.service
  Wants=nfs-idmapd.service

  After=local-fs.target
  #After=network.target proc-fs-nfsd.mount rpcbind.target nfs-mountd.service
  After=network.target proc-fs-nfsd.mount rpcbind.target
  After=nfs-idmapd.service rpc-statd.service
  #Before=rpc-statd-notify.service
  Before=nfs-mountd.service rpc-statd-notify.service
  ...

  followed by a systemctl daemon-reload and a reboot.

  This results in nfs-kernel-server starting correctly but

[Group.of.nepali.translators] [Bug 1890790] Re: Build Failure with --enable-ssl-crtd flag

2020-08-20 Thread Rafael David Tinoco
Alright, you can fix it by doing:

edit src/ssl/certificate_db.h and change

#define Here __FILE__, __LINE__

to

#ifndef Here
#define Here __FILE__, __LINE__
#endif

and that will fix your issue.

But please do note that this build configuration is unsupported by Ubuntu,
and this bug is indeed invalid.

"No modification" means "not changing the package's configure options", and
that is why we didn't catch this issue after the security fixes were applied:
we don't build with those options (including crypto), so we never faced a
FTBFS after applying the patches.

I can't move on with an "SRU" (Stable Release Update) for a FTBFS in a
configure option we don't use to generate our binary packages; I hope you
understand. I just wanted to clarify this for future reference. The best
place to discuss such issues would be the Ubuntu Users mailing list, or even
to query someone on the Ubuntu Devel mailing list.

For the record, after the fix:

$ ldd ./debian/squid/usr/sbin/squid3 | grep ssl
libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 
(0x7f5b37dd)

Have a good one ;)

** Changed in: squid3 (Ubuntu)
   Status: New => Invalid

** Changed in: squid3 (Ubuntu Xenial)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1890790

Title:
  Build Failure with --enable-ssl-crtd flag

Status in squid3 package in Ubuntu:
  Invalid
Status in squid3 source package in Xenial:
  Invalid

Bug description:
  I have a script that grabs the latest package source for squid3 and
  builds, adding the --enable-ssl-crtd and --with-openssl flags. After
  the last package update "squid3_3.5.12-1ubuntu7.12.debian.tar.xz" this
  errors out during compilation.

  I have narrowed it down to the --enable-ssl-crtd flag. The error is as
  follows:

  ssl/certificate_db.h:56:0: error: "Here" redefined [-Werror]
   #define Here __FILE__, __LINE__
   ^
  In file included from ../src/base/TextException.h:15:0,
   from ../src/SBufExceptions.h:12,
   from ../src/SBuf.h:14,
   from ../src/http/MethodType.h:12,
   from ../src/HttpRequestMethod.h:12,
   from ../src/AccessLogEntry.h:18,
   from acl/FilledChecklist.h:12,
   from client_side.cc:61:
  ../src/base/Here.h:15:0: note: this is the location of the previous definition
   #define Here() SourceLocation(__FUNCTION__, __FILE__, __LINE__)
   ^

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/squid3/+bug/1890790/+subscriptions



[Group.of.nepali.translators] [Bug 1890790] Re: Build Failure with --enable-ssl-crtd flag

2020-08-10 Thread Rafael David Tinoco
Thank you for taking the time to file a bug report.

Since it seems likely to me that this is a local configuration problem,
specific to a package recompilation issue, rather than a bug in Ubuntu,
I am marking this bug as 'Invalid'.

However, if you believe that this is really a bug in Ubuntu, then we
would be grateful if you would provide a more complete description of
the problem with steps to reproduce, explain why you believe this is a
bug in Ubuntu rather than a problem specific to your system, and then
change the bug status back to "New".

For local configuration issues, you can find assistance here:
http://www.ubuntu.com/support/community

** Also affects: squid3 (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: squid3 (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: squid3 (Ubuntu Xenial)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1890790

Title:
  Build Failure with --enable-ssl-crtd flag

Status in squid3 package in Ubuntu:
  Invalid
Status in squid3 source package in Xenial:
  Invalid

Bug description:
  I have a script that grabs the latest package source for squid3 and
  builds, adding the --enable-ssl-crtd and --with-openssl flags. After
  the last package update "squid3_3.5.12-1ubuntu7.12.debian.tar.xz" this
  errors out during compilation.

  I have narrowed it down to the --enable-ssl-crtd flag. The error is as
  follows:

  ssl/certificate_db.h:56:0: error: "Here" redefined [-Werror]
   #define Here __FILE__, __LINE__
   ^
  In file included from ../src/base/TextException.h:15:0,
   from ../src/SBufExceptions.h:12,
   from ../src/SBuf.h:14,
   from ../src/http/MethodType.h:12,
   from ../src/HttpRequestMethod.h:12,
   from ../src/AccessLogEntry.h:18,
   from acl/FilledChecklist.h:12,
   from client_side.cc:61:
  ../src/base/Here.h:15:0: note: this is the location of the previous definition
   #define Here() SourceLocation(__FUNCTION__, __FILE__, __LINE__)
   ^

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/squid3/+bug/1890790/+subscriptions



[Group.of.nepali.translators] [Bug 1883614] Re: sssd got killed due to segfault in ubuntu 16.04

2020-08-10 Thread Rafael David Tinoco
** Also affects: sssd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: sssd (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: sssd (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1883614

Title:
  sssd  got killed due to segfault in ubuntu 16.04

Status in sssd package in Ubuntu:
  Triaged
Status in sssd source package in Xenial:
  Confirmed

Bug description:
  SSSD.LOG
  --
  (Sun Jun 14 20:23:53 2020) [sssd] [mt_svc_sigkill] (0x0010): [pam][13305] is 
not responding to SIGTERM. Sending SIGKILL.
  (Sun Jun 14 20:29:34 2020) [sssd] [monitor_restart_service] (0x0010): Process 
[nss], definitely stopped!

  apport.log:
  --
  ERROR: apport (pid 623) Sun Jun 14 20:25:21 2020: Unhandled exception:
  Traceback (most recent call last):
File "/usr/share/apport/apport", line 515, in 
  get_pid_info(pid)
File "/usr/share/apport/apport", line 62, in get_pid_info
  proc_pid_fd = os.open('/proc/%s' % pid, os.O_RDONLY | os.O_PATH | 
os.O_DIRECTORY)
  FileNotFoundError: [Errno 2] No such file or directory: '/proc/13305'
  ERROR: apport (pid 623) Sun Jun 14 20:25:21 2020: pid: 623, uid: 0, gid: 0, 
euid: 0, egid: 0
  ERROR: apport (pid 623) Sun Jun 14 20:25:21 2020: environment: environ({})
  root@gamma13:/var/log# ps -fp 13305
  UIDPID  PPID  C STIME TTY  TIME CMD

  
  syslog
  --
  Jun 14 20:20:32 gamma13 sssd[be[myorg]]: Starting up
  Jun 14 20:22:06 gamma13 kernel: [2543859.316724] sssd_pam[13305]: segfault at 
a4 ip 7f0f77329989 sp 7fff35844480 error 4 in 
libdbus-1.so.3.14.6[7f0f772ff000+4b000]
  Jun 14 20:22:06 gamma13 sssd[be[myorg]]: Starting up
  Jun 14 20:22:53 gamma13 sssd: Killing service [pam], not responding to pings!
  Jun 14 20:23:53 gamma13 sssd: [pam][13305] is not responding to SIGTERM. 
Sending SIGKILL.
  Jun 14 20:23:58 gamma13 sssd[pam]: Starting up
  Jun 14 20:24:27 gamma13 smtpd[1732]: smtp-in: session 689f0b74a7b74828: 
connection from host gamma13.internal.myorg.com.internal.myorg.com [local] 
established
  Jun 14 20:25:01 gamma13 CRON[1041]: (root) CMD (command -v debian-sa1 > 
/dev/null && debian-sa1 1 1)
  Jun 14 20:25:01 gamma13 CRON[1042]: (root) CMD (/usr/sbin/icsisnap 
/var/lib/icsisnap)
  Jun 14 20:27:57 gamma13 sssd[be[myorg]]: Starting up
  Jun 14 20:27:58 gamma13 systemd[1]: Started Session 16859 of user kamals.
  Jun 14 20:29:18 gamma13 sssd[be[myorg]]: Starting up
  Jun 14 20:29:28 gamma13 sssd[nss]: Starting up
  Jun 14 20:29:30 gamma13 sssd[nss]: Starting up
  Jun 14 20:29:37 gamma13 sssd[nss]: Starting up
  Jun 14 20:29:37 gamma13 sssd: Exiting the SSSD. Could not restart critical 
service [nss].
  Jun 14 20:29:44 gamma13 sssd[be[myorg]]: Shutting down
  Jun 14 20:29:44 gamma13 sssd[pam]: Shutting down

  
  [Another server had this log 

  Jun  9 21:12:52 grid kernel: [5088481.338650] rpcmgr[1409]: segfault
  at 7fa5541c1d13 ip 7fa5dcb5be8f sp 7fa5d35ccc80 error 4 in
  libpthread-2.23.so[7fa5dcb54000+18000]

  ]

  
  kamals@gamma13:~$ uname -r
  4.4.0-178-generic
  kamals@gamma13:~$ cat /etc/os-release
  NAME="Ubuntu"
  VERSION="16.04.6 LTS (Xenial Xerus)"
  ID=ubuntu
  ID_LIKE=debian
  PRETTY_NAME="Ubuntu 16.04.6 LTS"

  
  Hi, sssd got killed for the second time with a segfault; the above log is
from the latest sssd shutdown.

  Please help me fix this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1883614/+subscriptions



[Group.of.nepali.translators] [Bug 1453463] Re: undefined symbol: FAMNoExists

2020-08-10 Thread Rafael David Tinoco
This was "Fix Released" because of my uploaded fix to groovy. I'm
considering it a "temporary" fix and should revisit this before groovy
is out (depending on discussion in salsa merge thread).

** Changed in: lighttpd (Ubuntu)
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1453463

Title:
  undefined symbol: FAMNoExists

Status in lighttpd:
  Fix Released
Status in lighttpd package in Ubuntu:
  In Progress
Status in lighttpd source package in Xenial:
  Triaged
Status in lighttpd source package in Bionic:
  Triaged
Status in lighttpd source package in Focal:
  Confirmed

Bug description:
  lighttpd won't start.

  Steps to reproduce:
  $ sudo /usr/sbin/lighttpd
  or
  $ sudo systemctl start lighttpd

  Expected outcome:
  daemon starts.

  Seen instead:
  /usr/sbin/lighttpd: symbol lookup error: /usr/sbin/lighttpd: undefined 
symbol: FAMNoExists
  or
  Job for lighttpd.service failed. See "systemctl status lighttpd.service" and 
"journalctl -xe" for details.
  $ systemctl status lighttpd.service -l
  May 09 17:53:32 deunan systemd[1]: Starting Lighttpd Daemon...
  May 09 17:53:32 deunan lighttpd[8229]: /usr/sbin/lighttpd: symbol lookup 
error: /usr/sbin/lighttpd: undefined symbol: FAMNoExists

  Other info:
  `ldd /usr/sbin/lighttpd` does not report any missing shared libraries.
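
  One way to surface an undefined symbol (as opposed to a missing library) is
  to ask ldd to also perform the relocations; a small sketch using the binary
  above:

    ldd -r /usr/sbin/lighttpd

  With -r, unresolved symbols such as FAMNoExists are reported explicitly,
  instead of the library list simply looking complete.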

  ProblemType: Bug
  DistroRelease: Ubuntu 15.04
  Package: lighttpd 1.4.35-4ubuntu1
  ProcVersionSignature: Ubuntu 3.19.0-16.16-generic 3.19.3
  Uname: Linux 3.19.0-16-generic i686
  NonfreeKernelModules: nvidia
  ApportVersion: 2.17.2-0ubuntu1
  Architecture: i386
  Date: Sat May  9 17:51:52 2015
  InstallationDate: Installed on 2013-06-08 (700 days ago)
  InstallationMedia: Xubuntu 13.04 "Raring Ringtail" - Release i386 (20130423.1)
  ProcEnviron:
   LANGUAGE=en_CA:en
   TERM=screen
   PATH=(custom, no user)
   LANG=en_CA.UTF-8
   SHELL=/bin/bash
  SourcePackage: lighttpd
  UpgradeStatus: Upgraded to vivid on 2015-04-25 (14 days ago)
  mtime.conffile..etc.lighttpd.conf.available.10.cgi.conf: 2013-08-02T23:17:55
  mtime.conffile..etc.lighttpd.conf.available.10.fastcgi.conf: 
2013-09-11T11:19:16

To manage notifications about this bug go to:
https://bugs.launchpad.net/lighttpd/+bug/1453463/+subscriptions



[Group.of.nepali.translators] [Bug 1890276] Re: inetd does not answer broadcast requests

2020-08-07 Thread Rafael David Tinoco
# working env: xenial (without inetd)

*:111   *:*  
users:(("rpcbind",pid=6679,fd=6))
*:919   *:*  
users:(("rpcbind",pid=6679,fd=7))
*:998   *:*  
users:(("rpc.rstatd",pid=6759,fd=3))

(c)rafaeldtinoco@xenial:~$ sudo tcpdump -i eth0 -n
12:23:59.243382 IP 10.250.97.142.38369 > 10.250.97.255.111: UDP, length 100
12:23:59.245356 IP 10.250.97.213.919 > 10.250.97.142.38369: UDP, length 140

# working env: bionic (without inetd)

0.0.0.0:111   0.0.0.0:*  
users:(("rpcbind",pid=1073,fd=6))   
 
0.0.0.0:825   0.0.0.0:*  
users:(("rpcbind",pid=1073,fd=7))
0.0.0.0:752   0.0.0.0:*  
users:(("rpc.rstatd",pid=10753,fd=3))

 
(c)rafaeldtinoco@bionic:~$ sudo tcpdump -i eth0 -n
11:49:40.673843 IP 10.250.97.227.38276 > 10.250.97.255.111: UDP, length 100
11:49:40.677280 IP 10.250.97.142.825 > 10.250.97.227.38276: UDP, length 140



# not-working env: eoan (without inetd)

0.0.0.0:111   0.0.0.0:*  
users:(("rpcbind",pid=2799,fd=5),("systemd",pid=1,fd=32))   

0.0.0.0:901   0.0.0.0:*  
users:(("rpc.rstatd",pid=2846,fd=3))  

(c)rafaeldtinoco@eoan:~$ sudo tcpdump -i eth0 -n
11:54:17.931899 IP 10.250.97.227.48295 > 10.250.97.255.111: UDP, length 100

# not-working env: focal (without inetd)
  
0.0.0.0:111   0.0.0.0:*  
users:(("rpcbind",pid=6184,fd=5),("systemd",pid=1,fd=42))   

127.0.0.1:862 0.0.0.0:*  
users:(("rpc.statd",pid=6198,fd=5))
0.0.0.0:39800 0.0.0.0:*  
users:(("rpc.statd",pid=6198,fd=8))  

note: systemd holds the rpcbind socket (socket activation); removing that
variable had no effect.



But I noticed rpcbind is running as the _rpc user in the non-working
environments.
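
For reference, a quick way to check which user rpcbind runs as (a generic ps
invocation, nothing release specific):

  ps -o user,pid,args -C rpcbind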

There was a major change in rpcbind between the working and the affected
versions.

And… finally:

(c)rafaeldtinoco@groovy:~$ rup
bionic.lxd   12:56 up 18:31, load 0.22 0.16 0.10
groovy.lxd   12:56 up 12:51, load 0.22 0.16 0.10
xenial.lxd   12:56 up 18:26, load 0.22 0.16 0.10

I had to replace groovy rpcbind with rpcbind from bionic.

Now, at some other time, I'll have to check which of the (many) changes to
rpcbind made this happen.


** Summary changed:

- inetd does not answer broadcast requests
+ rpcbind changes after bionic broke rup broadcast feature

** Also affects: rpcbind (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: openbsd-inetd (Ubuntu)

** No longer affects: openbsd-inetd (Ubuntu Xenial)

** No longer affects: openbsd-inetd (Ubuntu Eoan)

** No longer affects: openbsd-inetd (Ubuntu Bionic)

** No longer affects: openbsd-inetd (Ubuntu Focal)

** Changed in: rpcbind (Ubuntu)
   Status: New => Confirmed

** Changed in: rpcbind (Ubuntu Focal)
   Status: New => Confirmed

** Changed in: rpcbind (Ubuntu Eoan)
   Status: New => Confirmed

** Changed in: rpcbind (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: rpcbind (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: rpcbind (Ubuntu)
   Importance: Undecided => Medium

** Changed in: rpcbind (Ubuntu)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1890276

Title:
  rpcbind changes after bionic broke rup broadcast feature

Status in rpcbind package in Ubuntu:
  Confirmed
Status in rpcbind source package in Xenial:
  Fix Released
Status in rpcbind source package in Bionic:
  Fix Released
Status in rpcbind source package in Eoan:
  Confirmed
Status in rpcbind source package in Focal:
  Confirmed

Bug description:
  When I call inetd services rup or rusersd in broadcast mode, I get
  answers from my Ubuntu 18.04 machines only. An Ubuntu 20.04 machine
  (here rzpc101) answers only when addressed directly:

  zierke@rzpc100$ rup
  rzpc100.informatik.un 14:02 up  10 days,8:30, load 0.52 0.38 0.23
  rzlinux.informatik.un 14:02 up1 day,3:12, load 0.04 0.09 0.03
  rzpc174.informatik.un 14:02 up   4 days,   19:28, load 0.00 0.01 0.00
  ^C
  zierke@rzpc100$ rup rzpc101
  rzpc101.informatik.un 14:02 up  3:29,   3 users, load 0.33 0.32 
0.15
  zierke@rzpc100$ rusers
  rzpc100.in

[Group.of.nepali.translators] [Bug 1890276] Re: inetd does not answer broadcast requests

2020-08-06 Thread Rafael David Tinoco
** Also affects: openbsd-inetd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: openbsd-inetd (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: openbsd-inetd (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: openbsd-inetd (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Changed in: openbsd-inetd (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: openbsd-inetd (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: openbsd-inetd (Ubuntu)
   Status: New => Triaged

** Changed in: openbsd-inetd (Ubuntu Eoan)
   Status: New => Triaged

** Changed in: openbsd-inetd (Ubuntu Focal)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1890276

Title:
  inetd does not answer broadcast requests

Status in openbsd-inetd package in Ubuntu:
  Triaged
Status in openbsd-inetd source package in Xenial:
  Fix Released
Status in openbsd-inetd source package in Bionic:
  Fix Released
Status in openbsd-inetd source package in Eoan:
  Triaged
Status in openbsd-inetd source package in Focal:
  Triaged

Bug description:
  When I call inetd services rup or rusersd in broadcast mode, I get
  answers from my Ubuntu 18.04 machines only. An Ubuntu 20.04 machine
  (here rzpc101) answers only when addressed directly:

  zierke@rzpc100$ rup
  rzpc100.informatik.un 14:02 up  10 days,8:30, load 0.52 0.38 0.23
  rzlinux.informatik.un 14:02 up1 day,3:12, load 0.04 0.09 0.03
  rzpc174.informatik.un 14:02 up   4 days,   19:28, load 0.00 0.01 0.00
  ^C
  zierke@rzpc100$ rup rzpc101
  rzpc101.informatik.un 14:02 up  3:29,   3 users, load 0.33 0.32 
0.15
  zierke@rzpc100$ rusers
  rzpc100.informatik.u zierke zierke 
  rzlinux.informatik.u zierke zierke zierke 
  ^C
  zierke@rzpc100$ rusers rzpc101
  rzpc101.informatik.u zierke zierke zierke 

  zierke@rzpc101$ lsb_release -rd
  Description:  Ubuntu 20.04.1 LTS
  Release:  20.04

  zierke@rzpc100$ lsb_release -rd
  Description:  Ubuntu 18.04.4 LTS
  Release:  18.04

  ProblemType: Bug
  DistroRelease: Ubuntu 20.04
  Package: openbsd-inetd 0.20160825-4build1
  ProcVersionSignature: Ubuntu 5.4.0-42.46-generic 5.4.44
  Uname: Linux 5.4.0-42-generic x86_64
  NonfreeKernelModules: nvidia_modeset nvidia
  ApportVersion: 2.20.11-0ubuntu27.4
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  Date: Tue Aug  4 14:03:06 2020
  InstallationDate: Installed on 2020-07-24 (11 days ago)
  InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Release amd64 (20200423)
  SourcePackage: openbsd-inetd
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openbsd-inetd/+bug/1890276/+subscriptions



[Group.of.nepali.translators] [Bug 1877617] Re: Automatic scans cause instability for cloud use cases

2020-06-18 Thread Rafael David Tinoco
I'm moving the need for a groovy update to:

https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1884175

So this is fully Fix Released!

** No longer affects: open-iscsi (Ubuntu Groovy)

** Changed in: open-iscsi (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1877617

Title:
  Automatic scans cause instability for cloud use cases

Status in open-iscsi package in Ubuntu:
  Fix Released
Status in open-iscsi source package in Xenial:
  Won't Fix
Status in open-iscsi source package in Bionic:
  Fix Released
Status in open-iscsi source package in Eoan:
  Fix Released
Status in open-iscsi source package in Focal:
  Fix Released

Bug description:
  [Impact]

  When using iSCSI storage underneath cloud applications such as
  OpenStack or Kubernetes, the automatic bus scan on login causes
  problems, because it results in SCSI disks being registered in the
  kernel that will never get cleaned up, and when those disks are
  eventually deleted off the server, I/O errors begin to accumulate,
  eventually slowing down the whole SCSI subsystem, spamming the kernel
  log, and causing timeouts at higher levels such that users are forced
  to reboot the node to get back to a usable state.

  [Test Case]

  

  # To demonstrate this problem, I create a VM running Ubuntu 20.04.0

  # Install both iSCSI initiator and target on this host
  sudo apt-get -y install open-iscsi targetcli-fb

  # Start the services
  sudo systemctl start iscsid.service targetclid.service

  # Create a randomly generated target IQN
  TARGET_IQN=$(iscsi-iname)

  # Get the initiator IQN
  INITIATOR_IQN=$(sudo awk -F = '/InitiatorName=/ {print $2}' 
/etc/iscsi/initiatorname.iscsi)

  # Set up an iSCSI target and target portal, and grant access to ourselves
  sudo targetcli /iscsi create $TARGET_IQN
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/acls create $INITIATOR_IQN

  # Create two 1GiB LUNs backed by files, and expose them through the target 
portal
  sudo targetcli /backstores/fileio create lun1 /lun1 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun1 1
  sudo targetcli /backstores/fileio create lun2 /lun2 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun2 2

  # Truncate the kernel log so we can see messages after this point only
  sudo dmesg -C

  # Register the local iSCSI target with our initiator, and login
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN -o new
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN --login

  # Get the list of disks from the iSCSI session, and stash it in an array
  eval "DISKS=\$(sudo iscsiadm -m session -P3 | awk '/Attached scsi disk/ 
{print \$4}')"

  # Print the list
  echo $DISKS

  # Note that there are two disks found already (the two LUNs we created
  # above) despite the fact that we only just logged in.

  # Now delete a LUN from the target
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns delete lun2
  sudo targetcli /backstores/fileio delete lun2

  # Attempt to read each of the disks
  for DISK in $DISKS ; do sudo blkid /dev/$DISK || true ; done

  # Look at the kernel log
  dmesg

  # Notice I/O errors related to the disk that the kernel remembers

  

  # Now to demonstrate how this problem is fixed, I create a new Ubuntu
  20.04.0 VM

  
  # Add PPA with modified version of open-iscsi
  sudo add-apt-repository -y ppa:bswartz/open-iscsi
  sudo apt-get update

  # Install both iSCSI initiator and target on this host
  sudo apt-get -y install open-iscsi targetcli-fb

  # Start the services
  sudo systemctl start iscsid.service targetclid.service

  # Set the scan option to "manual"
  sudo sed -i 's/^\(node.session.scan\).*/\1 = manual/' /etc/iscsi/iscsid.conf
  sudo systemctl restart iscsid.service

  # Create a randomly generated target IQN
  TARGET_IQN=$(iscsi-iname)

  # Get the initiator IQN
  INITIATOR_IQN=$(sudo awk -F = '/InitiatorName=/ {print $2}' 
/etc/iscsi/initiatorname.iscsi)

  # Set up an iSCSI target and target portal, and grant access to ourselves
  sudo targetcli /iscsi create $TARGET_IQN
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/acls create $INITIATOR_IQN

  # Create two 1GiB LUNs backed by files, and expose them through the target 
portal
  sudo targetcli /backstores/fileio create lun1 /lun1 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun1 1
  sudo targetcli /backstores/fileio create lun2 /lun2 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun2 2

  # Truncate the kernel log so we can see messages after this point only
  sudo dmesg -C

  # Register the local iSCSI target with our initiator, and login
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN -o new
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN --login

  # Get the list of di

[Group.of.nepali.translators] [Bug 1576588] Re: google-authenticator with openvpn fails on 16.04

2020-06-14 Thread Rafael David Tinoco
@ahanins or @me.neerajkhandelwal,

Could either of you provide the config files @ahasenack has asked for?

Meanwhile, I'll flag this as Incomplete again.

** Changed in: openvpn (Ubuntu)
   Status: Expired => Incomplete

** Also affects: openvpn (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: openvpn (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: openvpn (Ubuntu Xenial)
   Status: New => Incomplete

** Changed in: openvpn (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: openvpn (Ubuntu)
   Status: Incomplete => Fix Released

** Changed in: openvpn (Ubuntu)
   Importance: High => Medium

** Changed in: openvpn (Ubuntu)
   Importance: Medium => Undecided

** Changed in: openvpn (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1576588

Title:
  google-authenticator with openvpn fails on 16.04

Status in openvpn package in Ubuntu:
  Fix Released
Status in openvpn source package in Xenial:
  Incomplete
Status in openvpn source package in Bionic:
  Fix Released

Bug description:
  We are using a standard https://openvpn.net/ community server, with
  2-factor authentication via Google Authenticator enabled

  This has worked with the latest version of openvpn in 14.04 (all of it
  done via the terminal only)

  When doing a fresh install of 16.04, and initiating the VPN from the
  terminal with "openvpn client-config.ovpn", it ends with this error:

  Fri Apr 29 10:12:27 2016 SENT CONTROL [OpenVPN Server]: 'PUSH_REQUEST' 
(status=1)
  Fri Apr 29 10:12:27 2016 AUTH: Received control message: AUTH_FAILED,Google 
Authenticator Code must be a number
  Fri Apr 29 10:12:27 2016 SIGTERM[soft,auth-failure] received, process exiting

  We have noticed that the user/password + google authenticator dialog
  has changed from the old one

  Enter Auth Username: 
  Enter Auth Password: 
  CHALLENGE: Enter Google Authenticator Code
  Response: **

  All info is now hidden with asterisks, where only the password was in
  the old version

  We suspect something goes wrong when parsing the google-authenticator
  response

  Thank you

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openvpn/+bug/1576588/+subscriptions



[Group.of.nepali.translators] [Bug 1773529] Re: [SRU] Missing DKIM fixes in Xenial (Exim 4.86)

2020-06-14 Thread Rafael David Tinoco
Thank you for taking the time to file a bug report.

I'm removing the server-next tag as this "SRU" does not fit its purposes.
There is also a bigger issue with this bug, and I'm classifying it as
"Invalid" per the SRU guidelines:
https://wiki.ubuntu.com/StableReleaseUpdates (High Impact Bugs vs Other
Safe Cases). There is currently no guideline for backporting a big set of
patches to a specific release just "because", with no specific fix being
tested and verified.

If you need a fix for an existing stable release, please read the SRU
page: https://wiki.ubuntu.com/StableReleaseUpdates#When then complete
steps 1 through 4 of
https://wiki.ubuntu.com/StableReleaseUpdates#Procedure

Note that the SRU team would need to make a final decision on accepting
an SRU.

** Changed in: exim4 (Ubuntu Xenial)
   Status: Triaged => Invalid

** Tags removed: bitesize server-next

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1773529

Title:
  [SRU] Missing DKIM fixes in Xenial (Exim 4.86)

Status in exim4 package in Ubuntu:
  Fix Released
Status in exim4 source package in Xenial:
  Invalid

Bug description:
  [Impact]
  TBD

  [Test Case]
  TBD

  [Regression Potential]

  [Fix]
  Applies to any critical bug fixes for DKIM between 4.86.2-2ubuntu2.3 and 
4.92-7ubuntu1 that would be easily backported to 4.86.2.

   exim4 | 4.86.2-2ubuntu2   | xenial   | source, all
   exim4 | 4.86.2-2ubuntu2.3 | xenial-security  | source, all
   exim4 | 4.86.2-2ubuntu2.3 | xenial-updates   | source, all
   exim4 | 4.90.1-1ubuntu1   | bionic   | source, all
   exim4 | 4.90.1-1ubuntu1.2 | bionic-security  | source, all
   exim4 | 4.90.1-1ubuntu1.2 | bionic-updates   | source, all
   exim4 | 4.91-6ubuntu1 | cosmic   | source, all
   exim4 | 4.91-6ubuntu1.1   | cosmic-security  | source, all
   exim4 | 4.91-6ubuntu1.1   | cosmic-updates   | source, all
   exim4 | 4.92-4ubuntu1 | disco| source, all
   exim4 | 4.92-4ubuntu1.1   | disco-proposed   | source, all
   exim4 | 4.92-7ubuntu1 | eoan | source, all

  [Discussion]

  [Original Report]

  Exim is missing the following DKIM fixes, and probably many more:
  https://bugs.exim.org/show_bug.cgi?id=2278
  https://bugs.exim.org/show_bug.cgi?id=1721

  This package is not being maintained and only receives security fixes.

  It needs to either track the current Exim release or get bug fixes
  applied.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/exim4/+bug/1773529/+subscriptions



[Group.of.nepali.translators] [Bug 1877617] Re: Automatic scans cause instability for cloud use cases

2020-05-26 Thread Rafael David Tinoco
Thanks for the testing, Ben.

I haven't backported it to Xenial. Xenial is too "mature" to change right
now, and all cloud archives are bionic oriented (or should be)... I will
flag it as Won't Fix so that this is clearer.

** Also affects: open-iscsi (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: open-iscsi (Ubuntu Xenial)
   Status: New => Won't Fix

** Changed in: open-iscsi (Ubuntu Groovy)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1877617

Title:
  Automatic scans cause instability for cloud use cases

Status in open-iscsi package in Ubuntu:
  Triaged
Status in open-iscsi source package in Xenial:
  Won't Fix
Status in open-iscsi source package in Bionic:
  Fix Committed
Status in open-iscsi source package in Eoan:
  Fix Committed
Status in open-iscsi source package in Focal:
  Fix Committed
Status in open-iscsi source package in Groovy:
  Triaged

Bug description:
  [Impact]

  When using iSCSI storage underneath cloud applications such as
  OpenStack or Kubernetes, the automatic bus scan on login causes
  problems, because it results in SCSI disks being registered in the
  kernel that will never get cleaned up, and when those disks are
  eventually deleted off the server, I/O errors begin to accumulate,
  eventually slowing down the whole SCSI subsystem, spamming the kernel
  log, and causing timeouts at higher levels such that users are forced
  to reboot the node to get back to a usable state.

  [Test Case]

  

  # To demonstrate this problem, I create a VM running Ubuntu 20.04.0

  # Install both iSCSI initiator and target on this host
  sudo apt-get -y install open-iscsi targetcli-fb

  # Start the services
  sudo systemctl start iscsid.service targetclid.service

  # Create a randomly generated target IQN
  TARGET_IQN=$(iscsi-iname)

  # Get the initiator IQN
  INITIATOR_IQN=$(sudo awk -F = '/InitiatorName=/ {print $2}' 
/etc/iscsi/initiatorname.iscsi)

  # Set up an iSCSI target and target portal, and grant access to ourselves
  sudo targetcli /iscsi create $TARGET_IQN
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/acls create $INITIATOR_IQN

  # Create two 1GiB LUNs backed by files, and expose them through the target 
portal
  sudo targetcli /backstores/fileio create lun1 /lun1 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun1 1
  sudo targetcli /backstores/fileio create lun2 /lun2 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun2 2

  # Truncate the kernel log so we can see messages after this point only
  sudo dmesg -C

  # Register the local iSCSI target with our initiator, and login
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN -o new
  sudo iscsiadm -m node -p 127.0.0.1 -T $TARGET_IQN --login

  # Get the list of disks from the iSCSI session, and stash it in an array
  eval "DISKS=\$(sudo iscsiadm -m session -P3 | awk '/Attached scsi disk/ 
{print \$4}')"

  # Print the list
  echo $DISKS

  # Note that there are two disks found already (the two LUNs we created
  # above) despite the fact that we only just logged in.

  # Now delete a LUN from the target
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns delete lun2
  sudo targetcli /backstores/fileio delete lun2

  # Attempt to read each of the disks
  for DISK in $DISKS ; do sudo blkid /dev/$DISK || true ; done

  # Look at the kernel log
  dmesg

  # Notice I/O errors related to the disk that the kernel remembers

  

  # Now to demonstrate how this problem is fixed, I create a new Ubuntu
  20.04.0 VM

  
  # Add PPA with modified version of open-iscsi
  sudo add-apt-repository -y ppa:bswartz/open-iscsi
  sudo apt-get update

  # Install both iSCSI initiator and target on this host
  sudo apt-get -y install open-iscsi targetcli-fb

  # Start the services
  sudo systemctl start iscsid.service targetclid.service

  # Set the scan option to "manual"
  sudo sed -i 's/^\(node.session.scan\).*/\1 = manual/' /etc/iscsi/iscsid.conf
  sudo systemctl restart iscsid.service

  # Create a randomly generated target IQN
  TARGET_IQN=$(iscsi-iname)

  # Get the initiator IQN
  INITIATOR_IQN=$(sudo awk -F = '/InitiatorName=/ {print $2}' 
/etc/iscsi/initiatorname.iscsi)

  # Set up an iSCSI target and target portal, and grant access to ourselves
  sudo targetcli /iscsi create $TARGET_IQN
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/acls create $INITIATOR_IQN

  # Create two 1GiB LUNs backed by files, and expose them through the target 
portal
  sudo targetcli /backstores/fileio create lun1 /lun1 1G
  sudo targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/lun1 1
  sudo targetcli /ba

[Group.of.nepali.translators] [Bug 1584629] Re: Failed to start LSB: Load O2CB cluster services at system boot.

2020-05-21 Thread Rafael David Tinoco
Thank you for taking the time to report this bug. In an effort to keep an
up-to-date and valid list of bugs to work on, I have reviewed this report to
verify whether it still requires effort and occurs on an Ubuntu release in
standard support, and it does not.

It is unfortunate that we were unable to resolve this defect; however, there
appears to be no further action possible at this time. I am therefore moving
the bug to 'Fix Released' for the current development release, as the issue
no longer applies to it.

If you disagree or have new information, we would be grateful if you could
open a new bug mentioning the newly affected versions, with a possible
reproducer in the bug description.


** Changed in: ocfs2-tools (Ubuntu Trusty)
   Status: New => Triaged

** No longer affects: ocfs2-tools (Ubuntu Xenial)

** No longer affects: ocfs2-tools (Ubuntu Yakkety)

** Changed in: ocfs2-tools (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: ocfs2-tools (Ubuntu Trusty)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1584629

Title:
  Failed to start LSB: Load O2CB cluster services at system boot.

Status in ocfs2-tools package in Ubuntu:
  Fix Released
Status in ocfs2-tools source package in Trusty:
  Triaged

Bug description:
  Ubuntu 16.04.

  Sometimes (not on every boot) o2cb fails to start:

  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
     Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
     Active: failed (Result: exit-code) since Пн 2016-05-23 11:46:43 SAMT; 2min 
12s ago
   Docs: man:systemd-sysv-generator(8)
    Process: 1526 ExecStart=/etc/init.d/o2cb start (code=exited, 
status=1/FAILURE)

  май 23 11:46:43 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services 
at system boot
  май 23 11:46:43 inetgw1 o2cb[1526]: Loading filesystem "configfs": OK
  май 23 11:46:43 inetgw1 o2cb[1526]: Mounting configfs filesystem at 
/sys/kernel/config: mount: configfs is already
  май 23 11:46:43 inetgw1 o2cb[1526]: configfs is already mounted on 
/sys/kernel/config
  май 23 11:46:43 inetgw1 o2cb[1526]: Unable to mount configfs filesystem
  май 23 11:46:43 inetgw1 o2cb[1526]: Failed
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Control process exited, 
code=exited status=1
  май 23 11:46:43 inetgw1 systemd[1]: Failed to start LSB: Load O2CB cluster 
services at system boot..
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Unit entered failed state.
  май 23 11:46:43 inetgw1 systemd[1]: o2cb.service: Failed with result 
'exit-code'.

  next try is successful:
  systemctl status o2cb
  ● o2cb.service - LSB: Load O2CB cluster services at system boot.
     Loaded: loaded (/etc/init.d/o2cb; bad; vendor preset: enabled)
     Active: active (exited) since Пн 2016-05-23 11:49:07 SAMT; 1s ago
   Docs: man:systemd-sysv-generator(8)
    Process: 2101 ExecStart=/etc/init.d/o2cb start (code=exited, 
status=0/SUCCESS)

  май 23 11:49:07 inetgw1 systemd[1]: Starting LSB: Load O2CB cluster services 
at system boot
  май 23 11:49:07 inetgw1 o2cb[2101]: Loading stack plugin "o2cb": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Loading filesystem "ocfs2_dlmfs": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Mounting ocfs2_dlmfs filesystem at /dlm: 
OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Setting cluster stack "o2cb": OK
  май 23 11:49:07 inetgw1 o2cb[2101]: Starting O2CB cluster inetgw: OK
  май 23 11:49:07 inetgw1 systemd[1]: Started LSB: Load O2CB cluster services 
at system boot..

  I guess this is a startup dependency problem.
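
  (A minimal sketch of the kind of guard such an ordering problem would need,
  assuming the failure is the init script blindly re-mounting configfs; this
  is not a tested fix from this report:)

  if ! mountpoint -q /sys/kernel/config; then
      mount -t configfs configfs /sys/kernel/config
  fi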

  Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ocfs2-tools/+bug/1584629/+subscriptions



[Group.of.nepali.translators] [Bug 1621901] Re: impitool lacks support for ipv6 addresses

2020-05-19 Thread Rafael David Tinoco
** Changed in: ipmitool (Ubuntu Xenial)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1621901

Title:
  impitool lacks support for ipv6 addresses

Status in MAAS:
  Invalid
Status in ipmitool package in Ubuntu:
  Fix Released
Status in ipmitool source package in Xenial:
  Won't Fix
Status in ipmitool source package in Yakkety:
  Won't Fix
Status in ipmitool package in Debian:
  Fix Released

Bug description:
  # sudo ipmitool lan print
  Set in Progress : Set Complete
  Auth Type Support   :
  Auth Type Enable: Callback :
  : User :
  : Operator :
  : Admin:
  : OEM  :
  IP Address Source   : Static Address
  IP Address  : 0.0.0.0
  Subnet Mask : 0.0.0.0
  MAC Address : 14:58:d0:47:70:28
  SNMP Community String   :
  BMC ARP Control : ARP Responses Enabled, Gratuitous ARP Disabled
  Default Gateway IP  : 0.0.0.0
  802.1q VLAN ID  : Disabled
  802.1q VLAN Priority: 0
  Cipher Suite Priv Max   : Not Available
  Bad Password Threshold  : Not Available
  #

  The iLo in question is configured with a static IPv6 address (though SLAAC 
made no difference), and is quite reachable:
  ssh -l lamont 2001:...::cb
  lamont@2001:...::cb's password:
  User:lamont logged-in to kearns.example.com(0.0.0.0 / 
FE80::1658:D0FF:FE47:7028)
  iLO 4 Standard 2.03 at  Nov 07 2014
  Server Name:
  Server Power: On

  In fact, I can't seem to get the iLo to tell me its IPv6 address (other
  than link-local) at the ssh prompt -- only via the web UI.

  Please fix ipmitool to return the ipv6 address of the iLo, if any, in
  addition to any IPv4 address.

  lamont
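
  (A sketch, assuming an ipmitool version that already carries the IPv6 LAN
  support this report asks for; the IPv6 parameters are then exposed through
  a separate "lan6" command:)

  sudo ipmitool lan6 print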

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1621901/+subscriptions



[Group.of.nepali.translators] [Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2020-05-14 Thread Rafael David Tinoco
@napsty: the "workaround" (from your blog) is actually to use:

- ifupdown/bridge-utils/vlan/resolvconf for network setup   OR
- use systemd-networkd DIRECTLY with the KeepConfiguration= option in the .network file (a sketch follows below)

Just highlighting it here.
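
A minimal sketch of the second option, assuming an interface named eth0 with a
static address (file name and addresses are illustrative only):

# /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
KeepConfiguration=static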

@ddstreet, you said you would try to come up with the netplan change for
KeepConfiguration. Did you have time to check on this ? (just checking).

Cheers o/

** Changed in: keepalived (Ubuntu)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: keepalived (Ubuntu Xenial)
     Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: keepalived (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: keepalived (Ubuntu Disco)
     Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: keepalived (Ubuntu Eoan)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: systemd (Ubuntu)
     Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: systemd (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: systemd (Ubuntu Bionic)
     Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: systemd (Ubuntu Disco)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: systemd (Ubuntu Eoan)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: netplan
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** No longer affects: keepalived (Ubuntu Eoan)

** No longer affects: keepalived (Ubuntu Disco)

** Also affects: heartbeat (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: keepalived (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: systemd (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: systemd (Ubuntu Focal)
   Status: New => Fix Released

** Changed in: keepalived (Ubuntu Focal)
   Status: New => Confirmed

** No longer affects: heartbeat (Ubuntu Focal)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived, heartbeat,
  corosync, pacemaker (interface aliases are restarted)

Status in netplan:
  Confirmed
Status in heartbeat package in Ubuntu:
  Won't Fix
Status in keepalived package in Ubuntu:
  In Progress
Status in systemd package in Ubuntu:
  In Progress
Status in keepalived source package in Xenial:
  Confirmed
Status in systemd source package in Xenial:
  Confirmed
Status in keepalived source package in Bionic:
  Confirmed
Status in systemd source package in Bionic:
  Confirmed
Status in systemd source package in Disco:
  Won't Fix
Status in systemd source package in Eoan:
  Fix Released
Status in keepalived source package in Focal:
  Confirmed
Status in systemd source package in Focal:
  Fix Released

Bug description:
  [impact]

  - ALL related HA software has a small problem if interfaces are being
  managed by systemd-networkd: nic restarts/reconfigs are always going
  to wipe all interface aliases when the HA software is not expecting it
  (no coordination between them).

  - keepalived, smb ctdb, pacemaker, all suffer from this. Pacemaker is
  smarter in this case because it has a service monitor that will
  restart the virtual IP resource, on the affected node & nic, before
  considering a real failure, but other HA services might report a real
  failure when there is none.

  [test case]

  - comment #14 is a full test case: set up a 3-node pacemaker cluster, as in
  that example, and cause a networkd service restart: it will trigger a
  failure for the virtual IP resource monitor.

  - another example is given in the original description for keepalived.
  both suffer from the same issue (as do other HA software stacks).

  [regression potential]

  - this backports KeepConfiguration parameter, which adds some
  significant complexity to networkd's configuration and behavior, which
  could lead to regressions in correctly configuring the network at
  networkd start, or incorrectly maintaining configuration at networkd
  restart, or losing network state at networkd stop.

  - Any regressions are most likely to occur during networkd start,
  restart, or stop, and most likely to involve missing or incorrect ip
  address(es).

  - the change is based on upstream patches adding the exact feature we
  needed to fix this issue & it will be integrated with a netplan change
  to add the needed stanza to the systemd nic configuration file
  (KeepConfiguration=)

  [other info]

  original description:
  ---

  Configure netplan for interfaces, for example (a working config with
  IP addresses obfuscated)

  network:
 

[Group.of.nepali.translators] [Bug 1582899] Re: in-target: mkinitramfs: failed to determine device for /

2020-05-08 Thread Rafael David Tinoco
I'm marking this as incomplete for 18.04.2 and won't fix for xenial based
on @ahasenack's last input.

** Also affects: base-installer (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: initramfs-tools (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: live-installer (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: base-installer (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: initramfs-tools (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: live-installer (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: live-installer (Ubuntu)
   Status: Confirmed => Incomplete

** Changed in: live-installer (Ubuntu Bionic)
   Status: New => Incomplete

** Changed in: live-installer (Ubuntu Xenial)
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1582899

Title:
  in-target: mkinitramfs: failed to determine device for /

Status in base-installer package in Ubuntu:
  Invalid
Status in initramfs-tools package in Ubuntu:
  Invalid
Status in live-installer package in Ubuntu:
  Incomplete
Status in base-installer source package in Xenial:
  New
Status in initramfs-tools source package in Xenial:
  New
Status in live-installer source package in Xenial:
  Won't Fix
Status in base-installer source package in Bionic:
  New
Status in initramfs-tools source package in Bionic:
  New
Status in live-installer source package in Bionic:
  Incomplete

Bug description:
  Sysadmin reported in #ubuntu (later #ubuntu-kernel) the 16.04 ubuntu-
  server ISO installer failed due to being unable to configure linux-
  image-4.4.0-21-generic.

  Lots of diagnostics and one SSH remote session later we seem to have
  narrowed it down to the installer.

  At the installer's boot menu the F6 option "Expert mode" is chosen.

  During initial ram file-system creation (after the kernel image is installed) 
the /dev/ file-system is not mounted in /target/ and therefore
  the initramfs-tools/hook-functions::dep_add_modules_mount() cannot match
  the mount device of "/" (in this case /dev/sda3) with any node under /dev/ 
which only contains static entries.

  Cause appears to be that live-installer.postinst has the crucial step
  calling library.sh:setup_dev() commented out:

  #waypoint 1 setup_dev

  OS=linux
  setup_dev() calls setup_dev_${OS}
  setup_dev_linux() mounts procfs and devtmpfs into /target/
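
  (A sketch, not from the report, of doing the equivalent mounts by hand from
  the installer shell before re-running the kernel package configuration in
  /target; paths assume the usual d-i layout:)

  mount -t proc proc /target/proc
  mount -t devtmpfs devtmpfs /target/dev
  chroot /target dpkg --configure -a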

  

  Originally the cause of the error message appeared to be that the
  symlink names in /dev/disk/by-uuid/  haven't been updated after the
  partitioning stage if there were pre-existing partitions and file-
  systems on the install device, *and* the sysadmin chose to format the
  existing partitions when selecting mountpoints.

  In this case a hardware RAID device presents:

  /dev/sda1 (/boot/)
  /dev/sda2 (swap)
  /dev/sda3 (/)

  From the shell I noticed:

  root@tmpstorage:/# ll /dev/disk/by-uuid/
  total 0
  lrwxrwxrwx 1 root root  10 May 17 19:39 130e4419-4bfd-46d2-87f9-62e5379bf591 
-> ../../sda1
  lrwxrwxrwx 1 root root  10 May 17 19:39 127d3fa1-c07c-48e4-9e26-1b926d37625c 
-> ../../sda3
  lrwxrwxrwx 1 root root  10 May 17 19:39 78b88456-2b0b-4265-9ed2-5db61522d887 
-> ../../sda2
  lrwxrwxrwx 1 root root   9 May 17 19:39 2016-04-20-22-45-29-00 -> ../../sr1
  drwxr-xr-x 6 root root 120 May 17 19:39 ..
  drwxr-xr-x 2 root root 120 May 17 19:39 .

  root@tmpstorage:/# blkid /dev/sda*
  /dev/sda: PTUUID="a84e60fd" PTTYPE="dos"
  /dev/sda1: UUID="61365714-8ff7-47a2-8035-8aed9e3191a6" TYPE="ext4" 
PARTUUID="a84e60fd-01"
  /dev/sda2: UUID="78b88456-2b0b-4265-9ed2-5db61522d887" TYPE="swap" 
PARTUUID="a84e60fd-02"
  /dev/sda3: UUID="75f68451-9472-47c7-9efc-ed032bfa9987" TYPE="ext4" 
PARTUUID="a84e60fd-03"

  More details to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/base-installer/+bug/1582899/+subscriptions



[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2020-04-29 Thread Rafael David Tinoco
@gagarin, @rostislav,

If you can reproduce this at will, could you provide more information
about it ?

Executing something like this:

$ for _service in $(systemctl list-dependencies nfs-kernel-server --plain | tail -n +2 | awk '{print $1}'); do
      systemctl cat --full --no-pager $_service > $_service.cat
      journalctl _SYSTEMD_UNIT=$_service > $_service.log
  done; journalctl --no-pager > big.log

and providing me the "*.cat *.log" files in a .tar.gz would be very helpful.

rafaeldtinoco -at- ubuntu.com <- if you don't want to expose your logs
file in this bug.

Thanks a lot!

** Changed in: nfs-utils (Ubuntu)
   Status: Fix Released => Confirmed

** Changed in: nfs-utils (Ubuntu Bionic)
   Status: Triaged => Confirmed

** Changed in: nfs-utils (Ubuntu)
   Importance: Medium => Undecided

** Changed in: nfs-utils (Ubuntu Yakkety)
   Importance: Medium => Undecided

** Changed in: nfs-utils (Ubuntu Zesty)
   Importance: Medium => Undecided

** Changed in: nfs-utils (Ubuntu Xenial)
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1590799

Title:
  nfs-kernel-server does not start because of dependency failure

Status in nfs-utils package in Ubuntu:
  Confirmed
Status in nfs-utils source package in Trusty:
  Invalid
Status in nfs-utils source package in Xenial:
  Fix Released
Status in nfs-utils source package in Yakkety:
  Invalid
Status in nfs-utils source package in Zesty:
  Fix Released
Status in nfs-utils source package in Bionic:
  Confirmed

Bug description:
  [Impact]

   * nfs-mountd doesn't get started because of a race condition happening when 
rpcbind.socket is not specified as a needed service for it to start.
   * nfs-server uses rpcbind.target instead of rpcbind.socket. The target should not be used (Comment #24); a drop-in sketch follows below.
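
   (A sketch of the drop-in mentioned above, assuming one wants to test the
   dependency change locally before a fixed package lands; this is not the
   exact upstream patch.)

   # /etc/systemd/system/nfs-server.service.d/rpcbind.conf
   [Unit]
   Requires=rpcbind.socket
   After=rpcbind.socket

   # then: sudo systemctl daemon-reload && sudo systemctl restart nfs-server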

  [Test Case]

   * Install nfs-kernel-server inside a xenial lxc guest and restart it until nfs-mountd fails to start, complaining about an rpc error.
   * Comment #25

  [Regression Potential]

   * Cons: Systemd dependencies could break for nfs-server and nfs-mountd.
   * Pros: Patches have been accepted upstream (and tested).

  [Other Info]
   
  # Original Bug Description

  Immediately after boot:

  root@feynmann:~# systemctl status nfs-kernel-server
  ● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
     Active: inactive (dead)

  Jun 09 14:35:47 feynmann systemd[1]: Dependency failed for NFS server and 
services.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-server.service: Job 
nfs-server.service/start failed

  root@feynmann:~# systemctl status nfs-mountd.service
  ● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/lib/systemd/system/nfs-mountd.service; static; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Thu 2016-06-09 14:35:47 BST; 7min 
ago
    Process: 1321 ExecStart=/usr/sbin/rpc.mountd $RPCMOUNTDARGS (code=exited, 
status=1/FAILURE)

  Jun 09 14:35:47 feynmann systemd[1]: Starting NFS Mount Daemon...
  Jun 09 14:35:47 feynmann rpc.mountd[1321]: mountd: could not create listeners
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Control process 
exited, code=exited
  Jun 09 14:35:47 feynmann systemd[1]: Failed to start NFS Mount Daemon.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Unit entered failed 
state.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Failed with result 
'exit-code'.

  root@feynmann:~# systemctl list-dependencies nfs-kernel-server
  nfs-kernel-server.service
  ● ├─auth-rpcgss-module.service
  ● ├─nfs-config.service
  ● ├─nfs-idmapd.service
  ● ├─nfs-mountd.service
  ● ├─proc-fs-nfsd.mount
  ● ├─rpc-svcgssd.service
  ● ├─system.slice
  ● ├─network.target
  ● └─rpcbind.target
  ●   └─rpcbind.service

  root@feynmann:~# systemctl list-dependencies nfs-mountd.service
  nfs-mountd.service
  ● ├─nfs-config.service
  ● ├─nfs-server.service
  ● ├─proc-fs-nfsd.mount
  ● └─system.slice
  root@feynmann:~#

  root@feynmann:~# lsb_release -rd
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

  root@feynmann:~# apt-cache policy nfs-kernel-server
  nfs-kernel-server:
    Installed: 1:1.2.8-9ubuntu12
    Candidate: 1:1.2.8-9ubuntu12
    Version table:
   *** 1:1.2.8-9ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  Additional comments:

  1. There seems to be a circular dependency between nfs-mountd and 
nfs-kernel-server
  2. I can get it working by changing the After=,Requires= in /lib/systemd/system/nfs-{mountd|server}.service files. I have managed to get nfs-kernel-server to start but not nfs-mountd.
  3. /usr/lib/systemd/scripts/nfs-utils_env.sh references 
/etc/sysconfig/nfs which is Centos/RedHat l

[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2020-04-29 Thread Rafael David Tinoco
** Changed in: nfs-utils (Ubuntu Zesty)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: nfs-utils (Ubuntu Yakkety)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: nfs-utils (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: nfs-utils (Ubuntu)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Also affects: nfs-utils (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Tags removed: sts-sru-needed verification-done-xenial verification-needed 
xenial
** Tags added: server-next

** Changed in: nfs-utils (Ubuntu Bionic)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1590799

Title:
  nfs-kernel-server does not start because of dependency failure

Status in nfs-utils package in Ubuntu:
  Fix Released
Status in nfs-utils source package in Trusty:
  Invalid
Status in nfs-utils source package in Xenial:
  Fix Released
Status in nfs-utils source package in Yakkety:
  Invalid
Status in nfs-utils source package in Zesty:
  Fix Released
Status in nfs-utils source package in Bionic:
  Triaged

Bug description:
  [Impact]

   * nfs-mountd doesn't get started because of a race condition happening when 
rpcbind.socket is not specified as a needed service for it to start.
   * nfs-server uses rpcbind.target instead of rpcbind.socket. The target should not be used (Comment #24)

  [Test Case]

   * Install nfs-kernel-server inside a xenial lxc guest and restart it until nfs-mountd fails to start, complaining about an rpc error.
   * Comment #25

  [Regression Potential]

   * Cons: Systemd dependencies could break for nfs-server and nfs-mountd.
   * Pros: Patches have been accepted upstream (and tested).

  [Other Info]
   
  # Original Bug Description

  Immediately after boot:

  root@feynmann:~# systemctl status nfs-kernel-server
  ● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
     Active: inactive (dead)

  Jun 09 14:35:47 feynmann systemd[1]: Dependency failed for NFS server and 
services.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-server.service: Job 
nfs-server.service/start failed

  root@feynmann:~# systemctl status nfs-mountd.service
  ● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/lib/systemd/system/nfs-mountd.service; static; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Thu 2016-06-09 14:35:47 BST; 7min 
ago
    Process: 1321 ExecStart=/usr/sbin/rpc.mountd $RPCMOUNTDARGS (code=exited, 
status=1/FAILURE)

  Jun 09 14:35:47 feynmann systemd[1]: Starting NFS Mount Daemon...
  Jun 09 14:35:47 feynmann rpc.mountd[1321]: mountd: could not create listeners
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Control process 
exited, code=exited
  Jun 09 14:35:47 feynmann systemd[1]: Failed to start NFS Mount Daemon.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Unit entered failed 
state.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Failed with result 
'exit-code'.

  root@feynmann:~# systemctl list-dependencies nfs-kernel-server
  nfs-kernel-server.service
  ● ├─auth-rpcgss-module.service
  ● ├─nfs-config.service
  ● ├─nfs-idmapd.service
  ● ├─nfs-mountd.service
  ● ├─proc-fs-nfsd.mount
  ● ├─rpc-svcgssd.service
  ● ├─system.slice
  ● ├─network.target
  ● └─rpcbind.target
  ●   └─rpcbind.service

  root@feynmann:~# systemctl list-dependencies nfs-mountd.service
  nfs-mountd.service
  ● ├─nfs-config.service
  ● ├─nfs-server.service
  ● ├─proc-fs-nfsd.mount
  ● └─system.slice
  root@feynmann:~#

  root@feynmann:~# lsb_release -rd
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

  root@feynmann:~# apt-cache policy nfs-kernel-server
  nfs-kernel-server:
    Installed: 1:1.2.8-9ubuntu12
    Candidate: 1:1.2.8-9ubuntu12
    Version table:
   *** 1:1.2.8-9ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  Additional comments:

  1. There seems to be a circular dependency between nfs-mountd and 
nfs-kernel-server
  2. I can get it working by changing the After=,Requires= in /lib/systemd/system/nfs-{mountd|server}.service files. I have managed to get nfs-kernel-server to start but not nfs-mountd.
  3. /usr/lib/systemd/scripts/nfs-utils_env.sh references /etc/sysconfig/nfs, which is the CentOS/RedHat location of this file. Also /etc/default/nfs does not exist. (possibly unrelated to this bug)
  4. A file "/lib/systemd/system/-.slice" exists. This file prevents execution of 'ls *' or 'grep xxx *' commands in that directory. I am unsure whether this is intended by the sys

[Group.of.nepali.translators] [Bug 1871353] Re: after split brain detection heartbeart service stops unexpectedly

2020-04-09 Thread Rafael David Tinoco
** Changed in: heartbeat (Ubuntu)
   Status: New => Triaged

** Changed in: heartbeat (Ubuntu)
   Importance: Undecided => Medium

** Also affects: heartbeat (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: heartbeat (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: heartbeat (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1871353

Title:
  after split brain detection heartbeart service stops unexpectedly

Status in heartbeat package in Ubuntu:
  Triaged
Status in heartbeat source package in Xenial:
  Triaged

Bug description:
  Ubuntu Release
  --
  Description:  Ubuntu 16.04.6 LTS
  Release:  16.04

  
  heartbeat Package
  -
  heartbeat:
Installed: 1:3.0.6-2
Candidate: 1:3.0.6-2
Version table:
   *** 1:3.0.6-2 500
  500 http://mirror.hetzner.de/ubuntu/packages xenial/main amd64 
Packages
  100 /var/lib/dpkg/status

  
  Scenario Description
  
  When heartbeat detects a split brain scenario, a restart is triggered by the 
heartbeat service itself.

  
  Expectation
  ---
  The heartbeat service should be running after a split brain scenario was 
detected.

  
  Observation
  --
  The heartbeat service was no longer running after the split brain scenario.

  
  Further Investigation
  -
  systemd detects the restart and executes the ExecStop action.
  This behaviour is documented at 
https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStop=

  This problem most likely arises because of the automatically generated
  systemd service file (converted from the init.d script by systemd-
  sysv-generator).
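
  (One direction this analysis points to, sketched here as an assumption and
  not a tested fix: replace the generated unit with a minimal native one so
  that systemd restarts heartbeat instead of leaving it stopped; the daemon
  path below is also an assumption.)

  # /etc/systemd/system/heartbeat.service
  [Unit]
  Description=Heartbeat high-availability services
  After=network-online.target

  [Service]
  Type=forking
  ExecStart=/usr/lib/heartbeat/heartbeat
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target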

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heartbeat/+bug/1871353/+subscriptions



[Group.of.nepali.translators] [Bug 1789527] Re: Galera agent doesn't work when grastate.dat contains safe_to_bootstrap

2020-04-08 Thread Rafael David Tinoco
commit 16dee87e24ee1a0d6e37b5fa7b91c303f7c912db
Author: Ralf Haferkamp 
Date:   Tue Aug 22 15:47:47 2017 +0200

galera: Honor "safe_to_bootstrap" flag in grastate.dat

With version 3.19 galera introduced the "safe_to_bootstrap" flag to the
grastate.dat file [1]. When all nodes of a cluster are shutdown cleanly,
the last node shutting down gets this flag set to 1. (All others get a
0).

This commit enhances the galera resource agent to make use of that flag
when selecting an appropriate node for bootstrapping the cluster.  When
any of the cluster nodes has the "safe_to_bootstrap" flag set to 1, that
node is immediately selected as the bootstrap node of the cluster.

When all nodes have safe_to_bootstrap=0 or the flag is not present, the
current bootstrap behaviour is mostly unchanged. We just set
"safe_to_bootstrap" to 1 in grastate.dat on the selected bootstrap node
to allow galera to start, as outlined in the galera documentation
[2].

Fixes: #915

[1] 
http://galeracluster.com/2016/11/introducing-the-safe-to-bootstrap-feature-in-galera-cluster
[2] 
http://galeracluster.com/documentation-webpages/restartingcluster.html#safe-to-bootstrap-protection

$ git describe --tags 16dee87e24ee1a0d6e37b5fa7b91c303f7c912db
v4.0.1-107-g16dee87e

resource-agents | 1:3.9.2-5ubuntu4 | precise |
resource-agents | 1:3.9.2-5ubuntu4.1   | precise-updates |
resource-agents | 1:3.9.3+git20121009-3ubuntu2 | trusty  |
resource-agents | 1:3.9.7-1| xenial  |
resource-agents | 1:3.9.7-1ubuntu1.1   | xenial-updates  |

not affected:

resource-agents | 1:4.1.0~rc1-1ubuntu1 | bionic  |
resource-agents | 1:4.1.0~rc1-1ubuntu1.2   | bionic-updates  |
resource-agents | 1:4.2.0-1ubuntu1 | disco   |
resource-agents | 1:4.2.0-1ubuntu1.1   | disco-updates   |
resource-agents | 1:4.2.0-1ubuntu2 | eoan|
resource-agents | 1:4.4.0-3ubuntu1 | focal   |

** Also affects: resource-agents (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: resource-agents (Ubuntu Trusty)
   Status: New => Won't Fix

** Changed in: resource-agents (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: resource-agents (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: resource-agents (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: resource-agents (Ubuntu)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: resource-agents (Ubuntu)
   Importance: Medium => Undecided

** Tags added: block-proposed-xenial

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1789527

Title:
  Galera agent doesn't work when grastate.dat contains safe_to_bootstrap

Status in resource-agents package in Ubuntu:
  Fix Released
Status in resource-agents source package in Trusty:
  Won't Fix
Status in resource-agents source package in Xenial:
  Confirmed
Status in resource-agents source package in Bionic:
  Fix Released

Bug description:
  The Galera resource agent is not able to bring mysql up and promote it to
  master even if the safe_to_bootstrap flag in grastate.dat is set to 1.

  * res_percona_promote_0 on 09fde2-2 'unknown error' (1): call=1373,
  status=complete, exitreason='MySQL server failed to start (pid=2432)
  (rc=0), please check your installation',

  
  The resource agent is not able to handle safe_to_bootstrap feature in galera: 
http://galeracluster.com/2016/11/introducing-the-safe-to-bootstrap-feature-in-galera-cluster/

  I use percona cluster database which uses the same galera mechanism
  for clustering.

  Packages I use in Xenial:

  resource-agents   3.9.7-1
  percona-xtradb-cluster-server-5.6 5.6.37-26.21-0ubuntu0.16.04.2
  pacemaker 1.1.14-2ubuntu1.4
  corosync  2.3.5-3ubuntu2.1

  A workaround exists at: https://github.com/ClusterLabs/resource-agents/issues/915
  A fix also exists but it was not applied to the xenial package: https://github.com/ClusterLabs/resource-agents/pull/1022

  Is it possible to add this fix to the current resource-agents package in
  Xenial?
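
  (A sketch of the manual workaround involved, assuming the default percona
  datadir; only do this on the node known to have the most recent data:)

  sudo grep safe_to_bootstrap /var/lib/mysql/grastate.dat
  sudo sed -i 's/^safe_to_bootstrap: 0$/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat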

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/resource-agents/+bug/1789527/+subscriptions



[Group.of.nepali.translators] [Bug 1645896] Re: package ldirectord 1:3.9.7-1 failed to install/upgrade: le sous-processus script post-installation installé a retourné une erreur de sortie d'état 1

2020-04-08 Thread Rafael David Tinoco
Thank you for taking the time to report this bug. In an effort to keep an
up-to-date and valid list of bugs to work on, I have reviewed this report
to verify it still requires effort and occurs on an Ubuntu release in
standard support, and it does not.

It is unfortunate that we were unable to resolve this defect; however,
there appears to be no further action possible at this time. I am
therefore moving the bug to 'Incomplete'. If you disagree or have
new information, we would be grateful if you could please add a comment 
stating why and then change the status of the bug to 'New'.

** Also affects: resource-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: resource-agents (Ubuntu Xenial)
   Status: New => Incomplete

** Changed in: resource-agents (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1645896

Title:
  package ldirectord 1:3.9.7-1 failed to install/upgrade: le sous-
  processus script post-installation installé a retourné une erreur de
  sortie d'état 1

Status in resource-agents package in Ubuntu:
  Fix Released
Status in resource-agents source package in Xenial:
  Incomplete

Bug description:
  sc4i@sc4i-IMEDIA-X9641:~$ lsb_release -rd
  Description:  Ubuntu 16.04.1 LTS
  Release:  16.04
  sc4i@sc4i-IMEDIA-X9641:~$ apt-cache policy ldirectord
  ldirectord:
Installed: 1:3.9.7-1
Candidate: 1:3.9.7-1
   Version table:
   *** 1:3.9.7-1 500
  500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
  500 http://archive.ubuntu.com/ubuntu xenial/universe i386 Packages
  500 http://fr.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
  500 http://fr.archive.ubuntu.com/ubuntu xenial/universe i386 Packages
  100 /var/lib/dpkg/status

  E: ldirectord: installed post-installation script subprocess returned error exit status 1
  Errors were encountered while processing: ldirectord
  E: Sub-process /usr/bin/dpkg returned an error code (1)
  A package failed to install. Attempting to recover:
  Setting up ldirectord (1:3.9.7-1) ...
  Job for ldirectord.service failed because the control process exited with 
error code. See "systemctl status ldirectord.service" and "journalctl -xe" for 
details.
  invoke-rc.d: initscript ldirectord, action "start" failed.
  dpkg: error processing package ldirectord (--configure):
   installed post-installation script subprocess returned error exit status 1
  Errors were encountered while processing: ldirectord

  ProblemType: Package
  DistroRelease: Ubuntu 16.04
  Package: ldirectord 1:3.9.7-1
  ProcVersionSignature: Ubuntu 4.4.0-47.68-generic 4.4.24
  Uname: Linux 4.4.0-47-generic x86_64
  ApportVersion: 2.20.1-0ubuntu2.1
  Architecture: amd64
  Date: Tue Nov 29 23:10:27 2016
  ErrorMessage: installed post-installation script subprocess returned error exit status 1
  InstallationDate: Installed on 2016-11-23 (6 days ago)
  InstallationMedia: Lubuntu 16.04 LTS "Xenial Xerus" - Release amd64 
(20160420.1)
  PackageArchitecture: all
  RelatedPackageVersions:
   dpkg 1.18.4ubuntu1.1
   apt  1.2.15
  SourcePackage: resource-agents
  Title: package ldirectord 1:3.9.7-1 failed to install/upgrade: le 
sous-processus script post-installation installé a retourné une erreur de 
sortie d'état 1
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/resource-agents/+bug/1645896/+subscriptions



[Group.of.nepali.translators] [Bug 1223845] Re: nginx resource agent doesn't start

2020-04-08 Thread Rafael David Tinoco
Thank you for taking the time to report this bug. In an effort to keep an
up-to-date and valid list of bugs to work on, I have reviewed this report
to verify it still requires effort and occurs on an Ubuntu release in
standard support, and it does not.

$ git describe 549fb9ef2a59fc4ea09f2c19aa3379cdc73135f4

v3.9.2-53-g549fb9ef

resource-agents | 1:3.9.2-5ubuntu4 | precise |
resource-agents | 1:3.9.2-5ubuntu4.1   | precise-updates |
resource-agents | 1:3.9.3+git20121009-3ubuntu2 | trusty  |
resource-agents | 1:3.9.7-1| xenial  |
resource-agents | 1:3.9.7-1ubuntu1.1   | xenial-updates  |
resource-agents | 1:4.1.0~rc1-1ubuntu1 | bionic  |
resource-agents | 1:4.1.0~rc1-1ubuntu1.2   | bionic-updates  |
resource-agents | 1:4.2.0-1ubuntu1 | disco   |
resource-agents | 1:4.2.0-1ubuntu1.1   | disco-updates   |
resource-agents | 1:4.2.0-1ubuntu2 | eoan|
resource-agents | 1:4.4.0-3ubuntu1 | focal   |

Precise and Trusty are affected. Will flag both as won't fix due to
release schedule.

Fix Released for Others.

It is unfortunate that we were unable to resolve this defect; however,
there appears to be no further action possible at this time. I am
therefore moving the bug to 'won't fix'. If you disagree or have
new information, we would be grateful if you could please add a comment 
stating why and then change the status of the bug to 'New'.

** Also affects: resource-agents (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: resource-agents (Ubuntu Trusty)
   Status: New => Won't Fix

** Changed in: resource-agents (Ubuntu Precise)
   Status: New => Won't Fix

** Changed in: resource-agents (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: resource-agents (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: resource-agents (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1223845

Title:
  nginx resource agent doesn't start

Status in resource-agents package in Ubuntu:
  Fix Released
Status in resource-agents source package in Precise:
  Won't Fix
Status in resource-agents source package in Trusty:
  Won't Fix
Status in resource-agents source package in Xenial:
  Fix Released
Status in resource-agents source package in Bionic:
  Fix Released

Bug description:
  There is a bug in resource-agents 1:3.9.2-5ubuntu4.1 which prevents the
  nginx agent from starting. The fix is here:
  https://github.com/ClusterLabs/resource-agents/commit/549fb9ef2a59fc4ea09f2c19aa3379cdc73135f4

  With that fix applied, all works well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/resource-agents/+bug/1223845/+subscriptions



[Group.of.nepali.translators] [Bug 1589531] Re: HA_VARRUN has trailing slash

2020-04-08 Thread Rafael David Tinoco
I flagged this as blocking the next SRU to Xenial so this fix can be
included together with that SRU. To me, after so much time, this fix is
only worthwhile if done together with a bigger one (as end of standard
support for Xenial is next year).

** Also affects: resource-agents (Ubuntu Focal)
   Importance: Medium
   Status: Confirmed

** Also affects: resource-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: resource-agents (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** No longer affects: resource-agents (Ubuntu Focal)

** Changed in: resource-agents (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: resource-agents (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: resource-agents (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: resource-agents (Ubuntu Eoan)
   Status: New => Fix Released

** Tags added: block-proposed-xenial

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1589531

Title:
  HA_VARRUN has trailing slash

Status in resource-agents package in Ubuntu:
  Fix Released
Status in resource-agents source package in Xenial:
  Confirmed
Status in resource-agents source package in Bionic:
  Fix Released
Status in resource-agents source package in Eoan:
  Fix Released

Bug description:
  Because HA_VARRUN contains a trailing slash, a check in ocf_mkstatedir
  of whether $path starts with /var/run/ fails.
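
  (An illustration of the failure mode, not the agent's exact code; the
  variable contents are assumptions:)

  HA_VARRUN="/var/run/"                   # note the trailing slash
  path="/var/run/heartbeat/foo"
  case "$path" in
      "$HA_VARRUN"/*) echo "accepted" ;;  # pattern becomes /var/run//* and never matches
      *)              echo "rejected" ;;
  esac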

  This is a bug in 3.9.7-1 on Xenial.

  Fix in upstream:
  
https://github.com/ClusterLabs/resource-agents/commit/571f8dd928b168eef36f79316a5198df9cbdbdca

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/resource-agents/+bug/1589531/+subscriptions



[Group.of.nepali.translators] [Bug 1638210] Re: saidar segmentation fault

2020-04-02 Thread Rafael David Tinoco
There are other comments saying that the issue does not seem to be
solved:

https://github.com/libstatgrab/libstatgrab/issues/102#issuecomment-509715203

I'm marking this as incomplete since we still need a dump to do anything
further.


** Also affects: libstatgrab (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: libstatgrab (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: libstatgrab (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: libstatgrab (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: libstatgrab (Ubuntu Xenial)
   Status: New => Incomplete

** Changed in: libstatgrab (Ubuntu Bionic)
   Status: New => Incomplete

** Changed in: libstatgrab (Ubuntu Eoan)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1638210

Title:
  saidar segmentation fault

Status in libstatgrab package in Ubuntu:
  Fix Released
Status in libstatgrab source package in Xenial:
  Incomplete
Status in libstatgrab source package in Bionic:
  Incomplete
Status in libstatgrab source package in Eoan:
  Incomplete
Status in libstatgrab package in Debian:
  Fix Released

Bug description:
  Erreur de segmentation (core dumped)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libstatgrab/+bug/1638210/+subscriptions



[Group.of.nepali.translators] [Bug 1828988] Re: rabbitmq server fails to start after cluster reboot

2020-04-02 Thread Rafael David Tinoco
Removed rabbitmq-server package as the bug is related to the charm
itself.

** No longer affects: rabbitmq-server (Ubuntu)

** No longer affects: rabbitmq-server (Ubuntu Xenial)

** No longer affects: rabbitmq-server (Ubuntu Bionic)

** No longer affects: rabbitmq-server (Ubuntu Eoan)

** No longer affects: rabbitmq-server (Ubuntu Focal)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1828988

Title:
  rabbitmq server fails to start after cluster reboot

Status in OpenStack rabbitmq-server charm:
  In Progress

Bug description:
  After rebooting an entire fcb cluster (shutdown -r on all nodes), my
  rabbitmq cluster failed to come back up.

  rabbitmqctl cluster_status:

  http://paste.ubuntu.com/p/hh4GV2BJ8R/

  juju status for rabbitmq-server:
  http://paste.ubuntu.com/p/ptrJSrHGkG/

  bundle:
  http://paste.ubuntu.com/p/k35TTVp3Ps/

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1828988/+subscriptions



[Group.of.nepali.translators] [Bug 1641238] Re: as a reverse proxy, a 100 continue response is sent prematurely when a request contains expects: 100-continue

2020-04-02 Thread Rafael David Tinoco
** Changed in: apache2 (Ubuntu Disco)
   Status: Triaged => Won't Fix

** No longer affects: apache2 (Ubuntu Focal)

** Changed in: apache2 (Ubuntu)
   Importance: Medium => Undecided

** Changed in: apache2 (Ubuntu Disco)
   Importance: Medium => Undecided

** Changed in: apache2 (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: apache2 (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: apache2 (Ubuntu Trusty)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1641238

Title:
  as a reverse proxy, a 100 continue response is sent prematurely when a
  request contains expects: 100-continue

Status in Apache2 Web Server:
  Fix Released
Status in apache2 package in Ubuntu:
  Fix Released
Status in apache2 source package in Trusty:
  Triaged
Status in apache2 source package in Xenial:
  Triaged
Status in apache2 source package in Bionic:
  Triaged
Status in apache2 source package in Disco:
  Won't Fix
Status in apache2 source package in Eoan:
  Fix Released

Bug description:
  This affects trusty, xenial and current httpd trunk.

  https://bz.apache.org/bugzilla/show_bug.cgi?id=60330

  As a reverse proxy, a 100 continue response is sent prematurely when a
  request contains Expect: 100-continue. This causes the requesting
  client to send a body. The apache httpd proxy will then read the body
  and attempt to send it to the backend, but the backend already sent an
  error and should be allowed to NOT read the remaining request body,
  which never should have existed. When the backend does not read the
  request body, mod_proxy_pass errors out and returns a 500 error to the
  client. The client never receives the correct error message.
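
  (A reproduction sketch, with hostname and path as placeholders; curl waits
  for the interim 100 response before sending the body, which is exactly the
  response that arrives too early here:)

  curl -v -H "Expect: 100-continue" -T ./large-upload.bin http://proxy.example.com/upload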

To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1641238/+subscriptions



[Group.of.nepali.translators] [Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2020-04-01 Thread Rafael David Tinoco
From Corosync 2.4.1 Release Notes:

This release contains fix for one regression and few more smaller fixes.

"""
During 2.3.6 development the bug which causes pacemaker to not work after the 
corosync configuration file is reloaded happened. The solution is either to use 
this fixed version (recommended) or, as a quick workaround (for users who want 
to stay on 2.3.6 or 2.4.0), to create a file named pacemaker (the file name can 
be arbitrary) in the /etc/corosync/uidgid.d directory with the following content 
(you can also put the same stanza into /etc/corosync/corosync.conf):

uidgid {
gid: haclient
}
"""

Anyone relying in Trusty or Xenial corosync:

 corosync | 2.3.3-1ubuntu1   | trusty
 corosync | 2.3.3-1ubuntu4   | trusty-updates
 corosync | 2.3.5-3ubuntu1   | xenial
 corosync | 2.3.5-3ubuntu2.3 | xenial-security
 corosync | 2.3.5-3ubuntu2.3 | xenial-updates

should apply the mitigation above, like discovered previously by
commenters of this bug.
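
A sketch of applying the quoted mitigation (the file name is arbitrary, as the
release notes say):

sudo mkdir -p /etc/corosync/uidgid.d
cat <<'EOF' | sudo tee /etc/corosync/uidgid.d/pacemaker
uidgid {
        gid: haclient
}
EOF
sudo systemctl restart corosync pacemaker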

Note: Trusty is already EOS so I'm marking it as "won't fix".

Xenial should include the mitigation in a SRU.

** Changed in: pacemaker (Ubuntu Trusty)
   Status: Confirmed => Won't Fix

** Changed in: pacemaker (Ubuntu Trusty)
   Importance: Medium => Undecided

** Changed in: pacemaker (Ubuntu Xenial)
   Importance: Medium => High

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Trusty:
  Won't Fix
Status in pacemaker source package in Xenial:
  Confirmed
Status in pacemaker source package in Bionic:
  Fix Released

Bug description:
  We've seen this a few times with three node clusters, all running in
  LXC containers; pacemaker fails to restart correctly as it can't
  communicate with corosync, resulting in a down cluster.  Rebooting the
  containers resolves the issue, so suspect some sort of bad state
  either in corosync or pacemaker.

  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
mcp_read_config: Configured corosync to accept connections from group 115: 
Library error (2)
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: main: 
Starting Pacemaker 1.1.10 (Build: 42f2063):  generated-manpages agent-manpages 
ncurses libqb-logging libqb-ipc lha-fencing upstart nagios  heartbeat 
corosync-native snmp libesmtp
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
cluster_connect_quorum: Quorum acquired
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1000
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1003
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
get_node_name: Defaulting to uname -n for the local corosync node name
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-4-lxc-4[1001] - state is now member (was (null))
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1003
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node (null)[1003] - state is 
now member (was (null))
  Apr  2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]:   notice: main: CRM Git 
Version: 42f2063
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
crm_cluster_connect: Connecting to cluster infrastructure: corosync
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
get_node_name: Defaulting to uname -n for the local corosync node name
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:   notice: 
crm_cluster_connect: Connecting to cluster infrastructure: corosync
  Apr  2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]:  [MAIN  ] Denied 
connection attempt from 109:115
  Apr  2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]:  [QB] Invalid IPC 
credentials (1033732-1033746).
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: 
cluster_connect_cpg: Could not connect to the Cluster Process Group API: 11
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: main: HA 
Signon failed
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: main: Abo

[Group.of.nepali.translators] [Bug 1439649] Re: Pacemaker unable to communicate with corosync on restart under lxc

2020-04-01 Thread Rafael David Tinoco
** Also affects: pacemaker (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: pacemaker (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: pacemaker (Ubuntu)
   Status: Confirmed => Fix Released

** Also affects: pacemaker (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: pacemaker (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: pacemaker (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: pacemaker (Ubuntu Trusty)
   Status: New => Confirmed

** Changed in: pacemaker (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: pacemaker (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: pacemaker (Ubuntu Xenial)
   Importance: High => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1439649

Title:
  Pacemaker unable to communicate with corosync on restart under lxc

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Trusty:
  Won't Fix
Status in pacemaker source package in Xenial:
  Confirmed
Status in pacemaker source package in Bionic:
  Fix Released

Bug description:
  We've seen this a few times with three node clusters, all running in
  LXC containers; pacemaker fails to restart correctly as it can't
  communicate with corosync, resulting in a down cluster.  Rebooting the
  containers resolves the issue, so suspect some sort of bad state
  either in corosync or pacemaker.

  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
mcp_read_config: Configured corosync to accept connections from group 115: 
Library error (2)
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: main: 
Starting Pacemaker 1.1.10 (Build: 42f2063):  generated-manpages agent-manpages 
ncurses libqb-logging libqb-ipc lha-fencing upstart nagios  heartbeat 
corosync-native snmp libesmtp
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
cluster_connect_quorum: Quorum acquired
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1000
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1003
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
get_node_name: Defaulting to uname -n for the local corosync node name
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node 
juju-machine-4-lxc-4[1001] - state is now member (was (null))
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1003
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
crm_update_peer_state: pcmk_quorum_notification: Node (null)[1003] - state is 
now member (was (null))
  Apr  2 11:41:32 juju-machine-4-lxc-4 crmd[1033748]:   notice: main: CRM Git 
Version: 42f2063
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
crm_cluster_connect: Connecting to cluster infrastructure: corosync
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
corosync_node_name: Unable to get node name for nodeid 1001
  Apr  2 11:41:32 juju-machine-4-lxc-4 stonith-ng[1033744]:   notice: 
get_node_name: Defaulting to uname -n for the local corosync node name
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:   notice: 
crm_cluster_connect: Connecting to cluster infrastructure: corosync
  Apr  2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]:  [MAIN  ] Denied 
connection attempt from 109:115
  Apr  2 11:41:32 juju-machine-4-lxc-4 corosync[1033732]:  [QB] Invalid IPC 
credentials (1033732-1033746).
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: 
cluster_connect_cpg: Could not connect to the Cluster Process Group API: 11
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: main: HA 
Signon failed
  Apr  2 11:41:32 juju-machine-4-lxc-4 attrd[1033746]:error: main: Aborting 
startup
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:error: 
pcmk_child_exit: Child process attrd (1033746) exited: Network is down (100)
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:  warning: 
pcmk_child_exit: Pacemaker child process attrd no longer wishes to be 
respawned. Shutting ourselves down.
  Apr  2 11:41:32 juju-machine-4-lxc-4 pacemakerd[1033741]:   notice: 
pcmk_shutdown_worker: Shuting down Pacemaker
  Apr  2 11:41:32 juju-machine-4-lxc-4 pa

[Group.of.nepali.translators] [Bug 1052449] Re: corosync hangs due to missing pacemaker shutdown scripts

2020-03-29 Thread Rafael David Tinoco
I'm marking Precise and Trusty as affected.

The suggested fix is currently implemented by Xenial already:

mcp/pacemaker.in:

...
# Required-Start:   $network corosync
# Should-Start: $syslog
# Required-Stop:$network corosync
...
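
For illustration only (this is not part of the fix itself): on Debian-based
sysvinit systems those Required-Start/Required-Stop headers are what the
dependency-based boot ordering uses when the runlevel symlinks are created, so
enabling the script is normally enough to start pacemaker after corosync and
stop it before corosync on shutdown:

# Illustrative sketch, not the packaging change itself:
sudo update-rc.d pacemaker defaults
# Check the resulting ordering in a runlevel directory:
ls -l /etc/rc2.d/ | grep -E 'corosync|pacemaker'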


** Also affects: pacemaker (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: pacemaker (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: pacemaker (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: pacemaker (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: pacemaker (Ubuntu Trusty)
   Status: New => Confirmed

** Changed in: pacemaker (Ubuntu Precise)
   Status: New => Confirmed

** Changed in: pacemaker (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: pacemaker (Ubuntu)
   Importance: High => Undecided

** Changed in: pacemaker (Ubuntu Trusty)
   Status: Confirmed => Triaged

** Changed in: pacemaker (Ubuntu Precise)
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1052449

Title:
  corosync hangs due to missing pacemaker shutdown scripts

Status in pacemaker package in Ubuntu:
  Fix Released
Status in pacemaker source package in Precise:
  Triaged
Status in pacemaker source package in Trusty:
  Triaged
Status in pacemaker source package in Xenial:
  Fix Released

Bug description:
  The pacemaker package installs the right init script but doesn't link
  it to the according runlevels. If corosync is activated and started on
  the system this leads to a hanging shutdown / reboot because corosync
  only ends if pacemaker is stopped beforehand. In addition to this the
  pacemaker daemon has to start after corosync but has to stop before
  corosync.

  A possible solution would be to link the init script accordingly and
  enable it through /etc/default/.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pacemaker/+bug/1052449/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1677684] Re: /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found

2020-03-19 Thread Rafael David Tinoco
This issue still exists and should be fixed. I'm bundling it together with
some other SRUs so all of them are done at once. Thanks @niedbalski for
bringing up this issue. I'll fix Ubuntu Focal for now and follow up with
the needed SRUs.

** Changed in: corosync (Ubuntu Trusty)
 Assignee: Jorge Niedbalski (niedbalski) => (unassigned)

** Changed in: corosync (Ubuntu)
 Assignee: Jorge Niedbalski (niedbalski) => (unassigned)

** Changed in: corosync (Ubuntu Zesty)
 Assignee: Jorge Niedbalski (niedbalski) => (unassigned)

** Changed in: corosync (Ubuntu Xenial)
 Assignee: Jorge Niedbalski (niedbalski) => (unassigned)

** Changed in: corosync (Ubuntu)
   Importance: Medium => Undecided

** Changed in: corosync (Ubuntu Zesty)
   Importance: Medium => Undecided

** Changed in: corosync (Ubuntu Trusty)
   Importance: Medium => Undecided

** Changed in: corosync (Ubuntu Xenial)
   Importance: Medium => Undecided

** Also affects: corosync (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Focal)
   Importance: Undecided
   Status: Incomplete

** Also affects: corosync (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Changed in: corosync (Ubuntu Focal)
   Status: Incomplete => Confirmed

** Changed in: corosync (Ubuntu Eoan)
   Status: New => Confirmed

** Changed in: corosync (Ubuntu Disco)
   Status: New => Won't Fix

** Changed in: corosync (Ubuntu Bionic)
   Status: New => Confirmed

** Changed in: corosync (Ubuntu Zesty)
   Status: Incomplete => Won't Fix

** Changed in: corosync (Ubuntu Xenial)
   Status: Incomplete => Confirmed

** Changed in: corosync (Ubuntu Trusty)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1677684

Title:
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-
  blackbox: not found

Status in corosync package in Ubuntu:
  Confirmed
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Confirmed
Status in corosync source package in Zesty:
  Won't Fix
Status in corosync source package in Bionic:
  Confirmed
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Confirmed
Status in corosync source package in Focal:
  Confirmed

Bug description:
  [Environment]

  Ubuntu Xenial 16.04
  Amd64

  [Test Case]

  1) sudo apt-get install corosync
  2) sudo corosync-blackbox.

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L corosync |grep 
black
  /usr/bin/corosync-blackbox

  Expected results: corosync-blackbox runs OK.

  Current results:

  $ sudo corosync-blackbox
  /usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not 
found

  [Impact]

   * Cannot run corosync-blackbox

  [Regression Potential]

  * None identified.

  [Fix]
  Make the package dependent on libqb-dev.

  root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L libqb-dev | grep 
qb-bl
  /usr/sbin/qb-blackbox
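
  As an interim workaround (illustrative only; the proper fix is the packaging
  dependency described above), installing libqb-dev by hand provides the
  missing binary:

  # Workaround sketch until the corosync package pulls in libqb-dev:
  sudo apt-get install -y libqb-dev
  sudo corosync-blackbox   # should now find qb-blackbox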

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1654403] Re: Race condition in hacluster charm that leaves pacemaker down

2020-03-19 Thread Rafael David Tinoco
** Also affects: corosync (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: corosync (Ubuntu Xenial)
   Status: New => Incomplete

** Changed in: corosync (Ubuntu)
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1654403

Title:
  Race condition in hacluster charm that leaves pacemaker down

Status in OpenStack hacluster charm:
  Fix Released
Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Xenial:
  Incomplete
Status in hacluster package in Juju Charms Collection:
  Invalid

Bug description:
  Symptom: one or more hacluster nodes are left in an executing state.
  Observing the process list on the affected nodes, the command 'crm node list'
  is stuck in an infinite loop and pacemaker is not started. On nodes where
  'crm node list' and the other crm commands complete, pacemaker is started.

  See the artefacts from this run:
  
https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline/openstack/charm-percona-cluster/417131/1/1873/index.html

  Hypothesis: There is a race that leads to crm node list being executed
  before pacemaker is started. It is also possible that something causes
  pacemaker to fail to start.

  Suggest a check for pacemaker health before any crm commands are run, as
  sketched below.
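
  A minimal sketch of such a check (illustrative only; the timeout and the use
  of crm_mon are assumptions, not the charm's actual implementation):

  # Wait until pacemaker answers before running any crm commands.
  for i in $(seq 1 30); do
      crm_mon -1 >/dev/null 2>&1 && break
      sleep 2
  done
  crm node list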

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1654403/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1437359] Re: A PIDFILE is double-defined for the corosync-notifyd init script

2020-03-19 Thread Rafael David Tinoco
** Also affects: corosync (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Focal)
   Importance: Undecided
 Assignee: Rafael David Tinoco (rafaeldtinoco)
   Status: Triaged

** Also affects: corosync (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Changed in: corosync (Ubuntu Focal)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: corosync (Ubuntu Disco)
   Status: New => Won't Fix

** Changed in: corosync (Ubuntu Trusty)
   Status: New => Won't Fix

** Changed in: corosync (Ubuntu Eoan)
   Status: New => Triaged

** Changed in: corosync (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: corosync (Ubuntu Bionic)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1437359

Title:
  A PIDFILE is double-defined for the corosync-notifyd init script

Status in corosync package in Ubuntu:
  In Progress
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Triaged
Status in corosync source package in Bionic:
  Triaged
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Triaged
Status in corosync source package in Focal:
  In Progress

Bug description:
  A /etc/init.d/corosync-notifyd contains two definitions for the PIDFILE:
  > PIDFILE=/var/run/$NAME.pid
  > SCRIPTNAME=/etc/init.d/$NAME
  > PIDFILE=/var/run/corosync.pid

  The first one is correct and the second one is wrong as it refers to
  the corosync service's pidfile instead

  The corosync package version is 2.3.3-1ubuntu1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1586876] Re: Corosync report "Started" itself too early

2020-03-19 Thread Rafael David Tinoco
From upstream documentation:

"""
Pacemaker used to obtain membership and quorum from a custom Corosync plugin. 
This plugin also had the capability to start Pacemaker automatically when 
Corosync was started. Neither behavior is possible with Corosync 2.0 and beyond 
as support for plugins was removed.

Instead, Pacemaker must be started as a separate job/initscript. Also, since 
Pacemaker made use of the plugin for message routing, a node using the plugin 
(Corosync prior to 2.0) cannot talk to one that isn’t (Corosync 2.0+).
Rolling upgrades between these versions are therefore not possible and an 
alternate strategy must be used.
"""

showing that, since Ubuntu Trusty, this detection behavior is no longer
supported. Nowadays we start both services separately, using systemd.

Corosync starts with a simple one-node only (localhost) ring configured:

(c)rafaeldtinoco@clusterdev:~$ systemctl status corosync
● corosync.service - Corosync Cluster Engine
 Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor 
preset: enabled)
 Active: active (running) since Thu 2020-03-19 20:16:49 UTC; 45min ago
   Docs: man:corosync
 man:corosync.conf
 man:corosync_overview
   Main PID: 851 (corosync)
  Tasks: 9 (limit: 23186)
 Memory: 125.9M
 CGroup: /system.slice/corosync.service
 └─851 /usr/sbin/corosync -f

(c)rafaeldtinoco@clusterdev:~$ sudo corosync-quorumtool
Quorum information
--
Date: Thu Mar 19 21:02:21 2020
Quorum provider:  corosync_votequorum
Nodes:1
Node ID:  1
Ring ID:  1.5
Quorate:  Yes

Votequorum information
--
Expected votes:   1
Highest expected: 1
Total votes:  1
Quorum:   1
Flags:Quorate

Membership information
--
Nodeid  Votes Name
 1  1 node1 (local)

And systemd is responsible for guaranteeing the needed synchronization.



From pacemaker service unit:

...
After=corosync.service
Requires=corosync.service

...

# If you want Corosync to stop whenever Pacemaker is stopped,
# uncomment the next line too:
#
# ExecStopPost=/bin/sh -c 'pidof pacemaker-controld || killall -TERM corosync'

...

# Pacemaker will restart along with Corosync if Corosync is stopped while
# Pacemaker is running.
# In this case, if you want to be fenced always (if you do not want to restart)
# uncomment ExecStopPost below.
#
# ExecStopPost=/bin/sh -c 'pidof corosync || \
#  /usr/bin/systemctl --no-block stop pacemaker' 

you have different options to control the start/stop and restart behavior
according to corosync's status, for example via a drop-in as sketched below.
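
For example (a sketch only, using standard systemd drop-in mechanics rather
than anything shipped by the package), the commented-out "stop corosync when
pacemaker stops" behavior above could be enabled without editing the packaged
unit:

# Sketch only: stop corosync whenever pacemaker stops.
sudo mkdir -p /etc/systemd/system/pacemaker.service.d
sudo tee /etc/systemd/system/pacemaker.service.d/override.conf <<'EOF'
[Service]
ExecStopPost=/bin/sh -c 'pidof pacemaker-controld || killall -TERM corosync'
EOF
sudo systemctl daemon-reload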


** Changed in: corosync (Ubuntu Focal)
   Status: Triaged => Fix Released

** Changed in: corosync (Ubuntu Eoan)
   Status: Triaged => Fix Released

** Changed in: corosync (Ubuntu Bionic)
   Status: Triaged => Fix Released

** Changed in: corosync (Ubuntu Xenial)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1586876

Title:
  Corosync report "Started" itself too early

Status in corosync package in Ubuntu:
  Fix Released
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Won't Fix
Status in corosync source package in Bionic:
  Fix Released
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Fix Released
Status in corosync source package in Focal:
  Fix Released

Bug description:
  Problem description:
  currently, we have no service state check after start-stop-daemon in
  do_start();
  this can lead to an error if corosync reports itself as started too early:
  pacemaker might think it is a 'heartbeat'-backed cluster, which is not what
  we want.
  We should check that corosync is "really" started before reporting its state.

  syslog with wrong state:
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Corosync Cluster Engine 
('1.4.2'): started and ready to provide service.
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Corosync built-in features: 
nss
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Successfully read main 
configuration file '/etc/corosync/corosync.conf'.
  May 24 19:53:50 myhost corosync[1018]:   [TOTEM ] Initializing transport 
(UDP/IP Unicast).
  May 24 19:53:50 myhost corosync[1018]:   [TOTEM ] Initializing 
transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
  May 24 19:53:50 myhost pacemakerd: [1094]: info: Invoked: pacemakerd
  May 24 19:53:50 myhost pacemakerd: [1094]: info: crm_log_init_worker: Changed 
active directory to /var/lib/heartbeat/cores/root
  May 24 19:53:50 myhost pacemakerd: [1094]: info: get_cluster_type: Assuming a 
'heartbeat' based cluster
  May 24 19:53:50 myhost pacemakerd: [1094]: info: read_config: Reading

[Group.of.nepali.translators] [Bug 1586876] Re: Corosync report "Started" itself too early

2020-03-19 Thread Rafael David Tinoco
** Also affects: corosync (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: corosync (Ubuntu Focal)
   Importance: Medium
 Assignee: guessi (guessi)
   Status: In Progress

** Changed in: corosync (Ubuntu Focal)
 Assignee: guessi (guessi) => (unassigned)

** Changed in: corosync (Ubuntu Focal)
   Importance: Medium => Undecided

** Changed in: corosync (Ubuntu Trusty)
   Status: New => Won't Fix

** Changed in: corosync (Ubuntu Disco)
   Status: New => Won't Fix

** Changed in: corosync (Ubuntu Focal)
   Status: In Progress => Triaged

** Changed in: corosync (Ubuntu Eoan)
   Status: New => Triaged

** Changed in: corosync (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: corosync (Ubuntu Bionic)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1586876

Title:
  Corosync report "Started" itself too early

Status in corosync package in Ubuntu:
  Triaged
Status in corosync source package in Trusty:
  Won't Fix
Status in corosync source package in Xenial:
  Triaged
Status in corosync source package in Bionic:
  Triaged
Status in corosync source package in Disco:
  Won't Fix
Status in corosync source package in Eoan:
  Triaged
Status in corosync source package in Focal:
  Triaged

Bug description:
  Problem description:
  currently, we have no service state check after start-stop-daemon in
  do_start();
  this can lead to an error if corosync reports itself as started too early:
  pacemaker might think it is a 'heartbeat'-backed cluster, which is not what
  we want.
  We should check that corosync is "really" started before reporting its state.

  syslog with wrong state:
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Corosync Cluster Engine 
('1.4.2'): started and ready to provide service.
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Corosync built-in features: 
nss
  May 24 19:53:50 myhost corosync[1018]:   [MAIN  ] Successfully read main 
configuration file '/etc/corosync/corosync.conf'.
  May 24 19:53:50 myhost corosync[1018]:   [TOTEM ] Initializing transport 
(UDP/IP Unicast).
  May 24 19:53:50 myhost corosync[1018]:   [TOTEM ] Initializing 
transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
  May 24 19:53:50 myhost pacemakerd: [1094]: info: Invoked: pacemakerd
  May 24 19:53:50 myhost pacemakerd: [1094]: info: crm_log_init_worker: Changed 
active directory to /var/lib/heartbeat/cores/root
  May 24 19:53:50 myhost pacemakerd: [1094]: info: get_cluster_type: Assuming a 
'heartbeat' based cluster
  May 24 19:53:50 myhost pacemakerd: [1094]: info: read_config: Reading 
configure for stack: heartbeat

  expected result:
  May 24 21:45:02 myhost corosync[1021]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
  May 24 21:45:02 myhost pacemakerd: [1106]: info: Invoked: pacemakerd
  May 24 21:45:02 myhost pacemakerd: [1106]: info: crm_log_init_worker: Changed 
active directory to /var/lib/heartbeat/cores/root
  May 24 21:45:02 myhost pacemakerd: [1106]: info: config_find_next: Processing 
additional service options...
  May 24 21:45:02 myhost pacemakerd: [1106]: info: get_config_opt: Found 
'pacemaker' for option: name
  May 24 21:45:02 myhost pacemakerd: [1106]: info: get_config_opt: Found '1' 
for option: ver
  May 24 21:45:02 myhost pacemakerd: [1106]: info: get_cluster_type: Detected 
an active 'classic openais (with plugin)' cluster

  please note the order of following two lines:
  * corosync: [MAIN  ] Completed service synchronization, ready to provide 
service.
  * pacemakerd: info: get_cluster_type: ...

  affected versions:
  ALL (precise, trusty, vivid, wily, xenial, yakkety)

  upstream solution: wait_for_ipc()
  https://github.com/corosync/corosync/blob/master/init/corosync.in#L84-L99
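
  A minimal sketch of that kind of readiness check (illustrative only; consult
  the upstream wait_for_ipc() for the real implementation, as the polling
  command and timeout below are assumptions):

  # Poll corosync's IPC until it answers, so "started" is only reported
  # once the engine is actually ready to serve clients.
  wait_for_ipc() {
      try=0
      while ! corosync-cmapctl totem.cluster_name >/dev/null 2>&1; do
          try=$((try + 1))
          [ "$try" -ge 30 ] && return 1   # give up after roughly 30 seconds
          sleep 1
      done
      return 0
  }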

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1586876/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1864404] Re: [bionic] fence_scsi cannot open /var/run/cluster/fence_scsi.key (does not exist)

2020-02-23 Thread Rafael David Tinoco
** Also affects: fence-agents (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** No longer affects: fence-agents (Ubuntu Xenial)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1864404

Title:
  [bionic] fence_scsi cannot open /var/run/cluster/fence_scsi.key (does
  not exist)

Status in fence-agents package in Ubuntu:
  New
Status in fence-agents source package in Bionic:
  New

Bug description:
  Whenever trying to configure fence_scsi using Ubuntu Bionic the
  following error happens:

  Failed Actions:
  * fence_clubionicpriv01_start_0 on clubionic01 'unknown error' (1): call=8, 
status=Error, exitreason='',
  last-rc-change='Mon Feb 24 03:20:28 2020', queued=0ms, exec=1132ms

  And the logs show:

  Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Failed: Cannot open file 
"/var/run/cluster/fence_scsi.key"
  Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Please use '-h' for usage

  That happens because the key to be used by fence_scsi agent does not
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fence-agents/+bug/1864404/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1864410] Re: [xenial] fence_scsi does not support hostnames in node list

2020-02-23 Thread Rafael David Tinoco
** Also affects: fence-agents (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** No longer affects: fence-agents (Ubuntu Xenial)

** Summary changed:

- [xenial] fence_scsi does not support hostnames in node list
+ [bionic] fence_scsi does not support hostnames in node list

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1864410

Title:
  [bionic] fence_scsi does not support hostnames in node list

Status in fence-agents package in Ubuntu:
  New
Status in fence-agents source package in Bionic:
  New

Bug description:
  Whenever configuring fence_scsi in a pacemaker cluster, the
  "fence_scsi" fencing agent won't support having node names in its
  pcmk_host_list. You can reproduce this behavior using the fence_scsi
  agent directly:

  $ sudo fence_scsi --verbose -n clubionic01 --action=off -d /dev/disk
  /by-path/acpi-VMBUS:01-scsi-0:0:0:0

  Delay 0 second(s) before logging in to the fence device
  Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

  0 totem.cluster_name (str) = clubionic

  
  Executing: /usr/sbin/corosync-cmapctl nodelist.

  0 nodelist.local_node_pos (u32) = 0
  nodelist.node.0.name (str) = clubionic01
  nodelist.node.0.nodeid (u32) = 1
  nodelist.node.0.ring0_addr (str) = 10.250.3.10
  nodelist.node.1.name (str) = clubionic02
  nodelist.node.1.nodeid (u32) = 2
  nodelist.node.1.ring0_addr (str) = 10.250.3.11
  nodelist.node.2.name (str) = clubionic03
  nodelist.node.2.nodeid (u32) = 3
  nodelist.node.2.ring0_addr (str) = 10.250.3.12

  
  Failed: unable to parse output of corosync-cmapctl or node does not exist

  Please use '-h' for usage

  --

  When using IP address fence_scsi agent works:

  rafaeldtinoco@clubionic01:~$ sudo fence_scsi --verbose -n 10.250.3.10 
--action=off -d /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0
  Delay 0 second(s) before logging in to the fence device
  Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

  0 totem.cluster_name (str) = clubionic

  
  Executing: /usr/sbin/corosync-cmapctl nodelist.

  0 nodelist.local_node_pos (u32) = 0
  nodelist.node.0.name (str) = clubionic01
  nodelist.node.0.nodeid (u32) = 1
  nodelist.node.0.ring0_addr (str) = 10.250.3.10
  nodelist.node.1.name (str) = clubionic02
  nodelist.node.1.nodeid (u32) = 2
  nodelist.node.1.ring0_addr (str) = 10.250.3.11
  nodelist.node.2.name (str) = clubionic03
  nodelist.node.2.nodeid (u32) = 3
  nodelist.node.2.ring0_addr (str) = 10.250.3.12

  
  Executing: /usr/bin/sg_turs /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0

  0

  Executing: /usr/bin/sg_turs /dev/disk/by-path/acpi-
  VMBUS:01-scsi-0:0:0:0

  0

  Executing: /usr/bin/sg_persist -n -i -k -d /dev/disk/by-path/acpi-
  VMBUS:01-scsi-0:0:0:0

  0   PR generation=0x4, there are NO registered reservation keys

  
  No registration for key 3abe on device 
/dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0

  Success: Already OFF

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fence-agents/+bug/1864410/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1864410] [NEW] fence_scsi does not support hostnames in node list

2020-02-23 Thread Rafael David Tinoco
Public bug reported:

Whenever configuring fence_scsi in a pacemaker cluster, the "fence_scsi"
fencing agent won't support having node names in its pcmk_host_list. You
can reproduce this behavior using the fence_scsi agent directly:

$ sudo fence_scsi --verbose -n clubionic01 --action=off -d /dev/disk/by-
path/acpi-VMBUS:01-scsi-0:0:0:0

Delay 0 second(s) before logging in to the fence device
Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

0 totem.cluster_name (str) = clubionic


Executing: /usr/sbin/corosync-cmapctl nodelist.

0 nodelist.local_node_pos (u32) = 0
nodelist.node.0.name (str) = clubionic01
nodelist.node.0.nodeid (u32) = 1
nodelist.node.0.ring0_addr (str) = 10.250.3.10
nodelist.node.1.name (str) = clubionic02
nodelist.node.1.nodeid (u32) = 2
nodelist.node.1.ring0_addr (str) = 10.250.3.11
nodelist.node.2.name (str) = clubionic03
nodelist.node.2.nodeid (u32) = 3
nodelist.node.2.ring0_addr (str) = 10.250.3.12


Failed: unable to parse output of corosync-cmapctl or node does not exist

Please use '-h' for usage

--

When using IP address fence_scsi agent works:

rafaeldtinoco@clubionic01:~$ sudo fence_scsi --verbose -n 10.250.3.10 
--action=off -d /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0
Delay 0 second(s) before logging in to the fence device
Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

0 totem.cluster_name (str) = clubionic


Executing: /usr/sbin/corosync-cmapctl nodelist.

0 nodelist.local_node_pos (u32) = 0
nodelist.node.0.name (str) = clubionic01
nodelist.node.0.nodeid (u32) = 1
nodelist.node.0.ring0_addr (str) = 10.250.3.10
nodelist.node.1.name (str) = clubionic02
nodelist.node.1.nodeid (u32) = 2
nodelist.node.1.ring0_addr (str) = 10.250.3.11
nodelist.node.2.name (str) = clubionic03
nodelist.node.2.nodeid (u32) = 3
nodelist.node.2.ring0_addr (str) = 10.250.3.12


Executing: /usr/bin/sg_turs /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0

0

Executing: /usr/bin/sg_turs /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0

0

Executing: /usr/bin/sg_persist -n -i -k -d /dev/disk/by-path/acpi-
VMBUS:01-scsi-0:0:0:0

0   PR generation=0x4, there are NO registered reservation keys


No registration for key 3abe on device 
/dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0

Success: Already OFF

** Affects: fence-agents (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: fence-agents (Ubuntu Xenial)
 Importance: Medium
     Assignee: Rafael David Tinoco (rafaeldtinoco)
 Status: Confirmed

** Also affects: fence-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: fence-agents (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: fence-agents (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: fence-agents (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1864410

Title:
  fence_scsi does not support hostnames in node list

Status in fence-agents package in Ubuntu:
  New
Status in fence-agents source package in Xenial:
  Confirmed

Bug description:
  Whenever configuring fence_scsi in a pacemaker cluster, the
  "fence_scsi" fencing agent won't support having node names in its
  pcmk_host_list. You can reproduce this behavior using the fence_scsi
  agent directly:

  $ sudo fence_scsi --verbose -n clubionic01 --action=off -d /dev/disk
  /by-path/acpi-VMBUS:01-scsi-0:0:0:0

  Delay 0 second(s) before logging in to the fence device
  Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

  0 totem.cluster_name (str) = clubionic

  
  Executing: /usr/sbin/corosync-cmapctl nodelist.

  0 nodelist.local_node_pos (u32) = 0
  nodelist.node.0.name (str) = clubionic01
  nodelist.node.0.nodeid (u32) = 1
  nodelist.node.0.ring0_addr (str) = 10.250.3.10
  nodelist.node.1.name (str) = clubionic02
  nodelist.node.1.nodeid (u32) = 2
  nodelist.node.1.ring0_addr (str) = 10.250.3.11
  nodelist.node.2.name (str) = clubionic03
  nodelist.node.2.nodeid (u32) = 3
  nodelist.node.2.ring0_addr (str) = 10.250.3.12

  
  Failed: unable to parse output of corosync-cmapctl or node does not exist

  Please use '-h' for usage

  --

  When using IP address fence_scsi agent works:

  rafaeldtinoco@clubionic01:~$ sudo fence_scsi --verbose -n 10.250.3.10 
--action=off -d /dev/disk/by-path/acpi-VMBUS:01-scsi-0:0:0:0
  Delay 0 second(s) before logging in to the fence device
  Executing: /usr/sbin/corosync-cmapctl totem.cluster_name

  0 totem.cluster_name (str) = clubionic

  
  Executing: /usr/sbin/corosync-cmapctl nodelist.

  0 nodelist.local_node_pos (u32) = 0
  nodelist.node.0.name (str) = clubionic01
  nodelist.node.0.nodeid (u32) = 1
  nodelist.node.0.ring0_addr (str) = 10.250.3.10
  nodelist.node.1.name (str) = clubi

[Group.of.nepali.translators] [Bug 1864404] [NEW] [bionic] fence_scsi cannot open /var/run/cluster/fence_scsi.key (does not exist)

2020-02-23 Thread Rafael David Tinoco
Public bug reported:

Whenever trying to configure fence_scsi using Ubuntu Bionic the
following error happens:

Failed Actions:
* fence_clubionicpriv01_start_0 on clubionic01 'unknown error' (1): call=8, 
status=Error, exitreason='',
last-rc-change='Mon Feb 24 03:20:28 2020', queued=0ms, exec=1132ms

And the logs show:

Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Failed: Cannot open file 
"/var/run/cluster/fence_scsi.key"
Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Please use '-h' for usage

That happens because the key to be used by fence_scsi agent does not
exist.

** Affects: fence-agents (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: fence-agents (Ubuntu Xenial)
 Importance: High
 Assignee: Rafael David Tinoco (rafaeldtinoco)
 Status: Confirmed

** Also affects: fence-agents (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: fence-agents (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: fence-agents (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: fence-agents (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1864404

Title:
  [bionic] fence_scsi cannot open /var/run/cluster/fence_scsi.key (does
  not exist)

Status in fence-agents package in Ubuntu:
  New
Status in fence-agents source package in Xenial:
  Confirmed

Bug description:
  Whenever trying to configure fence_scsi using Ubuntu Bionic the
  following error happens:

  Failed Actions:
  * fence_clubionicpriv01_start_0 on clubionic01 'unknown error' (1): call=8, 
status=Error, exitreason='',
  last-rc-change='Mon Feb 24 03:20:28 2020', queued=0ms, exec=1132ms

  And the logs show:

  Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Failed: Cannot open file 
"/var/run/cluster/fence_scsi.key"
  Feb 24 03:20:31 clubionic02 fence_scsi[14072]: Please use '-h' for usage

  That happens because the key to be used by fence_scsi agent does not
  exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fence-agents/+bug/1864404/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1815101] Re: [master] Restarting systemd-networkd breaks keepalived, heartbeat, corosync, pacemaker (interface aliases are restarted)

2020-02-13 Thread Rafael David Tinoco
** Also affects: heartbeat (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: keepalived (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: systemd (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: keepalived (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: keepalived (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: keepalived (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** No longer affects: heartbeat (Ubuntu Xenial)

** Changed in: systemd (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: systemd (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: systemd (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1815101

Title:
  [master] Restarting systemd-networkd breaks keepalived, heartbeat,
  corosync, pacemaker (interface aliases are restarted)

Status in Keepalived Charm:
  New
Status in netplan:
  Confirmed
Status in heartbeat package in Ubuntu:
  Won't Fix
Status in keepalived package in Ubuntu:
  In Progress
Status in systemd package in Ubuntu:
  In Progress
Status in keepalived source package in Xenial:
  Confirmed
Status in systemd source package in Xenial:
  Confirmed
Status in keepalived source package in Bionic:
  Confirmed
Status in systemd source package in Bionic:
  Confirmed
Status in keepalived source package in Disco:
  Won't Fix
Status in systemd source package in Disco:
  Won't Fix
Status in keepalived source package in Eoan:
  In Progress
Status in systemd source package in Eoan:
  Fix Released

Bug description:
  [impact]

  - ALL related HA software has a small problem if interfaces are being
  managed by systemd-networkd: NIC restarts/reconfigs will always wipe all
  interface aliases when the HA software is not expecting it (there is no
  coordination between them).

  - keepalived, smb ctdb, and pacemaker all suffer from this. Pacemaker is
  smarter in this case because it has a service monitor that will restart
  the virtual IP resource, on the affected node and NIC, before considering
  it a real failure, but other HA services might report a real failure when
  there is none.

  [test case]

  - comment #14 is a full test case: to have 3 node pacemaker, in that
  example, and cause a networkd service restart: it will trigger a
  failure for the virtual IP resource monitor.

  - other example is given in the original description for keepalived.
  both suffer from the same issue (and other HA softwares as well).

  [regression potential]

  - this backports KeepConfiguration parameter, which adds some
  significant complexity to networkd's configuration and behavior, which
  could lead to regressions in correctly configuring the network at
  networkd start, or incorrectly maintaining configuration at networkd
  restart, or losing network state at networkd stop.

  - Any regressions are most likely to occur during networkd start,
  restart, or stop, and most likely to involve missing or incorrect ip
  address(es).

  - the change is based on upstream patches adding the exact feature we
  needed to fix this issue, and it will be integrated with a netplan change
  to add the needed stanza to the systemd NIC configuration file
  (KeepConfiguration=).
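
  As a hedged illustration of what that stanza ends up looking like (a sketch
  only; the file name is hypothetical and the address mirrors the eth0 example
  from the original description, not the exact netplan-generated output):

  # /etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  Address=192.168.0.5/24
  KeepConfiguration=static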

  [other info]

  original description:
  ---

  Configure netplan for interfaces, for example (a working config with
  IP addresses obfuscated)

  network:
  ethernets:
  eth0:
  addresses: [192.168.0.5/24]
  dhcp4: false
  nameservers:
    search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
    addresses: [10.22.11.1]
  eth2:
  addresses:
    - 12.13.14.18/29
    - 12.13.14.19/29
  gateway4: 12.13.14.17
  dhcp4: false
  nameservers:
    search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
    addresses: [10.22.11.1]
  eth3:
  addresses: [10.22.11.6/24]
  dhcp4: false
  nameservers:
    search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
    addresses: [10.22.11.1]
  eth4:
  addresses: [10.22.14.6/24]
  dhcp4: false
  nameservers:
    search: [blah.com, other.blah.com, hq.blah.com, cust.blah.com, 
phone.blah.com]
    addresses: [10.22.11.1]
  eth7:
  addresses: [9.5.17.34/29]
  dhcp4: false
  optional: true
  nameservers:
    search: [blah.c

[Group.of.nepali.translators] [Bug 1578622] Re: [SRU] glance do not require hypervisor_mapping

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1578622

Title:
  [SRU] glance do not require hypervisor_mapping

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Yakkety:
  Fix Released
Status in simplestreams source package in Bionic:
  Fix Released

Bug description:
  [Impact]
  Currently the glance mirror requires the hypervisor_mapping config in the API.
  Better to not require that, as a library consumer would not necessarily
provide it.

  [Test Case]

  * deploy a openstack environment with keystone v2 enabled
    - get a copy of the bundle available at 
http://paste.ubuntu.com/p/qxwSDtDZ52/ , this bundle deploys a minimal version 
of xenial-mitaka.

  Expected Result:
  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - glance-simplestreams-sync/0 is in blocked state and message "Image sync 
failed, retrying soon."

  In /var/log/glance-simplestreams-sync.log:
  ERROR * 02-14 15:46:07 [PID:1898] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 462, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 273, in do_sync
  tmirror.sync(smirror, path=initial_path)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 91, in sync
  return self.sync_index(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 254, in sync_index
  self.sync(reader, path=epath)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 89, in sync
  return self.sync_products(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 341, in sync_products
  self.insert_item(item, src, target, pgree, ipath_cs)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 242, in insert_item
  if self.config['hypervisor_mapping'] and 'ftype' in flat:
  KeyError: 'hypervisor_mapping'

  
  [Regression Potential]

  * This patch makes an argument optional only, there is no expected
  regressions in users of this library.

  [Other Info]

  The bundle used in the test case uses a modified version of the
  glance-simplestreams-sync charm that removes the hypervisor_mapping
  parameter when using simplestreams library.
  https://pastebin.ubuntu.com/p/Ny7jFnGfnY/

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1578622/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1583276] Re: glance restarted during image upload, image stuck in "saving" state

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1583276

Title:
  glance restarted during image upload, image stuck in "saving" state

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  CI run: https://ci.lscape.net/job/landscape-system-tests/2451/

  gs3 log attached. Below some excerpts:

  It looks like the problem started when it uploaded the first image (14.04) to 
glance. That failed:
  WARNING   * 05-18 13:36:48 [PID:10259] * root * No rabbitmq connection 
available for msg{'status': 'Syncing', 'message': 
'ubuntu-trusty-14.04-amd64-server-20160516 99%\n
  DEBUG * 05-18 13:36:50 [PID:10259] * glanceclient.common.http * curl -i 
-X POST -H 'x-image-meta-property-source_content_id: 
com.ubuntu.cloud:released:download' -H 'INFO  * 05-18 13:37:01 [PID:10364] 
* root * glance-simplestreams-sync started.
  INFO  * 05-18 13:37:01 [PID:10364] * root * 
/var/run/glance-simplestreams-sync.pid is locked, exiting
  ERROR * 05-18 13:37:07 [PID:10259] * root * Glance Client exception 
during do_sync:
  Error communicating with http://10.96.10.146:9292 [Errno 32] Broken pipe
  Will continue polling.
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 444, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 215, in do_sync
  tmirror.sync(smirror, path=initial_path)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 91, in sync
  return self.sync_index(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 254, in sync_index
  self.sync(reader, path=epath)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 89, in sync
  return self.sync_products(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 341, in sync_products
  self.insert_item(item, src, target, pgree, ipath_cs)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 303, in insert_item
  ret = self.gclient.images.create(**create_kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 
253, in create
  'POST', '/v1/images', headers=hdrs, body=image_data)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
289, in raw_request
  return self._http_request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
235, in _http_request
  raise exc.CommunicationError(message=message)
  CommunicationError: Error communicating with http://10.96.10.146:9292 [Errno 
32] Broken pipe
  INFO  * 05-18 13:37:07 [PID:10259] * root * sync done.
  INFO  * 05-18 13:38:01 [PID:10372] * root * glance-simplestreams-sync 
started.

  [Test Case]

  Found evidence in the juju logs for glance/0 that it restarted right at 
13:37:07:
  2016-05-18 13:37:07 INFO ceph-relation-changed ^MReading state information... 
0%^MReading state information... 0%^MReading state information... Done
  2016-05-18 13:37:07 INFO juju-log ceph:50: Loaded template from 
templates/ceph.conf
  2016-05-18 13:37:07 INFO juju-log ceph:50: Rendering from template: ceph.conf
  2016-05-18 13:37:07 INFO juju-log ceph:50: Wrote template 
/var/lib/charm/glance/ceph.conf.
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api stop/waiting
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api start/running, 
process 62839

  glance/1 had its last restart later:
  2016-05-18 13:32:01 INFO ceph-relation-changed glance-api start/running, 
process 31045

  glance/2 at that time too:
  2016-05-18 13:32:00 INFO ceph-relation-changed glance-api start/running, 
process 30584

  In gs3, a few log entries later, we can see that 14.04 is in state "saving" 
in glance:
  (...)
  {"images": [{"status": "sa

[Group.of.nepali.translators] [Bug 1584938] Re: Incomplete simplestreams metadata and failed juju bootstrap

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1584938

Title:
  Incomplete simplestreams metadata and failed juju bootstrap

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Won't Fix
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  This was seen on a Landscape Autopilot deployment (using ceph/ceph)
  after the fix for lp:1583276 was committed:

  root@juju-machine-0-lxc-1:/var/lib/landscape/juju-homes/1# juju ssh 
glance-simplestreams-sync/0 apt-cache policy python-simplestreams
  Warning: Permanently added 'node-11.vmwarestack,10.245.202.81' (ECDSA) to the 
list of known hosts.
  Warning: Permanently added '10.245.201.73' (ECDSA) to the list of known hosts.
  python-simplestreams:
    Installed: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
    Candidate: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1

  The glance-simplestreams-sync.log [1] shows that the xenial image was
  not completely updated and got stuck in the queued state:

  WARNING   * 05-23 16:29:08 [PID:18247] * sstreams * Ignoring inactive
  image 37d8dfe7-f829-48e0-a66f-aff0cbe0a201 with status 'queued'

  The log then shows that another xenial image was downloaded and
  ultimately the sync process switches to the daily cron.

  [Test Case]

  The problem occurs with the user tries to juju bootstrap with trusty.
  It appears that the trusty simplestreams metadata is incomplete [2,3]
  and leads to a failed bootstrap [4]. Creating a trusty instance via
  horizon (no juju involved) works fine and a bootstrap with xenial
  works also.

  [Regression Potential]

  This part was added specifically for the Xenial backport, including:

  - 436-glance-fix-race-conditions.patch (LP: #1584938)

  And chances of regression are small based on the MR feedback from SEG
  and this particular bug already stating the issue was fixed.

  [Other Info]

  Attached are a collection of logs including all the pastebins linked
  here.

  [1] - https://pastebin.canonical.com/157049/
  [2] - https://pastebin.canonical.com/157058/
  [3] - https://pastebin.canonical.com/157059/
  [4] - https://pastebin.canonical.com/157060/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-simplestreams-sync-charm/+bug/1584938/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686437] Re: [SRU] glance sync: need keystone v3 auth support

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686437

Title:
  [SRU] glance sync: need keystone v3 auth support

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Cosmic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  simplestreams can't sync images when keystone is configured to use v3,
  keystone v2 is deprecated since mitaka[0] (the version shipped with
  xenial)

  The OpenStack Keystone charm supports v3 only since Queens and
  later[1]

  [Test Case]

  * deploy a openstack environment with keystone v3 enabled
- get a copy of the bundle available at 
http://paste.ubuntu.com/p/hkhsHKqt4h/ , this bundle deploys a minimal version 
of xenial-mitaka.

  Expected result:

  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains the following stacktrace
  INFO  * 04-09 22:04:06 [PID:14571] * root * Calling DryRun mirror to get 
item list
  ERROR * 04-09 22:04:06 [PID:14571] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 471, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 232, in do_sync
  objectstore=store)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 374, in __init__
  super(ItemInfoDryRunMirror, self).__init__(config, objectstore)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 126, in __init__
  self.keystone_creds = openstack.load_keystone_creds()
File "/usr/lib/python2.7/dist-packages/simplestreams/openstack.py", line 
61, in load_keystone_creds
  raise ValueError("(tenant_id or tenant_name)")
  ValueError: (tenant_id or tenant_name)

  
  [Regression Potential]

  * A possible regression will manifest itself figuring out if v2 or v3
  should be used, after the connection is made there are no further
  changes introduced by this SRU

  
  [Other Info]

  When trying to test my changes for bug 1686086, I was unable to auth
  to keystone, which means glance image sync just doesn't work with
  a v3 keystone.

  Related bugs:
   * bug 1719879: swift client needs to use v1 auth prior to ocata
   * bug 1728982: openstack mirror with keystone v3 always imports new images
   * bug 1611987: glance-simplestreams-sync charm doesn't support keystone v3

  [0] 
https://docs.openstack.org/releasenotes/keystone/mitaka.html#deprecation-notes
  [1] 
https://docs.openstack.org/charm-guide/latest/1802.html#keystone-support-is-v3-only-for-queens-and-later

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1686437/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686086] Re: glance mirror and nova-lxd need support for squashfs images

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686086

Title:
  glance mirror and nova-lxd need support for squashfs images

Status in nova-lxd:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Zesty does not produce root.tar.gz or root.tar.xz images.
  This means that for a nova lxd cloud we need some changes to support zesty 
images.

   * simplestreams will need to learn that lxd can support a 'squashfs' image 
type, and how to upload that to glance (what 'disk_format' option to put on the 
glance image).
   * nova-lxd will need to know that it can handle squashfs images (it may 
already do that)
   * nova-lxd will possibly need to do something special with that.
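
  As a rough illustration of the first bullet, the glance mirror ends up
  choosing a disk_format per image file type when it creates the image. A
  hypothetical sketch of such a mapping (the keys, the fallback, and whether
  glance accepts 'squashfs' as a disk_format value are all assumptions here,
  not taken from the actual patch):

  # hypothetical ftype -> glance disk_format mapping; purely illustrative
  DISK_FORMAT_BY_FTYPE = {
      'disk1.img': 'qcow2',     # assumption
      'squashfs': 'squashfs',   # assumption: glance accepts this value
  }

  def disk_format_for(ftype):
      # fall back to 'raw' for unknown file types (assumption)
      return DISK_FORMAT_BY_FTYPE.get(ftype, 'raw')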

  [Test Case]

  According to comment #4, a reproducer was hard to find.

  [Regression Potential]

  This part was added specifically for the Xenial backport, including:

  - 455-nova-lxd-support-squashfs-images.patch (LP: #1686086)

  Chances of regression are small, based on the MR feedback from SEG and
  on this bug already stating that the issue was fixed.

  [Other Info]

  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-lxd/+bug/1686086/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1719879] Re: [SRU] swift client needs to use v1 auth prior to ocata

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1719879

Title:
  [SRU] swift client needs to use v1 auth prior to ocata

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Users upgrading their environment in stages may see problems with the
  image synchronization when swift is left on mitaka (or newton) while
  other services, including keystone, have been upgraded; it is common for
  clouds with strict API availability requirements to do this kind of
  staged upgrade.

  
  [Test Case]

  * deploy openstack artful-pike except swift
juju deploy ./bundle.yaml  # http://paste.ubuntu.com/p/27mzxxX9jC/
  * Add swift to the already deployed model
juju deploy ./swift.yaml  # http://paste.ubuntu.com/p/RzZ2JMBjbg/
  * Once things have settled run the script 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0

  Expected result:

  Images for xenial and trusty are uploaded and available in glance
  (openstack image list)

  Actual result:

  The synchronization script fails because it is not possible to
  authenticate with swift.

  [Potential Regression]

  * This patch attempts to authenticate using v3 and falls back to v2,
  so environments where keystone is configured to authenticate using v2
  AND v3 may see a change in behavior: simplestreams will silently prefer
  v3 over v2, whereas before this patch only v2 was used.
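
  The shape of the change is a try-v3-first, fall-back-to-v2 path. A minimal
  sketch of that idea, assuming the python keystoneclient v3 and v2.0 clients
  are used directly (the real simplestreams code may build and scope the
  clients differently, and the credential key names below are assumptions):

  from keystoneclient.v2_0 import client as ksclient_v2
  from keystoneclient.v3 import client as ksclient_v3

  def get_keystone_client(creds):
      # prefer v3 when the credentials look v3-ish
      try:
          if 'project_name' in creds or creds.get('api_version') == '3':
              return ksclient_v3.Client(**creds)
      except Exception:
          # any v3 failure falls through to the v2 client, matching the
          # pre-patch behaviour
          pass
      return ksclient_v2.Client(**creds)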

  [Other Info]
  I talked with David today (thedac) and he mentioned that the support for 
adding keystone v3 auth to simplestreams glance sync has issues when using 
older swift clients.

  The swift client lagged behind other openstack client libraries in
  gaining support for v3 auth.

  Note: This bug does not affect xenial or zesty.  They do not have the
  keystone v3 support yet, and the code submitted for SRU contains this
  fix.

  Related bugs:
   * bug 1686437: glance sync: need keystone v3 auth support

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1719879/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1728982] Re: [SRU] openstack mirror with keystone v3 always imports new images

2019-11-04 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: In Progress => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1728982

Title:
  [SRU] openstack mirror with keystone v3 always imports new images

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  On every execution of /etc/cron.*/glance_simplestreams_sync
  simplestreams will upload all the images that match the configured
  filters, regardless of whether those same images were already uploaded
  in a previous execution.

  This will potentially lead to the inability to upload new images due
  to not having enough free space.

  [Test Case]

  * deploy artful-pike
juju deploy ./artful-pike.yaml  # http://paste.ubuntu.com/p/RZqm3cGjqk/

  * Wait until glance-simplestreams-sync runs the first sync-up execution.
  * Verify the images are in glance running "openstack image list"
  * Run the synchronization script again
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams-sync

  Expected results:

  "openstack image list" prints the same list of images as before
  running the synchronization script for the 2nd time

  Actual result:

  "openstack image list" prints a list of duplicate images, e.g.:

  $  openstack image list
  
+--+---++
  | ID   | Name 
 | Status |
  
+--+---++
  | 7f946cbf-57e1-4704-92ea-928d8d4e9454 | 
auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | 7a5afbf8-f072-49af-9629-483fc27c627a | 
auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | c9a1dfbd-9e5d-4261-b43f-585e65f9733a | 
auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  | a731c994-61f3-43ea-b86c-227baec101e3 | 
auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  
+--+---++

  [Potential Regression]

  * This patch allows simplestreams to connect to swift and verify whether
  the image was already uploaded; any possible regression will manifest
  around the ability of simplestreams to connect to swift.

  [Other Info]

  When using the newly added (revno 450) v3 support for mirroring to
  openstack, simplestreams will not notice that images already exist in
  glance.  The result is that every 'sync' will import all of the images
  as new.

  The issue was simply that the tenant_id was not being correctly passed
  through to the glance query. It ended up considering only existing
  images that were owned by None, which didn't match anything.
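
  In other words, the "does this image already exist?" lookup filtered on an
  owner that was never set. A minimal sketch of the broken behaviour,
  assuming a glanceclient-style images.list() call (names simplified, not
  the actual simplestreams code):

  def find_existing_image(gclient, name, tenant_id):
      # before the fix the caller effectively passed tenant_id=None here,
      # so the owner filter matched nothing and every image looked new
      filters = {'name': name, 'owner': tenant_id}
      return list(gclient.images.list(filters=filters))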

  Note:
   * This bug is present in Artful when using the v3 keystone api. It is *not*
  present in xenial or zesty, as they do not have v3 keystone support, and the
  code submitted in the merge request has the fix included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1728982/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1583276] Re: glance restarted during image upload, image stuck in "saving" state

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1583276

Title:
  glance restarted during image upload, image stuck in "saving" state

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  CI run: https://ci.lscape.net/job/landscape-system-tests/2451/

  gs3 log attached. Below some excerpts:

  It looks like the problem started when it uploaded the first image (14.04) to 
glance. That failed:
  WARNING   * 05-18 13:36:48 [PID:10259] * root * No rabbitmq connection 
available for msg{'status': 'Syncing', 'message': 
'ubuntu-trusty-14.04-amd64-server-20160516 99%\n
  DEBUG * 05-18 13:36:50 [PID:10259] * glanceclient.common.http * curl -i 
-X POST -H 'x-image-meta-property-source_content_id: 
com.ubuntu.cloud:released:download' -H 'INFO  * 05-18 13:37:01 [PID:10364] 
* root * glance-simplestreams-sync started.
  INFO  * 05-18 13:37:01 [PID:10364] * root * 
/var/run/glance-simplestreams-sync.pid is locked, exiting
  ERROR * 05-18 13:37:07 [PID:10259] * root * Glance Client exception 
during do_sync:
  Error communicating with http://10.96.10.146:9292 [Errno 32] Broken pipe
  Will continue polling.
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 444, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 215, in do_sync
  tmirror.sync(smirror, path=initial_path)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 91, in sync
  return self.sync_index(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 254, in sync_index
  self.sync(reader, path=epath)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 89, in sync
  return self.sync_products(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 341, in sync_products
  self.insert_item(item, src, target, pgree, ipath_cs)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 303, in insert_item
  ret = self.gclient.images.create(**create_kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 
253, in create
  'POST', '/v1/images', headers=hdrs, body=image_data)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
289, in raw_request
  return self._http_request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
235, in _http_request
  raise exc.CommunicationError(message=message)
  CommunicationError: Error communicating with http://10.96.10.146:9292 [Errno 
32] Broken pipe
  INFO  * 05-18 13:37:07 [PID:10259] * root * sync done.
  INFO  * 05-18 13:38:01 [PID:10372] * root * glance-simplestreams-sync 
started.

  [Test Case]

  Found evidence in the juju logs for glance/0 that it restarted right at 
13:37:07:
  2016-05-18 13:37:07 INFO ceph-relation-changed ^MReading state information... 
0%^MReading state information... 0%^MReading state information... Done
  2016-05-18 13:37:07 INFO juju-log ceph:50: Loaded template from 
templates/ceph.conf
  2016-05-18 13:37:07 INFO juju-log ceph:50: Rendering from template: ceph.conf
  2016-05-18 13:37:07 INFO juju-log ceph:50: Wrote template 
/var/lib/charm/glance/ceph.conf.
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api stop/waiting
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api start/running, 
process 62839

  glance/1 had its last restart later:
  2016-05-18 13:32:01 INFO ceph-relation-changed glance-api start/running, 
process 31045

  glance/2 at that time too:
  2016-05-18 13:32:00 INFO ceph-relation-change

[Group.of.nepali.translators] [Bug 1611987] Re: [SRU] glance-simplestreams-sync charm doesn't support keystone v3

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1611987

Title:
  [SRU] glance-simplestreams-sync charm doesn't support keystone v3

Status in OpenStack glance-simplestreams-sync charm:
  Fix Released
Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in glance-simplestreams-sync package in Juju Charms Collection:
  Invalid

Bug description:
  [Impact]

  simplestreams can't sync images when keystone is configured to use v3;
  keystone v2 has been deprecated since mitaka[0] (the version shipped
  with xenial).

  The OpenStack Keystone charm supports v3 only since Queens and
  later[1]

  [Test Case]

  * deploy an openstack environment with keystone v3 enabled
- get a copy of the bundle available at 
http://paste.ubuntu.com/p/hkhsHKqt4h/ , this bundle deploys a minimal version 
of xenial-mitaka.

  Expected result:

  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains the following stacktrace
  INFO * 04-09 22:04:06 [PID:14571] * root * Calling DryRun mirror to get item 
list
  ERROR * 04-09 22:04:06 [PID:14571] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 471, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 232, in do_sync
  objectstore=store)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 374, in __init__
  super(ItemInfoDryRunMirror, self).__init__(config, objectstore)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 126, in __init__
  self.keystone_creds = openstack.load_keystone_creds()
File "/usr/lib/python2.7/dist-packages/simplestreams/openstack.py", line 
61, in load_keystone_creds
  raise ValueError("(tenant_id or tenant_name)")
  ValueError: (tenant_id or tenant_name)

  [Regression Potential]

  * A possible regression would manifest itself while figuring out
  whether v2 or v3 should be used; after the connection is made there are
  no further changes introduced by this SRU

  [Other Info]

  I was deploying a Mitaka Trusty 16.04 charm based Openstack cloud
  (using the cloud archives), including glance-simplestreams-sync, using
  keystone v3.

  Once I had everything deployed, the glance-simplestreams-sync service
  couldn't authenticate because it's using keystone v2, not v3, as you
  can see from the following:

  INFO  * 08-10 23:16:01 [PID:33554] * root * glance-simplestreams-sync 
started.
  DEBUG * 08-10 23:16:01 [PID:33554] * keystoneclient.session * REQ: curl 
-i -X POST http://x.y.z.240:5000/v2.0/tokens -H "Content-Type: 
application/json" -H "User-Agent: python-keystoneclient" -d '{"auth": 
{"passwordCredentials": {"username": "image-stream", "password": 
"thisisnotapassword"}, "tenantId": "blahblahtenantidblahblah"}}'
  INFO  * 08-10 23:16:01 [PID:33554] * urllib3.connectionpool * Starting 
new HTTP connection (1): x.y.z.240
  DEBUG * 08-10 23:16:01 [PID:33554] * urllib3.connectionpool * Setting 
read timeout to None
  DEBUG * 08-10 23:16:01 [PID:33554] * urllib3.connectionpool * "POST 
/v2.0/tokens HTTP/1.1" 401 114
  DEBUG * 08-10 23:16:01 [PID:33554] * keystoneclient.session * RESP: [401] 
CaseInsensitiveDict({'content-length': '114', 'vary': 'X-Auth-Token', 'server': 
'Apache/2.4.7 (Ubuntu)', 'date': 'Wed, 10 Aug 2016 23:16:01 GMT', 
'www-authenticate': 'Keystone uri="http://x.y.z.240:5000";', 
'x-openstack-request-id': 'req-f8aaf12d-01

[Group.of.nepali.translators] [Bug 1584938] Re: Incomplete simplestreams metadata and failed juju bootstrap

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1584938

Title:
  Incomplete simplestreams metadata and failed juju bootstrap

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Won't Fix
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  This was seen on a Landscape Autopilot deployment (using ceph/ceph)
  after the fix for lp:1583276 was committed:

  root@juju-machine-0-lxc-1:/var/lib/landscape/juju-homes/1# juju ssh 
glance-simplestreams-sync/0 apt-cache policy python-simplestreams
  Warning: Permanently added 'node-11.vmwarestack,10.245.202.81' (ECDSA) to the 
list of known hosts.
  Warning: Permanently added '10.245.201.73' (ECDSA) to the list of known hosts.
  python-simplestreams:
    Installed: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
    Candidate: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1

  The glance-simplestreams-sync.log [1] shows that the xenial image was
  not completely updated and got stuck in the queued state:

  WARNING   * 05-23 16:29:08 [PID:18247] * sstreams * Ignoring inactive
  image 37d8dfe7-f829-48e0-a66f-aff0cbe0a201 with status 'queued'

  The log then shows that another xenial image was downloaded and
  ultimately the sync process switches to the daily cron.
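
  One way to avoid treating such a stuck image as already synced is to
  ignore non-active glance images when deciding what exists. A minimal
  sketch of that idea, assuming a glanceclient-style listing (an
  illustration only, not the actual fix):

  def usable_existing_images(gclient):
      # skip non-active ('queued', 'saving', ...) images, matching the
      # "Ignoring inactive image ..." warning quoted above
      images = {}
      for img in gclient.images.list():
          if img.status != 'active':
              continue
          images[img.name] = img
      return images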

  [Test Case]

  The problem occurs when the user tries to juju bootstrap with trusty.
  It appears that the trusty simplestreams metadata is incomplete [2,3]
  and leads to a failed bootstrap [4]. Creating a trusty instance via
  horizon (no juju involved) works fine, and a bootstrap with xenial
  also works.

  [Regression Potential]

  This part was added specifically for the Xenial backport, including:

  - 436-glance-fix-race-conditions.patch (LP: #1584938)

  Chances of regression are small, based on the MR feedback from SEG and
  on this bug already stating that the issue was fixed.

  [Other Info]

  Attached are a collection of logs including all the pastebins linked
  here.

  [1] - https://pastebin.canonical.com/157049/
  [2] - https://pastebin.canonical.com/157058/
  [3] - https://pastebin.canonical.com/157059/
  [4] - https://pastebin.canonical.com/157060/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-simplestreams-sync-charm/+bug/1584938/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1578622] Re: [SRU] glance do not require hypervisor_mapping

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Also affects: simplestreams (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Triaged => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1578622

Title:
  [SRU] glance do not require hypervisor_mapping

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Yakkety:
  Fix Released
Status in simplestreams source package in Bionic:
  In Progress

Bug description:
  [Impact]
  Currently the glance mirror requires the hypervisor_mapping config in the
  API. It is better not to require it, as a library consumer would not
  necessarily provide it.

  [Test Case]

  * deploy an openstack environment with keystone v2 enabled
    - get a copy of the bundle available at 
http://paste.ubuntu.com/p/qxwSDtDZ52/ , this bundle deploys a minimal version 
of xenial-mitaka.

  Expected Result:
  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - glance-simplestreams-sync/0 is in a blocked state with the message
    "Image sync failed, retrying soon."

  In /var/log/glance-simplestreams-sync.log:
  ERROR * 02-14 15:46:07 [PID:1898] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 462, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 273, in do_sync
  tmirror.sync(smirror, path=initial_path)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 91, in sync
  return self.sync_index(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 254, in sync_index
  self.sync(reader, path=epath)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 89, in sync
  return self.sync_products(reader, path, data, content)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/__init__.py", 
line 341, in sync_products
  self.insert_item(item, src, target, pgree, ipath_cs)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 242, in insert_item
  if self.config['hypervisor_mapping'] and 'ftype' in flat:
  KeyError: 'hypervisor_mapping'

  
  [Regression Potential]

  * This patch only makes an argument optional; no regressions are
  expected for users of this library.
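
  The change amounts to replacing a mandatory dictionary lookup with a
  defaulted one. A minimal sketch around the line quoted in the traceback
  above (the surrounding function and the 'hypervisor_for' helper are
  made-up placeholders, not the real code):

  # before: raises KeyError when the library consumer omits the key
  #   if self.config['hypervisor_mapping'] and 'ftype' in flat:
  # after (sketch): treat a missing key as "mapping disabled"
  def maybe_map_hypervisor(config, flat, hypervisor_for):
      if config.get('hypervisor_mapping', False) and 'ftype' in flat:
          flat['hypervisor_type'] = hypervisor_for(flat['ftype'])
      return flat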

  [Other Info]

  The bundle used in the test case uses a modified version of the
  glance-simplestreams-sync charm that removes the hypervisor_mapping
  parameter when using the simplestreams library.
  https://pastebin.ubuntu.com/p/Ny7jFnGfnY/

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1578622/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686086] Re: glance mirror and nova-lxd need support for squashfs images

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686086

Title:
  glance mirror and nova-lxd need support for squashfs images

Status in nova-lxd:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Zesty does not produce root.tar.gz or root.tar.xz images.
  This means that for a nova lxd cloud we need some changes to support zesty 
images.

   * simplestreams will need to learn that lxd can support a 'squashfs' image 
type, and how to upload that to glance (what 'disk_format' option to put on the 
glance image).
   * nova-lxd will need to know that it can handle squashfs images (it may 
already do that)
   * nova-lxd will possibly need to do something special with that.

  [Test Case]

  According to comment #4, a reproducer was hard to find.

  [Regression Potential]

  This part was added specifically for the Xenial backport, including:

  - 455-nova-lxd-support-squashfs-images.patch (LP: #1686086)

  Chances of regression are small, based on the MR feedback from SEG and
  on this bug already stating that the issue was fixed.

  [Other Info]

  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-lxd/+bug/1686086/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1719879] Re: [SRU] swift client needs to use v1 auth prior to ocata

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1719879

Title:
  [SRU] swift client needs to use v1 auth prior to ocata

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Users upgrading their environment in stages may see problems with the
  image synchronization when swift is left on mitaka (or newton) while
  other services, including keystone, have been upgraded; it is common for
  clouds with strict API availability requirements to do this kind of
  staged upgrade.

  
  [Test Case]

  * deploy openstack artful-pike except swift
juju deploy ./bundle.yaml  # http://paste.ubuntu.com/p/27mzxxX9jC/
  * Add swift to the already deployed model
juju deploy ./swift.yaml  # http://paste.ubuntu.com/p/RzZ2JMBjbg/
  * Once things have settled run the script 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0

  Expected result:

  Images for xenial and trusty are uploaded and available in glance
  (openstack image list)

  Actual result:

  The synchronization script fails because it is not possible to
  authenticate with swift.

  [Potential Regression]

  * This patch attempts to authenticate using v3 and falls back to v2,
  so environments where keystone is configured to authenticate using v2
  AND v3 may see a change in behavior: simplestreams will silently prefer
  v3 over v2, whereas before this patch only v2 was used.

  [Other Info]
  I talked with David today (thedac) and he mentioned that the support for 
adding keystone v3 auth to simplestreams glance sync has issues when using 
older swift clients.

  The swift client lagged behind other openstack client libraries in
  gaining support for v3 auth.

  Note: This bug does not affect xenial or zesty.  They do not have the
  keystone v3 support yet, and the code submitted for SRU contains this
  fix.

  Related bugs:
   * bug 1686437: glance sync: need keystone v3 auth support

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1719879/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686437] Re: [SRU] glance sync: need keystone v3 auth support

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686437

Title:
  [SRU] glance sync: need keystone v3 auth support

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Cosmic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  simplestreams can't sync images when keystone is configured to use v3;
  keystone v2 has been deprecated since mitaka[0] (the version shipped
  with xenial).

  The OpenStack Keystone charm supports v3 only since Queens and
  later[1]

  [Test Case]

  * deploy an openstack environment with keystone v3 enabled
- get a copy of the bundle available at 
http://paste.ubuntu.com/p/hkhsHKqt4h/ , this bundle deploys a minimal version 
of xenial-mitaka.

  Expected result:

  - "glance image-list" lists trusty and xenial images
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains details of the images pulled from cloud-images.u.c (example: 
https://pastebin.ubuntu.com/p/RWG8QrkVDz/ )

  Actual result:

  - "glance image-list" is empty
  - the file glance-simplestreams-sync/0:/var/log/glance-simplestreams-sync.log 
 contains the following stacktrace
  INFO  * 04-09 22:04:06 [PID:14571] * root * Calling DryRun mirror to get 
item list
  ERROR * 04-09 22:04:06 [PID:14571] * root * Exception during syncing:
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 471, in main
  do_sync(charm_conf, status_exchange)
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 232, in do_sync
  objectstore=store)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 374, in __init__
  super(ItemInfoDryRunMirror, self).__init__(config, objectstore)
File "/usr/lib/python2.7/dist-packages/simplestreams/mirrors/glance.py", 
line 126, in __init__
  self.keystone_creds = openstack.load_keystone_creds()
File "/usr/lib/python2.7/dist-packages/simplestreams/openstack.py", line 
61, in load_keystone_creds
  raise ValueError("(tenant_id or tenant_name)")
  ValueError: (tenant_id or tenant_name)

  
  [Regression Potential]

  * A possible regression would manifest itself while figuring out
  whether v2 or v3 should be used; after the connection is made there are
  no further changes introduced by this SRU

  
  [Other Info]

  When trying to test my changes for bug 1686086, I was unable to auth
  to keystone, which means glance image sync just doesn't work with
  a v3 keystone.

  Related bugs:
   * bug 1719879: swift client needs to use v1 auth prior to ocata
   * bug 1728982: openstack mirror with keystone v3 always imports new images
   * bug 1611987: glance-simplestreams-sync charm doesn't support keystone v3

  [0] 
https://docs.openstack.org/releasenotes/keystone/mitaka.html#deprecation-notes
  [1] 
https://docs.openstack.org/charm-guide/latest/1802.html#keystone-support-is-v3-only-for-queens-and-later

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1686437/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1728982] Re: [SRU] openstack mirror with keystone v3 always imports new images

2019-10-16 Thread Rafael David Tinoco
** Changed in: simplestreams (Ubuntu Xenial)
   Status: In Progress => Won't Fix

** Changed in: simplestreams (Ubuntu Bionic)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

** Changed in: simplestreams (Ubuntu Bionic)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1728982

Title:
  [SRU] openstack mirror with keystone v3 always imports new images

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  Won't Fix
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  In Progress
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  On every execution of /etc/cron.*/glance_simplestreams_sync
  simplestreams will upload all the images that match the configured
  filters, regardless of whether those same images were already uploaded
  in a previous execution.

  This will potentially lead to the inability to upload new images due
  to not having enough free space.

  [Test Case]

  * deploy artful-pike
juju deploy ./artful-pike.yaml  # http://paste.ubuntu.com/p/RZqm3cGjqk/

  * Wait until glance-simplestreams-sync runs the first sync-up execution.
  * Verify the images are in glance running "openstack image list"
  * Run the synchronization script again
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams-sync

  Expected results:

  "openstack image list" prints the same list of images as before
  running the synchronization script for the 2nd time

  Actual result:

  "openstack image list" prints a list of duplicate images, e.g.:

  $  openstack image list
  
+--+---++
  | ID   | Name 
 | Status |
  
+--+---++
  | 7f946cbf-57e1-4704-92ea-928d8d4e9454 | 
auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | 7a5afbf8-f072-49af-9629-483fc27c627a | 
auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | c9a1dfbd-9e5d-4261-b43f-585e65f9733a | 
auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  | a731c994-61f3-43ea-b86c-227baec101e3 | 
auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  
+--+---++

  [Potential Regression]

  * This patch allows simplestreams to connect to swift and verify whether
  the image was already uploaded; any possible regression will manifest
  around the ability of simplestreams to connect to swift.

  [Other Info]

  When using the newly added (revno 450) v3 support for mirroring to
  openstack, simplestreams will not notice that images already exist in
  glance.  The result is that every 'sync' will import all of the images
  as new.

  The issue was simply that the tenant_id was not being correctly passed
  through to the glance query. It ended up considering only existing
  images that were owned by None, which didn't match anything.

  Note:
   * This bug is present in Artful when using the v3 keystone api. It is *not*
  present in xenial or zesty, as they do not have v3 keystone support, and the
  code submitted in the merge request has the fix included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1728982/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1840956] Re: qemu-user-static fails to install in WSL

2019-09-30 Thread Rafael David Tinoco
** Also affects: qemu (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: qemu (Ubuntu Xenial)
   Importance: Undecided => Low

** Changed in: qemu (Ubuntu Xenial)
   Status: New => Confirmed

** Summary changed:

- qemu-user-static fails to install in WSL
+ qemu-user-static fails to install in WSL and LXD

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1840956

Title:
  qemu-user-static fails to install in WSL and LXD

Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  Confirmed
Status in qemu source package in Bionic:
  Won't Fix
Status in qemu source package in Disco:
  Fix Committed

Bug description:
  [Impact]

   * The check in the qemu postinst that prevents it from accidentally
     running in a container does not work in WSL. Due to that it tries to
     register binfmt types, which can't work in that environment.

   * Fix the check so that it recognizes WSL (and probably a few other
     containers).
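
  A minimal sketch of the detection described above. The actual postinst is
  a shell maintainer script; this Python version only illustrates the two
  signals (a container= marker in /proc/1/environ, and a Microsoft kernel
  string for WSL), and the exact markers used by the real check are
  assumptions here:

  def in_container_or_wsl():
      # LXD/LXC style marker: pid 1 environment contains container=
      try:
          with open('/proc/1/environ', 'rb') as f:
              if b'container=' in f.read():
                  return True
      except IOError:
          pass
      # WSL(1) exposes a kernel version string containing "Microsoft"
      try:
          with open('/proc/version') as f:
              if 'Microsoft' in f.read():
                  return True
      except IOError:
          pass
      return False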

  [Test Case]

   * Install qemu-user-static in WSL(1) Ubuntu guest

  [Regression Potential]

   * The old check just detected LXD/LXC and any other container that put
     a container marker into /proc/1/environ. So we could now (on install)
     skip binfmt registration in some containers where we did it before.
     Overall that is just what we wanted, but there could be very
     privileged container setups (uncommon) that were able to do the
     registration before. Those would regress in the sense that it is no
     longer done on install. The change does not, however, prevent the
     user/admin from registering the type later in those (expected to be
     rare) cases.

  [Other Info]
   
   * n/a

  ---

  Happened running do-release-upgrade from 18.04 to 18.10 on Windows
  Subsystem for Linux, Windows 10 1903. qemu-user-static can no longer
  be installed or run.

  ProblemType: Package
  DistroRelease: Ubuntu 19.04
  Package: qemu-user-static 1:3.1+dfsg-2ubuntu3.3
  ProcVersionSignature: Microsoft 4.4.0-18362.1-Microsoft 4.4.35
  Uname: Linux 4.4.0-18362-Microsoft x86_64
  ApportVersion: 2.20.10-0ubuntu27.1
  Architecture: amd64
  Date: Wed Aug 21 11:43:54 2019
  Dmesg: [0.029344]  Microsoft 4.4.0-18362.1-Microsoft 4.4.35
  ErrorMessage: installed qemu-user-static package post-installation script 
subprocess returned error exit status 2
  Python3Details: /usr/bin/python3.7, Python 3.7.3, python3-minimal, 3.7.3-1
  PythonDetails: N/A
  RelatedPackageVersions:
   dpkg 1.19.6ubuntu1.1
   apt  1.8.1
  SourcePackage: qemu
  Title: package qemu-user-static 1:3.1+dfsg-2ubuntu3.3 failed to 
install/upgrade: installed qemu-user-static package post-installation script 
subprocess returned error exit status 2
  UpgradeStatus: Upgraded to disco on 2019-08-21 (0 days ago)
  mtime.conffile..etc.apport.crashdb.conf: 2019-08-09T13:43:51.502822

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1840956/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1686086] Re: glance mirror and nova-lxd need support for squashfs images

2019-09-20 Thread Rafael David Tinoco
For Xenial SRU, please read summary at:

https://bugs.launchpad.net/ubuntu/+source/simplestreams/+bug/1686437/comments/19

** Changed in: simplestreams (Ubuntu Xenial)
   Status: Confirmed => In Progress

** Also affects: simplestreams (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu Bionic)
   Importance: Undecided => Medium

** Changed in: simplestreams (Ubuntu Disco)
   Importance: Undecided => Medium

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: simplestreams (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Disco)
   Status: New => Fix Released

** Description changed:

+ [Impact]
+ 
  Zesty does not produce root.tar.gz or root.tar.xz images.
  This means that for a nova lxd cloud we need some changes to support zesty 
images.
  
   * simplestreams will need to learn that lxd can support a 'squashfs' image 
type, and how to upload that to glance (what 'disk_format' option to put on the 
glance image).
   * nova-lxd will need to know that it can handle squashfs images (it may 
already do that)
   * nova-lxd will possibly need to do something special with that.
+ 
+ [Test Case]
+ 
+ According to comment #4, a reproducer was hard to be found.
+ 
+ [Regression Potential]
+ 
+ This part was added specifically for the Xenial backport, including:
+ 
+ - 455-nova-lxd-support-squashfs-images.patch (LP: #1686086)
+ 
+ And chances of regression are small based on the MR feedback from SEG
+ and this particular bug already stating the issue was fixed.
+ 
+ [Other Info]
+ 
+ N/A

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1686086

Title:
  glance mirror and nova-lxd need support for squashfs images

Status in nova-lxd:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  In Progress
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Zesty does not produce root.tar.gz or root.tar.xz images.
  This means that for a nova lxd cloud we need some changes to support zesty 
images.

   * simplestreams will need to learn that lxd can support a 'squashfs' image 
type, and how to upload that to glance (what 'disk_format' option to put on the 
glance image).
   * nova-lxd will need to know that it can handle squashfs images (it may 
already do that)
   * nova-lxd will possibly need to do something special with that.

  [Test Case]

  According to comment #4, a reproducer was hard to find.

  [Regression Potential]

  This part was added specifically for the Xenial backport, including:

  - 455-nova-lxd-support-squashfs-images.patch (LP: #1686086)

  Chances of regression are small, based on the MR feedback from SEG and
  on this bug already stating that the issue was fixed.

  [Other Info]

  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-lxd/+bug/1686086/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1583276] Re: glance restarted during image upload, image stuck in "saving" state

2019-09-20 Thread Rafael David Tinoco
endering from template: ceph.conf
  2016-05-18 13:37:07 INFO juju-log ceph:50: Wrote template 
/var/lib/charm/glance/ceph.conf.
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api stop/waiting
  2016-05-18 13:37:07 INFO ceph-relation-changed glance-api start/running, 
process 62839
  
  glance/1 had its last restart later:
  2016-05-18 13:32:01 INFO ceph-relation-changed glance-api start/running, 
process 31045
  
  glance/2 at that time too:
  2016-05-18 13:32:00 INFO ceph-relation-changed glance-api start/running, 
process 30584
  
  In gs3, a few log entries later, we can see that 14.04 is in state "saving" 
in glance:
  (...)
  {"images": [{"status": "saving", "deleted_at": null, "name": 
"auto-sync/ubuntu-trusty-14.04-amd64-server-20160516-disk1.img", "deleted": 
false, "container_format": "bare
  (...)
  
  It remains in this state throughout the logs.
  
  gs3 then proceeds to download 16.04, upload it to glance, and then
  finally publishes the streams. It's unknown if 14.04 is part of this
  publication or not.
  
+ [Regression Potential]
+ 
+ This part was added specifically for the Xenial backport, including:
+ 
+ - 433-glance-ignore-inactive-images.patch (LP: #1583276)
+ 
+ And chances of regression are small based on the MR feedback from SEG
+ and this particular bug already stating the issue was fixed.
+ 
+ [Other Info]
+ 
  Landscape however fails, because it doesn't find enough "active" images:
  May 18 14:33:29 job-handler-1 INFO  RetryingCall for '_wait_for_active_image' 
failed, will try again.
  May 18 14:33:29 job-handler-1 INFO  Traceback: : 
#012/usr/lib/python2.7/dist-packages/twisted/internet/defer.py:393:callback#012/usr/lib/python2.7/dist-packages/twisted/internet/defer.py:501:_startRunCallbacks#012/usr/lib/python2.7/dist-packages/twisted/internet/defer.py:588:_runCallbacks#012/usr/lib/python2.7/dist-packages/twisted/internet/defer.py:1184:gotResult#012---
  
---#012/usr/lib/python2.7/dist-packages/twisted/internet/defer.py:1128:_inlineCallbacks#012/opt/canonical/landscape/canonical/landscape/model/openstack/jobs/images.py:48:_wait_for_active_image

** Also affects: simplestreams (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Disco)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1583276

Title:
  glance restarted during image upload, image stuck in "saving" state

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Fix Released
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  In Progress
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  CI run: https://ci.lscape.net/job/landscape-system-tests/2451/

  gs3 log attached. Below some excerpts:

  It looks like the problem started when it uploaded the first image (14.04) to 
glance. That failed:
  WARNING   * 05-18 13:36:48 [PID:10259] * root * No rabbitmq connection 
available for msg{'status': 'Syncing', 'message': 
'ubuntu-trusty-14.04-amd64-server-20160516 99%\n
  DEBUG * 05-18 13:36:50 [PID:10259] * glanceclient.common.http * curl -i 
-X POST -H 'x-image-meta-property-source_content_id: 
com.ubuntu.cloud:released:download' -H 'INFO  * 05-18 13:37:01 [PID:10364] 
* root * glance-simplestreams-sync started.
  INFO  * 05-18 13:37:01 [PID:10364] * root * 
/var/run/glance-simplestreams-sync.pid is locked, exiting
  ERROR * 05-18 13:37:07 [PID:10259] * root * Glance Client exception 
during do_sync:
  Error communicating with http://10.96.10.146:9292 [Errno 32] Broken pipe
  Will continue polling.
  Traceback (most recent call last):
File "/usr/share/glance-simplestreams-sync/glance-simplestreams-sync.py", 
line 444, in main
  do_sync(charm_conf, status_exchange

[Group.of.nepali.translators] [Bug 1719879] Re: [SRU] swift client needs to use v1 auth prior to ocata

2019-09-20 Thread Rafael David Tinoco
** Also affects: simplestreams (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: simplestreams (Ubuntu Disco)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: simplestreams (Ubuntu Disco)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1719879

Title:
  [SRU] swift client needs to use v1 auth prior to ocata

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  In Progress
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  Users upgrading their environment in stages may see problems with the
  image synchronization when swift is left on mitaka (or newton) while
  other services, including keystone, have been upgraded; it is common for
  clouds with strict API availability requirements to do this kind of
  staged upgrade.

  
  [Test Case]

  * deploy openstack artful-pike except swift
juju deploy ./bundle.yaml  # http://paste.ubuntu.com/p/27mzxxX9jC/
  * Add swift to the already deployed model
juju deploy ./swift.yaml  # http://paste.ubuntu.com/p/RzZ2JMBjbg/
  * Once things have settled run the script 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams_sync in glance-simplestreams-sync/0

  Expected result:

  Images for xenial and trusty are uploaded and available in glance
  (openstack image list)

  Actual result:

  The synchronization script fails because it is not possible to
  authenticate with swift.

  [Potential Regression]

  * This patch attempts to authenticate using v3 and falls back to v2,
  so environments where keystone is configured to authenticate using v2
  AND v3 may see a change in behavior: simplestreams will silently prefer
  v3 over v2, whereas before this patch only v2 was used.

  [Other Info]
  I talked with David today (thedac) and he mentioned that the support for 
adding keystone v3 auth to simplestreams glance sync has issues when using 
older swift clients.

  The swift client lagged behind other openstack client libraries in
  gaining support for v3 auth.

  Note: This bug does not affect xenial or zesty.  They do not have the
  keystone v3 support yet, and the code submitted for SRU contains this
  fix.

  Related bugs:
   * bug 1686437: glance sync: need keystone v3 auth support

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1719879/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1728982] Re: [SRU] openstack mirror with keystone v3 always imports new images

2019-09-20 Thread Rafael David Tinoco
** Also affects: simplestreams (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Xenial)
   Status: Fix Released => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Changed in: simplestreams (Ubuntu Disco)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
   Importance: High => Medium

** Changed in: simplestreams (Ubuntu)
   Importance: High => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1728982

Title:
  [SRU] openstack mirror with keystone v3 always imports new images

Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  In Progress
Status in simplestreams source package in Artful:
  Won't Fix
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  On every execution of /etc/cron.*/glance_simplestreams_sync
  simplestreams will upload all the images that match the configured
  filters, regardless of whether those same images were already uploaded
  in a previous execution.

  This will potentially lead to the inability to upload new images due
  to not having enough free space.

  [Test Case]

  * deploy artful-pike
juju deploy ./artful-pike.yaml  # http://paste.ubuntu.com/p/RZqm3cGjqk/

  * Wait until glance-simplestreams-sync runs the first sync-up execution.
  * Verify the images are in glance running "openstack image list"
  * Run the synchronization script again
juju ssh glance-simplestreams-sync/0 sudo 
/etc/cron.daily/glance_simplestreams-sync

  Expected results:

  "openstack image list" prints the same list of images as before
  running the synchronization script for the 2nd time

  Actual result:

  "openstack image list" prints a list of duplicate images, e.g.:

  $  openstack image list
  
  +--------------------------------------+---------------------------------------------------------------+--------+
  | ID                                   | Name                                                          | Status |
  +--------------------------------------+---------------------------------------------------------------+--------+
  | 7f946cbf-57e1-4704-92ea-928d8d4e9454 | auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | 7a5afbf8-f072-49af-9629-483fc27c627a | auto-sync/ubuntu-trusty-14.04-amd64-server-20180404-disk1.img | active |
  | c9a1dfbd-9e5d-4261-b43f-585e65f9733a | auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  | a731c994-61f3-43ea-b86c-227baec101e3 | auto-sync/ubuntu-xenial-16.04-amd64-server-20180405-disk1.img | active |
  +--------------------------------------+---------------------------------------------------------------+--------+

  [Potential Regression]

  * This patch allows simplestreams to connect to swift and verify whether
  an image was already uploaded or not; any possible regression will
  manifest around the ability of simplestreams to connect to swift.

  [Other Info]

  When using the newly added (revno 450) v3 support for mirroring to
  openstack, simplestreams will not notice that images already exist in
  glance.  The result is that every 'sync' will import all images as new.

  The issue was simply that the tenant_id was not being correctly passed
  through to the glance query.  It ended up considering only existing
  images that were owned by None, which didn't match anything.
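
  For illustration only, a rough sketch of the kind of owner-scoped glance
  query involved, using python-glanceclient and keystoneauth1; the endpoint,
  credentials and variable names here are assumptions, not the charm's code:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from glanceclient import Client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='glance-sync', password='secret',
                       project_name='services',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    glance = Client('2', session=sess)

    # The value that was effectively lost (ending up as None) before the fix.
    tenant_id = sess.get_project_id()

    # With owner=None nothing matches, so every image looks "new" on each
    # sync; with the real tenant_id the already-imported images are found.
    existing = [img.name
                for img in glance.images.list(filters={'owner': tenant_id})]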

  Note:
   * This bug is present in Artful when using the v3 keystone API. It is *not*
     present in xenial or zesty, as they do not have v3 keystone support, and
     the code submitted in the merge request includes the fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/simplestreams/+bug/1728982/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1584938] Re: Incomplete simplestreams metadata and failed juju bootstrap

2019-09-20 Thread Rafael David Tinoco
** Also affects: simplestreams (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: simplestreams (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Changed in: simplestreams (Ubuntu Disco)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Bionic)
   Status: New => Fix Released

** Changed in: simplestreams (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: simplestreams (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: simplestreams (Ubuntu)
   Importance: High => Medium

** Changed in: simplestreams (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

** Description changed:

+ [Impact]
+ 
  This was seen on a Landscape Autopilot deployment (using ceph/ceph)
  after the fix for lp:1583276 was committed:
  
  root@juju-machine-0-lxc-1:/var/lib/landscape/juju-homes/1# juju ssh 
glance-simplestreams-sync/0 apt-cache policy python-simplestreams
  Warning: Permanently added 'node-11.vmwarestack,10.245.202.81' (ECDSA) to the 
list of known hosts.
  Warning: Permanently added '10.245.201.73' (ECDSA) to the list of known hosts.
  python-simplestreams:
-   Installed: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
-   Candidate: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
+   Installed: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
+   Candidate: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
  
- 
- The glance-simplestreams-sync.log [1] shows that the xenial image was not 
completely updated and got stuck in the queued state:
+ The glance-simplestreams-sync.log [1] shows that the xenial image was
+ not completely updated and got stuck in the queued state:
  
  WARNING   * 05-23 16:29:08 [PID:18247] * sstreams * Ignoring inactive
  image 37d8dfe7-f829-48e0-a66f-aff0cbe0a201 with status 'queued'
  
+ The log then shows that another xenial image was downloaded and
+ ultimately the sync process switches to the daily cron.
  
- The log then shows that another xenial image was downloaded and ultimately 
the sync process switches to the daily cron.
+ [Test Case]
  
  The problem occurs when the user tries to juju bootstrap with trusty. It
  appears that the trusty simplestreams metadata is incomplete [2,3] and
  leads to a failed bootstrap [4]. Creating a trusty instance via horizon
  (no juju involved) works fine and a bootstrap with xenial works also.
  
- Attached are a collection of logs including all of the pastebins linked
+ [Regression Potential]
+ 
+ This part was added specifically for the Xenial backport, including:
+ 
+ - 436-glance-fix-race-conditions.patch (LP: #1584938)
+ 
+ The chances of regression are small, based on the MR feedback from SEG
+ and on this particular bug already stating that the issue was fixed.
+ 
+ [Other Info]
+ 
+ Attached are a collection of logs including all the pastebins linked
  here.
  
  [1] - https://pastebin.canonical.com/157049/
  [2] - https://pastebin.canonical.com/157058/
  [3] - https://pastebin.canonical.com/157059/
  [4] - https://pastebin.canonical.com/157060/

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1584938

Title:
  Incomplete simplestreams metadata and failed juju bootstrap

Status in Glance - Simplestreams Sync Charm:
  Invalid
Status in Landscape Server:
  Fix Committed
Status in Landscape Server 16.06 series:
  Won't Fix
Status in simplestreams:
  Fix Released
Status in simplestreams package in Ubuntu:
  Fix Released
Status in simplestreams source package in Xenial:
  In Progress
Status in simplestreams source package in Bionic:
  Fix Released
Status in simplestreams source package in Disco:
  Fix Released

Bug description:
  [Impact]

  This was seen on a Landscape Autopilot deployment (using ceph/ceph)
  after the fix for lp:1583276 was committed:

  root@juju-machine-0-lxc-1:/var/lib/landscape/juju-homes/1# juju ssh 
glance-simplestreams-sync/0 apt-cache policy python-simplestreams
  Warning: Permanently added 'node-11.vmwarestack,10.245.202.81' (ECDSA) to the 
list of known hosts.
  Warning: Permanently added '10.245.201.73' (ECDSA) to the list of known hosts.
  python-simplestreams:
    Installed: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1
    Candidate: 0.1.0~bzr434~trunk-0ubuntu1~ubuntu14.04.1

  The glance-simplestreams-sync.log [1] shows that the xenial image was
  not completely updated and got stuck in the queued state:

  WARNING   * 05-23 16:29:08 [PID:18247] * sstreams * Ignoring inactive
  image 37d8dfe7-f829-48e0-a66f-aff0cbe0a201 with status 'queued'

  The log then shows that another xenial image was downloaded and
  ultimately the sync process switches to the daily cron.

  [Test Case]

  The problem occ

[Group.of.nepali.translators] [Bug 1687095] Re: crm cluster health: NameError: global name 'parallax' is not defined

2019-09-03 Thread Rafael David Tinoco
In Eoan:

$ sudo crm cluster health
/usr/lib/python3/dist-packages/crmsh/scripts.py:431: YAMLLoadWarning: calling 
yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. 
Please read https://msg.pyyaml.org/load for full details.
  data = yaml.load(f)
INFO: Verify health and configuration
INFO: Nodes: cluster01, cluster02, cluster03
OK: Collect information
Ctrl-C, leavinglth check...
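
As an aside, the YAMLLoadWarning in the output above comes from an
unqualified yaml.load() call; a minimal sketch of the non-deprecated form
(the file name is just a placeholder, and this is unrelated to the
parallax/hb_report problem itself):

    import yaml

    with open('health.yml') as f:
        data = yaml.load(f, Loader=yaml.SafeLoader)  # equivalent: yaml.safe_load(f)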



It looks like, despite parallax being available, crm cluster health might
need *a real* fix in Eoan as well.

After the command is issued, it hangs and one of the nodes has:

root 14476  0.0  0.0   2864  2116 ?Ss   21:46   0:00 /bin/sh
/usr/sbin/hb_report __slave DEST=health-report FROM_TIME=1567460760.0
TO_TIME=0 USER_NODES= NODES=cluster01 cluster02 cluster03
HA_LOG=/var/log/syslog SANITIZE=passw.* DO_SANITIZE=0 SKIP_LVL=False
EXTRA_LOGS=/var/log/syslog /var/log/pacemaker/pacemaker.log
/var/log/pacemaker.log /var/log/ha-cluster-bootstrap.log
PCMK_LOG=/var/log/pacemaker/pacemaker.log /var/log/pacemaker.log
VERBOSITY=0

And it is basically waiting on a read() - likely from "cat".

When issuing the hb_report by hand:

hb_report -n "cluster01 cluster02 cluster03" -f 10:00 -t 21:00

I'm able to get the report (important for a remote debug, for example).

For me, this is likely a crmsh <-> cluster-glue compatibility issue, due to
changes in one or the other over time.

I'll review this later as crmsh is in -universe and this is not one of
its core functions to be cleaned up right now.

** Tags added: server-next

** Changed in: crmsh (Ubuntu)
   Status: Fix Released => Confirmed

** Changed in: crmsh (Ubuntu Xenial)
   Status: Triaged => Confirmed

** Summary changed:

- crm cluster health: NameError: global name 'parallax' is not defined
+ crm cluster health does not work: python3-parallax and cluster-glue 
(hb_report) dependencies

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1687095

Title:
  crm cluster health does not work: python3-parallax and cluster-glue
  (hb_report) dependencies

Status in crmsh package in Ubuntu:
  Confirmed
Status in crmsh source package in Xenial:
  Confirmed
Status in crmsh package in Debian:
  Fix Released

Bug description:
  [Environment]

  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.2 LTS
  Release:16.04
  Codename:   xenial

  [Description]

  Running 'crm cluster health' raises the following exception:

  
  root@juju-niedbalski-xenial-machine-15:/home/ubuntu# crm cluster health
  INFO: Check the health of the cluster
  INFO: Nodes: juju-niedbalski-xenial-machine-13, 
juju-niedbalski-xenial-machine-14, juju-niedbalski-xenial-machine-15
  Traceback (most recent call last):
File "/usr/sbin/crm", line 54, in <module>
  rc = main.run()
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 351, in run
  return main_input_loop(context, user_args)
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 240, in 
main_input_loop
  rc = handle_noninteractive_use(context, user_args)
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 196, in 
handle_noninteractive_use
  if context.run(' '.join(l)):
File "/usr/lib/python2.7/dist-packages/crmsh/ui_context.py", line 75, in run
  rv = self.execute_command() is not False
File "/usr/lib/python2.7/dist-packages/crmsh/ui_context.py", line 245, in 
execute_command
  rv = self.command_info.function(*arglist)
File "/usr/lib/python2.7/dist-packages/crmsh/ui_cluster.py", line 158, in 
do_health
  return scripts.run(script, script_args(params), script_printer())
File "/usr/lib/python2.7/dist-packages/crmsh/scripts.py", line 2045, in run
  opts = _make_options(params)
File "/usr/lib/python2.7/dist-packages/crmsh/scripts.py", line 383, in 
_make_options
  opts = parallax.Options()
  NameError: global name 'parallax' is not defined

  [Solution]

  - Depends on python-parallax package.
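
  The NameError in the traceback above is the typical symptom of an optional
  import that silently failed; a minimal sketch of the pattern, for
  illustration only (names chosen here, not the actual crmsh source):

    try:
        import parallax            # provided by the python-parallax package
        has_parallax = True
    except ImportError:
        has_parallax = False       # the name 'parallax' stays undefined

    def make_options():
        # Without an explicit dependency on python-parallax (or a guard like
        # this one), reaching parallax.Options() raises the NameError above.
        if not has_parallax:
            raise RuntimeError("python-parallax is required for 'crm cluster health'")
        return parallax.Options()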

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1687095/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1687095] Re: crm cluster health: NameError: global name 'parallax' is not defined

2019-09-03 Thread Rafael David Tinoco
** Also affects: crmsh (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: crmsh (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: crmsh (Ubuntu Xenial)
   Status: New => Triaged

** Changed in: crmsh (Ubuntu)
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1687095

Title:
  crm cluster health: NameError: global name 'parallax' is not defined

Status in crmsh package in Ubuntu:
  Fix Released
Status in crmsh source package in Xenial:
  Triaged
Status in crmsh package in Debian:
  Fix Released

Bug description:
  [Environment]

  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.2 LTS
  Release:16.04
  Codename:   xenial

  [Description]

  Running 'crm cluster health' raises the following exception:

  
  root@juju-niedbalski-xenial-machine-15:/home/ubuntu# crm cluster health
  INFO: Check the health of the cluster
  INFO: Nodes: juju-niedbalski-xenial-machine-13, 
juju-niedbalski-xenial-machine-14, juju-niedbalski-xenial-machine-15
  Traceback (most recent call last):
File "/usr/sbin/crm", line 54, in <module>
  rc = main.run()
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 351, in run
  return main_input_loop(context, user_args)
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 240, in 
main_input_loop
  rc = handle_noninteractive_use(context, user_args)
File "/usr/lib/python2.7/dist-packages/crmsh/main.py", line 196, in 
handle_noninteractive_use
  if context.run(' '.join(l)):
File "/usr/lib/python2.7/dist-packages/crmsh/ui_context.py", line 75, in run
  rv = self.execute_command() is not False
File "/usr/lib/python2.7/dist-packages/crmsh/ui_context.py", line 245, in 
execute_command
  rv = self.command_info.function(*arglist)
File "/usr/lib/python2.7/dist-packages/crmsh/ui_cluster.py", line 158, in 
do_health
  return scripts.run(script, script_args(params), script_printer())
File "/usr/lib/python2.7/dist-packages/crmsh/scripts.py", line 2045, in run
  opts = _make_options(params)
File "/usr/lib/python2.7/dist-packages/crmsh/scripts.py", line 383, in 
_make_options
  opts = parallax.Options()
  NameError: global name 'parallax' is not defined

  [Solution]

  - Depends on python-parallax package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1687095/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1836929] Re: chrony has migration regressions from autopkgtests (disco/eoan)

2019-08-01 Thread Rafael David Tinoco
** Changed in: chrony (Ubuntu Disco)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1836929

Title:
  chrony has migration regressions from autopkgtests (disco/eoan)

Status in chrony package in Ubuntu:
  Fix Released
Status in chrony source package in Xenial:
  Won't Fix
Status in chrony source package in Bionic:
  Won't Fix
Status in chrony source package in Cosmic:
  Won't Fix
Status in chrony source package in Disco:
  Fix Released
Status in chrony source package in Eoan:
  Fix Released

Bug description:
  [Impact]

   * Recent changes have caused the chrony autopkgtests to fail.
     In this case the upstream of chrony and the clk tests changed, and these
     need to be brought back under control to match what works reliably for
     Disco.

   * Later versions have this fixed; backport the changes to fix it in Disco
     as well.

  [Test Case]

   * Let the autopkgtests run (which is part of the proposed migration
     anyway)
     Sniffs already show them completing:
     https://bileto.ubuntu.com/excuses/3759/disco.html

  [Regression Potential]

   * There is no functional change; only the self-tests as well as the
     autopkgtest execution are changed.
     The one source for a regression could be that the rebuild picks
     something up that triggers a behavior change. But the PPA builds have
     not shown anything (at least not anything obvious).

  [Other Info]

   * This is one of the cases where the actual package as used by the user
     has no bug. I'm unsure how to proceed. Do we want to push it just to
     disco-proposed but keep it there (to avoid downloads for "nothing")?
     I know rbasak wanted to discuss that in the SRU team for the even worse
     https://bugs.launchpad.net/cloud-archive/+bug/1829823/comments/14
     To some extent this comes under the same banner.
   * If this is denied from SRU for this reason I'd ask to add a force-
     badtest as a replacement to unblock proposed migration.

  ---

  Checking the last eoan merge, I realized that some tests were failing for
  chrony:

  
https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/369588/comments/967625

  But eoan ran autopkgtests okay when the trigger was
  chrony/3.5-2ubuntu2 (this last merge):

  http://autopkgtest.ubuntu.com/packages/chrony/eoan/amd64

  Despite having failed for the previous 12 times (eoan).

  Now, for the first time, we have the same failure for disco:

  http://autopkgtest.ubuntu.com/packages/chrony/disco/amd64

  http://bit.ly/2LpMx4G

  """
  make: Leaving directory 
'/tmp/autopkgtest.pBHSAl/build.WCD/src/test/simulation/clknetsim'
  ...
  110-chronyc    PASS
  111-knownclient    FAIL
  112-port   FAIL
  121-orphan PASS
  ...
  SUMMARY:
    TOTAL  50
    PASSED 48
    FAILED 2(111-knownclient 112-port) (255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 265)
  """

  And I'm able to reproduce locally:

  """
  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./111-knownclient
  Testing reply to client configured as server:
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
    server_conf=server 192.168.123.2 noselect acquisitionport 123
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
  FAIL

  AND

  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./112-port
  Testing port and acquisitionport directives:
    network with 1*1 servers and 1 clients:
  non-default settings:
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking mean/min incoming/outgoing packet interval:
    node 1: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
    node 2: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
  checking clock sync time, max/rms time/freq error:
    node 2: 132 9.29e-05 1.21e-06 5.77e-05 1.01e-07 OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2:

[Group.of.nepali.translators] [Bug 1836929] Re: chrony has migration regressions from autopkgtests (disco/eoan)

2019-07-18 Thread Rafael David Tinoco
> TODO: marking as a duplicate of: LP: #1736882
TODO: marking as a duplicate of: LP: #1836882

** This bug has been marked a duplicate of bug 1836882
   autopkgtest due to other packages changing

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1836929

Title:
  chrony has migration regressions from autopkgtests (disco/eoan)

Status in chrony package in Ubuntu:
  Fix Released
Status in chrony source package in Xenial:
  Won't Fix
Status in chrony source package in Bionic:
  Won't Fix
Status in chrony source package in Cosmic:
  Won't Fix
Status in chrony source package in Disco:
  Triaged
Status in chrony source package in Eoan:
  Fix Released

Bug description:
  [Impact]

   * Recent changes have caused the chrony autopkgtests to fail.
     In this case the upstream of chrony and the clk tests changed, and these
     need to be brought back under control to match what works reliably for
     Disco.

   * Later versions have this fixed; backport the changes to fix it in Disco
     as well.

  [Test Case]

   * Let the autopkgtests run (which is part of the proposed migration
     anyway)
     Sniffs already show them completing:
     https://bileto.ubuntu.com/excuses/3759/disco.html

  [Regression Potential]

   * There is no functional change; only the self-tests as well as the
     autopkgtest execution are changed.
     The one source for a regression could be that the rebuild picks
     something up that triggers a behavior change. But the PPA builds have
     not shown anything (at least not anything obvious).

  [Other Info]

   * This is one of the cases where the actual package as used by the user
     has no bug. I'm unsure how to proceed. Do we want to push it just to
     disco-proposed but keep it there (to avoid downloads for "nothing")?
     I know rbasak wanted to discuss that in the SRU team for the even worse
     https://bugs.launchpad.net/cloud-archive/+bug/1829823/comments/14
     To some extent this comes under the same banner.
   * If this is denied from SRU for this reason I'd ask to add a force-
     badtest as a replacement to unblock proposed migration.

  ---

  Checking the last eoan merge, I realized that some tests were failing for
  chrony:

  
https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/369588/comments/967625

  But eoan ran autopkgtests okay when the trigger was
  chrony/3.5-2ubuntu2 (this last merge):

  http://autopkgtest.ubuntu.com/packages/chrony/eoan/amd64

  Despite having failed for the previous 12 times (eoan).

  Now, for the first time, we have the same failure for disco:

  http://autopkgtest.ubuntu.com/packages/chrony/disco/amd64

  http://bit.ly/2LpMx4G

  """
  make: Leaving directory 
'/tmp/autopkgtest.pBHSAl/build.WCD/src/test/simulation/clknetsim'
  ...
  110-chronyc    PASS
  111-knownclient    FAIL
  112-port   FAIL
  121-orphan PASS
  ...
  SUMMARY:
    TOTAL  50
    PASSED 48
    FAILED 2(111-knownclient 112-port) (255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 265)
  """

  And I'm able to reproduce locally:

  """
  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./111-knownclient
  Testing reply to client configured as server:
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
    server_conf=server 192.168.123.2 noselect acquisitionport 123
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
  FAIL

  AND

  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./112-port
  Testing port and acquisitionport directives:
    network with 1*1 servers and 1 clients:
  non-default settings:
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking mean/min incoming/outgoing packet interval:
    node 1: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
    node 2: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
  checking clock sync time, max/rms time/freq error:
    node 2: 132 9.29e-05 1.21e-06 5.77e-05 1.01e-07 OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
  starting node 1: OK
  s

[Group.of.nepali.translators] [Bug 1836929] Re: chrony has migration regressions from autopkgtests (disco/eoan)

2019-07-17 Thread Rafael David Tinoco
I have checked all other versions and they have passed the latest
autopkgtest runs, because the change to clknetsim that caused the issue was
made on:

Date: Tue Jun 11 17:05:22 2019 +0200

And the Debian fix was made on:

Date: Tue, 18 Jun 2019 15:41:50 +0200

But the tests never ran after that. I'm requesting all chrony tests:

Xenial/Bionic/Cosmic/Disco

to be marked as known issues in ubuntu autopkgtests.

** Also affects: chrony (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: chrony (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: chrony (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: chrony (Ubuntu Cosmic)
   Status: New => Won't Fix

** Changed in: chrony (Ubuntu Bionic)
   Status: New => Won't Fix

** Changed in: chrony (Ubuntu Xenial)
   Status: New => Won't Fix

** Changed in: chrony (Ubuntu Cosmic)
   Importance: Undecided => Medium

** Changed in: chrony (Ubuntu Bionic)
   Importance: Undecided => Medium

** Changed in: chrony (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: chrony (Ubuntu Disco)
 Assignee: Rafael David Tinoco (rafaeldtinoco) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1836929

Title:
  chrony has migration regressions from autopkgtests (disco/eoan)

Status in chrony package in Ubuntu:
  Fix Released
Status in chrony source package in Xenial:
  Won't Fix
Status in chrony source package in Bionic:
  Won't Fix
Status in chrony source package in Cosmic:
  Won't Fix
Status in chrony source package in Disco:
  Won't Fix
Status in chrony source package in Eoan:
  Fix Released

Bug description:
  Checking the last eoan merge, I realized that some tests were failing for
  chrony:

  
https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/369588/comments/967625

  But eoan ran autopkgtests okay when the trigger was
  chrony/3.5-2ubuntu2 (this last merge):

  http://autopkgtest.ubuntu.com/packages/chrony/eoan/amd64

  Despite having failed for the previous 12 times (eoan).

  Now, for the first time, we have the same failure for disco:

  http://autopkgtest.ubuntu.com/packages/chrony/disco/amd64

  http://bit.ly/2LpMx4G

  """
  make: Leaving directory 
'/tmp/autopkgtest.pBHSAl/build.WCD/src/test/simulation/clknetsim'
  ...
  110-chronyc    PASS
  111-knownclient    FAIL
  112-port   FAIL
  121-orphan PASS
  ...
  SUMMARY:
    TOTAL  50
    PASSED 48
    FAILED 2(111-knownclient 112-port) (255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 255 256 257 258 259 260 261 262 263 
264 265 266 267 268 269 270 271 272 273 274 265)
  """

  And I'm able to reproduce locally:

  """
  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./111-knownclient
  Testing reply to client configured as server:
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
    server_conf=server 192.168.123.2 noselect acquisitionport 123
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
  FAIL

  AND

  (c)inaddy@iproute2verification:~/work/sources/ubuntu/chrony/test/simulation$ 
./112-port
  Testing port and acquisitionport directives:
    network with 1*1 servers and 1 clients:
  non-default settings:
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking source selection:
    node 2: OK
  checking mean/min incoming/outgoing packet interval:
    node 1: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
    node 2: 2.74e+02 2.74e+02 6.40e+01 6.40e+01 OK
  checking clock sync time, max/rms time/freq error:
    node 2: 132 9.29e-05 1.21e-06 5.77e-05 1.01e-07 OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
    network with 1*1 servers and 1 clients:
  non-default settings:
    client_conf=acquisitionport 123
  starting node 1: OK
  starting node 2: OK
  running simulation: OK
  checking chronyd exit:
    node 1: OK
    node 2: OK
  checking port numbers in packet log:
    node 1: BAD
    node 2: BAD
  FAIL
  """

  When doing verification for an iproute2 bug (LP: #1831775) we fac

[Group.of.nepali.translators] [Bug 1828288] Re: QEMU might fail to start on AMD CPUs when 'host-passthrough' is used

2019-05-17 Thread Rafael David Tinoco
** Changed in: qemu (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: qemu (Ubuntu)
   Status: In Progress => Fix Released

** Changed in: qemu (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (rafaeldtinoco)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1828288

Title:
  QEMU might fail to start on AMD CPUs when 'host-passthrough' is used

Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  In Progress

Bug description:
  [Impact]

   * QEMU does not work on some AMD hardware when using host-passthrough
  as cpu-mode (usually to allow nested KVM to work).

  [Test Case]

   * to use Xenial qemu (1:2.5+dfsg-5ubuntu10.36 or 1:2.5+dfsg-5ubuntu10.37)
   * to use the following XML file: https://paste.ubuntu.com/p/BSyFY7ksR5/
   * to have AMD FX(tm)-8350 Eight-Core Processor CPU or similar

  [Regression Potential]

   * initial qemu code could be affected, preventing other guests, on other
     architectures, from being started
   * the suggested patch is simple, being a positional change only
   * the patch is upstream based, identifies the issue, and is reported to be
     a fix for the described issue

  [Other Info]

   * INITIAL CASE DESCRIPTION:

  When using the latest QEMU (-proposed) in Xenial you might encounter the
  following problem when trying to initialize your guests:

  

  (c)inaddy@qemubug:~$ apt-cache policy qemu-system-x86
  qemu-system-x86:
    Installed: 1:2.5+dfsg-5ubuntu10.37
    Candidate: 1:2.5+dfsg-5ubuntu10.37
    Version table:
   *** 1:2.5+dfsg-5ubuntu10.37 500
  500 http://ubuntu.c3sl.ufpr.br//ubuntu xenial-proposed/main amd64 
Packages
  100 /var/lib/dpkg/status
   1:2.5+dfsg-5ubuntu10.36 500
  500 http://ubuntu.c3sl.ufpr.br//ubuntu xenial-updates/main amd64 
Packages
   1:2.5+dfsg-5ubuntu10 500
  500 http://ubuntu.c3sl.ufpr.br/ubuntu xenial/main amd64 Packages

  

  (c)inaddy@qemubug:~$ virsh list --all
   Id    Name       State
  ----------------------------
   -     kdebian    shut off
   -     kguest     shut off

  (c)inaddy@qemubug:~$ virsh start --console kguest
  error: Failed to start domain kguest
  error: internal error: process exited while connecting to monitor: warning: 
host doesn't support requested feature: CPUID.8001H:EDX [bit 0]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 1]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 2]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 3]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 4]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 5]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 6]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 7]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 8]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 9]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 12]
  warning: host doesn't support requested feature: CPUID.8001H:EDX [bit 13]
  warning: host doesn't support requested feature: CPU

  

  This happens because x86_cpu_get_migratable_flags() does not support
  CPUID_EXT2_AMD_ALIASES. After cherry-picking upstream patch
  9997cf7bdac056aeed246613639675c5a9f8fdc2, that moves
  CPUID_EXT2_AMD_ALIASES code to after x86_cpu_filter_features(), the
  problem is fixed. Other QEMU versions are properly fixed and don't
  face this issue.

  Cherry-picking the commit and rebuilding the package makes it work:
  

  (c)inaddy@qemubug:~$ virsh start --console kguest
  Domain kguest started
  Connected to domain kguest
  Escape character is ^]
  [0.00] Linux version 4.19.0-4-amd64 (debian-ker...@lists.debian.org) 
(gcc version 8.3.0 (Debian 8.3.0-2)) #1
  SMP Debian 4.19.28-2 (2019-03-15)
  [0.00] Command line: root=/dev/vda noresume console=tty0 
console=ttyS0,38400n8 apparmor=0 net.ifnames=0 crashkernel=256M
  [0.00] random: get_random_u32 called from bsp_init_amd+0x20b/0x2b0 
with crng_init=0
  [0.00] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point 
registers'
  [0.00] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
  [0.00] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
  [0.00] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
  [0.00] x86/fpu: Enabled xstate features 0x7, context size is 832 
bytes, using 'standard' format

[Group.of.nepali.translators] [Bug 1706132] Re: xfs slab objects (memory) leak when xfs shutdown is called

2018-05-18 Thread Rafael David Tinoco
** Changed in: linux (Ubuntu)
   Status: In Progress => Fix Released

** Changed in: linux (Ubuntu)
 Assignee: Rafael David Tinoco (inaddy) => (unassigned)

** Changed in: linux (Ubuntu Xenial)
 Assignee: Rafael David Tinoco (inaddy) => (unassigned)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1706132

Title:
  xfs slab objects (memory) leak when xfs shutdown is called

Status in linux package in Ubuntu:
  Fix Released
Status in linux source package in Xenial:
  Fix Released

Bug description:
  [Impact]

   * xfs kernel memory leak in case of xfs shutdown due to i/o errors
   * if xfs is on iscsi, an iscsi disconnection and module unload will cause a memory leak

  [Test Case]

   * configure tgtd with 1 lun and make it available through tcp/ip
   * configure open-iscsi to map this lun
   * make sure node.session.timeo.replacement_timeout = 0 in iscsid.conf
   * mount an xfs volume using the lun from the tgtd host, run bonnie -d /xfsdir
   * on the tgtd server, drop iscsi packets and watch the client hit i/o errors
   * after some time (depending on the timeout) xfs will call shutdown
   * make sure the i/o errors led to an xfs shutdown (comment #3)
   * after the shutdown, try to remove the xfs module and it will leak

  [Regression Potential]

   * based on upstream fix
   * tested in the same environment
   * potential damage to xfs

  [Other Info]

  Original Description:

     This scenario is testing [iscsi <-> scsi <-> disk <-> xfs]

  [  551.125604] sd 2:0:0:1: rejecting I/O to offline device
  [  551.125615] sd 2:0:0:1: rejecting I/O to offline device
  [  551.125627] sd 2:0:0:1: rejecting I/O to offline device
  [  551.125639] sd 2:0:0:1: rejecting I/O to offline device

  [  551.135216] XFS (sda1): metadata I/O error: block 0xeffe01 
("xfs_trans_read_buf_map") error 5 numblks 1
  [  551.135274] XFS (sda1): page discard on page ea0002a89cc0, inode 0x83, 
offset 6442385408.

  # when XFS shuts down because of an error (or offline disk, example):

  [  551.850498] XFS (sda1): xfs_do_force_shutdown(0x2) called from line 1197 
of file /build/linux-lts-xenial-roXrYH/linux-lts-xenial-4.4.0/fs/xfs/xfs_log.c. 
 Return address = 0xc0300388
  [  551.850568] XFS (sda1): Log I/O Error Detected.  Shutting down filesystem

  [  551.850618] XFS (sda1): xfs_log_force: error -5 returned.

  [  551.850630] XFS (sda1): Failing async write on buffer block 0x77ff08. 
Retrying async write.
  [  551.850634] XFS (sda1): Failing async write on buffer block 0x77ff10. 
Retrying async write.
  [  551.850638] XFS (sda1): Failing async write on buffer block 0x77ff01. 
Retrying async write.
  [  551.853171] XFS (sda1): Please umount the filesystem and rectify the 
problem(s)

  [  551.874131] XFS (sda1): metadata I/O error: block 0x1dffc49 
("xlog_iodone") error 5 numblks 64
  [  551.877993] XFS (sda1): xfs_do_force_shutdown(0x2) called from line 1197 
of file /build/linux-lts-xenial-roXrYH/linux-lts-xenial-4.4.0/fs/xfs/xfs_log.c. 
 Return address = 0xc0300388

  [  551.899036] XFS (sda1): xfs_log_force: error -5 returned.
  [  569.323074] XFS (sda1): xfs_log_force: error -5 returned.
  [  599.403085] XFS (sda1): xfs_log_force: error -5 returned.
  [  629.483111] XFS (sda1): xfs_log_force: error -5 returned.
  [  659.563115] XFS (sda1): xfs_log_force: error -5 returned.
  [  689.643014] XFS (sda1): xfs_log_force: error -5 returned.

  # when I execute:

  # sudo umount /dev/sda1:

  [81634.923043] XFS (sda1): xfs_log_force: error -5 returned.
  [81640.739097] XFS (sda1): xfs_log_force: error -5 returned.
  [81640.739137] XFS (sda1): Unmounting Filesystem
  [81640.739463] XFS (sda1): xfs_log_force: error -5 returned.
  [81640.739508] XFS (sda1): xfs_log_force: error -5 returned.
  [81640.742741] sd 2:0:0:1: rejecting I/O to offline device
  [81640.745576] blk_update_request: 25 callbacks suppressed
  [81640.745601] blk_update_request: I/O error, dev sda, sector 0

  # i was able to umount and then to remove iscsi disk.

  # but if i try to unload the xfs module:

  inaddy@(trustyiscsicli):~$ sudo rmmod xfs
  [82211.059301] 
=
  [82211.063764] BUG xfs_log_ticket (Tainted: G   OE  ): Objects 
remaining in xfs_log_ticket on kmem_cache_close()
  [82211.067450] 
-
  [82211.067450]
  [82211.070580] INFO: Slab 0xea0002eb7640 objects=22 used=1 
fp=0x8800badd9f18 flags=0xc00080
  [82211.074430] INFO: Object 0x8800badd9228 @offset=552
  [82211.076133] kmem_cache_destroy xfs_log_ticket: Slab cache still has objects

  AND

  [82211.059301] 
=
  [82211.063764] BUG xfs_log_ticket (Ta

[Group.of.nepali.translators] [Bug 1569925] Re: Shutdown hang on 16.04 with iscsi targets

2018-01-19 Thread Rafael David Tinoco
** Changed in: open-iscsi (Ubuntu Trusty)
   Status: New => Opinion

** Changed in: linux (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: linux (Ubuntu Trusty)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: linux (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: open-iscsi (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: open-iscsi (Ubuntu Trusty)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** No longer affects: open-iscsi (Ubuntu Zesty)

** No longer affects: linux (Ubuntu Zesty)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1569925

Title:
  Shutdown hang on 16.04 with iscsi targets

Status in linux package in Ubuntu:
  In Progress
Status in open-iscsi package in Ubuntu:
  Opinion
Status in linux source package in Trusty:
  In Progress
Status in open-iscsi source package in Trusty:
  Opinion
Status in linux source package in Xenial:
  In Progress
Status in open-iscsi source package in Xenial:
  Opinion
Status in linux source package in Artful:
  In Progress
Status in open-iscsi source package in Artful:
  Opinion
Status in linux source package in Bionic:
  In Progress
Status in open-iscsi source package in Bionic:
  Opinion

Bug description:
  [Impact]

   * open-iscsi users might face hangs during OS shutdown.
   * hangs can be caused by manual iscsi configuration/setup.
   * hangs can also be caused by bad systemd unit ordering.
   * if transport layer interface vanishes before lun is
 disconnected, then the hang will happen.
   * check comment #89 for the fix decision.
   
  [Test Case]

   * a simple way of reproducing the kernel hang is to disable
     the open-iscsi logouts. This simulates a situation when
     a service has shut down the network interface, used by
     the transport layer, before the proper iscsi logout was done.

 $ log into all iscsi luns

 $ systemctl edit --full open-iscsi.service
 ...
 #ExecStop=/lib/open-iscsi/logout-all.sh
 ...

 $ sudo reboot # this will make server to hang forever
   # on shutdown

  [Regression Potential]

   * the regression potential is low because the change acts on the iscsi
     transport layer code ONLY when the server is in shutdown
     state.

   * any error in logic would only appear during shutdown and
 would not cause any harm to data.

  [Other Info]
   
   * ORIGINAL BUG DESCRIPTION

  I have 4 servers running the latest 16.04 updates from the development
  branch (as of right now).

  Each server is connected to NetApp storage using iscsi software
  initiator.  There are a total of 56 volumes spread across two NetApp
  arrays.  Each volume has 4 paths available to it which are being
  managed by device mapper.

  While logged into the iscsi sessions all I have to do is reboot the
  server and I get a hang.

  I see a message that says:

    "Reached target Shutdown"

  followed by

    "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"

  and then I see 8 lines that say:

    "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    NOTE: the actual values of the *'s differ for each line above.

  This seems like a bug somewhere but I am unaware of any additional
  logging that I could turn on to pinpoint the problem.

  Note I also have similar setups that are not doing iscsi and they
  don't have this problem.

  Here is a screenshot of what I see on the shell when I try to reboot:

  (https://launchpadlibrarian.net/291303059/Screenshot.jpg)

  This is being tracked in NetApp bug tracker CQ number 860251.

  If I log out of all iscsi sessions before rebooting then I do not
  experience the hang:

  iscsiadm -m node -U all

  We are wondering if this could be som

[Group.of.nepali.translators] [Bug 1569925] Re: Shutdown hang on 16.04 with iscsi targets

2018-01-19 Thread Rafael David Tinoco
Changed open-iscsi to Opinion since I chose to fix the kernel instead
of fixing userland. No matter what you do in userland, the kernel had
room to freeze and hang in different scenarios depending on how
userland disconnected the transport layer. I kept the "linux" source
package as being affected and will SRU it through the Ubuntu Kernel Team.

** Changed in: open-iscsi (Ubuntu Xenial)
   Status: In Progress => Opinion

** Changed in: open-iscsi (Ubuntu)
   Status: In Progress => Opinion

** Changed in: open-iscsi (Ubuntu Zesty)
   Status: In Progress => Opinion

** Changed in: open-iscsi (Ubuntu Artful)
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1569925

Title:
  Shutdown hang on 16.04 with iscsi targets

Status in linux package in Ubuntu:
  In Progress
Status in open-iscsi package in Ubuntu:
  Opinion
Status in linux source package in Xenial:
  In Progress
Status in open-iscsi source package in Xenial:
  Opinion
Status in linux source package in Zesty:
  In Progress
Status in open-iscsi source package in Zesty:
  Opinion
Status in linux source package in Artful:
  In Progress
Status in open-iscsi source package in Artful:
  Opinion

Bug description:
  I have 4 servers running the latest 16.04 updates from the development
  branch (as of right now).

  Each server is connected to NetApp storage using iscsi software
  initiator.  There are a total of 56 volumes spread across two NetApp
  arrays.  Each volume has 4 paths available to it which are being
  managed by device mapper.

  While logged into the iscsi sessions all I have to do is reboot the
  server and I get a hang.

  I see a message that says:

    "Reached target Shutdown"

  followed by

    "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"

  and then I see 8 lines that say:

    "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    NOTE: the actual values of the *'s differ for each line above.

  This seems like a bug somewhere but I am unaware of any additional
  logging that I could turn on to pinpoint the problem.

  Note I also have similar setups that are not doing iscsi and they
  don't have this problem.

  Here is a screenshot of what I see on the shell when I try to reboot:

  (https://launchpadlibrarian.net/291303059/Screenshot.jpg)

  This is being tracked in NetApp bug tracker CQ number 860251.

  If I log out of all iscsi sessions before rebooting then I do not
  experience the hang:

  iscsiadm -m node -U all

  We are wondering if this could be some kind of shutdown ordering
  problem.  Like the network devices have already disappeared and then
  iscsi tries to perform some operation (hence the ping timeouts).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1569925/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1743637] Re: QEMU vhost-user shutdown suffers from use after free (missing clean shutdown)

2018-01-16 Thread Rafael David Tinoco
** Changed in: qemu (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: qemu (Ubuntu)
   Status: In Progress => Fix Released

** Changed in: qemu (Ubuntu)
 Assignee: Rafael David Tinoco (inaddy) => (unassigned)

** Changed in: qemu (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: qemu (Ubuntu Xenial)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1743637

Title:
  QEMU vhost-user shutdown suffers from use after free (missing clean
  shutdown)

Status in qemu package in Ubuntu:
  Fix Released
Status in qemu source package in Xenial:
  In Progress

Bug description:
  # BUG Description after dump analysis

  - The net_cleanup logic calls vhost_net_stop.
  - This last one iterates over all vhost networks to stop them one by one.
  - The idea behind this is to cleanly stop the virtqueue, releasing resources.
  - In order to stop the virtqueue, vhost has to get the vring base address
    (by sending a VHOST_USER_GET_VRING_BASE message).
  - The char device would read the base address from the socket.
  - If it reads nothing, the qemu tcp channel driver would disconnect the
    socket.
  - When the socket is disconnected, vhost_user stops all the queues bound to
    that vhost_user socket.

  From the dump:

  By disconnecting the charnet2 device we reach the error. Since the char
  device has already been disconnected, vhost_user_stop tries to
  stop all queues but accidentally treats all of them the same (and
  charnet4 is a TAP device, not a vhost-user one).

   Logic Error:

  Here is the charnet2 data at the time of the error:

  Name : filename (from CharDriverState) 
  Details:0x556a934b0a90 "disconnected:unix:/run/openvswitch/vhostuser-vcic" 
  Default:0x556a934b0a90 "disconnected:unix:/run/openvswitch/vhostuser-vcic" 
  Decimal:93916226062992 
  Hex:0x556a934b0a90 
  Binary:101010101101010100100110100101110101001 
  Octal:02526522322605220 

  When it realizes the connection is gone it creates an event:

  qemu_chr_be_event(chr, CHR_EVENT_CLOSED);

  Which will call:

  net_vhost_user_event

  This last function finds all NetClientState using a pointer called
  "name".

  The event originated from the device charnet2 but the event callback is
  running using charnet4, which explains why the bad decision (assert)
  was made (trying to assert whether a TAP device is a VHOST_USER one).

   Possible Fix

  There is already a commit upstream that might address this:

  commit c1bf3531aecf4a0ba25bb150dd5fe21edf406c88 
  Author: Marc-André Lureau  2016-02-23 18:10:49 
  Committer: Michael S. Tsirkin  2016-03-11 14:59:12 
  Branches: master, origin/HEAD, origin/master, origin/stable-2.10, 
origin/stable-2.6, origin/stable-2.7, origin/stable-2.8, origin/stable-2.9 

  vhost-user: fix use after free

  "name" is freed after visiting options, instead use the first NetClientState 
  name. Adds a few assert() for clarifying and checking some impossible states.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1743637/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1569925] Re: Shutdown hang on 16.04 with iscsi targets

2017-09-01 Thread Rafael David Tinoco
Unrelated to systemd. Related to open-iscsi systemd unit files and
kernel iscsi transport logout during shutdown.

** Changed in: systemd (Ubuntu Artful)
   Importance: High => Medium

** Changed in: systemd (Ubuntu Artful)
   Status: Confirmed => Triaged

** Changed in: systemd (Ubuntu Artful)
 Assignee: Nish Aravamudan (nacc) => (unassigned)

** No longer affects: systemd (Ubuntu Xenial)

** No longer affects: systemd (Ubuntu Zesty)

** No longer affects: systemd (Ubuntu Artful)

** Changed in: linux (Ubuntu Xenial)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: linux (Ubuntu Zesty)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: linux (Ubuntu Artful)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1569925

Title:
  Shutdown hang on 16.04 with iscsi targets

Status in linux package in Ubuntu:
  In Progress
Status in open-iscsi package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress
Status in open-iscsi source package in Xenial:
  In Progress
Status in linux source package in Zesty:
  In Progress
Status in open-iscsi source package in Zesty:
  In Progress
Status in linux source package in Artful:
  In Progress
Status in open-iscsi source package in Artful:
  In Progress

Bug description:
  I have 4 servers running the latest 16.04 updates from the development
  branch (as of right now).

  Each server is connected to NetApp storage using iscsi software
  initiator.  There are a total of 56 volumes spread across two NetApp
  arrays.  Each volume has 4 paths available to it which are being
  managed by device mapper.

  While logged into the iscsi sessions all I have to do is reboot the
  server and I get a hang.

  I see a message that says:

    "Reached target Shutdown"

  followed by

    "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"

  and then I see 8 lines that say:

    "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    NOTE: the actual values of the *'s differ for each line above.

  This seems like a bug somewhere but I am unaware of any additional
  logging that I could turn on to pinpoint the problem.

  Note I also have similar setups that are not doing iscsi and they
  don't have this problem.

  Here is a screenshot of what I see on the shell when I try to reboot:

  (https://launchpadlibrarian.net/291303059/Screenshot.jpg)

  This is being tracked in NetApp bug tracker CQ number 860251.

  If I log out of all iscsi sessions before rebooting then I do not
  experience the hang:

  iscsiadm -m node -U all

  We are wondering if this could be some kind of shutdown ordering
  problem.  Like the network devices have already disappeared and then
  iscsi tries to perform some operation (hence the ping timeouts).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1569925/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1569925] Re: Shutdown hang on 16.04 with iscsi targets

2017-09-01 Thread Rafael David Tinoco
Unrelated to systemd. Related to open-iscsi systemd unit files and
kernel iscsi transport logout during shutdown.

** Changed in: open-iscsi (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: open-iscsi (Ubuntu Xenial)
   Status: New => In Progress

** Changed in: open-iscsi (Ubuntu Zesty)
   Importance: Undecided => Medium

** Changed in: open-iscsi (Ubuntu Zesty)
   Status: New => In Progress

** Changed in: open-iscsi (Ubuntu Artful)
   Importance: Undecided => Medium

** Changed in: open-iscsi (Ubuntu Artful)
   Status: New => In Progress

** Changed in: linux (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: linux (Ubuntu Xenial)
   Status: Incomplete => In Progress

** Changed in: linux (Ubuntu Zesty)
   Importance: Undecided => Medium

** Changed in: linux (Ubuntu Zesty)
   Status: Incomplete => In Progress

** Changed in: linux (Ubuntu Artful)
   Importance: Undecided => Medium

** Changed in: linux (Ubuntu Artful)
   Status: Incomplete => In Progress

** No longer affects: systemd (Ubuntu)

** Changed in: systemd (Ubuntu Xenial)
   Status: In Progress => Triaged

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1569925

Title:
  Shutdown hang on 16.04 with iscsi targets

Status in linux package in Ubuntu:
  In Progress
Status in open-iscsi package in Ubuntu:
  In Progress
Status in linux source package in Xenial:
  In Progress
Status in open-iscsi source package in Xenial:
  In Progress
Status in linux source package in Zesty:
  In Progress
Status in open-iscsi source package in Zesty:
  In Progress
Status in linux source package in Artful:
  In Progress
Status in open-iscsi source package in Artful:
  In Progress

Bug description:
  I have 4 servers running the latest 16.04 updates from the development
  branch (as of right now).

  Each server is connected to NetApp storage using iscsi software
  initiator.  There are a total of 56 volumes spread across two NetApp
  arrays.  Each volume has 4 paths available to it which are being
  managed by device mapper.

  While logged into the iscsi sessions all I have to do is reboot the
  server and I get a hang.

  I see a message that says:

    "Reached target Shutdown"

  followed by

    "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"

  and then I see 8 lines that say:

    "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    NOTE: the actual values of the *'s differ for each line above.

  This seems like a bug somewhere but I am unaware of any additional
  logging that I could turn on to pinpoint the problem.

  Note I also have similar setups that are not doing iscsi and they
  don't have this problem.

  Here is a screenshot of what I see on the shell when I try to reboot:

  (https://launchpadlibrarian.net/291303059/Screenshot.jpg)

  This is being tracked in NetApp bug tracker CQ number 860251.

  If I log out of all iscsi sessions before rebooting then I do not
  experience the hang:

  iscsiadm -m node -U all

  We are wondering if this could be some kind of shutdown ordering
  problem: for example, the network devices have already disappeared and
  then iscsi tries to perform some operation (hence the ping timeouts).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1569925/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1569925] Re: Shutdown hang on 16.04 with iscsi targets

2017-09-01 Thread Rafael David Tinoco
** Also affects: open-iscsi (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: linux (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1569925

Title:
  Shutdown hang on 16.04 with iscsi targets

Status in linux package in Ubuntu:
  New
Status in open-iscsi package in Ubuntu:
  New
Status in systemd package in Ubuntu:
  Confirmed
Status in linux source package in Xenial:
  New
Status in open-iscsi source package in Xenial:
  New
Status in systemd source package in Xenial:
  In Progress
Status in linux source package in Zesty:
  New
Status in open-iscsi source package in Zesty:
  New
Status in systemd source package in Zesty:
  New
Status in linux source package in Artful:
  New
Status in open-iscsi source package in Artful:
  New
Status in systemd source package in Artful:
  Confirmed

Bug description:
  I have 4 servers running the latest 16.04 updates from the development
  branch (as of right now).

  Each server is connected to NetApp storage using iscsi software
  initiator.  There are a total of 56 volumes spread across two NetApp
  arrays.  Each volume has 4 paths available to it which are being
  managed by device mapper.

  While logged into the iscsi sessions all I have to do is reboot the
  server and I get a hang.

  I see a message that says:

    "Reached target Shutdown"

  followed by

    "systemd-shutdown[1]: Failed to finalize DM devices, ignoring"

  and then I see 8 lines that say:

    "connection1:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection2:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection5:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection6:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection7:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    "connection8:0: ping timeout of 5 secs expired, recv timeout 5, last rx 
4311815***, last ping 43118164**, now 4311817***"
    NOTE: the actual values of the *'s differ for each line above.

  This seems like a bug somewhere but I am unaware of any additional
  logging that I could turn on to pinpoint the problem.

  Note I also have similar setups that are not doing iscsi and they
  don't have this problem.

  Here is a screenshot of what I see on the shell when I try to reboot:

  (https://launchpadlibrarian.net/291303059/Screenshot.jpg)

  This is being tracked in NetApp bug tracker CQ number 860251.

  If I log out of all iscsi sessions before rebooting then I do not
  experience the hang:

  iscsiadm -m node -U all

  We are wondering if this could be some kind of shutdown ordering
  problem: for example, the network devices have already disappeared and
  then iscsi tries to perform some operation (hence the ping timeouts).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1569925/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp


[Group.of.nepali.translators] [Bug 1590799] Re: nfs-kernel-server does not start because of dependency failure

2017-04-10 Thread Rafael David Tinoco
Johan, why are you changing the status of this bug? It has not been
released yet. The status is changed automatically when the package moves
from -proposed to -updates.

** Changed in: nfs-utils (Ubuntu Xenial)
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1590799

Title:
  nfs-kernel-server does not start because of dependency failure

Status in nfs-utils package in Ubuntu:
  Fix Released
Status in nfs-utils source package in Trusty:
  Invalid
Status in nfs-utils source package in Xenial:
  Fix Committed
Status in nfs-utils source package in Yakkety:
  Fix Committed
Status in nfs-utils source package in Zesty:
  Fix Released

Bug description:
  [Impact]

   * nfs-mountd does not get started because of a race condition that occurs 
when rpcbind.socket is not specified as a required service for it to start.
   * nfs-server depends on rpcbind.target instead of rpcbind.socket; the target 
should not be used (see comment #24). A minimal drop-in illustrating the 
intended dependency is sketched below.
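
   A minimal sketch of that dependency change, written here as a local drop-in 
rather than as the actual SRU patch (the drop-in path is illustrative):

     # /etc/systemd/system/nfs-server.service.d/rpcbind.conf (illustrative)
     [Unit]
     # depend on and order after the rpcbind socket instead of rpcbind.target
     Requires=rpcbind.socket
     After=rpcbind.socket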

  [Test Case]

   * Install nfs-kernel-server inside a Xenial LXC guest and restart it until 
nfs-mountd fails to start, complaining about an RPC error (a rough driver loop 
is sketched below).
   * See comment #25.
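
   A rough driver for that test, assuming an LXD host and a container named 
nfs-test (both the tooling and the name are illustrative):

     lxc launch ubuntu:16.04 nfs-test
     lxc exec nfs-test -- apt-get update
     lxc exec nfs-test -- apt-get install -y nfs-kernel-server
     # restart repeatedly; the race shows up as a failed nfs-mountd unit
     for i in $(seq 1 20); do
       lxc restart nfs-test
       sleep 15
       lxc exec nfs-test -- systemctl is-failed nfs-mountd.service && break
     done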

  [Regression Potential]

   * Cons: systemd dependencies could break for nfs-server and nfs-mountd.
   * Pros: Patches have been accepted upstream (and tested).

  [Other Info]
   
  # Original Bug Description

  Immediately after boot:

  root@feynmann:~# systemctl status nfs-kernel-server
  ● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor 
preset: enabled)
     Active: inactive (dead)

  Jun 09 14:35:47 feynmann systemd[1]: Dependency failed for NFS server and 
services.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-server.service: Job 
nfs-server.service/start failed

  root@feynmann:~# systemctl status nfs-mountd.service
  ● nfs-mountd.service - NFS Mount Daemon
     Loaded: loaded (/lib/systemd/system/nfs-mountd.service; static; vendor 
preset: enabled)
     Active: failed (Result: exit-code) since Thu 2016-06-09 14:35:47 BST; 7min 
ago
    Process: 1321 ExecStart=/usr/sbin/rpc.mountd $RPCMOUNTDARGS (code=exited, 
status=1/FAILURE)

  Jun 09 14:35:47 feynmann systemd[1]: Starting NFS Mount Daemon...
  Jun 09 14:35:47 feynmann rpc.mountd[1321]: mountd: could not create listeners
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Control process 
exited, code=exited
  Jun 09 14:35:47 feynmann systemd[1]: Failed to start NFS Mount Daemon.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Unit entered failed 
state.
  Jun 09 14:35:47 feynmann systemd[1]: nfs-mountd.service: Failed with result 
'exit-code'.

  root@feynmann:~# systemctl list-dependencies nfs-kernel-server
  nfs-kernel-server.service
  ● ├─auth-rpcgss-module.service
  ● ├─nfs-config.service
  ● ├─nfs-idmapd.service
  ● ├─nfs-mountd.service
  ● ├─proc-fs-nfsd.mount
  ● ├─rpc-svcgssd.service
  ● ├─system.slice
  ● ├─network.target
  ● └─rpcbind.target
  ●   └─rpcbind.service

  root@feynmann:~# systemctl list-dependencies nfs-mountd.service
  nfs-mountd.service
  ● ├─nfs-config.service
  ● ├─nfs-server.service
  ● ├─proc-fs-nfsd.mount
  ● └─system.slice
  root@feynmann:~#

  root@feynmann:~# lsb_release -rd
  Description:  Ubuntu 16.04 LTS
  Release:  16.04

  root@feynmann:~# apt-cache policy nfs-kernel-server
  nfs-kernel-server:
    Installed: 1:1.2.8-9ubuntu12
    Candidate: 1:1.2.8-9ubuntu12
    Version table:
   *** 1:1.2.8-9ubuntu12 500
  500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
  100 /var/lib/dpkg/status

  Additional comments:

  1. There seems to be a circular dependency between nfs-mountd and 
nfs-kernel-server (a quick check is sketched after this list).
  2. I can get it working by changing the After/Requires entries in the 
/lib/systemd/system/nfs-{mountd|server}.service files. I have managed to get 
nfs-kernel-server to start but not nfs-mountd.
  3. /usr/lib/systemd/scripts/nfs-utils_env.sh references 
/etc/sysconfig/nfs, which is the CentOS/Red Hat location of this file. Also, 
/etc/default/nfs does not exist (possibly unrelated to this bug).
  4. A file "/lib/systemd/system/-.slice" exists. This file prevents 
execution of 'ls *' or 'grep xxx *' commands in that directory. I am unsure 
whether this is intended by the systemd developers, but it is unfriendly when 
investigating this bug.
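
  A quick way to check the suspected cycle from item 1, using the unit names 
shown in the listings above:

    # if both greps match, each unit pulls the other into its dependency tree
    systemctl list-dependencies nfs-server.service | grep -i mountd
    systemctl list-dependencies nfs-mountd.service | grep -i nfs-server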

  Attempted solution:

  1. Edit /lib/systemd/system/nfs-server.service (original lines are
  commented out):

  [Unit]
  Description=NFS server and services
  DefaultDependencies=no
  Requires=network.target proc-fs-nfsd.mount rpcbind.target
  # Requires=nfs-mountd.service
  Wants=nfs-idmapd.service

  After=local-fs.target
  #After=network.target proc-fs-nfsd.mount rpcbind.target nfs-mountd.service
  After=network.target proc-fs-nfsd.mount rpcbind.target
  After=nfs-idmapd.service rpc-st

[Group.of.nepali.translators] [Bug 1317491] Re: virsh blockcommit hangs at 100%

2017-02-15 Thread Rafael David Tinoco
** Changed in: libvirt (Ubuntu Zesty)
   Importance: Medium => Undecided

** Changed in: libvirt (Ubuntu Zesty)
   Status: Confirmed => Fix Released

** Changed in: libvirt (Ubuntu Yakkety)
   Status: New => Fix Released

** Changed in: libvirt (Ubuntu Zesty)
 Assignee: Rafael David Tinoco (inaddy) => (unassigned)

** Changed in: libvirt (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: libvirt (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: libvirt (Ubuntu Trusty)
 Assignee: (unassigned) => Rafael David Tinoco (inaddy)

** Changed in: libvirt (Ubuntu Trusty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1317491

Title:
  virsh blockcommit hangs at 100%

Status in libvirt package in Ubuntu:
  Fix Released
Status in libvirt source package in Trusty:
  In Progress
Status in libvirt source package in Xenial:
  Fix Released
Status in libvirt source package in Yakkety:
  Fix Released
Status in libvirt source package in Zesty:
  Fix Released

Bug description:
  virsh blockcommit hangs at 100% and nothing happens.

  I only found the following discussion:
  
http://t358434.emulators-libvirt-user.emulatortalk.info/virsh-blockcommit-hangs-at-100-t358434.html
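
  For context, a typical sequence that exercises blockcommit, assuming a 
hypothetical domain "demo" whose disk target is "vda" and an external 
snapshot created just before the commit:

    # create an external (disk-only) snapshot so there is an overlay to commit
    virsh snapshot-create-as demo snap1 --disk-only --atomic
    # commit the active overlay back into the base image and pivot when done;
    # the reported hang leaves this job stuck at 100% instead of completing
    virsh blockcommit demo vda --active --verbose --pivot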

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1317491/+subscriptions

___
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : group.of.nepali.translators@lists.launchpad.net
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help   : https://help.launchpad.net/ListHelp