[Bug 2064859] Re: GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-06 Thread Nobuto Murata
> Do you use wpa_supplicant or iwd on that system?

I'm using wpa_supplicant (the default). And `netplan status` says the Wi-Fi
connection is up, so I'm not sure whether the "No WiFi" line is the root
cause or just a red herring.


●  3: wlp2s0 wifi UP (NetworkManager: NM-94eee488-50b3-42db-8b93-cc8d7dcad210)
      MAC Address: 04:7b:cb: (Qualcomm Technologies, Inc)
        Addresses: 10.55.0.221/16 (dhcp)
                   fe80::/64 (link)
    DNS Addresses: 80.58.61.250
           Routes: default via 10.55.0.1 from 10.55.0.221 metric 600 (dhcp)
                   10.55.0.0/16 from 10.55.0.221 metric 600 (link)
                   fe80::/64 metric 1024


** Changed in: gnome-control-center (Ubuntu)
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064859

Title:
  GNOME's automatic timezone doesn't work with Failed to query location:
  No WiFi networks found

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/2064859/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2064859] Re: GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-06 Thread Nobuto Murata
There is no feedback in the UI anywhere. The line was from journalctl:
> May 05 21:57:22 t14 geoclue[71430]: Failed to query location: No WiFi networks found
and nothing happens after that.


[Bug 2064859] [NEW] GNOME's automatic timezone doesn't work with Failed to query location: No WiFi networks found

2024-05-05 Thread Nobuto Murata
Public bug reported:

I'm aware that the underlying service is going to be retired as covered by:
https://bugs.launchpad.net/ubuntu/+source/gnome-control-center/+bug/2062178

However, the service is still active as of writing, yet somehow the GNOME
desktop environment cannot determine the timezone. It's worth noting that
gnome-maps, for example, can successfully locate where I am through geoclue.

By toggling the config off and on by:
$ gsettings set org.gnome.desktop.datetime automatic-timezone false
$ gsettings set org.gnome.desktop.datetime automatic-timezone true

I only get:
> May 05 21:57:22 t14 geoclue[71430]: Failed to query location: No WiFi networks found
and nothing happens.

By stopping the geoclue service and running it by hand with more debugging
enabled, I do get more output, including the exact location:

$ sudo systemctl stop geoclue.service

$ sudo -H -u geoclue env G_MESSAGES_DEBUG=Geoclue /usr/libexec/geoclue | tee geoclue_debug.log
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Loading config: /etc/geoclue/geoclue.conf
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Using the default locate URL: Key file does not have key “url” in group “wifi”
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Using the default submission URL: Key file does not have key “submission-url” in group “wifi”
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: GeoClue configuration:
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Agents:
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   geoclue-demo-agent
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   gnome-shell
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   io.elementary.desktop.agent-geoclue2
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   sm.puri.Phosh
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   lipstick
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Network NMEA source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Network NMEA socket: none
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: 3G source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: CDMA source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Modem GPS source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: WiFi source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: WiFi locate URL: https://location.services.mozilla.com/v1/geolocate?key=
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: WiFi submit URL: https://location.services.mozilla.com/v2/geosubmit?key=
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: WiFi submit data: disabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: WiFi submission nickname: geoclue
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Static source: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Compass: enabled
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239: Application configs:
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: lipstick
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: firefox
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: no
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: epiphany
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: no
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: sm.puri.Phosh
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: io.elementary.desktop.agent-geoclue2
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: org.gnome.Shell
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: gnome-color-panel
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   ID: gnome-datetime-panel
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Allowed: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   System: yes
(geoclue:87959): Geoclue-DEBUG: 22:00:05.239:   Users: all
(geoclue:87959): Geoclue-DEBUG: 

[Bug 2056387] Re: [T14 Gen 3 AMD] Fail to suspend/resume for the second time

2024-05-02 Thread Nobuto Murata
It's no longer reproducible at least with linux-image-6.8.0-31-generic,
closing.

** Changed in: linux (Ubuntu)
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2056387

Title:
  [T14 Gen 3 AMD] Fail to suspend/resume for the second time

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2056387/+subscriptions



[Bug 2056387] Re: [T14 Gen 3 AMD] Fail to suspend/resume for the second time

2024-04-10 Thread Nobuto Murata
** Summary changed:

- Fail to suspend/resume for the second time
+ [T14 Gen 3 AMD] Fail to suspend/resume for the second time


[Bug 2059386] Re: curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
** Attachment added: "curtin-install-cfg.yaml"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760167/+files/curtin-install-cfg.yaml

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2059386

Title:
  curtin-install.log and curtin-install-cfg.yaml are not collected

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+subscriptions



[Bug 2059386] Re: curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
It's worth noting that those files contain some MAAS token.

** Attachment added: "curtin-install.log"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/2059386/+attachment/5760166/+files/curtin-install.log
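
Since the files contain tokens, any automatic collection would need to
redact them first. As a rough illustration of the kind of substitution a
collector could apply (the key names below are hypothetical examples, not
MAAS's actual file format):

```python
import re

def scrub_tokens(text: str) -> str:
    """Mask the values of token-like keys before sharing a file.
    The key names here are illustrative; adjust to the real file contents."""
    return re.sub(
        r"(?m)^(\s*(?:token|consumer_key|token_key|token_secret)\s*:\s*)\S+",
        r"\1******",
        text,
    )

sample = "consumer_key: abc123\nhostname: node1\ntoken_secret: deadbeef\n"
print(scrub_tokens(sample))
```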


[Bug 2059386] [NEW] curtin-install.log and curtin-install-cfg.yaml are not collected

2024-03-28 Thread Nobuto Murata
Public bug reported:

Installed: 4.5.6-0ubuntu1~22.04.2

When a server is provisioned by MAAS, troubleshooting the installation
process or configuration issues requires the logs of the curtin process.

They are usually stored in /root:

# ll -h /root/curtin-install*
-r 1 root root 5.4K Mar 28 05:34 /root/curtin-install-cfg.yaml
-r 1 root root 109K Mar 28 05:36 /root/curtin-install.log

But those are not collected by sosreport.

$ sudo sos report -a --all-logs

$ grep curtin sos_logs/sos.log 
2024-03-28 05:52:14,335 INFO: [plugin:apt] collecting path '/etc/apt/sources.list.curtin.old'
2024-03-28 05:52:14,361 INFO: [plugin:apt] collecting path '/etc/apt/apt.conf.d/90curtin-aptproxy'
2024-03-28 05:52:34,778 INFO: [plugin:system] collecting path '/etc/default/grub.d/50-curtin-settings.cfg'
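
For what it's worth, the sos-side fix would presumably be a plugin that
declares those paths explicitly. The snippet below only sketches the pattern
sos plugins follow; the `Plugin` base class here is a minimal stand-in so the
example is self-contained (the real one lives in sos.report.plugins), and the
`Curtin` plugin is hypothetical:

```python
class Plugin:
    """Minimal stand-in for sos.report.plugins.Plugin, just enough to
    illustrate the pattern; it is NOT the real sos API surface."""
    def __init__(self):
        self.copy_specs = []

    def add_copy_spec(self, specs):
        # In real sos this schedules files/globs for collection in the report.
        self.copy_specs.extend(specs)

class Curtin(Plugin):
    """Hypothetical plugin collecting curtin installer artifacts."""
    short_desc = "curtin installation logs"

    def setup(self):
        self.add_copy_spec([
            "/root/curtin-install.log",
            "/root/curtin-install-cfg.yaml",
        ])

plugin = Curtin()
plugin.setup()
print(plugin.copy_specs)
```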

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-22 Thread Nobuto Murata
** Description changed:

- python-rtslib-fb needs to properly handle the new kernel module
- attribute cpus_allowed_list.
+ [ Impact ]
+ 
+ * Getting information about "attached_luns" fails via python3-rtslib-fb
+ when running the HWE kernel on jammy due to the new kernel module
+ attribute cpus_allowed_list.
+ 
+ * As a consequence, the following operations fail on jammy:
+ 
+   - creating an iSCSI target with Ceph-iSCSI service
+https://docs.ceph.com/en/quincy/rbd/iscsi-target-cli/
+ 
+ (LUN.allocate) created test-iscsi-pool/disk_1 successfully
+ (LUN.add_dev_to_lio) Adding image 'test-iscsi-pool/disk_1' to LIO backstore user:rbd
+ tcmu-runner: tcmu_rbd_open:1162 rbd/test-iscsi-pool.disk_1: address: {172.16.12.185:0/2337103748}
+ (LUN.add_dev_to_lio) Successfully added test-iscsi-pool/disk_1 to LIO
+ LUN alloc problem - Delete from LIO/backstores failed - [Errno 20] Not a directory: '/sys/kernel/config/target/iscsi/cpus_allowed_list'
+ 
+   - targetcli clearconfig confirm=True
+ 
+ [Errno 20] Not a directory:
+ '/sys/kernel/config/target/iscsi/cpus_allowed_list'
+ 
+   - targetctl clear
+ 
+ $ sudo targetctl clear
+ Traceback (most recent call last):
+   File "/usr/bin/targetctl", line 82, in <module>
+ main()
+   File "/usr/bin/targetctl", line 79, in main
+ funcs[sys.argv[1]](savefile)
+   File "/usr/bin/targetctl", line 57, in clear
+ RTSRoot().clear_existing(confirm=True)
+   File "/usr/lib/python3/dist-packages/rtslib_fb/root.py", line 318, in clear_existing
+ so.delete()
+   File "/usr/lib/python3/dist-packages/rtslib_fb/tcm.py", line 269, in delete
+ for lun in self._gen_attached_luns():
+   File "/usr/lib/python3/dist-packages/rtslib_fb/tcm.py", line 215, in _gen_attached_luns
+ for tpgt_dir in listdir(tpgts_base):
+ NotADirectoryError: [Errno 20] Not a directory: '/sys/kernel/config/target/iscsi/cpus_allowed_list'
+ 
+ 
+ [ Test Plan ]
+ 
+ ## create two VMs, one for the GA kernel and the other for the HWE kernel
+ for kernel in ga hwe; do
+ uvt-kvm create \
+ --cpu=4 --memory=4096 \
+ rtslib-fb-sru-testing-$kernel \
+ release=jammy
+ 
+ uvt-kvm wait rtslib-fb-sru-testing-$kernel
+ uvt-kvm ssh rtslib-fb-sru-testing-$kernel 'sudo apt-get update && sudo apt-get upgrade -y'
+ uvt-kvm ssh rtslib-fb-sru-testing-$kernel 'sudo apt-get install -y python3-rtslib-fb targetcli-fb'
+ done
+ 
+ ## Install the HWE kernel and reboot
+ uvt-kvm ssh rtslib-fb-sru-testing-hwe 'sudo apt-get install -y linux-generic-hwe-22.04 && sudo reboot'
+ 
+ 
+ ## Upgrade python3-rtslib-fb to the -proposed one
+ 
+ 
+ ## create the test iSCSI target based on the quickstart guide in targetcli(8)
+ ## https://manpages.ubuntu.com/manpages/jammy/en/man8/targetcli.8.html
+ cat 

[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
Ceph-iSCSI is a somewhat complicated example as a reproducer
(https://docs.ceph.com/en/quincy/rbd/iscsi-overview/), but the simplest
reproducer is `targetctl clear` with the jammy HWE kernel.

$ sudo targetctl clear
Traceback (most recent call last):
  File "/usr/bin/targetctl", line 82, in <module>
    main()
  File "/usr/bin/targetctl", line 79, in main
    funcs[sys.argv[1]](savefile)
  File "/usr/bin/targetctl", line 57, in clear
    RTSRoot().clear_existing(confirm=True)
  File "/usr/lib/python3/dist-packages/rtslib_fb/root.py", line 318, in clear_existing
    so.delete()
  File "/usr/lib/python3/dist-packages/rtslib_fb/tcm.py", line 269, in delete
    for lun in self._gen_attached_luns():
  File "/usr/lib/python3/dist-packages/rtslib_fb/tcm.py", line 215, in _gen_attached_luns
    for tpgt_dir in listdir(tpgts_base):
NotADirectoryError: [Errno 20] Not a directory: '/sys/kernel/config/target/iscsi/cpus_allowed_list'
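
The traceback boils down to listdir() recursing into an entry under
/sys/kernel/config/target/iscsi that is a plain attribute file rather than
a TPG directory; cpus_allowed_list only appears with newer kernels. The
guard the fix needs amounts to skipping non-directories, which can be
demonstrated with a temporary directory standing in for configfs (the
directory and file names below are made up for illustration):

```python
import os
import tempfile

def tpg_dirs(base):
    """Yield only subdirectory names under base, skipping attribute files
    such as cpus_allowed_list that newer kernels expose alongside them."""
    for name in os.listdir(base):
        if os.path.isdir(os.path.join(base, name)):  # the check old rtslib-fb lacks
            yield name

# Stand-in for /sys/kernel/config/target/iscsi with the new attribute present.
with tempfile.TemporaryDirectory() as base:
    os.mkdir(os.path.join(base, "iqn.2003-01.org.example.t1"))  # a target dir
    with open(os.path.join(base, "cpus_allowed_list"), "w") as f:
        f.write("0-3\n")  # plain file: os.listdir() on it raises NotADirectoryError
    found = sorted(tpg_dirs(base))

print(found)
```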

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1988366

Title:
  python-rtslib-fb needs to handle new attribute cpus_allowed_list

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-rtslib-fb/+bug/1988366/+subscriptions



[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
The workaround is to switch back to the GA kernel (v5.15), but that's far
from ideal for newer generations of servers (less than two years old).


[Bug 1988366] Re: python-rtslib-fb needs to handle new attribute cpus_allowed_list

2024-03-21 Thread Nobuto Murata
The latest LTS (jammy) is missing this patch, which causes a failure in
LUN operations when the host is running the HWE kernel, v6.5.

 python3-rtslib-fb | 2.1.74-0ubuntu4   | jammy  | all
 python3-rtslib-fb | 2.1.74-0ubuntu5   | mantic | all
 python3-rtslib-fb | 2.1.74-0ubuntu5   | noble  | all

These are the log lines from the ceph-iscsi use case (exposing an RBD
volume over iSCSI/LIO): it fails to complete the export creation and gets
stuck in an unrecoverable state unless gateway.conf in rados is fixed
manually by deleting the half-broken volume.


(LUN.allocate) created test-iscsi-pool/disk_1 successfully
(LUN.add_dev_to_lio) Adding image 'test-iscsi-pool/disk_1' to LIO backstore user:rbd
tcmu-runner: tcmu_rbd_open:1162 rbd/test-iscsi-pool.disk_1: address: {172.16.12.185:0/2337103748}
(LUN.add_dev_to_lio) Successfully added test-iscsi-pool/disk_1 to LIO
LUN alloc problem - Delete from LIO/backstores failed - [Errno 20] Not a directory: '/sys/kernel/config/target/iscsi/cpus_allowed_list'


A similar report:
https://bugs.launchpad.net/python-cinderclient/yoga/+bug/2008010


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-13 Thread Nobuto Murata
It's not the apt-news nor esm-cache service that was modified.

It looks like systemd warns about daemon-reload in any case where some
systemd unit file has been modified and daemon-reload wasn't called
afterwards:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/comments/12
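
In other words (and this is only a simplified model of the behaviour
described in that comment, not systemd's actual implementation), the
warning appears whenever some unit file's mtime is newer than the last
daemon-reload; the unit name below is a hypothetical snapd example:

```python
import os
import tempfile
import time

def needs_daemon_reload(unit_files, last_reload):
    """True if any watched unit file changed after the last daemon-reload
    (a toy model of systemd's NeedDaemonReload property)."""
    return any(os.stat(path).st_mtime > last_reload for path in unit_files)

with tempfile.TemporaryDirectory() as etc:
    unit = os.path.join(etc, "snap-go-10535.mount")  # hypothetical snap unit
    open(unit, "w").close()
    last_reload = time.time() + 1          # pretend daemon-reload just ran
    clean = needs_daemon_reload([unit], last_reload)
    # ... then snapd rewrites the mount unit without calling daemon-reload
    os.utime(unit, (time.time() + 60, time.time() + 60))
    dirty = needs_daemon_reload([unit], last_reload)

print(clean, dirty)
```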

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2055239

Title:
  Warning: The unit file, source configuration file or drop-ins of {apt-
  news,esm-cache}.service changed on disk. Run 'systemctl daemon-reload'
  to reload units.

To manage notifications about this bug go to:
https://bugs.launchpad.net/snapd/+bug/2055239/+subscriptions



[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Random pointers, although I'm not sure these are identical to my issue:
https://www.reddit.com/r/archlinux/comments/199am0a/thinkpad_t14_suspend_broken_in_kernel_670/
https://discussion.fedoraproject.org/t/random-resume-after-suspend-issue-on-thinkpad-t14s-amd-gen3-radeon-680m-ryzen-7/103452

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2056387

Title:
  Fail to suspend/resume for the second time

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2056387/+subscriptions



[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Multiple suspends in a row worked without an external monitor connected,
but after connecting one the machine failed to suspend/resume.

** Attachment added: "failed_on_suspend_after_connecting_monitor.log"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2056387/+attachment/5753543/+files/failed_on_suspend_after_connecting_monitor.log


[Bug 2056387] Re: Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Kernel log when trying suspend/resume twice in a row. The machine froze
with the power LED still on during the second suspend, and there is no
second "PM: suspend entry (s2idle)" in the kernel log.

** Attachment added: "failed_on_second_suspend.log"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2056387/+attachment/5753542/+files/failed_on_second_suspend.log


[Bug 2056387] [NEW] Fail to suspend/resume for the second time

2024-03-06 Thread Nobuto Murata
Public bug reported:

I had a similar issue before:
https://bugs.launchpad.net/ubuntu/+source/linux-hwe-5.19/+bug/2007718
However, I hadn't seen the issue with later kernels until getting
6.8.0-11.11+1 recently.

* 6.8.0-11 - fails to suspend/resume the second time, although the first suspend/resume works
* 6.6.0-14 - no issue in terms of suspend/resume

One thing worth noting is that whether an external monitor is connected
might be the key to triggering this issue. I cannot reproduce it when an
external monitor (through USB-C to HDMI) is not connected.

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: linux-image-6.8.0-11-generic 6.8.0-11.11
ProcVersionSignature: Ubuntu 6.8.0-11.11-generic 6.8.0-rc4
Uname: Linux 6.8.0-11-generic x86_64
NonfreeKernelModules: zfs
ApportVersion: 2.28.0-0ubuntu1
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Thu Mar  7 11:17:31 2024
InstallationDate: Installed on 2024-01-08 (59 days ago)
InstallationMedia: Ubuntu 24.04 LTS "Noble Numbat" - Daily amd64 (20240104)
MachineType: LENOVO 21CFCTO1WW
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
 SHELL=/bin/bash
 TERM=xterm-256color
 XDG_RUNTIME_DIR=
ProcFB: 0 amdgpudrmfb
ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-6.8.0-11-generic 
root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash vt.handoff=7
PulseList: Error: command ['pacmd', 'list'] failed with exit code 1: No 
PulseAudio daemon running, or not running as session daemon.
RelatedPackageVersions:
 linux-restricted-modules-6.8.0-11-generic N/A
 linux-backports-modules-6.8.0-11-generic  N/A
 linux-firmware20240202.git36777504-0ubuntu1
SourcePackage: linux
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 11/30/2023
dmi.bios.release: 1.49
dmi.bios.vendor: LENOVO
dmi.bios.version: R23ET73W (1.49 )
dmi.board.asset.tag: Not Available
dmi.board.name: 21CFCTO1WW
dmi.board.vendor: LENOVO
dmi.board.version: SDK0T76461 WIN
dmi.chassis.asset.tag: No Asset Information
dmi.chassis.type: 10
dmi.chassis.vendor: LENOVO
dmi.chassis.version: None
dmi.ec.firmware.release: 1.32
dmi.modalias: 
dmi:bvnLENOVO:bvrR23ET73W(1.49):bd11/30/2023:br1.49:efr1.32:svnLENOVO:pn21CFCTO1WW:pvrThinkPadT14Gen3:rvnLENOVO:rn21CFCTO1WW:rvrSDK0T76461WIN:cvnLENOVO:ct10:cvrNone:skuLENOVO_MT_21CF_BU_Think_FM_ThinkPadT14Gen3:
dmi.product.family: ThinkPad T14 Gen 3
dmi.product.name: 21CFCTO1WW
dmi.product.sku: LENOVO_MT_21CF_BU_Think_FM_ThinkPad T14 Gen 3
dmi.product.version: ThinkPad T14 Gen 3
dmi.sys.vendor: LENOVO

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug noble


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
Hmm, it happened again between these two `apt update` runs. It might be
snapd related.

2024-03-05T10:49:54.513356+09:00 t14 sudo:   nobuto : TTY=pts/0 ; PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update
2024-03-05T11:00:47.422897+09:00 t14 sudo:   nobuto : TTY=pts/0 ; PWD=/home/nobuto ; USER=root ; COMMAND=/usr/bin/apt update

$ uptime 
 11:01:51 up 14 min,  1 user,  load average: 0.91, 0.90, 0.75

$ find /etc/systemd /lib/systemd -mmin -15
/etc/systemd/system
/etc/systemd/system/snap-go-10535.mount
/etc/systemd/system/multi-user.target.wants
/etc/systemd/system/multi-user.target.wants/snap-go-10535.mount
/etc/systemd/system/snapd.mounts.target.wants
/etc/systemd/system/snapd.mounts.target.wants/snap-go-10535.mount


$ snap refresh --time
timer: 00:00~24:00/4
last: today at 10:53 JST
next: today at 17:07 JST

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2055239

Title:
  Warning: The unit file, source configuration file or drop-ins of {apt-
  news,esm-cache}.service changed on disk. Run 'systemctl daemon-reload'
  to reload units.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/+subscriptions



[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
The list of files modified in the last two hours (if I increase the
range to the last 2 days, it lists almost everything).

$ find /etc/systemd /lib/systemd/ -mmin -7200
/etc/systemd/system
/etc/systemd/system/snap-chromium-2768.mount
/etc/systemd/system/snap-hugo-18726.mount
/etc/systemd/system/snap-juju-26548.mount
/etc/systemd/system/sshd-keygen@.service.d
/etc/systemd/system/snap-zoom\x2dclient-225.mount
/etc/systemd/system/snap-hugo-18753.mount
/etc/systemd/system/snap-juju-25751.mount
/etc/systemd/system/graphical.target.wants
/etc/systemd/system/multi-user.target.wants
/etc/systemd/system/multi-user.target.wants/snap-chromium-2768.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18726.mount
/etc/systemd/system/multi-user.target.wants/snap-juju-26548.mount
/etc/systemd/system/multi-user.target.wants/snap-zoom\x2dclient-225.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18753.mount
/etc/systemd/system/multi-user.target.wants/snap-juju-25751.mount
/etc/systemd/system/multi-user.target.wants/snap-hugo-18706.mount
/etc/systemd/system/snap.juju.fetch-oci.service
/etc/systemd/system/snap-hugo-18706.mount
/etc/systemd/system/snapd.mounts.target.wants
/etc/systemd/system/snapd.mounts.target.wants/snap-chromium-2768.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18726.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-juju-26548.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-zoom\x2dclient-225.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18753.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-juju-25751.mount
/etc/systemd/system/snapd.mounts.target.wants/snap-hugo-18706.mount
/lib/systemd/system
/lib/systemd/system/tailscaled.service
/lib/systemd/system-generators
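
The `find -mmin` sweeps above just compare mtimes against a cutoff; the
same check can be sketched in Python (the file names below are placeholders
borrowed from the listing, created in a temporary directory):

```python
import os
import tempfile
import time

def modified_within(root, minutes):
    """Paths under root with mtime within the last `minutes` minutes,
    roughly what `find root -mmin -minutes` reports."""
    cutoff = time.time() - minutes * 60
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.lstat(path).st_mtime > cutoff:
                hits.append(path)
    return hits

with tempfile.TemporaryDirectory() as root:
    stale = os.path.join(root, "tailscaled.service")
    fresh = os.path.join(root, "snap-go-10535.mount")
    for p in (stale, fresh):
        open(p, "w").close()
    hour_ago = time.time() - 3600
    os.utime(stale, (hour_ago, hour_ago))  # pretend it was touched an hour ago
    recent = modified_within(root, 15)

print(recent)
```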


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
Just for completeness.

$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hit:1 http://ftp.riken.jp/Linux/ubuntu noble InRelease
Hit:2 http://ftp.riken.jp/Linux/ubuntu noble-updates InRelease
Hit:3 http://ftp.riken.jp/Linux/ubuntu noble-backports InRelease
Hit:4 http://ftp.riken.jp/Linux/ubuntu noble-proposed InRelease
Hit:5 https://repo.steampowered.com/steam stable InRelease
Hit:6 https://packages.microsoft.com/repos/code stable InRelease
Hit:7 http://security.ubuntu.com/ubuntu noble-security InRelease
Get:8 https://pkgs.tailscale.com/stable/ubuntu noble InRelease
Fetched 6,563 B in 1s (6,699 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.


$ dpkg --verify ubuntu-advantage-tools; echo $?
0

$ apt policy ubuntu-advantage-tools
ubuntu-advantage-tools:
  Installed: 31.1
  Candidate: 31.1
  Version table:
 31.2 100
100 http://ftp.riken.jp/Linux/ubuntu noble-proposed/main amd64 Packages
100 http://ftp.riken.jp/Linux/ubuntu noble-proposed/main i386 Packages
 *** 31.1 500
500 http://ftp.riken.jp/Linux/ubuntu noble/main amd64 Packages
500 http://ftp.riken.jp/Linux/ubuntu noble/main i386 Packages
100 /var/lib/dpkg/status

$ systemctl cat apt-news.service
# /usr/lib/systemd/system/apt-news.service
# APT News is hosted at https://motd.ubuntu.com/aptnews.json and can include
# timely information related to apt updates available to your system.
# This service runs in the background during an `apt update` to download the
# latest news and set it to appear in the output of the next `apt upgrade`.
# The script won't do anything if you've run: `pro config set apt_news=false`.
# The script will limit network requests to at most once per 24 hours.
# You can also host your own aptnews.json and configure your system to use it
# with the command:
# `pro config set apt_news_url=https://yourhostname/path/to/aptnews.json`

[Unit]
Description=Update APT News

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/apt_news.py
AppArmorProfile=ubuntu_pro_apt_news
CapabilityBoundingSet=~CAP_SYS_ADMIN
CapabilityBoundingSet=~CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_NET_BIND_SERVICE
CapabilityBoundingSet=~CAP_SYS_PTRACE
CapabilityBoundingSet=~CAP_NET_RAW
PrivateTmp=true
RestrictAddressFamilies=~AF_NETLINK
RestrictAddressFamilies=~AF_PACKET
# These may break some tests, and should be enabled carefully
#NoNewPrivileges=true
#PrivateDevices=true
#ProtectControlGroups=true
# ProtectHome=true seems to reliably break the GH integration test with a lunar 
lxd on jammy host
#ProtectHome=true
#ProtectKernelModules=true
#ProtectKernelTunables=true
#ProtectSystem=full
#RestrictSUIDSGID=true
# Unsupported in bionic
# Suggestion from systemd.exec(5) manpage on SystemCallFilter
#SystemCallFilter=@system-service
#SystemCallFilter=~@mount
#SystemCallErrorNumber=EPERM
#ProtectClock=true
#ProtectKernelLogs=true
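The unit's header comments say the script limits network requests to at most once per 24 hours. A minimal sketch of such a timestamp-file throttle, assuming a hypothetical state file (this is my illustration, not ubuntu-pro-client's actual implementation):

```python
import json
import time
from pathlib import Path

STATE = Path("/tmp/apt_news_state.json")  # hypothetical state file
INTERVAL = 24 * 60 * 60  # at most one fetch per 24 hours


def should_fetch(now=None, state_path=STATE):
    """Return True if the last recorded fetch was 24+ hours ago."""
    now = time.time() if now is None else now
    try:
        last = json.loads(state_path.read_text())["last_fetch"]
    except (FileNotFoundError, KeyError, ValueError):
        return True  # no record yet: allow the first fetch
    return now - last >= INTERVAL


def record_fetch(now=None, state_path=STATE):
    """Persist the time of the fetch we just performed."""
    now = time.time() if now is None else now
    state_path.write_text(json.dumps({"last_fetch": now}))
```

A caller would check `should_fetch()` before downloading aptnews.json and call `record_fetch()` afterwards.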

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2055239

Title:
  Warning: The unit file, source configuration file or drop-ins of {apt-
  news,esm-cache}.service changed on disk. Run 'systemctl daemon-reload'
  to reload units.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
> @nobotu - was yours really an empty file or did you not copy more
> than one?

Are you referring to the `systemctl cat apt-news.service` output in the
bug description? If so, my apologies. I pasted only the first line of
the content on purpose, just to confirm the full path of the service.
The file wasn't empty at all, and I didn't touch it manually either.


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-03-04 Thread Nobuto Murata
** Description changed:

  I recently started seeing the following warning messages when I run `apt
  update`.
  
  $ sudo apt update
  Warning: The unit file, source configuration file or drop-ins of 
apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
  Warning: The unit file, source configuration file or drop-ins of 
esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload 
units.
  ...
  
  apt-news.service for example is in /lib/systemd/system/apt-news.service
  and it's a static file managed by the package. Does the package
  maintenance script call systemd related hooks to reload the config
  whenever the package gets updated?
  
  $ systemctl cat apt-news.service
  # /usr/lib/systemd/system/apt-news.service
+ # APT News is hosted at https://motd.ubuntu.com/aptnews.json and can include
+ # timely information related to apt updates available to your system.
+ ...
  
  $ dpkg -S /lib/systemd/system/apt-news.service
  ubuntu-pro-client: /lib/systemd/system/apt-news.service
  
- ProblemType: Bug
- DistroRelease: Ubuntu 24.04
+ ProblemType: BugDistroRelease: Ubuntu 24.04
  Package: ubuntu-pro-client 31.1
  ProcVersionSignature: Ubuntu 6.6.0-14.14-generic 6.6.3
  Uname: Linux 6.6.0-14-generic x86_64
  NonfreeKernelModules: zfs
  ApportVersion: 2.28.0-0ubuntu1
  Architecture: amd64
  CasperMD5CheckResult: pass
  CurrentDesktop: ubuntu:GNOME
  Date: Wed Feb 28 13:06:35 2024
  InstallationDate: Installed on 2024-01-08 (51 days ago)
  InstallationMedia: Ubuntu 24.04 LTS "Noble Numbat" - Daily amd64 (20240104)
  ProcEnviron:
-  LANG=en_US.UTF-8
-  PATH=(custom, no user)
-  SHELL=/bin/bash
-  TERM=xterm-256color
-  XDG_RUNTIME_DIR=
- SourcePackage: ubuntu-advantage-tools
+  LANG=en_US.UTF-8
+  PATH=(custom, no user)
+  SHELL=/bin/bash
+  TERM=xterm-256color
+  XDG_RUNTIME_DIR=SourcePackage: ubuntu-advantage-tools
  UpgradeStatus: No upgrade log present (probably fresh install)
  apparmor_logs.txt:
-  
+ 
  cloud-id.txt-error:
-  Failed running command 'cloud-id' [exit(2)]. Message: REDACTED config part 
/etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient permissions
-  REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
-  REDACTED config part /etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient 
permissions
-  REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
+  Failed running command 'cloud-id' [exit(2)]. Message: REDACTED config part 
/etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient permissions
+  REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
+  REDACTED config part /etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient 
permissions
+  REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
  livepatch-status.txt-error: Invalid command specified 
'/snap/bin/canonical-livepatch status'.
  uaclient.conf:
-  contract_url: https://contracts.canonical.com
-  log_level: debug
+  contract_url: https://contracts.canonical.com
+  log_level: debug


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-29 Thread Nobuto Murata
I tried to minimize the test case, but no luck so far. I will report
back whenever I find something additional.


[Bug 2055239] Re: Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-28 Thread Nobuto Murata
It was puzzling indeed, but now I have reproduction steps.

$ sudo apt update
-> no warning

$ sudo apt upgrade
-> to install something to invoke the rsyslog trigger.

Processing triggers for rsyslog (8.2312.0-3ubuntu3) ...
Warning: The unit file, source configuration file or drop-ins of 
rsyslog.service changed on disk. Run 'systemctl daemon-reload' to reload units.

$ sudo apt update
-> will see the warning.

The warning happens with every systemctl command, so it's not really an
issue specific to ubuntu-pro-tools. However, systemctl warnings are not
usually expected from `apt` commands, which is why this comes as a
surprise. The proper place for a fix may not be in pro-tools itself but
somewhere else.
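The daemon-reload warnings quoted above follow a fixed shape, so spotting affected units in captured `apt` output is mechanical. A hedged sketch (the regex is mine, inferred from the warning messages quoted in this report):

```python
import re

# Matches systemd's "changed on disk" warning as quoted in this report.
WARNING_RE = re.compile(
    r"Warning: The unit file, source configuration file or drop-ins of\s+"
    r"(\S+\.service) changed on disk"
)


def units_needing_reload(apt_output: str) -> list[str]:
    """Return unit names mentioned in daemon-reload warnings."""
    # Join wrapped lines first, since email copies fold long lines.
    flat = " ".join(apt_output.split())
    return WARNING_RE.findall(flat)


sample = (
    "Warning: The unit file, source configuration file or drop-ins of\n"
    "apt-news.service changed on disk. Run 'systemctl daemon-reload' "
    "to reload units.\n"
)
print(units_needing_reload(sample))  # → ['apt-news.service']
```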

** Attachment added: "apt-terminal.log"
   
https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/2055239/+attachment/5750457/+files/apt-terminal.log

** Changed in: ubuntu-advantage-tools (Ubuntu)
   Status: Incomplete => New


[Bug 2055239] [NEW] Warning: The unit file, source configuration file or drop-ins of {apt-news, esm-cache}.service changed on disk. Run 'systemctl daemon-reload' to reload units.

2024-02-27 Thread Nobuto Murata
Public bug reported:

I recently started seeing the following warning messages when I run `apt
update`.

$ sudo apt update
Warning: The unit file, source configuration file or drop-ins of 
apt-news.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: The unit file, source configuration file or drop-ins of 
esm-cache.service changed on disk. Run 'systemctl daemon-reload' to reload 
units.
...

apt-news.service for example is in /lib/systemd/system/apt-news.service
and it's a static file managed by the package. Does the package
maintenance script call systemd related hooks to reload the config
whenever the package gets updated?

$ systemctl cat apt-news.service
# /usr/lib/systemd/system/apt-news.service

$ dpkg -S /lib/systemd/system/apt-news.service
ubuntu-pro-client: /lib/systemd/system/apt-news.service

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: ubuntu-pro-client 31.1
ProcVersionSignature: Ubuntu 6.6.0-14.14-generic 6.6.3
Uname: Linux 6.6.0-14-generic x86_64
NonfreeKernelModules: zfs
ApportVersion: 2.28.0-0ubuntu1
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Wed Feb 28 13:06:35 2024
InstallationDate: Installed on 2024-01-08 (51 days ago)
InstallationMedia: Ubuntu 24.04 LTS "Noble Numbat" - Daily amd64 (20240104)
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
 SHELL=/bin/bash
 TERM=xterm-256color
 XDG_RUNTIME_DIR=
SourcePackage: ubuntu-advantage-tools
UpgradeStatus: No upgrade log present (probably fresh install)
apparmor_logs.txt:
 
cloud-id.txt-error:
 Failed running command 'cloud-id' [exit(2)]. Message: REDACTED config part 
/etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/99-installer.cfg, insufficient 
permissions
 REDACTED config part /etc/cloud/cloud.cfg.d/90-installer-network.cfg, 
insufficient permissions
livepatch-status.txt-error: Invalid command specified 
'/snap/bin/canonical-livepatch status'.
uaclient.conf:
 contract_url: https://contracts.canonical.com
 log_level: debug

** Affects: ubuntu-advantage-tools (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug need-amd64-retrace noble

** Information type changed from Private to Public


[Bug 1939390] Re: Missing dependency: lsscsi

2022-04-11 Thread Nobuto Murata
To accommodate the upstream change, we need to backport it down to
Victoria.

os-brick (master=)$ git branch -r --contains 
fc6ca22bdb955137d97cb9bcfc84104426e53842
  origin/HEAD -> origin/master
  origin/master
  origin/stable/victoria
  origin/stable/wallaby
  origin/stable/xena
  origin/stable/yoga

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1939390

Title:
  Missing dependency: lsscsi

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/1939390/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1967131] Re: CONFIG_NR_CPUS=64 in -kvm is too low compared to -generic

2022-03-30 Thread Nobuto Murata
Thank you, Stefan, for the prompt response. I'm marking this as Invalid
for the time being, assuming the value was intended.

** Changed in: linux-kvm (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1967131

Title:
  CONFIG_NR_CPUS=64 in -kvm is too low compared to -generic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-kvm/+bug/1967131/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1967131] [NEW] CONFIG_NR_CPUS=64 in -kvm is too low compared to -generic

2022-03-30 Thread Nobuto Murata
Public bug reported:

The -kvm flavor has CONFIG_NR_CPUS=64, although -generic has
CONFIG_NR_CPUS=8192 these days.

This becomes a problem especially when launching a VM on top of a
hypervisor with more than 64 CPU threads available: the guest can only
use up to 64 vCPUs even when more vCPUs are allocated by the
hypervisor.

I've checked the latest available package for Jammy, but there was no change 
around CONFIG_NR_CPUS.
https://launchpad.net/ubuntu/+source/linux-kvm/5.15.0-1003.3

$ lsb_release -r
Release:        20.04

$ dpkg -S /boot/config*
linux-modules-5.4.0-105-generic: /boot/config-5.4.0-105-generic
linux-modules-5.4.0-1059-kvm: /boot/config-5.4.0-1059-kvm

$ grep CONFIG_NR_CPUS /boot/config*
/boot/config-5.4.0-105-generic:CONFIG_NR_CPUS_RANGE_BEGIN=8192
/boot/config-5.4.0-105-generic:CONFIG_NR_CPUS_RANGE_END=8192
/boot/config-5.4.0-105-generic:CONFIG_NR_CPUS_DEFAULT=8192
/boot/config-5.4.0-105-generic:CONFIG_NR_CPUS=8192
/boot/config-5.4.0-1059-kvm:CONFIG_NR_CPUS_RANGE_BEGIN=2
/boot/config-5.4.0-1059-kvm:CONFIG_NR_CPUS_RANGE_END=512
/boot/config-5.4.0-1059-kvm:CONFIG_NR_CPUS_DEFAULT=64
/boot/config-5.4.0-1059-kvm:CONFIG_NR_CPUS=64
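The gap between the two flavors can be extracted mechanically from the /boot config files shown above. A small sketch (the values come from the grep output above; the parser itself is my illustration):

```python
import re


def nr_cpus(config_text: str) -> int:
    """Extract CONFIG_NR_CPUS=<n> from kernel config text."""
    # Anchored so CONFIG_NR_CPUS_DEFAULT and friends do not match.
    m = re.search(r"^CONFIG_NR_CPUS=(\d+)$", config_text, re.MULTILINE)
    if m is None:
        raise ValueError("CONFIG_NR_CPUS not found")
    return int(m.group(1))


# Values taken from the grep output above.
generic = nr_cpus("CONFIG_NR_CPUS_DEFAULT=8192\nCONFIG_NR_CPUS=8192\n")
kvm = nr_cpus("CONFIG_NR_CPUS_DEFAULT=64\nCONFIG_NR_CPUS=64\n")
print(generic // kvm)  # → 128: -generic allows 128x more CPUs than -kvm
```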

** Affects: linux-kvm (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1963698] Re: ovn-controller on Wallaby creates high CPU usage after moving port

2022-03-06 Thread Nobuto Murata
In this specific case (the environment Olivier described), we tested
focal-xena and the issue was NOT reproducible. We've decided to go with
Xena so field-high can be dropped (I'm not able to remove the
subscription by myself here).

Assuming it might be focal-wallaby specific, since we haven't seen this
kind of issue with other customers on Ussuri, there may be some patches
which need to be backported. For example, another distribution seems to
have backported the following (no idea whether it's connected to this
issue, though):
https://github.com/ovn-org/ovn/commit/c83294970c62f662015a7979b12250580bee3001

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1963698

Title:
  ovn-controller on Wallaby creates high CPU usage after moving port

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1963698/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1963698] Re: ovn-controller on Wallaby creates high CPU usage after moving port

2022-03-04 Thread Nobuto Murata
** Project changed: networking-ovn => ovn (Ubuntu)


[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with cloud-archive:ussuri-proposed.

apt-cache policy cinder-common
cinder-common:
  Installed: 2:16.4.2-0ubuntu2~cloud0
  Candidate: 2:16.4.2-0ubuntu2~cloud0
  Version table:
 *** 2:16.4.2-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/ussuri/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.4.1-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 2:12.0.10-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
 2:12.0.9-0ubuntu1.2 500
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 
Packages
 2:12.0.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages


[after applying -proposed]

2022-02-24 16:05:15.445 13263 INFO
cinder.volume.flows.manager.create_volume [req-
ce6cfe06-8291-4933-981d-d5ea90c8f575 c637fcb15b914d54b9a7c202b2abd27a
8e9bce8207ee4f49a8bd2fc68d1ac6cd - 05a27486835a4d31a65cb4f1911c4f16
05a27486835a4d31a65cb4f1911c4f16] Volume
volume-f0fa16f0-d8c1-44b0-b939-75a3f54e820e
(f0fa16f0-d8c1-44b0-b939-75a3f54e820e): created successfully


** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959712

Title:
  [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1959712/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with cloud-archive:victoria-proposed.

apt-cache policy cinder-common
cinder-common:
  Installed: 2:17.2.0-0ubuntu1~cloud1
  Candidate: 2:17.2.0-0ubuntu1~cloud1
  Version table:
 *** 2:17.2.0-0ubuntu1~cloud1 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/victoria/main amd64 Packages
100 /var/lib/dpkg/status
 2:17.2.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-updates/victoria/main amd64 Packages
 2:16.4.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
 2:16.1.0-0ubuntu1 500
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
 2:16.0.0~b3~git2020041012.eb915e2db-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

[before applying -proposed]

2022-02-24 13:56:09.072 52866 ERROR cinder.volume.drivers.hpe.hpe_3par_iscsi 
[req-d69ea8d0-9059-46de-b931-d5acf2d6f9c1 - - - - -] For Primera, only FC is 
supported. iSCSI cannot be used
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager 
[req-d69ea8d0-9059-46de-b931-d5acf2d6f9c1 - - - - -] Failed to initialize 
driver.: NotImplementedError
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager Traceback (most 
recent call last):
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/manager.py", line 466, in 
_init_host
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager 
self.driver.do_setup(ctxt)
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/drivers/hpe/hpe_3par_base.py", 
line 438, in do_setup
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager 
self._do_setup(common)
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/drivers/hpe/hpe_3par_iscsi.py", 
line 149, in _do_setup
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager raise 
NotImplementedError()
2022-02-24 13:56:09.073 52866 ERROR cinder.volume.manager NotImplementedError


[after applying -proposed]
2022-02-24 14:18:26.737 58796 INFO cinder.volume.flows.manager.create_volume 
[req-a12f1979-f76f-4187-9e23-cc667d0ef403 2271c96c9b804f4e86e12ab55a6d9344 
268e52aa19b040fca0430b9616a0017e - - -] Volume 
volume-028e4021-506c-43eb-9f49-d705a420d216 
(028e4021-506c-43eb-9f49-d705a420d216): created successfully


$ openstack volume list
+--------------------------------------+-------+-----------+------+-------------+
| ID                                   | Name  | Status    | Size | Attached to |
+--------------------------------------+-------+-----------+------+-------------+
| 028e4021-506c-43eb-9f49-d705a420d216 | test2 | available |   10 |             |
| 57c761cf-5a7e-401d-8b91-10872b5f7804 | test1 | error     |   10 |             |
+--------------------------------------+-------+-----------+------+-------------+

test1 (w/error) is before, and test2 (w/ available) is after applying
-proposed.

** Tags removed: verification-victoria-needed
** Tags added: verification-victoria-done


[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-24 Thread Nobuto Murata
Tested and verified with focal-proposed.

apt-cache policy cinder-common
cinder-common:
  Installed: 2:16.4.2-0ubuntu2
  Candidate: 2:16.4.2-0ubuntu2
  Version table:
 *** 2:16.4.2-0ubuntu2 500
500 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 2:16.4.1-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages
 2:16.1.0-0ubuntu1 500
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
 2:16.0.0~b3~git2020041012.eb915e2db-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

[before applying -proposed]

2022-02-24 06:59:34.018 52526 ERROR cinder.volume.drivers.hpe.hpe_3par_iscsi 
[req-65769a6c-7b72-49b4-84e8-dac7bbacc5d1 - - - - -] For Primera, only FC is 
supported. iSCSI cannot be used
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager 
[req-65769a6c-7b72-49b4-84e8-dac7bbacc5d1 - - - - -] Failed to initialize 
driver.: NotImplementedError
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager Traceback (most 
recent call last):
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/manager.py", line 466, in 
_init_host
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager 
self.driver.do_setup(ctxt)
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/drivers/hpe/hpe_3par_base.py", 
line 438, in do_setup
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager 
self._do_setup(common)
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager   File 
"/usr/lib/python3/dist-packages/cinder/volume/drivers/hpe/hpe_3par_iscsi.py", 
line 149, in _do_setup
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager raise 
NotImplementedError()
2022-02-24 06:59:34.018 52526 ERROR cinder.volume.manager NotImplementedError


[after applying -proposed]

2022-02-24 08:59:05.246 78217 INFO
cinder.volume.flows.manager.create_volume
[req-3599f424-29db-45b4-bf2d-4a665f3a30b0
91f76503c81d4398ad8f7ec426d2f149 9c05cd46a17c460d87ca375f20a20803 -
5c2cd4d3196643e2b6a4d8f7dc64eef0 5c2cd4d3196643e2b6a4d8f7dc64eef0]
Volume volume-2bfb4855-b8c9-4862-9d83-293ff43c6a32
(2bfb4855-b8c9-4862-9d83-293ff43c6a32): created successfully


** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal


[Bug 1947063] Re: Missing dependency to sysfsutils, nvme-cli

2022-02-22 Thread Nobuto Murata
There is a separate bug for `lsscsi` since it's pertinent to iSCSI use cases:
https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1939390

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947063

Title:
  Missing dependency to sysfsutils, nvme-cli

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1947063/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-22 Thread Nobuto Murata
Okay, I've added a comment there:
https://bugs.launchpad.net/ubuntu/+source/python-os-brick/+bug/1947063


[Bug 1947063] Re: Missing dependency to sysfsutils

2022-02-22 Thread Nobuto Murata
Upstream refreshed the list of dependencies by adding more commands,
etc. "nvme" command from nvme-cli package is one of them.


This is a warning in the NVMe-oF code path, but it's invoked regardless whether 
NVMe-oF is used or not.

2022-02-22 11:00:42.531 713772 WARNING os_brick.initiator.connectors.nvmeof 
[req-426349fa-2fff-42f2-b4e3-dbed65ae1d8a b8b3819299c442bb9ba4fb651a59b496 
077b7968c0de41069cd29e3819220420 - - -] Could not generate host nqn: [Errno 2] 
No such file or directory
Command: nvme gen-hostnqn | tee /etc/nvme/hostnqn
Exit code: -
Stdout: None
Stderr: None: oslo_concurrency.processutils.ProcessExecutionError: [Errno 2] No 
such file or directory

** Summary changed:

- Missing dependency to sysfsutils
+ Missing dependency to sysfsutils, nvme-cli


[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
> I *think* we also had this problem on systems that had NVMe volumes.
The nvme-cli package is not pulled in, even though it is used by os-
brick:

Did it block any operation by missing the nvme command? It looks like
it's in a critical path for NVMe-oF usecase, but it generates a warning
instead of an error when it's not found.
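That warning-instead-of-error behavior can be sketched like this (my illustration of the pattern, not os-brick's actual code; the helper name is hypothetical):

```python
import subprocess


def try_gen_hostnqn(cmd=("nvme", "gen-hostnqn")):
    """Run a host-NQN generation command, warning instead of raising.

    Returns the generated NQN string, or None (with a warning printed)
    when the binary is missing or the command fails.
    """
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        # Mirrors the "Could not generate host nqn" warning in the log above.
        print(f"Could not generate host nqn: {exc}")
        return None
```

With nvme-cli absent the caller gets None and the operation continues, which is why the missing dependency surfaces only as log noise rather than a hard failure.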


[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-21 Thread Nobuto Murata
Hi Raghavendra,

First of all, thank you for your effort trying to move things forward.
I'm afraid devstack works in this specific case because devstack pulls
Cinder from the git repository directly instead of using Ubuntu's
binary packages (.deb, basically), if I'm not mistaken. This validation
requires using the deb packages proposed in specific repository
pockets.
I'm aware that a team working with us is trying to validate the proposed
packages this week (or by early next week) so I think we can wait for
them to provide the feedback.


[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Subscribing ~field-medium


[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Similar with this one: https://bugs.launchpad.net/ubuntu/+source/python-
os-brick/+bug/1947063


[Bug 1939390] Re: Missing dependency: lsscsi

2022-02-21 Thread Nobuto Murata
Adding an Ubuntu packaging task. It seems the lsscsi dependency was
added fairly recently (July 2020), so it looks like it's something the
os-brick binary package should declare as a dependency:
https://bugs.launchpad.net/os-brick/+bug/1793259
https://opendev.org/openstack/os-brick/commit/fc6ca22bdb955137d97cb9bcfc84104426e53842

At this point, there is no Depends or Recommends from os-brick or
OpenStack related packages.

$ apt rdepends lsscsi
lsscsi
Reverse Depends:
  Depends: libguestfs0
  Depends: libguestfs0
  Depends: zfs-test
  Depends: x2gothinclient-cdmanager
  Suggests: stressant-meta
  Recommends: sitesummary-client


** Also affects: python-os-brick (Ubuntu)
   Importance: Undecided
   Status: New


[Bug 1953052] Re: In Jammy sound level reset after reboot

2022-02-20 Thread Nobuto Murata
> TL;DR: pipewire-pulse and PulseAudio should *not* be installed at the
> same time as they serve the same function and applications don't know
> the difference.

This is not the case at least for the default Ubuntu flavor (w/ GNOME).

$ curl -s https://cdimages.ubuntu.com/daily-live/current/jammy-desktop-amd64.manifest | egrep 'pipewire|pulseaudio'
gstreamer1.0-pipewire:amd64 0.3.44-1
gstreamer1.0-pulseaudio:amd64   1.18.5-1ubuntu3
libpipewire-0.3-0:amd64 0.3.44-1
libpipewire-0.3-common  0.3.44-1
libpipewire-0.3-modules:amd64   0.3.44-1
pipewire:amd64  0.3.44-1
pipewire-bin0.3.44-1
pipewire-media-session  0.4.1-2
pulseaudio  1:15.0+dfsg1-1ubuntu6
pulseaudio-module-bluetooth 1:15.0+dfsg1-1ubuntu6
pulseaudio-utils1:15.0+dfsg1-1ubuntu6


[Bug 1953052] Re: In Jammy sound level reset after reboot

2022-02-17 Thread Nobuto Murata
I can confirm that after stopping pipewire temporarily with:
$ systemctl --user stop pipewire.socket pipewire.service
the volume level is properly recovered across plugging a headset in and out,
for example, which is good.

Both pulseaudio and pipewire are installed out of the box and running if
I'm not mistaken.

$ apt show pulseaudio |& grep Task:
Task: ubuntu-desktop-minimal, ubuntu-desktop, ubuntu-desktop-raspi, 
kubuntu-desktop, xubuntu-core, xubuntu-desktop, lubuntu-desktop, 
ubuntustudio-desktop-core, ubuntustudio-desktop, ubuntukylin-desktop, 
ubuntu-mate-core, ubuntu-mate-desktop, ubuntu-budgie-desktop, 
ubuntu-budgie-desktop-raspi

$ apt show pipewire |& grep Task:
Task: ubuntu-desktop-minimal, ubuntu-desktop, ubuntu-desktop-raspi, 
kubuntu-desktop, xubuntu-core, xubuntu-desktop, lubuntu-desktop, 
ubuntustudio-desktop-core, ubuntustudio-desktop, ubuntukylin-desktop, 
ubuntu-mate-core, ubuntu-mate-desktop, ubuntu-budgie-desktop, 
ubuntu-budgie-desktop-raspi


[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-01 Thread Nobuto Murata
** Description changed:

  [Description]
  OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE 
Primera 4.2 and higher. This is now supported in Cinder and we would like to 
enable it in Ubuntu Focal as well as OpenStack Ussuri.
  
  The rationale for this SRU falls under hardware enablement for Long Term
  Support releases.
  
  [Test Case]
  1. Deploy OpenStack with:
   - HPE 3PAR *iSCSI* driver enabled for Cinder
   - with Primera >= 4.2
-   as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
+   as per: 
https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/hpe-3par-driver.html
  2. Create a volume
    openstack volume create (--type hpe-primera) --size 10 sru-test-before
  
  3. Confirm the volume is successfully created while it gave an error as
  "For Primera, only FC is supported. iSCSI cannot be used" previously.
  
  [Regression Potential]
  This is a pretty simple patch. Where the code used to always raise a not 
implemented error for Primera, it now raises the not implemented error only for 
< Primera 4.2. Where things could go wrong? It's possible the version checking 
could have an issue but that code seems standard and the upstream patch has 
been merged back to stable/victoria [1]. Regression testing will cover that 
code path as well.
  [1] https://review.opendev.org/c/openstack/cinder/+/820611


[Bug 1959712] Re: [SRU] Add iSCSI support to HPE 3PAR driver for Primera 4.2 and higher

2022-02-01 Thread Nobuto Murata
** Description changed:

  [Description]
  OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE 
Primera 4.2 and higher. This is now supported in Cinder and we would like to 
enable it in Ubuntu Focal as well as OpenStack Ussuri.
  
  The rationale for this SRU falls under hardware enablement for Long Term
  Support releases.
  
  [Test Case]
- TBD
+ 1. Deploy OpenStack with:
+  - HPE 3PAR *iSCSI* driver enabled for Cinder
+  - with Primara >= 4.2
+   as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
+ 2. Create a volume
+   openstack volume create (--type hpe-primera) --size 10 sru-test-before
+ 
+ 3. Confirm the volume is successfully created while it gave "For
+ Primera, only FC is supported. iSCSI cannot be used" before.
+ 
  
  [Regression Potential]
  This is a pretty simple patch. Where the code used to always raise a not 
implemented error for Primera, it now raises the not implemented error only for 
< Primera 4.2. Where things could go wrong? It's possible the version checking 
could have an issue but that code seems standard and the upstream patch has 
been merged back to stable/victoria [1]. Regression testing will cover that 
code path as well.
  [1] https://review.opendev.org/c/openstack/cinder/+/820611

** Description changed:

  [Description]
  OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE 
Primera 4.2 and higher. This is now supported in Cinder and we would like to 
enable it in Ubuntu Focal as well as OpenStack Ussuri.
  
  The rationale for this SRU falls under hardware enablement for Long Term
  Support releases.
  
  [Test Case]
  1. Deploy OpenStack with:
-  - HPE 3PAR *iSCSI* driver enabled for Cinder
-  - with Primara >= 4.2
-   as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
+  - HPE 3PAR *iSCSI* driver enabled for Cinder
+  - with Primera >= 4.2
+   as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
  2. Create a volume
-   openstack volume create (--type hpe-primera) --size 10 sru-test-before
+   openstack volume create (--type hpe-primera) --size 10 sru-test-before
  
  3. Confirm the volume is successfully created while it gave "For
  Primera, only FC is supported. iSCSI cannot be used" before.
  
- 
  [Regression Potential]
  This is a pretty simple patch. Where the code used to always raise a not 
implemented error for Primera, it now raises the not implemented error only for 
< Primera 4.2. Where things could go wrong? It's possible the version checking 
could have an issue but that code seems standard and the upstream patch has 
been merged back to stable/victoria [1]. Regression testing will cover that 
code path as well.
  [1] https://review.opendev.org/c/openstack/cinder/+/820611

** Description changed:

  [Description]
  OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE 
Primera 4.2 and higher. This is now supported in Cinder and we would like to 
enable it in Ubuntu Focal as well as OpenStack Ussuri.
  
  The rationale for this SRU falls under hardware enablement for Long Term
  Support releases.
  
  [Test Case]
  1. Deploy OpenStack with:
   - HPE 3PAR *iSCSI* driver enabled for Cinder
   - with Primera >= 4.2
    as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
  2. Create a volume
    openstack volume create (--type hpe-primera) --size 10 sru-test-before
  
- 3. Confirm the volume is successfully created while it gave "For
- Primera, only FC is supported. iSCSI cannot be used" before.
+ 3. Confirm the volume is successfully created while it gave an error as
+ "For Primera, only FC is supported. iSCSI cannot be used" before.
  
  [Regression Potential]
  This is a pretty simple patch. Where the code used to always raise a not 
implemented error for Primera, it now raises the not implemented error only for 
< Primera 4.2. Where things could go wrong? It's possible the version checking 
could have an issue but that code seems standard and the upstream patch has 
been merged back to stable/victoria [1]. Regression testing will cover that 
code path as well.
  [1] https://review.opendev.org/c/openstack/cinder/+/820611

** Description changed:

  [Description]
  OpenStack cinder in Focal (OpenStack Ussuri) is lacking iSCSI support for HPE 
Primera 4.2 and higher. This is now supported in Cinder and we would like to 
enable it in Ubuntu Focal as well as OpenStack Ussuri.
  
  The rationale for this SRU falls under hardware enablement for Long Term
  Support releases.
  
  [Test Case]
  1. Deploy OpenStack with:
   - HPE 3PAR *iSCSI* driver enabled for Cinder
   - with Primera >= 4.2
    as per: 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/hpe-3par-driver.html
  2. Create a volume
    openstack volume create (--type hpe-primera) --size 10 

[Bug 1936842] Re: agent cannot be up on LXD/Fan network on OpenStack OVN/geneve mtu=1442

2021-10-18 Thread Nobuto Murata
Let me know what log / log level you want to see to compare. I'm
attaching the machine log of the VM for the time being.

** Attachment added: "machine-0.log"
   
https://bugs.launchpad.net/juju/+bug/1936842/+attachment/5533786/+files/machine-0.log


[Bug 1936842] Re: agent cannot be up on LXD/Fan network on OpenStack OVN/geneve mtu=1442

2021-10-18 Thread Nobuto Murata
Hmm, I'm not sure where the difference comes from. With Juju 2.9.16 I
still see mtu=1442 on VM NIC (expected) and mtu=1450 (bigger than
underlying NIC) on fan-252 bridge.


ubuntu@juju-913ba4-k8s-on-openstack-0:~$ brctl show
bridge name bridge id   STP enabled interfaces
fan-252 8000.0653e0778a0a   no  ftun0
veth966fdd48
lxdbr0  8000.00163e2848ef   no
ubuntu@juju-913ba4-k8s-on-openstack-0:~$ ip l
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3:  mtu 1442 qdisc fq_codel state UP 
mode DEFAULT group default qlen 1000
link/ether fa:16:3e:27:18:ce brd ff:ff:ff:ff:ff:ff
3: fan-252:  mtu 1450 qdisc noqueue state UP 
mode DEFAULT group default qlen 1000
link/ether 06:53:e0:77:8a:0a brd ff:ff:ff:ff:ff:ff
4: ftun0:  mtu 1392 qdisc noqueue master 
fan-252 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 82:8a:c4:02:b5:77 brd ff:ff:ff:ff:ff:ff
5: lxdbr0:  mtu 1500 qdisc noqueue state 
DOWN mode DEFAULT group default qlen 1000
link/ether 00:16:3e:28:48:ef brd ff:ff:ff:ff:ff:ff
7: veth966fdd48@if6:  mtu 1450 qdisc noqueue 
master fan-252 state UP mode DEFAULT group default qlen 1000
link/ether 06:53:e0:77:8a:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
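For reference, the ftun0 MTU above is consistent with a 50-byte fan encapsulation overhead on top of the underlay NIC (the 50-byte figure is an assumption inferred purely from the numbers shown, not from Juju documentation); the problem is that fan-252 at 1450 exceeds the 1442-byte underlay:

```shell
# Underlay (ens3) MTU minus an assumed 50-byte fan/overlay overhead
# should match the ftun0 tunnel MTU shown above.
underlay_mtu=1442
overhead=50
tunnel_mtu=$((underlay_mtu - overhead))
echo "$tunnel_mtu"   # → 1392
```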


[Bug 1947063] [NEW] Missing dependency to sysfsutils

2021-10-13 Thread Nobuto Murata
Public bug reported:

At the moment, python3-os-brick pulls in iSCSI dependencies such as
open-iscsi but doesn't pull in any FC dependency such as sysfsutils at all.

os-brick actively uses the "systool" command to detect HBAs and bails if it's
not installed. It would be nice to add the sysfsutils package at least as a
Recommends of python3-os-brick.
https://github.com/openstack/os-brick/blob/4baa502ec8c6c62184c474f126c0ad07eb3409f1/os_brick/initiator/linuxfc.py#L151-L177
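The check os-brick performs can be sketched roughly like this (hypothetical shell function name; the real implementation is Python in the linuxfc.py linked above):

```shell
# Bail out early when systool (shipped by sysfsutils) is missing,
# mirroring the behaviour described above.
check_fc_hba() {
  if ! command -v systool >/dev/null 2>&1; then
    echo "systool is not installed (provided by sysfsutils)" >&2
    return 1
  fi
  systool -c fc_host -v
}
```

With sysfsutils pulled in as a Recommends, the first branch would rarely trigger on default installs.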


$ apt depends python3-os-brick 
python3-os-brick
  Depends: open-iscsi
  Depends: os-brick-common (= 5.0.1-0ubuntu1)
  Depends: python3-babel (>= 2.3.4)
  Depends: python3-eventlet (>= 0.30.1)
  Depends: python3-os-win (>= 5.4.0)
  Depends: python3-oslo.concurrency (>= 4.4.0)
  Depends: python3-oslo.context (>= 1:3.1.1)
  Depends: python3-oslo.i18n (>= 5.0.1)
  Depends: python3-oslo.log (>= 4.4.0)
  Depends: python3-oslo.privsep (>= 2.4.0)
  Depends: python3-oslo.serialization (>= 4.1.0)
  Depends: python3-oslo.service (>= 2.5.0)
  Depends: python3-oslo.utils (>= 4.8.0)
  Depends: python3-pbr (>= 5.5.1)
  Depends: python3-requests (>= 2.25.1)
  Depends: python3-retrying (>= 1.2.3)
  Depends: python3-tenacity (>= 6.3.1)
  Depends: 
python3:i386
python3
  Suggests: python-os-brick-doc

** Affects: python-os-brick (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1940957] Re: DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) and 25G AOC cables

2021-09-03 Thread Nobuto Murata
** Description changed:

- Ubuntu 20.04 LTS
- dpdk 19.11.7-0ubuntu0.20.04.1
+ - Ubuntu 20.04 LTS
+ - dpdk 19.11.7-0ubuntu0.20.04.1
+   (we tested it with 19.11.10~rc1, but the problem persists)
+ - Intel XXV710
+ - Cisco 25G AOC cables
  
- We are seeing issues with link status of ports as DPDK-bond members and those 
links suddenly go away and marked as down. There are multiple parameters that 
could cause this issue, but one of the suggestions we've got from a server 
vendor was that the following upstream patch would be required to support 25G 
AOC/ACC cables.
- https://github.com/DPDK/dpdk/commit/b1daa34614
- (available for v21.05 onward)
+ Patch to backport:
+ https://git.dpdk.org/dpdk/commit/?id=b1daa3461429e7674206a714c17adca65e9b44b4
  
- (todo: define a test case)
+ [Impact]
  
- [expected status]
-  dpdk-bond0 
- bond_mode: balance-tcp
- bond may use recirculation: yes, Recirc-ID : 1
- bond-hash-basis: 0
- updelay: 0 ms
- downdelay: 0 ms
- next rebalance: 1691 ms
- lacp_status: negotiated
- lacp_fallback_ab: false
- active slave mac: 40:a6:b7:3e:4a:60(dpdk-d2cb784)
- slave dpdk-7272e20: enabled
-   may_enable: true
- slave dpdk-d2cb784: enabled
-   active slave
-   may_enable: true
+ DPDK ports for a bond get disabled and no traffic goes in and out after
+ openvswitch restart with the combination above. If that happens the DPDK
+ bond has to be re-created as a workaround but it's not feasible since
+ service restart basically breaks everything.
  
- 
- [after sometime - links are lost]
   dpdk-bond0 
  bond_mode: balance-tcp
  bond may use recirculation: yes, Recirc-ID : 1
  bond-hash-basis: 0
  updelay: 0 ms
  downdelay: 0 ms
  next rebalance: 7267 ms
  lacp_status: configured
  lacp_fallback_ab: false
  active slave mac: 00:00:00:00:00:00(none)
  slave dpdk-7272e20: disabled
may_enable: false
  slave dpdk-d2cb784: disabled
may_enable: false
+ 
+ [Test Plan]
+ 
+ 1. configure a DPDK bond with openvswitch as follows for example.
+ 
+ $ sudo ovs-appctl bond/show dpdk-bond0
+ 
+  dpdk-bond0 
+ bond_mode: balance-tcp
+ bond may use recirculation: yes, Recirc-ID : 1
+ bond-hash-basis: 0
+ updelay: 0 ms
+ downdelay: 0 ms
+ next rebalance: 1691 ms
+ lacp_status: negotiated
+ lacp_fallback_ab: false
+ active slave mac: 40:a6:b7:XX:YY:ZZ(dpdk-d2cb784)
+ slave dpdk-7272e20: enabled
+   may_enable: true
+ slave dpdk-d2cb784: enabled
+   active slave
+   may_enable: true
+ 
+ 2. Apply updated packages
+ 
+ 3. Reboot the machine (just to make sure we are not using anything old)
+ 
+ 4. Restart the openvswitch
+ 
+ $ sudo systemctl restart openvswitch-switch
+ 
+ 5. Confirm ports are enabled after both the step 3. and 4. and the port
+ status matches the one in the step 1.
+ 
+ 
+ [Where problems could occur]
+ 
+ The scope of the patch is i40e and the two specific cable types only:
+ i40e + 25G AOC and ACC cables so it's unlikely to affect any other
+ combinations. Before this patch, 25G AOC/ACC cables were not in the
+ additional PHY types of the driver functionality so it's not likely to
+ make things worse.
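Step 5 of the test plan above can be scripted; this sketch parses sample `ovs-appctl bond/show` output (format assumed from the status listings above) and reports whether every bond member is enabled:

```shell
# Sample member lines; on a real host use:
#   bond_show=$(sudo ovs-appctl bond/show dpdk-bond0)
bond_show='slave dpdk-7272e20: enabled
slave dpdk-d2cb784: enabled'

# Collect any member whose status is not "enabled".
disabled=$(printf '%s\n' "$bond_show" | awk '$1 == "slave" && $NF != "enabled" {print $2}')
if [ -z "$disabled" ]; then
  echo "all bond members enabled"
fi
```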


[Bug 1940957] Re: DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) and 25G AOC cables

2021-09-03 Thread Nobuto Murata
** Summary changed:

- i40e: support 25G AOC/ACC cables
+ DPDK ports get disabled after Open vSwitch restart with Intel XXV710(i40e) 
and 25G AOC cables


[Bug 1940957] Re: i40e: support 25G AOC/ACC cables

2021-08-25 Thread Nobuto Murata
A test build for testing:
https://launchpad.net/~nobuto/+archive/ubuntu/dpdk


[Bug 1940957] [NEW] i40e: support 25G AOC/ACC cables

2021-08-24 Thread Nobuto Murata
Public bug reported:

Ubuntu 20.04 LTS
dpdk 19.11.7-0ubuntu0.20.04.1

We are seeing issues with the link status of ports that are DPDK bond members:
those links suddenly go away and are marked as down. There are multiple
parameters that could cause this issue, but one of the suggestions we got from
a server vendor was that the following upstream patch would be required to
support 25G AOC/ACC cables.
https://github.com/DPDK/dpdk/commit/b1daa34614
(available for v21.05 onward)

(todo: define a test case)

[expected status]
 dpdk-bond0 
bond_mode: balance-tcp
bond may use recirculation: yes, Recirc-ID : 1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 1691 ms
lacp_status: negotiated
lacp_fallback_ab: false
active slave mac: 40:a6:b7:3e:4a:60(dpdk-d2cb784)
slave dpdk-7272e20: enabled
  may_enable: true
slave dpdk-d2cb784: enabled
  active slave
  may_enable: true


[after sometime - links are lost]
 dpdk-bond0 
bond_mode: balance-tcp
bond may use recirculation: yes, Recirc-ID : 1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
next rebalance: 7267 ms
lacp_status: configured
lacp_fallback_ab: false
active slave mac: 00:00:00:00:00:00(none)
slave dpdk-7272e20: disabled
  may_enable: false
slave dpdk-d2cb784: disabled
  may_enable: false

** Affects: dpdk (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
A deployment method improvement in the field will be tracked as a
private bug LP: #1889498.


[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
Closing the MAAS task since MAAS just connects to PostgreSQL through TCP
connections.

One correction to my previous statement:

$ sudo lsof / | grep plpgsql.so
postgres  21822 postgres  memREG  252,1   202824 1295136 
/usr/lib/postgresql/12/lib/plpgsql.so
postgres  21948 postgres  memREG  252,1   202824 1295136 
/usr/lib/postgresql/12/lib/plpgsql.so

^^^ this was a red herring. The direction was the other way around: the
old postgres binary in memory tried to invoke the updated .so on disk
when the process is not restarted after a package update.

$ sudo lsof / | egrep -w 'DEL|deleted' | grep postgres
postgres  21160 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21163 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21164 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21165 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21166 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21167 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)
postgres  21168 postgres  txtREG  252,1  8117568 1293956 
/usr/lib/postgresql/12/bin/postgres (deleted)

** Changed in: maas
   Status: Incomplete => Invalid


[Bug 1939898] Re: Unnatended postgresql-12 upgrade caused MAAS internal error

2021-08-16 Thread Nobuto Murata
The only scenario I can think of is NOT restarting postgres after the
package update. This could happen when the postgres process is managed
outside of init (systemd), e.g. by pacemaker for HA purposes.

$ sudo lsof / | grep plpgsql.so
postgres  21822 postgres  DELREG  252,1  1295136 
/usr/lib/postgresql/12/lib/plpgsql.so
postgres  21948 postgres  DELREG  252,1  1295136 
/usr/lib/postgresql/12/lib/plpgsql.so

For a single-node scenario with systemd, postgres can be restarted in
the postinst of the postgresql-common package so there should be no
orphaned library in memory. So this might not be related to the MAAS snap
or the postgres packages, but to the simple fact that postgres needs to be
restarted after a package update (especially when it's managed by
something like pacemaker rather than systemd itself).
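The lsof checks above generalize into a small helper for spotting processes that still map deleted (i.e. upgraded) files; this sketch runs against a sample of the output shown above rather than a live system:

```shell
# Sample lsof lines; on a real host use:  lsof_sample=$(sudo lsof /)
lsof_sample='postgres 21822 postgres DEL REG 252,1 1295136 /usr/lib/postgresql/12/lib/plpgsql.so
postgres 21160 postgres txt REG 252,1 8117568 1293956 /usr/lib/postgresql/12/bin/postgres (deleted)
systemd 1 root txt REG 252,1 1620224 393219 /lib/systemd/systemd'

# Unique process names whose mapped files were replaced on disk:
# either FD type "DEL" or a "(deleted)" suffix in the lsof output.
stale=$(printf '%s\n' "$lsof_sample" \
  | awk '$4 == "DEL" || /\(deleted\)$/ {print $1}' | sort -u)
echo "$stale"   # → postgres
```

A non-empty result is a hint that the listed services need a restart after the package upgrade.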


[steps to reproduce on a fresh machine/VM]

$ sudo apt update && sudo apt upgrade -y

$ cat 

[Bug 1870829] Re: AptX and AptX HD unavailable as Bluetooth audio quality options

2021-08-15 Thread Nobuto Murata
From a duplicate of this bug, as a TL;DR:
https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/1939933
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
> > > Does that mean that enabling it, would only add some dependencies but
> > > not actually do anything?
> >
> > Yes, a (soft) dependency should probably be added against
> > gstreamer1.0-plugins-bad, but as I said, the needed version (>= 1.19) is
> > not yet in debian
> 
> https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/commit/b916522382aaa33f216636a292e97cd769ac4093
> > 1.19.1


[Bug 1939933] Re: pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
> > Does that mean that enabling it, would only add some dependencies but 
> > not actually do anything?
> 
> Yes, a (soft) dependency should probably be added against 
> gstreamer1.0-plugins-bad, but as I said, the needed version (>= 1.19) is 
> not yet in debian

https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/commit/b916522382aaa33f216636a292e97cd769ac4093
> 1.19.1


[Bug 1939933] Re: pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
** Bug watch added: Debian Bug tracker #991597
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597

** Also affects: pulseaudio (Debian) via
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991597
   Importance: Unknown
   Status: Unknown


[Bug 1939933] [NEW] pulseaudio is not built with AptX support

2021-08-14 Thread Nobuto Murata
Public bug reported:

The changelog mentions AptX, but it's not actually enabled in the build
if I'm not mistaken. AptX support seems to require gstreamer as a build
dependency at least.

[changelog]
pulseaudio (1:15.0+dfsg1-1ubuntu1) impish; urgency=medium

  * New upstream version resynchronized on Debian
   - Support for bluetooth LDAC and AptX codecs and HFP profiles for
 improved audio quality

[upstream release notes]
https://www.freedesktop.org/wiki/Software/PulseAudio/Notes/15.0/#supportforldacandaptxbluetoothcodecsplussbcxqsbcwithhigher-qualityparameters


[meson.build]

bluez5_gst_dep = dependency('gstreamer-1.0', version : '>= 1.14', required : get_option('bluez5-gstreamer'))
bluez5_gstapp_dep = dependency('gstreamer-app-1.0', required : get_option('bluez5-gstreamer'))
have_bluez5_gstreamer = false
if bluez5_gst_dep.found() and bluez5_gstapp_dep.found()
  have_bluez5_gstreamer = true
  cdata.set('HAVE_GSTLDAC', 1)
  cdata.set('HAVE_GSTAPTX', 1)
endif


[build log]
https://launchpadlibrarian.net/551304449/buildlog_ubuntu-impish-amd64.pulseaudio_1%3A15.0+dfsg1-1ubuntu1_BUILDING.txt.gz

...
Dependency gstreamer-1.0 skipped: feature gstreamer disabled
Dependency gstreamer-app-1.0 skipped: feature gstreamer disabled
Dependency gstreamer-rtp-1.0 skipped: feature gstreamer disabled
Run-time dependency gstreamer-1.0 found: NO (tried pkgconfig and cmake)
Run-time dependency gstreamer-app-1.0 found: NO (tried pkgconfig and cmake)
...

Enable D-Bus:  true
  Enable BlueZ 5:  true
Enable native headsets:true
Enable  ofono headsets:true
Enable GStreamer based codecs: false

** Affects: pulseaudio (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
Previously reported as
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1904745


[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
root@casual-condor:/var/lib/nova# ll .ssh/
total 28
drwxr-xr-x  2 nova root 4096 Aug  3 10:43 ./
drwxr-xr-x 10 nova nova 4096 Aug  3 10:25 ../
-rw-r--r--  1 root root 1197 Aug  3 10:54 authorized_keys
-rw---  1 nova root 1823 Aug  3 10:25 id_rsa
-rw-r--r--  1 nova root  400 Aug  3 10:25 id_rsa.pub
-rw-r--r--  1 root root 5526 Aug  3 10:54 known_hosts

^^^ 600 to id_rsa

root@casual-condor:/var/lib/nova# find /var/lib/nova -type f -exec chmod 0644 "{}" + -o -type d -exec chmod 0755 "{}" +

root@casual-condor:/var/lib/nova# ll .ssh/
total 28
drwxr-xr-x  2 nova root 4096 Aug  3 10:43 ./
drwxr-xr-x 10 nova nova 4096 Aug  3 10:25 ../
-rw-r--r--  1 root root 1197 Aug  3 10:54 authorized_keys
-rw-r--r--  1 nova root 1823 Aug  3 10:25 id_rsa
-rw-r--r--  1 nova root  400 Aug  3 10:25 id_rsa.pub
-rw-r--r--  1 root root 5526 Aug  3 10:54 known_hosts

^^^ 644 to id_rsa


[Bug 1904580] Re: Permissions 0644 for '/var/lib/nova/.ssh/id_rsa' are too open

2021-08-04 Thread Nobuto Murata
> Charms were not upgraded while this broke. We simply upgrade the
packages.

If that's the case, the package maintainer script might be related. For
example:

$ grep /var/lib/nova /var/lib/dpkg/info/nova-common.postinst
--home /var/lib/nova \
chown -R nova:nova /var/lib/nova/
find /var/lib/nova -type f -exec chmod 0644 "{}" + -o -type d -exec chmod 
0755 "{}" +
find /var/lib/nova -name "console.log" -exec chmod 0600 "{}" +
find /var/lib/nova -name "console.log" -exec chown root:root "{}" +
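For illustration, here is a minimal standalone sketch (a temporary directory stands in for /var/lib/nova; this is not the actual maintainer script) of why a blanket `find -type f -exec chmod 0644` like the one above would reset a previously locked-down id_rsa back to 644 on every package upgrade:

```python
import os
import stat
import tempfile

def blanket_chmod(root):
    """Mimic the postinst: chmod 0644 every regular file under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            os.chmod(os.path.join(dirpath, name), 0o644)

# Hypothetical layout standing in for /var/lib/nova
root = tempfile.mkdtemp()
ssh_dir = os.path.join(root, ".ssh")
os.mkdir(ssh_dir)
key = os.path.join(ssh_dir, "id_rsa")
open(key, "w").close()
os.chmod(key, 0o600)          # private key starts out locked down

blanket_chmod(root)           # a package upgrade re-runs the postinst
print(oct(stat.S_IMODE(os.stat(key).st_mode)))  # prints 0o644 -- too open again
```

A fix along these lines would presumably need the postinst's find invocation to exclude the .ssh directory (or any file that intentionally carries stricter permissions).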


** Project changed: nova => nova (Ubuntu)


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
[focal-victoria]

All of the uploads succeeded, and -proposed shortened the upload time for
the larger sizes.

$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.3.0-0ubuntu1~cloud1
  Candidate: 2.3.0-0ubuntu1~cloud1
  Version table:
 *** 2.3.0-0ubuntu1~cloud1 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/victoria/main amd64 Packages
100 /var/lib/dpkg/status
 2.3.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-updates/victoria/main amd64 Packages
 2.0.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

size     ceph       swift      s3          s3 -proposed
0MiB     0m10.780s  0m2.762s   0m3.819s    0m3.235s
3MiB     0m3.334s   0m2.541s   0m3.019s    0m2.389s
5MiB     0m2.731s   0m2.244s   0m2.687s    0m2.417s
9MiB     0m2.725s   0m2.334s   0m2.243s    0m2.069s
10MiB    0m2.752s   0m2.315s   0m3.055s    0m2.120s
128MiB   0m4.404s   0m3.604s   0m56.566s   0m4.728s
512MiB   0m8.411s   0m6.021s   13m45.246s  0m17.453s
1024MiB  0m14.707s  0m11.430s  54m22.978s  0m55.937s

** Tags removed: verification-needed verification-victoria-needed
** Tags added: verification-done verification-victoria-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934849

Title:
  s3 backend takes time exponentially

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1934849/+subscriptions



[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
[bionic-ussuri]

All of the uploads succeeded, and -proposed shortened the upload time for
the larger sizes.

$ sudo apt-get install python3-glance-store/bionic-proposed
$ sudo systemctl restart glance-api

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.0.0-0ubuntu2~cloud0
  Candidate: 2.0.0-0ubuntu2~cloud0
  Version table:
 *** 2.0.0-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-proposed/ussuri/main amd64 Packages
100 /var/lib/dpkg/status
 2.0.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 0.23.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages

size     ceph       swift      s3          s3 -proposed
0MiB     0m2.985s   0m2.832s   0m2.487s    0m2.293s
3MiB     0m2.761s   0m2.175s   0m1.970s    0m2.367s
5MiB     0m2.729s   0m2.224s   0m2.031s    0m1.981s
9MiB     0m2.783s   0m2.265s   0m2.168s    0m2.087s
10MiB    0m2.824s   0m2.256s   0m2.122s    0m2.037s
128MiB   0m4.703s   0m2.966s   0m47.824s   0m3.869s
512MiB   0m8.979s   0m5.110s   12m27.668s  0m16.269s
1024MiB  0m15.121s  0m15.742s  50m56.406s  0m48.282s

** Tags added: verification-ussuri-done


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
Just for the record, this is the current status with focal-victoria. No
diff between -updates and -proposed.

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.3.0-0ubuntu1~cloud0
  Candidate: 2.3.0-0ubuntu1~cloud0
  Version table:
 *** 2.3.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-updates/victoria/main amd64 Packages
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/victoria/main amd64 Packages
100 /var/lib/dpkg/status
 2.0.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-27 Thread Nobuto Murata
@Corey,

Somehow the binary package for cloud-archive:victoria-proposed has not been
published yet. Can you please double-check the build status of the package?
I just don't know where to look.

cloud1 in the source vs cloud0 in the binary.


$ curl -s 
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-proposed/victoria/main/source/Sources.gz
 | zcat | grep -A2 'Package: python-glance-store'
Package: python-glance-store
Binary: python-glance-store-doc, python3-glance-store
Version: 2.3.0-0ubuntu1~cloud1


$ curl -s 
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-proposed/victoria/main/binary-amd64/Packages
 | grep -A7 'Package: python3-glance-store'
Package: python3-glance-store
Source: python-glance-store
Priority: optional
Section: python
Installed-Size: 983
Maintainer: Ubuntu Developers 
Architecture: all
Version: 2.3.0-0ubuntu1~cloud0


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[focal-wallaby]

All of the uploads succeeded, and -proposed shortened the upload time for
the larger sizes.

$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.5.0-0ubuntu2~cloud0
  Candidate: 2.5.0-0ubuntu2~cloud0
  Version table:
 *** 2.5.0-0ubuntu2~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-proposed/wallaby/main amd64 Packages
100 /var/lib/dpkg/status
 2.5.0-0ubuntu1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
focal-updates/wallaby/main amd64 Packages
 2.0.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

size     ceph       swift      s3          s3 -proposed
0MiB     0m3.389s   0m2.876s   0m2.389s    0m4.459s
3MiB     0m2.788s   0m2.278s   0m2.076s    0m3.138s
5MiB     0m2.716s   0m2.337s   0m2.122s    0m3.246s
9MiB     0m3.506s   0m2.396s   0m2.274s    0m2.782s
10MiB    0m2.859s   0m2.366s   0m2.324s    0m2.829s
128MiB   0m4.514s   0m3.306s   0m55.145s   0m4.868s
512MiB   0m10.862s  0m9.692s   13m31.848s  0m21.378s
1024MiB  0m21.965s  0m27.575s  54m27.784s  1m4.299s

** Tags removed: verification-wallaby-needed
** Tags added: verification-wallaby-done


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[focal]

All of the uploads succeeded, and -proposed shortened the upload time for
the larger sizes.

$ sudo apt-get install python3-glance-store/focal-proposed
$ sudo systemctl restart glance-api

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.0.0-0ubuntu2
  Candidate: 2.0.0-0ubuntu2
  Version table:
 *** 2.0.0-0ubuntu2 400
400 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 Packages
100 /var/lib/dpkg/status
 2.0.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages

size     ceph       swift      s3          s3 -proposed
0MiB     0m3.571s   0m2.249s   0m2.185s    0m1.959s
3MiB     0m2.724s   0m2.285s   0m1.986s    0m2.118s
5MiB     0m2.717s   0m2.694s   0m2.093s    0m2.213s
9MiB     0m2.749s   0m2.342s   0m2.357s    0m2.107s
10MiB    0m3.415s   0m2.342s   0m2.289s    0m2.139s
128MiB   0m4.644s   0m3.109s   0m54.343s   0m4.287s
512MiB   0m11.121s  0m6.246s   13m35.948s  0m16.853s
1024MiB  0m16.292s  0m11.441s  54m24.339s  0m52.231s

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-26 Thread Nobuto Murata
[hirsute]

All of the uploads succeeded, and -proposed shortened the upload time for
the larger sizes.

$ sudo apt-get install python3-glance-store/hirsute-proposed
$ sudo systemctl restart glance-api

$ apt-cache policy python3-glance-store
python3-glance-store:
  Installed: 2.5.0-0ubuntu2
  Candidate: 2.5.0-0ubuntu2
  Version table:
 *** 2.5.0-0ubuntu2 400
400 http://archive.ubuntu.com/ubuntu hirsute-proposed/main amd64 
Packages
100 /var/lib/dpkg/status
 2.5.0-0ubuntu1 500
500 http://archive.ubuntu.com/ubuntu hirsute/main amd64 Packages


size     ceph       swift      s3          s3 -proposed
0MiB     0m2.495s   0m2.394s   0m2.262s    0m2.352s
3MiB     0m2.721s   0m2.270s   0m2.144s    0m2.430s
5MiB     0m2.726s   0m2.314s   0m2.200s    0m2.177s
9MiB     0m2.737s   0m2.377s   0m2.664s    0m2.238s
10MiB    0m2.772s   0m2.350s   0m2.756s    0m2.160s
128MiB   0m4.261s   0m3.381s   0m57.001s   0m5.542s
512MiB   0m11.619s  0m11.734s  14m13.448s  0m24.012s
1024MiB  0m23.787s  0m20.756s  56m37.906s  1m3.716s

** Tags removed: verification-needed-hirsute
** Tags added: verification-done-hirsute


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-20 Thread Nobuto Murata
My update to the bug description was somehow rolled back (by me, according
to the record); trying again.

** Description changed:

  [Impact]
- [Test Case]
- I have a test Ceph cluster as an object storage with both Swift and S3 
protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an 
image upload completes quickly enough. But with S3 backend Glance, it takes 
much more time to upload an image and it seems to rise exponentially.
+ 
+ Glance with S3 backend cannot accept image uploads in a realistic time
+ frame. For example, a 1GB image upload takes ~60 minutes although other
+ backends such as swift can complete it within 10 seconds.
+ 
+ [Test Plan]
+ 
+ 1. Deploy a partial OpenStack with multiple Glance backends including S3
+   (zaza test bundles can be used with "ceph" which will set up "rbd", 
"swift", and "s3" backends - 
https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
+ 2. Upload multiple images with variety of sizes
+ 3. confirm the duration of uploading images are shorter in general after 
applying the updated package
+   (expected duration of 1GB is from ~60 minutes to 1-3 minutes)
+ 
+ for backend in ceph swift s3; do
+ echo "[$backend]"
+ for i in {0,3,5,9,10,128,512,1024}; do
+ dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
+ echo "${i}MiB"
+ time glance image-create \
+ --store $backend \
+ --file my-image.img --name "my-image-${backend}-${i}MiB" \
+ --disk-format raw --container-format bare \
+ --progress
+ done
+ done
+ 
+ [Where problems could occur]
+ 
+ Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case where 
image uploads fail if the size of the image is less than WRITE_CHUNKSIZE. Or 
there might be an unexpected latency in the worst case scenario. We will try to 
address the concerns by testing multiple images uploads with multiple sizes 
including some corner cases as follows:
+ - 0 - zero
+ - 3MiB - less than the new WRITE_CHUNKSIZE(5MiB)
+ - 5MiB - exactly same as the new WRITE_CHUNKSIZE(5MiB)
+ - 9MiB - bigger than new WRITE_CHUNKSIZE(5MiB) but less than twice
+ - 10MiB - exactly twice as the new WRITE_CHUNKSIZE(5MiB)
+ - 128MiB, 512MiB, 1024MiB - some large images
+ 
+ 
+ 
+ I have a test Ceph cluster as an object storage with both Swift and S3
+ protocols enabled for Glance (Ussuri). When I use Swift backend with
+ Glance, an image upload completes quickly enough. But with S3 backend
+ Glance, it takes much more time to upload an image and it seems to rise
+ exponentially.
  
  It's worth noting that when uploading an image with S3 backend, a single
  core is consumed 100% by glance-api process.
- 
- for backend in swift s3; do
- for i in {8,16,32,64,128,512}; do
- dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
- /usr/bin/time --format=%E glance image-create \
- --store $backend \
- --file my-image.img --name my-image \
- --disk-format raw --container-format bare \
- --progress
- done
- done
  
  [swift]
  8MB   -  2.4s
  16MB  -  2.8s
  32MB  -  2.6s
  64MB  -  2.7s
  128MB -  3.1s
  ...
  512MB -  5.9s
  
  [s3]
  8MB   -  2.2s
  16MB  -  2.9s
  32MB  -  5.5s
  64MB  - 16.3s
  128MB - 54.9s
  ...
  512MB - 14m26s
  
  Btw, downloading of 512MB image with S3 backend can complete with less
  than 10 seconds.
  
  $ time openstack image save --file downloaded.img 
917c5424-4350-4bc5-98ca-66d40e101843
  real0m5.673s
  
  $ du -h downloaded.img
  512Mdownloaded.img
  
  [/etc/glance/glance-api.conf]
  
  enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3
  
  [swift]
  auth_version = 3
  auth_address = http://192.168.151.131:5000/v3
  ...
  container = glance
  large_object_size = 5120
  large_object_chunk_size = 200
  
  [s3]
  s3_store_host = http://192.168.151.137:80/
  ...
  s3_store_bucket = zaza-glance-s3-test
  s3_store_large_object_size = 5120
  s3_store_large_object_chunk_size = 200
  
+ ProblemType: Bug
+ DistroRelease: Ubuntu 20.04
  ProblemType: BugDistroRelease: Ubuntu 20.04
  Package: python3-glance-store 2.0.0-0ubuntu1
  ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
  Uname: Linux 5.4.0-77-generic x86_64
- NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag 
binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag 
nft_masq nft_chain_nat bridge stp llc vhost_vsock 
vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables 
ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables 
iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 
nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath 
scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid 
serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs 
zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq 

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-19 Thread Nobuto Murata
** Description changed:

  [Impact]
- 
- Glance with S3 backend cannot accept image uploads in a realistic time
- frame. For example, an 1GB image upload takes ~60 minutes although other
- backends such as swift can complete it with 10 seconds.
- 
- [Test Plan]
- 
- 1. Deploy a partial OpenStack with multiple Glance backends including S3
-   (zaza test bundles can be used with "ceph" which will set up "rbd", 
"swift", and "s3" backends - 
https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
- 2. Upload multiple images with variety of sizes
- 3. confirm the duration of uploading images are shorter in general after 
applying the updated package
-   (expected duration of 1GB is from ~60 minutes to 1-3 minutes)
- 
- for backend in ceph swift s3; do
- echo "[$backend]"
- for i in {0,3,5,9,10,128,512,1024}; do
- dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
- echo "${i}MiB"
- time glance image-create \
- --store $backend \
- --file my-image.img --name "my-image-${backend}-${i}MiB" \
- --disk-format raw --container-format bare \
- --progress
- done
- done
- 
- 
- [Where problems could occur]
- 
- Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case where 
image uploads fail if the size of the image is less than WRITE_CHUNKSIZE. Or 
there might be an unexpected latency in the worst case scenario. We will try to 
address the concerns by testing multiple images uploads with multiple sizes 
including some corner cases as follows:
- - 0 - zero
- - 3MiB - less than the new WRITE_CHUNKSIZE(5MiB)
- - 5MiB - exactly same as the new WRITE_CHUNKSIZE(5MiB)
- - 9MiB - bigger than new WRITE_CHUNKSIZE(5MiB) but less than twice
- - 10MiB - exactly twice as the new WRITE_CHUNKSIZE(5MiB)
- - 128MiB, 512MiB, 1024MiB - some large images
- 
- 
- 
- 
- I have a test Ceph cluster as an object storage with both Swift and S3
- protocols enabled for Glance (Ussuri). When I use Swift backend with
- Glance, an image upload completes quickly enough. But with S3 backend
- Glance, it takes much more time to upload an image and it seems to rise
- exponentially.
+ [Test Case]
+ I have a test Ceph cluster as an object storage with both Swift and S3 
protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an 
image upload completes quickly enough. But with S3 backend Glance, it takes 
much more time to upload an image and it seems to rise exponentially.
  
  It's worth noting that when uploading an image with S3 backend, a single
  core is consumed 100% by glance-api process.
+ 
+ for backend in swift s3; do
+ for i in {8,16,32,64,128,512}; do
+ dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
+ /usr/bin/time --format=%E glance image-create \
+ --store $backend \
+ --file my-image.img --name my-image \
+ --disk-format raw --container-format bare \
+ --progress
+ done
+ done
  
  [swift]
  8MB   -  2.4s
  16MB  -  2.8s
  32MB  -  2.6s
  64MB  -  2.7s
  128MB -  3.1s
  ...
  512MB -  5.9s
  
  [s3]
  8MB   -  2.2s
  16MB  -  2.9s
  32MB  -  5.5s
  64MB  - 16.3s
  128MB - 54.9s
  ...
  512MB - 14m26s
  
  Btw, downloading of 512MB image with S3 backend can complete with less
  than 10 seconds.
  
  $ time openstack image save --file downloaded.img 
917c5424-4350-4bc5-98ca-66d40e101843
  real0m5.673s
  
  $ du -h downloaded.img
  512Mdownloaded.img
  
  [/etc/glance/glance-api.conf]
  
  enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3
  
  [swift]
  auth_version = 3
  auth_address = http://192.168.151.131:5000/v3
  ...
  container = glance
  large_object_size = 5120
  large_object_chunk_size = 200
  
  [s3]
  s3_store_host = http://192.168.151.137:80/
  ...
  s3_store_bucket = zaza-glance-s3-test
  s3_store_large_object_size = 5120
  s3_store_large_object_chunk_size = 200
  
  ProblemType: BugDistroRelease: Ubuntu 20.04
  Package: python3-glance-store 2.0.0-0ubuntu1
  ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
  Uname: Linux 5.4.0-77-generic x86_64
+ NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag 
binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag 
nft_masq nft_chain_nat bridge stp llc vhost_vsock 
vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables 
ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables 
iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 
nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath 
scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid 
serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs 
zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor 
async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul 
crc32_pclmul cirrus ghash_clmulni_intel 

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-19 Thread Nobuto Murata
** Description changed:

  [Impact]
- [Test Case]
- I have a test Ceph cluster as an object storage with both Swift and S3 
protocols enabled for Glance (Ussuri). When I use Swift backend with Glance, an 
image upload completes quickly enough. But with S3 backend Glance, it takes 
much more time to upload an image and it seems to rise exponentially.
+ 
+ Glance with S3 backend cannot accept image uploads in a realistic time
+ frame. For example, a 1GB image upload takes ~60 minutes although other
+ backends such as swift can complete it within 10 seconds.
+ 
+ [Test Plan]
+ 
+ 1. Deploy a partial OpenStack with multiple Glance backends including S3
+   (zaza test bundles can be used with "ceph" which will set up "rbd", 
"swift", and "s3" backends - 
https://opendev.org/openstack/charm-glance/src/branch/master/tests/tests.yaml)
+ 2. Upload multiple images with variety of sizes
+ 3. confirm the duration of uploading images are shorter in general after 
applying the updated package
+   (expected duration of 1GB is from ~60 minutes to 1-3 minutes)
+ 
+ for backend in ceph swift s3; do
+ echo "[$backend]"
+ for i in {0,3,5,9,10,128,512,1024}; do
+ dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
+ echo "${i}MiB"
+ time glance image-create \
+ --store $backend \
+ --file my-image.img --name "my-image-${backend}-${i}MiB" \
+ --disk-format raw --container-format bare \
+ --progress
+ done
+ done
+ 
+ 
+ [Where problems could occur]
+ 
+ Since we bump WRITE_CHUNKSIZE from 64KiB to 5MiB, there might be a case where 
image uploads fail if the size of the image is less than WRITE_CHUNKSIZE. Or 
there might be an unexpected latency in the worst case scenario. We will try to 
address the concerns by testing multiple images uploads with multiple sizes 
including some corner cases as follows:
+ - 0 - zero
+ - 3MiB - less than the new WRITE_CHUNKSIZE(5MiB)
+ - 5MiB - exactly same as the new WRITE_CHUNKSIZE(5MiB)
+ - 9MiB - bigger than new WRITE_CHUNKSIZE(5MiB) but less than twice
+ - 10MiB - exactly twice as the new WRITE_CHUNKSIZE(5MiB)
+ - 128MiB, 512MiB, 1024MiB - some large images
+ 
+ 
+ 
+ 
+ I have a test Ceph cluster as an object storage with both Swift and S3
+ protocols enabled for Glance (Ussuri). When I use Swift backend with
+ Glance, an image upload completes quickly enough. But with S3 backend
+ Glance, it takes much more time to upload an image and it seems to rise
+ exponentially.
  
  It's worth noting that when uploading an image with S3 backend, a single
  core is consumed 100% by glance-api process.
- 
- for backend in swift s3; do
- for i in {8,16,32,64,128,512}; do
- dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
- time glance image-create \
- --store $backend \
- --file my-image.img --name my-image \
- --disk-format raw --container-format bare \
- --progress
- done
- done
  
  [swift]
  8MB   -  2.4s
  16MB  -  2.8s
  32MB  -  2.6s
  64MB  -  2.7s
  128MB -  3.1s
  ...
  512MB -  5.9s
  
  [s3]
  8MB   -  2.2s
  16MB  -  2.9s
  32MB  -  5.5s
  64MB  - 16.3s
  128MB - 54.9s
  ...
  512MB - 14m26s
  
  Btw, downloading of 512MB image with S3 backend can complete with less
  than 10 seconds.
  
  $ time openstack image save --file downloaded.img 
917c5424-4350-4bc5-98ca-66d40e101843
  real0m5.673s
  
  $ du -h downloaded.img
  512Mdownloaded.img
  
  [/etc/glance/glance-api.conf]
  
  enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3
  
  [swift]
  auth_version = 3
  auth_address = http://192.168.151.131:5000/v3
  ...
  container = glance
  large_object_size = 5120
  large_object_chunk_size = 200
  
  [s3]
  s3_store_host = http://192.168.151.137:80/
  ...
  s3_store_bucket = zaza-glance-s3-test
  s3_store_large_object_size = 5120
  s3_store_large_object_chunk_size = 200
  
- ProblemType: Bug
- DistroRelease: Ubuntu 20.04
+ ProblemType: BugDistroRelease: Ubuntu 20.04
  Package: python3-glance-store 2.0.0-0ubuntu1
  ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
  Uname: Linux 5.4.0-77-generic x86_64
- NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag 
binfmt_misc veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag 
nft_masq nft_chain_nat bridge stp llc vhost_vsock 
vmw_vsock_virtio_transport_common vhost vsock ebtable_filter ebtables 
ip6table_raw ip6table_mangle ip6table_nat ip6table_filter ip6_tables 
iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 
nf_defrag_ipv4 iptable_filter bpfilter nf_tables nfnetlink dm_multipath 
scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp input_leds kvm joydev mac_hid 
serio_raw qemu_fw_cfg sch_fq_codel ip_tables x_tables autofs4 btrfs 
zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor 
async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul 
crc32_pclmul cirrus 

[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Nobuto Murata
It's likely iputils-arping.

$ apt rdepends arping
arping
Reverse Depends:
  Conflicts: iputils-arping
  Depends: netconsole
  Depends: ifupdown-extra

$ apt rdepends iputils-arping
iputils-arping
Reverse Depends:
  Depends: neutron-l3-agent
  Recommends: python3-networking-arista
  Recommends: neutron-mlnx-agent
  Depends: neutron-l3-agent
  Recommends: whereami
  Recommends: python3-networking-arista
  Recommends: neutron-mlnx-agent
 |Depends: netconsole
  Depends: libguestfs-rescue
 |Depends: ifupdown-extra
  Depends: dracut-network
  Conflicts: arping

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1885169

Title:
  Some arping version only accept integer number as -w argument

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1885169/+subscriptions



[Bug 1885169] Re: Some arping version only accept integer number as -w argument

2021-07-16 Thread Nobuto Murata
On focal, there are two packages offering the arping binary:

[iputils-arping(main)]
$ sudo arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
arping: invalid argument: '1.5'

[arping(universe)]
$ arping -U -I eth0 -c 1 -w 1.5 10.48.98.1
ARPING 10.48.98.1

I don't know which one our charms install.
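As a workaround sketch for callers that control the arping invocation (the function name here is illustrative, not from any charm), a fractional `-w` deadline can be rounded up to a whole number of seconds so that the stricter iputils-arping accepts it:

```python
import math

def arping_deadline(seconds):
    """Round a fractional -w deadline up to a whole number of seconds,
    since iputils-arping rejects non-integer values like '1.5'."""
    return str(math.ceil(seconds))

print(arping_deadline(1.5))  # "2" -- accepted by both arping implementations
print(arping_deadline(1))    # "1" -- integers pass through unchanged
```

Rounding up rather than down keeps the effective deadline at least as long as requested.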


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-11 Thread Nobuto Murata
Subscribing Canonical's ~field-high to initiate the Ubuntu package's SRU
process in a timely manner.


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
> I *think* hash calculation and verifier have to be outside of the loop
to avoid the overhead. I will confirm it with a manual testing.

This hypothesis wasn't true; it was really about the chunk size.


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
I *think* the hash calculation and verifier have to be outside of the loop
to avoid the overhead. I will confirm it with manual testing.

for chunk in utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE):
image_data += chunk
image_size += len(chunk)
os_hash_value.update(chunk)
checksum.update(chunk)
if verifier:
verifier.update(chunk)
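One way to see why this loop gets slow with small chunks: `image_data += chunk` rewrites the whole accumulated buffer on every iteration, so the total bytes copied grow quadratically with the number of chunks. A rough standalone model (not the glance_store code) that counts the copying work:

```python
import hashlib

MiB = 1024 * 1024

def accumulate(total_size, chunksize):
    """Simulate the singlepart upload loop: concatenate chunks and hash them."""
    image_data = b""
    digest = hashlib.sha256()
    copied = 0                      # bytes re-copied by the += concatenations
    chunk = b"\0" * chunksize
    for _ in range(total_size // chunksize):
        copied += len(image_data)   # += copies the whole existing buffer
        image_data += chunk
        digest.update(chunk)
    return copied

small = accumulate(16 * MiB, 64 * 1024)  # 16 MiB in 64 KiB chunks
large = accumulate(16 * MiB, 5 * MiB)    # 16 MiB in 5 MiB chunks
print(small // large)                    # prints 136: ~136x more copying
```

The hashing work is the same either way; it is the repeated buffer copies that scale badly, which is consistent with larger chunk sizes (fewer iterations) fixing the slowdown.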


[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
Yeah, I put the same config for both s3 and swift on purpose, but tweaking
large_object_size didn't make any difference.

[swift]
large_object_size = 5120
large_object_chunk_size = 200

[s3]
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

After digging into the actual environment, it seems image_size is always zero,
so this condition never triggers a multipart upload; it always falls through to
a singlepart upload.
https://opendev.org/openstack/glance_store/src/branch/stable/ussuri/glance_store/_drivers/s3.py#L597

I guess Glance will never know the actual image_size at the beginning of 
getting image data from a client until getting all of the data. I just don't 
know what kind of cases we will get non-null value in "size".
https://docs.openstack.org/api-ref/image/v2/index.html?expanded=create-image-detail#create-image
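To illustrate the condition above (a hand-written paraphrase of the linked
s3.py code with hypothetical names, not the driver's actual function):

```python
def choose_upload(image_size, s3_store_large_object_size):
    # Paraphrase of the dispatch around s3.py#L597: multipart is only
    # selected when the advertised image_size crosses the configured
    # threshold. With image_size == 0 (size unknown at upload start),
    # this always falls through to singlepart.
    threshold = s3_store_large_object_size * 1024 * 1024  # config is in MB
    if image_size != 0 and image_size >= threshold:
        return "multipart"
    return "singlepart"
```

With s3_store_large_object_size = 5120 and image_size always zero, every
upload goes through the singlepart path regardless of the actual data size.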

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
And by using "4 * units.Mi", the upload time goes down to 20s.

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
Okay, as the utils.chunkreadable loop is where the time is spent, I've tried
a larger WRITE_CHUNKSIZE by hand. It cuts the time to upload a 512MB image
from 14 minutes to 60 seconds.

$ git diff
diff --git a/glance_store/_drivers/s3.py b/glance_store/_drivers/s3.py
index 1c18531..576c573 100644
--- a/glance_store/_drivers/s3.py
+++ b/glance_store/_drivers/s3.py
@@ -361,7 +361,7 @@ class Store(glance_store.driver.Store):
 EXAMPLE_URL = "s3://:@//"
 
 READ_CHUNKSIZE = 64 * units.Ki
-WRITE_CHUNKSIZE = READ_CHUNKSIZE
+WRITE_CHUNKSIZE = 1024 * units.Ki
 
 @staticmethod
 def get_schemes():
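Assuming the per-iteration copying is what dominates, the chunk-size bump
above cuts the number of loop iterations for a 512MB image by 16x, which is
in the same ballpark as the observed 14 min -> 60 s improvement. A quick
sanity check of the arithmetic:

```python
KiB = 1024
MiB = 1024 * KiB
image = 512 * MiB

iters_old = image // (64 * KiB)    # original WRITE_CHUNKSIZE: 8192 iterations
iters_new = image // (1024 * KiB)  # patched WRITE_CHUNKSIZE: 512 iterations

# If each iteration copies the whole buffer accumulated so far, total bytes
# copied scale roughly with the square of the iteration count, so 16x fewer
# iterations means on the order of a 16x speedup for the copying work.
speedup = iters_old // iters_new
```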

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
The code part in question is this for loop:
https://opendev.org/openstack/glance_store/src/branch/stable/ussuri/glance_store/_drivers/s3.py#L638-L644

2021-07-07 11:50:06.735 - def _add_singlepart
2021-07-07 11:50:06.736 - getting into utils.chunkreadable loop
2021-07-07 11:50:06.736 - loop invoked
2021-07-07 11:50:06.737 - loop invoked
2021-07-07 11:50:06.737 - loop invoked
...
2021-07-07 11:50:22.514 - loop invoked (1024 times in total)
2021-07-07 11:50:22.544 - put_object

So the time is spent before the data is even passed to boto3 via put_object.

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-07 Thread Nobuto Murata
S3 performance itself is not bad: uploading a 512MB object completes within
a few seconds. So I suppose the problem is in how the Glance S3 driver uses
boto3.

$ time python3 upload.py

real	0m3.644s
user	0m3.124s
sys	0m1.835s


$ cat upload.py 
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.151.137:80/",
    aws_access_key_id="Z99Y0ST1T8F66RUZXSJT",
    aws_secret_access_key="S5sYaE6pa8btV1HQLtz99RG2DNvX4CpzjOfDMjMZ",
)

with open("my-image.img", "rb") as f:
    s3.upload_fileobj(f, "zaza-glance-s3-test", "my-test-object")

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
Debug log of when uploading a 512MB image with S3 backend.

** Attachment added: "glance-api.log"
   
https://bugs.launchpad.net/ubuntu/+source/python-glance-store/+bug/1934849/+attachment/5509534/+files/glance-api.log

** Also affects: glance-store
   Importance: Undecided
   Status: New

[Bug 1934849] Re: s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
python3-boto3 1.9.253-1

[Bug 1934849] [NEW] s3 backend takes time exponentially

2021-07-06 Thread Nobuto Murata
Public bug reported:

I have a test Ceph cluster as an object storage with both Swift and S3
protocols enabled for Glance (Ussuri). When I use the Swift backend with
Glance, an image upload completes quickly enough. But with the S3 backend,
it takes much more time to upload an image, and the time seems to rise
exponentially.

It's worth noting that while uploading an image with the S3 backend, a
single core is consumed 100% by the glance-api process.

for backend in swift s3; do
    for i in {8,16,32,64,128,512}; do
        dd if=/dev/zero of=my-image.img bs=1M count=$i oflag=sync
        time glance image-create \
            --store $backend \
            --file my-image.img --name my-image \
            --disk-format raw --container-format bare \
            --progress
    done
done

[swift]
8MB   -  2.4s
16MB  -  2.8s
32MB  -  2.6s
64MB  -  2.7s
128MB -  3.1s
...
512MB -  5.9s

[s3]
8MB   -  2.2s
16MB  -  2.9s
32MB  -  5.5s
64MB  - 16.3s
128MB - 54.9s
...
512MB - 14m26s
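The measured s3 times above are consistent with quadratic scaling:
quadrupling the image size from 128MB to 512MB multiplies the upload time by
roughly 4 squared = 16. A quick check of the numbers:

```python
# Measured s3 upload times from the table above.
t_128 = 54.9           # seconds for the 128MB image
t_512 = 14 * 60 + 26   # 14m26s for the 512MB image, in seconds

# Ratio is ~15.8, very close to (512 / 128) ** 2 == 16,
# as expected for an O(n^2) upload path.
ratio = t_512 / t_128
```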

Btw, downloading the 512MB image with the S3 backend completes in less
than 10 seconds.

$ time openstack image save --file downloaded.img 917c5424-4350-4bc5-98ca-66d40e101843
real	0m5.673s

$ du -h downloaded.img 
512M	downloaded.img


[/etc/glance/glance-api.conf]

enabled_backends = local:file, ceph:rbd, swift:swift, s3:s3

[swift]
auth_version = 3
auth_address = http://192.168.151.131:5000/v3
...
container = glance
large_object_size = 5120
large_object_chunk_size = 200


[s3]
s3_store_host = http://192.168.151.137:80/
...
s3_store_bucket = zaza-glance-s3-test
s3_store_large_object_size = 5120
s3_store_large_object_chunk_size = 200

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: python3-glance-store 2.0.0-0ubuntu1
ProcVersionSignature: Ubuntu 5.4.0-77.86-generic 5.4.119
Uname: Linux 5.4.0-77-generic x86_64
NonfreeKernelModules: bluetooth ecdh_generic ecc tcp_diag inet_diag binfmt_misc 
veth zfs zunicode zlua zavl icp zcommon znvpair spl unix_diag nft_masq 
nft_chain_nat bridge stp llc vhost_vsock vmw_vsock_virtio_transport_common 
vhost vsock ebtable_filter ebtables ip6table_raw ip6table_mangle ip6table_nat 
ip6table_filter ip6_tables iptable_raw iptable_mangle iptable_nat nf_nat 
nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter nf_tables 
nfnetlink dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua kvm_amd ccp 
input_leds kvm joydev mac_hid serio_raw qemu_fw_cfg sch_fq_codel ip_tables 
x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov 
async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 
multipath linear crct10dif_pclmul crc32_pclmul cirrus ghash_clmulni_intel 
drm_kms_helper virtio_net syscopyarea aesni_intel sysfillrect sysimgblt 
fb_sys_fops crypto_simd cryptd drm virtio_blk glue_helper net_failover psmouse 
failover floppy i2c_piix4 pata_acpi
ApportVersion: 2.20.11-0ubuntu27.18
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Jul  7 04:46:05 2021
PackageArchitecture: all
ProcEnviron:
 TERM=screen-256color
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: python-glance-store
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: glance-store
 Importance: Undecided
 Status: New

** Affects: python-glance-store (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal uec-images

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16) or "snapd" snap

2021-04-14 Thread Nobuto Murata
Now that "snapd" snap is seeded into the base image of focal along with
core18 for "lxd" snap . That actually solves the original issue in a
different way. We no longer have to upload "snapd" snap using a charm
resource.

Bionic is still affected, but I don't think it's common for new
deployments these days so I'm closing the charm task as Invalid until
somebody really needs the work for bionic.

[/var/lib/snapd/seed/seed.yaml on focal]
snaps:
  -
name: core18
channel: stable
file: core18_1997.snap
  -
name: snapd
channel: stable
file: snapd_11402.snap
  -
name: lxd
channel: 4.0/stable/ubuntu-20.04
file: lxd_19647.snap


** Changed in: charm-etcd
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1891259

Title:
  snap installation with core18 fails at 'Ensure prerequisites for
  "etcd" are available' in air-gapped environments as snapd always
  requires core(16) or "snapd" snap

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-etcd/+bug/1891259/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1903221] Re: ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-NN/: (11) Resource temporarily unavailable

2021-04-05 Thread Nobuto Murata
Adding Ubuntu Ceph packaging task here.

30-ceph-osd.conf file is owned by ceph-osd package as follows.

$ dpkg -S /etc/sysctl.d/30-ceph-osd.conf
ceph-osd: /etc/sysctl.d/30-ceph-osd.conf

However, as far as I can see in 15.2.8-0ubuntu0.20.04.1/focal, there is
no step in /var/lib/dpkg/info/ceph-osd.postinst that activates the sysctl
file, so basically a reboot or manual intervention is required to apply
the value shipped in the package. I think that's why fs.aio-max-nr wasn't
applied in the first place.
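As a workaround until the packaging activates the file itself, the shipped
values can be applied without a reboot (standard sysctl handling; I have
not verified this is what the charm should do):

```shell
# Apply just the package's fragment:
sudo sysctl -p /etc/sysctl.d/30-ceph-osd.conf

# Or re-run the systemd unit that applies every sysctl.d fragment:
sudo systemctl restart systemd-sysctl.service

# Verify the value is now active:
sysctl -n fs.aio-max-nr
```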

** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1903221

Title:
  ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-
  NN/: (11) Resource temporarily unavailable

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-osd/+bug/1903221/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-28 Thread Nobuto Murata
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635
> 64065 | gnocchi   | Gnocchi - Metric as a Service

** Bug watch added: Debian Bug tracker #983635
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983635

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1657202

Title:
  OpenStack not installed with consistent uid/gid for glance/cinder/nova
  users

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cinder/+bug/1657202/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-25 Thread Nobuto Murata
Here is the current maintainer code:
https://git.launchpad.net/ubuntu/+source/openstack-pkg-tools/tree/pkgos_func?h=ubuntu/focal-proposed#n786

and the previous upstream bug in Debian:
https://bugs.debian.org/884178

[Bug 1657202] Re: OpenStack not installed with consistent uid/gid for glance/cinder/nova users

2021-02-25 Thread Nobuto Murata
Excuse me for reviving an old bug report, but Gnocchi also requires a
static uid/gid to support the NFS use case.

https://gnocchi.xyz/intro.html
> If you need to scale the number of server with the file driver, you can 
> export and share the data via NFS among all Gnocchi processes.

** Also affects: gnocchi (Ubuntu)
   Importance: Undecided
   Status: New

** Bug watch added: Debian Bug tracker #884178
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884178

[Bug 1916610] Re: MAAS booting with ga-20.04 kernel results in nvme bcache device can't be registered

2021-02-23 Thread Nobuto Murata
Initially we thought we were hit by
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1910201

But it looks like some patches are already in focal GA kernel like
https://kernel.ubuntu.com/git/ubuntu/ubuntu-focal.git/commit/?id=d256617be44956fe4f048295a71b31d44d9104d9

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916610

Title:
  MAAS booting with ga-20.04 kernel results in nvme bcache device can't
  be registered

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1916610/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16) or "snapd" snap

2021-01-06 Thread Nobuto Murata
** Summary changed:

- snap installation with core18 fails at 'Ensure prerequisites for "etcd" are 
available' in air-gapped environments as snapd always requires core(16)
+ snap installation with core18 fails at 'Ensure prerequisites for "etcd" are 
available' in air-gapped environments as snapd always requires core(16) or 
"snapd" snap

[Bug 1908770] [NEW] [pull-pkg] unnecessary crash report with ubuntutools.pullpkg.InvalidPullValueError in parse_pull(): Must specify --pull

2020-12-18 Thread Nobuto Murata
Public bug reported:

When invoking the command without `--pull`, the command tells me "Must
specify --pull", which is good. However, it also triggers the crash file
collection process as /var/crash/_usr_bin_pull-pkg.1000.crash and gives an
unnecessary traceback as follows.

$ pull-pkg my-package
Traceback (most recent call last):
  File "/usr/bin/pull-pkg", line 29, in <module>
    PullPkg.main()
  File "/usr/lib/python3/dist-packages/ubuntutools/pullpkg.py", line 96, in main
    cls(*args, **kwargs).pull()
  File "/usr/lib/python3/dist-packages/ubuntutools/pullpkg.py", line 354, in pull
    options = self.parse_args(args)
  File "/usr/lib/python3/dist-packages/ubuntutools/pullpkg.py", line 178, in parse_args
    return self.parse_options(vars(newparser.parse_args(args)))
  File "/usr/lib/python3/dist-packages/ubuntutools/pullpkg.py", line 292, in parse_options
    options['pull'] = self.parse_pull(options['pull'])
  File "/usr/lib/python3/dist-packages/ubuntutools/pullpkg.py", line 182, in parse_pull
    raise InvalidPullValueError("Must specify --pull")
ubuntutools.pullpkg.InvalidPullValueError: Must specify --pull


$ grep Title: /var/crash/_usr_bin_pull-pkg.1000.crash 
Title: pull-pkg crashed with ubuntutools.pullpkg.InvalidPullValueError in parse_pull(): Must specify --pull

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: ubuntu-dev-tools 0.176
ProcVersionSignature: Ubuntu 5.4.0-58.64-generic 5.4.73
Uname: Linux 5.4.0-58-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu27.14
Architecture: amd64
CasperMD5CheckResult: skip
CurrentDesktop: ubuntu:GNOME
Date: Sat Dec 19 16:28:49 2020
InstallationDate: Installed on 2020-03-22 (271 days ago)
InstallationMedia: Ubuntu 20.04 LTS "Focal Fossa" - Alpha amd64 (20200321)
PackageArchitecture: all
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: ubuntu-dev-tools
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: ubuntu-dev-tools (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1908770

Title:
  [pull-pkg] unnecessary crash report with
  ubuntutools.pullpkg.InvalidPullValueError in parse_pull(): Must
  specify --pull

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-dev-tools/+bug/1908770/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-12-09 Thread Nobuto Murata
** Tags added: ps5

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900668

Title:
  MAAS PXE Boot stalls with grub 2.02

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas-images/+bug/1900668/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-12-07 Thread Nobuto Murata
> Michał Ajduk based on the above comments #13 and #15, can you please
> confirm if there is anything outstanding or not working w.r.t. this
> issue?

Michal's comment #17 superseded comment #13, so there is indeed still an
outstanding issue here.

** Changed in: maas-images
   Status: Incomplete => New

** Changed in: grub2 (Ubuntu)
   Status: Incomplete => New

[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-11-13 Thread Nobuto Murata
** Also affects: maas-images
   Importance: Undecided
   Status: New

[Bug 1773765] Re: There is a possibility that 'running' notification will remain

2020-09-16 Thread Nobuto Murata
** Also affects: masakari (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1773765

Title:
  There is a possibility that 'running' notification will remain

To manage notifications about this bug go to:
https://bugs.launchpad.net/masakari/+bug/1773765/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)

2020-08-20 Thread Nobuto Murata
> On top of the already existing snaps, can you also download the snapd snap 
> and ack its assertion?
> 
> So the command sequence looks like this:
> 
> snap download snapd
> snap download core18
> snap download etcd
> 
> # on the host
> snap ack ./snapd_*.assert
> snap ack ./core18_*.assert
> snap ack ./etcd_*.assert
> sudo snap install ./snapd_*.snap
> sudo snap install ./core18_*.snap
> sudo snap install ./etcd_*.snap
> 
> This works in a 18.04 VM for me.

Interesting, it works for me too. Does that suggest that there is a
"fix" for the original issue somewhere between snapd 2.45.1+18.04.2
(packaged) and 2.45.3.1 (snap)?

$ snap version
snap2.45.3.1
snapd   2.45.3.1
series  16
ubuntu  18.04
kernel  4.15.0-112-generic

$ snap list
NameVersion   Rev   Tracking  Publisher   Notes
core18  20200724  1885  - canonical✓  base
etcd3.4.5 230   - canonical✓  -
snapd   2.45.3.1  8790  - canonical✓  snapd

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments as snapd always requires core(16)

2020-08-19 Thread Nobuto Murata
** Summary changed:

- snap installation with core18 fails at 'Ensure prerequisites for "etcd" are 
available' in air-gapped environments
+ snap installation with core18 fails at 'Ensure prerequisites for "etcd" are 
available' in air-gapped environments as snapd always requires core(16)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1891259

Title:
  snap installation with core18 fails at 'Ensure prerequisites for
  "etcd" are available' in air-gapped environments as snapd always
  requires core(16)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-etcd/+bug/1891259/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1891259] Re: snap installation with core18 fails at 'Ensure prerequisites for "etcd" are available' in air-gapped environments

2020-08-19 Thread Nobuto Murata
Hi,
 
> In this environment, can you show all of the places that snapd could be 
> re-execing to?

The assumption here is that the unit is deployed by Juju and based on a
cloud image instead of a standard desktop or server image. Thus, there
is no core snap available out of the box.

> What is the output of
> 
> ```
> snap list core

$ snap list core
error: no matching snaps installed

> snap list snapd

$ snap list snapd
error: no matching snaps installed

> apt show snapd
> apt show snapd 2>/dev/null | grep Version:

I'm giving the output of policy instead of show.

$ apt policy snapd
snapd:
  Installed: 2.45.1+18.04.2
  Candidate: 2.45.1+18.04.2
  Version table:
 *** 2.45.1+18.04.2 500
500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages
100 /var/lib/dpkg/status
 2.32.5+18.04 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

> snap version

$ snap version
snap2.45.1+18.04.2
snapd   2.45.1+18.04.2
series  16
ubuntu  18.04
kernel  4.15.0-112-generic

> ```
> 
> and also
> 
> ```
> snap list
> ```

$ snap list
No snaps are installed yet. Try 'snap install hello-world'.


** Changed in: snapd
   Status: Incomplete => New

** Changed in: snapd (Ubuntu)
   Status: Incomplete => New
