[Bug 2064815] Re: Always black screen on first reboot after fresh install 24.04, Intel and AMD GPU

2024-11-07 Thread Gábor Mészáros
*** This bug is a duplicate of bug 2063143 ***
https://bugs.launchpad.net/bugs/2063143

Hello, I recently updated KDE neon to 24.04. This workaround worked for
me on a Ryzen 5 3500U.

Interestingly, the error does not come up with the XanMod kernel. With my custom
kernel and with Liquorix I had to use the workaround.
All kernels tested were the latest 6.11.6.

Can you check whether the XanMod kernel also works for you?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064815

Title:
  Always black screen on first reboot after fresh install 24.04, Intel
  and AMD GPU

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2064815/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933492] [NEW] Subnet class does not support configuring allow_dns

2021-06-24 Thread Gábor Mészáros
Public bug reported:

Cross-posting bug: https://github.com/maas/python-libmaas/issues/261

In python3-libmaas, the Subnet class does not allow setting the allow_dns
option, although it can be configured through the CLI and GUI:

maas  subnet update 2 allow_dns=false

https://github.com/maas/maas/blob/master/src/maasserver/api/subnets.py#L100-L101

(Side note: the CLI only accepts false/true, not 0/1 as the documentation
suggests.)
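Until the Subnet class grows an allow_dns attribute, one workaround is to call the MAAS 2.0 REST API directly, as the CLI does. The sketch below only builds the request; actually sending it (with OAuth1 signing) is left out, and the helper name is hypothetical. The endpoint path follows the subnets API linked above.

```python
# Hypothetical helper: build the HTTP request needed to toggle allow_dns on a
# subnet via the MAAS 2.0 REST API, bypassing the libmaas Subnet class.

def build_allow_dns_update(maas_url, subnet_id, allow_dns):
    """Return (method, url, form_data) for updating allow_dns on a subnet."""
    url = "{}/api/2.0/subnets/{}/".format(maas_url.rstrip("/"), subnet_id)
    # The API expects the strings "true"/"false" (not 0/1, despite the docs).
    data = {"allow_dns": "true" if allow_dns else "false"}
    return "PUT", url, data
```

The resulting tuple could then be passed to an OAuth1-capable HTTP client, e.g. requests.put(url, data=data, auth=...).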

** Affects: python-libmaas (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

+ Cross-posting bug: https://github.com/maas/python-libmaas/issues/261
+ 
  in python3-libmaas class Subnet() does not allow setting allow_dns option.
  Through the cli and gui it can be configured.
  
  maas  subnet update 2 allow_dns=false
  
  
https://github.com/maas/maas/blob/master/src/maasserver/api/subnets.py#L100-L101
  
  (side note: cli only allows false/true, not 0/1 as the documentation
  suggests).

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933492

Title:
  Subnet class does not support configuring allow_dns

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/python-libmaas/+bug/1933492/+subscriptions


[Bug 1900668] Re: MAAS PXE Boot stalls with grub 2.02

2020-10-21 Thread Gábor Mészáros
** Description changed:

  # ENVIRONMENT
  MAAS version (SNAP):
-   maas 2.8.2-8577-g.a3e674063 8980 2.8/stable canonical✓ -
+   maas 2.8.2-8577-g.a3e674063 8980 2.8/stable canonical✓ -
  
-   MAAS was cleanly installed. KVM POD setup works.
+   MAAS was cleanly installed. KVM POD setup works.
  
-   MAAS status:
-   bind9 RUNNING pid 9258, uptime 15:13:02
-   dhcpd RUNNING pid 26173, uptime 15:09:30
-   dhcpd6 STOPPED Not started
-   http RUNNING pid 19526, uptime 15:10:49
-   ntp RUNNING pid 27147, uptime 14:02:18
-   proxy RUNNING pid 25909, uptime 15:09:33
-   rackd RUNNING pid 7219, uptime 15:13:20
-   regiond RUNNING pid 7221, uptime 15:13:20
-   syslog RUNNING pid 19634, uptime 15:10:48
+   MAAS status:
+   bind9 RUNNING pid 9258, uptime 15:13:02
+   dhcpd RUNNING pid 26173, uptime 15:09:30
+   dhcpd6 STOPPED Not started
+   http RUNNING pid 19526, uptime 15:10:49
+   ntp RUNNING pid 27147, uptime 14:02:18
+   proxy RUNNING pid 25909, uptime 15:09:33
+   rackd RUNNING pid 7219, uptime 15:13:20
+   regiond RUNNING pid 7221, uptime 15:13:20
+   syslog RUNNING pid 19634, uptime 15:10:48
  
  Servers:
  HPE DL380 Gen10 configured to UEFI boot via PXE (PXE legacy mode), Secure 
boot disabled. All servers (18) experience the described problem.
  
  UEFI Boot menu contains 2 entries allowing one to select the PXE mode:
  - HPE Ethernet 1Gb 4-port 366FLR Adapter - NIC (HTTP(S) IPv4)
  - HPE Ethernet 1Gb 4-port 366FLR Adapter - NIC (PXE IPv4)
  
  # PROBLEM DESCRIPTION
  Similar to https://bugs.launchpad.net/maas/+bug/1899840
  
  PXE boot stalls after downloading grubx64.efi but before downloading grub.cfg:
  2020-10-20 07:18:21 provisioningserver.rackdservices.tftp: [info] bootx64.efi 
requested by 10.216.240.69
  2020-10-20 07:18:21 provisioningserver.rackdservices.tftp: [info] bootx64.efi 
requested by 10.216.240.69
  2020-10-20 07:18:21 provisioningserver.rackdservices.tftp: [info] grubx64.efi 
requested by 10.216.240.69
  
- Grub drops to the grub prompt. 
+ Grub drops to the grub prompt.
  Within the grub prompt:
  - net_ls_addr shows correct IP address
  - net_ls_routes shows correct routing
  - net_bootp (which should initiate a DHCP request from grub) fails with the
message: failed to send packet
  
  We've also noticed that in a working scenario grub, just after startup but
before downloading its config, sends an ARP request for the MAAS IP:
  13517  2020-10-19 13:53:38.864937  HewlettP_02:3d:e8  Broadcast  ARP  60  Who has 10.216.240.1? Tell 10.216.240.51
  and MAAS replies.
  
  When the boot stalls, one of the symptoms is that grub does not send the
  ARP request for MAAS IP. It also does not reply to MAAS ARP requests. It
  looks as if the EFI_NET stack was failing.
  
  # WORKAROUNDS
  1) During the PXE boot, send ARP requests from MAAS to query the node IP.
This seems to prevent the node from losing connectivity.
  
  Tested 4 times on independent nodes.
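  Workaround 1 can be scripted on the rack controller. The sketch below is an
illustration only: the interface name, probe count, and interval are
assumptions, and it requires the arping tool from iputils.

```python
# Minimal sketch of workaround 1: from the MAAS rack controller, keep
# ARP-probing the booting node so its EFI network stack stays responsive.
# iface/probes/interval are illustrative values, not tested settings.
import subprocess
import time

def arp_keepalive(node_ip, iface="eth0", probes=30, interval=1.0,
                  run=subprocess.call):
    """Send one ARP request per interval; return the commands issued."""
    issued = []
    for _ in range(probes):
        cmd = ["arping", "-c", "1", "-I", iface, node_ip]
        issued.append(cmd)
        run(cmd)            # requires iputils arping on the controller
        time.sleep(interval)
    return issued
```

  Run it while the node is PXE booting, e.g. arp_keepalive("10.216.240.69", iface="ens3").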
  
  2) Custom built grub:
  grub-mkimage -c grub.conf -o grubx64.efi -O x86_64-efi -p /grub normal 
configfile tftp memdisk boot diskfilter efifwsetup efi_gop efinet ls net normal 
part_gpt tar ext2 linuxefi http echo chain search search_fs_uuid search_label 
search_fs_file test tr true minicmd
  
  Grub version: 2.02-2ubuntu8.18
  
  The grub PXE image built in the way described above works on all nodes
  (18) all the time (4 times tested).
  
- When I've included grub module linix.mod, I've managed to reproduce the
+ When I've included grub module linux.mod, I've managed to reproduce the
  described problem.
  
  It seems that the issue can be related to
  https://savannah.gnu.org/bugs/?func=detailitem&item_id=50715

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1900668

Title:
  MAAS PXE Boot stalls with grub 2.02

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1900668/+subscriptions


[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd

2020-05-11 Thread Gábor Mészáros
I've just found the code change that removes the logic dropping ntp when
running in a container:
https://review.opendev.org/#/c/584051/5/lib/ceph/utils.py@a43

originating from here: "NTP implementation hard-coded"
https://bugs.launchpad.net/charm-ceph-mon/+bug/1780690

Is there a way to bring back the

if is_container():
    PACKAGES.remove('ntp')

code snippet?
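Restored in context, the guard would look roughly like the sketch below. The PACKAGES contents and the helper shape are illustrative, not the actual charm-ceph-mon code; only the is_container()/PACKAGES.remove('ntp') pair comes from the linked review.

```python
# Sketch of the removed container guard. The package list is illustrative;
# only the guard itself mirrors the snippet quoted from lib/ceph/utils.py.
PACKAGES = ['ceph', 'ntp', 'btrfs-tools', 'xfsprogs']

def packages_to_install(in_container):
    """Return the package list, dropping ntp when running in a container."""
    pkgs = list(PACKAGES)
    if in_container:
        # Containers should use the host's clock, not run their own NTP daemon.
        pkgs.remove('ntp')
    return pkgs
```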

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1852441

Title:
  In bionic, one of the ceph packages installed causes chrony to auto-
  install even on lxd

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-mon/+bug/1852441/+subscriptions


[Bug 1870360] [NEW] sstream-mirror max options doesn't work with filtering for versions

2020-04-02 Thread Gábor Mészáros
Public bug reported:

On eoan:
ubuntu@eoan:~$ apt list --installed simplestreams
Listing... Done
simplestreams/eoan,now 0.1.0-25-gba75825b-0ubuntu1 all [installed]
ubuntu@eoan:~$ sstream-mirror --no-verify --max=1 --progress 
--path=streams/v1/index2.sjson https://streams.canonical.com/juju/tools ./ 
arch=amd64 'release~(xenial|bionic)' 'version~(2.4|2.6)' --dry-run
+ com.ubuntu.juju:16.04:amd64 20190503 2.6-rc2-xenial-amd64 
agent/2.6-rc2/juju-2.6-rc2-ubuntu-amd64.tgz 28 Mb
+ com.ubuntu.juju:18.04:amd64 20190503 2.6-rc2-bionic-amd64 
agent/2.6-rc2/juju-2.6-rc2-ubuntu-amd64.tgz 28 Mb
+ com.ubuntu.juju:16.04:amd64 20191029 2.6.10-xenial-amd64 
agent/2.6.10/juju-2.6.10-ubuntu-amd64.tgz 29 Mb
+ com.ubuntu.juju:18.04:amd64 20191029 2.6.10-bionic-amd64 
agent/2.6.10/juju-2.6.10-ubuntu-amd64.tgz 29 Mb
+ com.ubuntu.juju:16.04:amd64 20191029 2.6.10-xenial-amd64 
agent/2.6.10/juju-2.6.10-ubuntu-amd64.tgz 29 Mb
+ com.ubuntu.juju:18.04:amd64 20191029 2.6.10-bionic-amd64 
agent/2.6.10/juju-2.6.10-ubuntu-amd64.tgz 29 Mb
174 Mb change

On bionic:
root@iptv-receiver:~# apt list --installed simplestreams
Listing... Done
simplestreams/bionic,now 0.1.0-490-g3cc8988-0ubuntu1~ubuntu18.04.1 all 
[installed]
N: There are 2 additional versions. Please use the '-a' switch to see them.
root@bionic:~# sstream-mirror --no-verify --max=1 --progress
--path=streams/v1/index2.sjson https://streams.canonical.com/juju/tools ./
arch=amd64 'release~(xenial|bionic)' 'version~(2.4|2.6)' --dry-run
0 Mb change
Same thing with the non-dev version (from bionic-updates): 
root@bionic:~# apt list --installed simplestreams
Listing... Done
simplestreams/bionic-updates,now 0.1.0~bzr460-0ubuntu1.1 all 
[installed,upgradable to: 0.1.0-490-g3cc8988-0ubuntu1~ubuntu18.04.1]
N: There is 1 additional version. Please use the '-a' switch to see it
root@bionic:~# sstream-mirror --no-verify --max=1 --progress
--path=streams/v1/index2.sjson https://streams.canonical.com/juju/tools ./
arch=amd64 'release~(xenial|bionic)' 'version~(2.4|2.6)' --dry-run
0 Mb change

** Affects: simplestreams (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1870360

Title:
  sstream-mirror max options doesn't work with filtering for versions

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/simplestreams/+bug/1870360/+subscriptions


[Bug 1814911] Re: charm deployment fails, when using self-signed certificate, which has IP address only (SAN)

2019-02-13 Thread Gábor Mészáros
James, I can confirm that the fix is working properly on our sites.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1814911

Title:
  charm deployment fails, when using self-signed certificate, which has
  IP address only (SAN)

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-helpers/+bug/1814911/+subscriptions


[Bug 1811117] Re: Failed deployment: FileNotFoundError: [Errno 2] No such file or directory: '/sys/class/block/bcache0/bcache0p1/slaves'

2019-01-28 Thread Gábor Mészáros
The version published in the ppa looks good so far.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1811117

Title:
  Failed deployment: FileNotFoundError: [Errno 2] No such file or
  directory: '/sys/class/block/bcache0/bcache0p1/slaves'

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1811117/+subscriptions


[Bug 1811117] Re: Failed deployment: FileNotFoundError: [Errno 2] No such file or directory: '/sys/class/block/bcache0/bcache0p1/slaves'

2019-01-28 Thread Gábor Mészáros
** Tags removed: field-high

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1811117

Title:
  Failed deployment: FileNotFoundError: [Errno 2] No such file or
  directory: '/sys/class/block/bcache0/bcache0p1/slaves'

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1811117/+subscriptions


[Bug 1811117] Re: Failed deployment: FileNotFoundError: [Errno 2] No such file or directory: '/sys/class/block/bcache0/bcache0p1/slaves'

2019-01-28 Thread Gábor Mészáros
It's affecting one of our customer deployments, running on xenial. Is a
fix planned for release on xenial, and if so, when?

** Tags added: 4010 field-hige

** Tags removed: field-hige
** Tags added: field-high

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1811117

Title:
  Failed deployment: FileNotFoundError: [Errno 2] No such file or
  directory: '/sys/class/block/bcache0/bcache0p1/slaves'

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1811117/+subscriptions


[Bug 1811117] Re: Failed deployment: FileNotFoundError: [Errno 2] No such file or directory: '/sys/class/block/bcache0/bcache0p1/slaves'

2019-01-28 Thread Gábor Mészáros
** Also affects: curtin (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1811117

Title:
  Failed deployment: FileNotFoundError: [Errno 2] No such file or
  directory: '/sys/class/block/bcache0/bcache0p1/slaves'

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1811117/+subscriptions


[Bug 1783542] [NEW] OSD creation on bcache device not working

2018-07-25 Thread Gábor Mészáros
Public bug reported:

Using the ceph-osd charm on xenial HWE.

The charm successfully invokes the regular ceph-disk prepare command, but
that fails because ceph-disk treats the bcache device as a partition and
therefore does not create a partition on top of it.

The log ends with the failure:
2018-07-24 14:41:40 INFO juju-log osdize cmd: ['ceph-disk', 'prepare', 
'--fs-type', 'xfs', '/dev/bcache4', '/dev/sdd']
2018-07-24 14:41:42 DEBUG add-disk set_data_partition: incorrect partition 
UUID: None, expected ['4fbd7e29-9d25-41b8-afd0-5ec00ceff05d', 
'4fbd7e29-9d25-41b8-afd0-062c0ceff05d', '4fbd7e29-8ae0-4982-bf9d-5a8d867af560', 
'4fbd7e29-9d25-41b8-afd0-35865ceff05d']

Full log:
https://pastebin.canonical.com/p/8JMM9JbhxZ/

After inspecting the ceph-disk source, I found where it fails:
ceph/xenial-updates,now 10.2.9-0ubuntu0.16.04.1 amd64 [installed]
/usr/lib/python2.7/dist-packages/ceph_disk/main.py:#763

def is_partition(dev):
    """
    Check whether a given device path is a partition or a full disk.
    """
    if is_mpath(dev):
        return is_partition_mpath(dev)

    dev = os.path.realpath(dev)
    st = os.lstat(dev)
    if not stat.S_ISBLK(st.st_mode):
        raise Error('not a block device', dev)

    name = get_dev_name(dev)
    if is_bcache(name):
        return True
    if os.path.exists(os.path.join('/sys/block', name)):
        return False

If I remove the is_bcache check, it tries to create the partition on top
and succeeds, but only when running on the xenial HWE kernel
(4.13.0-45-generic at the moment).

However, the patches that support partitioning of bcache devices are not
in the mainline kernel, so I suspect it would fail without our patches
applied. [0]

Note that this is probably related to the LP#1667078 fix [1] (bcache device 
numbers increase by 16) and 
https://launchpadlibrarian.net/309401983/0001-bcache-Fix-bcache-device-names.patch

Also note that this issue is not related to LP#1729145 or LP#1728742, as
I'm already running on the fixed kernel and the symptoms are different
[2]. Also, MAAS is up to date. [3]

I assume this patch was accepted by Canonical (not mainline ceph) and is
causing our issue [4].

[0]: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b8c0d911ac5285e6be8967713271a51bdc5a936a
[1]: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1667078
[2]: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1729145
[3]: https://bugs.launchpad.net/curtin/+bug/1728742
[4]: http://tracker.ceph.com/issues/13278

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: 4010

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1783542

Title:
  OSD creation on bcache device not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1783542/+subscriptions


[Bug 1778704] Re: redeployment of node with bcache fails

2018-07-05 Thread Gábor Mészáros
** Changed in: curtin
   Status: Incomplete => Invalid

** Changed in: curtin (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1778704

Title:
  redeployment of node with bcache fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1778704/+subscriptions


[Bug 1778704] Re: redeployment of node with bcache fails

2018-07-02 Thread Gábor Mészáros
Ryan,

I did some more testing, and it seems the issue originates from how I do
storage preparation with the vendor's proprietary tool: during
commissioning I reconfigure all the RAID controllers, during which the
drives get disconnected and re-plugged later. As a result, curtin does
not see the bcache drives as attached, but the physical layout on the
drives remains.

I assume curtin only runs wipefs when the bcache devices are attached,
correct?
What I did for now, perhaps just as a workaround, is to wipe the drives
in the drive-configuration commissioning script.
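The commissioning-time wipe could look roughly like the sketch below: after the RAID controllers are reconfigured, clear stale signatures so curtin does not find an "old" bcache layout on redeployment. The device list and the use of wipefs -a are assumptions for illustration, not the exact script used.

```python
# Sketch of a commissioning-script wipe: run `wipefs -a` on each device that
# may carry stale bcache/md superblocks. Device paths are illustrative.
import subprocess

def wipe_stale_signatures(devices, run=subprocess.check_call):
    """Wipe filesystem/raid/bcache signatures; return the commands issued."""
    issued = []
    for dev in devices:
        cmd = ["wipefs", "-a", dev]   # destructive: erases all signatures
        issued.append(cmd)
        run(cmd)
    return issued
```

For example, wipe_stale_signatures(["/dev/md1", "/dev/md2"]) before curtin's block-meta stage runs.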

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1778704

Title:
  redeployment of node with bcache fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1778704/+subscriptions


[Bug 1778704] Re: redeployment of node with bcache fails

2018-06-29 Thread Gábor Mészáros
That I can do later, not now.
What I see now, though, is that the md arrays are not clean and a resync
has started:

Jun 29 14:42:36 ic-skbrat2-s40pxtg mdadm[2746]: RebuildStarted event detected 
on md device /dev/md2
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.213933] md/raid1:md0: not 
clean -- starting background reconstruction
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.213936] md/raid1:md0: active 
with 2 out of 2 mirrors
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.213968] md0: detected 
capacity change from 0 to 1995440128
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.214033] md: resync of RAID 
array md0
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.214039] md: minimum 
_guaranteed_  speed: 1000 KB/sec/disk.
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.214041] md: using maximum 
available idle IO bandwidth (but not more than 20 KB/sec) for resync.
Jun 29 14:42:36 ic-skbrat2-s40pxtg kernel: [  279.214051] md: using 128k 
window, over a total of 1948672k.
Jun 29 14:42:36 ic-skbrat2-s40pxtg mdadm[2746]: NewArray event detected on md 
device /dev/md0
Jun 29 14:42:36 ic-skbrat2-s40pxtg mdadm[2746]: RebuildStarted event detected 
on md device /dev/md0
Jun 29 14:42:42 ic-skbrat2-s40pxtg mdadm[2746]: Rebuild51 event detected on md 
device /dev/md0
Jun 29 14:42:42 ic-skbrat2-s40pxtg mdadm[2746]: NewArray event detected on md 
device /dev/md1
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.312105] md: bind
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.312238] md: bind
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.313697] md/raid1:md1: not 
clean -- starting background reconstruction
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.313701] md/raid1:md1: active 
with 2 out of 2 mirrors
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.313774] created bitmap (2 
pages) for device md1
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.314044] md1: bitmap 
initialized from disk: read 1 pages, set 3515 of 3515 bits
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.314138] md1: detected 
capacity change from 0 to 235862491136
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.314228] md: delaying resync 
of md1 until md0 has finished (they share one or more physical units)
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.412570] bcache: 
bch_journal_replay() journal replay done, 0 keys in 2 entries, seq 78030
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.437013] bcache: 
bch_cached_dev_attach() Caching md2 as bcache0 on set 
38d7614a-32f6-4e4f-a044-ab0f06434bf4
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.437033] bcache: 
register_cache() registered cache device md1
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.454171] bcache: 
register_bcache() error opening /dev/md1: device already registered
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.532188] bcache: 
register_bcache() error opening /dev/md1: device already registered
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.563413] bcache: 
register_bcache() error opening /dev/md1: device already registered
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.642738] bcache: 
register_bcache() error opening /dev/md2: device already registered (emitting 
change event)
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.702291] bcache: 
register_bcache() error opening /dev/md2: device already registered (emitting 
change event)
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.748625] bcache: 
register_bcache() error opening /dev/md1: device already registered
Jun 29 14:42:42 ic-skbrat2-s40pxtg kernel: [  284.772383] bcache: 
register_bcache() error opening /dev/md1: device already registered
Jun 29 14:42:42 ic-skbrat2-s40pxtg cloud-init[4053]: An error occured handling 
'bcache0': RuntimeError - ('Unexpected old bcache device: %s', '/dev/md2')
Jun 29 14:42:42 ic-skbrat2-s40pxtg cloud-init[4053]: ('Unexpected old bcache 
device: %s', '/dev/md2')
Jun 29 14:42:42 ic-skbrat2-s40pxtg cloud-init[4053]: curtin: Installation 
failed with exception: Unexpected error while running command.
Jun 29 14:42:42 ic-skbrat2-s40pxtg cloud-init[4053]: Command: ['curtin', 
'block-meta', 'custom']

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1778704

Title:
  redeployment of node with bcache fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1778704/+subscriptions


[Bug 1778704] Re: redeployment of node with bcache fails

2018-06-29 Thread Gábor Mészáros
Unfortunately the issue still exists on my deployment.

** Changed in: curtin
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1778704

Title:
  redeployment of node with bcache fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1778704/+subscriptions


[Bug 1778704] Re: redeployment of node with bcache fails

2018-06-29 Thread Gábor Mészáros
curtin: Installation started. (18.1-17-gae48e86f-0ubuntu1~16.04.1)
third party drivers not installed or necessary.
An error occured handling 'bcache0': RuntimeError - ('Unexpected old bcache 
device: %s', '/dev/md2')
('Unexpected old bcache device: %s', '/dev/md2')
curtin: Installation failed with exception: Unexpected error while running 
command.
Command: ['curtin', 'block-meta', 'custom']
Exit code: 3
Reason: -
Stdout: An error occured handling 'bcache0': RuntimeError - ('Unexpected old 
bcache device: %s', '/dev/md2')
('Unexpected old bcache device: %s', '/dev/md2')

Stderr: ''

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1778704

Title:
  redeployment of node with bcache fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/curtin/+bug/1778704/+subscriptions


[Bug 1743966] Re: Trusty deployments fail when custom archives are configured

2018-01-17 Thread Gábor Mészáros
** Tags added: cpe-onsite

** Tags added: 4010

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1743966

Title:
  Trusty deployments fail when custom archives are configured

To manage notifications about this bug go to:
https://bugs.launchpad.net/maas/+bug/1743966/+subscriptions


[Bug 1532789] Re: Trusty multipath-tools suffering seg faults

2016-05-25 Thread Gábor Mészáros
Martin,

The revert was intentional, due to a minor misunderstanding, which we
clarified.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1532789

Title:
  Trusty multipath-tools suffering seg faults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1532789/+subscriptions



[Bug 1565769] [NEW] Cannot select and copy text from hangouts call chat

2016-04-04 Thread Gábor Mészáros
Public bug reported:

When in a Hangouts call, I cannot copy text from the chat window (on the right).
Links are clickable but nothing can be highlighted. Right-click works, though.
The same happens in google-chrome, both logged in (with extensions) and with a clean config.

ProblemType: Bug
DistroRelease: Ubuntu 15.10
Package: chromium-browser 49.0.2623.108-0ubuntu0.15.10.1.1223
ProcVersionSignature: Ubuntu 4.2.0-34.39-generic 4.2.8-ckt4
Uname: Linux 4.2.0-34-generic x86_64
NonfreeKernelModules: zfs zunicode zcommon znvpair zavl
ApportVersion: 2.19.1-0ubuntu5
Architecture: amd64
CurrentDesktop: Unity
DRM.card0.DP.1:
 edid-base64: 
 dpms: Off
 modes: 
 enabled: disabled
 status: disconnected
DRM.card0.DP.2:
 edid-base64: 
 dpms: Off
 modes: 
 enabled: disabled
 status: disconnected
DRM.card0.HDMI.A.1:
 edid-base64: 
 dpms: Off
 modes: 
 enabled: disabled
 status: disconnected
DRM.card0.HDMI.A.2:
 edid-base64: 
 dpms: Off
 modes: 
 enabled: disabled
 status: disconnected
DRM.card0.eDP.1:
 edid-base64: 
AP///wAGrz0TACIXAQSVHxF4AoflpFZQniYNUFQBAQEBAQEBAQEBAQEBAQEBFDeAuHA4JEAQED4ANa0QAAAaECyAuHA4JEAQED4ANa0QAAAa/gBNMVdIVoFCMTQwSEFOQSGeAAoBCiAgAHE=
 dpms: On
 modes: 1920x1080 1920x1080
 enabled: enabled
 status: connected
Date: Mon Apr  4 14:10:30 2016
Desktop-Session:
 'ubuntu'
 '/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg'
 '/usr/share/ubuntu:/usr/share/gnome:/usr/local/share/:/usr/share/'
DetectedPlugins:
 
Env:
 'None'
 'None'
InstallationDate: Installed on 2015-12-10 (115 days ago)
InstallationMedia: Ubuntu 15.10 "Wily Werewolf" - Release amd64 (20151021)
Load-Avg-1min: 0.87
Load-Processes-Running-Percent:   0.3%
MachineType: Dell Inc. Latitude E7450
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.2.0-34-generic.efi.signed 
root=UUID=a465b79e-f742-48d0-a2fe-40bae7c7514d ro nomdmonddf nomdmonisw
SourcePackage: chromium-browser
UdevLog: Error: [Errno 2] No such file or directory: '/var/log/udev'
UpgradeStatus: No upgrade log present (probably fresh install)
dmi.bios.date: 10/28/2015
dmi.bios.vendor: Dell Inc.
dmi.bios.version: A08
dmi.board.vendor: Dell Inc.
dmi.chassis.type: 9
dmi.chassis.vendor: Dell Inc.
dmi.modalias: 
dmi:bvnDellInc.:bvrA08:bd10/28/2015:svnDellInc.:pnLatitudeE7450:pvr:rvnDellInc.:rn:rvr:cvnDellInc.:ct9:cvr:
dmi.product.name: Latitude E7450
dmi.sys.vendor: Dell Inc.
gconf-keys: /desktop/gnome/applications/browser/exec = 
b'firefox\n'/desktop/gnome/url-handlers/https/command = b'sensible-browser 
%s\n'/desktop/gnome/url-handlers/https/enabled = 
b'true\n'/desktop/gnome/url-handlers/http/command = b'sensible-browser 
%s\n'/desktop/gnome/url-handlers/http/enabled = 
b'true\n'/desktop/gnome/session/required_components/windowmanager = 
b''/apps/metacity/general/compositing_manager = 
b''/desktop/gnome/interface/icon_theme = 
b'gnome\n'/desktop/gnome/interface/gtk_theme = b'Clearlooks\n'
modified.conffile..etc.default.chromium.browser: [deleted]

** Affects: chromium-browser (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug wily

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1565769

Title:
  Cannot select and copy text from hangouts call chat

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/1565769/+subscriptions



[Bug 1549882] Re: sosreport should collect application crash dumps

2016-03-11 Thread Gábor Mészáros
I agree with you that this is more or less addressed in the newer versions.
/var/log/crash/cores is a manually configured location, but I'm unsure why it
was necessary.

How can I close this bug? Should I set its status to Invalid?

Thanks

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1549882

Title:
  sosreport should collect application crash dumps

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1549882/+subscriptions



[Bug 1549882] [NEW] sosreport should collect application crash dumps

2016-02-25 Thread Gábor Mészáros
Public bug reported:

Using version 3.0-1~ubuntu12.04.1, sosreport does not pick up the crash dumps
found in /var/log/crash/cores/.
I suspect this is true for other versions as well, and it would be helpful to
be able to collect this information.

root@compute-0-9:~# ls /var/log/crash/cores
core.compute-0-9.1456398130.multipathd.7078  
core.compute-0-9.1456405881.multipathd.30583  
core.compute-0-9.1456407922.multipathd.20061  
core.compute-0-9.1456408509.multipathd.5140

root@compute-0-9:~# tar tJf /tmp/sosreport-DI.09-20160225135748.tar.xz
.
.
.
sosreport-DI.09-20160225135748/var/
sosreport-DI.09-20160225135748/var/spool/
sosreport-DI.09-20160225135748/var/spool/cron/
sosreport-DI.09-20160225135748/var/spool/cron/crontabs/
sosreport-DI.09-20160225135748/var/spool/cron/crontabs/nova
sosreport-DI.09-20160225135748/var/spool/cron/atjobs/
sosreport-DI.09-20160225135748/var/spool/cron/atjobs/.SEQ
sosreport-DI.09-20160225135748/var/log/
sosreport-DI.09-20160225135748/var/log/kern.log-20160225.gz
sosreport-DI.09-20160225135748/var/log/boot.log
sosreport-DI.09-20160225135748/var/log/cron.log-20160225.gz
sosreport-DI.09-20160225135748/var/log/cron.log
sosreport-DI.09-20160225135748/var/log/audit/
sosreport-DI.09-20160225135748/var/log/audit/audit.log
sosreport-DI.09-20160225135748/var/log/dmesg
sosreport-DI.09-20160225135748/var/log/syslog
sosreport-DI.09-20160225135748/var/log/dpkg.log
sosreport-DI.09-20160225135748/var/log/kern.log
sosreport-DI.09-20160225135748/var/log/udev
sosreport-DI.09-20160225135748/var/log/libvirt/
sosreport-DI.09-20160225135748/var/log/libvirt/libvirtd.log
sosreport-DI.09-20160225135748/var/log/libvirt/uml/
sosreport-DI.09-20160225135748/var/log/libvirt/uml/.placeholder
sosreport-DI.09-20160225135748/var/log/libvirt/qemu/
sosreport-DI.09-20160225135748/var/log/libvirt/qemu/instance-0052.log
.
.
.
sosreport-DI.09-20160225135748/var/log/libvirt/qemu/instance-00d6.log
sosreport-DI.09-20160225135748/var/log/libvirt/shutdownlog.log
sosreport-DI.09-20160225135748/var/log/libvirt/lxc/
sosreport-DI.09-20160225135748/var/log/libvirt/lxc/.placeholder
sosreport-DI.09-20160225135748/var/log/boot
sosreport-DI.09-20160225135748/var/lib/
sosreport-DI.09-20160225135748/var/lib/dbus/
sosreport-DI.09-20160225135748/var/lib/dbus/machine-id
sosreport-DI.09-20160225135748/boot/
.
.
.

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 1520192] Re: multipath-tools from Precise should have been fixed together with Trusty fixes

2015-12-01 Thread Gábor Mészáros
Steps taken to provoke the problem:
1.) deploy any VM instance using nova commands, that is booted from EMC VNX 
series SAN connected to compute using iscsi, dm-multipath
2.) unplug/disable network connectivity on any of the active paths
3.) check multipath -ll output for faulty failed path/device status.
4.) re-plug/enable network connectivity on the disabled paths
5.) check multipath -ll output for recovering to active enabled path/device 
status

Sometimes the problem occurs when disabling the interface, other times
only when re-enabling it.

Expected output for multipath -ll:
36006016047813400dd029f614896e511 dm-3 DGC ,VRAID   
size=50G features='0' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 6:0:0:15 sdi   8:128  active ready  running
  |- 7:0:0:15 sdk   8:160  active ready  running
  |- 8:0:0:15 sdm   8:192  active ready  running
  `- 9:0:0:15 sdo   8:224  active ready  running

Actual output (even after several hours of waiting, with active traffic on 
storage):
36006016047813400dd029f614896e511 dm-3 DGC,VRAID
size=50G features='0' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 6:0:0:159 sdi 8:128 active ready  running
  |- 7:0:0:159 sdk 8:160 failed ready  running
  |- 8:0:0:159 sdm 8:192 active ready  running
  `- 9:0:0:159 sdo 8:224 failed ready  running
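The path-state checks in steps 3 and 5 can be automated by scanning the
`multipath -ll` output for paths whose device-mapper state is not "active".
This is only a sketch against the column layout shown in this report; other
multipath-tools versions may format path lines slightly differently:

```python
# Sketch: find paths that multipath -ll reports as failed (dm state column).
# Assumes the path-line layout shown in this bug report:
#   |- H:C:T:L <dev> major:minor <dm state> <checker state> <online state>
import re

PATH_LINE = re.compile(
    r"[|`]-\s+\S+\s+(?P<dev>\S+)\s+\d+:\d+\s+(?P<dm_state>\S+)\s+(?P<chk_state>\S+)"
)

def failed_paths(output):
    """Return device names whose device-mapper state is not 'active'."""
    failed = []
    for line in output.splitlines():
        m = PATH_LINE.search(line)
        if m and m.group("dm_state") != "active":
            failed.append(m.group("dev"))
    return failed
```

Run against the "actual output" above, this would report sdk and sdo as
failed, which matches what was observed by eye.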

A previously discovered workaround is to reload the multipath-tools service
(or restart it; multipath -r does not always fix it).
The package with backported changes is confirmed to fix the issue, without 
having to reload the service.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1520192

Title:
  multipath-tools from Precise should have been fixed together with
  Trusty fixes

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1520192/+subscriptions



[Bug 1520192] Re: multipath-tools from Precise should have been fixed together with Trusty fixes

2015-12-01 Thread Gábor Mészáros
** Attachment added: "Log that shows the problem"
   https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1520192/+attachment/4527996/+files/reproduction_failing.log



[Bug 1520192] Re: multipath-tools from Precise should have been fixed together with Trusty fixes

2015-12-01 Thread Gábor Mészáros
With the backported package the problem cannot be reproduced.

** Attachment added: "Log that shows no problem with newer version of multipath-tools and kpartx"
   https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1520192/+attachment/4527997/+files/reproduction_fixed.log



[Bug 1520192] Re: multipath-tools from precise should have been fixed together with Trusty fixes

2015-11-27 Thread Gábor Mészáros
** Summary changed:

- Precise multipath-tools from precise should have been fixed together with 
Trusty fixes
+ multipath-tools from precise should have been fixed together with Trusty fixes

** Summary changed:

- multipath-tools from precise should have been fixed together with Trusty fixes
+ multipath-tools from Precise should have been fixed together with Trusty fixes

** Description changed:

  Precise multipath-tools MIGHT need fixes from trusty. This has already
  been proved in one iSCSI multipath installation where precise multipath,
  intermittently, connected to VNX storages show paths as: active/failed
- when it should show - even after the path check timeout - failed.
+ when it should show - even after the path check timeout - faulty/failed.
  
  * Improve description showing output *
  
  Using trusty multipath in Precise, the same environment does NOT suffer
  from this issue.
  
  Differences between both versions:
  
   LP: #1468897 - https://bugs.launchpad.net/bugs/1468897
   LP: #1386637 - https://bugs.launchpad.net/bugs/1386637
  
  - 0001-multipath-add-checker_timeout-default-config-option.patch
  - 0002-Make-params-variable-local.patch
  - 0003-libmultipath-Fix-possible-string-overflow.patch
  - 0004-Update-hwtable-factorization.patch
  - 0005-Fixup-strip-trailing-whitespaces-for-getuid-return-v.patch
  - 0006-Remove-sysfs_attr-cache.patch
  - 0007-Move-setup_thread_attr-to-uevent.c.patch
  - 0008-Use-lists-for-uevent-processing.patch
  - 0009-Start-uevent-service-handler-from-main-thread.patch
  - 0010-libmultipath-rework-sysfs-handling.patch
  - 0011-Rework-sysfs-device-handling-in-multipathd.patch
  - 0012-Only-check-offline-status-for-SCSI-devices.patch
  - 0013-Check-for-offline-path-in-get_prio.patch
  - 0014-libmultipath-Remove-duplicate-calls-to-path_offline.patch
  - 0015-Update-dev_loss_tmo-for-no_path_retry.patch
  - 0016-Reload-map-for-device-read-only-setting-changes.patch
  - 0017-multipath-get-right-sysfs-value-for-checker_timeout.patch
  - 0018-multipath-handle-offlined-paths.patch
  - 0019-multipath-fix-scsi-timeout-code.patch
  - 0020-multipath-make-tgt_node_name-work-for-iscsi-devices.patch
  - 0021-multipath-cleanup-dev_loss_tmo-issues.patch
  - 0022-Fix-for-setting-0-to-fast_io_fail.patch
  - 0023-Fix-fast_io_fail-capping.patch
  - 0024-multipath-enable-getting-uevents-through-libudev.patch
  - 0025-Use-devpath-as-argument-for-sysfs-functions.patch
  - 0026-multipathd-remove-references-to-sysfs_device.patch
  - 0027-multipathd-use-struct-path-as-argument-for-event-pro.patch
  - 0028-Add-global-udev-reference-pointer-to-config.patch
  - 0029-Use-udev-enumeration-during-discovery.patch
  - 0030-use-struct-udev_device-during-discovery.patch
  - 0031-More-debugging-output-when-synchronizing-path-states.patch
  - 0032-Use-struct-udev_device-instead-of-sysdev.patch
  - 0033-discovery-Fixup-cciss-discovery.patch
  - 0035-Use-udev-devices-during-discovery.patch
  - 0036-Remove-all-references-to-hand-craftes-sysfs-code.patch
  - 0037-multipath-libudev-cleanup-and-bugfixes.patch
  - 0038-multipath-check-if-a-device-belongs-to-multipath.patch
  - 0039-multipath-and-wwids_file-multipath.conf-option.patch
  - 0040-multipath-Check-blacklists-as-soon-as-possible.patch
  - 0041-add-wwids-file-cleanup-options.patch
  - 0042-add-find_multipaths-option.patch
  
   LP: #1431650 - https://bugs.launchpad.net/bugs/1431650
  
  - Added debian/patches/0015-shared-lock-for-udev.patch
  
   LP: #1441930 - https://bugs.launchpad.net/bugs/1441930
  
  - Support disks with non 512-byte sectors
  
   LP: #1435706 - https://bugs.launchpad.net/bugs/1435706 ( GOOD
  CANDIDATE )
  
  - Correctly write FC timeout attributes to sysfs.
