[Bug 1923590] Re: Telco Customer needs Minimal ISO with customizable set and versions of packages

2021-10-06 Thread Szilard Cserey
Hi Steve,

You said:
~
Picking and choosing specific versions of packages from the archive as INPUT 
into the ISO, as opposed to taking the most recent version of packages at the 
time the ISO is built, would simply enable bad practices (omitting security 
updates) and lure users into not feeding back into Ubuntu information about SRU 
regressions that should be addressed for all users, not on a per-customer basis.
~

But this is exactly what they want, so we must provide some kind of
solution that lets them build their own ISO with the packages and versions
they currently use across the multitude of deployments at their own customers.
In the real world, customers do not run the latest versions of packages;
they run a snapshot of versions captured at some point in the past. When
they publish a release of their product to a customer, those versions must
be kept intact, because all development and testing was based on them. You
cannot simply tell them to upgrade to the latest and greatest after they
have spent months testing their solution against this snapshot of package
versions. This is the Telco world: they cannot change package versions from
one day to the next, because thorough testing takes a long time before they
release a product to their customers.
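
For context, one standard mechanism for freezing a package at a tested version on an installed system is apt pinning. This is only a sketch of the mechanism, not the requested ISO solution, and the package name and version below are purely illustrative (they are not taken from the customer's list). The file is written to /tmp here to keep the example self-contained; the real file would live in /etc/apt/preferences.d/.

```shell
# Illustrative apt pin that freezes a package at one version.
# Package name and version are examples only.
mkdir -p /tmp/pin-demo
cat > /tmp/pin-demo/telco-pin <<'EOF'
Package: openssh-server
Pin: version 1:7.2p2-4ubuntu2.8
Pin-Priority: 1001
EOF
# A priority above 1000 makes apt keep the pinned version even when the
# archive carries a newer one (apt will even downgrade to it).
```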

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1923590

Title:
  Telco Customer needs Minimal ISO with customizable set and versions
  of packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/subiquity/+bug/1923590/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1923590] Re: Telco Customer needs Minimal ISO with customizable set and versions of packages

2021-10-06 Thread Szilard Cserey
The rationale behind this request is that the Telco customer has a
requirement to include such a bootable Custom/Minimal 16.04 ISO image in
the block storage deliverable that they provide to their own customers.

The server is not connected to the Internet.

There are no local Ubuntu mirror servers in the datacenter.

The preferred way to install Ubuntu on that server is from an ISO,
and that ISO must contain everything needed.

Here's the list of packages needed by this Telco customer
https://pastebin.ubuntu.com/p/S7py2zFw6n/

The Custom/Minimal 16.04 ISO needs to have the 4.15.0-112-generic HWE
Kernel

They also wish to get continuous support from Canonical for this Custom/Minimal
16.04 ISO, in the following manner:
- Any time a package that is part of the Ubuntu Minimal ISO receives a fix,
they wish to get a new, updated ISO from Canonical
- Any other package that they do not need, even if it is currently present in
the public Xenial 16.04 ISO, must NOT be included in the Custom/Minimal 16.04 ISO


[Bug 1830615] [NEW] glusterfs-server 3.13.2-1build1 installation fails in Bionic because starting up the glustereventsd.service times out

2019-05-27 Thread Szilard Cserey
Public bug reported:


glusterfs-server installation fails on Bionic


apt-get install glusterfs-server

...

Job for glustereventsd.service failed because a timeout was exceeded.
See "systemctl status glustereventsd.service" and "journalctl -xe" for details.
invoke-rc.d: initscript glustereventsd, action "start" failed.
● glustereventsd.service - LSB: Gluster Events Server
   Loaded: loaded (/etc/init.d/glustereventsd; generated)
   Active: failed (Result: timeout) since Mon 2019-05-27 09:52:15 UTC; 15ms ago
     Docs: man:systemd-sysv-generator(8)
  Process: 17909 ExecStart=/etc/init.d/glustereventsd start (code=killed, signal=TERM)
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/glustereventsd.service
           ├─17927 /usr/bin/python /usr/sbin/glustereventsd -p /var/run/glustereventsd.pid
           └─17931 /usr/bin/python /usr/sbin/glustereventsd -p /var/run/glustereventsd.pid

May 27 09:47:14 vm1 systemd[1]: Starting LSB: Gluster Events Server...
May 27 09:47:14 vm1 glustereventsd[17909]:  * Starting glustereventsd service glustereventsd
dpkg: error processing package glusterfs-server (--configure):
 installed glusterfs-server package post-installation script subprocess returned error exit status 1
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for systemd (237-3ubuntu10.15) ...
Errors were encountered while processing:
 glusterfs-server
E: Sub-process /usr/bin/dpkg returned an error code (1)


dpkg -l | grep gluster
ii  glusterfs-client  3.13.2-1build1  amd64  clustered file-system (client package)
ii  glusterfs-common  3.13.2-1build1  amd64  GlusterFS common libraries and translator modules
iF  glusterfs-server  3.13.2-1build1  amd64  clustered file-system (server package)


systemctl status glustereventsd.service
● glustereventsd.service - LSB: Gluster Events Server
   Loaded: loaded (/etc/init.d/glustereventsd; generated)
   Active: failed (Result: timeout) since Mon 2019-05-27 09:52:15 UTC; 2min 9s ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/glustereventsd.service
           ├─17927 /usr/bin/python /usr/sbin/glustereventsd -p /var/run/glustereventsd.pid
           └─17931 /usr/bin/python /usr/sbin/glustereventsd -p /var/run/glustereventsd.pid

May 27 09:47:14 vm1 systemd[1]: Starting LSB: Gluster Events Server...
May 27 09:47:14 vm1 glustereventsd[17909]:  * Starting glustereventsd service glustereventsd
May 27 09:52:15 vm1 systemd[1]: glustereventsd.service: Start operation timed out. Terminating.
May 27 09:52:15 vm1 systemd[1]: glustereventsd.service: Failed with result 'timeout'.
May 27 09:52:15 vm1 systemd[1]: Failed to start LSB: Gluster Events Server.


The issue is also reported here:
https://github.com/gluster/glusterfs-debian/issues/22
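
As a diagnostic sketch only (not a confirmed fix for this bug), a systemd drop-in can raise the start timeout of the generated unit to see whether the service eventually starts; the 600-second value is an arbitrary example. The file is written to /tmp here; the real drop-in would go to /etc/systemd/system/glustereventsd.service.d/, followed by "systemctl daemon-reload".

```shell
# Sketch of a systemd drop-in raising the start timeout of the generated
# glustereventsd unit. Written to /tmp to keep the example self-contained.
mkdir -p /tmp/glustereventsd.service.d
cat > /tmp/glustereventsd.service.d/timeout.conf <<'EOF'
[Service]
TimeoutStartSec=600
EOF
```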

** Affects: glusterfs (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1830615

Title:
  glusterfs-server 3.13.2-1build1 installation fails in Bionic
  because starting up the glustereventsd.service times out

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1830615/+subscriptions


[Bug 1779756] Re: Intel XL710 - i40e driver does not work with kernel 4.15 (Ubuntu 18.04)

2018-12-20 Thread Szilard Cserey
Hi Joe,

I tried to install the Test Kernel on Xenial but I stumbled upon this
dependency issue

sudo dpkg -i linux-headers-4.15.0-43-generic_4.15.0-43.47~lp1779756_amd64.deb 
... 
linux-headers-4.15.0-43-generic depends on libssl1.1 (>= 1.1.0); however: 
Package libssl1.1 is not installed. 

Unfortunately, I can't find libssl1.1 for Xenial; only libssl1.0.0 is
available for it.

Could you please create a Xenial adaptation of the 4.15 Test Kernel?

Thanks a lot in advance,
Szilard

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1779756

Title:
  Intel XL710 - i40e driver does not work with kernel 4.15 (Ubuntu
  18.04)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779756/+subscriptions


[Bug 1791758] Re: ldisc crash on reopened tty

2018-09-10 Thread Szilard Cserey
** Attachment added: ""bt" and "bt -l" from Kernel crashdump"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1791758/+attachment/5187256/+files/CRASH_BT.log

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1791758

Title:
  ldisc crash on reopened tty

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1791758/+subscriptions


[Bug 1791758] Re: ldisc crash on reopened tty

2018-09-10 Thread Szilard Cserey
** Attachment added: "Call Trace from dmesg"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1791758/+attachment/5187255/+files/DMESG_CALL_TRACE.log


[Bug 1791758] [NEW] ldisc crash on reopened tty

2018-09-10 Thread Szilard Cserey
Public bug reported:

The following Oops was discovered by user:

[684766.39] BUG: unable to handle kernel paging request at 2268
[684766.667642] IP: [] n_tty_receive_buf_common+0x6a/0xae0
[684766.668487] PGD 8019574fe067 PUD 19574ff067 PMD 0 
[684766.669194] Oops:  [#1] SMP 
[684766.669687] Modules linked in: xt_nat dccp_diag dccp tcp_diag udp_diag 
inet_diag unix_diag xt_connmark ipt_REJECT nf_reject_ipv4 nf_conntrack_netlink 
nfnetlink veth ip6table_filter ip6_tables xt_tcpmss xt_multiport xt_conntrack 
iptable_filter xt_CHECKSUM xt_tcpudp iptable_mangle xt_CT iptable_raw 
ipt_MASQUERADE nf_nat_masquerade_ipv4 xt_comment iptable_nat ip_tables x_tables 
target_core_mod configfs softdog scini(POE) ib_iser rdma_cm iw_cm ib_cm ib_sa 
ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi 
openvswitch(OE) nf_nat_ipv6 nf_nat_ipv4 nf_nat gre kvm_intel kvm irqbypass ttm 
crct10dif_pclmul drm_kms_helper crc32_pclmul ghash_clmulni_intel drm 
aesni_intel aes_x86_64 i2c_piix4 lrw gf128mul fb_sys_fops syscopyarea 
glue_helper sysfillrect ablk_helper cryptd sysimgblt joydev
[684766.679406]  input_leds mac_hid serio_raw 8250_fintek br_netfilter bridge 
stp llc nf_conntrack_proto_gre nf_conntrack_ipv6 nf_defrag_ipv6 
nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack xfs raid10 raid456 
async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq 
libcrc32c raid1 raid0 psmouse multipath floppy pata_acpi linear dm_multipath
[684766.683585] CPU: 15 PID: 7470 Comm: kworker/u40:1 Tainted: P   OE   
4.4.0-124-generic #148~14.04.1-Ubuntu
[684766.684967] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Bochs 01/01/2011
[684766.686062] Workqueue: events_unbound flush_to_ldisc
[684766.686703] task: 88165e5d8000 ti: 88170dc2c000 task.ti: 
88170dc2c000
[684766.687670] RIP: 0010:[]  [] 
n_tty_receive_buf_common+0x6a/0xae0
[684766.688870] RSP: 0018:88170dc2fd28  EFLAGS: 00010202
[684766.689521] RAX:  RBX: 88162c895000 RCX: 
0001
[684766.690488] RDX:  RSI: 88162c895020 RDI: 
8819c2d3d4d8
[684766.691518] RBP: 88170dc2fdc0 R08: 0001 R09: 
81ec2ba0
[684766.692480] R10: 0004 R11:  R12: 
8819c2d3d400
[684766.693423] R13: 8819c45b2670 R14: 8816a358c028 R15: 
8819c2d3d400
[684766.694390] FS:  () GS:8819d73c() 
knlGS:
[684766.695484] CS:  0010 DS:  ES:  CR0: 80050033
[684766.696182] CR2: 2268 CR3: 00195752 CR4: 
00360670
[684766.697141] DR0:  DR1:  DR2: 

[684766.698114] DR3:  DR6: fffe0ff0 DR7: 
0400
[684766.699079] Stack:
[684766.699412]   8819c2d3d4d8  
8819c2d3d648
[684766.700467]  8819c2d3d620 8819c9c10400 88170dc2fd68 
8106312e
[684766.701501]  88170dc2fd78 0001  
88162c895020
[684766.702534] Call Trace:
[684766.702905]  [] ? kvm_sched_clock_read+0x1e/0x30
[684766.703685]  [] n_tty_receive_buf2+0x14/0x20
[684766.704505]  [] flush_to_ldisc+0xd5/0x120
[684766.705269]  [] process_one_work+0x156/0x400
[684766.706008]  [] worker_thread+0x11a/0x480
[684766.706686]  [] ? rescuer_thread+0x310/0x310
[684766.707386]  [] kthread+0xd8/0xf0
[684766.707993]  [] ? kthread_park+0x60/0x60
[684766.708664]  [] ret_from_fork+0x55/0x80
[684766.709335]  [] ? kthread_park+0x60/0x60
[684766.709998] Code: 85 70 ff ff ff e8 97 5f 33 00 49 8d 87 20 02 00 00 c7 45 
b4 00 00 00 00 48 89 45 88 49 8d 87 48 02 00 00 48 89 45 80 48 8b 45 b8 <48> 8b 
b0 68 22 00 00 48 8b 08 89 f0 29 c8 41 f6 87 30 01 00 00 
[684766.713290] RIP  [] n_tty_receive_buf_common+0x6a/0xae0
[684766.714105]  RSP 
[684766.714609] CR2: 2268

The issue happened in a VM.
KDUMP was configured, so a full kernel crashdump was created.

The user runs Ubuntu Trusty with kernel 4.4.0-124 on the VM.

The Call Trace is similar to the one described in this upstream patch:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=71472fa9c52b1da27663c275d416d8654b905f05

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1645324] Re: ebtables: Lock file handling has races

2017-06-02 Thread Szilard Cserey
The verification was done using Test Case 1 and Test Case 2.

However, I noticed that in the Trusty version the lock file is located at
/var/lib/ebtables/lock, whereas in the Xenial/Yakkety/Zesty versions the
lock file is at /run/ebtables.lock.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1645324

Title:
  ebtables: Lock file handling has races

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ebtables/+bug/1645324/+subscriptions



[Bug 1645324] Re: ebtables: Lock file handling has races

2017-06-02 Thread Szilard Cserey
Hi All,

I have successfully tested the following versions of ebtables

for Trusty:  2.0.10.4-3ubuntu1.14.04.1   OK

for Xenial:  2.0.10.4-3.4ubuntu2 OK

for Yakkety: 2.0.10.4-3.5ubuntu1.16.10.1 OK

for Zesty:   2.0.10.4-3.5ubuntu1.17.04.1 OK

Szilard

** Tags removed: verification-needed
** Tags added: verification-done



[Bug 1645324] Re: ebtables: Lock file handling has races

2017-06-02 Thread Szilard Cserey
** Description changed:

  [Impact]
  
   * ebtables uses creation of a file with an exclusive flag
     as a lock to synchronize table modification when used
     with --concurrent parameter.
  
   * If ebtables crashes it will leave a stale lock file.
     This will prevent another copy of ebtables from running,
     and indirectly any other service that depends on ebtables
     will also be affected.
  
   * This change adds support for real locks that get
     cleaned up if a process exits or crashes.
  
  [Test Case]
  
   * Test Case 1:
     1. $ sudo touch /var/lib/ebtables/lock
     2. $ sudo ebtables --concurrent -L
     3. ebtables can't acquire the lock
  
   * Test Case 2:
     1. $ while true; do /usr/sbin/ebtables --concurrent -L; done
     2. hard reboot the VM
     3. the lock file is likely present under /var/lib/ebtables
     4. libvirtd hangs; try to connect to qemu:///system
  
  [Regression Potential]
  
   * Normal Use:
     There is no regression potential during normal use and
     operation of ebtables.
  
   * Package Upgrade:
     There is a very very small regression potential during the package
     upgrade to the latest version. Once the package is upgraded that
     potential is gone. It is a very small potential because several
     things have to happen in a very small time frame and in an exact
     order since ebtables is not a resident program like a daemon:
   1. ebtables is launched during the package upgrade AND
   2. the new ebtables binary has not yet been written to disk AND
   3. it is launched with the --concurrent switch AND
   4. another ebtables with the new binary is launched AND
   5. it is launched with the --concurrent switch AND
   6. the first ebtables copy hasn't exited yet AND
   7. both copies of ebtables are launched with a WRITE command AND
   8. both copies of ebtables are manipulating the same resource.
     Then one of the binaries could potentially fail, but once the old
     binary exits the potential is gone so subsequent re-runs of
     ebtables will succeed.
  
  * Dragan's patch has been submitted to Debian via :
-   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860590
+   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860590
  
  * Note that the ebtables upstream project is nearly dead. Nowadays, all
  development happens in the nftables project, which is intended to be its
  replacement.
- 
  
  [Original Text]
  libvirtd hangs after startup because an ebtables lock file from an earlier
run remains intact when the system reboots.
  The same issue as reported here happens when the system boots:
https://bugzilla.redhat.com/show_bug.cgi?id=1290327
  
  After booting the system, it's not possible to connect to the qemu service.
  - libvirt daemon tried to obtain a lock:
  [pid 20966] read(24, "Trying to obtain lock /var/lib/e"..., 1024) = 45
  [pid 20966] poll([{fd=22, events=POLLIN}, {fd=24, events=POLLIN}], 2, 
4294967295) = 1 ([{fd=24, revents=POLLIN}])
  [pid 20966] read(24, "Trying to obtain lock /var/lib/e"..., 1024) = 45
  [pid 20966] poll([{fd=22, events=POLLIN}, {fd=24, events=POLLIN}], 2, 
4294967295) = 1 ([{fd=24, revents=POLLIN}])
  [pid 20966] read(24, "Trying to obtain lock /var/lib/e"..., 1024) = 45
  [pid 20966] poll([{fd=22, events=POLLIN}, {fd=24, events=POLLIN}], 2, 
4294967295^CProcess 20916 detached
  
  - there was a file named 'lock' in /var/lib/ebtables directory with timestamp 
14:54
  - ebtables was configured:
  * Ebtables support available, number of installed rules [ OK ]
  (other nodes appeared to be in the same state from ebtables point of view, 
but without the lock file)
  - I removed the lock file and libvirt started to work instantly - the lock 
obtain messages have disappeared from the trace and virsh commands are working
  - at 14:54 the host was booting up. According to the logs, there were other 
reboots after that one, but the lock file remained intact (at least the 
timestamp was not updated).
  
  Could you please suggest a solution to be sure that ebtables lock file
  is removed during boot?
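
For context, the "real locks" the fix refers to can be sketched with the flock(1) utility: an flock(2) lock is held by an open file descriptor and released by the kernel when the holder exits or crashes, so a crash cannot leave a stale lock behind. (This only illustrates the mechanism; the exact implementation in Dragan's patch is not shown here.)

```shell
# Kernel-managed lock demo: each flock invocation takes the lock, runs its
# command, and the lock is released automatically when the holder exits,
# even if it crashes mid-run.
lock=/tmp/ebtables-demo.lock
flock -n "$lock" -c 'echo holder one'   # takes the lock, runs, releases it
flock -n "$lock" -c 'echo holder two'   # succeeds: nothing stale remains
```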


[Bug 1668931] Re: test

2017-03-01 Thread Szilard Cserey
This is only a test bug report.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1668931

Title:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1668931/+subscriptions
