bind-chroot not duplicating my forward and reverse tables

2021-06-24 Thread ToddAndMargo via users

Hi All,

Fedora 34
bind-chroot-9.16.16-1.fc34.x86_64


I am trying to clean up my bind-chroot forward and reverse files.

The goal is to have bind-chroot mirror these two files into
/var/named/chroot/var/named/slaves/
with identical inodes, the way it does with named.root and named.root.key:

# stat /etc/named.root.key /var/named/chroot/etc/named.root.key
...
File: /etc/named.root.key
Inode: 60033354
...
File: /var/named/chroot/etc/named.root.key
...
Inode: 60033354
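
(For what it's worth, a hedged aside: on a stock Fedora bind-chroot setup the
identical inodes come from bind mounts set up by named-chroot-setup.service,
not from copying. Assuming that default layout, the mounts and the inode
sharing can be checked with something like:

# findmnt | grep /var/named/chroot
# stat -c '%i %n' /etc/named.root.key /var/named/chroot/etc/named.root.key

The stat line just repeats the comparison above in one command; both paths
should report the same inode while the mount is active.)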


In my /etc/named.conf, I have

zone "abc.local" {
type master;
# file "/var/named/chroot/var/named/slaves/abc.hosts";
file "slaves/abc.hosts";
allow-update { key DHCP_UPDATER; };
};

zone "255.168.192.in-addr.arpa" {
type master;
# file "/var/named/chroot/var/named/slaves/abc.hosts.rev";
file "slaves/abc.hosts.rev";
allow-update { key DHCP_UPDATER; };
};
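
(Context, not from the original post: with a relative path such as
"slaves/abc.hosts", named resolves the file against the directory option in
the options block. A typical stock Fedora snippet, assuming the defaults,
looks like:

options {
        directory "/var/named";
        /* other options omitted */
};

and when named runs chrooted, that /var/named is the one inside the chroot,
i.e. /var/named/chroot/var/named/ as seen from the host.)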


After stopping named-chroot:
# systemctl stop named-chroot
I copied and moved the following files:


Before:
# find /var/named/ -iname abc.hosts\*
/var/named/chroot/var/named/slaves/abc.hosts.000
/var/named/chroot/var/named/slaves/abc.hosts.rev.000

# cp /var/named/chroot/var/named/slaves/abc.hosts /var/named/chroot/var/named/slaves/abc.hosts.000
# mv /var/named/chroot/var/named/slaves/abc.hosts .
# cp /var/named/chroot/var/named/slaves/abc.hosts.rev /var/named/chroot/var/named/slaves/abc.hosts.rev.000
# mv /var/named/chroot/var/named/slaves/abc.hosts.rev .
# find /var/named/ -iname abc.hosts*

After:
# find /var/named/ -iname abc.hosts\*
/var/named/slaves/abc.hosts.rev
/var/named/slaves/abc.hosts
/var/named/chroot/var/named/slaves/abc.hosts.000
/var/named/chroot/var/named/slaves/abc.hosts.rev.000


But when I restarted named-chroot, my great plans got dashed:

# systemctl start named-chroot
...
Jun 24 20:35:45 rn6.abc.local bash[83464]: zone abc.local/IN: 
loading from master file /slaves/abc.hosts faile>
Jun 24 20:35:45 rn6.abc.local bash[83464]: zone abc.local/IN: not 
loaded due to errors.
Jun 24 20:35:45 rn6.abc.local bash[83464]: _default/abc.local/IN: 
file not found
Jun 24 20:35:45 rn6.abc.local bash[83464]: zone 
255.168.192.in-addr.arpa/IN: loading from master file /slaves/abc.host>
Jun 24 20:35:45 rn6.abc.local bash[83464]: zone 
255.168.192.in-addr.arpa/IN: not loaded due to errors.
Jun 24 20:35:45 rn6.abc.local bash[83464]: 
_default/255.168.192.in-addr.arpa/IN: file not found
Jun 24 20:35:45 rn6.abc.local bash[83464]: zone 
0.0.127.in-addr.arpa/IN: loaded serial 1997022700



named-chroot can't find abc.hosts or abc.hosts.rev in
   /var/named/chroot/var/named/slaves

And in case they had been copied somewhere else, I ran another find:
# find /var/named/ -iname abc.hosts\*
/var/named/slaves/abc.hosts.rev
/var/named/slaves/abc.hosts
/var/named/chroot/var/named/slaves/abc.hosts.000
/var/named/chroot/var/named/slaves/abc.hosts.rev.000

No change.

What am I missing?

Many thanks,
-T




Re: Learning ipv6 quirks

2021-06-24 Thread Robert McBroom via users

On 6/23/21 12:59 AM, Gordon Messmer wrote:

On 6/22/21 8:55 PM, Robert McBroom via users wrote:

On 6/21/21 11:41 PM, Gordon Messmer wrote:

On 6/21/21 6:17 AM, Robert McBroom via users wrote:
@RobertPC ~]# mount -v -t nfs [fd2e:cb3b:f005::ec1]:/mnt/HD/HD_a2/mcstuffy /mnt/mcstuffy

mount.nfs: timeout set for Mon Jun 21 06:42:25 2021
mount.nfs: trying text-based options 
'vers=4.2,addr=fd2e:cb3b:f005::ec1,clientaddr=fd2e:cb3b:f005::ec1'

mount.nfs: mount(2): Connection refused


1: Is the nfs port open on ipv6?  Use "ss -ln | grep :2049" and look 
for a listening port with an IPv6 address, like:

    tcp    LISTEN 0  64 [::]:2049 [::]:*
2: Does your firewall allow access to port 2049 on IPv6?  Use 
"firewall-cmd --list-services" and look for "nfs", or use "ip6tables 
-L" and look for the input chain for your default zone (possibly 
IN_public_allow).


root@MyCloudEX2Ultra ~ # ss -ln | grep :2049
-sh: ss: not found



In that case you probably only have busybox's netstat, and I don't 
know what flags it supports.  Try "netstat -tln" and if that doesn't 
work maybe "netstat -ln" to get a list of the listening ports.




root@MyCloudEX2Ultra ~ # ip6tables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
   tcp  anywhere anywhere tcp dpt:22 state 
NEW recent: SET name: SSH side: source mask: 
:::::::
SSHBFATK   tcp  anywhere anywhere tcp dpt:22 state 
NEW recent: UPDATE seconds: 600 hit_count: 201 name: SSH side: source 
mask: :::::::



The system's input chain should allow NFS traffic on IPv6 by virtue of 
the ACCEPT policy.  That suggests that the NFS service isn't listening 
on an IPv6 network socket.


@MyCloudEX2Ultra ~ # netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp    0  0 0.0.0.0:54553 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:46363 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:36507 0.0.0.0:*   LISTEN
tcp    0  0 127.0.0.1:2812 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:445 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:56864 0.0.0.0:*   LISTEN
tcp    0  0 127.0.0.1:8000 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:2049 0.0.0.0:*   LISTEN
tcp    0  0 192.168.1.239:49154 0.0.0.0:*   LISTEN
tcp    0  0 127.0.0.1:9091 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:3688 0.0.0.0:*   LISTEN
tcp    0  0 127.0.0.1:3306 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:53291 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:139 0.0.0.0:*   LISTEN
tcp    0  0 192.168.1.239:5357 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:111 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:37969 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:21 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:22 0.0.0.0:*   LISTEN
tcp    0  0 0.0.0.0:7575 0.0.0.0:*   LISTEN
tcp6   0  0 ::1:2812 :::*    LISTEN
tcp6   0  0 :::34109 :::*    LISTEN
tcp6   0  0 :::445 :::*    LISTEN
tcp6   0  0 :::8543 :::*    LISTEN
tcp6   0  0 :::56864 :::*    LISTEN
tcp6   0  0 :::49152 :::*    LISTEN
tcp6   0  0 :::8001 :::*    LISTEN
tcp6   0  0 :::8002 :::*    LISTEN
tcp6   0  0 :::8003 :::*    LISTEN
tcp6   0  0 :::6600 :::*    LISTEN
tcp6   0  0 :::3689 :::*    LISTEN
tcp6   0  0 :::139 :::*    LISTEN
tcp6   0  0 fe80::200:c0ff:fe3:5357 :::*    LISTEN
tcp6   0  0 :::4430 :::*    LISTEN
tcp6   0  0 :::111 :::*    LISTEN
tcp6   0  0 :::80 :::*    LISTEN
tcp6   0  0 :::21 :::*    LISTEN
tcp6   0  0 :::22 :::*    LISTEN

@RobertPC ~]# mount -v -t nfs -o vers=3,proto=tcp6 [2600:1702:4860:9dd0::2d]:/mnt/HD/HD_a2/mcstuffy /mnt/mcstuffy

mount.nfs: timeout set for Thu Jun 24 23:30:20 2021
Created symlink 
/run/systemd/system/remote-fs.target.wants/rpc-statd.service → 
/usr/lib/systemd/system/rpc-statd.service.
mount.nfs: trying text-based options 
'vers=3,proto=tcp6,addr=2600:1702:4860:9dd0::2d'

mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: trying text-based options 
'vers=3,proto=tcp6,addr=2600:1702:4860:9dd0::2d'

mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: portmap 

RedShift

2021-06-24 Thread ToddAndMargo via users

Hi All,

Fedora 34
Xfce 4.14
redshift-1.12-11.fc34.x86_64
redshift-gtk-1.12-11.fc34.x86_64

Redshift is all screwed up again. The old

   Unable to connect to GeoClue. Unable to get location from provider:

error (https://github.com/jonls/redshift/issues/318#issuecomment-865667340)
is back in full force.
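
(One hedged workaround, assuming the stock redshift.conf format and that only
the GeoClue lookup is broken: give Redshift a manual location so GeoClue is
never consulted. The coordinates below are placeholders:

$ cat ~/.config/redshift.conf
[redshift]
location-provider=manual

[manual]
lat=48.1
lon=11.6
)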


Are there any alternatives to Redshift?

Many thanks,
-T


--
~
When we ask for advice, we are usually looking for an accomplice.
   --  Charles Varlet de La Grange
~


Re: Learning ipv6 quirks

2021-06-24 Thread Ed Greshko



On 24/06/2021 19:59, Robert McBroom via users wrote:


With IPv4 the mount succeeds, apparently trying an alternate port and protocol automatically.

mount.nfs: trying text-based options 'addr=192.168.1.239'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 192.168.1.239 prog 13 vers 3 prot TCP port 2049
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: trying 192.168.1.239 prog 15 vers 3 prot UDP port 37811

Is there a way to tell ipv6 mount to use prot UDP port 37811?


You can try

[egreshko@meimei ~]$ sudo mount -t nfs -o vers=3 [2001:b030:112f:2::53]:/home/egreshko /mnt

[egreshko@meimei ~]$ df -T | grep nfs
nas:/volume1/aux                      nfs4 5621463168 1996292608 3625170560  36% /aux
nas:/volume1/misty                    nfs4 5621463168 1996292608 3625170560  36% /home/egreshko/misty
[2001:b030:112f:2::53]:/home/egreshko nfs    32504832   17629184   14537216  55% /mnt

Have you determined why nfs V4 isn't available?



Oh, BTW, I should have sent the -v version.

[egreshko@meimei ~]$ sudo mount -t nfs -o vers=3 -v [2001:b030:112f:2::53]:/home/egreshko /mnt
mount.nfs: timeout set for Fri Jun 25 07:02:41 2021
mount.nfs: trying text-based options 'vers=3,addr=2001:b030:112f:2::53'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 2001:b030:112f:2::53 prog 13 vers 3 prot TCP port 2049
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: trying 2001:b030:112f:2::53 prog 15 vers 3 prot UDP port 20048

You really should post examples of the IPv6 tests that fail, not the IPv4 examples that succeed.

--
Remind me to ignore comments which aren't germane to the thread.


Re: Can qemu be safely removed?

2021-06-24 Thread Erik P. Olsen
On 2021-06-24 at 12:00:40 Samuel Sieb wrote:

> On 6/24/21 11:57 AM, Erik P. Olsen wrote:
> > I don't believe I am using qemu for anything and would therefore prefer to 
> > remove it
> > entirely. Is it doable?  
> 
> I don't see any reason why not.  Did you try?  It's installed by default 
> for gnome-boxes.

OK, so I've removed it and so far no problem :-)


-- 
Erik P. Olsen - Copenhagen, Denmark
Fedora 34/64 bit xfce Claws-Mail POP3 Gramps 5.1.3 Bacula 11.0.5


Re: Can qemu be safely removed?

2021-06-24 Thread Samuel Sieb

On 6/24/21 11:57 AM, Erik P. Olsen wrote:

I don't believe I am using qemu for anything and would therefore prefer to 
remove it
entirely. Is it doable?


I don't see any reason why not.  Did you try?  It's installed by default 
for gnome-boxes.



Can qemu be safely removed?

2021-06-24 Thread Erik P. Olsen
Hello,

I don't believe I am using qemu for anything and would therefore prefer to 
remove it
entirely. Is it doable?

Thanks.

-- 
Erik P. Olsen - Copenhagen, Denmark
Fedora 34/64 bit xfce Claws-Mail POP3 Gramps 5.1.3 Bacula 11.0.5


Re: plymouth-quit-wait taking too long

2021-06-24 Thread Joe Zeff

On 6/24/21 5:15 AM, Christopher Ross wrote:

The top part of systemd-analyze blame is

1min 23.232s plymouth-quit-wait.service
  53.077s cs-firewall-bouncer.service
  52.219s dovecot.service
  26.525s crowdsec.service
  26.075s libvirtd.service
  25.910s postfix.service
  25.888s vmware.service
  25.870s nfs-server.service


This is just a guess, but it looks to me like the firewall-bouncer 
service is causing the delay and the others are just waiting for it to 
become available.
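
(A hedged way to test that guess, assuming systemd-analyze is available: ask
systemd which units the slow one was actually waiting on, for example

# systemd-analyze critical-chain plymouth-quit-wait.service
# systemd-analyze critical-chain cs-firewall-bouncer.service
)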



Re: plymouth-quit-wait taking too long

2021-06-24 Thread Patrick O'Callaghan
On Thu, 2021-06-24 at 14:21 +0100, Christopher Ross wrote:
> > I was under the impression that GDM + KDE didn't work well, maybe
> > try
> > using a KDE-friendly login manager like sddm?
> 
> 
> I only recently changed from sddm to gdm because the "switch user" 
> functionality has been removed from sddm in Fedora 34.

Same here.

poc


Re: plymouth-quit-wait taking too long

2021-06-24 Thread Christopher Ross



On 24/06/2021 14:05, Jonathan Billings wrote:

On Thu, Jun 24, 2021 at 01:34:27PM +0100, Christopher Ross wrote:

I should have mentioned that I'm running KDE not Gnome, but at this stage it
hasn't run. The 3½ minutes is the time it takes from power on to bring up
the GDM login screen.

I can't remember the exact message from abrtd. A good follow-up question is
probably: how can I get more information about the oopses?

Looking in /var/spool/abrt/oops-2021-06-24-09:48:20-3565-0/dmesg

It does seem the first oops was nvidia

...
[   14.920735] intel_rapl_common: RAPL package-0 domain package
locked by BIOS
[   14.956559] pktcdvd: pktcdvd0: writer mapped to sr0
[   15.073374] zram0: detected capacity change from 0 to 16777216
[   15.095100] Adding 8388604k swap on /dev/zram0.  Priority:100
extents:1 across:8388604k SSFS
[   15.374420] [drm] Initialized nvidia-drm 0.0.0 20160202 for
:01:00.0 on minor 1
[   15.684579] nvidia-gpu :01:00.3: i2c timeout error e000
[   15.684583] ucsi_ccg 8-0008: i2c_transfer failed -110
[   15.684585] ucsi_ccg 8-0008: ucsi_ccg_init failed - -110
[   15.684590] ucsi_ccg: probe of 8-0008 failed with error -110
[   40.291071] watchdog: BUG: soft lockup - CPU#3 stuck for 26s!
[plymouthd:445]
[   40.291074] Modules linked in: intel_rapl_msr intel_rapl_common
at24 mei_hdcp iTCO_wdt intel_pmc_bxt iTCO_vendor_support ucsi_ccg
typec_ucsi typec pktcdvd x86_pkg_temp_thermal intel_powerclamp
coretemp kvm_intel kvm irqbypass rapl intel_cstate intel_uncore
raid0 eeepc_wmi asus_wmi sparse_keymap rfkill wmi_bmof pcspkr
snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio
nvidia_drm(POE) joydev snd_hda_codec_hdmi nvidia_modeset(POE)
snd_hda_intel i2c_i801 snd_intel_dspcfg i2c_smbus snd_intel_sdw_acpi
snd_usb_audio apple_mfi_fastcharge snd_hda_codec nvidia_uvm(POE)
snd_usbmidi_lib snd_hda_core snd_rawmidi mc snd_hwdep snd_seq
snd_seq_device lpc_ich snd_pcm nvidia(POE) mei_me snd_timer snd mei
i2c_nvidia_gpu soundcore nfsd auth_rpcgss nfs_acl lockd grace
binfmt_misc sunrpc nfs_ssc zram ip_tables i915 crct10dif_pclmul
crc32_pclmul crc32c_intel cdc_mbim cdc_wdm e1000e mxm_wmi
i2c_algo_bit ghash_clmulni_intel drm_kms_helper cec cdc_ncm
cdc_ether drm usbnet mii wmi video fuse
[   40.291104] CPU: 3 PID: 445 Comm: plymouthd Tainted: P
OE 5.12.11-300.fc34.x86_64 #1
[   40.291106] Hardware name: System manufacturer System Product
Name/MAXIMUS V FORMULA, BIOS 1903 08/19/2013
[   40.291107] RIP: 0010:os_io_read_dword+0x8/0x10 [nvidia]
[   40.291311] Code: 00 00 0f 1f 44 00 00 89 fa ec c3 0f 1f 80 00 00
00 00 0f 1f 44 00 00 89 fa 66 ed c3 66 0f 1f 44 00 00 0f 1f 44 00 00
89 fa ed  0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 85 ff 75 0c 48
8b 05 c7
[   40.291312] RSP: 0018:aeb9c05cf7d8 EFLAGS: 0202
[   40.291313] RAX: 03763580 RBX: 17d3 RCX:
000c
[   40.291314] RDX: e00c RSI: 000c40e4 RDI:
e00c
[   40.291314] RBP: 8c7220e12b10 R08: c2aa5380 R09:
0282
[   40.291315] R10: 0202 R11: 0040 R12:
8c7220e12b3c
[   40.291316] R13: 8c7220e12b38 R14: c000 R15:
c000
[   40.291316] FS:  7f27013a4800() GS:8c790fec()
knlGS:
[   40.291317] CS:  0010 DS:  ES:  CR0: 80050033
[   40.291318] CR2: 7ffca2940078 CR3: 00010a834001 CR4:
001706e0
[   40.291319] Call Trace:
[   40.291321]  _nv041000rm+0x4c/0x70 [nvidia]
[   40.291545]  ? _nv040998rm+0x30/0x30 [nvidia]
[   40.291768]  ? _nv000834rm+0x4f/0x130 [nvidia]

I agree, it looks like an nvidia-related issue.  What kernel arguments
do you have? (Look at /proc/cmdline, or check the GRUB entry when you're
booting.)  Also, it looks like the nvidia modeset driver was running,
which might be having problems with plymouth?

You could try booting without the 'rhgb quiet' arguments in the kernel
line in GRUB, so you can see if disabling plymouth helps, and it will
show you what is actually happening during boot.

I was under the impression that GDM + KDE didn't work well, maybe try
using a KDE-friendly login manager like sddm?



I only recently changed from sddm to gdm because the "switch user" 
functionality has been removed from sddm in Fedora 34. We do share this 
computer.


That change didn't fix the boot issue either.

Regards,
Chris R.

Re: plymouth-quit-wait taking too long

2021-06-24 Thread Jonathan Billings
On Thu, Jun 24, 2021 at 01:34:27PM +0100, Christopher Ross wrote:
> I should have mentioned that I'm running KDE not Gnome, but at this stage it
> hasn't run. The 3½ minutes is the time it takes from power on to bring up
> the GDM login screen.
> 
> I can't remember the exact message from abrtd. A good follow-up question is
> probably: how can I get more information about the oopses?
> 
> Looking in /var/spool/abrt/oops-2021-06-24-09:48:20-3565-0/dmesg
> 
> It does seem the first oops was nvidia
> 
>...
>[   14.920735] intel_rapl_common: RAPL package-0 domain package
>locked by BIOS
>[   14.956559] pktcdvd: pktcdvd0: writer mapped to sr0
>[   15.073374] zram0: detected capacity change from 0 to 16777216
>[   15.095100] Adding 8388604k swap on /dev/zram0.  Priority:100
>extents:1 across:8388604k SSFS
>[   15.374420] [drm] Initialized nvidia-drm 0.0.0 20160202 for
>:01:00.0 on minor 1
>[   15.684579] nvidia-gpu :01:00.3: i2c timeout error e000
>[   15.684583] ucsi_ccg 8-0008: i2c_transfer failed -110
>[   15.684585] ucsi_ccg 8-0008: ucsi_ccg_init failed - -110
>[   15.684590] ucsi_ccg: probe of 8-0008 failed with error -110
>[   40.291071] watchdog: BUG: soft lockup - CPU#3 stuck for 26s!
>[plymouthd:445]
>[   40.291074] Modules linked in: intel_rapl_msr intel_rapl_common
>at24 mei_hdcp iTCO_wdt intel_pmc_bxt iTCO_vendor_support ucsi_ccg
>typec_ucsi typec pktcdvd x86_pkg_temp_thermal intel_powerclamp
>coretemp kvm_intel kvm irqbypass rapl intel_cstate intel_uncore
>raid0 eeepc_wmi asus_wmi sparse_keymap rfkill wmi_bmof pcspkr
>snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio
>nvidia_drm(POE) joydev snd_hda_codec_hdmi nvidia_modeset(POE)
>snd_hda_intel i2c_i801 snd_intel_dspcfg i2c_smbus snd_intel_sdw_acpi
>snd_usb_audio apple_mfi_fastcharge snd_hda_codec nvidia_uvm(POE)
>snd_usbmidi_lib snd_hda_core snd_rawmidi mc snd_hwdep snd_seq
>snd_seq_device lpc_ich snd_pcm nvidia(POE) mei_me snd_timer snd mei
>i2c_nvidia_gpu soundcore nfsd auth_rpcgss nfs_acl lockd grace
>binfmt_misc sunrpc nfs_ssc zram ip_tables i915 crct10dif_pclmul
>crc32_pclmul crc32c_intel cdc_mbim cdc_wdm e1000e mxm_wmi
>i2c_algo_bit ghash_clmulni_intel drm_kms_helper cec cdc_ncm
>cdc_ether drm usbnet mii wmi video fuse
>[   40.291104] CPU: 3 PID: 445 Comm: plymouthd Tainted: P  
>OE 5.12.11-300.fc34.x86_64 #1
>[   40.291106] Hardware name: System manufacturer System Product
>Name/MAXIMUS V FORMULA, BIOS 1903 08/19/2013
>[   40.291107] RIP: 0010:os_io_read_dword+0x8/0x10 [nvidia]
>[   40.291311] Code: 00 00 0f 1f 44 00 00 89 fa ec c3 0f 1f 80 00 00
>00 00 0f 1f 44 00 00 89 fa 66 ed c3 66 0f 1f 44 00 00 0f 1f 44 00 00
>89 fa ed  0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 85 ff 75 0c 48
>8b 05 c7
>[   40.291312] RSP: 0018:aeb9c05cf7d8 EFLAGS: 0202
>[   40.291313] RAX: 03763580 RBX: 17d3 RCX:
>000c
>[   40.291314] RDX: e00c RSI: 000c40e4 RDI:
>e00c
>[   40.291314] RBP: 8c7220e12b10 R08: c2aa5380 R09:
>0282
>[   40.291315] R10: 0202 R11: 0040 R12:
>8c7220e12b3c
>[   40.291316] R13: 8c7220e12b38 R14: c000 R15:
>c000
>[   40.291316] FS:  7f27013a4800() GS:8c790fec()
>knlGS:
>[   40.291317] CS:  0010 DS:  ES:  CR0: 80050033
>[   40.291318] CR2: 7ffca2940078 CR3: 00010a834001 CR4:
>001706e0
>[   40.291319] Call Trace:
>[   40.291321]  _nv041000rm+0x4c/0x70 [nvidia]
>[   40.291545]  ? _nv040998rm+0x30/0x30 [nvidia]
>[   40.291768]  ? _nv000834rm+0x4f/0x130 [nvidia]

I agree, it looks like an nvidia-related issue.  What kernel arguments
do you have? (Look at /proc/cmdline, or check the GRUB entry when you're
booting.)  Also, it looks like the nvidia modeset driver was running,
which might be having problems with plymouth?

You could try booting without the 'rhgb quiet' arguments in the kernel
line in GRUB, so you can see if disabling plymouth helps, and it will
show you what is actually happening during boot.
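
(A hedged way to make that change persistent on Fedora, assuming grubby is
installed; it edits every boot entry, so note the original arguments first:

# grubby --update-kernel=ALL --remove-args="rhgb quiet"

They can be restored later with --args="rhgb quiet".)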

I was under the impression that GDM + KDE didn't work well, maybe try
using a KDE-friendly login manager like sddm?

-- 
Jonathan Billings 


Re: plymouth-quit-wait taking too long

2021-06-24 Thread Christopher Ross



On 24/06/2021 13:11, Jonathan Billings wrote:

On Thu, Jun 24, 2021 at 12:15:48PM +0100, Christopher Ross wrote:

Fedora 34 on my i7 with 32G RAM and nvidia RTX2060 card takes minutes to
boot, and when it finally does there are a number of "something went wrong"
notifications. How best can I diagnose and fix this so that it boots quickly
and without errors?

Do you have the RPMFusion nvidia packages installed?  Are you using
any 3rd-party nvidia drivers?  Or are you using the nouveau driver?


Yes, RPMFusion is enabled and nvidia drivers installed.

The nvidia driver might be compiling on boot (dkms) which takes a long
time, and if it fails, will cause GL issues that can break 'nautilus',
and if 'nautilus' crashes, the GNOME session will do the 'Something
went wrong' alert.


It happens every boot, not just when there is a kernel or driver update. 
In that instance it should not need to recompile.


I should have mentioned that I'm running KDE not Gnome, but at this 
stage it hasn't run. The 3½ minutes is the time it takes from power on 
to bring up the GDM login screen.


I can't remember the exact message from abrtd. A good follow-up question is 
probably: how can I get more information about the oopses?


Looking in /var/spool/abrt/oops-2021-06-24-09:48:20-3565-0/dmesg

It does seem the first oops was nvidia

   ...
   [   14.920735] intel_rapl_common: RAPL package-0 domain package
   locked by BIOS
   [   14.956559] pktcdvd: pktcdvd0: writer mapped to sr0
   [   15.073374] zram0: detected capacity change from 0 to 16777216
   [   15.095100] Adding 8388604k swap on /dev/zram0.  Priority:100
   extents:1 across:8388604k SSFS
   [   15.374420] [drm] Initialized nvidia-drm 0.0.0 20160202 for
   :01:00.0 on minor 1
   [   15.684579] nvidia-gpu :01:00.3: i2c timeout error e000
   [   15.684583] ucsi_ccg 8-0008: i2c_transfer failed -110
   [   15.684585] ucsi_ccg 8-0008: ucsi_ccg_init failed - -110
   [   15.684590] ucsi_ccg: probe of 8-0008 failed with error -110
   [   40.291071] watchdog: BUG: soft lockup - CPU#3 stuck for 26s!
   [plymouthd:445]
   [   40.291074] Modules linked in: intel_rapl_msr intel_rapl_common
   at24 mei_hdcp iTCO_wdt intel_pmc_bxt iTCO_vendor_support ucsi_ccg
   typec_ucsi typec pktcdvd x86_pkg_temp_thermal intel_powerclamp
   coretemp kvm_intel kvm irqbypass rapl intel_cstate intel_uncore
   raid0 eeepc_wmi asus_wmi sparse_keymap rfkill wmi_bmof pcspkr
   snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio
   nvidia_drm(POE) joydev snd_hda_codec_hdmi nvidia_modeset(POE)
   snd_hda_intel i2c_i801 snd_intel_dspcfg i2c_smbus snd_intel_sdw_acpi
   snd_usb_audio apple_mfi_fastcharge snd_hda_codec nvidia_uvm(POE)
   snd_usbmidi_lib snd_hda_core snd_rawmidi mc snd_hwdep snd_seq
   snd_seq_device lpc_ich snd_pcm nvidia(POE) mei_me snd_timer snd mei
   i2c_nvidia_gpu soundcore nfsd auth_rpcgss nfs_acl lockd grace
   binfmt_misc sunrpc nfs_ssc zram ip_tables i915 crct10dif_pclmul
   crc32_pclmul crc32c_intel cdc_mbim cdc_wdm e1000e mxm_wmi
   i2c_algo_bit ghash_clmulni_intel drm_kms_helper cec cdc_ncm
   cdc_ether drm usbnet mii wmi video fuse
   [   40.291104] CPU: 3 PID: 445 Comm: plymouthd Tainted: P  
   OE 5.12.11-300.fc34.x86_64 #1
   [   40.291106] Hardware name: System manufacturer System Product
   Name/MAXIMUS V FORMULA, BIOS 1903 08/19/2013
   [   40.291107] RIP: 0010:os_io_read_dword+0x8/0x10 [nvidia]
   [   40.291311] Code: 00 00 0f 1f 44 00 00 89 fa ec c3 0f 1f 80 00 00
   00 00 0f 1f 44 00 00 89 fa 66 ed c3 66 0f 1f 44 00 00 0f 1f 44 00 00
   89 fa ed  0f 1f 80 00 00 00 00 0f 1f 44 00 00 48 85 ff 75 0c 48
   8b 05 c7
   [   40.291312] RSP: 0018:aeb9c05cf7d8 EFLAGS: 0202
   [   40.291313] RAX: 03763580 RBX: 17d3 RCX:
   000c
   [   40.291314] RDX: e00c RSI: 000c40e4 RDI:
   e00c
   [   40.291314] RBP: 8c7220e12b10 R08: c2aa5380 R09:
   0282
   [   40.291315] R10: 0202 R11: 0040 R12:
   8c7220e12b3c
   [   40.291316] R13: 8c7220e12b38 R14: c000 R15:
   c000
   [   40.291316] FS:  7f27013a4800() GS:8c790fec()
   knlGS:
   [   40.291317] CS:  0010 DS:  ES:  CR0: 80050033
   [   40.291318] CR2: 7ffca2940078 CR3: 00010a834001 CR4:
   001706e0
   [   40.291319] Call Trace:
   [   40.291321]  _nv041000rm+0x4c/0x70 [nvidia]
   [   40.291545]  ? _nv040998rm+0x30/0x30 [nvidia]
   [   40.291768]  ? _nv000834rm+0x4f/0x130 [nvidia]
   ...

The top part of systemd-analyze blame is

1min 23.232s plymouth-quit-wait.service
  53.077s cs-firewall-bouncer.service
  52.219s dovecot.service
  26.525s crowdsec.service

It appears you're using some sort of 3rd-party firewall driver called
'crowdsec'.  Is that the problem?  It does seem to be up there,
although it could be waiting for something else to start.



Re: Learning ipv6 quirks

2021-06-24 Thread Ed Greshko

On 24/06/2021 19:59, Robert McBroom via users wrote:


With IPv4 the mount succeeds, apparently trying an alternate port and protocol automatically.

mount.nfs: trying text-based options 'addr=192.168.1.239'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 192.168.1.239 prog 13 vers 3 prot TCP port 2049
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: trying 192.168.1.239 prog 15 vers 3 prot UDP port 37811

Is there a way to tell ipv6 mount to use prot UDP port 37811?


You can try

[egreshko@meimei ~]$ sudo mount -t nfs -o vers=3 [2001:b030:112f:2::53]:/home/egreshko /mnt

[egreshko@meimei ~]$ df -T | grep nfs
nas:/volume1/aux                      nfs4 5621463168 1996292608 3625170560  36% /aux
nas:/volume1/misty                    nfs4 5621463168 1996292608 3625170560  36% /home/egreshko/misty
[2001:b030:112f:2::53]:/home/egreshko nfs    32504832   17629184   14537216  55% /mnt

Have you determined why nfs V4 isn't available?
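
(A hedged way to check that from the client, assuming rpcinfo is installed
there; it lists which NFS versions and transports the NAS registers. The IPv6
address is a placeholder for the NAS address used in the earlier attempts:

$ rpcinfo -p 192.168.1.239
$ rpcinfo -T tcp6 <NAS-IPv6-address> nfs
)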

--
Remind me to ignore comments which aren't germane to the thread.


Re: plymouth-quit-wait taking too long

2021-06-24 Thread Jonathan Billings
On Thu, Jun 24, 2021 at 12:15:48PM +0100, Christopher Ross wrote:
> Fedora 34 on my i7 with 32G RAM and nvidia RTX2060 card takes minutes to
> boot, and when it finally does there are a number of "something went wrong"
> notifications. How best can I diagnose and fix this so that it boots quickly
> and without errors?

Do you have the RPMFusion nvidia packages installed?  Are you using
any 3rd-party nvidia drivers?  Or are you using the nouveau driver?

The nvidia driver might be compiling on boot (dkms) which takes a long
time, and if it fails, will cause GL issues that can break 'nautilus',
and if 'nautilus' crashes, the GNOME session will do the 'Something
went wrong' alert.
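
(On Fedora the RPMFusion packages normally rebuild the module through akmods
rather than dkms -- akmods.service appears in the blame list below -- so a
hedged check for whether a rebuild ran during this boot would be:

# journalctl -b -u akmods.service
)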

> The top part of systemd-analyze blame is
> 
>1min 23.232s plymouth-quit-wait.service
>  53.077s cs-firewall-bouncer.service
>  52.219s dovecot.service
>  26.525s crowdsec.service

It appears you're using some sort of 3rd-party firewall driver called
'crowdsec'.  Is that the problem?  It does seem to be up there,
although it could be waiting for something else to start.


-- 
Jonathan Billings 


Re: Learning ipv6 quirks

2021-06-24 Thread Robert McBroom via users

On 6/23/21 12:59 AM, Gordon Messmer wrote:

On 6/22/21 8:55 PM, Robert McBroom via users wrote:

On 6/21/21 11:41 PM, Gordon Messmer wrote:

On 6/21/21 6:17 AM, Robert McBroom via users wrote:
@RobertPC ~]# mount -v -t nfs [fd2e:cb3b:f005::ec1]:/mnt/HD/HD_a2/mcstuffy /mnt/mcstuffy

mount.nfs: timeout set for Mon Jun 21 06:42:25 2021
mount.nfs: trying text-based options 
'vers=4.2,addr=fd2e:cb3b:f005::ec1,clientaddr=fd2e:cb3b:f005::ec1'

mount.nfs: mount(2): Connection refused


1: Is the nfs port open on ipv6?  Use "ss -ln | grep :2049" and look 
for a listening port with an IPv6 address, like:

    tcp    LISTEN 0  64 [::]:2049 [::]:*
2: Does your firewall allow access to port 2049 on IPv6?  Use 
"firewall-cmd --list-services" and look for "nfs", or use "ip6tables 
-L" and look for the input chain for your default zone (possibly 
IN_public_allow).


root@MyCloudEX2Ultra ~ # ss -ln | grep :2049
-sh: ss: not found



In that case you probably only have busybox's netstat, and I don't 
know what flags it supports.  Try "netstat -tln" and if that doesn't 
work maybe "netstat -ln" to get a list of the listening ports.




root@MyCloudEX2Ultra ~ # ip6tables -L
Chain INPUT (policy ACCEPT)
target prot opt source   destination
   tcp  anywhere anywhere tcp dpt:22 state 
NEW recent: SET name: SSH side: source mask: 
:::::::
SSHBFATK   tcp  anywhere anywhere tcp dpt:22 state 
NEW recent: UPDATE seconds: 600 hit_count: 201 name: SSH side: source 
mask: :::::::



The system's input chain should allow NFS traffic on IPv6 by virtue of 
the ACCEPT policy.  That suggests that the NFS service isn't listening 
on an IPv6 network socket.


With IPv4 the mount succeeds, apparently trying an alternate port and protocol 
automatically.


mount.nfs: trying text-based options 'addr=192.168.1.239'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 192.168.1.239 prog 13 vers 3 prot TCP port 2049
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: trying 192.168.1.239 prog 15 vers 3 prot UDP port 37811

Is there a way to tell ipv6 mount to use prot UDP port 37811?
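
(Not an answer from the thread, but for what it's worth: nfs(5) documents
proto=, mountport= and mountproto= options, so a hedged attempt -- assuming
37811 is still the mountd port on the NAS, which can change across reboots --
might look like:

# mount -v -t nfs -o vers=3,proto=tcp6,mountport=37811,mountproto=udp6 [fd2e:cb3b:f005::ec1]:/mnt/HD/HD_a2/mcstuffy /mnt/mcstuffy
)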


Re: Long wait for start job

2021-06-24 Thread Christopher Ross



On 24/06/2021 12:20, Patrick O'Callaghan wrote:

On Thu, 2021-06-24 at 11:42 +0100, Christopher Ross wrote:


How can I go about diagnosing and fixing that?
In 2021 i7 machines should not be taking literally minutes to boot.
On my system (also an i7) that takes only 5s. Presumably something is
holding it up. Try running 'systemd-analyze plot > trace.svg' followed
by 'eog trace.svg' and look at the resulting chart.



I have started a new thread, per Ed and George's suggestions.

Regards,
Chris R.



[389-users] Announcing 389 Directory Server 2.0.6

2021-06-24 Thread Thierry Bordaz


   389 Directory Server 2.0.6

The 389 Directory Server team is proud to announce 389-ds-base version 2.0.6

Fedora packages are available on Fedora 34 and Rawhide

Fedora 34:

Koji - https://koji.fedoraproject.org/koji/taskinfo?taskID=70696310
Bodhi - https://bodhi.fedoraproject.org/updates/FEDORA-2021-6cec1584ab

Rawhide:

Koji - https://koji.fedoraproject.org/koji/taskinfo?taskID=70730267


The new packages and versions are:

 * 389-ds-base-2.0.6-1

Source tarballs are available from the Download 389-ds-base Source page.




 Highlights in 2.0.6

 * Bug & security fixes


 Installation and Upgrade

See the Download page for information about setting up your yum repositories.


To install the server use *dnf install 389-ds-base*

To install the Cockpit UI plugin use *dnf install cockpit-389-ds*

After rpm install completes, run *dscreate interactive*
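
(A condensed, hedged sketch of the above on Fedora 34; the instance name is a
placeholder for whatever dscreate was told to create:

# dnf install 389-ds-base cockpit-389-ds
# dscreate interactive
# dsctl <instance-name> status
)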

For upgrades, simply install the new packages. There are no further steps required.

See the Install Guide for more information about the initial installation and setup.

See Source for information about source tarballs and SCM (git) access.



 Feedback

We are very interested in your feedback!

Please provide feedback and comments to the 389-users mailing list: 
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject.org 



If you find a bug, or would like to see a new feature, file it in our 
GitHub project: https://github.com/389ds/389-ds-base 



 * Bump version to 2.0.6
 * Issue 4803 - Improve DB Locks Monitoring Feature Descriptions
 * Issue 4803 - Improve DB Locks Monitoring Feature Descriptions (#4810)
 * Issue 4169 - UI - Migrate Typeaheads to PF4 (#4808)
 * Issue 4414 - disk monitoring - prevent division by zero crash
 * Issue 4788 - CLI should support Temporary Password Rules
   attributes (#4793)
 * Issue 4656 - Fix replication plugin rename dependency issues
 * Issue 4656 - replication name change upgrade code causes crash with
   dynamic plugins
 * Issue 4506 - Improve SASL logging
 * Issue 4709 - Fix double free in dbscan
 * Issue 4093 - Fix MEP test case
 * Issue 4747 - Remove unstable/unstatus tests (followup) (#4809)
 * Issue 4791 - Missing dependency for RetroCL RFE (#4792)
 * Issue 4794 - BUG - don’t capture container output (#4798)
 * Issue 4593 - Log an additional message if the server certificate
   nickname doesn’t match nsSSLPersonalitySSL value
 * Issue 4797 - ACL IP ADDRESS evaluation may corrupt
   c_isreplication_session connection flags (#4799)
 * Issue 4169 - UI Migrate checkbox to PF4 (#4769)
 * Issue 4447 - Crash when the Referential Integrity log is manually edited
 * Issue 4773 - Add CI test for DNA interval assignment
 * Issue 4789 - Temporary password rules are not enforce with local
   password policy (#4790)
 * Issue 4379 - fixing regression in test_info_disclosure
 * Issue 4379 - Allow more than 1 empty AttributeDescription for
   ldapsearch, without the risk of denial of service
 * Issue 4379 - Allow more than 1 empty AttributeDescription for
   ldapsearch, without the risk of denial of service
 * Issue 4575 Update test docstrings metadata
 * Issue 4753 - Adjust our tests to 389-ds-base-snmp missing in RHEL
   9 Appstream
 * removed the snmp_present() from utils.py as we have
   get_rpm_version() in conftest.py
 * Issue 4753 - Adjust our tests to 389-ds-base-snmp missing in RHEL
   9 Appstream



Re: Long wait for start job

2021-06-24 Thread Patrick O'Callaghan
On Thu, 2021-06-24 at 11:42 +0100, Christopher Ross wrote:
> 
> 
> On 12/06/2021 20:45, Joe Zeff wrote:
> > On 6/12/21 11:54 AM, Patrick O'Callaghan wrote:
> > > Not my case. I don't have systemd-udev-settle.service. This is a
> > > fresh
> > > install of F34.
> > 
> > Ok, you can still use systemd-analyze blame to find out what's
> > causing 
> > the delay.
> 
> On my F34 boot systemd-udev-settle.service takes only 1.921s but the 
> real biggie is
> 
>  1min 23.232s plymouth-quit-wait.service
> 
> 
> How can I go about diagnosing and fixing that?
> In 2021 i7 machines should not be taking literally minutes to boot.

On my system (also an i7) that takes only 5s. Presumably something is
holding it up. Try running 'systemd-analyze plot > trace.svg' followed
by 'eog trace.svg' and look at the resulting chart.

poc


plymouth-quit-wait taking too long

2021-06-24 Thread Christopher Ross


Dear fellow Fedorans,

Fedora 34 on my i7 with 32G RAM and nvidia RTX2060 card takes minutes to 
boot, and when it finally does there are a number of "something went 
wrong" notifications. How best can I diagnose and fix this so that it 
boots quickly and without errors?


CPU: Quad Core Intel Core i7-3770K (-MT MCP-) speed/min/max: 
4324/1600/4400 MHz Kernel: 5.12.11-300.fc34.x86_64 x86_64
Up: 2h 30m Mem: 8717.9/31785.3 MiB (27.4%) Storage: 25.69 TiB (59.7% 
used) Procs: 391 Shell: Bash inxi: 3.3.03



The top part of systemd-analyze blame is

   1min 23.232s plymouth-quit-wait.service
 53.077s cs-firewall-bouncer.service
 52.219s dovecot.service
 26.525s crowdsec.service
 26.075s libvirtd.service
 25.910s postfix.service
 25.888s vmware.service
 25.870s nfs-server.service
  8.275s network.service
  4.279s abrtd.service
  4.111s smartd.service
  1.921s systemd-udev-settle.service
  1.355s lvm2-monitor.service
  1.309s user@1006.service
  1.188s
   
systemd-fsck@dev-disk-by\x2duuid-d91d02e3\x2dce79\x2d4aa8\x2d9446\x2df531c05ca7a7.service
  1.145s udisks2.service
   711ms akmods.service
   602ms initrd-switch-root.service



Many thanks,
Chris R.




Re: Long wait for start job

2021-06-24 Thread George N. White III
On Thu, 24 Jun 2021 at 07:42, Christopher Ross 
wrote:

>
>
> On 12/06/2021 20:45, Joe Zeff wrote:
>
> On 6/12/21 11:54 AM, Patrick O'Callaghan wrote:
>
> Not my case. I don't have systemd-udev-settle.service. This is a fresh
> install of F34.
>
>
> Ok, you can still use systemd-analyze blame to find out what's causing the
> delay.
>
>
> On my F34 boot systemd-udev-settle.service takes only 1.921s but the real
> biggie is
>
> 1min 23.232s plymouth-quit-wait.service
>
>
> How can I go about diagnosing and fixing that?
> In 2021 i7 machines should not be taking literally minutes to boot.
>

Plymouth is not responsible for the delay -- it provides the splash screen
during boot. This was discussed at:
<https://askubuntu.com/questions/1119167/slow-boot-issue-due-to-plymouth-quit-wait-service-ubuntu-18-04>
Unlike many Ubuntu discussions, this one is excellent.   A good original
post with lots of detail, and a response
showing how to analyze what plymouth is doing.
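
(Locally, a hedged first step for that analysis is to look at what the unit
itself logged during the boot in question:

$ journalctl -b -u plymouth-quit-wait.service
)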

Please start a new thread and provide more detail.  Are there long pauses
before the splash screen appears
or between ending the splash screen and the login screen?

-- 
George N. White III


Re: Long wait for start job

2021-06-24 Thread Ed Greshko

On 24/06/2021 18:42, Christopher Ross wrote:



On 12/06/2021 20:45, Joe Zeff wrote:

On 6/12/21 11:54 AM, Patrick O'Callaghan wrote:

Not my case. I don't have systemd-udev-settle.service. This is a fresh
install of F34.


Ok, you can still use systemd-analyze blame to find out what's causing the 
delay.


On my F34 boot systemd-udev-settle.service takes only 1.921s but the real 
biggie is

     1min 23.232s plymouth-quit-wait.service

How can I go about diagnosing and fixing that?
In 2021 i7 machines should not be taking literally minutes to boot.



Suggest you start a fresh thread.  You'd be hijacking this thread which is 
already a kinda
hijack.  So, send an email to this list with (suggestion) the Subject of

plymouth-quit-wait taking too long


--
Remind me to ignore comments which aren't germane to the thread.


Re: Long wait for start job

2021-06-24 Thread Christopher Ross



On 12/06/2021 20:45, Joe Zeff wrote:

On 6/12/21 11:54 AM, Patrick O'Callaghan wrote:

Not my case. I don't have systemd-udev-settle.service. This is a fresh
install of F34.


Ok, you can still use systemd-analyze blame to find out what's causing 
the delay.


On my F34 boot systemd-udev-settle.service takes only 1.921s but the 
real biggie is


    1min 23.232s plymouth-quit-wait.service


How can I go about diagnosing and fixing that?
In 2021 i7 machines should not be taking literally minutes to boot.

Thanks,
Chris R.
