Re: Xorg crashes in KVM

2021-07-09 Thread Ed Greshko

On 10/07/2021 08:44, Eyal Lebedinsky wrote:

Recently I started having Xorg crash while using Firefox.
This is F34 in a libvirt KVM guest; the host (also F34) uses the internal i915 video.
The kernel is 5.12.14-300.fc34.x86_64, but I saw this with the earlier 5.12.12/13 kernels as well.


I've not seen any crashes in my VMs.  The host is F34 and runs multiple guests, including
F34 guests.

As a test, have you tried changing your guest video from QXL to Virtio?
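
For example, with virt-manager it should be the "Video" item in the guest's hardware details; from the command line, a rough sketch (the domain name "f34guest" is only a placeholder for your actual guest name):

   $ virsh edit f34guest
       (in the <devices> section, change the existing video model, e.g.)
       <video>
         <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
       </video>
       (to)
       <video>
         <model type='virtio' heads='1' primary='yes'/>
       </video>
   $ virsh shutdown f34guest
   $ virsh start f34guest      (run once the guest has actually powered off)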

--
Remind me to ignore comments which aren't germane to the thread.


Xorg crashes in KVM

2021-07-09 Thread Eyal Lebedinsky

Recently I started having Xorg crash while using Firefox.
This is F34 in a libvirt KVM guest; the host (also F34) uses the internal i915 video.
The kernel is 5.12.14-300.fc34.x86_64, but I saw this with the earlier 5.12.12/13 kernels as well.

an example crash:
=
### note: many repeats of the following line precede the crash ###
Jul  9 16:28:47 e4 kernel: f 4026531864#104645: failed to wait on release 24 after spincount 301
Jul  9 16:28:47 e4 kernel: [TTM] Buffer eviction failed
Jul  9 16:28:47 e4 kernel: qxl :00:01.0: object_init failed for (458752, 0x0001)
Jul  9 16:28:47 e4 kernel: [drm:qxl_gem_object_create [qxl]] *ERROR* Failed to allocate GEM object (454760, 1, 4096, -12)
Jul  9 16:28:47 e4 kernel: [drm:qxl_alloc_ioctl [qxl]] *ERROR* qxl_alloc_ioctl: failed to create gem ret=-12
Jul  9 16:28:47 e4 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jul  9 16:28:47 e4 kernel: audit: type=1701 audit(1625812127.806:325): auid=4294967295 uid=0 gid=0 ses=4294967295 subj=kernel pid=1035 comm="Xorg" exe="/usr/libexec/Xorg" sig=6 res=1
Jul  9 16:28:47 e4 audit[1035]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 subj=kernel pid=1035 comm="Xorg" exe="/usr/libexec/Xorg" sig=6 res=1
Jul  9 16:28:48 e4 systemd[1]: Created slice system-systemd\x2dcoredump.slice.
Jul  9 16:28:48 e4 audit: BPF prog-id=49 op=LOAD
Jul  9 16:28:48 e4 kernel: audit: type=1334 audit(1625812128.526:326): prog-id=49 op=LOAD
Jul  9 16:28:48 e4 audit: BPF prog-id=50 op=LOAD
Jul  9 16:28:48 e4 audit: BPF prog-id=51 op=LOAD
Jul  9 16:28:48 e4 kernel: audit: type=1334 audit(1625812128.528:327): prog-id=50 op=LOAD
Jul  9 16:28:48 e4 kernel: audit: type=1334 audit(1625812128.528:328): prog-id=51 op=LOAD
Jul  9 16:28:48 e4 systemd[1]: Started Process Core Dump (PID 134753/UID 0).
Jul  9 16:28:48 e4 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-coredump@0-134753-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  9 16:28:48 e4 kernel: audit: type=1130 audit(1625812128.593:329): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-coredump@0-134753-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul  9 16:28:52 e4 systemd-coredump[134754]: Process 1035 (Xorg) of user 0 dumped core.

Stack trace of thread 1035:
#0  0x7f3c01fda2a2 n/a (libc.so.6 + 0x3d2a2)
#1  0x7f3c01fc38a4 n/a (libc.so.6 + 0x268a4)
#2  0x5595780f7170 OsAbort (Xorg + 0x1ba170)
#3  0x5595780ff446 FatalError (Xorg + 0x1c2446)
#4  0x5595780f5d3a OsSigHandler (Xorg + 0x1b8d3a)
#5  0x7f3c0217fa20 __restore_rt (libpthread.so.0 + 0x13a20)
#6  0x7f3c01716a35 qxl_image_create (qxl_drv.so + 0x8a35)
#7  0x7f3c01716e66 qxl_surface_put_image_for_reals (qxl_drv.so + 0x8e66)
#8  0x7f3c01723350 uxa_copy_n_to_n (qxl_drv.so + 0x15350)
#9  0x5595780d475b miCopyRegion (Xorg + 0x19775b)
#10 0x5595780d735c miDoCopy (Xorg + 0x19a35c)
#11 0x7f3c01723552 uxa_copy_area (qxl_drv.so + 0x15552)
#12 0x5595780831e9 damageCopyArea (Xorg + 0x1461e9)
#13 0x559577f964af ProcCopyArea (Xorg + 0x594af)
#14 0x559577f862d7 main (Xorg + 0x492d7)
#15 0x7f3c01fc4b75 n/a (libc.so.6 + 0x27b75)
#16 0x559577f8667e _start (Xorg + 0x4967e)

Stack trace of thread 1038:
#0  0x7f3c02181a8a __futex_abstimed_wait_common64 (libpthread.so.0 + 0x15a8a)
#1  0x7f3c0217b2c0 pthread_cond_wait (libpthread.so.0 + 0xf2c0)
#2  0x7f3bfc296c43 thread_function (swrast_dri.so + 0x7b8c43)
#3  0x7f3bfc29653b impl_thrd_routine (swrast_dri.so + 0x7b853b)
#4  0x7f3c02175299 start_thread (libpthread.so.0 + 0x9299)
#5  0x7f3c0209d353 n/a (libc.so.6 + 0x100353)

Stack trace of thread 1040:
#0  0x7f3c02181a8a __futex_abstimed_wait_common64 (libpthread.so.0 + 0x15a8a)
#1  0x7f3c0217b2c0 pthread_cond_wait (libpthread.so.0 + 0xf2c0)
#2  0x7f3bfc293edb lp_cs_tpool_worker (swrast_dri.so + 0x7b5edb)
#3  0x7f3bfc293e5b impl_thrd_routine (swrast_dri.so + 0x7b5e5b)
#4  0x7f3c02175299 start_thread (libpthread.so.0 + 0x9299)
#5  0x7f3c0209d353 n/a (libc.so.6 + 0x100353)

Stack trace of thread 1042:
#0  0x7f3c02181a8a __futex_abstimed_wait_common64 (libpthread.so.0 + 0x15a8a)
#1  0x7f3c0217b2c0 pthread_cond_wait (libpthread.so.0 + 0xf2c0)
#2  0x7f3bfbc960db util_queue_thread_func (swrast_dri.so + 0x1b80db)
#3  0x7f3bfbc95b9b impl_thrd_routine (swrast_dri.so + 0x1b7b9b)
#4  0x7f3c02175299 start_thread (libpthread.so.0 + 0x9299)
#5  0x7f3c0209d353 n/a (libc.so.6 + 0x100353)

Stack trace of thread 1044:
#0  0x7f3c02181a8a __futex_abstimed_wait_common64 (libpthread.so.0 + 0x15a8a)
#1  0x7f3c0217b2c0 pthread_cond_wait (libpthread.so.0 + 0xf2c0)
#2  0x7f3bfbc960db util_queue_thread_func (swrast_dri.so + 0x1b80db)
#3  0x7f3bfbc95b9b impl_thrd_routine (swrast_dri.so + 0x1b7b9b)
#4  0x7f3c02175299 start_thread (libpthread.so.0 + 0x9299)
#5  
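
For anyone who wants to dig further, the core recorded in the log above (Xorg, PID 1035) should be retrievable on the guest with coredumpctl; a rough sketch (gdb, plus the matching debuginfo packages for readable symbols, would be needed):

   # coredumpctl list Xorg      (show recorded Xorg crashes)
   # coredumpctl info 1035      (metadata and backtrace for that PID)
   # coredumpctl gdb 1035       (open the core in gdb for a full backtrace)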

D-Bus notification failed: Transport endpoint is not connected [was System always "Resumes from hibernation" on startup]

2021-07-09 Thread Jonathan Ryshpan
On Thu, 2021-07-01 at 08:25 -0600, Greg Woods wrote:
> I would bet that if you "cat /proc/cmdline" you will see a resume=
> parameter, but even if not this may be the default. At any rate, it
> doesn't look like your resume attempt is the reason for the delay, as
> the delay has already occurred when the resume is attempted:
> 
> Jun 30 12:19:40 amito kernel: Console: switching to colour frame buffer device 240x75
>
> Jun 30 12:21:08 amito dracut-initqueue[526]: WARNING: D-Bus notification failed: Transport endpoint is not connected
> 
> Note that delay has already occurred at this point.
> 
> Jun 30 12:21:08 amito systemd[1]: Found device /dev/mapper/fedora-swap.
> Jun 30 12:21:08 amito systemd[1]: Starting Resume from hibernation using device /dev/mapper/fedora-swap...
> Jun 30 12:21:08 amito systemd-hibernate-resume[547]: Could not resume from '/dev/mapper/fedora-swap' (253:1).
> 
> ...and the resume failure is a second or less after the attempted
> resume starts.
> 
> On Thu, Jul 1, 2021 at 8:14 AM Jonathan Ryshpan wrote:
> > ...
> > System configuration:
> >    Operating System: Fedora 34
> >    KDE Plasma Version: 5.22.2
> >    KDE Frameworks Version: 5.83.0
> >    Qt Version: 5.15.2
> >    Kernel Version: 5.12.13-300.fc34.x86_64 (64-bit)
> >    Graphics Platform: Wayland
> >    Processors: 8 × Intel® Core™ i7-4790K CPU @ 4.00GHz
> >    Memory: 15.5 GiB of RAM
> >    Graphics Processor: Mesa DRI Intel® HD Graphics 4600

GW is correct.  I recreated the swap partition ($ swapoff -a; $ mkswap;
$ swapon -a; the sequence is spelled out below) and the problem persisted.
Then I examined the full system log, which goes back to Thu 2020-07-09.
There are a large number of entries like
   Starting Resume from hibernation using device /dev/mapper/fedora-swap...
none of which is associated with a delay.
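
For reference, the recreate sequence spelled out; this is a sketch only, assuming the swap device is the /dev/mapper/fedora-swap seen in the log, and note that mkswap writes a new UUID, which matters only if fstab or the resume= argument refer to the swap area by UUID rather than by that path:

   # swapoff -a
   # mkswap /dev/mapper/fedora-swap
   # swapon -a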

However, there are no entries like
   WARNING: D-Bus notification failed: Transport endpoint is not connected
until 2021-06-30, almost a year later.  After the first one appears, there is
one following each reboot, and each is associated with a delay of, generally
but not always, about 1 minute 28 seconds.

The system was upgraded from Fedora-33 to Fedora-34 on 2021-05-25, so
there may be some connection.  I don't see anything on the web about
this kind of error.  Can anyone enlighten me?
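
For anyone following along, the check GW suggested, plus a quick way to see where in the boot the delay actually falls (both are standard commands; the resume= value will differ per machine):

   $ cat /proc/cmdline                          (look for a resume=... entry)
   $ journalctl -b -o short-monotonic | less    (timestamps are seconds since boot,
                                                 so the ~88 second gap stands out)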

-- 
Thanks - jon 

 Rich or poor, it's good to have money.


[389-users] ldapsearch Please help!

2021-07-09 Thread kckong1
Current Environment:-
389-Directory/1.4.3.22 B2021.082.0613
Oracle Linux 8.2

I want to allow the user raduser to run ldapsearch and get the record output.
The raduser entry's objectClass values:
- directoryServerFeature (structural)
- inetOrgPerson (structural)
- organizationalPerson (structural)
- person (structural)
- top (abstract)

# ldapsearch -LLL -w password -H ldap://localhost:389 -D "cn=Directory Manager"
- outputs the LDAP records.

# ldapsearch -LLL -w password -H ldap://localhost:389 -D "uid=raduser,ou=Administrator,o=mydatabase"
- NO OUTPUT, even though the credentials are confirmed valid


The DIT under the root DSE is as below:

- o=mydatabase
-- cn=Directory Manager

- ou=Administrator
-- uid=raduser

- ou=subscriber
-- uid=user1
-- uid=user2
-- uid=user3
-- and more...
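
In case it helps whoever replies: cn=Directory Manager bypasses access controls while an ordinary bind such as raduser is subject to ACIs, so it may be worth repeating the search with an explicit base, scope and filter, and dumping the ACIs on the suffix. A sketch; the base and filter below are only examples inferred from the DIT above:

# ldapsearch -LLL -w password -H ldap://localhost:389 \
    -D "uid=raduser,ou=Administrator,o=mydatabase" \
    -b "ou=subscriber,o=mydatabase" -s sub "(uid=*)"
- same bind as before, but with an explicit base, scope and filter

# ldapsearch -LLL -w password -H ldap://localhost:389 -D "cn=Directory Manager" \
    -b "o=mydatabase" -s base "(objectClass=*)" aci
- lists the ACIs on the suffix (the aci attribute has to be requested explicitly)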



Re: DNF fastestmirror and deltarpm

2021-07-09 Thread Matti Pulkkinen
On Wed, 2021-07-07 at 17:19 -0400, Todd Zullinger wrote:
> Hi,
> 
> The deltarpm option is enabled by default.  But there are
> issues which prevent it from being very useful unless you
> are constantly updating (in which case, you're likely not
> someone who wants deltarpm enabled in the first place).
> 

Ah, I see. I assumed it was disabled by default because of the article.

> As noted, the list returned by the mirrorlist/metalink may
> already provide a better order than the fastestmirror option
> will achieve.
> 
Okay, good to know. Thank you.
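
For anyone who wants to check or change these on their own machine, both options live under [main] in /etc/dnf/dnf.conf; a sketch, where the two values shown are only example output, not a recommendation:

   $ grep -iE '^(deltarpm|fastestmirror)' /etc/dnf/dnf.conf
   deltarpm=False
   fastestmirror=True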

-- 
Terveisin / Regards,
Matti Pulkkinen

