On Wed, Jan 20, 2021 at 3:00 AM Nir Soffer <nsof...@redhat.com> wrote:

> On Wed, Jan 20, 2021 at 3:52 AM Matt Snow <matts...@gmail.com> wrote:
> >
> > [root@brick ~]# ps -efz | grep sanlock
>
> Sorry, it's "ps -efZ", but we already know it's not SELinux.
>
> > [root@brick ~]# ps -ef | grep sanlock
> > sanlock     1308       1  0 10:21 ?        00:00:01 /usr/sbin/sanlock
> daemon
>
> Does sanlock run with the right groups?
>
It appears so.

> On a working system:
>
> $ ps -efZ | grep sanlock | grep -v grep
> system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 983 1  0 11:23 ?
>     00:00:03 /usr/sbin/sanlock daemon
> system_u:system_r:sanlock_t:s0-s0:c0.c1023 root 986  983  0 11:23 ?
>     00:00:00 /usr/sbin/sanlock daemon
>

[root@brick audit]# ps -efZ | grep sanlock | grep -v grep
system_u:system_r:sanlock_t:s0-s0:c0.c1023 sanlock 1308    1  0 Jan19 ?  00:00:04 /usr/sbin/sanlock daemon
system_u:system_r:sanlock_t:s0-s0:c0.c1023 root    1309 1308  0 Jan19 ?  00:00:00 /usr/sbin/sanlock daemon

>
> The sanlock process running with "sanlock" user (pid=983) is the
> interesting one.
> The other one is a helper that never accesses storage.
>
> $ grep Groups: /proc/983/status
> Groups: 6 36 107 179
>
[root@brick audit]# grep Groups: /proc/1308/status
Groups: 6 36 107 179
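As an extra sanity check on my side (not something from the setup docs), I also mapped those gids back to names; if I'm reading the standard RHEL/CentOS static ids right, 6/36/107/179 should resolve to disk, kvm, qemu and sanlock, which is what sanlock needs to open the vdsm:kvm owned files:

# map the gids from /proc/1308/status back to group names
[root@brick audit]# getent group 6 36 107 179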


> Vdsm verifies this on startup using vdsm-tool is-configured. On a working
> system:
>
> $ sudo vdsm-tool is-configured
> lvm is configured for vdsm
> libvirt is already configured for vdsm
> sanlock is configured for vdsm
> Managed volume database is already configured
> Current revision of multipath.conf detected, preserving
> abrt is already configured for vdsm
>
[root@brick audit]# vdsm-tool is-configured
lvm is configured for vdsm
libvirt is already configured for vdsm
sanlock is configured for vdsm
Current revision of multipath.conf detected, preserving
abrt is already configured for vdsm
Managed volume database is already configured
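If any of those had come back as "not configured", I believe the usual fix would be to re-run the configurator for that module and restart the services (just noting it for completeness; not needed here since everything already reports configured):

# example only: re-configure the sanlock module and restart the services
[root@brick audit]# vdsm-tool configure --module sanlock --force
[root@brick audit]# systemctl restart sanlock vdsmd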


> > [root@brick ~]# ausearch -m avc
> > <no matches>
>
> Looks good.
>
> > [root@brick ~]# ls -lhZ
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
> > total 278K
> > -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 ids
>
> Looks correct.
>
>
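One more thing I can try, since local ownership can still be misleading over NFS (root squash, anonuid/anongid, idmapping): read the ids file directly as the sanlock user. This is just my own sanity check, not something suggested above; the path is copied from the ls output.

# open the first sector of the ids file as the sanlock user (path from the ls above)
[root@brick ~]# sudo -u sanlock dd if='/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md/ids' bs=512 count=1 of=/dev/null

If that fails with a permission error while the same command works as root, the problem would be on the NFS server/export side rather than on the host.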
