I don't think anything changed in the OST conf file in the last few days :)
but a missing selinux-policy package can cause the same issue... trying to add
that first
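If it helps, one quick way to confirm whether the package actually landed on the host — a sketch only; the selinux-policy-targeted subpackage name is my assumption about what a stock install needs:

```shell
# Hypothetical check: is selinux-policy installed on the host?
# Guarded so the snippet degrades gracefully on machines without rpm.
pkg="selinux-policy"
if command -v rpm >/dev/null 2>&1; then
    rpm -q "$pkg" || echo "$pkg missing -- try: yum install $pkg selinux-policy-targeted"
else
    echo "rpm not available here; cannot query $pkg"
fi
```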


On Sun, Feb 17, 2019 at 5:57 PM Greg Sheremeta <gsher...@redhat.com> wrote:

> Any way you can check the label on the conf file within OST (edit the
> script I guess)? It probably needs 'postgresql_db_t'
>
> mine (not using SCL)
> ls -laZ /var/lib/pgsql/data/postgresql.conf
> -rw-------. 1 postgres postgres unconfined_u:object_r:postgresql_db_t:s0
> 22572 May 22  2018 /var/lib/pgsql/data/postgresql.conf
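(If the label does turn out wrong, something like this should show and reset it — a sketch, assuming the SCL path from Galit's log; restorecon relabels from the installed policy's file-context database, which is exactly what's absent if selinux-policy never landed:)

```shell
# Sketch: inspect and restore the SELinux label on the SCL conf file.
# Path is the one from the failure log; adjust for non-SCL installs.
conf="/var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf"
if [ -e "$conf" ] && command -v restorecon >/dev/null 2>&1; then
    ls -laZ "$conf"          # current label; expect ...:postgresql_db_t:s0
    restorecon -v "$conf"    # relabel from the policy's file-context db
else
    echo "skipping: $conf or restorecon not present on this host"
fi
```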
>
> (from Galit, this uses SCL)
> Feb 17 10:17:23 lago-upgrade-from-release-suite-master-engine
> postgresql-ctl: postgres cannot access the server configuration file "
> */var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf*": Permission
> denied
> Feb 17 10:17:24 lago-upgrade-from-release-suite-master-engine
> postgresql-ctl: pg_ctl: could not start server
> Feb 17 10:17:24 lago-upgrade-from-release-suite-master-engine
> postgresql-ctl: Examine the log output.
> Feb 17 10:17:24 lago-upgrade-from-release-suite-master-engine systemd:
> rh-postgresql95-postgresql.service: control process exited, code=exited
> status=1
>
> (I'm totally guessing, btw ...)
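(One way to confirm it really is SELinux would be to check that host for AVC denials around the failed start — a sketch, guarded since the audit tooling may not be installed in the OST VM:)

```shell
# Sketch: look for SELinux AVC denials around the failed postgres start.
# ausearch (from the audit package) is preferred; fall back to audit.log.
src="none"
if command -v ausearch >/dev/null 2>&1; then
    src="ausearch"
    ausearch -m avc -ts recent 2>/dev/null || true
elif [ -r /var/log/audit/audit.log ]; then
    src="audit.log"
    grep 'avc: *denied' /var/log/audit/audit.log | tail -n 20 || true
else
    echo "no audit tooling; fall back to 'journalctl -xe' as systemd suggests"
fi
```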
>
> On Sun, Feb 17, 2019 at 12:51 PM Dafna Ron <d...@redhat.com> wrote:
>
>> Yes, I actually see that vdsm is failing on the same issue as well,
>> and selinux-policy no longer appears in the master repo.
>>
>> On Sun, Feb 17, 2019 at 5:41 PM Greg Sheremeta <gsher...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Sun, Feb 17, 2019 at 12:37 PM Dafna Ron <d...@redhat.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We are failing the upgrade-from-release suite on ovirt-engine master.
>>>>
>>>> CQ points below patch as root cause:
>>>> https://gerrit.ovirt.org/#/c/97719/ - webadmin: don't allow trimming
>>>> whitespace for the kernel cmdline
>>>>
>>>
>>> Not related. This is most likely the SELinux issue from last week. Galit
>>> had another thread going about it.
>>>
>>> Greg
>>>
>>>
>>>>
>>>> Unfortunately the process logs were not collected, so I have no way of
>>>> seeing why the processes failed. I can try to debug this further in the
>>>> morning.
>>>>
>>>> ERROR:
>>>>
>>>> setup log:
>>>>
>>>> 2019-02-17 10:17:24,507-0500 DEBUG otopi.plugins.otopi.services.systemd 
>>>> plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start', 
>>>> 'rh-postgresql95-postgresql.service') stderr:
>>>> Job for rh-postgresql95-postgresql.service failed because the control 
>>>> process exited with error code. See "systemctl status 
>>>> rh-postgresql95-postgresql.service" and "journalctl -xe" for details.
>>>>
>>>> 2019-02-17 10:17:24,508-0500 DEBUG otopi.transaction transaction.abort:119 
>>>> aborting 'File transaction for 
>>>> '/var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf''
>>>> 2019-02-17 10:17:24,509-0500 DEBUG otopi.context 
>>>> context._executeMethod:143 method exception
>>>> Traceback (most recent call last):
>>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
>>>> _executeMethod
>>>>     method['method']()
>>>>   File 
>>>> "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/provisioning/postgres.py",
>>>>  line 201, in _misc
>>>>     self._provisioning.provision()
>>>>   File 
>>>> "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/postgres.py",
>>>>  line 498, in provision
>>>>     self.restartPG()
>>>>   File 
>>>> "/usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/postgres.py",
>>>>  line 399, in restartPG
>>>>     state=state,
>>>>   File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 141, in 
>>>> state
>>>>     service=name,
>>>> RuntimeError: Failed to start service 'rh-postgresql95-postgresql'
>>>> 2019-02-17 10:17:24,511-0500 ERROR otopi.context 
>>>> context._executeMethod:152 Failed to execute stage 'Misc configuration': 
>>>> Failed to start service 'rh-postgresql95-postgresql'
>>>> 2019-02-17 10:17:24,556-0500 DEBUG 
>>>> otopi.plugins.otopi.debug.debug_failure.debug_failure 
>>>> debug_failure._notification:100 tcp connections:
>>>> id uid local foreign state pid exe
>>>> 0: 0 0.0.0.0:111 0.0.0.0:0 LISTEN 4191 /usr/sbin/rpcbind
>>>> 1: 29 0.0.0.0:662 0.0.0.0:0 LISTEN 4225 /usr/sbin/rpc.statd
>>>> 2: 0 0.0.0.0:22 0.0.0.0:0 LISTEN 3361 /usr/sbin/sshd
>>>> 3: 0 0.0.0.0:892 0.0.0.0:0 LISTEN 4266 /usr/sbin/rpc.mountd
>>>> 4: 0 0.0.0.0:2049 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
>>>> 5: 0 0.0.0.0:32803 0.0.0.0:0 LISTEN UnknownPID UnknownEXE
>>>> 6: 0 192.168.201.2:22 192.168.201.1:54800 ESTABLISHED 8189 /usr/sbin/sshd
>>>> 2019-02-17 10:17:24,557-0500 DEBUG otopi.transaction transaction.abort:119 
>>>> aborting 'Yum Transaction'
>>>> 2019-02-17 10:17:24,558-0500 INFO 
>>>> otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum 
>>>> Performing yum transaction rollback
>>>> Loaded plugins: fastestmirror, versionlock
>>>>
>>>>
>>>
>>> --
>>>
>>> GREG SHEREMETA
>>>
>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>
>>> Red Hat NA
>>>
>>> <https://www.redhat.com/>
>>>
>>> gsher...@redhat.com    IRC: gshereme
>>> <https://red.ht/sig>
>>>
>>
>
>
_______________________________________________
Infra mailing list -- infra@ovirt.org
To unsubscribe send an email to infra-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/KKDI4UWYQQ5V2KX5MYENTHHDN3NGFXVB/
