[ovirt-users] Re: Please help: Failure Restoring Data on Clean Engine After Migration

2022-07-14 Thread Moritz Baumann

I had a similar issue.

For me, taking the password from
/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-grafana-database.conf
(GRAFANA_DB_PASSWORD)

and setting that password in PostgreSQL for the
user ovirt_engine_history_grafana did the trick.

Best
Mo


On 7/14/22 16:28, Andrei Verovski wrote:

Hi,

I have oVirt engine 4.4.7 running on dedicated PC (not hosted engine).

After several unsuccessful attempts to upgrade 4.4.7 to 4.4.10, I decided to
install a clean 4.4.10 and migrate the data.

On old engine
engine-backup --scope=all --mode=backup

On new engine
engine-backup --mode=restore --provision-all-databases --no-restore-permissions 
--file=ovirt-engine-backup-20220713160717.backup

Result:
[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to 
database for grafana using existing credentials: 
ovirt_engine_history_grafana@localhost:5432

How can I remove Grafana completely during engine migration, or skip it during
backup?
In fact, it would be nice to delete it on the active 4.4.7 setup; I don't need it
anyway.

Thanks in advance.


# — LOG ———

Start of engine-backup with mode 'restore'
scope: all
archive file: ovirt-engine-backup-20220713160717.backup
log file: /var/log/ovirt-engine-backup/ovirt-engine-restore-20220714170603.log
Preparing to restore:
- Unpacking file 'ovirt-engine-backup-20220713160717.backup'
Restoring:
- Files
Provisioning PostgreSQL users/databases:
- user 'engine', database 'engine'
- user 'ovirt_engine_history', database 'ovirt_engine_history'
Restoring:
- Engine database 'engine'
   - Cleaning up temporary tables in engine database 'engine'
   - Updating DbJustRestored VdcOption in engine database
   - Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine database
   - Resetting HA VM status
--
Please note:

The engine database was backed up at 2022-07-13 16:07:21.0 +0300 .

Objects that were added, removed or changed after this date, such as virtual
machines, disks, etc., are missing in the engine, and will probably require
recovery or recreation.
--
- DWH database 'ovirt_engine_history'
- Grafana database '/var/lib/grafana/grafana.db'
You should now run engine-setup.
Done.
[root@node00 ovirt-engine-backup]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
   Configuration files: 
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, 
/etc/ovirt-engine-setup.conf.d/10-packaging.conf, 
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
   Log file: 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20220714170757-7072xx.log
   Version: otopi-1.9.6 (otopi-1.9.6-1.el8)
[ INFO  ] The engine DB has been restored from a backup
[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to 
database for grafana using existing credentials: 
ovirt_engine_history_grafana@localhost:5432
[ INFO  ] Stage: Clean up
   Log file is located at 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20220714170757-7072xx.log
[ INFO  ] Generating answer file 
'/var/lib/ovirt-engine/setup/answers/20220714170806-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed



[ovirt-users] Re: Please help: Failure Restoring Data on Clean Engine After Migration

2022-07-14 Thread Moritz Baumann



On 7/14/22 16:37, Moritz Baumann wrote:

I had a similar issue.

For me, taking the password from
/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-grafana-database.conf
(GRAFANA_DB_PASSWORD)

and setting that password in PostgreSQL for the
user ovirt_engine_history_grafana did the trick.


su - postgres -s /bin/bash
psql
\password ovirt_engine_history_grafana
# enter the password from that file
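
A non-interactive equivalent of the same fix, for reference (a sketch; replace the
placeholder with the GRAFANA_DB_PASSWORD value from the file above, and it assumes
the DWH/Grafana database runs on the local PostgreSQL instance):

su - postgres -s /bin/bash -c \
  "psql -c \"ALTER ROLE ovirt_engine_history_grafana WITH PASSWORD 'PASTE_GRAFANA_DB_PASSWORD_HERE';\""

# afterwards, re-running setup should get past the grafana credentials error
engine-setup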


[ovirt-users] Re: unable to create iso domain

2022-07-14 Thread Moritz Baumann

Forgot to mention:

oVirt is 4.5.1 on CentOS 8 Stream; the nodes are ovirt-node-4.5.1.

It's an old installation, originating from 3.6.



On 7/14/22 08:18, Moritz Baumann wrote:

Hi

I have removed the iso domain of an existing data center, and now I am 
unable to create a new iso domain


/var/log/ovirt-engine/engine.log shows:

2022-07-14 08:04:40,684+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock Acquired 
to object 
'EngineLock:{exclusiveLocks='[ovirt.storage.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]', 
sharedLocks=''}'
2022-07-14 08:04:40,689+02 WARN 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Validation of 
action 'AddStorageServerConnection' failed for user 
x...@ethz.ch@ethz.ch-authz. Reasons: 
VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,$connectionId 
c39c64ef-fb8b-4e87-9803-420c7fb2dd4a,$storageDomainName 
,ACTION_TYPE_FAILED_STORAGE_CONNECTION_ALREADY_EXISTS
2022-07-14 08:04:40,690+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[ovirt.scratch.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]', 
sharedLocks=''}'
2022-07-14 08:04:40,756+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.DisconnectStorageServerConnectionCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] Running 
command: DisconnectStorageServerConnectionCommand internal: false. 
Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: 
SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2022-07-14 08:04:40,756+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] START, 
DisconnectStorageServerVDSCommand(HostName = ovirt-node01, 
StorageServerConnectionManagementVDSParameters:{hostId='d942c8fe-9a0c-4761-9be2-2f88b622070b', 
storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', 
connectionList='[StorageServerConnections:{id='null', 
connection='ovirt.storage.inf.ethz.ch:/export/ovirt/iso', iqn='null', 
vfsType='null', mountOptions='null', nfsVersion='null', 
nfsRetrans='null', nfsTimeo='null', iface='null', 
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 
3043bbfd
2022-07-14 08:04:43,017+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] FINISH, 
DisconnectStorageServerVDSCommand, return: 
{00000000-0000-0000-0000-000000000000=100}, log id: 3043bbfd



[root@ovirt-engine ovirt-engine]# showmount -e ovirt.storage.inf.ethz.ch 
| grep ovirt

Export list for ovirt.scratch.inf.ethz.ch:
/export/ovirt/export @ovirt-storage
/export/ovirt/data   @ovirt-storage
/export/ovirt/iso    @ovirt-storage

the other two domains still work just fine and the netgroup contains all 
ovirt-nodes.


storage-node1[0]:/export/ovirt/iso# ls -la
total 2
drwx------. 2 vdsm kvm  2 Jul 14 07:58 .
drwxr-xr-x. 5 root root 5 Aug 19  2020 ..
storage-node1[0]:/export/ovirt/iso# df .
Filesystem 1K-blocks  Used Available Use% Mounted on
fs1/ovirt/iso  524288000   256 524287744   1% /export/ovirt/iso
storage-node1[0]:/export/ovirt/iso#
storage-node1[0]:/export/ovirt/iso# exportfs -v | grep ovirt/ -A1
/export/ovirt/iso
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215812,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
/export/ovirt/data
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215811,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
--
/export/ovirt/export
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215813,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)




It appears that there is still some reference to an iso domain
(c39c64ef-fb8b-4e87-9803-420c7fb2dd4a?) in the database. How can I get
rid of it?
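
One possible way to clear such a leftover, sketched below: the engine exposes
storage connections through its REST API, and a connection that no storage domain
references any more can usually be deleted there. The engine FQDN and admin
credentials are placeholders (on 4.5 the admin login name may differ, e.g. with
Keycloak), and -k skips certificate verification:

# list storage connections and look for c39c64ef-fb8b-4e87-9803-420c7fb2dd4a
curl -s -k -u 'admin@internal:PASSWORD' \
  https://engine.example.org/ovirt-engine/api/storageconnections

# delete the stale connection if nothing references it any more
curl -s -k -u 'admin@internal:PASSWORD' -X DELETE \
  https://engine.example.org/ovirt-engine/api/storageconnections/c39c64ef-fb8b-4e87-9803-420c7fb2dd4a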


Best
Moritz


[ovirt-users] unable to create iso domain

2022-07-13 Thread Moritz Baumann

Hi

I have removed the iso domain of an existing data center, and now I am 
unable to create a new iso domain


/var/log/ovirt-engine/engine.log shows:

2022-07-14 08:04:40,684+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock Acquired 
to object 
'EngineLock:{exclusiveLocks='[ovirt.storage.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]', 
sharedLocks=''}'
2022-07-14 08:04:40,689+02 WARN 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Validation of 
action 'AddStorageServerConnection' failed for user 
x...@ethz.ch@ethz.ch-authz. Reasons: 
VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,$connectionId 
c39c64ef-fb8b-4e87-9803-420c7fb2dd4a,$storageDomainName 
,ACTION_TYPE_FAILED_STORAGE_CONNECTION_ALREADY_EXISTS
2022-07-14 08:04:40,690+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand] 
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[ovirt.scratch.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]', 
sharedLocks=''}'
2022-07-14 08:04:40,756+02 INFO 
[org.ovirt.engine.core.bll.storage.connection.DisconnectStorageServerConnectionCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] Running 
command: DisconnectStorageServerConnectionCommand internal: false. 
Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: 
SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2022-07-14 08:04:40,756+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] START, 
DisconnectStorageServerVDSCommand(HostName = ovirt-node01, 
StorageServerConnectionManagementVDSParameters:{hostId='d942c8fe-9a0c-4761-9be2-2f88b622070b', 
storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', 
connectionList='[StorageServerConnections:{id='null', 
connection='ovirt.storage.inf.ethz.ch:/export/ovirt/iso', iqn='null', 
vfsType='null', mountOptions='null', nfsVersion='null', 
nfsRetrans='null', nfsTimeo='null', iface='null', 
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 3043bbfd
2022-07-14 08:04:43,017+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] FINISH, 
DisconnectStorageServerVDSCommand, return: 
{00000000-0000-0000-0000-000000000000=100}, log id: 3043bbfd



[root@ovirt-engine ovirt-engine]# showmount -e ovirt.storage.inf.ethz.ch 
| grep ovirt

Export list for ovirt.scratch.inf.ethz.ch:
/export/ovirt/export @ovirt-storage
/export/ovirt/data   @ovirt-storage
/export/ovirt/iso    @ovirt-storage

the other two domains still work just fine and the netgroup contains all 
ovirt-nodes.


storage-node1[0]:/export/ovirt/iso# ls -la
total 2
drwx------. 2 vdsm kvm  2 Jul 14 07:58 .
drwxr-xr-x. 5 root root 5 Aug 19  2020 ..
storage-node1[0]:/export/ovirt/iso# df .
Filesystem 1K-blocks  Used Available Use% Mounted on
fs1/ovirt/iso  524288000   256 524287744   1% /export/ovirt/iso
storage-node1[0]:/export/ovirt/iso#
storage-node1[0]:/export/ovirt/iso# exportfs -v | grep ovirt/ -A1
/export/ovirt/iso
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215812,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
/export/ovirt/data
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215811,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
--
/export/ovirt/export
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215813,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)



It appears that there is still some reference to an iso domain
(c39c64ef-fb8b-4e87-9803-420c7fb2dd4a?) in the database. How can I get
rid of it?


Best
Moritz


[ovirt-users] getting grafana back connect database (to be able to upgrade ovirt again)

2022-05-17 Thread Moritz Baumann

Hi

After the upgrade from oVirt 4.4 -> oVirt 4.5 we hit the JDBC problem, and we
also had old database entries which prevented the upgrade.

I dropped all DBs and restored from an engine backup, but now
I get the following error when I want to upgrade again:


[root@ovirt-engine ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  Configuration files: 
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf, 
/etc/ovirt-engine-setup.conf.d/10-packaging.conf, 
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
  Log file: 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20220517153716-whf2lg.log

  Version: otopi-1.10.0 (otopi-1.10.0-1.el8)
[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to 
database for grafana using existing credentials: 
ovirt_engine_history_grafana@localhost:5432

[ INFO  ] Stage: Clean up
  Log file is located at 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20220517153716-whf2lg.log
[ INFO  ] Generating answer file 
'/var/lib/ovirt-engine/setup/answers/20220517153720-setup.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed




the log shows:

RuntimeError: Cannot connect to database for grafana using existing 
credentials: ovirt_engine_history_grafana@localhost:5432
2022-05-17 15:37:20,461+0200 ERROR otopi.context 
context._executeMethod:154 Failed to execute stage 'Environment setup': 
Cannot connect to database for grafana using existing credentials: 
ovirt_engine_history_grafana@localhost:5432



How can I get Grafana to connect to the database again?

With kind regards
Moritz


[ovirt-users] Re: cannot start vm after upgrade to ovirt-4.3

2019-02-19 Thread Moritz Baumann

Thank you Simone,

that worked.



On 19.02.19 12:40, Simone Tiraboschi wrote:



On Tue, Feb 19, 2019 at 12:18 PM Moritz Baumann <moritz.baum...@inf.ethz.ch> wrote:


After upgrading from 4.2 -> 4.3 I cannot start a vm anymore.

I try to start the vm with run once on a specific node (ovirt-node04)
and this is the output of /var/log/vdsm/vdsm.log


VolumeDoesNotExist: Volume does not exist:
(u'482698c2-b1bd-4715-9bc5-e222405260df',)
2019-02-19 12:08:34,322+0100 INFO  (vm/abee17b9)
[storage.TaskManager.Task]
(Task='d04f3abb-f3d3-4e2f-902f-d3c5e4fabc36')
aborting: Task is aborted: "Volume does not exist:
(u'482698c2-b1bd-4715-9bc5-e222405260df',)" - code 201 (task:1181)
2019-02-19 12:08:34,322+0100 ERROR (vm/abee17b9) [storage.Dispatcher]
FINISH prepareImage error=Volume does not exist:
(u'482698c2-b1bd-4715-9bc5-e222405260df',) (dispatcher:81)
2019-02-19 12:08:34,322+0100 ERROR (vm/abee17b9) [virt.vm]
(vmId='abee17b9-079e-452c-a97d-99eff951dc39') The vm start process
failed (vm:937)
Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line
866, in
_startUnderlyingVm
      self._run()
    File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2749,
in _run
      self._devices = self._make_devices()
    File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2589,
in _make_devices
      disk_objs = self._perform_host_local_adjustment()
    File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2662,
in _perform_host_local_adjustment
      self._preparePathsForDrives(disk_params)
    File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1011,
in _preparePathsForDrives
      drive['path'] = self.cif.prepareVolumePath(drive, self.id)
    File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 415,
in prepareVolumePath
      raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'index': 1, 'domainID':


Hi,
I think you hit this bug: https://bugzilla.redhat.com/1666795

Manually setting all the disk image files in the storage domain back to
vdsm:kvm (36:36), mode 660, is a temporary workaround.
Adding all_squash,anonuid=36,anongid=36 to the configuration of your NFS
share should avoid that until a proper fix is released.
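
A rough sketch of that temporary workaround on the NFS server (the export path is
taken from the earlier mails in this archive and 36:36 is the vdsm:kvm uid/gid pair;
adjust both to your own data domain):

# reset ownership and mode of the disk image files under the data domain
chown -R 36:36 /export/scratch/ovirt/data/*/images
find /export/scratch/ovirt/data/*/images -type f -exec chmod 660 {} +

# and/or add all_squash,anonuid=36,anongid=36 to the export options in
# /etc/exports, then re-export:
exportfs -r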



[ovirt-users] cannot start vm after upgrade to ovirt-4.3

2019-02-19 Thread Moritz Baumann

After upgrading from 4.2 -> 4.3 I cannot start a vm anymore.

I try to start the vm with run once on a specific node (ovirt-node04) 
and this is the output of /var/log/vdsm/vdsm.log


2019-02-19 12:08:33,626+0100 INFO  (jsonrpc/6) [api.host] START 
getAllVmStats() from=:::129.132.17.194,33924 (api:48)
2019-02-19 12:08:33,631+0100 INFO  (jsonrpc/6) [api.host] FINISH 
getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 
'statsList': (suppressed)} from=:::129.132.17.194,33924 (api:54)
2019-02-19 12:08:33,633+0100 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] 
RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2019-02-19 12:08:34,218+0100 INFO  (jsonrpc/4) [api.virt] START 
create(vmParams={u'xml': u'...'}) from=:::129.132.17.194,33924, 
flow_id=978c7200-b51e-42f8-962d-e8e11383ee53, vmId= (api:48)
[the full libvirt domain XML for VM 'mortalkombat' (vmId 
abee17b9-079e-452c-a97d-99eff951dc39, volumes 
21f73e32-2bd1-46db-a4cd-b0e01416954c and 482698c2-b1bd-4715-9bc5-e222405260df 
on storage domain c17d9d7f-e578-4626-a5d9-94ea555d7115) is garbled in the 
archive and omitted here]
2019-02-19 12:08:34,255+0100 INFO  (jsonrpc/4) [api.virt] FINISH create 
return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 
'WaitForLaunch', 'maxMemSize': 8192, 'acpiEnable': 'true', 
'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'vmId': 
'abee17b9-079e-452c-a97d-99eff951dc39', 'memGuaranteedSize': 2048, 
'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'SandyBridge', 
'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '1', 'guestNumaNodes': 
[{'nodeIndex': 0, 'cpus': '0', 'memory': '2048'}], u'xml': ...} 
(remainder truncated in the archive)

[ovirt-users] user rhel7.5 as ovirt-node

2018-04-11 Thread Moritz Baumann
After installing a RHEL 7.5 server and adding the host via ovirt-engine,
it initially works, but afterwards it finds the following upgrades,
which break e.g. yum:



-> Running transaction check
---> Package pyOpenSSL.x86_64 0:0.13.1-3.el7 will be obsoleted
---> Package python-dateutil.noarch 0:1.5-7.el7 will be updated
---> Package python-dateutil.noarch 1:2.4.2-1.el7 will be an update
---> Package python2-pyOpenSSL.noarch 0:16.2.0-3.el7 will be obsoleting

================================================================================
 Package              Arch     Version          Repository                 Size
================================================================================
Installing:
 python2-pyOpenSSL    noarch   16.2.0-3.el7     ovirt-4.2-centos-ovirt42   88 k
     replacing  pyOpenSSL.x86_64 0.13.1-3.el7
Updating:
 python-dateutil      noarch   1:2.4.2-1.el7    ovirt-4.2-centos-ovirt42   83 k

Transaction Summary
================================================================================
Install  1 Package
Upgrade  1 Package


It seems that pyOpenSSL.x86_64 is needed by yum.

Will there be updated ovirt packages?
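
Until fixed packages land, one stop-gap (a sketch, not something confirmed in this
thread) is to hold back the obsoleting package:

# one-off: update while excluding the package that obsoletes pyOpenSSL
yum --exclude=python2-pyOpenSSL update

# or make it persistent by adding an exclude line to the repo section that
# ships it (repo id taken from the transaction output above; the .repo file
# name on your host may differ):
#   [ovirt-4.2-centos-ovirt42]
#   exclude=python2-pyOpenSSL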

Best
Mo


Re: [ovirt-users] Fwd: AcquireHostIdFailure and code 661

2017-09-19 Thread Moritz Baumann

Hi Neil,

I had similar errors ('Sanlock lockspace add failure', SPM problems,
...) in the log files, and my problem was that I had added the "-g" option
to mountd (months ago, without restarting the service) in
/etc/sysconfig/nfs under RPCMOUNTDOPTS.

I had to either remove the "-g" option or add sanlock and vdsm groups
with the same members as on the ovirt-nodes.
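
In other words, something like this on the NFS server (a sketch of both options;
RPCMOUNTDOPTS lives in /etc/sysconfig/nfs on EL7, the uid/gid values are the stock
ones, and the service names may differ on your distribution):

# option 1: drop the -g (--manage-gids) flag again
#   in /etc/sysconfig/nfs:  RPCMOUNTDOPTS=""
systemctl restart nfs-config nfs-mountd

# option 2: keep -g, but create matching accounts on the server so the
# server-side group lookup matches the nodes -- mirror what `id vdsm` and
# `id sanlock` print on an oVirt node, for example:
groupadd -g 36 kvm
groupadd -g 179 sanlock
useradd -u 36 -g kvm -s /sbin/nologin vdsm
useradd -u 179 -g sanlock -G kvm -s /sbin/nologin sanlock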


Maybe your issue is similar.

Cheers,
Moritz

On 19.09.2017 14:16, Neil wrote:

Hi guys,

I'm desperate to get to the bottom of this issue. Does anyone have any 
ideas please?


Thank you.

Regards.

Neil Wilson.

-- Forwarded message --
From: Neil <nwilson...@gmail.com>
Date: Mon, Sep 11, 2017 at 4:46 PM
Subject: AcquireHostIdFailure and code 661
To: users@ovirt.org



Hi guys,

Please could someone shed some light on this issue I'm facing.

I'm trying to add a new NFS storage domain, but when I try to add it, I get
a message saying "Acquire hostID failed" and it fails to add.

I can mount the NFS share manually, and I can see that once the attach
has failed the NFS share is still mounted on the hosts, as per the
following...


172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2 on 
/rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2 type 
nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=172.16.0.11)


Also looking at the folders on the NFS share I can see that some data 
has been written, so it's not a permissions issue...


drwx---r-x+ 4 vdsm kvm 4096 Sep 11 16:08 
16ab135b-0362-4d7e-bb11-edf5b93535d5

-rwx---rwx. 1 vdsm kvm    0 Sep 11 16:08 __DIRECT_IO_TEST__

I have just upgraded from 3.3 to 3.5, as well as upgraded my 3 hosts, in
the hope it's a known bug, but I'm still encountering the same problem.

It's not a hosted engine, and you might see in the logs that I have a
storage domain that is out of space, which I'm aware of; I'm hoping
the system using this space will be decommissioned in 2 days.


Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             420G  2.2G  413G   1% /
tmpfs                  48G     0   48G   0% /dev/shm
172.16.0.10:/raid0/data/_NAS_NFS_Exports_/RAID1_1TB
                       915G  915G  424M 100% 
/rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___RAID1__1TB

172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1
                       5.5T  3.7T  1.8T  67% 
/rhev/data-center/mnt/172.16.0.10:_raid0_data___NAS__NFS__Exports___STORAGE1

172.16.0.20:/data/ov-export
                       3.6T  2.3T  1.3T  65% 
/rhev/data-center/mnt/172.16.0.20:_data_ov-export

172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB
                       3.6T  2.0T  1.6T  56% 
/rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___4TB

172.16.0.253:/var/lib/exports/iso
                       193G   42G  141G  23% 
/rhev/data-center/mnt/172.16.0.253:_var_lib_exports_iso

172.16.0.11:/raid1/data/_NAS_NFS_Exports_/STOR2
                       5.5T  3.7G  5.5T   1% 
/rhev/data-center/mnt/172.16.0.11:_raid1_data___NAS__NFS__Exports___STOR2


The "STOR2" above is left mounted after attempting to add the new NFS 
storage domain.


Engine details:
Fedora release 19 (Schrödinger’s Cat)
ovirt-engine-dbscripts-3.5.0.1-1.fc19.noarch
ovirt-release34-1.0.3-1.noarch
ovirt-image-uploader-3.5.0-1.fc19.noarch
ovirt-engine-websocket-proxy-3.5.0.1-1.fc19.noarch
ovirt-log-collector-3.5.0-1.fc19.noarch
ovirt-release35-006-1.noarch
ovirt-engine-setup-3.5.0.1-1.fc19.noarch
ovirt-release33-1.0.0-0.1.master.noarch
ovirt-engine-tools-3.5.0.1-1.fc19.noarch
ovirt-engine-lib-3.5.0.1-1.fc19.noarch
ovirt-engine-sdk-python-3.5.0.8-1.fc19.noarch
ovirt-host-deploy-java-1.3.0-1.fc19.noarch
ovirt-engine-backend-3.5.0.1-1.fc19.noarch
sos-3.1-1.1.fc19.ovirt.noarch
ovirt-engine-setup-base-3.5.0.1-1.fc19.noarch
ovirt-engine-extensions-api-impl-3.5.0.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.5.0.1-1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.0.1-1.fc19.noarch
ovirt-iso-uploader-3.5.0-1.fc19.noarch
ovirt-host-deploy-1.3.0-1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.0.1-1.fc19.noarch
ovirt-engine-3.5.0.1-1.fc19.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.5.0.1-1.fc19.noarch
ovirt-engine-userportal-3.5.0.1-1.fc19.noarch
ovirt-engine-cli-3.5.0.5-1.fc19.noarch
ovirt-engine-restapi-3.5.0.1-1.fc19.noarch
libvirt-daemon-driver-nwfilter-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-qemu-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-libxl-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-secret-1.1.3.2-1.fc19.x86_64
libvirt-daemon-config-network-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-storage-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-network-1.1.3.2-1.fc19.x86_64
libvirt-1.1.3.2-1.fc19.x86_64
libvirt-daemon-kvm-1.1.3.2-1.fc19.x86_64
libvirt-client-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-nodedev-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driver-uml-1.1.3.2-1.fc19.x86_64
libvirt-daemon-driv

Re: [ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann



On 19.06.2017 11:51, Markus Stockhausen wrote:
Maybe NFS mounts with version 4.2, and no SELinux nfs_t rule defined on the
server side?


Both (nfs-server and ovirt host) are in selinux permissive mode.



Re: [ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann
Is there a way to "reinitialize" the lockspace so that one node can become
SPM again and we can run VMs?


errors in /var/log/sanlock.log look like this:


2017-06-19 10:57:00+0200 1617673 [126217]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:00+0200 1617673 [126217]: s51 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:01+0200 1617674 [880]: s51 add_lockspace fail result -19
2017-06-19 10:57:02+0200 1617674 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:02+0200 1617674 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:10+0200 1617683 [881]: s52 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:10+0200 1617683 [126235]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:10+0200 1617683 [126235]: s52 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:11+0200 1617684 [881]: s52 add_lockspace fail result -19
2017-06-19 10:57:13+0200 1617685 [880]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:13+0200 1617685 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:15+0200 1617688 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:15+0200 1617688 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:20+0200 1617693 [881]: s53 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:20+0200 1617693 [126255]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:20+0200 1617693 [126255]: s53 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:21+0200 1617694 [881]: s53 add_lockspace fail result -19
2017-06-19 10:57:26+0200 1617699 [880]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:26+0200 1617699 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:29+0200 1617702 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:29+0200 1617702 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:30+0200 1617703 [880]: s54 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0




ovirt-node01[0]:/var/log# ls -ld 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids


-rw-rw----. 1 vdsm kvm 1048576 28. Mai 23:13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids


The nfs share is writeable:

ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md# 
touch blabla
ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md# 
ls -l

total 3320
-rw-r--r--. 1 root root        0 19. Jun 11:00 blabla
-rw-rw----. 1 vdsm kvm   1048576 28. Mai 23:13 ids
-rw-rw----. 1 vdsm kvm  16777216 19. Jun 10:56 inbox
-rw-rw----. 1 vdsm kvm   2097152 22. Mai 15:48 leases
-rw-r--r--. 1 vdsm kvm       361  1. Mär 18:21 metadata
-rw-rw----. 1 vdsm kvm  16777216 22. Mai 15:48 outbox
-rw-rw----. 1 vdsm kvm   1305088  1. Mär 18:21 xleases
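
Error -13 is EACCES, and the touch above ran as root, which is not the identity
that is failing. A quicker check is to attempt the read as the sanlock user itself
(a diagnostic sketch, assuming the stock sanlock account on the node):

# reproduce sanlock's "open error -13" outside the daemon
sudo -u sanlock dd of=/dev/null bs=512 count=1 \
  if='/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids'

# and compare the group memberships the NFS server resolves with what the node has
id sanlock
id vdsm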



[ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann

Hi,
I'm still struggling to get our oVirt 4.1.2 back to life.

The data domain is NFS and the NFS mount works fine. However, it appears
that sanlock does not work anymore.

Is there a way to "reinitialize" the lockspace so that one node can become
SPM again and we can run VMs?


Best,
Mo


Re: [ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

2017-05-31 Thread Moritz Baumann

Hi Adam,

Just an idea, but could this be related to stale mounts from when you 
rebooted the storage?  Please try the following:


 1. Place all nodes into maintenance mode
 2. Disable the ovirt NFS exports
    1. Comment out the lines in /etc/exports
    2. exportfs -r
 3. Reboot your nodes
 4. Re-enable the ovirt NFS exports
 5. Activate your nodes


All storage domains (data/iso) are down, as is the data center
(non-responsive), and no NFS mount is present on any of the nodes.

I can, however, manually mount the data export and touch a file (as root).

So I think stale mounts are not the issue.

I did the steps anyway, and the result is the same.

Best,
Mo


Re: [ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

2017-05-31 Thread Moritz Baumann

I found some info on how to import an abandoned export domain:

https://www.ovirt.org/documentation/how-to/storage/clear-the-storage-domain-pool-config-of-an-exported-nfs-domain/

Would the same (emptying POOL_UUID and SHA_CKSUM in the metadata) allow me to
import the data domain as a new one (and keep the existing VMs)?
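
For what it's worth, a rough sketch of that "empty POOL_UUID and SHA_CKSUM" step
against this setup's data domain (paths from the earlier mails; back up the
metadata first, and whether this is safe for a data domain rather than an export
domain is exactly the open question here):

# on the NFS server, inside the domain's dom_md directory
cd /export/scratch/ovirt/data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md
cp -a metadata metadata.bak

# blank the pool binding and drop the checksum line (the key is usually
# spelled _SHA_CKSUM -- verify against your own file before editing)
sed -i -e 's/^POOL_UUID=.*/POOL_UUID=/' -e '/^_SHA_CKSUM=/d' metadata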





Re: [ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

2017-05-30 Thread Moritz Baumann
It is NFSv3-based, and I did an upgrade from RHEL 6.8 -> 6.9 and rebooted
the storage. I paused all running VMs but forgot to put the storage
domains into maintenance.



scratch-inf[0]:/export/scratch/ovirt# ls -lr *
iso:
total 0
-rwxr-xr-x. 1 36 kvm  0 29. Mai 18:02 __DIRECT_IO_TEST__
drwxr-xr-x. 4 36 kvm 32 21. Jul 2015  2851dcfe-3f64-408a-ad4a-c416790696eb

export:
total 0
-rwxr-xr-x. 1 36 kvm  0 29. Mai 16:28 __DIRECT_IO_TEST__
drwxr-xr-x. 5 36 kvm 45 21. Jul 2015  4cda1489-e241-4186-9500-0fd61640d895

data:
total 0
-rwxr-xr-x. 1 36 kvm  0 29. Mai 18:02 __DIRECT_IO_TEST__
drwxr-xr-x. 5 36 kvm 45 23. Jul 2015  c17d9d7f-e578-4626-a5d9-94ea555d7115

scratch-inf[0]:/export/scratch/ovirt# exportfs -v | grep ovirt
/export/scratch/ovirt
@ovirt-scratch(rw,async,wdelay,no_root_squash,no_subtree_check,fsid=200,sec=sys,rw,no_root_squash,no_all_squash)


I can write to the NFS share from all nodes as root (I did not try with uid 36,
but I don't see why it shouldn't work).
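
To rule that out, the write test can be repeated as uid/gid 36 instead of root,
e.g. (a sketch; server name and export path as in this thread, mount point
arbitrary):

# mount the data export temporarily and try the write as vdsm:kvm (36:36)
mount -t nfs -o vers=3 scratch-inf.inf.ethz.ch:/export/scratch/ovirt/data /mnt
sudo -u vdsm -g kvm touch /mnt/__write_test_36__ && echo "uid 36 can write"
rm -f /mnt/__write_test_36__
umount /mnt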


Cheers,
Mo


[ovirt-users] no spm node in cluster and unable to start any vm or stopped storage domain

2017-05-29 Thread Moritz Baumann

Hi,
after an upgrade I get the following errors in the web gui:

VDSM ovirt-node01 command SpmStatusVDS failed: (13, 'Sanlock resource 
read failure', 'Permission denied')

VDSM ovirt-node03 command HSMGetAllTasksStatusesVDS failed: Not SPM

These messages happen from all nodes.

I can stop VMs and migrate them, but I cannot start any VM again.

How do I get back to a sane state where one node is SPM?

Best,
Moritz


[ovirt-users] problems "coupling" freeipa-4.1.1 with ovirt-engine-3.4

2014-11-13 Thread Moritz Baumann

Hi,
I don't see any users in oVirt after running

engine-manage-domains add --domain=EXAMPLE.ORG --provider=ipa \
  --user=baumadm --add-permissions

Only the user baum...@example.org is shown, with enabled=False.

IPA appears to be set up correctly, since I can run

ipa-client-install --mkhomedir

on the ovirt-engine host and log in with the created users.

Best,
Moritz