Hi Martin,
this is my history (please keep in mind that it might get distorted by the mail
client). Note: I didn't stop the ovirt-engine.service, which caused some
errors to be logged, but the engine is still working without issues. As I said,
this is my test lab and I was willing to play around :)
Good Luck!

ssh root@engine
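Before any manual surgery on the engine DB, it would be prudent to take a backup first. A minimal sketch, assuming the standard engine-backup tool shipped with oVirt is available (the file paths here are arbitrary examples, not from the original mail):

```shell
# Take a full engine backup (DB included) before any manual edits,
# so the state can be restored if a delete goes wrong
engine-backup --mode=backup --file=/root/engine-pre-edit.backup --log=/root/engine-backup.log
```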
# Switch to the postgres user
su - postgres

# If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable

# Open the DB
psql engine

# Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
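Since these deletes are irreversible once committed, a safer pattern (my suggestion, not part of the original walkthrough) is to wrap them in a transaction, check the row counts psql reports, and roll back if anything looks wrong:

```sql
begin;
-- run the deletes; psql prints the affected row count after each one
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
-- ... remaining deletes ...
-- rollback;  -- use this instead of commit if the counts look wrong
commit;
```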
# I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
select table_schema, table_name from information_schema.tables order by table_schema, table_name;
# The previous delete failed, as there was an entry in storage_server_connections.
# In your case it could be different.
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
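The failure above (a leftover row in storage_server_connections blocking the delete) can be anticipated. A hypothetical diagnostic, not from the original mail, that asks the PostgreSQL catalog which tables hold foreign keys pointing at gluster_volumes, so you know what to clean up first:

```sql
-- List foreign-key constraints that reference gluster_volumes,
-- i.e. tables whose rows must be removed before deleting the volume row
select conrelid::regclass as referencing_table, conname
from pg_constraint
where confrelid = 'gluster_volumes'::regclass
  and contype = 'f';
```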


Best Regards,
Strahil Nikolov
    On Friday, 25 January 2019 at 11:04:01 GMT+2, Martin Humaj
<mhu...@gmail.com> wrote:
 
 Hi Strahil,
I have tried to use the same IP and NFS export to replace the original one; it did
not work properly.
If you can guide me on how to do it in the engine DB, I would appreciate it. This is a
test system.
Thank you, Martin

On Fri, Jan 25, 2019 at 9:56 AM Strahil <hunter86...@yahoo.com> wrote:

Can you create a temporary NFS server which can be accessed during the removal?
I have managed to edit the engine's DB to get rid of a cluster domain, but this is
not recommended for production systems :)
  
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FHVNCODMC2POM5ISTICNMJ462VX72WXT/
