Hi!

I've just finished the process described in the subject and want to share a few 
things with others, and also ask a question.

Long story "short" (it's my IT infra@home, I've oVirt in bigger setups at 
work): I had 3 machines before: two desktop, and one Dell R710 server. One of 
the desktop running FreeNAS with an SSD RAID1, the other desktop and my Dell 
machine was my two hypervisors. Luckily, i've just got two HP DL380 G6, which 
become my hypervisors, and my Dell machine became the TrueNAS storage 
(redundant PSU, more core, more RAM, more disk :) ).

When I started the procedure I was on the latest oVirt 4.3, but it was time to 
upgrade, and I also wanted to migrate all my data (including the self-hosted 
engine) to the TrueNAS Dell machine (but... I only have the two SSDs from my old 
FreeNAS, so they had to be moved).

After I replaced the hypervisors and migrated all my VM data to new disks on the 
new storage (iSCSI->iSCSI, live and cold storage migration), it was time to shut 
off the FreeNAS and start the HE redeploy.

The main steps I took (a consolidated command sketch follows the list):
 - Undeployed the hosted-engine from the host with ID 2 (the ID came from the 
hosted-engine --vm-status command; the machine name is Jupiter)
 - Moved all my VMs to this host, so only the HE remained on the machine with ID 
1 (name: Saturn)
 - Removed the remaining metadata with "hosted-engine --clean-metadata --host-id=2 
--force-clean", so Saturn was the only machine capable of running the HE
 - Set global maintenance mode=true
 - Stopped the ovirt-engine service in the old HE machine
 - Created a backup with "engine-backup --mode=backup --file=engine.backup 
--log=engine-backup.log" and copied it to another machine (my desktop, actually)
 - "hosted-engine --vm-shutdown"
 - Shut down Saturn, put the 4.4.6 installer in (it had been written to DVD 
before; I didn't want to create another one) and completely reinstalled Saturn
 - Shut down the FreeNAS, moved the SSDs to the TrueNAS, created the RAID 1/zvol, 
exported it over iSCSI...
 - After the base network was set up and the backup was copied back to the new 
Saturn, I started the new host deploy with "hosted-engine --deploy 
--restore-from-file=engine.backup"
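
For reference, the command side of the above boils down to roughly this (the 
hosted-engine commands run on the hosts, the backup commands inside the old HE 
VM, and the scp target is just a placeholder):

  # on the hosts (Saturn is ID 1, Jupiter is ID 2)
  hosted-engine --vm-status                                  # note the host IDs and HE state
  hosted-engine --clean-metadata --host-id=2 --force-clean   # drop the undeployed Jupiter's metadata
  hosted-engine --set-maintenance --mode=global

  # inside the old HE VM
  systemctl stop ovirt-engine
  engine-backup --mode=backup --file=engine.backup --log=engine-backup.log
  scp engine.backup user@desktop:/some/path/                 # keep a copy off-box

  # back on Saturn
  hosted-engine --vm-shutdown

  # later, on the freshly reinstalled 4.4.6 Saturn, with the backup copied back
  hosted-engine --deploy --restore-from-file=engine.backup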

Catch 1 and 2: if you do this, you must know the old DC and cluster names! Write 
them down somewhere, or you will need to pull the PostgreSQL dumps out of the 
backup and extract the names from there... (like I had to)
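
If you do end up extracting them, something along these lines should work (treat 
it as a sketch: the archive layout, the dump file path and the table names 
storage_pool/cluster are assumptions and may differ between engine versions):

  # unpack the engine-backup archive (a tar file) into a scratch directory
  mkdir backup-extract && tar -xf engine.backup -C backup-extract

  # locate the engine DB dump inside it (name and path may vary)
  find backup-extract -type f -name '*engine*'

  # load the dump into a scratch PostgreSQL DB and query the names
  createdb engine_scratch
  pg_restore -d engine_scratch backup-extract/db/engine_backup.db   # example path
  psql -d engine_scratch -c "SELECT name FROM storage_pool;"        # data center name(s)
  psql -d engine_scratch -c "SELECT name FROM cluster;"             # cluster name(s)
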
The process went almost flawlessly, but at some point I got a deployment error 
(code 519), which told me that Saturn had missing networks and that I could 
connect to https://saturn.mope.local:6900/ovirt-engine 

Catch 3: this URL did not work, since it was not the HE's URL, and I could not 
log in because of the FQDN checking... This may need some further improvement ;) 
After some thinking, I finally set up a SOCKS proxy with SSH (port 1080 forwarded 
through Saturn) and was able to log in to the locally running HE and make the 
network changes required to continue the deploy process... Also, since the old 
FreeNAS box was on my desk, the HE and the two hypervisors were unable to connect 
to my old SSD iSCSI, so I had to remove it (it could not be put into maintenance, 
but I was able to delete it from the Storage->Domains tab). After this, Saturn 
came up and got the "green arrow", so I removed the lock file the deploy gave me, 
and the deploy continued...
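
The SOCKS workaround itself is nothing special; roughly (the user and the -k for 
the temporary certificate are assumptions from my setup, adjust as needed):

  # on the desktop: open a dynamic (SOCKS) forward tunnelled through Saturn
  ssh -D 1080 -N root@saturn.mope.local

  # point the browser at the SOCKS proxy on 127.0.0.1:1080, or test with curl
  curl -k --socks5-hostname 127.0.0.1:1080 https://saturn.mope.local:6900/ovirt-engine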

After this, I selected the new SSD RAID1 on my Dell iSCSI box, and the deploy was 
finally able to copy the HE to the iSCSI storage; so far so good :)

Finally, I had my new 4.4.6 setup, with the self-hosted engine on my new 
TrueNAS@Dell (and all my VMs running on Jupiter at this point, without any 
errors).

The next step was to live migrate everything off Jupiter to Saturn, put the DVD 
in, remove Jupiter from the cluster, reinstall it with 4.4.6, and re-add Jupiter 
(this time with the HE deployed again).

After I put Jupiter back and did the required initial network setup (VLANs 
pulled to the bonds, iSCSI IPs set, etc.), the cluster was up and running.

The next step was to upgrade the hypervisors to the latest image with a rolling 
update, which worked as before, so the time came to move the cluster and DC 
compatibility level from 4.3 -> 4.6... This forced me to reboot all my VMs, as 
the warning said when I made the change, but this worked too. At last, I 
restarted the HE with hosted-engine --vm-shutdown / --vm-start, because it had a 
"!" at the BIOS version...

And this is actually my question: after the restart, the BIOS version of the HE 
machine remained the old one, and it still has the "!" which states: "The 
chipset/firmware type does not match the cluster chipset/firmware type"

Does anyone know how the HE BIOS type can be updated after a compatibility level 
increase?

Sorry if my mail should have gone to the users list, but because of "Catch 3" I 
think it's better here.

I also wanted to write this down because maybe someone will find it useful, and 
this procedure could be used in the docs too, if you want!

Regards: 
  Peter