Hello,
yes, I know that migrating back to an older version is not guaranteed, but I'm rather confused as to why this happens.

First, the symptom; more details below:

---
2020-12-02 16:34:08 start migrate command to tcp:10.0.0.11:60000
2020-12-02 16:34:09 migration status error: failed
2020-12-02 16:34:09 ERROR: online migrate failure - aborting
2020-12-02 16:34:09 aborting phase 2 - cleanup resources
2020-12-02 16:34:09 migrate_cancel
2020-12-02 16:34:11 ERROR: migration finished with problems (duration 00:00:08)
TASK ERROR: migration problems
---

That's all I get; where would one find the exact issue?

Now for the details. With the previous minor upgrade we saw a similar issue, but that one only affected VMs which had been created on an upgraded node; older VMs could be migrated in both directions. OK, that was because of new features in the VM definition not present in the old version, which is understandable.

With this one, on a (more frequently updated) test cluster, 6.3 to 6.2 migrations also work. The old nodes of the test cluster are at pve-manager 6.2-12, while the ones with issues are at pve-manager 6.2-6. However, I see nothing in the changelogs which would explain this difference.

In general, being able to live-migrate back to a node with the previous version is very desirable: if there are issues/bugs (at the VM level) with the new version, there is no longer an impact-free way of reverting when this functionality is missing.

Regards,

Christian

--
Christian Balzer        Network/Systems Engineer
[email protected]   Rakuten Mobile Inc.

_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
