SSH Live Migration Without Shared Storage - Call For Discussion

Hi, Leroy from Immowelt here.

We have extended OpenNebula with an SSH live migration feature that requires no shared storage and uses qemu-img in the background, with no additional software or libraries required. The documentation clearly states that this is not possible. We did it, and it was not hard once we understood OpenNebula's internal architecture well enough.

I’d really like to hear your thoughts on this, and whether there was a specific reason to state that it is not possible (maybe it was not wanted, or may lead to dangerous situations?). Also: is there any community interest in this feature? I only found this well-aged thread from three years ago, with no reactions at all. If so, we will happily clean up the code and contribute.

Cheers!


Hello Leroy,

that sounds really great!

I’d really like to hear your thoughts on this, and whether there was a specific reason to state that it is not possible…

With the current SSH drivers, live migration is not supported. That’s the main message. If you found somewhere that it is technically impossible at all, that information is old (from before QEMU 2.0) and we should fix the wording. The drivers don’t make much use of the blockcopy functionality, which would help with this problem. We already have a tracking issue for this: Storage Live migration for KVM · Issue #1644 · OpenNebula/one · GitHub.
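For reference, here is a minimal sketch of the libvirt primitives that have made shared-nothing live migration possible since QEMU 2.0. The host name, domain name, and disk path are purely illustrative; this is not OpenNebula driver code.

```shell
#!/bin/sh
# Sketch of shared-nothing live migration with plain libvirt tooling.
# DST_HOST, DOMAIN and DISK are hypothetical placeholders.

DST_HOST=node2                               # hypothetical destination host
DOMAIN=one-42                                # hypothetical libvirt domain
DISK=/var/lib/one/datastores/0/42/disk.0     # hypothetical disk path

# Pick the storage copy mode: with a pre-seeded backing image on the
# destination, only the increment has to travel over the wire.
copy_mode() {
    # $1 = "yes" when the destination already holds the backing file
    if [ "$1" = "yes" ]; then
        echo "--copy-storage-inc"
    else
        echo "--copy-storage-all"
    fi
}

# The destination must hold a target image of the same format and size
# before migration starts, e.g.:
#   ssh "$DST_HOST" "qemu-img create -f qcow2 '$DISK' 10G"
# Then RAM and disk are streamed in one go (commented out to keep the
# sketch inert on machines without libvirt):
#   virsh migrate --live $(copy_mode no) "$DOMAIN" "qemu+ssh://$DST_HOST/system"
```

The `--copy-storage-all` / `--copy-storage-inc` flags are where `virsh migrate` takes over the disk transfer that would otherwise require shared storage.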

Also: is there any community interest in this feature? … If so, we will happily clean up the code and contribute.

Your contribution would be very welcome!

It would be great if you could share the technical details, the approach you took, or the WIP code so we can discuss potential issues. It doesn’t have to be a ready-to-merge PR, but at least something we can check and discuss (here or as part of the issue Storage Live migration for KVM · Issue #1644 · OpenNebula/one · GitHub).

Thank you,
Vlastimil Holer

+1! It will for sure be a great feature!

Hi @gersilex

As https://github.com/OpenNebula/one/issues/1644 was opened after a support request from us, we are of course interested in such a feature.
Do you have a PR somewhere to share?

Thank you

+1, definitely interested in at least seeing the implementation, to weigh it against the potential gains and drawbacks of other live-migration-capable configurations.

Thank you for the motivating feedback!

I’m not able to work on this until July 29th, but I will definitely post an update here as soon as possible. Looking forward to discussing the feature with you guys!

For sure interested! We needed it badly back in the days when we ran qcow2. With Ceph this is no longer an issue. Still, there are (smaller) setups without shared storage that could benefit from this. It would be a welcome feature.

Interested too. If nothing else, just to check how it will behave with mixed setups where a VM has both SSH and shared disks :wink:

+1, interested too.

Hey there. I am back with the code. Please have a look at it in this Gist and comment here or directly in the gist. Also, feel free to test it and give feedback.

It’s pretty much the original migration script, with added functionality to get info about the qemu images and create them on the remote side. OpenNebula already handles the copying process itself without any problems.
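In case it helps the discussion, a rough sketch of the remote image-creation step as I describe it above (illustrative only, not the exact Gist code; the disk path, host name, and helper function are made up): query the source image’s metadata with qemu-img info, then create a matching empty image on the destination over SSH.

```shell
#!/bin/sh
# Rough sketch of the "create the qemu images on the remote side" step.
# SRC_DISK, DST_HOST and make_create_cmd are hypothetical names, not
# taken from the actual Gist.

SRC_DISK=/var/lib/one/datastores/0/42/disk.0   # hypothetical source disk
DST_HOST=node2                                 # hypothetical destination

# Build the remote creation command from the source image's metadata.
make_create_cmd() {
    fmt=$1; size=$2; path=$3
    echo "qemu-img create -f $fmt $path $size"
}

# On a real host the metadata would come from the source image, e.g.:
#   qemu-img info --output=json "$SRC_DISK"   # yields "format", "virtual-size"
# and the parsed values would then be shipped over SSH (commented out to
# keep the sketch inert):
#   ssh "$DST_HOST" "$(make_create_cmd qcow2 10737418240 "$SRC_DISK")"
```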

Tested with OpenNebula 5.4.9 on x86_64 pc-i440fx-rhel7.6.0 (CentOS, really)

Really looking forward to hearing about your thoughts, tests and feedback.

Thank you very much.

The VM files/directory remain on the source host (including the qcow2 disks). Perhaps there are not enough parameters for the KVM migration.

In this place you need to write the full path:

source "$(dirname "$0")/kvmrc"

i.e.

source "$(dirname "$0")/../../etc/vmm/kvm/kvmrc"

I will continue testing with a larger disk capacity, because it is interesting to see what happens when communication between the hosts is lost.

Hey there. I am testing the migration script over SSH. We have several datastores, for example Ceph, local, and NFS. Now my question: how can I configure migration in oned.conf so that OpenNebula picks the right migration script for the datastore where the VM is located?

Thanks, and be healthy!

Joris