VM entry stuck in Poweroff and on wrong host

Greetings,

We run our OpenNebula manager as a VM in OpenNebula itself.

I did something not so smart this evening: during maintenance on its hypervisor, I accidentally shut down that VM from Sunstone. I managed to bring it back up on another host with virsh and everything seems fine, except that the ONE VM itself is “stuck” on the wrong host and in the POWEROFF state.

Googling turned up a potential solution: modify the vm_pool table entry for the ONE VM in the database and set it to the ACTIVE/RUNNING state. That didn’t work.
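For reference, the kind of database tweak that usually gets suggested looks roughly like this (a sketch only — it assumes a MySQL backend, a stopped oned, and the numeric state codes of recent versions; names and codes can differ between releases, so back up first):

```shell
# ASSUMPTIONS: systemd service name "opennebula", MySQL backend,
# database named "opennebula", VM ID 1057. Stop oned and back up first.
systemctl stop opennebula
onedb backup

# state 3 = ACTIVE, lcm_state 3 = RUNNING in OpenNebula's numeric codes
mysql opennebula -e "UPDATE vm_pool SET state = 3, lcm_state = 3 WHERE oid = 1057;"

# Caveat: the same state is duplicated inside the XML 'body' column
# (<STATE>, <LCM_STATE>), and the host assignment lives under
# <HISTORY_RECORDS> in that XML, so flipping the integer columns alone
# is not enough -- which may be why this approach appeared not to work.
```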

Suggestions on how to fix this are appreciated. It seems to me that I just need to tell OpenNebula that the VM is ACTIVE and RUNNING on its new host, but I’m not sure how to do that.

Sorry to follow up on my own post, but I discovered one possible solution.

Since our ONE installation uses shared storage, I was able to use virsh to live-migrate the running VM back to its original host:

virsh migrate --live one-1057 qemu+ssh://one-node06/system

Then in Sunstone I attempted to boot the VM, but was greeted with the error:

Error performing action "resume" on virtual machine [1057]. This action is not available for state RUNNING

Now the VM reports that it is located on the correct host and is running.
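To double-check what oned itself believes, something like this (using the VM ID from above) should show the state and current host in one glance:

```shell
# confirm the state and host as seen by OpenNebula (VM ID 1057 from above)
onevm show 1057 | grep -E 'STATE|HOST'
```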

I’m still curious whether there is another way to accomplish this without migrating back to the original hypervisor, in case that host is offline or dead for some reason.

Well, this case is a bit more difficult because it’s the OpenNebula
VM itself. Usually a VM in this situation will be in the UNKNOWN state, so you
can onevm migrate it to the new hypervisor (the one where you recovered the VM);
but obviously you need oned to be up and running. So I’d say that for the
OpenNebula VM itself your solution is the better one.
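For the usual (non-frontend) case described above, the recovery can be sketched as follows (hypothetical host name `one-node07`; this assumes shared storage and a version of oned that permits migrate from the UNKNOWN state, as described in the reply):

```shell
# 1. Recover the guest on a surviving node with plain virsh
#    (shared storage assumed, so the disks are already visible there)
virsh -c qemu+ssh://one-node07/system start one-1057

# 2. Once monitoring notices the qemu process, the VM drops to UNKNOWN;
#    then tell OpenNebula where the VM now actually lives:
onevm migrate 1057 one-node07
```

This only works when oned itself is reachable, which is exactly why it doesn’t apply to the frontend VM in the original post.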