OpenNebula Installation Design Considerations

Hey All,

Are there OpenNebula infrastructure diagrams available depicting the various means of installing the product?

Based on my earlier attempts to use the product, I’ve sketched out a couple of basic setups. However, I wanted to confirm which one is most accurate.

What is the recommended design?

[ Design 1 ]

[ Design 2 ]

An external link helped me visualize one such design and the future state I’m looking for:

https://www.researchgate.net/figure/OpenNebula-used-as-resource-cloud-broker_fig3_224185688

However, I wanted to visualize this from an installation perspective, rather than a capabilities perspective.

Cheers,
TK

Like this:

[ diagram ]

?

I’ll need to read it thoroughly later today. (1AM here).

A few additional questions.

Is GlusterFS supported natively in the latest OpenNebula?

If not, can you give me a high-level overview of how live migration works? For example, if my VMs are on one host and I evacuate the host (forcing the VMs to transition over to the second physical node), what will I experience on the VMs? Will there be a pause, or will the VMs be shut down? What backend storage is needed to allow for live migration?

Cheers,
TK

Yes and no. Gluster provides filesystem storage, and you can use it with OpenNebula’s filesystem datastore drivers; it works very well. The “no” part comes from the fact that OpenNebula is agnostic to the underlying technology: it only sees a folder (where the NFS/Gluster mount is) and will not interact with Gluster directly (for example, it will not use Gluster features such as snapshots; those are managed by OpenNebula’s own drivers instead).
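To make the “it only sees a folder” part concrete, here is a minimal sketch of what registering a Gluster-backed datastore could look like. The datastore name and mount point are illustrative, and it assumes the Gluster volume is already mounted at the default datastore location on the front-end and on every host:

    # gluster_ds.conf -- hypothetical template for a shared image datastore.
    # Note that nothing in it is Gluster-specific; OpenNebula just uses its
    # generic filesystem drivers on top of whatever is mounted there.
    NAME   = gluster_images
    TYPE   = IMAGE_DS
    DS_MAD = fs        # filesystem datastore driver
    TM_MAD = shared    # shared transfer driver, which allows live migration

    # Register the datastore and confirm the mount is in place, e.g.:
    $ onedatastore create gluster_ds.conf
    $ mount | grep /var/lib/one/datastores

Since TM_MAD = shared only expects the same path to be visible on every host, you could swap Gluster for NFS (or anything else that mounts as a filesystem) without OpenNebula noticing the difference.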

About migration: as long as you are using a shared datastore (which you will be if you want to use Gluster), live migration is supported on KVM (the shared datastore is a requirement on the KVM side). On LXD there is no live migration support yet because, in the words of LXD’s main developer:
“We do have planned work to get a new API specifically to move containers within the cluster, this could internally be made to trigger CRIU for running containers. Whether we’ll do that part or not, I’m not sure given the current state of CRIU (we pretty much can’t test it since it really can’t migrate a whole lot these days).”

When live migrating there is always a pause, but it is designed to go unnoticed (on KVM, for example, it should be around 10 ms, and peaks should never be bigger than 1 s).
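For the evacuation scenario you described, the CLI side could look roughly like this, assuming the VMs sit on a shared (e.g. Gluster-backed) datastore; the VM ID, host ID, and host name are placeholders:

    # Live-migrate one VM (ID 7) to another host (ID 1)
    $ onevm migrate --live 7 1

    # Evacuate a host: reschedule all of its running VMs onto other hosts
    $ onehost flush node01

    # Watch the VM pass through the migration state and return to RUNNING
    $ onevm show 7 | grep -i state

Whether a flush moves each VM with a live migration or a save/restore cycle depends on your version, drivers, and datastore, but with KVM on a shared datastore it should be live, with only the brief pause described above.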

Thanks Sergio. This is very helpful. ON’s hands-off approach to GlusterFS and shared storage is a positive.

I’ve reviewed the pricing model for ON below. By “KVM Servers” within the PDF, do you mean LXD/KVM virtual machines or LXD/KVM physical servers? Will you provide less expensive plans in the future for POC-style deployments?

Would the ON project also have a capabilities assessment of the tool versus other competing projects such as XCP-ng, oVirt, Proxmox, and VMware? I’m thinking of something along these lines: https://tinyurl.com/yxej2e8c

Cheers,
TK