HA and frontend on hypervisor nodes

Hi to all.
If I understood properly, the only required components for a working OpenNebula infrastructure are the management daemon (oned), the scheduler, and one or more hypervisor servers.

The web interface is totally optional if I prefer to use only the management APIs.
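For example, I imagine driving everything from the CLI, which just talks to oned over XML-RPC, something like this (host names and template IDs are made up):

```bash
# Everything below goes through the oned XML-RPC API; no Sunstone involved.
onehost create node01 --im kvm --vm kvm        # register a KVM hypervisor
onetemplate list                               # see the available VM templates
onetemplate instantiate 0 --name test-vm       # boot a VM from template ID 0
onevm list                                     # check its state
```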

One question: is it safe to run the management daemon on the same hypervisor servers in an HA configuration?
For example, let's assume a 5-node infrastructure where all nodes are used as hypervisors. Can I use all 5 nodes as frontends with HA enabled (via uraft), achieving node-failure protection across the 5 nodes?

This would simplify the infrastructure a lot, with no need for dedicated frontend nodes.

Then, if needed, I can place Sunstone somewhere else (or even inside an OpenNebula VM created manually through the API).

Any drawbacks?

Hi, I personally use a setup where the frontend is self-hosted as a VM on the compute nodes. I think that is better because it stays separate, and I also get some kind of HA: the VM is managed by corosync, so I can live-migrate it for maintenance, and it is automatically restarted on another node in case of node failure. At the moment I have just one frontend, but it is possible to run a frontend VM on each compute node and set up HA.
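Roughly, the frontend VM is defined as a cluster resource; a sketch with pcs and the VirtualDomain agent would look something like this (resource and file names are just examples, and it assumes a pacemaker/corosync cluster is already running):

```bash
# Define the frontend VM (libvirt domain "one-frontend") as a cluster resource.
# allow-migrate=true lets the cluster live-migrate it for maintenance;
# on a node failure the VM is simply restarted on another node.
pcs resource create one-frontend ocf:heartbeat:VirtualDomain \
    hypervisor="qemu:///system" \
    config="/etc/pacemaker/one-frontend.xml" \
    migration_transport=ssh \
    meta allow-migrate=true \
    op monitor interval=30s
```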

That’s not the same thing.
How do you create a VM inside OpenNebula if you don’t have any frontend running? You can’t use the APIs, because the APIs are provided by the frontend…

Our idea is to put one frontend on every hypervisor node; these frontends would then be made HA via the RAFT protocol.

This type of setup is known as a hyper-converged infrastructure (HCI), and it typically also includes distributed storage on the physical hosts that run, in the case of OpenNebula, the frontend and the workers. OpenNebula’s RAFT setup also provides Sunstone HA through the floating IP.
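Roughly, following the front-end HA guide, each oned instance is registered as a RAFT server in the zone, and the leader raises the floating IP via the vip.sh hook. A sketch (IPs and names are placeholders):

```bash
# On the initial (leader) frontend: add all five servers to zone 0.
onezone server-add 0 --name node01 --rpc http://10.0.0.1:2633/RPC2
onezone server-add 0 --name node02 --rpc http://10.0.0.2:2633/RPC2
# ...repeat for node03, node04 and node05, then check the RAFT state:
onezone show 0

# The floating IP itself (e.g. 10.0.0.100/24) is added/removed on the leader
# by the RAFT leader/follower hooks in /etc/one/oned.conf, which call raft/vip.sh.
```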

You can see an example configuration of such a system at

There is a series of articles from Red Hat that describes the performance considerations in HCI setups (https://redhatstackblog.redhat.com/2017/10/02/using-red-hat-openstack-director-to-deploy-co-located-ceph-storage-part-one/), and when googling for hyper-converged infrastructure you will find more inspiration for such setups from other cloud environments like Proxmox or oVirt.

Exactly
But in my case I won’t put any storage on the hypervisor nodes; storage would be on dedicated machines.

If possible, I’ll use diskless hypervisor servers.

You can provide storage from outside, but a shared filesystem is a requirement for the frontend HA setup: https://docs.opennebula.org/5.6/advanced_components/ha/frontend_ha_setup.html

The front-end is actually a set of services that communicate with each other via RPC. In other words, if we define opennebula as the “core” service and all the other opennebula-* services as “companion” services, then you can have a configuration with the core running on the hypervisors and the companion services in a single VM or in separate VMs. The RAFT_LEADER_IP should be used to communicate with the current leader’s API.
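For example, the CLI and the companion services only need to know the leader’s endpoint, so pointing them at the floating IP is enough. A small sketch (the IP is a placeholder):

```bash
# CLI tools reach whichever node is currently the RAFT leader through
# the floating IP instead of a fixed node address.
export ONE_XMLRPC="http://10.0.0.100:2633/RPC2"
onevm list

# Sunstone and the other companion services can be pointed at the same
# endpoint, e.g. via :one_xmlrpc: in /etc/one/sunstone-server.conf.
```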

Your documentation link is from the future :wink:
Actually, it is possible to run a RAFT-backed HA setup without a shared filesystem on the core nodes. It depends on the use case, though. When a shared filesystem is used for the IMAGE and SYSTEM datastores, then besides the VM logs mentioned in “Shared data between HA nodes”, you only need to keep the FILES datastore in sync. This could be done in the vip.sh script, periodically, or with some hooks wizardry…
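A minimal sketch of that idea, assuming the default FILES datastore (ID 2), passwordless SSH between the HA nodes, and that it only runs on the current leader (from cron or from vip.sh):

```bash
#!/bin/bash
# Hypothetical sync of the FILES datastore from the RAFT leader to the
# other HA nodes; run it periodically or hook it into raft/vip.sh.
FILES_DS=/var/lib/one/datastores/2
PEERS="node02 node03 node04 node05"   # the other frontend nodes

for peer in $PEERS; do
    rsync -a --delete "$FILES_DS/" "oneadmin@${peer}:${FILES_DS}/"
done
```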

Hope this helps.

Best Regards,
Anton Todorov

Let me try to explain.
Shared storage would be used, obviously. I’ll create a LizardFS cluster exported to each hypervisor node via FUSE or NFS (Ganesha).
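Something along these lines on every hypervisor, with a placeholder master host name, assuming the whole datastores directory lives on LizardFS:

```bash
# Mount the LizardFS cluster over FUSE on the OpenNebula datastores path,
# so all hypervisors (and frontends) see the same IMAGE/SYSTEM datastores.
mfsmount /var/lib/one/datastores -H lizardfs-master.example.lan
```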

What I would like is to put the opennebula core service on each hypervisor, so that I can bring up a RAFT cluster, totally redundant, without using dedicated VMs or bare-metal machines for the OpenNebula services.

Hi,

I have frontend HA running on the hypervisor nodes, and I want to create an HA frontend with VMs. My question: do I need a dedicated NIC and IP address for the floating IP on the VM, and also on the hypervisor nodes?