What I ended up doing was similar. I enabled NFS in gluster, allowed access from the OpenNebula hosts/controller, and then used the gluster server hostname as the NFS server name, and the gluster volume as the path. It worked. Alternatively, one could mount the gluster volume on the control node as an NFS share, then export it for use as a datastore.
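As a rough sketch, the setup described above might look like the following commands. These are untested infrastructure commands; the volume name `gv0`, server hostname `gluster1`, the subnet, and the datastore ID are all assumptions:

```shell
# On a gluster server: enable Gluster's built-in (legacy) NFS server
# (newer Gluster releases disable it by default in favor of NFS-Ganesha)
gluster volume set gv0 nfs.disable off

# Restrict NFS access to the OpenNebula front-end and hosts (subnet assumed)
gluster volume set gv0 nfs.rpc-auth-allow 192.168.1.0/24

# On the OpenNebula front-end/hosts: mount it as NFSv3 (the built-in
# Gluster NFS server only speaks v3), using the gluster server hostname
# as the NFS server and the gluster volume as the path
mount -t nfs -o vers=3 gluster1:/gv0 /var/lib/one/datastores/1
```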
I have read elsewhere, however, that the performance advantage of Gluster is lost when using NFS. For me that didn't matter, since I was migrating from OpenStack and already had the gluster cluster (lol) set up.
I only use persistent images with my GlusterFS datastore, so the system datastore being used over FUSE isn't a big deal, since nothing but the deployment files are stored on it. If I use non-persistent images, I deploy them using an SSH datastore to the local node.
When using FUSE to mount the GlusterFS volume as an NFS-like export (mounting /var/lib/one/datastores/0 with FUSE, for example), the performance is god-awful. Luckily, should greater performance become necessary, there is the alternative of using the GlusterFS-supported NFS-Ganesha.
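For reference, the FUSE mount described above is just a native glusterfs mount; this is a sketch (hostname `gluster1`, volume `gv0`, and datastore ID 0 are assumptions):

```shell
# FUSE mount of the gluster volume as the system datastore
mount -t glusterfs gluster1:/gv0 /var/lib/one/datastores/0
```

Or the equivalent /etc/fstab line, so it survives reboots:

```shell
# gluster1:/gv0  /var/lib/one/datastores/0  glusterfs  defaults,_netdev  0 0
```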
If you really need GlusterFS you can try porting the drivers from 4.x to 5.x. It shouldn't be that hard if you only port the basic things and skip snapshotting and such.
You can take a look at shared drivers to check what you should change:
Core functionality is still there in case it was needed by someone to make new glusterfs drivers.
I've reread the documentation and it seems there are no specific drivers for gluster. I would follow the documentation for 4.14 and check what's not working:
I don't remember why it doesn't support snapshotting, as I haven't dealt with gluster in quite some time. Maybe using the qcow2 format in gluster is enough to make it work.
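If qcow2 is indeed what makes snapshots work, one quick way to check by hand on a gluster-backed datastore would be something like this. The path and file names are assumptions, and this needs `qemu-img` installed on the host:

```shell
cd /var/lib/one/datastores/1            # assumed gluster-backed mount point
qemu-img create -f qcow2 test.qcow2 1G  # qcow2 supports internal snapshots
qemu-img snapshot -c snap0 test.qcow2   # create an internal snapshot
qemu-img snapshot -l test.qcow2         # list snapshots; snap0 should appear
```

If those commands succeed on the gluster mount, the underlying storage at least supports the snapshot mechanism the drivers would rely on.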