Hi, I am just learning OpenNebula and finding it much quicker to pick up than OpenStack. However, I am confused about datastores. I have two hosts, server0 and server1, and I created a Gluster volume across them named vmvolume. From the oneadmin user account, I can create images with qemu-img create glusterfs://… .
My question is: since /var/lib/one is NFS-exported among the hosts, when I create the following, should I do it on each host?
Okay, assume I did this; then it asks me to create:
> cat ds.conf
NAME = "glusterds"
DS_MAD = fs
TM_MAD = shared
# the following line *must* be present
DISK_TYPE = GLUSTER
GLUSTER_HOST = gluster_server:24007
GLUSTER_VOLUME = one_vol
CLONE_TARGET="SYSTEM"
LN_TARGET="NONE"
> onedatastore create ds.conf
ID: 101
> onedatastore list
  ID NAME      SIZE  AVAIL CLUSTER IMAGES TYPE DS TM
   0 system    9.9G  98%   -       0      sys  -  shared
   1 default   9.9G  98%   -       2      img  fs shared
   2 files     12.3G 66%   -       0      fil  fs ssh
 101 glusterds 9.9G  98%   -       0      img  fs shared
It seems creating a new gluster datastore results in a new datastore with ID 101, but the operations at the top were for datastores 0 and 1, and this is what confuses me the most. I have 2 servers (in our research lab we might add more) and I need the data to be distributed across those two hosts, but the instructions above seem complicated to me.
You don’t need to share /var/lib/one with all the nodes, but if you do, you can just mount gluster somewhere on all the nodes and make the links just once:
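The command block did not survive the quote; a plausible sketch of what it would contain, assuming the GlusterFS FUSE client, the volume name vmvolume from the question, and an illustrative mount point of /gluster:

```shell
# First three lines: run on every node (mount the volume via FUSE).
mkdir -p /gluster
mount -t glusterfs gluster_server:/vmvolume /gluster
chown oneadmin:oneadmin /gluster
# Remaining lines: only once, on the first node. The symlinks live under
# the shared /var/lib/one, so every other node sees them automatically.
mkdir /gluster/0 /gluster/1
ln -s /gluster/0 /var/lib/one/datastores/0
ln -s /gluster/1 /var/lib/one/datastores/1
```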
This is for the first node; the others only need the first three lines, as the links are already there (that directory is shared).
Creating new datastores, no matter which type, generates new IDs. Anyway, you only need one pair of system/image datastores; I’m not sure why you would need to add more. GlusterFS lets you merge space from several servers.
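As an illustration of that last point (the brick paths here are hypothetical), a plain distributed volume pools the disks of both hosts into a single namespace:

```shell
# Run once, from either host. With no "replica" count the bricks are
# distributed, so the volume's capacity is roughly the sum of both bricks.
gluster volume create vmvolume server0:/data/brick1/vm server1:/data/brick1/vm
gluster volume start vmvolume
```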
Without exporting /var/lib/one, can I add a gluster-backed datastore? If I just share datastores 0 and 1 through a mounted gluster volume, the I/O would go through that directory; but if qemu uses the gluster volume directly, wouldn’t that be faster? My confusion was why we need to both share folders 0 and 1 and create another datastore.
Even if you are using the gluster drivers, image management (clone, move, deletion, etc.) is done using the standard FS, that is, with gluster mounted via FUSE.
You don’t need to create new datastores. Reuse 0 and 1 for GlusterFS.
The easiest method is to mount the gluster share somewhere in your filesystem, for example /gluster, create two directories in that share, and symlink them to the datastores directory:
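In commands, that would look roughly like the following (the mount point /gluster and directory names are illustrative, and the volume name vmvolume is taken from the question; move any existing contents of datastores/0 and datastores/1 out of the way first):

```shell
# Mount the gluster volume via the FUSE client and hand it to oneadmin.
mount -t glusterfs gluster_server:/vmvolume /gluster
chown oneadmin:oneadmin /gluster
# Point the existing datastore IDs 0 and 1 at gluster-backed directories.
mkdir /gluster/0 /gluster/1
ln -s /gluster/0 /var/lib/one/datastores/0
ln -s /gluster/1 /var/lib/one/datastores/1
```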