Storing images directly on the hosts

Hello,

Sorry for my ignorance, but I can’t seem to figure out how to store images directly on the hosts, to avoid having to transfer the images from the Sunstone server to the host every time I launch a VM.

I’m using the default datastores that come with an OpenNebula install. How do I create a new datastore inside a host so that images are transferred automatically when they are first launched, and then left there for future VMs?

Thanks,
Jonathan P

It seems that the default datastores are configured with TM_MAD="ssh" by default. This implies that each time you deploy a VM, its disks are actually ssh’ed to the target host. See the Filesystem Datastore documentation. AFAIK (but I may be wrong!), the System Datastore is the central repository, and it must be kept on the Front-end (Sunstone) to allow deployment to different hosts. If you were using a shared transfer method (NFS, GlusterFS, etc.), then the DS could be virtually on all hosts and front-ends.
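For reference, you can check which transfer driver each datastore uses from the CLI (datastore ID 0 assumes a stock install):

# List all datastores and their TM (transfer manager) drivers
onedatastore list

# Inspect the system datastore (ID 0 on a stock install) and
# look for the TM_MAD attribute in its template
onedatastore show 0 | grep TM_MAD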

I’ve got a different deployment using NFS, but the images must still be transferred over the private network, in this case from the shared NFS image store to the local drives.

Is it not possible to simply have the images stored locally on every host so that VM deployments are instantaneous?

When a VM is started, OpenNebula has to create its disks based on the image, either via ssh, or via cp if shared storage is used.
On your NFS system, what is the output of:

onedatastore list

If you are using NFS as shared storage, is the network traffic generated by cp on NFS really that much of a concern?

I think you may be able to achieve a setup limited to local storage using federated all-in-one frontend/node servers: Overview — OpenNebula 5.4.15 documentation
Each OpenNebula server would use its local disk datastores (no NFS), but the disadvantage is that you would have to explicitly select the federation zone in which you start VMs, and you would lose the live migration feature (not sure about offline migration). I have not tried such a setup, so a developer would have to confirm it is feasible.

Even though the traffic generated is not primarily the issue, it’s still a waste of bandwidth. I think that if OpenNebula added the following functionality, it would really improve the user experience.

When a VM is launched, OpenNebula checks whether the image exists on the host. If it doesn’t exist, or its MD5 hash doesn’t match, OpenNebula starts transferring that image to the host, then sends the instructions (template, etc.) for the disk to be created directly by the host, followed by the VM launching.

If the image already exists, it simply sends the instructions to create the disk and then deploys the disk locally.

Thus only one transfer per host is required, meaning that deployments without a fast private network (or with multiple clusters spread across regions) only need to transfer each image once.

This saves bandwidth, as well as time for the user.
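A minimal sketch of that check, assuming a plain shell transfer script (SRC_IMAGE, DST_HOST and DST_PATH are illustrative placeholders, not actual OpenNebula driver variables):

#!/bin/bash
# Sketch of the proposed "transfer only if missing or changed" logic.

SRC_IMAGE="$1"   # image file on the front-end
DST_HOST="$2"    # hypervisor host
DST_PATH="$3"    # cached copy on the host

EXPECTED_MD5=$(md5sum "$SRC_IMAGE" | cut -d' ' -f1)

# Hash of the cached copy on the host (empty if the file is missing)
REMOTE_MD5=$(ssh "$DST_HOST" "md5sum '$DST_PATH' 2>/dev/null | cut -d' ' -f1")

if [ "$REMOTE_MD5" != "$EXPECTED_MD5" ]; then
    # Cache miss or stale copy: transfer the image once
    scp "$SRC_IMAGE" "$DST_HOST:$DST_PATH"
fi

# From here on, the VM disk can be created locally on the host
# from the cached image, with no further network transfer.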

What do you think? I’m assuming this functionality is not possible with OpenNebula at the moment, but I don’t see any limitation that would stop it from becoming a reality.

Related https://forum.opennebula.io/t/is-there-an-efficient-local-disk-system-datastore-mechanism-for-persistent-images

A way to sync images to selected nodes would be a great feature. :smiling_face_with_three_hearts:

I’ve slightly changed the SSH driver to achieve this. It changes the replica host to be the destination host for the clone operation. Have a look at the “code” here:
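As a rough sketch of the idea only (placeholder variables SRC_PATH, DST_HOST and DST_PATH; the actual tm/ssh/clone driver is more involved):

# Hypothetical excerpt of a modified clone step: the destination host
# itself acts as the replica, keeping a cached copy of the image.

CACHE_DIR="/var/lib/one/cache"                 # illustrative cache location
CACHE="$CACHE_DIR/$(basename "$SRC_PATH")"

# First deployment on this host: one transfer into the cache
if ! ssh "$DST_HOST" "test -f '$CACHE'"; then
    ssh "$DST_HOST" "mkdir -p '$CACHE_DIR'"
    scp "$SRC_PATH" "$DST_HOST:$CACHE"
fi

# Subsequent deployments: clone locally on the destination host,
# with no image transfer over the network
ssh "$DST_HOST" "cp '$CACHE' '$DST_PATH'"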