Creating NFS shared storage for the OpenNebula /var/lib/one directory

Hi,

Recently I learned that shared storage (e.g. NFS) is necessary for live migration of VMs. So I tried to share the /var/lib/one directory on my frontend via NFS, but I have not been able to get it working. Can anybody please point me to good documentation for configuring NFS for OpenNebula? I have already tried multiple approaches without success.

Thanks and Regards,
Arshad

Hi Arshad,

Monitor your log file like this:

tail -f /var/log/one/oned.log | grep -iv -e "monitor" -e "poolinfo"

while you are trying to migrate the VM, then post what is being logged here.

But maybe I misunderstood your question. What exactly do you mean by: "So I tried to configure my /var/lib/one directory on my Frontend as shared using nfs but I am not able to do it"?

If you have configured the NFS server and client parts as in the quickstart guide, it should work:
http://docs.opennebula.org/4.14/design_and_installation/quick_starts/qs_ubuntu_kvm.html
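For reference, the server/client setup from that guide boils down to something like the following (a sketch; the export options and the "frontend" hostname are placeholders, adjust them to your environment):

```shell
# On the frontend (NFS server): export /var/lib/one to your nodes.
# Add a line like this to /etc/exports (access list is an assumption):
echo '/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)' >> /etc/exports
exportfs -ra    # re-export everything listed in /etc/exports

# On each node (NFS client): mount the frontend's export over /var/lib/one.
# "frontend" is a placeholder hostname.
mount -t nfs frontend:/var/lib/one/ /var/lib/one/
```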

hope that helps
Jojo

Hi Arshad,

The official info is not a step-by-step setup, but it gives a lot of hints on how to set up NFS as a shared system datastore.
http://docs.opennebula.org/5.0/deployment/open_cloud_storage_setup/fs_ds.html

Please keep in mind that once you have NFS set up, you must change the TM_MAD of the default SYSTEM datastore from ssh to shared, or create a new SYSTEM datastore with TM_MAD=shared.
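In case it helps, updating the default SYSTEM datastore could look something like this (a sketch; datastore ID 0 is the default system datastore on a stock install, and the template file name is arbitrary):

```shell
# Switch the default SYSTEM datastore's transfer driver from ssh to shared.
cat > ds_update.txt <<'EOF'
TM_MAD = "shared"
EOF
onedatastore update 0 ds_update.txt   # 0 = default SYSTEM datastore on a stock install
```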

Kind Regards,
Anton Todorov

Hi Jojo,

I tried the quickstart guide approach but it didn’t work for me. After configuring the NFS server, I tried the showmount -e command on the server, but it always gives an "RPC timed out" error; the same happens when I try to mount on the clients.

Regards,
Arshad
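An "RPC timed out" error from showmount usually means rpcbind is not running on the server or a firewall is blocking the portmapper (port 111) or NFS (port 2049). A few checks worth running (a sketch; service names vary by distro, and <server-ip> is a placeholder):

```shell
# On the NFS server:
systemctl status rpcbind nfs-server   # are the services up? (unit names vary by distro)
rpcinfo -p localhost                  # does rpcbind answer locally?
showmount -e localhost                # does the export list work locally?

# From a client, test whether the portmapper is reachable over the network:
rpcinfo -p <server-ip>
# If this times out too, check firewall rules for TCP/UDP 111 and 2049.
```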

Hi Arshad,

After a clean install, first delete your datastores through Sunstone. Then remove the datastore directory on your nodes, i.e. /var/lib/one/datastores, and recreate it. Mount your NFS share on it on each node and then chown it to oneadmin:oneadmin. So on each node your datastore path (/var/lib/one/datastores) should point to your NFS mount (make sure it mounts after reboot).
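To make the mount survive a reboot, an /etc/fstab entry along these lines works (the hostname "frontend" is a placeholder):

```shell
# /etc/fstab entry on each node ("frontend" is an assumed hostname):
#   frontend:/var/lib/one/datastores  /var/lib/one/datastores  nfs  defaults,_netdev  0  0

mount /var/lib/one/datastores                     # test the fstab entry
chown oneadmin:oneadmin /var/lib/one/datastores   # ownership must be oneadmin
```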

Then create system and images datastores as follows:
su - oneadmin
vi images.txt
NAME = nfs_images
DS_MAD = fs
TM_MAD = qcow2

vi system.txt
NAME = nfs_system
TM_MAD = shared
TYPE = SYSTEM_DS

onedatastore create system.txt
onedatastore create images.txt

Now you should be able to see your datastores through Sunstone and the CLI:
[oneadmin@node01 ~]$ onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
100 nfs_system 5.1T 96% 0 0 sys - shared on
101 nfs_images 5.1T 96% 0 16 img fs qcow2 on

Hope this helps,
Orhan

Hello

Did you ever manage to get this working?
I followed the guide and the links above, but just like in a previous post of mine, our NFS datastore shows a zero size:

oneadmin@host1:~$ onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
100 nfs_system 0M - 0 0 sys - shared on
101 nfs_images 0M - 0 0 img fs qcow2 on

This is a dev environment where the frontend and the node are on the same server.
I believe I have mounted the NFS share correctly per Anton’s guidance:

Filesystem Size Used Avail Use% Mounted on
/dev/root 20G 2.0G 17G 11% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 17M 63G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/md2 487M 23M 436M 5% /boot
/dev/sda1 510M 152K 510M 1% /boot/efi
/dev/md4 860G 72M 816G 1% /home
cgmfs 100K 0 100K 0% /run/cgmanager/fs
XX.XX.XX.XX:/zpool-125541/VeeamBackup 600G 0 600G 0% /var/lib/one/datastores
oneadmin@host1:~$

What’s inside /var/lib/one/datastores and do the permissions belong to oneadmin (note that its UID needs to be identical across your hosts)?

oneadmin@az:~$ df -h /nfs
Filesystem      Size  Used Avail Use% Mounted on
nfs-int:/nfs    204G  179G   25G  88% /nfs
oneadmin@az:~$ ls -la /var/lib/one/datastores/
total 8
drwxrwxr-x 2 oneadmin oneadmin 4096 Mar  5  2017 .
drwxr-xr-x 6 oneadmin root     4096 Jul 23 11:43 ..
lrwxrwxrwx 1 root     root       17 Mar  5  2017 0 -> /nfs/datastores/0
lrwxrwxrwx 1 root     root       17 Mar  5  2017 1 -> /nfs/datastores/1
lrwxrwxrwx 1 root     root       17 Mar  5  2017 2 -> /nfs/datastores/2
oneadmin@az:~$ ls -la /nfs/datastores
total 28
drwxr-xr-x 7 oneadmin oneadmin 4096 Feb 23  2017 .
drwxr-xr-x 6 root     root     4096 Feb 14  2017 ..
drwxr-xr-x 9 oneadmin oneadmin 4096 Sep 23 04:46 0
drwxr-xr-x 2 oneadmin oneadmin 4096 Mar  5  2017 1
drwxr-xr-x 2 oneadmin oneadmin 4096 Feb 14  2017 100
drwxr-xr-x 2 oneadmin oneadmin 4096 Feb 19  2017 2
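For reference, the symlink layout shown above can be recreated like this, and the UID consistency can be checked with id (paths are from this example; adjust to your setup):

```shell
# Check that oneadmin has the same UID/GID on every host (run everywhere; numbers must match):
id oneadmin

# Recreate the symlinked layout from the listing above (example paths):
mkdir -p /nfs/datastores/0 /nfs/datastores/1 /nfs/datastores/2
chown -R oneadmin:oneadmin /nfs/datastores
for i in 0 1 2; do
    ln -s /nfs/datastores/$i /var/lib/one/datastores/$i
done
```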

Hello Again.

You hit the nail on the head. It was the ownership of the datastore mount that was the issue.
A simple solution made difficult due to lack of sleep.

Hi everyone, I have some problems with the use of GlusterFS. I have followed the official guide (https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/#step-1-have-at-least-two-nodes) and I now have 2 nodes, each with a /data/brick1/gluster folder. I want to use GlusterFS only for the image datastore on both nodes, but I don’t know how to do this correctly.
I tried to mount the Gluster volume on a new datastore (/var/lib/one/datastores/101), and with this solution, when I create a new image, OpenNebula saves it on the Gluster volume. But what do I have to do on the second node (not the frontend)? I think a solution could be this -> http://docs.opennebula.org/5.4/deployment/open_cloud_storage_setup/fs_ds.html
Any advice on achieving this will be very useful to me.
Thanks everyone!

Hi, after reading some posts about this, I have another doubt! Some topics say that I have to mount the Gluster volume directly on the datastore (for instance: mount -t glusterfs frontend:/gluster /var/lib/one/datastores/101 on both nodes), and some topics mention using a symlink after mounting the volume in /mnt/images, for example (ln -s /mnt/images /var/lib/one/datastores/101). Actually, I want to save my images from Sunstone on this shared Gluster volume, so that when a node instantiates a VM it takes the image from there. Sorry for my confusion about this concept.
I have created datastore 101 from Sunstone, specifying the shared mode. Many thanks!
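For what it’s worth, the common pattern is to mount the Gluster volume at the datastore path on every host (frontend included) rather than copying images around; both variants you mention end up with the same layout. A sketch, assuming a volume named gv0 served from a host called node1 (both names are placeholders):

```shell
# On the frontend AND on every node: mount the volume over the image datastore path.
mkdir -p /var/lib/one/datastores/101
mount -t glusterfs node1:/gv0 /var/lib/one/datastores/101
chown oneadmin:oneadmin /var/lib/one/datastores/101

# /etc/fstab entry to make the mount persistent across reboots:
#   node1:/gv0  /var/lib/one/datastores/101  glusterfs  defaults,_netdev  0  0
```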