NFS Datastore Only for VM Storage

Hello

We are in the process of testing a migration from OpenStack to OpenNebula, as the licence fees for our VMware Integrated OpenStack have changed. VMware is now charging for VIO.

Anyway, we have a SAN exposed via NFS that we want to use.

We have installed the latest version and have our dev environment running on one server.

It is our intention to have the frontend on a dedicated server and, initially, to set up one node on a dedicated server connected via a private network.

As our NFS has a huge amount of storage and is connected via 10Gbit networking, we wish to use it for the storage of our virtual machines.

Naturally it would be nice to use local storage for at least something; however, for live migrations it is imperative that we use shared storage, hence the NFS.

We are somewhat lost about the various datastore types. We have tried mounting the NFS, but changing the location via oned.conf does not work, as we still get errors.

So, would someone be able to shed some light on how we connect the NFS for VM storage, and also provide some advice on using some of the local storage for something else, if this is at all possible?

I would be more than happy to pay for someone to provide some personal guidance in setting this up.

Hello Ben,
I don’t use local storage or NFS on the nodes; I use SAN storage over FC. I built a GFS2 cluster and mount the storage at /var/lib/one/datastores. Maybe it’s wrong, but it works for me.
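
A plain fstab entry for that kind of clustered mount would look something like the line below (just a sketch, adjust the device and options to your own cluster):

/dev/mapper/vg_cluster-lv_storage /var/lib/one/datastores gfs2 defaults,noatime,_netdev 0 0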

Thanks for the reply.

I think the problem is knowing where to mount the storage.
Can you share your fstab entry or mount command?
Our NFS uses pure SSD with a dedicated 10Gbit network, so performance is great.
I need to get my head around the different storage models; the 3 default datastores that you can see below are what I am lost with.
Can I delete those and just add the NFS?

Do any changes need to be made to oned.conf when using mounted storage?
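
(For context, I assume the oned.conf setting in question is DATASTORE_LOCATION; the stock default appears to be the line below, so in theory nothing should need changing if the NFS is simply mounted at that path, but I may be missing something.)

DATASTORE_LOCATION = /var/lib/one//datastores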

[root@kvm01 ~]# df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mpatha1 49G 5.0G 44G 11% /
devtmpfs 26G 0 26G 0% /dev
tmpfs 26G 79M 26G 1% /dev/shm
tmpfs 26G 144M 26G 1% /run
tmpfs 26G 0 26G 0% /sys/fs/cgroup
tmpfs 5.1G 0 5.1G 0% /run/user/9869
/dev/mapper/vg_cluster-lv_storage 2.2T 13G 2.2T 1% /var/lib/one/datastores
tmpfs 5.1G 0 5.1G 0% /run/user/0

OK, that helps, I think:
/dev/mapper/vg_cluster-lv_storage 2.2T 13G 2.2T 1% /var/lib/one/datastores

Time to dig in for the night and get it working.

Thank you for your time in providing this info.

Any idea why the NFS is reporting 0 for storage capacity?

oneadmin@host1:~/datastores$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 20G 3.0G 16G 17% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 25M 63G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/md4 860G 72M 816G 1% /home
/dev/md2 487M 23M 436M 5% /boot
/dev/sda1 510M 152K 510M 1% /boot/efi
cgmfs 100K 0 100K 0% /run/cgmanager/fs
10.16.103.6:/zpool-125541/VeeamBackup 600G 0 600G 0% /var/lib/one/datastores

I removed the default datastores and added 3 datastores for the NFS, however I cannot add images as I get a “not enough storage” error… This would make sense, as OpenNebula reports that the storage is 0.

DATASTORE 102 INFORMATION
ID : 102
NAME : NFS-Images
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : shared
BASE PATH : /var/lib/one//datastores/102
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL: : 0M
FREE: : 0M
USED: : 0M
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
BRIDGE_LIST="host1"
CLONE_TARGET="SYSTEM"
DATASTORE_CAPACITY_CHECK="YES"
DISK_TYPE="FILE"
DS_MAD="fs"
LN_TARGET="NONE"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="shared"
TYPE="IMAGE_DS"
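
For reference, the template I fed to onedatastore create was roughly the following (reconstructed from the attributes above, so treat it as a sketch; nfs-images.ds is just the file name I used):

NAME        = "NFS-Images"
TYPE        = "IMAGE_DS"
DS_MAD      = "fs"
TM_MAD      = "shared"
BRIDGE_LIST = "host1"

onedatastore create nfs-images.ds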

What is in /var/log/one/oned.log?

There are many errors about monitoring for the 3 new datastores I created.

Tue Jan 9 16:36:46 2018 [Z0][ImM][I]: ExitCode: 1
Tue Jan 9 16:36:46 2018 [Z0][ImM][E]: Error monitoring datastore 104: LQ==. Decoded info: -
Tue Jan 9 16:36:53 2018 [Z0][InM][D]: Host host1 (0) successfully monitored.
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:1600 UID:0 one.zone.raftstatus invoked
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:1600 UID:0 one.zone.raftstatus result SUCCESS, "<SERVER_ID>-1<…"
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:2224 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:2224 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:8384 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:36:54 2018 [Z0][ReM][D]: Req:8384 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
Tue Jan 9 16:37:13 2018 [Z0][InM][D]: Host host1 (0) successfully monitored.
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:4880 UID:0 one.zone.raftstatus invoked
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:4880 UID:0 one.zone.raftstatus result SUCCESS, "<SERVER_ID>-1<…"
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:7120 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:7120 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:352 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:37:24 2018 [Z0][ReM][D]: Req:352 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
Tue Jan 9 16:37:33 2018 [Z0][InM][D]: Host host1 (0) successfully monitored.
Tue Jan 9 16:37:45 2018 [Z0][MKP][D]: Monitoring marketplace OpenNebula Public (0)
Tue Jan 9 16:37:45 2018 [Z0][MKP][D]: Marketplace OpenNebula Public (0) successfully monitored.
Tue Jan 9 16:37:46 2018 [Z0][InM][D]: Monitoring datastore NFS-Images (102)
Tue Jan 9 16:37:46 2018 [Z0][InM][D]: Monitoring datastore NFS-System (103)
Tue Jan 9 16:37:46 2018 [Z0][InM][D]: Monitoring datastore NFS-Files (104)
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: Command execution fail: /var/lib/one/remotes/datastore/fs/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDQ8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5ORlMtRmlsZXM8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD48IVtDREFUQVtmc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbc2hhcmVkXV0+PC9UTV9NQUQ+PEJBU0VfUEFUSD48IVtDREFUQVsvdmFyL2xpYi9vbmUvL2RhdGFzdG9yZXMvMTA0XV0+PC9CQVNFX1BBVEg+PFRZUEU+MjwvVFlQRT48RElTS19UWVBFPjA8L0RJU0tfVFlQRT48U1RBVEU+MDwvU1RBVEU+PENMVVNURVJTPjxJRD4wPC9JRD48L0NMVVNURVJTPjxUT1RBTF9NQj4wPC9UT1RBTF9NQj48RlJFRV9NQj4wPC9GUkVFX01CPjxVU0VEX01CPjA8L1VTRURfTUI+PElNQUdFUz48L0lNQUdFUz48VEVNUExBVEU+PEFMTE9XX09SUEhBTlM+PCFbQ0RBVEFbTk9dXT48L0FMTE9XX09SUEhBTlM+PEJSSURHRV9MSVNUPjwhW0NEQVRBW2hvc3QxXV0+PC9CUklER0VfTElTVD48Q0xPTkVfVEFSR0VUPjwhW0NEQVRBW1NZU1RFTV1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbWUVTXV0+PC9EQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PERTX01BRD48IVtDREFUQVtmc11dPjwvRFNfTUFEPjxMTl9UQVJHRVQ+PCFbQ0RBVEFbTk9ORV1dPjwvTE5fVEFSR0VUPjxSRVNUUklDVEVEX0RJUlM+PCFbQ0RBVEFbL11dPjwvUkVTVFJJQ1RFRF9ESVJTPjxTQUZFX0RJUlM+PCFbQ0RBVEFbL3Zhci90bXBdXT48L1NBRkVfRElSUz48VE1fTUFEPjwhW0NEQVRBW3NoYXJlZF1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0ZJTEVfRFNdXT48L1RZUEU+PC9URU1QTEFURT48L0RBVEFTVE9SRT48L0RTX0RSSVZFUl9BQ1RJT05fREFUQT4= 104
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: ExitCode: 1
Tue Jan 9 16:37:46 2018 [Z0][ImM][E]: Error monitoring datastore 104: LQ==. Decoded info: -
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: Command execution fail: /var/lib/one/remotes/datastore/fs/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDI8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5ORlMtSW1hZ2VzPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+PCFbQ0RBVEFbZnNdXT48L0RTX01BRD48VE1fTUFEPjwhW0NEQVRBW3NoYXJlZF1dPjwvVE1fTUFEPjxCQVNFX1BBVEg+PCFbQ0RBVEFbL3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEwMl1dPjwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4wPC9ESVNLX1RZUEU+PFNUQVRFPjA8L1NUQVRFPjxDTFVTVEVSUz48SUQ+MDwvSUQ+PC9DTFVTVEVSUz48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxBTExPV19PUlBIQU5TPjwhW0NEQVRBW05PXV0+PC9BTExPV19PUlBIQU5TPjxCUklER0VfTElTVD48IVtDREFUQVtob3N0MV1dPjwvQlJJREdFX0xJU1Q+PENMT05FX1RBUkdFVD48IVtDREFUQVtTWVNURU1dXT48L0NMT05FX1RBUkdFVD48REFUQVNUT1JFX0NBUEFDSVRZX0NIRUNLPjwhW0NEQVRBW1lFU11dPjwvREFUQVNUT1JFX0NBUEFDSVRZX0NIRUNLPjxESVNLX1RZUEU+PCFbQ0RBVEFbRklMRV1dPjwvRElTS19UWVBFPjxEU19NQUQ+PCFbQ0RBVEFbZnNdXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48UkVTVFJJQ1RFRF9ESVJTPjwhW0NEQVRBWy9dXT48L1JFU1RSSUNURURfRElSUz48U0FGRV9ESVJTPjwhW0NEQVRBWy92YXIvdG1wXV0+PC9TQUZFX0RJUlM+PFRNX01BRD48IVtDREFUQVtzaGFyZWRdXT48L1RNX01BRD48VFlQRT48IVtDREFUQVtJTUFHRV9EU11dPjwvVFlQRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg== 102
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: ExitCode: 1
Tue Jan 9 16:37:46 2018 [Z0][ImM][E]: Error monitoring datastore 102: LQ==. Decoded info: -
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: Command execution fail: /var/lib/one/remotes/tm/shared/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5ORlMtU3lzdGVtPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+PCFbQ0RBVEFbLV1dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbc2hhcmVkXV0+PC9UTV9NQUQ+PEJBU0VfUEFUSD48IVtDREFUQVsvdmFyL2xpYi9vbmUvL2RhdGFzdG9yZXMvMTAzXV0+PC9CQVNFX1BBVEg+PFRZUEU+MTwvVFlQRT48RElTS19UWVBFPjA8L0RJU0tfVFlQRT48U1RBVEU+MDwvU1RBVEU+PENMVVNURVJTPjxJRD4wPC9JRD48L0NMVVNURVJTPjxUT1RBTF9NQj4wPC9UT1RBTF9NQj48RlJFRV9NQj4wPC9GUkVFX01CPjxVU0VEX01CPjA8L1VTRURfTUI+PElNQUdFUz48L0lNQUdFUz48VEVNUExBVEU+PEFMTE9XX09SUEhBTlM+PCFbQ0RBVEFbTk9dXT48L0FMTE9XX09SUEhBTlM+PEJSSURHRV9MSVNUPjwhW0NEQVRBW2hvc3QxXV0+PC9CUklER0VfTElTVD48REFUQVNUT1JFX0NBUEFDSVRZX0NIRUNLPjwhW0NEQVRBW1lFU11dPjwvREFUQVNUT1JFX0NBUEFDSVRZX0NIRUNLPjxESVNLX1RZUEU+PCFbQ0RBVEFbRklMRV1dPjwvRElTS19UWVBFPjxEU19NSUdSQVRFPjwhW0NEQVRBW1lFU11dPjwvRFNfTUlHUkFURT48UkVTVFJJQ1RFRF9ESVJTPjwhW0NEQVRBWy9dXT48L1JFU1RSSUNURURfRElSUz48U0FGRV9ESVJTPjwhW0NEQVRBWy92YXIvdG1wXV0+PC9TQUZFX0RJUlM+PFNIQVJFRD48IVtDREFUQVtZRVNdXT48L1NIQVJFRD48VE1fTUFEPjwhW0NEQVRBW3NoYXJlZF1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW1NZU1RFTV9EU11dPjwvVFlQRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjxEQVRBU1RPUkVfTE9DQVRJT04+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzPC9EQVRBU1RPUkVfTE9DQVRJT04+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+ 103
Tue Jan 9 16:37:46 2018 [Z0][ImM][I]: ExitCode: 1
Tue Jan 9 16:37:46 2018 [Z0][ImM][E]: Error monitoring datastore 103: LQ==. Decoded info: -
Tue Jan 9 16:37:53 2018 [Z0][InM][D]: Host host1 (0) successfully monitored.
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:3520 UID:0 one.zone.raftstatus invoked
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:3520 UID:0 one.zone.raftstatus result SUCCESS, "<SERVER_ID>-1<…"
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:9248 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:9248 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:2800 UID:0 one.vmpool.info invoked , -2, -1, -1, -1
Tue Jan 9 16:37:54 2018 [Z0][ReM][D]: Req:2800 UID:0 one.vmpool.info result SUCCESS, "<VM_POOL></VM_POOL>"
root@host1:/var/lib/one#
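
In case it helps with debugging, my next step is to re-run the failing monitor driver by hand as oneadmin on the frontend, copying the long base64 argument and datastore ID straight from the log above, to hopefully see the real error instead of the bare “-”:

su - oneadmin
/var/lib/one/remotes/datastore/fs/monitor <base64-blob-from-the-log> 102

(The “LQ==” in the error lines is just base64 for “-”; echo 'LQ==' | base64 -d confirms it.)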

Hi Ben,

(I don’t know whether you have solved your problem yet, but here you go.)
Except for the VMware migration, we are in the same boat as you: new to OpenNebula, and live migration is definitely a requirement, so we are also using NFS (although I’m not sure yet whether it will give us enough performance…).

Here is the fstab of one of our controllers (we have HA set up):
xxxxxx@xxxxxx:/$ grep datastore /etc/fstab

# Shared datastore for OpenNebula core services

xxxxxx:/opennebula_datastore_0 /var/lib/one/datastores/0 nfs auto,tcp,soft,intr,rsize=32768,wsize=32768 0 0
xxxxxx:/opennebula_datastore_1 /var/lib/one/datastores/1 nfs auto,tcp,soft,intr,rsize=32768,wsize=32768 0 0
xxxxxx:/opennebula_datastore_2 /var/lib/one/datastores/2 nfs auto,tcp,soft,intr,rsize=32768,wsize=32768 0 0

And from one of our KVM nodes:
xxxxxx@xxxxxx:~# grep datastore /etc/fstab

# Shared datastore for OpenNebula core services

xxxxxx:/opennebula_datastore_0 /var/lib/one/datastores/0 nfs auto,tcp,soft,intr,rsize=32768,wsize=32768 0 0

I didn’t have to re-create any of them; I just changed their type and worked through the process of mounting them.
So far, all live migration tests have worked no problem.
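
In case it’s useful, by “change their type” I mean editing the datastore attributes from the CLI, something along these lines (just a sketch, adjust IDs and values to your environment):

onedatastore update 0     # opens an editor; we set TM_MAD="shared" there

and then a live migration can be exercised with:

onevm migrate --live <vm_id> <host_id>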

Let me know if you need more details; I’ll try to help where I can (but remember, I’m still learning… :) ).

Alex

You should consider adding _netdev to the NFS mount options in /etc/fstab. Without it you could experience issues at shutdown and/or boot due to race conditions between the networking and mounting init scripts.
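
E.g. one of the entries above would then become something like:

xxxxxx:/opennebula_datastore_0 /var/lib/one/datastores/0 nfs auto,tcp,soft,intr,rsize=32768,wsize=32768,_netdev 0 0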

BR,
Anton Todorov

Hi Anton,

Thanks for the heads up, I’ll take a look at that…!

Regards,

Alex