DRBD datastore deploy fails

I am using Ubuntu 16.04, OpenNebula 5.4.6, and drbdadm 9.2.0 with the DRBD driver from GitHub.
I cannot start VMs on the DRBD datastore: when I try to deploy a VM, it fails. The output is below.

What I did:
・exported /var/lib/one as an NFS share on the controller
・mounted the /var/lib/one share from the controller on all nodes
・created the datastore with the "Filesystem shared" backend
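The steps above, written out as commands. This is only a sketch of what I did; the IP/subnet and export options are the ones visible in my outputs below, so adjust them for your environment:

```shell
# On the controller (test11): export /var/lib/one over NFS
echo '/var/lib/one 10.30.136.21/16(rw,async,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra

# On each node (e.g. test10): mount the share from the controller
mount -t nfs4 10.30.136.21:/var/lib/one /var/lib/one
```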

root@test11:~/addon-drbdmanage# less /etc/exports
/var/lib/one 10.30.136.21/16(rw,async,no_subtree_check,no_root_squash)

root@test10:/var/lib/one/datastores/100/7# mount
10.30.136.21:/var/lib/one on /var/lib/one type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.30.136.20,local_lock=none,addr=10.30.136.21)

oneadmin@test11:/root/addon-drbdmanage$ onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
0 system - - 0 0 sys - ssh on
1 default 664.8G 94% 0 0 img fs ssh on
2 files 664.8G 94% 0 0 fil fs ssh on
100 nfs_system 664.8G 94% 0 0 sys - shared on
108 drbdmanage_re 128G 95% 0 2 img drbdman drbdman on

oneadmin@test11:/root/addon-drbdmanage$ onedatastore show 108
DATASTORE 108 INFORMATION
ID : 108
NAME : drbdmanage_redundant
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : drbdmanage
TM_MAD : drbdmanage
BASE PATH : /var/lib/one//datastores/108
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL: : 128G
FREE: : 121.3G
USED: : 6.6G
LIMIT: : -

PERMISSIONS
OWNER : uma
GROUP : uma
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
BRIDGE_LIST="test11 test10"
CLONE_TARGET="SELF"
DISK_TYPE="FILE"
DRBD_REDUNDANCY="2"
DS_MAD="drbdmanage"
LN_TARGET="NONE"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="drbdmanage"

IMAGES
19
20

log on deploy:

Wed Feb 21 11:03:43 2018 [Z0][VM][I]: New state is ACTIVE
Wed Feb 21 11:03:43 2018 [Z0][VM][I]: New LCM state is PROLOG
Wed Feb 21 11:04:05 2018 [Z0][VM][I]: New LCM state is BOOT
Wed Feb 21 11:04:05 2018 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/7/deployment.0
Wed Feb 21 11:04:06 2018 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Wed Feb 21 11:04:06 2018 [Z0][VMM][I]: ExitCode: 0
Wed Feb 21 11:04:06 2018 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Feb 21 11:04:10 2018 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/100/7/deployment.0' 'test10' 7 test10
Wed Feb 21 11:04:10 2018 [Z0][VMM][I]: error: Failed to create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:04:10 2018 [Z0][VMM][I]: error: Failed to open file '/var/lib/one//datastores/100/7/disk.0': Wrong medium type
Wed Feb 21 11:04:10 2018 [Z0][VMM][E]: Could not create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:04:10 2018 [Z0][VMM][I]: ExitCode: 255
Wed Feb 21 11:04:10 2018 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Wed Feb 21 11:04:10 2018 [Z0][VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:04:10 2018 [Z0][VM][I]: New LCM state is BOOT_FAILURE

After the failed deploy, I inspected the destination directory /var/lib/one/datastores/100/7 on the target host. The error is: Failed to open file '/var/lib/one//datastores/100/7/disk.0': Wrong medium type

root@test10:/var/lib/one/datastores/100/7# ls
deployment.0 disk.0 disk.1
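Before changing anything in DRBD, it may be worth checking what libvirt actually sees at disk.0, since "Wrong medium type" usually means the file or device at that path is not what the deployment file claims it is. A diagnostic sketch using the path from the log above (it assumes qemu-utils is installed on the node):

```shell
# Is disk.0 a regular file, or a symlink to a DRBD device?
ls -l /var/lib/one/datastores/100/7/disk.0

# What does the content at that path look like?
file -sL /var/lib/one/datastores/100/7/disk.0

# Which image format does QEMU detect? Compare with the format= set in deployment.0
qemu-img info /var/lib/one/datastores/100/7/disk.0
```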

How can I fix the "Wrong medium type" error?

Please help me.

I think this is the problem:

root@test10:/var/lib/one/datastores/100/7# drbdadm status
.drbdctrl role:Primary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
test11 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate

OpenNebula-image-19 role:Secondary
disk:UpToDate
test11 role:Secondary
peer-disk:UpToDate

OpenNebula-image-19-vm7-disk0 role:Secondary
disk:Inconsistent
test11 role:Secondary
peer-disk:Inconsistent

OpenNebula-image-20 role:Secondary
disk:UpToDate
test11 role:Secondary
peer-disk:UpToDate

root@test11:~/addon-drbdmanage# drbdmanage v
+--------------------------------------------------------------------------------------------------+
| Name | Vol ID | Size | Minor | | State |
|--------------------------------------------------------------------------------------------------|
| OpenNebula-image-19 | 0 | 2.20 GiB | 109 | | ok |
| OpenNebula-image-19-vm7-disk0 | 0 | 2.20 GiB | 111 | | ok |
| OpenNebula-image-20 | 0 | 2.20 GiB | 110 | | ok |
+--------------------------------------------------------------------------------------------------+

I tried to fix the Inconsistent state; OpenNebula-image-19-vm7-disk0 is now UpToDate.

This is what I ran:
root@test10:/var# drbdadm --force primary OpenNebula-image-19-vm7-disk0
root@test10:/var# drbdadm secondary OpenNebula-image-19-vm7-disk0

root@test10:/var/# drbdadm status
.drbdctrl role:Primary
volume:0 disk:UpToDate
volume:1 disk:UpToDate
test11 role:Secondary
volume:0 peer-disk:UpToDate
volume:1 peer-disk:UpToDate

OpenNebula-image-19 role:Secondary
disk:UpToDate
test11 role:Secondary
peer-disk:UpToDate

OpenNebula-image-19-vm7-disk0 role:Secondary
disk:UpToDate
test11 role:Secondary
peer-disk:UpToDate

OpenNebula-image-20 role:Secondary
disk:UpToDate
test11 role:Secondary
peer-disk:UpToDate

Then I retried the deployment.

log on deploy:
Wed Feb 21 11:58:29 2018 [Z0][VM][I]: New LCM state is BOOT
Wed Feb 21 11:58:29 2018 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/7/deployment.0
Wed Feb 21 11:58:30 2018 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Wed Feb 21 11:58:30 2018 [Z0][VMM][I]: ExitCode: 0
Wed Feb 21 11:58:30 2018 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Feb 21 11:58:34 2018 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/100/7/deployment.0' 'test10' 7 test10
Wed Feb 21 11:58:34 2018 [Z0][VMM][I]: error: Failed to create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:58:34 2018 [Z0][VMM][I]: error: internal error: early end of file from monitor, possible problem: 2018-02-21T02:58:33.551594Z qemu-system-x86_64: -drive file=/var/lib/one//datastores/100/7/disk.0,format=qcow2,if=none,id=drive-virtio-disk0,cache=none: Image is not in qcow2 format
Wed Feb 21 11:58:34 2018 [Z0][VMM][E]: Could not create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:58:34 2018 [Z0][VMM][I]: ExitCode: 255
Wed Feb 21 11:58:34 2018 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Wed Feb 21 11:58:34 2018 [Z0][VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one//datastores/100/7/deployment.0
Wed Feb 21 11:58:34 2018 [Z0][VM][I]: New LCM state is BOOT_FAILURE

Now I think the problem is the "Image is not in qcow2 format" error.

Hi Keiji-San,

When you did a force primary on that resource, you may have put that disk image into an unrecoverable state. drbdadm primary --force should only be used once, directly after manually creating a resource; using it at any other time is generally incorrect. You'll need to remove that resource and start again.

We solved the problem. Thank you very much.
It turned out that our Ubuntu 16.04 setup did not handle the qcow2 version 3 image format.
I converted the image to qcow2 version 2 and it could then be deployed.
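For anyone who hits the same thing: in qemu-img terms, qcow2 "version 3" corresponds to compat=1.1 and "version 2" to compat=0.10, so the conversion can be done like this (the filenames here are placeholders, not the actual image names from this thread):

```shell
# Rewrite a qcow2 v3 image in the older v2 (compat=0.10) layout
qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 disk-v3.qcow2 disk-v2.qcow2

# Check the result: the info output should report "compat: 0.10"
qemu-img info disk-v2.qcow2
```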