Qcow2 image creation failed

Debian 10, but installed with the Debian 9 build from
https://downloads.opennebula.org/repo/5.6.1/Debian/9/opennebula-5.6.1-1/source/

The cluster is fine over Ceph.

The ISO and the 6 GB IMG for the VM are created successfully in cephds.

template:

    ARCH = "x86_64"
    CPU = "1"
    DISK = [
      IMAGE = "anon-disk",
      IMAGE_UNAME = "oneadmin" ]
    DISK = [
      IMAGE = "d10.iso",
      IMAGE_UNAME = "oneadmin" ]
    GRAPHICS = [
      LISTEN = "0.0.0.0",
      TYPE = "VNC" ]
    HYPERVISOR = "kvm"
    INPUTS_ORDER = ""
    MEMORY = "1024"
    MEMORY_UNIT_COST = "MB"
    OS = [
    ARCH = "x86_64",
    BOOT = "disk0,disk1",
    MACHINE = "pc-i440fx-2.12" ]
    SCHED_DS_REQUIREMENTS = "ID=\"101\""

The VM is created when the image format is RAW,
but deployment fails with QCOW2:

    Fri Dec 28 17:24:21 2018 [Z0][VM][I]: New state is ACTIVE
    Fri Dec 28 17:24:21 2018 [Z0][VM][I]: New LCM state is PROLOG
    Fri Dec 28 17:24:22 2018 [Z0][VM][I]: New LCM state is BOOT
    Fri Dec 28 17:24:22 2018 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/10/deployment.0
    Fri Dec 28 17:24:22 2018 [Z0][VM][I]: Virtual Machine has no context
    Fri Dec 28 17:24:22 2018 [Z0][VMM][I]: Successfully execute network driver operation: pre.
    Fri Dec 28 17:24:23 2018 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/101/10/deployment.0' 'd10-mpv2' 10 d10-mpv2
    Fri Dec 28 17:24:23 2018 [Z0][VMM][I]: error: Failed to create domain from /var/lib/one//datastores/101/10/deployment.0
    Fri Dec 28 17:24:23 2018 [Z0][VMM][I]: error: internal error: process exited while connecting to monitor: 2018-12-28T14:24:23.820033Z qemu-system-x86_64: -drive file=rbd:one/one-8:id=oneadmin:auth_supported=cephx\;none:mon_host=10.1.101.142\:6789\;10.1.101.140\:6789\;10.1.101.123\:6789,file.password-secret=virtio-disk0-secret0,format=qcow2,if=none,id=drive-virtio-disk0,cache=none: Image is not in qcow2 format
    Fri Dec 28 17:24:23 2018 [Z0][VMM][E]: Could not create domain from /var/lib/one//datastores/101/10/deployment.0
    Fri Dec 28 17:24:23 2018 [Z0][VMM][I]: ExitCode: 255
    Fri Dec 28 17:24:23 2018 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
    Fri Dec 28 17:24:23 2018 [Z0][VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one//datastores/101/10/deployment.0
    Fri Dec 28 17:24:23 2018 [Z0][VM][I]: New LCM state is BOOT_FAILURE

What am I missing?

When attaching the disk in your VM or Template, open Advanced options and set the attribute "Image mapping driver" to "raw". The deployment file should then change from qcow2 to raw disk format. If that fails, please post the most recent version of your deployment file.
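For reference, the same override can go straight into the VM template by adding a DRIVER attribute to the DISK section. A minimal sketch, assuming DRIVER is the template counterpart of the "Image mapping driver" field in Sunstone, reusing the image from the template above:

    DISK = [
      IMAGE = "anon-disk",
      IMAGE_UNAME = "oneadmin",
      DRIVER = "raw" ]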

BR


Inside the VM image it shows qcow2.
Inside deployment.0 it is marked as raw.
And it works, but I don't know what to think; I need an encrypted qcow2 disk.
How can I check from inside the VM?

    oneimage show 10
    IMAGE 10 INFORMATION
    ID : 10
    NAME : anon-disk
    USER : oneadmin
    GROUP : oneadmin
    LOCK : None
    DATASTORE : cephds
    TYPE : DATABLOCK
    REGISTER TIME : 12/29 10:36:50
    PERSISTENT : Yes
    SOURCE : one/one-10
    FSTYPE : qcow2
    SIZE : 6G
    STATE : used
    RUNNING_VMS : 1
    PERMISSIONS
    OWNER : um-
    GROUP : ---
    OTHER : ---
    IMAGE TEMPLATE
    DEV_PREFIX="vd"
    DRIVER="qcow2"

deployment.0

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
            <name>one-12</name>
            <cputune>
                    <shares>1024</shares>
            </cputune>
            <memory>1048576</memory>
            <os>
                    <type arch='x86_64' machine='pc-i440fx-2.12'>hvm</type>
            </os>
            <devices>
                    <emulator><![CDATA[/usr/bin/qemu-system-x86_64]]></emulator>
                    <disk type='network' device='disk'>
                            <source protocol='rbd' name='one/one-10'>
                                    <host name='10.1.101.142' port='6789'/>
                                    <host name='10.1.101.140' port='6789'/>
                                    <host name='10.1.101.123' port='6789'/>
                            </source>
                            <auth username='oneadmin'>
                                    <secret type='ceph' uuid='ca3da1ee-8431-4b0b-8e4d-8148a2086e9a'/>
                            </auth>
                            <target dev='vda'/>
                            <boot order='1'/>
                            <driver name='qemu' type='raw' cache='none'/>
                    </disk>
                    <disk type='network' device='cdrom'>
                            <source protocol='rbd' name='one/one-0'>
                                    <host name='10.1.101.142' port='6789'/>
                                    <host name='10.1.101.140' port='6789'/>
                                    <host name='10.1.101.123' port='6789'/>
                            </source>
                            <auth username='oneadmin'>
                                    <secret type='ceph' uuid='ca3da1ee-8431-4b0b-8e4d-8148a2086e9a'/>
                            </auth>
                            <target dev='hda'/>
                            <boot order='2'/>
                            <readonly/>
                            <driver name='qemu' type='raw' cache='none'/>
                    </disk>
                    <interface type='bridge'>
                            <source bridge='br-data'/>
                            <mac address='02:00:0a:01:7b:67'/>
                            <target dev='one-12-0'/>
                    </interface>
                    <graphics type='vnc' listen='0.0.0.0' port='5912'/>
            </devices>
            <features>
                    <acpi/>
            </features>
            <metadata>
                    <system_datastore><![CDATA[/var/lib/one//datastores/101/12]]></system_datastore>
            </metadata>
    </domain>

We also noticed the same problem with OpenNebula 5.6.x and Ceph qcow2 images. It's a blocker for our upgrade from 5.4 to 5.6.

It seems like OpenNebula automatically converts qcow2 images into raw format when an image is uploaded into the Ceph datastore.
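If you want to confirm where this happens, one place to look is the Ceph datastore cp driver on the front-end. A rough sketch, assuming the default remotes layout of a stock install (the path and the presence of the conversion call are my guess, not something confirmed in this thread):

    # Hypothetical check: does the Ceph datastore cp driver call qemu-img convert
    # when registering an image? (path assumes a default front-end install)
    grep -n "qemu-img convert" /var/lib/one/remotes/datastore/ceph/cp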

There is a notice in the Ceph docs (both the Luminous 12.2.x and Mimic 13.2.x releases) about VM image formats:

Important
The raw data format is really the only sensible format option to use with RBD. Technically, you could use other QEMU-supported formats (such as qcow2 or vmdk), but doing so would add additional overhead, and would also render the volume unsafe for virtual machine live migration when caching (see below) is enabled.

So even though it may not make sense to use qcow2 VM images with a Ceph DS, it should still work as-is, i.e. without being converted into raw format. I wonder if this conversion is OpenNebula-specific.

Thanks for the info!

By the way, do you have problems with nic-detach in 5.4?

How did you solve it?

I’ve just replied in the thread you are referring to.

I rebuilt the cluster with 5.4.13
and it's the same situation:

oneimage shows qcow2
deployment.0 shows raw

So is it really qcow2 or not?
Does anybody know how to check the format in place?

Use the command qemu-img info vmdiskfile and check what file type is actually in the datastore. Then you can play around with <driver name='qemu' type='raw' cache='none'/> so that it becomes qcow2 as you requested. You can find/change the driver options as I described in the post above.

For Ceph it'll be like this:

    qemu-img info -f rbd rbd:one/one-1

    image: json:{"pool": "one", "image": "one-1", "driver": "rbd"}
    file format: rbd
    virtual size: 6.0G (6442450944 bytes)
    disk size: unavailable
    cluster_size: 4194304
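Note that passing -f rbd tells qemu-img that the format is rbd, so it never probes the content. To see what is actually stored inside the RBD image, something like the following should work (reusing one/one-10 from the oneimage output above; adjust the name to your image):

    # Let qemu-img probe the content format instead of forcing it;
    # it should report "qcow2" only if the data begins with a qcow2 header
    qemu-img info rbd:one/one-10

    # Or check the first four bytes for the qcow2 magic "QFI\xfb" (0x514649fb)
    rbd export one/one-10 - | head -c 4 | xxd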

Not very informative about encryption.
It was probably converted in the style of the following command:

    qemu-img convert -f qcow2 -O rbd debian_squeeze.qcow2 rbd:data/squeeze

Is it still protected? :slight_smile:
Forgive my nagging, but 100% protection is needed.

Is this what I should play with to get Ceph encryption?
http://docs.ceph.com/docs/mimic/ceph-volume/lvm/encryption/

Or is there any other clustered storage solution for 3 nodes which supports qcow2?
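Regarding "is it still protected": qcow2-level encryption (LUKS inside the qcow2 container, available since QEMU 2.10) lives in the qcow2 format itself, so once the data is stored as raw in RBD there is no qcow2 layer left to carry it. Just for illustration, a minimal sketch of creating a LUKS-encrypted qcow2 locally; the file name, size and the password "example" are placeholders, not something OpenNebula sets up for you:

    # Create a 6G qcow2 image whose payload is LUKS-encrypted with the given secret
    qemu-img create -f qcow2 \
      --object secret,id=sec0,data=example \
      -o encrypt.format=luks,encrypt.key-secret=sec0 \
      encrypted-disk.qcow2 6G

Whether that fits a Ceph-backed datastore is a separate question; the ceph-volume LVM encryption you linked works at the OSD level with dm-crypt, independent of the image format.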

Thank you, it is useful.