Contribution: OpenNebula backup script for QCOW2 datastores


(Kristian Feldsam) #22

Hello @mptek, I explained the nature of non-persistent and persistent images/instances in my previous posts. I have already discussed this problem with a few of my customers. It all depends on how you use OpenNebula.

If you use it in the "cloud" way, then it is like Amazon EC2 instances (VM instances), AMI images (non-persistent images) and EBS volumes (persistent images). The customer deploys new instances based on an AMI image (non-persistent image) and configures everything "infrastructure as code" using tools like Ansible. That programmatic configuration also includes attaching and mounting the EBS volume (persistent image), which can be backed up. So in case of a crash, they can easily redeploy and reconfigure everything by running the "program". The important data lives on the EBS volumes, so it is preserved.
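To illustrate that flow in OpenNebula terms (just a sketch; the template, image and VM names below are made up), the persistent image is the only piece that needs backing up:

# deploy a fresh instance from a non-persistent base template
onetemplate instantiate "web-base" --name web01

# attach the persistent data image, e.g. from Ansible or another provisioning tool
onevm disk-attach web01 --image "web01-data"

# the instance itself stays disposable; only the persistent image needs a backup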

On the other hand, many people use OpenNebula as "VPS"-like hosting and deploy non-persistent instances/images. In this case there are two possible solutions (in my subjective opinion).

The first one is to make every customer VPS persistent: clone the base image and make it persistent, clone the base VM template, map it to that image, configure a persistent IP, etc., and deploy the VM.

The second one is deploying non-persistent instances and backing up the "non-persistent" images that get created in the background.

Personally, I use the first approach.
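For the first approach, the per-customer setup looks roughly like this (a sketch only; the names are placeholders and the networking part depends on your environment):

# clone the base image and make the copy persistent
oneimage clone "base-image" "customer1-root"
oneimage persistent "customer1-root"

# clone the base VM template and point it at the new persistent image
onetemplate clone "base-template" "customer1"
onetemplate update "customer1"    # edit DISK to reference IMAGE="customer1-root", set the persistent IP, etc.

# deploy the customer VM
onetemplate instantiate "customer1" --name customer1-vps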


(Ulrich P.) #23

Hi Kristian,

I am currently using ssh transfer datastores on some of my hosts. Images are deployed persistent and with driver=qcow2.
Unfortunately your backup script does not cover this scenario (TM_MAD=ssh, Image persistent, Image “Driver = SSH”).
I believe technically it should work with the same commands from your script. Am I correct? If yes, do you see a chance to implement this?

Thanks
Uli


(Kristian Feldsam) #24

Hello, I added support for this in the develop branch, but it is not tested. You can try it and let me know what happens.


(Ulrich P.) #25

Hi,

the version in the develop branch does not back up the image as I expected. To be more precise about my goal:
I deployed a VM to a KVM node with local storage and a system datastore (TM_MAD=ssh). I checked the "persistent" box while deploying the VM. Now I would like to back up the image of the running VM, located in the system datastore, directly from this KVM node.
When I run your script (pointing to the system datastore using a label), it simply does nothing.

Best Regards
Uli


(Ulrich P.) #26

Ok,

I did some more testing and can report the following results. In my first tests I only included the system DS (ID: 109) on the KVM node in the backup run. In my opinion this should have been sufficient, because the whole persistent image is located there. backup.sh just did nothing in these tests.

In my next tests I included only the image in the backup run (based on the label, not the complete DS), and I think I found a bug in your script:

  1. The snapshot is created successfully.
  2. rsync tries to copy the image file from datastore ID 110, which is the originating image datastore that only exists on the frontend node. It should copy from ID 109, which is the ssh system datastore on the KVM node.

Here is the output (note the datastore path 110 in the rsync command):
[oneadmin@backup addon-image-backup]$ ./backup.sh
Create live snapshot of image 5 named testpersistent-disk-0 attached to VM 16 as disk sda running on kvm01.inf.corp.loc
Run cmd: ssh oneadmin@kvm01.inf.corp.loc 'touch /var/lib/one/snapshot_image_backup/one-16-weekly-backup'
Run cmd: ssh oneadmin@kvm01.inf.corp.loc 'virsh -c qemu:///system snapshot-create-as --domain one-16 weekly-backup --diskspec sda,file=/var/lib/one/snapshot_image_backup/one-16-weekly-backup --disk-only --atomic --no-metadata --quiesce' || ssh oneadmin@kvm01.inf.corp.loc 'virsh -c qemu:///system snapshot-create-as --domain one-16 weekly-backup --diskspec sda,file=/var/lib/one/snapshot_image_backup/one-16-weekly-backup --disk-only --atomic --no-metadata'
Domain snapshot weekly-backup created
Run cmd: mkdir -p /mnt/image_backup/110/6a32b4fe232caa0b36287e2cbb91cc8e.snap
Run cmd: rsync -aHAXxWv --inplace --numeric-ids --progress -e "ssh -T -o Compression=no -x" oneadmin@kvm01.inf.corp.loc:/var/lib/one//datastores/110/6a32b4fe232caa0b36287e2cbb91cc8e /mnt/image_backup/110/6a32b4fe232caa0b36287e2cbb91cc8e.tmp
receiving incremental file list
rsync: change_dir "/var/lib/one//datastores/110" failed: No such file or directory (2)

sent 8 bytes received 100 bytes 43.20 bytes/sec
total size is 0 speedup is 0.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1650) [Receiver=3.1.2]
rsync: [Receiver] write error: Broken pipe (32)
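For reference, with TM_MAD=ssh the deployed disk lives under the VM directory in the system datastore on the compute node, not under the image datastore path, so the copy would presumably have to read from something like this instead (just a sketch, assuming the standard OpenNebula layout, the IDs from the run above, and the first disk being disk.0):

# where the persistent disk actually sits on the KVM node (system DS 109, VM 16)
ssh oneadmin@kvm01.inf.corp.loc 'ls -l /var/lib/one/datastores/109/16/'

# an rsync source matching that layout, instead of the image-datastore path (110) the script built
rsync -aHAXxWv --inplace --numeric-ids --progress -e "ssh -T -o Compression=no -x" \
    oneadmin@kvm01.inf.corp.loc:/var/lib/one/datastores/109/16/disk.0 \
    /mnt/image_backup/109/16/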


(Kristian Feldsam) #27

Hello, could you post your datastore list here? If I understand correctly, the persistent image is created in the system datastore on a specific compute node? Or is it in the image datastore?


(Mirko) #28

Hi Kristian,
I apologize for the delay in my reply.

I'm trying to better understand how ON operates, so as not to waste anyone's time.
In your previous post you wrote:

Literally, I understand that a VM created as "non-persistent" will have the image of the VM template as a backing file, as in the "Snowman" example:

But if I run qemu-img info on the qcow2 disk of a non-persistent VM, I get this (no backing file):

root@sto1-a:/var/lib/one/datastores/0/83# qemu-img info disk.0
image: disk.0
file format: qcow2
virtual size: 200G (214748364800 bytes)
disk size: 12G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Why?
It looks like a complete copy of the template image.

The first solution you listed is IMHO laborious.
As I use it, OpenNebula assigns various parameters automatically (e.g. IPv4, IPv6, hostname, root password, etc.).
Since Sunstone is open to the internet, my clients can create VMs by setting only the strictly necessary parameters, leaving the others for ON to decide (e.g. IPs).

Thanks a lot anyway.
Very kind.
Greetings
Mirko


(Kristian Feldsam) #29

Hello, I think it depends on the datastore type. I use a shared qcow2 files datastore for both images and the system datastore, in a clustered setup with GFS2 running on top of it. So non-persistent VMs use backing files, and those files live in the system DS. Persistent VMs each have their own image in the image DS.

If you use a non-shared SSH DS, then the image is copied from the frontend node to the compute node via ssh, so there is no backing file.

My backup script supports only qcow2 or shared datastores, not ssh/local.
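A quick way to check which case you are in (just a sketch; the datastore ID and disk path below are examples):

# which transfer driver the datastore uses (qcow2/shared vs ssh)
onedatastore show 0 | grep TM_MAD

# whether a deployed non-persistent disk is a thin clone of the base image
qemu-img info /var/lib/one/datastores/0/83/disk.0
# with the qcow2 TM driver you would expect a "backing file: ..." line here;
# with the shared or ssh TM drivers the disk is a full standalone copy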


(Mirko) #30

Hello Feldsam.
You are right. I use shared qcow2 for the SYS DS and shared (only) for the IMG DS.
ON copies the base qcow2 file to the SYS DS (without a backing file).
But I prefer this mode: every VM has a separate and independent storage file.

root@sto1-a:~# onedatastore list
  ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
   0 Storage_VM         13.9T 95%   0                 0 sys  -       qcow2   on
   1 Immagini_VM        13.9T 95%   0                 2 img  fs      shared  on
   2 Files              13.9T 95%   0                 0 fil  fs      shared  on

I will look into modifying your script to accommodate my setup.
Do you think that would be a heavy modification or a simple one?

Thank you anyway.
Very very kind.
Hello


(Kristian Feldsam) #31

Hello, I also prefer to have a separate image for each VM, but I use persistent disks instead of non-persistent ones. My backup script is also designed to back up only persistent disks.

When persistent disks are used, oned uses the system DS just for deployment files (the libvirt domain XML and a symlink to the image datastore).
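With persistent disks on a shared/qcow2 setup, the VM directory in the system DS then looks something like this (illustrative, abridged listing only; the VM ID, image path and file names are made up):

$ ls -l /var/lib/one/datastores/0/16/
deployment.0                                  # libvirt domain XML generated by oned
disk.0 -> /var/lib/one/datastores/1/f9e8d7c6  # symlink to the persistent image in the image DS
disk.1                                        # context CD-ROM image

Only the target of that symlink, the persistent image in the image DS, actually needs to be backed up.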