Bug when live-migrating VMs that use volatile disks on LVM


With the new 5.8 version, volatile disks are created as LVM volumes as expected.
However, during live migration there is a problem with device-mapper (dm) devices that already exist on the target node:
Fri Apr 5 13:25:49 2019 [Z0][VMM][I]: device-mapper: create ioctl on vg--one--102-lv--one--203--10 LVM-JccAEq59uqVcYlXqst4QCPOiI2Dd27UzYEDagBvuLzGDhXenyZnxEwX8plSykcNQ failed: Device or resource busy

These dm devices exist on all nodes, even when the corresponding LV is not active on those nodes.
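For example, LVM can report the LVs as not active on a node while the dm nodes are still present there. A quick check sketch (vg-one-102 and VM ID 203 are taken from the log above):

# Activation state of the VM's volumes according to LVM on this node
lvs --noheadings -o lv_name,lv_active vg-one-102 | grep 'lv-one-203-'
# Compare with the device-mapper table on the same node (output shown below)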

[root@nebulanode3 ~]# dmsetup ls | grep 203
vg--one--102-lv--one--203--9 (253:19)
vg--one--102-lv--one--203--8 (253:18)
vg--one--102-lv--one--203--7 (253:17)
vg--one--102-lv--one--203--6 (253:16)
vg--one--102-lv--one--203--5 (253:15)
vg--one--102-lv--one--203--11 (253:21)
vg--one--102-lv--one--203--4 (253:14)
vg--one--102-lv--one--203--10 (253:20)
vg--one--102-lv--one--203--3 (253:13)
vg--one--102-lv--one--203--2 (253:12)
vg--one--102-lv--one--203--0 (253:11)
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--9
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--8
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--7
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--6
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--5
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--11
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--4
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--10
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--3
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--2
[root@nebulanode3 ~]# dmsetup remove vg--one--102-lv--one--203--0
After removing all stale dm devices on the target node, live migration works normally.
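As a scripted workaround, the leftover mappings can be removed in a loop, skipping any mapping that is still open. This is only a sketch, not part of OpenNebula; the name pattern and VM ID 203 are taken from the output above:

# Remove leftover dm mappings for VM 203 on the target node.
# Only mappings with an open count of 0 are touched.
for dev in $(dmsetup ls | awk '{print $1}' | grep -e '-lv--one--203--'); do
    open=$(dmsetup info -c --noheadings -o open "$dev" | tr -d ' ')
    if [ "$open" = "0" ]; then
        dmsetup remove "$dev"
    else
        echo "skipping $dev (open count: $open)"
    fi
done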


Versions of the related components and OS (frontend, hypervisors, VMs):
Frontend: OpenNebula 5.8 on CentOS 7.6
Hypervisors: CentOS 7.6
VMs: any
Steps to reproduce:
Check whether dm devices already exist on the target node with dmsetup ls:
vg--one--102-lv--one--203--9 (253:19)
vg--one--102-lv--one--203--8 (253:18)
vg--one--102-lv--one--203--7 (253:17)
vg--one--102-lv--one--203--6 (253:16)
vg--one--102-lv--one--203--5 (253:15)
vg--one--102-lv--one--203--11 (253:21)
vg--one--102-lv--one--203--4 (253:14)
vg--one--102-lv--one--203--10 (253:20)
vg--one--102-lv--one--203--3 (253:13)
vg--one--102-lv--one--203--2 (253:12)
vg--one--102-lv--one--203--0 (253:11)
Try to live migrate a VM to that node.
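For example, with the onevm CLI (VM ID 203 and host name nebulanode3 are placeholders taken from this report; adjust as needed):

# Request a live migration of VM 203 to the node that still has the stale mappings
onevm migrate --live 203 nebulanode3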

Current results:
The migration fails with the following error:
Fri Apr 5 13:25:49 2019 [Z0][VMM][I]: device-mapper: create ioctl on vg--one--102-lv--one--203--10 LVM-JccAEq59uqVcYlXqst4QCPOiI2Dd27UzYEDagBvuLzGDhXenyZnxEwX8plSykcNQ failed: Device or resource busy
Expected results:
The migration succeeds.

Thanks for reporting. We will track the issue in GitHub:

Thanks.
Can you share any update on this issue?

So far I can only say that I can reproduce the issue and the bug seems to be in the driver; perhaps the migrate_other function was not adapted to the LVM volatile disk changes.
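For illustration only, a minimal sketch of the kind of cleanup such a fix might perform on the destination host before the domain is defined there (hypothetical; the actual fs_lvm driver code may differ):

# Hypothetical target-side cleanup; DM_NAME is the stale mapping from this report.
DM_NAME=vg--one--102-lv--one--203--10
# Remove the mapping only if nothing on this host holds it open
dmsetup info -c --noheadings -o open "$DM_NAME" | tr -d ' ' | grep -qx 0 \
    && dmsetup remove "$DM_NAME"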

Thanks, but I was referring to another problem of mine with the LVM datastore:
LV operations halt I/O on the whole datastore.