Storage migration on LVM fails (LV deactivated too early)

Hi
I’ve been using OpenNebula for a while now, since version 5.6 or so.
I’m currently on 5.8.1.
I’m using LVM system datastores backed by a shared LUN.
When I try to migrate a VM from one datastore to another, the driver fails:
Command execution failed (exit code: 1): /var/lib/one/remotes/tm/fs_lvm/mv nebulanode3:/var/lib/one//datastores/101/722/disk.0 nebulanode3:/var/lib/one//datastores/106/722/disk.0 722 100
Thu Oct 17 17:36:49 2019 [Z0][TM][E]: mv: Command "dd if=/dev/vg-one-101/lv-one-722-0 of=/dev/vg-one-106/lv-one-722-0 bs=64k" failed: dd: failed to open '/dev/vg-one-101/lv-one-722-0': No such file or directory
It deactivates the LV before starting the copy, so the source device no longer exists.
It doesn’t matter whether the VM is running or not.
I’m not sure how this is supposed to work, or why the driver deactivates the device, but I’m sure migration worked in previous versions.
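
For reference, here is roughly what I’d expect the copy step to do, based on the dd command in the error above. This is just a rough manual sketch, not the driver’s actual code; the VG/LV names are taken from my log, and it assumes the target LV already exists with at least the source’s size and that it runs on the node with access to the shared LUN (nebulanode3 in my case):

```bash
#!/bin/bash
# Manual sketch of the copy step from the failing migration.
SRC=/dev/vg-one-101/lv-one-722-0
DST=/dev/vg-one-106/lv-one-722-0

# Check whether the source LV is still active; after the failure it is not.
lvs -o lv_name,vg_name,lv_active vg-one-101

# Activate both LVs so the device nodes exist, then block-copy the data
# with the same command the driver tried to run.
lvchange -ay "$SRC"
lvchange -ay "$DST"
dd if="$SRC" of="$DST" bs=64k

# Deactivate the source again once the copy is done.
lvchange -an "$SRC"
```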