Purge fs_lvm disks from dead VMs

Has anyone run into stale volumes in an fs_lvm setup on 5.2?

I’m looking at finding or writing a script to detect and purge these. Going through the logs of the machines I still have logs for, these are either machines that failed to come up (for capacity reasons other than disk) or machines that we deleted where, for some reason, the volume delete (/var/lib/one/remotes/tm/fs_lvm/delete) failed (I can see the failure in the logs!).
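To frame what I mean, here’s the rough detection sketch I have in mind, assuming the usual fs_lvm naming of vg-one-&lt;ds_id&gt;/lv-one-&lt;vm_id&gt;-&lt;disk_id&gt; (that naming pattern, and the rule “not in onevm list means orphan candidate”, are my own assumptions, and it would need to run somewhere the volume groups are visible):

```python
#!/usr/bin/env python3
# Rough sketch: flag fs_lvm LVs whose owning VM no longer shows up in
# `onevm list` (i.e. it is gone or DONE). Assumes the LV naming convention
# lv-one-<vm_id>-<disk_id> inside vg-one-<ds_id> volume groups -- adjust the
# regex if your names differ. Run as oneadmin so all VMs are listed, on a
# host that can actually see the VGs.
import re
import subprocess
import xml.etree.ElementTree as ET

LV_RE = re.compile(r"^lv-one-(\d+)-\d+$")

def list_fs_lvm_volumes():
    """Return (vg, lv, vm_id) for every LV that looks fs_lvm-owned."""
    out = subprocess.check_output(
        ["sudo", "lvs", "--noheadings", "-o", "vg_name,lv_name"], text=True)
    vols = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) != 2:
            continue
        vg, lv = fields
        m = LV_RE.match(lv)
        if vg.startswith("vg-one-") and m:
            vols.append((vg, lv, int(m.group(1))))
    return vols

def live_vm_ids():
    """IDs of VMs still known to OpenNebula (DONE VMs are not listed)."""
    xml = subprocess.check_output(["onevm", "list", "-x"], text=True)
    root = ET.fromstring(xml)
    return {int(vm.findtext("ID")) for vm in root.findall("VM")}

if __name__ == "__main__":
    alive = live_vm_ids()
    for vg, lv, vm_id in list_fs_lvm_volumes():
        if vm_id not in alive:
            print(f"stale? {vg}/{lv} (VM {vm_id} not in onevm list)")
```

The idea is only to flag candidates; nothing would get removed until someone has eyeballed the list.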

How do I tell the difference between a deleted VM and a stopped VM? I noticed that onevm show ID still shows me information about the VM (state is DONE?), but it does not appear in onevm list.
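In case it’s useful, this is how I’ve been checking individual VMs so far: parse the numeric STATE out of onevm show -x (4 = STOPPED, 6 = DONE according to the XML-RPC docs, if I’m reading them right):

```python
#!/usr/bin/env python3
# Tell whether a VM is DONE (deleted) or merely STOPPED by reading the
# numeric <STATE> field from `onevm show -x <id>`.
import subprocess
import sys
import xml.etree.ElementTree as ET

# Numeric VM states as documented in the OpenNebula XML-RPC reference.
STATES = {4: "STOPPED", 6: "DONE"}

def vm_state(vm_id: int) -> str:
    xml = subprocess.check_output(["onevm", "show", "-x", str(vm_id)], text=True)
    state = int(ET.fromstring(xml).findtext("STATE"))
    return STATES.get(state, f"OTHER({state})")

if __name__ == "__main__":
    print(vm_state(int(sys.argv[1])))
```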

Is there a bulk purge script, or at least a detection script?
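Failing an existing tool, my plan was to bolt a purge step onto the detection sketch above: dry-run by default, and only ever fed volumes whose VMs have been confirmed DONE or nonexistent. Something like:

```python
#!/usr/bin/env python3
# Sketch of a purge step: feed it "vg/lv" pairs from the detection pass and it
# prints (or, with --force, actually runs) the lvremove commands.
import subprocess
import sys

def purge(volumes, force=False):
    for vg_lv in volumes:
        cmd = ["sudo", "lvremove", "-f", vg_lv]
        if force:
            subprocess.check_call(cmd)
        else:
            print("would run:", " ".join(cmd))

if __name__ == "__main__":
    force = "--force" in sys.argv
    purge([a for a in sys.argv[1:] if a != "--force"], force)
```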

I’m running 5.2.0 with multiple clusters. The datastores are fs_lvm on multiple iSCSI arrays (one datastore per iSCSI endpoint, assigned to different clusters). The fact that it is iSCSI is irrelevant here, since OpenNebula is not aware of it; each array just shows up as a 26TB LUN.