Booting EFI VMs

When working with EFI VMs, we need to provide a section in the XML config similar to this one:

<os firmware='efi'>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
</os>

This will in turn create a VARS file in /var/lib/libvirt/qemu/nvram/.
However, since OpenNebula sets dynamic_ownership to 0 in /etc/libvirt/qemu.conf, the file is unreadable and the VM can't be booted.
The error shown in the VM's log is:

Wed Jul 31 16:51:45 2019 [Z0][VMM][I]: error: internal error: process exited while connecting to monitor: 2019-07-31T14:51:45.160555Z qemu-system-x86_64: -drive file=/var/lib/libvirt/qemu/nvram/one-1934_VARS.fd,if=pflash,format=raw,unit=1: Could not open '/var/lib/libvirt/qemu/nvram/one-1934_VARS.fd': Permission denied

Manually changing ownership of the VARS file from root:root to oneadmin makes the VM bootable.
Changing dynamic_ownership to 1 makes the VM bootable every time, without any manual intervention.
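
For reference, a sketch of what each of those two fixes looks like (assuming the default nvram path and the oneadmin user/group):

# one-off fix for a single VM, run on the hypervisor as root
chown oneadmin:oneadmin /var/lib/libvirt/qemu/nvram/one-1934_VARS.fd

# or the permanent change: set "dynamic_ownership = 1" in /etc/libvirt/qemu.conf,
# then restart libvirtd
systemctl restart libvirtd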

We definitely don't want to change ownership every time a VM is booted (or a new one is created).

However, since OpenNebula changes the default value of dynamic_ownership, we're reluctant to revert it.

Can someone please explain how to properly handle this situation?
Also, if it is not supported and we have to enable dynamic_ownership, what would be the impact of that?


Versions of the related components and OS (frontend, hypervisors, VMs):
OpenNebula 5.8.1
Ubuntu 18.04

Steps to reproduce:
Create a template that contains the necessary EFI XML changes.
Instantiate that template.

Current results:
Deployment fails due to bad permissions on the VARS file.

Expected results:
Deployment succeeds.


I’d like to get an answer to this as well.

See also https://bugzilla.redhat.com/show_bug.cgi?id=1783255

After stumbling on the same issue, I ended up using /var/lib/one/datastores/xxx/ovmf/vms-nvram/ instead.

But I’m using deploy tools that alter the template before start.
Which generates
<loader readonly="yes" type="pflash">/var/lib/one/datastores/xxx/ovmf/OVMF_CODE-pure-efi.fd</loader> <nvram>/var/lib/one/datastores/xxx/ovmf/vms-nvram/one-176-OVMF_VARS-pure-efi.fd</nvram>

The end result is that the VARS files are not on the local drive; each VM has its own file on the datastore. I'm also free to use whatever OVMF version I want instead of the one that comes with the distro.
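
For anyone wanting to replicate this without those tools, a rough sketch of the pre-start step, assuming the OVMF files (including an OVMF_VARS-pure-efi.fd variable-store template next to OVMF_CODE-pure-efi.fd) are already in the datastore's ovmf/ directory:

# give the new VM its own writable copy of the UEFI variable store
cp /var/lib/one/datastores/xxx/ovmf/OVMF_VARS-pure-efi.fd \
   /var/lib/one/datastores/xxx/ovmf/vms-nvram/one-176-OVMF_VARS-pure-efi.fd
chown oneadmin:oneadmin /var/lib/one/datastores/xxx/ovmf/vms-nvram/one-176-OVMF_VARS-pure-efi.fd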

Guys I’ve created an issue to include support for this natively: Support for EFI VMs in KVM · Issue #4110 · OpenNebula/one · GitHub


There’s a workaround for this that doesn’t require waiting on upstream changes to libvirtd. Mount /var/lib/libvirt/qemu/nvram with a filesystem that supports setting the uid=oneadmin. libvirtd doesn’t know the difference & UEFI VMs, including migrations, work fine.

There are multiple filesystems that support uid= (see man mount). Below is how I do it, using a loopback device, autofs, and vfat. If you don't want to use autofs, /etc/fstab works (see the example line after the commands below). Loopback devices aren't required either; any device that can be formatted and mounted with uid=oneadmin will work.

cd /var/lib/libvirt/qemu
dd if=/dev/zero of=/var/lib/libvirt/qemu/nvram.img bs=100M count=2 # 200M
chown root. /var/lib/libvirt/qemu/nvram.img && chmod 0600 /var/lib/libvirt/qemu/nvram.img # only root needs to rw the image
losetup -fP /var/lib/libvirt/qemu/nvram.img && losetup -a # add & print loopback devices
mkfs.vfat /dev/loop0 # format image with vfat
losetup -d /dev/loop0 # remove loopback device

# install autofs; setup direct automount map
chown oneadmin. /var/lib/libvirt/qemu/nvram # autofs will change the mode via umask
echo "/- /etc/auto.qemu-nvram" > /etc/auto.master.d/nvram.autofs
echo "/var/lib/libvirt/qemu/nvram -fstype=vfat,uid=oneadmin,gid=oneadmin,umask=0077 :/var/lib/libvirt/qemu/nvram.img" > /etc/auto.qemu-nvram
systemctl restart autofs

touch /var/lib/libvirt/qemu/nvram/test && ls -l /var/lib/libvirt/qemu/nvram && rm /var/lib/libvirt/qemu/nvram/test
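
For the /etc/fstab alternative mentioned above, a single line like this should be equivalent (a sketch, assuming the same image path, mount point, and mount options as the autofs setup):

/var/lib/libvirt/qemu/nvram.img /var/lib/libvirt/qemu/nvram vfat loop,uid=oneadmin,gid=oneadmin,umask=0077 0 0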