Automatic vCPU pinning using VM Hooks

Hi all, I would like to share my work on configuring automatic vCPU pinning using VM Hooks.

By default, libvirt pins vCPUs to all available cores, including across NUMA nodes, which causes unnecessary CPU cache misses and performance losses when a vCPU accesses memory from another NUMA node.

Initially I found the vcpu option placement=auto in the libvirt docs, which pins vCPUs automatically by calling numad, so I opened a feature request.
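For reference, the corresponding setting in the libvirt domain XML looks like this (example with 4 vCPUs):

<vcpu placement='auto'>4</vcpu>

With placement='auto', libvirt queries numad for an advisory node set when the domain starts and pins the domain process accordingly.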

After some googling I found a Python script called pinhead, which is pretty interesting. There is also a forked version on GitHub that enhances pinning on newer CPUs. I created my own fork, cherry-picking that enhanced pinning. The code is available here

How to use it

Download pinhead.py to the hooks directory on the OpenNebula frontend:

cd /var/lib/one/remotes/hooks
wget https://raw.githubusercontent.com/FELDSAM-INC/pinhead/master/pinhead/pinhead.py
chmod +x pinhead.py

Add the new VM Hooks to /etc/one/oned.conf:

VM_HOOK = [
    name      = "repincpu_on_run",
    on        = "RUNNING",
    command   = "pinhead.py",
    remote    = "YES" ]

VM_HOOK = [
    name      = "repincpu_on_stop",
    on        = "STOP",
    command   = "pinhead.py",
    remote    = "YES" ]

VM_HOOK = [
    name      = "repincpu_on_suspend",
    on        = "CUSTOM",
    state     = "SUSPENDED",
    lcm_state = "LCM_INIT",
    command   = "pinhead.py",
    remote    = "YES" ]

VM_HOOK = [
    name      = "repincpu_on_poweroff",
    on        = "CUSTOM",
    state     = "POWEROFF",
    lcm_state = "LCM_INIT",
    command   = "pinhead.py",
    remote    = "YES" ]

VM_HOOK = [
    name      = "repincpu_on_undeploy",
    on        = "CUSTOM",
    state     = "UNDEPLOYED",
    lcm_state = "LCM_INIT",
    command   = "pinhead.py",
    remote    = "YES" ]

VM_HOOK = [
    name      = "repincpu_on_done",
    on        = "DONE",
    command   = "pinhead.py",
    remote    = "YES" ]

Restart OpenNebula and sync the hosts:

systemctl restart opennebula
onehost sync --force
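After the sync, the script should be present on every KVM host:

ls -l /var/tmp/one/hooks/pinhead.py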

You can test it manually on a host by calling the script and checking the log:

/var/tmp/one/hooks/pinhead.py -
grep pinhead /var/log/messages

Now automatic vCPU pinning will run after you run, suspend, stop, power off, undeploy, or terminate a VM.
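You can also inspect the resulting pinning of a running VM with virsh (OpenNebula names the libvirt domain one-<vmid>, so here for VM ID 123):

virsh vcpupin one-123

This prints the current vCPU to physical CPU affinity map.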

To improve memory management, I also added a numatune configuration:

nano /etc/one/vmm_exec/vmm_exec_kvm.conf
...
RAW = "<numatune><memory mode='strict' placement='auto'/></numatune>"
...
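After deploying a new VM, you can check the applied NUMA policy with virsh (again assuming VM ID 123):

virsh numatune one-123

It should report numa_mode : strict together with the node set that numad selected.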

Hope this helps someone. Comments, feedback, and ideas are welcome.

Refs:
Libvirt NUMA Tuning


This looks really interesting. Do you have any performance gains on your infrastructure? In the past I was looking for a solution that would let me automatically skip certain hypervisor cores and dedicate specific ones to VMs on a node/hypervisor.

Hello @Snowman, I didn’t do any scientific testing, but it looks logical, and it is what’s recommended for tuning KVM performance. So you have a solution now :slight_smile:


Hello, after a while, about 4 months of usage, there is a problem with overprovisioning. If you deploy VMs with more vCPUs than you actually have, then there can be visible CPU steal time in the guests. Also, on VM migration the pinning is rerun only on the new host, not on the original host, which is also ineffective. The script is also not NUMA-aware!
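As a side note, a quick way to observe this from inside a guest is the st column of vmstat, which shows the percentage of CPU time stolen by the hypervisor:

vmstat 1 5

A persistently nonzero st value means the guest’s vCPUs are waiting for physical cores.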

So generally, I don’t recommend this on overprovisioned hosts or hosts with multiple CPU sockets (NUMA nodes), and personally I don’t use it anymore :slight_smile:

But I keep using <numatune><memory mode='strict' placement='auto'/></numatune>, which is good for multi-socket systems.

Thank you @feldsam for your feedback.

I was really interested in doing the same kind of thing on our cluster. Maybe I will use it on hosts where I don’t do overcommitment.

Anyway, I think I will dedicate a CPU core to OpenNebula / KVM, and the other cores will be allocated as usual.
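One way to do that (just a rough sketch, I haven’t tested this): pin the host’s system services to core 0 via systemd, so the remaining cores are left free for pinned VMs:

# /etc/systemd/system.conf - applies to all services after a reboot
CPUAffinity=0

Pinned VMs then get the remaining cores to themselves; an alternative is the isolcpus= kernel parameter, which removes cores from the general scheduler entirely.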