Overprovisioning factor is applied twice to new hosts


For our staging cluster we overprovision host resources with RESERVED_CPU="-50%" and RESERVED_MEM="-25%" on the cluster template. The overcommit values are set only on the cluster template; the host template's RESERVED_CPU and RESERVED_MEM are blank. When adding a host with 128GB RAM and 32 cores to the cluster, I would expect the reported memory capacity to be 160GB (128 * 1.25) and the reported CPU capacity to be 4800 (32 * 1.5 * 100).
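The expected single application can be sketched as follows (my reading of the semantics: a negative RESERVED_* percentage adds that share back on top of the raw total):

```python
def expected_capacity(raw, reserved_pct):
    """Apply a RESERVED_CPU/RESERVED_MEM percentage once.
    Negative values overprovision, i.e. raise capacity above raw."""
    return raw * (1 - reserved_pct / 100)

# 128 GB host with RESERVED_MEM="-25%", 32 cores with RESERVED_CPU="-50%"
mem_gb = expected_capacity(128, -25)    # 160.0
cpu = expected_capacity(32 * 100, -50)  # 4800.0
print(mem_gb, cpu)
```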

What actually happens is that the “Overcommitment” section in Sunstone displays the expected values, but the capacity section shows the memory and CPU overcommitment factors applied twice: memory is reported as 196GB (roughly 128 * 1.25 * 1.25) and CPU as 7200 (32 * 1.5 * 1.5 * 100) (see attached).
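For comparison, applying the factors twice reproduces the inflated capacity figures (using the nominal 128 GB; the 196 GB Sunstone actually shows is consistent with the slightly lower usable memory the host reports):

```python
factor_mem, factor_cpu = 1.25, 1.5  # from RESERVED_MEM="-25%", RESERVED_CPU="-50%"

# Applied once (expected) vs. twice (what the capacity pane shows)
mem_once, mem_twice = 128 * factor_mem, 128 * factor_mem ** 2
cpu_once, cpu_twice = 3200 * factor_cpu, 3200 * factor_cpu ** 2
print(mem_once, mem_twice)  # 160.0 200.0
print(cpu_once, cpu_twice)  # 4800.0 7200.0
```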

(Ruben S. Montero) #2

Hi Dave,

I want to check whether this is just a cosmetic issue or not. Could you please send the output of `onehost show -x 51`?



Hi Ruben,

I had to reuse that host, so I provisioned a new one with 24 cores and 64GB RAM. Looking at the XML per your suggestion, it does appear to be just a cosmetic issue in Sunstone: the underlying values are correct, with MAX_MEM = 78.4GB (62.7 * 1.25) and MAX_CPU = 3600 (2400 * 1.5).
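A quick sanity check against the XML figures (62.7 GB being the usable memory the hypervisor reports for the 64GB host, and 2400 being 24 cores * 100):

```python
usable_mem_gb = 62.7   # TOTAL_MEM reported for the 64 GB host
total_cpu = 24 * 100   # 24 cores

max_mem = usable_mem_gb * 1.25  # RESERVED_MEM = -25%, applied once
max_cpu = total_cpu * 1.5       # RESERVED_CPU = -50%, applied once
print(round(max_mem, 1), max_cpu)  # 78.4 3600.0
```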

Relevant excerpt from the `onehost show -x 53` XML below:

    <KVM_CPU_MODELS><![CDATA[486 pentium pentium2 pentium3 pentiumpro coreduo n270 core2duo qemu32 kvm32 cpu64-rhel5 cpu64-rhel6 kvm64 qemu64 Conroe Penryn Nehalem Nehalem-IBRS Westmere Westmere-IBRS SandyBridge SandyBridge-IBRS IvyBridge IvyBridge-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell Haswell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell Broadwell-IBRS Skylake-Client Skylake-Client-IBRS Skylake-Server Skylake-Server-IBRS athlon phenom Opteron_G1 Opteron_G2 Opteron_G3 Opteron_G4 Opteron_G5 EPYC EPYC-IBPB]]></KVM_CPU_MODELS>
    <KVM_MACHINES><![CDATA[pc-i440fx-rhel7.0.0 pc rhel6.0.0 rhel6.1.0 rhel6.2.0 rhel6.3.0 rhel6.4.0 rhel6.5.0 rhel6.6.0]]></KVM_MACHINES>
    <MODELNAME><![CDATA[Intel(R) Xeon(R) CPU           E5645  @ 2.40GHz]]></MODELNAME>

Thanks for your help!