[SOLVED] About running quotas

We are looking into the running quotas, but they seem pretty odd. All my users have negative values for their running quotas. Is that normal?


Versions of the related components and OS (frontend, hypervisors, VMs):

5.8.0 (problem was already there with 5.6.2)

Steps to reproduce:

Have an existing VM.
Upgrade to 5.6.2 or any version that supports running quotas.

Current results:

Let's pick one user. This user currently has 2 RUNNING VMs (4GB + 2GB) and 1 UNDEPLOYED VM (1GB). The quotas show:
RUNNING VM : -1 / -
RUNNING CPU: -1 / -
RUNNING MEMORY: -2097152KB / -

Expected results:

RUNNING VM : 2 / -
RUNNING CPU: 2 / -
RUNNING MEMORY: 6GB / -

Or am I missing something?

Best regards,
Edouard

Hi @madko

The -1 is because that user has no running quotas set up. If you add some quotas to that user, you will see the current number of running VMs. Check this for more information.
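
For example, something along these lines sets a running VMs limit (a minimal sketch: running_quota.txt is just an example file name, and the -1 values keep the other running quotas unlimited):

$ cat > running_quota.txt <<'EOF'
VM = [
  RUNNING_VMS    = "2",
  RUNNING_CPU    = "-1",
  RUNNING_MEMORY = "-1"
]
EOF
$ oneuser quota <USER_ID> running_quota.txt

Without the file argument, oneuser quota opens the quota template in your editor instead.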

Hi Alejandro,

So I’ve set a limit of 2 running VMs for this user. The user can still start his 3rd VM; he gets no warning, but the VM stays in PENDING state. The admin sees RUNNING VM: 0 / 2. Is that the correct behavior?

Hi @madko

The user should see this message:

$ onetemplate instantiate 3 --user user
Password: 
[one.template.instantiate] User [2] : user [2] limit of 1 reached for RUNNING_VMS quota in VM.

And the admin should see this in the user quotas:

      RUNNING VMS       RUNNING MEMORY          RUNNING CPU
      1 /       1      128M /        -      0.10 /        -

So it seems there is a bug indeed. Here is what I’ve got:

Quotas: (screenshot of the user's quotas, showing the 2 running VMs and 1 VM stuck in PENDING)

Hi @madko

Could you please send me the output of oneuser show <USER_ID> -x for the user and for the admin?

Sure, here is the oneuser show output for my_user:

<USER>
  <ID>3</ID>
  <GID>101</GID>
  <GROUPS>
    <ID>100</ID>
    <ID>101</ID>
    <ID>106</ID>
    <ID>109</ID>
    <ID>113</ID>
  </GROUPS>
  <GNAME>architecte</GNAME>
  <NAME>my_user</NAME>
  <PASSWORD><![CDATA[xxx]]></PASSWORD>
  <AUTH_DRIVER><![CDATA[core]]></AUTH_DRIVER>
  <ENABLED>1</ENABLED>
  <LOGIN_TOKEN>
    <TOKEN>xxx</TOKEN>
    <EXPIRATION_TIME>1545420903</EXPIRATION_TIME>
    <EGID>-1</EGID>
  </LOGIN_TOKEN>
  <TEMPLATE>
    <EMAIL><![CDATA[xxx]]></EMAIL>
    <FIRSTNAME><![CDATA[xxx]]></FIRSTNAME>
    <LASTNAME><![CDATA[xxx]]></LASTNAME>
    <SSH_PUBLIC_KEY><![CDATA[ssh-rsa xxx]]></SSH_PUBLIC_KEY>
    <SUNSTONE>
      <DEFAULT_VIEW><![CDATA[cloud]]></DEFAULT_VIEW>
      <LANG><![CDATA[en_US]]></LANG>
    </SUNSTONE>
    <TOKEN_PASSWORD><![CDATA[xxx]]></TOKEN_PASSWORD>
  </TEMPLATE>
  <DATASTORE_QUOTA/>
  <NETWORK_QUOTA>
    <NETWORK>
      <ID><![CDATA[1]]></ID>
      <LEASES><![CDATA[-1]]></LEASES>
      <LEASES_USED><![CDATA[1]]></LEASES_USED>
    </NETWORK>
    <NETWORK>
      <ID><![CDATA[444]]></ID>
      <LEASES><![CDATA[-1]]></LEASES>
      <LEASES_USED><![CDATA[2]]></LEASES_USED>
    </NETWORK>
  </NETWORK_QUOTA>
  <VM_QUOTA>
    <VM>
      <CPU><![CDATA[-1]]></CPU>
      <CPU_USED><![CDATA[0.30]]></CPU_USED>
      <MEMORY><![CDATA[49152]]></MEMORY>
      <MEMORY_USED><![CDATA[7168]]></MEMORY_USED>
      <RUNNING_CPU><![CDATA[-1]]></RUNNING_CPU>
      <RUNNING_CPU_USED><![CDATA[-0.90]]></RUNNING_CPU_USED>
      <RUNNING_MEMORY><![CDATA[-2]]></RUNNING_MEMORY>
      <RUNNING_MEMORY_USED><![CDATA[-1024]]></RUNNING_MEMORY_USED>
      <RUNNING_VMS><![CDATA[2]]></RUNNING_VMS>
      <RUNNING_VMS_USED><![CDATA[0]]></RUNNING_VMS_USED>
      <SYSTEM_DISK_SIZE><![CDATA[-1]]></SYSTEM_DISK_SIZE>
      <SYSTEM_DISK_SIZE_USED><![CDATA[115712]]></SYSTEM_DISK_SIZE_USED>
      <VMS><![CDATA[20]]></VMS>
      <VMS_USED><![CDATA[3]]></VMS_USED>
    </VM>
  </VM_QUOTA>
  <IMAGE_QUOTA>
    <IMAGE>
      <ID><![CDATA[16]]></ID>
      <RVMS><![CDATA[-1]]></RVMS>
      <RVMS_USED><![CDATA[1]]></RVMS_USED>
    </IMAGE>
    <IMAGE>
      <ID><![CDATA[124]]></ID>
      <RVMS><![CDATA[-1]]></RVMS>
      <RVMS_USED><![CDATA[1]]></RVMS_USED>
    </IMAGE>
    <IMAGE>
      <ID><![CDATA[407]]></ID>
      <RVMS><![CDATA[-1]]></RVMS>
      <RVMS_USED><![CDATA[1]]></RVMS_USED>
    </IMAGE>
  </IMAGE_QUOTA>
  <DEFAULT_USER_QUOTAS>
    <DATASTORE_QUOTA/>
    <NETWORK_QUOTA/>
    <VM_QUOTA/>
    <IMAGE_QUOTA/>
  </DEFAULT_USER_QUOTAS>
</USER>

And the one for the admin:

<USER>
  <ID>0</ID>
  <GID>0</GID>
  <GROUPS>
    <ID>0</ID>
  </GROUPS>
  <GNAME>oneadmin</GNAME>
  <NAME>oneadmin</NAME>
  <PASSWORD><![CDATA[xxx]]></PASSWORD>
  <AUTH_DRIVER><![CDATA[core]]></AUTH_DRIVER>
  <ENABLED>1</ENABLED>
  <LOGIN_TOKEN>
    <TOKEN>xxx</TOKEN>
    <EXPIRATION_TIME>1539632119</EXPIRATION_TIME>
    <EGID>0</EGID>
  </LOGIN_TOKEN>
  <TEMPLATE>
    <SSH_PUBLIC_KEY><![CDATA[ssh-rsa xxx]]></SSH_PUBLIC_KEY>
    <SUNSTONE>
      <DEFAULT_VIEW><![CDATA[admin]]></DEFAULT_VIEW>
      <LANG><![CDATA[en_US]]></LANG>
      <TABLE_DEFAULT_PAGE_LENGTH><![CDATA[100]]></TABLE_DEFAULT_PAGE_LENGTH>
    </SUNSTONE>
    <TOKEN_PASSWORD><![CDATA[xxx]]></TOKEN_PASSWORD>
  </TEMPLATE>
  <DATASTORE_QUOTA/>
  <NETWORK_QUOTA/>
  <VM_QUOTA/>
  <IMAGE_QUOTA/>
  <DEFAULT_USER_QUOTAS>
    <DATASTORE_QUOTA/>
    <NETWORK_QUOTA/>
    <VM_QUOTA/>
    <IMAGE_QUOTA/>
  </DEFAULT_USER_QUOTAS>
</USER>

It’s like the counter was initialized without taking previously deployed VMs into account. So if I stop a running VM, the counter goes into negative values; if I restart this VM, the counter goes back up to 0. I don’t remember ever having a positive value for any of my users.

By the way, onedb fsck doesn’t find anything wrong.
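
For reference, this is roughly how I compare the stored counter with the actual number of running VMs for one user (just a sketch: it assumes xmllint is installed, the commands run as oneadmin, and user ID 3 / name my_user is this user):

$ # stored counter from the quota XML
$ oneuser show 3 -x | xmllint --xpath 'string(//VM_QUOTA/VM/RUNNING_VMS_USED)' -
$ # actual number of that user's VMs currently in the "runn" state
$ onevm list my_user | grep -c ' runn '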

I can see that the user has the running VMs quota set to 2, but according to <RUNNING_VMS_USED><![CDATA[0]]></RUNNING_VMS_USED> he currently has no running VMs. So could you please check whether the user has more than 2 VMs in RUNNING state?

Also, we can do a simple test (a rough command sketch follows the list):

  • Terminate the VMs for the user.
  • Then instantiate a VM using that user, wait until it is in RUNNING state, and repeat this step one more time.
  • Try to instantiate one more time (this is the third attempt): what happens? Do you see a message, or can the VM be instantiated?
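
Something like this (the template and VM IDs are placeholders, and the terminate step runs as oneadmin):

$ # terminate the user's existing VMs first
$ onevm terminate <VM_ID>
$ # instantiate as the user, then check the state until it reaches RUNNING
$ onetemplate instantiate <TEMPLATE_ID> --user my_user
$ onevm show <VM_ID> | grep STATE
$ # repeat once more, then a third attempt should hit the RUNNING_VMS quota
$ onetemplate instantiate <TEMPLATE_ID> --user my_user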

My post #5 shows a screenshot of the 2 running VMs + 1 stuck in PENDING. It is the same user.

I will try what you suggest.

Yes, after creating 3 VMs my running VM quota is at 2/2. So if I try to create one more VM, I indeed get this message :slight_smile:

[one.template.instantiate] User [3] : user [3] limit of 2 reached for RUNNING_VMS quota in VM.

But how can I fix the initial value of this counter? This user actually has 5 running VMs, not 2.

That’s great!

About the counter: it should be reset automatically, so maybe the fsck did something wrong. I will keep this in mind, and if it happens again I will check.

Thanks for your feedback!

It seems that old users, maybe those created before the running quotas functionality, all have this problem. Some of them are at -30 running VMs, so it can’t be fixed just by terminating/recreating VMs. Do you think that fsck could fix that at some point?
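
In case it helps, this is roughly how I spot the affected users (again just a sketch, assuming xmllint is available and running as oneadmin):

$ oneuser list -x | xmllint --xpath '//USER[VM_QUOTA/VM/RUNNING_VMS_USED < 0]/NAME' -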

There is a bug in the fsck: the functionality is missing. I will open a ticket on GitHub so we can fix it.

Thanks for your patience and for the information!

Thank you Alejandro, I’ve seen ticket #3082.

Best regards