Ceph system datastore returns no space usage

(Dmitry) #1

I’m trying to create VMs in a specific datastore, but another one gets selected instead.

Using OpenNebula 5.8.1

KVM + Ubuntu 18

  1. Datastores:

  2. Trying to create a VM in the selected datastore (via the "Deploy VM in a specific datastore" tab). Two datastores are available: ID 100 and ID 0, both of system type. I choose the one with ID 100.

  3. Afterwards the VM was created, but in a different datastore (ID 101, of image type):

DISK = [
ALLOW_ORPHANS = "mixed",
CEPH_HOST = "IPS",
CEPH_SECRET = "SECRET",
CEPH_USER = "libvirt",
CLONE = "YES",
CLONE_TARGET = "SELF",
CLUSTER_ID = "0",
DATASTORE = "cephds",
DATASTORE_ID = "101",
DEV_PREFIX = "vd",
DISK_ID = "0",
DISK_SNAPSHOT_TOTAL_SIZE = "0",
DISK_TYPE = "RBD",
DRIVER = "raw",
IMAGE = "Ubuntu 18.04 - KVM",
IMAGE_ID = "0",
IMAGE_STATE = "2",
LN_TARGET = "NONE",
ORIGINAL_SIZE = "2252",
POOL_NAME = "one-images",
READONLY = "NO",
SAVE = "NO",
SIZE = "2252",
SOURCE = "one-images/one-0",
TARGET = "vda",
TM_MAD = "ceph",
TYPE = "RBD" ]

What kind of problem is this?

(Alejandro Huertas) #2

Hello @way

You are selecting the system datastore where the VM is going to run, but the disk will be placed in the associated image datastore. If you want to check the system datastore, you have to take a look at the history_records.
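For example, the system datastore a VM actually landed on can be read from the history records in the `onevm show <VM_ID> -x` XML. A minimal sketch with Python's standard library; the trimmed XML sample below mirrors the layout shown later in this thread:

```python
# Sketch: pull the host and system datastore ID from a VM's history records,
# as returned by `onevm show <VM_ID> -x`. The XML below is a trimmed sample.
import xml.etree.ElementTree as ET

vm_xml = """
<VM>
  <ID>10</ID>
  <HISTORY_RECORDS>
    <HISTORY>
      <SEQ>0</SEQ>
      <HOSTNAME>ubuntu-srv6</HOSTNAME>
      <DS_ID>100</DS_ID>
    </HISTORY>
  </HISTORY_RECORDS>
</VM>
"""

root = ET.fromstring(vm_xml)
# The last <HISTORY> entry reflects the current placement.
last = root.findall("./HISTORY_RECORDS/HISTORY")[-1]
host = last.findtext("HOSTNAME")
ds_id = last.findtext("DS_ID")
print(f"VM runs on host {host}, system datastore {ds_id}")
```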

(Dmitry) #3

Hello! Thank you for your answer.

I got it. But if I select the datastore where the VM is going to run, its usage should grow afterwards, shouldn’t it? Mine does not.

(Alejandro Huertas) #4

Send me the output of onevm show <VM_ID> -x and oneimage show <IMAGE_ID> -x so I can check it.

(Spencer) #5

@ahuertas I have the same issue @way is seeing: the Sunstone datastore view is not updating the used capacity of the pool. I recently created an HA cluster with 3 oned servers, 2 KVM hosts, and 3 Sunstone servers, using the same Ceph pool I had been using for 5.6 testing, now with 5.8.1. IIRC this was working correctly with version 5.8, but my environment is a little different now, so maybe something is misconfigured on my end?

I have 10 VMs with ~300 MiB disk usage each (8GB provisioned disk), 1 OS image @ 1.2GiB, and an empty generic storage datablock @ 20GiB.

The capacity shown is typically the provisioned/total.

Here is the ceph output:

[oneadmin@front2 root]$ rbd du -p cachepool
NAME PROVISIONED USED
foo 4 GiB 0 B
one-4 8 GiB 1.2 GiB
one-5@snap 8 GiB 1.2 GiB
one-5 8 GiB 0 B
one-5-10-0 8 GiB 272 MiB
one-5-11-0 8 GiB 272 MiB
one-5-2-0 8 GiB 316 MiB
one-5-3-0 8 GiB 316 MiB
one-5-4-0 8 GiB 272 MiB
one-5-5-0 8 GiB 272 MiB
one-5-6-0 8 GiB 272 MiB
one-5-7-0 8 GiB 272 MiB
one-5-8-0 8 GiB 272 MiB
one-5-9-0 8 GiB 272 MiB
one-6 20 GiB 0 B
120 GiB 5.2 GiB
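As a sanity check, the USED column above does sum to roughly the reported total (a quick arithmetic check; values transcribed from the table, so minor rounding differences are expected):

```python
# Quick check: sum the USED column of `rbd du -p cachepool`
# (values in MiB, transcribed from the output above).
used_mib = [
    0,        # foo
    1228.8,   # one-4 (1.2 GiB)
    1228.8,   # one-5@snap (1.2 GiB)
    0,        # one-5
    272, 272, 316, 316, 272, 272, 272, 272, 272, 272,  # one-5-*-0 clones
    0,        # one-6
]
total_gib = sum(used_mib) / 1024
print(f"total = {total_gib:.1f} GiB")  # close to the 5.2 GiB rbd reports
```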

oned log output for the one-6 generic storage image:

Mon May 6 10:10:20 2019 [Z0][ImM][I]: Image created and ready to use
Mon May 6 10:10:20 2019 [Z0][InM][D]: Monitoring datastore Ceph-Images (103)
Mon May 6 10:10:22 2019 [Z0][ImM][D]: Datastore Ceph-Images (103) successfully monitored.

output of oneimage show 5:

[oneadmin@front2 root]$ oneimage show 5
IMAGE 5 INFORMATION
ID : 5
NAME : CentOS 7 - KVM
USER : oneadmin
GROUP : oneadmin
LOCK : None
DATASTORE : Ceph-Images
TYPE : OS
REGISTER TIME : 05/05 16:58:54
PERSISTENT : No
SOURCE : hddpool/one-5
PATH : https://marketplace.opennebula.systems//appliance/4e3b2788-d174-4151-b026-94bb0b987cbb/download/0
SIZE : 8G
STATE : used
RUNNING_VMS : 10

PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---

IMAGE TEMPLATE
DEV_PREFIX="vd"
FORMAT="qcow2"
FROM_APP="28"
FROM_APP_MD5="dbc81ae029a17e12e51c0aac3cc5ac4d"
FROM_APP_NAME="CentOS 7 - KVM"

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
11 oneadmin oneadmin two7            runn  1.0  570.7M cloud3.ecs   0d 00h14
10 oneadmin oneadmin two6            runn  0.0  568.8M cloud4.ecs   0d 00h14
 9 oneadmin oneadmin two5            runn  0.0  549.6M cloud3.ecs   0d 00h14
 8 oneadmin oneadmin two4            runn  0.0  586.4M cloud4.ecs   0d 00h14
 7 oneadmin oneadmin two0            runn  0.0  565.2M cloud3.ecs   0d 00h14
 6 oneadmin oneadmin two1            runn  0.0  572.6M cloud4.ecs   0d 00h14
 5 oneadmin oneadmin two2            runn  0.0  542.7M cloud3.ecs   0d 00h14
 4 oneadmin oneadmin two3            runn  0.0  590.6M cloud4.ecs   0d 00h14
 3 oneadmin oneadmin 0               runn  0.0  501.8M cloud3.ecs   0d 17h19
 2 oneadmin oneadmin 1               runn  0.0  665.7M cloud4.ecs   0d 17h19

onevm show 11:

[oneadmin@front2 root]$ onevm show 11
VIRTUAL MACHINE 11 INFORMATION
ID : 11
NAME : two7
USER : oneadmin
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : RUNNING
LOCK : None
RESCHED : No
HOST : cloud3.ecstest.com
CLUSTER ID : 0
CLUSTER : default
START TIME : 05/06 10:07:51
END TIME : -
DEPLOY ID : one-11

VIRTUAL MACHINE MONITORING
CPU : 0.0
MEMORY : 570.9M
NETTX : 0K
NETRX : 0K
DISKRDBYTES : 139291972
DISKRDIOPS : 6570
DISKWRBYTES : 15672320
DISKWRIOPS : 320

PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---

VM DISKS
ID DATASTORE TARGET IMAGE SIZE TYPE SAVE
0 Ceph-Image vda CentOS 7 - KVM -/8G rbd NO
1 - hda CONTEXT -/- - -

VIRTUAL MACHINE HISTORY
SEQ UID REQ HOST ACTION DS START TIME PROLOG
0 - - cloud3.ecste none 104 05/06 10:09:40 0d 00h13m 0h00m04s

USER TEMPLATE
LOGO="images/logos/centos.png"
SCHED_MESSAGE="Mon May 6 10:09:08 2019 : Cannot dispatch VM to any Host. Possible reasons: Not enough capacity in Host or System DS, dispatch limit reached, or limit of free leases reached."

VIRTUAL MACHINE TEMPLATE
AUTOMATIC_DS_REQUIREMENTS="(\"CLUSTERS/ID\" @> 0)"
AUTOMATIC_NIC_REQUIREMENTS="(\"CLUSTERS/ID\" @> 0)"
AUTOMATIC_REQUIREMENTS="(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES)"
CONTEXT=[
DISK_ID="1",
NETWORK="YES",
SSH_PUBLIC_KEY="",
TARGET="hda" ]
CPU="1"
GRAPHICS=[
LISTEN="0.0.0.0",
PORT="5911",
TYPE="vnc" ]
MEMORY="768"
OS=[
ARCH="x86_64" ]
TEMPLATE_ID="1"
TM_MAD_SYSTEM="ceph"
VMID="11"

Size is -/8G for the RBD disk. I don’t know what is up with the scheduling message; I batch-created these VMs and onehost show says it is running on my host. I can’t VNC in to confirm this VM is actually running because I haven’t quite set that up on my external Sunstone servers yet.

(Dmitry) #6

Here we go: the VM with its parent image.

<VM>
  <ID>10</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>ELK</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>0</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <LAST_POLL>1557206038</LAST_POLL>
  <STATE>3</STATE>
  <LCM_STATE>3</LCM_STATE>
  <PREV_STATE>3</PREV_STATE>
  <PREV_LCM_STATE>3</PREV_LCM_STATE>
  <RESCHED>0</RESCHED>
  <STIME>1557120066</STIME>
  <ETIME>0</ETIME>
  <DEPLOY_ID>one-10</DEPLOY_ID>
  <MONITORING>
    <CPU><![CDATA[0.0]]></CPU>
    <DISKRDBYTES><![CDATA[255606636]]></DISKRDBYTES>
    <DISKRDIOPS><![CDATA[26980]]></DISKRDIOPS>
    <DISKWRBYTES><![CDATA[83318784]]></DISKWRBYTES>
    <DISKWRIOPS><![CDATA[4483]]></DISKWRIOPS>
    <MEMORY><![CDATA[788388]]></MEMORY>
    <NETRX><![CDATA[2335737]]></NETRX>
    <NETTX><![CDATA[471289]]></NETTX>
    <STATE><![CDATA[a]]></STATE>
  </MONITORING>
  <TEMPLATE>
    <AUTOMATIC_DS_REQUIREMENTS><![CDATA[("CLUSTERS/ID" @> 0)]]></AUTOMATIC_DS_REQUIREMENTS>
    <AUTOMATIC_NIC_REQUIREMENTS><![CDATA[("CLUSTERS/ID" @> 0)]]></AUTOMATIC_NIC_REQUIREMENTS>
    <AUTOMATIC_REQUIREMENTS><![CDATA[(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES)]]></AUTOMATIC_REQUIREMENTS>
    <CONTEXT>
      <DISK_ID><![CDATA[1]]></DISK_ID>
      <ETH0_CONTEXT_FORCE_IPV4><![CDATA[]]></ETH0_CONTEXT_FORCE_IPV4>
      <ETH0_DNS><![CDATA[]]></ETH0_DNS>
      <ETH0_EXTERNAL><![CDATA[]]></ETH0_EXTERNAL>
      <ETH0_GATEWAY><![CDATA[192.168.188.254]]></ETH0_GATEWAY>
      <ETH0_GATEWAY6><![CDATA[]]></ETH0_GATEWAY6>
      <ETH0_IP><![CDATA[192.168.188.128]]></ETH0_IP>
      <ETH0_IP6><![CDATA[]]></ETH0_IP6>
      <ETH0_IP6_PREFIX_LENGTH><![CDATA[]]></ETH0_IP6_PREFIX_LENGTH>
      <ETH0_IP6_ULA><![CDATA[]]></ETH0_IP6_ULA>
      <ETH0_MAC><![CDATA[9e:d4:fd:fd:aa:00]]></ETH0_MAC>
      <ETH0_MASK><![CDATA[255.255.255.0]]></ETH0_MASK>
      <ETH0_MTU><![CDATA[]]></ETH0_MTU>
      <ETH0_NETWORK><![CDATA[192.168.188.0]]></ETH0_NETWORK>
      <ETH0_SEARCH_DOMAIN><![CDATA[]]></ETH0_SEARCH_DOMAIN>
      <ETH0_VLAN_ID><![CDATA[]]></ETH0_VLAN_ID>
      <ETH0_VROUTER_IP><![CDATA[]]></ETH0_VROUTER_IP>
      <ETH0_VROUTER_IP6><![CDATA[]]></ETH0_VROUTER_IP6>
      <ETH0_VROUTER_MANAGEMENT><![CDATA[]]></ETH0_VROUTER_MANAGEMENT>
      <NETWORK><![CDATA[YES]]></NETWORK>
      <PASSWORD><![CDATA[root]]></PASSWORD>
      <SSH_PUBLIC_KEY></SSH_PUBLIC_KEY>
      <TARGET><![CDATA[hda]]></TARGET>
    </CONTEXT>
    <CPU><![CDATA[2]]></CPU>
    <DISK>
      <ALLOW_ORPHANS><![CDATA[mixed]]></ALLOW_ORPHANS>
      <CEPH_HOST><![CDATA[192.168.188.1:6789 192.168.188.2:6789 192.168.188.3:6789]]></CEPH_HOST>
      <CEPH_SECRET><![CDATA[41d1754-d47f-4b41-968a-93t234600e]]></CEPH_SECRET>
      <CEPH_USER><![CDATA[libvirt]]></CEPH_USER>
      <CLONE><![CDATA[YES]]></CLONE>
      <CLONE_TARGET><![CDATA[SELF]]></CLONE_TARGET>
      <CLUSTER_ID><![CDATA[0]]></CLUSTER_ID>
      <DATASTORE><![CDATA[cephds]]></DATASTORE>
      <DATASTORE_ID><![CDATA[101]]></DATASTORE_ID>
      <DEV_PREFIX><![CDATA[vd]]></DEV_PREFIX>
      <DISK_ID><![CDATA[0]]></DISK_ID>
      <DISK_SNAPSHOT_TOTAL_SIZE><![CDATA[0]]></DISK_SNAPSHOT_TOTAL_SIZE>
      <DISK_TYPE><![CDATA[RBD]]></DISK_TYPE>
      <DRIVER><![CDATA[raw]]></DRIVER>
      <IMAGE><![CDATA[Ubuntu 18.04 - KVM]]></IMAGE>
      <IMAGE_ID><![CDATA[0]]></IMAGE_ID>
      <IMAGE_STATE><![CDATA[2]]></IMAGE_STATE>
      <LN_TARGET><![CDATA[NONE]]></LN_TARGET>
      <ORIGINAL_SIZE><![CDATA[2252]]></ORIGINAL_SIZE>
      <POOL_NAME><![CDATA[one-images]]></POOL_NAME>
      <READONLY><![CDATA[NO]]></READONLY>
      <SAVE><![CDATA[NO]]></SAVE>
      <SIZE><![CDATA[209920]]></SIZE>
      <SOURCE><![CDATA[one-images/one-0]]></SOURCE>
      <TARGET><![CDATA[vda]]></TARGET>
      <TM_MAD><![CDATA[ceph]]></TM_MAD>
      <TYPE><![CDATA[RBD]]></TYPE>
    </DISK>
    <GRAPHICS>
      <LISTEN><![CDATA[0.0.0.0]]></LISTEN>
      <PORT><![CDATA[5910]]></PORT>
      <TYPE><![CDATA[VNC]]></TYPE>
    </GRAPHICS>
    <MEMORY><![CDATA[8192]]></MEMORY>
    <NIC>
      <AR_ID><![CDATA[0]]></AR_ID>
      <BRIDGE><![CDATA[br0]]></BRIDGE>
      <BRIDGE_TYPE><![CDATA[linux]]></BRIDGE_TYPE>
      <CLUSTER_ID><![CDATA[0]]></CLUSTER_ID>
      <IP><![CDATA[192.168.188.128]]></IP>
      <MAC><![CDATA[9e:d4:fd:fd:aa:00]]></MAC>
      <MODEL><![CDATA[virtio]]></MODEL>
      <NAME><![CDATA[NIC0]]></NAME>
      <NETWORK><![CDATA[VirtualNetwork01]]></NETWORK>
      <NETWORK_ID><![CDATA[1]]></NETWORK_ID>
      <NIC_ID><![CDATA[0]]></NIC_ID>
      <SECURITY_GROUPS><![CDATA[0]]></SECURITY_GROUPS>
      <TARGET><![CDATA[one-10-0]]></TARGET>
      <VN_MAD><![CDATA[bridge]]></VN_MAD>
    </NIC>
    <NIC_DEFAULT>
      <MODEL><![CDATA[virtio]]></MODEL>
    </NIC_DEFAULT>
    <OS>
      <ARCH><![CDATA[x86_64]]></ARCH>
      <BOOT><![CDATA[]]></BOOT>
    </OS>
    <SECURITY_GROUP_RULE>
      <PROTOCOL><![CDATA[ALL]]></PROTOCOL>
      <RULE_TYPE><![CDATA[OUTBOUND]]></RULE_TYPE>
      <SECURITY_GROUP_ID><![CDATA[0]]></SECURITY_GROUP_ID>
      <SECURITY_GROUP_NAME><![CDATA[default]]></SECURITY_GROUP_NAME>
    </SECURITY_GROUP_RULE>
    <SECURITY_GROUP_RULE>
      <PROTOCOL><![CDATA[ALL]]></PROTOCOL>
      <RULE_TYPE><![CDATA[INBOUND]]></RULE_TYPE>
      <SECURITY_GROUP_ID><![CDATA[0]]></SECURITY_GROUP_ID>
      <SECURITY_GROUP_NAME><![CDATA[default]]></SECURITY_GROUP_NAME>
    </SECURITY_GROUP_RULE>
    <TEMPLATE_ID><![CDATA[0]]></TEMPLATE_ID>
    <TM_MAD_SYSTEM><![CDATA[ceph]]></TM_MAD_SYSTEM>
    <VMID><![CDATA[10]]></VMID>
  </TEMPLATE>
  <USER_TEMPLATE>
    <DESCRIPTION><![CDATA[Test with network]]></DESCRIPTION>
    <INPUTS_ORDER><![CDATA[]]></INPUTS_ORDER>
    <LOGO><![CDATA[images/logos/ubuntu.png]]></LOGO>
    <MEMORY_UNIT_COST><![CDATA[MB]]></MEMORY_UNIT_COST>
  </USER_TEMPLATE>
  <HISTORY_RECORDS>
    <HISTORY>
      <OID>10</OID>
      <SEQ>0</SEQ>
      <HOSTNAME>ubuntu-srv6</HOSTNAME>
      <HID>2</HID>
      <CID>0</CID>
      <STIME>1557120095</STIME>
      <ETIME>0</ETIME>
      <VM_MAD><![CDATA[kvm]]></VM_MAD>
      <TM_MAD><![CDATA[ceph]]></TM_MAD>
      <DS_ID>100</DS_ID>
      <PSTIME>1557120095</PSTIME>
      <PETIME>1557120097</PETIME>
      <RSTIME>1557120097</RSTIME>
      <RETIME>0</RETIME>
      <ESTIME>0</ESTIME>
      <EETIME>0</EETIME>
      <ACTION>0</ACTION>
      <UID>-1</UID>
      <GID>-1</GID>
      <REQUEST_ID>-1</REQUEST_ID>
    </HISTORY>
  </HISTORY_RECORDS>
</VM>

<IMAGE>
  <ID>0</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>Ubuntu 18.04 - KVM</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>0</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <TYPE>0</TYPE>
  <DISK_TYPE>3</DISK_TYPE>
  <PERSISTENT>0</PERSISTENT>
  <REGTIME>1556019449</REGTIME>
  <SOURCE><![CDATA[one-images/one-0]]></SOURCE>
  <PATH><![CDATA[https://marketplace.opennebula.systems//appliance/ca5c3632-359a-429c-ac5b-b86178ee2390/download/0]]></PATH>
  <FSTYPE><![CDATA[]]></FSTYPE>
  <SIZE>2252</SIZE>
  <STATE>2</STATE>
  <RUNNING_VMS>3</RUNNING_VMS>
  <CLONING_OPS>0</CLONING_OPS>
  <CLONING_ID>-1</CLONING_ID>
  <TARGET_SNAPSHOT>-1</TARGET_SNAPSHOT>
  <DATASTORE_ID>101</DATASTORE_ID>
  <DATASTORE>cephds</DATASTORE>
  <VMS>
    <ID>8</ID>
    <ID>10</ID>
    <ID>12</ID>
  </VMS>
  <CLONES/>
  <APP_CLONES/>
  <TEMPLATE>
    <DEV_PREFIX><![CDATA[vd]]></DEV_PREFIX>
    <FORMAT><![CDATA[qcow2]]></FORMAT>
    <FROM_APP><![CDATA[14]]></FROM_APP>
    <FROM_APP_MD5><![CDATA[fdfsdfsd1341524525]]></FROM_APP_MD5>
    <FROM_APP_NAME><![CDATA[Ubuntu 18.04 - KVM]]></FROM_APP_NAME>
  </TEMPLATE>
  <SNAPSHOTS>
    <ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS>
    <CURRENT_BASE><![CDATA[-1]]></CURRENT_BASE>
    <NEXT_SNAPSHOT><![CDATA[0]]></NEXT_SNAPSHOT>
  </SNAPSHOTS>
</IMAGE>

(Alejandro Huertas) #7

Hello @IowaOrganics

Could you please send me the output of onedatastore list -x?

And also the full output of ceph df detail --format xml?

(Alejandro Huertas) #8

Hi @way

As far as I can see, the VM is running on system datastore 100; this is the one you selected, right?

(Dmitry) #9

Yes, I selected datastore 100. So the history_records tell us where the VM is located?

But my datastore 100 shows 0 GB used out of 1.5 TB available; how is that possible?

(Alejandro Huertas) #10

Yes, the history records give you the HOST and the DATASTORE.

Maybe it is because the size is too small and Sunstone rounds it; could you please check it in your Ceph pool?

(Dmitry) #11

Sure; the last two pools are the relevant ones:

rados df
warning: line 10: 'mon_pg_warn_max_per_osd' in section 'global' redefined 
warning: line 12: 'mon_max_pg_per_osd' in section 'global' redefined 
POOL_NAME              USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED  RD_OPS      RD  WR_OPS     WR 
.rgw.root           2.2 KiB       6      0     18                  0       0        0      78  52 KiB       6  6 KiB 
cephfs_data             0 B       0      0      0                  0       0        0       0     0 B       0    0 B 
cephfs_metadata     2.5 KiB      22      0     44                  0       0        0       0     0 B      47 15 KiB 
default.rgw.control     0 B       8      0     24                  0       0        0       0     0 B       0    0 B 
default.rgw.log         0 B     207      0    621                  0       0        0 2469099 2.4 GiB 1645106    0 B 
default.rgw.meta        0 B       0      0      0                  0       0        0       0     0 B       0    0 B 
one                 4.1 KiB       5      0     15                  0       0        0     163 132 KiB      19 16 KiB 
one-images          6.7 GiB    1878      0   5634                  0       0        0  280603 4.5 GiB   88992 13 GiB 

total_objects    2126
total_used       434 GiB
total_avail      4.7 TiB
total_space      5.2 TiB

(Alejandro Huertas) #12

As you can see, the one pool has only 4.1 KiB used; that’s why you see 0 in Sunstone: it’s rounded.

Could you please check the size in the CLI using onedatastore show 100 -x?
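To see why 4.1 KiB shows up as 0, note that the datastore XML reports usage in whole megabytes (the `<USED_MB>` field shown below), so anything under 1 MB rounds away. A minimal sketch of that rounding:

```python
# Why 4.1 KiB of pool usage shows as USED_MB = 0: datastore monitoring
# reports sizes as integer megabytes, so anything under 1 MB rounds to 0.
used_bytes = int(4.1 * 1024)           # 4.1 KiB used in the `one` pool
used_mb = used_bytes // (1024 * 1024)  # integer MB, as in <USED_MB>
print(used_mb)  # 0
```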

(Dmitry) #13

Yes, but I was expecting a bigger size:

<DATASTORE>
  <ID>100</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>ceph_system</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD><![CDATA[-]]></DS_MAD>
  <TM_MAD><![CDATA[ceph]]></TM_MAD>
  <BASE_PATH><![CDATA[/var/lib/one//datastores/100]]></BASE_PATH>
  <TYPE>1</TYPE>
  <DISK_TYPE>0</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>1542303</TOTAL_MB>
  <FREE_MB>1542303</FREE_MB>
  <USED_MB>0</USED_MB>
  <IMAGES/>
  <TEMPLATE>
    <ALLOW_ORPHANS><![CDATA[mixed]]></ALLOW_ORPHANS>
    <BRIDGE_LIST><![CDATA[192.168.188.6]]></BRIDGE_LIST>
    <CEPH_HOST><![CDATA[192.168.188.1:6789 192.168.188.2:6789 192.168.188.3:6789]]></CEPH_HOST>
    <CEPH_SECRET><![CDATA[41894754-d47f-4bd6-968a-93c26cf7800e]]></CEPH_SECRET>
    <CEPH_USER><![CDATA[libvirt]]></CEPH_USER>
    <DISK_TYPE><![CDATA[FILE]]></DISK_TYPE>
    <DS_MIGRATE><![CDATA[NO]]></DS_MIGRATE>
    <POOL_NAME><![CDATA[one]]></POOL_NAME>
    <RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS>
    <SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS>
    <SHARED><![CDATA[YES]]></SHARED>
    <TM_MAD><![CDATA[ceph]]></TM_MAD>
    <TYPE><![CDATA[SYSTEM_DS]]></TYPE>
  </TEMPLATE>
</DATASTORE>

(Spencer) #14

onedatastore list -x

<DATASTORE_POOL>
<DATASTORE>
  <ID>104</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>Ceph-System</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD><![CDATA[-]]></DS_MAD>
  <TM_MAD><![CDATA[ceph]]></TM_MAD>
  <BASE_PATH><![CDATA[/var/lib/one//datastores/104]]></BASE_PATH>
  <TYPE>1</TYPE>
  <DISK_TYPE>3</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>5422098</TOTAL_MB>
  <FREE_MB>5422098</FREE_MB>
  <USED_MB>0</USED_MB>
  <IMAGES/>
  <TEMPLATE>
    <ALLOW_ORPHANS><![CDATA[mixed]]></ALLOW_ORPHANS>
    <BRIDGE_LIST><![CDATA[front1.ecstest.com front2.ecstest.com front3.ecstest.com]]></BRIDGE_LIST>
    <CEPH_HOST><![CDATA[cephmon1.ecstest.com cephmon2.ecstest.com cephmon3.ecstest.com]]></CEPH_HOST>
    <CEPH_SECRET><![CDATA[0102c5b1-39ed-4b33-a559-d50853f6725e]]></CEPH_SECRET>
    <CEPH_USER><![CDATA[spencer]]></CEPH_USER>
    <DATASTORE_CAPACITY_CHECK><![CDATA[YES]]></DATASTORE_CAPACITY_CHECK>
    <DISK_TYPE><![CDATA[RBD]]></DISK_TYPE>
    <DS_MIGRATE><![CDATA[NO]]></DS_MIGRATE>
    <POOL_NAME><![CDATA[hddpool]]></POOL_NAME>
    <RBD_FORMAT><![CDATA[2]]></RBD_FORMAT>
    <RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS>
    <SAFE_DIRS><![CDATA[/var/tmp sunshare /var/lib/one/sunshare]]></SAFE_DIRS>
    <SHARED><![CDATA[YES]]></SHARED>
    <TM_MAD><![CDATA[ceph]]></TM_MAD>
    <TYPE><![CDATA[SYSTEM_DS]]></TYPE>
  </TEMPLATE>
</DATASTORE>
<DATASTORE>
  <ID>103</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>Ceph-Images</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>1</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD><![CDATA[ceph]]></DS_MAD>
  <TM_MAD><![CDATA[ceph]]></TM_MAD>
  <BASE_PATH><![CDATA[/var/lib/one//datastores/103]]></BASE_PATH>
  <TYPE>0</TYPE>
  <DISK_TYPE>3</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>5422098</TOTAL_MB>
  <FREE_MB>5422098</FREE_MB>
  <USED_MB>0</USED_MB>
  <IMAGES>
    <ID>5</ID>
    <ID>6</ID>
  </IMAGES>
  <TEMPLATE>
    <ALLOW_ORPHANS><![CDATA[mixed]]></ALLOW_ORPHANS>
    <BRIDGE_LIST><![CDATA[front1.ecstest.com front2.ecstest.com front3.ecstest.com]]></BRIDGE_LIST>
    <CEPH_HOST><![CDATA[cephmon1.ecstest.com cephmon2.ecstest.com cephmon3.ecstest.com]]></CEPH_HOST>
    <CEPH_SECRET><![CDATA[0102c5b1-39ed-4b33-a559-d50853f6725e]]></CEPH_SECRET>
    <CEPH_USER><![CDATA[spencer]]></CEPH_USER>
    <CLONE_TARGET><![CDATA[SELF]]></CLONE_TARGET>
    <CLONE_TARGET_SHARED><![CDATA[SELF]]></CLONE_TARGET_SHARED>
    <CLONE_TARGET_SSH><![CDATA[SYSTEM]]></CLONE_TARGET_SSH>
    <DISK_TYPE><![CDATA[RBD]]></DISK_TYPE>
    <DISK_TYPE_SHARED><![CDATA[RBD]]></DISK_TYPE_SHARED>
    <DISK_TYPE_SSH><![CDATA[FILE]]></DISK_TYPE_SSH>
    <DRIVER><![CDATA[raw]]></DRIVER>
    <DS_MAD><![CDATA[ceph]]></DS_MAD>
    <LN_TARGET><![CDATA[NONE]]></LN_TARGET>
    <LN_TARGET_SHARED><![CDATA[NONE]]></LN_TARGET_SHARED>
    <LN_TARGET_SSH><![CDATA[SYSTEM]]></LN_TARGET_SSH>
    <POOL_NAME><![CDATA[hddpool]]></POOL_NAME>
    <RBD_FORMAT><![CDATA[2]]></RBD_FORMAT>
    <RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS>
    <SAFE_DIRS><![CDATA[/var/tmp sunshare /var/lib/one/sunshare]]></SAFE_DIRS>
    <TM_MAD><![CDATA[ceph]]></TM_MAD>
    <TM_MAD_SYSTEM><![CDATA[ssh,shared]]></TM_MAD_SYSTEM>
    <TYPE><![CDATA[IMAGE_DS]]></TYPE>
  </TEMPLATE>
</DATASTORE>
</DATASTORE_POOL>

ceph df detail --format xml

<stats><stats><total_bytes>23764741914624</total_bytes><total_used_bytes>70918307840</total_used_bytes><total_avail_bytes>23693823606784</total_avail_bytes><total_objects>3830</total_objects></stats><pools><pool><name>.rgw.root</name><id>1</id><stats><kb_used>2</kb_used><bytes_used>1165</bytes_used><percent_used>1.56068e-10</percent_used><max_avail>7464698773504</max_avail><objects>4</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>4</dirty><rd>0</rd><rd_bytes>0</rd_bytes><wr>4</wr><wr_bytes>4096</wr_bytes><raw_bytes_used>3495</raw_bytes_used></stats></pool><pool><name>default.rgw.control</name><id>2</id><stats><kb_used>0</kb_used><bytes_used>0</bytes_used><percent_used>0</percent_used><max_avail>7464698773504</max_avail><objects>8</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>8</dirty><rd>0</rd><rd_bytes>0</rd_bytes><wr>0</wr><wr_bytes>0</wr_bytes><raw_bytes_used>0</raw_bytes_used></stats></pool><pool><name>default.rgw.meta</name><id>3</id><stats><kb_used>0</kb_used><bytes_used>0</bytes_used><percent_used>0</percent_used><max_avail>7464698773504</max_avail><objects>0</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>0</dirty><rd>0</rd><rd_bytes>0</rd_bytes><wr>0</wr><wr_bytes>0</wr_bytes><raw_bytes_used>0</raw_bytes_used></stats></pool><pool><name>default.rgw.log</name><id>4</id><stats><kb_used>0</kb_used><bytes_used>0</bytes_used><percent_used>0</percent_used><max_avail>7464698773504</max_avail><objects>207</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>207</dirty><rd>4480762</rd><rd_bytes>4588088320</rd_bytes><wr>2985698</wr><wr_bytes>0</wr_bytes><raw_bytes_used>0</raw_bytes_used></stats></pool><pool><name>hddpool</name><id>13</id><stats><kb_used>0</kb_used><bytes_used>0</bytes_used><percent_used>0</percent_used><max_avail>5685466628096</max_avail><objects>0</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>0</dirty><rd>0</rd><rd_bytes>0</rd_bytes><wr>0</wr><wr_bytes>0</wr_bytes><raw_bytes_used>0</raw_bytes_used></stats></pool><pool><name>cachepool</name><id>14</id><stats><kb_used>11269362</kb_used><bytes_used>11539825800</bytes_used><percent_used>0.00633662</percent_used><max_avail>1809593139200</max_avail><objects>3611</objects><quota_objects>0</quota_objects><quota_bytes>0</quota_bytes><dirty>3391</dirty><rd>124540</rd><rd_bytes>4359417856</rd_bytes><wr>29570</wr><wr_bytes>13738208256</wr_bytes><raw_bytes_used>34619478016</raw_bytes_used></stats></pool></pools></stats>

I have cache tiering set up, with cachepool in front of hddpool.

I configured the system and image datastores with defaults and RBD format 2.

(Daniel Clavijo Coca) #15

@way, it seems you have two Ceph pools for OpenNebula, one and one-images. Can you verify that you are using them for datastores 100 and 101, respectively? If that is the case, then your monitoring data looks right according to your rados df output.

Or are you expecting the images to be placed in the one pool?

(Daniel Clavijo Coca) #16

Take a look at my setup.

Both the Ceph image and system datastores are using the same pool:

root@ubuntu1804-lxd-ceph-luminous-7dbb9-0:~# onedatastore show default -x
<DATASTORE>
  <ID>1</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>default</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>1</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <DS_MAD><![CDATA[ceph]]></DS_MAD>
  <TM_MAD><![CDATA[ceph]]></TM_MAD>
  <BASE_PATH><![CDATA[/var/lib/one//datastores/1]]></BASE_PATH>
  <TYPE>0</TYPE>
  <DISK_TYPE>3</DISK_TYPE>
  <STATE>0</STATE>
  <CLUSTERS>
    <ID>0</ID>
  </CLUSTERS>
  <TOTAL_MB>37792</TOTAL_MB>
  <FREE_MB>37635</FREE_MB>
  <USED_MB>157</USED_MB>
  <IMAGES>
    <ID>11</ID>
  </IMAGES>
  <TEMPLATE>
    <ALLOW_ORPHANS><![CDATA[mixed]]></ALLOW_ORPHANS>
    <BRIDGE_LIST><![CDATA[ubuntu1804-lxd-ceph-luminous-7dbb9-1.test ubuntu1804-lxd-ceph-luminous-7dbb9-2.test]]></BRIDGE_LIST>
    <CEPH_HOST><![CDATA[ubuntu1804-lxd-ceph-luminous-7dbb9-0.test]]></CEPH_HOST>
    <CEPH_SECRET><![CDATA[7ebb2445-e96e-44c6-b7c7-07dc7a50f311]]></CEPH_SECRET>
    <CEPH_USER><![CDATA[oneadmin]]></CEPH_USER>
    <CLONE_TARGET><![CDATA[SELF]]></CLONE_TARGET>
    <CLONE_TARGET_SHARED><![CDATA[SELF]]></CLONE_TARGET_SHARED>
    <CLONE_TARGET_SSH><![CDATA[SYSTEM]]></CLONE_TARGET_SSH>
    <DISK_TYPE><![CDATA[RBD]]></DISK_TYPE>
    <DISK_TYPE_SHARED><![CDATA[RBD]]></DISK_TYPE_SHARED>
    <DISK_TYPE_SSH><![CDATA[FILE]]></DISK_TYPE_SSH>
    <DRIVER><![CDATA[raw]]></DRIVER>
    <DS_MAD><![CDATA[ceph]]></DS_MAD>
    <LN_TARGET><![CDATA[NONE]]></LN_TARGET>
    <LN_TARGET_SHARED><![CDATA[NONE]]></LN_TARGET_SHARED>
    <LN_TARGET_SSH><![CDATA[SYSTEM]]></LN_TARGET_SSH>
    <POOL_NAME><![CDATA[one]]></POOL_NAME>
    <RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS>
    <SAFE_DIRS><![CDATA[/var/tmp /tmp]]></SAFE_DIRS>
    <TM_MAD><![CDATA[ceph]]></TM_MAD>
    <TM_MAD_SYSTEM><![CDATA[ssh,shared]]></TM_MAD_SYSTEM>
    <TYPE><![CDATA[IMAGE_DS]]></TYPE>
  </TEMPLATE>
</DATASTORE>
root@ubuntu1804-lxd-ceph-luminous-7dbb9-0:~# onedatastore show default
DATASTORE 1 INFORMATION                                                         
ID             : 1                   
NAME           : default             
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : IMAGE               
DS_MAD         : ceph                
TM_MAD         : ceph                
BASE PATH      : /var/lib/one//datastores/1
DISK_TYPE      : RBD                 
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 36.9G               
FREE:          : 36.8G               
USED:          : 157M                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="mixed"
BRIDGE_LIST="ubuntu1804-lxd-ceph-luminous-7dbb9-1.test ubuntu1804-lxd-ceph-luminous-7dbb9-2.test"
CEPH_HOST="ubuntu1804-lxd-ceph-luminous-7dbb9-0.test"
CEPH_SECRET="7ebb2445-e96e-44c6-b7c7-07dc7a50f311"
CEPH_USER="oneadmin"
CLONE_TARGET="SELF"
CLONE_TARGET_SHARED="SELF"
CLONE_TARGET_SSH="SYSTEM"
DISK_TYPE="RBD"
DISK_TYPE_SHARED="RBD"
DISK_TYPE_SSH="FILE"
DRIVER="raw"
DS_MAD="ceph"
LN_TARGET="NONE"
LN_TARGET_SHARED="NONE"
LN_TARGET_SSH="SYSTEM"
POOL_NAME="one"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp /tmp"
TM_MAD="ceph"
TM_MAD_SYSTEM="ssh,shared"
TYPE="IMAGE_DS"

IMAGES         
11             
root@ubuntu1804-lxd-ceph-luminous-7dbb9-0:~# onedatastore show system
DATASTORE 0 INFORMATION                                                         
ID             : 0                   
NAME           : system              
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : SYSTEM              
DS_MAD         : -                   
TM_MAD         : ceph                
BASE PATH      : /var/lib/one//datastores/0
DISK_TYPE      : RBD                 
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 36.9G               
FREE:          : 36.8G               
USED:          : 157M                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="mixed"
BRIDGE_LIST="ubuntu1804-lxd-ceph-luminous-7dbb9-1.test ubuntu1804-lxd-ceph-luminous-7dbb9-2.test"
CEPH_HOST="ubuntu1804-lxd-ceph-luminous-7dbb9-0.test"
CEPH_SECRET="7ebb2445-e96e-44c6-b7c7-07dc7a50f311"
CEPH_USER="oneadmin"
DISK_TYPE="RBD"
DS_MIGRATE="NO"
POOL_NAME="one"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="ceph"
TYPE="SYSTEM_DS"

IMAGES

(Dmitry) #17

Sure, onedatastore show 100 -x and onedatastore show 101 -x confirm that. And I had heard that best practice is one pool for images and another one for system, so yes, that is what I was expecting.

(Daniel Clavijo Coca) #18

The Ceph storage setup for OpenNebula states that the system and image datastores should share the same set of certain attributes, including the pool name. Cloning an image from one pool to another would use additional space and take extra time. With that in mind, persistent images are used directly from the image datastore where they are saved, and non-persistent images are snapshotted in the same pool, and that’s it.
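In other words, the two datastore templates end up sharing the Ceph attributes and the pool, along the lines of this hypothetical minimal pair (names, monitor hosts, and the secret UUID are placeholders, in the same template syntax as the outputs above):

```
# Image datastore
NAME        = "ceph_images"
TYPE        = "IMAGE_DS"
DS_MAD      = "ceph"
TM_MAD      = "ceph"
DISK_TYPE   = "RBD"
POOL_NAME   = "one"          # same pool as the system datastore
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789"
CEPH_USER   = "libvirt"
CEPH_SECRET = "<libvirt-secret-uuid>"

# System datastore
NAME        = "ceph_system"
TYPE        = "SYSTEM_DS"
TM_MAD      = "ceph"
DISK_TYPE   = "RBD"
POOL_NAME   = "one"          # must match the image datastore's pool
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789"
CEPH_USER   = "libvirt"
CEPH_SECRET = "<libvirt-secret-uuid>"
```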