Incorrect Ceph datastore capacity in Sunstone

Hi there,
After many searches I can't figure out why our ONE (5.6.1) GUI is displaying a different capacity from what "ceph df" reports in Ceph (13.2.5):
[screenshot: DS2 datastore capacity in Sunstone]

Ceph is set with 2 replicas, so shouldn't the max size in Sunstone be 19 TB (not 15.9 TB)?
BR,
N.Alexandrov

I believe this is probably related to the "MAX AVAIL" value of the "one" pool. The capacity shown is what you can actually use in that pool, not what the Ceph cluster as a whole has available. To increase MAX AVAIL, make sure your OSDs are balanced; take a look at the "ceph osd reweight-by-utilization" command.
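
For illustration, a minimal set of commands along these lines (the 110 threshold below is just an example value, not something from this thread):

# per-pool MAX AVAIL as reported by Ceph (the figure Sunstone ends up showing)
ceph df detail

# per-OSD utilization; a large spread between OSDs drags MAX AVAIL down
ceph osd df tree

# reweight OSDs above 110% of average utilization (dry run: ceph osd test-reweight-by-utilization)
ceph osd reweight-by-utilization 110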

  • Kenny

Hi Nalexandrov,

I have a question which is not directly related to your post, but I hope you can help. We are trying to integrate Ceph with OpenNebula and already have a working Ceph cluster. But when we add the Ceph datastore on the front-end, it shows no size for the Ceph pool, and oned.log shows it is unable to monitor the DS with the following trace:
"Command execution failed (exit code: 255): /var/lib/one/remotes/datastore/ceph/monitor PERTX0RSSVZFU… (a big hash string).

Please note that we are able to access the Ceph pool using the same user credentials that we configured on the front-end and in the Ceph DS.

Regards
Foysal

Hi,
What versions of ONE and Ceph are you running?
I've got these lines in the datastore configuration section of /etc/one/oned.conf:
DATASTORE_LOCATION = /var/lib/one/datastores
DATASTORE_CAPACITY_CHECK = "yes"
Here are the docs for ONE 5.6:
http://docs.opennebula.org/5.6/deployment/open_cloud_storage_setup/ceph_ds.html
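
For reference, a minimal Ceph image datastore template along the lines of what those docs describe (every value below is a placeholder, not taken from this thread):

# all values are placeholders; see the Ceph datastore docs linked above
NAME        = ceph_images
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789"
CEPH_USER   = libvirt
CEPH_SECRET = "<uuid-of-the-libvirt-secret>"
BRIDGE_LIST = "your-frontend-hostname"

You would then create it with something like: su oneadmin -c 'onedatastore create ceph_ds.txt' (where ceph_ds.txt contains the template above).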

The hash string is part of the command that failed to execute. I recommend copying the entire command, starting with the file path and going through the end of the hash (including the trailing numbers at the end, I think), pasting it into a shell and pressing Enter. You will then see the actual error that the command throws.
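
As a concrete sketch (the angle-bracket placeholders stand for whatever the full, untruncated line in your oned.log contains):

su - oneadmin -c '/var/lib/one/remotes/datastore/ceph/monitor <FULL_BASE64_STRING_FROM_LOG> <TRAILING_NUMBERS_FROM_LOG>'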

Thanks for your suggestion. We are getting the error below after running the command manually.

ERROR: monitor: Command "ceph --id libvirt --keyfile /etc/ceph/ceph.client.libvirt.keyring df detail --format xml" failed:

It appears your user is failing to authenticate.
Are you able to execute the following command from your frontend?

su oneadmin -c 'rbd ls -p yourpoolname'

If not, please step through the Ceph Datastore guide in the OpenNebula 5.8.5 documentation.

Make sure your Ceph keys in /var/lib/one are owned by oneadmin.
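
A quick sketch of that check; the key path below is hypothetical, and the ceph command is the one from your error message:

# adjust the path to wherever your key actually lives
ls -l /var/lib/one/ceph.client.libvirt.keyring
chown oneadmin:oneadmin /var/lib/one/ceph.client.libvirt.keyring

# then retry the exact command from the error as oneadmin
su oneadmin -c 'ceph --id libvirt --keyfile /etc/ceph/ceph.client.libvirt.keyring df detail --format xml'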

Hello,
I have the same problem.
A long time ago I added some extra disks to the Ceph cluster and the cluster size was extended, but the "TOTAL" value inside OpenNebula was not updated, or, more likely, the information is misleading.

USED is taken from Ceph's "USED" value, but OpenNebula's TOTAL comes from Ceph's "USED" plus "MAX AVAIL".
OpenNebula should use the value from "STORED" instead of the one from "USED" (see the worked numbers after the ceph df output below).

[oneadmin@OpenNebula ~]$ onedatastore show 999
DATASTORE CAPACITY
TOTAL : 47.6T
FREE  : 8.5T
USED  : 39.1T
LIMIT : -

[oneadmin@OpenNebula ~]$ ceph --user libvirt df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    hdd       71 TiB     32 TiB     39 TiB     39 TiB           55.30
    TOTAL     71 TiB     32 TiB     39 TiB     39 TiB           55.30

POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    .rgw.root                1     3.2 KiB           8     1.5 MiB         0       8.5 TiB
    default.rgw.control      2         0 B           8         0 B         0       8.5 TiB
    default.rgw.meta         3         0 B           0         0 B         0       8.5 TiB
    default.rgw.log          4         0 B         207         0 B         0       8.5 TiB
    one                      5      13 TiB       4.02M      39 TiB      60.53      8.5 TiB
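
To put numbers on that (using USED, STORED and MAX AVAIL from the outputs above):

TOTAL reported by OpenNebula = USED + MAX AVAIL = 39.1T + 8.5T = 47.6T
If STORED were used instead: STORED + MAX AVAIL = 13 TiB + 8.5 TiB ≈ 21.5 TiB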

OpenNebula 5.8.5
ceph version 14.2.4 nautilus (stable)