Since yesterday, our hosts have been showing system datastore 0 twice, both in the dashboard and in the CLI:
$ onehost show 7
...
LOCAL SYSTEM DATASTORE #0 CAPACITY
TOTAL: 2.1T
USED: 1003G
FREE: 1.1T
LOCAL SYSTEM DATASTORE #0 CAPACITY
TOTAL: 2.1T
USED: 250.1G
FREE: 1.8T
...
How can I reset this data without editing the database by hand?
Versions of the related components and OS (frontend, hypervisors, VMs):
OpenNebula 5.6.0
KVM nodes on 5.6.0
Steps to reproduce:
I don’t really know…
Current results:
Two datastore entries are shown; only the first one is correct.
Expected results:
$ onehost show 7
...
LOCAL SYSTEM DATASTORE #0 CAPACITY
TOTAL: 2.1T
USED: 1003G
FREE: 1.1T
...
juanmont
(Juan Jose Montiel Cano)
August 16, 2018, 8:57am
Can you execute the following command and paste the result?
onedb show-body host --id 7
Here is the result:
# onedb show-body host --id 7
<HOST>
<ID>7</ID>
<NAME>XXXXXXXXXXXXXXXXXXXXXXXXX</NAME>
<STATE>2</STATE>
<IM_MAD><![CDATA[kvm]]></IM_MAD>
<VM_MAD><![CDATA[kvm]]></VM_MAD>
<LAST_MON_TIME>1534434070</LAST_MON_TIME>
<CLUSTER_ID>103</CLUSTER_ID>
<CLUSTER>XXXXXXXXXXXXXXXXXXXXXXXXX</CLUSTER>
<HOST_SHARE>
<DISK_USAGE>0</DISK_USAGE>
<MEM_USAGE>244318208</MEM_USAGE>
<CPU_USAGE>7800</CPU_USAGE>
<TOTAL_MEM>264029024</TOTAL_MEM>
<TOTAL_CPU>4000</TOTAL_CPU>
<MAX_DISK>2231535</MAX_DISK>
<MAX_MEM>264029024</MAX_MEM>
<MAX_CPU>8000</MAX_CPU>
<FREE_DISK>979261</FREE_DISK>
<FREE_MEM>167208864</FREE_MEM>
<FREE_CPU>3520</FREE_CPU>
<USED_DISK>1029120</USED_DISK>
<USED_MEM>96820160</USED_MEM>
<USED_CPU>480</USED_CPU>
<RUNNING_VMS>37</RUNNING_VMS>
<DATASTORES>
<DS>
<FREE_MB><![CDATA[979261.5]]></FREE_MB>
<ID><![CDATA[0]]></ID>
<TOTAL_MB><![CDATA[2231535]]></TOTAL_MB>
<USED_MB><![CDATA[1029120]]></USED_MB>
</DS>
<DS>
<FREE_MB><![CDATA[1812617]]></FREE_MB>
<ID><![CDATA[0]]></ID>
<TOTAL_MB><![CDATA[2231535]]></TOTAL_MB>
<USED_MB><![CDATA[305495]]></USED_MB>
</DS>
</DATASTORES>
<PCI_DEVICES/>
</HOST_SHARE>
<VMS>
<ID>153</ID>
<ID>183</ID>
<ID>186</ID>
<ID>191</ID>
<ID>197</ID>
<ID>202</ID>
<ID>210</ID>
<ID>215</ID>
<ID>217</ID>
<ID>222</ID>
<ID>223</ID>
<ID>225</ID>
<ID>228</ID>
<ID>231</ID>
<ID>243</ID>
<ID>261</ID>
<ID>274</ID>
<ID>275</ID>
<ID>276</ID>
<ID>277</ID>
<ID>283</ID>
<ID>289</ID>
<ID>293</ID>
<ID>295</ID>
<ID>323</ID>
<ID>330</ID>
<ID>331</ID>
<ID>334</ID>
<ID>336</ID>
<ID>352</ID>
<ID>358</ID>
<ID>359</ID>
<ID>360</ID>
<ID>367</ID>
<ID>369</ID>
<ID>392</ID>
<ID>399</ID>
</VMS>
<TEMPLATE>
<ARCH><![CDATA[x86_64]]></ARCH>
<CPUSPEED><![CDATA[2200]]></CPUSPEED>
<HOSTNAME><![CDATA[XXXXXXXXX]]></HOSTNAME>
<HYPERVISOR><![CDATA[kvm]]></HYPERVISOR>
<IM_MAD><![CDATA[kvm]]></IM_MAD>
<KVM_CPU_MODELS><![CDATA[486 pentium pentium2 pentium3 pentiumpro coreduo n270 core2duo qemu32 kvm32 cpu64-rhel5 cpu64-rhel6 kvm64 qemu64 Conroe Penryn Nehalem Westmere SandyBridge IvyBridge Haswell-noTSX Haswell Broadwell-noTSX Broadwell Skylake-Client athlon phenom Opteron_G1 Opteron_G2 Opteron_G3 Opteron_G4 Opteron_G5]]></KVM_CPU_MODELS>
<KVM_MACHINES><![CDATA[pc-i440fx-2.8 pc pc-0.12 pc-i440fx-2.4 pc-1.3 pc-q35-2.7 pc-q35-2.6 xenpv pc-i440fx-1.7 pc-i440fx-1.6 pc-i440fx-2.7 pc-0.11 pc-i440fx-2.3 pc-0.10 pc-1.2 pc-i440fx-2.2 isapc pc-q35-2.5 xenfv pc-0.15 pc-0.14 pc-i440fx-1.5 pc-i440fx-2.6 pc-i440fx-1.4 pc-i440fx-2.5 pc-1.1 pc-i440fx-2.1 pc-q35-2.8 q35 pc-1.0 pc-i440fx-2.0 pc-q35-2.4 pc-0.13]]></KVM_MACHINES>
<MODELNAME><![CDATA[Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz]]></MODELNAME>
<NETRX><![CDATA[3716438378673]]></NETRX>
<NETTX><![CDATA[4271736132102]]></NETTX>
<RESERVED_CPU><![CDATA[-4000]]></RESERVED_CPU>
<RESERVED_MEM><![CDATA[-2392.800000011921]]></RESERVED_MEM>
<VERSION><![CDATA[5.6.0]]></VERSION>
<VM_MAD><![CDATA[kvm]]></VM_MAD>
</TEMPLATE>
</HOST>
Do you think the only way is to use update-body?
juanmont
(Juan Jose Montiel Cano)
August 17, 2018, 11:20am
Can you remove the wrong DS from the DATASTORES section with update-body
and check whether OpenNebula recreates it?
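For reference, the edit can be prepared mechanically before pasting it back. A minimal sketch (file names are hypothetical; take a database backup with `onedb backup` first): save the body printed by `onedb show-body host --id 7`, drop the second `<DS>…</DS>` block, then paste the cleaned XML back via `onedb update-body host --id 7`.

```shell
# Sample excerpt of the host body, with datastore 0 listed twice
cat > host7.xml <<'EOF'
<DATASTORES>
<DS>
<ID><![CDATA[0]]></ID>
<USED_MB><![CDATA[1029120]]></USED_MB>
</DS>
<DS>
<ID><![CDATA[0]]></ID>
<USED_MB><![CDATA[305495]]></USED_MB>
</DS>
</DATASTORES>
EOF

# Count <DS> opening tags; skip every line of the second block, keep the rest
awk '/<DS>/{n++} n!=2{print} /<\/DS>/ && n==2{n=3}' host7.xml > host7.fixed.xml

grep -c '<DS>' host7.fixed.xml   # 1
```

The fixed file keeps only the first `<DS>` entry; its contents can then be pasted into the editor that `onedb update-body host --id 7` opens.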
I have found the origin of the issue: a backup of the monitor_ds.sh script, named monitor_ds.sh.old, had been synced to the KVM nodes (onehost sync) and was being executed on every monitoring cycle, reporting the datastore a second time.
The solution was to remove the stray script on the OpenNebula front-end and also on every KVM node.
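This happens because the probe directory is synced verbatim to every node and every executable file in it is run by the monitoring driver, so an editor or backup leftover becomes an extra probe. A minimal sketch of the cleanup, run here on a scratch directory (the real KVM probe path, /var/lib/one/remotes/im/kvm-probes.d, is an assumption about your layout):

```shell
# Simulate the probe directory on a scratch copy; on a real front-end this
# would be /var/lib/one/remotes/im/kvm-probes.d (path assumed)
PROBES=$(mktemp -d)
touch "$PROBES/monitor_ds.sh" "$PROBES/monitor_ds.sh.old"

# Delete backup leftovers that the monitoring driver would otherwise execute
find "$PROBES" -name '*.old' -delete

ls "$PROBES"   # only monitor_ds.sh remains
# On the real front-end, finish by pushing the cleaned directory to all nodes:
#   onehost sync --force
```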