Daniel Dehennin via OpenNebula Community email@example.com
We are planning a new OpenNebula cluster and want to avoid the SAN problem we described at OpenNebulaConf 2016.
We are thinking about LizardFS (thanks NodeWeaver), but wonder a little what kind of setup to use:
- Today we have 4TB of qcow2 storage (25TB uncompressed, according to OpenNebula) on the SAN, and we would like to do some kind of hot/warm/cold tiering to limit the price. Is that reasonable to do?
It looks like I need dedicated chunkservers with a proper label
(like “cold” or “slow”) and to “attach” some datastores to that label using
custom goals.
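If I understand the LizardFS docs correctly, the moving parts would look something like this (the label `cold`, the goal name and the datastore path are just placeholders I made up):

```
# /etc/mfs/mfschunkserver.cfg on the spinning-disk chunkservers
LABEL = cold

# /etc/mfs/mfsgoals.cfg on the master: goal 10 keeps two copies,
# both on chunkservers labelled "cold"
10 cold2 : cold cold

# then pin the OpenNebula datastore directory to that goal:
#   lizardfs setgoal -r cold2 /mnt/lizardfs/cold-datastore
```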
I made some tests running more than one chunkserver on a physical
machine, with success, so I could have one chunkserver for SSDs and one
chunkserver for HDDs per physical machine, avoiding dedicated
hardware for the spinning disks.
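A sketch of what I tried, assuming two config files per host (the ports, paths and labels below are just what I happened to pick, not recommendations):

```
# /etc/mfs/mfschunkserver-ssd.cfg -- first instance, SSDs
LABEL = ssd
CSSERV_LISTEN_PORT = 9422
DATA_PATH = /var/lib/mfs-ssd
HDD_CONF_FILENAME = /etc/mfs/mfshdd-ssd.cfg

# /etc/mfs/mfschunkserver-hdd.cfg -- second instance, HDDs, other port
LABEL = hdd
CSSERV_LISTEN_PORT = 9423
DATA_PATH = /var/lib/mfs-hdd
HDD_CONF_FILENAME = /etc/mfs/mfshdd-hdd.cfg

# each instance is then started with its own config:
#   mfschunkserver -c /etc/mfs/mfschunkserver-ssd.cfg start
#   mfschunkserver -c /etc/mfs/mfschunkserver-hdd.cfg start
```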
- It looks like it’s better to have more chunkservers than a few servers with huge capacity each?
As far as I understand, it’s better to avoid any kind of RAID (even
JBOD), since a chunkserver using multiple disks will stripe chunks across them.
So I need several disks in a single machine to make it handle more I/O,
and I need several physical machines to handle redundancy.
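For reference, the no-RAID layout would just mean listing each disk’s mount point on its own line in mfshdd.cfg (paths are examples):

```
# /etc/mfs/mfshdd.cfg: one mount point per physical disk;
# the chunkserver spreads chunks across all of them
/srv/lizardfs/disk1
/srv/lizardfs/disk2
/srv/lizardfs/disk3
```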
- Is it OK to put the master (and shadow master) on chunkservers/hypervisors, or are dedicated servers required?
As the physical machines will be used as hypervisors, they will have
plenty of CPU and RAM. I saw a recommendation of 64GB of RAM for the
metadata server, but that seems to be for millions of files summing up to
petabytes of data.
Does anyone have hints to share?
I’m planning to write some documentation on the LizardFS setup with
OpenNebula. I’m mostly interested in the “all in one” use case: a single hypervisor
with 2 disks or more to start, then extending the cluster by adding more
machines.