SAN FC Datastore Design

Hey folks,

Could someone tell us the best way / recommended practice for using datastores in an environment with FC SAN storage? (The frontend is not attached to this SAN.)

My idea is to avoid third-party software/clustering such as RH CLVM.

Does anyone have ideas/suggestions?

Best regards,

-Carlos

From another thread I gather you are using fs_lvm, which is the recommended method, by exporting the LUN to all the hypervisors in the cluster.
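As a rough sketch, an fs_lvm setup is defined with two datastore templates like the ones below (the datastore names are illustrative assumptions; check the OpenNebula documentation for your version for the exact attributes):

```shell
# Sketch only: names are illustrative. fs_lvm keeps images as files in the
# image datastore and deploys them as LVs on a shared volume group that all
# hypervisors can see (named vg-one-<system_ds_id> on the hosts).

# Image datastore backed by fs_lvm:
cat > images_ds.conf <<'EOF'
NAME   = "san_images"
DS_MAD = "fs"
TM_MAD = "fs_lvm"
TYPE   = "IMAGE_DS"
EOF
onedatastore create images_ds.conf

# Matching system datastore:
cat > system_ds.conf <<'EOF'
NAME   = "san_system"
TM_MAD = "fs_lvm"
TYPE   = "SYSTEM_DS"
EOF
onedatastore create system_ds.conf
```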

You could alternatively set up a clustered file system, but that is similar to installing CLVM in terms of extra components.

Hello, the best way is to use clustering.

Corosync+Pacemaker, DLM, cLVM, GFS2.

We use cLVM for persistent images, and GFS2 for the system datastore and for non-persistent qcow2 images, for quick deployment.
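For illustration, once DLM and clvmd are running on all nodes, the shared volumes in a setup like this are created roughly as follows (the device path, VG/LV names, cluster name, sizes, and journal count are all assumptions, not the poster's actual config):

```shell
# Clustered PV/VG on the multipathed SAN LUN (device path is an assumption)
pvcreate /dev/mapper/mpatha
vgcreate -cy vg_san /dev/mapper/mpatha   # -cy marks the VG as clustered (cLVM)

# LV to hold the GFS2 system datastore
lvcreate -L 500G -n lv_system vg_san

# GFS2 with DLM locking; -t is <cluster_name>:<fs_name>,
# -j creates one journal per node that will mount the filesystem
mkfs.gfs2 -p lock_dlm -t mycluster:one_system -j 3 /dev/vg_san/lv_system
```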

You should use the latest CentOS 7.2 (7.3 is upcoming) and refer to the RHEL 7.2 guide. You should also read the “Clusters from Scratch” and “Pacemaker Explained” guides: http://clusterlabs.org/doc/

You also need fencing devices; the best option is the APC AP7921, which you can buy on eBay for about 120 GBP.

The frontend can run inside a VM hosted on that cluster, using the “VirtualDomain” Pacemaker resource agent.
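As a sketch, the frontend VM can be turned into a cluster resource along these lines (the resource name, domain name, file paths, and option values are illustrative assumptions):

```shell
# Dump the libvirt domain definition to a path every node can read
virsh dumpxml one-frontend > /etc/pacemaker/one-frontend.xml

# The VirtualDomain resource agent manages the VM as a cluster resource,
# so Pacemaker restarts or migrates the frontend when a host fails
pcs resource create one_frontend ocf:heartbeat:VirtualDomain \
    hypervisor="qemu:///system" \
    config="/etc/pacemaker/one-frontend.xml" \
    migration_transport=ssh \
    meta allow-migrate=true \
    op monitor interval=30s
```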

You also need a lot of time and patience.

Hi @feldsam and @jmelis.

Thank you for the comments and instructions.

I’ll read more about cLVM and GFS2 to try to build a good scenario; I don’t have expertise in this area.

Thank you!

-Carlos

Carlos Cesario opennebula@discoursemail.com writes:

> Hi @feldsam and @jmelis.
>
> Thank you for the comments and instructions.
>
> I’ll read more about cLVM and GFS2 to try to build a good scenario; I don’t have expertise in this area.

It’s not a trivial setup, and when things go wrong it can quickly end with the whole cluster being halted and rebooted :-/

According to the documentation and IRC/mailing-list discussions, here is the order in which to set things up:

  1. You need to set up fencing between hosts (IPMI, APC, …)

  2. Make sure fencing is working properly; really, take the time to test it,
    or everything can go crazy

  3. Then you can start defining resources in Pacemaker to use your
    cluster; we can share our config to give you an idea
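To illustrate steps 1 and 2, IPMI-based fencing can be configured and, crucially, tested roughly like this (node names, the IP address, and credentials are placeholders):

```shell
# One stonith resource per node; fence_ipmilan ships with fence-agents
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1" ipaddr="10.0.0.101" \
    login="admin" passwd="secret" lanplus=1

# Actually test it: this must really power-cycle node1
pcs stonith fence node1

# Check the fencing history afterwards
stonith_admin --history node1
```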

Regards.

Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

Also, it’s best to have a separate dedicated network for Corosync.


Hi all, I am replying to this thread to share our experience with such a setup. The proposed solution with Corosync, Pacemaker, DLM, cLVM, and GFS2 is too complicated and in some cases fragile. For example, at the start we hit an ugly bug in libqb, which wasn’t fixed until the CentOS 7.3 release. It crashed the whole cluster every month, giving us worse availability than a non-HA setup. Administering such a thing is complicated and requires experienced people and time.

We are throwing away the whole structure and replacing it with a simpler one. We are staying with FC SAN storage, but we leverage the storage API and wrote a custom OpenNebula storage driver. There is no clustering and no complex solution; it is simple, easy to understand, and also better performing. I like to tell others that simple is better, and you should think about that. I want to share our storage driver, which uses HPE 3PAR FC SAN: [Contribution] HPE 3PAR Storage driver.