Storage Requirements


(Ben McGuire) #1

We seem to have resolved most issues, including configuring the Nginx proxy, which is a godsend.

We are going to be using an SSD NAS connected via 10 Gbit for virtual machine storage so we can use live migration.

My question concerns OpenNebula's somewhat confusing storage setup (system, image, and file datastores): we need to know whether anything will still use the local storage on the nodes if NFS is used.


(Kai 'wusel' Siering) #2

OpenNebula “lives” in /var/lib/one, which is oneadmin's $HOME. All image-related data is kept in the datastores, which live in /var/lib/one/datastores/*. If you define a local DS, it will be created on that host’s /var/lib/one/datastores; if all datastores will only ever be of type NFS, the easiest approach is to mount /var/lib/one/datastores itself from NFS (instead of mounting each datastore separately on every host). (I use NFS & LFS, but only have a 1 GBit network there, so for some things I use local storage as well.)
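To make the "mount the whole datastores tree" suggestion concrete, here is a minimal sketch of the fstab entry each host (front-end and KVM nodes) could use. The server name `nas01` and the export path are assumptions; substitute your own NFS server and adjust the mount options to taste:

```
# /etc/fstab on the front-end and every node -- mount the entire
# datastores tree from the NAS so all hosts see the same images.
# "nas01" and "/export/one_datastores" are placeholders.
nas01:/export/one_datastores  /var/lib/one/datastores  nfs  defaults  0  0
```

After adding the entry, `sudo mount -a` picks it up; any datastore OpenNebula then creates under /var/lib/one/datastores lands on the shared storage rather than on a node's local disk.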


(Ben McGuire) #3

Hi Wusel,

Thanks for the explanation, which made a lot of sense. After spending the day testing, and then trying vOneCloud, I have decided to stick with what I know (VMware) and use vOneCloud.
Now I do not need to bother with NFS, as I can use our existing vSAN storage, which I hope can be used with OpenNebula.

Just need to spend tomorrow making VMware images, unless someone knows where I can get standard VMDK images that work with vOneCloud.

I am pretty excited to move to OpenNebula as it simplifies everything, and once our VMs are migrated we will need fewer servers, as we currently have 3 dedicated servers just for NSX and OpenStack.

I am confident that OpenNebula will suit our purpose… I just wish I had discovered it 2 years ago :slight_smile:


(Kai 'wusel' Siering) #4

I’m afraid you lost me here:

But vOneCloud builds on top of VMware vCenter deployments, and you stated that your reason for looking into OpenNebula was the cost incurred by your current OpenStack environment (powered by VMware)?

OpenNebula manages (preferably) VMs on hypervisors running KVM as the virtualization technology. OpenStack (I don’t know anything about “VMware Integrated OpenStack”, though) usually uses KVM as well. So, unless you want to replace your OpenStack-based KVM VMs with VMs on VMware’s virtualization layer, I don’t see how vOneCloud fits into this? Just curious :wink:


(Ben McGuire) #5

Ha…There is usually a method to my madness.

One of the main reasons we decided to move was, first, to consolidate servers: we have 8 dedicated servers just to run VMware Integrated OpenStack with NSX, which costs many thousands per month.
I have done the calculations, and based on our current virtual machine count and storage requirements we can downgrade to just 4 servers using vOneCloud, as we do not need 3 servers dedicated to NSX.
Secondly, OpenStack is notorious for preferring NAT, which customers hate, and personally I do too. Considering our server provider (OVH) and their routing, setting up a no-NAT environment with OpenStack and NSX was not an option, as there is no way it would ever work. So, to satisfy customers and simplify our cloud, I feel vOneCloud will be far more suitable, and we can still utilise the VMware hypervisor, making for an easier migration to the new cloud while doing away with the complex NSX setup.

I built that environment myself a little under 2 years ago, and it took me about 3 months to get it right. Now that my business has grown very rapidly, I need a scalable solution that will also satisfy our customers. We are the number 1 provider of pentesting machines to security professionals and others in the cyber security realm, so not using NAT is going to increase our business tenfold. Plus, I can still use the vSAN setup currently serving our OpenStack compute cluster, so storage performance is not an issue with pure-SSD vSAN.

I do not claim that my opinion is right or the best solution, as there are far more knowledgeable people than I, but knowing my business I feel this is the best course of action to take, and I just hope I do not regret it.

Lastly, cloud-init with VMware-based images really sucks, and OpenNebula contextualization seems to make that so much easier without cloud-init, as cloud-init with OpenStack is only really for KVM.
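For anyone following along, contextualization in OpenNebula is driven by a CONTEXT section in the VM template; a minimal sketch might look like the following (the start script is purely illustrative, and the image needs the OpenNebula context packages installed):

```
CONTEXT = [
  NETWORK        = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  START_SCRIPT   = "echo 'contextualized' > /tmp/ctx-done" ]
```

At boot, the context packages inside the guest read these values (passed in via a small ISO attached to the VM) and configure networking, inject the SSH key, and run the start script, with no cloud-init involved.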

I am happy to hear your view if you think what I am doing is a big mistake. However, we are only a small team and do not have the resources to manage a complex NSX and OpenStack cloud, as I cannot manage both the business and the networking myself. Sure, I could hire a few more people, but we run a very tight budget, and we have many loyal long-term business and private customers whose happiness is my main priority. As I always say to them: if they need it and I can see it being viable, I'll build it. I always have at least one project going on :slight_smile:

I am a little excited to be using vOneCloud - like a kid with a new toy. If all goes well, I should have everything up in less than a week, and once we complete testing I can notify customers that there will be no more NAT, which they are going to love me for :slight_smile:


(Kai 'wusel' Siering) #6

Well, true, NAT sucks, but where do you get the v4 IPs from? The pools are basically depleted, so I assume you’ll have to rent them from OVH? So, while I agree that NAT sucks (and I’m more than happy not to need to rely on it, thanks to our own address space), v4 is not the future. But that’s a different story :wink:

Ah, so you use a Private Cloud/SDDC product from OVH; that’s where your ESXi infra comes from? Being professionally involved in an ESXi-to-KVM migration project myself, I think I understand the motivation; but vOneCloud, to me, is not the way to go. It’s a rather specialized version of ONe, and looking at the release notes, and assuming that in the end you would like to drop VMware, I’d suggest setting up a proper OpenNebula environment. Quoting the system requirements:

For infrastructures exceeding the aforementioned limits, we recommend an installation of OpenNebula from scratch on a bare metal server, using the vCenter drivers

In my spare time I run a bunch of servers for a club (a Freifunk community; i. e. not much money to spend, the servers are usually sponsored “somewhere”) in currently four DCs, and I’m rather happy with our OpenNebula setup. Yes, understanding the concept of Datastores, Templates, Images, Networks, Clusters, … is a bit tricky — it took me some weeks and a lot of trial-and-error to get things going, but it certainly was worth the effort.


Since … nowadays we can deploy new VMs anywhere with the click of a button :wink: (Well, solving networking (not relying on ISP solutions) and other issues wasn’t easy, but as it’s part of my profession, not too difficult either :wink:)

The nice thing about OpenNebula, to me, is … that it doesn’t get in the way too much; it’s just a fancy (yet powerful) extension to libvirt/virt-manager — which e. g. Red Hat’s oVirt isn’t. So, if your business is “just” to provide VMs to customers, I’d go with ONe. To some extent, even having customers control the life-cycle of their VMs is ok-ish. Supplying a VPC-like setup might be off the scale; I never looked into it, as that’s neither my daytime job’s nor my hobbyist’s ballpark.

Setting up OpenNebula is actually rather straightforward; I’m on Ubuntu LTS for servers, and using the OpenNebula repo makes sure the UID of the oneadmin user is consistent across servers, so there are no permission issues on shared filesystems.
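A quick way to sanity-check that UID consistency before putting a shared filesystem in place (the hostnames here are placeholders for your own front-end and nodes):

```
# Print oneadmin's UID on each host; all numbers should match,
# otherwise files written over NFS will have mismatched ownership.
for h in front-end node1 node2; do
    ssh "$h" 'echo "$(hostname): $(id -u oneadmin)"'
done
```

If the UIDs differ, the usual fix is to install oneadmin from the same OpenNebula packages everywhere (or align the UID/GID manually) before sharing /var/lib/one/datastores.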

Since vOneCloud is supposed to run as a VMware guest, I don’t see how vOneCloud would get you out of VMware’s fangs. But, in the end, it’s up to you :wink: