Securing a new install


#1

hi,

I am doing an install to test OpenNebula. After following the first steps I have Sunstone running well.

One thing that bothers me is that a lot of services listen on the Internet with no filtering, and I wanted to know what the risks are and whether there is any guide on what can be blocked/filtered, etc…

tcp   0   0  0.0.0.0:2633    0.0.0.0:*   LISTEN   4123/oned
tcp   0   0  0.0.0.0:29876   0.0.0.0:*   LISTEN   4442/python2
udp   0   0  0.0.0.0:4124    0.0.0.0:*            4221/collectd
tcp   0   0  0.0.0.0:9869    0.0.0.0:*   LISTEN   4423/ruby

I found this one: https://www.youtube.com/watch?v=j7i_RsjFjC4

I am listening to it right now; hopefully it gives some advice on this.

How do you secure your OpenNebula installs when your machines are hosted in datacenters like Ikoula/OVH and other “public” hosting companies?

best regards,
Ghislain.


(Ruben S. Montero) #2

The IP addresses to bind the sockets to can be defined in the configuration files:

oned.conf for oned and collectd (ports 2633, 4124); sunstone.conf for the UI and the VNC proxy (ports 9869, 29876).
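
For example, something along these lines (a sketch based on a 5.x install; the exact parameter names and file paths may differ in your version, so check your own configuration files):

    # /etc/one/oned.conf (XML-RPC endpoint, port 2633)
    LISTEN_ADDRESS = "127.0.0.1"    # only local clients (CLI, Sunstone on the same box)

    # /etc/one/sunstone-server.conf (web UI, port 9869)
    :host: 127.0.0.1                # or a private/admin-only IP
    :port: 9869

collectd (4124/udp) is configured in the IM_MAD "collectd" section of oned.conf; restricting it with a firewall rule may be the simpler option.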

You may want to proxy the services exposed to the Internet through an SSL proxy.
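
A minimal sketch of that, assuming nginx in front of Sunstone on the same machine (the hostname and certificate paths are placeholders; the VNC websocket proxy on 29876 needs its own TLS/forwarding setup, which the Sunstone docs cover):

    server {
        listen 443 ssl;
        server_name cloud.example.com;                      # placeholder hostname
        ssl_certificate     /etc/ssl/certs/sunstone.pem;    # your certificate
        ssl_certificate_key /etc/ssl/private/sunstone.key;  # your key

        location / {
            proxy_pass http://127.0.0.1:9869;               # Sunstone bound to localhost
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }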


#3

OK, so:

  • Sunstone is the web GUI, so limit it to the IPs of the admins and users of the GUI
  • oned is… ? Searching for oned in the docs returns nothing, so what access it should have, I don't know :slight_smile:
  • collectd: seems to be monitoring, so it should be limited to:
    guests and hosts => frontend, and
    the frontend => all hosts/guests
  • the VNC proxy should be the same as collectd, I guess

I am speaking from a simple install with all of OpenNebula on the same machine plus one test host, all on Ubuntu 18.04. Does that sound good?
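
Roughly what I have in mind, just as a sketch (203.0.113.10 stands for an admin IP and 10.0.0.0/24 for the host network, both made up; oned is left open as a question):

    # Sunstone and the VNC proxy: admins only
    iptables -A INPUT -p tcp --dport 9869  -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 29876 -s 203.0.113.10 -j ACCEPT
    # collectd: monitoring traffic from the hosts only
    iptables -A INPUT -p udp --dport 4124  -s 10.0.0.0/24  -j ACCEPT
    # oned (2633): not sure yet what needs to reach this one
    # drop everything else on these ports
    iptables -A INPUT -p tcp --dport 9869  -j DROP
    iptables -A INPUT -p tcp --dport 29876 -j DROP
    iptables -A INPUT -p udp --dport 4124  -j DROP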

regards,
Ghislain


(Ruben S. Montero) #4

The oned port is the XML-RPC endpoint; all components communicate with the OpenNebula core through this port, including the CLI and Sunstone. It could be localhost in most cases.

collectd is used to talk to your hypervisors; it can listen on the private IP used to talk to them.

The VNC proxy is the same as Sunstone, if you want to access VMs through VNC.
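
If you bind oned and Sunstone to localhost, a quick way to check what is still exposed is, for example:

    ss -tulnp | grep -E '2633|4124|9869|29876'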


#5

Thanks for that information.

One more thing about security. The controller has SSH access to the hosts, which seems quite normal. My question is that, if I am not mistaken, all the hosts also have SSH access back to the controller via the oneadmin account.

If I am not mistaken, that means that if any host is compromised, it will be able to compromise all the other hosts “via” the controller and destroy a lot of things on the controller, including sneaky things like embedding malware in the guest images.

Am I wrong here?

best regards,
Ghislain.


(Ruben S. Montero) #6

Yes, you are right: host-to-host SSH is required for some operations, so usually the oneadmin credentials are shared. This means that if the oneadmin account on a host is compromised, it could potentially log on to the frontend and perform any operation (as oneadmin). This includes, but is not limited to: altering the VLAN_ID of networks, QoS parameters, full DB access, changing VM images…


(Ruben S. Montero) #7

BTW, if you do not want to share the oneadmin credentials, this can be done (to some extent) if you do not need some features, I think live migration…


#8

This frightens me.

We have seen KVM escapes in the past and, recently, Docker and LXD escapes, so this means the whole OpenNebula base install is incredibly fragile against any escape (or direct compromise of any node).

Isn't it a problem to have such a vulnerable architecture in a 2019 system that uses a central controller? You have a system with a central controller, and that central controller is not doing any control or security validation? :frowning:

It could open a temporary point-to-point channel limited to the operation needed (create an SSH account with a forced command, or a temporary rsync share with limited access), then close it when done. I don't know, I am just puzzled by this.
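
For example, with an OpenSSH forced command on the frontend (just an illustration; the key options are standard OpenSSH, but the rrsync path, datastore directory, source IP and key are made up, and I have no idea which OpenNebula operations this would break):

    # On the frontend, in ~oneadmin/.ssh/authorized_keys:
    # restrict what a key coming from a host can do (read-only rsync of one directory)
    restrict,from="10.0.0.21",command="/usr/bin/rrsync -ro /var/lib/one/datastores" ssh-rsa AAAA... oneadmin@host01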

Ghislain.


(Ruben S. Montero) #9

It is doing that, based on SSH credentials. If the credentials get compromised, you are in trouble, as in any other system… As I said, you can opt not to share the credentials.


#10

hi Ruben,

First, I want to thank all the devs who take the time to answer a noob question here that perhaps does not make sense. I am just throwing out what bothers me, so please do not take anything as an attack or as anything other than a desire to understand how things work. It's just that some signs trigger my spider sense, so I ask bluntly :slight_smile: and there is no scientific proof that spider sense exists, so surely I am just having mental issues.

So I agree there is perhaps a way to tweak the scripts to not have a two-way open SSH channel, but if OpenNebula is basically designed so that every host has complete control of the cluster, with the main daemon running on a controller, then removing this could lead to breakage at several levels and unseen consequences for the people doing it. Especially if they are noobs in the ecosystem like me, and at each upgrade of the packages.

This is a little like the scp issue we had in the 5.8 RC: the scp method did not work, showing that the test infrastructure and all the devs use something else like Ceph or iSCSI, or they would surely have caught it before. This perhaps means this is not the way the OpenNebula core team expects it to work IRL. It is always better to use a tool the way the devs intended; that prevents you from stumbling into corner cases that nobody uses.

So I prefer to learn from the devs how they intended it to be used, so I don't fall into a corner case that will destroy my servers six months from now :slight_smile:

The way I see it, the OpenNebula cluster expects two-way access. It seems that the host => controller direction is only for file copy purposes; I don't know yet if it uses a plugin-type mechanism for the transfer method, as I just started, but I will have a look, and if so I could try to write my own as an exercise to learn OpenNebula's internals.

I'll google it myself, then :stuck_out_tongue:

Ghislain.