[4.11] sunstone on a different host

Hi,

For security reasons, I need to install Sunstone on a different server than the oned server.

As found in the documentation, I know it’s possible:

By default the Sunstone server is configured to run in the frontend, but you are able to install the Sunstone server on a machine different from the frontend.

You will need to install only the Sunstone server packages on the machine that will be running the server. If you are installing from source, use the -s option for the install.sh script. Make sure the :one_xmlrpc: variable in sunstone-server.conf points to the place where the OpenNebula frontend is running; you can also leave it undefined and export the ONE_XMLRPC environment variable. Provide the serveradmin credentials in the file /var/lib/one/.one/sunstone_auth. If you changed the serveradmin password, please check the Cloud Servers Authentication guide.
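
In practice the relevant settings look roughly like this; the hostname and password are placeholders:

# /etc/one/sunstone-server.conf -- point Sunstone at the remote oned
:one_xmlrpc: http://frontend.example.com:2633/RPC2

# alternatively, leave :one_xmlrpc: undefined and export the variable instead
export ONE_XMLRPC=http://frontend.example.com:2633/RPC2

# /var/lib/one/.one/sunstone_auth -- serveradmin credentials in user:password form
serveradmin:<serveradmin password>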

Everything seems to work, except uploading images or files. The error is “[ImageAllocate] Cannot determine Image SIZE”. Is this normal?

In sunstone logs:

Tue Mar 10 09:26:26 2015 [I]: 192.168.199.254 - - [10/Mar/2015:09:26:26 -0400] "POST /upload_chunk HTTP/1.1" 200 - 0.1014
Tue Mar 10 09:26:26 2015 [I]: 192.168.199.254 - - [10/Mar/2015:09:26:26 -0400] "POST /upload HTTP/1.1" 500 - 0.2241

Your upload directory has to be shared with the oned server (the :tmpdir: variable in /etc/one/sunstone-server.conf is not honoured for setups running Apache Passenger). If you are running Apache Passenger, you can adjust /usr/lib/one/sunstone/config.ru and add the following:

ENV['TMPDIR'] = '/mnt/sunstone_upload'  # or whatever your upload directory is; make sure this path is shared with the oned server (e.g. an NFS mount)

Don’t forget to restart Sunstone and/or Apache after you’ve made the changes.
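
For example, on a packaged install the restart could look something like this (service names may differ per distribution):

sudo service opennebula-sunstone restart
sudo service apache2 restart    # or httpd, depending on the distribution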

Thank you Stefan. I have opened a bug report since this information is missing in the documentation.

Do you know if VNC consoles would be accessible with Sunstone running on a different machine than oned? This Sunstone server doesn’t have access to the hypervisors.

Sunstone needs access to the hypervisors, since that is where the VNC proxy connects to: the proxy makes connections to the VNC port of each VM on the host where the VM is running.

Couldn’t this proxy run on the oned server? I can’t allow a connection between the Sunstone server and the hypervisors (for security reasons). How do you deal with this kind of concern, where the dashboard server, which is accessed by the public, can’t be in the same network as the hypervisors? The goal is to prevent the machine running Sunstone from reaching the hypervisors if it’s compromised. What are the best practices?

The web interface will always try to connect to the Sunstone server (default port 29876) when using VNC. A solution could be to start opennebula-novnc on the OpenNebula frontend and set up a tunnel from port 29876 on the Sunstone server machine to the frontend:

client -> sunstone:29876 -> tunnel -> frontend:29876 -> host:vnc_port
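
As a sketch, such a tunnel could be built with SSH from the Sunstone machine, assuming it has SSH access to the frontend (user and hostname are placeholders):

# run on the Sunstone server: forward local port 29876 to the noVNC proxy on the frontend
ssh -N -L 29876:localhost:29876 oneadmin@frontend.example.com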

Thank you Javi, I will try it that way.

Stefan, do you have any experience with nginx and Passenger setups for OpenNebula? I have changed the TMP directory as per your suggestion, but after restarting nginx the uploads still end up in /tmp. I’m thinking that nginx, for some weird reason, might be overriding the Passenger/Ruby-defined defaults.

I don’t have any experience with nginx and Passenger. You could also change the sunstone_server.rb file (ONE 4.12, line 82):

# Set the TMPDIR environment variable for uploaded images
ENV['TMPDIR']=$conf[:tmpdir] if $conf[:tmpdir]

To something like

# Set the TMPDIR environment variable for uploaded images
ENV['TMPDIR']='/path/to/upload/directory'

Thanks, I’ll give this a try!

So after fiddling around with Passenger and Sunstone settings, I was finally able to change the upload directory in nginx. For anyone with the same setup, what you are looking for is client_body_temp_path: point it to where you want files to land and make sure it’s shared between your Sunstone and OpenNebula servers (an NFS mount, for example).

I also defined the following settings in nginx.conf:

client_body_in_file_only clean;
client_body_in_single_buffer on;

This seems to work the best for me.
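
Putting it together, a minimal sketch of the relevant nginx directives, assuming /mnt/sunstone_upload is the NFS-shared upload path (the path is a placeholder):

# inside the http {} or server {} block serving Sunstone
client_body_temp_path /mnt/sunstone_upload;   # must be shared with the oned server
client_body_in_file_only clean;
client_body_in_single_buffer on;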