Problem with auth in federation

Hi guys,

I’m trying to set up a federation with OpenNebula 5.6.2 and I have an authentication problem on the slave side. When I try to log in to Sunstone, I get this error:

Server is not running or there was a server exception. Please check the server logs.

and this shows up in oned.log:

 [Z101][ReM][D]: Request_ID:416 UID:0 User:oneadmin Group_ID:0 Group:oneadmin Method_name:one.user.info Invoked:, -1
[Z101][ReM][D]: Req:416 UID:0 one.user.info result SUCCESS, "<USER><ID>0</ID><GID..."
[Z101][AuM][D]: Message received: LOG I 41 Command execution failed (exit code: 255): /var/lib/one/remotes/auth/server_cipher/authenticate

[Z101][AuM][I]: Command execution failed (exit code: 255): /var/lib/one/remotes/auth/server_cipher/authenticate
[Z101][AuM][D]: Message received: LOG E 41 bad decrypt

[Z101][AuM][I]: bad decrypt
[Z101][AuM][D]: Message received: AUTHENTICATE FAILURE 41 bad decrypt

[Z101][AuM][E]: Auth Error: bad decrypt
[Z101][ReM][D]: Request_ID:4768 UID:-1 User: Group_ID:-1 Group: Method_name:one.user.info Invoked:, -1
[Z101][ReM][E]: Req:4768 UID:- one.user.info result FAILURE [one.user.info] User couldn't be authenticated, aborting call.

At the same time I can use the CLI on both master and slave without any problem, and I can see that the DB is in a synced state. I copied the necessary files from /var/lib/one/.one.
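
For reference, this is roughly how I copied them (slave-frontend is a placeholder hostname; the federation guide lists the exact set of files):

# Run on the master front-end; the whole /var/lib/one/.one directory
# (one_key, one_auth, sunstone_auth, ...) must match on both sides.
scp -rp /var/lib/one/.one oneadmin@slave-frontend:/var/lib/one/
# Then on the slave, fix ownership and restart the services:
chown -R oneadmin:oneadmin /var/lib/one/.one
systemctl restart opennebula opennebula-sunstone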

Any advice would be appreciated!

Update: the problem was solved with an nginx restart.
But here is another issue: we can’t switch between zones when we use nginx. When we stop nginx and start the opennebula-sunstone service directly, switching starts working.
The question is: how can we make it work with nginx?

Did you follow the configuration provided in the documentation?
http://docs.opennebula.org/5.8/deployment/sunstone_setup/suns_advance.html

Hi mouyaq,

Yes, I put underscores_in_headers on; and proxy_pass_request_headers on; under the http section in nginx.conf on both zones of my federation, but it doesn’t help.

Which OpenNebula and nginx versions are you using?

I’m using:
CentOS 7, kernel 3.10.0-957
nginx-1.14.0-1.p5.3.5.el7.x86_64
passenger-5.3.5-1.el7.x86_64
opennebula-5.6.2-2

Here is my nginx.conf:

user nginx;
worker_processes 10;
error_log /var/log/nginx/error.log ;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    server_tokens off;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    underscores_in_headers on;
    proxy_pass_request_headers on;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    server {
        listen       127.0.0.1:80;
        server_name  localhost;
        root         /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        location / {
        }
        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

and passenger.conf:

passenger_root /usr/share/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/ruby;
passenger_instance_registry_dir /var/run/passenger-instreg;
passenger_max_instances_per_app 1;
passenger_user oneadmin;
#passenger_log_level 7;
server {
        listen       80;
        server_name  one_vip;
        root         /usr/lib/one/sunstone/public;
        passenger_enabled on;
        error_log  /var/log/nginx/passenger.error.log;
        access_log  /var/log/nginx/passenger.access.log;
        client_body_in_file_only clean;
        client_max_body_size 35G;


        location / {
        }

        # redirect server error pages to the static page /40x.html
        #
        error_page  404              /404.html;
        location = /40x.html {
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
        }
}

There was an issue that was resolved in version 5.8.1. Try updating.
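
On CentOS 7, assuming the official OpenNebula 5.8 repository is already configured, the update would look roughly like this:

# Upgrade the OpenNebula packages and restart the services;
# depending on the versions involved, a DB schema upgrade (onedb upgrade) may also be needed.
yum upgrade opennebula opennebula-server opennebula-sunstone
systemctl restart opennebula opennebula-sunstone nginx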

Hi mouyaq,

I faced another issue: the VNC consoles disappeared on the slave cluster. If I switch to the slave from the master in the web UI, they are visible, but if I log in to the slave cluster directly, there is no VNC icon, neither in the VM list nor in the VM details. However, if the VM details load slowly enough, I see the VNC icon for a moment before it disappears.

When the VNC icon disappears, can you still see the instance’s IP address?
Make sure you can see the IP address in Sunstone, and run onevm show -x to check the IP address from the CLI as well.
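
For example, something like this (the VM ID 42 is just a placeholder) shows whether the NIC and the VNC (GRAPHICS) section are present in the VM’s XML:

# Dump the VM's XML and look for the IP address and the VNC port:
onevm show -x 42 | grep -E '<IP>|GRAPHICS|<PORT>'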

hi mouyaq,

Thanks for the reply.

I can see the IP address, and the NIC, IP address and VNC device are all present in the XML.

Check this issue: