VM unreachable via network on the LXD node (AppArmor?)

Hi,

I’ve recently added an LXD node, downloaded the ‘ubuntu_bionic - LXD’ image, and instantiated a VM from this image.

Problem: I cannot ping the provided IP address.

UFW: disabled
AppArmor: tried with it both on and off
VM ID: 288

LXD node:

ubuntu@one-lxd-node-01:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.3 LTS
Release:        18.04
Codename:       bionic

LXD node syslog:

ubuntu@one-lxd-node-01:~$ sudo grep one-288 /var/log/syslog
Sep  2 09:31:59 one-lxd-node-01 lxd[4045]: t=2019-09-02T09:31:59+0000 lvl=warn msg="Unable to update backup.yaml at this time" name=one-288
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.184650] audit: type=1400 audit(1567416721.577:47): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-one-288_</var/lib/lxd>" pid=11721 comm="apparmor_parser"
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.244842] brbond0: port 2(one-288-0) entered blocking state
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.244847] brbond0: port 2(one-288-0) entered disabled state
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.244990] device one-288-0 entered promiscuous mode
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.246165] IPv6: ADDRCONF(NETDEV_UP): one-288-0: link is not ready
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.382738] IPv6: ADDRCONF(NETDEV_CHANGE): one-288-0: link becomes ready
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.382775] brbond0: port 2(one-288-0) entered blocking state
Sep  2 09:32:01 one-lxd-node-01 kernel: [ 1466.382778] brbond0: port 2(one-288-0) entered forwarding state
Sep  2 09:32:01 one-lxd-node-01 systemd-networkd[2297]: one-288-0: Gained carrier
Sep  2 09:32:03 one-lxd-node-01 systemd-networkd[2297]: one-288-0: Gained IPv6LL

LXD node links:

ubuntu@one-lxd-node-01:~$ ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brbond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:64:63:29:2c brd ff:ff:ff:ff:ff:ff
3: bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master brbond1 state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:64:63:29:2e brd ff:ff:ff:ff:ff:ff
4: brbond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:64:63:29:2e brd ff:ff:ff:ff:ff:ff
5: brbond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:1a:64:63:29:2c brd ff:ff:ff:ff:ff:ff
13: one-288-0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brbond0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:c0:10:90:79:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0

LXD node bridges:

ubuntu@one-lxd-node-01:~$ brctl show
bridge name     bridge id               STP enabled     interfaces
brbond0         8000.001a6463292c       no              bond0
                                                        one-288-0
brbond1         8000.001a6463292e       no              bond1

ubuntu@one-lxd-node-01:~$ lxc network show brbond0
config: {}
description: ""
name: brbond0
type: bridge
used_by:
- /1.0/containers/one-288
managed: false
status: ""
locations: []

VNET:

[oneadmin@one-srv-01 ~]$ onevnet show 0
VIRTUAL NETWORK 0 INFORMATION                                                   
ID                       : 0                   
NAME                     : public              
USER                     : oneadmin            
GROUP                    : tsu                 
LOCK                     : None                
CLUSTERS                 : 100                 
BRIDGE                   : brbond0             
VN_MAD                   : bridge              
PHYSICAL DEVICE          : bond0               
AUTOMATIC VLAN ID        : NO                  
AUTOMATIC OUTER VLAN ID  : NO                  
USED LEASES              : 16                  

PERMISSIONS                                                                     
OWNER                    : um-                 
GROUP                    : u--                 
OTHER                    : ---                 

VIRTUAL NETWORK TEMPLATE                                                        
BRIDGE="brbond0"
BRIDGE_TYPE="linux"
DNS="172.28.4.2"
GATEWAY="172.28.4.1"
OUTER_VLAN_ID=""
PHYDEV="bond0"
SECURITY_GROUPS="0"
VLAN_ID=""
VN_MAD="bridge"

ADDRESS RANGE POOL                                                              
AR 0                                                                            
SIZE           : 10                  
LEASES         : 10                  

RANGE                                   FIRST                               LAST
MAC                         02:00:ac:1c:04:32                  02:00:ac:1c:04:3b
IP                                172.28.4.50                        172.28.4.59

AR 1                                                                            
SIZE           : 20                  
LEASES         : 6                   

RANGE                                   FIRST                               LAST
MAC                         02:00:ac:1c:04:46                  02:00:ac:1c:04:59
IP                                172.28.4.70                        172.28.4.89


LEASES                                                                          
AR  OWNER                         MAC              IP                        IP6
1   V:288           02:00:ac:1c:04:48     172.28.4.72                          -

LXD node lxc output:

$ lxc list one-288
+---------+---------+------+------+------------+-----------+
|  NAME   |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+------+------+------------+-----------+
| one-288 | RUNNING |      |      | PERSISTENT | 0         |
+---------+---------+------+------+------------+-----------+

/var/log/one/288.log

Mon Sep  2 12:34:57 2019 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/288/deployment.1
Mon Sep  2 12:34:59 2019 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Sep  2 12:35:00 2019 [Z0][VMM][I]: ExitCode: 0
Mon Sep  2 12:35:00 2019 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/101/288/disk.0
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/lxd/storage-pools/default/containers/one-288/rootfs using device /dev/nbd0
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: deploy: Mounting /dev/nbd0 at /var/lib/lxd/storage-pools/default/containers/one-288/rootfs
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/one/datastores/101/288/mapper/disk.1 using device /dev/loop0
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: deploy: Mounting /dev/loop0 at /var/lib/one/datastores/101/288/mapper/disk.1
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: ExitCode: 0
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: ExitCode: 0
Mon Sep  2 12:35:04 2019 [Z0][VMM][I]: Successfully execute network driver operation: post.
Mon Sep  2 12:35:04 2019 [Z0][VM][I]: New LCM state is RUNNING

What have I missed?

It seems the IP address on your container is missing, for some reason. Check the network contextualization inside the container.
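
For reference, a quick way to check from the LXD node whether contextualization ran at all (a sketch, assuming the image ships the usual one-context package):

$ lxc exec one-288 -- dpkg -l | grep one-context
$ lxc exec one-288 -- systemctl status one-context

If the package or the service is missing or failed, none of the CONTEXT network settings will have been applied.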

Hi,
Yes, I’ve noticed that. But there is another problem :wink: I cannot get into the container.
All my combinations of USERNAME/PASSWORD (PASSWORD_BASE64) fail for some reason.
Also, I’ve changed

:command: /bin/login

to

:command: /bin/bash

in /var/lib/one/remotes/etc/vmm/lxd/lxdrc, but it still asks for a password.
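
Side note: since /var/lib/one/remotes lives on the front-end and is only synced out to the nodes, I also pushed the change (assuming that is still required for files under remotes/etc):

$ onehost sync --force

so the node should not be running a stale copy of lxdrc.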

Log into the LXD node and issue

lxc exec one-288 bash

You should be able to get a root shell inside the container.

You wrote bash in the VNC config, and the login command is still executed?

Hi,

It helped, I’m in.

It seems that contextualization only partially works.
VM context (new VM 298, edited):

CONTEXT=[
  DISK_ID="1",
  ETH0_CONTEXT_FORCE_IPV4="",
  ETH0_DNS="172.28.4.2",
  ETH0_EXTERNAL="",
  ETH0_GATEWAY="172.28.4.1",
  ETH0_GATEWAY6="",
  ETH0_IP="172.28.4.75",
  ETH0_IP6="",
  ETH0_IP6_PREFIX_LENGTH="",
  ETH0_IP6_ULA="",
  ETH0_MAC="02:00:ac:1c:04:4b",
  ETH0_MASK="",
  ETH0_MTU="",
  ETH0_NETWORK="",
  ETH0_SEARCH_DOMAIN="",
  ETH0_VLAN_ID="",
  ETH0_VROUTER_IP="",
  ETH0_VROUTER_IP6="",
  ETH0_VROUTER_MANAGEMENT="",
  NETWORK="YES",
  PASSWORD_BASE64="IVBhc3N3b3Jk",
  SET_HOSTNAME="test-lxd1",
  SSH_PUBLIC_KEY="ssh-rsa...",
  TARGET="hdb",
  USERNAME="sadm" ]
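
As a sanity check, the PASSWORD_BASE64 value above decodes cleanly (plain base64, nothing OpenNebula-specific):

root@test-lxd1:~# echo IVBhc3N3b3Jk | base64 -d
!Password

so the credentials reaching the container look correct.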

The user ‘sadm’ does not exist in the system; only ‘ubuntu’ is present.
There is a network interface:

root@test-lxd1:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:00:ac:1c:04:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0

but no IP address:

root@test-lxd1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:ac:1c:04:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::acff:fe1c:44b/64 scope link 
       valid_lft forever preferred_lft forever

/etc/netplan is pretty empty:

root@test-lxd1:~# ls -la /etc/netplan/
total 8
drwxr-xr-x  2 nobody nogroup 4096 Aug 28 14:57 .
drwxr-xr-x 75 nobody nogroup 4096 Aug 28 14:57 ..

/etc/network contains some ‘interfaces’ files:

root@test-lxd1:~# ls -la /etc/network
total 32
drwxr-xr-x  6 nobody nogroup 4096 Aug 28 14:57 .
drwxr-xr-x 75 nobody nogroup 4096 Aug 28 14:57 ..
drwxr-xr-x  2 nobody nogroup 4096 Apr 27  2018 if-down.d
drwxr-xr-x  2 nobody nogroup 4096 Apr 27  2018 if-post-down.d
drwxr-xr-x  2 nobody nogroup 4096 Aug 28 14:56 if-pre-up.d
drwxr-xr-x  2 nobody nogroup 4096 Aug 28 14:57 if-up.d
-rw-r--r--  1 nobody nogroup   64 Aug 28 14:57 interfaces
-rw-r--r--  1 nobody nogroup  190 Aug 28 14:57 interfaces.1567004230

but they are also unconfigured:

root@test-lxd1:~# cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

root@test-lxd1:~# cat /etc/network/interfaces.1567004230 
# ifupdown has been replaced by netplan(5) on this system.  See
# /etc/netplan for current configuration.
# To re-enable ifupdown on this system, you can run:
#    sudo apt install ifupdown

Maybe something is wrong with the ‘Apps’ images?

A manually added IP does not help:

root@test-lxd1:~# ip addr add 172.28.4.75/24 dev eth0

root@test-lxd1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:00:ac:1c:04:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.28.4.75/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::acff:fe1c:44b/64 scope link 
       valid_lft forever preferred_lft forever

root@test-lxd1:~# ping 174.28.4.1
connect: Network is unreachable
root@test-lxd1:~# ping 174.28.4.75
connect: Network is unreachable
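
Looking at those pings again, 174.28.4.1 and 174.28.4.75 look like typos for 172.28.4.x, and ‘Network is unreachable’ just means no route covers 174.28.4.0/24. With the address added by hand, the default route also has to be added manually, mirroring what contextualization would do with ETH0_GATEWAY:

root@test-lxd1:~# ip route add default via 172.28.4.1 dev eth0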

Loopback works:

root@test-lxd1:~# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms

Or maybe something is wrong with the LXD node’s bridge… But the node is accessible from the other nodes, and I can ping the container from the LXD node itself:

# lxc list one-298
+---------+---------+--------------------+------+------------+-----------+
|  NAME   |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+--------------------+------+------------+-----------+
| one-298 | RUNNING | 172.28.4.75 (eth0) |      | PERSISTENT | 0         |
+---------+---------+--------------------+------+------------+-----------+
root@one-lxd-node-01:~# ping 172.28.4.75
PING 172.28.4.75 (172.28.4.75) 56(84) bytes of data.
64 bytes from 172.28.4.75: icmp_seq=1 ttl=64 time=0.097 ms

UPDATE: After re-instantiating the container (IP still added manually) I can ping from it. But I cannot access the container with ‘ssh ubuntu@container_ip’.

Also, the container’s permissions are weird:

$ lxc exec one-299 bash      
bash: /root/.bashrc: Permission denied
root@test-bionic-lxd:~# whoami
root
root@test-bionic-lxd:~# ls -la
ls: cannot open directory '.': Permission denied

UPDATE 2: I see a lot of errors like this inside the container:

file '/var/log/messages': open error: Permission denied

The problem was partially solved by massive chowning:

chown -R 100000:100000 /var/lib/lxd/storage-pools/default/containers/one-299/rootfs/{all stuff but home/ubuntu}
chown -R 101000:101000 /var/lib/lxd/storage-pools/default/containers/one-299/rootfs/home/ubuntu
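
That chown pattern points at a uid/gid shift mismatch: in an unprivileged LXD container, uid 0 inside is normally mapped to uid 100000 on the host, so files owned by the real root show up as nobody:nogroup inside, exactly what the directory listings above show. A quick check on the LXD node (a sketch):

ubuntu@one-lxd-node-01:~$ grep lxd /etc/subuid /etc/subgid
ubuntu@one-lxd-node-01:~$ lxc config show one-299 | grep security.privileged

If the second command prints nothing, the container is running unprivileged with the default map.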

Could you show the VM_TEMPLATE of the container? If it was imported from the LXD marketplace it should have the LXD_SECURITY_PRIVILEGED parameter set to true, and it seems you are probably missing that. Check https://github.com/OpenNebula/one/issues/3258 for more info about the situation you are having.

Imported template:

$ onetemplate show 33
TEMPLATE 33 INFORMATION                                                         
ID             : 33                  
NAME           : ubuntu_bionic - LXD 
USER           : s.cherpatyuk        
GROUP          : oneadmin            
LOCK           : None                
REGISTER TIME  : 08/28 17:54:39      

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : ---                 
OTHER          : ---                 

TEMPLATE CONTENTS                                                               
CONTEXT=[
  NETWORK="YES",
  SET_HOSTNAME="$NAME",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="1"
DISK=[
  IMAGE_ID="107" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
HYPERVISOR="lxd"
INPUTS_ORDER=""
LOGO="images/logos/ubuntu.png"
LXD_PROFILE=""
LXD_SECURITY_NESTING="no"
LXD_SECURITY_PRIVILEGED=""
MEMORY="768"
MEMORY_UNIT_COST="MB"
OS=[
  BOOT="" ]

BTW, you can set LXD_SECURITY_PRIVILEGED to true via Sunstone only in advanced mode. The wizard offers a ‘Yes/No’ choice for Security Privileged, but selecting ‘Yes’ changes nothing.

So, I’ve changed the template to LXD_SECURITY_PRIVILEGED="true" and got a container with an active IP :slight_smile:
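
For anyone else hitting this, the same change can be made from the CLI instead of Sunstone (template ID 33 is the one shown above):

$ onetemplate update 33
# in the editor that opens, set:
LXD_SECURITY_PRIVILEGED="true"

Only VMs instantiated after the change pick up the new value.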

But after VM instantiation there is no ~/.ssh/authorized_keys for the default ‘ubuntu’ user.
UPDATE: solved by adding USERNAME=ubuntu to the template.

Moreover, the ssh service is stopped:

# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

And it fails to start:

# systemctl start ssh  
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.

UPDATE: solved by adding

sshd:x:109:65534::/run/sshd:/usr/sbin/nologin

to /etc/passwd (the sshd privilege-separation user was missing) and generating the missing host keys:

# ssh-keygen -A

Conclusion: the bionic LXD template downloaded from the marketplace is unusable out of the box.

I think you might be experiencing this; it is applicable to every marketplace app.

Thanks for reporting this. I opened https://github.com/OpenNebula/one/issues/3663