LXD ethernet bridge network doesn't get IP from DHCP

I’ve set up a virtual network with:

BRIDGE = "br0"
BRIDGE_TYPE = "linux"
DNS = "192.168.7.1 8.8.8.8"
GATEWAY = "192.168.7.1"
OUTER_VLAN_ID = ""
PHYDEV = ""
SECURITY_GROUPS = "0"
VLAN_ID = ""
VN_MAD = "bridge"
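(For reference, a network like this can be defined in a template file and created with `onevnet`; the file name and the AR values below are illustrative, not taken from the setup above:)

```
# bridge.net -- illustrative file name
NAME    = "bridge"
VN_MAD  = "bridge"
BRIDGE  = "br0"
DNS     = "192.168.7.1 8.8.8.8"
GATEWAY = "192.168.7.1"
# ETHER address range: OpenNebula assigns only MAC addresses, and the
# guests are expected to obtain IPv4 from an external DHCP server on br0
AR = [ TYPE = "ETHER", SIZE = "100" ]
```

and then `onevnet create bridge.net`.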

and use the Ubuntu Bionic LXD VM template:


User template

HYPERVISOR = "lxd"
INPUTS_ORDER = ""
LXD_PROFILE = ""
LXD_SECURITY_NESTING = "no"
LXD_SECURITY_PRIVILEGED = "yes"
MEMORY_UNIT_COST = "MB"
SCHED_DS_REQUIREMENTS = "ID=\"0\""
SCHED_REQUIREMENTS = "ID=\"5\""

Template

AUTOMATIC_DS_REQUIREMENTS = "(\"CLUSTERS/ID\" @> 0)"
AUTOMATIC_NIC_REQUIREMENTS = "(\"CLUSTERS/ID\" @> 0)"
AUTOMATIC_REQUIREMENTS = "(CLUSTER_ID = 0) & !(PUBLIC_CLOUD = YES)"
CONTEXT = [
  DISK_ID = "1",
  ETH0_CONTEXT_FORCE_IPV4 = "",
  ETH0_DNS = "192.168.7.1 8.8.8.8",
  ETH0_EXTERNAL = "",
  ETH0_GATEWAY = "192.168.7.1",
  ETH0_GATEWAY6 = "",
  ETH0_IP = "",
  ETH0_IP6 = "",
  ETH0_IP6_PREFIX_LENGTH = "",
  ETH0_IP6_ULA = "",
  ETH0_MAC = "02:00:e2:ef:27:63",
  ETH0_MASK = "",
  ETH0_MTU = "",
  ETH0_NETWORK = "",
  ETH0_SEARCH_DOMAIN = "",
  ETH0_VLAN_ID = "",
  ETH0_VROUTER_IP = "",
  ETH0_VROUTER_IP6 = "",
  ETH0_VROUTER_MANAGEMENT = "",
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "",
  TARGET = "hdb" ]
CPU = "1"
DISK = [
  ALLOW_ORPHANS = "NO",
  CLONE = "YES",
  CLONE_TARGET = "SYSTEM",
  CLUSTER_ID = "0",
  DATASTORE = "default",
  DATASTORE_ID = "1",
  DEV_PREFIX = "hd",
  DISK_ID = "0",
  DISK_SNAPSHOT_TOTAL_SIZE = "0",
  DISK_TYPE = "FILE",
  DRIVER = "raw",
  IMAGE = "ubuntu_bionic - LXD",
  IMAGE_ID = "1",
  IMAGE_STATE = "2",
  LN_TARGET = "SYSTEM",
  ORIGINAL_SIZE = "1024",
  READONLY = "NO",
  SAVE = "NO",
  SIZE = "1024",
  SOURCE = "/var/lib/one//datastores/1/0ef1fb25b7fb4ac301bc287d8ad652fe",
  TARGET = "hda",
  TM_MAD = "ssh",
  TYPE = "FILE" ]
GRAPHICS = [
  LISTEN = "0.0.0.0",
  PORT = "5920",
  TYPE = "VNC" ]
MEMORY = "768"
NIC = [
  AR_ID = "0",
  BRIDGE = "br0",
  BRIDGE_TYPE = "linux",
  CLUSTER_ID = "0",
  MAC = "02:00:e2:ef:27:63",
  NAME = "NIC0",
  NETWORK = "bridge",
  NETWORK_ID = "9",
  NIC_ID = "0",
  SECURITY_GROUPS = "0",
  TARGET = "one-20-0",
  VN_MAD = "bridge" ]
OS = [
  BOOT = "" ]
SECURITY_GROUP_RULE = [
  PROTOCOL = "ALL",
  RULE_TYPE = "OUTBOUND",
  SECURITY_GROUP_ID = "0",
  SECURITY_GROUP_NAME = "default" ]
SECURITY_GROUP_RULE = [
  PROTOCOL = "ALL",
  RULE_TYPE = "INBOUND",
  SECURITY_GROUP_ID = "0",
  SECURITY_GROUP_NAME = "default" ]
TEMPLATE_ID = "1"
TM_MAD_SYSTEM = "ssh"
VMID = "20"

The container boots, but it only gets an IPv6 address, and no IP appears in the Info tab in Sunstone, so I can’t use VNC.


Versions of the related components and OS (frontend, hypervisors, VMs):
Ubuntu 18.04.3
lxc --version 3.0.4
Package: opennebula-node-lxd
Version: 5.8.5-1

Steps to reproduce:

  1. Create a virtual network template as above, making sure that br0 exists and is working.
  2. Instantiate the Ubuntu Bionic LXD VM template, selecting the created virtual network.
  3. Observe the following log:
Mon Nov 4 20:21:56 2019 [Z0][VM][I]: New state is ACTIVE
Mon Nov 4 20:21:56 2019 [Z0][VM][I]: New LCM state is PROLOG
Mon Nov 4 20:22:12 2019 [Z0][VM][I]: New LCM state is BOOT
Mon Nov 4 20:22:12 2019 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/20/deployment.0
Mon Nov 4 20:22:15 2019 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Mon Nov 4 20:22:16 2019 [Z0][VMM][I]: ExitCode: 0
Mon Nov 4 20:22:16 2019 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Processing disk 0
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Using raw filesystem mapper for /var/lib/one/datastores/0/20/disk.0
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Mapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-20/rootfs using device /dev/loop11
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Mounting /dev/loop11 at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-20/rootfs
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Mapping disk at /var/lib/one/datastores/0/20/mapper/disk.1 using device /dev/loop12
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: Mounting /dev/loop12 at /var/lib/one/datastores/0/20/mapper/disk.1
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: deploy: --- Starting container ---
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: ExitCode: 0
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: ExitCode: 0
Mon Nov 4 20:22:27 2019 [Z0][VMM][I]: Successfully execute network driver operation: post.
Mon Nov 4 20:22:28 2019 [Z0][VM][I]: New LCM state is RUNNING

Current results:
lxc list shows:

+--------+---------+------+---------------------------------------+------------+-----------+
|  NAME  |  STATE  | IPV4 |                 IPV6                  |    TYPE    | SNAPSHOTS |
+--------+---------+------+---------------------------------------+------------+-----------+
| one-20 | RUNNING |      | fd03:66e4:3fc5::e2ff:feef:2763 (eth0) | PERSISTENT | 0         |
+--------+---------+------+---------------------------------------+------------+-----------+

Expected results:
The VM should have an IPv4 address.

Other notes:
Looking at the router’s logs (dnsmasq), there is no DHCPREQUEST from that MAC. If I run dhclient inside the container, it does send a DHCPREQUEST and gets an IP, but the address still doesn’t show up in the Info or other tabs in Sunstone, so I still can’t access VNC.

Thanks for your help.

OpenNebula is not aware of the IP addresses you set up inside the container; it only shows the addresses leased by its virtual networks. You should be able to access VNC whether the container has an IP address or not.

So I was able to make VNC work by adding my root CA to my browser. However, when I launch VNC it shows a login prompt, and I don’t know the credentials of the LXD instance, so I can’t log in.

I know I can use lxc exec from the shell, but that defeats the purpose of having a web-based interface to manage LXD instances.

Also, is there a way to change the hostname of the LXD instance? It’s always LXC_NAME.

Another thing: networking in the Ubuntu 18.04 LXD instance is not enabled by default, so I need to run dhclient every time it boots. I tried netplan, but it doesn’t seem to work since eth0@if* changes its name on every reboot (first it was eth0@if22, after a reboot it was eth0@if24), so I can’t attach dhcp4 to it in netplan.
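For what it’s worth, the @ifNN suffix that ip link shows (eth0@if22) is the index of the host-side veth peer, not part of the interface name itself; inside the container the interface is still just eth0. So a minimal netplan sketch like this should be able to match it (the file name is arbitrary):

```yaml
# /etc/netplan/01-eth0.yaml -- minimal sketch
network:
  version: 2
  ethernets:
    eth0:          # matches the interface name; the @ifNN suffix is display-only
      dhcp4: true
```

followed by `netplan apply`.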

Thanks for your help.

when I launch VNC it shows a login prompt and I don’t know the credentials of the LXD instance, so I can’t log in.

By default those images don’t have a password for the root user. If you want one, you can set a password via contextualization, or you can change the command run by noVNC to bash, for example.
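As a sketch, assuming the one-context package is installed in the image, a CONTEXT section along these lines sets the password (the PASSWORD value here is a placeholder):

```
CONTEXT = [
  NETWORK  = "YES",
  PASSWORD = "changeme"  # placeholder; applied by the context scripts at boot
]
```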

Also, is there a way to change the hostname of the LXD instance? It’s always LXC_NAME.

This probably means that the contextualization package wasn’t installed on the image. You should check the container contextualization log to see what happened.
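Once contextualization is working, the hostname can be set from the template as well; a sketch, assuming the one-context package honors the SET_HOSTNAME attribute:

```
CONTEXT = [
  SET_HOSTNAME = "my-container"  # placeholder hostname, applied at boot by the context scripts
]
```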

The network issue is likely related to the lack of contextualization as well. Ultimately, you can always install the context package manually, but keep in mind that this should be done automatically when importing the app.