RTNETLINK answers: Network is unreachable

I’ve changed the default libvirtd network from 192.168.122.X to 10.0.0.X (bridge IP 10.0.0.1) using virsh net-edit default, then reconfigured the OpenNebula network to hand out 10.0.0.100 and up. However, when I edit an instantiated VM to remove the earlier subnet and add a NIC on 10.0.0.X, I can’t even add a route on the VM interface. It just gives me:

RTNETLINK answers: Network is unreachable

No routes are present on the VM, and there’s no default gateway. I restarted iptables as well so the new subnet changes would take effect, and the changes were reflected in iptables.
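For context, the sequence inside the VM looks roughly like this (the interface name and addresses below are examples, not my exact ones):

```shell
# Inside the VM; eth0 and the addresses are example values.
ip addr add 10.0.0.100/24 dev eth0    # if this step is skipped, or the prefix
ip link set eth0 up                   # is wrong, the next command fails with
ip route add default via 10.0.0.1     # "RTNETLINK answers: Network is
                                      # unreachable", because the kernel has no
                                      # on-link route covering the gateway
```

In my experience that RTNETLINK error on a `via` route almost always means the gateway address isn’t covered by any connected route on the interface.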

What am I missing?

Also, when modifying a template (whether a VM template or a NIC/network one), are the changes reflected in already-instantiated VMs?

Thx,
TK

My configuration:

[oneadmin@one01 ~]$ onevnet show 0
VIRTUAL NETWORK 0 INFORMATION
ID                       : 0
NAME                     : virt-mds01
USER                     : oneadmin
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 100
BRIDGE                   : virbr0
VN_MAD                   : bridge
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="virbr0"
BRIDGE_TYPE="linux"
DESCRIPTION="Virtual Network 01"
OUTER_VLAN_ID=""
PHYDEV=""
SECURITY_GROUPS="0"
VLAN_ID=""
VN_MAD="bridge"

ADDRESS RANGE POOL
AR 0
SIZE           : 154
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         08:08:08:A0:B0:01                  08:08:08:a0:b0:9a
IP                                 10.0.0.100                         10.0.0.253


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:17            08:08:08:a0:b0:01      10.0.0.100                          -

VIRTUAL ROUTERS
[oneadmin@one01 ~]$

Node network config:

[root@mdskvm-p02 network-scripts]# virsh list
 Id    Name                           State
----------------------------------------------------
 5     one-17                         running

[root@mdskvm-p02 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
3: enp2s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 78:e7:d1:8c:b1:bc brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master onebr01 state UP group default
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
       valid_lft forever preferred_lft forever
11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 26:bf:20:53:5f:a7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::24bf:20ff:fe53:5fa7/64 scope link
       valid_lft forever preferred_lft forever
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether ce:39:a5:50:4e:8b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cc39:a5ff:fe50:4e8b/64 scope link
       valid_lft forever preferred_lft forever
23: onebr01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.39/23 brd 192.168.1.255 scope global onebr01
       valid_lft forever preferred_lft forever
    inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
       valid_lft forever preferred_lft forever
25: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:54:00:50:e9:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global virbr0
       valid_lft forever preferred_lft forever
26: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 500
    link/ether 52:54:00:50:e9:20 brd ff:ff:ff:ff:ff:ff
28: one-17-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 500
    link/ether fe:08:08:a0:b0:01 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc08:8ff:fea0:b001/64 scope link
       valid_lft forever preferred_lft forever
[root@mdskvm-p02 network-scripts]#

Routing:

[root@mdskvm-p02 network-scripts]# ip route
default via 192.168.0.1 dev onebr01
10.0.0.0/24 dev virbr0 proto kernel scope link src 10.0.0.1
169.254.0.0/16 dev onebr01 scope link metric 1023
192.168.0.0/23 dev onebr01 proto kernel scope link src 192.168.0.39
[root@mdskvm-p02 network-scripts]#

[oneadmin@one01 ~]$ onevm list
    ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
    17 oneadmin oneadmin one-jump01.nix. runn  1.0    4.1G mdskvm-p02   0d 08h33


[oneadmin@one01 ~]$ onevm show 17
VIRTUAL MACHINE 17 INFORMATION
ID                  : 17
NAME                : one-jump01.nix.mds.xyz
USER                : oneadmin
GROUP               : oneadmin
STATE               : ACTIVE
LCM_STATE           : RUNNING
LOCK                : None
RESCHED             : No
HOST                : mdskvm-p02.nix.mds.xyz
CLUSTER ID          : 100
CLUSTER             : kvm-c01
START TIME          : 11/11 16:45:43
END TIME            : -
DEPLOY ID           : one-17

VIRTUAL MACHINE MONITORING
CPU                 : 1.0
MEMORY              : 4.1G
NETTX               : 0K
NETRX               : 67K
DISKRDBYTES         : 16843994636
DISKRDIOPS          : 350822
DISKWRBYTES         : 1539492352
DISKWRIOPS          : 35032

PERMISSIONS
OWNER               : um-
GROUP               : ---
OTHER               : ---

VM DISKS
 ID DATASTORE  TARGET IMAGE                               SIZE      TYPE SAVE
  0 mdskvmgv-c hdc    raw - 64G                           1.1G/64G  fs     NO
  1 mdskvm-pc0 hda    Cent OS 7 Everything - GV 0 - IMG 0 7.2G/7.2G cdro   NO
  2 -          hdb    CONTEXT                             1M/-      -       -

VM NICS
 ID NETWORK              BRIDGE       IP              MAC               PCI_ID
  0 virt-mds01           virbr0       10.0.0.100      08:08:08:a0:b0:01

SECURITY

NIC_ID NETWORK                   SECURITY_GROUPS
     0 virt-mds01                0

SECURITY GROUP   TYPE     PROTOCOL NETWORK                       RANGE
  ID NAME                          VNET START             SIZE
   0 default     OUTBOUND ALL
   0 default     INBOUND  ALL

VIRTUAL MACHINE HISTORY
SEQ UID  REQ   HOST         ACTION       DS           START        TIME     PROLOG
  0 0    9600  mdskvm-p02.n poweroff-h  118  11/11 16:46:01   0d 01h44m   0h01m44s
  1 0    4384  mdskvm-p02.n nic-detach  118  11/11 18:38:04   0d 02h01m   0h00m00s
  2 0    960   mdskvm-p02.n nic-attach  118  11/11 20:40:01   0d 00h02m   0h00m00s
  3 0    8800  mdskvm-p02.n nic-attach  118  11/11 20:42:35   0d 00h06m   0h00m00s
  4 0    5584  mdskvm-p02.n nic-attach  118  11/11 20:49:12   0d 00h05m   0h00m00s
  5 0    4304  mdskvm-p02.n nic-detach  118  11/11 20:54:41   0d 02h20m   0h00m00s
  6 0    5776  mdskvm-p02.n nic-detach  118  11/11 23:14:48   0d 00h02m   0h00m00s
  7 0    6128  mdskvm-p02.n nic-detach  118  11/11 23:17:04   0d 00h02m   0h00m00s
  8 0    176   mdskvm-p02.n nic-attach  118  11/11 23:19:16   0d 00h08m   0h00m00s
  9 0    6080  mdskvm-p02.n nic-detach  118  11/11 23:27:52   0d 00h49m   0h00m00s
 10 0    4240  mdskvm-p02.n nic-attach  118  11/12 00:17:32   0d 00h02m   0h00m00s
 11 0    1120  mdskvm-p02.n nic-detach  118  11/12 00:20:17   0d 00h03m   0h00m00s
 12 0    3696  mdskvm-p02.n nic-attach  118  11/12 00:24:15   0d 00h09m   0h00m00s
 13 -    -     mdskvm-p02.n none        118  11/12 00:33:30   0d 00h45m   0h00m00s

USER TEMPLATE
DESCRIPTION="rhel7-template"
ERROR="Mon Nov 11 18:24:47 2019 : Error shutting down VM: Timed out shutting down one-17"
HYPERVISOR="kvm"
INPUTS_ORDER=""
LOGO="images/logos/centos.png"
MEMORY_UNIT_COST="MB"
SCHED_DS_REQUIREMENTS="ID=\"118\""
SCHED_MESSAGE="Mon Nov 11 16:46:01 2019 : Cannot dispatch VM to any Host. Possible reasons: Not enough capacity in Host or System DS, dispatch limit reached, or limit of free leases reached."
SCHED_REQUIREMENTS="ID=\"2\" | ID=\"3\" | CLUSTER_ID=\"100\""

VIRTUAL MACHINE TEMPLATE
AUTOMATIC_DS_REQUIREMENTS="(\"CLUSTERS/ID\" @> 100)"
AUTOMATIC_NIC_REQUIREMENTS="(\"CLUSTERS/ID\" @> 100)"
AUTOMATIC_REQUIREMENTS="(CLUSTER_ID = 100) & !(PUBLIC_CLOUD = YES)"
CONTEXT=[
  DISK_ID="2",
  ETH0_CONTEXT_FORCE_IPV4="",
  ETH0_DNS="",
  ETH0_EXTERNAL="",
  ETH0_GATEWAY="",
  ETH0_GATEWAY6="",
  ETH0_IP="10.0.0.100",
  ETH0_IP6="",
  ETH0_IP6_PREFIX_LENGTH="",
  ETH0_IP6_ULA="",
  ETH0_MAC="08:08:08:a0:b0:01",
  ETH0_MASK="",
  ETH0_MTU="",
  ETH0_NETWORK="",
  ETH0_SEARCH_DOMAIN="",
  ETH0_VLAN_ID="",
  ETH0_VROUTER_IP="",
  ETH0_VROUTER_IP6="",
  ETH0_VROUTER_MANAGEMENT="",
  NETWORK="YES",
  SSH_PUBLIC_KEY="",
  TARGET="hdb" ]
CPU="2"
GRAPHICS=[
  LISTEN="0.0.0.0",
  PORT="5917",
  TYPE="VNC" ]
MEMORY="4096"
MEMORY_COST="5"
OS=[
  ARCH="x86_64",
  BOOT="disk1,disk0" ]
TEMPLATE_ID="2"
TM_MAD_SYSTEM="shared"
VCPU="2"
VMID="17"
[oneadmin@one01 ~]$

KVM subnet definition:

[root@mdskvm-p02 network-scripts]# virsh net-edit default
<network>
  <name>default</name>
  <uuid>3e1181a4-d387-4d0b-adab-91c6c19c854b</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:50:e9:20'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.2' end='10.0.0.254'/>
    </dhcp>
  </ip>
</network>
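One thing worth noting here: virsh net-edit only changes the persistent network definition; the live network keeps running with the old addressing until it is bounced, along these lines:

```shell
# Restart the libvirt network so the edited definition (and its dnsmasq
# instance) pick up the new 10.0.0.0/24 range:
virsh net-destroy default    # stops the running network, not the VMs
virsh net-start default
virsh net-dumpxml default    # confirm the live network now shows 10.0.0.1
```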

What NIC type are you using? Have you tried using virtio as the default NIC to emulate? I know some operating systems (Fortigate VM, for example) won’t use the Realtek NIC, so virtio is a must.

I’m sorry, I’m not sure what you mean by NIC type and virtio in this context. The closest I can think of is that virbr0 and onebr01 are both bridges. Is this what you mean?

I tried using both virbr0 (OpenNebula defined) and onebr01 (my own bridge on the compute host) with no effect. one-17-0 is of course the OpenNebula VM NIC.

When virbr0 had the default libvirtd IP range of 192.168.122.X, I could assign an IP to the VM (one-17-0) and was then able to ping at least the 192.168.122.1 address set on virbr0. After changing the IP of the default libvirtd network to 10.0.0.X, setting the IP on the VM results in the error in the subject.

Thx,
TK

Ahhh OK, I get you now: you are trying to change the IP of the default virbr0 interface. I think that usually gets a bit messy, as it’s the preconfigured NIC for guest VMs in NAT mode with KVM/libvirt.

Could you not create a new bridge interface, attach VMs to it, add an IP to the bridge to act as the gateway for the VMs, and set up your own iptables rules for the NAT?
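Something along these lines, roughly (the bridge name and subnet are just examples):

```shell
# Hand-rolled bridge + NAT sketch; adjust names and subnets to your setup.
ip link add name mybr0 type bridge
ip link set mybr0 up
ip addr add 10.0.0.1/24 dev mybr0        # bridge IP doubles as the VM gateway

echo 1 > /proc/sys/net/ipv4/ip_forward   # let the host route between subnets

# Masquerade VM traffic leaving via any other interface:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o mybr0 -j MASQUERADE
iptables -A FORWARD -i mybr0 -j ACCEPT
iptables -A FORWARD -o mybr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```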

I considered that, but wanted to keep the setup clean and use a single bridge and a single subnet.

I actually did create a new bridge, but not through libvirtd (see above). I’m thinking you mean through libvirtd.

Customizing extra NAT rules would be more involved than I would prefer. If I can keep the whole setup within, and controlled by, libvirtd, I would prefer that.

Thx,
TK

Yeah, I know what you mean. In that case I would look at the libvirt and KVM guides for changing the IP of the virbr0 interface.

Would anyone be able to share a screenshot or two of how they got theirs working, and what settings they used to allow communication in or out?

I wanted to start from a known-good state, and if that doesn’t work, I can rule out ON or CentOS configs as the culprit.

Thx,
TK

So I managed to move this along, but in a rather odd manner. Here’s the new config:

[root@one01 ~]# onevnet show 0
VIRTUAL NETWORK 0 INFORMATION
ID                       : 0
NAME                     : virt-mds01
USER                     : oneadmin
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 100
BRIDGE                   : onebr01
VN_MAD                   : bridge
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="onebr01"
BRIDGE_TYPE="linux"
DESCRIPTION="Virtual Network 01"
OUTER_VLAN_ID=""
PHYDEV=""
SECURITY_GROUPS="0"
VLAN_ID=""
VN_MAD="bridge"

ADDRESS RANGE POOL
AR 0
SIZE           : 154
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         08:08:08:A0:B0:01                  08:08:08:a0:b0:9a
IP                                 10.0.0.100                         10.0.0.253


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:19            08:08:08:a0:b0:01      10.0.0.100                          -

VIRTUAL ROUTERS
[root@one01 ~]#

Now pinging out to the outside works, assuming I use an outside DNS server such as 8.8.8.8. SSHing in to the VM also works. But the VM is on the 192.168.0.X IP range, not the 10.0.0.100+ range specified in the virtual network definition. And I can’t ssh from the VM to anything on my local 192.168.0.X network, even though I can 1) ssh into the VM just fine and 2) get an IP from the DHCP server, which resides on the 192.168.0.X network but is external to the compute node.

onebr01 is my compute node bridge (see above for details); virbr0 is my libvirtd one. Even though OpenNebula lists 10.0.0.100 as my IP, that address is assigned nowhere. I get my local-network IP from the local DHCP server.

This is getting me closer, but it doesn’t seem right that I’m not going through the libvirtd bridge.

EDIT:
So I can partially reach the local network, but only hosts for which the ARP table has a valid MAC instead of an incomplete entry:

[root@mdskvm-p02 ~]# ssh 192.168.0.85
root@192.168.0.85's password:
Last login: Tue Nov 19 01:55:26 2019 from 192.168.0.105
[root@one-entry01 ~]#
[root@one-entry01 ~]#
[root@one-entry01 ~]#
[root@one-entry01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:08:08:a0:b0:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.85/24 brd 192.168.0.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::a08:8ff:fea0:b001/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@one-entry01 ~]#
[root@one-entry01 ~]#
[root@one-entry01 ~]#
[root@one-entry01 ~]# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.39             ether   78:e7:d1:8c:b1:ba   C                     ens3
192.168.0.105                    (incomplete)                              ens3
192.168.0.224                    (incomplete)                              ens3
192.168.0.44                     (incomplete)                              ens3
192.168.0.60                     (incomplete)                              ens3
192.168.0.1              ether   40:16:7e:a2:62:12   C                     ens3
192.168.0.154                    (incomplete)                              ens3
[root@one-entry01 ~]# ssh 192.168.0.39
Password:
Last login: Tue Nov 19 02:05:55 2019
[root@mdskvm-p02 ~]#

Interestingly, the ARP table has missing MACs, but only for the subnet defined in libvirtd and through the OpenNebula interface:

[root@mdskvm-p02 ~]# arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.0.88             ether   78:e7:d1:8f:4d:26   C                     onebr01
192.168.0.224            ether   02:bf:c0:a8:00:e0   C                     onebr01
192.168.0.44             ether   00:50:56:86:0d:fa   C                     onebr01
192.168.0.45             ether   00:50:56:86:7e:ec   C                     onebr01
10.0.0.101                       (incomplete)                              virbr0
10.0.0.100                       (incomplete)                              virbr0
192.168.0.85             ether   08:08:08:a0:b0:01   C                     onebr01
192.168.0.105            ether   00:50:56:86:e7:7a   C                     onebr01
192.168.0.1              ether   40:16:7e:a2:62:12   C                     onebr01
192.168.0.2              ether   78:24:af:e6:d9:78   C                     onebr01
192.168.0.111            ether   08:08:08:a0:b0:01   C                     onebr01
192.168.0.76             ether   14:da:e9:19:95:b5   C                     onebr01
192.168.0.222            ether   02:bf:c0:a8:00:e0   C                     onebr01
192.168.0.60             ether   78:e7:d1:8f:4d:26   C                     onebr01
10.0.0.1                         (incomplete)                              virbr0
[root@mdskvm-p02 ~]#
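A quick way to confirm whether this is a layer-2 problem (bypassing routing entirely) is arping from the host, e.g.:

```shell
# arping sends ARP requests directly on the given interface; if these get no
# reply, the problem is at layer 2 (bridge/bond), not routing or iptables.
arping -I virbr0 -c 3 10.0.0.100
arping -I onebr01 -c 3 192.168.0.85
```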

Thx,
TK

Resolved this. I changed the bonding interface (bond0) configuration to mode=6 (balance-alb) instead of mode=2 (balance-xor). This resolved the incomplete MAC address entries and a host of other network-related issues on the compute nodes.
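For anyone hitting the same thing, the change was roughly this in the bond config (exact options will vary by setup):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (excerpt)
# mode=2 is balance-xor; mode=6 is balance-alb, which does ARP-based receive
# load balancing and behaved much better with bridged VM traffic here.
BONDING_OPTS="mode=6 miimon=100"
```

followed by restarting the network service so the bond re-enslaves its interfaces with the new mode.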

I don’t believe the other components are fully tuned yet on my setup, since automatic IP assignment still doesn’t work (hints?), but inbound and outbound communication works just fine.

Full writeup here:

Cheers,
TK