Network Configuration

Proxmox VE uses the Linux network stack. This provides a lot of flexibility in how to set up the network on the Proxmox VE nodes. The configuration can be done either via the GUI, or by manually editing the file /etc/network/interfaces , which contains the whole network configuration. The interfaces(5) manual page contains the complete format description. All Proxmox VE tools try hard to preserve direct user modifications, but using the GUI is still preferable, because it protects you from errors.

A vmbr interface is needed to connect guests to the underlying physical network. It is a Linux bridge, which can be thought of as a virtual switch to which the guests and physical interfaces are connected. This section provides some examples of how the network can be set up to accommodate different use cases, like redundancy with a bond , VLANs, or routed and NAT setups.

The Software Defined Network is an option for more complex virtual networks in Proxmox VE clusters.

Apply Network Changes

Proxmox VE does not write changes directly to /etc/network/interfaces . Instead, changes are written to a temporary file called /etc/network/interfaces.new , so that you can do many related changes at once. This also allows you to ensure your changes are correct before applying them, as a wrong network configuration may render a node inaccessible.

Live-Reload Network with ifupdown2

With the recommended ifupdown2 package (default for new installations since Proxmox VE 7.0), it is possible to apply network configuration changes without a reboot. If you change the network configuration via the GUI, you can click the Apply Configuration button. This will move changes from the staging interfaces.new file to /etc/network/interfaces and apply them live.

If you made manual changes directly to the /etc/network/interfaces file, you can apply them by running ifreload -a .

Reboot Node to Apply

Another way to apply a new network configuration is to reboot the node. In that case, the systemd service pvenetcommit will activate the staging interfaces.new file before the networking service applies that configuration.

Naming Conventions

We currently use the following naming conventions for device names:

Ethernet devices: en* , systemd network interface names. This naming scheme is used for new Proxmox VE installations since version 5.0.

Ethernet devices: eth[N] , where 0 ≤ N ( eth0 , eth1 , …) This naming scheme is used for Proxmox VE hosts which were installed before the 5.0 release. When upgrading to 5.0, the names are kept as-is.

Bridge names: vmbr[N] , where 0 ≤ N ≤ 4094 ( vmbr0 - vmbr4094 )

Bonds: bond[N] , where 0 ≤ N ( bond0 , bond1 , …)

VLANs: Simply add the VLAN number to the device name, separated by a period ( eno1.50 , bond1.30 )

This makes it easier to debug network problems, because the device name implies the device type.

Systemd Network Interface Names

Systemd defines a versioned naming scheme for network device names. The scheme uses the two-character prefix en for Ethernet network devices. The next characters depend on the device driver, device location and other attributes. Some possible patterns are:

o<index>[n<phys_port_name>|d<dev_port>] — devices on board

s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by hotplug id

[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id

x<MAC> — devices by MAC address

Some examples for the most common patterns are:

eno1 — is the first on-board NIC

enp3s0f1 — is function 1 of the NIC on PCI bus 3, slot 0

For a full list of possible device name patterns, see the systemd.net-naming-scheme(7) manpage .

A new version of systemd may define a new version of the network device naming scheme, which it then uses by default. Consequently, updating to a newer systemd version, for example during a major Proxmox VE upgrade, can change the names of network devices and require adjusting the network configuration. To avoid name changes due to a new version of the naming scheme, you can manually pin a particular naming scheme version (see below ).

However, even with a pinned naming scheme version, network device names can still change due to kernel or driver updates. In order to avoid name changes for a particular network device altogether, you can manually override its name using a link file (see below ).

For more information on network interface names, see Predictable Network Interface Names .

Pinning a specific naming scheme version

You can pin a specific version of the naming scheme for network devices by adding the net.naming-scheme=<version> parameter to the kernel command line . For a list of naming scheme versions, see the systemd.net-naming-scheme(7) manpage .

For example, to pin the version v252 , which is the latest naming scheme version for a fresh Proxmox VE 8.0 installation, add the following kernel command-line parameter:
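```
net.naming-scheme=v252
```

This parameter is appended to the existing kernel command line, for example in /etc/kernel/cmdline or in GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, depending on the bootloader in use.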

See also this section on editing the kernel command line. You need to reboot for the changes to take effect.

Overriding network device names

You can manually assign a name to a particular network device using a custom systemd.link file . This overrides the name that would be assigned according to the latest network device naming scheme. This way, you can avoid naming changes due to kernel updates, driver updates or newer versions of the naming scheme.

Custom link files should be placed in /etc/systemd/network/ and named <n>-<id>.link , where n is a priority smaller than 99 and id is some identifier. A link file has two sections: [Match] determines which interfaces the file will apply to; [Link] determines how these interfaces should be configured, including their naming.

To assign a name to a particular network device, you need a way to uniquely and permanently identify that device in the [Match] section. One possibility is to match the device’s MAC address using the MACAddress option, as it is unlikely to change. Then, you can assign a name using the Name option in the [Link] section.

For example, to assign the name enwan0 to the device with MAC address aa:bb:cc:dd:ee:ff , create a file /etc/systemd/network/10-enwan0.link with the following contents:
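Following the MACAddress/Name scheme described above, the file would contain:

```
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enwan0
```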

Do not forget to adjust /etc/network/interfaces to use the new name. You need to reboot the node for the change to take effect.

For more information on link files, see the systemd.link(5) manpage .

Choosing a network configuration

Depending on your current network organization and your resources you can choose either a bridged, routed, or masquerading networking setup.

Proxmox VE server in a private LAN, using an external gateway to reach the internet

The Bridged model makes the most sense in this case, and this is also the default mode on new Proxmox VE installations. Each of your guest systems will have a virtual interface attached to the Proxmox VE bridge. This is similar in effect to having the guest network card directly connected to a new switch on your LAN, with the Proxmox VE host playing the role of the switch.

Proxmox VE server at hosting provider, with public IP ranges for Guests

For this setup, you can use either a Bridged or Routed model, depending on what your provider allows.

Proxmox VE server at hosting provider, with a single public IP address

In that case the only way to get outgoing network accesses for your guest systems is to use Masquerading . For incoming network access to your guests, you will need to configure Port Forwarding .

For further flexibility, you can configure VLANs (IEEE 802.1q) and network bonding, also known as "link aggregation". That way it is possible to build complex and flexible virtual networks.

Default Configuration using a Bridge

Bridges are like physical network switches implemented in software. All virtual guests can share a single bridge, or you can create multiple bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named vmbr0 , which is connected to the first Ethernet card. The corresponding configuration in /etc/network/interfaces might look like this:
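A sketch of such a stock configuration follows; the interface name eno1 and the addresses are examples and will differ on your hardware:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```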

Virtual machines behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.

Routed Configuration

Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.

You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.

A common scenario is that you have a public IP (assume 198.51.100.5 for this example), and an additional IP block for your VMs ( 203.0.113.16/28 ). We recommend the following setup for such situations:
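A sketch of such a routed configuration, using the example addresses above (the interface name eno0 is a placeholder for your actual NIC):

```
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        # enable forwarding and proxy ARP, so that guest traffic is
        # routed out via the host's single MAC address
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

The bridge has no physical port; guest traffic reaches the outside world routed through eno0, so only the host's MAC address is visible on the provider's network.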

Masquerading (NAT) with iptables

Masquerading allows guests having only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed to the original sender.

Adding these lines in the /etc/network/interfaces can fix this problem:
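A sketch, assuming the public address sits on vmbr0 and a private guest network 10.10.10.0/24 on vmbr1 (interface names and addresses are examples):

```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        # enable forwarding and rewrite outgoing guest traffic to the host address
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
```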

For more information about this, refer to the following links:

Netfilter Packet Flow

Patch on netdev-list introducing conntrack zones

Blog post with a good explanation by using TRACE in the raw table

Linux Bond

Bonding (also called NIC teaming or Link Aggregation) is a technique for binding multiple NICs to a single network device. It makes it possible to achieve different goals, like making the network fault-tolerant, increasing performance, or both.

High-speed hardware like Fibre Channel and the associated switching hardware can be quite expensive. By doing link aggregation, two NICs can appear as one logical interface, resulting in double speed. This is a native Linux kernel feature that is supported by most switches. If your nodes have multiple Ethernet ports, you can distribute your points of failure by running network cables to different switches and the bonded connection will failover to one cable or the other in case of network trouble.

Aggregated links can improve live-migration delays and improve the speed of replication of data between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

Round-robin (balance-rr): Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.

Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.

XOR (balance-xor): Transmit network packets based on [(source MAC address XOR’d with destination MAC address) modulo NIC slave count]. This selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.

Broadcast (broadcast): Transmit network packets on all slave network interfaces. This mode provides fault tolerance.

IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP): Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification.

Adaptive transmit load balancing (balance-tlb): Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Adaptive load balancing (balance-alb): Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.

If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode.

For the cluster network (Corosync) we recommend configuring it with multiple networks. Corosync does not need a bond for network redundancy as it can switch between networks by itself, if one becomes unusable.

The following bond configuration can be used as a distributed/shared storage network. The benefit would be that you get more speed and the network will be fault-tolerant.
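For example, an LACP bond over eno1 and eno2 for storage, with a separate bridge on eno3 for guests (interface names and addresses are examples):

```
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
```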

Another possibility is to use the bond directly as the bridge port. This can be used to make the guest network fault-tolerant.
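A sketch with the bond as the bridge port, using the switch-independent active-backup mode (interface names and addresses are examples):

```
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```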

VLAN 802.1Q

A virtual LAN (VLAN) is a broadcast domain that is partitioned and isolated in the network at layer two. So it is possible to have multiple networks (4096) in a physical network, each independent of the other ones.

Each VLAN network is identified by a number, often called a tag . Network packets are then tagged to identify which virtual network they belong to.

VLAN for Guest Networks

Proxmox VE supports this setup out of the box. You can specify the VLAN tag when you create a VM. The VLAN tag is part of the guest network configuration. The networking layer supports different modes to implement VLANs, depending on the bridge configuration:

VLAN awareness on the Linux bridge: In this case, each guest’s virtual network card is assigned to a VLAN tag, which is transparently supported by the Linux bridge. Trunk mode is also possible, but that makes configuration in the guest necessary.

"traditional" VLAN on the Linux bridge: In contrast to the VLAN awareness method, this method is not transparent and creates a VLAN device with associated bridge for each VLAN. That is, creating a guest on VLAN 5 for example, would create two interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.

Open vSwitch VLAN: This mode uses the OVS VLAN feature.

Guest configured VLAN: VLANs are assigned inside the guest. In this case, the setup is completely done inside the guest and can not be influenced from the outside. The benefit is that you can use more than one VLAN on a single virtual NIC.

VLAN on the Host

To allow host communication with an isolated network, it is possible to apply VLAN tags to any network device (NIC, bond, bridge). In general, you should configure the VLAN on the interface with the least abstraction layers between itself and the physical NIC.

For example, in a default configuration, you may want to place the host management address on a separate VLAN.
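A sketch using a VLAN-aware bridge, with the management address on VLAN 5 (addresses are examples):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```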

The next example is the same setup but a bond is used to make this network fail-safe.
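A sketch of the bonded variant, with the VLAN configured on the bond in the traditional (non-VLAN-aware) style; interface names and addresses are examples:

```
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```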

Disabling IPv6 on the Node

Proxmox VE works correctly in all environments, irrespective of whether IPv6 is deployed or not. We recommend leaving all settings at the provided defaults.

Should you still need to disable support for IPv6 on your node, do so by creating an appropriate sysctl.conf (5) snippet file and setting the proper sysctls , for example adding /etc/sysctl.d/disable-ipv6.conf with content:
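For example:

```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

The settings take effect on the next boot, or immediately after running sysctl --system.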

This method is preferred to disabling the loading of the IPv6 module on the kernel command line .

Disabling MAC Learning on a Bridge

By default, MAC learning is enabled on a bridge to ensure a smooth experience with virtual guests and their networks.

But in some environments this can be undesirable. Since Proxmox VE 7.3 you can disable MAC learning on a bridge by setting the bridge-disable-mac-learning 1 option on a bridge in /etc/network/interfaces , for example:
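A sketch (the interface name ens18 and the addresses are examples):

```
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
        bridge-disable-mac-learning 1
```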

Once enabled, Proxmox VE will manually add the configured MAC addresses of VMs and containers to the bridge's forwarding database, to ensure that guests can still use the network - but only when they are using their actual MAC address.


Change the IP address of a Proxmox host (techbits.io)

Proxmox has come a long way since 2008, and the amount that can be configured in the frontend is quite staggering, but sometimes you long for the good old days and want to drop into the terminal. It's just Debian, after all... Hello? Where's everyone gone? Ok, fine, let's start with changing the IP through the web UI:

Changing the IP through the Web UI

There are two changes to make. First, go to System > Network, choose the 'Linux Bridge' interface and click Edit. Enter the new IP here and click OK.

Now click 'Apply Configuration' and the system will reload the network configuration.

Now go to System > Hosts, change the IP address on the line with the hostname and click Save.

Changing the IP with a One-Liner

TL;DR: here's a one-liner: sed -i 's/oldip/newip/g' /etc/network/interfaces /etc/hosts && systemctl restart networking

The above UI changes are essentially just updating these files:

  • /etc/network/interfaces - This updates the actual NIC/interface address
  • /etc/hosts - If a server hostname is in the hosts file, a system won't perform a DNS lookup for that address. Therefore, host/IP pairings in the hosts file need to be correct.

While you could edit both files manually and change the old IP to the new IP, that's just not as fun as using a one-liner to do it for you! In this example, 192.168.10.177 is the old IP and 192.168.10.10 is the new IP:
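```
sed -i 's/192.168.10.177/192.168.10.10/g' /etc/network/interfaces /etc/hosts && systemctl restart networking
```

As with any in-place sed on system config files, consider backing both files up first.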

If you can do it with a one-liner, I'm certainly going to try. Hopefully, this helps you, but if you think I've missed something, please leave a comment or let me know over at @techbitsio .


Proxmox Support Forum


[SOLVED]   Change the ip-address of the Proxmox VE cluster

  • Thread starter mugr
  • Start date Jul 17, 2019
  • Proxmox Virtual Environment
  • Proxmox VE: Installation and configuration
  • Jul 17, 2019

Hi! Help me, please! My version of Proxmox VE is 5.4-11. I made a Proxmox cluster (while there is only one node in the cluster). When creating the cluster, the external network interface was automatically used by default: ip 109.178.xx.xx/29. I needed to change the external IP address for the cluster to the internal 192.168.1.5, in order for the cluster to work in the local network 192.168.1.0/24. I did everything using these instructions: https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_edit_corosync_conf Everything seemed to work, "systemctl status corosync" did not report any errors. But in the web interface in the section "Datacenter -> Cluster -> Join Information", as before, the old IP address of the external network is shown: 109.178.xxx.xx How can this be changed? And will it affect the operation of the cluster functions?

My "corosync.conf":

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: eldprx1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.5
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: EldisProxmox
  config_version: 2
  interface {
    bindnetaddr: 192.168.1.5
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

My "/etc/hosts":

Code:
127.0.0.1 localhost.localdomain localhost
109.178.xx.xx prx1.local prx1 pvelocalhost
192.168.1.5 prx1.local prx1 pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

My "/etc/network/interfaces":

Code:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug enp96s0f0
auto enp96s0f0
iface enp96s0f0 inet manual
        dns-nameservers 8.8.8.8
        dns-search local

allow-hotplug enp96s0f1
auto enp96s0f1
iface enp96s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 109.178.xx.xx
        netmask 29
        gateway 109.178.xx.xx
        bridge-ports enp96s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.5
        netmask 24
        bridge-ports enp96s0f1
        bridge-stp off
        bridge-fd 0

Can anyone help me with this problem please? What settings need to be made yet?  

t.lamprecht (Proxmox staff member)

mugr said: Can anyone help me with this problem please? What settings need to be made yet?

Thanks for the answer! But if I specify only one local entry in "/etc/hosts" - "192.168.1.5 prx1.local prx1 pvelocalhost" - then the web interface will also be available only at the local address 192.168.1.5:8006. But I need the web interface to be available at the external address 109.178.xx.xx:8006, and the cluster connected and working at the internal local address 192.168.1.5. How can I do it? I'm confused, sorry!

mugr said: But if I specify only one local entry in "/etc/hosts" - "192.168.1.5 prx1.local prx1 pvelocalhost" - then the web interface will also be available only at the local address 192.168.1.5:8006.

The address/network used for the nodename in /etc/hosts is just the default for most operations; for example, live migration of VMs will be sent over that one, and Proxmox VE nodes will use it to talk with each other and redirect API requests in a cluster, ...

So, I need my "/etc/hosts" to take the following form:

Code:
127.0.0.1 localhost.localdomain localhost
192.168.1.5 prx1.local prx1 pvelocalhost
109.178.xx.xx prx1.local prx1 pvelocalhost

In this case, the web interface section "Datacenter -> Cluster -> Join Information" will show the local address 192.168.1.5, and management via the web interface will remain, as before, on the external address 109.178.xx.xx? Did I understand correctly?

As said two entries with the same name have no use, the first found will always be used. You could just remove (or rename) the second "prx1" entry and be also good.  

t.lamprecht said: As said two entries with the same name have no use, the first found will always be used. You could just remove (or rename) the second "prx1" entry and be also good.
mugr said: But then I will lose access to the web interface through the external address 109.178.xx.xx.

Thank you very much! I changed the entries in "/etc/hosts", executed "service pve-cluster restart", and everything works well.  

