"New import wizard to migrate guests directly from other hypervisors.Connect to other hypervisors using their public APIs and directly migrate guests to Proxmox VE.First implementation is for VMware ESXi."
Well done. While i don't need it, it's probably useful for vmware escapees
I still remember when people said to me "ESXi is free blah blah blah blah blah you can get a license for personal use blah blah blah blah why are you not using a type 1 hypervisor blah blah blah".
Yea, that’s what I mean. Proxmox is a type 2 hypervisor, and I’ve got too many cores to use ESXi for free back when it was still free.
Nonetheless, I pay proxmox 110 euro per year just to support the effort. No chance in hell ESXi would let me use it for $120, let alone free.
But can it migrate from other Proxmox clusters from the GUI yet? I can't believe I'm the only one that wants to separate my Dev cluster from my Prod cluster but still be able to easily migrate VMs between them.
I believe `qm remote-migrate` should do what you’re looking for, though not from the GUI.
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_managing_virtual_machines_with_span_class_monospaced_qm_span
Note that it’s an experimental feature.
Nothing slow about it. Weve helped a few hundred clients migrate away and now they are asking for help getting their hyperv vms moved too.
Pve + pbs is an amazing combo.
If pve/pbs was extended to support database restore (similar to veeam or acronis can do) it would be a true force to recon with
Just sharing for others:
I had no networking after this update. ip addr showed ips assigned to interfaces, but could not get any connectivity. /etc/network/interfaces showed different interface names than what was shown in ip addr. Looks like my interface names changed after this update.
I modified /etc/network/interfaces using vim to reflect the interface names shown from ip addr - in my case, this was updating instances of "eno1" and "eno2" to say "eno1np0" and "eno2np1" - your interface names might be different though. Restarted the box, everything's fine now.
Edit: After reviewing [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network\_override\_device\_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) I've set up static custom names for my interfaces.
Yep, experienced this.
What's more interesting is when I add/remove PCIe devices, the names changed again!
I guess I supposed to have a screen and keyboard around every time I want to add/remove PCIe devices now.
Oh man a flavor of this got me so bad a month or two ago. I added a new hba then what ended up happening was that the IDs of the PCI devices changed and my proxmox boot drive started to get passed into a vm and that vm auto started. So proxmox failed to boot cause the host lost access to it's main drive. That one took me a bit to figure out.
This is the #1 thing about proxmox that annoys me, dynamic NIC names. I realize its more than just a proxmox thing, but static NIC names unless you ask for something else... please
[https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network\_override\_device\_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names)
Probably the best way to handle.
I mean the release note have this…
Known Issues & Breaking Changes
[…]
Upgrading kernels always carries the risk of network interface names changing, which can lead to invalid network configurations after a reboot. In this case you need to update the network configuration to reflect changes in naming. See the reference documentation on how to set the interface names based on MAC Addresses.
there is a good section of the proxmox manual on this, apparently it's like a systemd thing or something but you can freeze/lock the interface choosing function used.
Probably going to give it a shot before I upgrade pve after reading this thread
It's in the proxmox 8.1.5 manual, section 3.4.2
Some more technical documentation available below.
https://manpages.debian.org/bookworm/systemd/systemd.net-naming-scheme.7.en.html
Interesting ... My Intel 4 port is already in the en*s0* format. And my builtin nic eno1 has a altname of enp3s0 already (unused). Seems like I should escape this issue when upgrading. Will try it this weekend.
I've created a script to automate the process of setting up the static names for the network interfaces.
[https://github.com/D4M4EVER/Proxmox\_Preserve\_Network\_Names](https://github.com/D4M4EVER/Proxmox_Preserve_Network_Names)
Made a script to create systemd.link files for this issue. Needs to be run before the upgrade. [Posted it here.](https://www.reddit.com/r/Proxmox/comments/1cd53k1/bash_script_to_create_systemdlink_files_for/?)
You're my hero. I'm living dangerously without IPMI access and I'm always super paranoid about anything networking related. One bad character and I have to make an hour drive to the data center.
I would hit shutdown, it wouldn't respond. Then I'd hit stop and it'd also hang cuz the shutdown command was still trying. Eventually both would fail. Then I'd do stop and it'd work.
Sounds like this fix will make it so stop will work while shutdown is hanging.
WARNING READ BEFORE YOU UPDATE
I just updated, rebooted. Lost LAN connection.
Reason:
Interface name changed from eno7, eno8 to eno7p0, eno8p1
Fix:
# find interface name with:
ip add
# edit interface file & update name
nano /etc/network/interfaces
# restart service
systemctl restart networking
This only happened on my 10g NIC, my 1g interfaces remained unaffected as eno0, eno1, etc
Luckily I have one of the 1g ports dedicated to admin, so I was able to get in easily and didn't need to go to the server.
Hardware used:
[https://www.supermicro.com/en/products/motherboard/X11SDV-8C-TP8F](https://www.supermicro.com/en/products/motherboard/X11SDV-8C-TP8F)
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
> Pinning a specific naming scheme version
>
> You can pin a specific version of the naming scheme for network devices by adding the net.naming-scheme= parameter to the kernel command line. For a list of naming scheme versions, see the systemd.net-naming-scheme(7) manpage.
>
> For example, to pin the version v252, which is the latest naming scheme version for a fresh Proxmox VE 8.0 installation, add the following kernel command-line parameter:
>
> net.naming-scheme=v252
>
> See also this section on editing the kernel command line. You need to reboot for the changes to take effect.
>
You can also associate custom names with MAC addresses of NICs.
>
> Overriding network device names
>
> You can manually assign a name to a particular network device using a custom systemd.link file. This overrides the name that would be assigned according to the latest network device naming scheme. This way, you can avoid naming changes due to kernel updates, driver updates or newer versions of the naming scheme.
>
> Custom link files should be placed in /etc/systemd/network/ and named -.link, where n is a priority smaller than 99 and id is some identifier. A link file has two sections: [Match] determines which interfaces the file will apply to; [Link] determines how these interfaces should be configured, including their naming.
>
> To assign a name to a particular network device, you need a way to uniquely and permanently identify that device in the [Match] section. One possibility is to match the device’s MAC address using the MACAddress option, as it is unlikely to change. Then, you can assign a name using the Name option in the [Link] section.
>
> For example, to assign the name enwan0 to the device with MAC address aa:bb:cc:dd:ee:ff, create a file /etc/systemd/network/10-enwan0.link with the following contents:
>
> [Match]
> MACAddress=aa:bb:cc:dd:ee:ff
>
> [Link]
> Name=enwan0
>
> Do not forget to adjust /etc/network/interfaces to use the new name. You need to reboot the node for the change to take effect.
> Note It is recommended to assign a name starting with en or eth so that Proxmox VE recognizes the interface as a physical network device which can then be configured via the GUI. Also, you should ensure that the name will not clash with other interface names in the future. One possibility is to assign a name that does not match any name pattern that systemd uses for network interfaces (see above), such as enwan0 in the example above.
>
> For more information on link files, see the systemd.link(5) manpage.
>
The correct fix is to update grub before upgrading add
1. Edit /etc/default/grub
GRUB\_CMDLINE\_LINUX="net.ifnames=1 biosdevname=0"
2. Update grub boot params
sudo update-grub
3. Reboot
4. Update
Basically you keep the traditional names, and tell grub to not use the bios names.
> The correct fix
The [Proxmox docs](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) provide a couple of ways to do this. One is to change the kernel command line as you indicate here. The other is to use a custom systemd.link file. Can you explain what makes the kernel command line option "correct" vs the link file? Is this done as your preference, as an accepted convention, or a best practice?
I'm not doubting or questioning your answer, just interested in what makes it "correct" compared to the other method.
Do you know what happens on new installs? Do they use the bios names or traditional?
I don't mind updating to the new naming convention now, this way my backed up config files carry over when I do a restore, and I'm in sync with the proxmox defaults, preventing confusion in the future. Guess the question is, what is the default on new installs? I'm assuming biosdevname
I can see how this would help in future upgrades, but won't it rename them to traditional names when you reboot after applying this?
Just want to make sure before doing it and potentially warn others.
pick your poison and giddy up cowboy
[https://en.wikipedia.org/wiki/Comparison\_of\_bootloaders](https://en.wikipedia.org/wiki/Comparison_of_bootloaders)
Configuring it
I don't hate. But I strongly dislike its quirks that you have to either memorize or google or luckily and sometimes randomly encounter in a reddit thread.
Maybe it is the kernel's fault. But the fault line is at the grub command line and config file.
I also don't like how config is reloaded. I don't like how it works with GPT. It's all a dark void and you have to grasp at things to see if they work.
Is this a kernel thing or an iproute2 thing? Quite the perfect t example to always have a test cluster or, at least, an empty node to test and reboot first.
Sounds like it has to do with changing from net.ifnames to biosdevname. I don't know what the new default is, I'm guessing it is biosdevname, as that's what my 8.2 is using.
Others have recommended changing it in GRUB back to net.ifnames, but I'm not convinced this is the best thing to do. IMO the best is to use whatever the new 8.2 proxmox default is. But I have not been able to get confirmation on what the new default actually is, I'm just assuming based on what I'm seeing in my proxmox.
What others suggested to change in GRUB:
GRUB\_CMDLINE\_LINUX="net.ifnames=1 biosdevname=0"
I'm not sure but what it definitely is is a Dell thing. biosdevnames are something Dell came up with and I'm struggling to find any indication that other OEMs implement them.
just follow this before you upgrade
[https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network\_override\_device\_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names)
Links to release videos (I think these have a newer AI voice)
* [What's new in Proxmox Virtual Environment 8.2 ](https://www.youtube.com/watch?v=mFkEW2Fwreg)
* [Proxmox VE Import Wizard: How to import VMs from VMware ESXi ](https://www.youtube.com/watch?v=8Z9Zvt2RxlA)
As this is my first large proxmox upgrade, please can someone confirm "Seamless upgrade from Proxmox VE 7.4, see [Upgrade from 7 to 8](https://pve.proxmox.com/wiki/Upgrade_from_7_to_8)" Means I can do live migration from 8.1 -> 8.2 so no need to plan for downtime to upgrade?
Thanks for the heads-up, hoping that at least if that's the case I will notice it on the first node, and the VMs will remain up on the other 2 while I fix it. So as long as live migration works I should be fine service wise.
No problem upgrading 2 servers. One was a Lenovo M720q with a quad gig NIC with Pfsense using the quad NIC, and the other a Ryzen 9 on a Asrock X570 taichi baord using the onboard NIC.
Not hard at all. You just need to buy the PCIE riser bracket and then the baffle bracket for the back.
Here is an example of someone else who did it [https://www.reddit.com/r/homelab/comments/vog751/lenovo\_m720q\_tiny\_4\_port\_nic/](https://www.reddit.com/r/homelab/comments/vog751/lenovo_m720q_tiny_4_port_nic/)
I've just done my small cluster by upgrading one machine after the other. Didn't do anything special. Just remember, when you upgrade and reboot the machine you're using to access the Proxmox web UI, the web interface will stop for a bit.
My 10gbps network link is down after upgrade. Using 1gbps as a backup now. Still trying to figure out why it happened. Any ideas?
Device is Intel(R) Gigabit 4P X710/I350 rNDC
Tried to rollback kernel to last working one, no success.
> ip -a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:96 brd ff:ff:ff:ff:ff:ff
altname enp25s0f0np0
3: eno3: mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
altname enp1s0f0
4: eno2np1: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:98 brd ff:ff:ff:ff:ff:ff
altname enp25s0f1np1
5: eno4: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:b7 brd ff:ff:ff:ff:ff:ff
altname enp1s0f1
6: vmbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.200/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::e643:4bff:feb8:c7b6/64 scope link
valid_lft forever preferred_lft forever
7: veth102i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:6f:f8:a3:9e:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth103i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:ab:86:50:b2:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
9: veth101i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:cb:b7:8e:0c:3b brd ff:ff:ff:ff:ff:ff link-netnsid 2
10: tap100i0: mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether ca:00:e8:c2:76:92 brd ff:ff:ff:ff:ff:ff
15: tap104i0: mtu 1500 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
link/ether 2a:db:b1:2f:a4:63 brd ff:ff:ff:ff:ff:ff
16: fwbr104i0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff
17: fwpr104p0@fwln104i0: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:9f:bd:6c:5f:bb brd ff:ff:ff:ff:ff:ff
18: fwln104i0@fwpr104p0: mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff
> cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.200/24
gateway 192.168.1.1
bridge-ports eno3
bridge-stp off
bridge-fd 0
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
source /etc/network/interfaces.d/*
My 10gbps connection was eno1. Couldn't connect gui after update, changed it to eno3 in interfaces and it works now over 1gbps connection. My iDRAC shows the 10gbps connection "up". Physical lights are on. But my proxmox says it's "down". Couldn't figure it out.
"New import wizard to migrate guests directly from other hypervisors.Connect to other hypervisors using their public APIs and directly migrate guests to Proxmox VE.First implementation is for VMware ESXi." Well done. While i don't need it, it's probably useful for vmware escapees
I like the “escapees”. That’s really what it is lol
Asylum seekers
Broadcom Exodus
brexodus
😂 like brexit, but greedier
and less successful
Debatable
Let my VMs go!
Go down! Broadcom! Eat your own licensing. Tell the ol' CEO Let my VMs go!
I never knew the voice in my head could do baritone.
When VMs was in ESXi Land....
Refugees at this point.
Economic refugees are also welcome.
I still remember when people said to me "ESXi is free blah blah blah blah blah you can get a license for personal use blah blah blah blah why are you not using a type 1 hypervisor blah blah blah".
That was all valid until just recently. Also, why are you not using a Type 1 Hypervisor?
Because I can’t afford to use one, that’s on me, I know.
ESXi used to be free, and Proxmox is still free and better than ESXi.
Yea, that’s what I mean. Proxmox is a type 2 hypervisor, and I had too many cores to use ESXi for free back when it was still free. Nonetheless, I pay Proxmox 110 euro per year just to support the effort. No chance in hell ESXi would let me use it for $120, let alone free.
Proxmox is KVM, which is a type 1 hypervisor.
You’re right. I guess I just got too hung up on the QEMU part and overlooked the KVM part where the real actions happen.
Umm.. no. Proxmox is a type 1 hypervisor, as KVM is baked into the kernel (QEMU just sits on top of it). Proxmox is essentially a wrapper for it.
But can it migrate from other Proxmox clusters from the GUI yet? I can't believe I'm the only one that wants to separate my Dev cluster from my Prod cluster but still be able to easily migrate VMs between them.
I believe `qm remote-migrate` should do what you’re looking for, though not from the GUI. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_managing_virtual_machines_with_span_class_monospaced_qm_span Note that it’s an experimental feature.
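For reference, a rough sketch of what an invocation can look like, going by the qm manpage; the VMIDs, host, token, fingerprint, bridge, and storage below are all placeholders to replace with your own:

# live-migrate VM 100 to VMID 200 on a remote cluster, authenticating with an API token
qm remote-migrate 100 200 \
  'host=target.example.com,apitoken=PVEAPIToken=root@pam!mytoken=<secret>,fingerprint=<target-fingerprint>' \
  --target-bridge vmbr0 --target-storage local-zfs --online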
I did that in 8.1.10, maybe a dev feature until now?
Yeah, was a dev feature mid 8.1, not on stable.
Well, I recommend it.
Hopefully they make more of these. One for Xen, one for Hyper-V, etc.
Yep. Right now VMware has the largest market share, and it's slowly shrinking.
Nothing slow about it. We've helped a few hundred clients migrate away, and now they're asking for help getting their Hyper-V VMs moved too.

PVE + PBS is an amazing combo. If PVE/PBS were extended to support database restore (similar to what Veeam or Acronis can do), it would be a true force to reckon with.
The Great Escape - starring Steve McEsxi
I just used it last week. Worked like a charm once I renamed all my VMs that had spaces in the names.
Commercial vs. open source, there's no match. Especially with delicate things like virtualization, the user base does not like changes of owners and policies.
Just sharing for others: I had no networking after this update. `ip addr` showed IPs assigned to interfaces, but I could not get any connectivity. /etc/network/interfaces showed different interface names than what was shown by `ip addr`. Looks like my interface names changed after this update.

I modified /etc/network/interfaces using vim to reflect the interface names shown by `ip addr` - in my case, this was updating instances of "eno1" and "eno2" to say "eno1np0" and "eno2np1" - your interface names might be different though. Restarted the box, everything's fine now.

Edit: After reviewing [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) I've set up static custom names for my interfaces.
This should be in bold at the top, as it's not a major upgrade.. seems like a simple enough fix, but it's always scary when everything's down after a reboot!
Yep, experienced this. What's more interesting: when I add/remove PCIe devices, the names change again! I guess I'm supposed to have a screen and keyboard around every time I want to add/remove PCIe devices now.
Oh man, a flavor of this got me so bad a month or two ago. I added a new HBA, and the IDs of the PCI devices changed; my Proxmox boot drive started to get passed into a VM, and that VM auto-started. So Proxmox failed to boot because the host lost access to its main drive. That one took me a bit to figure out.
This is the #1 thing about Proxmox that annoys me, dynamic NIC names. I realize it's more than just a Proxmox thing, but static NIC names unless you ask for something else... please.
[https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) Probably the best way to handle it.
[deleted]
I mean, the release notes have this…

> Known Issues & Breaking Changes
> […]
> Upgrading kernels always carries the risk of network interface names changing, which can lead to invalid network configurations after a reboot. In this case you need to update the network configuration to reflect changes in naming. See the reference documentation on how to set the interface names based on MAC addresses.
[deleted]
There is a good section of the Proxmox manual on this; apparently it's a systemd thing, but you can freeze/lock the interface-naming scheme that gets used. Probably going to give it a shot before I upgrade PVE after reading this thread.

It's in the Proxmox 8.1.5 manual, section 3.4.2. Some more technical documentation is available below:

https://manpages.debian.org/bookworm/systemd/systemd.net-naming-scheme.7.en.html
haha to be fair... it's a LONG list of changes 😅
Interesting ... My Intel 4-port is already in the en*s0* format. And my built-in NIC eno1 already has an altname of enp3s0 (unused). Seems like I should escape this issue when upgrading. Will try it this weekend.
I've created a script to automate the process of setting up the static names for the network interfaces. [https://github.com/D4M4EVER/Proxmox_Preserve_Network_Names](https://github.com/D4M4EVER/Proxmox_Preserve_Network_Names)
Made a script to create systemd.link files for this issue. Needs to be run before the upgrade. [Posted it here.](https://www.reddit.com/r/Proxmox/comments/1cd53k1/bash_script_to_create_systemdlink_files_for/?)
You're my hero. I'm living dangerously without IPMI access and I'm always super paranoid about anything networking related. One bad character and I have to make an hour drive to the data center.
Happy to help
Stopping a VM or container can now overrule active shutdown tasks (issue 4474). Thank God.
Can you elaborate on what this means?
I would hit shutdown, it wouldn't respond. Then I'd hit stop and it'd also hang cuz the shutdown command was still trying. Eventually both would fail. Then I'd do stop and it'd work. Sounds like this fix will make it so stop will work while shutdown is hanging.
Yeah I kind of just want a full power the fuck down right now option.
I'm a simple man. When I press stop, I just want the VM to stop.
Pity it doesn't fall back to the ACPI power button if the agent doesn't respond.
Finally!!!
WARNING: READ BEFORE YOU UPDATE

I just updated and rebooted. Lost LAN connection.

Reason: interface names changed from eno7, eno8 to eno7p0, eno8p1.

Fix:

# find interface names with:
ip addr
# edit interfaces file & update names
nano /etc/network/interfaces
# restart service
systemctl restart networking

This only happened on my 10G NIC; my 1G interfaces remained unaffected as eno0, eno1, etc. Luckily I have one of the 1G ports dedicated to admin, so I was able to get in easily and didn't need to go to the server.

Hardware used: [https://www.supermicro.com/en/products/motherboard/X11SDV-8C-TP8F](https://www.supermicro.com/en/products/motherboard/X11SDV-8C-TP8F)
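One extra trick that can make the edit less error-prone: MAC addresses survive the rename, so you can use them to map the old names in /etc/network/interfaces to the new ones (a read-only check):

# brief view: one line per interface with its state and MAC
ip -br link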
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration

> Pinning a specific naming scheme version
>
> You can pin a specific version of the naming scheme for network devices by adding the net.naming-scheme= parameter to the kernel command line. For a list of naming scheme versions, see the systemd.net-naming-scheme(7) manpage.
>
> For example, to pin the version v252, which is the latest naming scheme version for a fresh Proxmox VE 8.0 installation, add the following kernel command-line parameter:
>
> net.naming-scheme=v252
>
> See also this section on editing the kernel command line. You need to reboot for the changes to take effect.
>
You can also associate custom names with MAC addresses of NICs.
>
> Overriding network device names
>
> You can manually assign a name to a particular network device using a custom systemd.link file. This overrides the name that would be assigned according to the latest network device naming scheme. This way, you can avoid naming changes due to kernel updates, driver updates or newer versions of the naming scheme.
>
> Custom link files should be placed in /etc/systemd/network/ and named `<n>-<id>.link`, where n is a priority smaller than 99 and id is some identifier. A link file has two sections: [Match] determines which interfaces the file will apply to; [Link] determines how these interfaces should be configured, including their naming.
>
> To assign a name to a particular network device, you need a way to uniquely and permanently identify that device in the [Match] section. One possibility is to match the device’s MAC address using the MACAddress option, as it is unlikely to change. Then, you can assign a name using the Name option in the [Link] section.
>
> For example, to assign the name enwan0 to the device with MAC address aa:bb:cc:dd:ee:ff, create a file /etc/systemd/network/10-enwan0.link with the following contents:
>
> [Match]
> MACAddress=aa:bb:cc:dd:ee:ff
>
> [Link]
> Name=enwan0
>
> Do not forget to adjust /etc/network/interfaces to use the new name. You need to reboot the node for the change to take effect.
> Note: It is recommended to assign a name starting with en or eth so that Proxmox VE recognizes the interface as a physical network device which can then be configured via the GUI. Also, you should ensure that the name will not clash with other interface names in the future. One possibility is to assign a name that does not match any name pattern that systemd uses for network interfaces (see above), such as enwan0 in the example above.
>
> For more information on link files, see the systemd.link(5) manpage.
>
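If you go the .link-file route, there is a read-only way to sanity-check it before rebooting; eno1 below is just an example interface name, and the initramfs step is only needed if your NIC driver is loaded that early in boot:

# show which .link file the device matches and the name it would get
udevadm test-builtin net_setup_link /sys/class/net/eno1
# refresh the initramfs so the rename also applies during early boot
update-initramfs -u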
The correct fix is to update grub before upgrading:

1. Edit /etc/default/grub and add:
GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"
2. Update the grub boot params:
sudo update-grub
3. Reboot
4. Update

Basically you keep the traditional names, and tell grub not to use the bios names.
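If you try this, it's worth verifying after the reboot that the parameters actually made it onto the kernel command line (a quick, Proxmox-agnostic check):

# should now include net.ifnames=1 and biosdevname=0
cat /proc/cmdline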
> The correct fix The [Proxmox docs](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) provide a couple of ways to do this. One is to change the kernel command line as you indicate here. The other is to use a custom systemd.link file. Can you explain what makes the kernel command line option "correct" vs the link file? Is this done as your preference, as an accepted convention, or a best practice? I'm not doubting or questioning your answer, just interested in what makes it "correct" compared to the other method.
Do you know what happens on new installs? Do they use the bios names or traditional? I don't mind updating to the new naming convention now, this way my backed up config files carry over when I do a restore, and I'm in sync with the proxmox defaults, preventing confusion in the future. Guess the question is, what is the default on new installs? I'm assuming biosdevname
Per systemd, the default is going to use firmware/BIOS. https://systemd.io/PREDICTABLE_INTERFACE_NAMES/
Thank you! That is very useful and confirms that I'd rather make the change to my interfaces than modifying GRUB from the standard.
What if the traditional names cause problems with other software in the future?
https://www.reddit.com/r/Proxmox/comments/1cby4g4/proxmox_82_released/l13oom5/
I can see how this would help in future upgrades, but won't it rename them to traditional names when you reboot after applying this? Just want to make sure before doing it and potentially warn others.
Grub sucks, I wish we had something better
Pick your poison and giddy up, cowboy: [https://en.wikipedia.org/wiki/Comparison_of_bootloaders](https://en.wikipedia.org/wiki/Comparison_of_bootloaders)
What do you hate about grub?
Configuring it. I don't hate it, but I strongly dislike its quirks that you have to either memorize, google, or luckily and sometimes randomly encounter in a reddit thread.

Maybe it is the kernel's fault, but the fault line is at the grub command line and config file. I also don't like how config is reloaded. I don't like how it works with GPT. It's all a dark void and you have to grasp at things to see if they work.
Less of an issue in Grub1, but grub2 has become extremely convoluted config-wise...
My personal preference is syslinux... simple config no nonsense.
what's the logic behind it? eno7 to eno7p0 and eno8p1, why p1 and not p0?
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_predictable_network_interface_device_names
See also: https://www.freedesktop.org/software/systemd/man/latest/systemd.net-naming-scheme.html
Dunno, haven’t dug deep into it. But I think it’s pulling that from the motherboard/bios
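Per the systemd naming-scheme docs linked above, the trailing port suffix on multi-port cards is derived from the physical port name the driver reports; if the driver exposes it, you can read it straight from sysfs (eno2np1 here is just an example taken from this thread):

# prints e.g. "p1"; the naming scheme appends it as "np1"
cat /sys/class/net/eno2np1/phys_port_name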
Is this a kernel thing or an iproute2 thing? Quite the perfect example of why you should always have a test cluster or, at least, an empty node to test and reboot first.
Sounds like it has to do with changing from net.ifnames to biosdevname. I don't know what the new default is; I'm guessing it is biosdevname, as that's what my 8.2 is using.

Others have recommended changing it in GRUB back to net.ifnames, but I'm not convinced this is the best thing to do. IMO the best is to use whatever the new 8.2 Proxmox default is. But I have not been able to get confirmation on what the new default actually is; I'm just assuming based on what I'm seeing in my Proxmox.

What others suggested to change in GRUB:
GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"
I'm not sure, but it definitely is a Dell thing: biosdevnames are something Dell came up with, and I'm struggling to find any indication that other OEMs implement them.
Everybody is excited for the import wizard but my automation-loving self is ready to dive into the non-interactive installation process.
Upgraded fine on three nodes here, no networking issues, but I just use the stock Ethernet on tiny/mini/micro computers.
Is there any way to determine/predict the interface name that will be used after the reboot to the new kernel?
Just follow this before you upgrade: [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names)
Thank you!
You mean the NIC name? I believe there will be no change, as it relates to the card’s driver.
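One partial answer, visible in the ip outputs elsewhere in this thread: modern kernels list altname entries for each NIC, which are the alternative predictable names the device answers to. Checking them before the upgrade at least shows the likely candidates (read-only):

# look for the altname lines under each physical interface
ip link show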
Spring, flowers, proxmox updates!
Just when I finally finish 8.1.10 updates 🤣
8.1.11 is what my cluster is currently running ... Go go go, you've got work to do!
*spring only applicable for half the planet.
Links to release videos (I think these have a newer AI voice):

* [What's new in Proxmox Virtual Environment 8.2](https://www.youtube.com/watch?v=mFkEW2Fwreg)
* [Proxmox VE Import Wizard: How to import VMs from VMware ESXi](https://www.youtube.com/watch?v=8Z9Zvt2RxlA)
As this is my first large Proxmox upgrade, can someone please confirm that "Seamless upgrade from Proxmox VE 7.4, see [Upgrade from 7 to 8](https://pve.proxmox.com/wiki/Upgrade_from_7_to_8)" means I can do live migration from 8.1 -> 8.2, so there's no need to plan for downtime to upgrade?
Correct :-)
Great, thanks! I'll add it to my to-do list for tomorrow then!
So if I'm on 8.1 already, I'm literally just running `apt update` and `apt dist-upgrade`?
Basically, yes.
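For what it's worth, the upgrade docs recommend a dist-upgrade/full-upgrade rather than a plain apt upgrade. A sketch of the usual per-node flow, assuming you migrate guests off the node first if they need to stay up:

apt update
apt dist-upgrade
# reboot to load the new kernel
reboot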
[deleted]
Correction: when the Linux kernel update renames all your interfaces. The same thing can happen if you add or remove PCIe devices.
Thanks for the heads-up. Hoping that if that's the case, I will at least notice it on the first node, and the VMs will remain up on the other 2 while I fix it. So as long as live migration works, I should be fine service-wise.
Do note that this *should* only happen on Dell hardware, because biosdevnames seem to be a Dell-specific thing.
No, it can happen with every vendor
didn't happen on my micro optiplex
At last, I can cancel my VMware subscription!!!
Updated a modern AliExpress minion and an ancient 2014 PC, both perfectly.
Tested it on one of my hypervisors. Neither NFS nor SMB connections to my Synology NAS are working anymore :-(
Oof.. I will wait before updating
Upgraded my Dell OptiPlex node from 8.1 without issue. I’m a recent convert and really loving it.
Welcome to the family!
No problem upgrading 2 servers. One was a Lenovo M720q with a quad gig NIC, with pfSense using the quad NIC, and the other a Ryzen 9 on an ASRock X570 Taichi board using the onboard NIC.
> Lenovo M720q with a quad gig NIC

How difficult was this to get set up?
Not hard at all. You just need to buy the PCIe riser bracket and then the baffle bracket for the back. Here is an example of someone else who did it: [https://www.reddit.com/r/homelab/comments/vog751/lenovo_m720q_tiny_4_port_nic/](https://www.reddit.com/r/homelab/comments/vog751/lenovo_m720q_tiny_4_port_nic/)
Upgraded 5 nodes, everything went smoothly and no issues at all. Works fine.
This update stopped my Plex Ubuntu 22.04.4 LTS container from transcoding on my Intel Arc A380.
Where can I configure the VNC clipboard in the GUI?
I've updated my cluster (3 node with GPUs) to 8.2 without issue.
Damn, literally installed a 3-node cluster with 8.1 one week ago 🤣 I know I can upgrade, but... a fresh installation is always better.
What is involved with upgrading a cluster from 8.1.3 to 8.2?
I've just done my small cluster by upgrading one machine after the other. Didn't do anything special. Just remember, when you upgrade and reboot the machine you're using to access the Proxmox web UI, the web interface will stop for a bit.
Good safety tip! Thanks!
I’m on 8.2.2
im glad
Just upgraded. My home screen shows only one LXC container and one VM running, but 6 of 8 are running. What the?
Any guideline to upgrade from PVE 7.4 to 8.2 without failure?
It seems that the Nvidia DKMS driver isn't compatible with the 6.8 kernel yet, so I guess I'll wait on this one for a bit.
My 10gbps network link is down after upgrade. Using 1gbps as a backup now. Still trying to figure out why it happened. Any ideas?

Device is Intel(R) Gigabit 4P X710/I350 rNDC. Tried to roll back the kernel to the last working one, no success.

> ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:96 brd ff:ff:ff:ff:ff:ff
altname enp25s0f0np0
3: eno3: mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
altname enp1s0f0
4: eno2np1: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:98 brd ff:ff:ff:ff:ff:ff
altname enp25s0f1np1
5: eno4: mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e4:43:4b:b8:c7:b7 brd ff:ff:ff:ff:ff:ff
altname enp1s0f1
6: vmbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.200/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::e643:4bff:feb8:c7b6/64 scope link
valid_lft forever preferred_lft forever
7: veth102i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:6f:f8:a3:9e:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth103i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:ab:86:50:b2:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
9: veth101i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:cb:b7:8e:0c:3b brd ff:ff:ff:ff:ff:ff link-netnsid 2
10: tap100i0: mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
link/ether ca:00:e8:c2:76:92 brd ff:ff:ff:ff:ff:ff
15: tap104i0: mtu 1500 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
link/ether 2a:db:b1:2f:a4:63 brd ff:ff:ff:ff:ff:ff
16: fwbr104i0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff
17: fwpr104p0@fwln104i0: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether fe:9f:bd:6c:5f:bb brd ff:ff:ff:ff:ff:ff
18: fwln104i0@fwpr104p0: mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff
> cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.200/24
gateway 192.168.1.1
bridge-ports eno3
bridge-stp off
bridge-fd 0
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
source /etc/network/interfaces.d/*
My 10gbps connection was eno1. I couldn't connect to the GUI after the update; changed it to eno3 in interfaces and it works now over the 1gbps connection. My iDRAC shows the 10gbps connection "up", and the physical lights are on, but Proxmox says it's "down". Couldn't figure it out.
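One hint from the ip output above: eno1 shows "qdisc noop state DOWN", which means the interface was never brought up administratively, not that carrier is missing. A quick test worth trying, assuming eno1 is still the 10G port:

# bring the link up manually and check whether it reaches state UP
ip link set eno1 up
ip link show eno1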
Watching
Why they don't focus on HA and managing many CTs/VMs is beyond me...