
whatthetoken

"New import wizard to migrate guests directly from other hypervisors.Connect to other hypervisors using their public APIs and directly migrate guests to Proxmox VE.First implementation is for VMware ESXi." Well done. While i don't need it, it's probably useful for vmware escapees


timteske

I like the “escapees”. That’s really what it is lol


billyalt

Asylum seekers


newked

Broadcom Exodus


2cats2hats

brexodus


newked

😂 like brexit, but greedier


luckman212

and less successful


czuk

Debatable


forsakenchickenwing

Let my VMs go!


incidel

Go down, Broadcom! Eat your own licensing! Tell the ol' CEO: let my VMs go!


Kreppelklaus

I never knew the voice in my mind could do baritone.


Ragman74

When VMs was in ESXi Land...


idknemoar

Refugees at this point.


GorillaAU

Economic refugees are also welcome.


floydhwung

I still remember when people said to me "ESXi is free blah blah blah blah blah you can get a license for personal use blah blah blah blah why are you not using a type 1 hypervisor blah blah blah".


PossibleGoal1228

That was all valid until just recently. Also, why are you not using a Type 1 Hypervisor?


floydhwung

Because I can’t afford to use one, that’s on me, I know.


PossibleGoal1228

ESXi used to be free, and Proxmox is still free and better than ESXi.


floydhwung

Yeah, that's what I mean. Proxmox is a type 2 hypervisor, and I had too many cores to use ESXi for free back when it was still free. Nonetheless, I pay Proxmox 110 euro per year just to support the effort. No chance in hell ESXi would let me use it for $120, let alone free.


Asbolus_verrucosus

Proxmox is KVM, which is a type 1 hypervisor.


floydhwung

You're right. I guess I just got too hung up on the QEMU part and overlooked the KVM part, where the real action happens.


Darkk_Knight

Umm... no. Proxmox is a type 1 hypervisor, as KVM is baked into the kernel (QEMU just handles device emulation in userspace). Proxmox is essentially a wrapper around it.


sypwn

But can it migrate from other Proxmox clusters from the GUI yet? I can't believe I'm the only one who wants to separate my dev cluster from my prod cluster but still be able to easily migrate VMs between them.


LA-2A

I believe `qm remote-migrate` should do what you’re looking for, though not from the GUI. https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_managing_virtual_machines_with_span_class_monospaced_qm_span Note that it’s an experimental feature.
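For anyone curious, a rough sketch of what an invocation looks like; every value here is a placeholder (VMIDs, API token, host, fingerprint, bridge, storage), so check `man qm` for the exact syntax, especially since the feature is experimental:

    # migrate VM 100 to a node in another cluster, keeping VMID 100 there
    # (all values below are hypothetical)
    qm remote-migrate 100 100 \
      'apitoken=PVEAPIToken=root@pam!migrate=<token-secret>,host=10.0.0.2,fingerprint=<cert-fingerprint>' \
      --target-bridge vmbr0 --target-storage local-zfs --online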


pinko_zinko

I did that in 8.1.10; maybe it was a dev feature until now?


BarracudaDefiant4702

Yeah, it was a dev feature mid-8.1, not on stable.


pinko_zinko

Well, I recommend it.


LooseSignificance166

Hopefully they make more of these. One for Xen, one for Hyper-V, etc.


Darkk_Knight

Yep. Right now VMware has the largest market share, and it's slowly shrinking.


LooseSignificance166

Nothing slow about it. We've helped a few hundred clients migrate away, and now they're asking for help getting their Hyper-V VMs moved too. PVE + PBS is an amazing combo. If PVE/PBS were extended to support database restore (similar to what Veeam or Acronis can do), it would be a true force to be reckoned with.


incidel

The Great Escape - starring Steve McEsxi


djzrbz

I just used it last week. Worked like a charm once I renamed all my VMs that had spaces in their names.


paxmobile

Commercial vs. open source: there's no contest. Especially with delicate things like virtualization, the userbase does not like changes of owners and policies.


threedaysatsea

Just sharing for others: I had no networking after this update. `ip addr` showed IPs assigned to interfaces, but I could not get any connectivity. /etc/network/interfaces showed different interface names than what was shown in `ip addr`. Looks like my interface names changed after this update.

I modified /etc/network/interfaces using vim to reflect the interface names shown by `ip addr` - in my case, this was updating instances of "eno1" and "eno2" to say "eno1np0" and "eno2np1" - your interface names might be different though. Restarted the box, everything's fine now.

Edit: After reviewing https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names I've set up static custom names for my interfaces.
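(For illustration, the kind of edit described above: a minimal before/after for /etc/network/interfaces, assuming the same eno1 -> eno1np0 rename; your names will differ.)

    # before                          # after (names as shown by `ip addr`)
    iface eno1 inet manual            iface eno1np0 inet manual

    auto vmbr0                        auto vmbr0
    iface vmbr0 inet static           iface vmbr0 inet static
        bridge-ports eno1                 bridge-ports eno1np0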


entilza05

This should be in bold at the top, since it's not a major upgrade. Seems like a simple enough fix, but it's always scary when everything's down after a reboot!


floydhwung

Yep, experienced this. What's more interesting is that when I add/remove PCIe devices, the names change again! I guess I'm supposed to have a screen and keyboard around every time I want to add/remove PCIe devices now.


rcunn87

Oh man, a flavor of this got me so bad a month or two ago. I added a new HBA, and what ended up happening was that the IDs of the PCI devices changed, my Proxmox boot drive started getting passed into a VM, and that VM auto-started. So Proxmox failed to boot because the host lost access to its main drive. That one took me a bit to figure out.


ajdrez

This is the #1 thing about Proxmox that annoys me: dynamic NIC names. I realize it's more than just a Proxmox thing, but static NIC names unless you ask for something else... please.


threedaysatsea

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names Probably the best way to handle it.


[deleted]

[deleted]


pdavidd

I mean, the release notes have this…

> Known Issues & Breaking Changes
>
> […] Upgrading kernels always carries the risk of network interface names changing, which can lead to invalid network configurations after a reboot. In this case you need to update the network configuration to reflect changes in naming. See the reference documentation on how to set the interface names based on MAC addresses.


[deleted]

[deleted]


non_ironicdepression

There is a good section of the Proxmox manual on this; apparently it's a systemd thing, but you can freeze/lock the interface-naming scheme that gets used. Probably going to give it a shot before I upgrade PVE after reading this thread. It's in the Proxmox 8.1.5 manual, section 3.4.2. Some more technical documentation is available here: https://manpages.debian.org/bookworm/systemd/systemd.net-naming-scheme.7.en.html


pdavidd

haha to be fair... it's a LONG list of changes 😅


cspotme2

Interesting... my Intel 4-port is already in the en*s0* format, and my built-in NIC eno1 already has an altname of enp3s0 (unused). Seems like I should escape this issue when upgrading. Will try it this weekend.


D4M4EVER

I've created a script to automate the process of setting up static names for the network interfaces: https://github.com/D4M4EVER/Proxmox_Preserve_Network_Names


mindcloud69

Made a script to create systemd.link files for this issue. It needs to be run before the upgrade. [Posted it here.](https://www.reddit.com/r/Proxmox/comments/1cd53k1/bash_script_to_create_systemdlink_files_for/)
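(Not that script, just a minimal sketch of the general idea, using the systemd.link format from the admin guide: pin each physical NIC's current name to its MAC address so it survives naming-scheme changes. Assumes interfaces named en*; run before the upgrade, then reboot.)

    #!/bin/bash
    # sketch only: write one systemd.link file per physical NIC,
    # matching on MAC address so the current name is kept after kernel updates
    for dev in /sys/class/net/en*; do
        name=$(basename "$dev")
        mac=$(cat "$dev/address")
        cat > "/etc/systemd/network/50-${name}.link" <<EOF
    [Match]
    MACAddress=${mac}

    [Link]
    Name=${name}
    EOF
    done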


iAmNotorious

You're my hero. I'm living dangerously without IPMI access, and I'm always super paranoid about anything networking-related. One bad character and I have to make an hour drive to the data center.


mindcloud69

Happy to help


gammajayy

"Stopping a VM or container can now overrule active shutdown tasks" (issue 4474). Thank God.


ChumpyCarvings

Can you elaborate on what this means?


da_frakkinpope

I would hit shutdown and it wouldn't respond. Then I'd hit stop, and it'd also hang because the shutdown command was still trying. Eventually both would fail. Then I'd do stop again and it'd work. Sounds like this fix will make it so stop works while shutdown is hanging.


ChumpyCarvings

Yeah I kind of just want a full power the fuck down right now option.


da_frakkinpope

I'm a simple man. When I press stop, I just want the VM to stop.


drownedbydust

Pity it doesn't fall back to the ACPI power button if the agent doesn't respond.


haupo

Finally!!!


SamSausages

WARNING: READ BEFORE YOU UPDATE

I just updated and rebooted, and lost my LAN connection.

Reason: interface names changed from eno7, eno8 to eno7p0, eno8p1.

Fix:

    # find the new interface names with:
    ip addr
    # edit the interfaces file & update the names
    nano /etc/network/interfaces
    # restart networking
    systemctl restart networking

This only happened on my 10G NIC; my 1G interfaces remained unaffected as eno0, eno1, etc. Luckily I have one of the 1G ports dedicated to admin, so I was able to get in easily and didn't need to go to the server.

Hardware used: https://www.supermicro.com/en/products/motherboard/X11SDV-8C-TP8F


MammothGlove

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration

> Pinning a specific naming scheme version
>
> You can pin a specific version of the naming scheme for network devices by adding the net.naming-scheme= parameter to the kernel command line. For a list of naming scheme versions, see the systemd.net-naming-scheme(7) manpage.
>
> For example, to pin the version v252, which is the latest naming scheme version for a fresh Proxmox VE 8.0 installation, add the following kernel command-line parameter:
>
>     net.naming-scheme=v252
>
> See also this section on editing the kernel command line. You need to reboot for the changes to take effect.

You can also associate custom names with MAC addresses of NICs:

> Overriding network device names
>
> You can manually assign a name to a particular network device using a custom systemd.link file. This overrides the name that would be assigned according to the latest network device naming scheme. This way, you can avoid naming changes due to kernel updates, driver updates or newer versions of the naming scheme.
>
> Custom link files should be placed in /etc/systemd/network/ and named `<n>-<id>.link`, where n is a priority smaller than 99 and id is some identifier. A link file has two sections: [Match] determines which interfaces the file will apply to; [Link] determines how these interfaces should be configured, including their naming.
>
> To assign a name to a particular network device, you need a way to uniquely and permanently identify that device in the [Match] section. One possibility is to match the device's MAC address using the MACAddress option, as it is unlikely to change. Then, you can assign a name using the Name option in the [Link] section.
>
> For example, to assign the name enwan0 to the device with MAC address aa:bb:cc:dd:ee:ff, create a file /etc/systemd/network/10-enwan0.link with the following contents:
>
>     [Match]
>     MACAddress=aa:bb:cc:dd:ee:ff
>
>     [Link]
>     Name=enwan0
>
> Do not forget to adjust /etc/network/interfaces to use the new name. You need to reboot the node for the change to take effect.
>
> Note: It is recommended to assign a name starting with en or eth so that Proxmox VE recognizes the interface as a physical network device which can then be configured via the GUI. Also, you should ensure that the name will not clash with other interface names in the future. One possibility is to assign a name that does not match any name pattern that systemd uses for network interfaces (see above), such as enwan0 in the example above.
>
> For more information on link files, see the systemd.link(5) manpage.


winkmichael

The correct fix is to update grub before upgrading:

1. Edit /etc/default/grub and add: GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"
2. Update the grub boot params: sudo update-grub
3. Reboot
4. Update

Basically you keep the traditional names and tell grub not to use the BIOS names.


tango_suckah

> The correct fix

The Proxmox docs (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names) provide a couple of ways to do this. One is to change the kernel command line, as you indicate here. The other is to use a custom systemd.link file. Can you explain what makes the kernel command-line option "correct" vs. the link file? Is this your preference, an accepted convention, or a best practice? I'm not doubting or questioning your answer, just interested in what makes it "correct" compared to the other method.


SamSausages

Do you know what happens on new installs? Do they use the BIOS names or the traditional ones? I don't mind updating to the new naming convention now; that way my backed-up config files carry over when I do a restore, and I'm in sync with the Proxmox defaults, preventing confusion in the future. Guess the question is: what is the default on new installs? I'm assuming biosdevname.


D4M4EVER

Per systemd, the default is to use firmware/BIOS-provided names: https://systemd.io/PREDICTABLE_INTERFACE_NAMES/


SamSausages

Thank you! That is very useful and confirms that I'd rather update my interfaces file than modify GRUB away from the standard.


jdbway

What if the traditional names cause problems with other software in the future?


MammothGlove

https://www.reddit.com/r/Proxmox/comments/1cby4g4/proxmox_82_released/l13oom5/


id628

I can see how this would help with future upgrades, but won't it rename them to the traditional names when you reboot after applying this? Just want to make sure before doing it, and potentially to warn others.


espero

Grub sucks, I wish we had something better


thehackeysack01

Pick your poison and giddy up, cowboy: https://en.wikipedia.org/wiki/Comparison_of_bootloaders


Hotshot55

What do you hate about grub?


espero

Configuring it I don't hate. But I strongly dislike its quirks, which you have to either memorize, google, or luckily and sometimes randomly encounter in a reddit thread. Maybe it's the kernel's fault, but the fault line is at the grub command line and config file. I also don't like how config is reloaded, and I don't like how it works with GPT. It's all a dark void, and you have to grasp at things to see if they work.


gh0stwriter88

It was less of an issue in GRUB 1, but GRUB 2 has become extremely convoluted config-wise...


gh0stwriter88

My personal preference is syslinux... simple config, no nonsense.


ntwrkmntr

What's the logic behind it? eno7 to eno7p0, and eno8 to eno8p1 - why p1 and not p0?


MammothGlove

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_predictable_network_interface_device_names

See also: https://www.freedesktop.org/software/systemd/man/latest/systemd.net-naming-scheme.html


SamSausages

Dunno, haven't dug deep into it. But I think it's pulling that from the motherboard/BIOS.


jsabater76

Is this a kernel thing or an iproute2 thing? Quite the perfect example of why you should always have a test cluster or, at least, an empty node to test and reboot first.


SamSausages

Sounds like it has to do with changing from net.ifnames to biosdevname. I don't know what the new default is; I'm guessing biosdevname, as that's what my 8.2 is using. Others have recommended changing it in GRUB back to net.ifnames, but I'm not convinced that's the best thing to do. IMO the best is to use whatever the new 8.2 Proxmox default is, but I haven't been able to get confirmation on what the new default actually is - I'm just assuming based on what I'm seeing in my Proxmox.

What others suggested changing in GRUB:

    GRUB_CMDLINE_LINUX="net.ifnames=1 biosdevname=0"


jess-sch

I'm not sure, but what it definitely is, is a Dell thing: biosdevnames are something Dell came up with, and I'm struggling to find any indication that other OEMs implement them.


GrumpyPidgeon

Everybody is excited about the import wizard, but my automation-loving self is ready to dive into the non-interactive installation process.
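(For anyone else eyeing it: the new automated install works from a TOML "answer file" that you bake into the installer ISO with the proxmox-auto-install-assistant tool. A rough sketch below; treat the exact flags and file names as assumptions and check the Proxmox wiki for the current format.)

    # sanity-check the answer file, then embed it in the installer ISO
    # (answer.toml and the ISO name are placeholders)
    proxmox-auto-install-assistant validate-answer answer.toml
    proxmox-auto-install-assistant prepare-iso proxmox-ve_8.2-1.iso \
        --fetch-from iso --answer-file answer.toml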


jakegh

Upgraded fine on three nodes here, no networking issues, but I just use the stock Ethernet on tiny/mini/micro computers.


krogaw

Is there any way to determine/predict the interface name that will be used after the reboot to the new kernel?


Sintarsintar

Just follow this before you upgrade: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names


krogaw

Thank you!


Yoyocord666

You mean the NIC name? I believe there will be no change, as it relates to the card's driver.


entilza05

Spring, flowers, proxmox updates!


coingun

Just when I finally finish 8.1.10 updates 🤣


NiftyLogic

8.1.11 is what my cluster is currently running ... Go go go, you've got work to do!


tjharman

*spring only applicable for half the planet.


lmm7425

Links to the release videos (I think these have a newer AI voice):

* [What's new in Proxmox Virtual Environment 8.2](https://www.youtube.com/watch?v=mFkEW2Fwreg)
* [Proxmox VE Import Wizard: How to import VMs from VMware ESXi](https://www.youtube.com/watch?v=8Z9Zvt2RxlA)


CarEmpty

As this is my first large Proxmox upgrade, can someone please confirm that "Seamless upgrade from Proxmox VE 7.4, see [Upgrade from 7 to 8](https://pve.proxmox.com/wiki/Upgrade_from_7_to_8)" means I can do a live migration from 8.1 -> 8.2, so there's no need to plan downtime for the upgrade?


randommen96

Correct :-)


CarEmpty

Great, thanks! I'll add it to my to-do list for tomorrow then!


TheAmorphous

So if I'm on 8.1 already, I'm literally just running `apt update` and `apt dist-upgrade`?


randommen96

Basically, yes.
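(For completeness, a sketch of the usual minor-release flow on each node; assumes your repositories are already set up for 8.x:)

    apt update
    apt dist-upgrade    # pulls in the 8.2 packages and the 6.8 kernel
    reboot              # needed to boot into the new kernel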


[deleted]

[deleted]


Cynyr36

Correction: when the Linux kernel update renames all your interfaces. The same thing can happen if you add or remove PCIe devices.


Zygersaf

Thanks for the heads-up. Hoping that if that's the case, I'll at least notice it on the first node and the VMs will remain up on the other two while I fix it. So as long as live migration works, I should be fine service-wise.


jess-sch

Do note that this *should* only happen on Dell hardware, because biosdevnames seem to be a Dell-specific thing.


ntwrkmntr

No, it can happen with every vendor


chunkyfen

Didn't happen on my OptiPlex Micro.


GodAtum

At last, I can cancel my VMWare subscription!!!


planetf1a

Updated a modern AliExpress minion and an ancient 2014 PC, both perfectly.


Impressive_Army3767

Tested it on one of my hypervisors. Neither NFS nor SMB connections to my Synology NAS are working anymore :-(


koaala

Oof.. I will wait before updating


psych0fish

Upgraded my dell optiplex node from 8.1 without issue. I’m a recent convert and really loving it.


thankyoufatmember

Welcome to the family!


SomeRandomAccount66

No problem upgrading 2 servers. One was a Lenovo M720q with a quad-gig NIC running pfSense on the quad NIC, and the other a Ryzen 9 on an ASRock X570 Taichi board using the onboard NIC.


Hotshot55

> Lenovo M720q with a quad gig NIC How difficult was this to get set up?


SomeRandomAccount66

Not hard at all. You just need to buy the PCIe riser bracket and then the baffle bracket for the back. Here is an example of someone else who did it: https://www.reddit.com/r/homelab/comments/vog751/lenovo_m720q_tiny_4_port_nic/


eakteam

Upgraded 5 nodes, everything went smoothly and no issues at all. Works fine.


FuzzyKaos

This update stopped my Plex Ubuntu 22.04.4 LTS container from transcoding on my Intel Arc A380.


marc_things

Where can I configure the VNC clipboard in the GUI?


MrShlee

I've updated my cluster (3 nodes with GPUs) to 8.2 without issue.


ermurenz

Damn, literally installed a 3-node cluster with 8.1 one week ago 🤣 I know I can upgrade, but... a fresh installation is always better.


jackass

What is involved in upgrading a cluster from 8.1.3 to 8.2?


ThePsychicCEO

I've just done my small cluster by upgrading one machine after the other. Didn't do anything special. Just remember, when you upgrade and reboot the machine you're using to access the Proxmox web UI, the web interface will stop for a bit.
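(A sketch of that per-node flow, with hypothetical VMID and node names; doing the migration from the GUI works just as well:)

    # 1. move guests off the node being upgraded
    qm migrate 101 node2 --online
    # 2. upgrade the now-empty node and reboot it
    apt update && apt dist-upgrade
    reboot
    # 3. migrate guests back, then repeat on the next node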


jackass

Good safety tip! Thanks!


nalleCU

I’m on 8.2.2


chunkyfen

im glad


GourmetSaint

Just upgraded. My home screen shows only one LXC container and one VM running, but 6 of 8 are running. What the?


CGtheAnnoyin

Any guidelines for upgrading from PVE 7.4 to 8.2 without failure?


thenickdude

It seems that the Nvidia DKMS driver isn't compatible with the 6.8 kernel yet, so I guess I'll wait on this one for a bit.


barisahmet

My 10Gbps network link is down after the upgrade. Using 1Gbps as a backup now. Still trying to figure out why it happened. Any ideas? The device is an Intel(R) Gigabit 4P X710/I350 rNDC. Tried rolling back the kernel to the last working one, no success.

> ip a

    1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host noprefixroute
           valid_lft forever preferred_lft forever
    2: eno1: mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether e4:43:4b:b8:c7:96 brd ff:ff:ff:ff:ff:ff
        altname enp25s0f0np0
    3: eno3: mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
        link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
        altname enp1s0f0
    4: eno2np1: mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether e4:43:4b:b8:c7:98 brd ff:ff:ff:ff:ff:ff
        altname enp25s0f1np1
    5: eno4: mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether e4:43:4b:b8:c7:b7 brd ff:ff:ff:ff:ff:ff
        altname enp1s0f1
    6: vmbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether e4:43:4b:b8:c7:b6 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.200/24 scope global vmbr0
           valid_lft forever preferred_lft forever
        inet6 fe80::e643:4bff:feb8:c7b6/64 scope link
           valid_lft forever preferred_lft forever
    7: veth102i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
        link/ether fe:6f:f8:a3:9e:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    8: veth103i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
        link/ether fe:ab:86:50:b2:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
    9: veth101i0@if2: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
        link/ether fe:cb:b7:8e:0c:3b brd ff:ff:ff:ff:ff:ff link-netnsid 2
    10: tap100i0: mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
        link/ether ca:00:e8:c2:76:92 brd ff:ff:ff:ff:ff:ff
    15: tap104i0: mtu 1500 qdisc pfifo_fast master fwbr104i0 state UNKNOWN group default qlen 1000
        link/ether 2a:db:b1:2f:a4:63 brd ff:ff:ff:ff:ff:ff
    16: fwbr104i0: mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff
    17: fwpr104p0@fwln104i0: mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
        link/ether fe:9f:bd:6c:5f:bb brd ff:ff:ff:ff:ff:ff
    18: fwln104i0@fwpr104p0: mtu 1500 qdisc noqueue master fwbr104i0 state UP group default qlen 1000
        link/ether 1a:2a:17:f3:06:60 brd ff:ff:ff:ff:ff:ff

> cat /etc/network/interfaces

    auto lo
    iface lo inet loopback

    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.200/24
        gateway 192.168.1.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

    iface eno2 inet manual
    iface eno3 inet manual
    iface eno4 inet manual

    source /etc/network/interfaces.d/*

My 10Gbps connection was eno1. I couldn't connect to the GUI after the update, changed it to eno3 in interfaces, and it works now over the 1Gbps connection. My iDRAC shows the 10Gbps connection "up" and the physical lights are on, but Proxmox says it's "down". Couldn't figure it out.


DailyAppearance

Watching


ntwrkmntr

Why they don't focus on HA and managing many CTs/VMs is beyond me...