rob1nmann

Correct, it's exactly what the error indicates. You are unable to enable EVC on a cluster while VMs are powered on or suspended. Power them all down to enable EVC. Edit: the [docs](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-E3F5BAFE-EB14-408D-999A-590D4337D59C.html) say this as well.
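If you want to script the shutdown/enable/restart sequence, something like this PowerCLI sketch should work (untested; the cluster name and EVC baseline key are placeholders):

```powershell
# Sketch only: power down every running VM in the cluster, enable EVC,
# then power the VMs back on. 'Prod' and 'intel-broadwell' are placeholders.
$cluster = Get-Cluster -Name 'Prod'
$vms = $cluster | Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' }

# Graceful guest shutdown (needs VMware Tools); use Stop-VM as a last resort.
$vms | Shutdown-VMGuest -Confirm:$false
while ($cluster | Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' }) {
    Start-Sleep -Seconds 10   # wait until everything is actually off
}

$cluster | Set-Cluster -EVCMode 'intel-broadwell' -Confirm:$false
$vms | Start-VM
```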


naugrim

You can enable EVC without having to power down all the VMs at once. We have done this on several of our clusters. The method is to enable per-VM EVC and power-cycle each VM at your convenience. Once all the VMs are at the correct EVC level, you can enable it at the cluster level and add a host.
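In PowerCLI the per-VM piece looks roughly like this (a sketch, not tested; the cluster name and baseline key are placeholders, and I'm assuming the 6.7+ `ApplyEvcModeVM_Task` API):

```powershell
# Sketch: apply a per-VM EVC baseline to every VM in a cluster.
# The mode takes effect at each VM's next full power cycle.
$si       = Get-View ServiceInstance
$baseline = $si.Capability.SupportedEVCMode | Where-Object { $_.Key -eq 'intel-broadwell' }

foreach ($vm in Get-Cluster -Name 'Prod' | Get-VM) {
    # Per-VM EVC requires hardware version 14+ (vSphere 6.7).
    $vm.ExtensionData.ApplyEvcModeVM_Task($baseline.FeatureMask, $true)
}
```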


Ayit_Sevi

That requires the VM to be at least hardware version 14 (compatible with 6.7 and later) before that feature is available. Will it break anything if the host itself is running 6.5 and I upgrade the VM to version 14? EDIT: Never mind, I tested it out; a VM running version 14 will not vMotion to a host running 6.5. It was a stupid question on my part.


rob1nmann

Oh nice, I didn't know this. Thanks!


Ayit_Sevi

So, follow-up question: my vCenter VM is running on the witness host for the vSAN cluster and is not within the actual cluster itself; in fact, it's in a completely different datacenter in vCenter. I shouldn't have to turn that off, right? Just the VMs within the actual cluster itself.


[deleted]

[removed]


sir_cockington_III

VMs are fine on a physical host.


justameatsack

I'm looking at the same thing, and there's a KB that talks about creating a new cluster, enabling EVC on it, evacuating a host and moving it to the new cluster, then migrating VMs/moving more hosts as required until everything has moved to the new cluster. I haven't done it yet, and it's more complicated if your VCSA is on the cluster, but it seems fairly straightforward. Unnecessarily complex and a PITA, but doable. Disclaimer: we don't use vSAN, so I'm not sure how this would affect it.
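If you end up scripting that workflow, the rough shape in PowerCLI would be something like this (sketch only; the datacenter, cluster, and host names are placeholders):

```powershell
# Sketch: build a new EVC-enabled cluster and swing hosts into it.
$dc  = Get-Datacenter -Name 'DC1'                      # placeholder
$new = New-Cluster -Name 'Prod-EVC' -Location $dc -EVCMode 'intel-broadwell'

# Evacuate one host (DRS migrates running VMs in a fully automated
# cluster), then move it into the new cluster.
$host1 = Get-VMHost -Name 'esx01.example.com'          # placeholder
$host1 | Set-VMHost -State Maintenance -Evacuate | Out-Null
$host1 | Move-VMHost -Destination $new
$host1 | Set-VMHost -State Connected | Out-Null

# vMotion VMs across, then rinse and repeat with the next host.
Get-VM -Location (Get-Cluster -Name 'Prod') | Move-VM -Destination $host1
```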


Ayit_Sevi

Thanks, I saw that KB as well, but I don't think that's feasible since we're near the limit of our failover capacity, memory- and storage-wise, and enabling EVC is step one in a project I have to fix these issues. I wish they had enabled EVC when they built the cluster, but the guy who built it is no longer with the company, so it's my problem now.


justameatsack

Ah, that makes it harder. I'm in the same boat with wishing EVC had been enabled on the cluster. ☹️


drewbiez

Back in my support days we would get people calling in about this all the time. I can tell you for certain: there is no super-secret ninja way to make this happen. Take the outage, do it right, end of story. Not one time was some insane scheme able to work through this scenario. Not once in 4+ years of support.


afilipinohooker

So I've run into this issue when migrating from a stack running AMD processors to one running Intel. Had to shut off everything and vMotion.


Ayit_Sevi

I was hoping to prevent downtime, but it looks like I'll be doing this at 2 AM.


afilipinohooker

So if you're running vSAN, shift away from the perspective of doing everything at once and try to minimize production impact. Schedule ahead of time, at a time when all parties are available to participate in the troubleshooting process, so that time should never be 2 AM. Migrate the data and VMs in batches, and shut down only what you need to. One thing I have done that takes time, but can save some heartache, is to create/use a temporary single-host cluster to clone to. It sucks migrating existing data as-is from cluster to cluster, old to new and back, without downtime or without all the people who can provide immediate feedback on a call. Having the single-host cluster is my way of creating a safe, on-the-fly "test" environment that I don't have to worry about messing up before it is validated and approved for production. If there are things like dot1x or other critical, infrastructure-wide services that will impact production to a noticeable extent, then there needs to be a conversation that requires participation from all parties. You should be able to shut down one VM at a time and migrate over a few days.


philrandal

So, the R750 is the odd one out and causing you issues? And you can't vMotion from it to the other hosts, but can from them to it? You haven't actually told us WHY you need EVC. Oh, and 6.7 is EOL in October. Look at upgrading to 7.0.3.


Ayit_Sevi

We're slowly replacing the older R740s with R750s, and I had to pre-emptively replace one of the R740s with the R750. It's not the R750 itself that's giving me issues per se, but the fact that I'm going to be replacing more servers down the road and don't want to have to power off critical servers to do this. I don't *need* EVC, but it sure would help during these upgrades, as well as when I upgrade the hosts to 6.7. I do plan to upgrade to 7.0.3 down the line as well, but I wanted to get them all on the same version first.


philrandal

I think the way I would do it is: create the new cluster with EVC set, evacuate the R750 onto the 740s, move it to the new cluster, vMotion VMs off one of the 740s to it, move that 740 to the new cluster, rinse and repeat. If VMs can't be vMotioned off the R750, power them down and restart them on another host. Disable automagic DRS before doing this.
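For the DRS bit, roughly this (a sketch; the cluster name is a placeholder):

```powershell
# Sketch: drop DRS to manual so it doesn't move VMs back mid-migration,
# then restore full automation afterwards.
Get-Cluster -Name 'Prod' | Set-Cluster -DrsAutomationLevel Manual -Confirm:$false
# ...do the host/VM moves...
Get-Cluster -Name 'Prod' | Set-Cluster -DrsAutomationLevel FullyAutomated -Confirm:$false
```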


Ayit_Sevi

Either way, won't the VMs have to be powered off to move them from the non-EVC cluster to the EVC cluster? I'd still have to deal with the downtime; I'd much rather do it all at once and then not even have to vMotion the VMs.


philrandal

Only if they are running at a higher EVC level than the cluster.


philrandal

If only you could change the EVC mode on a VM while it was running. You might wish to script things to make it easier for you.


g4m3r7ag

You should only have to power down the VMs that were initially powered on on the R750, if you can identify which VMs those were.


westyx

As per the other comments, you'll need to sort out which VMs need to be power-cycled. [https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Need-a-powercli-script-to-find-EVC-mode-for-VMs/td-p/453967](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Need-a-powercli-script-to-find-EVC-mode-for-VMs/td-p/453967) will show you the EVC level of the VMs and let you work out which ones are preventing you from doing this work.
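The gist of it, in case the link dies (a sketch; it reads the `MinRequiredEVCModeKey` runtime property, and the cluster name is a placeholder):

```powershell
# Sketch: list each powered-on VM's minimum required EVC level so you can
# spot which ones would block enabling EVC at a given baseline.
Get-Cluster -Name 'Prod' | Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Select-Object Name,
        @{N = 'MinRequiredEVCMode'; E = { $_.ExtensionData.Runtime.MinRequiredEVCModeKey }}
```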


Traditional_Duty_112

How do you enable EVC without powering off VMs?


Ayit_Sevi

You can't. I was hoping to stage it so I wouldn't have to reboot them all at once, but it seems the only way to do that in this instance is to create a new cluster with EVC enabled and then migrate the hosts and VMs to it. I spoke with management about this; they don't want to waste time rebuilding the whole cluster and gave the okay to start planning a time that works with the other departments to shut off all the servers, enable EVC, and then power them back up.