
VMware CPU Scheduling

Question asked by peter.mitchell on Feb 23, 2016
Latest reply on Feb 25, 2016 by Alec Kemp

I was reading VMTurbo blog: http://vmturbo.com/blog/still-waiting-on-that-cpu-ready-queue-line/

I tried to post a comment, but the website throws an error saying that login is required (even though I'm logged into VMTurbo).

 

Can y'all fix that?

 

Anyway, I'll just post what I was going to say here:

 

I've been doing research on this topic, and I came across this blog post. I've been seeing some conflicting information about the CPU scheduler in ESXi. It seems that your analogy fits what ESX was doing in 2.x, per this whitepaper:
https://www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf

 

"Strict Co-Scheduling in ESX 2.x

Strict co-scheduling was implemented in ESX 2.x and discontinued in ESX 3.x. In the strict co-scheduling algorithm, the CPU scheduler maintains a cumulative skew per each vCPU of a multiprocessor virtual machine. The skew grows when the associated vCPU does not make progress while any of its siblings makes progress.
If the skew becomes greater than a threshold, typically a few milliseconds, the entire virtual machine would be stopped (co-stop) and will only be scheduled again (co-start) when there are enough pCPUs available to schedule all vCPUs simultaneously. This ensures that the skew does not grow any further and only shrinks.
The strict co-scheduling might cause CPU fragmentation. For example, a 4-vCPU multiprocessor virtual machine might not be scheduled even if there are three idle pCPUs. This results in scheduling delays and lower CPU utilization."

 

However, this doesn't match up with what the whitepaper describes as the current scheduling algorithm:

 

"Relaxed Co-Scheduling
Co-scheduling executes a set of threads or processes at the same time to achieve high performance. Because multiple cooperating threads or processes frequently synchronize with each other, not executing them concurrently would only increase the latency of synchronization. For example, a thread waiting to be signaled by another thread in a spin loop might reduce its waiting time by being executed concurrently with the signaling thread.
An operating system requires synchronous progress on all its CPUs, and it might malfunction when it detects this requirement is not being met. For example, a watchdog timer might expect a response from its sibling vCPU within the specified time and would crash otherwise. When running these operating systems as a guest, ESXi must therefore maintain synchronous progress on the virtual CPUs.
The CPU scheduler meets this challenge by implementing relaxed co-scheduling of the multiple vCPUs of a multiprocessor virtual machine. This implementation allows for some flexibility while maintaining the illusion of synchronous progress. It meets the needs for high performance and correct execution of guests. For this purpose, the progress of a vCPU is measured where a vCPU is considered making progress when it executes guest instructions or it is in the IDLE state. Then, the goal of co-scheduling is to keep the difference in progress between sibling vCPUs, or the “skew,” bounded."
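Here's the same kind of sketch for how I read the relaxed version, again with made-up names and threshold. The two differences I see are that IDLE time also counts as progress, and that only a vCPU that has run too far ahead of its slowest sibling gets stopped, rather than the whole VM; if I've got that wrong, please correct me:

SKEW_THRESHOLD_MS = 3  # illustrative bound on the allowed skew

class RelaxedCoScheduledVM:
    """Illustration of relaxed co-scheduling as described in the quote above."""

    def __init__(self, num_vcpus):
        self.progress_ms = [0.0] * num_vcpus
        self.stopped = [False] * num_vcpus  # per-vCPU decision, not whole-VM

    def account_progress(self, running, idle, tick_ms=1.0):
        """running[i] / idle[i]: whether vCPU i executed guest code / sat in IDLE this tick."""
        for i in range(len(self.progress_ms)):
            # Key difference from ESX 2.x: being IDLE counts as making progress.
            if running[i] or idle[i]:
                self.progress_ms[i] += tick_ms
        slowest = min(self.progress_ms)
        for i, progress in enumerate(self.progress_ms):
            # Only the vCPU that is too far ahead of its slowest sibling is stopped,
            # and it can be restarted on its own once the skew shrinks again.
            self.stopped[i] = (progress - slowest) > SKEW_THRESHOLD_MS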

 

Anyway, if anybody has any other information on this, it would be appreciated.
