Introduction
Some of you may know me: I love using macOS, and I have been using it extensively since my Late 2012 Mac mini shipped with Mountain Lion. Before that (and that's probably less known), I had Snow Leopard running on my PC using distros, custom kernels and quite a lot of kernel panics. It was a weird time for Hackintoshing back then. But it was fun nevertheless, just like it is today.
For quite some time now, I've been successfully running macOS in a dual boot next to Windows 10 on my desktop PC. Earlier this year, I updated it from High Sierra to Big Sur and gave it an AMD GPU to work with, since Apple and NVIDIA aren't the best of friends, so there are no web drivers for any macOS release after High Sierra. However, switching between Windows and macOS usually involves rebooting, and because my PC is built from non-Apple hardware, I'm using OpenCore as a boot loader – which, for some reason, sometimes fails to boot and gets stuck at various messages such as:
virtual IOReturn IONVMeController::CreateSubmissionQueue(uint16_t, uint8_t)::2886:SQ index=0 entrysize=64
This of course forces me to reboot, more often than not multiple times in a row, and since the Windows Boot Manager likes to sneak into my explicitly configured boot order, I almost always have to enter my motherboard's boot menu and select the drive containing macOS manually, only to reboot again and again and again.
As this got extremely annoying and frustrating over time, I tried virtualizing Windows and macOS again, with the ultimate goal of booting a virtual machine off my physical disks. With the experience of previous attempts using a single GPU – my trusty GTX 1060 – I started experimenting with my current dual-GPU setup using a 30-day trial of Unraid. In the end, I got the VMs working with the GPUs passed through, but I felt like Unraid wasn't the right choice for me – add to that the fact that it's still just a 30-day trial and that a license costs way too much just for managing virtual machines in a fancy web interface. So I sat down and researched everything needed to get this "simultaneously-booting Virtual Machine-based production setup on a single host" (trademark pending) of mine working with a more practical flavor of Linux.
Chapter 1: The Hardware
First of all, I didn't pick my hardware because it does its job particularly well, but rather because it just does its job and has everything I needed for regular usage without virtual machines. I could probably have gone with a more enthusiast-oriented motherboard based on X470/X570 if I had planned to use virtual machines more seriously, but on the other hand, I didn't want to spend tons of money.
| Component | Model |
| --- | --- |
| CPU | AMD Ryzen 7 3700X 3.6 GHz 8-Core Processor |
| Motherboard | Asus PRIME B450-PLUS |
| Memory | Crucial Ballistix 16 GB (2 x 8 GB) DDR4-3200 CL16 |
| SSD (Windows 10) | Samsung 970 Evo 1 TB M.2-2280 |
| SSD (macOS) | Western Digital Blue 500 GB 2.5" |
| GPU (Windows 10) | Gigabyte GeForce GTX 1060 6 GB WINDFORCE OC |
| GPU (macOS) | Gigabyte Radeon RX 560 2 GB GAMING |
| Monitor | Xiaomi Mi Curved Gaming Monitor 34" |
The Xiaomi monitor is especially interesting because – thanks to its ultrawide-ness – it supports picture-by-picture, so it can display Windows and macOS next to each other. Nice! It's not perfect though, as it squeezes each input to fit one half of the screen, and setting an appropriate resolution to fix this introduces pillarboxing. If you have any suggestions on how to fix this, please contact me on Twitter.
Additionally, this motherboard isn't very good when it comes to splitting devices into IOMMU groups. For example, it groups the RX 560 together with some random onboard controllers, which is probably an AMD thing, but not very helpful if you want to pass this GPU to macOS on its own, as you'd have to pass every single device in the same IOMMU group to the VM – which may or may not break things. So in my case, I have to use a custom kernel which includes the "PCI ACS Override" patch, just like Unraid does. More on that later.
The RX 560 isn't very powerful either, but it's enough to display the macOS user interface without any issues.
Chapter 2: Setting up Manjaro – and stripping it down
The first iteration of this article had me explaining in detail how I set up everything on Ubuntu Server, starting with booting fresh installs of Windows and macOS inside a virtual machine, then adding host devices and removing default devices one by one. In fact, I didn't even get to the point of explaining the macOS procedure in that first iteration, as I literally migrated this project over to Manjaro yesterday (at the time I wrote this passage), and I was able to boot from my SSDs straight away with basically no additional work on the VMs.
Anyway, unlike Ubuntu, Manjaro doesn't come in a Server (no user interface) variant, so it takes a little work to get rid of KDE Plasma. We need to get rid of everything that adds to our boot time to ensure a near-native experience, starting with the desktop environment. Why would you need a desktop on a system where you'll probably never see it anyway? It's also one of the components we don't want occupying our GPUs, which matters for fixing NVIDIA's Code 43.
$ sudo pacman -Rnsc plasma kf5 kde-applications xorg-server
$ sudo systemctl set-default multi-user.target
Reboot your system and make sure you're prompted to login on a console.
Next, remove snapd and lvm2, unless you want to use Snap applications or LVM2 volume management. I don't use either (and honestly don't care what they do), and the accompanying services add a lot of boot time.
$ sudo pacman -Rnsc snapd lvm2
Reboot again and check which remaining services take the longest at boot using systemd-analyze blame. I'd say: if it's not relevant to the system or security, remove it (warning: unprofessional opinion!). Here's a list of services I disabled (or masked 😷) on my host up to this point; the corresponding commands are sketched right after the list:
avahi-daemon.service
bluetooth.service
ModemManager.service
pamac-mirrorlist.service
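As a rough sketch (assuming the service names on your system match the ones above), disabling or masking them looks like this:
# Disable and stop services that aren't needed on a headless VM host
$ sudo systemctl disable --now avahi-daemon.service bluetooth.service ModemManager.service
# Mask anything you never want coming back
$ sudo systemctl mask pamac-mirrorlist.service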
Next, if you haven't set it up yet, prepare SSH to allow X Forwarding. On the client side, I'm using X11/XQuartz on my 2017 MacBook Pro 13".
$ sudo pacman -S openssh
$ sudo nano /etc/ssh/sshd_config
[...]
# Some of these options may be commented like this line, just remove the "#" in front
AllowTcpForwarding yes
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
[...]
Don't forget to start the SSH service and make it launch at boot.
$ sudo systemctl enable sshd
$ sudo systemctl start sshd
On your client, you should now be able to log in on your host, and if everything works as intended, you should see the X11 icon show up in your Dock (if you're using a Mac like me).
$ ssh -X <username>@<host ip address>
Now it's time to finally install the framework for our virtual machines: QEMU! To read everything important about QEMU and KVM, I strongly recommend the Arch Wiki: QEMU, KVM
You should also have checked whether your system supports KVM in the first place, but if you're running a modern CPU (basically anything more recent than a Pentium 4 662/672 or an Athlon 64), it should be supported.
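If you want to double-check, one quick way (on my setup) is to look for the virtualization CPU flags and the kvm device node:
# Should print a number greater than 0 (vmx = Intel VT-x, svm = AMD-V)
$ grep -Ec '(vmx|svm)' /proc/cpuinfo
# After loading the module below, this device should exist
$ ls -l /dev/kvm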
# Intel
$ echo "kvm_intel" | sudo tee /etc/modules-load.d/kvm.conf
$ modprobe kvm_intel
# AMD
$ echo "kvm_amd" | sudo tee /etc/modules-load.d/kvm.conf
$ modprobe kvm_amd
$ sudo pacman -S qemu virt-manager dnsmasq iptables ebtables ovmf
After installation, add your user to the required groups:
$ sudo groupadd libvirt
$ sudo gpasswd -a $(whoami) kvm
$ sudo gpasswd -a $(whoami) libvirt
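Note that group changes only take effect on the next login; after reconnecting via SSH, a quick sanity check could look like this:
# The output should include both "kvm" and "libvirt"
$ groups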
Allow your user to manage VMs without explicit authentication:
$ sudo bash -c 'cat << EOF > /etc/polkit-1/rules.d/50-libvirt.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("kvm")) {
        return polkit.Result.YES;
    }
});
EOF'
And finally, enable and start the libvirtd service.
$ sudo systemctl enable libvirtd
$ sudo systemctl start libvirtd
At this point, you should be able to run virt-manager on your SSH client and the Virtual Machine Manager window should pop up. It should also connect to a connection called "QEMU/KVM" automatically. If not, you can define a new connection using File → Add Connection: select "QEMU/KVM" under Hypervisor and click Connect. Now, in theory, you should be able to create virtual machines.
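If you prefer to check from the terminal instead of the GUI, virsh can talk to the same system connection (the list will simply be empty at this point):
# Lists all VMs defined on the system connection
$ virsh -c qemu:///system list --all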
Chapter 3: Preparing the GPUs
Now that we've prepared Manjaro, it's time to check our IOMMU groups. A good mainboard handles splitting devices into proper IOMMU groups by itself, and as mentioned before, my motherboard is not very good at it. But first, we'll need to enable IOMMU support by adding some kernel parameters:
$ sudo nano /etc/default/grub
[...]
# Intel
GRUB_CMDLINE_LINUX_DEFAULT="text intel_iommu=on iommu=pt"
# AMD
GRUB_CMDLINE_LINUX_DEFAULT="text amd_iommu=on iommu=pt"
[...]
$ sudo update-grub
Reboot your host once again, and verify that IOMMU is enabled and that at least some groups are being created:
$ sudo dmesg | grep -i iommu
[...]
[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-linux-zen root=UUID=631e74b8-139f-4e78-a8da-ecd93e4bce80 ro text amd_iommu=on iommu=pt
[ 0.161187] iommu: Default domain type: Passthrough (set via kernel command line)
[ 0.244427] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.245452] pci 0000:00:01.0: Adding to iommu group 0
[ 0.245461] pci 0000:00:01.1: Adding to iommu group 1
[ 0.245471] pci 0000:00:01.3: Adding to iommu group 2
[ 0.245478] pci 0000:00:02.0: Adding to iommu group 3
[ 0.245490] pci 0000:00:03.0: Adding to iommu group 4
[...]
[ 1.247633] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 1.258391] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
[...]
Now, to check your IOMMU groups, save the following script as iommu_groups.sh (or whatever you like to name it) and make it executable (chmod +x iommu_groups.sh).
#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
Your GPUs should be in their own group with no other devices. Mine weren't (thanks, AMD!), so I had to install a kernel that includes the PCI ACS Override patch, which will force devices to be in their own IOMMU groups but also comes with potential security risks.
Your script output will probably look something like this (this was copied after I applied the patch, so the devices are already split into separate groups):
[...]
IOMMU Group 24 06:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] [1002:67ff] (rev cf)
IOMMU Group 25 06:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X] [1002:aae0]
IOMMU Group 26 07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)
IOMMU Group 27 07:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
[...]
Here we can see the PCI bus addresses (bus:slot.function, e.g. 06:00.0) and vendor/device IDs ([xxxx:yyyy]) for our GPUs, which we'll need later on. Any modern GPU should have a VGA compatible controller and an Audio device, which can be differentiated by having the same base PCI bus address (06:00 for the AMD GPU in this case) but different functions – 0 for the VGA controller, 1 for the audio device. We'll need them both passed to the VM for the GPU to work correctly.
If you actually need the PCI ACS Override patch (say, one of your GPUs or any other device you want to pass through is in the same group as a device you don't want to pass through), we can simply install the linux-zen kernel, which includes this patch AND comes precompiled, unlike linux-vfio, which takes ages to compile even on my mid-tier but still capable 3700X. Instead of compiling, we only need to download the package and install it.
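To look at just one card with both of its functions, you can also query lspci by slot; for my RX 560 that would be:
# Lists every function on PCI bus 06, slot 00 (VGA controller + HDMI audio)
$ lspci -nns 06:00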
$ wget --content-disposition https://archlinux.org/packages/extra/x86_64/linux-zen/download/
$ sudo pacman -U linux-zen-5.11.8.zen1-1-x86_64.pkg.tar.zst
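Assuming the package installed cleanly, the new kernel and its initramfs should show up in /boot (GRUB picks them up when we run update-grub in the next step):
# vmlinuz-linux-zen and initramfs-linux-zen.img should be listed
$ ls /boot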
Now we're going to disable any graphical output of our host machine, so before continuing, you need to make sure SSH is working correctly! Otherwise, you'd need to boot from a live USB, mount a few partitions in the correct place, chroot and reverse everything we're doing here.
First, we're going to change our kernel parameters once again:
$ sudo nano /etc/default/grub
[...]
# Replace "amd_iommu" with "intel_iommu" if you're using an Intel CPU
GRUB_CMDLINE_LINUX_DEFAULT="text amd_iommu=on iommu=pt vfio-pci.ids=10de:1c03,10de:10f1,1002:67ff,1002:aae0 video=vesafb:off video=efifb:off pcie_acs_override=downstream,multifunction"
[...]
$ sudo update-grub
Here, we've added the vendor/device IDs to vfio-pci.ids= (there should be two IDs for each GPU, unless one or more of your GPUs doesn't have an audio controller), disabled the VESA/EFI framebuffers using video=vesafb:off video=efifb:off and enabled the PCI(e) ACS Override patch for "downstream" and "multifunction" devices.
Please note that depending on your kernel, you may need to use either pci_acs_override (no e) or pcie_acs_override. While I was working with Ubuntu Server, the former did the job, while for Manjaro I had to use the latter.
We're also going to use modprobe to assign our GPU devices to vfio-pci. The kernel parameters basically do the same thing, but by using both methods we can be sure the assignment actually sticks.
$ sudo nano /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1c03,10de:10f1,1002:67ff,1002:aae0
For vfio-pci to correctly claim our devices at boot time, we need to load its modules while the host is booting:
$ sudo nano /etc/mkinitcpio.conf
[...]
MODULES=(... vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)
[...]
And finally, we're going to block relevant GPU drivers from loading at boot. Using this and all of the above, we're ensuring that our NVIDIA GPU won't fail to initialize due to Code 43. My GTX 1060 is just a plain old consumer card and no Quadro, and NVIDIA doesn't like consumer cards to be used in virtual machines (Update: Starting with driver 465.89, they actually do!). And while it may not be required for macOS, the AMD GPU will surely benefit from that.
$ sudo nano /etc/modprobe.d/blacklist.conf
[...]
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist amdgpu
Now we need to regenerate our initramfs (the "ramdisk"; I guess that's what it does, I'm no expert when it comes to Linux terms) using sudo mkinitcpio -p linux-zen. If you're using a stock kernel (which you're probably not at this point), replace linux-zen with linux[major][minor], so for kernel 5.11.8 that would be linux511.
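For the zen kernel from above, that's:
$ sudo mkinitcpio -p linux-zen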
If you've done everything correctly, you should have no graphical output whatsoever after rebooting once again (except for maybe the GRUB selector; you can disable that too if you want), and running lspci -nnk (over SSH!) should return output similar to the following. Pay attention to the lines that say Kernel driver in use: vfio-pci.
06:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] (rev cf)
Subsystem: Gigabyte Technology Co., Ltd Baffin [Radeon RX 550 640SP / RX 560/560X]
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
06:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X]
Subsystem: Gigabyte Technology Co., Ltd Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
07:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GP106 [GeForce GTX 1060 6GB]
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
07:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GP106 High Definition Audio Controller
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
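If you don't want to scroll through the full output, a quick spot check (matching on the driver line and keeping the two lines above it) could be:
# Shows each device currently bound to vfio-pci
$ lspci -nnk | grep -B 2 'vfio-pci'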
Chapter 4: Creating the Windows 10 virtual machine
As mentioned earlier, in the previous iteration of this article, I used a fresh Windows install to verify that GPU passthrough was working. But today we're going to skip this, as my migration to Manjaro showed that it's unlikely anything will break so badly that your Windows becomes unbootable. Just in case: backups are your friend.
First of all, run virt-manager on your SSH client and create a virtual machine. Select Import existing disk image and choose the boot disk you want to pass through. But instead of using /dev/sdX, which may change between reboots (although that's very unlikely), we're going to use /dev/disk/by-id/. This directory always contains up-to-date symlinks to the actual devices. To see which disk I needed to select, I used ls -l /dev/disk/by-id, which produced the following output in my case:
lrwxrwxrwx 1 root root 9 Mar 23 18:17 ata-APPLE_HDD_xxxxxxxxxxxxxxx_xxxxxxxxxxxxxx -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar 23 18:17 ata-APPLE_HDD_xxxxxxxxxxxxxxx_xxxxxxxxxxxxxx-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Mar 23 18:17 ata-APPLE_HDD_xxxxxxxxxxxxxxx_xxxxxxxxxxxxxx-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 9 Mar 23 22:06 ata-WDC_WD10EZEX-xxxxxxx_WD-WCCxxxxxxxx -> ../../sdc
lrwxrwxrwx 1 root root 10 Mar 23 22:06 ata-WDC_WD10EZEX-xxxxxxx_WD-WCCxxxxxxxx-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Mar 23 22:06 ata-WDC_WD10EZEX-xxxxxxx_WD-WCCxxxxxxxx-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 9 Mar 23 22:06 ata-WDC_WD20EZRZ-xxxxxxx_WD-WCCxxxxxxxx -> ../../sdd
lrwxrwxrwx 1 root root 10 Mar 23 22:06 ata-WDC_WD20EZRZ-xxxxxxx_WD-WCCxxxxxxxx-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Mar 23 22:06 ata-WDC_WD20EZRZ-xxxxxxx_WD-WCCxxxxxxxx-part2 -> ../../sdd2
lrwxrwxrwx 1 root root 10 Mar 23 22:06 ata-WDC_WD20EZRZ-xxxxxxx_WD-WCCxxxxxxxx-part3 -> ../../sdd3
lrwxrwxrwx 1 root root 9 Mar 23 18:18 ata-WDC_WDS500G2B0A-xxxxxx_xxxxxxxxxxxx -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 23 18:18 ata-WDC_WDS500G2B0A-xxxxxx_xxxxxxxxxxxx-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 23 18:18 ata-WDC_WDS500G2B0A-xxxxxx_xxxxxxxxxxxx-part2 -> ../../sda2
[...]
lrwxrwxrwx 1 root root 13 Mar 23 22:06 nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Mar 23 22:06 nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Mar 23 22:06 nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Mar 23 22:06 nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Mar 23 22:06 nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx-part4 -> ../../nvme0n1p4
[...]
Since Windows is installed on my only NVMe drive (a Samsung 970 EVO), I can see that I need to use /dev/disk/by-id/nvme-Samsung_SSD_970_EVO_1TB_xxxxxxxxxxxxxxx without any of the -partX suffixes. This passes the whole drive instead of a single partition, which – I guess – is pretty important if you're using Windows 10 in UEFI mode.
As the operating system, enter win10. On Ubuntu Server, I had to check Include end of life operating systems for some reason for Windows 10 to even show up. Next, choose the amount of RAM to allocate to this VM. My system has 16 GB installed, so I've allocated 8 GB (8192 MiB) to Windows and 6 GB (6144 MiB) to macOS. Don't make the mistake I did and allocate 10 GB to Windows without reducing the value for macOS, or else you'll wonder why one VM keeps shutting itself down (spoiler: out of memory). Leave the CPU cores at their defaults for now; we'll take care of them in a second. In the last step, check Customize configuration before install so you can, well, customize your configuration before installing. Under Overview → Hypervisor Details, set the Firmware to "UEFI x86_64: /usr/share/OVMF/OVMF_CODE_4M.fd" (the path may differ on other Linux variants). Make sure the Chipset is set to "Q35". Don't forget to click Apply.
Once the configuration is completed and the VM is started, shut it right down. For the GPU to work, we'll need to edit this VM's XML definition. In my case, I've also added two more hard drives, one of which contains my user data.
But first of all: CPU tuning. To see which cores – or more precisely, logical CPUs – are grouped together, run lscpu -e.
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0 0 0 0 0:0:0:0 yes 5224.2178 2200.0000
1 0 0 1 1:1:1:0 yes 5224.2178 2200.0000
2 0 0 2 2:2:2:0 yes 5090.6250 2200.0000
3 0 0 3 3:3:3:0 yes 4957.0308 2200.0000
4 0 0 4 4:4:4:1 yes 4689.8428 2200.0000
5 0 0 5 5:5:5:1 yes 4559.7651 2200.0000
6 0 0 6 6:6:6:1 yes 4823.4370 2200.0000
7 0 0 7 7:7:7:1 yes 4426.1709 2200.0000
8 0 0 0 0:0:0:0 yes 5224.2178 2200.0000
9 0 0 1 1:1:1:0 yes 5224.2178 2200.0000
10 0 0 2 2:2:2:0 yes 5090.6250 2200.0000
11 0 0 3 3:3:3:0 yes 4957.0308 2200.0000
12 0 0 4 4:4:4:1 yes 4689.8428 2200.0000
13 0 0 5 5:5:5:1 yes 4559.7651 2200.0000
14 0 0 6 6:6:6:1 yes 4823.4370 2200.0000
15 0 0 7 7:7:7:1 yes 4426.1709 2200.0000
Here we can see that on my Ryzen 7 3700X, CPUs 0-7 are paired with CPUs 8-15 respectively (two SMT threads sharing each physical core). Also look at that MAXMHZ column: so many CPUs hovering around 5 GHz! I could overclock this thing so hard if I wanted to.
Naturally, we only want to assign paired CPUs to our VMs to improve performance. I've decided to leave CPUs 0/8 to the host system and assign CPUs 1-4/9-12 to Windows 10, while assigning CPUs 5-7/13-15 to macOS. We're also switching the CPU mode to host-passthrough, which will tell Windows we're using a Ryzen 7 3700X (or whatever CPU you are using). Please note that the following options must match your intended CPU count, or else virt-manager will complain.
[...]
<vcpu placement="static">8</vcpu>
<cputune>
<vcpupin vcpu="0" cpuset="1"/>
<vcpupin vcpu="1" cpuset="9"/>
<vcpupin vcpu="2" cpuset="2"/>
<vcpupin vcpu="3" cpuset="10"/>
<vcpupin vcpu="4" cpuset="3"/>
<vcpupin vcpu="5" cpuset="11"/>
<vcpupin vcpu="6" cpuset="4"/>
<vcpupin vcpu="7" cpuset="12"/>
</cputune>
[...]
<cpu mode="host-passthrough" check="none" migratable="on">
<topology sockets="1" dies="1" cores="8" threads="1"/>
</cpu>
To hide the fact that we're running Windows 10 in a virtual machine (nobody except NVIDIA seems to care), we're going to add a few more options to the XML definition:
[...]
<features>
[...]
<hyperv>
[...]
<!-- You can basically enter any 12 character string you like -->
<vendor_id state="on" value="1234567890ab"/>
</hyperv>
<kvm>
<hidden state="on"/>
</kvm>
<ioapic driver="kvm"/>
[...]
</features>
[...]
Please note that your system might need a different configuration. There are many users out there encountering Code 43, but just as many possible solutions. There's a lot of trial and error involved, and this configuration somehow magically works on my system. A newer solution is to update your GeForce driver to version 465.89 or later, as NVIDIA finally allows regular consumer GPUs to be used inside virtual machines.
Next up are the Boot Options. Make sure the VM boots from your Windows drive by default. That's it for Boot Options.
Now it's almost time to actually prepare your GPU. For that, we'll need a GPU BIOS (ROM), so either download one for your card from TechPowerUp or dump it yourself using GPU-Z. I mean, your Windows should still be bootable, right? While you're there, grab the BIOS for your AMD GPU as well. macOS probably doesn't need it, but we're going to add it anyway, just in case.
And because NVIDIA is so special about their consumer cards, we need to remove the NVIDIA header from the dumped BIOS. There's a great tutorial by Unraid god Spaceinvader One on how to patch your GPU BIOS, and while it was made for Unraid, a patched GPU BIOS should work on any other Linux as well. Luckily, the AMD ROM doesn't need any patching in this case.
In my eyes, the easiest way of doing this would be to patch your GPU BIOS on your SSH client using any hex editor and then copy it to the appropriate location on your host using SFTP/SSH.
If you're still using AppArmor, put your ROM files in a directory that is not blocked by default, such as /usr/share/vgabios. Back in virt-manager, add your GPU (and the accompanying audio controller, if it has one) to your VM as a PCI host device, and add the ROM file to the VGA controller, which is usually function="0x0" inside <source>. Make sure the <address> not enclosed in <source> has the same domain, bus and slot for both devices, and that the audio device is function="0x1". Your XML definition should then look something like this:
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</source>
<rom file="/usr/share/vgabios/GP106.rom"/>
<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
<source>
<address domain="0x0000" bus="0x07" slot="0x00" function="0x1"/>
</source>
<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x1"/>
</hostdev>
For networking, I'd recommend setting up a bridge adapter first. Again, the Arch Wiki will help you with that. After setting it up, save the following XML as host-bridge.xml and replace br0 with the name you gave your bridge adapter. Once that's done, you can define the libvirt network using virsh net-define host-bridge.xml.
<network>
<name>host-bridge</name>
<forward mode="bridge"/>
<bridge name="br0"/>
</network>
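To do the define/start/autostart steps entirely from the terminal (instead of the GUI steps described next), the sequence could look like this:
# Define the libvirt network, start it and have it come up automatically at boot
$ virsh -c qemu:///system net-define host-bridge.xml
$ virsh -c qemu:///system net-start host-bridge
$ virsh -c qemu:///system net-autostart host-bridge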
Next, open the QEMU/KVM connection in virt-manager, open the Virtual Networks tab, select host-bridge and check Autostart: On Boot. Also click the little "play" icon in the bottom left part of the window if the bridge is currently disabled. Now you can use this bridge in your VMs. Lastly, add any additional required PCI or USB host devices (keyboards, mice, DualShock controllers etc.) to the Windows VM and press Power on this virtual machine. If all went well, you'll be greeted with your desktop after waiting for Windows to prepare some devices.
And that's it! Now you've got your personal Windows 10 running in a virtual machine. But wait, there's more! Next, we'll set up the macOS VM, and in the end, basically make them one machine using Barrier.
Now take a break and enjoy your Windows in a VM, you've earned it. Maybe reboot the guest a couple of times to see how well it's working.
Chapter 5: Creating the macOS virtual machine
To prevent any confusion: this article expects you to already have a working macOS installation on your PC. We will not be dealing with how to install macOS on non-Apple branded hardware. Not that I would care about the legal stuff, but it's just WAY too much to explain. If you want to install macOS on your PC (without virtual machines), consult the OpenCore Install Guide.
First of all, we need to create a disk that we'll install OpenCore on, so that we don't break the existing OpenCore installation on our physical disk. The configuration will probably be very different, and macOS may not boot with it anyway. The disk doesn't need to be very large, 200 MB should be sufficient. We're also going to partition the disk using gdisk to prevent macOS from automatically mounting it on every boot.
$ qemu-img create -f qcow2 macOS.EFI.qcow2 200M
$ sudo pacman -S gdisk
$ sudo modprobe nbd max_part=8
$ sudo qemu-nbd --connect=/dev/nbd0 ./macOS.EFI.qcow2
Next, we're going to create a new GUID Partition Table (GPT), followed by creating the EFI partition:
$ sudo gdisk /dev/nbd0
GPT fdisk (gdisk) version 1.0.7
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries in memory.
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nbd0.
The operation has completed successfully.
$ sudo gdisk /dev/nbd0
GPT fdisk (gdisk) version 1.0.7
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-409566, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-409566, default = 409566) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): EF00
Changed type of partition to 'EFI system partition'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/nbd0.
The operation has completed successfully.
After the EFI partition has been created, we need to format it as FAT32. We'll also mount it to /mnt/EFI so we can put files on it.
$ sudo mkfs.fat -F 32 /dev/nbd0p1
$ sudo mkdir /mnt/EFI
$ sudo mount /dev/nbd0p1 /mnt/EFI
Next, we're going to clone the OSX-KVM repository, which contains preconfigured OVMF (UEFI) files for use in QEMU virtual machines, as well as a preconfigured copy of OpenCore. Mount the OpenCore disk image to /mnt/OC and copy the "EFI" directory to /mnt/EFI. This way, the original image is left untouched and you still have a backup in case your configuration doesn't work.
$ git clone https://github.com/kholia/OSX-KVM.git
$ sudo qemu-nbd --connect=/dev/nbd1 ./OSX-KVM/OpenCore-Catalina/OpenCore.qcow2
$ sudo mkdir /mnt/OC
$ sudo mount /dev/nbd1p1 /mnt/OC
$ sudo cp -r /mnt/OC/EFI /mnt/EFI/
$ sudo umount /mnt/OC
$ sudo qemu-nbd --disconnect /dev/nbd1
Now for some customization. For this, we'll need a proper plist (XML) editor. You can use ProperTree (which also works over SSH with -X; it requires tk to be installed: sudo pacman -S tk) or copy /mnt/EFI/EFI/OC/config.plist via SFTP/SSH to another machine and open it in Xcode, Notepad++ etc.
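For reference, a hypothetical ProperTree workflow over SSH could look like this (assuming the usual corpnewt/ProperTree repository layout; the copy steps avoid editing the root-owned mount directly):
$ git clone https://github.com/corpnewt/ProperTree.git
# Copy the plist somewhere your user can write to
$ sudo cp /mnt/EFI/EFI/OC/config.plist /tmp/config.plist && sudo chown $(whoami) /tmp/config.plist
# Launch ProperTree in your forwarded X11 session and open /tmp/config.plist via File → Open
$ python ProperTree/ProperTree.py
# After saving, copy the edited file back onto the EFI disk
$ sudo cp /tmp/config.plist /mnt/EFI/EFI/OC/config.plist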
First, change OpenCore's ScanPolicy, located under Misc → Security, to "17760515". This will prevent OpenCore from displaying EFI partitions altogether and booting from them by default once we disable the boot menu.
If you have generated a Serial Number, UUID and MLB (for use with Apple Services), you may now enter them in PlatformInfo.
Now save the file and make sure the updated version is located on the EFI drive before unmounting.
$ sudo umount /mnt/EFI
$ sudo qemu-nbd --disconnect /dev/nbd0
And now it's time to actually create the macOS virtual machine. Follow the same procedure as with Windows 10, but instead of selecting your physical drive at Import existing disk image, you're going to use the virtual EFI disk from a minute ago. Unfortunately, virt-manager only includes OS metadata up until Mac OS X Lion, so we'll have to use generic as the operating system. Set your desired CPU cores (in my case: 6) and amount of RAM (4096 MiB minimum, I'm using 6144 MiB). Next, check Customize configuration before install so we can enable UEFI by navigating to Overview → Hypervisor Details and setting Firmware to "UEFI x86_64: /usr/share/OVMF/OVMF_CODE_4M.fd" (the path may differ on other Linux variants). Make sure the Chipset is set to "Q35".
Finish the configuration, shut the VM down, then add your physical drive and your keyboard & mouse. Next, remove the SPICE graphics, console, serial port, etc. and make sure to boot from the EFI disk you created earlier, which should be selected by default.
Lastly, we'll need to edit our XML to change a few more things that we can't change using virt-manager. Add the qemu namespace to the root node of our XML. This will allow us to add parameters that would usually be passed when running qemu directly from the command line.
. Add the qemu
namespace to the root node of our XML. This will allow us to add parameters that would usually be added when using qemu
from the command line directly.
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
Next, replace the UEFI loader and nvram paths with the ones provided by OSX-KVM:
[...]
<os>
[...]
<loader readonly='yes' type='pflash'>/path/to/OSX-KVM/OVMF_CODE.fd</loader>
<nvram>/path/to/OSX-KVM/OVMF_VARS-1024x768.fd</nvram>
[...]
</os>
[...]
Set the CPU mode to host-passthrough again, but remove the topology node if it exists.
[...]
<cpu mode="host-passthrough" check="none" migratable="on"/>
[...]
Note: on my setup, macOS would refuse to boot with CPU topology enabled. I think it might be related to having an unusual (for macOS) CPU configuration with 6 cores. You may either add additional (disabled) cores or use a configuration that a real Mac would use.
A little more CPU tuning:
[...]
<vcpu placement="static">6</vcpu>
<cputune>
<vcpupin vcpu="0" cpuset="5"/>
<vcpupin vcpu="1" cpuset="13"/>
<vcpupin vcpu="2" cpuset="6"/>
<vcpupin vcpu="3" cpuset="14"/>
<vcpupin vcpu="4" cpuset="7"/>
<vcpupin vcpu="5" cpuset="15"/>
</cputune>
[...]
Make sure the network adapter is using the vmxnet3 model. You can either use the MAC address provided by OSX-KVM or generate your own using the graphical editor provided by virt-manager; just click the little refresh arrow to generate a new MAC address.
<interface type='bridge'>
<mac address='52:54:00:8e:e2:66'/>
<source bridge='virbr0'/>
<target dev='tap0'/>
<model type='vmxnet3'/>
</interface>
Now, replace every controller entry with the ones from OSX-KVM.
[...]
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x9'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0xa'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0xb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0xc'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0xd'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0xe'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
</controller>
[...]
Adding a GPU before replacing the controllers may break something, so now it's safe to add your GPU, the ROM file and the audio device (if it exists). Again, make sure your devices are located in the same domain, bus and slot, but have different functions (0x0 = VGA controller, 0x1 = audio device). Your XML should look similar to this:
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</source>
<rom file='/usr/share/vgabios/Gigabyte.RX560.2048.170423.rom'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
</source>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
And lastly, the command line arguments. These include the "secret" OSK key, the only thing preventing people from running a non-legitimate macOS on non-Apple hardware. In fact, it's so secret that it can be found anywhere on the internet 😉. Add these arguments at the very bottom of your XML, just before the closing </domain> tag:
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='isa-applesmc,osk=ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc'/>
<qemu:arg value='-smbios'/>
<qemu:arg value='type=2'/>
<qemu:arg value='-device'/>
<qemu:arg value='usb-tablet'/>
<qemu:arg value='-device'/>
<qemu:arg value='usb-kbd'/>
<qemu:arg value='-cpu'/>
<qemu:arg value='host,vendor=GenuineIntel,+hypervisor,+invtsc,kvm=on,+fma,+avx,+avx2,+aes,+ssse3,+sse4_2,+popcnt,+sse4a,+bmi1,+bmi2'/>
</qemu:commandline>
When you start the VM now, you should be greeted with various options, including your physical disk, alongside "Clear NVRAM", "UEFI Shell" and "Shutdown". Select your disk and hit Ctrl+Return. This will make OpenCore remember your selection in case you add any more bootable disks in the future.
Now give it a little time to boot. OpenCore is preconfigured to boot verbosely (-v), so you can see exactly what macOS is doing. If it appears to be stuck, just wait a few seconds and it should continue on its own. If not, just reset the VM and try again.
If everything is working correctly, you should see your desktop and hardware-accelerated UI elements. Now it's up to you if you want to disable the disk picker (recommended if you want this VM to automatically start when the host boots) or to update OpenCore and the included kexts (kernel extensions). As of writing, the current version of OpenCore is 0.6.7, while the version of OpenCore you're running is 0.6.4. Also, there are updates available for Lilu, WhateverGreen and VoodooHDA. For more information, consult the OpenCore Install Guide and OpenCore Post-Install Guide.
At this point, you should also enable Screen Sharing in Preferences → Sharing if you haven't. This will allow you to control your VM without an attached keyboard or mouse. Depending on what VM will be your primary one, we'll remove the keyboard and mouse from the secondary VM as they will be shared from the primary using Barrier. When installing Barrier, you probably want both VMs running at the same time, and only one VM at a time can use a USB device.
Chapter 6: One machine to rule them all – Installing Barrier
Installing Barrier and setting it up is pretty straightforward. On Windows, I've downloaded the latest development build, which includes autostart support. On macOS, you can use the regular release from GitHub.
My setup uses the Windows VM as the Barrier server – it has a keyboard and mouse connected – and macOS is the client. So when installing, I selected "Server" on Windows and "Client" on macOS. The remaining setup should be pretty self-explanatory. And thanks to my Xiaomi monitor that supports picture-by-picture, I can now display both Windows and macOS next to each other (although squeezed) and move the cursor from Windows to macOS and back.
Epilogue: Conclusion
Running existing Windows and macOS installations in virtual machines is actually not that hard, and the result is very usable. I've been using this setup for about two weeks straight, mostly for gaming on Windows and productivity on macOS, and there are just a few minor things to look out for:
- Windows activation: Moving between native Windows and Windows running in a VM may mess with your activation. Luckily, in my case, it fixed itself after a day or two, but Microsoft may not like you changing the hardware too often. I have no idea how hardware ID activation actually works, but Windows thinks your actual PC and the virtual machine are two different pieces of hardware, which is a fair assumption.
- UTC time: Dual-booting Windows and any Unix-based operating system can be very annoying when it comes to keeping your system time synchronized. Unfortunately, this also applies to Windows running in a VM on a Unix-based host. By default, Windows stores local time in the RTC instead of UTC (which Unix uses). This can be fixed by running regedit and adding a new DWORD value called RealTimeIsUniversal with a value of 1 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation. However, this alone caused my time to be off by 3 hours at some point (thanks, Daylight Saving Time!), so I had to enable the w32time service, which automatically syncs the time at boot. For this, I'm using PowerShell: Set-Service -Name W32Time -StartupType Automatic. Alternatively, open services.msc, select "Windows Time" (the name may differ) and set the Startup Type to "Automatic". Reboot your Windows guest and the system time should now be your current local time.
- macOS: Black screen after verbose boot: Sometimes, when macOS boots automatically with the Manjaro host, there's a black screen after the verbose boot. The usual green flashes appear (they also occur on my 2012 Mac mini running High Sierra), but no Apple logo with a progress bar as there's supposed to be. This doesn't seem to happen when I boot it manually, and when it does, it can be worked around by simply rebooting the macOS guest.
- Slow network speeds: This may only be a temporary issue – I'm not constantly downloading games via Steam to test this – but you may experience slow network speeds in Windows. I've seen download speeds from less than 500 KB/s up to 4 MB/s, while my usual download speed is around 12 MB/s. Again, this was either temporary or it's caused by the older emulated network adapter model provided by QEMU. You can try setting the model to virtio, but this requires the VirtIO drivers to be installed in the guest; you can download the ISO containing the drivers (before changing the adapter model) from here. A sketch of the model change follows after this list.
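A minimal sketch of that change, assuming your Windows domain is simply named "win10":
# Opens the domain XML in your editor; change the interface's model line to virtio
$ virsh -c qemu:///system edit win10
#   <model type='virtio'/>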
Also, NVIDIA recently released an updated GeForce driver (version 465.89) that includes – next to resizable BAR support – "beta support for virtualization on GeForce GPUs". What this means is that basically NVIDIA disabled Code 43 when using a supported GPU inside a virtual machine (Kepler/600/700-series or later GPUs for desktops, Maxwell/900-series GPUs for mobile) and large portions of this article handling Code 43 may be considered obsolete. However, these instructions are still valid if for some reason you can't upgrade to 465.89 or later.
So, that's probably it for now. If you have any suggestions (e.g. I forgot something important), you can always hit me up on Twitter – my DMs should be open: @Sniper_GER. If there's anything that doesn't fit inside 1000 characters, here's my work email: sniperger <at> festival <dot> tf. Please note that I may not respond immediately; I rarely check my email.
I. macOS Big Sur 11.3
Recently, Apple released macOS Big Sur 11.3. Once the update was downloaded, the VM wouldn't boot after the first restart. Instead, I was greeted with the following error message printed by OpenCore:
OC: Kernel Patcher result 1 for kernel(algrey - cpuid_set_cpufamily - force CPUFAMILY_INTEL_PENRYN) - not found
Turns out that 11.3 (actually starting with beta 1) needs a modified kernel patch, which is already included in the list of AMD patches for OpenCore. To get this working under KVM, you need to add the following patch to Kernel → Patch after the existing patch called cpuid_set_cpufamily - force CPUFAMILY_INTEL_PENRYN, preferably before starting the upgrade to 11.3:
<dict>
<key>Arch</key>
<string>x86_64</string>
<key>Base</key>
<string></string>
<key>Comment</key>
<string>DhinakG - cpuid_set_cpufamily - force CPUFAMILY_INTEL_PENRYN - 11.3b1</string>
<key>Count</key>
<integer>1</integer>
<key>Enabled</key>
<true/>
<key>Find</key>
<data>MdIAAIA9AAAAAAZ1AA==</data>
<key>Identifier</key>
<string>kernel</string>
<key>Limit</key>
<integer>0</integer>
<key>Mask</key>
<data>//8AAP//AAAA////AA==</data>
<key>MaxKernel</key>
<string>20.99.99</string>
<key>MinKernel</key>
<string>20.4.0</string>
<key>Replace</key>
<data>swG6vE/qeOldAAAAkA==</data>
<key>ReplaceMask</key>
<data></data>
<key>Skip</key>
<integer>0</integer>
</dict>
Make sure the MaxKernel of the previous patch is set to "20.3.0", so that it doesn't apply to 11.3.
Addendum: Updates
- 04/19/21: Fixed a typo in chapter 3 when rebuilding the initramfs (mkinitcpio -p linux-zen instead of mkinitcpio -u linux-zen) and when checking the kernel drivers in use (lspci -nnk instead of lspci -nn).
- 04/30/21: Added instructions to prepare for the upgrade to Big Sur 11.3.