How to properly enable IOMMU on Linux?

On my device, when running fwupdmgr security, the only thing missing to reach HSI:3 was IOMMU, which was reported as Not found.
Looking online for how to enable it, I found that I had to enable Intel VT-d in the BIOS and add the kernel argument intel_iommu=on (for Intel processors). The BIOS setting was already enabled; I added the kernel argument, and now fwupdmgr security successfully reports IOMMU as Enabled, and HSI:3.
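For reference, this is roughly what the procedure looked like; the sketch below assumes an rpm-ostree-based system like Kinoite, while on a traditional distro you would instead add the argument to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the GRUB configuration:

    # Kinoite / other rpm-ostree systems: append the kernel argument, then reboot
    sudo rpm-ostree kargs --append=intel_iommu=on
    systemctl reboot

    # After the reboot, confirm the argument is active and re-run the check
    cat /proc/cmdline
    fwupdmgr security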

However, this procedure raised some questions:

  1. When searching for how to enable the IOMMU, I found it mentioned that for AMD processors the kernel parameter to add is amd_iommu=on. However, it seems this information is now outdated, as that parameter no longer exists: see here for the possible values of amd_iommu. There is amd_iommu=force_isolation, but I’m not sure if that’s the AMD equivalent of intel_iommu=on.

In the secureblue documentation both intel_iommu=on and amd_iommu=force_isolation are listed with the same description (“Mitigate DMA attacks by enabling IOMMU”), suggesting to me that they are indeed equivalent.
But amd_iommu=force_isolation is in the list of “Unstable kargs” that “may cause issues on some hardware”. What kind of issues could arise when it is enabled? Should I enable it on AMD hardware?

  2. Actually, secureblue lists iommu=force and intel_iommu=on together as “Mitigate DMA attacks by enabling IOMMU”. So do I actually need both?
    I also see iommu.passthrough=0, iommu.strict=1 and efi=disable_early_pci_dma but I don’t understand what they do and whether I should want them.
    What is the difference between the various IOMMU-related kernel arguments, and what happens if I enable some but not all of them? How do the various arguments affect each other?
    Can I simply set iommu=force without setting any hardware-specific arguments like amd_iommu and intel_iommu?

  3. What about processors from other manufacturers and/or with different architectures?

  4. Does the IOMMU usually come enabled by default in Linux distributions?
    What are some distributions that have the IOMMU enabled by default, without pre-set kernel arguments? (For example, I noticed that some distributions ship with kernel lockdown disabled by default and require a kernel argument to enable it, while others come with kernel lockdown enabled by default without any pre-set kernel argument.)

  5. What happens if, in the future, the kernel argument that I enabled becomes deprecated? For example, if I had had an AMD processor with amd_iommu=on enabled, what would have happened upon updating to a newer kernel version that no longer supported that argument?
    This is why I would prefer to use a distribution that comes with IOMMU enabled by default, instead of having to enable it myself (so that I do not have to constantly worry about my configuration becoming deprecated).

  6. Since fwupdmgr security originally reported IOMMU as Not found, is it possible that my hardware does not support IOMMU, and enabling it with a kernel argument does nothing except trick fwupd into reporting IOMMU as enabled? Is there a way to verify that I correctly enabled IOMMU?

I hope to get answers that apply in general to other hardware and software, but in any case my hardware is an HP ProBook 450 G8 laptop (Intel Core i7-1165G7) with Fedora Kinoite installed.
In my BIOS the following relevant options are enabled: VTx, VTd, DMA Protection, and Pre-boot DMA protection (this last one is set to All PCIe devices; the only other possible option is Disable). There is also a setting called Measure additional DMA settings, which is disabled by default and which I did not enable; its possible values are Disable, PCR1, PCR7. I do not know what it is; the description says: “When enabled, includes the state of Virtualization Technology (VTx), Virtualization Technology for Directed I/O (VTd), and the Pre-Boot DMA Protection settings into the measurement for the specified PCR”.


I’m no expert, but in the past I set up Proxmox (running on a Debian server) with a few LVMs on an Intel NUC and played with those settings. I still have the server up, but I don’t mess with those settings anymore. Things are very stable.

Yes, it is.

A few, but that doesn’t mean they will happen. The most notable ones I can think of are virtualization problems: if you use virtualization technologies (like KVM or Xen), forcing strict isolation might interfere with how virtual machines access hardware. However, I’d try enabling it before ruling it out.

Yeah, might be helpful.

Yes.

Generally it’s better to do some research before playing with things you’re not familiar with. I don’t recall for sure whether the content creator here explains those items, but it may be helpful to watch https://www.youtube.com/watch?v=GoZaMgEgrHw&t=378s

If your distro is recommending it, it may be better to contact them and ask. I think @RoyalOughtness is still around and sometimes responds to questions, but if I’m not mistaken it was agreed that they wouldn’t be answering questions on this forum, and the recommended channel is their Discord.

Edit: Just noticed that you are actually using Kinoite and didn’t rebase to secureblue, so I’d look at the Kinoite documentation and use their channels to get some help.

Hope this helps a little bit.


I can explain this:

  • On Intel you need intel_iommu=on to turn it on
  • On AMD the kernel will automatically turn the IOMMU on
  • On AMD you can further ensure all devices are isolated and none are exempted with amd_iommu=force_isolation
    • With Linux 6.13 and higher this has recently caused black screens on newer amdgpu hardware; it will likely be fixed in the future.
  • Regardless of both, you should use iommu=force to keep the IOMMU enabled in cases where it might otherwise be skipped, such as low-memory situations. AFAIK this will not turn it on by itself; it only ensures it stays on in more cases.
  • You should also enable strict TLB invalidation via iommu.strict=1; this ensures stale mappings are flushed immediately when they change, instead of lazily in batches.
  • You also want iommu.passthrough=0; this ensures DMA always goes through IOMMU translation instead of bypassing it.
  • Finally, efi=disable_early_pci_dma can be used to minimize the window for DMA before the IOMMU is fully initialized; however, this commonly causes issues with GPUs getting stuck at a black screen, so test it on your system before setting it permanently (see the sketch below).
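Putting it together, here is a rough sketch of what the combined command line and a sanity check could look like on an Intel system (a sketch, not a definitive recipe; adjust for AMD, and test the riskier arguments before making them permanent). The last two commands also address your question about verifying that the IOMMU is really active and not just reported as enabled:

    # Example combined kernel command line for an Intel system (sketch):
    #   intel_iommu=on iommu=force iommu.strict=1 iommu.passthrough=0 efi=disable_early_pci_dma

    # Verify the IOMMU actually initialized: look for "DMAR: IOMMU enabled"
    # on Intel, or "AMD-Vi" messages on AMD
    sudo dmesg | grep -i -e DMAR -e IOMMU -e AMD-Vi

    # If this directory contains numbered groups, devices are behind the IOMMU
    ls /sys/kernel/iommu_groups/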

Thank you very much