  • The other points have been answered, so I’ll try to give a surface view of Magma. It’s basically an abstraction layer for virtual GPU drivers used in VMs. Currently, you need specific implementations to handle all of the pathways between the different types of VM guests and hosts, which gets complicated fast and duplicates a lot of work. The idea is that Magma abstracts this away, so host and guest GPU drivers only need to interface with Magma. That means you can swap out different host OSes/GPU drivers and different guest OSes/GPU drivers, and as long as they interface with Magma, they should “just work”. (There’s a rough sketch of the idea at the end of this comment.)

    Of course, whether it will work out that way in practice remains to be seen. I think Google is using it internally but it’s not in Mesa yet, so it may not even roll out widely. You can follow the MR if you want more detail or to see its progress.

    If you’re wondering why Google is implementing this, it appears to be for Fuchsia and Android, and for compatibility between those two and desktop Linux, with Windows support as an additional value-add. Chromebooks in particular should benefit, since ChromeOS is being retired, I believe.

    And as an aside, unlike some of the traditional GPU implementations you’d find in VMs, these are (or will be) pretty much just the normal graphics drivers you’d use on the host. They’re generally called “native contexts” and have been implemented for AMD and Intel at least, but only on non-Windows systems for now. These implementations alone, once they’re widely supported, should give near-native GPU performance in VMs without having to use GPU passthrough (i.e. passing a physical GPU through to the VM guest). So even without Magma there’s some promising stuff happening, albeit mainly on the Linux host -> Linux guest pathway.
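
    For a sense of the shape of the idea, here’s a hypothetical sketch of the abstraction-layer pattern in C. To be clear, this is not the real Magma API; every name below is made up for illustration:

    ```c
    /* Hypothetical sketch of the abstraction idea, NOT the real Magma API.
     * The point: guest drivers talk to one generic interface, and each host
     * backend implements it, so guest/host pairings need no bespoke glue. */
    #include <stdint.h>
    #include <stddef.h>

    /* One generic interface that every host backend implements. */
    typedef struct vgpu_backend {
        int  (*connect)(void **ctx);                           /* open a device connection */
        int  (*submit)(void *ctx, const void *cmds, size_t n); /* queue GPU command buffers */
        int  (*wait)(void *ctx, uint64_t fence);               /* sync on completion */
        void (*disconnect)(void *ctx);
    } vgpu_backend;

    /* A guest driver only ever sees the generic interface... */
    int guest_flush(const vgpu_backend *be, void *ctx,
                    const void *cmds, size_t n, uint64_t fence)
    {
        if (be->submit(ctx, cmds, n) != 0)
            return -1;
        return be->wait(ctx, fence);
    }
    /* ...while each host (Linux/Mesa, Windows, Fuchsia, ...) plugs its own
     * implementation into the same function table. Swapping the host or the
     * guest then only requires that both sides speak this interface. */
    ```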

  • Another useful use case: the tool also works with mpv to interpolate videos to a higher frame rate. I know that subjectively not everyone likes that for film, but for footage that doesn’t rely on sets and the like, such as sport and YouTube videos, it’s a nice improvement.

    In terms of quality vs performance, I’d say it’s somewhere between the lower-quality SVP default and the higher-quality (but very resource-intensive) RIFE implementation. There’s also LSFG_PERF_MODE=1 and decreasing the flow rate; the former was a pretty obvious decline in quality, but it might be needed on slower GPUs.
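
    As a minimal illustration of the performance-mode toggle, here’s a tiny launcher sketch in C. The only detail taken from above is the LSFG_PERF_MODE=1 environment variable; launching mpv this way and the argument handling are just assumptions for the example:

    ```c
    /* Sketch: run mpv with the performance-mode env var from above set.
     * Everything besides LSFG_PERF_MODE=1 is an assumption for illustration. */
    #include <stdlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <video>\n", argv[0]);
            return 1;
        }
        /* Trade some interpolation quality for speed on slower GPUs. */
        setenv("LSFG_PERF_MODE", "1", 1);
        execlp("mpv", "mpv", argv[1], (char *)NULL);
        perror("execlp mpv"); /* only reached if mpv failed to launch */
        return 1;
    }
    ```

    The same pattern applies to any of the env-var tweaks mentioned above: set them in the environment before mpv starts so the layer picks them up.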