https://youtu.be/Fkk9DI-8el4?t=26
>>38161680
>>38161725
It COULD be ~twice as fast? It looks like either python itself or something the whole shebang is using is locked to a single NUMA node, since I never saw python.exe exceed 50% CPU usage on the dot. There's a Microsoft/DirectML version of pytorch that could technically run on AMD GPUs, but it's based on pytorch 1.8, so I don't think it will run at all - webui is using 1.12.1. From what I read while this was generating, it might be possible to make ROCm work with WSL2. Server 2019 won't run it, but my gayming rig with an RX5700 might.
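If anyone wants to check the pinning theory, a rough sketch like this (psutil isn't something webui ships with, you'd have to pip install it, and this is just my guess at how to verify it) will show whether python.exe is only allowed to run on a subset of the logical CPUs, which would line up with it capping out at exactly 50% on a two-node box:

import os
import psutil

proc = psutil.Process(os.getpid())
allowed = proc.cpu_affinity()            # logical CPUs this process may run on
total = psutil.cpu_count(logical=True)   # all logical CPUs on the machine

print(f"allowed CPUs: {len(allowed)} of {total}")
if len(allowed) < total:
    # Widen the affinity mask to every logical CPU. Whether the scheduler
    # then actually spreads the work across both NUMA nodes is a separate
    # question, so treat this as a diagnostic, not a fix.
    proc.cpu_affinity(list(range(total)))

You could drop something like that into the launch script (or run it from another process against webui's PID) to see if the 50% ceiling is an affinity mask or just single-threaded python.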
>>38161950
I have a subscription already. I was hoping to get it running on any of these stupid AMD GPUs so I could try upscaling, using embeds, etc.