Build and run llama.cpp locally on Fedora 42 with ROCm

There are multiple tutorials online already on how to run