# llama.cpp/example/run
The purpose of this example is to demonstrate a minimal usage of llama.cpp for running models.
```bash
./llama-run Meta-Llama-3.1-8B-Instruct.gguf
...
```
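
Under the hood, a runner like this reduces to a short load → tokenize → decode loop over llama.cpp's public C API. The sketch below illustrates that loop under some stated assumptions: the model path and prompt are placeholders, generation is capped at an arbitrary 32 tokens with a greedy sampler, and function names follow the API as of late 2024 (it evolves, so check `llama.h` in your checkout). It is a minimal illustration of the pattern, not the actual source of `llama-run`.

```cpp
// Minimal sketch of the loop a runner like llama-run wraps.
// Assumptions: model file exists locally, greedy sampling, 32-token cap.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main() {
    const std::string model_path = "Meta-Llama-3.1-8B-Instruct.gguf"; // placeholder path
    const std::string prompt     = "Hello";                           // placeholder prompt

    llama_backend_init();

    // Load the model with default parameters.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file(model_path.c_str(), mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Create an inference context (holds the KV cache).
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // Tokenize the prompt: a call with a null buffer returns the negated token count.
    const int n_prompt = -llama_tokenize(model, prompt.c_str(), prompt.size(), nullptr, 0, true, true);
    std::vector<llama_token> tokens(n_prompt);
    llama_tokenize(model, prompt.c_str(), prompt.size(), tokens.data(), tokens.size(), true, true);

    // Greedy sampler: always pick the most probable next token.
    llama_sampler * smpl = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(smpl, llama_sampler_init_greedy());

    // Decode the prompt, then sample and decode one token at a time.
    llama_batch batch = llama_batch_get_one(tokens.data(), tokens.size());
    for (int i = 0; i < 32; i++) {
        if (llama_decode(ctx, batch) != 0) {
            break;
        }
        llama_token tok = llama_sampler_sample(smpl, ctx, -1);
        if (llama_token_is_eog(model, tok)) {
            break; // end-of-generation token
        }
        char piece[128];
        const int n = llama_token_to_piece(model, tok, piece, sizeof(piece), 0, true);
        fwrite(piece, 1, n, stdout);
        batch = llama_batch_get_one(&tok, 1); // next step decodes only the new token
    }
    printf("\n");

    llama_sampler_free(smpl);
    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

To build something like this standalone, compile and link against `libllama` from your llama.cpp build; within the repo, the examples' CMake targets already handle that.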