When using group query attention, we have one workgroup per KV batch, which can leave very few workgroups (e.g. just 8 in some models). Enable split_k to spread the work across SMs; this helps a lot when the KV cache is large.
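
A minimal sketch of the kind of split_k heuristic described above (hypothetical function and parameter names, not the actual ggml code): if GQA leaves fewer workgroups than the device has SMs, split the KV dimension across extra workgroups so every SM gets work, at the cost of an extra reduction pass over the partial results.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper: pick a split_k factor for the flash-attention dispatch.
//   num_workgroups - workgroups before splitting (e.g. number of KV batches)
//   num_sm         - number of SMs / shader cores on the device
//   kv_len         - total KV cache length
//   kv_chunk_min   - smallest KV chunk per split that is still worth the
//                    overhead of the extra reduction pass
static uint32_t choose_split_k(uint32_t num_workgroups, uint32_t num_sm,
                               uint32_t kv_len, uint32_t kv_chunk_min) {
    // Plenty of workgroups already (no GQA, or many KV batches): don't split.
    if (num_workgroups >= num_sm) {
        return 1;
    }
    // Split until the grid roughly covers all SMs...
    uint32_t split_k = (num_sm + num_workgroups - 1) / num_workgroups;
    // ...but keep each split's share of the KV cache above the minimum chunk,
    // so splitting only kicks in when the KV cache is large enough to matter.
    uint32_t max_useful = std::max(1u, kv_len / kv_chunk_min);
    return std::min(split_k, max_useful);
}
```

For example, with 8 KV-batch workgroups on a GPU with 80 SMs and a long KV cache, this would return a split_k of 10, turning 8 workgroups into 80 and keeping all SMs busy.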