Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-09-01 21:04:58 -04:00)
* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

  ggml-ci

* metal : use FA-vec kernel up to batch size 20

  ggml-ci
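A minimal sketch of the batch-size cutoff described by the last item, assuming a hypothetical helper and constant name; the real dispatch logic lives in ggml's Metal backend and may differ:

```cpp
// Hypothetical sketch only: pick the flash-attention vector (FA-vec) kernel
// for small batches and a standard FA kernel otherwise. The threshold of 20
// mirrors the commit message; the function and constant names are illustrative.
#include <cstdio>

constexpr int FA_VEC_MAX_BATCH = 20; // assumed cutoff taken from the commit message

static bool use_fa_vec_kernel(int n_batch) {
    // the vector kernel is geared toward few rows, i.e. small batch sizes
    return n_batch <= FA_VEC_MAX_BATCH;
}

int main() {
    for (int n_batch : {1, 8, 20, 21, 64}) {
        std::printf("n_batch = %2d -> %s\n", n_batch,
                    use_fa_vec_kernel(n_batch) ? "FA-vec kernel" : "standard FA kernel");
    }
    return 0;
}
```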