llama.cpp/scripts
Latest commit: 3420909dff by Diego Devesa (2024-12-01 16:12:41 +01:00)

ggml : automatic selection of best CPU backend (#10606)

* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks