llama.cpp/ggml
Latest commit defe2158dd by Johannes Gäßler (2025-06-23 13:11:31 +02:00):
CUDA: mul_mat_v support for batch sizes > 1 (#14262)

* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation
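
The second bullet ("use 64 bit math for initial offset calculation") refers to promoting the pointer-offset arithmetic to 64-bit so it cannot overflow when a batched tensor is large. The sketch below is a hypothetical illustration, not the actual kernel from this commit: the kernel name, signature, and launch configuration (one 32-thread block per batch/row pair) are assumptions chosen to show why the initial offset is computed with `int64_t` instead of `int`.

```cuda
// Hypothetical batched matrix-vector product y[b] = A[b] * x[b].
// Illustrative only; not the llama.cpp/ggml implementation.
#include <cstdint>
#include <cuda_runtime.h>

__global__ void mul_mat_vec_batched(
        const float * __restrict__ A,   // [nbatch, nrows, ncols]
        const float * __restrict__ x,   // [nbatch, ncols]
        float       * __restrict__ y,   // [nbatch, nrows]
        const int ncols, const int nrows,
        const int64_t stride_row, const int64_t stride_batch) {
    const int row   = blockIdx.x;
    const int batch = blockIdx.y;

    // 64-bit math for the initial offset: even if each factor fits in an int,
    // batch * stride_batch can exceed INT_MAX for large models, so the
    // multiplication must be done in 64 bits, not merely the final sum.
    const int64_t offset = (int64_t) batch * stride_batch + (int64_t) row * stride_row;
    const float * A_row = A + offset;
    const float * x_b   = x + (int64_t) batch * ncols;

    // Each thread accumulates a strided partial dot product.
    // Assumes the kernel is launched with blockDim.x == 32 (one warp per block).
    float sum = 0.0f;
    for (int col = threadIdx.x; col < ncols; col += blockDim.x) {
        sum += A_row[col] * x_b[col];
    }

    // Warp-level reduction via shuffles.
    for (int mask = warpSize/2; mask > 0; mask >>= 1) {
        sum += __shfl_xor_sync(0xffffffff, sum, mask);
    }

    if (threadIdx.x == 0) {
        y[(int64_t) batch * nrows + row] = sum;
    }
}
```

A launch such as `mul_mat_vec_batched<<<dim3(nrows, nbatch), 32>>>(...)` would cover batch sizes greater than 1 by mapping the batch index to `blockIdx.y`; the grid shape and single-warp block size here are assumptions for the sketch, not the PR's tuning choices.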