* This is not needed for normal use, where the result is read back using `tensor_get`, but it allows the perf mode of `test-backend-ops` to properly measure performance.
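
A minimal sketch of why this matters for timing, using the public ggml-backend API (`ggml_backend_graph_compute`, `ggml_backend_tensor_get`, `ggml_backend_synchronize`). This illustrates the general pattern only and is not the code changed in this commit:

```c
// Illustration only: why timing needs an explicit wait when no result is read.
#include "ggml.h"
#include "ggml-backend.h"

// Normal use: the result is read back with ggml_backend_tensor_get, so the
// data must be ready by the time the copy returns and no extra wait is needed.
static void compute_and_read(ggml_backend_t backend, struct ggml_cgraph * graph,
                             struct ggml_tensor * result, void * out) {
    ggml_backend_graph_compute(backend, graph);
    ggml_backend_tensor_get(result, out, 0, ggml_nbytes(result));
}

// Perf mode: nothing is read back, so without an explicit synchronization the
// timer could stop after the work is merely submitted, not after it finished.
static int64_t compute_time_us(ggml_backend_t backend, struct ggml_cgraph * graph) {
    const int64_t t_start = ggml_time_us();
    ggml_backend_graph_compute_async(backend, graph);
    ggml_backend_synchronize(backend); // wait for the backend to actually finish
    return ggml_time_us() - t_start;
}
```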