This disables the workaround on rocBLAS versions where the underlying bug is fixed (>= 4.0.0), eliminating the runtime cost and the unnecessary VRAM allocation incurred by eagerly loading all Tensile objects.
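A minimal sketch of the idea, gating the old eager-initialization workaround on the rocBLAS version. It assumes the `ROCBLAS_VERSION_MAJOR` macro is available via `<rocblas/rocblas.h>` and that the workaround was a `rocblas_initialize()` call at device-init time; the function name and exact guard here are illustrative, not the literal patch.

```cpp
#include <hip/hip_runtime.h>
#include <rocblas/rocblas.h>

// Hypothetical init helper: only run the eager-load workaround on old rocBLAS.
static void ggml_hip_init_rocblas_workaround() {
#if defined(ROCBLAS_VERSION_MAJOR) && ROCBLAS_VERSION_MAJOR < 4
    // rocBLAS < 4.0.0: eagerly load all Tensile kernel objects to work around
    // a multi-GPU initialization bug; this costs startup time and VRAM.
    rocblas_initialize();
    (void) hipDeviceSynchronize();
#endif
    // rocBLAS >= 4.0.0: the bug is fixed upstream, so the eager load (and its
    // VRAM/startup overhead) is skipped entirely.
}
```

Because the check is done with the version macro at compile time, builds against a fixed rocBLAS carry no trace of the workaround, while builds against older releases keep the previous behavior.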