llama.cpp/ggml
Mengqing Cao d2d3200b38 cann : add Ascend NPU support (whisper/2336)
* enable Ascend NPU in src/whisper.cpp
* sync test-backend-ops with llama.cpp
2024-09-08 11:05:55 +03:00