ggml: support CUDA's half type for aarch64 (#1455)

* ggml: support CUDA's half type for aarch64 in ggml_fp16_t definition
* ggml: use __CUDACC__ to recognise nvcc compiler
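The change described here concerns how `ggml_fp16_t` is typedef'd on aarch64: when the translation unit is compiled by nvcc (detected via the `__CUDACC__` macro), CUDA's `half` type is used instead of the compiler-specific `__fp16`. Below is a minimal sketch of that kind of guard; the exact macro arrangement and include in ggml.h may differ.

```c
// Sketch of a CUDA-aware ggml_fp16_t definition for aarch64
// (illustrative only, not the exact ggml.h diff).
#if defined(__ARM_NEON) && defined(__CUDACC__)
    // nvcc defines __CUDACC__; use CUDA's native half type so host and
    // device code agree on the 16-bit float representation
    #include <cuda_fp16.h>
    typedef half ggml_fp16_t;
#elif defined(__ARM_NEON)
    // plain aarch64 builds keep the compiler-provided __fp16 type
    typedef __fp16 ggml_fp16_t;
#else
    // other platforms store fp16 values as raw 16-bit integers
    #include <stdint.h>
    typedef uint16_t ggml_fp16_t;
#endif
```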