Mirror of https://github.com/ggml-org/llama.cpp.git
* rename ggml-cpu-aarch64.c to .cpp

* reformat extra cpu backend.
  - clean Q4_0_N_M and IQ4_0_N_M
  - remove from "file" tensor type
  - allow only with dynamic repack
  - extract cpu extra bufts and convert to C++
    - hbm
    - "aarch64"
  - more generic use of extra buffer
  - generalise extra_supports_op
  - new API for "cpu-accel":
    - amx
    - aarch64

* clang-format

* Clean Q4_0_N_M ref

  Enable restrict on C++

* add op GGML_OP_MUL_MAT_ID for Q4_0_N_M with runtime repack

* added/corrected control on tensor size for Q4 repacking.

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add debug logs on repacks.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
9 lines · 202 B · C
#include "ggml-backend.h"
|
|
#include "ggml-cpu-impl.h"
|
|
|
|
// GGML internal header
|
|
|
|
#if defined(__AMX_INT8__) && defined(__AVX512VNNI__)
|
|
ggml_backend_buffer_type_t ggml_backend_amx_buffer_type(void);
|
|
#endif
|
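For context, here is a minimal sketch of how a caller might use the buffer type declared above, under assumptions not confirmed by this page: the include path "amx.h" for this internal header is hypothetical, and the program is compiled as part of a ggml build with AMX enabled (e.g. -mamx-int8 -mavx512vnni). It selects the AMX "extra" buffer type when the compile-time guard matches, falls back to the default CPU buffer type otherwise, and reports the choice through the public ggml-backend API.

#include <stdio.h>

#include "ggml-backend.h"
#if defined(__AMX_INT8__) && defined(__AVX512VNNI__)
#include "amx.h" // hypothetical path to the internal header shown above
#endif

// Pick the CPU "extra" buffer type for AMX when the build enables it,
// otherwise fall back to the default CPU buffer type. Tensors placed in
// the AMX buffer type are repacked at load time (dynamic repack) rather
// than stored pre-packed in the model file.
static ggml_backend_buffer_type_t pick_cpu_buft(void) {
#if defined(__AMX_INT8__) && defined(__AVX512VNNI__)
    return ggml_backend_amx_buffer_type();
#else
    return ggml_backend_cpu_buffer_type();
#endif
}

int main(void) {
    ggml_backend_buffer_type_t buft = pick_cpu_buft();
    // ggml_backend_buft_name() is part of the public ggml-backend API.
    printf("CPU buffer type: %s\n", ggml_backend_buft_name(buft));
    return 0;
}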