tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-10 02:45:32 -04:00
llama.cpp/ggml/include at commit ec6c09d0fac1f2699c3ea1994e10482cb4b95e0f

Latest commit: fe92821ea9 "ggml : add bilinear upscale support (ggml/1185)" by Diego Devesa, 2025-04-11 00:17:47 +03:00
File             Last commit                                                                 Date
ggml-alloc.h     ggml : upgrade init_tensor API to return a ggml_status (#11854)             2025-02-28 14:41:47 +01:00
ggml-backend.h   ggml : upgrade init_tensor API to return a ggml_status (#11854)             2025-02-28 14:41:47 +01:00
ggml-blas.h      …
ggml-cann.h      …
ggml-cpp.h       …
ggml-cpu.h       ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154)   2025-03-06 02:26:10 +01:00
ggml-cuda.h      …
ggml-kompute.h   …
ggml-metal.h     repo : update links to new url (#11886)                                     2025-02-15 16:40:57 +02:00
ggml-opencl.h    …
ggml-opt.h       …
ggml-rpc.h       rpc : send hash when tensor data is above some fixed threshold (#12496)     2025-03-28 08:18:04 +02:00
ggml-sycl.h      …
ggml-vulkan.h    vulkan: Make Vulkan optional at runtime (#11493). (#11494)                  2025-02-10 07:17:21 +01:00
ggml.h           ggml : add bilinear upscale support (ggml/1185)                             2025-04-11 00:17:47 +03:00
gguf.h           …