Mirror of https://github.com/ggml-org/llama.cpp.git
llama.cpp / tests (at commit 5b2b2dc6ae8086bff7c9b3c17fb435cf319b7185)
Latest commit: 5b2b2dc6ae by Georgi Gerganov, 2023-07-24 14:46:21 +03:00
ggml : sync (unary ops refactor, static-correctness) (#2370)
  * ggml : sync (unary ops, tests) ggml-ci
  * tests : remove unnecessary funcs
File                     Last commit                                                        Date
CMakeLists.txt           cmake : install targets (#2256)                                    2023-07-19 10:01:11 +03:00
test-double-float.c      all : be more strict about converting float to double (#458)      2023-03-28 19:48:20 +03:00
test-grad0.c             ggml : sync (unary ops refactor, static-correctness) (#2370)      2023-07-24 14:46:21 +03:00
test-opt.c               ggml : sync (unary ops refactor, static-correctness) (#2370)      2023-07-24 14:46:21 +03:00
test-quantize-fns.cpp    ggml : generalize quantize_fns for simpler FP16 handling (#1237)  2023-07-05 19:13:06 +03:00
test-quantize-perf.cpp   ggml : generalize quantize_fns for simpler FP16 handling (#1237)  2023-07-05 19:13:06 +03:00
test-sampling.cpp        ci : integrate with ggml-org/ci (#2250)                            2023-07-18 14:24:43 +03:00
test-tokenizer-0.cpp     mpi : add support for distributed inference via MPI (#2099)       2023-07-10 18:49:56 +03:00