tqcq / llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-06-26 19:55:04 +00:00)
llama.cpp / tests

Latest commit: a84ab1da8dc6a59a5b67420ae1322f09503ffc72 by katsu560, 2023-06-26 19:47:02 +03:00
tests : fix quantize perf (#1990)
* fix test quantize perf
* avoid the global state
File                    Last commit                                                                                   Date
CMakeLists.txt          ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)  2023-05-13 15:56:40 +03:00
test-double-float.c     all : be more strict about converting float to double (#458)                                  2023-03-28 19:48:20 +03:00
test-grad0.c            tests : sync test-grad0 from ggml                                                             2023-06-24 19:40:18 +03:00
test-opt.c              ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)  2023-05-13 15:56:40 +03:00
test-quantize-fns.cpp   build : fix and ignore MSVC warnings (#1889)                                                  2023-06-16 21:23:53 +03:00
test-quantize-perf.cpp  tests : fix quantize perf (#1990)                                                             2023-06-26 19:47:02 +03:00
test-sampling.cpp       llama : fix top-p sampling to match the canonical definition (#1953)                          2023-06-24 13:15:01 +03:00
test-tokenizer-0.cpp    llama : make model stateless and context stateful (llama_state) (#1797)                       2023-06-24 11:47:58 +03:00