tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-26 19:55:04 +00:00
llama.cpp/tests at commit a84ab1da8dc6a59a5b67420ae1322f09503ffc72

Latest commit: katsu560, a84ab1da8d — tests : fix quantize perf (#1990), 2023-06-26 19:47:02 +03:00
  * fix test quantize perf
  * avoid the global state
CMakeLists.txt
  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) (2023-05-13 15:56:40 +03:00)

test-double-float.c
  all : be more strict about converting float to double (#458) (2023-03-28 19:48:20 +03:00)

test-grad0.c
  tests : sync test-grad0 from ggml (2023-06-24 19:40:18 +03:00)

test-opt.c
  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360) (2023-05-13 15:56:40 +03:00)

test-quantize-fns.cpp
  build : fix and ignore MSVC warnings (#1889) (2023-06-16 21:23:53 +03:00)

test-quantize-perf.cpp
  tests : fix quantize perf (#1990) (2023-06-26 19:47:02 +03:00)

test-sampling.cpp
  llama : fix top-p sampling to match the canonical definition (#1953) (2023-06-24 13:15:01 +03:00)

test-tokenizer-0.cpp
  llama : make model stateless and context stateful (llama_state) (#1797) (2023-06-24 11:47:58 +03:00)