tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git
synced 2025-07-18 16:47:42 +00:00
llama.cpp / examples, at commit cb1c0727bd59803b439b6a3af121c99e6393ff3d
Latest commit cb1c0727bd by Kawrakow: HellaSwag: split token evaluation into batches if needed (#2681)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-21 11:11:31 +03:00
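The commit above addresses HellaSwag scoring by splitting token evaluation into batches when the input exceeds the batch size. A minimal sketch of that batching pattern (hypothetical names, not the actual llama.cpp implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Evaluate `tokens` in chunks of at most `n_batch` tokens, invoking `eval`
// on each chunk with its offset into the sequence. Returns the number of
// batches submitted. `eval` stands in for a model-evaluation call.
template <typename Eval>
std::size_t eval_in_batches(const std::vector<int> & tokens, std::size_t n_batch, Eval eval) {
    assert(n_batch > 0);
    std::size_t n_calls = 0;
    for (std::size_t i = 0; i < tokens.size(); i += n_batch) {
        // last chunk may be shorter than n_batch
        const std::size_t n = std::min(n_batch, tokens.size() - i);
        eval(tokens.data() + i, n, /*n_past=*/i);
        ++n_calls;
    }
    return n_calls;
}
```

With a 10-token sequence and a batch size of 4 this submits chunks of 4, 4, and 2 tokens, so long inputs never exceed the evaluation batch limit.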
baby-llama/
benchmark/
convert-llama2c-to-ggml/
embd-input/
embedding/
jeopardy/
llama-bench/
main/
metal/
perplexity/    HellaSwag: split token evaluation into batches if needed (#2681)    2023-08-21 11:11:31 +03:00
quantize/
quantize-stats/
save-load-state/
server/
simple/
train-text-from-scratch/
alpaca.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
common.cpp
common.h
console.cpp
console.h
gpt4all.sh
grammar-parser.cpp
grammar-parser.h
json-schema-to-grammar.py
llama2-13b.sh
llama2.sh
llama.vim
llm.vim
make-ggml.py
Miku.sh
reason-act.sh
server-llama2-13B.sh