tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-19 17:17:40 +00:00
llama.cpp/examples at commit c4fe84fb0d28851a5c10e5a633f82ae2ba3b7fae
Latest commit: 1d78fecdab by slaren, "Fix LoRA acronym (#1145)", 2023-04-23 23:03:44 +02:00
Name              Last commit, date
benchmark/        …
embedding/        …
main/             Fix LoRA acronym (#1145), 2023-04-23 23:03:44 +02:00
perplexity/       Show perplexity ETA in hours and minutes (#1096), 2023-04-21 14:57:57 +02:00
quantize/         llama : multi-threaded quantization (#1075), 2023-04-20 20:42:27 +03:00
quantize-stats/   llama : multi-threaded quantization (#1075), 2023-04-20 20:42:27 +03:00
alpaca.sh         examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107), 2023-04-22 09:54:33 +03:00
chat-13B.bat      …
chat-13B.sh       …
chat.sh           …
CMakeLists.txt    …
common.cpp        Add LoRA support (#820), 2023-04-17 17:28:55 +02:00
common.h          llama : have n_batch default to 512 (#1091), 2023-04-22 11:27:05 +03:00
gpt4all.sh        …
Miku.sh           …
reason-act.sh     …