tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-25 09:38:35 -04:00
Files at commit 1160de38f6d7f717b2fba61dcb1238ba974f8cc1: llama.cpp / common
Latest commit 9fb13f9584 (Siwen Yu): common : add --version option to show build info in CLI (#4433), 2023-12-13 14:50:14 +02:00
File                 Last commit (date)
base64.hpp           …
build-info.cpp.in    …
CMakeLists.txt       build : fix build info generation and cleanup Makefile (#3920), 2023-12-01 00:23:08 +02:00
common.cpp           common : add --version option to show build info in CLI (#4433), 2023-12-13 14:50:14 +02:00
common.h             llama : per-layer KV cache + quantum K cache (#4309), 2023-12-07 13:03:17 +02:00
console.cpp          …
console.h            …
grammar-parser.cpp   grammar-parser : fix typo (#4318), 2023-12-04 09:57:35 +02:00
grammar-parser.h     …
log.h                english : use typos to fix comments and logs (#4354), 2023-12-12 11:53:36 +02:00
sampling.cpp         common : fix compile warning, 2023-12-06 10:41:03 +02:00
sampling.h           sampling : custom samplers order (#4285), 2023-12-05 12:05:51 +02:00
stb_image.h          …
train.cpp            train : move number of gpu layers argument parsing to common/train.cpp (#4074), 2023-11-17 17:19:16 +02:00
train.h              …