tqcq / llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-16 07:38:28 +00:00
llama.cpp / models (at b1688)

Latest commit: 36eed0c42c by Galunid
stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
2023-11-14 11:17:12 +01:00
..
.editorconfig                     …
ggml-vocab-aquila.gguf            …
ggml-vocab-baichuan.gguf          Add more tokenizer tests (#3742)                     2023-10-24 09:17:17 +02:00
ggml-vocab-falcon.gguf            …
ggml-vocab-gpt-neox.gguf          Add more tokenizer tests (#3742)                     2023-10-24 09:17:17 +02:00
ggml-vocab-llama.gguf             gguf : remove special-case code for GGUFv1 (#3901)   2023-11-02 11:20:21 +02:00
ggml-vocab-mpt.gguf               …
ggml-vocab-refact.gguf            Add more tokenizer tests (#3742)                     2023-10-24 09:17:17 +02:00
ggml-vocab-stablelm-3b-4e1t.gguf  stablelm : StableLM support (#3586)                  2023-11-14 11:17:12 +01:00
ggml-vocab-starcoder.gguf         Add more tokenizer tests (#3742)                     2023-10-24 09:17:17 +02:00
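
The ggml-vocab-*.gguf files above are small, vocabulary-only models used by llama.cpp's tokenizer tests. As a rough illustration of how such a file can be exercised, the following is a minimal C++ sketch that loads one of them through the library's C API roughly as it stood around tag b1688 and tokenizes a short string. The exact signatures shown here (the llama_backend_init(bool) and llama_tokenize(..., add_bos, special) forms) have changed across releases, so treat this as an assumption-laden sketch rather than the project's own test code.

```cpp
// Sketch: load a vocab-only GGUF and tokenize a string (API circa b1688; may differ today).
#include <cstdio>
#include <cstring>
#include <vector>
#include "llama.h"

int main() {
    llama_backend_init(false /* numa */);

    llama_model_params mparams = llama_model_default_params();
    mparams.vocab_only = true; // ggml-vocab-*.gguf files carry only tokenizer/vocab data

    llama_model * model = llama_load_model_from_file("models/ggml-vocab-llama.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load vocab model\n");
        return 1;
    }

    const char * text = "Hello world";
    std::vector<llama_token> tokens(64);
    const int n = llama_tokenize(model, text, (int) strlen(text),
                                 tokens.data(), (int) tokens.size(),
                                 true  /* add_bos */,
                                 false /* special */);

    for (int i = 0; i < n; ++i) {
        printf("%d ", tokens[i]);
    }
    printf("\n");

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

With vocab_only set, no tensor weights are loaded, which is why these test files can stay tiny while still letting the tokenizer be checked against reference token IDs.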