tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-15 20:53:00 -04:00
llama.cpp / examples

Latest commit: c75753a01b911d726e5843342c88b49b6550c82b "server : infill gen ends on new line" by Georgi Gerganov, 2025-03-07 17:19:55 +02:00
batched | …
batched-bench | …
batched.swift | …
convert-llama2c-to-ggml | …
cvector-generator | …
deprecation-warning | …
embedding | …
eval-callback | …
export-lora | …
gbnf-validator | …
gen-docs | …
gguf | …
gguf-hash | …
gguf-split | …
gritlm | …
imatrix | …
infill | …
jeopardy | …
llama-bench | llama-bench : fix unexpected global variable initialize sequence issue (#11832) | 2025-02-14 02:13:43 +01:00
llama.android | …
llama.swiftui | …
llava | …
lookahead | …
lookup | …
main | main: allow preloading conversation with -p and add -st / --single-turn (#12145) | 2025-03-04 12:19:39 -04:00
parallel | …
passkey | …
perplexity | Fix: Compile failure due to Microsoft STL breaking change (#11836) | 2025-02-12 21:36:11 +01:00
quantize | …
quantize-stats | …
retrieval | …
rpc | …
run | Adding UTF-8 support to llama.cpp (#12111) | 2025-03-03 12:44:56 +00:00
save-load-state | …
server | server : infill gen ends on new line | 2025-03-07 17:19:55 +02:00
simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
simple-chat | …
simple-cmake-pkg | …
speculative | …
speculative-simple | …
sycl | …
tokenize | …
tts | …
chat-13B.bat | …
chat-13B.sh | …
chat-persistent.sh | …
chat-vicuna.sh | …
chat.sh | …
CMakeLists.txt | …
convert_legacy_llama.py | …
json_schema_pydantic_example.py | …
json_schema_to_grammar.py | …
llama.vim | …
llm.vim | …
Miku.sh | …
pydantic_models_to_grammar_examples.py | …
pydantic_models_to_grammar.py | …
reason-act.sh | …
regex_to_grammar.py | …
server_embd.py | …
server-llama2-13B.sh | …
ts-type-to-grammar.sh | …