tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-19 17:17:40 +00:00
llama.cpp / examples at commit 04655063c47af6cbced295c8c7ad369402b15300
Latest commit: bac8bed248 by Georgi Gerganov, "eval-callback : check for empty input (#14539)", 2025-07-05 07:18:09 +03:00
| Name | Last commit | Date |
|------|-------------|------|
| batched | … | |
| batched.swift | … | |
| convert-llama2c-to-ggml | … | |
| deprecation-warning | … | |
| embedding | … | |
| eval-callback | … | |
| gen-docs | … | |
| gguf | … | |
| gguf-hash | … | |
| gritlm | … | |
| jeopardy | … | |
| llama.android | … | |
| llama.swiftui | … | |
| lookahead | … | |
| lookup | … | |
| parallel | … | |
| passkey | … | |
| retrieval | … | |
| save-load-state | … | |
| simple | fix: check model pointer validity before use (#13631) | 2025-05-19 13:25:41 +03:00 |
| simple-chat | … | |
| simple-cmake-pkg | … | |
| speculative | … | |
| speculative-simple | … | |
| sycl | … | |
| training | … | |
| chat-13B.bat | … | |
| chat-13B.sh | … | |
| chat-persistent.sh | … | |
| chat-vicuna.sh | … | |
| chat.sh | … | |
| CMakeLists.txt | … | |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | … | |
| json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| llama.vim | … | |
| llm.vim | … | |
| Miku.sh | … | |
| pydantic_models_to_grammar_examples.py | … | |
| pydantic_models_to_grammar.py | … | |
| reason-act.sh | … | |
| regex_to_grammar.py | … | |
| server_embd.py | … | |
| server-llama2-13B.sh | … | |
| ts-type-to-grammar.sh | … | |