tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-17 13:40:55 -04:00
llama.cpp / examples at commit eb57fee51f7b4d78039f003249873c2eb46f12f6

Latest commit: Radoslav Gerganov (210d99173d): llama-bench : add support for the RPC backend (#7435), 2024-05-29 14:45:44 +03:00
..
baby-llama  …
batched  …
batched-bench  …
batched.swift  …
beam-search  …
benchmark  …
convert-llama2c-to-ggml  …
embedding  …
eval-callback  common : normalize naming style (#7462)  2024-05-22 20:04:20 +03:00
export-lora  …
finetune  …
gbnf-validator  …
gguf  …
gguf-split  …
gritlm  …
imatrix  common : normalize naming style (#7462)  2024-05-22 20:04:20 +03:00
infill  …
jeopardy  …
llama-bench  llama-bench : add support for the RPC backend (#7435)  2024-05-29 14:45:44 +03:00
llama.android  …
llama.swiftui  …
llava  llava : update clip.h (#7580)  2024-05-28 12:48:16 +10:00
lookahead  …
lookup  …
main  main: replace --no-special with --special (#7534)  2024-05-27 00:10:17 +10:00
main-cmake-pkg  …
parallel  …
passkey  …
perplexity  …
quantize  …
quantize-stats  …
retrieval  …
rpc  …
save-load-state  …
server  server: do not remove whitespace at the start of a completion chunk (#7524)  2024-05-28 14:55:51 +10:00
simple  …
speculative  …
sycl  …
tokenize  …
train-text-from-scratch  …
alpaca.sh  …
base-translate.sh  …
chat-13B.bat  …
chat-13B.sh  …
chat-persistent.sh  …
chat-vicuna.sh  …
chat.sh  …
CMakeLists.txt  …
gpt4all.sh  …
json_schema_to_grammar.py  …
json-schema-pydantic-example.py  …
llama2-13b.sh  …
llama2.sh  …
llama.vim  …
llm.vim  …
make-ggml.py  …
Miku.sh  …
pydantic_models_to_grammar.py  …
pydantic-models-to-grammar-examples.py  …
reason-act.sh  …
regex-to-grammar.py  …
server-embd.py  …
server-llama2-13B.sh  …
ts-type-to-grammar.sh  …
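For context on the latest commit shown above (llama-bench : add support for the RPC backend, #7435): the rpc example directory builds a standalone rpc-server that other tools in this repo can offload work to. A minimal sketch of a benchmark run over RPC follows, assuming an rpc-server binary that accepts --host/--port options and a --rpc flag on llama-bench that takes a comma-separated list of host:port endpoints; the exact flag names, addresses, and the model path are assumptions for illustration, not confirmed by this listing.

    # on the worker machine: start an RPC server (assumed flags and port)
    ./rpc-server --host 0.0.0.0 --port 50052

    # on the client machine: benchmark a model while offloading to the RPC worker (assumed flags)
    ./llama-bench -m models/7B/ggml-model-q4_0.gguf --rpc 192.168.1.10:50052

Multiple workers would be listed as additional comma-separated host:port entries after --rpc, under the same assumptions.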