tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-09-16 12:25:27 -04:00
llama.cpp/tests (at ref b4823)
Latest commit: 0cbee131ad — cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129) — cmdr2, 2025-03-03 18:18:11 +02:00 (ggml-ci)
.gitignore
CMakeLists.txt
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp
test-barrier.cpp
test-c.c
test-chat-template.cpp
test-chat.cpp
test-double-float.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp
test-grammar-parser.cpp
test-json-schema-to-grammar.cpp — sampling : support for llguidance grammars (#10224), 2025-02-02 09:55:32 +02:00
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp
test-quantize-perf.cpp
test-rope.cpp
test-sampling.cpp
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py