tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-21 18:28:31 +00:00
Files: llama.cpp/common at commit 572b3141d343d7f947bf53b57513016e90db5680
Latest commit: 7c727fbe39 arg : add --no-mmproj-offload (#13093) by Xuan-Son Nguyen, 2025-04-24 14:04:14 +02:00
* arg : add --no-mmproj-offload
* Update common/arg.cpp
| Name | Last commit | Date |
| --- | --- | --- |
| cmake | … | |
| minja | sync: minja (#12739) | 2025-04-04 21:16:39 +01:00 |
| arg.cpp | arg : add --no-mmproj-offload (#13093) | 2025-04-24 14:04:14 +02:00 |
| arg.h | … | |
| base64.hpp | … | |
| build-info.cpp.in | … | |
| chat.cpp | tool-call: fix non-tool-calling grammar crashes w/ Qwen / Hermes 2 templates (#12900) | 2025-04-11 21:47:52 +02:00 |
| chat.h | server: extract <think> tags from qwq outputs (#12297) | 2025-03-10 10:59:03 +00:00 |
| CMakeLists.txt | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| common.cpp | common : Define cache directory on AIX (#12915) | 2025-04-12 17:33:39 +02:00 |
| common.h | arg : add --no-mmproj-offload (#13093) | 2025-04-24 14:04:14 +02:00 |
| console.cpp | … | |
| console.h | … | |
| json-schema-to-grammar.cpp | … | |
| json-schema-to-grammar.h | … | |
| json.hpp | … | |
| llguidance.cpp | upgrade to llguidance 0.7.10 (#12576) | 2025-03-26 11:06:09 -07:00 |
| log.cpp | … | |
| log.h | … | |
| ngram-cache.cpp | … | |
| ngram-cache.h | … | |
| sampling.cpp | llama: fix error on bad grammar (#12628) | 2025-03-28 18:08:52 +01:00 |
| sampling.h | … | |
| speculative.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| speculative.h | … | |
| stb_image.h | … | |