tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-01 23:16:26 -04:00.
llama.cpp/examples at commit 269de86ba073b5dc9ce687c11a3bc4d7d873b962
Latest commit: e3965cf35a by Pierrick Hymbert, 2024-02-25 22:48:33 +01:00
server: tests - slow inference causes timeout on the CI (#5715)
* server: tests - longer inference timeout for CI
Directories:
baby-llama
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml
embedding
export-lora
finetune
gguf
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
main-cmake-pkg
parallel
passkey
perplexity
quantize
quantize-stats
save-load-state
server (last change: server: tests - slow inference causes timeout on the CI (#5715), 2024-02-25 22:48:33 +01:00)
simple
speculative
sycl
tokenize
train-text-from-scratch

Files:
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
gpt4all.sh
json-schema-to-grammar.py
llama2-13b.sh
llama2.sh
llama.vim
llm.vim
make-ggml.py
Miku.sh
pydantic_models_to_grammar.py
pydantic-models-to-grammar-examples.py
reason-act.sh
server-llama2-13B.sh