tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-06-30 12:55:17 +00:00
llama.cpp/common at commit c8d6a1f34ab6f1b6bd468d256e535a61f98f114c

Latest commit: Georgi Gerganov cc44877486 "log : disable pid in log filenames" (2023-10-25 10:09:16 +03:00)
File               | Last commit                                                                | Date
-------------------|----------------------------------------------------------------------------|---------------------------
CMakeLists.txt     | common : fix mirostat state when using multiple sequences (#3543)          | 2023-10-11 22:35:46 +03:00
common.cpp         | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
common.h           | sampling : refactor init to use llama_sampling_params (#3696)              | 2023-10-20 21:07:23 +03:00
console.cpp        | check C++ code with -Wmissing-declarations (#3184)                         | 2023-09-15 15:38:27 -04:00
console.h          | gguf : new file format with flexible meta data (beta) (#2398)              | 2023-08-21 23:07:43 +03:00
grammar-parser.cpp | ggml : fix rope + llama minor optimizations (#3560)                        | 2023-10-20 13:02:12 +03:00
grammar-parser.h   | gguf : new file format with flexible meta data (beta) (#2398)              | 2023-08-21 23:07:43 +03:00
log.h              | log : disable pid in log filenames                                         | 2023-10-25 10:09:16 +03:00
sampling.cpp       | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
sampling.h         | sampling : refactor init to use llama_sampling_params (#3696)              | 2023-10-20 21:07:23 +03:00
stb_image.h        | examples: support LLaVA v1.5 (multimodal model) (#3436)                    | 2023-10-12 18:23:18 +03:00
train.cpp          | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
train.h            | train : finetune LORA (#2632)                                              | 2023-09-28 21:40:11 +03:00