tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-17 21:51:27 -04:00
llama.cpp/tests at commit 441f51dca004debf8b275f1bdc08e0f1af7fd8f8
Latest commit afc8c19291 by bssrdf: ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669)
* fixed mul-mat error for old GPUs

* style fixes

* add mul mat src1 f16 test cases, fix more cases

ggml-ci

---------

Co-authored-by: bssrdf <bssrdf@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
2023-12-29 14:54:19 +02:00
CMakeLists.txt               gpt2 : Add gpt2 architecture integration (#4555)                    2023-12-28 15:03:57 +01:00
test-backend-ops.cpp         ggml : fix some mul mat cases + add tests for src1 F16 (ggml/669)   2023-12-29 14:54:19 +02:00
test-c.c                     …
test-double-float.cpp        …
test-grad0.cpp               cuda : improve cuda pool efficiency using virtual memory (#4606)    2023-12-24 14:34:22 +01:00
test-grammar-parser.cpp      …
test-llama-grammar.cpp       …
test-opt.cpp                 …
test-quantize-fns.cpp        …
test-quantize-perf.cpp       ggml : use ggml_row_size where possible (#4472)                     2023-12-14 20:05:21 +01:00
test-rope.cpp                …
test-sampling.cpp            …
test-tokenizer-0-falcon.cpp  …
test-tokenizer-0-falcon.py   ci : add flake8 to github actions (python linting) (#4129)          2023-11-20 11:35:47 +01:00
test-tokenizer-0-llama.cpp   …
test-tokenizer-0-llama.py    ci : add flake8 to github actions (python linting) (#4129)          2023-11-20 11:35:47 +01:00
test-tokenizer-1-bpe.cpp     …
test-tokenizer-1-llama.cpp   …