tqcq/llama.cpp (mirror of https://github.com/ggml-org/llama.cpp.git)
Commit: 7c26775adb579e92b59c82e8084c07a1d0f75e9c
File: llama.cpp/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-f16.cu

6 lines, 176 B, CUDA source

// This file has been autogenerated by generate_cu_files.py, do not edit manually.

#include "../fattn-vec-f16.cuh"

DECL_FATTN_VEC_F16_CASE(64, GGML_TYPE_F16, GGML_TYPE_F16);

Blame history:
- 2024-06-05: CUDA: refactor mmq, dmmv, mmvq (#7716): refactor mmq, dmmv, mmvq; fix out-of-bounds write; struct for qk, qr, qi; fix cmake build; mmq_type_traits
- 2024-06-01: CUDA: quantized KV support for FA vec (#7527): quantized KV support for FA vec; try CI fix; fix commented-out kernel variants; add q8_0 q4_0 tests; fix nwarps > batch size; split fattn compile via extern templates; fix flake8; fix metal tests; fix cmake; make generate_cu_files.py executable; add autogenerated .cu files; fix AMD; error if type_v != FP16 and not flash_attn; remove obsolete code
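The 2024-06-01 commit above mentions splitting the FlashAttention compile via extern templates, which is what these autogenerated instance files implement: each .cu file forces the compiler to emit exactly one (head size, K type, V type) combination of the templated launcher. The following is a minimal, self-contained sketch of that explicit-instantiation pattern; the names, signatures, and the trivial launcher body are simplified stand-ins, not the real declarations from fattn-vec-f16.cuh.

#include <cstdio>

// Stand-in for the ggml type enum used to select the K/V cache storage format.
enum ggml_type { GGML_TYPE_F16, GGML_TYPE_Q4_0, GGML_TYPE_Q8_0 };

// Stand-in for the templated launcher defined once in the shared .cuh header.
template <int head_size, ggml_type type_K, ggml_type type_V>
void flash_attn_vec_f16_case() {
    std::printf("flash-attn vec f16 launcher, head_size = %d\n", head_size);
}

// The DECL_* macro expands to an explicit instantiation of the launcher for
// one parameter combination, so only the combinations that some instance
// file actually names get compiled.
#define DECL_FATTN_VEC_F16_CASE(head_size, type_K, type_V) \
    template void flash_attn_vec_f16_case<head_size, type_K, type_V>()

// What an autogenerated instance file such as this one boils down to:
DECL_FATTN_VEC_F16_CASE(64, GGML_TYPE_F16, GGML_TYPE_F16);

Putting one instantiation per translation unit keeps any single compile job small and lets the build system compile the many head-size and KV-type combinations in parallel, instead of instantiating them all in one very large object file.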