tqcq/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2025-08-18 05:56:00 -04:00
Files in llama.cpp/ggml/src/ggml-sycl at commit 0f1a39f3439825acf7e3a1663566d410be152170

Latest commit touching this directory: Alberto Cabrera Pérez, 2ec846d558, "sycl : fix powf call in device code (#8368)", 2024-07-08 14:22:41 +01:00
File            Last commit                                                         Date
dpct            Enabled more data types for oneMKL gemm_batch (#8236)               2024-07-05 13:23:25 +01:00
backend.hpp     [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
common.cpp      …
common.hpp      rm get_work_group_size() by local cache for performance (#8286)     2024-07-05 10:32:29 +08:00
convert.cpp     Dequant improvements rebase (#8255)                                 2024-07-03 09:55:34 +08:00
convert.hpp     …
dequantize.hpp  Dequant improvements rebase (#8255)                                 2024-07-03 09:55:34 +08:00
dmmv.cpp        [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
dmmv.hpp        …
mmq.cpp         …
mmq.hpp         …
mmvq.cpp        [SYCL] Fix the sub group size of Intel (#8106)                      2024-07-02 10:16:00 +08:00
mmvq.hpp        …
norm.cpp        [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
norm.hpp        [SYCL] Fix the sub group size of Intel (#8106)                      2024-07-02 10:16:00 +08:00
presets.hpp     [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
rope.cpp        sycl : fix powf call in device code (#8368)                         2024-07-08 14:22:41 +01:00
rope.hpp        …
softmax.cpp     [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
softmax.hpp     [SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)                    2024-07-05 13:06:13 +08:00
vecdotq.hpp     CUDA: refactor and optimize IQ MMVQ (#8215)                         2024-07-01 20:39:06 +02:00
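Several of the commits above ("[SYCL] Fix WARP_SIZE=16 bug of Intel GPU (#8266)", "[SYCL] Fix the sub group size of Intel (#8106)") concern Intel GPUs exposing sub-group sizes other than the fixed 32-lane warp assumed by CUDA-derived code. The following is a minimal sketch, not taken from this repository, showing how the sub-group sizes a device actually supports can be queried with the standard SYCL 2020 API:

```cpp
// Minimal sketch (assumed example, not llama.cpp code): list the sub-group
// ("warp") sizes a SYCL device supports. Intel GPUs commonly report sizes
// such as 8, 16, and 32, which is why a hard-coded WARP_SIZE of 32 breaks.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q;                         // default device selection
    sycl::device dev = q.get_device();

    std::cout << "Device: "
              << dev.get_info<sycl::info::device::name>() << '\n';

    // SYCL 2020: the set of sub-group sizes kernels may be launched with.
    for (size_t sg : dev.get_info<sycl::info::device::sub_group_sizes>()) {
        std::cout << "  supported sub-group size: " << sg << '\n';
    }
    return 0;
}
```

Built with a SYCL compiler (for example `icpx -fsycl`), this prints the sizes that device code in a directory like this one has to be prepared to run with.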