tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-08-17 13:40:55 -04:00
llama.cpp/ggml at commit 0c74b04376b0b9efc096480fe10f866afc8d7c1c
Latest commit: 0c74b04376 by Jeff Bolz, 2025-04-06 11:03:47 +02:00
vulkan: fix NaN issue in flash attention shader (#12776)
"Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum."
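The commit message points at a classic online-softmax hazard. Below is a minimal C sketch of the issue, not the shader's actual code (the real fix lives in a Vulkan GLSL flash-attention shader; all names here are illustrative): if the running maximum is seeded with -inf and a block contains only masked scores (also -inf), the rescale factor exp(old_max - new_max) evaluates exp(-inf - (-inf)), which is NaN. Seeding with -FLT_MAX/2 keeps that subtraction finite.

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Illustrative online-softmax step: fold one score into the running
 * maximum and return the factor used to rescale the running sum.
 * When old_max == new_max == -inf, the subtraction is (-inf) - (-inf),
 * i.e. NaN, which then poisons the whole accumulator. */
static float softmax_correction(float old_max, float score) {
    float new_max = fmaxf(old_max, score);
    return expf(old_max - new_max);
}

int main(void) {
    float masked = -INFINITY; /* a fully masked attention score */

    /* Seed with -inf: correction is exp(nan) = nan. */
    printf("init -inf:       %f\n", softmax_correction(-INFINITY, masked));

    /* Seed with -FLT_MAX/2: max stays at the seed, correction is
     * exp(0) = 1, so the accumulator remains finite. */
    printf("init -FLT_MAX/2: %f\n", softmax_correction(-FLT_MAX / 2, masked));
    return 0;
}
```

Presumably the seed is halved (rather than using -FLT_MAX) so that old_max - new_max stays finite even when a large finite score replaces the seed; in that case the correction factor underflows cleanly to 0, discarding the empty prior accumulator as intended.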
Name            Last commit                                                       Date
cmake           scripts : update sync + fix cmake merge                          2025-03-27 10:09:29 +02:00
include         metal : improve FA + improve MoE (#12612)                        2025-03-28 20:21:59 +02:00
src             vulkan: fix NaN issue in flash attention shader (#12776)        2025-04-06 11:03:47 +02:00
.gitignore      vulkan : cmake integration (#8119)                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add logging for native build options/vars (whisper/2935)  2025-03-30 08:33:31 +03:00