tqcq/llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git (last synced 2025-06-27 03:55:20 +00:00)
Branch: 0cc4m/vulkan-device-architecture
Path: llama.cpp/cmake
Latest commit: c0d4843225 build : fix llama.pc (#11658), Adrien Gallouët <adrien@gallouet.fr>, 2025-02-06 13:08:13 +02:00
File | Last commit | Last change
arm64-apple-clang.cmake | Add apple arm to presets (#10134) | 2024-11-02 15:35:31 -07:00
arm64-windows-llvm.cmake | ggml : prevent builds with -ffinite-math-only (#7726) | 2024-06-04 17:01:09 +10:00
arm64-windows-msvc.cmake | Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191) | 2024-05-16 12:47:36 +10:00
build-info.cmake | cmake: fix shell command quoting in build-info script (#11309) | 2025-01-20 16:02:15 +02:00
common.cmake | cmake : enable warnings in llama (#10474) | 2024-11-26 14:18:08 +02:00
git-vars.cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
llama-config.cmake.in | cmake: add hints for locating ggml on Windows using Llama find-package (#11466) | 2025-01-28 19:22:06 -04:00
llama.pc.in | build : fix llama.pc (#11658) | 2025-02-06 13:08:13 +02:00
x64-windows-llvm.cmake | Changes to CMakePresets.json to add ninja clang target on windows (#10668) | 2024-12-09 09:40:19 -08:00

Usage sketches for the two consumer-facing templates in this directory, llama-config.cmake.in and llama.pc.in, follow below.
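llama-config.cmake.in is the template behind the llama-config.cmake package configuration that an installed llama.cpp exposes to find_package(). The following is a minimal consumer sketch, not a definitive recipe: the project name my_app, the source file main.cpp, and the plain llama link target are illustrative assumptions rather than anything stated in this listing, and the install prefix is assumed to be reachable via CMAKE_PREFIX_PATH.

    # Minimal consumer sketch (assumptions: llama.cpp already installed,
    # exported library target named "llama").
    cmake_minimum_required(VERSION 3.14)
    project(my_app CXX)

    # Locates llama-config.cmake, which is generated from llama-config.cmake.in
    # at install time. You may need -DCMAKE_PREFIX_PATH=/path/to/install-prefix
    # when configuring this consumer project.
    find_package(llama REQUIRED)

    add_executable(my_app main.cpp)

    # Link against the package's exported library target (assumed: "llama").
    target_link_libraries(my_app PRIVATE llama)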
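llama.pc.in, in turn, produces the pkg-config metadata touched by the "build : fix llama.pc (#11658)" commit above. A sketch of the same consumer wired up through CMake's standard PkgConfig module instead of find_package(); the pkg-config package name "llama" and the project layout are again assumptions.

    # Alternative consumer sketch using the pkg-config data from llama.pc
    # (assumptions: llama.pc is on PKG_CONFIG_PATH and registers the package
    # under the name "llama").
    cmake_minimum_required(VERSION 3.14)
    project(my_app_pc CXX)

    find_package(PkgConfig REQUIRED)
    # IMPORTED_TARGET creates PkgConfig::LLAMA carrying the flags from llama.pc.
    pkg_check_modules(LLAMA REQUIRED IMPORTED_TARGET llama)

    add_executable(my_app_pc main.cpp)
    target_link_libraries(my_app_pc PRIVATE PkgConfig::LLAMA)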