tqcq / llama.cpp
Mirror of https://github.com/ggml-org/llama.cpp.git, synced 2025-07-30 22:23:31 -04:00
Files at commit e5113e8d746bfc10b70d956a3ae64dd460becfda
llama.cpp / .devops / nix
Latest commit e52aba537a by Evgeny Kurnevsky, 2024-12-14 10:17:36 -08:00:
nix: allow to override rocm gpu targets (#10794)
This allows reducing compile time when building for a single GPU.
apps.nix                …
devshells.nix           build(nix): Package gguf-py (#5664)                2024-09-02 14:21:01 +03:00
docker.nix              …
jetson-support.nix      …
nixpkgs-instances.nix   build(nix): Package gguf-py (#5664)                2024-09-02 14:21:01 +03:00
package-gguf-py.nix     build(nix): Package gguf-py (#5664)                2024-09-02 14:21:01 +03:00
package.nix             nix: allow to override rocm gpu targets (#10794)   2024-12-14 10:17:36 -08:00
python-scripts.nix      server : replace behave with pytest (#10416)       2024-11-26 16:20:18 +01:00
scope.nix               build(nix): Package gguf-py (#5664)                2024-09-02 14:21:01 +03:00
sif.nix                 …