From 1adc9812bd33dc85489bf093528d61c22917d54f Mon Sep 17 00:00:00 2001
From: Bas Nijholt
Date: Wed, 13 Aug 2025 11:21:31 -0700
Subject: [PATCH] fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to the llama-cpp.cachix.org cache with a
comment claiming it's 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving
only the working cuda-maintainers cache that actually provides CUDA
dependencies.

Users can still manually add the llama-cpp cache if it becomes functional
in the future.
---
 flake.nix | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/flake.nix b/flake.nix
index 0b5edf911..bb02c8e52 100644
--- a/flake.nix
+++ b/flake.nix
@@ -36,9 +36,6 @@
 #   ```
 #   nixConfig = {
 #     extra-substituters = [
-#       # Populated by the CI in ggml-org/llama.cpp
-#       "https://llama-cpp.cachix.org"
-#
 #       # A development cache for nixpkgs imported with `config.cudaSupport = true`.
 #       # Populated by https://hercules-ci.com/github/SomeoneSerge/nixpkgs-cuda-ci.
 #       # This lets one skip building e.g. the CUDA-enabled openmpi.
@@ -47,10 +44,8 @@
 #     ];
 #
 #     # Verify these are the same keys as published on
-#     # - https://app.cachix.org/cache/llama-cpp
 #     # - https://app.cachix.org/cache/cuda-maintainers
 #     extra-trusted-public-keys = [
-#       "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
 #       "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
 #     ];
 #   };
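
For anyone who does want to opt back in later, a minimal sketch of what "manually add the llama-cpp cache" could look like in flake.nix is below. The substituter URL and public key are copied verbatim from the comment block removed above; the rest of the flake (inputs, outputs) is elided and not part of this change.

```nix
{
  # Sketch only: re-enable the llama-cpp cachix cache by hand.
  # nixConfig is a standard top-level flake attribute; the real flake's
  # inputs and outputs are omitted here for brevity.
  nixConfig = {
    extra-substituters = [
      "https://llama-cpp.cachix.org"
    ];
    # Key as published on https://app.cachix.org/cache/llama-cpp
    extra-trusted-public-keys = [
      "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
    ];
  };
}
```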