Mirror of https://github.com/ggml-org/llama.cpp.git (synced 2025-07-31 14:44:34 -04:00)
There are a couple of notable things in this architecture:

1. The input and output embedding parameters are shared (tied embeddings).
2. The key length and value length are not derived from `n_embd`; they are independent hyperparameters.

More information about the models can be found at https://ai.google.dev/gemma. GGUFs can be downloaded from https://huggingface.co/google.
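The two architectural points above can be sketched with a minimal NumPy example. All dimensions below are hypothetical illustration values, not the real Gemma configuration: the same embedding matrix serves both the input lookup and the output (logit) projection, and the per-head key/value width is a free hyperparameter rather than `n_embd // n_head`.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not the real Gemma config).
n_vocab, n_embd, n_head, head_dim = 1000, 64, 4, 256

rng = np.random.default_rng(0)

# 1. One embedding matrix shared between input lookup and output projection:
#    the logit projection reuses the transpose of the same weights.
W_emb = rng.standard_normal((n_vocab, n_embd)).astype(np.float32)

token_ids = np.array([1, 2, 3])
hidden = W_emb[token_ids]      # input embedding lookup
logits = hidden @ W_emb.T      # output projection reuses W_emb (tied)

# 2. head_dim is NOT n_embd // n_head (that would be 16 here), so the
#    Q projection maps n_embd to n_head * head_dim explicitly.
W_q = rng.standard_normal((n_embd, n_head * head_dim)).astype(np.float32)
q = (hidden @ W_q).reshape(len(token_ids), n_head, head_dim)

print(logits.shape)  # (3, 1000)
print(q.shape)       # (3, 4, 256)
```

Note that because of the tying, quantization or conversion code must not treat the output projection as a separate tensor; there is only one set of embedding weights in the checkpoint.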