Gabe Goodhart
7ba463b38c
fix: Remove llama_model_is_hybrid_recurrent public API
...
https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2141728423
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:19 -06:00
Gabe Goodhart
1510016ea4
fix: Remove logits_all after rebase
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:19 -06:00
Gabe Goodhart
d8c929ff5d
feat: Allow custom layer filters for hybrid recurrent
...
This should help support architectures like Falcon H1 where there is
overlap between layers that need attention and recurrent caches.
https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140748922
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:19 -06:00
Gabe Goodhart
d5d7628b5f
refactor: Remove n_embd_k/v_gqa from recurrent cache
...
This is no longer needed now that there are separate implementations
https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140825128
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:19 -06:00
Gabe Goodhart
b42c8b43cf
refactor: Remove layer index from n_embd_k/v_s
...
Now that it's not used at all in the unified cache, we don't need to use
the layer index to zero it out for attention layers.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:19 -06:00
Gabe Goodhart
1dd12133cd
refactor: Remove n_embd_k/v_s from unified cache
...
No longer needed now that unified isn't also supporting recurrent
https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2140761069
Branch: HybridRecurrentCache
2025-06-17 14:54:18 -06:00
Gabe Goodhart
833dfb54ae
fix: Use per-layer n_embd_k/v_s calls for mamba (1) layers
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
f6d5f055c6
fix: Remove errant virtual destructor leftover from previous impl attempt
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
9c1a604af8
fix: Update clear signature for data argument after rebase
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
de9297fd5e
fix: Add missing padding to n_ctx for hybrid cache construction
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
911e694476
fix: Fix status for init_update sig for recurrent cache state
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
d3699366e6
fix: Update recurrent cache for changes to remove intermediate kv_cache interface
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
a9b5fe98ad
fix: Fix logic for initializing inputs and attn layers for hybrid caches
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
e3c1631556
feat: Support hybrid recurrent in llama-graph
...
NOTE: I intentionally did not add support for s_mask since it will be going
away soon
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
cf03d4ae5c
fix: Fix shift logic to defer to unified cache
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
6c6ec0003a
fix: Fix wrong bool condition for split equal in hybrid cache
...
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
423c89401d
feat: Construct hybrid recurrent cache for hybrid recurrent models
...
This includes a refactor of the create_memory logic to avoid needing to use
the arch enum explicitly unless a model needs explicit cache instantiation
logic beyond the standard logic for recurrent, hybrid, unified, and iswa.
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
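As a rough standalone illustration of the create_memory refactor described in the entry above (a sketch only; every name below is hypothetical and not an actual llama.cpp symbol), the idea is to dispatch on model properties rather than on the architecture enum, so an explicit per-arch case is only needed for models outside the standard recurrent/hybrid/unified/iswa logic:

    #include <memory>

    struct memory_sketch          { virtual ~memory_sketch() = default; };
    struct cache_recurrent_sketch : memory_sketch {};
    struct cache_hybrid_sketch    : memory_sketch {};
    struct cache_unified_sketch   : memory_sketch {};
    struct cache_iswa_sketch      : memory_sketch {};

    struct model_props {
        bool is_recurrent        = false; // e.g. pure recurrent (Mamba-style) models
        bool is_hybrid_recurrent = false; // mixed attention + recurrent layers
        bool use_swa             = false; // sliding-window attention
    };

    // Standard selection driven by model properties; an explicit switch on the
    // architecture enum would only remain for models that need cache
    // instantiation logic beyond these four cases.
    static std::unique_ptr<memory_sketch> create_memory_sketch(const model_props & p) {
        if (p.is_hybrid_recurrent) {
            return std::make_unique<cache_hybrid_sketch>();
        }
        if (p.is_recurrent) {
            return std::make_unique<cache_recurrent_sketch>();
        }
        if (p.use_swa) {
            return std::make_unique<cache_iswa_sketch>();
        }
        return std::make_unique<cache_unified_sketch>();
    }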
Gabe Goodhart
c71eaa37a0
feat: First pass at llama_kv_cache_hybrid_recurrent
...
This follows the pattern in iswa where the two child caches are held
explicitly, supporting the case where a model requires a single attention
cache and a single recurrent cache, with each layer using exactly one of the
two.
This is a rewrite of the more generic approach in the original hybrid cache
PR: https://github.com/ggml-org/llama.cpp/pull/13276
Branch: HybridRecurrentCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
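A minimal standalone sketch of the structure described in the entry above, assuming nothing beyond the commit text; the class and member names are hypothetical, not the actual llama.cpp types. It shows a hybrid cache that owns one attention-style child and one recurrent-style child and routes each layer to exactly one of them:

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct attn_cache_sketch  { void clear() { /* drop K/V buffers */ } };
    struct recur_cache_sketch { void clear() { /* drop recurrent states */ } };

    class hybrid_cache_sketch {
    public:
        // The filter decides, per layer, whether the layer uses the recurrent child.
        using layer_filter = std::function<bool(int32_t il)>;

        hybrid_cache_sketch(int32_t n_layer, const layer_filter & is_recurrent)
            : recurrent_layer(n_layer) {
            for (int32_t il = 0; il < n_layer; ++il) {
                recurrent_layer[il] = is_recurrent(il);
            }
        }

        // Each layer talks to exactly one of the two child caches.
        bool layer_is_recurrent(int32_t il) const { return recurrent_layer[il]; }

        void clear() {
            attn.clear();
            recur.clear();
        }

    private:
        attn_cache_sketch  attn;
        recur_cache_sketch recur;
        std::vector<bool>  recurrent_layer;
    };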
Gabe Goodhart
13332a7554
fix: Use per-layer sizing everywhere in kv caches
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
40e9187892
feat: Add layer filter to recurrent cache
...
Branch: HybridCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
fb26e95ae7
refactor: rename *_is_hybrid -> *_is_hybrid_recurrent
...
The implementation of the hybrid cache intentionally does not specify the
types of the child caches, so there was a naming mismatch with these
predicate functions that used "hybrid" to imply "hybrid recurrent."
Branch: HybridCache
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
fc9e0b576e
feat: Auto-fill hparams.recurrent_layer_arr based on whether the model is recurrent
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:18 -06:00
Gabe Goodhart
05f1958080
feat: Add support for distinguishing recurrent vs non-recurrent layers in hparams
...
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:17 -06:00
Gabe Goodhart
5e2f2c3876
feat: Add c++ side constants for attention layer indices hparam
...
Branch: GraniteFour
2025-06-17 14:54:17 -06:00
Gabe Goodhart
ec8fe17b1a
feat: Add llama_model_is_hybrid API call
...
Also, split llama_model_is_recurrent into llm_arch_is_recurrent in
llama-arch with llama_model_is_recurrent delegating to
llm_arch_is_recurrent. The same split is done for hybrid. This is needed
because there are places where the llama_model has not yet been initialized
but we need to check if the model is recurrent (specifically for the
per-layer recurrent check array in hparams).
Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-06-17 14:54:17 -06:00
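A small sketch of the split described in the entry above, with hypothetical names standing in for the real llama.cpp/llama-arch symbols: the arch-level predicates can be called before any model object exists (e.g. while filling the per-layer recurrent array in hparams), and the model-level API simply delegates to them:

    #include <cstdint>

    // Hypothetical stand-in for the architecture enum.
    enum arch_sketch : int32_t { ARCH_ATTN_ONLY, ARCH_RECURRENT, ARCH_HYBRID_RECURRENT };

    // Arch-level predicates: usable before the model is constructed.
    static bool arch_is_recurrent(arch_sketch arch) {
        return arch == ARCH_RECURRENT;
    }
    static bool arch_is_hybrid_recurrent(arch_sketch arch) {
        return arch == ARCH_HYBRID_RECURRENT;
    }

    struct model_sketch {
        arch_sketch arch;
    };

    // Model-level calls just delegate to the arch-level predicates.
    bool model_is_recurrent(const model_sketch * m) {
        return arch_is_recurrent(m->arch);
    }
    bool model_is_hybrid_recurrent(const model_sketch * m) {
        return arch_is_hybrid_recurrent(m->arch);
    }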
bandoti
c46503014d
cmake: remove shader-gen step-targets from ggml-vulkan (#14226)
...
* Remove step-targets from vulkan-shaders-gen
* Unset DESTDIR when building vulkan-shaders-gen
b5689
2025-06-17 22:33:25 +02:00
xctan
860a9e4eef
ggml-cpu : remove the weak alias trick (#14221)
b5688
2025-06-17 12:58:32 +03:00
R0CKSTAR
fe9d60e74a
musa: fix build warning (unused variable) (#14231)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
b5687
2025-06-17 17:48:08 +08:00
Sigbjørn Skjæret
e434e69183
common : suggest --jinja when autodetection fails (#14222)
b5686
2025-06-16 21:58:42 +02:00
Georgi Gerganov
89fea80d29
server : fix incorrect usage of llama_get_embeddings() (#14225)
...
* server : fix incorrect usage of llama_get_embeddings()
ggml-ci
* cont : fix the fix
ggml-ci
b5685
2025-06-16 22:33:27 +03:00
Diego Devesa
6adc3c3ebc
llama : add thread safety test (#14035)
...
* llama : add thread safety test
* llamafile : remove global state
* llama : better LLAMA_SPLIT_MODE_NONE logic
when main_gpu < 0, GPU devices are not used
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b5684
2025-06-16 08:11:43 -07:00
bandoti
0dbcabde8c
cmake: clean up external project logic for vulkan-shaders-gen (#14179)
...
* Remove install step for vulkan-shaders-gen
* Add install step to normalize msvc with make
* Regenerate modified shaders at build-time
b5683
2025-06-16 10:32:13 -03:00
Đinh Trọng Huy
ad590be98c
model : add NeoBERT (#14164)
...
* convert neobert model to gguf
* add inference graph
* fix flake8 lint
* followed reviewer suggestions
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* follow reviewers suggestions
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* override NeoBERT feed-forward length
---------
Co-authored-by: dinhhuy <huy.dinh@brains-tech.co.jp>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b5682
2025-06-16 14:53:41 +02:00
uvos
7d6d91babf
HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)
b5681
2025-06-16 13:47:38 +02:00
Georgi Gerganov
d3e64b9f49
llama : rework embeddings logic (#14208)
...
* llama : rework embeddings logic
ggml-ci
* cont : fix rerank
ggml-ci
* cont : engrish [no ci]
* cont : fix rerank
ggml-ci
* server : support both embeddings and completions with single model
ggml-ci
* cont : avoid embeddings_org
ggml-ci
2025-06-16 14:14:00 +03:00
Charles Xu
3ba0d843c6
ggml: Add Android support for GGML_CPU_ALL_VARIANTS (#14206)
b5679
2025-06-16 11:47:57 +02:00
Bartowski
0bf49eb668
convert : remove arcee change in convert_hf_to_gguf_update.py (#14207)
2025-06-16 10:16:06 +02:00
Đinh Trọng Huy
4ad243677b
gguf-py : allow key override when adding value to GGUFWriter (#14194)
...
Co-authored-by: dinhhuy <huy.dinh@brains-tech.co.jp>
2025-06-16 09:20:59 +02:00
Jeff Bolz
c89c2d1ab9
vulkan: mutex around vkQueueSubmit (#14127)
...
This fixes the remaining crash in test-thread-safety on my system.
b5676
2025-06-16 08:21:08 +02:00
xctan
3555b3004b
ggml-cpu : rework weak alias on apple targets (#14146)
...
* ggml-cpu : rework weak alias on apple targets
* fix powerpc detection
* fix ppc detection
* fix powerpc detection on darwin
b5675
2025-06-16 13:54:15 +08:00
Bartowski
d7da8dc83a
model : Add support for Arcee AI's upcoming AFM model (#14185)
...
* Add Arcee AFM support
* Add draft update code
* Fix linter and update URL, may still not be final
* Update src/llama-model.cpp
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
* Remove accidental blank line
---------
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
b5674
2025-06-16 01:04:06 +02:00
Eric Curtin
cd355eda7d
server : When listening on a unix domain socket don't print http:// and port (#14180)
...
Instead show something like this:
main: server is listening on file.sock - starting the main loop
Signed-off-by: Eric Curtin <ecurtin@redhat.com>
b5673
2025-06-15 23:36:22 +02:00
Ed Addario
30e5b01de2
quantize : change int to unsigned int for KV overrides (#14197)
b5672
2025-06-15 18:53:45 +02:00
uvos
e54b394082
CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196)
b5671
2025-06-15 17:30:13 +02:00
uvos
2c2caa4443
HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183)
b5670
2025-06-15 15:45:27 +02:00
Georgi Gerganov
5fce5f948d
kv-cache : fix use-after-move of defrag info (#14189)
...
ggml-ci
b5669
2025-06-15 10:52:11 +03:00
Mikko Juola
9ae4143bc6
model : add dots.llm1 architecture support (#14044) (#14118)
...
Adds:
* Dots1Model to convert_hf_to_gguf.py
* Computation graph code to llama-model.cpp
* Chat template to llama-chat.cpp to detect this model's template.
---
The model architecture is called "dots.llm1" (I decided to shorten it to
dots1 or DOTS1 in the code generally).
The only models that exist as of the writing of this commit that follow this
architecture are "dots.llm1.inst" and "dots.llm1.base" from here:
* https://huggingface.co/rednote-hilab/dots.llm1.inst
* https://huggingface.co/rednote-hilab/dots.llm1.base
The model architecture is a combination of Qwen and Deepseek parts, as
seen here:
ffe12627b4/src/transformers/models/dots1/modular_dots1.py
b5668
2025-06-15 09:52:06 +02:00
Georgi Gerganov
c311ac664d
cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188)
...
ggml-ci
b5667
2025-06-15 10:08:58 +03:00
Georgi Gerganov
b9912ac570
batch : auto-gen positions + verify multi-sequence input (#14177)
...
* batch : verify multi-sequence input batches
ggml-ci
* cont : auto-gen positions + verify multi-seq input
ggml-ci
* cont : first print debug info, then perform validation
ggml-ci
* cont : fix position auto-gen + add comments
ggml-ci
b5666
2025-06-15 09:18:37 +03:00
Pepijn de Vos
00ba772610
docs : remove WIP since PR has been merged (#13912)
2025-06-15 08:06:37 +02:00