Xuan-Son Nguyen
fba5c0d680
chat : hotfix gpt-oss jinja raising an exception ( #15243 )
...
* chat : hotfix gpt-oss jinja raising an exception
* fix
b6132
2025-08-11 15:31:35 +02:00
Xuan-Son Nguyen
53d0a12658
server : allow specifying reasoning_format in HTTP request ( #15238 )
b6131
2025-08-11 14:48:41 +02:00
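A minimal client-side sketch of the per-request override added above, assuming the OpenAI-compatible `/v1/chat/completions` endpoint of a locally running llama-server and a `reasoning_format` value mirroring the existing `--reasoning-format` CLI option:
```python
# Hedged sketch: overriding reasoning_format for a single request instead of
# relying on the server-wide --reasoning-format setting. Endpoint, port and the
# "none" value are assumptions based on the existing CLI flag.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-oss",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "reasoning_format": "none",  # assumed value; keeps reasoning inline instead of parsing it out
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```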
Zagaj
27093afe78
readme : update infra list ( #15234 )
2025-08-11 15:27:54 +03:00
Georgi Gerganov
228f724d9c
kv-cache : fix seq_rm with seq_id == -1 ( #15226 )
...
* kv-cache : fix seq_rm with seq_id == -1
ggml-ci
* cont : iterate over streams
ggml-ci
b6129
2025-08-11 13:58:24 +03:00
Daniel Bevenius
cd3069dfcb
kv-cache : log (debug) all streams in find_slot ( #15176 )
...
This commit updates `llama_kv_cache_unified::find_slot` to log
information for all streams when debug is enabled.
The motivation for this change is that if a non-unified
kv-cache is used, only one stream is logged because the
code currently uses `seq_to_stream[1]`.
b6128
2025-08-11 11:21:19 +02:00
Sigbjørn Skjæret
50e81bdf5d
convert : fix merge conflicts ( #15229 )
2025-08-11 11:15:44 +02:00
Daniel Bevenius
1ebbaddff2
perplexity : update comments/error msg to use decode [no ci] ( #15227 )
...
This commit updates comments and error messages to use "decode" instead
of "eval" in perplexity.cpp.
The motivation for this is that `llama_eval` was renamed to
`llama_decode` a while ago, but the comments and error messages
still referred to "eval". This change ensures consistency and clarity.
2025-08-11 11:21:24 +03:00
Julien Denize
a3a7874272
convert : improve Mistral models integration ( #14737 )
...
* Improve Mistral models integration with llama.cpp
* Revert changes and fix gguf
* Revert change
* refactor convert_mistral_to_gguf.py in convert_hf_to_gguf.py
* Revert collateral
* Rename model name
* refactor
* revert
* remove duplicate
* Remove duplication code
* Fixes
* Fix flake issues
* Apply comments
* Apply comments
* Apply comments
* Fix remote
* add default chat template
* Revert
* nit
2025-08-11 10:07:49 +02:00
Charles Xu
002cb1bb33
kleidiai: fix unsigned overflow bug ( #15150 )
...
* kleidiai: fix unsigned overflow bug
* address review comments
b6124
2025-08-11 09:59:26 +02:00
David Zhao
79c1160b07
cuda: refactored ssm_scan and use CUB ( #13291 )
...
* cuda: refactored ssm_scan to use CUB
* fixed compilation error when not using CUB
* assign L to constant and use size_t instead of int
* deduplicated functions
* change min blocks per mp to 1
* Use cub load and store warp transpose
* suppress clang warning
b6123
2025-08-09 20:29:43 +02:00
Aman Gupta
34c9d765bf
CUDA: add attention sinks for tile and wmma ( #15178 )
...
* CUDA: add attention sinks for tile and wmma
* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
b6122
2025-08-09 20:00:24 +08:00
compilade
e54d41befc
gguf-py : add Numpy MXFP4 de/quantization support ( #15111 )
...
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
b6121
2025-08-08 17:48:26 -04:00
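A hedged round-trip sketch of the new Numpy MXFP4 support, assuming gguf-py's `gguf.quants.quantize`/`dequantize` helpers and a `GGMLQuantizationType.MXFP4` enum member; the 32-element block size is an assumption about the MXFP4 layout:
```python
# Hedged sketch: quantize a float32 tensor to MXFP4 and back with gguf-py.
import numpy as np
from gguf import GGMLQuantizationType
from gguf.quants import quantize, dequantize

data = np.random.randn(4, 256).astype(np.float32)         # last dim a multiple of the (assumed) 32-wide block
packed = quantize(data, GGMLQuantizationType.MXFP4)       # packed uint8 blocks (e8m0 scale + e2m1 values)
restored = dequantize(packed, GGMLQuantizationType.MXFP4) # lossy reconstruction
print(packed.dtype, packed.shape, float(np.abs(restored - data).max()))
```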
Johannes Gäßler
4850b52aed
server-bench: external OAI servers, sqlite ( #15179 )
...
* server-bench: external OAI servers, sqlite
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* raise_for_status
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
2025-08-08 23:04:36 +02:00
AN Long
cd6983d56d
ggml : fix field name when new ggml_backend ( #14944 )
b6119
2025-08-08 14:37:22 +02:00
Olivier Chafik
6c7e9a5440
vendor: sync minja ( #15161 )
...
* vendor: sync minja
* Update minja.hpp
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
b6118
2025-08-08 10:45:18 +01:00
Johannes Gäßler
1425f587a8
CUDA: attention sinks for mma FlashAttention ( #15157 )
b6117
2025-08-08 08:19:58 +02:00
lhez
aaa3d07ae7
opencl: support sink in soft_max (attn sinks) ( #15152 )
b6116
2025-08-07 21:47:03 -07:00
Xuan-Son Nguyen
50aa938901
convert : support non-mxfp4 HF model ( #15153 )
...
* convert : support non-mxfp4 HF model
* rm redundant check
* disable debug check
b6115
2025-08-07 23:26:03 +02:00
Jeff Bolz
c4f53563df
vulkan: support fattn sinks ( #15126 )
b6114
2025-08-07 22:44:20 +02:00
Jeff Bolz
a0552c8bee
vulkan: Add env var to disable host visible vidmem ( #15109 )
b6113
2025-08-07 22:07:11 +02:00
RunningLeon
99acbc9921
llama : Support intern-s1 ( #14875 )
...
* support internvl
* support interns1
* resolve comments
* put interns1 in tensor mapping
* resolve comment
* move tokenizer changes to sub class
2025-08-07 18:20:40 +02:00
uvos
7ad67ba9fe
HIP: add cmake option to enable compiler output of kernel resource usage metrics ( #15103 )
b6111
2025-08-07 16:44:14 +02:00
Christian Kastner
9a96389544
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON ( #15094 )
...
Any available libraries are found and loaded dynamically at runtime.
2025-08-07 13:45:41 +02:00
Johannes Gäßler
1d72c84188
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 ( #15131 )
...
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
b6109
2025-08-07 10:53:21 +02:00
Johannes Gäßler
20638e4f16
scripts: fix crash when --tool is not set ( #15133 )
2025-08-07 08:50:30 +02:00
Daniel Bevenius
36d3f00e14
requirements : fix PyTorch uint64 compatibility ( #15134 )
...
This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```
This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x provides only limited support for
unsigned types beyond uint8. The torch.uint64 dtype exists but
is not exposed in the standard torch namespace
(see pytorch/pytorch#58734 ).
PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to 0.19.0 for compatibility.
Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
2025-08-07 05:31:48 +02:00
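A small sketch of the failure mode described above and the version check that avoids it, assuming nothing beyond what the commit message states:
```python
# On PyTorch 2.2.x torch.uint64 is not exposed in the public namespace, so
# safetensors (and hence convert_hf_to_gguf.py) fails with the AttributeError
# shown above; PyTorch 2.4.0+ exposes it.
import torch

if not hasattr(torch, "uint64"):
    raise RuntimeError(
        f"torch {torch.__version__} does not expose torch.uint64; "
        "upgrade to torch>=2.4.0 (and torchvision 0.19.0) before converting"
    )
print(torch.uint64)  # prints torch.uint64 on 2.4.0+
```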
Reese Levine
5fd160bbd9
ggml: Add basic SET_ROWS support in WebGPU ( #15137 )
...
* Begin work on set_rows
* Work on set rows
* Add error buffers for reporting unsupported SET_ROWS indices
* Remove extra comments
b6106
2025-08-06 15:14:40 -07:00
rmatif
756cfea826
fix profiling crash ( #15072 )
b6105
2025-08-06 14:17:51 -07:00
lhez
e725a1a982
opencl: add swiglu_oai and add_id ( #15121 )
...
* opencl: add `swiglu-oai`
* opencl: add `add_id`
* opencl: add missing `add_id.cl`
b6104
2025-08-06 12:12:17 -07:00
Sachin Desai
3db4da56a5
chat : support Granite model reasoning and tool call ( #14864 )
b6103
2025-08-06 20:27:30 +02:00
Juk Armstrong
476aa3fd57
Fixed name -override-tensors to -override-tensor ( #15129 )
b6102
2025-08-06 17:28:48 +01:00
Diego Devesa
0d8831543c
ggml : fix fallback to CPU for unsupported ops ( #15118 )
b6101
2025-08-06 14:37:35 +02:00
Sigbjørn Skjæret
65c797c4fa
chat : fix yandex chat template ( #15116 )
b6100
2025-08-06 13:26:49 +02:00
stevenkuang
25726898e8
chat : fix hunyuan auto-detection ( #15114 )
...
Signed-off-by: stevenkuang <stevenkuang@tencent.com >
b6099
2025-08-06 11:48:30 +02:00
Chenguang Li
2241453252
CANN: add support for ACL Graph ( #15065 )
...
* feat(cann): add optional support for ACL Graph execution
This commit adds support for executing ggml computational graphs using
Huawei's ACL graph mode via the USE_CANN_GRAPH flag. The support can be
enabled at compile time using the CMake option:
-DUSE_CANN_GRAPH=ON
By default, ACL graph execution is **disabled**, and the fallback path
uses node-by-node execution.
Key additions:
- CMake option to toggle graph mode
- Graph capture and execution logic using
- Tensor property matching to determine whether graph update is required
- Safe fallback and logging if the environment variable LLAMA_SET_ROWS
is unset or invalid
This prepares the backend for performance improvements in repetitive graph
execution scenarios on Ascend devices.
Signed-off-by: noemotiovon <757486878@qq.com >
* Fix review comments
Signed-off-by: noemotiovon <757486878@qq.com >
* rename USE_CANN_GRAPH to USE_ACL_GRAPH
Signed-off-by: noemotiovon <757486878@qq.com >
* fix typo
Signed-off-by: noemotiovon <757486878@qq.com >
---------
Signed-off-by: noemotiovon <757486878@qq.com >
b6098
2025-08-06 14:12:42 +08:00
Reese Levine
9515c6131a
ggml: WebGPU disable SET_ROWS for now ( #15078 )
...
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
* Disable set_rows until it's implemented
* Fix potential issue around empty queue submission
* Try synchronous submission
* Try waiting on all futures explicitly
* Add debug
* Add more debug messages
* Work on getting ssh access for debugging
* Debug on failure
* Disable other tests
* Remove extra if
* Try more locking
* maybe passes?
* test
* Some cleanups
* Restore build file
* Remove extra testing branch ci
b6097
2025-08-05 16:26:38 -07:00
Georgi Gerganov
fd1234cb46
llama : add gpt-oss ( #15091 )
...
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7 )
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1 )
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (#11 )
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com >
---------
Co-authored-by: slaren <slarengh@gmail.com >
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <slarengh@gmail.com >
change kvalues_mxfp4 table to match e2m1 (#6 )
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (#13 )
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co >
Co-authored-by: slaren <slarengh@gmail.com >
b6096
2025-08-05 22:10:36 +03:00
Sigbjørn Skjæret
f324a3b715
chat : only remove double bos/eos if added ( #15086 )
...
* only remove double bos/eos if added
* fix tests
b6095
2025-08-05 20:43:36 +02:00
Georgi Gerganov
be42642581
readme : update hot topics ( #15097 )
2025-08-05 20:19:33 +03:00
Romain Biessy
3306ceabf0
sycl: fix mul_mat selection ( #15092 )
b6093
2025-08-05 18:39:55 +02:00
Juk Armstrong
c81de6e107
Fix glm4moe bug ( #15088 )
b6092
2025-08-05 13:56:44 +01:00
Alex Wu
22f060c9c4
webui: fix markdown table ( #15081 )
...
* webui: fix markdown table
* webui: fix table display with themes
2025-08-05 13:56:44 +02:00
compilade
ee3a9fcf88
context : fix index overflow on huge outputs ( #15080 )
...
* context : fix overflow when re-ordering huge outputs
* context : fix logits size overflow for huge batches
b6090
2025-08-05 11:27:45 +02:00
Diego Devesa
ec428b02c3
llama : add --n-cpu-moe option ( #15077 )
...
* llama : add --n-cpu-moe option
Keeps the MoE weights of the first N layers on the CPU
b6089
2025-08-05 01:05:36 +02:00
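A hypothetical invocation sketch for the new flag; the binary name, model path and the other flags are placeholders, only `--n-cpu-moe` comes from the commit above:
```python
# Hedged sketch: offload all layers to the GPU but keep the MoE expert weights
# of the first 20 layers on the CPU via the new --n-cpu-moe option.
import subprocess

subprocess.run([
    "./llama-cli",
    "-m", "models/model.gguf",   # placeholder model path
    "-ngl", "99",                # offload everything that fits...
    "--n-cpu-moe", "20",         # ...except the expert weights of the first 20 layers
    "-p", "Hello",
], check=True)
```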
compilade
19f68fa5a4
imatrix : warn when GGUF imatrix is saved without .gguf suffix ( #15076 )
...
* imatrix : add warning when suffix is not .gguf for GGUF imatrix
* imatrix : only warn about suffix when output format is unspecified
b6088
2025-08-04 23:26:52 +02:00
Christian Kastner
41613437ff
cmake: Add GGML_BACKEND_DIR option ( #15074 )
...
* cmake: Add GGML_BACKEND_DIR option
This can be used by distributions to specify where to look for backends
when ggml is built with GGML_BACKEND_DL=ON.
* Fix phrasing
b6087
2025-08-04 21:29:14 +02:00
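A hedged packaging sketch combining the two options mentioned above; the backend directory path is an example, and the exact lookup semantics of `GGML_BACKEND_DIR` are assumed from the commit description:
```python
# Configure ggml with runtime-loadable backends and a distribution-specific
# directory to search for them at runtime.
import subprocess

subprocess.run([
    "cmake", "-B", "build",
    "-DGGML_BACKEND_DL=ON",                       # backends become dynamically loaded libraries
    "-DGGML_BACKEND_DIR=/usr/lib/ggml-backends",  # example distro path searched at runtime
], check=True)
subprocess.run(["cmake", "--build", "build", "--parallel"], check=True)
```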
Sigbjørn Skjæret
e5bebe5251
gguf-py : add --chat-template-file to gguf_new_metadata ( #15075 )
2025-08-04 21:01:48 +02:00
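A hedged usage sketch, assuming gguf-py's `gguf-new-metadata` entry point with input/output positional arguments; only the `--chat-template-file` flag itself comes from the commit above:
```python
# Rewrite a GGUF's chat template from a Jinja file without requantizing.
import subprocess

subprocess.run([
    "gguf-new-metadata",
    "model.gguf",                  # input GGUF (placeholder)
    "model-with-template.gguf",    # output GGUF (placeholder)
    "--chat-template-file", "chat_template.jinja",
], check=True)
```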
Sam
ef0144c087
model: support GLM 4.5 family of models ( #14939 )
...
* model: Add GLM 4.5 (#14921 )
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Merge in PR suggestions
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: Add GLM 4.5 family of models (#14921 )
1. Updated tensor_mapping.py with NextN tensor mappings
- Added proper tensor mappings for all NextN/MTP tensors in /Users/samm/git/llama.cpp/gguf-py/gguf/tensor_mapping.py
- Added mappings for: eh_proj, embed_tokens, enorm, hnorm, shared_head.head, shared_head.norm
2. Added num_nextn_predict_layers configuration
- Added LLM_KV_NUM_NEXTN_PREDICT_LAYERS constant to llama-arch.h and llama-arch.cpp
- Added num_nextn_predict_layers field to llama_hparams struct
- Updated GLM4_MOE parameter loading in llama-model.cpp to read this parameter
- Modified tensor loading logic to conditionally load NextN tensors based on num_nextn_predict_layers
- Added GGUF writer support in gguf_writer.py with add_num_nextn_predict_layers() method
- Updated conversion script to extract and write this parameter from HuggingFace config
3. Added FIM tokens for GLM4_MOE
- Added GLM-4.5's FIM tokens to llama-vocab.cpp:
- <|code_prefix|> for FIM_PRE
- <|code_suffix|> for FIM_SUF
- <|code_middle|> for FIM_MID
4. Removed manual NextN tensor handling
- Removed the special-case handling in convert_hf_to_gguf.py that manually mapped NextN tensors
- NextN tensors are now handled automatically through the proper tensor mapping system
* glm 4.5 update tensors names
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: glm 4.5 apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
* model: glm 4.5 apply suggestions from code review
* Apply suggestions from code review
* patch broken chat template
* typings fix
* add TENSOR_SKIP flag
Co-authored-by: Diego Devesa <slarengh@gmail.com >
* Update src/llama-model-loader.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com >
Co-authored-by: Diego Devesa <slarengh@gmail.com >
b6085
2025-08-04 20:29:25 +02:00
Sigbjørn Skjæret
2721257e3e
quantize : fix confusing error message if ftype is invalid ( #15071 )
b6084
2025-08-04 18:11:02 +02:00
Reese Levine
587d0118f5
ggml: WebGPU backend host improvements and style fixing ( #14978 )
...
* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
b6083
2025-08-04 08:52:43 -07:00