Commit Graph

5872 Commits

Ed Addario
30e5b01de2 quantize : change int to unsigned int for KV overrides (#14197) b5672 2025-06-15 18:53:45 +02:00
uvos
e54b394082 CUDA/HIP: fix ssm_scan on devices where warp size is not 32 (#14196) b5671 2025-06-15 17:30:13 +02:00
uvos
2c2caa4443 HIP: Replace usage of deprecated preprocessor macro __AMDGCN_WAVEFRONT_SIZE__ (#14183) b5670 2025-06-15 15:45:27 +02:00
Georgi Gerganov
5fce5f948d kv-cache : fix use-after-move of defrag info (#14189)
ggml-ci
b5669
2025-06-15 10:52:11 +03:00
Mikko Juola
9ae4143bc6 model : add dots.llm1 architecture support (#14044) (#14118)
Adds:

* Dots1Model to convert_hf_to_gguf.py

* Computation graph code to llama-model.cpp

* Chat template to llama-chat.cpp to detect this model's template.

---

The model architecture is called "dots.llm1" (I decided to shorten it to
dots1 or DOTS1 in the code generally).

The only models that follow this architecture, as of the writing of this
commit, are "dots.llm1.inst" and "dots.llm1.base" from here:

* https://huggingface.co/rednote-hilab/dots.llm1.inst

* https://huggingface.co/rednote-hilab/dots.llm1.base

The model architecture is a combination of Qwen and Deepseek parts, as
seen here:

ffe12627b4/src/transformers/models/dots1/modular_dots1.py
b5668
2025-06-15 09:52:06 +02:00
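The chat template part of this change lends itself to a short illustration. Below is a minimal sketch of the substring-based detection pattern used in llama-chat.cpp; the enum value and the `<|userprompt|>` marker are assumptions for illustration, not the actual dots.llm1 fingerprint.

```cpp
#include <string>

enum llm_chat_template_sketch {
    LLM_CHAT_TEMPLATE_UNKNOWN,
    LLM_CHAT_TEMPLATE_DOTS1, // hypothetical value for dots.llm1
};

// Detection works by searching the model's Jinja chat template for a
// distinctive substring; "<|userprompt|>" is an assumed marker here.
llm_chat_template_sketch detect_template_sketch(const std::string & tmpl) {
    if (tmpl.find("<|userprompt|>") != std::string::npos) {
        return LLM_CHAT_TEMPLATE_DOTS1;
    }
    return LLM_CHAT_TEMPLATE_UNKNOWN;
}
```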
Georgi Gerganov
c311ac664d cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188)
ggml-ci
b5667
2025-06-15 10:08:58 +03:00
Georgi Gerganov
b9912ac570 batch : auto-gen positions + verify multi-sequence input (#14177)
* batch : verify multi-sequence input batches

ggml-ci

* cont : auto-gen positions + verify multi-seq input

ggml-ci

* cont : first print debug info, then perform validation

ggml-ci

* cont : fix position auto-gen + add comments

ggml-ci
b5666
2025-06-15 09:18:37 +03:00
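A toy sketch of the position auto-generation idea described above; in llama.cpp the continuation point comes from the KV/memory state rather than a std::map, so all names here are illustrative.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// If the caller supplied no positions, generate them sequentially per
// sequence, continuing from the last position seen for that sequence.
void autogen_positions(std::vector<int32_t> & pos,
                       const std::vector<int32_t> & seq_id,
                       std::map<int32_t, int32_t> & next_pos) {
    if (!pos.empty()) {
        return; // explicit positions were provided - verify, don't generate
    }
    pos.reserve(seq_id.size());
    for (int32_t s : seq_id) {
        int32_t & p = next_pos[s]; // defaults to 0 for a new sequence
        pos.push_back(p);
        p += 1;
    }
}
```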
Pepijn de Vos
00ba772610 docs : remove WIP since PR has been merged (#13912) 2025-06-15 08:06:37 +02:00
Piotr
3cb203c89f llama-chat : Do not throw when tool parsing fails (#14012)
Currently, when a model generates output that looks like a tool call but is
invalid, an exception is thrown and not handled, causing the CLI or
llama-server to bail. Instead, handle the chat parser exception and simply
return the generated text in such cases.

Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>
b5664
2025-06-14 17:25:15 +01:00
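A minimal sketch of the described behavior; `parsed_msg` and `parse_tool_call` are illustrative stand-ins, not the actual llama.cpp API.

```cpp
#include <stdexcept>
#include <string>

struct parsed_msg {
    std::string content;
    bool        is_tool_call;
};

// Hypothetical stand-in for the real chat parser, which throws on output
// that looks like a tool call but is malformed.
parsed_msg parse_tool_call(const std::string & text) {
    const bool looks_like_call = text.rfind("<tool_call>", 0) == 0;
    if (looks_like_call && text.find("</tool_call>") == std::string::npos) {
        throw std::runtime_error("malformed tool call");
    }
    return { text, looks_like_call };
}

// The fix: catch the parser exception and fall back to returning the
// generated text verbatim instead of letting the CLI/server crash.
parsed_msg parse_or_passthrough(const std::string & text) {
    try {
        return parse_tool_call(text);
    } catch (const std::exception &) {
        return { text, /*is_tool_call=*/false };
    }
}
```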
Aman Gupta
2e42be42bd compare-llama-bench: add option to plot (#14169)
* compare llama-bench: add option to plot

* Address review comments: convert case + add type hints

* Add matplotlib to requirements

* fix tests

* Improve comment and fix assert condition for test

* Add back default test_name, add --plot_log_scale

* use log_scale regardless of x_values
2025-06-14 10:34:20 +02:00
Georgi Gerganov
fb85a288d7 vocab : fix build (#14175)
ggml-ci
b5662
2025-06-13 20:03:05 +03:00
Svetlozar Georgiev
40643edb86 sycl: fix docker image (#14144) 2025-06-13 18:32:56 +02:00
Guy Goldenberg
3cfbbdb44e Merge commit from fork
* vocab : prevent integer overflow during load

* Add static cast and GGML_ABORT

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-13 19:20:25 +03:00
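A sketch of the overflow-hardening pattern this commit describes (assumed shape, not the exact patch): validate a 64-bit count read from the model file before narrowing it with a static_cast, aborting on out-of-range values.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <limits>

// In llama.cpp the failure path would be GGML_ABORT; plain abort() here
// keeps the sketch self-contained.
int32_t checked_narrow(uint64_t n_from_file) {
    if (n_from_file > (uint64_t) std::numeric_limits<int32_t>::max()) {
        fprintf(stderr, "value out of range for int32_t: %llu\n",
                (unsigned long long) n_from_file);
        abort();
    }
    return static_cast<int32_t>(n_from_file);
}
```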
Georgi Gerganov
80709b70a2 batch : add LLAMA_BATCH_DEBUG environment variable (#14172)
* batch : add LLAMA_BATCH_DEBUG environment variable

ggml-ci

* cont : improve seq_id display
b5659
2025-06-13 18:35:00 +03:00
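A minimal sketch of an environment-variable-gated debug dump in the spirit of this change (structure assumed, not the exact llama.cpp code):

```cpp
#include <cstdio>
#include <cstdlib>

// Debug output costs nothing unless LLAMA_BATCH_DEBUG is set.
static int batch_debug_level() {
    const char * v = getenv("LLAMA_BATCH_DEBUG");
    return v ? atoi(v) : 0;
}

void maybe_dump_batch(int n_tokens, const int * seq_ids) {
    if (batch_debug_level() == 0) {
        return;
    }
    fprintf(stderr, "batch: n_tokens = %d\n", n_tokens);
    for (int i = 0; i < n_tokens; ++i) {
        fprintf(stderr, "  token %3d: seq_id = %d\n", i, seq_ids[i]);
    }
}
```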
ddpasa
26ff3685bf docs : Update multimodal.md (#14122)
* Update multimodal.md

* Update multimodal.md
2025-06-13 15:17:53 +02:00
Georgi Gerganov
60c666347b batch : rework llama_batch_allocr (#14153)
* batch : rework llama_batch_allocr

ggml-ci

* cont : move validation inside class

ggml-ci

* cont : move output counting to class

ggml-ci

* cont : minor

ggml-ci

* batch : add TODOs

ggml-ci
b5657
2025-06-13 13:47:55 +03:00
Georgi Gerganov
b7cc7745e3 readme : remove survey link (#14168) 2025-06-13 11:55:44 +03:00
Christian Kastner
cc8d081879 cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167)
* cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT

* cmake: Pass on LLAMA_BUILD_* to GGML_BUILD_*
b5655
2025-06-13 10:38:52 +02:00
Đinh Trọng Huy
d714dadb57 pooling : make cls_b and cls_out_b optional (#14165)
Co-authored-by: dinhhuy <huy.dinh@brains-tech.co.jp>
b5654
2025-06-13 11:34:08 +03:00
Georgi Gerganov
ffad043973 server : fix SWA condition for full context reprocess (#14163)
ggml-ci
b5653
2025-06-13 11:18:25 +03:00
Anton Mitkov
0889eba570 sycl: Adding additional cpy dbg print output (#14034) b5652 2025-06-13 08:51:39 +01:00
Ewan Crawford
c61285e739 SYCL: Bump oneMath commit (#14152)
Update the oneMath commit to the merged PR https://github.com/uxlfoundation/oneMath/pull/669,
which adds SYCL-Graph support for recording CUDA BLAS commands.

With this change the `MUL_MAT` tests now pass on DPC++ CUDA backends with SYCL-Graph
enabled. Prior to this change, an error would be thrown.

```
$ GGML_SYCL_DISABLE_GRAPH=0 ./bin/test-backend-ops -b SYCL0 -o MUL_MAT -p type_a=f16,type_b=f32,m=16,n=1,k=256,bs=\\[1,1\\],nr=\\[2

UR CUDA ERROR:
        Value:           700
        Name:            CUDA_ERROR_ILLEGAL_ADDRESS
        Description:     an illegal memory access was encountered
        Function:        operator()
        Source Location: $HOME/dpcpp/unified-runtime/source/adapters/cuda/queue.cpp:154

Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)
Exception caught at file:$HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp, line:3598, func:operator()
SYCL error: CHECK_TRY_ERROR((stream)->wait()): Meet error in this line code!
  in function ggml_backend_sycl_synchronize at $HOME/llama.cpp/ggml/src/ggml-sycl/ggml-sycl.cpp:3598
$HOME/llama.cpp/ggml/src/ggml-sycl/../ggml-sycl/common.hpp:118: SYCL error
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
```
b5651
2025-06-13 08:45:37 +01:00
Christian Kastner
09cf2c7c65 cmake : Improve build-info.cpp generation (#14156)
* cmake: Simplify build-info.cpp generation

The rebuild of build-info.cpp still gets triggered when .git/index
changes.

* cmake: generate build-info.cpp in build dir
b5650
2025-06-13 09:51:34 +03:00
Georgi Gerganov
c33fe8b8c4 vocab : prevent heap overflow when vocab is too small (#14145)
ggml-ci
b5649
2025-06-13 08:03:54 +03:00
Anton Mitkov
ed52f3668e sycl: Remove not needed copy f16->f32 for dnnl mul mat (#14125) b5648 2025-06-12 15:15:11 +02:00
Georgi Gerganov
a681b4ba83 readme : remove project status link (#14149) 2025-06-12 14:43:09 +03:00
Georgi Gerganov
7d516443dd server : re-enable SWA speculative decoding (#14131)
ggml-ci
b5646
2025-06-12 11:51:38 +03:00
Georgi Gerganov
f6e1a7aa87 context : simplify output counting logic during decode (#14142)
* batch : remove logits_all flag

ggml-ci

* context : simplify output counting logic during decode

ggml-ci

* cont : fix comments
b5645
2025-06-12 11:50:01 +03:00
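A sketch of the simplified counting idea (assumed, not the exact code): with logits_all gone, the number of outputs follows directly from the per-token output flags in the batch, defaulting to last-token-only when no flags are given.

```cpp
#include <cstdint>

int32_t count_outputs(int32_t n_tokens, const int8_t * output_flags) {
    if (output_flags == nullptr) {
        return 1; // convention: output only for the last token
    }
    int32_t n = 0;
    for (int32_t i = 0; i < n_tokens; ++i) {
        n += output_flags[i] != 0;
    }
    return n;
}
```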
Georgi Gerganov
c3ee46fab4 batch : remove logits_all flag (#14141)
ggml-ci
b5644
2025-06-12 11:49:26 +03:00
Georgi Gerganov
e2c0b6e46a cmake : handle whitespaces in path during metal build (#14126)
* cmake : handle whitespaces in path during metal build

ggml-ci

* cont : proper fix

ggml-ci

---------

Co-authored-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2025-06-12 10:14:24 +03:00
Georgi Gerganov
9596506965 kv-cache : fix split_equal handling in unified implementation (#14130)
ggml-ci
b5642
2025-06-12 10:02:15 +03:00
compilade
a20b2b05bc context : round n_tokens to next multiple of n_seqs when reserving (#14140)
This fixes RWKV inference, which otherwise failed when the worst-case
ubatch.n_seq_tokens rounded to 0.
b5641
2025-06-12 02:56:04 -04:00
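The rounding rule itself is a one-liner; a sketch:

```cpp
#include <cstdint>

// Round n_tokens up to the next multiple of n_seqs so the worst-case
// ubatch.n_seq_tokens (= n_tokens / n_seqs) can never round down to 0.
int32_t round_up_to_multiple(int32_t n_tokens, int32_t n_seqs) {
    return ((n_tokens + n_seqs - 1) / n_seqs) * n_seqs;
}
```

For example, n_tokens = 1 with n_seqs = 4 becomes 4, giving n_seq_tokens = 4/4 = 1 instead of 1/4 = 0.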
bandoti
2e89f76b7a common: fix issue with regex_escape routine on windows (#14133) b5640 2025-06-11 17:19:44 -03:00
Christian Kastner
532802f938 Implement GGML_CPU_ALL_VARIANTS for ARM (#14080)
* ggml-cpu: Factor out feature detection build from x86

* ggml-cpu: Add ARM feature detection and scoring

This is analogous to cpu-feats-x86.cpp. However, to detect compile-time
activation of features, we rely on GGML_USE_<FEAT>, which needs to be set
in cmake, instead of GGML_<FEAT>, which users would set for x86.

This is because on ARM, users specify features with GGML_CPU_ARM_ARCH
rather than with individual flags.

* ggml-cpu: Implement GGML_CPU_ALL_VARIANTS for ARM

Like x86, however to pass around arch flags within cmake, we use
GGML_INTERNAL_<FEAT> as we don't have GGML_<FEAT>.

Some features are optional, so we may need to build multiple backends
per arch version (armv8.2_1, armv8.2_2, ...), and let the scoring
function sort out which one can be used.

* ggml-cpu: Limit ARM GGML_CPU_ALL_VARIANTS to Linux for now

The other platforms will need their own specific variants.

This also fixes a bug where the variant-building branch was always
executed as the else-branch of GGML_NATIVE=OFF. The branch is moved to an
elseif-branch, which restores the previous behavior.
b5639
2025-06-11 21:07:44 +02:00
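A sketch of the scoring idea (structure assumed, analogous to cpu-feats-x86.cpp): each built variant records the features it requires; at load time, variants whose requirements the CPU satisfies are scored, and the best one wins.

```cpp
#include <cstdint>

struct variant_sketch {
    uint32_t required; // bitmask of features the variant was compiled for
};

// Returns -1 if the variant is unusable on this CPU, otherwise a score
// where "uses more of the detected features" ranks higher.
int score_variant(const variant_sketch & v, uint32_t detected) {
    if ((v.required & detected) != v.required) {
        return -1; // a required CPU feature is missing
    }
    return __builtin_popcount(v.required); // GCC/Clang builtin
}
```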
Sigbjørn Skjæret
d4e0d95cf5 chore : clean up relative source dir paths (#14128) b5638 2025-06-11 19:04:23 +02:00
Sigbjørn Skjæret
cc66a7f78f tests : add test-tokenizers-repo (#14017) b5637 2025-06-11 17:16:32 +02:00
Jeff Bolz
bd248d4dc7 vulkan: Better thread-safety for command pools/buffers (#14116)
This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
b5636
2025-06-11 09:48:52 -05:00
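A rough sketch of the shape this describes (names assumed; Vulkan handle types replaced with stand-ins so the sketch is self-contained):

```cpp
#include <mutex>
#include <vector>

using command_pool_handle   = void *; // stand-in for VkCommandPool
using command_buffer_handle = void *; // stand-in for VkCommandBuffer

struct vk_command_pool_sketch {
    command_pool_handle                pool = nullptr;
    std::vector<command_buffer_handle> buffers; // buffers from this pool
    std::mutex                         mtx;     // guards alloc/reset
};

// Two instances per context (compute + transfer); the device would hold
// two more for work that does not go through a context.
struct vk_context_sketch {
    vk_command_pool_sketch compute;
    vk_command_pool_sketch transfer;
};
```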
Aman
7781e5fe99 webui: Wrap long numbers instead of infinite horizontal scroll (#14062)
* webui: Wrap long numbers instead of infinite horizontal scroll

* Use tailwind class

* update index.html.gz
2025-06-11 16:42:25 +02:00
Georgi Gerganov
89a184fa71 kv-cache : relax SWA masking condition (#14119)
ggml-ci
b5634
2025-06-11 16:48:45 +03:00
Taylor
2baf07727f server : pass default --keep argument (#14120) b5633 2025-06-11 13:43:43 +03:00
Georgi Gerganov
7ae2932116 kv-cache : add LLAMA_KV_CACHE_DEBUG environment variable (#14121) b5632 2025-06-11 12:52:45 +03:00
Jeff Bolz
1f7d50b293 vulkan: Track descriptor pools/sets per-context (#14109)
Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. The context has a
single vector of pools, a single vector of sets, one counter to track requests,
and one counter to track use.
b5631
2025-06-11 07:19:25 +02:00
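A sketch of the per-context tracking described above (names assumed, allocation elided):

```cpp
#include <cstddef>
#include <vector>

using descriptor_pool_handle = void *; // stand-in for VkDescriptorPool
using descriptor_set_handle  = void *; // stand-in for VkDescriptorSet

struct vk_context_descriptors_sketch {
    std::vector<descriptor_pool_handle> pools;
    std::vector<descriptor_set_handle>  sets;
    size_t n_requested = 0; // sets requested so far
    size_t n_used      = 0; // sets handed out so far

    descriptor_set_handle request() {
        n_requested++;
        if (n_used < sets.size()) {
            return sets[n_used++]; // reuse a previously allocated set
        }
        // ... allocate a new pool and sets here (elided) ...
        return nullptr;
    }
};
```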
lhez
4c763c8d1b opencl: add mul_mv_id_q4_0_f32_8x_flat (#14003) b5630 2025-06-10 16:55:58 -07:00
compilade
dad5c44398 kv-cache : avoid modifying recurrent cells when setting inputs (#13834)
* kv-cache : avoid modifying recurrent cells when setting inputs

* kv-cache : remove inp_s_mask

It was replaced with equivalent and simpler functionality
with rs_z (the first zeroed state) and the already-existing inp_s_copy.

* kv-cache : fix non-consecutive token pos warning for recurrent models

The problem was apparently caused by how the tail cells were swapped.

* graph : simplify logic for recurrent state copies

* kv-cache : use cell without src refs for rs_z in recurrent cache

* llama-graph : fix recurrent state copy

The `state_copy` shuffle assumes everything is moved at once,
which is not true when `states_extra` is copied back to the cache
before copying the range of states between `head` and `head + n_seqs`.
This is only a problem if any of the cells in [`head`, `head + n_seqs`)
have a `src` in [`head + n_seqs`, `head + n_kv`),
which does happen when `n_ubatch > 1` in the `llama-parallel` example.

Changing the order of the operations avoids the potential overwrite
before use, although when copies are avoided (like with Mamba2),
this will require further changes.

* llama-graph : rename n_state to state_size in build_recurrent_state

This naming should reduce confusion between the state size
and the number of states.
b5629
2025-06-10 18:20:14 -04:00
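The ordering fix in the `llama-graph : fix recurrent state copy` step can be illustrated with a toy model (simplified to one float per state; all names assumed, not the actual graph code):

```cpp
#include <cstdint>
#include <vector>

// cache[i] holds one state; src[i] says where cell i's state currently
// lives. Reading the [head, head + n_seqs) states *before* moving the
// extra states avoids overwriting a src cell before it is read.
void shuffle_states_sketch(std::vector<float> & cache,
                           const std::vector<int32_t> & src,
                           int32_t head, int32_t n_seqs, int32_t n_kv) {
    // 1) read the states used by this ubatch first
    std::vector<float> used(n_seqs);
    for (int32_t i = 0; i < n_seqs; ++i) {
        used[i] = cache[src[head + i]];
    }
    // 2) only then move the extra states back into place; doing this
    //    first could clobber a src in [head + n_seqs, head + n_kv)
    for (int32_t i = head + n_seqs; i < head + n_kv; ++i) {
        cache[i] = cache[src[i]];
    }
    // 3) finally write the ubatch states into [head, head + n_seqs)
    for (int32_t i = 0; i < n_seqs; ++i) {
        cache[head + i] = used[i];
    }
}
```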
Sigbjørn Skjæret
55f6b9fa65 convert : fix duplicate key DeepSeek-R1 conversion error (#14103) 2025-06-10 23:29:52 +02:00
Sigbjørn Skjæret
3678b838bb llama : support GEGLU for jina-bert-v2 (#14090) b5627 2025-06-10 18:02:08 +02:00
Jeff Bolz
652b70e667 vulkan: force device 0 in CI (#14106) 2025-06-10 10:53:47 -05:00
Juk Armstrong
3a12db23b6 Fixed spec timings to report accepted/tested instead of accepted/drafted (#14104) b5625 2025-06-10 16:48:07 +01:00
Georgi Gerganov
ae92c1855b sync : ggml
ggml-ci
b5624
2025-06-10 18:39:33 +03:00
Georgi Gerganov
b7ce1ad1e3 ggml : fix weak alias win32 (whisper/0)
ggml-ci
2025-06-10 18:39:33 +03:00