Commit Graph

4226 Commits

Author SHA1 Message Date
9a4b79bcfa CANN: Improve Inference Performance for the Ascend NPU Device (#10454)
* improve inference performance for ascend npu.

Co-authored-by: Frank Mai <thxCode@thxcode0824@gmail.com>

* some modifications after review

* some modifications after review

* restore some modifications

* restore some modifications

---------

Co-authored-by: shanshan shen <shanshanshen333@gmail.com>
Co-authored-by: Frank Mai <thxCode@thxcode0824@gmail.com>
b4176
2024-11-26 18:08:37 +08:00
7066b4cce2 CANN: RoPE and CONCAT operator optimization (#10488)
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
b4175
2024-11-26 17:31:05 +08:00
0eb4e12bee vulkan: Fix a vulkan-shaders-gen argument parsing error (#10484)
vulkan-shaders-gen was not parsing the --no-clean argument correctly: the
previous code only handled arguments that take a value, and --no-clean takes
none. The parser now also handles arguments without values.
b4174
2024-11-26 01:47:20 +00:00
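
A minimal sketch of the fix pattern described above, assuming a hypothetical parser loop (not the tool's actual code): flags known to take no value are recognized before the next token is consumed as a value.

    #include <map>
    #include <set>
    #include <string>

    // Hypothetical sketch: flags listed in no_value_args are stored as bare
    // presence markers; every other argument consumes the next token as its value.
    static std::map<std::string, std::string> parse_args(int argc, char ** argv) {
        const std::set<std::string> no_value_args = { "--no-clean" };
        std::map<std::string, std::string> args;
        for (int i = 1; i < argc; ++i) {
            const std::string arg = argv[i];
            if (no_value_args.count(arg)) {
                args[arg] = "";        // value-less flag: presence is enough
            } else if (i + 1 < argc) {
                args[arg] = argv[++i]; // flag with a value: consume next token
            }
        }
        return args;
    }
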
0cc63754b8 Introduce llama-run (#10291)
It's like simple-chat but it uses smart pointers to avoid manual
memory cleanup, so there are fewer memory leaks in the code now. It also
avoids printing multiple dots, splits the code into smaller functions,
and uses no exception handling.

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
b4173
2024-11-25 22:56:24 +01:00
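
A minimal sketch of the smart-pointer pattern described above, assuming the llama.cpp C API of that period (llama_load_model_from_file / llama_free_model / llama_new_context_with_model / llama_free); the deleters make every return path free the handles without manual cleanup calls.

    #include <memory>
    #include "llama.h"

    using model_ptr = std::unique_ptr<llama_model,   decltype(&llama_free_model)>;
    using ctx_ptr   = std::unique_ptr<llama_context, decltype(&llama_free)>;

    int main() {
        model_ptr model(llama_load_model_from_file("model.gguf",
                            llama_model_default_params()), llama_free_model);
        if (!model) {
            return 1;
        }
        ctx_ptr ctx(llama_new_context_with_model(model.get(),
                            llama_context_default_params()), llama_free);
        // ... chat loop; no explicit free calls needed on any exit path
        return ctx ? 0 : 1;
    }
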
50d5cecbda ci : build docker images only once daily (#10503) b4172 2024-11-25 22:05:39 +01:00
9fd8c2687f server : add more information about error (#10455) b4171 2024-11-25 22:28:59 +02:00
47f931c8f9 server : enable cache_prompt by default (#10501)
ggml-ci
b4170
2024-11-25 21:50:07 +02:00
106964e3d2 metal : enable mat-vec kernels for bs <= 4 (#10491) b4169 2024-11-25 21:49:31 +02:00
80acb7b430 Rename Olmo1124 to Olmo2 (#10500) b4168 2024-11-25 19:36:09 +01:00
10bce0450f llama : accept a list of devices to use to offload a model (#10497)
* llama : accept a list of devices to use to offload a model

* accept `--dev none` to completely disable offloading

* fix dev list with dl backends

* rename env parameter to LLAMA_ARG_DEVICE for consistency
b4167
2024-11-25 19:30:06 +01:00
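
A sketch of how such a device list might be parsed; the helper name and the "none" handling follow the description above, but this is illustrative rather than the PR's actual code.

    #include <sstream>
    #include <string>
    #include <vector>

    // Split a "--dev cuda0,cuda1" style value; "none" yields an empty list,
    // which the caller interprets as "disable offloading entirely".
    static std::vector<std::string> parse_device_list(const std::string & value) {
        std::vector<std::string> devices;
        if (value == "none") {
            return devices;
        }
        std::stringstream ss(value);
        std::string dev;
        while (std::getline(ss, dev, ',')) {
            if (!dev.empty()) {
                devices.push_back(dev);
            }
        }
        return devices;
    }
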
1f922254f0 Github: update issue templates [no ci] (#10489) 2024-11-25 19:18:37 +01:00
a9a678a6b2 Add download chat feature to server chat (#10481)
* Add download chat feature to server chat

Add a download feature next to the delete chat feature in the server's Vue chat interface.

* code style

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-11-25 17:11:55 +01:00
9ca2e67762 server : add speculative decoding support (#10455)
* server : add speculative decoding support

ggml-ci

* server : add helper function slot.can_speculate()

ggml-ci
b4164
2024-11-25 16:31:38 +02:00
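
The slot.can_speculate() helper mentioned above presumably gates speculation on a draft context being configured; a hypothetical reconstruction, not the server's actual member layout.

    struct llama_context; // opaque handle from the llama.cpp C API

    // Hypothetical slot fields: a separate draft-model context plus a limit.
    struct server_slot {
        llama_context * ctx     = nullptr; // main model context
        llama_context * ctx_dft = nullptr; // draft model context (optional)
        int             n_draft = 16;      // max tokens to draft per step

        // Speculation is only possible when a draft model was loaded.
        bool can_speculate() const {
            return ctx_dft != nullptr && n_draft > 0;
        }
    };
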
5931c1f233 ggml : add support for dynamic loading of backends (#10469)
* ggml : add support for dynamic loading of backends

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b4163
2024-11-25 15:13:39 +01:00
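
The general technique on POSIX systems looks roughly like the sketch below; the entry-point symbol name here is an assumption, and the real ggml registry may differ.

    #include <dlfcn.h>
    #include <cstdio>

    // Each backend is built as a shared library exporting an init symbol.
    typedef void * (*backend_init_fn)(void);

    static void * load_backend(const char * path) {
        void * handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "failed to load %s: %s\n", path, dlerror());
            return nullptr;
        }
        // Resolve the agreed-upon entry point and let it register the backend.
        auto init = (backend_init_fn) dlsym(handle, "ggml_backend_init");
        return init ? init() : nullptr;
    }
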
f6d12e7df8 tests : fix compile warning b4162 2024-11-25 15:17:32 +02:00
b756441104 metal : minor code formatting b4161 2024-11-25 15:08:04 +02:00
5a8987793f [SYCL] Fix building Win package for oneAPI 2025.0 update (#10483)
* fix build package for 2025.0

* debug

* debug

* fix

* rm debug

---------

Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
b4160
2024-11-25 17:31:10 +08:00
d9d54e498d speculative : refactor and add a simpler example (#10362)
* speculative : refactor and add a simpler example

ggml-ci

* speculative : clean-up and add comments and TODOs [no ci]

* speculative : manage context in common_speculative

ggml-ci

* speculative : simplify

ggml-ci

* speculative : simplify (cont)

ggml-ci

* speculative : add --draft-min CLI arg

* speculative : minor fixup

* make : build fixes

* speculative : do not redraft previous drafts

ggml-ci

* speculative : fix the draft sampling

ggml-ci

* speculative : fix compile warning

* common : refactor args

ggml-ci

* common : change defaults [no ci]

* common : final touches

ggml-ci
2024-11-25 09:58:41 +02:00
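
In outline, speculative decoding lets a small draft model propose several tokens which the target model then verifies, stopping at the first disagreement. A conceptual sketch of that loop; every name below is hypothetical rather than the example's actual API, and real code verifies the whole draft with one batched decode instead of a per-token call.

    #include <cstdint>
    #include <functional>
    #include <vector>

    using llama_token = int32_t;
    using token_vec   = std::vector<llama_token>;

    // One speculative step: accept drafted tokens while the target agrees.
    static token_vec speculative_step(
            token_vec ctx, int n_draft,
            const std::function<token_vec(const token_vec &, int)> & draft,
            const std::function<llama_token(const token_vec &)>    & verify) {
        for (const llama_token t : draft(ctx, n_draft)) {
            const llama_token accepted = verify(ctx); // target model's token
            ctx.push_back(accepted);
            if (accepted != t) {
                break; // mismatch: drop the rest of the draft, never redraft it
            }
        }
        return ctx;
    }
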
cce5a90075 flake.lock: Update (#10470)
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/5e4fbfb6b3de1aa2872b76d49fafc942626e2add?narHash=sha256-OZiZ3m8SCMfh3B6bfGC/Bm4x3qc1m2SVEAlkV6iY7Yg%3D' (2024-11-15)
  → 'github:NixOS/nixpkgs/23e89b7da85c3640bbc2173fe04f4bd114342367?narHash=sha256-y/MEyuJ5oBWrWAic/14LaIr/u5E0wRVzyYsouYY3W6w%3D' (2024-11-19)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-24 08:03:25 -08:00
dc39012cba llama : fix op mul check with command-r-plus (#10476) b4157 2024-11-24 16:10:26 +01:00
9336db462c convert : XLMRoberta Type Vocab Size (#10458)
This matches the key used in common BERT-based embedding models, which may
hold a value other than 1.

Branch: XLMRobertaTypeVocabSize

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2024-11-24 11:02:34 +02:00
96fa2c5e2d fix gguf-py: Conversion error when multiple licenses are configured (#9807)
* fix general.license list to str

* fix join license list

---------

Co-authored-by: momonga <115213907+mmnga@users.noreply.github.com>
2024-11-24 01:09:22 +01:00
55ed008b2d ggml : do not use ARM features not included in the build (#10457) b4154 2024-11-23 14:41:12 +01:00
6dfcfef078 ci: Update oneAPI runtime dll packaging (#10428)
These are the minimum runtime DLL dependencies for oneAPI 2025.0
b4153
2024-11-22 10:44:08 +01:00
599b3e0cd4 GitHub: ask for more info in issue templates (#10426)
* GitHub: ask for more info in issues [no ci]

* refactor issue templates to be component-specific

* more understandable issue description

* add dropdown for llama.cpp module
2024-11-22 08:32:40 +01:00
c18610b4ee CANN: Support Ascend310P to accelerate F32 and F16 Model (#10216)
* CANN Support Ascend310P to accelerate F32 and F16 Model

* Add compile option soc type macro ASCEND_310P to ggml-cann lib

* Remove unused code

* Remove the hard-coded ascend soc_type compile option in CMakeLists.txt
b4151
2024-11-22 14:07:20 +08:00
a5e47592b6 cuda : optimize argmax (#10441)
* cuda : optimize argmax

* remove unused parameter

ggml-ci

* fixup : use full warps

ggml-ci

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* fix ub

* ggml : check ne00 <= INT32_MAX in argmax and argsort

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
b4150
2024-11-21 18:18:50 +01:00
1bb30bf28c llama : handle KV shift for recurrent models (#10402)
ggml-ci
b4149
2024-11-21 10:22:47 +02:00
87a533be57 sync : ggml b4148 2024-11-21 09:22:11 +02:00
59b9172822 ggml/sched : do not skip views in pre-assignments 2024-11-21 09:22:05 +02:00
02e4eaf22f ggml-opt: fix data corruption (ggml/1022) 2024-11-21 09:22:02 +02:00
9abe9eeae9 vulkan: predicate max operation in soft_max shaders/soft_max (#10437)
Fixes #10434
2024-11-20 20:47:36 +01:00
f95caa7954 cmake: add link dependencies to cmake find pkg (#10433)
* cmake pkg: find accelerate, openmp, memkind libs

* cmake pkg: find BLAS libs

* try BLAS_LIBRARIES instead

* Add BLAS link opts

* Add more link deps. and set GGML_ vars
2024-11-20 17:22:19 +01:00
fab5d30ff6 llama : add .clang-format file (#10415) b4143 2024-11-20 12:57:53 +01:00
8fd4b7fa29 vulkan: copy iq4_nl LUT into shared memory (#10409) b4142 2024-11-20 08:40:18 +01:00
1bacb9f625 vulkan: further optimize mul_mat_vec using larger loads (#10387)
* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.
b4141
2024-11-20 08:11:00 +01:00
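
On the host side, the per-pipeline robustness opt-out comes from VK_EXT_pipeline_robustness; a minimal sketch of chaining it into a compute pipeline, assuming the extension was enabled at device creation.

    #include <vulkan/vulkan.h>

    // Disable bounds checking for this pipeline so the shader does not spend
    // ALU work on robustness checks that compete with dequantization math.
    static void disable_robustness(VkComputePipelineCreateInfo & info,
                                   VkPipelineRobustnessCreateInfoEXT & rb) {
        rb = {};
        rb.sType          = VK_STRUCTURE_TYPE_PIPELINE_ROBUSTNESS_CREATE_INFO_EXT;
        rb.storageBuffers = VK_PIPELINE_ROBUSTNESS_BUFFER_BEHAVIOR_DISABLED_EXT;
        rb.uniformBuffers = VK_PIPELINE_ROBUSTNESS_BUFFER_BEHAVIOR_DISABLED_EXT;
        rb.vertexInputs   = VK_PIPELINE_ROBUSTNESS_BUFFER_BEHAVIOR_DISABLED_EXT;
        rb.images         = VK_PIPELINE_ROBUSTNESS_IMAGE_BEHAVIOR_DISABLED_EXT;
        info.pNext        = &rb; // chain ahead of vkCreateComputePipelines
    }
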
ad21c9e1f1 update rel to 4040 (#10395)
Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
2024-11-20 13:54:25 +08:00
3952a221af Fix missing file renames in Makefile due to changes in commit ae8de6d50a (#10413) b4139 2024-11-19 23:18:17 +01:00
42ae10bbcd add cmake rvv support (#10411) b4138 2024-11-19 21:10:31 +01:00
9fe0fb0626 sync : ggml b4137 2024-11-19 20:03:21 +02:00
611fabd792 metal : fix offset integer overflows in im2col (ggml/1015)
-- While running StableDiffusion.cpp locally with Metal, some offsets overflow and result in incorrect calculations
2024-11-19 20:03:21 +02:00
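
The failure mode is classic 32-bit offset arithmetic wrapping on large tensors; a minimal illustration of the fix pattern (not the actual Metal kernel), widening the computation to 64 bits.

    #include <cstdint>

    // Products like iy * stride_y can exceed INT32_MAX on large images; doing
    // the arithmetic in 64 bits avoids the wraparound and the bad addresses.
    static int64_t im2col_offset(int64_t iy, int64_t ix,
                                 int64_t stride_y, int64_t stride_x) {
        return iy * stride_y + ix * stride_x;
    }
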
12b0ad953a metal : add GGML_UNARY_OP_ELU kernel (ggml/1018) 2024-11-19 20:03:21 +02:00
342397dc7e cmake: force MSVC compiler charset to utf-8 (#9989) b4134 2024-11-19 18:42:00 +01:00
2a11b6b094 Add required ggml-base and backend libs to cmake pkg (#10407) b4133 2024-11-19 17:10:30 +01:00
3ee6382d48 cuda : fix CUDA_FLAGS not being applied (#10403) b4132 2024-11-19 14:29:38 +01:00
8e752a777b llama : add check for KV cache shifts (#10401)
ggml-ci
b4131
2024-11-19 13:29:26 +02:00
a88ad007de llama : add OLMo November 2024 support (#10394)
* Add OLMo November 2024 constants

* Add OLMo November 2024 converter

* Add loading of OLMo November 2024 tensors and hyperparameters

* Add building of OLMo November 2024 model
b4130
2024-11-19 11:04:08 +02:00
2a1507c162 sycl : Add option to set the SYCL architecture for all targets (#10266)
* Add option to set the SYCL architecture for all targets
* Convert GGML_SYCL_HIP_TARGET to the more generic GGML_SYCL_ARCH option
* Document that setting GGML_SYCL_ARCH can improve the performance
b4129
2024-11-19 08:02:23 +00:00
b3e585988f vulkan: Optimize soft_max (#10301)
* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.
b4128
2024-11-19 08:25:17 +01:00
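
Matching the workgroup size to the subgroup size starts with querying the device; a small sketch of that query (standard Vulkan 1.1 API, independent of the shader changes themselves).

    #include <vulkan/vulkan.h>

    // Query the hardware subgroup size so small soft_max dispatches can use a
    // workgroup of exactly one subgroup, making barriers effectively free.
    static uint32_t get_subgroup_size(VkPhysicalDevice dev) {
        VkPhysicalDeviceSubgroupProperties subgroup = {};
        subgroup.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SUBGROUP_PROPERTIES;

        VkPhysicalDeviceProperties2 props = {};
        props.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props.pNext = &subgroup;

        vkGetPhysicalDeviceProperties2(dev, &props);
        return subgroup.subgroupSize;
    }
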
557924f222 sycl: Revert MUL_MAT_OP support changes (#10385) b4127 2024-11-19 08:50:04 +08:00