Commit Graph

2957 Commits

SHA1 Message Date
e18bc6aaf3 convert : fix Qwen/Qwen-7b conversion (#7308) 2024-05-17 10:01:58 +03:00
ee94172d33 server : add support for the RPC backend (#7305)
ref: #7292
b2906
2024-05-17 10:00:17 +03:00
934266c0e0 ggml : rewrite silu and softmax for cpu (#7154)
This change upstreams llamafile's vectorized expf() functions. This lets
us compute softmax and silu more accurately than the short[65536] lookup
table GGML previously used to make these operations go faster. We support
aarch64 and SSE2+ with a worst-case rounding error of 2 ulp. It makes
make -j8 tests && ./tests/test-backend-ops -o SOFT_MAX -b CPU perf
run 1.5x faster for SSE2+FMA, 1.9x faster for AVX2+FMA, and 2.1x on AVX512.
2024-05-17 09:58:52 +03:00
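For context, the kernels compute the standard numerically stable softmax, just with SIMD expf() instead of a lookup table. A minimal scalar reference sketch (illustrative, not the llamafile code itself):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Scalar reference: numerically stable softmax via exp().
// The vectorized kernels compute the same result to within ~2 ulp,
// replacing the old short[65536] lookup-table approximation.
void softmax_ref(const float *x, float *y, size_t n) {
    float maxv = x[0];
    for (size_t i = 1; i < n; i++) maxv = std::max(maxv, x[i]); // subtract max for stability
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) { y[i] = std::exp(x[i] - maxv); sum += y[i]; }
    for (size_t i = 0; i < n; i++) y[i] /= sum;
}
```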
9c4fdcbec8 [Server] Added --verbose option to README [no ci] (#7335) 2024-05-17 10:11:03 +10:00
24ecb58168 Revert "server bench: fix bench not waiting for model load (#7284)" (#7334)
This reverts commit 583fd6b000.
2024-05-16 20:43:45 +02:00
9afdffe70e rpc : get available mem for the CPU backend
This can be overridden with the -m command line option

ref: #7293
2024-05-16 12:04:08 +03:00
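One plausible way to query available CPU memory on Linux, as a hedged sketch (the actual rpc-server may use different platform APIs):

```cpp
#include <sys/sysinfo.h>
#include <cstdint>

// Hypothetical sketch: report free physical memory on Linux.
// The -m command line option would override this value.
uint64_t cpu_available_mem_bytes() {
    struct sysinfo info;
    if (sysinfo(&info) != 0) return 0;
    return (uint64_t) info.freeram * info.mem_unit;
}
```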
3b3963c55c rpc : add command line arg for specifying backend memory
ref: #7293
b2901
2024-05-16 09:58:29 +03:00
dda64fc17c convert : get general.name from model dir, not its parent (#5615)
Co-authored-by: Brian <mofosyne@gmail.com>
2024-05-16 16:15:23 +10:00
0350f58152 grammar, json, llama: replace push with emplace where possible (#7273) b2899 2024-05-16 16:14:24 +10:00
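The motivation, roughly: emplace_back constructs the element in place and can avoid building a temporary plus a move. A generic illustration (not the exact call sites changed in #7273):

```cpp
#include <string>
#include <utility>
#include <vector>

int main() {
    std::vector<std::pair<std::string, int>> rules;
    rules.push_back(std::make_pair(std::string("root"), 0)); // builds a temporary, then moves it
    rules.emplace_back("root", 0);                           // constructs the pair in place
}
```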
ad52d5c259 doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288)
* chore: add references to the quantisation space.

* fix grammar.

* Update README.md

Co-authored-by: Julien Chaumond <julien@huggingface.co>

* Update README.md

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-16 15:38:43 +10:00
172b78210a ci: fix bin/Release path for windows-arm64 builds (#7317)
Switch to the Ninja Multi-Config CMake generator to resurrect the bin/Release
path whose loss broke artifact packaging in CI.
b2897
2024-05-16 15:36:43 +10:00
13ad16af12 Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191)
* logging: add proper checks for clang to avoid errors and warnings with VA_ARGS

* build: add CMake Presets and toolchain files for Windows ARM64

* matmul-int8: enable matmul-int8 with MSVC and fix Clang warnings

* ci: add support for optimized Windows ARM64 builds with MSVC and LLVM

* matmul-int8: fixed typos in q8_0_q8_0 matmuls

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* matmul-int8: remove unnecessary casts in q8_0_q8_0

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-16 12:47:36 +10:00
8f7080bf48 readme : remove stray double quote (#7310)
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-05-15 23:41:03 +02:00
e1b40ac3b9 ggml : use dynamic thread scheduling for matrix multiplication (#6915)
* Just reordering some structs.

* Adding in the calls to mm_pause

* Passing around the state

* Renaming and moving a bunch of variables around.

* Extracting the logic to its own function.

* Moving some variable definitions into the chunk function.

* Moving some variables around

* moving src1_cont inside

* Moving row_size

* adding the current_chunk

* Reorg the code.

* Formatting to match the orig patch

* starting to setup the chunking variables

* Starting the buildup of the loop

* The yield shouldn't be necessary.

* adding the looping structure based on the chunk configuration.

* Add in the re-chunking code.

* Making it much more likely to rechunk.

* disable resizing if NUMA is enabled.

* Updating comments with what we've learned.

* Fix formatting

* Couple more formatting fixes.

* More style fixes.

* Fix Warnings

* Going with unused because there's conditional logic that needs it.

* Update ggml.c

* Update ggml.c

---------
b2894
2024-05-15 19:59:12 +02:00
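The core idea is to split the multiplication into more chunks than threads and have each thread claim the next chunk from a shared atomic counter, so fast threads pick up extra work instead of idling at a static partition. A simplified sketch of the pattern (not the actual ggml.c code):

```cpp
#include <atomic>

// Shared across worker threads: index of the next unclaimed chunk.
std::atomic<int> current_chunk{0};

// Hypothetical per-thread worker using dynamic chunk scheduling.
void worker(int n_chunks /*, ... matmul params ... */) {
    for (;;) {
        const int chunk = current_chunk.fetch_add(1, std::memory_order_relaxed);
        if (chunk >= n_chunks) break;   // all chunks claimed
        // process_chunk(chunk);        // multiply this chunk's slice of rows/cols
    }
}
```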
dc020985b8 Avoid unnecessarily disabling CUDA graphs (#7302)
As discussed in PR #6766, CUDA graphs were being disabled in the presence of long prompts.
This fixes the issue by preventing the consecutive-update counter from incrementing
unnecessarily for tokens for which CUDA graphs are disabled due to batch size > 1.
b2893
2024-05-15 15:44:49 +02:00
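The guard is essentially a counter heuristic: graphs get disabled once the graph topology changes too many times in a row, so the fix is to stop counting tokens that could never use graphs anyway. A hedged sketch of that logic (threshold and names illustrative):

```cpp
// Hypothetical sketch of the heuristic described above.
const int MAX_CONSECUTIVE_UPDATES = 4;   // illustrative threshold

void on_graph_update(int batch_size, int &n_consecutive_updates, bool &use_cuda_graph) {
    if (batch_size > 1) return;          // graphs unused here; don't penalize the counter
    if (++n_consecutive_updates >= MAX_CONSECUTIVE_UPDATES) {
        use_cuda_graph = false;          // topology keeps changing; give up on graphs
    }
}
```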
344f9126cc ggml : tag ggml_tensor::backend as deprecated (#7290) b2892 2024-05-15 15:08:48 +02:00
9a17ab914b Add missing " (#7303) b2891 2024-05-15 17:56:30 +05:30
ea3b0590ee embedding : free the batch after execution (#7297) b2890 2024-05-15 15:01:12 +03:00
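The fix follows the usual llama.h batch lifecycle: a batch created with llama_batch_init() must be released with llama_batch_free(). Roughly (a sketch, details elided):

```cpp
#include "llama.h"

void run_embedding(llama_context *ctx, int n_tokens) {
    llama_batch batch = llama_batch_init(n_tokens, /*embd=*/0, /*n_seq_max=*/1);
    // ... fill the batch and call llama_decode(ctx, batch) ...
    llama_batch_free(batch); // previously leaked after execution
}
```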
29499bb593 sync : ggml b2889 2024-05-15 13:23:41 +03:00
48aa8fd1f2 ggml : add ggml_upscale_ext (ggml/814)
* initial commit with CPU implementation of upscale to shape and test, cuda implementation next

* experimental commit to see if dst shape is correct

* test version

* test

* removed unnecessary params

* refactor

* fixed tests

* ggml : metal impl + cleanup + sycl dev warnings

* patched ggml_upscale cuda op to handle non-contiguous tensors, added test for non-contiguous behavior

* metal : fix upscale op to support nb00 + style

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-15 13:23:33 +03:00
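As the name suggests, the _ext variant upscales to an explicit target shape rather than by a single integer factor. A hedged usage sketch (signature as of this sync; check ggml.h):

```cpp
#include "ggml.h"

// Upscale a tensor to an explicit 4x6 target in the first two dims,
// keeping the remaining dims unchanged. ggml_upscale() scales every
// dim by the same factor; the _ext variant picks each dim independently.
struct ggml_tensor * example(struct ggml_context * ctx, struct ggml_tensor * a) {
    return ggml_upscale_ext(ctx, a, /*ne0=*/4, /*ne1=*/6,
                            (int) a->ne[2], (int) a->ne[3]);
}
```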
583fd6b000 server bench: fix bench not waiting for model load (#7284) 2024-05-15 08:44:16 +02:00
9f773486ab script : sync ggml-rpc b2886 2024-05-14 19:14:38 +03:00
e8a7fd4fb0 metal : support FA without mask + add asserts (#7278)
* ggml : fa without mask + add asserts

ggml-ci

* metal : support non-contiguous KV

ggml-ci
b2885
2024-05-14 19:09:30 +03:00
a5e3fde857 sync : ggml
ggml-ci
b2884
2024-05-14 19:08:09 +03:00
f308ea7059 metal : tune soft_max number of threads (whisper/0) 2024-05-14 19:08:09 +03:00
c3c88f296a ggml : try fix ppc64 (whisper/0) 2024-05-14 19:08:09 +03:00
182adefcf3 ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128) 2024-05-14 19:08:09 +03:00
0d26d8ccd8 ggml : optimize for ppc64le using VSX intrinsics (ggml/784)
* optimize for ppc64le using VSX intrinsics

* 1. Clean up code by removing comments about overflow concerns.

2. Fix a typo in the scaling suffix.

* Continue fixing the scaling-suffix typo for QK_K != 256

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-14 19:08:09 +03:00
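The gist of the VSX optimization is loading four floats at a time and accumulating with fused multiply-add. A minimal dot-product sketch using AltiVec/VSX intrinsics (illustrative only, not the actual quantized kernels):

```cpp
#include <altivec.h>

// Illustrative VSX dot product: 4 floats per iteration via fused multiply-add.
float dot_vsx(const float *a, const float *b, int n) {
    __vector float acc = vec_splats(0.0f);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        acc = vec_madd(vec_xl(0, a + i), vec_xl(0, b + i), acc);
    }
    float sum = acc[0] + acc[1] + acc[2] + acc[3]; // horizontal reduce
    for (; i < n; i++) sum += a[i] * b[i];         // scalar tail
    return sum;
}
```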
4f0263633b server: free sampling contexts on exit (#7264)
* server: free sampling contexts on exit

This cleans up the last leak found by the address sanitizer.

* fix whitespace

* fix whitespace
b2879
2024-05-14 16:11:24 +02:00
1265c670fd Revert "move ndk code to a new library (#6951)" (#7282)
This reverts commit efc8f767c8.
b2878
2024-05-14 16:10:39 +03:00
5e31828d3e ggml : add RPC backend (#6829)
* ggml : add RPC backend

The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc.).

* set TCP_NODELAY

* add CI workflows

* Address review comments

* fix warning

* implement llama_max_devices() for RPC

* Address review comments

* Address review comments

* wrap sockfd into a struct

* implement get_alignment and get_max_size

* add get_device_memory

* fix warning

* win32 support

* add README

* readme : trim trailing whitespace

* Address review comments

* win32 fix

* Address review comments

* fix compile warnings on macos
b2877
2024-05-14 14:27:19 +03:00
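TCP_NODELAY disables Nagle's algorithm, which matters here because the RPC backend makes many small request/response round trips. The standard sockets call, as a sketch (the backend's actual helper differs):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm so small RPC messages are sent immediately
// instead of being coalesced, cutting per-operation latency.
bool set_no_delay(int sockfd) {
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) == 0;
}
```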
541600201e llama : disable pipeline parallelism with nkvo (#7265) b2876 2024-05-14 17:33:42 +10:00
efc8f767c8 move ndk code to a new library (#6951) b2875 2024-05-14 17:30:30 +10:00
e0f556186b Add left recursion check: quit early instead of going into an infinite loop (#7083)
* Add left recursion check: quit early instead of going into an infinite loop

* Remove the custom enum, rename the left recursion check and move it to the "grammar internal" section, and add handling for the edge case where a leftmost nonterminal may be empty

* Remove unnecessary declaration
b2874
2024-05-14 15:25:56 +10:00
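Detecting left recursion amounts to following each rule's leftmost symbols and failing if a nonterminal can reach itself; the edge case above arises because an emptyable leftmost nonterminal lets the chain continue through the next symbol. A simplified sketch over a toy grammar representation (types hypothetical, not the llama.cpp grammar structs):

```cpp
#include <vector>

// Toy grammar: a rule is a sequence of symbols; negative ids are terminals,
// non-negative ids index other rules (nonterminals).
using Rule = std::vector<int>;

// True if `target` appears in leftmost position reachable from `rule`.
bool reaches_left(const std::vector<Rule> &rules, int rule, int target,
                  std::vector<bool> &visited) {
    if (visited[rule]) return false;
    visited[rule] = true;
    for (int sym : rules[rule]) {
        if (sym < 0) break;                  // terminal ends the leftmost chain
        if (sym == target) return true;      // target reappears leftmost: left recursion
        if (reaches_left(rules, sym, target, visited)) return true;
        break; // the real check continues past `sym` only if it can derive empty
    }
    return false;
}

bool is_left_recursive(const std::vector<Rule> &rules, int r) {
    std::vector<bool> visited(rules.size(), false);
    return reaches_left(rules, r, r, visited);
}
```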
27f65d6267 docs: Fix typo and update description for --embeddings flag (#7026)
- Change '--embedding' to '--embeddings' in the README
- Update the description to match the latest --help output
- Add a caution about defining the physical batch size
2024-05-14 15:20:47 +10:00
ee52225067 convert-hf : support direct Q8_0 conversion (#7234)
* convert-hf : support q8_0 conversion

* convert-hf : add missing ftype

This was messing with the checksums otherwise.

* convert-hf : add missing ftype to Baichuan and Xverse

I didn't notice these on my first pass.
2024-05-13 14:10:51 -04:00
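Q8_0 is simple enough to quantize directly at conversion time: values are grouped in blocks of 32, and each block stores a scale d = max|x| / 127 plus 32 signed bytes. A hedged sketch of the per-block math (in the spirit of ggml's block_q8_0; the conversion itself happens in the Python script):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

constexpr int QK8_0 = 32; // Q8_0 block size

// One block: 32 floats -> a scale plus 32 int8 values.
void quantize_block_q8_0(const float *x, float &d, int8_t *q) {
    float amax = 0.0f;
    for (int i = 0; i < QK8_0; i++) amax = std::max(amax, std::fabs(x[i]));
    d = amax / 127.0f;                           // stored as fp16 in the real format
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    for (int i = 0; i < QK8_0; i++) q[i] = (int8_t) std::round(x[i] * id);
}
```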
614d3b914e llama : less KV padding when FA is off (#7257)
ggml-ci
b2871
2024-05-13 17:15:15 +03:00
30e70334f7 llava-cli: fix base64 prompt (#7248) b2870 2024-05-14 00:02:36 +10:00
1c570d8bee perplexity: add BF16 vs. FP16 results (#7150) 2024-05-13 13:03:27 +02:00
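Background for these results: BF16 keeps FP32's full 8-bit exponent but only 7 mantissa bits, while FP16 has a 5-bit exponent and 10 mantissa bits, so BF16 trades precision for FP32's range. Conversion from FP32 is just a 16-bit truncation of the bit pattern (sketch; round-to-nearest-even omitted):

```cpp
#include <cstdint>
#include <cstring>

// fp32 -> bf16 by truncation: keep the sign, the full 8-bit exponent,
// and the top 7 mantissa bits. Production code adds rounding.
uint16_t fp32_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return (uint16_t) (bits >> 16);
}
```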
948f4ec7c5 [SYCL] rm wait() (#7233) b2868 2024-05-13 18:11:26 +08:00
9aa672490c llama : rename jina tokenizers to v2 (#7249)
* refactor: rename jina tokenizers to v2

* refactor: keep refactoring non-breaking
b2867
2024-05-13 11:35:14 +03:00
b1f8af1886 convert.py: Outfile default name change and additional metadata support (#4858)
* convert.py: Outfile default name change and additional metadata support

* convert.py: don't stringify Metadata load method output

* convert.py: typo fix

* convert.py: fix metadata format to sync with LLM_KV_NAMES in llama.cpp
2024-05-13 12:56:47 +10:00
e586ee4259 change default temperature of OAI compat API from 0 to 1 (#7226)
* change default temperature of OAI compat API from 0 to 1

* make tests explicitly send temperature to OAI API
b2865
2024-05-13 12:40:08 +10:00
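Temperature divides the logits before softmax, so a default of 1 leaves the model's distribution untouched (matching OpenAI's default), whereas 0 collapses to greedy sampling. The scaling itself, as a sketch:

```cpp
#include <cstddef>

// Apply sampling temperature: t = 1 leaves logits unchanged,
// smaller t sharpens the distribution toward greedy decoding.
void apply_temperature(float *logits, size_t n, float t) {
    if (t <= 0.0f) return; // t == 0 is handled as greedy argmax by the caller
    for (size_t i = 0; i < n; i++) logits[i] /= t;
}
```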
cbf75894d2 [SYCL] Add oneapi runtime dll files to win release package (#7241)
* add oneAPI runtime DLLs to the release package

* fix path

* fix path

* fix path

* fix path

* fix path

---------

Co-authored-by: Zhang <jianyu.zhang@intel.com>
b2864
2024-05-13 08:04:29 +08:00
0d5cef78ae [SYCL] update CI with oneapi 2024.1 (#7235)
Co-authored-by: Zhang <jianyu.zhang@intel.com>
2024-05-13 08:02:55 +08:00
dc685be466 CUDA: add FP32 FlashAttention vector kernel (#7188)
* CUDA: add FP32 FlashAttention vector kernel

* fixup! CUDA: add FP32 FlashAttention vector kernel

* fixup! fixup! CUDA: add FP32 FlashAttention vector kernel

* fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
b2862
2024-05-12 19:40:45 +02:00
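The algorithmic core of such a kernel is online softmax: track the running max, rescale the accumulator when it grows, and never materialize the full attention matrix. In plain scalar form (the CUDA kernel vectorizes this across threads; mask and batching omitted):

```cpp
#include <cmath>
#include <cstddef>

// One query row of flash attention, scalar sketch:
// out = softmax(q.K^T * scale) V, computed in a single streaming pass.
void fa_row(const float *q, const float *K, const float *V,
            float *out, size_t n_kv, size_t d, float scale) {
    float m = -INFINITY, s = 0.0f;                 // running max, softmax denominator
    for (size_t j = 0; j < d; j++) out[j] = 0.0f;
    for (size_t i = 0; i < n_kv; i++) {
        float x = 0.0f;
        for (size_t j = 0; j < d; j++) x += q[j] * K[i*d + j];
        x *= scale;
        const float m_new = x > m ? x : m;
        const float c = std::exp(m - m_new);       // rescale old accumulator
        const float p = std::exp(x - m_new);
        s = s * c + p;
        for (size_t j = 0; j < d; j++) out[j] = out[j] * c + p * V[i*d + j];
        m = m_new;
    }
    for (size_t j = 0; j < d; j++) out[j] /= s;
}
```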
6f1b63606f cmake : fix version cmp (#7227) b2861 2024-05-12 18:30:23 +03:00
b228aba91a remove convert-lora-to-ggml.py (#7204) b2860 2024-05-12 02:29:33 +02:00
7bd4ffb780 metal : fix warnings (skipme) (#0) b2859 2024-05-11 21:38:13 +03:00
1622ac023f sync : ggml 2024-05-11 21:35:05 +03:00