Commit Graph

978 Commits

Author SHA1 Message Date
f64d44a9b9 CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590) master-f64d44a 2023-08-13 00:24:45 +02:00
b19edd54d5 Adding support for llama2.c models (#2559) master-b19edd5 2023-08-12 01:17:25 +02:00
53dc399472 server: fixed wrong variable name in timing json (#2579)
* server: fixed wrong variable name in timing json

* remove redundant entry
master-53dc399
2023-08-12 00:35:14 +02:00
9ca4abed89 Handle ENABLE_VIRTUAL_TERMINAL_PROCESSING more gracefully on earlier versions of Windows. master-9ca4abe 2023-08-10 13:11:36 -07:00
e59fcb2bc1 Add --n-predict -2 for stopping generation on full context (#2565) master-e59fcb2 2023-08-10 16:28:27 +02:00
1638757767 Fix grammar-based sampling issue in server (#2566) master-1638757 2023-08-10 13:16:38 +03:00
916a9acdd0 ggml-alloc: Don't try to re-use buffers of external tensors (#2562)
* ggml-alloc: Don't try to re-use buffers of external tensors

They might be weights that came from another context, so we
have no control over them (and they might be re-used elsewhere
so writing to them would be a bad idea).

* ggml-alloc: >= when checking for out-of-bounds

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
master-916a9ac
2023-08-09 22:47:42 +02:00
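A minimal sketch (not the actual ggml-alloc code) of the rule the two fixes in the entry above describe: tensors whose data lives outside the allocator's own buffer are treated as externally owned and never recycled, and the out-of-bounds test uses `>=`. The struct and helper names here are illustrative only.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative allocator state; not the real ggml-alloc structures.
struct toy_alloc {
    uint8_t * base;   // start of the buffer this allocator manages
    size_t    size;   // size of that buffer
};

// A tensor is "external" if its data does not lie inside our buffer
// (e.g. weights that came from another context) -- we have no control
// over it and it may be re-used elsewhere.
static bool tensor_is_external(const toy_alloc * a, const void * data) {
    const uint8_t * p = static_cast<const uint8_t *>(data);
    return p < a->base || p >= a->base + a->size;   // note: >=, not >
}

static bool can_reuse_buffer(const toy_alloc * a, const void * data) {
    // External tensors may be read or written elsewhere, so reusing their
    // memory for intermediate results would corrupt them.
    return data != nullptr && !tensor_is_external(a, data);
}
```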
ea04a4ca19 add log_callback to llama_context_params for custom logging. (#2234)
* add log_callback to llama_context_params for custom logging.

* Fix macro expansion on gcc

* Add struct llama_state for global variables and move log_callback there

* Turn log level into enum and some minor changes.

* Remove model_for_logging parameter (not needed anymore)

* Convert remaining fprintf(stderr, ...) calls to use new macros.

* Fix enum and initialize g_state

* Fix log calls after merge

* Fix missing static

* Add back all the new lines in the logging strings

* Add comment for llama_log_callback and replace remaining printf calls

---------

Co-authored-by: grahameth <->
Co-authored-by: Helmut <helmut.buhler@inf.h-brs.de>
master-ea04a4c
2023-08-09 22:46:40 +02:00
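The logging hooks described in the entry above surface as a small public API. A hedged sketch of registering a custom callback follows; it assumes the `llama_log_set` entry point and `llama_log_level` enum as introduced by #2234 (later versions of the header rename the level enum).

```cpp
#include <cstdio>
#include "llama.h"

// Custom sink: route llama.cpp log lines to stderr with a level prefix.
// Assumes the llama_log_set / llama_log_callback API added in #2234.
static void my_log_callback(enum llama_log_level level, const char * text, void * user_data) {
    (void) user_data; // unused in this sketch
    const char * tag = (level == LLAMA_LOG_LEVEL_ERROR) ? "ERR"
                     : (level == LLAMA_LOG_LEVEL_WARN)  ? "WRN" : "INF";
    // The library keeps newlines in its log strings, so no extra '\n' here.
    fprintf(stderr, "[%s] %s", tag, text);
}

int main() {
    llama_log_set(my_log_callback, /*user_data=*/nullptr);
    // ... load the model / create a context as usual; library output now goes
    // through my_log_callback instead of raw fprintf(stderr, ...) calls.
    return 0;
}
```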
25d43e0eb5 CUDA: tuned mul_mat_q kernels (#2546) master-25d43e0 2023-08-09 09:42:34 +02:00
f5bfea0580 Allow passing grammar to completion endpoint (#2532)
* Allow passing grammar to completion endpoint
master-f5bfea0
2023-08-08 16:29:19 +03:00
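With the change above, the example server's `/completion` endpoint accepts a `grammar` field containing a GBNF grammar that constrains sampling. A hedged example of a request body follows, written as a C++ raw string to match the rest of the codebase; the grammar itself is illustrative.

```cpp
// Illustrative request body for POST /completion on the example server.
// The "grammar" field (added in #2532) holds a GBNF grammar; this tiny
// grammar restricts the reply to "yes" or "no".
static const char * completion_request = R"json({
    "prompt": "Is the sky blue? Answer:",
    "n_predict": 4,
    "grammar": "root ::= \"yes\" | \"no\""
})json";
```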
acfc5478ff CUDA: tighter VRAM scratch size for 65b/70b (#2551) master-acfc547 2023-08-08 14:38:16 +02:00
7ed8d1fe7f llm.vim : multiline autocompletion, get rid of "^@" (#2543) 2023-08-08 15:07:02 +03:00
e7f94d6fdc vim : bring back simple llm.vim example 2023-08-08 15:06:18 +03:00
2d7baaf50f vim : streaming and more (#2495)
* Update Vim plugin

* Remove getbufoneline usage, Add input bind example.

getbufoneline() appears to be a recently added function and has been
replaced with getbufline for compatibility.

An additional example that explains how to add a keybind that works in
insert mode was added.
2023-08-08 14:44:48 +03:00
f3c3b4b167 Add --rope-scale parameter (#2544)
* common.cpp : Add --rope-scale parameter
* README.md : Add info about using linear rope scaling
master-f3c3b4b
2023-08-07 19:07:19 +02:00
93356bdb7a ggml : mul mat tweaks (#2372)
* ggml : mul mat wip

ggml-ci

* ggml : alternative thread distribution for mul_mat

ggml-ci

* ggml : mul_mat block tiling attempt

* ggml : mul_mat threads yield

ggml-ci
master-93356bd
2023-08-07 14:25:58 +03:00
60baff7c85 ggml : pad result of ggml_nbytes() master-60baff7 2023-08-07 14:24:42 +03:00
9082b5dfbf ggml : change params pointer (style change) (#2539)
ggml-ci
master-9082b5d
2023-08-07 13:55:18 +03:00
99d29c0094 ggml : sync (custom ops) (#2537)
ggml-ci
master-99d29c0
2023-08-07 13:20:09 +03:00
3d9a551816 Fixed mmap prefetch for GPU offloading (#2529) master-3d9a551 2023-08-07 10:09:40 +02:00
f6f9896ac3 metal : fix out-of-bounds access + inc concurrency nodes (#2416)
* metal : fix out-of-bounds access + style changes

* metal : increase concurrency nodes to 2*GGML_MAX_NODES
2023-08-07 10:52:57 +03:00
34a14b28ff [Makefile] Move ARM CFLAGS before compilation (#2536) master-34a14b2 2023-08-07 09:21:46 +03:00
7297128db8 [Zig] Rewrite build for Zig 0.11 (#2514)
* zig build fixes

* Disable LTO on Windows.
2023-08-07 08:35:53 +03:00
86c3219895 console : fix issue related to Windows 11 PowerShell console mode persistence (#2521) master-86c3219 2023-08-06 09:49:34 +03:00
2e8265ae17 convert.py : add missing abstract methods for quantized data (#2491) 2023-08-06 09:34:05 +03:00
f514d1b306 CUDA: faster k-quant mul_mat_q kernels (#2525) master-f514d1b 2023-08-05 18:20:44 +02:00
332311234a fix firefox autoscroll (#2519) master-3323112 2023-08-04 22:16:11 +02:00
182af739c4 server: regenerate completion.js.hpp (#2515) master-182af73 2023-08-04 21:00:57 +02:00
4329d1acb0 CUDA: use min compute capability of GPUs actually used (#2506) master-4329d1a 2023-08-04 17:35:22 +02:00
02f9d96a86 CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)
Fixes #2503
master-02f9d96
2023-08-04 17:34:32 +02:00
3498588e0f Add --simple-io option for subprocesses and break out console.h and cpp (#1558) master-3498588 2023-08-04 08:20:12 -07:00
5f631c2679 Fixing race condition in server and partial stream handling in frontend. (#2391)
* Fixing race condition in server.cpp and partial stream handling in completion.js

* Reverting assert edits.

* Adding newline to eof
master-5f631c2
2023-08-04 13:37:24 +02:00
415e99fec2 Stream save llama context data to file instead of allocating entire buffer upfront (#2488)
* added stream saving context data to file to avoid allocating unnecessary amounts of memory

* generalised copying state data to file or buffer

* added comments explaining how copy_state_data works

* fixed trailing whitespaces

* fixed save load state example

* updated save load state to use public function in llama.cpp

* - restored breakage of the llama_copy_state_data API
- moved new logic for copying llama state data to internal function

* fixed function declaration order

* restored save load state example

* fixed whitespace

* removed unused llama-util.h include

* Apply suggestions from code review

Co-authored-by: slaren <slarengh@gmail.com>

* Apply code review suggestions

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
master-415e99f
2023-08-04 13:29:52 +02:00
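The PR above keeps the public `llama_get_state_size` / `llama_copy_state_data` API and moves the streaming logic into an internal function used by the session-file path; the save-load-state example was restored to the public API. A hedged sketch of that caller-side usage (error handling trimmed):

```cpp
#include <cstdio>
#include <vector>
#include "llama.h"

// Hedged sketch: dump a context's state to disk via the public API kept by
// this PR. The streaming change is internal to the library; this caller-side
// pattern still stages the state in a buffer it owns.
static bool save_state_to_file(llama_context * ctx, const char * path) {
    const size_t state_size = llama_get_state_size(ctx);

    std::vector<uint8_t> buf(state_size);
    const size_t written = llama_copy_state_data(ctx, buf.data());

    FILE * f = fopen(path, "wb");
    if (!f) return false;
    const bool ok = fwrite(buf.data(), 1, written, f) == written;
    fclose(f);
    return ok;
}
```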
ff966e7ca6 build : fix several cast and printf warnings (#2499) master-ff966e7 2023-08-04 13:07:21 +03:00
8183159cf3 examples : generate JSON according to schema (#1887)
* examples : add JSON schema grammars

* complete JSON grammar

* ensure primitive types can be used as root of schema

* support integer type and adjust usage text
2023-08-02 22:05:44 -04:00
468ea24fb4 CUDA: faster non k-quant mul_mat_q kernels (#2483) master-468ea24 2023-08-02 18:04:04 +02:00
4f6b60c776 CUDA: Fix models with output size != 32000 (#2480) master-4f6b60c 2023-08-02 16:48:10 +02:00
220d931864 readme : add Aquila-7B model series to supported models (#2487)
* support bpe tokenizer in convert

Signed-off-by: ldwang <ftgreat@gmail.com>

* support bpe tokenizer in convert

Signed-off-by: ldwang <ftgreat@gmail.com>

* support bpe tokenizer in convert, fix

Signed-off-by: ldwang <ftgreat@gmail.com>

* Add Aquila-7B models in README.md

Signed-off-by: ldwang <ftgreat@gmail.com>

* Up Aquila-7B models in README.md

Signed-off-by: ldwang <ftgreat@gmail.com>

---------

Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02 11:21:11 +03:00
Eve 81844fbcfd tests : Fix compilation warnings (Linux/GCC) (#2451)
* fix hellaswag print format, cast away warning in test-double-float

* c++11 cannot use designated initializers

* add static to test-grad0.c internal functions

* use memcpy in test-double-float.c

* port c tests to c++

* use initializer list for ggml_init_params
master-81844fb
2023-08-02 11:06:19 +03:00
a312193e18 readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)
* add support for chinese llama-2 / alpaca-2

* remove white spaces
2023-08-02 09:18:31 +03:00
c574bddb36 fix a typo in examples/server/README.md (#2478) 2023-08-01 14:54:28 +02:00
86aeb27734 server : Support dark mode (#2414)
* server : Support dark mode

So it respects user system light / dark settings.

* Update index.html.hpp by running ./deps.sh
master-86aeb27
2023-08-01 10:56:23 +02:00
1873ff586b metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)
* Added gqa8 kernel to allow llama-2-70B on metal

* Update ggml-metal.m

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>

* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast

* Added ne03==ne13 assertion

---------

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-08-01 10:43:12 +03:00
49e7cb5bb1 CUDA: fixed LLAMA_FAST compilation option (#2473) master-49e7cb5 2023-07-31 21:02:19 +02:00
b772bba42e CUDA: fixed cmake F16 option (#2471) master-b772bba 2023-07-31 19:52:22 +02:00
0728c5a8b9 CUDA: mmq CLI option, fixed mmq build issues (#2453) master-0728c5a 2023-07-31 15:44:35 +02:00
1215ed7d5c CUDA: Implemented row flattening for non-glm RoPE (#2468) master-1215ed7 2023-07-31 14:32:30 +02:00
2dbf518911 CUDA: fewer memory bank conflicts for mul_mat_q (#2458) master-2dbf518 2023-07-31 13:18:51 +02:00
9d2382b3e4 Fix Metal backend broken from the allocator changes (#2455)
* fix Metal backend broken from the allocator changes
master-9d2382b
2023-07-31 11:02:53 +02:00
a113689571 ggml : add graph tensor allocator (#2411)
* ggml : add graph tensor allocator

* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset

* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
master-a113689
2023-07-30 15:58:01 +02:00