Commit Graph

1545 Commits

Author SHA1 Message Date
40a34fe8d0 speculative : fix prompt tokenization in speculative example (#4025)
* Support special tokens and not adding BOS to prompt in speculative

* Adapt to new should_add_bos function

* Ensure tgt and dft have same add_bos setting
b1545
2023-11-20 11:50:04 +02:00
dae06c06e5 Revert "finetune : add --n-gpu-layers flag info to --help (#4128)"
This reverts commit 05e8301e45.
b1544
2023-11-19 19:16:07 +02:00
05e8301e45 finetune : add --n-gpu-layers flag info to --help (#4128) b1543 2023-11-19 18:56:38 +02:00
936c79b227 server : relay error messages (#4131) b1542 2023-11-19 18:54:10 +02:00
262005ad9d common : comma should be semicolon (#4137) b1541 2023-11-19 18:52:57 +02:00
35985acffa gitignore : tokenize 2023-11-19 18:50:49 +02:00
e937066420 gguf-py : export chat templates (#4125)
* gguf-py : export chat templates

* llama.cpp : escape new lines in gguf kv info prints

* gguf-py : bump version

* gguf-py : check chat_template type

* gguf-py : initialize chat_template
b1539
2023-11-19 11:10:52 +01:00
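A chat template exported this way ends up as an ordinary GGUF string value, so it can be read back on the C side with the `gguf_*` helpers. A small sketch, assuming the gguf API declared in ggml.h at this revision and the `tokenizer.chat_template` key name written by gguf-py:

```cpp
#include <cstdio>
#include "ggml.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    // only read metadata, do not allocate tensor data
    struct gguf_init_params params = { /*.no_alloc =*/ true, /*.ctx =*/ NULL };
    struct gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", argv[1]);
        return 1;
    }

    const int key_id = gguf_find_key(ctx, "tokenizer.chat_template");
    if (key_id >= 0) {
        printf("chat template:\n%s\n", gguf_get_val_str(ctx, key_id));
    } else {
        printf("no chat template stored in this model\n");
    }

    gguf_free(ctx);
    return 0;
}
```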
28a2e6e7d4 tokenize example: Respect normal add BOS token behavior (#4126)
Allow building with Makefile
b1538
2023-11-18 14:48:17 -07:00
0b5c3b0457 scripts : Remove missed baichuan convert script (#4127) 2023-11-18 21:08:33 +01:00
2923f17f6f Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (#4124)
* ggml-cuda.cu: Clean up warnings when compiling with clang

* ggml-cuda.cu: Move static items into anonymous namespace

* ggml-cuda.cu: Fix use of namespace start macro

* Revert "ggml-cuda.cu: Fix use of namespace start macro"

This reverts commit 26c1149026.

* Revert "ggml-cuda.cu: Move static items into anonymous namespace"

This reverts commit e29757e0f7.
b1536
2023-11-18 08:11:18 -07:00
bbecf3f415 llama : increase max nodes (#4115) b1535 2023-11-17 21:39:11 +02:00
8e9361089d build : support ppc64le build for make and CMake (#3963)
* build: support ppc64le build for make and CMake

* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b1534
2023-11-17 18:11:23 +02:00
5ad387e994 tokenize : fix trailing whitespace b1533 2023-11-17 18:01:38 +02:00
2fa02b4b3d examples : add tokenize (#4039) b1532 2023-11-17 17:36:44 +02:00
2ab0707acb convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (#4089)
Co-authored-by: Don Mahurin <@>
2023-11-17 17:32:34 +02:00
11173c92d6 py : Falcon HF compatibility (#4104)
Falcon HF compatibility
2023-11-17 17:24:30 +02:00
9e87ef60e1 common : improve yaml log escaping (#4080)
* logging: improve escaping in yaml output

* logging: include review feedback
b1529
2023-11-17 17:24:07 +02:00
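The gist of the logging change above is escaping quotes and control characters before embedding strings in the YAML log output. A minimal C++ illustration of that kind of escaping; the function name and covered character set here are illustrative, not the actual common/log implementation:

```cpp
#include <cstdio>
#include <string>

// Illustrative only: wrap a string as a double-quoted YAML scalar,
// escaping the characters that would otherwise break the document.
static std::string yaml_escape(const std::string & s) {
    std::string out = "\"";
    for (char c : s) {
        switch (c) {
            case '\\': out += "\\\\"; break;
            case '"':  out += "\\\""; break;
            case '\n': out += "\\n";  break;
            case '\r': out += "\\r";  break;
            case '\t': out += "\\t";  break;
            default:   out += c;      break;
        }
    }
    out += "\"";
    return out;
}

int main() {
    printf("prompt: %s\n", yaml_escape("multi-line\n\"quoted\" text").c_str());
    return 0;
}
```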
c7cce1246e llava : fix compilation warning that fread return value is not used (#4069) b1528 2023-11-17 17:22:56 +02:00
f7d5e97542 py : remove superfluous import statements (#4076)
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
2023-11-17 17:20:53 +02:00
ba4cf5c0bf train : move number of gpu layers argument parsing to common/train.cpp (#4074)
- introduces help entry for the argument
- cuts '--gpu-layers' form in order to simplify usage and documentation.

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Jiri Podivin <jpodivin@redhat.com>
b1526
2023-11-17 17:19:16 +02:00
e85bb1a8e7 llama : add functions to get the model's metadata (#4013)
* llama : add functions to get the model's metadata

* format -> std::to_string

* better documentation
b1525
2023-11-17 17:17:37 +02:00
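The new getters expose the model's GGUF key/value pairs through the C API. A rough usage sketch, assuming the declarations added in #4013 (`llama_model_meta_count`, `llama_model_meta_key_by_index`, `llama_model_meta_val_str_by_index`); the loading boilerplate and exact return types may differ slightly at this revision:

```cpp
#include <cstdio>
#include "llama.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init(false); // NUMA flag; signature assumed for this revision

    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // iterate all GGUF metadata key/value pairs stored in the model
    char key[256];
    char val[2048];
    const int n = llama_model_meta_count(model);
    for (int i = 0; i < n; i++) {
        if (llama_model_meta_key_by_index(model, i, key, sizeof(key)) >= 0 &&
            llama_model_meta_val_str_by_index(model, i, val, sizeof(val)) >= 0) {
            printf("%s = %s\n", key, val);
        }
    }

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```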
3e916a07ac finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (#4079)
* Remove logically superfluous assertions and order by dimension

* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()

* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace

* Add openBLAS support for sgemm() in compute_forward_out_prod()
b1524
2023-11-17 16:48:19 +02:00
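The speed-up above comes from replacing the hand-rolled accumulation loops in `ggml_compute_forward_out_prod_f32` with a single sgemm call. A minimal sketch of the idea using CBLAS; mapping this to ggml's actual tensor layout and strides is not shown and is left as an assumption:

```cpp
#include <cstdio>
#include <vector>
#include <cblas.h> // OpenBLAS or any other CBLAS implementation

int main() {
    // C (m x n) += A (m x k) * B^T, with B stored as (n x k), all row-major
    const int m = 2, n = 3, k = 4;
    std::vector<float> A(m * k, 1.0f);
    std::vector<float> B(n * k, 2.0f);
    std::vector<float> C(m * n, 0.0f);

    // one sgemm call replaces the nested per-element loops:
    //   C[i][j] += sum_k A[i][k] * B[j][k]
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                m, n, k,
                1.0f, A.data(), k,
                      B.data(), k,
                1.0f, C.data(), n);

    printf("C[0][0] = %f\n", C[0]); // 1*2 accumulated over k=4 -> 8
    return 0;
}
```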
947f64f163 finetune : zero the loraB initial vectors (#4082)
* finetune : zero the loraB initial vectors

Without this, the first iteration is starting out far from the base model, instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper in using a different distribution for the other vector, in some cases).

* tabs to spaces

* Use ggml_set_zero instead of adding a new function
b1523
2023-11-17 11:23:11 +01:00
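The point of zeroing loraB is that the effective weight is W + loraB·loraA, so a zero B makes training start exactly on the base model and only the gradient updates move it away. A tiny plain C++ illustration of that arithmetic; the actual change simply clears the tensor with `ggml_set_zero` in the finetune example:

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 4, r = 2; // weight is n x n, LoRA rank r

    std::vector<float> W(n * n, 0.5f);      // base weight
    std::vector<float> loraA(r * n, 0.3f);  // A: r x n, randomly initialized in practice
    std::vector<float> loraB(n * r, 0.0f);  // B: n x r, zero-initialized (the fix)

    // effective weight: W_eff = W + B * A
    std::vector<float> W_eff(n * n);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            float delta = 0.0f;
            for (int k = 0; k < r; k++) {
                delta += loraB[i * r + k] * loraA[k * n + j];
            }
            W_eff[i * n + j] = W[i * n + j] + delta;
        }
    }

    // with loraB == 0 the delta is 0, so the first iteration starts exactly on the base model
    printf("W_eff[0] = %f (base W[0] = %f)\n", W_eff[0], W[0]);
    return 0;
}
```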
b83e149ec6 cuda : get_row_rounding F32 (#4095)
* Fix #4017

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
b1522
2023-11-17 10:01:15 +02:00
4f447a4833 llama : fix data units (#4101)
* llama : fix data units

ggml-ci

* Revert "llama : fix data units"

This reverts commit f5feac831f.

* llama : disambiguate data units

ggml-ci
b1521
2023-11-17 10:00:15 +02:00
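The ambiguity being resolved above is between decimal and binary size units in the logs. A one-line illustration of why the distinction matters for model-sized buffers:

```cpp
#include <cstdio>

int main() {
    const double bytes = 7e9; // roughly a 7 GB model file
    printf("%.2f GB  (decimal, 10^9 bytes)\n", bytes / 1e9);
    printf("%.2f GiB (binary, 2^30 bytes)\n", bytes / (1024.0 * 1024.0 * 1024.0)); // ~6.52 GiB
    return 0;
}
```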
91f6499393 Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040)
* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.

* Respect add_bos_token GGUF metadata value

* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time
b1520
2023-11-16 19:14:37 -07:00
8da46278e1 gguf : fix potential infinite loops while parsing (#4100)
Co-authored-by: Bernhard Gstrein <gstrein@cs.uni-freiburg.de>
b1519
2023-11-16 17:01:48 +02:00
a6fc554e26 llama : restore prefix space in llama tokenizer (#4081) b1518 2023-11-15 11:34:47 -05:00
1cf2850d52 ggml-cuda : increase max graph size (#4084) b1517 2023-11-15 14:58:13 +02:00
6bb4908a17 Fix MacOS Sonoma model quantization (#4052)
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b1516
2023-11-14 12:34:41 -05:00
36eed0c42c stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
b1515
2023-11-14 11:17:12 +01:00
b46d12f86d convert.py: also look for plain model.safetensors (#4043)
* add safetensors to convert.py help message

* Check for single-file safetensors model

* Update convert.py "model" option help message

* revert convert.py help message change
2023-11-13 18:03:40 -07:00
bd90eca237 llava : fix regression for square images in #3613 (#4056) b1513 2023-11-13 18:20:52 +03:00
3d68f364f1 ggml : sync (im2col, GPU conv, 32-bit arm compat) (#4060)
ggml-ci
b1512
2023-11-13 16:55:52 +02:00
c049b37d7b readme : update hot topics 2023-11-13 14:18:08 +02:00
4760e7cc0b sync : ggml (backend v2) (#3912)
* sync : ggml (backend v2) (wip)

* sync : migrate examples and llama.cpp to dynamic graphs (wip)

* sync : update tests + fix max op params to 64

ggml-ci

* sync : ggml-cuda

ggml-ci

* llama : fix save/load state context size

ggml-ci

* sync : try to fix build on tvOS

* sync : pass custom graph sizes in training examples

* sync : update graph copies to new ggml API

* sync : update sync-ggml.sh with new files

* scripts : fix header in sync script

* train : fix context size calculations

* llama : increase inference graph size up to 4096 nodes

* train : allocate grads for backward graphs

* train : allocate grads for gb_tmp
b1510
2023-11-13 14:16:23 +02:00
bb50a792ec Add ReLU and SQR CUDA ops to (partially) fix Persimmon offloading (#4041)
* Add ReLU and SQR CUDA ops to fix Persimmon offloading

* Persimmon loader: More helpful error on CUDA/ROCM when offloading too many layers
b1509
2023-11-13 01:58:15 -07:00
21fd874c8d gguf-py: gguf_writer: Use bytearray to build metadata (#4051)
* gguf-py: gguf_writer: Use BytesIO to build metadata

* Use bytearray instead

Bump gguf-py package version
2023-11-12 16:39:37 -07:00
532dd74e38 Fix some documentation typos/grammar mistakes (#4032)
* typos

* Update examples/parallel/README.md

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>

---------

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
2023-11-11 23:04:58 -07:00
e86fc56f75 Fix gguf-convert-endian script (#4037)
* Fix gguf-convert-endian script

* Bump version and update description
2023-11-11 08:35:31 -07:00
d96ca7ded7 server : fix crash when prompt exceeds context size (#3996) b1505 2023-11-10 23:48:21 -06:00
34b0a08207 gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)
* gguf-py: Refactor and add file reading support

* Replay changes from #3871

Credit to @cebtenzzre for that pull

* Various type annotation fixes.

* sort imports with isort (again)

* Fix missing return statement in add_tensor

* style cleanup with flake8

* fix NamedTuple and Enum usage

* Fix an issue with state init in GGUFReader

Move examples to an examples/ directory

Clean up examples

Add an example of modifying keys in a GGUF file

Update documentation with info on examples

Try to support people importing gguf/gguf.py directly

* Damagage is not a word.

* Clean up gguf-py/examples/modify_gguf.py whitespace

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update gguf-py/examples/modify_gguf.py formatting

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update gguf-py/gguf/gguf_reader.py type hint

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Make examples executable, formatting changes

* Add more information to GGUFReader and examples comments

* Include a gguf Python package version bump

* Add convert-gguf-endian.py script

* cleanup

* gguf-py : bump minor version

* Reorganize scripts

* Make GGUFReader endian detection less arbitrary

* Add JSON dumping support to gguf-dump.py

Which I kind of regret now

* A few gguf-dump.py cleanups


* Murder accidental tuple in gguf-py/scripts/gguf-dump.py

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* cleanup

* constants : remove unneeded type annotations

* fix python 3.8 compat

* Set up gguf- scripts in pyproject.toml

* And include scripts/__init__.py, derp

* convert.py: We can't currently support Q8_0 on big endian.

* gguf-py: SpecialVocab: Always try available sources for special token ids

gguf-py: SpecialVocab: Try to load merges from merges.txt if not in tokenizer.json

gguf-py: SpecialVocab: Add 'add_bos_token' type bools to GGUF metadata

* cleanup

* Promote add_X_token to GGUF metadata for BOS and EOS

---------

Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2023-11-11 08:04:50 +03:00
4a4fd3eefa server : allow continue edit on completion mode (#3950)
* server : allow continue edit on completion mode

* server : handle abort case in runCompletion

* server : style improvement
b1503
2023-11-10 16:49:33 -06:00
df9d1293de Unbreak persimmon after #3837 (#4010) b1502 2023-11-10 14:24:54 +01:00
a75fa576ab scripts: Generalize convert scripts (#3838)
* Replace convert-*-hf-to-gguf.py files with convert-hf-to-gguf.py
2023-11-09 11:09:29 +01:00
57ad015dc3 server : add min_p param (#3877)
* Update server.cpp with min_p after it was introduced in https://github.com/ggerganov/llama.cpp/pull/3841

* Use spaces instead of tabs

* Update index.html.hpp after running deps.sh

* Fix test - fix line ending
b1500
2023-11-08 20:00:34 -06:00
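min_p keeps only the tokens whose probability is at least `min_p` times that of the most likely token. A self-contained C++ sketch of that filtering rule; this is illustrative, not the server's sampler code:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// zero out tokens whose probability is below min_p * p_max, then renormalize
static std::vector<float> min_p_filter(std::vector<float> probs, float min_p) {
    const float p_max = *std::max_element(probs.begin(), probs.end());
    float sum = 0.0f;
    for (float & p : probs) {
        if (p < min_p * p_max) {
            p = 0.0f;
        }
        sum += p;
    }
    for (float & p : probs) {
        p /= sum; // renormalize the surviving tokens
    }
    return probs;
}

int main() {
    std::vector<float> probs = {0.50f, 0.30f, 0.15f, 0.05f};
    // with min_p = 0.2 the threshold is 0.1, so only the 0.05 token is dropped
    for (float p : min_p_filter(probs, 0.2f)) {
        printf("%.3f ", p);
    }
    printf("\n");
    return 0;
}
```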
875fb42871 ggml-alloc : fix backend assignments of views (#3982) b1499 2023-11-08 13:15:14 +01:00
0a7c980b6f gguf : track writer state, free unneeded tensors, cleanup (#3871) 2023-11-07 12:43:04 -05:00
413503d4b9 make : do not add linker flags when compiling static llava lib (#3977) b1497 2023-11-07 20:25:32 +03:00
e9c1cecb9d ggml : fix backward rope after YaRN (#3974)
* fix backward process of rope

rope backward process was broken after YaRN RoPE (#2268) implementation, due to missing changes in backward functions.

the code for the backward process is nearly identical to the forward process:
the only difference is the sign of the sin-values.

to avoid future regressions remove the near-duplicate backward functions and reuse the forward code:

for this a new function argument `bool forward` was added to `ggml_compute_forward_rope_f32` and `ggml_compute_forward_rope_f16`.
the sin-values will be negated when forward is false.

* fix finetune rope call to use correct default attn_factor of 1.0f

* remove unused `ggml_rope_xpos_back`

it is better to have only one `ggml_rope_back` function that accepts all rope parameters, so that `ggml_compute_backward` can propagate all parameters without having to switch between different rope_back variants.

* fix comments explaining the sine sign in ggml_forward_rope

* add missing function arguments in declaration

* fix function argument type in declaration
b1496
2023-11-07 10:04:51 +02:00
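As the commit message notes, the backward rotation differs from the forward one only in the sign of the sine term, which is why a single implementation with a `bool forward` flag can serve both directions. A stripped-down C++ sketch of that idea; the actual ggml_compute_forward_rope_f32 also handles strides, frequency scaling, YaRN, etc.:

```cpp
#include <cmath>
#include <cstdio>

// rotate one (x0, x1) pair by theta; the backward direction reuses the same code
// with the sine negated, i.e. it applies the inverse rotation
static void rope_pair(float * x0, float * x1, float theta, bool forward) {
    const float cos_t = cosf(theta);
    const float sin_t = forward ? sinf(theta) : -sinf(theta);
    const float v0 = *x0;
    const float v1 = *x1;
    *x0 = v0 * cos_t - v1 * sin_t;
    *x1 = v0 * sin_t + v1 * cos_t;
}

int main() {
    float x0 = 1.0f, x1 = 2.0f;
    rope_pair(&x0, &x1, 0.7f, /*forward =*/ true);   // apply the rotation
    rope_pair(&x0, &x1, 0.7f, /*forward =*/ false);  // negated sine undoes it
    printf("recovered: %f %f\n", x0, x1);            // ~1.0 ~2.0
    return 0;
}
```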