Commit Graph

427 Commits

Author SHA1 Message Date
c4fe84fb0d llama : refactor get / set state + remove redundant kv cache API (#1143) master-c4fe84f 2023-04-24 07:40:02 +03:00
1d78fecdab Fix LoRA acronym (#1145) 2023-04-23 23:03:44 +02:00
284685f169 scripts : add helper scripts to sync ggml repo 2023-04-23 19:57:09 +03:00
edce63baa9 Added README.md for main with examples and explanations (#1139) 2023-04-23 15:37:02 +00:00
ec9cdb6752 ggml : do not print perf ops that have not been used at all master-ec9cdb6 2023-04-23 18:32:52 +03:00
e4422e299c ggml : better PERF prints + support "LLAMA_PERF=1 make" master-e4422e2 2023-04-23 18:15:39 +03:00
53c8434398 Improve AVX2 for vec_dot_q4_3_q8_0 (#1138) master-53c8434 2023-04-23 11:01:03 +00:00
c6524f46eb readme : update gpt4all instructions (#980) 2023-04-23 10:21:26 +02:00
c9e2c26f41 A better packNibbles and mul_sum_i8_pairs_float implementation using AVX512 (#1119) master-c9e2c26 2023-04-23 07:57:05 +00:00
0e018fe008 ggml : fix Q4_3 cuBLAS master-0e018fe 2023-04-22 16:32:07 +03:00
857308d1e8 ci : trigger CI for drafts, but not most PR actions (#1125) master-857308d 2023-04-22 16:12:29 +03:00
c50b628810 Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) master-c50b628 2023-04-22 10:54:13 +00:00
5f939498d5 ggml : unit test for quantization functions (#953)
* Unit test for quantization functions

Use the ggml_internal_get_quantize_fn function to loop through all
quantization formats and run a sanity check on the result.

Also add a microbenchmark that times these functions directly without
running the rest of the GGML graph.

* test-quantize-fns: CI fixes

Fix issues uncovered in CI
 - need to use sizes divisible by 32*8 for loop unrolling
 - use intrinsic header that should work on Mac

* test-quantize: remove

Per PR comment, subsumed by test-quantize-fns

* test-quantize: fix for q8_0 intermediates
2023-04-22 12:10:39 +03:00
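A minimal sketch of the round-trip sanity check described above, with a toy 8-bit block quantizer standing in for the real formats so it runs on its own; the actual test iterates formats via ggml_internal_get_quantize_fn, and the function-table layout shown here is an assumption:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical per-format function table; the real struct returned by
// ggml_internal_get_quantize_fn differs in layout and member names.
struct quantize_fns {
    void (*quantize_row)(const float * src, void * dst, int n);
    void (*dequantize_row)(const void * src, float * dst, int n);
};

// Toy 8-bit block quantizer (one float scale per 32 values), standing in
// for the real ggml formats so this harness is self-contained.
static void toy_quantize(const float * src, void * dst, int n) {
    uint8_t * out = (uint8_t *) dst;
    for (int b = 0; b < n/32; ++b) {
        float amax = 0.0f;
        for (int i = 0; i < 32; ++i) amax = fmaxf(amax, fabsf(src[b*32 + i]));
        const float d = amax > 0.0f ? amax/127.0f : 1.0f;
        memcpy(out, &d, sizeof(d)); out += sizeof(d);
        for (int i = 0; i < 32; ++i) *out++ = (uint8_t)(int8_t) roundf(src[b*32 + i]/d);
    }
}

static void toy_dequantize(const void * src, float * dst, int n) {
    const uint8_t * in = (const uint8_t *) src;
    for (int b = 0; b < n/32; ++b) {
        float d; memcpy(&d, in, sizeof(d)); in += sizeof(d);
        for (int i = 0; i < 32; ++i) dst[b*32 + i] = (int8_t) *in++ * d;
    }
}

int main() {
    const int n = 32*8; // sizes divisible by 32*8, per the CI fix above
    std::vector<float> src(n), dst(n);
    std::vector<uint8_t> buf(n + sizeof(float)*(n/32)); // quants + per-block scales

    for (int i = 0; i < n; ++i) src[i] = 0.1f + 2.0f*cosf(0.03f*i); // deterministic test data

    const quantize_fns fns = { toy_quantize, toy_dequantize };
    fns.quantize_row(src.data(), buf.data(), n);
    fns.dequantize_row(buf.data(), dst.data(), n);

    double err = 0.0;
    for (int i = 0; i < n; ++i) err += (src[i] - dst[i])*(src[i] - dst[i]);
    printf("round-trip rmse: %g\n", sqrt(err/n)); // sanity check: should stay small
}
```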
36b4f7e064 llama : print timings on ctrl+c exit (#1021)
* print timings on ctrl+c exit

* remove redundant free memory call.

* add global pointer to ctx.
master-36b4f7e
2023-04-22 11:56:35 +03:00
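The pattern this commit describes, in a hedged sketch: a global context pointer plus a SIGINT handler that prints timings before exiting. llama_print_timings and llama_init_from_file are the era's llama.h API; the rest is illustrative scaffolding, not the actual main.cpp:

```cpp
#include <csignal>
#include <cstdlib>
#include "llama.h"

// Global pointer to the context so the signal handler can reach it,
// per the "add global pointer to ctx" step above.
static llama_context * g_ctx = nullptr;

// On Ctrl+C, print the accumulated timings before exiting. Printing from a
// signal handler is not strictly async-signal-safe; this mirrors the
// pragmatic approach an interactive CLI tool can take.
static void sigint_handler(int /*signo*/) {
    if (g_ctx) {
        llama_print_timings(g_ctx);
    }
    _Exit(130);
}

int main() {
    // g_ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin",
    //                              llama_context_default_params());
    std::signal(SIGINT, sigint_handler);
    // ... generation loop ...
}
```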
10f19c1121 llama : have n_batch default to 512 (#1091)
* set default n_batch to 512 when using BLAS

* spacing

* alternate implementation of setting different n_batch for BLAS

* set n_batch to 512 for all cases
master-10f19c1
2023-04-22 11:27:05 +03:00
7e312f165c cmake : fix build under Windows when BUILD_SHARED_LIBS is enabled (#1100)
* Fix build under Windows when BUILD_SHARED_LIBS is enabled

* Make the AVX512 test on Windows build the shared libs
master-7e312f1
2023-04-22 11:18:20 +03:00
872c365a91 ggml : fix AVX build + update to new Q8_0 format master-872c365 2023-04-22 11:08:12 +03:00
955ef9a5d5 ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)
* ggml : prefer vzip to vuzp

This way we always use the same type of instruction across all quantizations

* ggml : alternative Q4_3 implementation using modified Q8_0

* ggml : fix Q4_3 scalar implementation

* ggml : slight improvement of Q4_3 - no need for loop unrolling

* ggml : fix AVX paths for Q8_0 quantization
2023-04-22 10:55:35 +03:00
c5aa5e5777 ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)
* AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring

* finish AVX vectorization of quantize_row_q8_0

* Rename hsum_int_8 to hsum_i32_8
master-c5aa5e5
2023-04-22 10:37:05 +03:00
e9a9cb0c54 examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)
* Moving parameters to separate lines for readability.

* Increasing repeat_penalty to 1.1 to make Alpaca more usable by default.

* Adding trailing newline.
2023-04-22 09:54:33 +03:00
b6e7f9b09e llama : add api for getting/setting the complete state: rng, logits, embedding and kv_cache (#1105)
* reserve correct size for logits

* add functions to get and set the whole llama state:

including rng, logits, embedding and kv_cache

* remove unused variables

* remove trailing whitespace

* fix comment
master-b6e7f9b
2023-04-22 09:21:32 +03:00
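A usage sketch of the API added here, assuming the functions the PR describes (llama_get_state_size, llama_copy_state_data, llama_set_state_data); exact signatures may differ slightly from this era's header:

```cpp
#include <cstdint>
#include <vector>
#include "llama.h"

// Snapshot the complete context state (rng, logits, embedding, kv_cache)
// into one byte buffer, e.g. to rewind or persist a session.
static std::vector<uint8_t> save_state(llama_context * ctx) {
    std::vector<uint8_t> buf(llama_get_state_size(ctx));
    llama_copy_state_data(ctx, buf.data());
    return buf;
}

// Restore a previously captured snapshot. The buffer is taken by non-const
// reference since early versions of the setter took a non-const pointer.
static void restore_state(llama_context * ctx, std::vector<uint8_t> & buf) {
    llama_set_state_data(ctx, buf.data());
}
```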
50cb666b8a Improve cuBLAS performance by using a memory pool (#1094)
* Improve cuBLAS performance by using a memory pool

* Move cuda specific definitions to ggml-cuda.h/cu

* Add CXX flags to nvcc

* Change memory pool synchronization mechanism to a spin lock
General code cleanup
master-50cb666
2023-04-21 21:59:17 +02:00
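The pool idea in a minimal sketch (not the actual ggml-cuda code): a small table of cached device buffers guarded by an atomic spin lock, so repeated GPU mat-mul calls avoid cudaMalloc/cudaFree churn. Names and the sizing policy are illustrative:

```cpp
#include <atomic>
#include <cstddef>
#include <cuda_runtime.h>

// Sketch of a device-memory pool with a spin lock.
struct cuda_buffer { void * ptr = nullptr; size_t size = 0; };

static cuda_buffer g_pool[16];
static std::atomic_flag g_pool_lock = ATOMIC_FLAG_INIT;

static void * pool_malloc(size_t size) {
    while (g_pool_lock.test_and_set(std::memory_order_acquire)) {} // spin
    for (auto & b : g_pool) {
        if (b.ptr && b.size >= size) {        // fast path: reuse a cached buffer
            void * p = b.ptr;
            b.ptr = nullptr;
            g_pool_lock.clear(std::memory_order_release);
            return p;
        }
    }
    g_pool_lock.clear(std::memory_order_release);
    void * p = nullptr;
    cudaMalloc(&p, size);                     // slow path: fresh allocation
    return p;
}

static void pool_free(void * ptr, size_t size) {
    while (g_pool_lock.test_and_set(std::memory_order_acquire)) {} // spin
    for (auto & b : g_pool) {
        if (b.ptr == nullptr) {               // cache the buffer for later reuse
            b.ptr = ptr;
            b.size = size;
            g_pool_lock.clear(std::memory_order_release);
            return;
        }
    }
    g_pool_lock.clear(std::memory_order_release);
    cudaFree(ptr);                            // pool full: release to the driver
}
```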
25d7abbd1f llama : fixed rlimit error message (#888) master-25d7abb 2023-04-21 21:48:06 +03:00
018f2279f5 cmake : link threads publicly to ggml (#1042)
* fix: ld link test-tokenizer-0 error

```
cmake3 --build . --config Release
[  5%] Built target ggml
[ 16%] Built target llama
[ 22%] Linking CXX executable ../bin/test-tokenizer-0
../libllama.a(ggml.c.o): in function 'ggml_graph_compute':
ggml.c:(.text+0xf2db): undefined reference to 'pthread_create'
ggml.c:(.text+0xf9d4): undefined reference to 'pthread_join'
collect2: error: ld returned 1 exit status
gmake[2]: *** [bin/test-tokenizer-0] Error 1
gmake[1]: *** [tests/CMakeFiles/test-tokenizer-0.dir/all] Error 2
gmake: *** [all] Error 2
```

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update CMakeLists.txt
master-018f227
2023-04-21 21:27:06 +03:00
9411288271 main : evaluate tokens in batches after swapping context (#1014)
* examples : evaluate tokens in batches after swapping context

* Update examples/main/main.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-9411288
2023-04-21 21:18:09 +03:00
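What evaluating in batches looks like in a sketch, using the llama_eval signature of this era; the helper name and parameters are illustrative, not the PR's exact code:

```cpp
#include <algorithm>
#include <vector>
#include "llama.h"

// Evaluate a pending token buffer in n_batch-sized chunks instead of one
// oversized llama_eval call, advancing n_past as each chunk completes.
static bool eval_in_batches(llama_context * ctx, const std::vector<llama_token> & embd,
                            int & n_past, int n_batch, int n_threads) {
    for (int i = 0; i < (int) embd.size(); i += n_batch) {
        const int n_eval = std::min(n_batch, (int) embd.size() - i);
        if (llama_eval(ctx, embd.data() + i, n_eval, n_past, n_threads) != 0) {
            return false; // evaluation failed
        }
        n_past += n_eval;
    }
    return true;
}
```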
8687c1f258 llama : remember and restore kv cache data pointers (#1104)
because their values are stored in buf and overwritten by memcpy (see the sketch after this entry)
master-8687c1f
2023-04-21 18:25:21 +03:00
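A sketch of the fix, with illustrative types (not the actual llama.cpp/ggml structs): the kv cache tensors' metadata, including their data pointers, lives inside the state buffer, so a blind memcpy during restore would clobber the live pointers unless they are saved and written back:

```cpp
#include <cstddef>
#include <cstring>

// Illustrative tensor header; the real ggml_tensor has more fields.
struct tensor_hdr { void * data; /* ... other metadata ... */ };

static void restore_kv_buf(char * buf, size_t size, const char * saved,
                           tensor_hdr * k, tensor_hdr * v) {
    void * k_data = k->data;   // remember the live data pointers
    void * v_data = v->data;
    memcpy(buf, saved, size);  // overwrites everything, pointers included
    k->data = k_data;          // write the pointers back
    v->data = v_data;
}
```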
1bfc153e2f ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)
* A faster version for Q4_1 x Q8_0 dot products

The idea behind this change is that the Q8_0-quantized
values get used many times in the matrix multiplications
where they are involved. In the current implementation,
every dot product recomputes the sum of the quants in the
Q8_0 vector, so the same operation is repeated many times.
Here we pre-compute the sum during Q8_0 quantization, store
it in the now modified block_q8_0 struct, and then reuse this
result in the subsequent dot products (see the sketch after
this entry).

In a synthetic benchmark (just compute a bunch of dot
products), this change speeds up the Q4_1 * Q8_0 dot
product by 80%, making the performance identical to
Q4_0 * Q8_0.

In practical application, I see a ~15% gain in speed for
token prediction on M2, and ~5% gain on Ryzen 7950X.
The speed gain in the prompt evaluation is much bigger
(around 50%).

I have only done the change for the scalar version,
ARM_NEON, and AVX2, so we still need an AVX implementation.

* Cleaning up

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-1bfc153
2023-04-21 18:18:26 +03:00
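A scalar sketch of the trick, with illustrative struct fields (the PR's actual modified block_q8_0 may differ): the Q4_1 x Q8_0 dot product splits into d4*d8*sum(q4*q8) + m4*d8*sum(q8), and the second sum is precomputed once at quantization time instead of inside every dot product:

```cpp
#include <cstdint>

constexpr int QK = 32; // block size

struct block_q4_1 { float d; float m; uint8_t qs[QK/2]; };  // scale, min, packed 4-bit quants
struct block_q8_0_sum { float d; float s; int8_t qs[QK]; }; // s = sum of qs[i], precomputed

// dot(x, y) = d4*d8 * sum(q4[i]*q8[i]) + m4*d8 * sum(q8[i])
// The second term reuses the precomputed y[b].s, saving one pass per block.
static float vec_dot_q4_1_q8_0(int nb, const block_q4_1 * x, const block_q8_0_sum * y) {
    float acc = 0.0f;
    for (int b = 0; b < nb; ++b) {
        int sumi = 0;
        for (int i = 0; i < QK/2; ++i) {
            const int q0 = x[b].qs[i] & 0x0F;   // low nibble  -> element i
            const int q1 = x[b].qs[i] >> 4;     // high nibble -> element i + QK/2
            sumi += q0 * y[b].qs[i] + q1 * y[b].qs[i + QK/2];
        }
        acc += x[b].d * y[b].d * sumi + x[b].m * y[b].d * y[b].s;
    }
    return acc;
}
```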
3d59769c3b Show perplexity ETA in hours and minutes (#1096) master-3d59769 2023-04-21 14:57:57 +02:00
d40fded93e llama : fix comment for "output.weight" tensor master-d40fded 2023-04-21 10:24:02 +03:00
2510c1831f Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)
* Add ggml-model-*.bin checksums for 7B, 13B, 30B
* Add ggml-model-*.bin checksums for 65B

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-04-20 23:56:44 +02:00
12b5900dbc ggml : sync ggml (add GPT-NeoX RoPE implementation) master-12b5900 2023-04-20 23:32:59 +03:00
9ff334f3c9 ggml : fix bug in ggml_compute_forward_dup_f32() master-9ff334f 2023-04-20 21:58:38 +03:00
2005469ea1 Add Q4_3 support to cuBLAS (#1086) master-2005469 2023-04-20 20:49:53 +02:00
8a1756abdf ggml : do not break cuBLAS build (Q4_3 is not yet implemented) master-8a1756a 2023-04-20 21:43:50 +03:00
66aab46079 ggml : fix Q4_3 quantization
Broke it during conflict resolution in the last PR
master-66aab46
2023-04-20 20:44:05 +03:00
38de86a711 llama : multi-threaded quantization (#1075)
* Multi-threading quantization.

Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.

* Multi-threading for quantize-stats

It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.

* Reviewer comments

* Avoiding compiler confusion

After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started to warn that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So, I made
it a constexpr to keep every compiler happy (see the sketch
after this entry).

* Still fighting with lambda captures in MSVC

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-38de86a
2023-04-20 20:42:27 +03:00
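A sketch of the capture issue, under assumed names: a constexpr chunk size can be read inside the worker lambda without being captured (its use is a constant expression, not an odr-use), which satisfies clang/GCC's unused-capture warning and MSVC at the same time:

```cpp
#include <atomic>
#include <thread>
#include <vector>

static void quantize_parallel(int nrows, int nthread) {
    constexpr int chunk_size = 512; // constexpr: usable in the lambda without capturing it

    std::atomic<int> counter{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < nthread; ++t) {
        workers.emplace_back([&counter, nrows]() { // note: chunk_size not in the capture list
            for (;;) {
                const int first = counter.fetch_add(chunk_size); // claim the next chunk
                if (first >= nrows) break;
                // ... quantize rows [first, min(first + chunk_size, nrows)) ...
            }
        });
    }
    for (auto & w : workers) {
        w.join();
    }
}
```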
e0305ead3a ggml : add Q4_3 quantization (#1082) master-e0305ea 2023-04-20 20:35:53 +03:00
6a9661ea5a ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the CI (#1074)
[Accelerate](https://developer.apple.com/documentation/accelerate) is an Apple framework which can only be used on macOS, and the CMake build [ignores](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt#L102) the `LLAMA_ACCELERATE` variable when run on non-Apple platforms. This implies setting `LLAMA_ACCELERATE` is a no-op on Ubuntu and can be removed.

This will reduce visual noise in CI check results (in addition to reducing the number of checks we have to run for every PR). Right now every sanitized build is duplicated twice for no good reason (e.g., we have `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, ON)` and `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, OFF)`).
master-6a9661e
2023-04-20 18:15:18 +03:00
5addcb120c fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080) master-5addcb1 2023-04-20 15:28:43 +02:00
c8c2c52482 AVX2 optimization for vec_dot_q4_2_q8_0 (#1068) master-c8c2c52 2023-04-20 08:45:41 +02:00
02d6988121 Improve cuBLAS performance by dequantizing on the GPU (#1065) master-02d6988 2023-04-20 03:14:14 +02:00
834695fe3a Minor: fix grammar, spelling, and misc issues in README (#1071) 2023-04-19 19:52:14 +00:00
f7d05095b4 Q4_2 quantization with rmse-optimized scale and quants (#1062)
* Q4_2 quantization with rmse-optimized scale and quants

For quantize-stats we get
q4_2: rmse 0.00159301, maxerr 0.17480469, 95pct<0.0030, median<0.0012

For 7B perplexity with BLAS enabled we get 6.2038 after 655 chunks.

Quantization is slow (~90 seconds on my Mac for 7B) because it
is not multi-threaded, unlike in PR #896.

* ggml : satisfy the sanitizer builds

Not sure why this makes them fail

* Better follow ggml conventions for function names

* Fixed type as per reviewer comment

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-f7d0509
2023-04-19 20:20:14 +02:00
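An illustrative version of an rmse-optimized scale search (not the PR's exact algorithm): instead of taking the naive amax-derived scale, try a few candidates around it and keep the one that minimizes the round-trip squared error over the block:

```cpp
#include <algorithm>
#include <cmath>

// Pick a 4-bit scale for one block by minimizing reconstruction error.
// Candidate grid and block size are illustrative.
static float best_scale_q4(const float * x, int n /* e.g. 16 for Q4_2 */) {
    float amax = 0.0f;
    for (int i = 0; i < n; ++i) amax = std::max(amax, std::fabs(x[i]));
    if (amax == 0.0f) return 0.0f;

    float best_d = amax/7.0f, best_err = HUGE_VALF;
    for (int k = -4; k <= 4; ++k) {                 // candidates around the naive scale
        const float d = amax/(7.0f + 0.1f*k);
        float err = 0.0f;
        for (int i = 0; i < n; ++i) {
            int q = (int) std::lround(x[i]/d);      // quantize
            q = std::min(7, std::max(-8, q));       // clamp to the 4-bit range
            const float e = x[i] - q*d;             // reconstruction error
            err += e*e;
        }
        if (err < best_err) { best_err = err; best_d = d; }
    }
    return best_d;
}
```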
884e7d7a2b ggml : use 8-bit precision for Q4_1 intermediate results (#1047)
* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)

* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmalq_n_f32

56 ms/token with Q4_1!

* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)

* gitignore : ignore ppl-*.txt files

---------

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
master-884e7d7
2023-04-19 20:10:08 +03:00
7cd5c4a3e9 readme : add warning about Q4_2 and Q4_3 2023-04-19 19:07:54 +03:00
f3d4edf504 ggml : Q4 cleanup - remove 4-bit dot product code (#1061)
* Q4 cleanup

* Remove unused AVX512 Q4_0 code
master-f3d4edf
2023-04-19 19:06:37 +03:00
8944a13296 Add NVIDIA cuBLAS support (#1044) master-8944a13 2023-04-19 11:22:45 +02:00
6667401238 Multi-threaded ggml_cpy (#1035)
* Multi-threaded ggml_cpy

* Update ggml.c

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Also fix wdata offset in ggml_compute_forward_add_q_f32

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-6667401
2023-04-19 00:53:24 +02:00
77a73403ca ggml : add new Q4_2 quantization (ARM only) (#1046)
* ggml : Q4_2 ARM

* ggml : add ggml_is_quantized()

* llama : update llama_type_name() with Q4_2 entry

* ggml : speed-up q4_2

- 4 threads: ~100ms -> ~90ms
- 8 threads:  ~55ms -> ~50ms

* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
master-77a7340
2023-04-18 23:54:57 +03:00
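The micro-optimization named above, in a tiny NEON sketch (ARM only; the helper name is illustrative): fold the per-block scale into the accumulator with one fused multiply-accumulate rather than a separate multiply and add:

```cpp
#include <arm_neon.h>

// acc += prod * scale, in a single vmlaq_n_f32 instruction.
static inline float32x4_t accumulate_scaled(float32x4_t acc, float32x4_t prod, float scale) {
    return vmlaq_n_f32(acc, prod, scale);
    // versus the two-instruction form:
    // return vaddq_f32(acc, vmulq_n_f32(prod, scale));
}
```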
50a8a2af97 ggml : scratch that - vmlaq_n_f32 is always better
Had a background process that was messing with the timings
master-50a8a2a
2023-04-18 23:11:23 +03:00