Commit Graph

4606 Commits

Author SHA1 Message Date
f2d1c47294 cmake should link openblas properly with -lopenblas like how it's done in the makefile (#839) master-f2d1c47 2023-04-08 11:15:17 +00:00
317fb12fbd Add new binaries to flake.nix (#847) 2023-04-08 12:04:23 +02:00
62cfc54f77 Add quantize-stats command for testing quantization (#728)
Command that calculates some statistics over the errors introduced by
quantization, such as mean squared error, max error, and percentile errors for
layer weights. It should be useful for testing quantization improvements.

Exposes some internal state from ggml and llama for testing
master-62cfc54
2023-04-08 00:09:18 +02:00
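For illustration, a minimal C++ sketch of the kind of statistics such a command reports (the names here are illustrative, not the tool's actual API):

```cpp
// Sketch: given original weights and their dequantized counterparts,
// compute RMSE, max error, and a percentile of the absolute error.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct error_stats {
    double rmse;    // root mean squared error
    double max_err; // largest absolute error
    double p95;     // 95th-percentile absolute error
};

error_stats compute_error_stats(const std::vector<float> & orig,
                                const std::vector<float> & dequant) {
    std::vector<double> abs_err(orig.size());
    double sum_sq = 0.0, max_err = 0.0;
    for (size_t i = 0; i < orig.size(); ++i) {
        const double e = std::fabs(orig[i] - dequant[i]);
        abs_err[i] = e;
        sum_sq += e * e;
        max_err = std::max(max_err, e);
    }
    std::sort(abs_err.begin(), abs_err.end());
    return {
        std::sqrt(sum_sq / orig.size()),
        max_err,
        abs_err[(size_t)(0.95 * (abs_err.size() - 1))],
    };
}

int main() {
    const std::vector<float> orig    = {0.10f, -0.52f, 0.33f, 0.80f};
    const std::vector<float> dequant = {0.12f, -0.50f, 0.30f, 0.75f};
    const error_stats s = compute_error_stats(orig, dequant);
    printf("rmse=%g max=%g p95=%g\n", s.rmse, s.max_err, s.p95);
}
```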
698f7b5d63 make : add libllama.so target for llama-cpp-python (#797)
I was able to get llama-cpp-python working, but only when I built libllama.so with make.
master-698f7b5
2023-04-07 19:11:58 +03:00
c1950c3431 zig : don't link examples/common.cpp for non-example (#814) 2023-04-07 19:05:29 +03:00
4953e9007f llama : always sort logits before nucleus sampling (#812)
* Always sort logits before nucleus sampling

* remove second normalization

- fix windows build
- remove normalization since std::discrete_distribution does not require it
master-4953e90
2023-04-07 19:02:12 +03:00
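The key detail is that std::discrete_distribution normalizes its weights internally, so once the candidates are sorted and truncated no second normalization pass is needed. A minimal sketch (variable names illustrative, not the actual llama.cpp code):

```cpp
// Nucleus (top-p) sampling over {probability, token-id} pairs.
#include <algorithm>
#include <random>
#include <utility>
#include <vector>

int sample_top_p(std::vector<std::pair<float, int>> probs,
                 float top_p, std::mt19937 & rng) {
    // sorting descending by probability is required for a correct nucleus cut
    std::sort(probs.begin(), probs.end(),
              [](const auto & a, const auto & b) { return a.first > b.first; });

    // keep the smallest prefix whose cumulative probability reaches top_p
    double cum = 0.0;
    size_t n = probs.size();
    for (size_t i = 0; i < probs.size(); ++i) {
        cum += probs[i].first;
        if (cum >= top_p) { n = i + 1; break; }
    }
    probs.resize(n);

    // no renormalization: discrete_distribution normalizes internally
    std::vector<float> weights(n);
    for (size_t i = 0; i < n; ++i) weights[i] = probs[i].first;
    std::discrete_distribution<size_t> dist(weights.begin(), weights.end());
    return probs[dist(rng)].second;
}
```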
cc9cee8e9e Do not crash when it has nothing to say. (#796)
Otherwise, the following was observed in interactive mode:
/usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
master-cc9cee8
2023-04-06 17:59:11 +02:00
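The class of fix, sketched (the variable name is illustrative): never call back() on a possibly-empty vector, since with assertions enabled libstdc++ aborts exactly as quoted above.

```cpp
#include <vector>

// return the most recent token, or a fallback when nothing has been
// generated yet, instead of calling back() on an empty vector
int last_token_or(const std::vector<int> & last_n_tokens, int fallback) {
    return last_n_tokens.empty() ? fallback : last_n_tokens.back();
}
```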
d2beca95dc Make docker instructions more explicit (#785) 2023-04-06 08:56:58 +02:00
eeaa7b0492 ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781) master-eeaa7b0 2023-04-05 22:11:03 +03:00
986b6ce9f9 ggml, llama : avoid heavy V transpose + improvements (#775)
ggml :

- added ggml_view_3d()
- ggml_view_tensor() now inherits the stride too
- reimplement ggml_cpy() to account for dst stride
- no longer require tensor->data to be memory aligned

llama :

- compute RoPE on 32-bit tensors (should be more accurate)
- store RoPE-ed K in the KV cache
- store transposed V in the KV cache (significant speed-up)
- avoid unnecessary Q copy
master-986b6ce
2023-04-05 22:07:33 +03:00
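A toy illustration of why views must carry strides (this is not ggml's API, just the underlying idea): a "transposed" view reinterprets the same buffer with swapped strides, so a copy routine like ggml_cpy() has to honor the destination stride rather than assume contiguous memory.

```cpp
#include <cstdio>
#include <vector>

struct view2d {
    float * data;
    int ne0, ne1; // extents
    int nb0, nb1; // strides, in elements
    float & at(int i0, int i1) const { return data[i0 * nb0 + i1 * nb1]; }
};

int main() {
    std::vector<float> buf = {1, 2, 3, 4, 5, 6};  // 2 rows x 3 cols, row-major
    const view2d a  = {buf.data(), 3, 2, 1, 3};   // original layout
    const view2d aT = {buf.data(), 2, 3, 3, 1};   // transposed view: strides swapped
    // both expressions read the same element, with no data copied
    printf("a(2,0)=%g aT(0,2)=%g\n", a.at(2, 0), aT.at(0, 2));
}
```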
3416298929 Update README.md 2023-04-05 19:54:30 +03:00
5a8c4f6240 llama : define non-positive top_k; top_k range check (#779)
* Define non-positive top_k; top_k range check

* minor : brackets

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-5a8c4f6
2023-04-05 19:20:05 +03:00
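A minimal sketch of such a range check, assuming (as the commit title implies) that a non-positive top_k falls back to the full vocabulary:

```cpp
#include <algorithm>

// clamp top_k into (0, n_vocab]; non-positive means "no truncation"
int clamp_top_k(int top_k, int n_vocab) {
    return (top_k <= 0) ? n_vocab : std::min(top_k, n_vocab);
}
```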
ff05d05c96 miku.sh : add executable bit (#780) 2023-04-05 18:59:13 +03:00
62b3e81aae media : add logos and banners 2023-04-05 18:58:31 +03:00
8d10406d6e readme : change logo + add bindings + add uis + add wiki 2023-04-05 18:56:20 +03:00
ed1c214e66 zig : add build.zig (#773)
Co-authored-by: Locria Cyber <74560659+locriacyber@users.noreply.github.com>
2023-04-05 18:06:02 +03:00
0c44427df1 make : missing host optimizations in CXXFLAGS (#763) master-0c44427 2023-04-05 17:38:37 +03:00
594cc95fab readme : update with CMake and windows example (#748)
* README: Update with CMake and windows example

* README: update with code-review for cmake build
2023-04-05 17:36:12 +03:00
88ed5761b8 examples : add Miku.sh (#724)
* Add Miku.sh to examples

* Add missing line to prompt in Miku.sh

* Add --keep param to Miku.sh

* Remove '[end_of_conversation]' line from Miku.sh

It is no longer necessary.
2023-04-05 17:32:42 +03:00
58c438cf7d Add Accelerate/BLAS when using Swift (#765) 2023-04-05 06:44:24 -04:00
53dbba7695 Windows: reactivate sigint handler after each Ctrl-C (#736) master-53dbba7 2023-04-03 18:00:55 +02:00
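The quirk behind this fix: with the CRT's signal(), a SIGINT handler on Windows is reset to SIG_DFL once it fires, so the handler must re-register itself to survive a second Ctrl-C. A hedged sketch of the pattern:

```cpp
#include <csignal>

volatile std::sig_atomic_t g_interrupted = 0;

void sigint_handler(int sig) {
#if defined(_WIN32)
    std::signal(sig, sigint_handler); // reactivate for the next Ctrl-C
#endif
    g_interrupted = 1;
}

int main() {
    std::signal(SIGINT, sigint_handler);
    // ... interactive loop polls g_interrupted ...
}
```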
437e77855a 10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)
* Performance improvement of AVX2 code
* Fixed problem with MSVC compiler
* Reviewer comments: removed double semicolon, deleted empty line 1962
master-437e778
2023-04-03 09:52:28 +02:00
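For context, a scalar reference of the operation the AVX2 code vectorizes, assuming the q4_0 block layout of that era (a float scale plus 32 4-bit quants per block, each nibble offset by 8); this is a sketch, not the optimized kernel:

```cpp
#include <cstdint>

#define QK 32 // quants per block

struct block_q4_0 {
    float   d;          // scale factor
    uint8_t qs[QK / 2]; // two 4-bit quants packed per byte
};

float vec_dot_q4_0_ref(int n, const block_q4_0 * x, const block_q4_0 * y) {
    float sum = 0.0f;
    for (int i = 0; i < n / QK; ++i) {
        int isum = 0; // integer dot product within one block
        for (int j = 0; j < QK / 2; ++j) {
            const int x0 = (x[i].qs[j] & 0x0F) - 8, x1 = (x[i].qs[j] >> 4) - 8;
            const int y0 = (y[i].qs[j] & 0x0F) - 8, y1 = (y[i].qs[j] >> 4) - 8;
            isum += x0 * y0 + x1 * y1;
        }
        sum += x[i].d * y[i].d * isum; // rescale to float once per block
    }
    return sum;
}
```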
cd7fa95690 Define non-positive temperature behavior (#720) master-cd7fa95 2023-04-03 02:19:04 +02:00
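The title leaves the chosen semantics implicit; one plausible sketch, assuming a non-positive temperature is defined as greedy (deterministic) sampling:

```cpp
#include <algorithm>
#include <vector>

int sample_token(const std::vector<float> & logits, float temp) {
    if (temp <= 0.0f) {
        // greedy: always pick the highest-logit token
        return (int)(std::max_element(logits.begin(), logits.end()) - logits.begin());
    }
    // ... otherwise scale logits by 1/temp and sample stochastically ...
    return -1; // stochastic path elided in this sketch
}
```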
a0c0516416 Remove torch GPU dependencies from the Docker.full image (#665)
By using `pip install torch --index-url https://download.pytorch.org/whl/cpu`
instead of `pip install torch` we can specify we want to install a CPU-only version
of PyTorch without any GPU dependencies. This reduces the size of the Docker image
from 7.32 GB to 1.62 GB
2023-04-03 00:13:03 +02:00
d8d4e865cd Add a missing step to the gpt4all instructions (#690)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-02 12:48:57 +02:00
e986f94829 Added api for getting/setting the kv_cache (#685)
The API provides access methods for retrieving the current memory buffer of the kv_cache and its token count.
It also contains a method for setting the kv_cache from a memory buffer.

This makes it possible to load/save history - maybe support a --cache-prompt parameter as well?

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
master-e986f94
2023-04-02 12:23:04 +02:00
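A hedged usage sketch of saving and restoring history with such accessors; the exact function names and signatures below are assumptions based on the description above (a buffer pointer, its size, and a token count), not verified API:

```cpp
#include <cstdint>
#include <vector>

#include "llama.h"

static std::vector<uint8_t> saved_cache;
static int saved_n_tokens = 0;

void save_history(llama_context * ctx) {
    const uint8_t * buf = llama_get_kv_cache(ctx);        // assumed accessor
    const size_t    n   = llama_get_kv_cache_size(ctx);   // assumed accessor
    saved_cache.assign(buf, buf + n);
    saved_n_tokens = llama_get_kv_cache_token_count(ctx); // assumed accessor
}

void restore_history(llama_context * ctx) {
    llama_set_kv_cache(ctx, saved_cache.data(),
                       saved_cache.size(), saved_n_tokens); // assumed accessor
}
```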
c0bb1d3ce2 ggml : change ne to int64_t (#626) master-c0bb1d3 2023-04-02 13:21:31 +03:00
6e7801d08d examples : add gpt4all script (#658) 2023-04-02 10:56:20 +03:00
81040f10aa llama : do not allocate KV cache for "vocab_only == true" (#682)
Fixes sanitizer CI
master-81040f1
2023-04-02 10:18:53 +03:00
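The shape of the fix, sketched with illustrative names (not the actual loader code):

```cpp
struct context_params { bool vocab_only; };

static bool kv_cache_init() { return true; } // allocation elided in this sketch

bool init_context(const context_params & params) {
    if (params.vocab_only) {
        return true; // tokenizer-only use: no inference, so no KV cache needed
    }
    return kv_cache_init();
}
```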
c4f89d8d73 make : use -march=native -mtune=native on x86 (#609) master-c4f89d8 2023-04-02 10:17:05 +03:00
5b70e7de4c fix default params for examples/main (#697) master-5b70e7d 2023-04-02 04:41:12 +02:00
a717cba844 py: huggingface -> Hugging Face (#686) 2023-04-01 18:38:18 +02:00
d0a7f742e7 readme: replace termux links with homepage, play store is deprecated (#680) 2023-04-01 16:57:30 +02:00
0d054e292e Show error message when -f fails master-0d054e2 2023-04-01 16:08:40 +02:00
3525899277 Enable -std= for cmake builds, fix warnings (#598) master-3525899 2023-03-31 19:19:16 +00:00
1d08882afa Optimize AVX2 ggml_vec_dot_q4_0 (#642) master-1d08882 2023-03-31 15:55:52 +00:00
02c5b27e91 Add AVX acceleration (#617)
* ggml : add AVX quantize_row_q4_0()

* ggml : add AVX ggml_vec_dot_q4_0()

* ggml : refactor AVX part of ggml_vec_dot_q4_0()

https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
master-02c5b27
2023-03-31 13:55:44 +02:00
cbef542879 py : cleanup the code
- use f-strings where possible
- drop first param of encode/decode functions since "utf-8" is the default
2023-03-31 10:32:01 +02:00
9733104be5 drop quantize.py (now that models are using a single file) 2023-03-31 01:07:32 +02:00
3df890aef4 readme : update supported models 2023-03-30 22:31:54 +03:00
ee0c40dd6d Introduce GGML migration tool for new file format
If you deleted your old Meta LLaMA .pth files, then the
migrate-ggml-2023-03-30-pr613.py script will allow you to convert your
old ggml files into the new mmap()'able format.

See #613
master-ee0c40d
2023-03-30 12:28:25 -07:00
6f23ba5ee2 Ensure --mlock works properly with mmap() support 2023-03-30 12:28:25 -07:00
78ca9838ee Make loading weights 10-100x faster
This is a breaking change that's going to give you three benefits:

1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes

This was accomplished by changing the file format so we can mmap()
weights directly into memory without having to read() or copy them.
This ensures, first, that the kernel can make its file cache pages
directly accessible to our inference processes, and second, that those
file cache pages are much less likely to get evicted (which would force
loads to hit disk), because they're no longer competing with memory
pages that were needlessly created by gigabytes of standard i/o.

The new file format supports single-file models like LLaMA 7B, and
it also supports multi-file models like LLaMA 13B. Our Python tool
now merges the foo.1, foo.2, etc. files back into a single file so
that the C++ code which maps it doesn't need to reshape data every
time. That's made llama.cpp so much simpler. Much of its load code
has now been deleted.

Furthermore, this change ensures that tensors are aligned properly
on a 32-byte boundary. That opens the door to seeing if we can get
additional performance gains on some microprocessors, by using ops
that require memory alignment.

Lastly, note that both POSIX and Windows platforms are supported.

Fixes #91
2023-03-30 12:28:25 -07:00
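A minimal POSIX sketch of the technique described above (not the actual llama.cpp loader, which also has a Windows path): map the file once and hand out tensor pointers straight into the mapping, so the kernel's page cache serves the weights instead of read() plus copies.

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s model.bin\n", argv[0]); return 1; }

    const int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // weights become plain pointers into the mapping: no read(), no copies
    void * addr = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd); // the mapping keeps the file referenced

    // with tensors 32-byte aligned in the new format, pointers at aligned
    // offsets can feed SIMD ops that require alignment
    printf("mapped %lld bytes at %p\n", (long long) st.st_size, addr);

    munmap(addr, st.st_size);
    return 0;
}
```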
a017390358 Initial windows support (untested) 2023-03-30 12:28:25 -07:00
ac184d5147 Always initialize mm_addr and mm_length in llama_model 2023-03-30 12:28:25 -07:00
276e5b7811 Unmap the file in llama_free 2023-03-30 12:28:25 -07:00
d68c5dc435 Make mmap_file static 2023-03-30 12:28:25 -07:00
64bde3ffd4 Fix ggml_init_params in quantize 2023-03-30 12:28:25 -07:00
c03ae8dca1 Add mmap support for model files 2023-03-30 12:28:25 -07:00
3bcc129ba8 cmake : properly invoke CTest (#629) master-3bcc129 2023-03-30 20:56:59 +03:00