5763 Commits

Author SHA1 Message Date
0f1b21cb90 Docker - Fix publish docker image in GitHub Registry (#235)
* fix publish permission

* try to fix the Docker pipeline by using github_token as the password and repository_owner as the username
master-0f1b21c
2023-03-20 18:05:20 +01:00
074bea2eb1 sentencepiece bpe compatible tokenizer (#252)
* potential out of bounds read

* fix quantize

* style

* Update convert-pth-to-ggml.py

* mild cleanup

* the space-prefixing isn't needed here since main.cpp already does it

* new file magic + version header field

* readme notice

* missing newlines

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
master-074bea2
2023-03-20 03:17:23 -07:00
5cb63e2493 Add tqdm to Python requirements (#293)
* Add tqdm to Python requirements
* Remove torchvision torchaudio, add requests
2023-03-20 09:24:11 +01:00
da5303c1ea bugfix: default should not be interactive (#304) master-da5303c 2023-03-19 23:44:20 +02:00
4545539d71 Rename script 2023-03-19 21:58:51 +02:00
edeba28366 Add temporary helper script for Alpaca chat 2023-03-19 21:57:48 +02:00
5c19c70ba6 fix coloring of last n_batch of prompt, and refactor line input (#221)
* fix coloring of last `n_batch` of prompt, and refactor line input
* forgot the newline that needs to be sent to the model
* (per #283) try to force flush of color reset in SIGINT handler
master-5c19c70
2023-03-19 19:44:30 +00:00
24568371ae Support for multiple reverse prompts. (#299)
Co-authored-by: Johnman <>
Co-authored-by: Johnman <tjohnman@github>
master-2456837
2023-03-19 21:33:06 +02:00
7392f1cd2c Improved quantize script (#222)
* Improved quantize script

I improved the quantize script by adding error handling and allowing multiple models to be selected for quantization at once from the command line. I also converted it to Python for generality and extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the reviews by @mattsta. The parallelization suggestion is still to be revised, but the code for it has been added (commented out).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has been translated correctly to Python now.

* Added support for Windows and updated README to use this script

New code sets the name of the quantize binary depending on the platform (quantize.exe on Windows), and README.md has been updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo in the new filenames of the quantized models and removed the shell=True parameter from the subprocess.run call, as it conflicted with passing the arguments as a list.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This made the automatically generated help message suggest the program's usage as literally "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
master-ad5fd5b
2023-03-19 20:38:44 +02:00
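The glob translation mentioned in this commit (matching multipart model files ending in `.bin.0`, `.bin.1`, and so on) can be sketched with Python's `fnmatch`; the filenames and pattern below are illustrative, not the exact ones from the script:

```python
import fnmatch

# Illustrative: select multipart model files named like <model>.bin.<N>,
# skipping the single-file <model>.bin and unrelated files.
files = ["ggml-model-f16.bin", "ggml-model-f16.bin.0",
         "ggml-model-f16.bin.1", "notes.txt"]
pattern = "*.bin.[0-9]*"
parts = sorted(f for f in files if fnmatch.fnmatch(f, pattern))
```

`fnmatch` applies shell-style globbing to plain strings, which makes it easy to unit-test the pattern without touching the filesystem.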
ad5fd5b60c Make prompt randomization optional. (#300)
Co-authored-by: Johnman <>
2023-03-19 20:36:19 +02:00
368d0c8a9e Respect the maximum number of tokens in interactive. (#298)
Co-authored-by: Johnman <johnman@github>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-368d0c8
2023-03-19 20:31:17 +02:00
50fae10d03 Add --ignore-eos parameter (#181)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-50fae10
2023-03-19 20:22:48 +02:00
084e2f0ec0 interactive mode: print '\n' in sigint_handler; this flushes stdout and thus ensures the color is reset. (#283) master-084e2f0 2023-03-19 20:10:00 +02:00
0b366e7357 Command line switch to use F16 for memory_k and memory_v (refactor of #154) (#294)
* Use F16 for memory_k and memory_v

* add command line switch to use f16 instead of f32 for memory k+v

---------

Co-authored-by: Ty Everett <ty@tyweb.us>
master-0b366e7
2023-03-19 19:57:00 +02:00
160bfb217d Update hot topics to mention Alpaca support 2023-03-19 19:51:55 +02:00
c494ed5b94 Fix off-by-one bug (#115) master-c494ed5 2023-03-19 19:46:32 +02:00
c1c7026b47 Fix python stuff (#109) 2023-03-19 19:33:18 +02:00
467b149761 Refactoring convert-pth-to-ggml.py: more concise and readable (#109)
* Refactor get_n_parts function to simplify code and improve readability

* Use f-strings instead of concatenation

* Refactoring: more concise and readable

* modularize

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-19 19:17:39 +02:00
70f01cb863 Drop trailing new line from file prompts (#80) master-70f01cb 2023-03-19 19:05:04 +02:00
a4e63b73df Add instruction for using Alpaca (#240) 2023-03-19 18:49:50 +02:00
9e1707218a Add "--instruct" argument for usage with Alpaca (#240)
Also start adding prompts in "./prompts"
master-9e17072
2023-03-19 18:37:02 +02:00
22213a17b5 Change RMSNorm eps to 1e-6 (#173)
I think this is what is used in the Python code
master-22213a1
2023-03-19 17:30:00 +02:00
d7def1a752 Warn user if a context size greater than 2048 tokens is specified (#274)
LLaMA doesn't support context sizes above 2048 tokens, and going beyond that produces terrible results.
master-d7def1a
2023-03-18 20:10:47 -04:00
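The check this commit describes can be sketched as follows; the function name and the use of Python's `warnings` module are assumptions for illustration, only the 2048-token threshold comes from the commit:

```python
import warnings

LLAMA_MAX_CTX = 2048  # LLaMA's trained context window

def check_context_size(n_ctx: int) -> int:
    """Warn (but do not fail) when the requested context exceeds the window."""
    if n_ctx > LLAMA_MAX_CTX:
        warnings.warn(
            f"context size {n_ctx} exceeds the {LLAMA_MAX_CTX}-token window "
            "the model was trained with; output quality will degrade"
        )
    return n_ctx
```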
6f61c18ec9 Fix typo in readme 2023-03-18 23:18:04 +01:00
1e5a6d088d Add note about Python 3.11 to readme 2023-03-18 22:25:35 +01:00
554b541521 Add memory/disk requirements to readme 2023-03-18 22:25:35 +01:00
d3f202d57b Remove unused code since n_vocab is model.hparams.n_vocab (#262) master-d3f202d 2023-03-18 13:51:49 +00:00
e03e359730 fixed warning with std::ignore about unused function result (#151)
2023-03-18 11:44:09 +00:00
a81d0c2a17 Fix n^2 loop in tokenization (#254)
This caused long prompts to parse very slowly.
2023-03-18 11:17:19 +00:00
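A common way such quadratic tokenization loops arise, and the kind of fix involved, can be illustrated in Python (this is a generic sketch, not the actual C++ code from the commit): re-slicing the remaining text copies the tail on every step, while tracking an index does not.

```python
def tokenize_quadratic(text, vocab):
    # Each iteration copies the remaining tail -> O(n^2) total work.
    tokens = []
    while text:
        for tok in vocab:
            if text.startswith(tok):
                tokens.append(tok)
                text = text[len(tok):]  # O(len(text)) copy per step
                break
        else:
            text = text[1:]
    return tokens

def tokenize_linear(text, vocab):
    # Same matching logic, but advancing an index avoids the copies.
    tokens, i = [], 0
    while i < len(text):
        for tok in vocab:
            if text.startswith(tok, i):  # compare at offset i, no copy
                tokens.append(tok)
                i += len(tok)
                break
        else:
            i += 1
    return tokens
```

Both functions produce the same tokens; only the asymptotic cost differs.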
b2de7f18df CI Improvements (#230)
* CI Improvements

Manual build feature, autoreleases for Windows

* better CI naming convention

use branch name in releases and tags
2023-03-18 09:27:12 +02:00
a292747893 Nix flake (#40)
* Nix flake

* Nix: only add Accelerate framework on macOS

* Nix: development shell, direnv and compatibility

* Nix: use python packages supplied by withPackages

* Nix: remove channel compatibility

* Nix: fix ARM neon dotproduct on macOS

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-17 23:03:48 +01:00
c9f670a177 Implement non-greedy tokenizer that tries to maximize token lengths (#242)
* Implement non-greedy tokenizer that tries to maximize token lengths

* Insert single space in front of the prompt

- this is to match original llama tokenizer behavior

---------

Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
2023-03-17 21:05:58 +01:00
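One way to tokenize non-greedily so that longer vocabulary matches win overall is dynamic programming over the best segmentation; the sketch below (which minimizes the token count, and is an illustration of the idea rather than the exact algorithm in the commit) finds splits a greedy longest-match pass can miss:

```python
def tokenize_max_length(text, vocab):
    """Return the segmentation of text into vocab tokens using the fewest
    tokens, or None if text cannot be covered by the vocabulary."""
    n = len(text)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = min tokens covering text[:i]
    back = [None] * (n + 1)  # back[i] = token ending at position i
    best[0] = 0
    for i in range(1, n + 1):
        for tok in vocab:
            j = i - len(tok)
            if j >= 0 and best[j] + 1 < best[i] and text.startswith(tok, j):
                best[i], back[i] = best[j] + 1, tok
    if best[n] == INF:
        return None
    tokens, i = [], n
    while i > 0:
        tokens.append(back[i])
        i -= len(back[i])
    return tokens[::-1]
```

For example, with vocabulary `["abc", "ab", "cd", "a", "b", "c"]`, a greedy longest-match tokenizer consumes `"abc"` first and then cannot cover the trailing `"d"` of `"abcd"`, while the DP finds `["ab", "cd"]`.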
4f54609110 Default to 4 threads (#243) 2023-03-17 21:46:46 +02:00
e81b9c81c1 Update Contributing section 2023-03-17 20:30:04 +02:00
367946c668 Don't tell users to use a bad number of threads (#243)
The readme tells people to use the command line option "-t 8", causing 8
threads to be started. On systems with fewer than 8 cores, this causes a
significant slowdown. Remove the option from the example command lines
and use /proc/cpuinfo on Linux to determine a sensible default.
2023-03-17 19:47:35 +02:00
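Together with the "Default to 4 threads" commit above, the idea is to derive the thread count from the machine rather than hardcode `-t 8`. A minimal Python sketch, assuming `os.cpu_count()` as the CPU probe and a hypothetical cap of 4:

```python
import os

def default_thread_count(cap: int = 4) -> int:
    """Pick a sensible default thread count: the number of logical CPUs,
    clamped to [1, cap]."""
    cpus = os.cpu_count() or 1  # cpu_count() may return None
    return max(1, min(cpus, cap))
```

On a dual-core laptop this yields 2 instead of oversubscribing with 8 threads.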
6b0df5ccf3 add pthread link to fix cmake build under Linux (#114)
* add pthread link to fix cmake build under linux

* add cmake to linux and macos platform

* separate make and cmake workflow

---------

Co-authored-by: Sebastián A <sebastian.aedo29@gmail.com>
2023-03-17 13:38:24 -03:00
2af23d3043 🚀 Dockerize llamacpp (#132)
* feat: dockerize llamacpp

* feat: split build & runtime stages

* split dockerfile into main & tools

* add quantize into tool docker image

* Update .devops/tools.sh

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add docker action pipeline

* change CI to publish at github docker registry

* fix runs-on name: macOS-latest should be macos-latest (lowercase)

* include docker versioned images

* fix github action docker

* fix docker.yml

* feat: include all-in-one command tool & update readme.md

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-17 10:47:06 +01:00
904d2a8d6a Q4_1 quantization (#193)
* Add AVX2 version of ggml_vec_dot_q4_1

* Small optimisations to q4_1 dot product (@Const-me)

* Rearrange Q4_1 quantization to work for multipart models. (Fix #152)

* Fix ggml_vec_mad_q4_1 too

* Fix non-vectorised q4_1 vec mul
2023-03-17 06:48:39 +02:00
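Q4_1 quantization stores, per block, a scale and a minimum, with each value rounded to a 4-bit offset from that minimum. The Python sketch below illustrates the scheme; the block size matches ggml's but the layout is simplified, not the packed binary format:

```python
def quantize_q4_1(xs, block_size=32):
    """Quantize floats to (delta, min, 4-bit offsets) per block."""
    blocks = []
    for i in range(0, len(xs), block_size):
        block = xs[i:i + block_size]
        lo, hi = min(block), max(block)
        delta = (hi - lo) / 15 or 1.0       # 15 = 2**4 - 1; avoid div-by-zero
        quants = [round((x - lo) / delta) for x in block]  # each in 0..15
        blocks.append((delta, lo, quants))
    return blocks

def dequantize_q4_1(blocks):
    """Reconstruct approximate floats from quantized blocks."""
    out = []
    for delta, lo, quants in blocks:
        out.extend(q * delta + lo for q in quants)
    return out
```

The round-trip error per value is bounded by half the block's `delta`, which is why per-block minima (versus the symmetric Q4_0 scheme) help on skewed value distributions.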
721311070e Update README.md 2023-03-16 15:00:09 +02:00
ac15de7895 Expand "Contributing" section 2023-03-16 08:55:13 +02:00
273abc47ff Update hot topics - RMSnorm 2023-03-16 07:12:12 +02:00
9b4a15b17d Fix RMS norm in GGML (#191) 2023-03-15 19:29:25 -04:00
6eac39ba95 Add RMS norm and use it (#187)
* add ggml_rms_norm

* update op num
2023-03-16 00:41:38 +02:00
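RMS normalization, as added here by ggml_rms_norm, scales by the root mean square of the values instead of first subtracting the mean as LayerNorm does. A minimal sketch (the eps of 1e-6 follows the separate "Change RMSNorm eps" commit above; the default here is an assumption tied to that):

```python
import math

def rms_norm(xs, eps=1e-6):
    """Scale xs so its root mean square is (approximately) 1."""
    mean_sq = sum(x * x for x in xs) / len(xs)
    scale = 1.0 / math.sqrt(mean_sq + eps)
    return [x * scale for x in xs]
```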
27944c4206 fixed typo (#178) 2023-03-15 22:35:25 +02:00
2d15d6c9a9 add SIGINT support for _WIN32 environments (#120)
* add SIGINT support for _WIN32 environments

* perhaps more consistent
2023-03-15 21:56:24 +02:00
2d64715ad4 added ctx_size parameter (#148)
* added ctx_size parameter

* added it in more places

* Apply suggestions from code review

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-15 21:42:40 +02:00
16b2c61a22 fixed color reset on exit (#149)
* fixed color reset on exit

* added sigint handler for ansi_color_reset

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-15 21:39:38 +02:00
977295c700 Fix potential licensing issue (#126)
* Update README.md

* Update README.md

remove facebook
2023-03-15 21:39:06 +02:00
956dfda8ad Use tokenizer.vocab_size() instead of hardcoding 32000 in convert-pth-to-ggml.py (#142)
There are ways that special tokens or other new tokens could be added to the tokenizer; therefore it's probably best not to assume the vocabulary is only 32000 tokens.
2023-03-15 21:37:50 +02:00
113e685d18 inline -> static inline for "bytesFromNibbles" (#161)
Without the "static" prefix, it fails to compile under clang
2023-03-15 21:05:14 +02:00