Commit Graph

454 Commits

SHA1 Message Date
da0e9fe90c Add SHA256SUMS file and instructions to README on how to obtain and verify the downloads
Hashes created using:

sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS
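
The verification step presumably amounts to checking the downloaded files against that list, along the lines of the following (the exact README wording is not quoted here; --ignore-missing skips models that were not downloaded):

sha256sum --check --ignore-missing SHA256SUMS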
2023-03-21 23:19:11 +01:00
3366853e41 Add notice about pending change 2023-03-21 22:57:35 +02:00
1daf4dd712 Minor style changes 2023-03-21 18:10:32 +02:00
dc6a845b85 Add chat.sh script 2023-03-21 18:09:46 +02:00
3bfa3b43b7 Fix convert script, warnings, alpaca instructions, default params 2023-03-21 17:59:16 +02:00
e0ffc861fa Update IPFS links to quantized alpaca with new tokenizer format (#352) 2023-03-21 17:34:49 +02:00
074bea2eb1 sentencepiece bpe compatible tokenizer (#252)
* potential out of bounds read

* fix quantize

* style

* Update convert-pth-to-ggml.py

* mild cleanup

* don't need the space-prefixing here right now since main.cpp already does it

* new file magic + version header field

* readme notice

* missing newlines

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
2023-03-20 03:17:23 -07:00
7392f1cd2c Improved quantize script (#222)
* Improved quantize script

I improved the quantize script by adding error handling and allowing multiple models to be selected for quantization at once on the command line. I also converted it to Python for generality and extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the review from @mattsta. The parallelization suggestion still needs to be revisited, but the code for it was added anyway (commented out).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files with endings such as ...bin.0, ...bin.1, etc. That pattern has now been translated correctly to Python.

* Added support for Windows and updated README to use this script

New code has been added to set the name of the quantize binary depending on the platform (quantize.exe when working on Windows), and the README.md file has been updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo in the new filenames of the quantized models and removed the shell=True parameter from the subprocess.run call, as it conflicted with passing the arguments as a list.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This was causing the auto-generated help message to present the program's usage literally as "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
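
Taken together, the bullets above describe the script's overall shape. A minimal sketch of that workflow follows; the directory layout, file names, the trailing quantization-type argument, and the binary name are illustrative assumptions, not the actual quantize.py:

# Minimal sketch of the workflow described above -- not the actual quantize.py.
import argparse
import glob
import os
import subprocess
import sys

def main():
    # prog controls what the auto-generated help message shows as the usage line
    parser = argparse.ArgumentParser(prog="python3 quantize.py")
    parser.add_argument("models", nargs="+", help="model sizes to quantize, e.g. 7B 13B")
    parser.add_argument("--models-path", default="models", help="directory containing the models")
    args = parser.parse_args()

    # quantize.exe when working on Windows, ./quantize otherwise
    binary = "quantize.exe" if os.name == "nt" else "./quantize"

    for model in args.models:
        # Glob matches ggml-model-f16.bin as well as split parts ...bin.0, ...bin.1, etc.
        pattern = os.path.join(args.models_path, model, "ggml-model-f16.bin*")
        parts = sorted(glob.glob(pattern))
        if not parts:
            print(f"error: no f16 files found for model {model}", file=sys.stderr)
            sys.exit(1)
        for f16_path in parts:
            q4_path = f16_path.replace("f16", "q4_0")
            # Arguments are passed as a list, without shell=True;
            # the trailing "2" selects q4_0 in the original Bash script (assumed here).
            subprocess.run([binary, f16_path, q4_path, "2"], check=True)

if __name__ == "__main__":
    main()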
2023-03-19 20:38:44 +02:00
160bfb217d Update hot topics to mention Alpaca support 2023-03-19 19:51:55 +02:00
a4e63b73df Add instruction for using Alpaca (#240) 2023-03-19 18:49:50 +02:00
6f61c18ec9 Fix typo in readme 2023-03-18 23:18:04 +01:00
1e5a6d088d Add note about Python 3.11 to readme 2023-03-18 22:25:35 +01:00
554b541521 Add memory/disk requirements to readme 2023-03-18 22:25:35 +01:00
e81b9c81c1 Update Contributing section 2023-03-17 20:30:04 +02:00
367946c668 Don't tell users to use a bad number of threads (#243)
The readme tells people to use the command line option "-t 8", causing 8
threads to be started. On systems with fewer than 8 cores, this causes a
significant slowdown. Remove the option from the example command lines
and use /proc/cpuinfo on Linux to determine a sensible default.
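As a rough illustration only (not the project's code; the non-Linux fallback is an assumption), the detection boils down to counting processor entries in /proc/cpuinfo:

# Sketch of deriving a sensible default thread count on Linux -- illustrative only.
import os

def default_thread_count() -> int:
    try:
        with open("/proc/cpuinfo") as f:
            # Each logical CPU appears as a "processor : N" line.
            logical = sum(1 for line in f if line.startswith("processor"))
        if logical > 0:
            return logical
    except OSError:
        pass  # not Linux, or /proc is unavailable
    # Fallback for non-Linux systems (an assumption, not from the commit)
    return os.cpu_count() or 4

print(f"suggested option: -t {default_thread_count()}")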
2023-03-17 19:47:35 +02:00
2af23d3043 🚀 Dockerize llamacpp (#132)
* feat: dockerize llamacpp

* feat: split build & runtime stages

* split dockerfile into main & tools

* add quantize into tool docker image

* Update .devops/tools.sh

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* add docker action pipeline

* change CI to publish to the GitHub Docker registry

* fix runs-on name: macOS-latest should be macos-latest (lowercase)

* include docker versioned images

* fix github action docker

* fix docker.yml

* feat: include all-in-one command tool & update readme.md

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-17 10:47:06 +01:00
721311070e Update README.md 2023-03-16 15:00:09 +02:00
ac15de7895 Expand "Contributing" section 2023-03-16 08:55:13 +02:00
273abc47ff Update hot topics - RMSnorm 2023-03-16 07:12:12 +02:00
27944c4206 fixed typo (#178) 2023-03-15 22:35:25 +02:00
977295c700 Fix potential licensing issue (#126)
* Update README.md

* Update README.md

remove facebook
2023-03-15 21:39:06 +02:00
60f819a2b1 Add section to README on how to run the project on Android (#130) 2023-03-14 15:30:08 +02:00
97ab2b2578 Add Misc section + update hot topics + minor fixes 2023-03-14 09:43:52 +02:00
7ec903d3c1 Update contribution section, hot topics, limitations, etc. 2023-03-13 19:21:51 +02:00
d1f224712d Add quantize script for batch quantization (#92)
* Add quantize script for batch quantization

* Indentation

* README for new quantize.sh

* Fix script name

* Fix file list on Mac OS

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-13 18:15:20 +02:00
1808ee0500 Add initial contribution guidelines 2023-03-13 09:42:26 +02:00
1a0a74300f Update README.md 2023-03-12 23:39:01 +02:00
96ea727f47 Add interactive mode (#61)
* Initial work on interactive mode.

* Improve interactive mode. Make rev. prompt optional.

* Update README to explain interactive mode.

* Fix OS X build
2023-03-12 23:13:28 +02:00
9661954835 Fix typo in README (#45) 2023-03-12 22:30:08 +02:00
7027a97837 Update README.md 2023-03-12 22:09:26 +02:00
7c9e54e55e Revert "weights_only" arg - it was causing more trouble than it helped 2023-03-12 20:59:01 +02:00
b9bd1d0141 python/pytorch compat notes (#44) 2023-03-12 14:16:33 +02:00
702fddf5c5 Clarify meaning of hacking 2023-03-12 09:03:25 +02:00
7d86e25bf6 README: add "Supported platforms" + update hot topics 2023-03-12 08:41:54 +02:00
da1a4ff01f Update README.md 2023-03-12 01:26:32 +02:00
6b2cb6302f Fix a typo in model name (#16) 2023-03-11 19:32:20 +02:00
4235e3d5b3 Update README.md 2023-03-11 18:10:18 +02:00
f1eaff4721 Add AVX2 support for x86 architectures thanks to @Const-me! 2023-03-11 18:04:25 +02:00
0c6803321c Update README.md 2023-03-11 12:31:21 +02:00
7211862c94 Update Makefile var + add comment 2023-03-11 12:27:02 +02:00
a5c5ae2f54 Update README.md 2023-03-11 11:34:25 +02:00
ea977e85ec Update README.md 2023-03-11 11:34:11 +02:00
007a8f6f45 Support all LLaMA models + change Q4_0 quantization storage 2023-03-11 11:28:30 +02:00
5f2f970d51 Include Python dependencies in README (#6) 2023-03-11 07:47:26 +02:00
73c6ed5e87 Update README.md 2023-03-11 01:30:47 +02:00
01eeed8fb1 Update README.md 2023-03-11 01:22:58 +02:00
6da2df34ee Update README.md 2023-03-11 01:18:10 +02:00
920a7fe2d9 Update README.md 2023-03-11 00:55:22 +02:00
3a57ee59de Update README.md 2023-03-11 00:51:46 +02:00
b85028522d Update README.md 2023-03-11 00:09:19 +02:00