Commit Graph

134 Commits

SHA1 Message Date
486ae645fd Compute perplexity over prompt (#270)
* Compute perplexity over prompt

* More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)

* Output all perplexities

* Add timing/ETA
2023-03-21 18:27:42 +02:00
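
For reference, a minimal sketch of the calculation this commit describes, assuming the model's log-probability of each actual next token is already available (the function name is illustrative): perplexity is the exponential of the mean negative log-likelihood over all positions in the context window.

```cpp
#include <cmath>
#include <vector>

// Perplexity over a token sequence, given the model's log-probability of
// the actual next token at every position in the context window.
double perplexity(const std::vector<double> &token_logprobs) {
    double nll = 0.0;
    for (double lp : token_logprobs) {
        nll -= lp; // accumulate negative log-likelihood
    }
    return std::exp(nll / token_logprobs.size());
}
```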
3ab3e6582f Add chatLLaMa script (#198)
* Add chatLLaMa script

* Fix shellcheck errors and do some cleanup

* Move chatLLaMa script to `examples` directory

* Reduce chatLLaMa context size to 2048

Ref d7def1a752

* Set n_predict to 2048 in examples/chatLLaMa
2023-03-21 18:23:15 +02:00
f157088cb7 makefile: Fix CPU feature detection on Haiku (#218) 2023-03-21 18:21:06 +02:00
c86ba036e6 Enable ANSI colors on Windows 10+ (#311)
* Enable ANSI colors on Windows 10+

On older versions the function will silently fail, without any ill effects

* Do not call SetConsoleMode if the mode is already set

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-21 18:14:46 +02:00
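
The approach, as a self-contained sketch (the helper name is illustrative; the Win32 calls are real): enable virtual terminal processing on the output handle, and skip the SetConsoleMode call when the flag is already set.

```cpp
#ifdef _WIN32
#include <windows.h>

// Enable ANSI escape-code processing on Windows 10+. On older Windows the
// SetConsoleMode call fails silently, with no ill effects, as noted above.
static void enable_ansi_colors(void) {
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (h != INVALID_HANDLE_VALUE && GetConsoleMode(h, &mode)) {
        if (!(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) {
            // only touch the mode when the flag is not already set
            SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
        }
    }
}
#endif
```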
1daf4dd712 Minor style changes 2023-03-21 18:10:32 +02:00
dc6a845b85 Add chat.sh script 2023-03-21 18:09:46 +02:00
6a612959e1 Check for reverse prompt by characters instead of tokens (#292) (#330)
* Check for reverse prompt by characters instead of tokens (#292)

* Update main.cpp

Wording.

* Cleanup.

* Remove unnecessary use of std::stringstream.

---------

Co-authored-by: Johnman <tjohnman@github>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-21 18:05:06 +02:00
d5f56a5e5a Check for reverse prompt by characters instead of tokens (#292) (#330)
* Check for reverse prompt by characters instead of tokens (#292)

* Update main.cpp

Wording.

* Cleanup.

* Remove unnecessary use of std::stringstream.

---------

Co-authored-by: Johnman <tjohnman@github>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-21 18:04:43 +02:00
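
The idea behind these commits, sketched with illustrative names (not the repo's actual variables): accumulate the generated text and test whether it ends with the reverse prompt string, instead of comparing token-id sequences, which can miss matches that span token boundaries.

```cpp
#include <string>

// True if `text` ends with `suffix` (C++11-compatible helper).
static bool ends_with(const std::string &text, const std::string &suffix) {
    return text.size() >= suffix.size() &&
           text.compare(text.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// In the generation loop (illustrative):
//   generated += token_to_text(token_id);
//   if (ends_with(generated, reverse_prompt)) { /* hand control back to the user */ }
```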
3bfa3b43b7 Fix convert script, warnings, alpaca instructions, default params 2023-03-21 17:59:16 +02:00
715d292ee0 Add OpenBSD support (#314) 2023-03-21 17:50:09 +02:00
c98ae02668 fix typo in comment (#318) 2023-03-21 17:49:43 +02:00
c3b2306b18 Makefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335) 2023-03-21 17:44:11 +02:00
975d2cebf9 cmdline option for custom amount of model parts (--n_parts N) (#348)
* cmdline option for custom amount of model parts (--n_parts N)

* Update main.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-21 17:42:43 +02:00
e0ffc861fa Update IPFS links to quantized alpaca with new tokenizer format (#352) 2023-03-21 17:34:49 +02:00
8f644a0a85 Change default repeat_penalty to 1.0
I feel this penalty is not really helping.
Especially for the example from the README it makes the results pretty bad.
2023-03-21 17:32:14 +02:00
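
For context, a common (CTRL-style) formulation of this penalty, sketched with illustrative names: logits of recently generated tokens are scaled by the penalty, so a value of 1.0 is exactly a no-op, which is what this default change amounts to.

```cpp
#include <unordered_set>
#include <vector>

// Scale down the logits of recently generated tokens. penalty == 1.0f
// leaves every logit unchanged, i.e. disables the penalty.
static void apply_repeat_penalty(std::vector<float> &logits,
                                 const std::unordered_set<int> &recent_tokens,
                                 float penalty) {
    for (int tok : recent_tokens) {
        if (logits[tok] > 0.0f) {
            logits[tok] /= penalty; // positive logit: shrink it
        } else {
            logits[tok] *= penalty; // negative logit: push it further down
        }
    }
}
```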
eb34620aec Add tokenizer test + revert to C++11 (#355)
* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now
2023-03-21 17:29:41 +02:00
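
The rough shape of such a test, with tokenize() standing in for the repo's actual API and placeholder token ids (kept C++11-compatible, in the spirit of this commit):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

std::vector<int> tokenize(const std::string &text); // assumed, defined elsewhere

void test_tokenizer() {
    std::vector<std::pair<std::string, std::vector<int> > > cases;
    // placeholder expectations; real ids come from the reference tokenizer
    cases.push_back(std::make_pair("Hello world", std::vector<int>()));
    for (size_t i = 0; i < cases.size(); ++i) {
        assert(tokenize(cases[i].first) == cases[i].second);
    }
}
```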
2e664f1ff4 Add initial AVX512 support for dot product on Linux (#320)
* Update Makefile to detect AVX512 support and add compiler flags if it's available
* Based on existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
* Perform 8 bit -> 16 bit sign extension and multiply+add on 32 values at a time instead of 16
* Use built-in AVX512 horizontal reduce add to get the sum at the end
* Manual unrolling on inner dot product loop to reduce loop counter overhead
master-2e664f1
2023-03-21 15:35:42 +01:00
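
A simplified sketch of the pattern the bullet points describe (sign-extend 8-bit to 16-bit, multiply-add 32 values at a time, horizontal reduce at the end), shown on plain int8 arrays rather than the actual 4-bit quantized blocks; the function name is illustrative.

```cpp
// compile with -mavx512f -mavx512bw
#include <immintrin.h>
#include <stdint.h>

// Dot product of two int8 arrays (n a multiple of 32).
int32_t dot_i8_avx512(const int8_t *x, const int8_t *y, int n) {
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i < n; i += 32) {
        // load 32 int8 values and sign-extend each to a 16-bit lane
        __m512i vx = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *)(x + i)));
        __m512i vy = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *)(y + i)));
        // multiply 16-bit lanes and add adjacent pairs into 32-bit lanes
        acc = _mm512_add_epi32(acc, _mm512_madd_epi16(vx, vy));
    }
    // built-in horizontal reduce-add over the sixteen 32-bit lanes
    return _mm512_reduce_add_epi32(acc);
}
```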
8cf9f34edd Adding missing features of CMakeLists.txt & Refactoring (#131)
* Functionality additions to CMakeLists.txt

Refactoring:
1. Simplify options that were a negation of a negation:
LLAMA_NO_ACCELERATE -> LLAMA_ACCELERATE
2. Make AVX2 optional in MSVC instead of forcing it on.
3. Align CMAKE_CXX_STANDARD with the Makefile.
4. Use add_compile_options instead of adding options to CMAKE_C_FLAGS.
5. Make utils use target_link_libraries instead of directly referencing code.

Added features:
1. Added some options:
LLAMA_STATIC_LINK, LLAMA_NATIVE, LLAMA_LTO, LLAMA_GPROF, LLAMA_OPENBLAS

* Fix Accelerate link in CMake

* Windows build Fix

* C++11 to C++17

* Reflects C/C++ standard individually

* Change the minimum CMake version to 3.12

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-8cf9f34
2023-03-21 01:37:16 +01:00
bd4b46d6ba Nix flake: set meta.mainProgram to llama 2023-03-20 22:50:22 +01:00
6b6d5b5024 Fixed tokenizer.model not found error when model dir is symlink (#325) 2023-03-20 19:33:10 +00:00
a791a68b61 move file magic/version to header, print expected version (#319) master-a791a68 2023-03-20 19:26:01 +00:00
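
The shape of that check, sketched with placeholder constants (not the repo's actual magic or version values): read the two 32-bit header fields and, on mismatch, tell the user which version was expected.

```cpp
#include <cstdint>
#include <cstdio>

const uint32_t FILE_MAGIC   = 0x12345678; // placeholder
const uint32_t FILE_VERSION = 1;          // placeholder

// Validate the two 32-bit header fields; report the expected version on mismatch.
bool check_header(FILE *f) {
    uint32_t magic = 0, version = 0;
    if (fread(&magic, sizeof(magic), 1, f) != 1 || magic != FILE_MAGIC) {
        fprintf(stderr, "invalid file magic\n");
        return false;
    }
    if (fread(&version, sizeof(version), 1, f) != 1 || version != FILE_VERSION) {
        fprintf(stderr, "unsupported file version %u (expected %u)\n", version, FILE_VERSION);
        return false;
    }
    return true;
}
```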
0f1b21cb90 Docker - Fix publish docker image in GitHub Registry (#235)
* fix publish permission

* try to fix the docker pipeline by using github_token as the password and repository_owner as the username
master-0f1b21c
2023-03-20 18:05:20 +01:00
074bea2eb1 sentencepiece bpe compatible tokenizer (#252)
* potential out of bounds read

* fix quantize

* style

* Update convert-pth-to-ggml.py

* mild cleanup

* don't need the space-prefixing here right now since main.cpp already does it

* new file magic + version header field

* readme notice

* missing newlines

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
master-074bea2
2023-03-20 03:17:23 -07:00
5cb63e2493 Add tqdm to Python requirements (#293)
* Add tqdm to Python requirements
* Remove torchvision torchaudio, add requests
2023-03-20 09:24:11 +01:00
da5303c1ea bugfix: default should not be interactive (#304) master-da5303c 2023-03-19 23:44:20 +02:00
4545539d71 Rename script 2023-03-19 21:58:51 +02:00
edeba28366 Add temporary helper script for Alpaca chat 2023-03-19 21:57:48 +02:00
5c19c70ba6 fix coloring of last n_batch of prompt, and refactor line input (#221)
* fix coloring of last `n_batch` of prompt, and refactor line input
* forgot the newline that needs to be sent to the model
* (per #283) try to force flush of color reset in SIGINT handler
master-5c19c70
2023-03-19 19:44:30 +00:00
24568371ae Support for multiple reverse prompts. (#299)
Co-authored-by: Johnman <>
Co-authored-by: Johnman <tjohnman@github>
master-2456837
2023-03-19 21:33:06 +02:00
7392f1cd2c Improved quantize script (#222)
* Improved quantize script

I improved the quantize script by adding error handling and by allowing multiple models to be selected for quantization at once on the command line. I also converted it to Python for generality as well as extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the reviews made by @mattsta. The parallelization suggestion is still to be revised, but code for it was added (commented out).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has been translated correctly to Python now.

* Added support for Windows and updated README to use this script

New code was added to set the name of the quantize binary depending on the platform (quantize.exe when working on Windows), and the README.md file was updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo regarding the new filenames of the quantized models and removed the shell=True parameter in the subprocess.run call as it was conflicting with the list of parameters.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This was making the automatic help message suggest the program's usage as being literally "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
2023-03-19 20:38:44 +02:00
ad5fd5b60c Make prompt randomization optional. (#300)
Co-authored-by: Johnman <>
master-ad5fd5b
2023-03-19 20:36:19 +02:00
368d0c8a9e Respect the maximum number of tokens in interactive. (#298)
Co-authored-by: Johnman <johnman@github>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-368d0c8
2023-03-19 20:31:17 +02:00
50fae10d03 Add --ignore-eos parameter (#181)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-50fae10
2023-03-19 20:22:48 +02:00
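
One way such a flag can be implemented, as an illustrative sketch: push the end-of-sequence logit to negative infinity before sampling so it can never be chosen.

```cpp
#include <cmath>
#include <vector>

// With --ignore-eos set, ban the EOS token by zeroing out its probability.
static void ban_eos(std::vector<float> &logits, int eos_token_id) {
    logits[eos_token_id] = -INFINITY;
}
```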
084e2f0ec0 interactive mode: print '\n' in sigint_handler; this flushes stdout and thus ensures the color reset. (#283) master-084e2f0 2023-03-19 20:10:00 +02:00
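
A sketch of that fix (mirroring the commit's approach; registration and exit path are illustrative):

```cpp
#include <csignal>
#include <cstdio>
#include <cstdlib>

// Emit the ANSI reset sequence, then '\n': the newline flushes line-buffered
// stdout, so the terminal color is actually reset before exiting. (printf
// here mirrors the commit; strictly, only async-signal-safe calls belong
// in signal handlers.)
static void sigint_handler(int /*signo*/) {
    printf("\033[0m\n");
    _Exit(130);
}
// Registration (illustrative): std::signal(SIGINT, sigint_handler);
```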
0b366e7357 Command line switch to use F16 for memory_k and memory_v (refactor of #154) (#294)
* Use F16 for memory_k and memory_v

* add command line switch to use f16 instead of f32 for memory k+v

---------

Co-authored-by: Ty Everett <ty@tyweb.us>
master-0b366e7
2023-03-19 19:57:00 +02:00
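
The back-of-the-envelope effect of the switch, assuming LLaMA-7B shapes (n_layer = 32, n_embd = 4096) and a 512-token context; these numbers are illustrative, not taken from the commit.

```cpp
#include <cstdio>

int main() {
    const long long n_layer = 32, n_ctx = 512, n_embd = 4096;
    const long long elems = 2 * n_layer * n_ctx * n_embd; // K plus V
    printf("f32 KV cache: %lld MiB\n", elems * 4 / (1024 * 1024)); // 512 MiB
    printf("f16 KV cache: %lld MiB\n", elems * 2 / (1024 * 1024)); // 256 MiB
    return 0;
}
```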
160bfb217d Update hot topics to mention Alpaca support 2023-03-19 19:51:55 +02:00
c494ed5b94 Fix off-by-one bug (#115) master-c494ed5 2023-03-19 19:46:32 +02:00
c1c7026b47 Fix python stuff (#109) 2023-03-19 19:33:18 +02:00
467b149761 Refactoring convert-pth-to-ggml.py: more concise and readable (#109)
* Refactor get_n_parts function to simplify code and improve readability

* Use f-strings instead of concatenation

* Refactoring: more concise and readable

* modularize

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-19 19:17:39 +02:00
70f01cb863 Drop trailing new line from file prompts (#80) master-70f01cb 2023-03-19 19:05:04 +02:00
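
The likely shape of this fix, sketched (the function name is illustrative): strip the single trailing newline that editors append when the prompt is loaded from a file.

```cpp
#include <string>

static void drop_trailing_newline(std::string &prompt) {
    if (!prompt.empty() && prompt.back() == '\n') {
        prompt.pop_back();
    }
}
```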
a4e63b73df Add instruction for using Alpaca (#240) 2023-03-19 18:49:50 +02:00
9e1707218a Add "--instruct" argument for usage with Alpaca (#240)
Also start adding prompts in "./prompts"
master-9e17072
2023-03-19 18:37:02 +02:00
22213a17b5 Change RMSNorm eps to 1e-6 (#173)
I think this is what is used in the Python code
master-22213a1
2023-03-19 17:30:00 +02:00
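
For reference, RMSNorm in a minimal sketch, with eps = 1e-6 as this commit sets it (names illustrative): y_i = x_i * w_i / sqrt(mean(x^2) + eps).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> rms_norm(const std::vector<float> &x,
                            const std::vector<float> &w,
                            float eps = 1e-6f) {
    double sum_sq = 0.0;
    for (size_t i = 0; i < x.size(); ++i) sum_sq += (double)x[i] * x[i];
    const float scale = 1.0f / std::sqrt((float)(sum_sq / x.size()) + eps);
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) y[i] = x[i] * scale * w[i];
    return y;
}
```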
d7def1a752 Warn user if a context size greater than 2048 tokens is specified (#274)
LLaMA doesn't support context sizes larger than 2048 tokens, and going above that produces terrible results.
master-d7def1a
2023-03-18 20:10:47 -04:00
6f61c18ec9 Fix typo in readme 2023-03-18 23:18:04 +01:00
1e5a6d088d Add note about Python 3.11 to readme 2023-03-18 22:25:35 +01:00
554b541521 Add memory/disk requirements to readme 2023-03-18 22:25:35 +01:00
d3f202d57b Remove unused code since n_vocab is model.hparams.n_vocab (#262) master-d3f202d 2023-03-18 13:51:49 +00:00
e03e359730 fixed warning with std::ignore about unused function result (#151)
2023-03-18 11:44:09 +00:00
a81d0c2a17 Fix n^2 loop in tokenization (#254)
This causes long prompts to parse very slowly.
2023-03-18 11:17:19 +00:00