Commit Graph

454 Commits

f9a6364912 llama : require first token to be BOS (#1303)
* llama : require first token to be BOS

* scripts : add ppl-run-all.sh

* perplexity : add BOS for each chunk

* readme : update perplexity values after BOS fix

* perplexity : add clarifying comments
2023-05-08 17:41:54 +03:00
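The fix above means evaluation should always start from a BOS token, and the perplexity tool now prefixes one to every chunk. Below is a minimal Python sketch of that chunking behavior; the token id and helper name are illustrative assumptions, not llama.cpp's actual code:

```python
BOS_ID = 1  # assumption: LLaMA's BOS token id

def chunks_with_bos(tokens, chunk_size):
    """Yield fixed-size chunks of token ids, each prefixed with BOS."""
    for start in range(0, len(tokens), chunk_size):
        yield [BOS_ID] + tokens[start:start + chunk_size]

# Example: ten token ids split into chunks of four, each starting with BOS.
for chunk in chunks_with_bos(list(range(10)), 4):
    print(chunk)
```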
1f48b0abcf Documented CUDA reproducibility, added warning (#1346) 2023-05-08 02:42:01 +02:00
173d0e6419 makefile: automatic Arch Linux detection (#1332)
This commit ports a detection method from koboldcpp's Makefile in order to automatically set the `-lcblas` option on Arch Linux.
2023-05-05 23:57:14 +02:00
921dcee00a readme: add missing info (#1324) 2023-05-05 16:43:36 +02:00
360cfe5bec readme : add OpenBuddy link (#1321) 2023-05-04 19:33:31 +03:00
bca9ad938a minor : fix whitespaces (#1302) 2023-05-03 20:09:42 +03:00
b0c71c7b6d scripts : platform independent script to verify sha256 checksums (#1203)
* Python script to verify the checksums of the LLaMA models

Added Python script for verifying SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.

* Update README.md

update to the README for improved readability and to explain the usage of the Python checksum verification script

* update the verification script

I've extended the script based on suggestions by @prusnak

The script now checks the available RAM; if there is enough to hash the file in one pass, it will do so. If not, the file is read in chunks.

* minor improvement

Small change so that the available RAM is checked rather than the total RAM.

* remove the part of the code that reads the file at once if enough ram is available

Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.

* Update verify-checksum-models.py

quick fix to pass the git check
2023-05-03 18:31:28 +03:00
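For reference, the chunked hashing this thread converged on looks roughly like the following Python; the chunk size and the commented-out path are illustrative assumptions, not values taken from verify-checksum-models.py:

```python
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Hash a file in fixed-size chunks so large models never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    status = "OK" if sha256_of_file(path) == expected_hex else "MISMATCH"
    print(f"{status}  {path}")

# verify("models/7B/ggml-model-f16.bin", "<expected sha256>")  # hypothetical entry
```

Reading via `iter(f.read, b"")` keeps memory usage flat regardless of model size, which is why the RAM check became unnecessary.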
36d19a603b Remove Q4_3 which is no better than Q5 (#1218) 2023-04-28 23:10:43 +00:00
7f15c5c477 readme : update hot topics 2023-04-28 21:32:52 +03:00
78ec543733 Correcting link to w64devkit (#1214)
Correcting link to w64devkit (change seeto to skeeto).
2023-04-28 16:22:48 +02:00
f9be42add0 readme : add quantization info 2023-04-26 23:24:42 +03:00
ea3ad7eb60 Updating build instructions to include BLAS support (#1183)
* Updated build information

First update to the build instructions to include BLAS.

* Update README.md

* Update information about BLAS

* Better BLAS explanation

Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.

* Better BLAS explanation

* BLAS for Mac

Specifying that BLAS is already supported on Macs using the Accelerate Framework.

* Clarify the effect of BLAS

* Windows Make instructions

Added the instructions to build with Make on Windows

* Fixing typo

* Fix trailing whitespace
2023-04-26 22:03:03 +02:00
859fee6dfb quantize : use map to assign quantization type from string (#1191)
instead of `int` (while the `int` option is still supported)

This allows the following usage:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`

instead of:

`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
2023-04-26 18:43:27 +02:00
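The lookup this commit describes can be sketched in Python as below (the real tool is C++); the `q4_0` value follows the commit's own example of `q4_0` replacing `2`, while the rest of the table is an assumption:

```python
QUANT_TYPES = {
    "q4_0": 2,  # matches the commit's example: "q4_0" instead of 2
    "q4_1": 3,  # assumed companion entry
}

def parse_quant_type(arg: str) -> int:
    if arg in QUANT_TYPES:   # new path: named type, e.g. "q4_0"
        return QUANT_TYPES[arg]
    return int(arg)          # old path: a bare integer still works

print(parse_quant_type("q4_0"))  # -> 2
print(parse_quant_type("2"))     # -> 2
```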
9b0a4d4214 examples/main README improvements and some light refactoring (#1131) 2023-04-24 15:45:32 +00:00
c6524f46eb readme : update gpt4all instructions (#980) 2023-04-23 10:21:26 +02:00
834695fe3a Minor: Readme fixed grammar, spelling, and misc updates (#1071) 2023-04-19 19:52:14 +00:00
7cd5c4a3e9 readme : add warning about Q4_2 and Q4_3 2023-04-19 19:07:54 +03:00
7faa7460f0 readme : update hot topics about new LoRA functionality 2023-04-18 20:10:26 +03:00
e9298af389 readme : add Ruby bindings (#1029) 2023-04-17 22:34:35 +03:00
723dac55fa py : new conversion script (#545)
Current status: Working, except for the latest GPTQ-for-LLaMa format
  that includes `g_idx`.  This turns out to require changes to GGML, so
  for now it only works if you use the `--outtype` option to dequantize it
  back to f16 (which is pointless except for debugging).

  I also included some cleanup for the C++ code.

  This script is meant to replace all the existing conversion scripts
  (including the ones that convert from older GGML formats), while also
  adding support for some new formats.  Specifically, I've tested with:

  - [x] `LLaMA` (original)
  - [x] `llama-65b-4bit`
  - [x] `alpaca-native`
  - [x] `alpaca-native-4bit`
  - [x] LLaMA converted to 'transformers' format using
        `convert_llama_weights_to_hf.py`
  - [x] `alpaca-native` quantized with `--true-sequential --act-order
        --groupsize 128` (dequantized only)
  - [x] same as above plus `--save_safetensors`
  - [x] GPT4All
  - [x] stock unversioned ggml
  - [x] ggmh

  There's enough overlap in the logic needed to handle these different
  cases that it seemed best to move to a single script.

  I haven't tried this with Alpaca-LoRA because I don't know where to find
  it.

  Useful features:

  - Uses multiple threads for a speedup in some cases (though the Python
    GIL limits the gain, and sometimes it's disk-bound anyway).

  - Combines split models into a single file (both the intra-tensor split
    of the original and the inter-tensor split of 'transformers' format
    files).  Single files are more convenient to work with and more
    friendly to future changes to use memory mapping on the C++ side.  To
    accomplish this without increasing memory requirements, it has some
    custom loading code which avoids loading whole input files into memory
    at once.

  - Because of the custom loading code, it no longer depends on PyTorch,
    which might make installing dependencies slightly easier or faster...
    although it still depends on NumPy and sentencepiece, so I don't know
    if there's any meaningful difference.  In any case, I also added a
    requirements.txt file to lock the dependency versions in case of any
    future breaking changes.

  - Type annotations checked with mypy.

  - Some attempts to be extra user-friendly:

      - The script tries to be forgiving with arguments, e.g. you can
        specify either the model file itself or the directory containing
        it.

      - The script doesn't depend on config.json / params.json, just in
        case the user downloaded files individually and doesn't have those
        handy.  But you still need tokenizer.model and, for Alpaca,
        added_tokens.json.

      - The script tries to give a helpful error message if
        added_tokens.json is missing.
2023-04-14 10:03:03 +03:00
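The memory-frugal combining step described above can be sketched as streaming each shard into the output in fixed-size chunks; the paths and chunk size here are hypothetical, and the real convert script splices tensors rather than raw bytes:

```python
import shutil

def combine_shards(shard_paths, out_path, chunk_size=8 * 1024 * 1024):
    """Concatenate shard files without ever holding a whole shard in memory."""
    with open(out_path, "wb") as out:
        for shard in shard_paths:
            with open(shard, "rb") as f:
                shutil.copyfileobj(f, out, length=chunk_size)

# combine_shards(["consolidated.00.pth", "consolidated.01.pth"], "combined.bin")  # hypothetical
```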
ec29272175 readme : remove python 3.10 warning (#929) 2023-04-13 16:59:53 +03:00
7e941b95eb readme : llama node binding (#911)
* chore: add nodejs binding

* chore: add nodejs binding
2023-04-13 16:54:27 +03:00
4579af95e8 zig : update build.zig (#872)
* update

* update readme

* minimize the changes.

---------

Co-authored-by: zjli2019 <zhengji.li@ingchips.com>
2023-04-13 16:43:22 +03:00
f76cb3a34d readme : change "GPU support" link to discussion 2023-04-12 14:48:57 +03:00
782438070f readme : update hot topics with link to "GPU support" issue 2023-04-12 14:31:12 +03:00
4dbbd40750 readme: link to sha256sums file (#902)
This is to emphasize that these do not need to be obtained from elsewhere.
2023-04-12 08:46:20 +02:00
8b679987cd Fix whitespace, add .editorconfig, add GitHub workflow (#883) 2023-04-11 19:45:44 +00:00
a0caa34b16 Add BAIR's Koala to supported models (#877) 2023-04-10 22:41:53 +02:00
d2beca95dc Make docker instructions more explicit (#785) 2023-04-06 08:56:58 +02:00
3416298929 Update README.md 2023-04-05 19:54:30 +03:00
8d10406d6e readme : change logo + add bindings + add uis + add wiki 2023-04-05 18:56:20 +03:00
594cc95fab readme : update with CMake and windows example (#748)
* README: Update with CMake and windows example

* README: update with code-review for cmake build
2023-04-05 17:36:12 +03:00
d8d4e865cd Add a missing step to the gpt4all instructions (#690)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-02 12:48:57 +02:00
d0a7f742e7 readme: replace termux links with homepage, play store is deprecated (#680) 2023-04-01 16:57:30 +02:00
9733104be5 drop quantize.py (now that models are using a single file) 2023-03-31 01:07:32 +02:00
3df890aef4 readme : update supported models 2023-03-30 22:31:54 +03:00
b467702b87 readme : fix typos 2023-03-29 19:38:31 +03:00
516d88e75c readme : add GPT4All instructions (close #588) 2023-03-29 19:37:20 +03:00
b391579db9 Update README and comments for standalone perplexity tool (#525) 2023-03-26 16:14:01 +03:00
348d6926ee Add logo to README.md 2023-03-26 10:20:49 +03:00
55ad42af84 Move chat scripts into "./examples" 2023-03-25 20:37:09 +02:00
4a7129acd2 Remove obsolete information from README 2023-03-25 16:30:32 +02:00
f4f5362edb Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request model access from Facebook and must never obtain models through this repo.
2023-03-24 15:23:09 +00:00
b6b268d441 Add link to Roadmap discussion 2023-03-24 09:13:35 +02:00
a50e39c6fe Revert "Delete SHA256SUMS for now" (#429)
* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5.

* Remove ggml files until they can be verified
* Remove alpaca json
* Also add model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23 15:15:48 +01:00
8a3e5ef801 Move model section from issue template to README.md (#421)
* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from the issue template about obtaining models from Facebook, along with links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models, as these appear to be in the old data format and probably shouldn't be directly linked to anyway

* Updated the perplexity section to point at the Perplexity scores discussion (#406)
2023-03-23 11:30:40 +00:00
93208cfb92 Adjust repetition penalty 2023-03-23 10:46:58 +02:00
03ace14cfd Add link to recent podcast about whisper.cpp and llama.cpp 2023-03-23 09:48:51 +02:00
40ea807a97 Add details on perplexity to README.md (#395) 2023-03-22 08:53:54 -07:00
56817b1f88 Remove temporary notice and update hot topics 2023-03-22 07:34:02 +02:00