5763 Commits

SHA1 Message Date
348d6926ee Add logo to README.md 2023-03-26 10:20:49 +03:00
33e35b8fe8 Exit from interactive mode if input stream is bad (#491)
Also allow exiting the interactive prompt with CTRL-D on Unix and CTRL-Z on Windows.
master-33e35b8
2023-03-26 08:25:46 +03:00
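
For illustration, a minimal sketch of the mechanism (not the actual main.cpp code): once stdin reaches EOF or goes bad, std::getline fails, which ends the read loop.

```cpp
#include <iostream>
#include <string>

int main() {
    std::string line;
    while (true) {
        std::cout << "> " << std::flush;
        // std::getline fails once the stream hits EOF (CTRL-D on Unix,
        // CTRL-Z then Enter on Windows) or enters an error state, so
        // checking its result is enough to leave the loop cleanly
        if (!std::getline(std::cin, line)) {
            break;
        }
        // ... feed `line` to the model here ...
    }
    return 0;
}
```
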
19726169b3 CI: Run other sanitizer builds even if one fails (#511)
Applies only to sanitizer builds, so they won't be cancelled.
master-1972616
2023-03-26 00:13:28 +02:00
f732695cd5 Clarify console output in convert-pth-to-ggml.py (#512)
"Processing part 1 of 3" instead of "Processing part 0"
2023-03-25 23:53:55 +02:00
2f7bf7dd7c CMake / CI additions (#497)
* CMake: Add AVX512 option

* CI: Add AVX/AVX512 builds (Windows)
(AVX512 tests can only be run when the worker happens to support it; building works either way)

* CMake: Fix sanitizer linkage ( merged #468 )

* CI: Add sanitizer builds (Ubuntu)

* CI: Fix release tagging
(change @zendesk/action-create-release to @anzz1/action-create-release until the upstream PR "Added commitish as input" (zendesk/action-create-release#32) is merged)
master-2f7bf7d
2023-03-25 23:38:11 +02:00
34ab526843 (Windows) Set console to UTF-8 on init (#420)
Sets the console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.
master-34ab526
2023-03-25 22:29:22 +02:00
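
Roughly what the change amounts to, as a hedged sketch (the commit's actual code may differ):

```cpp
#ifdef _WIN32
#include <windows.h>
#endif

// Set the console to codepage 65001 (CP_UTF8) for both input and
// output so multi-byte UTF-8 characters round-trip through the console
void console_init_utf8() {
#ifdef _WIN32
    SetConsoleCP(CP_UTF8);
    SetConsoleOutputCP(CP_UTF8);
#endif
}
```
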
c2b25b6912 Fix colors enabling on WIN32 master-c2b25b6 2023-03-25 21:53:39 +02:00
79b2b266db If n_predict == -1, generate forever 2023-03-25 21:51:41 +02:00
e2d490dafd Infinite generation via context swapping (#71) 2023-03-25 21:36:22 +02:00
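
Together, these two commits enable unbounded generation: once the token history no longer fits in the context window, the context is "swapped". A hedged sketch of the idea (names such as n_ctx and n_keep follow llama.cpp conventions, but this is an illustration, not the committed code):

```cpp
#include <vector>

// Keep the first n_keep tokens (typically the prompt) plus the most
// recent half of the remainder; the older half is discarded. The
// surviving tokens must be re-evaluated before sampling resumes.
void swap_context(std::vector<int> & tokens, int n_ctx, int n_keep) {
    if ((int) tokens.size() < n_ctx) {
        return; // everything still fits, nothing to do
    }
    const int n_left = (int) tokens.size() - n_keep;
    tokens.erase(tokens.begin() + n_keep,
                 tokens.begin() + n_keep + n_left / 2);
}
```
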
03f7e33560 Cleanup STL headers + fix embedding examples + minor stuff master-03f7e33 2023-03-25 20:51:14 +02:00
55ad42af84 Move chat scripts into "./examples" 2023-03-25 20:37:09 +02:00
459e93cce0 Add AVX2 implementation of dequantize_row_q4_1 (#505) master-459e93c 2023-03-25 20:31:48 +02:00
a316a425d0 Overhaul the examples structure
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something!
master-a316a42
2023-03-25 20:26:40 +02:00
ecbe466a36 Retire the ggml_mul_mat() branch for transposed src0 (#500)
* Retire the ggml_mul_mat() branch for transposed src0

- It can always be made contiguous with ggml_cpy()
- The code is now simplified
- The results are deterministic with respect to the number of threads

* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)

* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON

* Fix dequantization - forgot to interleave the quants
master-ecbe466
2023-03-25 19:47:21 +02:00
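
A hedged sketch of the usage pattern this change implies (assuming the ggml API of the time; not code from the commit): copy the transposed operand into a fresh contiguous tensor with ggml_cpy() before the matrix multiplication.

```cpp
#include "ggml.h"

struct ggml_tensor * mul_mat_contiguous(
        struct ggml_context * ctx,
        struct ggml_tensor  * src0_t,  // transposed (non-contiguous) operand
        struct ggml_tensor  * src1) {
    // materialize src0_t into a newly allocated contiguous tensor
    struct ggml_tensor * src0 = ggml_cpy(ctx, src0_t,
        ggml_new_tensor_2d(ctx, src0_t->type, src0_t->ne[0], src0_t->ne[1]));
    return ggml_mul_mat(ctx, src0, src1);
}
```
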
502a400192 Disable prompt verbosity by default and add option to enable (#480) master-502a400 2023-03-25 17:17:16 +02:00
09aecbf628 Add AVX2 implementation of dequantize_row_q4_0 (#467) master-09aecbf 2023-03-25 17:06:49 +02:00
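
For reference, a scalar version of the loop these AVX2 commits vectorize, written from the Q4_0 block layout of the time (one fp32 scale plus 16 packed nibbles per 32 weights); an illustration, not the PR's SIMD code:

```cpp
#include <cstdint>

#define QK 32  // weights per quantization block

struct block_q4_0 {
    float   d;           // scale
    uint8_t qs[QK / 2];  // two 4-bit quants per byte
};

static void dequantize_row_q4_0_ref(const block_q4_0 * x, float * y, int k) {
    for (int i = 0; i < k / QK; i++) {
        const float d = x[i].d;
        for (int l = 0; l < QK; l += 2) {
            const uint8_t vi = x[i].qs[l / 2];
            // the two nibbles of each byte land in adjacent outputs;
            // quants are stored biased by 8 (0..15 represents -8..7)
            y[i*QK + l + 0] = ((int8_t)(vi & 0x0F) - 8) * d;
            y[i*QK + l + 1] = ((int8_t)(vi >> 4)   - 8) * d;
        }
    }
}
```

Q4_1 (the commit further up) additionally carries a per-block minimum instead of the fixed -8 bias.
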
4640eff23d Don't interfere with BLAS for large prompts by running only 1 thread master-4640eff 2023-03-25 17:03:10 +02:00
ab77d76312 Add longer DAN prompt for testing big batch numbers 2023-03-25 16:49:09 +02:00
29b7baab67 Add timings for the prompt evaluation (#478) master-29b7baa 2023-03-25 16:34:23 +02:00
4a7129acd2 Remove obsolete information from README 2023-03-25 16:30:32 +02:00
6b6dbc8910 Remove obsolete assert and fix compiler warning master-6b6dbc8 2023-03-25 16:22:05 +02:00
2a2e63ce05 Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS master-2a2e63c 2023-03-25 16:10:14 +02:00
e899bf54b2 bounds checking for input prefix (#492) master-e899bf5 2023-03-25 14:42:09 +02:00
fbd4d38c64 feat: '--in-prefix STRING' option (#426)
Prefix user inputs with a string
master-fbd4d38
2023-03-25 14:03:19 +02:00
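
What the option does, as a minimal hedged sketch (an illustration, not the committed code): each line the user types gets the prefix prepended before tokenization.

```cpp
#include <string>

std::string apply_input_prefix(const std::string & input_prefix,
                               const std::string & user_line) {
    // with --in-prefix "User: ", typing "hello" feeds "User: hello"
    return input_prefix + user_line;
}
```
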
58e6c9f36f Add support for file load progress reporting callbacks (#434)
* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo
master-58e6c9f
2023-03-25 07:26:28 +02:00
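
A hedged sketch of hooking the new callback (field and function names follow the llama.h of that era and the model path is only an example; treat the details as assumptions):

```cpp
#include <cstdio>
#include "llama.h"

static void on_progress(float progress, void * user_data) {
    fprintf(stderr, "\rloading... %3.0f%%", progress * 100.0f);
}

int main() {
    llama_context_params params = llama_context_default_params();
    params.progress_callback           = on_progress;
    params.progress_callback_user_data = nullptr;

    llama_context * ctx =
        llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);
    // ... use the model ...
    llama_free(ctx);
    return 0;
}
```
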
36d07532ef Add missing struct annotation (#483)
`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compiler issue when the header is parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.
master-36d0753
2023-03-25 07:21:24 +02:00
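
The gist of the fix, sketched with an abbreviated parameter list: C requires the struct keyword at every use of the type, while C++ does not, which is why the header compiled as C++ but broke C-based parsers.

```cpp
typedef int llama_token;   // abbreviated; see llama.h for the real types
struct llama_context;      // opaque handle

// Before (valid C++ only, fails for C parsers / interop generators):
//   llama_token llama_sample_top_p_top_k(llama_context * ctx, /* ... */);

// After, with the struct annotation:
llama_token llama_sample_top_p_top_k(struct llama_context * ctx /* , ... */);
```
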
6f1ee4b640 Fix crash for 65B model with pre-allocated memory (#485) master-6f1ee4b 2023-03-25 06:38:14 +02:00
8520fc310e Disable BLAS altogether - the bug is not just for quantized mat mul master-8520fc3 2023-03-24 23:47:06 +02:00
b3f460e941 Disable BLAS branch in mul_mat - seems there is a bug master-b3f460e 2023-03-24 23:39:17 +02:00
04c6f5ed6f Immediately start processing the prompt before user input has been provided (#476) master-7a9b6c3 master-04c6f5e 2023-03-24 23:17:58 +02:00
7a9b6c3a8b Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32
2023-03-24 23:17:37 +02:00
31572d9665 Temporary bump the memory buffer size - hopefully fix issues from 483bab2e master-31572d9 2023-03-24 18:23:56 +02:00
f4f5362edb Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to the models from Facebook and never through this repo.
master-863f65e
2023-03-24 15:23:09 +00:00
863f65e2e3 fix instruct mode (#445)
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
2023-03-24 17:22:39 +02:00
afd220d9c6 Properly free llama_context on failure master-563cdc3 master-481044d master-afd220d 2023-03-24 17:21:01 +02:00
481044d50c additional optimizations for POWER9 (#454) 2023-03-24 17:19:26 +02:00
563cdc391d Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-24 17:19:05 +02:00
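
The core of the option is a single POSIX call; a minimal hedged sketch (not the committed code):

```cpp
#include <cstdio>
#include <sys/mman.h>

// Pin a buffer in RAM so the OS neither swaps nor compresses it.
// Failure (e.g. RLIMIT_MEMLOCK set too low) is non-fatal: just warn.
static void try_mlock(const void * addr, size_t size) {
    if (mlock(addr, size) != 0) {
        perror("warning: failed to mlock model buffer");
    }
}
```
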
8d4a855c24 Add embedding mode with arg flag. Currently working (#282)
* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* use a params argument instead of a hardcoded boolean; remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-8d4a855
2023-03-24 17:05:13 +02:00
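
A hedged sketch of the embedding flow this PR enables (assumes the llama.h of the time: an embedding flag in the context params and a llama_get_embeddings() accessor; the model path is only an example):

```cpp
#include <cstdio>
#include "llama.h"

int main() {
    llama_context_params params = llama_context_default_params();
    params.embedding = true;  // run in embedding-extraction mode

    llama_context * ctx =
        llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);

    // ... tokenize and llama_eval() the input text here ...

    const int     n_embd = llama_n_embd(ctx);
    const float * embd   = llama_get_embeddings(ctx);
    for (int i = 0; i < n_embd; i++) {
        printf("%f ", embd[i]);
    }
    llama_free(ctx);
    return 0;
}
```
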
b6b268d441 Add link to Roadmap discussion 2023-03-24 09:13:35 +02:00
3cd8dde0d1 Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3.

Will provide the correct fix later
master-3cd8dde
2023-03-24 06:22:28 +02:00
4870e455b3 Fix memory allocation issues and seg faults master-4870e45 2023-03-24 00:11:53 +02:00
483bab2e3d Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different numbers of threads and batch sizes
master-483bab2
2023-03-23 23:22:01 +02:00
404e1da38e Fix quantize script not finding models in parent directory (#428) 2023-03-23 22:42:52 +02:00
4cc053b6d5 Remove obsolete command from Docker script 2023-03-23 22:39:44 +02:00
0ba5a3a9a5 Obsolete 2023-03-23 22:32:21 +02:00
2e17dfd80a Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)
* Improve interactive mode's coherence after EOS

Aims to improve coherence and the ability to resume the interactive session when the user is given back input after an end-of-text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

fix formatting of reverse prompts so they don't end up at the end of the current line, without introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-2e17dfd
2023-03-23 22:22:47 +02:00
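
The end state of this long PR, as a hedged sketch (names mirror the main.cpp conventions of the time and ::llama_tokenize is the std::string helper from the examples' utils.h; an illustration, not the merged diff): an end-of-text token is replaced by a newline token so the session keeps its context, and the first reverse prompt is injected so control returns to the user.

```cpp
#include <string>
#include <vector>
#include "llama.h"
#include "utils.h"  // examples helper header of the time (later renamed)

void handle_eos(llama_context * ctx,
                llama_token & id,                               // sampled token
                const std::vector<llama_token> & newline_token, // pre-tokenized "\n"
                const std::vector<std::string> & antiprompt,
                std::vector<llama_token> & embd_inp) {
    if (id != llama_token_eos()) {
        return;
    }
    // replace EOS with a newline instead of flushing context
    id = newline_token.front();
    if (!antiprompt.empty()) {
        // tokenize and inject only the first reverse prompt
        const std::vector<llama_token> ap =
            ::llama_tokenize(ctx, antiprompt.front(), false);
        embd_inp.insert(embd_inp.end(), ap.begin(), ap.end());
    }
}
```
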
20a1a4e09c Fix GPTQ converter (#423)
* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-ad072fc
2023-03-23 22:18:13 +02:00
ad072fc5ad Generate library with CMake (#430)
* Generate library with CMake

Add BUILD_SHARED_LIBS option to allow the llama library to be generated.

* Turn ON PIC when BUILD_SHARED_LIBS is ON
2023-03-23 21:16:48 +01:00
ea10d3ded2 Command line args bounds checking (#424)
* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1
master-ea10d3d
2023-03-23 19:54:28 +02:00
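
The pattern the fix applies, as a hedged sketch (an illustration in the style of the examples' argument loop, not the committed diff): options that consume a value check that the value actually exists, and failures now exit with code 1 instead of 0.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

bool parse_args(int argc, char ** argv, std::string & model_path) {
    for (int i = 1; i < argc; i++) {
        const std::string arg = argv[i];
        if (arg == "-m" || arg == "--model") {
            if (++i >= argc) {  // bounds check: the value is missing
                fprintf(stderr, "error: missing argument for %s\n", arg.c_str());
                return false;
            }
            model_path = argv[i];
        } else {
            fprintf(stderr, "error: unknown argument: %s\n", arg.c_str());
            return false;
        }
    }
    return true;
}

int main(int argc, char ** argv) {
    std::string model_path;
    if (!parse_args(argc, argv, model_path)) {
        exit(1);  // previously exited with code 0, masking the failure
    }
    return 0;
}
```
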
a18c19259a Fix Nix build 2023-03-23 17:51:26 +01:00