Commit Graph

5076 Commits

d7b7484f74 Add OpenLLaMA instructions to the README (#1954)
* add openllama to readme
2023-06-23 10:38:01 +02:00
7487137227 rework convert.py to read hyper-parameters from config.json (#1958)
* Read hyper-parameters from the HuggingFace transformers config.json, if it exists, and otherwise fall back to guessing, as before.
  This allows converting open_llama 3B and other non-standard model designs.
master-7487137
2023-06-22 14:20:47 +02:00
bbca06e269 cmake: revert CUDA arch default to 52, 61 if f16 (#1959) master-bbca06e 2023-06-21 23:49:25 +02:00
fb98254f99 Fix typo in README.md (#1961) 2023-06-21 23:48:43 +02:00
049aa16b8c readme : add link to p1 2023-06-20 19:05:54 +03:00
2322ec223a Fix typo (#1949) 2023-06-20 15:42:40 +03:00
aacdbd4056 llama : fix params struct alignment (#1936)
* Workaround struct misalignment during value-copy

Signed-off-by: mudler <mudler@localai.io>

* Move booleans to the bottom of the structure

Signed-off-by: mudler <mudler@localai.io>

* Add comment

Signed-off-by: mudler <mudler@localai.io>

---------

Signed-off-by: mudler <mudler@localai.io>
master-aacdbd4
2023-06-20 04:24:39 +03:00
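
A minimal sketch of the workaround described above (the field names are hypothetical, not the actual llama.cpp params struct): grouping the bool members at the end keeps the wider members tightly packed, so a value-copy does not straddle interior padding holes.

```cpp
#include <cstdint>

// Hypothetical params struct illustrating the layout rule from the commit
// above: wide members first, booleans at the bottom. Interleaving bools
// with 4-byte members would create padding holes inside the struct.
struct example_params {
    uint32_t n_ctx;     // 4-byte members first
    uint32_t seed;
    float    rope_freq;
    // booleans last, so any padding sits only at the tail
    bool use_mmap;
    bool use_mlock;
    bool embedding;
};
```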
20568fe60f [Fix] Reenable server embedding endpoint (#1937)
* Add back embedding feature

* Update README
master-20568fe
2023-06-20 01:12:39 +03:00
18b35625c3 ggml : fix bug in LBFGS optimizer (found by ggml tests) master-18b3562 2023-06-19 20:43:30 +03:00
ba4e85a833 llama : use aligned memory during ggml_init call from loading saved sessions (#1934)
* fixed issue: memory is not guaranteed to be aligned properly during the ggml_init call when loading saved sessions

* removed commented-out old code from the fix
* updated another instance of the same issue below the original
master-ba4e85a
2023-06-19 18:20:06 +03:00
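
A sketch of the idea behind this fix, assuming the ggml_init_params API of this period (mem_size / mem_buffer / no_alloc); the alignment constant and helper name are illustrative:

```cpp
#include <stdlib.h>
#include "ggml.h"

// Illustrative alignment; ggml uses its own GGML_MEM_ALIGN internally.
static const size_t MEM_ALIGN_EXAMPLE = 32;

// Allocate an explicitly aligned buffer and hand it to ggml_init(),
// instead of a raw malloc'd pointer whose alignment is not guaranteed.
struct ggml_context * init_aligned(size_t mem_size) {
    void * buf = NULL;
    if (posix_memalign(&buf, MEM_ALIGN_EXAMPLE, mem_size) != 0) {
        return NULL;
    }
    struct ggml_init_params params = {
        /*.mem_size   =*/ mem_size,
        /*.mem_buffer =*/ buf,  // ggml uses this buffer as-is
        /*.no_alloc   =*/ false,
    };
    return ggml_init(params);
}
```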
23fc5c219a cmake : fix trailing whitespaces master-23fc5c2 2023-06-19 18:18:34 +03:00
cb40dfca69 llama : only use Q6_K for output weights if tensor size is multiple of 256 (#1932)
* Only use Q6_K for output weights if tensor size is multiple of 256

* Fixed copy/paste mistake

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-cb40dfc
2023-06-19 18:17:03 +03:00
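
A sketch of the selection rule from this commit (types and names are illustrative, not the actual quantization code): k-quants store weights in 256-element super-blocks, so a tensor whose element count is not a multiple of 256 cannot use Q6_K and needs a fallback type.

```cpp
#include <cstdint>

enum example_type { TYPE_Q6_K, TYPE_FALLBACK };

constexpr int64_t QK_K_EXAMPLE = 256; // k-quant super-block size

// Only pick Q6_K for the output weights when the tensor size divides
// evenly into 256-element super-blocks; otherwise use another type.
example_type pick_output_type(int64_t n_elements) {
    return n_elements % QK_K_EXAMPLE == 0 ? TYPE_Q6_K : TYPE_FALLBACK;
}
```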
ca7c3f4da5 cuda : faster k-quants on older GPUs (#1930)
* k_quants: hopefully much faster Q4_K on older GPUs

On the GTX-1660 that I have available to represent
"old GPUs", token prediction drops from 65.5 ms/tok
to 41.5 ms/tok!

* k_quants: hopefully much faster Q3_K on older GPUs

On the GTX-1660 that I have available to represent
"old GPUs", token prediction drops from 60.3 ms/tok
to 41.0 ms/tok!

* k_quants: faster Q2_K on older GPUs

It looks like I didn't need to change anything
compared to what we already had, so this is just
adding clarifying comments. But I now measure
36.3 ms/tok on the GTX-1660, instead of the
47.2 ms/tok that I had written in the faster
k-quants PR.

* k_quants: faster Q5_K on older GPUs

68.5 ms/tok -> 62.0 ms/tok on GTX-1660.
For some reason the same access pattern that leads
to such resounding success for Q2_K to Q4_K did not
work at all for Q5_K.

It is also more difficult to measure because for Q5_K_S
we only have 32 layers on the GTX-1660, so output, tok embeddings
and kv cache are done on the CPU.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-ca7c3f4
2023-06-19 18:14:09 +03:00
b97ca431db ggml : sync latest ggml repo (#1924)
* ggml : sync latest ggml repo

* ggml : remove unused comments

* ggml : asserts
master-b97ca43
2023-06-19 18:12:33 +03:00
1e3abfcef0 cmake : fix build shared ggml when CUDA is enabled (#1929)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-1e3abfc
2023-06-19 18:10:37 +03:00
16b9cd1939 Convert vector to f16 for dequantize mul mat vec (#1913)
* Convert vector to f16 for dmmv

* compile option

* Added compilation option description to README

* Changed cmake CUDA_ARCHITECTURES from "OFF" to "native"
master-16b9cd1
2023-06-19 10:23:56 +02:00
b24c3049d9 Added tokens per second to info prints (#1928) master-b24c304 2023-06-18 17:41:26 +02:00
0ede372a51 Fixed incorrectly applying RMS norm twice (#1925) master-0ede372 2023-06-18 16:07:09 +02:00
8596af4277 ggml : fix bug in ggml_compute_forward_add_q_f32 (#1918) master-8596af4 2023-06-18 14:19:16 +03:00
e1886cf4fe readme : update Android build instructions (#1922)
Add steps for using Termux on Android devices to prevent common errors.
2023-06-18 11:28:26 +03:00
8ab8ba62eb llama : prevent usage of k-quants when tensor size is not a multiple of 256 (#1921)
* Fix examples/metal

* k-quants: prevent usage when tensor size is not divisible by 256

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-8ab8ba6
2023-06-18 11:13:43 +03:00
90cc59d6ab examples : fix examples/metal (#1920)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-90cc59d
2023-06-18 10:52:10 +03:00
ce2c7d72e2 metal : handle buffers larger than device's maxBufferLength (#1826)
* metal : handle buffers larger than device's maxBufferLength

* metal : print more verbose device info + handle errors

* metal : fix prints for overlapping views

* metal : minimize view overlap to try to utilize device memory better
master-ce2c7d7
2023-06-18 09:09:47 +03:00
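
A rough sketch of the splitting approach this commit describes (names hypothetical): when a model buffer exceeds the device's maxBufferLength, map it as several views, each no larger than the limit.

```cpp
#include <cstddef>
#include <vector>

struct buffer_view {
    size_t offset; // start of the view inside the big buffer
    size_t length; // at most max_len
};

// Split one logical allocation into device-sized views. The real Metal
// code also has to handle view overlap and alignment; this only shows
// the basic chunking.
std::vector<buffer_view> split_views(size_t total, size_t max_len) {
    std::vector<buffer_view> views;
    for (size_t off = 0; off < total; off += max_len) {
        const size_t len = (total - off < max_len) ? total - off : max_len;
        views.push_back({off, len});
    }
    return views;
}
```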
57cd69460f cmake : add CUDA_ARCHITECTURES to new target ggml_static (#1917) master-57cd694 2023-06-18 07:29:47 +03:00
b2416493ab make : do not print help for simple example master-b241649 2023-06-17 20:55:03 +03:00
4f9c43e3bd minor : warning fixes master-4f9c43e 2023-06-17 20:24:11 +03:00
2c9380dd2f Only one CUDA stream per device for async compute (#1898) master-2c9380d 2023-06-17 19:15:02 +02:00
051e1b0e6a llama : fix kv_cache n init (close #1903) master-051e1b0 2023-06-17 19:31:20 +03:00
86c7571864 make : update for latest Arch (#1701)
With the upcoming change to the openblas package in Arch, the Makefile workaround is no longer needed.
master-86c7571
2023-06-17 19:17:22 +03:00
3d59ec5935 ggml : fix warnings under MSVC (#1908) master-3d59ec5 2023-06-17 18:46:15 +03:00
0711a5f6dc metal : add norm, cpy f16->f16, alibi kernels (#1823) 2023-06-17 17:37:49 +03:00
fc45a81bc6 exposed modules so that they can be invoked by nix run github:ggerganov/llama.cpp#server etc (#1863) 2023-06-17 14:13:05 +02:00
794db3e7b9 Server Example Refactor and Improvements (#1570)
A major rewrite for the server example.

Note that if you have built something on the previous server API, it will probably be incompatible.
Check out the examples for how a typical chat app could work.

This took a lot of effort: there are 24 PRs closed in the submitter's repo alone, over 160 commits, and a lot of comments and testing.

Summary of the changes:

- adds missing generation parameters: tfs_z, typical_p, repeat_last_n, repeat_penalty, presence_penalty, frequency_penalty, mirostat, penalize_nl, seed, ignore_eos
- applies missing top k sampler
- removes interactive mode/terminal-like behavior, removes exclude parameter
- moves threads and batch size to server command-line parameters
- adds LoRA loading and matches command line parameters with main example
- fixes stopping on the EOS token and after the specified number of tokens with n_predict
- adds server timeouts, host, and port settings
- adds expanded generation complete response; adds generation settings, stop reason, prompt truncated, model used, and final text
- sets defaults for unspecified parameters between requests
- removes the /next-token endpoint and as_loop parameter, adds a stream parameter and server-sent events for streaming (see the SSE framing sketch after this entry)
- adds CORS headers to responses
- adds request logging, exception printing and optional verbose logging
- adds better stopping words handling when matching multiple tokens and while streaming, or when it finishes on a partial stop string
- adds printing an error when it can't bind to the host/port specified
- fixes multi-byte character handling and replaces invalid UTF-8 characters on responses
- prints timing and build info on startup
- adds logit bias to request parameters
- removes embedding mode
- updates documentation; adds streaming Node.js and Bash examples
- fixes code formatting
- sets server threads to 1 since the current global state doesn't work well with simultaneous requests
- adds truncation of the input prompt and better context reset
- removes token limit from the input prompt
- significantly simplifies the logic and removes a lot of variables

---------

Co-authored-by: anon998 <131767832+anon998@users.noreply.github.com>
Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Felix Hellmann <privat@cirk2.de>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Lesaun Harvey <Lesaun@gmail.com>
master-794db3e
2023-06-17 14:53:04 +03:00
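
For the streaming mode mentioned above, a minimal sketch of standard server-sent-events framing (this is the generic SSE wire format, not the server example's actual code):

```cpp
#include <string>

// Each SSE event is a "data:" line followed by a blank line; the client
// reassembles the streamed tokens from consecutive events.
std::string sse_chunk(const std::string & json_payload) {
    return "data: " + json_payload + "\n\n";
}

// e.g. sse_chunk("{\"content\":\"Hello\"}") yields:
//   data: {"content":"Hello"}
//   (blank line)
```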
5ddf7ea1fb hooks : setting up flake8 and pre-commit hooks (#1681)
Small, non-functional changes were made to non-compliant files.
These include breaking up long lines, whitespace sanitization and
unused import removal.

The maximum line length in Python files was set to a generous 125 chars,
in order to minimize the number of changes needed in scripts and general
annoyance. The "txt" prompts directory is excluded from the checks
as it may contain oddly formatted files and strings for a good reason.

Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
2023-06-17 13:32:48 +03:00
bac19927c3 readme : alternative way to build for Android with CLBlast. (#1828) 2023-06-17 12:01:06 +03:00
b4c6f46f17 Allow cmake to build ggml as a library (#1896)
* Allow cmake to build ggml as a library

* A ggml_static library will be created

* When BUILD_SHARED_LIBS is enabled, ggml_shared will also be built
master-b4c6f46
2023-06-17 01:49:42 -06:00
92f20d9942 train : get raw text instead of page with html (#1905)
We probably want to train using just the text of Shakespeare instead of the HTML of the page displaying his work.
2023-06-17 09:51:54 +03:00
d411968e99 opencl : support k-quants (#1836)
* Porting q2_k kernel to OpenCL

* Set global and local sizes for kernel calls for dequantizing k-quants

* Added q6_k kernel

* Fix q4_k opencl struct order

* Replace uchar with uint8_t

* Finish dequant kernels

* Added OpenCL DMMV kernels

* Fix q2_k, improve code

* Fix q3_k

* Shorten switch statements

* Improve code formatting

---------

Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
master-d411968
2023-06-16 21:59:49 +03:00
b41b4cad6f examples : add "simple" (#1840)
* Create `simple.cpp`

* minimalist example `CMakeLists.txt`

* Update Makefile for minimalist example

* remove 273: Trailing whitespace

* removed trailing white spaces simple.cpp

* typo and comments simple.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
master-b41b4ca
2023-06-16 21:58:09 +03:00
13fe9d2d84 cmake : add auto detection of BLAS_INCLUDE_DIRS (#1886) master-13fe9d2 2023-06-16 21:53:04 +03:00
ac3b886953 llama : fix embd when offloading non-repeating layers (#1891) master-ac3b886 2023-06-16 21:25:51 +03:00
5b9ccaf104 Fixed possible macro redefinition (#1892)
MinGW libstdc++ may define `NOMINMAX` unconditionally. This fixes the case when it is already defined.
master-5b9ccaf
2023-06-16 21:25:01 +03:00
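
The usual guard pattern for a fix like this: define `NOMINMAX` only when the toolchain (e.g. MinGW's libstdc++) has not already defined it, so Windows headers still skip their min/max macros without triggering a redefinition warning.

```cpp
// Guarded define: safe whether or not the toolchain already set it.
#ifndef NOMINMAX
#define NOMINMAX
#endif
#include <windows.h>
```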
9cbf50c041 build : fix and ignore MSVC warnings (#1889) master-9cbf50c 2023-06-16 21:23:53 +03:00
3d01122610 CUDA : faster k-quant dot kernels (#1862)
* cuda : faster k-quant dot kernels

* Improve Q2_K dot kernel on older GPUs

We now have a K_QUANTS_PER_ITERATION macro, which should be
set to 1 on older and to 2 on newer GPUs.
With this, we preserve the performance of the original
PR on RTX-4080, and are faster compared to master on
GTX-1660.

* Improve Q6_K dot kernel on older GPUs

Using the same K_QUANTS_PER_ITERATION macro as last commit,
we preserve performance on RTX-4080 and speed up
Q6_K on a GTX-1660.

* Add LLAMA_CUDA_KQUANTS_ITER to CMakeLists.txt and Makefile

Allowed values are 1 or 2. 2 gives the best performance on
modern GPUs and is set as the default. On older GPUs, 1 may work
better (see the sketch after this entry).

* PR comments

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
master-3d01122
2023-06-16 20:08:44 +03:00
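
A sketch of how a tunable macro like K_QUANTS_PER_ITERATION is typically wired up (the loop body is illustrative, not the real dot kernel); the build system defines it as 1 for older GPUs or 2 for newer ones:

```cpp
#ifndef K_QUANTS_PER_ITERATION
#define K_QUANTS_PER_ITERATION 2 // default favors modern GPUs
#endif

static_assert(K_QUANTS_PER_ITERATION == 1 || K_QUANTS_PER_ITERATION == 2,
              "K_QUANTS_PER_ITERATION must be 1 or 2");

// Illustrative inner loop: each step handles K_QUANTS_PER_ITERATION
// elements (n is assumed to be a multiple of the macro).
float dot_example(const float * x, const float * y, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i += K_QUANTS_PER_ITERATION) {
        for (int k = 0; k < K_QUANTS_PER_ITERATION; ++k) {
            sum += x[i + k] * y[i + k];
        }
    }
    return sum;
}
```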
602c748863 gitignore : add several entries specific to Visual Studio (#1888) 2023-06-16 09:58:11 +03:00
a09f9195be Fixed CUDA runtime version check (#1879) master-a09f919 2023-06-15 21:49:08 +02:00
bed9275617 cmake : remove whitespaces master-bed9275 2023-06-15 21:56:50 +03:00
c36e81da62 examples : add chat-vicuna.sh (#1854)
Co-authored-by: Yang Li <yangliyl@google.com>
master-c36e81d
2023-06-15 21:05:53 +03:00
3559433fec cmake : set include path for OpenBlas (#1830) master-3559433 2023-06-15 20:51:26 +03:00
69b34a0e80 swift : Package compile breaks due to ggml-metal.metal (#1831)
* Ignore metal file in spm

* Add ggml.h to spm public Headers

---------

Co-authored-by: Vogel Frederik <vogel.frederik@linecorp.com>
master-69b34a0
2023-06-15 20:47:04 +03:00