Commit Graph

4504 Commits

Author SHA1 Message Date
e64f5b5578 examples : make n_ctx warning work again (#3066)
This was broken by commit e36ecdcc ("build : on Mac OS enable Metal by
default (#2901)").
b1204
2023-09-08 11:43:35 -04:00
94f10b91ed readme : update hot topics 2023-09-08 18:18:04 +03:00
b3e9852e47 sync : ggml (CUDA GLM RoPE + POSIX) (#3082)
ggml-ci
b1202
2023-09-08 17:58:07 +03:00
cb6c44c5e0 build : do not use _GNU_SOURCE gratuitously (#2035)
* Do not use _GNU_SOURCE gratuitously.

What is needed to build llama.cpp and the examples is the availability of
interfaces defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single UNIX Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some functionality from BSD that is not specified in POSIX.1.

Well, that was true until NUMA support was added recently,
so enable GNU libc extensions for Linux builds to cover that.

Not defining feature test macros in the source code gives greater flexibility
to those wanting to reuse it in a 3rd-party app, as they can build it with the
FTMs set by the Makefile here, or with other FTMs depending on their needs
(see the sketch after this entry).

It builds without issues in Alpine (musl libc), Ubuntu (glibc), MSYS2.

* make : enable Darwin extensions for macOS to expose RLIMIT_MEMLOCK

* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK

* make : use BSD-specific FTMs to enable alloca on BSDs

* make : fix OpenBSD build by exposing newer POSIX definitions

* cmake : follow recent FTM improvements from Makefile
b1201
2023-09-08 15:09:21 +03:00
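
To illustrate the feature-test-macro approach described in the entry above (a rough sketch; the exact flags the project's Makefile passes may differ), the FTM is supplied on the compiler command line rather than hard-coded in the source, so downstream builds remain free to choose their own:

```c
/* sketch.c — compile with e.g.: cc -D_XOPEN_SOURCE=600 sketch.c
 * The FTM comes from the build system, not from the source file,
 * which is the flexibility argument made in the commit message. */
#include <stdio.h>
#include <unistd.h>   /* sysconf() is visible under POSIX.1-2001 + XSI */

int main(void) {
    printf("page size: %ld\n", sysconf(_SC_PAGESIZE));
    return 0;
}
```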
a21baeb122 docker : add git to full-cuda.Dockerfile main-cuda.Dockerfile (#3044) 2023-09-08 13:57:55 +03:00
6ff712a6d1 Update deprecated GGML TheBloke links to GGUF (#3079) (author: Yui) 2023-09-08 12:32:55 +02:00
ebc96086af ggml-alloc : correctly check mmap return value for errors (#3075) b1198 2023-09-08 04:04:56 +02:00
7f412dab9c enable CPU HBM (#2603)
* add cpu hbm support

* add memalign 0 byte check

* Update ggml.c

* Update llama.cpp

* ggml : allow ggml_init with 0 size

* retrigger ci

* fix code style

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
b1197
2023-09-08 03:46:56 +02:00
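
As a hedged sketch of what CPU HBM support typically involves (the actual wiring in ggml/llama.cpp may differ), allocations can be routed through the memkind library's hbwmalloc API, with the zero-byte guard mentioned in the bullet list above; the wrapper below is a hypothetical helper, not project code:

```c
/* Illustrative only: HBM-backed aligned allocation via memkind's hbwmalloc.
 * Link with -lmemkind; hbw_posix_memalign/hbw_free are real library calls. */
#include <hbwmalloc.h>
#include <stdio.h>

static void *hbm_alloc(size_t size) {
    if (size == 0) {
        return NULL;                                   /* the "memalign 0 byte check" */
    }
    void *ptr = NULL;
    int rc = hbw_posix_memalign(&ptr, 64, size);       /* 64-byte alignment for SIMD */
    if (rc != 0) {
        fprintf(stderr, "HBM allocation of %zu bytes failed (rc=%d)\n", size, rc);
        return NULL;
    }
    return ptr;
}

int main(void) {
    void *buf = hbm_alloc(1u << 20);
    /* ... use buf as scratch/weight memory ... */
    hbw_free(buf);
    return 0;
}
```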
6336d834ec convert : fix F32 ftype not being saved (#3048) 2023-09-07 14:27:42 -04:00
00d62adb79 fix some warnings from gcc and clang-tidy (#3038)
Co-authored-by: xaedes <xaedes@gmail.com>
b1195
2023-09-07 13:22:29 -04:00
4fa2cc1750 make : improve test target (#3031) b1194 2023-09-07 10:15:01 -04:00
5ffab089a5 make : fix CPPFLAGS (#3035) b1193 2023-09-07 10:13:50 -04:00
15b67a66c2 llama-bench : use two tokens in the warmup run for prompt evals (#3059) b1192 2023-09-07 15:52:34 +02:00
be8c9c245b metal : parallel RoPE on Metal (#3024)
* Parallel RoPE on metal

* PR suggestion

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-09-07 16:45:01 +03:00
be6beeb8d7 metal : correct fix of kernel_norm (#3060)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-07 16:42:42 +03:00
c4f496648c metal : fix kernel_norm (fixes Falcon on Metal) (#3057)
* metal : fix kernel_norm

ggml-ci

* metal : put warning in kernel_norm to not combine the loops

* metal : restore original F16 mat-vec multiplication

It works after the norm fixes

* common : don't do warm-up with more than n_batch tokens (close #3058)

ggml-ci

* metal : minor
b1189
2023-09-07 15:49:09 +03:00
fec2fb19e4 ggml : posixify madvise and pagesize (#3037)
* llama : use posix_madvise() instead of madvise() derived from BSD

sed -i 's,\<madvise\>,posix_&,g;s,\<MADV_,POSIX_&,g' llama.cpp

* ggml : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD

sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml.c

* metal : use sysconf(_SC_PAGESIZE) instead of getpagesize() derived from BSD

sed -i 's,getpagesize(),sysconf(_SC_PAGESIZE),g' ggml-metal.m
b1188
2023-09-07 11:15:06 +03:00
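
The sed one-liners above show the exact substitutions; a minimal standalone example of the portable calls being switched to, posix_madvise() and sysconf(_SC_PAGESIZE), looks roughly like this (illustrative, not the project's actual mmap path):

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page_size = sysconf(_SC_PAGESIZE);           /* instead of getpagesize() */
    size_t len = (size_t) page_size * 16;

    void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* instead of madvise(): advise sequential access through the POSIX API */
    int rc = posix_madvise(addr, len, POSIX_MADV_SEQUENTIAL);
    if (rc != 0) {
        fprintf(stderr, "posix_madvise failed: %d\n", rc);
    }

    munmap(addr, len);
    return 0;
}
```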
178b1850eb k-quants : fix zero-weight guard in Q6_K (ref #3040) b1187 2023-09-06 12:40:57 +03:00
ea2c85d5d2 convert-llama-ggml-to-gguf: Try to handle files older than GGJTv3 (#3023)
* convert-llama-ggmlv3-to-gguf: Try to handle files older than GGJTv3

* Better error messages for files that cannot be converted

* Add file type to GGUF output

* Rename to convert-llama-ggml-to-gguf.py

* Include original file type information in description

* Improve some informational output
2023-09-06 02:49:11 -06:00
9912b9efc8 build : add LLAMA_METAL_NDEBUG flag (#3033) b1185 2023-09-05 18:21:10 -04:00
9e2023156e make : use new flag variables for recent changes (#3019) b1184 2023-09-05 15:12:00 -04:00
de2fe892af examples : replace fprintf to stdout with printf (#3017) b1183 2023-09-05 15:10:27 -04:00
c9c3220c48 convert: fix convert.py not working with int filename_stem (#3028)
* fix implicit int to string conversion
* convert : remove an obsolete pyright comment

---------

Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-09-05 19:41:00 +02:00
d59bd97065 Guard against all weights in a super-block being zero (#3010)
* Guard against all weights in a super-block being zero

* Also guard against extremely small weights

Closes #2982 

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
b1181
2023-09-05 09:55:33 +02:00
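
A hedged sketch of the kind of guard this entry describes: when computing a per-block quantization scale, treat an all-zero (or vanishingly small) block specially instead of dividing by a near-zero maximum. Block size, threshold, and names below are illustrative, not the actual k-quants code:

```c
#include <math.h>
#include <stddef.h>

#define BLOCK_SIZE 16            /* hypothetical block size for illustration */

static float block_scale(const float *w, float *inv_scale) {
    float amax = 0.0f;
    for (size_t i = 0; i < BLOCK_SIZE; ++i) {
        const float a = fabsf(w[i]);
        if (a > amax) amax = a;
    }
    /* guard: all weights zero or extremely small -> zero scale, no division */
    if (amax < 1e-30f) {
        *inv_scale = 0.0f;
        return 0.0f;
    }
    const float scale = amax / 127.0f;   /* map the block onto a signed 8-bit range */
    *inv_scale = 1.0f / scale;
    return scale;
}
```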
35938ee3b0 llama : update logic for number of threads when using BLAS b1180 2023-09-05 10:46:39 +03:00
921772104b speculative : add grammar support (#2991)
* speculative : add grammar support

* grammars : add json_arr.gbnf

* grammar : add comments to new grammar file

* grammar : remove one nested level

* common : warm-up with 2 tokens - seems to work better

* speculative : print draft token pieces

* speculative : reuse grammar parser + better logs and comments

* speculative : avoid grammar_mem

* make : fix speculative build
b1179
2023-09-05 08:46:17 +03:00
2ba85c8609 py : minor 2023-09-04 22:50:50 +03:00
e36ecdccc8 build : on Mac OS enable Metal by default (#2901)
* build : on Mac OS enable Metal by default

* make : try to fix build on Linux

* make : move targets back to the top

* make : fix target clean

* llama : enable GPU inference by default with Metal

* llama : fix vocab_only logic when GPU is enabled

* common : better `n_gpu_layers` assignment

* readme : update Metal instructions

* make : fix merge conflict remnants

* gitignore : metal
b1177
2023-09-04 22:26:24 +03:00
bd33e5ab92 ggml-opencl : store GPU buffer in ggml_tensor::extra (#2994) b1176 2023-09-04 14:59:52 +02:00
3103568144 llama-bench : make cpp file non-executable (#2999) b1175 2023-09-04 13:40:18 +03:00
5b8530d88c make : add speculative example (#3003) b1174 2023-09-04 13:39:57 +03:00
e4386f417f server : add a subtle loading animation to the edit box (#2466)
* editorconfig: add override for the server HTML (which already is 2-space indented)

* server: add a subtle loading animation to the edit box
b1173
2023-09-04 16:28:55 +08:00
35195689cd 2x faster (rms) norm cuda kernels (3.7% e2e improvement) (#2985)
* 2x faster (rms) norm cuda kernels

* Fix code style
b1172
2023-09-04 08:53:30 +02:00
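
For reference, the operation these kernels accelerate is RMS normalization; a plain scalar version of the math (no learned scale, eps handling illustrative) is sketched below. It is not the CUDA code from the PR:

```c
#include <math.h>
#include <stddef.h>

/* y_i = x_i / sqrt(mean(x^2) + eps) */
static void rms_norm(const float *x, float *y, size_t n, float eps) {
    float sum_sq = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        sum_sq += x[i] * x[i];
    }
    const float inv_rms = 1.0f / sqrtf(sum_sq / (float) n + eps);
    for (size_t i = 0; i < n; ++i) {
        y[i] = x[i] * inv_rms;
    }
}
```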
cf9b08485c ggml-alloc : use virtual memory for measurement (#2973)
* ggml-alloc : use virtual memory for measurement

* compatibility fixes for MAP_ANONYMOUS

* fallback to fixed address for systems without virtual memory
b1171
2023-09-03 20:34:09 +02:00
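
A rough sketch of the address-space reservation idea described above (assumptions: POSIX mmap and a 64-bit address space; this is not the actual ggml-alloc code): map a large range with PROT_NONE so offsets can be measured against valid addresses without committing physical memory, with a fallback when the mapping fails:

```c
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_ANONYMOUS
#define MAP_ANONYMOUS MAP_ANON   /* older BSD spelling; one of the compat fixes above */
#endif

int main(void) {
    const size_t reserve = (size_t) 1 << 36;   /* 64 GiB of address space, no RAM committed */
    void *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) {
        /* e.g. fall back to a fixed dummy base address on systems
         * without (enough) virtual memory, as the commit mentions */
        fprintf(stderr, "reservation failed, falling back\n");
        return 1;
    }
    printf("measurement base: %p\n", base);
    munmap(base, reserve);
    return 0;
}
```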
47068e5170 speculative : PoC for speeding-up inference via speculative sampling (#2926)
* speculative : initial example

* speculative : print encoding speed

* speculative : add --draft CLI arg
b1170
2023-09-03 15:12:08 +03:00
8f429fa511 perplexity : fix ETA by warming up the model with an empty run b1169 2023-09-03 13:43:17 +03:00
6519e9c99c gguf(python): Fix special vocab handling when id < 0 (#2984) 2023-09-03 04:38:43 -06:00
b7f2aa9e51 metal : restore 363f0bf and fix reduce in F16_F32 kernels (#2986) 2023-09-03 13:23:33 +03:00
73a12a6344 cov : disable comment in PRs (#2989) 2023-09-03 13:19:01 +03:00
3730134776 llama : fix bpe tokenize from byte (#2889) b1165 2023-09-03 13:18:09 +03:00
d9151e6f57 metal : revert 6af0bab until we fix it
This restores the generated text to be the same as before #2959
2023-09-03 12:40:56 +03:00
afc43d5f82 cov : add Code Coverage and codecov.io integration (#2928)
* update .gitignore

* makefile: add coverage support (lcov, gcovr)

* add code-coverage workflow

* update code coverage workflow

* run on ubuntu 20.04

* use gcc-8

* check why the job hangs

* add env vars

* add LLAMA_CODE_COVERAGE=1 again

* add CODECOV_TOKEN
* add missing make lcov-report

* install lcov

* update make file -pb flag

* remove unused GGML_NITER from workflows

* wrap coverage output files in COV_TARGETS
b1163
2023-09-03 11:48:49 +03:00
6460f758db opencl : fix a bug in ggml_cl_pool_malloc() for ggml_cl_mul_mat_f32() (#2955)
Co-authored-by: Wentai Zhang <wentaizhang@tencent.com>
b1162
2023-09-03 11:46:44 +03:00
ca82cf7bac metal : more optimizations (#2959)
* Very minor speedup via simd-group synchronization in f16 x f32

* Another very minor speedup on metal

* Quite significant PP speedup on metal

* Another attempt

* Minor

* Massive improvement for TG for fp16

* ~4-5% improvement for Q8_0 TG on metal

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-03 11:06:22 +03:00
6a31a3bd98 swift : add support for k-quants (#2983) 2023-09-03 09:21:05 +03:00
cff7b0bf07 convert.py : BPE fixes (#2938)
* convert.py: BPE fixes?

* Remove unnecessary conditional in addl token error handling
2023-09-03 08:52:13 +03:00
340af42f09 docs : add catai to README.md (#2967) 2023-09-03 08:50:51 +03:00
c42f0ec6b3 examples : fix gpt-neox (#2943)
Co-authored-by: mmnga <mmnga1mmnga@gmail.com>
b1157
2023-09-03 08:36:28 +03:00
2753415afd swift : add missing c file to Package.swift (#2978) 2023-09-03 08:27:25 +03:00
bc054af97a make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS (#2886)
* make : remove unused -DGGML_BIG_ENDIAN

* make : put preprocessor stuff in CPPFLAGS

* make : pass Raspberry Pi arch flags to g++ as well

* make : support overriding CFLAGS/CXXFLAGS/CPPFLAGS/LDFLAGS

* make : fix inverted conditional
b1155
2023-09-03 08:26:59 +03:00