llama.cpp/examples
Douglas Hanley 03bf161eb6 llama : support batched embeddings (#5466)
* batched embeddings: pool outputs by sequence id; updated the embedding example

* bring back non-causal attention

* embd : minor improvements

* llama : minor

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-13 14:06:58 +02:00