llama.cpp/examples
Steward Garcia ce18d727a4 clip : enable gpu backend (#4205)
* clip: enable CUDA backend

* add missing kernels

* add enough padding for alignment

* remove ggml_repeat of clip.cpp

* add metal backend

* llava : fixes

- avoid ggml_repeat
- use GGML_USE_ instead of CLIP_USE_ macros
- remove unused vars

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-12-29 18:52:15 +02:00