Johannes Gäßler
10d2af0eaa
llama/ggml: add LLM training support ( #10544 )
* llama/ggml: add LLM training support
  - more compact progress bar
  - llama_save_model_to_file
  - llama_opt_param_filter
  - ggml_graph_dup force_grads
  - refactor ggml_opt, fix test-opt
* remove logits_all
* refactor CUDA implementation for ACC
* reset graph at beginning of opt period
2025-05-12 14:44:49 +02:00