llama : fix parameter order for hybrid memory initialization (#14725)
@@ -38,9 +38,9 @@ llama_memory_hybrid::llama_memory_hybrid(
         type_v,
         v_trans,
         offload,
+        1,
         kv_size,
         n_seq_max,
-        1,
         n_pad,
         n_swa,
         swa_type
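Why the wrong order went unnoticed: every argument in this stretch of the call is a plain integral or boolean value, so an argument shifted out of position can still compile cleanly through implicit conversions, and the mistake only surfaces as wrong behaviour at runtime. Below is a minimal standalone C++ sketch of that failure mode, assuming a constructor with the same argument shape as this hunk; the struct and parameter names (including the `flag` slot that receives the literal 1) are hypothetical stand-ins, not the actual constructor called from llama_memory_hybrid.

    // Hypothetical, simplified stand-in for a constructor with this argument shape.
    // None of these names are taken from the real call target.
    #include <cstdint>
    #include <iostream>

    struct cache_params {
        bool     offload;
        bool     flag;      // stand-in for the slot that receives the literal 1
        uint32_t kv_size;
        uint32_t n_seq_max;
        uint32_t n_pad;

        cache_params(bool offload_, bool flag_, uint32_t kv_size_, uint32_t n_seq_max_, uint32_t n_pad_)
            : offload(offload_), flag(flag_), kv_size(kv_size_), n_seq_max(n_seq_max_), n_pad(n_pad_) {}
    };

    int main() {
        // Pre-fix shape: the literal 1 comes after n_seq_max, so kv_size is silently
        // converted to bool for the flag slot, n_seq_max lands in kv_size, and the
        // literal 1 lands in n_seq_max. The compiler accepts all of it.
        cache_params before(true, /*kv_size*/ 4096, /*n_seq_max*/ 8, 1, /*n_pad*/ 32);

        // Post-fix shape: the literal 1 directly follows offload and every value
        // lands in its intended parameter.
        cache_params after(true, 1, 4096, 8, 32);

        std::cout << "before: kv_size=" << before.kv_size
                  << " n_seq_max=" << before.n_seq_max
                  << " n_pad="     << before.n_pad << "\n";
        std::cout << "after:  kv_size=" << after.kv_size
                  << " n_seq_max=" << after.n_seq_max
                  << " n_pad="     << after.n_pad << "\n";
        return 0;
    }

Moving the literal 1 up so that it directly follows offload, as the hunk above does, makes each subsequent value land in its intended parameter.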