# Multimodal Support in llama.cpp
This directory provides multimodal capabilities for `llama.cpp`. Initially intended as a showcase for running LLaVA models, its scope has expanded significantly over time to include various other vision-capable models. As a result, LLaVA is no longer the only multimodal architecture supported.
> [!IMPORTANT]
>
> Multimodal support can be viewed as a sub-project within `llama.cpp`. It is under very heavy development, and breaking changes are expected.
The naming and structure related to multimodal support have evolved, which might cause some confusion. Here's a brief timeline to clarify:
- #3436: Initial support for LLaVA 1.5 was added, introducing `llava.cpp` and `clip.cpp`. The `llava-cli` binary was created for model interaction.
- #4954: Support for MobileVLM was added, becoming the second vision model supported. This built upon the existing `llava.cpp`, `clip.cpp`, and `llava-cli` infrastructure.
- **Expansion & Fragmentation**: Many new models were subsequently added (e.g., #7599, #10361, #12344, and others). However, `llava-cli` lacked support for the increasingly complex chat templates required by these models. This led to the creation of model-specific binaries like `qwen2vl-cli`, `minicpmv-cli`, and `gemma3-cli`. While functional, this proliferation of command-line tools became confusing for users.
- #12849: `libmtmd` was introduced as a replacement for `llava.cpp`. Its goals include providing a single, unified command-line interface, improving the user/developer experience (UX/DX), and supporting both audio and image inputs.
- #13012: `mtmd-cli` was added, consolidating the various model-specific CLIs into a single tool powered by `libmtmd`.
## Pre-quantized models
See the list of pre-quantized models here.
## How it works and what is `mmproj`?
Multimodal support in `llama.cpp` works by encoding images into embeddings using a separate model component, and then feeding these embeddings into the language model.

This approach keeps the multimodal components distinct from the core `libllama` library. Separating these allows for faster, independent development cycles. While many modern vision models are based on Vision Transformers (ViTs), their specific pre-processing and projection steps can vary significantly. Integrating this diverse complexity directly into `libllama` is currently challenging.
Consequently, running a multimodal model typically requires two GGUF files:
- The standard language model file.
- A corresponding multimodal projector (`mmproj`) file, which handles the image encoding and projection.
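In practice, both files are passed to the same program when loading the model. Below is a minimal invocation sketch using `llama-mtmd-cli`; the file names are placeholders, and flag names may differ between versions, so check `--help` for your build.

```sh
# Load the text model together with its multimodal projector,
# then ask a question about a local image (placeholder file names).
llama-mtmd-cli -m model.gguf --mmproj mmproj.gguf \
    --image image.jpg -p "Describe this image."
```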
## What is `libmtmd`?
As outlined in the history, `libmtmd` is the modern library designed to replace the original `llava.cpp` implementation for handling multimodal inputs.

Built upon `clip.cpp` (similar to `llava.cpp`), `libmtmd` offers several advantages:
- Unified Interface: Aims to consolidate interaction for various multimodal models.
- Improved UX/DX: Features a more intuitive API, inspired by the `Processor` class in the Hugging Face `transformers` library.
- Flexibility: Designed to support multiple input types (text, audio, images) while respecting the wide variety of chat templates used by different models.
## How to obtain `mmproj`
Multimodal projector (`mmproj`) files are specific to each model architecture.

For the following models, you can use `convert_hf_to_gguf.py` with the `--mmproj` flag to get the `mmproj` file (see the conversion example after the list):
- Gemma 3 - Note: 1B variant does not have vision support
- SmolVLM (from HuggingFaceTB)
- SmolVLM2 (from HuggingFaceTB)
- Pixtral 12B - only works with `transformers`-compatible checkpoint
- Qwen 2 VL and Qwen 2.5 VL (from Qwen)
- Mistral Small 3.1 24B
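For example, converting a locally downloaded checkpoint usually takes two runs of the script: one for the language model and one for the projector. The sketch below uses placeholder paths and output names; run `convert_hf_to_gguf.py --help` to confirm the flags available in your checkout.

```sh
# Convert the language model part to GGUF (placeholder paths).
python convert_hf_to_gguf.py path/to/hf-checkpoint --outfile model.gguf

# Convert the multimodal projector from the same checkpoint.
python convert_hf_to_gguf.py path/to/hf-checkpoint --outfile mmproj.gguf --mmproj
```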
For older models, please refer to the relevant guide for instructions on how to obtain or create them: