server: fix tool calls of DeepSeek R1 Qwen, return reasoning_content (Command R7B & DeepSeek R1) unless --reasoning-format none (#11607)

* extract & return thoughts in reasoning_content field (unless --reasoning-format none) for DeepSeek R1 & Command R7B

* tool-calls: add deepseek r1 template (models/templates/llama-cpp-deepseek-r1.jinja) + hackommodate broken official template

* tool-calls: accommodate variety of wrong tool call opening tags both R1 Qwen 32B and 7B distills like to spit out
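
The idea behind accommodating those wrong opening tags can be sketched as a normalization pass (the variant list below is a hypothetical illustration, not the exact set the parser handles): rewrite any known-bad opener to one canonical tag so a single downstream parser suffices.

```python
import re

# Illustrative set of malformed tool-call openers the R1 Qwen distills
# tend to emit in place of the template's official tag.
WRONG_OPENERS = re.compile(
    r"<(?:tool_call|tool\s*call|function_call)>",
    re.IGNORECASE,
)

def normalize_tool_call_tags(text: str) -> str:
    # Collapse every variant to a single canonical opening tag.
    return WRONG_OPENERS.sub("<tool_call>", text)
```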

* server/oai: ensure content is null when there are tool calls, and reasoning_content appears before content for readability
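
The resulting message shape can be illustrated as follows (the field values are made up; only the structure follows the bullet above): when tool calls are present, `content` is null, and `reasoning_content` is serialized before `content`.

```python
import json

# Example assistant message after this change (values are illustrative).
message = {
    "role": "assistant",
    "reasoning_content": "The user asked for weather, so call the tool.",
    "content": None,  # null whenever tool_calls is populated
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": json.dumps({"location": "Paris"}),
            },
        }
    ],
}
print(json.dumps(message, indent=2))
```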

* tool-calls: add DeepSeek R1 Qwen distills to server/README.md & server tests

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

This commit is contained in:
Olivier Chafik
2025-02-13 10:05:16 +00:00
committed by GitHub
parent 27e8a23300
commit c7f460ab88
17 changed files with 1023 additions and 316 deletions

scripts/get_chat_template.py Normal file → Executable file
@@ -7,9 +7,8 @@
./scripts/get_chat_template.py model_id [variant]
Examples:
./scripts/get_chat_template.py NousResearch/Meta-Llama-3-8B-Instruct
./scripts/get_chat_template.py NousResearch/Hermes-3-Llama-3.1-8B tool_use
./scripts/get_chat_template.py meta-llama/Llama-3.2-3B-Instruct
./scripts/get_chat_template.py CohereForAI/c4ai-command-r-plus tool_use
./scripts/get_chat_template.py microsoft/Phi-3.5-mini-instruct
'''
import json