llama.cpp/mypy.ini


[mypy]
strict = true
allow_untyped_calls = true
allow_untyped_defs = true
allow_incomplete_defs = true
disable_error_code = import-untyped
warn_return_any = false
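
Taken together, these settings keep mypy's strict mode for the repository's Python conversion scripts while relaxing the checks that would reject their untyped helper code: untyped and partially annotated definitions are tolerated, calls into them are allowed, untyped third-party imports are not reported as errors, and functions that pass along the resulting Any values do not trigger a warning.

As a rough illustration (hypothetical code, not from the repository), a snippet like the following type-checks under this configuration but would be rejected under a plain strict = true:

# example.py (hypothetical)
def legacy_helper(x, y):              # tolerated via allow_untyped_defs
    return x + y

def typed_entry(a: int, b: int) -> int:
    # calling the untyped helper is tolerated via allow_untyped_calls;
    # its result is Any, and warn_return_any = false suppresses the
    # "Returning Any from function declared to return int" warning
    return legacy_helper(a, b)

Because mypy picks up mypy.ini from the directory it is run in, invoking mypy from the repository root applies these settings to the conversion scripts automatically.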