Add tokenizer test + revert to C++11 (#355)

* Add test-tokenizer-0 to run a few example tokenizations (see the first sketch below this list) - feel free to expand
* Added an option to the convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just the LLaMA vocab data (used for tests)
* Added a utility to load the vocabulary file from the previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break anything); see the second sketch below this list
* Rename gpt_vocab -> llama_vocab
* All CMake binaries now go into ./bin/
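
As a rough illustration of the kind of check a tokenizer test like test-tokenizer-0 performs, here is a minimal, self-contained sketch: it tokenizes a few strings and compares the results against expected token IDs. The greedy longest-match tokenizer and the toy vocab below are stand-ins for illustration only; the real test instead loads ./models/ggml-vocab.bin and calls the library tokenizer, whose API may differ.

// Sketch of a tokenizer regression test (stand-in code, not the real test-tokenizer-0).
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Stand-in greedy longest-match tokenizer over a tiny hard-coded vocab.
static std::vector<int> tokenize(const std::map<std::string, int> & vocab, const std::string & text) {
    std::vector<int> out;
    size_t i = 0;
    while (i < text.size()) {
        size_t best_len = 0;
        int    best_id  = -1;
        for (const auto & kv : vocab) {
            const std::string & tok = kv.first;
            if (tok.size() > best_len && text.compare(i, tok.size(), tok) == 0) {
                best_len = tok.size();
                best_id  = kv.second;
            }
        }
        if (best_id < 0) { ++i; continue; } // skip unknown byte
        out.push_back(best_id);
        i += best_len;
    }
    return out;
}

int main() {
    const std::map<std::string, int> vocab = {
        {"Hello", 1}, {" world", 2}, {"!", 3},
    };
    // expected tokenizations, keyed by input string
    const std::map<std::string, std::vector<int>> tests = {
        {"Hello world!", {1, 2, 3}},
        {"Hello!",       {1, 3}},
    };
    for (const auto & t : tests) {
        const std::vector<int> res = tokenize(vocab, t.first);
        if (res != t.second) {
            fprintf(stderr, "FAIL: '%s'\n", t.first.c_str());
            return 1;
        }
    }
    printf("all tokenizer tests passed\n");
    return 0;
}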
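
And a tiny sketch of the C++17 -> C++11 pattern behind the std::string_view revert (illustrative only; count_spaces is a made-up example, not code from this commit): an interface that took a std::string_view can take a const std::string & (or a pointer plus length) instead, avoiding the C++17 dependency without copying the string.

#include <cstddef>
#include <string>

// C++17 version might have been: size_t count_spaces(std::string_view s);
// C++11 replacement: take a const reference (no copy, no string_view needed).
std::size_t count_spaces(const std::string & s) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        if (s[i] == ' ') {
            ++n;
        }
    }
    return n;
}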
Georgi Gerganov, 2023-03-21 17:29:41 +02:00 (committed by GitHub)
commit eb34620aec, parent 2e664f1ff4
11 changed files with 249 additions and 148 deletions


@@ -44,7 +44,7 @@ bool llama_model_quantize(const std::string & fname_inp, const std::string & fna
         return false;
     }
-    gpt_vocab vocab;
+    llama_vocab vocab;
     printf("%s: loading model from '%s'\n", __func__, fname_inp.c_str());