llama.cpp/examples/alpaca.sh

#!/bin/bash
#
# Temporary script - will be removed in the future
#
cd "$(dirname "$0")"
cd ..

# Note (from PR #2074): llama.cpp now requires GGML v3 model files (named
# *ggmlv3*.bin); the original first-generation ggml-alpaca-7b-q4.bin file no
# longer loads after the breaking changes discussed in
# https://github.com/ggerganov/llama.cpp/issues/382.
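#
# Flag summary (for reference only; see ./main --help for the authoritative list):
#   -m                 model file to load
#   --color            colorize the output
#   -f                 read the initial prompt from ./prompts/alpaca.txt
#   --ctx_size 2048    context window size in tokens
#   -n -1              keep predicting tokens until the model stops (-1 = no limit)
#   -ins               interactive instruction (Alpaca-style) mode
#   -b 256             batch size for prompt processing
#   --top_k / --temp   sampling parameters
#   --repeat_penalty   1.1 brings the defaults closer to alpaca.cpp (see #1107)
#   -t 7               number of CPU threads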
./main -m ./models/alpaca.13b.ggmlv3.q8_0.bin \
    --color \
    -f ./prompts/alpaca.txt \
    --ctx_size 2048 \
    -n -1 \
    -ins -b 256 \
    --top_k 10000 \
    --temp 0.2 \
    --repeat_penalty 1.1 \
    -t 7
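
To try the script, build llama.cpp so that ./main exists in the repository root and place a GGML v3 Alpaca model at the path given to -m above (the filename shown is just the example the script ships with; substitute whichever model file you actually have). The script changes into the repository root itself, so it can be launched from anywhere, for example:

chmod +x examples/alpaca.sh   # only needed if the file is not already executable
./examples/alpaca.sh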