Llama.cpp
Build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
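Once make finishes, the CLI binaries are placed in the repository root. A quick sanity check, assuming a recent build where the main binary is named llama-cli (older builds call it main):

# verify the build produced a working CLI binary
./llama-cli --help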
Convert Hugging Face Model to GGUF
pip install -r requirements.txt
python convert_hf_to_gguf.py --help
# Apple Silicon (M1) MPS does not support bf16, so export the weights as f16
python convert_hf_to_gguf.py ~/Documents/MODELS/Qwen2-0.5B --outfile ~/Documents/MODELS/qwen2-0.5b-fp16.gguf --outtype f16
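The resulting GGUF file can be loaded directly by the CLI built above. A minimal sketch, assuming the llama-cli and llama-quantize binaries from the make step (binary names have changed across llama.cpp versions) and the output path used above:

# run a short generation with the converted model
# -m: model path, -p: prompt, -n: number of tokens to generate
./llama-cli -m ~/Documents/MODELS/qwen2-0.5b-fp16.gguf -p "Hello, my name is" -n 64

# optionally quantize the f16 GGUF to a smaller 4-bit variant
./llama-quantize ~/Documents/MODELS/qwen2-0.5b-fp16.gguf ~/Documents/MODELS/qwen2-0.5b-q4_k_m.gguf Q4_K_M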