NanoGPT Tutorial

Training a small character-level GPT on the Shakespeare dataset with [https://github.com/karpathy/nanoGPT nanoGPT].

Prepare the character-level Shakespeare dataset first:

<syntaxhighlight lang="bash">
python data/shakespeare_char/prepare.py
length of dataset in characters: 1,115,394
all the unique characters:
 !$&',-.3:;?ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
vocab size: 65
train has 1,003,854 tokens
val has 111,540 tokens
</syntaxhighlight>
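The numbers above come from a simple character-level scheme. A minimal sketch of what prepare.py does, assuming it builds the vocabulary from the unique characters and writes uint16 token ids with a 90/10 split (which matches the 1,003,854 / 111,540 counts):

<syntaxhighlight lang="python">
# Minimal sketch of character-level tokenization as in prepare.py
# (assumption: vocabulary = sorted unique characters, 90/10 split,
# ids stored as uint16, which is plenty for vocab_size 65).
import numpy as np

with open('data/shakespeare_char/input.txt', 'r') as f:
    text = f.read()                                  # 1,115,394 characters

chars = sorted(set(text))                            # the 65 unique characters
stoi = {ch: i for i, ch in enumerate(chars)}         # char -> integer id

ids = np.array([stoi[c] for c in text], dtype=np.uint16)
n = int(0.9 * len(ids))                              # 90/10 train/val split
ids[:n].tofile('data/shakespeare_char/train.bin')    # 1,003,854 tokens
ids[n:].tofile('data/shakespeare_char/val.bin')      #   111,540 tokens
</syntaxhighlight>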
With a CUDA GPU available, training runs with the default config:

<syntaxhighlight lang="bash">
python train.py config/train_shakespeare_char.py
</syntaxhighlight>
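train.py reads module-level defaults, then lets a config file and <code>--key=value</code> arguments override them, which is how the runs below shrink the model. A condensed sketch of this override pattern (the real logic lives in nanoGPT's configurator.py; this version is simplified):

<syntaxhighlight lang="python">
# Condensed sketch of nanoGPT's "poor man's configurator" pattern
# (simplified: the real configurator.py also type-checks each override).
import sys
from ast import literal_eval

# module-level defaults, as defined in train.py
device = 'cuda'
batch_size = 12

for arg in sys.argv[1:]:
    if arg.startswith('--'):
        key, val = arg[2:].split('=', 1)
        try:
            val = literal_eval(val)   # "12" -> 12, "False" -> False
        except (SyntaxError, ValueError):
            pass                      # keep strings like "mps" as-is
        assert key in globals(), f"unknown config key: {key}"
        globals()[key] = val
    else:
        exec(open(arg).read())        # a bare argument is a config file

print(f"device={device}, batch_size={batch_size}")
</syntaxhighlight>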

Without a CUDA-enabled GPU (here a MacBook Air on a PyTorch nightly build), the default config fails with <code>AssertionError: Torch not compiled with CUDA enabled</code>. Passing <code>--device=cpu</code> and scaling the model down keeps a 2000-iteration run short:

<syntaxhighlight lang="bash">
python train.py config/train_shakespeare_char.py --device=cpu --compile=False --eval_iters=20 --log_interval=1 --block_size=64 --batch_size=12 --n_layer=4 --n_head=4 --n_embd=128 --max_iters=2000 --lr_decay_iters=2000 --dropout=0.0
step 2000: train loss 1.7640, val loss 1.8925
saving checkpoint to out-shakespeare-char
iter 2000: loss 1.6982, time 306.45ms, mfu 0.05%
</syntaxhighlight>

(mfu is the estimated model flops utilization, measured against an A100's bfloat16 peak, so it is naturally tiny on CPU.)
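The <code>--block_size</code> and <code>--batch_size</code> flags control how training examples are drawn from train.bin. A minimal sketch of that sampling, modeled on the get_batch() helper in train.py and assuming the uint16 layout written by prepare.py:

<syntaxhighlight lang="python">
# Minimal sketch of random batch sampling, modeled on train.py's get_batch()
# (assumption: train.bin holds raw uint16 token ids from prepare.py).
import numpy as np
import torch

block_size, batch_size = 64, 12
data = np.memmap('data/shakespeare_char/train.bin', dtype=np.uint16, mode='r')

def get_batch():
    ix = torch.randint(len(data) - block_size, (batch_size,))
    x = torch.stack([torch.from_numpy(data[i:i+block_size].astype(np.int64)) for i in ix])
    y = torch.stack([torch.from_numpy(data[i+1:i+1+block_size].astype(np.int64)) for i in ix])
    return x, y  # y is x shifted by one: the next-character targets
</syntaxhighlight>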

Sampling from the saved checkpoint produces recognizably Shakespeare-shaped, if garbled, text, which is expected for a 0.80M-parameter model after such a short run:

<syntaxhighlight lang="bash">
python sample.py --out_dir=out-shakespeare-char --device=cpu
Overriding: out_dir = out-shakespeare-char
Overriding: device = cpu
number of parameters: 0.80M
Loading meta from data/shakespeare_char/meta.pkl...

I by doth what letterd fain flowarrman,
Lotheefuly daught shouss blate thou his though'd that opt--
Hammine than you, not neme your down way.

ELANUS:
I would and murser wormen that more?
...
</syntaxhighlight>
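Under the hood, sample.py decodes one character at a time. A minimal sketch of the autoregressive loop, assuming the model returns (logits, loss) the way nanoGPT's GPT.forward does; the temperature and top_k values are illustrative:

<syntaxhighlight lang="python">
# Minimal sketch of autoregressive sampling, as done by sample.py
# (assumption: model(idx) returns (logits, loss) like nanoGPT's GPT).
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate(model, idx, max_new_tokens, block_size=64,
             temperature=1.0, top_k=200):
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -block_size:]            # crop to the context window
        logits, _ = model(idx_cond)
        logits = logits[:, -1, :] / temperature    # only the last position
        if top_k is not None:
            v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
            logits[logits < v[:, [-1]]] = -float('inf')
        probs = F.softmax(logits, dim=-1)
        idx_next = torch.multinomial(probs, num_samples=1)  # draw next id
        idx = torch.cat((idx, idx_next), dim=1)    # append and continue
    return idx
</syntaxhighlight>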


On Apple silicon, the same run also works with <code>--device=mps</code>, though in this instance it was slightly slower per iteration than the CPU run above:

<syntaxhighlight lang="bash">
python train.py config/train_shakespeare_char.py --device=mps --compile=False --eval_iters=20 --log_interval=1 --block_size=64 --batch_size=12 --n_layer=4 --n_head=4 --n_embd=128 --max_iters=2000 --lr_decay_iters=2000 --dropout=0.0
step 2000: train loss 1.7640, val loss 1.8925
saving checkpoint to out-shakespeare-char
iter 2000: loss 1.6982, time 352.67ms, mfu 0.05%
</syntaxhighlight>

[[Category:Deep Learning]]
[[Category:PyTorch]]