= Installation =
== Conda Installation ==

<syntaxhighlight lang="bash">
conda create --name deeplearning python=3.11
conda activate deeplearning
python --version
# 3.11.5
</syntaxhighlight>

Install PyTorch:

<syntaxhighlight lang="bash">
# conda install pytorch::pytorch torchvision torchaudio -c pytorch
# MPS acceleration is available on MacOS 12.3+
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly
</syntaxhighlight>
 
To verify:
 
<syntaxhighlight lang="python">
import torch
x = torch.rand(5, 3)
print(x)
</syntaxhighlight>
 
Output:
<syntaxhighlight lang="bash">
tensor([[0.2162, 0.2653, 0.6725],
        [0.5371, 0.4180, 0.1353],
        [0.3697, 0.5238, 0.0332],
        [0.6179, 0.5008, 0.9435],
        [0.1182, 0.3233, 0.9071]])
</syntaxhighlight>
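
If you installed the nightly build for MPS acceleration as above, it also helps to confirm that the MPS backend is actually usable before relying on it. A minimal sketch (on a machine with an NVIDIA GPU you would check torch.cuda.is_available() instead):

<syntaxhighlight lang="python">
import torch

# Was this build compiled with MPS support, and is it usable at runtime?
print(torch.backends.mps.is_built())      # True if the build includes MPS support
print(torch.backends.mps.is_available())  # True on macOS 12.3+ with Apple silicon

# Pick a device accordingly and create a tensor on it
device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.rand(5, 3, device=device)
print(x.device)
</syntaxhighlight>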
 
= Concepts =
 
* Scalar: a tensor that holds exactly one value, e.g. torch.tensor(3.0)
* Vector: a tensor with one axis
* Matrix: a tensor with two axes (the example below shows all three cases)
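
The number of axes shows up directly in a tensor's ndim and shape. A minimal sketch (expected outputs shown as comments):

<syntaxhighlight lang="python">
import torch

scalar = torch.tensor(3.0)        # 0 axes
vector = torch.arange(4)          # 1 axis
matrix = torch.ones(2, 3)         # 2 axes

print(scalar.ndim, scalar.shape)  # 0 torch.Size([])
print(vector.ndim, vector.shape)  # 1 torch.Size([4])
print(matrix.ndim, matrix.shape)  # 2 torch.Size([2, 3])
</syntaxhighlight>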
 
== Tensor ==
Tensors are a specialized data structure, very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.
 
Tensors are similar to NumPy’s ndarrays, except that:
* tensors can run on GPUs or other hardware accelerators
* tensors are also optimized for automatic differentiation (see the sketch after the session below)
 
<syntaxhighlight lang="python">
>>> import torch
>>> x = torch.arange(10)
>>> x
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> x.shape
torch.Size([10])
>>> x.numel()
10
>>> X = x.reshape(3,4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: shape '[3, 4]' is invalid for input of size 10
>>> X = x.reshape(2,5) # or X = x.reshape(-1,5)
>>> X
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]])
>>> torch.zeros(2,3,4)
tensor([[[0., 0., 0., 0.],
         [0., 0., 0., 0.],
         [0., 0., 0., 0.]],

        [[0., 0., 0., 0.],
         [0., 0., 0., 0.],
         [0., 0., 0., 0.]]])
>>> torch.ones(2,3,4)
tensor([[[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[1., 1., 1., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]]])
>>> torch.randn(3,4)  # a 3x4 tensor whose entries are sampled from a normal distribution with mean 0 and standard deviation 1
tensor([[ 0.1182, -0.6975,  0.6529,  0.4547],
        [-0.6887,  0.1396,  1.1660,  0.0818],
        [-0.8471,  0.4265,  0.4753,  0.8336]])
</syntaxhighlight>
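
The two bullet points above can be made concrete. A minimal sketch (assuming an Apple-silicon machine for the MPS device; substitute "cuda" on an NVIDIA GPU, otherwise it falls back to CPU):

<syntaxhighlight lang="python">
import torch

# Hardware acceleration: create the tensor directly on an accelerator if one is usable
device = "mps" if torch.backends.mps.is_available() else "cpu"
a = torch.ones(2, 3, device=device)
print(a.device)

# Automatic differentiation: record operations on x, then ask for dy/dx
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # tensor([2., 4., 6.]) since dy/dx_i = 2 * x_i
</syntaxhighlight>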
 
== Tensor Operations ==
 
<syntaxhighlight lang="python">
>>> x = torch.tensor([1.0, 2, 4, 8])
>>> y = torch.tensor([2, 2, 2, 2])
>>> x + y
tensor([ 3.,  4.,  6., 10.])
>>> x - y
tensor([-1.,  0.,  2.,  6.])
>>> x * y
tensor([ 2.,  4.,  8., 16.])
>>> x / y
tensor([0.5000, 1.0000, 2.0000, 4.0000])
>>> x ** y
tensor([ 1.,  4., 16., 64.])
>>> torch.exp(x)
tensor([2.7183e+00, 7.3891e+00, 5.4598e+01, 2.9810e+03])
>>> x.sum()
tensor(15.)
</syntaxhighlight>
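
Note that x above is a floating-point tensor while y holds integers, so the elementwise results are promoted to float. A small sketch of checking this (expected outputs shown as comments):

<syntaxhighlight lang="python">
import torch

x = torch.tensor([1.0, 2, 4, 8])  # a float literal forces a floating-point tensor
y = torch.tensor([2, 2, 2, 2])    # all-integer literals give an integer tensor

print(x.dtype)        # torch.float32
print(y.dtype)        # torch.int64
print((x + y).dtype)  # torch.float32 -- the integer operand is promoted to float
</syntaxhighlight>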




[[Category:Deep Learning]]
[[Category:PyTorch]]
