Learn the Basics: Build the Neural Network

https://pytorch.org/tutorials/beginner/basics/buildmodel_tutorial.html
https://github.com/pytorch/tutorials/blob/main/beginner_source/basics/buildmodel_tutorial.py

Learn the Basics
https://pytorch.org/tutorials/beginner/basics/intro.html

Neural networks are composed of layers/modules that perform operations on data.

The torch.nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses the nn.Module. A neural network is a module itself that consists of other modules (layers). This nested structure allows for building and managing complex architectures easily.

torch.nn
https://pytorch.org/docs/stable/nn.html

torch.nn.Module
https://pytorch.org/docs/stable/generated/torch.nn.Module.html

subclass /'sʌbklɑːs/
n. a subclass; a subset
vt. to classify into a subclass
comprise /kəmˈpraɪz/
vt. to include; to be composed of

In the following sections, we’ll build a neural network to classify images in the FashionMNIST dataset.

import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

1. Get Device for Training

We want to be able to train our model on an accelerator such as CUDA, MPS, MTIA, or XPU. If the current accelerator is available, we will use it. Otherwise, we use the CPU.

#!/usr/bin/env python
# coding=utf-8

import torch

print(f"torch.__version__ = {torch.__version__}")

device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
print(f"torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"Using {device} device")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124
torch.accelerator.current_accelerator().type = cuda
Using cuda device

Process finished with exit code 0

Note: on torch 2.5.1, running the code above raised:

  • AttributeError: module 'torch' has no attribute 'accelerator'

The installed torch version was torch = 2.5.1; upgrading to torch = 2.6.0 resolves it.

(base) yongqiang@yongqiang:~$ pip install --upgrade torch==2.6.0
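
If upgrading is not an option, here is a minimal fallback sketch, assuming only torch.cuda as a secondary check, that runs on versions both with and without the torch.accelerator API:

#!/usr/bin/env python
# coding=utf-8

import torch

# Prefer the torch.accelerator API (PyTorch >= 2.6); otherwise fall back to CUDA/CPU.
if hasattr(torch, "accelerator") and torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator().type
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

print(f"Using {device} device")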

2. Define the Class

We define our neural network by subclassing nn.Module, and initialize the neural network layers in __init__. Every nn.Module subclass implements the operations on input data in the forward method.

We create an instance of NeuralNetwork, move it to the device, and print its structure.

To use the model, we pass it the input data. This executes the model’s forward, along with some background operations. Do not call model.forward() directly!

Calling the model on the input returns a 2-dimensional tensor: dim=0 corresponds to each sample in the minibatch, with one output of 10 raw predicted values per class, and dim=1 corresponds to the individual values of each output. We get the prediction probabilities by passing the logits through an instance of the nn.Softmax module.

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

print(f"torch.__version__ = {torch.__version__}")

device = "cpu"
print(f"torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"Using {device} device\n")


class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


model = NeuralNetwork().to(device)
print(model)

X = torch.rand(1, 28, 28, device=device)
logits = model(X)
print(f"\nlogits.shape = {logits.shape}")
print(f"logits =\n{logits}")
pred_probab = nn.Softmax(dim=1)(logits)
print(f"pred_probab =\n{pred_probab}")
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124
torch.accelerator.current_accelerator().type = cuda
Using cpu device

NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)

logits.shape = torch.Size([1, 10])
logits =
tensor([[ 0.0263,  0.0581,  0.0266, -0.0473, -0.0372, -0.0117, -0.0110,  0.0227,
         -0.0125, -0.0842]], grad_fn=<AddmmBackward0>)
pred_probab =
tensor([[0.1033, 0.1066, 0.1033, 0.0960, 0.0970, 0.0995, 0.0995, 0.1029, 0.0994,
         0.0925]], grad_fn=<SoftmaxBackward0>)
Predicted class: tensor([1])

Process finished with exit code 0
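
As noted above, the model should be called as model(X) rather than model.forward(X): nn.Module.__call__ runs registered hooks and other background operations around forward. A small sketch (appended to the script above; the log_shape hook name is made up for illustration) that shows the difference:

# A forward hook fires when the model is called, but not when forward() is called directly.
def log_shape(module, inputs, output):
    print(f"forward hook fired: output shape = {output.shape}")

handle = model.register_forward_hook(log_shape)

_ = model(X)          # hook fires
_ = model.forward(X)  # hook does not fire

handle.remove()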

3. Model Layers

Let’s break down the layers in the FashionMNIST model. To illustrate it, we will take a sample minibatch of 3 images of size 28x28 and see what happens to it as we pass it through the network.

#!/usr/bin/env python
# coding=utf-8

import torch

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")

input_image = torch.rand(3, 28, 28)
print(f"input_image.size() = {input_image.size()}")
print(f"input_image.shape = {input_image.shape}")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

input_image.size() = torch.Size([3, 28, 28])
input_image.shape = torch.Size([3, 28, 28])

Process finished with exit code 0

3.1. nn.Flatten

We initialize the nn.Flatten layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension (at dim=0) is maintained).

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")

input_image = torch.rand(3, 28, 28)
print(f"input_image.size() = {input_image.size()}")
print(f"input_image.shape = {input_image.shape}")

flatten = nn.Flatten()
flat_image = flatten(input_image)
print(f"flat_image.size() = {flat_image.size()}")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

input_image.size() = torch.Size([3, 28, 28])
input_image.shape = torch.Size([3, 28, 28])
flat_image.size() = torch.Size([3, 784])

Process finished with exit code 0
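
With its default arguments (start_dim=1, end_dim=-1), nn.Flatten keeps the minibatch dimension and flattens everything after it, so it matches a manual reshape of each sample. A small sketch to confirm this assumption:

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

input_image = torch.rand(3, 28, 28)

flatten = nn.Flatten()  # defaults: start_dim=1, end_dim=-1

# Reshaping by hand, keeping dim=0 (the minibatch dimension) intact.
manual = input_image.reshape(3, -1)

print(torch.equal(flatten(input_image), manual))  # True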

3.2. nn.Linear

The linear layer is a module that applies a linear transformation on the input using its stored weights and biases.

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")

input_image = torch.rand(3, 28, 28)
print(f"input_image.size() = {input_image.size()}")
print(f"input_image.shape = {input_image.shape}")

flatten = nn.Flatten()
flat_image = flatten(input_image)
print(f"flat_image.size() = {flat_image.size()}")

layer1 = nn.Linear(in_features=28 * 28, out_features=20)
hidden1 = layer1(flat_image)
print(f"hidden1.size() = {hidden1.size()}")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

input_image.size() = torch.Size([3, 28, 28])
input_image.shape = torch.Size([3, 28, 28])
flat_image.size() = torch.Size([3, 784])
hidden1.size() = torch.Size([3, 20])

Process finished with exit code 0
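
Under the hood, nn.Linear computes y = x W^T + b, where weight has shape [out_features, in_features] and bias has shape [out_features]. A minimal sketch to check this against a fresh layer (a sanity check, not a definitive implementation):

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

flat_image = torch.rand(3, 28 * 28)
layer1 = nn.Linear(in_features=28 * 28, out_features=20)

print(layer1.weight.shape)  # torch.Size([20, 784])
print(layer1.bias.shape)    # torch.Size([20])

# The module's output should match the explicit affine transformation.
manual = flat_image @ layer1.weight.T + layer1.bias
print(torch.allclose(layer1(flat_image), manual))  # True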

3.3. nn.ReLU

Non-linear activations are what create the complex mappings between the model’s inputs and outputs. They are applied after linear transformations to introduce nonlinearity, helping neural networks learn a wide variety of phenomena.

In this model, we use nn.ReLU between our linear layers, but there are other activations that can introduce non-linearity in your model.

phenomenon /fəˈnɒmɪnən/
n. a phenomenon; an extraordinary person or thing
plural: phenomena

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")

input_image = torch.rand(3, 28, 28)
print(f"input_image.size() = {input_image.size()}")
print(f"input_image.shape = {input_image.shape}")

flatten = nn.Flatten()
flat_image = flatten(input_image)
print(f"flat_image.size() = {flat_image.size()}")

layer1 = nn.Linear(in_features=28 * 28, out_features=20)
hidden1 = layer1(flat_image)
print(f"hidden1.size() = {hidden1.size()}")

print(f"Before ReLU: \n{hidden1}\n")
hidden1 = nn.ReLU()(hidden1)
print(f"After ReLU: \n{hidden1}\n")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

input_image.size() = torch.Size([3, 28, 28])
input_image.shape = torch.Size([3, 28, 28])
flat_image.size() = torch.Size([3, 784])
hidden1.size() = torch.Size([3, 20])
Before ReLU: 
tensor([[-0.0111, -0.2121,  0.2101,  0.2063, -0.3129,  0.0213, -0.6340,  0.0785,
         -0.0648, -0.2985, -0.4828, -0.0499, -0.4227,  0.0237, -0.2247,  0.5294,
         -0.0424, -0.1106, -0.1568, -0.2915],
        [ 0.1681,  0.2109,  0.2439,  0.3871,  0.1900,  0.0583, -0.6310, -0.2321,
          0.0335, -0.1672, -0.6496,  0.0732, -0.2041,  0.4019, -0.3532,  0.2264,
          0.0464,  0.2359, -0.0233, -0.1444],
        [ 0.0491,  0.0786, -0.0940, -0.0384, -0.2240, -0.1865, -0.3728,  0.0503,
          0.1104, -0.3730, -0.5841,  0.1705, -0.1833,  0.1305, -0.7708, -0.0340,
         -0.0091,  0.0036,  0.1088, -0.2444]], grad_fn=<AddmmBackward0>)

After ReLU: 
tensor([[0.0000, 0.0000, 0.2101, 0.2063, 0.0000, 0.0213, 0.0000, 0.0785, 0.0000,
         0.0000, 0.0000, 0.0000, 0.0000, 0.0237, 0.0000, 0.5294, 0.0000, 0.0000,
         0.0000, 0.0000],
        [0.1681, 0.2109, 0.2439, 0.3871, 0.1900, 0.0583, 0.0000, 0.0000, 0.0335,
         0.0000, 0.0000, 0.0732, 0.0000, 0.4019, 0.0000, 0.2264, 0.0464, 0.2359,
         0.0000, 0.0000],
        [0.0491, 0.0786, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0503, 0.1104,
         0.0000, 0.0000, 0.1705, 0.0000, 0.1305, 0.0000, 0.0000, 0.0000, 0.0036,
         0.1088, 0.0000]], grad_fn=<ReluBackward0>)


Process finished with exit code 0
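
The zeros above are exactly where the pre-activation values were negative: ReLU computes relu(x) = max(0, x) element-wise. A short sketch of that assumption:

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

hidden1 = torch.randn(3, 20)

# nn.ReLU is element-wise max(0, x); clamping at 0 gives the same result.
print(torch.equal(nn.ReLU()(hidden1), torch.clamp(hidden1, min=0)))  # True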

3.4. nn.Sequential

nn.Sequential is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like seq_modules in the sketch and the code below.
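
As a minimal sketch (using the same layers as the code in the next section), calling a nn.Sequential container is equivalent to passing the data through each module by hand, in definition order:

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

flatten = nn.Flatten()
layer1 = nn.Linear(28 * 28, 20)
relu = nn.ReLU()
layer2 = nn.Linear(20, 10)

seq_modules = nn.Sequential(flatten, layer1, relu, layer2)

input_image = torch.rand(3, 28, 28)

# Manual chaining in the same order as the container definition.
manual = layer2(relu(layer1(flatten(input_image))))

print(torch.equal(seq_modules(input_image), manual))  # True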

3.5. nn.Softmax

The last linear layer of the neural network returns logits - raw values in (-∞, ∞) - which are passed to the nn.Softmax module. The logits are scaled to values in [0, 1] representing the model's predicted probabilities for each class. The dim parameter indicates the dimension along which the values must sum to 1.

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")

flatten = nn.Flatten()

layer1 = nn.Linear(in_features=28 * 28, out_features=20)

seq_modules = nn.Sequential(
    flatten,
    layer1,
    nn.ReLU(),
    nn.Linear(20, 10)
)

input_image = torch.rand(3, 28, 28)
logits = seq_modules(input_image)
print(f"logits.size() = {logits.size()}")

softmax = nn.Softmax(dim=1)
pred_probab = softmax(logits)
print(f"pred_probab: \n{pred_probab}")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

logits.size() = torch.Size([3, 10])
pred_probab: 
tensor([[0.0831, 0.0920, 0.1077, 0.0803, 0.1177, 0.0900, 0.1052, 0.0925, 0.1110,
         0.1205],
        [0.0802, 0.0913, 0.1047, 0.0897, 0.1204, 0.0919, 0.1160, 0.0963, 0.1152,
         0.0943],
        [0.0755, 0.0822, 0.1038, 0.0874, 0.1294, 0.0847, 0.1206, 0.1013, 0.1201,
         0.0950]], grad_fn=<SoftmaxBackward0>)

Process finished with exit code 0
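
Because dim=1 was given, each row of pred_probab (one sample's class probabilities) sums to 1. A short self-contained sketch of that property:

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

logits = torch.randn(3, 10)
pred_probab = nn.Softmax(dim=1)(logits)

# Values lie in [0, 1] and each row sums to 1 along dim=1.
print(pred_probab.min().item() >= 0.0, pred_probab.max().item() <= 1.0)  # True True
print(pred_probab.sum(dim=1))  # approximately tensor([1., 1., 1.])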

4. Model Parameters

Many layers inside a neural network are parameterized, i.e. have associated weights and biases that are optimized during training. Subclassing nn.Module automatically tracks all fields defined inside your model object, and makes all parameters accessible using your model’s parameters() or named_parameters() methods.

In this example, we iterate over each parameter, and print its size and a preview of its values.

#!/usr/bin/env python
# coding=utf-8

import torch
from torch import nn

device = "cpu"
print(
    f"torch.__version__ = {torch.__version__}, torch.accelerator.current_accelerator().type = {torch.accelerator.current_accelerator().type}")
print(f"device = {device}\n")


class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


model = NeuralNetwork().to(device)

print(f"Model structure: {model}\n\n")

for name, param in model.named_parameters():
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")

/home/yongqiang/miniconda3/bin/python /home/yongqiang/stable_diffusion_work/stable_diffusion_diffusers/yongqiang.py 
torch.__version__ = 2.6.0+cu124, torch.accelerator.current_accelerator().type = cuda
device = cpu

Model structure: NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)


Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[ 0.0201,  0.0259,  0.0332,  ...,  0.0071, -0.0068, -0.0072],
        [-0.0041, -0.0039, -0.0112,  ..., -0.0349,  0.0166, -0.0185]],
       grad_fn=<SliceBackward0>) 

Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([-0.0118,  0.0166], grad_fn=<SliceBackward0>) 

Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0311, -0.0014,  0.0226,  ...,  0.0119, -0.0438, -0.0095],
        [-0.0402, -0.0429,  0.0069,  ...,  0.0218,  0.0277, -0.0344]],
       grad_fn=<SliceBackward0>) 

Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([ 0.0156, -0.0327], grad_fn=<SliceBackward0>) 

Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[-0.0198,  0.0003,  0.0309,  ..., -0.0227,  0.0266, -0.0107],
        [-0.0300,  0.0218, -0.0353,  ...,  0.0433,  0.0159,  0.0190]],
       grad_fn=<SliceBackward0>) 

Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([ 0.0274, -0.0111], grad_fn=<SliceBackward0>) 


Process finished with exit code 0
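
As a small follow-up sketch (appended to the script above), parameters() can also be used to count the trainable parameters; for this architecture the total is 784*512 + 512 + 512*512 + 512 + 512*10 + 10 = 669,706:

# Count the trainable parameters of the NeuralNetwork defined above.
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Total trainable parameters: {total}")  # 669706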
