Compute Cross-Entropy Loss Between Input and Target Tensors in PyTorch
To compute the cross entropy loss between the input and target (predicted and actual) values, we apply the function CrossEntropyLoss(). It is accessed from the torch.nn module and creates a criterion that measures the cross entropy loss.
Loss functions are used to optimize a deep neural network by minimizing the loss. CrossEntropyLoss() is very useful in training multiclass classification problems. The input is expected to contain unnormalized scores (logits) for each class.
The target tensor may contain either class indices in the range [0, C-1], where C is the number of classes, or class probabilities.
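For concreteness, here is a minimal sketch of the two accepted target forms (the class-probabilities form is supported from PyTorch 1.10 onward) −
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
input = torch.rand(3, 5)   # logits for 3 samples and C = 5 classes

# Form 1: class indices - shape (3,), dtype torch.long, values in [0, 4]
target_indices = torch.tensor([2, 0, 4])
print(loss(input, target_indices))

# Form 2: class probabilities - shape (3, 5), each row sums to 1
target_probs = torch.rand(3, 5).softmax(dim=1)
print(loss(input, target_probs))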
Syntax
torch.nn.CrossEntropyLoss()
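The constructor also accepts optional arguments. A brief sketch of two commonly used ones, weight and reduction, both part of the documented signature −
import torch
import torch.nn as nn

# weight: 1-D tensor of size C that rescales the loss per class
# reduction: 'mean' (default), 'sum', or 'none'
weights = torch.tensor([1.0, 2.0, 1.0, 1.0, 0.5])   # hypothetical class weights
loss = nn.CrossEntropyLoss(weight=weights, reduction='none')
With reduction='none', the criterion returns one loss value per sample instead of a single averaged scalar.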
Steps
To compute the cross entropy loss, follow the steps given below −
Import the required libraries. In all the following examples, the required Python library is torch; make sure you have already installed it. We also import torch.nn, which provides the loss function.
import torch
import torch.nn as nn
Create the input and target tensors and print them.
input = torch.rand(3, 5)
target = torch.empty(3, dtype=torch.long).random_(5)
Create a criterion to measure the cross entropy loss.
loss = nn.CrossEntropyLoss()
Compute the cross entropy loss and print it.
output = loss(input, target)
print('Cross Entropy Loss:\n', output)
Note − In the following examples, we use random numbers to generate the input and target tensors, so you may get different values for these tensors each time you run the code.
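If you need reproducible tensors across runs, you can seed PyTorch's random number generator before creating them, as in this optional sketch −
import torch

torch.manual_seed(0)   # fixes the RNG state so rand() and random_() repeat
input = torch.rand(3, 5)
target = torch.empty(3, dtype=torch.long).random_(5)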
Example 1
In this example, we compute the cross entropy loss between the input and target tensors, where the target tensor holds class indices.
# Example of target with class indices
import torch
import torch.nn as nn

input = torch.rand(3, 5)
target = torch.empty(3, dtype=torch.long).random_(5)
print(target)

loss = nn.CrossEntropyLoss()
output = loss(input, target)

print('input:\n', input)
print('target:\n', target)
print('Cross Entropy Loss:\n', output)
Output
tensor([2, 0, 4])
input:
 tensor([[0.2228, 0.2523, 0.9712, 0.7887, 0.2820],
        [0.7778, 0.4144, 0.8693, 0.1355, 0.3706],
        [0.0823, 0.5392, 0.0542, 0.0153, 0.8475]])
target:
 tensor([2, 0, 4])
Cross Entropy Loss:
 tensor(1.2340)
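As a sanity check on what the criterion computes, the same value can be reproduced by hand: cross entropy here is the mean negative log-softmax score of each sample's true class. A small sketch using torch.nn.functional (the tensors are re-generated, so the exact number will differ from the output above) −
import torch
import torch.nn.functional as F

input = torch.rand(3, 5)
target = torch.tensor([2, 0, 4])

log_probs = F.log_softmax(input, dim=1)               # log of softmax probabilities
manual = -log_probs[torch.arange(3), target].mean()   # pick true-class entries, average

print(manual)
print(F.cross_entropy(input, target))                 # same value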
Example 2
In this example, we again compute the cross entropy loss between the input and target tensors. Here the input tensor is created with requires_grad=True, so we can call backward() on the loss and inspect the gradients of the input.
# Example with input that requires gradient; target holds class indices
import torch
import torch.nn as nn

input = torch.rand(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
print(target.size())

loss = nn.CrossEntropyLoss()
output = loss(input, target)
output.backward()

print("Input:\n", input)
print("Target:\n", target)
print("Cross Entropy Loss:\n", output)
print('Input grads:\n', input.grad)
Output
torch.Size([3])
Input:
 tensor([[0.8671, 0.0189, 0.0042, 0.1619, 0.9805],
        [0.1054, 0.1519, 0.6359, 0.6112, 0.9417],
        [0.9968, 0.3285, 0.9185, 0.0315, 0.9592]], requires_grad=True)
Target:
 tensor([1, 0, 4])
Cross Entropy Loss:
 tensor(1.8338, grad_fn=<NllLossBackward>)
Input grads:
 tensor([[ 0.0962, -0.2921,  0.0406,  0.0475,  0.1078],
        [-0.2901,  0.0453,  0.0735,  0.0717,  0.0997],
        [ 0.0882,  0.0452,  0.0815,  0.0336, -0.2484]])
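For completeness, here is a sketch of the class-probabilities target variant mentioned earlier (it requires PyTorch 1.10 or later, and the loss value will vary with the random tensors) −
# Example of target with class probabilities
import torch
import torch.nn as nn

input = torch.rand(3, 5, requires_grad=True)
target = torch.rand(3, 5).softmax(dim=1)   # each row is a distribution over 5 classes

loss = nn.CrossEntropyLoss()
output = loss(input, target)
output.backward()

print('Cross Entropy Loss:\n', output)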