Cross Entropy Loss PyTorch

In this Python tutorial, we will learn about cross-entropy loss in PyTorch and walk through several examples of how to use it. We will cover the following topics.

  • Cross entropy loss PyTorch
  • Cross entropy loss PyTorch example
  • Cross entropy loss PyTorch implementation
  • Cross entropy loss PyTorch softmax
  • Cross entropy loss PyTorch functional
  • Cross entropy loss PyTorch logits
  • Cross entropy loss PyTorch backward
  • Cross entropy loss PyTorch weight
  • Cross entropy loss PyTorch reduction

Cross entropy loss PyTorch

In this section, we will learn about cross-entropy loss in PyTorch.

  • Cross-entropy loss is mainly used for classification problems in machine learning.
  • The criterion calculates the cross-entropy between the input variable (predicted scores) and the target variable (class labels).

Code:

In the following code, we will import some libraries to calculate the cross-entropy between the variables.

  • input = torch.tensor([[3.4, 1.5, 0.4, 0.10]], dtype=torch.float) is used as the input variable.
  • target = torch.tensor([0], dtype=torch.long) is used as the target variable.
import torch
from torch import nn

criteria = nn.CrossEntropyLoss()
input = torch.tensor([[3.4, 1.5, 0.4, 0.10]], dtype=torch.float)
target = torch.tensor([0], dtype=torch.long)
criteria(input, target)

Output:

After running the above code, we get the following output in which we can see that the cross-entropy loss value is printed on the screen.
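
For intuition, nn.CrossEntropyLoss combines a log-softmax over the scores with the negative log-likelihood of the target class. The following minimal sketch (an addition, not part of the original example) reproduces the value above by hand:

import torch
import torch.nn.functional as f

input = torch.tensor([[3.4, 1.5, 0.4, 0.10]], dtype=torch.float)
target = torch.tensor([0], dtype=torch.long)

# log_softmax turns the raw scores into log-probabilities; the loss is the
# negative log-probability assigned to the correct class.
log_probs = f.log_softmax(input, dim=1)
manual_loss = -log_probs[0, target[0]]
print(manual_loss)  # matches nn.CrossEntropyLoss()(input, target)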



Cross entropy loss PyTorch example

In this section, we will learn about the cross-entropy loss PyTorch with the help of an example.

Cross-entropy measures the difference between two probability distributions, here the predicted probabilities and the true labels for a given set of variables.

Code:

In the following code, we will import some libraries from which we can calculate the cross-entropy between two variables.

  • total_bce_loss = num.sum(-y_true * num.log(y_pred) - (1 - y_true) * num.log(1 - y_pred)) calculates the total binary cross-entropy loss.
  • mean_bce_loss = total_bce_loss / num_of_samples calculates the mean binary cross-entropy loss.
  • print("CE error is: " + str(crossentropy_value)) is used to print the cross-entropy value.
  • sigmoid = torch.nn.Sigmoid() creates a sigmoid module that maps raw scores into the range (0, 1); it is not applied here because y_pred already contains probabilities.
  • output = crossentropy_loss(input, target) calculates the output of the binary cross-entropy loss.
import numpy as num
import torch
y_pred = num.array([0.1582, 0.4139, 0.2287])
y_true = num.array([0.0, 1.0, 0.0]) 
def CrossEntropy(y_pred, y_true):
    total_bce_loss = num.sum(-y_true * num.log(y_pred) - (1 - y_true) * num.log(1 - y_pred))

    num_of_samples = y_pred.shape[0]
    mean_bce_loss = total_bce_loss / num_of_samples
    return mean_bce_loss
crossentropy_value = CrossEntropy(y_pred, y_true)
print ("CE error is: " + str(crossentropy_value))
crossentropy_loss = torch.nn.BCELoss()
sigmoid = torch.nn.Sigmoid() 
input = torch.tensor(y_pred)
target = torch.tensor(y_true)
output = crossentropy_loss(input, target)
output

Output:

In the following output, we can see that the cross-entropy loss example value is printed on the screen.
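
A related point worth noting (an addition to the original example): when a model outputs raw scores rather than probabilities, torch.nn.BCEWithLogitsLoss applies the sigmoid internally, so an explicit Sigmoid step is unnecessary. A minimal sketch with made-up values:

import torch

logits = torch.tensor([-1.0, 0.5, 2.0])   # raw, unbounded scores
labels = torch.tensor([0.0, 1.0, 1.0])    # binary targets

with_logits = torch.nn.BCEWithLogitsLoss()(logits, labels)
manual = torch.nn.BCELoss()(torch.sigmoid(logits), labels)
print(with_logits, manual)  # the two values agree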



Cross entropy loss PyTorch implementation

In this section, we will learn about a cross-entropy loss PyTorch implementation in Python.

  • As we know, cross-entropy loss in PyTorch calculates the difference between the predicted scores and the target classes.
  • Here we calculate that difference with a short, self-contained implementation.

Code:

In the following code, we will import some libraries for calculating the cross-entropy loss.

  • X = torch.randn(batch_size, n_classes) is used to get random score values.
  • target = torch.randint(n_classes, size=(batch_size,), dtype=torch.long) is used as the target variable.
  • f.cross_entropy(X, target) is used to calculate the cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as f
batch_size, n_classes = 7, 5
X = torch.randn(batch_size, n_classes)
X.shape
X
target = torch.randint(n_classes, size=(batch_size,), dtype=torch.long)
target
f.cross_entropy(X, target)

Output:

After running the above code we get the following output in which we can see that the cross-entropy value after implementation is printed on the screen.



Cross entropy loss PyTorch softmax

In this section, we will learn about cross-entropy loss with softmax in PyTorch.

  • The softmax function maps a vector of K real-valued scores to K values between 0 and 1 that sum to 1, so they can be interpreted as class probabilities.
  • The cross-entropy then measures the distance between these output probabilities and the true labels.

Code:

In the following code, we will import some libraries from which we can measure the cross-entropy loss softmax.

  • X = torch.randn(batch_size, n_classes) is used to get random score values.
  • def softmax(x): return x.exp() / (x.exp().sum(-1)).unsqueeze(-1) defines the softmax function.
  • loss = nl(pred, target) is used to calculate the loss.
import torch
import torch.nn as nn
import torch.nn.functional as f
batch_size, n_classes = 7, 5
X = torch.randn(batch_size, n_classes)
X.shape
X
target = torch.randint(n_classes, size=(batch_size,), dtype=torch.long)
target
def softmax(x): return x.exp() / (x.exp().sum(-1)).unsqueeze(-1)
def nl(input, target): return -input[range(target.shape[0]), target].log().mean()

pred = softmax(X)
loss=nl(pred, target)
loss

Output:

After running the above code, we get the following output in which we can see that the value of cross-entropy loss softmax is printed on the screen.
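
As a sanity check (a small sketch added here, not part of the original example), the manual softmax plus negative log-likelihood should agree with PyTorch's built-in f.cross_entropy on the same kind of data:

import torch
import torch.nn.functional as f

X = torch.randn(7, 5)
target = torch.randint(5, size=(7,), dtype=torch.long)

# Manual computation: softmax, pick the probability of the target class,
# take the negative log, and average over the batch.
manual = -X.softmax(dim=-1)[range(7), target].log().mean()
builtin = f.cross_entropy(X, target)
print(manual, builtin)  # the two values agree up to floating-point error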



Cross entropy loss PyTorch functional

In this section, we will learn about the cross-entropy loss PyTorch functional in Python.

The torch.nn.functional module provides many loss functions. We describe each of them one by one below, with a short usage sketch after the list.

  • cross_entropy
  • binary_cross_entropy
  • binary_cross_entropy_with_logits
  • ctc_loss
  • cosine_embedding_loss
  • nll_loss
  • gaussian_nll_loss
  • l1_loss
  • multi_margin_loss
  • soft_margin_loss
  • triplet_margin_loss
  • triplet_margin_with_distance_loss
  • mse_loss
  1. cross_entropy: calculates the cross-entropy between the input scores and the target classes.
  2. binary_cross_entropy: calculates the binary cross-entropy between the target and the input probabilities.
  3. binary_cross_entropy_with_logits: calculates the binary cross-entropy between the target and the input logits.
  4. ctc_loss: the connectionist temporal classification loss.
  5. cosine_embedding_loss: calculates a loss given two input tensors and a label saying whether the inputs are similar or dissimilar.
  6. nll_loss: the negative log-likelihood loss.
  7. gaussian_nll_loss: the Gaussian negative log-likelihood loss.
  8. l1_loss: takes the mean element-wise absolute value difference.
  9. multi_margin_loss: the multi-class classification hinge loss between input and target.
  10. soft_margin_loss: the two-class classification logistic loss between the input tensor and the target tensor.
  11. triplet_margin_loss: calculates the relative similarity between samples (anchor, positive and negative examples).
  12. triplet_margin_with_distance_loss: calculates the triplet loss given input tensors and a real-valued distance function that compares the anchor with the positive and negative examples.
  13. mse_loss: measures the element-wise mean squared error.
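
Here is a small sketch (added for illustration, with made-up tensor values) showing a few of these functional losses in use:

import torch
import torch.nn.functional as f

scores = torch.randn(4, 3)                # raw scores for 4 samples and 3 classes
classes = torch.randint(3, (4,))          # integer class labels

print(f.cross_entropy(scores, classes))                    # multi-class cross-entropy
print(f.nll_loss(f.log_softmax(scores, dim=1), classes))   # equivalent computation

logits = torch.randn(4)                   # raw binary scores
labels = torch.randint(2, (4,)).float()   # binary targets as floats
print(f.binary_cross_entropy_with_logits(logits, labels))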


Cross entropy loss PyTorch logits

In this section, we will learn about cross-entropy loss PyTorch logits in Python.

  • A logit is the raw, unnormalized score a model produces before a sigmoid or softmax is applied; the logit function is the inverse of the standard logistic (sigmoid) function.
  • Loss functions with logits in their name, such as BCEWithLogitsLoss, take these raw scores directly and apply the sigmoid internally.

Code:

In the following code, we will import some libraries from which we can calculate the cross entropy loss PyTorch logit.

  • target = torch.ones([12, 66], dtype=torch.float32) is used as the target variable.
  • output = torch.full([12, 66], 1.7) is used as the prediction value.
  • positive_weight = torch.ones([66]) is used as the positive weight, where all weights are equal to 1.
  • criterion = torch.nn.BCEWithLogitsLoss(pos_weight=positive_weight) creates the cross-entropy-with-logits loss function.
import torch
target = torch.ones([12, 66], dtype=torch.float32)  
output = torch.full([12, 66], 1.7)  
positive_weight = torch.ones([66]) 
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=positive_weight)
criterion(output, target)  

Output:

After running the above code, we get the following output in which we can see that the cross-entropy logit score is printed on the screen.


Cross entropy loss PyTorch backward

In this section, we will learn about cross-entropy loss PyTorch backward in Python.

  • Cross-entropy loss PyTorch backward is used to calculate the gradient of the loss with respect to the current tensor.
  • The cross-entropy loss is mainly used for classification problems; here we compute the loss between the input and target and then call backward() on it.

Code:

In the following code, we will import the torch library from which we can calculate the PyTorch backward function.

  • input = torch.randn(5, 7, requires_grad=True) is used as the input variable.
  • target = torch.empty(5, dtype=torch.long).random_(5) is used as the target variable.
  • output.backward() is used to compute the gradients.
import torch
import torch.nn as nn
loss = nn.CrossEntropyLoss()
input = torch.randn(5, 7, requires_grad=True)
target = torch.empty(5, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()
output

Output:

After running the above code, we get the following output in which we can see that the cross-entropy loss PyTorch backward score is printed on the screen.
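
To see what backward() actually produced, the sketch below (an addition to the article's example) prints the gradient that is now stored on the input tensor:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
input = torch.randn(5, 7, requires_grad=True)
target = torch.empty(5, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()

print(input.grad)        # gradient of the loss with respect to the input scores
print(input.grad.shape)  # same shape as the input: torch.Size([5, 7])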


Cross entropy loss PyTorch weight

In this section, we will learn about cross-entropy loss PyTorch weight in Python.

  • As we know, cross-entropy loss calculates the difference between the predicted scores and the target classes.
  • If we pass the weight argument, it assigns a rescaling weight to every class; the weight should be a 1-D tensor with one entry per class.

Code:

In the following code, we will import some libraries from which we can calculate the cross-entropy loss PyTorch weight.

  • softmax = nn.Softmax(dim=1) creates a softmax module that maps the K real-valued scores to probabilities; it is not applied here, since CrossEntropyLoss works on raw scores.
  • loss = nn.CrossEntropyLoss(weight=sc) creates the weighted cross-entropy loss.
  • inputvariable = torch.tensor([[3.0, 4.0], [6.0, 9.0]]) is used as the input variable.
  • targetvariable = torch.tensor([1, 0]) is used as the target variable.
  • print(output) is used to print the output.
from torch import nn
import torch

softmax = nn.Softmax(dim=1)   # defined for illustration; CrossEntropyLoss applies softmax internally
sc = torch.tensor([0.6, 0.38])
loss = nn.CrossEntropyLoss(weight=sc)
inputvariable = torch.tensor([[3.0, 4.0], [6.0, 9.0]])
targetvariable = torch.tensor([1, 0])
output = loss(inputvariable, targetvariable)
print(output)

Output:

After running the above code, we get the following output in which we can see that the cross-entropy loss weight is printed on the screen.
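
With class weights and the default reduction='mean', PyTorch divides by the sum of the weights of the target classes rather than by the number of samples. The following sketch (added for illustration, reusing the same values) verifies this:

import torch
from torch import nn

sc = torch.tensor([0.6, 0.38])
inputvariable = torch.tensor([[3.0, 4.0], [6.0, 9.0]])
targetvariable = torch.tensor([1, 0])

# Per-sample losses already include the class weights; 'mean' divides by the weight sum.
per_sample = nn.CrossEntropyLoss(weight=sc, reduction='none')(inputvariable, targetvariable)
weighted_mean = per_sample.sum() / sc[targetvariable].sum()
print(weighted_mean, nn.CrossEntropyLoss(weight=sc)(inputvariable, targetvariable))  # both agree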


Cross entropy loss PyTorch reduction

In this section, we will learn about cross-entropy loss PyTorch reduction in Python.

  • The reduction argument controls how the per-element losses are combined into a single value: 'none' keeps a loss per element, 'mean' averages them, and 'sum' adds them up (a short sketch follows this list).
  • The default reduction is 'mean', which is what you usually want when training a classifier.
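
Here is a minimal CPU-only sketch (added for illustration, not from the original article) showing how the three reduction modes relate:

import torch
import torch.nn as nn

scores = torch.randn(4, 3)
labels = torch.randint(3, (4,))

per_sample = nn.CrossEntropyLoss(reduction='none')(scores, labels)  # one loss value per sample
mean_loss = nn.CrossEntropyLoss(reduction='mean')(scores, labels)   # the default behaviour
sum_loss = nn.CrossEntropyLoss(reduction='sum')(scores, labels)

print(per_sample)                     # tensor of shape (4,)
print(mean_loss, per_sample.mean())   # these two agree
print(sum_loss, per_sample.sum())     # and so do these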

Code:

In the following code, we will import some libraries from which we can calculate the cross-entropy loss reduction.

  • outputs = num.random.rand(16, 1, 256, 256) is used to generate the random variable of output.
  • targets = num.random.randint(2, size=(16, 256, 256)) is used to generate the random variable of target.
  • seed = 0 is used to set the random seed to zero.
  • torch.manual_seed(seed) is used to set the seed for generating the random numbers.
  • loss_fn = torch.nn.CrossEntropyLoss(reduction=reduction) is used to calculate cross entropy loss reduction.
  • print(i, outputs.sum(), targets.sum(), outputs.mean(), targets.mean(), loss.sum(), loss.mean()) is used to print the output on the screen.
import numpy as num
import torch

outputs = num.random.rand(16, 1, 256, 256)
outputs = num.hstack((outputs, 1.0 - outputs))   # two-class scores, shape (16, 2, 256, 256)
targets = num.random.randint(2, size=(16, 256, 256))

seed = 0
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

for reduction in ['sum', 'mean']:
    print(reduction)

    for i in range(10):
        torch.manual_seed(seed)
        num.random.seed(seed)

        outputs_torch, targets_torch = torch.from_numpy(outputs), torch.from_numpy(targets)
        # Note: this example assumes a CUDA-capable GPU is available.
        outputs_torch, targets_torch = outputs_torch.cuda(0), targets_torch.cuda(0)

        loss_fn = torch.nn.CrossEntropyLoss(reduction=reduction)
        loss_fn = loss_fn.cuda(0)

        loss = loss_fn(outputs_torch, targets_torch)
        loss = loss.detach().cpu().numpy()
        print(i, outputs.sum(), targets.sum(), outputs.mean(), targets.mean(), loss.sum(), loss.mean())

Output:

After running the above code, we get the following output in which we can see that the cross-entropy loss reduction is printed on the screen.


So, in this tutorial, we discussed cross-entropy loss in PyTorch and covered different examples related to its implementation. Here is the list of topics we covered.

  • Cross entropy loss PyTorch
  • Cross entropy loss PyTorch example
  • Cross entropy loss PyTorch implementation
  • Cross entropy loss PyTorch softmax
  • Cross entropy loss PyTorch functional
  • Cross entropy loss PyTorch logits
  • Cross entropy loss PyTorch backward
  • Cross entropy loss PyTorch weight
  • Cross entropy loss PyTorch reduction