Hello guys! In this blog post I want to give a basic introduction to identifying handwritten digits with PyTorch, using the MNIST data.
MNIST is a collection of handwritten digits. It contains 70,000 images in total: 60,000 for training and the remaining 10,000 for testing. The images are grayscale and 28x28 pixels. We can download the dataset using the code below.
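Here is a minimal sketch of that step, assuming torchvision is installed; the normalization values (0.5, 0.5) are a common tutorial choice and an assumption on my part, not necessarily what the original file uses.

import torch
from torchvision import datasets, transforms

# Convert images to tensors and normalize them
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download the 60,000 training images to ./data and batch them
trainset = datasets.MNIST('./data', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)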
Here, the parameter batch_size is set to 64 so that the training images are grouped into batches of 64 each, and shuffle is set to True so that each time we run the code it shuffles the data and returns an iterable with new groups of batch_size.
As the trainloader is iterable, we iterate through it and collect the first batch of images and its corresponding labels into images and labels respectively.
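A sketch of that step, using the trainloader defined above:

# Grab the first batch from the iterable trainloader
images, labels = next(iter(trainloader))
print(images.shape)
print(labels.shape)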
Now, run the above code and see the output. You will see something like this:
torch.Size([64, 1, 28, 28])
torch.Size([64])
It shows that there are 64 grayscale images (a single channel) of 28x28 pixels each.
Now let us look at the first image in the first batch. Its shape can be checked by running the following code.
print(images[0].shape)
OUTPUT: torch.Size([1, 28, 28])
Here, plt.imshow() plots a 2D image; its first parameter is a 2D numpy array containing the pixel values of the image. To convert the torch tensor to a 2-dimensional numpy array, we use
images[0].numpy().squeeze()
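Putting it together, a small sketch of the plotting step (the cmap='gray' argument is my addition so the grayscale image renders correctly):

import matplotlib.pyplot as plt

# Squeeze the [1, 28, 28] tensor down to a [28, 28] numpy array and plot it
plt.imshow(images[0].numpy().squeeze(), cmap='gray')
plt.show()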
Now, it is time to construct a neural network.
The first layer contains 28x28 values i.e., 784 values.
The middle/hidden layer contains 256 neurons.
The output layer contains 10 neurons whose values represent the probabilities of the ten digit classes.
To convert the torch tensor of size [1,28,28] into a single row of 784 values, we will write
images[0].view(1,-1)
This will generate a tensor of 1 row and 784 columns.
We will use torch.randn() for generating the weights for these neurons.
In the first layer, we will generate random weights of size (784,256) i.e., with 784 rows and 256 columns.
For the neurons in the second layer, we will generate random weights with size (256,10).
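A sketch of these two steps, plus a forward pass for a single image; the bias terms and the sigmoid activation on the hidden layer are my assumptions, since the post only fixes the layer sizes:

# Random initial weights (and biases) for the two layers
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)

# Flatten the first image and pass it through both layers
x = images[0].view(1, -1)                    # shape [1, 784]
h = torch.sigmoid(torch.matmul(x, w1) + b1)  # hidden layer, shape [1, 256]
out = torch.matmul(h, w2) + b2               # output layer, shape [1, 10]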
As this is a classification problem with multiple classes, we will apply the softmax function to the values of the second (and last) layer to convert them into probabilities.
While applying the softmax function we will also apply a normalization trick to avoid nan values,
i.e., exp(x)/Σ exp(x) is rewritten as exp(x−y)/Σ exp(x−y), where y is the maximum value of the given vector. The two forms are mathematically equal, but the second keeps the exponents from overflowing.
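A minimal sketch of this stabilized softmax for a single row vector:

def softmax(x):
    # Subtract the maximum value before exponentiating so exp() cannot overflow
    e = torch.exp(x - torch.max(x))
    return e / e.sum()

probs = softmax(out)   # out is the [1, 10] output from above
print(probs.sum())     # the probabilities sum to 1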
Now, the full code will be:
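The original listing lives in the GitHub link below; here is a minimal sketch that stitches the pieces above together (the normalization values, the biases, and the sigmoid activation are my assumptions):

import torch
from torchvision import datasets, transforms
import matplotlib.pyplot as plt

# Load and batch the MNIST training set
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('./data', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# First batch of images and labels
images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels.shape)   # torch.Size([64])

# Display the first image of the batch
plt.imshow(images[0].numpy().squeeze(), cmap='gray')
plt.show()

def softmax(x):
    # Stabilized softmax: subtract the row maximum before exponentiating
    e = torch.exp(x - torch.max(x, dim=1, keepdim=True).values)
    return e / e.sum(dim=1, keepdim=True)

# Random weights and biases for a 784 -> 256 -> 10 network
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)

# Forward pass for the whole batch
x = images.view(images.shape[0], -1)          # [64, 784]
h = torch.sigmoid(torch.matmul(x, w1) + b1)   # [64, 256]
probs = softmax(torch.matmul(h, w2) + b2)     # [64, 10]
print(probs.shape)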
In the next blog, I will talk about the nn module in PyTorch, which simplifies our work.
Find the code from this blog at https://github.com/VallamkondaNeelima/MachineLearning/blob/master/mnist1.py
Follow me on instagram: https://www.instagram.com/neelima2312/