Training the MNIST model in Keras

Description

In this product, we will build the model using the tf.keras APIs. It is worth learning both the Keras and layers packages from TensorFlow, as both appear in much open source code. The objective of this product is to help you understand the various offerings of TensorFlow so that you can build products on top of it.

“Code is read more often than it is written.”

Bearing the preceding quote in mind, you will be shown how to implement the same model using various APIs. Open source implementations of the latest algorithms are typically a mix of these APIs. We will start with the Keras implementation.

 

Preparing the dataset

The MNIST data is available with Keras. First, import tensorflow. Then define a few constants, such as the batch size, the number of classes, and the number of epochs. The batch size can be selected based on the RAM available on your machine: the higher the batch size, the more RAM required. The impact of the batch size on accuracy is minimal. The number of classes is 10 here and will differ for other problems. The number of epochs determines how many times the training has to go through the full dataset.
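Here is a minimal sketch of this setup. The number of classes (10) follows the text; the specific batch size, epoch count, and the reshaping and normalization steps are illustrative assumptions, not values fixed by the text.

import tensorflow as tf

# Constants for training; batch size and epochs are illustrative values.
batch_size = 128   # larger batches need more RAM
num_classes = 10   # digits 0-9
epochs = 12        # passes over the full training set

# MNIST ships with Keras; this downloads it on first use.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Reshape to (samples, height, width, channels) and scale to [0, 1].
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# One-hot encode the labels for categorical cross-entropy.
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)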

 

Building the model

We will use a few convolution layers followed by fully connected layers for training on the preceding dataset. Construct a simple sequential model with two convolution layers followed by pooling, dropout, and dense layers. A sequential model has the add method to stack layers one above another. The first layer has 64 filters, and the second layer has 128 filters. The kernel size is 3 for all the filters. Apply max pooling after the convolution layers. The output of the convolution layers is flattened and connected to a couple of fully connected layers with dropout connections.
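Here is a minimal sketch of that model, continuing from the dataset code above and stacking layers with the add method. The filter counts (64 and 128) and the kernel size (3) follow the text; the ReLU activations, dropout rates, dense-layer width, and the Adam optimizer are illustrative assumptions.

from tensorflow.keras import layers, models

model = models.Sequential()
# Two convolution layers: 64 filters, then 128 filters, kernel size 3.
model.add(layers.Conv2D(64, kernel_size=3, activation="relu",
                        input_shape=(28, 28, 1)))
model.add(layers.Conv2D(128, kernel_size=3, activation="relu"))
# Max pooling after the convolution layers.
model.add(layers.MaxPooling2D(pool_size=2))
model.add(layers.Dropout(0.25))      # dropout rate is an assumption
# Flatten and connect to a couple of fully connected layers with dropout.
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation="relu"))  # width is an assumption
model.add(layers.Dropout(0.5))       # dropout rate is an assumption
model.add(layers.Dense(num_classes, activation="softmax"))

# Optimizer choice is an assumption; the loss matches the one-hot labels.
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test))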
