Build an Artificial Neural Network in TensorFlow

If you want to learn how to build an artificial neural network in TensorFlow, you have come to the right place.

As a machine learning or artificial intelligence enthusiast, you should know what artificial neural networks are and how to build them, because ANNs are key components of AI and ML.

ANNs can solve problems ranging from pattern recognition, image and speech recognition, and natural language processing to image generation.

In this TensorFlow tutorial, I will explain what a neural network is and how it works, with visual diagrams, so that your understanding of ANNs becomes very clear. Using this foundation, you can go on to build other neural network models.

After that, I will explain how you can create one using TensorFlow's modules and functions, and finally, you will understand how to create an artificial neural network model, also called a feed-forward neural network.

What is an Artificial Neural Network?

An Artificial Neural Network (ANN) is a computational model that works similarly to the human brain; in fact, it is inspired by the brain's structure and functioning. In this tutorial, I will use the words perceptron and neuron interchangeably.

An ANN is made of three types of layers: the input layer, the hidden layers, and the output layer.


Each layer consists of perceptrons (neurons).

If you don't know about perceptrons (neurons), I recommend visiting the Building Perceptron in Python tutorial. Understanding the perceptron is necessary to understand how ANNs work.

Input Layer: This is the first layer in the artificial neural network. It accepts the input and passes it to the hidden layers for processing.

In practice, the features of a dataset are provided as input to this layer. The layer contains one neuron per feature, which means the number of neurons in the input layer equals the number of features. This layer may or may not apply an activation function; its main job is to pass the input on to the next (hidden) layer.

The input layer consists of perceptrons

Look at the picture above; as you can see, the input layer contains four perceptrons. These four perceptrons represent four features, which means one perceptron handles one feature, depending on the structure of the neural network.

To create an input layer using TensorFlow, follow the steps below.

First, import the Dense class from tensorflow.keras.layers, as shown below.

from tensorflow.keras.layers import Dense

Suppose you provide three features to a neural network; then your input layer will contain three perceptrons (neurons). So, to create an input layer with three perceptrons, run the code below.

Dense(units=3, activation='relu', input_shape=(3,))

As you can see, the Dense() function takes units=3, which is the number of perceptrons (neurons) in the layer, and the activation function for this layer is 'relu'.

The input_shape=(3,) represents the number of features in the dataset; it is also three because three neurons are required to handle three features.
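To see what this layer looks like once built, here is a minimal sketch, assuming TensorFlow 2.x is installed, that adds the layer to a model and inspects its weight shapes:

```python
# A quick check of the input layer above: build it and inspect its
# weights. Each of the 3 neurons gets one weight per input feature,
# plus one bias, giving a (3, 3) kernel and a (3,) bias vector.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(units=3, activation='relu', input_shape=(3,)))

weights, biases = model.layers[0].get_weights()
print(weights.shape)  # (3, 3)
print(biases.shape)   # (3,)
```

This confirms the rule above: the number of weights per neuron matches the number of input features.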

Hidden Layers: These layers sit between the input layer and the output layer. Each hidden layer also contains neurons, and each of these neurons is connected to every neuron in the previous layer.

"Hidden layers" is plural because there can be multiple such layers, and the number of neurons in each hidden layer is a hyperparameter that is adjusted based on the complexity of the problem.

In each hidden layer, an activation function is applied to the output of each neuron; this is applied to introduce non-linearity.

Hidden layers contain perceptrons with activation functions

The picture above shows the perceptrons of each layer in the hidden layers. Unlike the input layer, each perceptron has an activation function (in yellow) applied to its output.

It also shows a simple perceptron structure with weight (w1) and bias (b) parameters; these parameters are adjusted during training.
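What each perceptron computes can be sketched in plain NumPy: a weighted sum of its inputs plus a bias, passed through an activation such as ReLU. The input values, weights, and bias below are made up purely for illustration.

```python
import numpy as np

def relu(z):
    # ReLU activation: passes positive values through, zeroes out negatives
    return np.maximum(0.0, z)

x = np.array([0.5, -1.0, 2.0])   # inputs from the previous layer (made up)
w = np.array([0.4, 0.3, -0.2])   # weights, learned during training (made up)
b = 0.1                          # bias, learned during training (made up)

z = np.dot(w, x) + b             # weighted sum: 0.2 - 0.3 - 0.4 + 0.1 = -0.4
out = relu(z)                    # ReLU clips -0.4 to 0.0
print(out)                       # 0.0
```

The ReLU step is what introduces the non-linearity mentioned above: without it, a stack of layers would collapse into a single linear transformation.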

To create a hidden layer in TensorFlow, use the same Dense() function, as shown below.

Dense(units=10, activation='relu')

This hidden layer contains 10 perceptrons (neurons) and uses the same 'relu' activation function. Above is a single hidden layer, but there can be more; for example, you can create one more hidden layer as shown below.

Dense(units=20, activation='relu')

This hidden layer contains 20 perceptrons or neurons.

Output Layer: This is the last layer in the artificial neural network. The number of neurons in this layer depends on the task the whole ANN is solving.

  • For binary classification, like identifying whether a given fruit is a mango or not, the output layer uses only one neuron with a sigmoid activation function.
  • For multiclass classification, identifying more than two classes of fruit, it contains one neuron per class with a softmax activation function.
  • If the task is regression, it contains one neuron without an activation function, or with a linear activation function.

To create an output layer in TensorFlow, use the Dense() function again, but the number of perceptrons and the activation function depend on the type of task your neural network is solving.

For example, suppose you have to train the neural network to identify whether a fruit is a mango; there are only two possible outcomes: 1 if the fruit is a mango, and 0 if it is not. So, it is a binary classification task.

As shown below, you need one neuron and a sigmoid activation function to create an output layer for binary classification.

Dense(units=1, activation='sigmoid')

If your network is performing a multiclass classification task, the number of neurons and the activation function will differ.
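The output-layer variants described above can all be written with the same Dense() function; the class count of 5 below is just an example, not a requirement.

```python
# Sketch of the three common output-layer choices. get_config()
# reports how each layer was configured.
from tensorflow.keras.layers import Dense

binary_out = Dense(units=1, activation='sigmoid')    # binary classification
multi_out = Dense(units=5, activation='softmax')     # 5-class classification (example count)
regress_out = Dense(units=1, activation='linear')    # regression

print(binary_out.get_config()['activation'])   # sigmoid
print(multi_out.get_config()['units'])         # 5
```

Only the units and activation change between tasks; the layer type stays the same.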

Now you know how to create the layers of an artificial neural network. But you may have heard terms like AI model, machine learning model, etc. So, how can you create such a model, especially an artificial neural network model, in TensorFlow?

Build an Artificial Neural Network in TensorFlow

To create that model, you must arrange your layers (input, hidden, and output) in sequential order. This means you put the input layer first, then the hidden layers, and finally the output layer.

Keeping the layers in this order means the input layer accepts input from the outside world (provided through the dataset) and passes it on to the hidden layers for further processing; since there can be multiple hidden layers, the input is processed through each of them in turn.

Then, the output of the hidden layers is passed as input to the output layer, which is the final layer.

Passing the input through these layers is how the model learns the patterns in the dataset and makes predictions on new data.

To arrange or stack the layers sequentially, TensorFlow provides an API called Sequential. Using this API, you can create a sequential model in TensorFlow that contains your layers in sequential order.

So, import the Sequential class from the tensorflow.keras.models package as shown below.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

Next, create a sequential model using the code below.

model = Sequential()

Now add the input, hidden, and output layers to this model by calling the add() method on it, as shown below.

model.add(Dense(units=3, activation='relu', input_shape=(3,)))  # Input layer
model.add(Dense(units=10, activation='relu'))                   # Hidden layer
model.add(Dense(units=1, activation='sigmoid'))                 # Output layer

As you can see, each layer is added to the model one by one, forming a stack of layers. Because the layers are stacked sequentially, the output of one layer becomes the input of the next.
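Putting it all together, here is a self-contained sketch of the same three-layer model, run on a small batch of made-up data. The model is untrained, so the actual output values are arbitrary, but the shapes show the data flowing through the stack.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(units=3, activation='relu', input_shape=(3,)))  # Input layer
model.add(Dense(units=10, activation='relu'))                   # Hidden layer
model.add(Dense(units=1, activation='sigmoid'))                 # Output layer

x = np.random.rand(4, 3)   # 4 samples with 3 features each (made up)
preds = model.predict(x)   # forward pass through all three layers
print(preds.shape)         # (4, 1): one sigmoid output per sample
```

Since the output layer uses a sigmoid, every prediction falls between 0 and 1 and can be read as the probability that a sample belongs to the positive class.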

You have now constructed, or defined, your first neural network in TensorFlow. It is called a feed-forward neural network because the input data (or information) flows in one direction only: from the input layer, through the hidden layers, to the output layer.

A question may arise in your mind: can information also flow backwards, from the output layer towards the hidden layers? The answer is yes.

Other neural networks, such as RNNs (Recurrent Neural Networks) and LSTMs, have architectures where information can also loop back through recurrent connections rather than flowing strictly forward.

Explaining the other types of neural networks is beyond the scope of this tutorial. After constructing the neural network model, the next step is to compile it, so visit the tutorial Compiling Neural Network Model.

If you want to learn more about TensorFlow models, visit the official documentation for tf.keras.Model.


In this TensorFlow tutorial, you learned that artificial neural networks are composed of input, hidden, and output layers.

Additionally, you learned the purpose of each layer and how to stack the layers sequentially to create a sequential artificial neural network model using the Sequential() API.

While creating the model, you learned how to specify the number of perceptrons in each layer based on the number of features, the activation functions, and so on.
