Perceptron



Importing libraries and packages

import os
import tensorflow as tf

# Statistics
from sklearn.metrics import accuracy_score

# Warnings
import warnings

warnings.filterwarnings("ignore")

# Suppress TensorFlow's C++ info and warning messages
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

Loading dataset

# Input data of the OR truth table in TensorFlow
# (the labels are defined in the next section)
X = tf.Variable(
    [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], dtype=tf.float32
)
print(X)
<tf.Variable 'Variable:0' shape=(4, 2) dtype=float32, numpy=
array([[0., 0.],
       [0., 1.],
       [1., 0.],
       [1., 1.]], dtype=float32)>
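
For reference, the four rows of X are the inputs of the OR truth table; the labels defined in the next section are the corresponding outputs:

x1   x2   x1 OR x2
0    0    0
0    1    1
1    0    1
1    1    1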

Forward propagation

The input data and the true labels are first represented as TensorFlow tensors. The network consists of two layers: a linear layer (the net input function) and a sigmoid activation function. A function called perceptron() chains these two layers to produce predictions. Computing predictions from the input data with the current weights and biases is called forward propagation.
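
In symbols, with weight matrix $W$ and bias $b$, the forward pass is

$$z = XW + b, \qquad \hat{y} = \sigma(z) = \frac{1}{1 + e^{-z}},$$

where $\sigma$ is the sigmoid function and $\hat{y}$ holds the predicted probabilities.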

# Set the actual labels in TensorFlow and use tf.reshape()
# to turn the y vector into a 4 × 1 matrix
y = tf.Variable([0, 1, 1, 1], dtype=tf.float32)
y = tf.reshape(y, [4, 1])
print(y)
tf.Tensor(
[[0.]
 [1.]
 [1.]
 [1.]], shape=(4, 1), dtype=float32)
# X has shape (4, 2): number of examples × number of features,
# so X.shape[1] gives the number of features (2)
Number_of_features = X.shape[1]
# A single neuron (unit) is enough for a perceptron
Number_of_units = 1
# Defining the connection weight matrix in TensorFlow
weight = tf.Variable(
    tf.zeros([Number_of_features, Number_of_units]), dtype=tf.float32
)
print(weight)
<tf.Variable 'Variable:0' shape=(2, 1) dtype=float32, numpy=
array([[0.],
       [0.]], dtype=float32)>
# Variable for the bias
bias = tf.Variable(tf.zeros([Number_of_units, 1]), dtype=tf.float32)
print(bias)
<tf.Variable 'Variable:0' shape=(1, 1) dtype=float32, numpy=array([[0.]], dtype=float32)>
def perceptron(x):
    # Net input: z = xW + b (the (1, 1) bias broadcasts over the batch)
    z = tf.add(tf.matmul(x, weight), bias)
    # Sigmoid activation squashes the net input into (0, 1)
    output = tf.sigmoid(z)
    return output
print(perceptron(X))
tf.Tensor(
[[0.5]
 [0.5]
 [0.5]
 [0.5]], shape=(4, 1), dtype=float32)

Backward propagation

With the weights and bias initialized to zero, the net input is zero for every example and sigmoid(0) = 0.5, so every prediction is 0.5. The predictions are not accurate yet; backward propagation is next.

For the backpropagation of the error, use an optimizer to minimize the loss. The Stochastic Gradient Descent (SGD) optimizer updates the parameters of the network (the weights and the bias) at each optimization step. The learning rate sets the size of the step SGD takes toward a minimum of the loss function.

learning_rate = 0.01
optimizer = tf.optimizers.SGD(learning_rate)
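
As an aside, a single optimization step can be written out by hand with tf.GradientTape, which is roughly what optimizer.minimize() does internally. This is a minimal sketch for illustration only: it works on copies of the parameters (w_tmp and b_tmp are names introduced here) so the actual training below is unaffected, and it passes the raw net input z as the logits.

# One manual SGD step (illustration only, on copies of the parameters)
w_tmp = tf.Variable(weight)
b_tmp = tf.Variable(bias)
with tf.GradientTape() as tape:
    z = tf.matmul(X, w_tmp) + b_tmp  # net input (raw logits)
    step_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=z)
    )
# Gradients of the loss with respect to the parameters
grad_w, grad_b = tape.gradient(step_loss, [w_tmp, b_tmp])
# Move each parameter a small step against its gradient
w_tmp.assign_sub(learning_rate * grad_w)
b_tmp.assign_sub(learning_rate * grad_b)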
# Defining the epochs, training loop, and loss function
no_of_epochs = 1000
for n in range(no_of_epochs):
    # Mean sigmoid cross-entropy over the four examples; note that
    # perceptron(X) already applies a sigmoid, so sigmoid-activated
    # outputs (not raw logits) are passed to the loss here
    loss = lambda: abs(
        tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(
                labels=y, logits=perceptron(X)
            )
        )
    )
    optimizer.minimize(loss, [weight, bias])
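
One caveat about the loss above: tf.nn.sigmoid_cross_entropy_with_logits expects raw, pre-sigmoid logits, but perceptron(X) returns sigmoid outputs, so the activation is effectively applied twice. The abs() call is also redundant, since cross-entropy is never negative. The parameters and accuracy reported below come from the loop exactly as written; a corrected sketch would pass the net input directly:

# Corrected training loop (sketch): use the raw net input as logits
for n in range(no_of_epochs):
    loss = lambda: tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=y, logits=tf.matmul(X, weight) + bias
        )
    )
    optimizer.minimize(loss, [weight, bias])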

Statistics

tf.print(weight)
[[0.412449151]
 [0.412449151]]
tf.print(bias)
[[0.236065909]]
ypred = perceptron(X)
# Round the sigmoid outputs to hard 0/1 class predictions
ypred = tf.round(ypred)
acc = accuracy_score(y, ypred)
print(acc)
0.75
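
An accuracy of 0.75 means three of the four OR examples are classified correctly. With the trained parameters above, the input [0, 0] produces a net input of z = 0.236 (the bias alone) and sigmoid(0.236) ≈ 0.56, which rounds to 1 even though its label is 0; the other three inputs are correctly predicted as 1. Classifying [0, 0] correctly requires a negative bias, which longer training (or the corrected loss sketched earlier) would help reach.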