Final Project

Our goal is to create two separate neural networks that are able to identify Braille symbols, in the form of arrays of 6 binary values indicating the presence or absence of dots. Given the low complexity of this task, we expect to classify with a high rate of accuracy. Our first neural net is a basic single-layer perceptron. The second is a multi-layer perceptron with an input layer, a single hidden layer, and an output layer.
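For concreteness, here is how a few characters look under this 6-value encoding. The dot numbering convention (dots 1–3 down the left column, 4–6 down the right) and the variable names are our own illustrative choices:

```javascript
// Hypothetical encoding: index i holds 1 if Braille dot i+1 is raised.
// Standard dot numbering: 1-2-3 down the left column, 4-5-6 down the right.
const a = [1, 0, 0, 0, 0, 0]; // letter "a": dot 1 only
const b = [1, 1, 0, 0, 0, 0]; // letter "b": dots 1 and 2
const c = [1, 0, 0, 1, 0, 0]; // letter "c": dots 1 and 4

// With 6 binary slots there are 2**6 = 64 possible dot patterns.
console.log(2 ** 6); // 64
```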

We built an interactive JavaScript tool that allows you to see how a single-layer perceptron classifies 26 Braille characters. To use the tool, click on an unlabeled Braille character. The corresponding input nodes will "activate," sending their weighted inputs to all 26 output nodes. We were able to train our single-layer perceptron to 100% accuracy, so only the correct output node is ever activated. This activated output node corresponds to an alphanumeric representation.

We decided to construct a neural network because we were interested in learning more about a technology that is currently attracting a lot of interest in computer science, both in industry and in academia. In particular, we wanted to better understand the emergent properties of neural networks, and how these allow a well-constructed network to solve problems long considered extremely difficult for a computer, such as identifying images or patterns with many subtle indicators that are hard to pin down explicitly.

We settled on a Braille classifier multi-layer perceptron because we wanted a relatively simple example to work with and manipulate, in order to best understand the most fundamental properties of neural nets. There are 63 represented characters/symbols in grade 1 Braille (the most basic kind) and only 64 configurations of dots on a 3 x 2 Braille template, so our data is quite unambiguous, allowing us to reasonably pursue a goal of 100% accuracy for our classifier.

*Table: the five decades of Braille characters, each derived from the numeric sequence (with a "shift right" variant of the sequence, and a "shift down" for the 5th decade).*

Braille is a form of written communication with a steep learning curve, and since grade 1 Braille is unambiguous (i.e., every symbol maps to one letter or meaning), computer algorithms for translating it have existed for some time, and most do not use neural networks. However, others have performed Braille translation with neural networks, sometimes using visual input data rather than our binary arrays: given an image of one or more Braille characters, these networks can determine their alphanumeric values.

To build a one-layer perceptron, we adapted source code found on GitHub to create a basic perceptron that could be trained to respond to one Braille character. Each perceptron has 6 input nodes, each corresponding to whether one dot in a 3 x 2 character is raised. These input nodes feed into one output node that outputs 1 if the correct character is input and 0 for every other character combination in the Braille alphabet. With each training example fed into the perceptron, each weight is adjusted according to the equation:

*w<sub>i</sub> ← w<sub>i</sub> + α (t − y) x<sub>i</sub>*

where *α* represents the learning rate, *x<sub>i</sub>* the input corresponding to the weight, *t* the target output, and *y* the perceptron's actual output.
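A minimal sketch of such a perceptron, including the bias input discussed below, might look like the following (the class and method names are our own illustration, not the adapted source):

```javascript
// Minimal single-perceptron sketch (illustrative names, not the original source).
class Perceptron {
  constructor(nInputs, alpha) {
    this.alpha = alpha; // learning rate
    // Random initial weights, plus one extra weight for the bias input.
    this.weights = Array.from({ length: nInputs + 1 }, () => Math.random() - 0.5);
  }

  // Step activation: output 1 if the weighted sum is positive, else 0.
  predict(inputs) {
    const withBias = [...inputs, 1]; // the bias node always feeds in 1
    const sum = withBias.reduce((acc, x, i) => acc + x * this.weights[i], 0);
    return sum > 0 ? 1 : 0;
  }

  // One training step: w_i <- w_i + alpha * (target - output) * x_i
  train(inputs, target) {
    const output = this.predict(inputs);
    const withBias = [...inputs, 1];
    withBias.forEach((x, i) => {
      this.weights[i] += this.alpha * (target - output) * x;
    });
  }
}
```

Because a one-vs-all target for a single dot pattern is linearly separable, the perceptron convergence theorem guarantees this rule reaches 100% training accuracy in finitely many updates.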

Using our perceptron class, we created a network of 63 perceptrons and trained each to respond to a different Braille character. After some experimentation with different learning rates, we found that our perceptron network still could not correctly classify all characters. To address this, we added an additional input node as a bias, assigned it a random initial weight, and let that weight be adjusted with the others through training.
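The reason the bias node is needed is easy to see: without it, the character with no raised dots always produces a weighted sum of exactly 0, so no choice of weights can ever make its perceptron fire. A small illustration (function and variable names are ours):

```javascript
// Why the bias node matters: without it, the blank cell (no raised dots)
// always yields a weighted sum of 0, regardless of the weights.
const weightedSum = (weights, inputs) =>
  inputs.reduce((sum, x, i) => sum + x * weights[i], 0);

const blank = [0, 0, 0, 0, 0, 0];
const anyWeights = [3.1, -2.0, 0.7, 5.5, -1.2, 0.4]; // arbitrary values
console.log(weightedSum(anyWeights, blank)); // always 0

// Appending a constant 1 as a bias input gives the perceptron a
// trainable threshold, so the blank cell can be classified too.
const withBias = [...blank, 1];
const biasedWeights = [...anyWeights, 0.5]; // last weight belongs to the bias
console.log(weightedSum(biasedWeights, withBias)); // 0.5
```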

For our multilayer perceptron, we decided to take an approach oriented around Node objects. Only the hidden and output node layers are actually implemented using these Node objects, because each node stores its inputs and input weights, meaning the hidden layer nodes can store all information from the input layer. To train, we feed in training data consisting of all 63 characters (again in the form of 6-slot arrays) over multiple epochs.
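A sketch of what this Node-oriented structure might look like, assuming a sigmoid activation (the names, the hidden-layer size, and the activation choice are our illustrative assumptions, not the exact implementation):

```javascript
// Sketch of a hidden/output layer node that stores its own inputs and
// input weights, as described above (illustrative names only).
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

class Node {
  constructor(nInputs) {
    // Each node owns the weights on its incoming connections.
    this.weights = Array.from({ length: nInputs }, () => Math.random() - 0.5);
    this.bias = Math.random() - 0.5;
    this.inputs = null; // remembered for the backward pass
    this.output = null;
  }

  forward(inputs) {
    this.inputs = inputs;
    const z = inputs.reduce((s, x, i) => s + x * this.weights[i], 0) + this.bias;
    this.output = sigmoid(z);
    return this.output;
  }
}

// Hidden-layer nodes take the 6-slot input array directly, so the
// input layer needs no objects of its own.
const hidden = Array.from({ length: 8 }, () => new Node(6));  // 8 is an arbitrary size
const output = Array.from({ length: 63 }, () => new Node(8)); // one node per character
```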

For each training example fed in, we use the squared-error formula

*E = ½ Σ<sub>k</sub> (t<sub>k</sub> − o<sub>k</sub>)²*

where *t<sub>k</sub>* and *o<sub>k</sub>* are the target and actual outputs of output node *k*, to calculate the total error over all output nodes. Then we back propagate through all of the previous nodes connected to each output node, adjusting each connection weight according to the extent to which that weight contributed to the error.
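One such training step can be sketched as follows, assuming sigmoid activations and gradient descent on the squared error E = ½ Σ (t − o)². All names and details here are our illustrative assumptions, not our exact code:

```javascript
// Illustrative backpropagation step for one hidden layer with sigmoid
// activations and squared error (names and details are assumptions).
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

function makeLayer(nNodes, nInputs) {
  return Array.from({ length: nNodes }, () => ({
    weights: Array.from({ length: nInputs }, () => Math.random() - 0.5),
    bias: Math.random() - 0.5,
  }));
}

const forwardLayer = (layer, inputs) =>
  layer.map((node) =>
    sigmoid(node.weights.reduce((s, w, i) => s + w * inputs[i], 0) + node.bias));

// One training step on a single example (x, target); returns the error.
function trainStep(hidden, output, x, target, alpha) {
  const h = forwardLayer(hidden, x);
  const o = forwardLayer(output, h);

  // Output-layer deltas: dE/dz_k = (o_k - t_k) * o_k * (1 - o_k)
  const deltaOut = o.map((ok, k) => (ok - target[k]) * ok * (1 - ok));

  // Hidden-layer deltas: propagate the error back through the output weights.
  const deltaHid = h.map((hj, j) => {
    const downstream = output.reduce((s, node, k) => s + deltaOut[k] * node.weights[j], 0);
    return downstream * hj * (1 - hj);
  });

  // Gradient-descent weight updates, proportional to each weight's
  // contribution to the error.
  output.forEach((node, k) => {
    node.weights = node.weights.map((w, j) => w - alpha * deltaOut[k] * h[j]);
    node.bias -= alpha * deltaOut[k];
  });
  hidden.forEach((node, j) => {
    node.weights = node.weights.map((w, i) => w - alpha * deltaHid[j] * x[i]);
    node.bias -= alpha * deltaHid[j];
  });

  // Total squared error, for monitoring training progress.
  return 0.5 * o.reduce((s, ok, k) => s + (target[k] - ok) ** 2, 0);
}
```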

Initially, our network of single-layer perceptrons was able to classify almost all of the Braille characters, with the exception of the character represented by no raised dots. After adding the bias node as an additional input, our perceptron network classified every Braille character successfully. Because we evaluated on the same data we trained on, and that data is linearly separable, we had no doubt that a properly implemented single-layer perceptron would be able to classify these characters accurately.

We have successfully created a functional single-layer perceptron network that classifies Braille characters with a 100% accuracy rate. We are continuing to develop our multi-layer perceptron, with a particular focus on creating a fully functioning back propagation algorithm suitable for this neural network – we are fairly close to this goal, having completed all other parts of the object-oriented general framework of the multi-layer network.

In its current state, our project is interesting but does not have many practical applications – other Braille-translating projects exist, and many use simpler methods than ours. One extension that would actually be quite useful is an interpreter for grade 2 and grade 3 Braille, both of which include constructs such as abbreviations, letter chunks, common words, and shorthand alongside ordinary letters. Because of these features, Braille grades 2 and 3 can have ambiguous meanings that require interpretation and contextualization by the reader, so a neural network would be well suited to this problem.

Info on Braille

A walkthrough of the math behind the back propagation algorithm

A previous study using neural nets to classify the Serbian Cyrillic alphabet

A good walkthrough of neural nets

An intense explanation of advanced neural networks

Source code for a single-layer perceptron


We have neither given nor received unauthorized aid on this website.

*- 12/8/16*