
Neural Network Algorithm Decoding and Sequence Predictor

James R. Johnson
Netrologic, Inc., 5200 Springfield Pike, Suite 312, Dayton, OH 45431

Abstract

The ability to predict data sequences is important in data transmission because of the need to provide error correction. Certain algorithms can predict repetitive code with good accuracy but fail in the presence of noisy code sequences. A back propagation neural network was trained on noisy data and was able to predict code sequences with much better than random accuracy, based on statistics that it extracted autonomously from the structure of the code data. Accuracies from 62% to 93% correct were obtained, depending upon the initial conditions and the presence or absence of noise. Higher accuracies could probably be obtained by training a network with a wider variety of training samples.

Introduction

The ability to predict sequences is important in many applications. This is not particularly difficult when the underlying algorithm which produces the sequence is understood. When the algorithm which produces the sequence is unknown, however, decoding the sequence can be very difficult. The National Security Agency maintains a large suite of supercomputers to generate statistics which serve to break repetitive codes. Given enough computer power and time, this approach is very effective for breaking codes. Similar situations arise in computer communications, where it is desirable to decode incoming bit strings produced by an unknown algorithm so that the receiving station can perform error correction. In this situation, one is again faced with the monumental task of gathering statistics to deduce the underlying algorithm. The situation is compounded by the fact that the incoming data string may be corrupted by noise: the statistics of the encoding algorithm will be partially masked by the deleterious influence of stochastic noise.

Because neural networks in effect compute statistics and metrics from the raw data autonomously, we decided to try an experiment using neural networks to see if they could decode the unknown algorithm generating the serial bit string. We were encouraged by the results of Lapedes and Farber (1987), who used a neural network to deduce the underlying algorithm from a data string generated by a tenth-order equation that produces a chaotic sequence. Since then, other researchers such as White (1988) and Greenwood (1989) have demonstrated a neural net's ability to decode chaotic or pseudorandom strings in applications such as the stock market and predicting the next frequencies of a frequency-hopping transmitter.

The Problem

It is fairly straightforward (though it may be computationally intensive) to compute the kernel of a sequence from clean data. It is quite another problem to compute the underlying algorithm in the presence of noise. We assigned the problem to a back propagation network with 100 inputs, two layers of ten hidden units each, and one output unit. The network was given an input of 100 bits generated using the algorithm in Figure 1 and was asked to predict what the 101st bit should be in that sequence, with no explicit knowledge of how the string was formed. The equation used to generate the bits is b(a) = b(a-3) XOR b(a-31), where 32 ≤ a ≤ 100.
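
The following is a minimal sketch, in Python, of the bit generator just described (the function names and structure are ours; the original work prepared its data as BrainMaker training facts, not Python):

    import random

    def generate_sequence(seed_bits, length=101):
        # Extend a 31-bit seed using the recurrence b(a) = b(a-3) XOR b(a-31).
        # The first 100 bits form the network input; bit 101 is the target.
        assert len(seed_bits) == 31
        bits = list(seed_bits)
        while len(bits) < length:
            a = len(bits)                      # 0-based index of the bit being generated
            bits.append(bits[a - 3] ^ bits[a - 31])
        return bits

    # Example: one training fact (100 input bits, one target bit).
    seed = [random.randint(0, 1) for _ in range(31)]
    sequence = generate_sequence(seed)
    inputs, target = sequence[:100], sequence[100]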

The 31-bit seed is an arbitrary random series chosen by the author. This algorithm was used to generate a set of 1000 training facts to train a back propagation net. The first data sets were generated as groups of correlated data; that is, five sets of 100 bits were produced using the algorithm in Figure 1 and a 31-bit seed that was identical except that it was shifted right one additional position for each subsequent set, giving five separate sets of 100 bits. Then a new random 31-bit seed was generated and five more correlated 100-bit sets were produced. The network was trained using California Scientific's BrainMaker, which learned all 1000 training facts to within the 0.1 threshold after fourteen hours of training on a PC/XT. A statistical confidence factor was applied to the results to determine whether the net was predicting next bits with a greater than 99% probability of being better than random. The equation used was CF = 0.5 + 1.29/√N, where N ≥ 7; in our case N = 1000.
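
The correlated training-set construction and the confidence threshold could be sketched as follows, reusing generate_sequence from the sketch above (the direction of the seed shift and the √N reading of the confidence formula are our interpretation):

    import math
    import random

    def shifted_seed(seed, shift):
        # Rotate the 31-bit seed right by `shift` positions (assumed interpretation
        # of "shifted right one additional position").
        s = shift % len(seed)
        return seed[-s:] + seed[:-s] if s else list(seed)

    def build_correlated_training_set(n_facts=1000, group_size=5):
        # Each group of five facts shares one random seed, shifted one more
        # position for each subsequent member of the group.
        facts = []
        while len(facts) < n_facts:
            base = [random.randint(0, 1) for _ in range(31)]
            for shift in range(group_size):
                seq = generate_sequence(shifted_seed(base, shift))
                facts.append((seq[:100], seq[100]))
        return facts[:n_facts]

    def confidence_threshold(n):
        # CF = 0.5 + 1.29 / sqrt(N), valid for N >= 7: the fraction correct
        # needed to beat chance with roughly 99% confidence.
        return 0.5 + 1.29 / math.sqrt(n)

    print(confidence_threshold(1000))   # about 0.541 for the 1000 training facts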

To test the trained network, we generated a database of 500 test sequences of 100 bits each. The network got 468 out of 500 correct, using a threshold of 0.5 to determine correct responses; with very high probability this result is better than random. Next, 10% noise was introduced into the training data and the net was trained on it until it could correctly predict all 1000 training facts. The 101st bit in the test set of 500 sequences was then correctly predicted 376 out of 500 times with a 0.5 tolerance.
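
One plausible way to corrupt the data with 10% noise and to score predictions against the 0.5 threshold (the paper does not give its exact noise mechanism, so this is our own sketch):

    import random

    def add_noise(bits, rate=0.10):
        # Flip each bit independently with probability `rate` (assumed noise model).
        return [b ^ 1 if random.random() < rate else b for b in bits]

    def score(outputs, targets, threshold=0.5):
        # A prediction counts as correct when the network output falls on the
        # target's side of the 0.5 threshold.
        correct = sum((out >= threshold) == (t == 1) for out, t in zip(outputs, targets))
        return correct, len(targets)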

To make the problem more difficult, a network was next trained on a set in which each of the 1000 training facts was generated with a different 31-bit seed. These were used first without noise and produced a net which got 326 out of 500 correct. Corrupting the training set with 10% random noise produced a net which got 314 correct out of 500 noisy test sets. The noisy data, coupled with the fact that this net was also trained on uncorrelated data (the 31-bit seeds were unrelated for each set), greatly lengthened the training time. It took 43 hours and 45 minutes on a PC/XT for the net to learn this worst-case training set (i.e., noise plus uncorrelated data).
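
BrainMaker's internals are not described here; as a rough functional stand-in, a plain NumPy back propagation net with the stated 100-10-10-1 shape might look like the following (the learning rate, weight initialization, and squared-error loss are our assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class BackpropNet:
        # Back propagation net with the paper's shape: 100 inputs,
        # two hidden layers of 10 units, and one output unit.
        def __init__(self, sizes=(100, 10, 10, 1), lr=0.5):
            self.lr = lr
            self.W = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
            self.b = [np.zeros(n) for n in sizes[1:]]

        def forward(self, x):
            acts = [np.asarray(x, dtype=float)]
            for W, b in zip(self.W, self.b):
                acts.append(sigmoid(acts[-1] @ W + b))
            return acts

        def train_step(self, x, target):
            # One online update with squared error; returns |output - target|.
            acts = self.forward(x)
            delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
            for i in reversed(range(len(self.W))):
                grad_W = np.outer(acts[i], delta)
                grad_b = delta.copy()
                if i > 0:
                    delta = (delta @ self.W[i].T) * acts[i] * (1.0 - acts[i])
                self.W[i] -= self.lr * grad_W
                self.b[i] -= self.lr * grad_b
            return float(abs(acts[-1][0] - target))

    # Training would loop over the facts with train_step until every fact's
    # error is within the 0.1 tolerance, as in the paper.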

Conclusions

Neural networks can deduce the function of an underlying algorithm that produces a binary string, enabling them to decode strings produced by linear feedback shift registers with much better than random accuracy. The noisier the data the net is trained on, the less accurate the trained net will be. Using many correlated short training sequences allows a network to learn faster. Different architectures or methods of presenting the data might produce better results. Quicker convergence may be obtained by adding first and second derivatives of the bit string to the inputs of the neural net, as sketched below. Accuracy might be improved by increasing the number of training cases.
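
For example, the suggested derivative inputs could be approximated with first and second differences of the bit string (a hypothetical elaboration of the suggestion above, not something the paper tested):

    import numpy as np

    def with_derivatives(bits):
        # Append first and second differences of the 100-bit input string as a
        # stand-in for "first and second derivatives" of the sequence.
        x = np.asarray(bits, dtype=float)
        d1 = np.diff(x)        # 99 first differences
        d2 = np.diff(x, n=2)   # 98 second differences
        return np.concatenate([x, d1, d2])   # 297 inputs instead of 100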

References

1. Greenwood, Daniel. "Using Neural Networks to Recognize Agile Emitters: A Feasibility Analysis", Final Report on Phase I Navy SBIR N-240, September 1989.

2. Lapedes, Alan and Robert Farber. "Nonlinear Signal Processing Using Neural Networks: Prediction and System Modelling", Los Alamos National Laboratory, Los Alamos, NM, Technical Report LA-UR-87-2662, July 1987.

3. White, Hal. "Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns", IEEE International Conference on Neural Networks, Vol. II, pp. 451-458, San Diego, CA, July 1988.