When you're training a network with BrainMaker, the number of hidden neurons is critical. Too few, and there's not enough available "brain" to learn your problem. Too many, and the network "memorizes" instead of "learns". Here's how to tell:
Turn on Display Histograms; we leave this on essentially always. If, at the end of training, your histograms are still bell-curve shaped, you can almost certainly reduce the number of hidden neurons, which will very likely improve your network's predictive powers. If your histograms are relatively flat, you're probably very close to the optimum number of hidden neurons. If your histograms are bunched up at the left and/or right side of the graph, with little near the middle, your network is probably already brain-dead and will never learn. You need to add hidden neurons, or investigate your data more closely.
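BrainMaker draws these histograms for you, but if you want to reproduce the same diagnostic outside BrainMaker, here is a minimal Python sketch using scikit-learn and matplotlib. Everything in it (the synthetic data, the 6-neuron hidden layer, the bin count) is an illustrative assumption, not part of BrainMaker; a histogram of the hidden-layer weights is the analogue of the display described above.

    import matplotlib.pyplot as plt
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Illustrative data set; substitute your own training facts.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    # A small network with one hidden layer of 6 neurons (assumed size).
    net = MLPClassifier(hidden_layer_sizes=(6,), max_iter=1000, random_state=0)
    net.fit(X, y)

    # net.coefs_[0] holds the input-to-hidden weights; their histogram
    # plays the role of BrainMaker's weight histogram display.
    plt.hist(net.coefs_[0].ravel(), bins=30)
    plt.title("Hidden-layer weight histogram")
    plt.xlabel("weight value")
    plt.ylabel("count")
    plt.show()

    # Bell curve -> probably too many hidden neurons; roughly flat -> about
    # right; piled up at the extremes -> saturated, add neurons or check data.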
The most important thing at first is to get the network to train at all. If you're having trouble, try liberally adding hidden neurons, up to about three times as many hidden neurons as you have inputs. After you've successfully trained, try again while cutting back the number of neurons, aiming for a network with better predictive powers; a sketch of this cut-back loop follows.
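In BrainMaker itself you would change the hidden-neuron count through the menus and retrain; the same cut-back procedure can be sketched in Python with scikit-learn. The step size, the held-out split, and accuracy as the stand-in for "predictive powers" are all assumptions for illustration:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    n_inputs = X.shape[1]
    # Start at about three times the number of inputs, then cut back.
    for n_hidden in range(3 * n_inputs, 0, -2):
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                            max_iter=1000, random_state=0)
        net.fit(X_train, y_train)
        # Prefer the smallest network whose held-out score hasn't fallen off.
        print(n_hidden, round(net.score(X_test, y_test), 3))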
If this doesn't work, you probably have problems with your data.
Kolmogorov, a Russian mathematician, showed in the 1950s that the correct number of hidden layers is always either one or two. In our experience, one is the correct number about 85% of the time, and two about 15%. Using more than two layers of hidden neurons is a complete waste of time. We allow up to eight hidden layers. It's also OK with us if you toss your PC into a swimming pool, but this will not improve its performance.
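For the curious, the result usually cited here is Kolmogorov's superposition theorem (1957): any continuous function of $n$ variables on the unit cube can be written as

$$f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),$$

where the $\phi_{q,p}$ are continuous functions that do not depend on $f$, and the $\Phi_q$ are continuous functions that do. Read as a network, the inner sums form one hidden layer and the outer sum a second, which is where the "one or two layers is always enough" rule comes from.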