Changes in Artificial Neural Network Learning Parameters and Their Impact on Modeling Error Reduction

Document Type: Research Paper


Affiliation: Independent scholar


The main objective of this research is to investigate the effect of neural network architecture parameters on model behavior. Architectural factors such as the training algorithm, the number of hidden-layer neurons, and the design of the data-set partitions used during training were varied, and their effect on model output was examined. A database was developed for modeling with a multi-layer perceptron. Three training algorithms were employed: Bayesian Regularization (BR), Scaled Conjugate Gradient (SCG), and Levenberg-Marquardt (LM). Models were selected by a trial-and-error approach, using the lowest error rate and the data regression as criteria. The results showed that models that greatly reduce the error have less generalizability. The BR algorithm with a 15-15-70 data-set design (for the test, validation, and training partitions, respectively) reduced error better than the other algorithms, but generalized poorly. In contrast, the LM algorithm generalized better than the other two algorithms. Data analysis shows that, in most cases, when the proportions of data dedicated to the test and validation partitions change (increase or decrease), the model requires more neurons in order to reduce errors.
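The selection procedure described above (a fixed test/validation/training split plus trial and error over the hidden-layer neuron count) can be sketched as follows. This is a minimal illustration, not the paper's actual experiment: the synthetic data set and the neuron counts tried are assumptions, and scikit-learn's MLPRegressor does not implement the BR, SCG, or LM training algorithms (those are typically found in MATLAB's Neural Network Toolbox), so its default Adam solver stands in here. Only the split design and the lowest-validation-error selection criterion are taken from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic database standing in for the paper's modeling data (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)

# 15-15-70 design: 70% training, 15% validation, 15% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=0)

# Trial and error over hidden-layer neuron counts (counts are illustrative);
# keep the model with the lowest validation error.
best = None
for n_neurons in (5, 10, 20):
    model = MLPRegressor(hidden_layer_sizes=(n_neurons,),
                         max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    if best is None or val_mse < best[0]:
        best = (val_mse, n_neurons, model)

val_mse, n_neurons, model = best
# Test error on the held-out 15% measures generalizability.
test_mse = mean_squared_error(y_test, model.predict(X_test))
print(n_neurons, val_mse, test_mse)
```

The gap between validation and test error in such a loop is one way to observe the abstract's central finding: a configuration tuned aggressively for low error on the data seen during selection may generalize worse on the held-out test partition.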