Stability analysis for a recurrent sigma-pi-sigma neural network based on a batch gradient algorithm with L2 regularization
Abstract
Higher-order neural networks (HONNs) have more powerful nonlinear mapping capabilities than traditional feed-forward neural networks. The recurrent sigma-pi-sigma neural network (RSPSNN), a multilayer higher-order neural network, suffers from slow convergence and poor generalization when trained with a traditional gradient learning algorithm. To overcome these drawbacks, this work presents a method for accelerating the training of the RSPSNN based on L2 regularization, and the results show that the L2 regularization term effectively accelerates the convergence of the network during training. In addition, we rigorously prove the convergence and stability of the proposed algorithm, and its performance is evaluated via classification and approximation experiments, which confirm the theoretical results of this paper.
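As a rough illustration of the training scheme described above (a minimal sketch using generic notation, not necessarily the paper's own symbols), an L2-regularized batch gradient step augments the batch training error $\tilde{E}(\mathbf{w})$ with a penalty on the weight norm and descends the combined objective:
\[
E(\mathbf{w}) = \tilde{E}(\mathbf{w}) + \frac{\lambda}{2}\,\|\mathbf{w}\|^{2},
\qquad
\mathbf{w}^{k+1} = \mathbf{w}^{k} - \eta\,\nabla E(\mathbf{w}^{k})
= \mathbf{w}^{k} - \eta\left(\nabla \tilde{E}(\mathbf{w}^{k}) + \lambda\,\mathbf{w}^{k}\right),
\]
where $\eta > 0$ is the learning rate and $\lambda \geq 0$ the regularization coefficient. The extra $\lambda\,\mathbf{w}^{k}$ term shrinks the weights at every iteration, which is the standard mechanism behind the boundedness of the weight sequence assumed in convergence analyses of this kind.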