Speaker: Prof. Lu Zhiqin (University of California, Irvine)
Abstract: In this talk, we give a mathematical setting for the Random Backpropagation (RBP) method in unsupervised machine learning. When the neural network has no hidden layer, the method degenerates to the usual least-squares method. When there are multiple hidden layers, the learning procedure can be formulated as a system of nonlinear ODEs. We prove short-time and long-time existence, as well as convergence, for this system when there is only one hidden layer. This is joint work with Pierre Baldi (Neural Networks 33 (2012), 136-147) and with Pierre Baldi and Peter Sadowski (Neural Networks 95 (2017), 110-133; Artificial Intelligence 260 (2018), 1-35).
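The no-hidden-layer degeneration mentioned in the abstract can be illustrated with a small numerical sketch (my own hypothetical example, not code from the papers): for a single linear layer, the learning dynamics are plain gradient descent on a least-squares objective, and no backpropagated (or random) feedback matrices are involved, so the weights converge to the least-squares solution.

```python
import numpy as np

# Hypothetical illustration: with no hidden layer, the learning rule is
# gradient descent on the least-squares objective ||W X - T||^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 50))     # inputs: 3 features, 50 samples
W_true = rng.normal(size=(2, 3))
T = W_true @ X                   # targets from a linear teacher

W = np.zeros((2, 3))
lr = 1e-2
for _ in range(5000):
    # Delta rule: the output error drives the update directly;
    # there is no deeper layer for feedback weights to reach.
    E = W @ X - T
    W -= lr * (E @ X.T) / X.shape[1]

# Closed-form least-squares solution for comparison
W_ls = T @ np.linalg.pinv(X)
print(np.allclose(W, W_ls, atol=1e-6))
```

The gradient flow of this objective is exactly the kind of ODE system the abstract describes; with hidden layers, the corresponding equations become nonlinear and their existence and convergence require the analysis presented in the talk.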