|
Meta-Learning-Based Physics-Informed Neural Network: Numerical Simulations of Initial Value Problems of Nonlinear Dynamical Systems without Labeled Data and Correlation Analyses |
|
Keywords: Physics-informed neural network, meta-learning, fine reinitialization, transfer learning, initial value problem, nonlinear dynamical systems, correlation analyses
Author Name | Affiliation
Worrawat Duanyai | Department of Robotic and Computational Intelligence System, School of Engineering, King Mongkut's Institute of Technology Ladkrabang, Chalong Krung 1 Rd., 10520 Bangkok, Thailand
Weon Keun Song | Department of Robotic and AI Engineering, School of Engineering, King Mongkut's Institute of Technology Ladkrabang, Chalong Krung 1 Rd., 10520 Bangkok, Thailand
Thanadol Chitthamlerd | Department of Computer Engineering, Chulalongkorn University, Phaya Thai Rd, Wang Mai, 10330 Bangkok, Thailand
Girish Kumar | Department of Mechanical Engineering, Delhi Technological University, Shahbad Daulatpur, Bawana Road, 110042 Delhi, India
|
Abstract: |
Solving nonlinear differential equations with artificial neural networks (ANNs) faces several key challenges, including a nonlinear system's sensitivity to its initial values, discretization, and strategies for incorporating physics-based information into the network. Addressing the first of these, this paper treats initial value problems of nonlinear dynamical systems (a Duffing oscillator and the Burgers' equation), which produce large global truncation errors in sub-domains where the influence of the initial constraints has significantly diminished, using meta-learning-based physics-informed neural networks (MPINNs). MPINNs with dual learners outperform physics-informed neural networks (PINNs) with a single learner, which lack a fine-reinitialization capability. The former improves solution convergence by 98.83% in sub-time domain (III) of the Duffing oscillator and by 85.89% at $t = 45$ in the Burgers' equation problem, compared to the latter. Model accuracy depends strongly on the adaptability of the initial parameters in the first hidden layers of the meta-models. The correlation analyses show that these parameters become less correlated (Duffing oscillator) or more correlated (Burgers' equation) during fine reinitialization, depending on whether the update pattern differs from or resembles the one used in pre-initialization. In the first example, the MPINN both mitigates the model's sensitivity to its output and improves model accuracy. In the second example, by contrast, the proposed approach cannot resolve both issues simultaneously, since higher model accuracy comes with increased sensitivity of the model to its output. Applying transfer learning reduces the number of iterative pre-meta-training runs.
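For context, the two benchmark systems named in the abstract have the following standard forms (the specific coefficients, domains, and initial conditions used by the authors are not stated here and are assumed): the forced Duffing oscillator, $\ddot{x} + \delta \dot{x} + \alpha x + \beta x^{3} = \gamma \cos(\omega t)$, and the viscous Burgers' equation, $u_t + u u_x = \nu u_{xx}$.

The sketch below illustrates, in Python with PyTorch, how a physics-informed loss for the Duffing oscillator (residual plus soft initial-condition penalties, no labeled data) can be combined with a Reptile-style meta-update. This is a minimal illustration of the general meta-learning-plus-PINN idea, not the authors' MPINN: the network size, coefficient values, the task sampler tasks, and the Reptile update itself are assumptions made for illustration, and the paper's dual-learner pre-initialization and fine-reinitialization scheme is not reproduced.

# Minimal, illustrative sketch: physics-informed loss for a Duffing oscillator
# plus a Reptile-style meta-update. Sizes, coefficients, and the task sampler
# are hypothetical; this is not the authors' MPINN implementation.
import copy
import torch

def make_net():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

def duffing_residual(net, t, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    # Residual of x'' + delta x' + alpha x + beta x^3 - gamma cos(omega t),
    # computed with automatic differentiation (no labeled data needed).
    t = t.requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
    d2x = torch.autograd.grad(dx, t, torch.ones_like(dx), create_graph=True)[0]
    return d2x + delta * dx + alpha * x + beta * x**3 - gamma * torch.cos(omega * t)

def physics_loss(net, t, x0=1.0, v0=0.0):
    # ODE residual plus soft penalties on the initial position and velocity.
    t0 = torch.zeros(1, 1, requires_grad=True)
    x_t0 = net(t0)
    v_t0 = torch.autograd.grad(x_t0, t0, torch.ones_like(x_t0), create_graph=True)[0]
    return (duffing_residual(net, t).pow(2).mean()
            + (x_t0 - x0).pow(2).mean() + (v_t0 - v0).pow(2).mean())

def reptile_meta_train(meta_net, tasks, outer_steps=100, inner_steps=20,
                       inner_lr=1e-3, meta_lr=0.1):
    # Pre-initialization stage: adapt a copy of the meta-model to each sampled
    # sub-domain / initial-condition task, then move the meta-parameters
    # toward the adapted parameters (Reptile update).
    for _ in range(outer_steps):
        t_batch, x0, v0 = tasks()          # hypothetical task sampler
        learner = copy.deepcopy(meta_net)
        opt = torch.optim.Adam(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            physics_loss(learner, t_batch, x0, v0).backward()
            opt.step()
        with torch.no_grad():
            for p_meta, p_task in zip(meta_net.parameters(), learner.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return meta_net  # starting point for fine-tuning on a new initial value problem

The returned meta-parameters would then be fine-tuned on the target sub-domain, playing a role loosely analogous to the paper's fine reinitialization; the choice of a Reptile-style outer update here is purely for brevity, since it avoids second-order gradients.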
|
|
|