Numerical Methods for Partial Differential Equations Seminar
Wednesday, December 13, 2017 at 4:30pm to 5:30pm
SPEAKER: Vsevolod Avrutsky (Moscow Institute of Physics and Technology)
TITLE: Neural networks catching up with finite differences
in solving partial differential equations in higher dimensions
ABSTRACT: Deep neural networks for solving partial differential equations.
From a mathematical point of view, a neural network is a smooth function that depends on an input vector as well as on the weights between its neurons, and all derivatives of the output with respect to the input are available analytically. If we treat the input as coordinates in 2D or 3D space, we can "put" the neural network into any differential equation and calculate the residual error produced by a given set of initial weights. An iterative training process then minimizes that error and yields a function that obeys the equation to the required precision. The solution is defined not by values on a grid but by the parameters (weights) of a smooth function, so it is easy to add or remove mesh points during solving. The method benefits from the strong interpolating abilities of deep neural networks and produces solutions of linear and nonlinear PDEs with nearly machine precision over the whole region of space using very sparse grids. Future generalizations will most likely be able to solve equations in up to six dimensions.
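The residual-minimization idea in the abstract can be sketched on a toy 1D problem. The example below is a minimal illustration, not the speaker's implementation: the network size, the plain gradient descent optimizer (with numerical gradients for brevity), and the test equation u' + u = 0 with u(0) = 1 (exact solution exp(-x)) are all illustrative assumptions. Because the network is a closed-form smooth function, its derivative with respect to the input is available analytically, so the equation residual can be evaluated exactly on a sparse set of collocation points.

```python
import numpy as np

# Illustrative sketch (not the speaker's code): train a tiny network
#   u(x) = sum_i v_i * tanh(w_i * x + b_i)
# so that it satisfies the ODE u'(x) + u(x) = 0 with u(0) = 1,
# whose exact solution is exp(-x).

rng = np.random.default_rng(0)
n_hidden = 8
params = 0.5 * rng.standard_normal(3 * n_hidden)  # [w, b, v] concatenated
xs = np.linspace(0.0, 2.0, 11)                    # sparse collocation grid

def unpack(p):
    return p[:n_hidden], p[n_hidden:2 * n_hidden], p[2 * n_hidden:]

def u(p, x):
    w, b, v = unpack(p)
    return np.tanh(np.outer(x, w) + b) @ v

def du_dx(p, x):
    # Exact derivative of the network w.r.t. its input:
    # d/dx tanh(w*x + b) = w * (1 - tanh(w*x + b)**2)
    w, b, v = unpack(p)
    return (1.0 - np.tanh(np.outer(x, w) + b) ** 2) @ (w * v)

def loss(p):
    residual = du_dx(p, xs) + u(p, xs)            # equation residual on grid
    boundary = u(p, np.array([0.0]))[0] - 1.0     # enforce u(0) = 1 by penalty
    return np.mean(residual ** 2) + boundary ** 2

def num_grad(p, eps=1e-6):
    # Central-difference gradient; real implementations use backpropagation.
    g = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (loss(p + dp) - loss(p - dp)) / (2 * eps)
    return g

for _ in range(3000):
    params -= 0.05 * num_grad(params)

x_test = np.linspace(0.0, 2.0, 41)                # finer grid than training
err = np.max(np.abs(u(params, x_test) - np.exp(-x_test)))
print(f"max error vs exp(-x): {err:.2e}")
```

Note that the trained solution is evaluated on a finer grid than it was trained on: since the solution is a smooth function rather than a table of grid values, it interpolates between the sparse collocation points for free, which is the property the abstract highlights.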