A New Gummel Iterative Algorithm Based on Gaussian Process Regression for the PNP Equation
Ao Yuyan, Yang Ying*
Guilin University of Electronic Technology, School of Mathematics and Computing Science & Guangxi Applied Mathematics Center (GUET) & Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin, Guangxi 541004
NSFC (12161026); Special Fund for Scientific and Technological Bases and Talents of Guangxi (AD23026048); Guangxi Natural Science Foundation (2020GXNSFAA159098); Science and Technology Project of Guangxi (AD23023002)
The PNP equations are a class of nonlinear partial differential equations that couple the Poisson equation with the Nernst-Planck equations, and the efficiency of the Gummel iteration, a commonly used linearization method for this system, depends strongly on the choice of relaxation parameter. Because Gaussian process regression (GPR), a machine learning method, requires only a small training set and does not need an explicit functional relationship, it is applied in this paper to predict suitable relaxation parameters for the Gummel iteration and thereby accelerate its convergence. First, a GPR method for predicting the relaxation parameter is designed for the Gummel iteration of the PNP equations. Second, the Box-Cox transformation is used to preprocess the Gummel iteration data and improve the accuracy of the GPR method. Finally, combining the GPR method with the Box-Cox transformation, a new Gummel iteration algorithm for the PNP equations is proposed. Numerical experiments show that, compared with the classical Gummel iteration, the new algorithm solves the system more efficiently while retaining the same order of convergence.
Ao Yuyan, Yang Ying. A New Gummel Iterative Algorithm Based on Gaussian Process Regression for the PNP Equation[J]. Acta Mathematica Scientia, 2024, 44(5): 1301-1309
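To make the two ingredients of the abstract concrete, the following minimal Python sketch (not the authors' implementation) shows how a relaxation parameter could be predicted with Gaussian process regression after Box-Cox preprocessing of the training targets, using scikit-learn and SciPy; the choice of features (mesh size and a residual measure), the numerical values, and all variable names are illustrative assumptions.

```python
# Minimal sketch (assumptions ours): predict a relaxation parameter for the
# Gummel iteration from previously observed (problem feature, good omega) pairs,
# preprocessing the targets with a Box-Cox transform before fitting the GPR.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: each row is a feature vector (mesh size h,
# a residual norm), each target is the relaxation parameter that performed
# well in earlier Gummel runs.
X_train = np.array([[0.1000, 1.2e-1],
                    [0.0500, 6.0e-2],
                    [0.0250, 2.9e-2],
                    [0.0125, 1.4e-2]])
omega_train = np.array([0.82, 0.74, 0.63, 0.55])

# Box-Cox requires positive data; it reshapes the target distribution so the
# Gaussian assumption behind GPR fits better.
omega_bc, lam = boxcox(omega_train)

# Standard GPR with an RBF kernel; hyperparameters are tuned by maximizing
# the marginal likelihood inside scikit-learn.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, omega_bc)

# Predict a relaxation parameter for a new (finer) mesh and map it back
# through the inverse Box-Cox transform.
X_new = np.array([[0.00625, 7.0e-3]])
omega_pred = inv_boxcox(gpr.predict(X_new), lam)
print("predicted relaxation parameter:", float(omega_pred[0]))
```

In the new algorithm the predicted value would then be fed into the next Gummel solve; only the GPR-plus-Box-Cox pattern is meant to carry over from the paper, not these specific features or data.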
$$
\begin{aligned}
& D_p(\nabla p, \nabla v) + D_p(p\,\nabla u, \nabla v) = (f_p, v), && \forall v \in H_{0}^{1}(\Omega),\\
& D_n(\nabla n, \nabla v) + D_n(n\,\nabla u, \nabla v) = (f_n, v), && \forall v \in H_{0}^{1}(\Omega),\\
& -(\nabla u, \nabla w) = \bigg(\frac{e_c^2\beta}{\varepsilon_0\varepsilon_s}(p-n), w\bigg), && \forall w \in H_{0}^{1}(\Omega).
\end{aligned}
$$
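This weak form is the system that the Gummel iteration decouples. For orientation, one relaxed Gummel step may be sketched as below; the superscript $k$, the intermediate potential $\tilde{u}^{\,k+1}$ and the relaxation parameter $\omega$ are our notation for a commonly used damping scheme, not necessarily the exact update rule of the paper.

$$
\begin{aligned}
&\text{(i)}\;\; -(\nabla \tilde{u}^{\,k+1}, \nabla w) = \bigg(\frac{e_c^2\beta}{\varepsilon_0\varepsilon_s}(p^{k}-n^{k}), w\bigg), && \forall w \in H_{0}^{1}(\Omega),\\
&\text{(ii)}\;\; u^{k+1} = (1-\omega)\,u^{k} + \omega\,\tilde{u}^{\,k+1}, && 0<\omega\le 1,\\
&\text{(iii)}\;\; D_p(\nabla p^{k+1}, \nabla v) + D_p(p^{k+1}\nabla u^{k+1}, \nabla v) = (f_p, v), && \forall v \in H_{0}^{1}(\Omega),\\
&\text{(iv)}\;\; D_n(\nabla n^{k+1}, \nabla v) + D_n(n^{k+1}\nabla u^{k+1}, \nabla v) = (f_n, v), && \forall v \in H_{0}^{1}(\Omega).
\end{aligned}
$$

The relaxation parameter $\omega$ in step (ii) is the quantity the GPR model is trained to predict: too small a value slows convergence, while too large a value can make the iteration diverge.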