## An Inertial Conjugate Gradient Projection Method for the Split Feasibility Problem

Jian Jinbao, Dai Yu, Yin Jianghua*

School of Mathematics and Physics, Center for Applied Mathematics of Guangxi, Guangxi Minzu University, Nanning 530006

Fund supported: Natural Science Foundation of Guangxi (2023GXNSFBA026029); Guangxi Science and Technology Program (AD23023001); Middle-aged and Young Teachers' Basic Ability Promotion Project of Guangxi (2023KY0168); Research Project of Guangxi Minzu University (2022KJQD03)

Abstract

Based on a system of convex constrained nonlinear monotone equations equivalent to the split feasibility problem, this paper proposes a novel inertial conjugate gradient projection method. The presented method neither computes the maximum eigenvalue of the matrix ${A^\top}A$ nor performs multiple complex projections per iteration. Under mild conditions, the global convergence of the proposed method is proved and its rate of convergence is analyzed. Numerical experiments show that the proposed method is efficient and robust.

Keywords: Split feasibility problem; Inertial technique; Conjugate gradient projection algorithm; Global convergence; Convergence rate

Jian Jinbao, Dai Yu, Yin Jianghua. An Inertial Conjugate Gradient Projection Method for the Split Feasibility Problem[J]. Acta Mathematica Scientia, 2024, 44(4): 1066-1079

## 1 Introduction

$$\mbox{Find}\ x\in C\ \mbox{such that}\ Ax\in Q,$$

$$\gamma_k =\frac{{\rho_k}}{\|\nabla f(x_k)\|},$$

$$\gamma_k=\frac{{\rho_k}{f(x_k)}}{\|\nabla f(x_k)\|^2}.$$

$$\gamma_k{\|\nabla f(x_k)-\nabla f(y_k)\|}\leq\mu{\|x_k-y_k\|},$$

$$\alpha_k=\left\{\begin{array}{ll} \min \bigg\{\alpha,\frac{1}{k^2\|x_k-x_{k-1}\|^2}\bigg\}, & x_k\neq x_{k-1},\\ 0, & \mbox{otherwise}, \end{array}\right.$$

$$y_k=x_k+\alpha_k(x_k-x_{k-1}).$$
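The inertial step above, together with the step-size rule for $\alpha_k$, can be sketched in a few lines of code. This is an illustrative sketch, not the paper's implementation; `alpha_bar` stands for the bound $\alpha$ in the rule, and its value here is an arbitrary choice:

```python
import numpy as np

def inertial_point(x_k, x_prev, k, alpha_bar=0.5):
    """Inertial extrapolation y_k = x_k + alpha_k * (x_k - x_{k-1}), with
    alpha_k = min{alpha_bar, 1 / (k^2 * ||x_k - x_{k-1}||^2)} if x_k != x_{k-1},
    and alpha_k = 0 otherwise.  Returns (y_k, alpha_k)."""
    diff = x_k - x_prev
    sq = diff @ diff
    if sq == 0.0:
        alpha_k = 0.0
    else:
        alpha_k = min(alpha_bar, 1.0 / (k**2 * sq))
    return x_k + alpha_k * diff, alpha_k
```

Note that the rule guarantees $\alpha_k\|x_k-x_{k-1}\|^2\leq 1/k^2$ regardless of how large $\|x_k-x_{k-1}\|$ is, which is exactly the summability used later in the convergence analysis.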

If $\|\nabla f(y_k)\|\leq\varepsilon$, terminate; otherwise, go to Step 2.

$$d_k=\left\{ \begin{array}{ll} -\nabla f(y_0), & k=0;\\ -\nabla f(y_k)+\beta_kd_{k-1},& k\geq1, \end{array}\right.$$

$$\beta_k=\frac{\|\nabla f(y_k)\|^2-\theta_k{\frac{(\nabla f(y_k)^\top\nabla f(y_{k-1}))^2}{\|\nabla f({y_{k-1}})\|^2}}}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}},$$

$$-{\nabla f(y_k+\varsigma{\rho}^id_k)}^\top d_k\geq\sigma\varsigma{\rho}^i\|d_k\|^2.$$

$z_k=y_k+t_kd_k.$

$$x_{k+1}=P_C{[y_k-\gamma\xi_k\nabla f(z_k)]},$$

$$\xi_k=\frac{\nabla f(z_k)^\top(y_k-z_k)} {\|\nabla f(z_k)\|^2},$$

Set $k:=k+1$ and go to Step 1.

$$\beta_k=\frac{(1-\theta_k)\|\nabla f(y_k)\|^2+\theta_k(\|\nabla f(y_k)\|^2-{\frac{(\nabla f(y_k)^\top\nabla f(y_{k-1}))^2}{\|\nabla f({y_{k-1}})\|^2}})}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}}.$$

(2) In (2.5), the parameter $\gamma$ is called the relaxation factor; an appropriate choice of its value can accelerate the numerical convergence of the projection-contraction method.

$$(\nabla f(x)-\nabla f(y))^\top(x-y)\geq0,\ \forall\ x,\ y\in\mathbb{R}^{n};$$

(iii) [4, Lemma 8.1] The mapping $\nabla f$ is $L$-Lipschitz continuous on $\mathbb R^n$ with constant $L=\|A\|^2>0$, i.e.,

$$\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|,\ \forall\ x,\ y\in \mathbb{R}^{n}.$$

Proposition 3.2 [3, Proposition 3.1] Suppose the solution set $\Gamma$ of the SFP satisfies $\Gamma\neq\emptyset$. Then the following statements are equivalent: (i) $x^*\in\Gamma$; (ii) $x^*\in C$ and $f(x^*)=0$;

(iii) $x^*\in C$ and $\nabla f(x^*)=0$.

Proposition 3.3 For any $a,b,c,d\in \mathbb R^{n}$,

$$2(a-b)^\top(c-d)=(\|a-d\|^2-\|a-c\|^2)+(\|b-c\|^2-\|b-d\|^2).$$

Proposition 3.4 [28] Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^{n}$. Then for any $x\in \mathbb{R}^{n}$ and $z\in\Omega$,

$$\|P_\Omega(x)-z\|^2\leq\|x-z\|^2-\|P_\Omega(x)-x\|^2.$$

Lemma 3.1 Let $\{d_k\}$ be the sequence generated by Algorithm 1. Then for any $k\geq1$,

$$0\leq\beta_k\leq\frac{\|\nabla f(y_k)\|}{\mu\|d_{k-1}\|}.$$

Moreover, for any $k\geq0$, the search direction $d_k$ satisfies the sufficient descent condition

$$\nabla f(y_k)^\top d_k\leq-\Big(1-\frac{1}{\mu}\Big)\|\nabla f(y_k)\|^2$$

and the trust-region property

$$\|d_{k}\|\leq\Big(1+\frac{1}{\mu}\Big)\|\nabla f(y_k)\|.$$

Proof By (2.3),

$$\begin{aligned} \beta_k&=\frac{\|\nabla f(y_k)\|^2-\theta_k{\frac{(\nabla f(y_k)^\top\nabla f(y_{k-1}))^2}{\|\nabla f({y_{k-1}})\|^2}}}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}}\\ &=\frac{\|\nabla f(y_k)\|^2-\theta_k{\frac{\|\nabla f(y_k)\|^2\|\nabla f(y_{k-1})\|^2\cos^2\omega_k}{\|\nabla f({y_{k-1}})\|^2}}}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}}\\ &=\frac{(1-\theta_k\cos^2\omega_k)\|\nabla f(y_k)\|^2}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}}, \end{aligned}$$

where $\omega_k\in[0,\pi]$ is the angle between $\nabla f(y_{k-1})$ and $\nabla f(y_{k})$. Combining this with $\theta_k \in [0,1]$ yields $\beta_k \geq0$. Further,
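Proposition 3.3 is a four-point polarization identity; expanding every squared norm on the right-hand side cancels all pure $\|\cdot\|^2$ terms and leaves exactly the cross terms $2(a-b)^\top(c-d)$. A quick numerical check with random vectors (illustrative only) confirms it:

```python
import numpy as np

# Check: 2(a-b)^T(c-d) = (||a-d||^2 - ||a-c||^2) + (||b-c||^2 - ||b-d||^2)
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 5))
lhs = 2 * (a - b) @ (c - d)
rhs = (np.sum((a - d)**2) - np.sum((a - c)**2)) \
    + (np.sum((b - c)**2) - np.sum((b - d)**2))
assert abs(lhs - rhs) < 1e-10
```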

$$\begin{aligned} \beta_k&\leq\frac{\|\nabla f(y_k)\|^2}{\max \{\mu\| d_{k-1}\|\|\nabla f(y_k)\|,d_{k-1}^\top(\nabla f(y_k)-\nabla f(y_{k-1})),\|\nabla f(y_k)\|^2\}}\\ &\leq\frac{\|\nabla f(y_k)\|^2}{\mu\| d_{k-1}\|\|\nabla f(y_k)\|}=\frac{\|\nabla f(y_k)\|}{\mu\| d_{k-1}\|}. \end{aligned}$$

For $k\geq1$, left-multiplying both sides of $d_k=-\nabla f(y_k)+\beta_kd_{k-1}$ by $\nabla f(y_k)^{\top}$ gives

$$\begin{aligned} \nabla f(y_k)^\top d_k&=\nabla f(y_k)^\top(-\nabla f(y_k)+\beta_kd_{k-1})\\ &=-\|\nabla f(y_k)\|^2+\beta_k\nabla f(y_k)^\top d_{k-1}\\ &\leq-\|\nabla f(y_k)\|^2+\beta_k\|\nabla f(y_k)\|\|d_{k-1}\|\\ &\leq-\|\nabla f(y_k)\|^2+\frac{\|\nabla f(y_k)\|^2\|d_{k-1}\|}{\mu\|d_{k-1}\|}\\ &=-\Big(1-\frac{1}{\mu}\Big)\|\nabla f(y_k)\|^2, \end{aligned}$$

For $k\geq1$, $d_k=-\nabla f(y_k)+\beta_kd_{k-1}$. Then the triangle inequality and (3.3) give

$$\begin{aligned} \|d_k\|&\leq\|\nabla f(y_k)\|+\beta_k\|d_{k-1}\|\\ &\leq\|\nabla f(y_k)\|+\frac{\|\nabla f(y_k)\|}{\mu\| d_{k-1}\|}\|d_{k-1}\| =\Big(1+\frac{1}{\mu}\Big)\|\nabla f(y_k)\|. \end{aligned}$$

$$-{\nabla f(y_{k_0}+\varsigma{\rho}^id_{k_0})}^\top d_{k_0}<\sigma\varsigma{\rho}^i\|d_{k_0}\|^2.$$

Letting $i\rightarrow\infty$ and using the continuity of $\nabla f$, we obtain $\nabla f(y_{k_0})^\top d_{k_0}\geq0$, which contradicts (3.4). Hence the lemma holds.

$$\chi_{k+1}-\chi_k+\upsilon_k\leq\alpha_k(\chi_k-\chi_{k-1})+\eta_k,\ \forall\ k\geq 0.$$

If $\sum\limits_{k=0}^{\infty}\eta_k<\infty$, then

(i) $\lim\limits_{k \rightarrow \infty}\chi_{k}$ 存在;

(ii) $\sum\limits_{k=0}^{\infty}\upsilon_k<\infty$.

$$\lim\limits_{k\rightarrow\infty} \frac{\|z_k-y_k\|^2}{\|\nabla f(z_k)\|}=0.$$

$$\lim\limits_{k\rightarrow\infty} \alpha_k\|x_k-x_{k-1}\|^2=0.$$

$$\|y_k-x_k\|^2=\alpha_k^2\|x_k-x_{k-1}\|^2\leq\alpha_k\|x_k-x_{k-1}\|^2\rightarrow0,\ k\rightarrow\infty.$$

$$\lim\limits_{k\rightarrow\infty}\|x_{k}-\bar{y}\|=\lim\limits_{i\rightarrow\infty}\|x_{k_i}-\bar{y}\|=0.$$

$$l\cdot{\rm dist}(y,\Gamma)\leq\|\nabla f(y)\|,$$

$$\begin{aligned} {\rm dist}^2(x_{k+1},\Gamma)&\leq \|x_{k+1}-h_k\|^2\\ &\leq\|y_k-h_k\|^2-\gamma(2-\gamma)\sigma^2\frac{\|z_k-y_k\|^4}{\|\nabla f(z_k)\|^2}\\ &={\rm dist}^2(y_k,\Gamma)-\gamma(2-\gamma)\sigma^2\frac{\|z_k-y_k\|^4}{\|\nabla f(z_k)\|^2}. \end{aligned}$$

$$\begin{aligned} \|\nabla f(z_k)\|&=\|\nabla f(z_k)-\nabla f(h_k)\|\leq L\|z_k-h_k\|\leq L(\|y_k-z_k\|+\|y_k-h_k\|)\\ &=L(t_k\|d_k\|+\|y_k-h_k\|)\leq L(\varsigma\|d_k\|+\|y_k-h_k\|)\\ &\leq L\Big(\varsigma\Big(1+\frac{1}{\mu}\Big)\|\nabla f(y_k)-\nabla f(h_k)\|+\|y_k-h_k\|\Big)\\ &\leq L\Big(1+\varsigma L\Big(1+\frac{1}{\mu}\Big)\Big)\|y_k-h_k\|\\ &=L\Big(1+\varsigma L\Big(1+\frac{1}{\mu}\Big)\Big){\rm dist}(y_k,\Gamma). \end{aligned}$$

$$\|y_k-z_k\|^4={t^4_k}\|d_k\|^4\geq {\hat{t}^{4}}\Big(1-\frac{1}{\mu}\Big)^4\|\nabla f(y_k)\|^4\geq {\hat{t}^{4}}\Big(1-\frac{1}{\mu}\Big)^4l^4{\rm dist}^4(y_k,\Gamma).$$

$${\rm dist}^2(x_{k+1},\Gamma)\leq\tau {\rm dist}^2(y_k,\Gamma)=\tau {\rm dist}^2(x_k+\alpha_k(x_k-x_{k-1}),\Gamma),$$

$$\tau=1-\gamma(2-\gamma)\sigma^2\frac{{\hat{t}^{4}}l^4(1-\frac{1}{\mu})^4}{L^2(1+\varsigma L(1+\frac{1}{\mu}))^2}<1.$$

${\alpha^2_k}\|x_k-x_{k-1}\|^2\leq\alpha_k\|x_k-x_{k-1}\|^2\leq\frac{1}{k^2}.$

$$\begin{aligned} {\rm dist}(x_{k+1},\Gamma)&\leq\sqrt\tau {\rm dist}(x_k+\alpha_k(x_k-x_{k-1}),\Gamma)\\ &\leq\sqrt\tau \|x_k+\alpha_k(x_k-x_{k-1})-h_{x_k}\|\\ &\leq\sqrt{\tau} (\|x_k-h_{x_k}\|+\alpha_k\|x_k-x_{k-1}\|) \\ &\leq\sqrt\tau {\rm dist}(x_{k},\Gamma)+ \frac{\sqrt\tau}{k}, \end{aligned}$$

## References

Censor Y, Elfving T.

A multiprojection algorithm using Bregman projections in a product space

Numer Algorithms, 1994, 8: 221-239

López G, Martín-Márquez V, Wang F H, et al.

Solving the split feasibility problem without prior knowledge of matrix norms

Inverse Probl, 2012, 28(8): 085004

Qu B, Xiu N H.

A note on the CQ algorithm for the split feasibility problem

Inverse Probl, 2005, 21(5): 1655-1665

Byrne C.

A unified treatment of some iterative algorithms in signal processing and image reconstruction

Inverse Probl, 2003, 20(1): 103-120

Aubin J P. Optima and Equilibria: An Introduction to Nonlinear Analysis. Berlin: Springer-Verlag, 1993

Byrne C.

Iterative oblique projection onto convex sets and the split feasibility problem

Inverse Probl, 2002, 18(2): 441-453

Yang Q Z.

On variable-step relaxed projection algorithm for variational inequalities

J Math Anal Appl, 2005, 302(1): 166-179

Zhao J L, Yang Q Z.

Self-adaptive projection methods for the multiple-sets split feasibility problem

Inverse Probl, 2011, 27(3): 035009

Wang F H, Xu H K.

Cyclic algorithms for split feasibility problems in Hilbert spaces

Nonlinear Anal: Theory Methods Appl, 2011, 74(12): 4105-4111

Qin X L, Wang L.

A fixed point method for solving a split feasibility problem in Hilbert spaces

Rev R Acad Cienc Exactas Fís Nat Ser A Mat RACSAM, 2019, 113: 315-325

Dong Q L, He S N, Rassias M T.

General splitting methods with linearization for the split feasibility problem

J Glob Optim, 2021, 79: 813-836

Dong Q L, Liu L L, Rassias M T. The strong convergence of Douglas-Rachford methods for the split feasibility problem// Parasidis I N, Providas E, Rassias M T. Mathematical Analysis in Interdisciplinary Research. Switzerland: Springer International Publishing, 2022: 213-233


Liu Y, Xue Z H, Wang Y Q, et al.

Extrapolation accelerated linear alternating direction multiplier method for split feasibility problems and its global convergence

Computer Science, 2023, 50(6): 261-265


Yu Z S, Lin J, Sun J, et al.

Spectral gradient projection method for monotone nonlinear equations with convex constraints

Appl Numer Math, 2009, 59(10): 2416-2423


Yin J H, Jian J B, Jiang X Z.

A spectral gradient projection algorithm for convex constrained nonsmooth equations based on an adaptive line search

Math Numer Sin, 2020, 42(4): 457-471


Sun M, Liu J.

New hybrid conjugate gradient projection method for the convex constrained equations

Calcolo, 2016, 53: 399-411

Ding Y Y, Xiao Y H, Li J W.

A class of conjugate gradient methods for convex constrained monotone equations

Optimization, 2017, 66(12): 2309-2328

Sun M, Liu J.

Three derivative-free projection methods for nonlinear equations with convex constraints

J Appl Math Comput, 2015, 47: 265-276

Liu J K, Li S J.

Multivariate spectral DY-type projection method for convex constrained nonlinear monotone equations

J Ind Manag Optim, 2017, 13(1): 283-295

Gao P T, He C J, Liu Y.

An adaptive family of projection methods for constrained monotone nonlinear equations with applications

Appl Math Comput, 2019, 359: 1-16

Ibrahim A H, Kumam P, Kumam W.

A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration

IEEE Access, 2020, 8: 162714-162729

Yin J H, Jian J B, Jiang X Z, et al.

A hybrid three-term conjugate gradient projection method for constrained nonlinear monotone equations with applications

Numer Algorithms, 2021, 88: 389-418

Polyak B T.

Some methods of speeding up the convergence of iteration methods

USSR Comput Math Math Phys, 1964, 4(5): 1-17

Sahu D R, Cho Y J, Dong Q L, et al.

Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces

Numer Algorithms, 2021, 87: 1075-1095

Suantai S, Panyanak B, Kesornprom S, et al.

Inertial projection and contraction methods for split feasibility problem applied to compressed sensing and image restoration

Optim Lett, 2022, 16: 1725-1744

Jian J B, Yin J H, Tang C M, et al.

A family of inertial derivative-free projection methods for constrained nonlinear pseudo-monotone equations with applications

Comput Appl Math, 2022, 41(7): Article 309

Jiang X Z, Ye X M, Huang Z F, et al.

A family of hybrid conjugate gradient method with restart procedure for unconstrained optimizations and image restorations

Comput Oper Res, 2023, 159: 106341

Zarantonello E H. Projections on convex sets in Hilbert space and spectral theory// Zarantonello E H. Contributions to Nonlinear Functional Analysis. New York: Academic Press, 1971: 237-424

Polyak B T. Introduction to Optimization. New York: Optimization Software Inc, 1987

Alves M M, Eckstein J, Geremia M, et al.

Relative-error inertial-relaxed inexact versions of Douglas-Rachford and ADMM splitting algorithms

Comput Optim Appl, 2020, 75: 389-422

Ma G D, Jin J C, Jian J B, et al.

A modified inertial three-term conjugate gradient projection method for constrained nonlinear equations with applications in compressed sensing

Numer Algorithms, 2023, 92(3): 1621-1653

Bauschke H H, Combettes P L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Switzerland: Springer International Publishing, 2017

Yin J H. Research on Projection-Type Algorithms and Inexact Levenberg-Marquardt Type Algorithms for Nonlinear Equations with Applications. Hohhot: Inner Mongolia University, 2021

Yang Q Z.

The relaxed CQ algorithm solving the split feasibility problem

Inverse Probl, 2004, 20(4): 1261-1266

Shehu Y, Gibali A.

New inertial relaxed method for solving split feasibilities

Optim Lett, 2020, 15: 2109-2126
