数学物理学报 2015, Vol. 35, Issue (3): 634-640
Global Asymptotic Stability of a New Class of Neutral Neural Networks with Time-Varying Delays
Luo Ricai1, Xu Honglei2, Wang Wusheng3    
1. School of Computer and Information Engineering, Hechi University, Guangxi Yizhou 546300;
2. Department of Mathematics and Statistics, Curtin University, Perth, WA 6845, Australia;
3. School of Mathematics and Statistics, Hechi University, Guangxi Yizhou 546300
Abstract: In this paper, we study the stability problem for a class of neutral neural network systems whose activation functions involve the derivatives of time-delayed state variables. By constructing a Lyapunov function and using LMI techniques, we obtain a sufficient condition for the global asymptotic stability of these neural networks. Finally, we demonstrate the validity of our results by means of a numerical example.
Key words: Neutral neural networks; Time-varying delays; Global asymptotic stability; Sufficient condition
1 Introduction

Since Hopfield [1] proposed in 1984 the artificial neural network later named after him, the Hopfield neural network, this kind of network has been widely applied in many areas, such as combinatorial optimization [2, 3, 4], image processing [5, 6], pattern recognition [7], signal processing [8], and communication technology [9]; consequently, as a recurrent neural network, the Hopfield neural network has been studied continuously over the past decades [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. In practical applications of neural networks, delays are unavoidable: on one hand, the transmission of information between two neurons necessarily takes time; on the other hand, hardware limitations such as finite switching speeds also introduce delays. The introduction of time delays into neural network models has therefore received wide attention [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. As research on neural networks has deepened, scholars have found that, owing to the complexity of real nerve cells, many existing neural network models can hardly describe the characteristics of the neural response process accurately, and that a neural network system should contain information on the derivatives of past states in order to describe such complex neural response dynamics more fully. This new type of model is called a neutral-type neural network model. In [20], Orman studied the following neutral-type Hopfield neural network with delay

\begin{equation}\label{eq:sys1} \dot{x}(t)=A_{1}x(t)+A_{2}f(x(t))+A_{3}f(x(t-\tau(t)))+A_{4}\dot{x}(t-\tau(t)),\end{equation} (1.1)
where the symbols have the same meanings as in system (1.2) below.

By constructing a Lyapunov function, Orman obtained delay-dependent sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point of this system.

Building on system (1.1), this paper adds activation-function terms evaluated at the derivative of the state, with and without delay, and studies the global asymptotic stability of the following neutral-type delayed neural network model

\begin{eqnarray}\label{eq:sys2} \dot{x}(t)&=&A_{1}x(t)+A_{2}f(x(t))+A_{3}f(x(t-\tau_{1}(t)))\nonumber\\ &&+A_{4}\dot{x}(t-\tau_{2}(t))+A_{5}f(\dot{x}(t)) +A_{6}f(\dot{x}(t-\tau_{3}(t))), \end{eqnarray} (1.2)
where $x(t)=(x_1(t),x_2(t),\cdots,x_n(t))^T$ denotes the state vector of the neurons; $f(x)=(f_1(x_1),f_2(x_2),$ $\cdots,$ $f_n(x_n))$ denotes the activation function, which is bounded; $A_1 = {\rm{diag}}\{a_1,a_2,\cdots,a_n\}$ ($a_i<0$, $i=1,2,\cdots,n$), and $A_2=(b_{ij})_{n\times n}$, $A_3=(c_{ij})_{n\times n}$, $A_4=(d_{ij})_{n\times n}$, $A_5=(e_{ij})_{n\times n}$ and $A_6$ denote the connection weight matrices of the corresponding terms. Here $a_i$ is the self-feedback strength of the $i$-th neuron, and $b_{ij}$ is the feedback connection strength of the output $f_j(x_j)$ of the $j$-th neuron on the input of the $i$-th neuron: if the output of the $j$-th neuron excites (respectively, inhibits) the $i$-th neuron, then $b_{ij}>0$ (respectively, $b_{ij}<0$); the entries $c_{ij}$, $d_{ij}$, $e_{ij}$ are interpreted similarly. The functions $\tau_1(t)$, $\tau_2(t)$, $\tau_3(t)$ are the transmission delays of the corresponding terms and satisfy $0\leq \tau_i(t)\leq \bar{\tau}_i< \infty$ and $0\leq \dot{\tau}_i(t)\leq \tau^{*}_i\leq 1$, where $\dot{\tau}_i(t)$ denotes the first derivative of $\tau_i(t)$ and $\bar{\tau}_i$, $\tau^{*}_i$ are positive constants, $i=1,2,3$.

For the activation function $f$ we make the following assumption.

(H)~ $|f_i(x_i)|\leq M_i$, $f(0)=0$, and for all $z_1,z_2 \in {\Bbb R}$ with $z_1 \neq z_2$, $0\leq \frac{f_i(z_1)-f_i(z_2)}{z_1-z_2}\leq L_i$, where $M_i$ and $L_i$ are positive constants, $i=1,2,\cdots,n$.
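For concreteness, the short script below (our own illustration, not part of the paper) numerically spot-checks assumption (H) for the activation $f(x)=\frac{1-e^{-x}}{1+e^{-x}}$ used in the numerical example of Section 3, for which one may take $M_i=1$ and $L_i=0.5$.

```python
import numpy as np

def f(x):
    # Activation used in the numerical example of Section 3: f(x) = (1 - e^{-x}) / (1 + e^{-x}).
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

L_i, M_i = 0.5, 1.0                         # sector slope bound and uniform bound for this f
z = np.linspace(-10.0, 10.0, 401)           # sample grid (illustrative, not exhaustive)
z1, z2 = np.meshgrid(z, z)
mask = z1 != z2
quot = (f(z1[mask]) - f(z2[mask])) / (z1[mask] - z2[mask])

print("f(0) =", f(0.0))                                        # 0, as required by (H)
print("max |f| on the grid:", float(np.max(np.abs(f(z)))))     # stays below M_i
print("difference quotients in [%.4f, %.4f]" % (quot.min(), quot.max()))  # within [0, L_i]
```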

We will prove that the zero solution of system (1.2) is globally asymptotically stable.

2 Main Results

In this section we establish a sufficient condition for the global asymptotic stability of the zero solution of system (1.2). We first introduce the following notation.

(I)~ $ \eta_1=x(t),~ \eta_2=f(x(t)),~ \eta_3=f(x(t-\tau_1(t))),~ \eta_4=\dot{x}(t-\tau_2(t)),~ \eta_5=f(\dot{x}(t)),~ \eta_6=f(\dot{x}(t-\tau_3(t)))$;

(II)~ $ L_M=\max \{L_1,L_2,\cdots,L_n\},~ a_M=\max \{a_1,a_2,\cdots,a_n\},~ \lambda=\frac{a_M}{L_M}$;

(III)~ $E$ denotes the identity matrix, $N$ the set of natural numbers, and the superscript $T$ the transpose of a matrix; $\lambda_m(P)$ and $\lambda_M(P)$ denote the smallest and largest eigenvalue of a matrix $P$, respectively, and ${\rm{Diag}}\{\cdots\}$ denotes a diagonal matrix.

With notation (I), system (1.2) can be written as

\begin{equation}\label{eq:sys3} \dot{x}(t)=A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6. \end{equation} (2.1)

We now state the following theorem.

Theorem 2.1 If there exist symmetric positive definite matrices $P,Q,R$ such that the following symmetric matrix $\Omega$ is negative definite, then the zero solution of system $(1.2)$ is globally asymptotically stable: \begin{eqnarray*} \Omega=\left[\begin{array}{cccccc} \Sigma_{11} & A_{2}+\Phi_{12} & A_{3}+\Phi_{13} & A_{4}+\Phi_{14} & A_{5}+\Phi_{15} & A_{6}+\Phi_{16}\\ A_{2}^{T}+\Phi_{12}^{T} & \Sigma_{22} & A_{3}+\Phi_{23} & A_{4}+\Phi_{24} & A_{5}+\Phi_{25} & A_{6}+\Phi_{26}\\ A_{3}^{T}+\Phi_{13}^{T} & A_{3}^{T}+\Phi_{23}^{T} & \Sigma_{33} & \Phi_{34} & \Phi_{35} & \Phi_{36}\\ A_{4}^{T}+\Phi_{14}^{T} & A_{4}^{T}+\Phi_{24}^{T} & \Phi_{34}^{T} & \Sigma_{44} & \Phi_{45} & \Phi_{46}\\ A_{5}^{T}+\Phi_{15}^{T} & A_{5}^{T}+\Phi_{25}^{T} & \Phi_{35}^{T} & \Phi_{45}^{T} & \Sigma_{55} & \Phi_{56}\\ A_{6}^{T}+\Phi_{16}^{T} & A_{6}^{T}+\Phi_{26}^{T} & \Phi_{36}^{T} & \Phi_{46}^{T} & \Phi_{56}^{T} & \Sigma_{66} \end{array}\right], \end{eqnarray*} where $\Phi_{ij}=A_{i}^{T}QA_j$ ($1\leq i\leq j\leq 6$, $i,j\in N$), $\Sigma_{11}=2A_1+\Phi_{11}$, $\Sigma_{22}=2\lambda E+A_{2}+A_{2}^T+P+\Phi_{22}$, $\Sigma_{33}=-(1-\tau_{1}^{*})P+\Phi_{33}$, $\Sigma_{44}=-(1-\tau_{2}^{*})Q+\Phi_{44}$, $\Sigma_{55}=A_{5}^{T}+R$, $\Sigma_{66}=-(1-\tau_{3}^{*})R+\Phi_{66}$.

Proof  Construct the following Lyapunov function

\begin{eqnarray}\label{eq:1} V(t)&=&x^{T}(t)x(t)+2\sum\limits_{i=1}^{n}\int_{0}^{x_{i}(t)}f_{i}(s){\rm d}s \nonumber\\ &&+ \int_{-\tau_{1}(t)}^{0}f^{T}(x(t+s))Pf(x(t+s)){\rm d}s\nonumber\\ &&+\int_{-\tau_{2}(t)}^{0}\dot{x}^{T}(t+s)Q\dot{x}(t+s){\rm d}s\nonumber\\ &&+\int_{-\tau_{3}(t)}^{0}f^T(\dot{x}(t+s))Rf(\dot{x}(t+s)){\rm d}s. \end{eqnarray} (2.2)

From assumption (H) we obtain $$ 0\leq\displaystyle{\frac{f_{i}(x_{i}(t))}{x_{i}(t)}}\leq L_{i}\leq L_{M}, $$ $$ 0\leq |f_{i}(x_{i}(t))|\leq L_{i}|x_{i}(t)|\leq L_{M}|x_{i}(t)|, $$ $$ 0\leq |f_{i}(x_{i}(t))f_{i}(x_{i}(t))|\leq L_{i}|x_{i}(t)f_{i}(x_{i}(t))|\leq L_{M}|x_{i}(t)f_{i}(x_{i}(t))|. $$ The first inequality above shows that $f_{i}(x_{i}(t))$ and $x_{i}(t)$ have the same sign, so $$ 0\leq f_{i}(x_{i}(t))f_{i}(x_{i}(t))\leq L_{i}x_{i}(t)f_{i}(x_{i}(t))\leq L_{M}x_{i}(t)f_{i}(x_{i}(t)), $$ $$ f_{i}(x_{i}(t))x_{i}(t)\geq\frac{1}{L_{M}}f_{i}(x_{i}(t))f_{i}(x_{i}(t)). $$

Since $a_i<0$, $a_M=\max \{a_1,a_2,\cdots,a_n\}$ and $\lambda=\frac{a_M}{L_M}$, we obtain \begin{eqnarray*} f^{T}(x(t))A_1x(t)&=&\sum\limits_{i=1}^{n}f_{i}(x_{i}(t))a_{i}x_{i}(t)\\ &\leq& \frac{1}{L_{M}}\sum\limits_{i=1}^{n}f_{i}(x_{i}(t))a_{i}f_{i}(x_{i}(t)) \\ &\leq & \frac{a_M}{L_{M}}\sum\limits_{i=1}^{n}f_{i}(x_{i}(t))f_{i}(x_{i}(t))\\ &=& \lambda f^{T}(x(t))f(x(t)). \end{eqnarray*}

In terms of notation (I) and (II), this inequality reads $\eta_2^T A_1 \eta_1 \leq \lambda \eta_2^T \eta_2$.

To facilitate differentiation, we apply the change of variable $\tau=t+s$ to the three integral terms of (2.2), which gives

\begin{eqnarray} \label{eq:2} V(t)&=&x^{T}(t)x(t)+2\sum\limits_{i=1}^{n}\int_{0}^{x_{i}(t)}f_{i}(s){\rm d}s\nonumber\\ &&+ \int_{t-\tau_{1}(t)}^{t}f^{T}(x(\tau))Pf(x(\tau)){\rm d}\tau\nonumber\\ &&+\int_{t-\tau_{2}(t)}^{t}\dot{x}^{T}(\tau)Q\dot{x}(\tau){\rm d}\tau\nonumber\\ &&+\int_{t-\tau_{3}(t)}^{t}f^T(\dot{x}(\tau))Rf(\dot{x}(\tau)){\rm d}\tau. \end{eqnarray} (2.3)

Differentiating both sides of (2.3) along the trajectories of system (1.2) yields \begin{eqnarray*} \dot{V}(t)& = &2x^T(t)\dot{x}(t)+2f^{T}(x(t))\dot{x}(t)\\ && +f^{T}(x(t))Pf(x(t))-(1-\dot{\tau}_{1}(t))f^{T}(x(t-\tau_{1}(t)))Pf(x(t-\tau_{1}(t)))\\ && +\dot{x}^{T}(t)Q\dot{x}(t)-(1-\dot{\tau}_{2}(t))\dot{x}^{T}(t-\tau_{2}(t))Q\dot{x}(t-\tau_{2}(t))\\ &&+f^{T}(\dot{x}(t))Rf(\dot{x}(t))-(1-\dot{\tau}_{3}(t))f^{T}(\dot{x}(t-\tau_{3}(t)))Rf(\dot{x}(t-\tau_{3}(t))). \end{eqnarray*}

Hence, using notation (I) and equation (2.1), we obtain \begin{eqnarray*} \dot{V}(t)&= & 2\eta_1^{T}(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)\\ && +2\eta_2^{T}(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)\\ && +\eta_2^{T}P\eta_2-(1-\dot{\tau}_{1}(t))\eta_3^{T}P\eta_3\\ && +(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)^T\\ && Q(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)\\ && -(1-\dot{\tau}_{2}(t))\eta_4^{T}Q\eta_4+\eta_5^{T}R\eta_5-(1-\dot{\tau}_{3}(t))\eta_6^{T}R\eta_6\\ &\leq & 2\eta_1^{T}A_1\eta_1+2\eta_1^{T}A_2\eta_2+2\eta_1^{T}A_3\eta_3+2\eta_1^{T}A_4\eta_4+2\eta_1^{T}A_5\eta_5+2\eta_1^{T}A_6\eta_6\\ && +2\lambda\eta_2^{T}\eta_2+2\eta_2^{T}A_2\eta_2+2\eta_2^{T}A_3\eta_3+2\eta_2^{T}A_4\eta_4+2\eta_2^{T}A_5\eta_5+2\eta_2^{T}A_6\eta_6\\ & &+\eta_2^{T}P\eta_2-(1-\tau_1^*)\eta_3^{T}P\eta_3\\ && +(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)^T\\ && Q(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)\\ & &-(1-\tau_2^*)\eta_4^{T}Q\eta_4+\eta_5^{T}R\eta_5-(1-\tau_{3}^*)\eta_6^{T}R\eta_6 \\ &= & (2\eta_1^{T}A_1\eta_1)+(\eta_1^{T}A_2\eta_2+\eta_2^{T}A_2^T\eta_1)+(\eta_1^{T}A_3\eta_3+\eta_3^{T}A_3^T\eta_1)\\ && +(\eta_1^{T}A_4\eta_4+\eta_4^{T}A_4^T\eta_1)+(\eta_1^{T}A_5\eta_5+\eta_5^{T}A_5^T\eta_1)+(\eta_1^{T}A_6\eta_6+\eta_6^{T}A_6^T\eta_1)\\ && +2\lambda\eta_2^{T}\eta_2+(\eta_2^{T}A_2\eta_2+\eta_2^{T}A_2^T\eta_2)+(\eta_2^{T}A_3\eta_3+\eta_3^{T}A_3^T\eta_2)\\ && +(\eta_2^{T}A_4\eta_4+\eta_4^{T}A_4^T\eta_2)+(\eta_2^{T}A_5\eta_5+\eta_5^{T}A_5^T\eta_2)+(\eta_2^{T}A_6\eta_6+\eta_6^{T}A_6^T\eta_2)\\ && +\eta_2^{T}P\eta_2-(1-\tau_{1}^*)\eta_3^{T}P\eta_3\\ && +(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)^T\\ && Q(A_1\eta_1+A_2\eta_2+A_3\eta_3+A_4\eta_4+A_5\eta_5+A_6\eta_6)\\ && -(1-\tau_2^*)\eta_4^{T}Q\eta_4+\eta_5^{T}R\eta_5-(1-\tau_3^*)\eta_6^{T}R\eta_6\\ &=& \eta_1^{T}(2A_1+A_1^TQA_1)\eta_1+\eta_1^{T}(A_2+A_1^TQA_2)\eta_2+\eta_1^{T}(A_3+A_1^TQA_3)\eta_3\\ && +\eta_1^{T}(A_4+A_1^TQA_4)\eta_4+\eta_1^{T}(A_5+A_1^TQA_5)\eta_5+\eta_1^{T}(A_6+A_1^TQA_6)\eta_6\\ && +\eta_2^{T}(A_2^T+A_2^TQA_1)\eta_1+\eta_2^{T}(2\lambda E+A_2+A_2^T+P+A_2^TQA_2)\eta_2\\ && +\eta_2^{T}(A_3+A_2^TQA_3)\eta_3+\eta_2^{T}(A_4+A_2^TQA_4)\eta_4\\ & &+\eta_2^{T}(A_5+A_2^TQA_5)\eta_5+\eta_2^{T}(A_6+A_2^TQA_6)\eta_6\\ && +\eta_3^{T}(A_3^T+A_3^TQA_1)\eta_1+\eta_3^{T}(A_3^T+A_3^TQA_2)\eta_2+\eta_3^{T}(-(1-\tau_1^*)P\\ &&+ A_3^TQA_3)\eta_3+\eta_3^{T}(A_3^TQA_4)\eta_4+\eta_3^{T}(A_3^TQA_5)\eta_5+\eta_3^{T}(A_3^TQA_6)\eta_6\\ && +\eta_4^{T}(A_4^T+A_4^TQA_1)\eta_1+\eta_4^{T}(A_4^T+A_4^TQA_2)\eta_2+\eta_4^{T}(A_4^TQA_3)\eta_3\\ && +\eta_4^{T}(-(1-\tau_2^*)Q+A_4^TQA_4)\eta_4+\eta_4^{T}(A_4^TQA_5)\eta_5+\eta_4^{T}(A_4^TQA_6)\eta_6\\ & &+\eta_5^{T}(A_5^T+A_5^TQA_1)\eta_1+\eta_5^{T}(A_5^T+A_5^TQA_2)\eta_2+\eta_5^{T}(A_5^TQA_3)\eta_3\\ && +\eta_5^{T}(A_5^TQA_4)\eta_4+\eta_5^{T}(A_5^T+R)\eta_5+\eta_5^{T}(A_5^TQA_6)\eta_6\\ && +\eta_6^{T}(A_6^T+A_6^TQA_1)\eta_1+\eta_6^{T}(A_6^T+A_6^TQA_2)\eta_2+\eta_6^{T}(A_6^TQA_3)\eta_3\\ && +\eta_6^{T}(A_6^TQA_4)\eta_4+\eta_6^{T}(A_6^TQA_5)\eta_5+\eta_6^{T}(-(1-\tau_3^*)R+A_6^TQA_6)\eta_6\\ &=&(\eta_{1}^T ~~\eta_{2}^T ~~\eta_{3}^T ~~ \eta_{4}^T ~~ \eta_{5}^T ~~ \eta_{6}^T )\Omega (\eta_{1}^T ~~\eta_{2}^T ~~\eta_{3}^T ~~ \eta_{4}^T ~~ \eta_{5}^T ~~ \eta_{6}^T )^T, \end{eqnarray*} where $\Omega$ is the symmetric matrix defined in Theorem 2.1.

By hypothesis, $\Omega$ is negative definite, so $\dot{V}(t)$ is negative definite, and Theorem 2.1 is proved.
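In practice, the condition of Theorem 2.1 is verified numerically once candidate matrices $P$, $Q$, $R$ are chosen. The sketch below is our own NumPy transcription of the theorem (it is not code from the paper): `omega_from` assembles $\Omega$ block by block from the formulas above, and `negative_definite` tests the quadratic form $\eta^T\Omega\eta$ through the eigenvalues of the symmetric part $(\Omega+\Omega^T)/2$, which defines the same quadratic form. These helpers are applied to the data of example (3.1) in Section 3.

```python
import numpy as np

def omega_from(P, Q, R, A, lam, tau_star):
    """Assemble the matrix Omega of Theorem 2.1 from candidate P, Q, R.

    A        : list [A1, ..., A6] of the n x n coefficient matrices of system (1.2)
    lam      : lambda = a_M / L_M from notation (II)
    tau_star : the bounds (tau_1*, tau_2*, tau_3*) on the delay derivatives
    """
    A1, A2, A3, A4, A5, A6 = A
    E = np.eye(P.shape[0])
    Phi = lambda X, Y: X.T @ Q @ Y          # Phi_{ij} = A_i^T Q A_j
    t1, t2, t3 = tau_star

    S11 = 2 * A1 + Phi(A1, A1)
    S22 = 2 * lam * E + A2 + A2.T + P + Phi(A2, A2)
    S33 = -(1 - t1) * P + Phi(A3, A3)
    S44 = -(1 - t2) * Q + Phi(A4, A4)
    S55 = A5.T + R                          # Sigma_55 exactly as printed in Theorem 2.1
    S66 = -(1 - t3) * R + Phi(A6, A6)

    # Rows of the block matrix; the lower triangle mirrors the upper one by transposition.
    r1 = [S11, A2 + Phi(A1, A2), A3 + Phi(A1, A3), A4 + Phi(A1, A4), A5 + Phi(A1, A5), A6 + Phi(A1, A6)]
    r2 = [r1[1].T, S22, A3 + Phi(A2, A3), A4 + Phi(A2, A4), A5 + Phi(A2, A5), A6 + Phi(A2, A6)]
    r3 = [r1[2].T, r2[2].T, S33, Phi(A3, A4), Phi(A3, A5), Phi(A3, A6)]
    r4 = [r1[3].T, r2[3].T, r3[3].T, S44, Phi(A4, A5), Phi(A4, A6)]
    r5 = [r1[4].T, r2[4].T, r3[4].T, r4[4].T, S55, Phi(A5, A6)]
    r6 = [r1[5].T, r2[5].T, r3[5].T, r4[5].T, r5[5].T, S66]
    return np.block([r1, r2, r3, r4, r5, r6])

def negative_definite(Omega):
    """eta^T Omega eta < 0 for all eta != 0  <=>  the symmetric part of Omega is negative definite."""
    return bool(np.max(np.linalg.eigvalsh((Omega + Omega.T) / 2)) < 0)
```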

3 A Numerical Example

In this section we use a concrete example to verify the effectiveness of the theorem.

Consider the following two-neuron neural network model

\begin{eqnarray} \label{example} \left[\begin{array}{c} \dot{x}_1(t)\\ \dot{x}_2(t) \end{array}\right] &=& \left[\begin{array}{cc} -1 & 0 \\ 0 & -1 \end{array}\right] \left[\begin{array}{c} x_1(t)\\ x_2(t) \end{array}\right] +\left[\begin{array}{cc} 0 & 0 \\ 0.2 & 0 \end{array}\right] \left[\begin{array}{c} f_1(x_1(t))\\ f_2(x_2(t)) \end{array}\right]\nonumber\\ &&+\left[\begin{array}{cc} 0 & 0.3 \\ 0.1 & 0 \end{array}\right] \left[\begin{array}{c} f_1(x_1(t-0.05\tau(t)))\\ f_2(x_2(t-0.05\tau(t))) \end{array}\right]\nonumber\\ &&+\left[\begin{array}{cc} 0.01 & 0 \\ 0.2 & 0.02 \end{array}\right] \left[\begin{array}{c} \dot{x}_1(t-0.03\tau(t))\\ \dot{x}_2(t-0.03\tau(t)) \end{array}\right] \nonumber\\ &&+\left[\begin{array}{cc} -0.20 & 0.03 \\ 0.02 & -0.10 \end{array}\right] \left[\begin{array}{c} f_1(\dot{x}_1(t))\\ f_2(\dot{x}_2(t)) \end{array}\right] \nonumber\\ &&+\left[\begin{array}{cc} 0 & -0.035 \\ 0.025 & 0.100 \end{array}\right] \left[\begin{array}{c} f_1(\dot{x}_1(t-0.01\tau(t)))\\ f_2(\dot{x}_2(t-0.01\tau(t))) \end{array}\right], \end{eqnarray} (3.1)
where $ {f_i(x_i)=\frac{1-e^{-x_i}}{1+e^{-x_i}}}$, $i=1,2$, which satisfies assumption (H) on the activation function, and $ {\tau(t)=\frac{1}{t^2+1}}$.

For this system, $A_1=\left[\begin{array}{cc} -1 & 0 \\ 0 & -1 \end{array}\right]$, $A_2=\left[\begin{array}{cc} 0 & 0 \\ 0.2 & 0 \end{array}\right]$, $A_3=\left[\begin{array}{cc} 0 & 0.3 \\ 0.1 & 0 \end{array}\right]$, $A_4=\left[\begin{array}{cc} 0.01 & 0 \\ 0.2 & 0.02 \end{array}\right]$, $A_5=\left[\begin{array}{cc} -0.20 & 0.03 \\ 0.02 & -0.10 \end{array}\right]$, $A_6=\left[\begin{array}{cc} 0 & -0.035 \\ 0.025 & 0.100 \end{array}\right]$, and $L_M=0.5$, $a_M=-1$, $\displaystyle{\lambda=\frac{a_M}{L_M}=-2}$, $\tau_1^*=0.05$, $\tau_2^*=0.03$, $\tau_3^*=0.01$.

Here we take $P=\left[\begin{array}{cc} 0.49 & 0 \\ 0 & 0.37 \end{array}\right]$, $Q=\left[\begin{array}{cc} 0.23 & 0 \\ 0 & 0.56 \end{array}\right]$, $R=\left[\begin{array}{cc} 0.0274 & 0 \\ 0 & 0.0212 \end{array}\right]$. Computing the eigenvalues of the matrix $\Omega$ of Theorem 2.1 with Matlab gives ${\rm eig}(\Omega)=(-3.7985,\ -3.3843,\ -1.8182,\ -1.4472,\ -0.5429,\ -0.4567,\ -0.2899,\ -0.1856,\ -0.1331,\ -0.0671,\ -0.0268,\ -0.0049)$, so $\Omega$ is negative definite, and by Theorem 2.1 the zero solution of system (3.1) is globally asymptotically stable.
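As a cross-check of this computation (our own sketch, not the authors' Matlab session), the script below feeds the data of example (3.1) into the `omega_from` and `negative_definite` helpers defined at the end of Section 2 (they must be available in the same file or session) and prints the eigenvalues of the symmetric part of $\Omega$; the values obtained this way need not agree with the quoted ones to the last digit.

```python
import numpy as np

# Assumes omega_from(...) and negative_definite(...) from the sketch at the end of Section 2.
# All numerical data below are those of example (3.1) and the P, Q, R chosen in the text.
A = [np.array(m, dtype=float) for m in (
    [[-1, 0], [0, -1]],              # A1
    [[0, 0], [0.2, 0]],              # A2
    [[0, 0.3], [0.1, 0]],            # A3
    [[0.01, 0], [0.2, 0.02]],        # A4
    [[-0.20, 0.03], [0.02, -0.10]],  # A5
    [[0, -0.035], [0.025, 0.100]],   # A6
)]
P = np.diag([0.49, 0.37])
Q = np.diag([0.23, 0.56])
R = np.diag([0.0274, 0.0212])
lam = -2.0                           # lambda = a_M / L_M = -1 / 0.5
tau_star = (0.05, 0.03, 0.01)

Omega = omega_from(P, Q, R, A, lam, tau_star)
eig = np.linalg.eigvalsh((Omega + Omega.T) / 2)      # eigenvalues of the symmetric part
print("eigenvalues of sym(Omega):", np.round(eig, 4))
print("Omega negative definite:", negative_definite(Omega))
```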

4 Conclusion

In this paper we have studied the stability problem for a new neutral-type delayed neural network model. From the theorem we can see that if the delays are constants, or if the first derivatives of the delays are bounded as assumed, the stability of system (1.2) is not affected. In addition, the theorem is stated under the proviso that "there exist symmetric positive definite matrices P, Q, R" making $\Omega$ negative definite, which raises a new question: how can one decide whether such symmetric positive definite matrices exist? This is a difficult problem in this area; the usual practice is to find matrices satisfying the condition by numerical testing tailored to the specific problem. Although our result thus brings a new difficulty, this difficulty is always much easier to deal with than directly deciding whether the original system model is stable.
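To illustrate the kind of numerical testing mentioned above, the sketch below randomly samples diagonal positive definite candidates $P$, $Q$, $R$ for example (3.1) and keeps the first triple for which $\Omega$ is negative definite; it reuses the `omega_from` and `negative_definite` helpers from Section 2 and the matrix list `A` from the script in Section 3, and is purely our illustration, not a procedure from the paper. Since every entry of $\Omega$ is affine in $(P,Q,R)$, such a search could also be delegated to an LMI/SDP solver, but we do not pursue that here.

```python
import numpy as np

# Crude random search for diagonal positive definite P, Q, R making Omega negative definite.
# Assumes omega_from(...), negative_definite(...) from Section 2 and A, from Section 3, are in scope.
def random_search(A, lam, tau_star, n, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        P, Q, R = (np.diag(rng.uniform(0.01, 1.0, size=n)) for _ in range(3))
        if negative_definite(omega_from(P, Q, R, A, lam, tau_star)):
            return P, Q, R
    return None  # no certificate found within the budget; this does not prove that none exists

result = random_search(A, lam=-2.0, tau_star=(0.05, 0.03, 0.01), n=2)
if result is not None:
    P_found, Q_found, R_found = result
    print("found diagonal P, Q, R:", np.diag(P_found), np.diag(Q_found), np.diag(R_found))
else:
    print("no certificate found within the trial budget")
```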

References
[1] Hopfield J J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci USA, 1984, 81: 3088-3092
[2] Abe S, Kawakami J, Hirasawa K. Solving inequality constrained combinatorial optimization problems by the Hopfield neural networks. Neural Networks, 1992, 5: 663-670
[3] Tamura H, Zhang Z, Xu X S, Ishii M, Tang Z. Lagrangian object relaxation neural network for combinatorial optimization problems. Neurocomputing, 2005, 68: 297-305
[4] Wang R L, Tang Z, Cao Q P. A learning method in Hopfield neural network for combinatorial optimization problem. Neurocomputing, 2002, 48: 1021-1024
[5] Rout S, Seethalakshmy, Srivastava P, Majumdar J. Multi-modal image segmentation using a modified Hopfield neural network. Pattern Recognition, 1998, 31: 743-750
[6] Sammouda R, Adgaba N, Touir A, Al-Ghamdi A. Agriculture satellite image segmentation using a modified artificial Hopfield neural network. Computers in Human Behavior, 2014, 30: 436-441
[7] Suganthan P, Teoh E, Mital D. Pattern recognition by homomorphic graph matching using Hopfield neural networks. Image and Vision Computing, 1995, 13: 45-60
[8] Laskaris N, Fotopoulos S, Papathanasopoulos P, Bezerianos A. Robust moving averages, with Hopfield neural network implementation, for monitoring evoked potential signals. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section, 1997, 104: 151-156
[9] Calabuig D, Monserrat J F, Gómez-Barquero D, Lázaro O. An efficient dynamic resource allocation algorithm for packet-switched communication networks based on Hopfield neural excitation method. Neurocomputing, 2008, 71: 3439-3446
[10] Zhang W. A weak condition of globally asymptotic stability for neural networks. Applied Mathematics Letters, 2006, 19: 1210-1215
[11] Li X, Zhang Chen. Stability properties for Hopfield neural networks with delays and impulsive perturbations. Nonlinear Analysis: Real World Applications, 2009, 10: 3253-3265
[12] Wang L, Gao Y. Global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays. Physics Letters A, 2006, 350: 342-348
[13] Lou X, Ye Q, Cui B. Parameter-dependent robust stability of uncertain neural networks with time-varying delay. Journal of the Franklin Institute, 2012, 349: 1891-1903
[14] Bai C. Global stability of almost periodic solutions of Hopfield neural networks with neutral time-varying delays. Applied Mathematics and Computation, 2008, 203: 72-79
[15] Marcus C, Westervelt R. Stability of analog neural networks with delay. Phys Rev A, 1989, 39: 347-359
[16] Wu J. Symmetric functional-differential equations and neural networks with memory. Trans Am Math Soc, 1998, 350: 4799-4838
[17] Wu J, Zou X. Patterns of sustained oscillations in neural networks with time delayed interactions. Appl Math Comput, 1995, 73: 55-75
[18] Gopalsamy K, He X. Stability in asymmetric Hopfield nets with transmission delays. Physica D, 1994, 76: 344-358
[19] van den Driessche P, Zou X. Global attractivity in delayed Hopfield neural network models. SIAM J Appl Math, 1998, 58: 1878-1890
[20] Orman Z. New sufficient conditions for global stability of neutral-type neural networks with time delays. Neurocomputing, 2012, 97: 141-148
[21] Zhao H. Global asymptotic stability of Hopfield neural network involving distributed delays. Neural Networks, 2004, 17: 47-53
[22] Rakkiyappan R, Balasubramaniam P. Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays. Applied Mathematics and Computation, 2008, 198: 526-533
[23] Xu H, Chen Y, Teo K L. Global exponential stability of impulsive discrete-time neural networks with time-varying delays. Applied Mathematics and Computation, 2010, 217: 537-544
[24] Chen Y, Xu H. Exponential stability analysis and impulsive tracking control of uncertain time-delayed systems. Journal of Global Optimization, 2012, 52: 323-334
[25] Chen W H, Lu X M, Li Q H, et al. Mean square exponential stability of stochastic Hopfield neural networks with time delays: an LMI approach (in Chinese). 数学物理学报, 2007, 27(1): 109-117