Synchronization for a class of generalized neural networks with interval time-varying delays and reaction-diffusion terms ∗

Abstract. In this paper, the synchronization problem for a class of generalized neural networks with interval time-varying delays and reaction-diffusion terms is investigated under Dirichlet boundary conditions and Neumann boundary conditions, respectively. Based on Lyapunov stability theory, both delay-derivative-dependent and delay-range-dependent conditions are derived in terms of linear matrix inequalities (LMIs), whose solvability heavily depends on the information of the reaction-diffusion terms. The proposed generalized neural network model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as special cases. The obtained synchronization results are easy to check and improve upon existing ones. In our results, the assumptions of differentiability and monotonicity on the activation functions are removed. The state delay is assumed to belong to a given interval, which means that the lower bound of the delay is not restricted to be zero. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples.


Introduction
In the past several decades, neural networks have been extensively investigated and successfully applied to signal processing, pattern recognition, artificial intelligence, optimization, fault diagnosis, associative memories, and so on. Such applications heavily depend on the dynamical behaviors of the neural networks. Therefore, the study of dynamical behaviors is a necessary step for the practical design of neural networks.

Q. Gan et al.
As pointed out in [6, 15] and [18], two basic mathematical models are commonly adopted in the study of recurrent neural networks: local field neural network models and static neural network models. The basic local field neural network model is described in the following matrix form: where u(t) = [u1(t), u2(t), . . . , un(t)]T ∈ Rn is the neuron state vector; A = diag(a1, a2, . . . , an) is a diagonal matrix with ai > 0, i = 1, 2, . . . , n; W = (wij)n×n is the synaptic connection weight matrix with wij being the connection weight between neurons i and j; J = [J1, J2, . . . , Jn]T ∈ Rn is a constant input vector; f(u(t)) = [f1(u1(t)), f2(u2(t)), . . . , fn(un(t))]T ∈ Rn denotes the neuron activation function.
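The displayed model equations were lost in extraction. In the standard notation described above, the two models presumably take the following forms (a plausible reconstruction; the equation numbers (3) and (4) referred to below are assumed to label them):

```latex
% Local field neural network model (presumably Eq. (3)):
\dot{u}(t) = -A\,u(t) + W f\bigl(u(t)\bigr) + J,
% Static neural network model (presumably Eq. (4)):
\dot{u}(t) = -A\,u(t) + f\bigl(W u(t) + J\bigr).
```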
One can easily check that, under the assumption that A and W are commutative and W is nonsingular, i.e., condition (5), model (4) can be transformed into (3) by the substitution û(t) = W ǔ(t). However, in many real applications of neural networks, it is not rational to assume the invertibility of the matrix W. Many neural systems, such as the oculomotor integrator [12] or the head-direction system [16], are modelled by non-invertible networks. That is, local field neural network models and static neural network models are not always equivalent.
In [17], a generalized neural network model without reaction-diffusion terms was proposed, which includes both the local field and static neural network models as special cases; based on that model, both delay-derivative-dependent and delay-range-dependent stability criteria were established. In the delay-independent stability analysis, by introducing more information on the activation functions of the neurons into the chosen Lyapunov-Krasovskii functional, a new delay-independent stability criterion was obtained in terms of a simple linear matrix inequality. In the delay-dependent stability analysis, by employing an integral inequality and a convex combination technique, some novel delay-dependent stability criteria were derived. It has been pointed out that all of these criteria provide a unified frame suitable for both local field neural networks and static neural networks.
In practical operation, diffusion effects cannot be avoided in neural networks and electric circuits when electrons move in asymmetric electromagnetic fields. Therefore, it is desirable to consider the activation of neurons varying in space as well as in time. The reaction-diffusion local field neural networks and reaction-diffusion static neural networks can be described by and, respectively, where the additional term (i = 1, 2, . . . , n) corresponds to the transmission diffusion operator along the ith neuron.
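The two displayed systems are missing from the extracted text. Under standard notation, with nonnegative diffusion coefficients Dik over spatial coordinates xk (an assumption here, since the original symbols are lost), the reaction-diffusion counterparts of the two models presumably read:

```latex
% Reaction-diffusion local field model:
\frac{\partial u_i(t,x)}{\partial t}
  = \sum_{k} \frac{\partial}{\partial x_k}\!\Bigl(D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k}\Bigr)
    - a_i u_i(t,x) + \sum_{j=1}^{n} w_{ij}\, f_j\bigl(u_j(t,x)\bigr) + J_i,
% Reaction-diffusion static model:
\frac{\partial u_i(t,x)}{\partial t}
  = \sum_{k} \frac{\partial}{\partial x_k}\!\Bigl(D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k}\Bigr)
    - a_i u_i(t,x) + f_i\Bigl(\sum_{j=1}^{n} w_{ij}\, u_j(t,x) + J_i\Bigr).
```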
On the other hand, it is well known that there exist time delays in the information processing of neurons due to various reasons. For example, time delays can be caused by the finite switching speed of amplifier circuits in neural networks or can be deliberately introduced to achieve tasks dealing with motion-related problems, such as moving image processing. Since time delays, as a source of instability and bad performance, always appear in many neural networks owing to the finite speed of information processing, the dynamical behaviors of delayed local field neural networks [1, 11] and delayed static neural networks [13, 18] have received considerable attention. The delayed reaction-diffusion local field neural networks and delayed reaction-diffusion static neural networks can be described by and, respectively, where τ(t) is a time delay, which may be constant or time-varying. Similarly, it is easy to see that the reaction-diffusion neural networks (6) and (7), and (8) and (9), are equivalent, respectively, if A and W satisfy (5).
As special dynamical systems, reaction-diffusion neural networks have also been found to exhibit unpredictable behaviors, including periodic oscillations, bifurcations and chaotic attractors, which has motivated studies on their chaos synchronization. Therefore, the study of neural synchronization is an important step both for understanding brain science and for designing reaction-diffusion neural networks for practical use. Recently, the synchronization problems for reaction-diffusion local field neural networks have attracted numerous researchers [8, 14]. The existing conditions can be classified into two categories, namely, diffusion-dependent ones and diffusion-independent ones. Since the former make use of information concerning the reaction-diffusion terms, they are generally less conservative than the latter.
Compared with the rich results for local field networks, results for static neural networks are much scarcer. To the best of the authors' knowledge, the synchronization problem for static neural networks, and especially for reaction-diffusion static neural networks, has not been studied in the literature, so there remains open room for improvement; it is interesting to study this problem both in theory and in applications. This situation motivates our present investigation. In this paper, we consider the problem of synchronization for a class of generalized reaction-diffusion neural networks with interval time-varying delays under Dirichlet boundary conditions and Neumann boundary conditions, respectively, which includes the reaction-diffusion local field and reaction-diffusion static neural network models. Based on Lyapunov stability theory and the free-weighting matrix approach, both delay-derivative-dependent and delay-range-dependent criteria for the synchronization of the proposed neural networks are derived, and the controller gain matrices are designed. The activation functions are not required to be monotonic, and the derivative of the time-varying delay need not be smaller than one. Consequently, the results obtained in this paper are not only less conservative, but also effectually complement or improve the previously known results.
The main contribution of this paper can be summarized as follows:
1. Inspired by the work [17], a unified model, i.e., generalized reaction-diffusion neural networks, which includes reaction-diffusion static neural networks and reaction-diffusion local field neural networks, is considered in our work.
2. By constructing a novel Lyapunov-Krasovskii functional, the rigorous requirement in other literature that the time-derivative of the time-varying delay must be smaller than one is abandoned in the proposed delay-range-dependent synchronization criterion, i.e., the new criterion is applicable to both fast and slow time-varying delays.
3. In [14], the authors pointed out that it is quite difficult to find a chaotic attractor for reaction-diffusion delayed neural networks; obviously, this is an important and interesting open problem. In this paper, by using the classical implicit format for solving the partial differential equations and the method of steps for differential-difference equations, we find that if the parameters are appropriately chosen, the reaction-diffusion neural networks can exhibit chaotic attractors.
The notation in this paper is standard. Rn and Rn×m denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively; C2,1(R+ × Rn; R+) denotes the family of all nonnegative functions V(t, e(t)) on R+ × Rn that are continuously twice differentiable in e and once differentiable in t; the superscripts T and −1 denote matrix transposition and matrix inversion, respectively; the notation X ≥ Y (resp. X > Y), where X and Y are symmetric matrices, means that X − Y is positive semidefinite (resp. positive definite); the shorthand diag(·) denotes a block diagonal matrix; det(·) denotes the determinant of a matrix; the symmetric terms in a symmetric matrix are denoted by ∗.
Therefore, (10) can be called a generalized reaction-diffusion neural network model, based on which synchronization analysis for both the reaction-diffusion local field and reaction-diffusion static neural network models can be made in a unified frame even if condition (5) is not satisfied.
In this paper, we make the following hypotheses: (H1) There exist constants l−i and l+i such that the neuron activation functions fi satisfy the following condition: for all v1, v2 ∈ R with v1 ≠ v2 (i = 1, 2, . . . , n). (H2) τ(t) is an interval time-varying transmission delay satisfying for all t, where τ1, τ2 and µ are constants.
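The two displayed conditions were lost in extraction. In the standard form used with such hypotheses (a reconstruction consistent with Remark 1 and Theorem 1 below), they presumably read:

```latex
% (H1) sector-bounded activations, for all v_1 \ne v_2:
l_i^- \;\le\; \frac{f_i(v_1) - f_i(v_2)}{v_1 - v_2} \;\le\; l_i^+,
\qquad i = 1, 2, \ldots, n,
% (H2) interval time-varying delay:
0 \le \tau_1 \le \tau(t) \le \tau_2, \qquad \dot{\tau}(t) \le \mu .
```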
Remark 1. In [7, 8, 17], hypothesis (H1) has been used to investigate the stability or synchronization of neural networks. The feature of hypothesis (H1) is that the constants l−i and l+i (i = 1, 2, . . . , n) are allowed to be positive, negative or zero. By all appearances, this hypothesis is weaker than those given in [1, 5, 19], which require the monotonicity of the activation functions (l−i = 0) or the usual Lipschitz condition (l−i = −l+i). Such a description is precise in quantifying the lower and upper bounds of the activation functions.
In this paper, we consider two types of boundary conditions: 1) Dirichlet boundary conditions 2) Neumann boundary conditions The initial value of system (10) is where φ(s, x) = (φ1(s, x), φ2(s, x), . . . , φn(s, x))T is bounded and continuous on its domain.
In order to observe the synchronization behavior of system (10), the response (slave) system is designed as where z(t, x) = (z1(t, x), z2(t, x), . . . , zn(t, x))T, and U(t, x) = (U1(t, x), U2(t, x), . . . , Un(t, x))T indicates the control input, which will be appropriately designed. Consider a delayed state feedback controller of the following form: where K1 and K2 are the controller gains to be determined. The boundary and initial conditions for the response system (14) are given in the forms: 1) Dirichlet boundary conditions 2) Neumann boundary conditions where ψ(s, x) = (ψ1(s, x), ψ2(s, x), . . . , ψn(s, x))T is bounded and continuous on its domain.
Defining the synchronization error state as e(t, x) = (e1(t, x), e2(t, x), . . . , en(t, x))T = z(t, x) − y(t, x) and subtracting (10) from (14) yields the error system as follows: where the remaining quantities are defined accordingly.
Before ending this section, we introduce some lemmas, which will be essential in establishing the desired synchronization criteria.
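The displayed controller equation was lost in extraction. A delayed state feedback controller of the kind described, with gains K1 and K2 (presumably Eq. (15) of the original), has the standard form:

```latex
U(t,x) \;=\; K_1\, e(t,x) \;+\; K_2\, e\bigl(t - \tau(t),\, x\bigr),
\qquad e(t,x) = z(t,x) - y(t,x),
```

so the first term feeds back the instantaneous synchronization error and the second term the delayed error.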
Lemma 3 (Poincaré integral inequality). Let Ω be a bounded compact set with smooth boundary ∂Ω and mes Ω > 0 in the space Rl, and let ϕ(x) be a real-valued function belonging to C1(Ω) which vanishes on the boundary ∂Ω of Ω, i.e., ϕ(x)|∂Ω = 0. Then where λ1 is the smallest positive eigenvalue of the Neumann boundary problem. Remark 2. When Ω is bounded, or at least bounded in one direction, the Poincaré integral inequality also holds. The smallest eigenvalue λ1 of problem (21) is determined only by Ω. For example, if
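The displayed inequality of the lemma is missing from the extracted text; the Poincaré-type inequality referred to has the standard form (a reconstruction):

```latex
\int_{\Omega} \varphi^2(x)\,\mathrm{d}x
  \;\le\; \frac{1}{\lambda_1} \int_{\Omega} \bigl|\nabla \varphi(x)\bigr|^2 \,\mathrm{d}x ,
```

where λ1 is the smallest positive eigenvalue of the associated boundary-value problem (21).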

Delay-derivative-dependent synchronization
In this section, we are concerned with the delay-derivative-dependent synchronization analysis (the synchronization conditions are delay-derivative-dependent and delay-range-independent) for neural networks (10) and (14).
Theorem 1. For given scalars τ1, τ2 and µ < 1, the two coupled generalized reaction-diffusion neural networks with interval time-varying delays (10) and (14) can be globally synchronized under the Dirichlet boundary conditions (11), (16) and the linear feedback controller (15) if there exist positive definite matrices P, Q1 and Q2, positive definite diagonal matrices H1 and H2, and real matrices X1 and X2 such that the following linear matrix inequality holds: where Moreover, the gain matrices K1 and K2 of the linear feedback controller (15) can be designed as Proof. Define a Lyapunov-Krasovskii functional whose last term has the form ∫Ω ∫ gT(W0 e(s, x)) Q2 g(W0 e(s, x)) ds dx.
Hence, by the Lyapunov theorem for functional differential equations [4], the origin of the error system (19) is asymptotically stable, which implies that the two systems (10) and (14) are asymptotically synchronized. The proof of Theorem 1 is completed.
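The Lyapunov machinery underlying the proof can be illustrated numerically. The sketch below is not the paper's LMI (22); it is a minimal, delay-free analogue under stated assumptions: for a hypothetical Hurwitz error-dynamics matrix A, solving the Lyapunov equation AᵀP + PA = −Q and verifying P > 0 certifies asymptotic stability of the origin of ė = Ae, which is the finite-dimensional prototype of the argument used in the theorem.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve the Lyapunov equation A^T P + P A = -Q for P by vectorizing:
    (I (x) A^T + A^T (x) I) vec(P) = -vec(Q), column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

# Hypothetical linear error dynamics e' = A e with A Hurwitz (eigenvalues -2, -3).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)
P = solve_lyapunov(A, Q)

# V(e) = e^T P e is a Lyapunov function iff P is positive definite;
# the check passing certifies asymptotic stability of the origin.
assert np.all(np.linalg.eigvalsh(P) > 0)
```

In the paper's setting, the same positivity check is delegated to an LMI solver, with the functional enriched by delay and diffusion terms.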
Remark 3. When X is assumed to be a cube, inequality (20) reduces to the following form: which has been introduced in [7, 8] as an important lemma (Friedrichs' inequality [2]). Hence, our results are more general, and they effectually complement or improve the previously known results.
Remark 4. In Theorem 1, to guarantee Π < 0, the matrix Π22 should be negative definite. However, in this paper, l+i and l−i (i = 1, 2, . . . , n) are known real scalars that may be positive, negative, or zero, which means that the resulting activation functions may be non-monotonic. Correspondingly, the term W0T L− H2 L+ W0 in (22) may not be positive definite, negative definite, or zero. Therefore, when W0T L− H2 L+ W0 ≥ 0, the bound µ on the derivative of the time delay should be smaller than one. It should be pointed out that it would be interesting to remove this restriction on the bound of the delay derivative.
In fact, by defining a Lyapunov-Krasovskii functional candidate the same as in (24) and using the Poincaré integral inequality (Lemma 3), the neural networks (10) and (14) are asymptotically synchronized under the Neumann boundary conditions (12) and (17).
Theorem 2. The generalized reaction-diffusion neural networks with interval time-varying delays (10) and (14) can be globally synchronized under the Neumann boundary conditions (12), (17) and the linear feedback controller (15) if there exist positive definite matrices P, Q1 and Q2, positive definite diagonal matrices H1 and H2, and real matrices X1 and X2 such that the following linear matrix inequality holds: where λ1 is the smallest positive eigenvalue of the Neumann boundary problem. Moreover, the gain matrices K1 and K2 of the linear feedback controller (15) can be designed as

Delay-range-dependent synchronization

It should be pointed out that delay-derivative-dependent synchronization results do not include any information on the size of the delays. It is well known that delay-range-dependent synchronization conditions (conditions that are both delay-derivative-dependent and delay-range-dependent) are generally less conservative than delay-derivative-dependent ones, especially when the size of the delay is small. In this section, delay-range-dependent results are developed to implement the global synchronization between the generalized neural networks with time-varying delays and reaction-diffusion terms (10) and (14).
Theorem 3. The generalized reaction-diffusion neural networks with interval time-varying delays (10) and (14) can be globally synchronized under the Dirichlet boundary conditions (11) and the linear feedback controller (15) if there exist positive definite matrices P, Q1, Q2, R1, R2, S1, S2, T1, T2, Z1 and Z2, positive definite diagonal matrices H1, H2, Λ1, Λ2 and M, and real matrices X1 and X2 such that the following linear matrix inequality holds: where
Moreover, the gain matrices K1 and K2 of the linear feedback controller (15) can be designed as Proof. Define a Lyapunov-Krasovskii functional V(t, e(t, x)) where β = diag(β1, . . . , βn), and W0i denotes the ith row vector of the matrix W0.
It follows from the Lyapunov theorem for functional differential equations [4] that the generalized reaction-diffusion neural networks (10) and (14) are asymptotically synchronized. This completes the proof.

Remark 5. Compared with the results in Theorem 1, some novel integral terms have been introduced in the Lyapunov-Krasovskii functional (33), which has two main advantages:
1. Both l−i and l+i are taken into account in V6(t, e(t, x)); the dedicated construction of the Lyapunov-Krasovskii functional (33) thus uses full information on the recurrent neural network system dynamics, and the conservatism is therefore reduced.
2. In order to obtain less conservative delay-range-dependent synchronization results with respect to the restriction on the time derivative of the time-varying delay τ(t), and to enhance the feasibility of the obtained LMIs, the free-weighting matrix Γ = [MT, MT]T is introduced. This leads to a term which can be compensated by computing the time-derivative of V1(t, e(t, x)).
These considerations highlight the main differences in the construction of the Lyapunov-Krasovskii functional candidate in this paper.

Remark 6. In this paper, we introduce a linear feedback controller to guarantee the synchronization of generalized reaction-diffusion neural networks with time-varying delays. So far, there are many results concerning the stability or synchronization of reaction-diffusion local field neural networks [8, 14]. However, to the best of the authors' knowledge, there are no results on the synchronization of reaction-diffusion static neural networks, and this motivates us to write this paper. It is the first time that a synchronization criterion is established for a class of generalized reaction-diffusion neural networks with time-varying delays that includes the reaction-diffusion local field and reaction-diffusion static neural network models as special cases. In this paper, linear feedback control is developed to study a more reasonable generalized reaction-diffusion neural network model, and the traditional restriction that the delay derivative bound satisfies µ < 1 is removed in Theorem 3. Hence, our results are a generalization and improvement of existing results reported recently in the literature.

Remark 7. Today, there are generally two kinds of continuously distributed delays in neural network models, i.e., finitely distributed delays and infinitely distributed delays.
In fact, for the reaction-diffusion neural networks with finitely distributed delays, we can introduce into the Lyapunov-Krasovskii functional (33) a new term of the form ∫Ω ∫∫ gT(W0 e(s, x)) Y g(W0 e(s, x)) ds dθ dx.
It is easy to see that the proposed method in this paper can be used to deal with the synchronization problem for generalized reaction-diffusion neural networks with both discrete time-varying and finitely distributed time-varying delays. However, Lemma 2 cannot be used to deal with infinitely distributed delays; therefore, our theoretical results cannot handle the synchronization problem for generalized reaction-diffusion neural networks with both discrete time-varying and infinitely distributed delays. This is a problem that we should study in the future.
By utilizing the Poincaré integral inequality and following a similar line as in the proof of Theorem 3, the desired synchronization of generalized reaction-diffusion neural networks with Neumann boundary conditions can be obtained readily.
Theorem 4. The generalized reaction-diffusion neural networks with interval time-varying delays (10) and (14) can be globally synchronized under the Neumann boundary conditions (12), (17) and the linear feedback controller (15) if there exist positive definite matrices P, Q1, Q2, R1, R2, S1, S2, T1, T2, Z1 and Z2, positive definite diagonal matrices H1, H2, Λ1, Λ2 and M, and real matrices X1 and X2 such that the following linear matrix inequality holds: where the blocks are indexed by i, j = 1, . . . , 9, and λ1 is the smallest positive eigenvalue of the Neumann boundary problem. Moreover, the gain matrices K1 and K2 of the linear feedback controller (15) can be designed as

Remark 8. It is worth pointing out that, for Neumann boundary conditions, the synchronization criteria given in Theorem 4 contain information not only about the time delays, but also about the reaction-diffusion terms. In this sense, the conditions are diffusion-dependent as well as delay-range-dependent, which makes them less conservative than the diffusion-independent synchronization criteria in [7, 11].

Numerical simulations
In this section, by using the classical implicit format for solving the partial differential equations and the method of steps for differential-difference equations, we show that if the parameters are appropriately chosen, the reaction-diffusion neural networks can exhibit chaotic attractors. Furthermore, some numerical examples are given to illustrate the theoretical results above.
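The combination of an implicit time-stepping scheme with the method of steps for the delay can be sketched in code. The example below is not the paper's system (39); it simulates a hypothetical scalar delayed reaction-diffusion equation u_t = D u_xx − a u + w tanh(u(t − τ)) on [0, L] with zero Dirichlet boundary conditions, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_delayed_rd(D=0.1, a=1.0, w=2.0, tau=1.0,
                        L=2.0, nx=21, dt=0.01, t_end=5.0):
    """Implicit-Euler simulation of u_t = D u_xx - a u + w tanh(u(t - tau))
    on [0, L] with zero Dirichlet BC. The delayed state is read from a
    stored history buffer (method of steps)."""
    dx = L / (nx - 1)
    n_tau = int(round(tau / dt))          # delay measured in time steps
    m = nx - 2                            # number of interior grid points
    # Second-difference Laplacian on the interior (boundaries stay 0).
    lap = (np.diag(-2.0 * np.ones(m)) +
           np.diag(np.ones(m - 1), 1) +
           np.diag(np.ones(m - 1), -1)) / dx**2
    # Implicit left-hand side: (1/dt + a) I - D * lap  acts on u(t+dt).
    M = np.eye(m) * (1.0 / dt + a) - D * lap
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x / L)             # initial spatial profile
    # Constant initial history on [-tau, 0].
    hist = [u.copy() for _ in range(n_tau + 1)]
    for _ in range(int(round(t_end / dt))):
        u_delay = hist[0][1:-1]           # interior state at time t - tau
        rhs = u[1:-1] / dt + w * np.tanh(u_delay)
        u_new = np.zeros(nx)              # boundary values remain zero
        u_new[1:-1] = np.linalg.solve(M, rhs)
        hist.pop(0)
        hist.append(u_new.copy())
        u = u_new
    return u

u_final = simulate_delayed_rd()
assert np.all(np.isfinite(u_final))
```

The diffusion term is treated implicitly (keeping the stiff part stable for moderate dt), while the bounded delayed nonlinearity is evaluated from the history buffer, which is exactly the method of steps.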
Example 1. For the sake of simplicity, we consider the following 2-D reaction-diffusion neural network (drive system): where The neural network described in this example is a reaction-diffusion static neural network. The initial condition of the neural network (39) is chosen as The simulation results for (39) are provided in Fig. 1. The chaotic behaviors can be seen in Fig. 2, where x is set as −2 and 4, respectively. The response system is described by Meanwhile, we set the initial condition for the response system (42) as follows: It is easy to see that τ1 = τ2 = µ = 0, l−i = −0.025 and l+i = 0.975, i = 1, 2. By using the Matlab LMI control toolbox to solve the LMI in Theorem 1, we can find a set of feasible solutions of (22). Therefore, by Theorem 1, we know that the response system (42) can completely synchronize with the drive system (39), and the control gain matrix K1 is
K1 = [ −13.6317  10.9697
        10.0135  −9.5812 ].
The dynamical behavior of the error system between (39) and (42) with parameters (40) and initial conditions (41) and (43) is shown in Fig. 3. The simulation results imply that the response system (42) completely synchronizes with the drive system (39).
The neural network described in this example is a classic delayed reaction-diffusion cellular neural network. Similarly, the neural network (39) with the above parameters (44) exhibits chaotic behavior, as shown in Fig. 4.
By simple computation, we obtain τ1 = 3.4, τ2 = 3.6, µ = 1.1, l−i = −0.025 and l+i = 0.975, i = 1, 2. By using the Matlab LMI control toolbox to solve the LMI in Theorem 3, we can find a set of feasible solutions. The dynamical behavior of the error system between (39) and (42) with parameters (44) and initial conditions (45) and (46) is shown in Fig. 5. The simulation results imply that the response system (42) completely synchronizes with the drive system (39).

Remark 9. In [8], the authors studied the asymptotical synchronization in the mean square for reaction-diffusion neural networks with time-varying delays under Dirichlet boundary conditions. The main results in [8] cannot be used to study this example, in which the delay derivative bound satisfies µ > 1 (fast-varying delay). However, after a simple computation, the conditions of Theorem 3 in this paper hold. The numerical simulations clearly verify the effectiveness of the developed linear feedback controller for the synchronization of generalized reaction-diffusion neural networks with interval time-varying delays.

Conclusion
In this paper, we have dealt with the synchronization problem for a class of chaotic generalized neural networks with interval time-varying delays and reaction-diffusion terms. Based on the linear feedback control technique, both delay-derivative-dependent and delay-range-dependent synchronization criteria are derived in terms of LMIs. The proposed sufficient conditions depend on the physical parameters of the neural networks, the time delays and the diffusion effects, and so can be checked easily and quickly. Finally, some illustrative examples and their simulations show the feasibility and effectiveness of our proposed theoretical results. It is worth mentioning that these new criteria not only extend some existing results, but also have advantages over some previous ones due to less conservatism.