Robust fixed-time synchronization of discontinuous Cohen–Grossberg neural networks with mixed time delays

This paper investigates the fixed-time synchronization (i.e., synchronization in the fixed-time sense) of Cohen–Grossberg drive-response neural networks with discontinuous neuron activations and mixed time delays (both time-varying discrete delay and distributed delay). To accomplish the goal of fixed-time synchronization, a novel discontinuous feedback control procedure is first designed for the response neural networks. Then, under the framework of Filippov solutions, by means of functional differential inclusions theory, the inequality technique and nonsmooth analysis theory with a Lyapunov-like approach, some sufficient criteria are derived to design the control parameters for achieving fixed-time synchronization of the proposed drive-response systems. Finally, two numerical examples are presented to illustrate the proposed methodologies.


Introduction
In this paper, we consider a class of discontinuous Cohen-Grossberg neural networks with mixed time-varying delays, with initial conditions, where i = 1, 2, . . . , n, n ≥ 2 is the number of neurons in the network, x_i(t) represents the state variable of the ith neuron, d_i(x_i(t)) denotes the amplification function, a_i(x_i(t)) is an appropriately behaved function, the matrices B = (b_ij) ∈ R^{n×n}, C = (c_ij) ∈ R^{n×n} and H = (h_ij) ∈ R^{n×n} are the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively, g_j denotes the activation functions, and I_i is the external input to the ith neuron. τ(t) corresponds to the discrete time-varying delay at time t and is a continuous function satisfying 0 ≤ τ(t) ≤ τ, where τ = max_{t∈R} τ(t) is a nonnegative constant; δ_ij(t) denotes the distributed time-varying delay at time t and is a continuous function satisfying 0 ≤ δ_ij(t) ≤ δ, where δ = max_{1≤i,j≤n} {max_{t∈R} δ_ij(t)} is a nonnegative constant.
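For orientation, a standard Cohen-Grossberg model with the mixed delays described above takes the following form (a sketch only; the paper's exact sign conventions and equation numbering may differ):

```latex
\frac{\mathrm{d}x_i(t)}{\mathrm{d}t} = -\,d_i\bigl(x_i(t)\bigr)\Biggl[a_i\bigl(x_i(t)\bigr)
  - \sum_{j=1}^{n} b_{ij}\, g_j\bigl(x_j(t)\bigr)
  - \sum_{j=1}^{n} c_{ij}\, g_j\bigl(x_j(t-\tau(t))\bigr)
  - \sum_{j=1}^{n} h_{ij} \int_{t-\delta_{ij}(t)}^{t} g_j\bigl(x_j(s)\bigr)\,\mathrm{d}s
  - I_i\Biggr],
\qquad i = 1,\dots,n.
```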
Based on the concept of drive-response synchronization, let us take (1) as the drive system and design the following response system, with initial conditions, where u_i(t) is the control input to be designed later. The synchronization of drive-response systems was first investigated by Pecora and Carroll [34], who showed that the behavior of the drive system can influence that of the response system, while the drive system does not depend on the response system. Synchronization means that the states of the response system converge to those of the drive system. Hence, by applying the synchronization property of drive-response neural networks, we can recognize the dynamics of an unknown neuron system from those of another well-known neuron system. During the past several years, various kinds of synchronization have been proposed, such as asymptotic synchronization [3,17,22,31,35], exponential synchronization [5,18,36,37,39,54], finite-time synchronization [2,6,29,41–46,50], fixed-time synchronization [8–10,13,25,47], etc. Compared with infinite-time synchronization, finite-time synchronization intrinsically requires a faster convergence speed. A critical issue with finite-time synchronization, however, is that the settling time depends on the initial conditions of the drive-response systems: different initial conditions may lead to different convergence times. Nevertheless, the initial conditions of many practical systems can hardly be adjusted, or are even impossible to estimate, which makes the final settling time inaccessible and deteriorates the systems' performance. To overcome this difficulty, in 2012, Polyakov [38] proposed a nonlinear feedback design for the fixed-time stabilization of linear control systems, where the definition of fixed-time stability was first introduced.
Both fixed-time synchronization and finite-time synchronization require that the errors between the drive system and the response system converge to zero in some finite time and remain zero afterwards. However, fixed-time synchronization further requires that the settling time be independent of the initial synchronization errors, so that an upper bound on the settling time can be given in advance. Thus, fixed-time synchronization is more favorable and applicable than finite-time synchronization and has broader practical relevance.
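Polyakov's notion [38] can be stated as follows (a standard formulation; the symbols T and T_max here are generic and not tied to the paper's numbering): the origin of the error system is fixed-time stable if it is finite-time stable and the settling-time function T(e_0) is uniformly bounded over all initial errors, i.e.,

```latex
\exists\, T_{\max} > 0 \ \text{ such that } \ T(e_0) \le T_{\max} \quad \text{for all } e_0 \in \mathbb{R}^n .
```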
In recent years, Cohen-Grossberg neural networks (CGNNs), first introduced by Cohen and Grossberg in 1983 [12], have been the object of intensive analysis by numerous authors due to their practical and important applications in signal processing, classification, parallel computation, pattern recognition, associative memory and optimization. In these applications, the dynamic behaviors of CGNNs play a key role, such as the existence, uniqueness, Hopf bifurcation and global asymptotic or global exponential stability of the equilibrium point, as well as periodic and almost periodic solutions. For more details, we refer the reader to [4,20,24,26,27,30,32,40,51,53,55–57] and the references cited therein. However, all of the above results were based on the assumption that the activation functions are continuous, Lipschitz continuous or even smooth. In recent years, considerable effort has been devoted to investigating neural network systems with discontinuous activation functions since the pioneering contribution of Forti and Nistri [15]; see, to name a few, [1,7,16,19,23,28,32,33,48,49]. For example, Cai et al. [7] studied the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions, and Wang and Huang [48] further discussed the almost periodic dynamical behaviors of Cohen-Grossberg neural networks with discontinuous activations. Building on [7] and [48], and taking into account the influence of the neutral difference operator, Kong et al. [23] investigated the dynamic behavior of a class of neutral-type neural networks with discontinuous activations and time-varying delays.
However, to the authors' knowledge, few papers have considered the robust fixed-time synchronization of discontinuous Cohen-Grossberg drive-response neural networks with mixed time delays. To partially fill this gap, and motivated by the above works, in this paper we investigate the robust fixed-time synchronization control problem for the discontinuous Cohen-Grossberg drive-response neural networks with mixed time-varying delays described by the differential equations (1) and (2). Under the concept of Filippov solutions, by applying differential inclusions, the inequality technique and nonsmooth analysis theory with a Lyapunov-like approach, some original sufficient conditions for robust fixed-time synchronization of the drive-response systems (1) and (2) are proposed. The highlights and major contributions of this paper are reflected in the following key aspects: (i) It is nontrivial to establish a unified framework to handle the influence of discontinuous activations and mixed delays of the Cohen-Grossberg neural networks. In this paper, we first make novel variable transformations for the neural network systems (1) and (2) to obtain two new differential equations. The Cauchy problems of these two new differential equations are then discussed in the Filippov sense [14]. (ii) To achieve robust fixed-time synchronization of discontinuous Cohen-Grossberg drive-response neural networks with mixed time delays, the uncertain differences between the Filippov solutions of the drive system and the corresponding response system are not easy to deal with under a continuous linear control law. In this paper, a discontinuous control law is designed for the response neural networks. (iii) Several simulation examples are investigated to verify the correctness of the main theorems and the corollaries.
The function g_i in (1) is assumed to satisfy the following properties: (H1) For each i = 1, 2, . . . , n, g_i : R → R is piecewise continuous, i.e., g_i is continuous except on a countable set of isolated points {ρ^i_k}, where there exist finite right and left limits g^+_i(ρ^i_k) and g^−_i(ρ^i_k), respectively. Moreover, g_i has at most a finite number of discontinuities on any compact interval of R. (H2) For each i = 1, 2, . . . , n, there exist nonnegative constants A_i and B_i such that. To derive the main results, the following assumptions are introduced.
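In the discontinuous-activation literature, (H2) is usually a linear growth condition of the following form (a plausible reconstruction of the omitted inequality; the paper's exact statement may differ):

```latex
|\xi| \le A_i |u| + B_i \quad \text{for all } \xi \in \overline{\mathrm{co}}\bigl[g_i(u)\bigr],\ u \in \mathbb{R},\ i = 1, 2, \dots, n .
```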
(H3) d_i(x) is continuous and bounded. Moreover, there exist positive constants d_i and d̄_i such that, for i = 1, 2, . . . , n, (H4) The derivative of the behaved function a_i(x) has a positive lower bound, i.e., there exists a positive constant a_i such that, for i = 1, 2, . . . , n,
http://www.journals.vu.lt/nonlinear-analysis
The remainder of this paper is organized as follows. Some preliminaries concerning the discontinuity theory and the definition of fixed-time synchronization are presented in Section 2. The control design schemes are proposed and employed to ensure robust fixed-time synchronization in Section 3. In Section 4, two numerical simulations are given to illustrate the effectiveness of the obtained results. Finally, conclusions are drawn in Section 5.

Essential definitions and lemmas
Notations. Let R be the set of real numbers, and let R^n denote the n-dimensional Euclidean space. Consider the column vectors x = (x_1, x_2, . . . , x_n)^⊤ ∈ R^n and y = (y_1, y_2, . . . , y_n)^⊤ ∈ R^n, where the superscript ⊤ represents the transpose operator. Finally, let sign(·) denote the sign function.
In the following, the concepts of Filippov solutions and the set-valued Lie derivative are introduced to facilitate the subsequent analysis of the synchronization of the discontinuous Cohen-Grossberg drive-response neural networks (1) and (2). Consider the dynamic system defined by the differential equation ẋ(t) = f(t, x(t)) (3), where x(t) represents the state variable. If f(t, x(t)) is continuous with respect to x(t), then according to Peano's theorem the existence of a continuously differentiable solution is guaranteed. If f(t, x(t)) is a locally measurable function but is discontinuous with respect to x(t), then the solution of the Cauchy problem (3) is discussed in the Filippov sense [14].
Suppose that f(t, x(t)) : R^+ × R^n → R^n is Lebesgue measurable and locally bounded uniformly in time. A vector function x(t) is called a Filippov solution of (3) if x(t) is absolutely continuous and satisfies the differential inclusion ẋ(t) ∈ K[f(t, x(t))] for a.e. t ∈ [0, t_1), where t_1 ∈ R^+ or t_1 = +∞, and the set-valued function K[f(t, x(t))] is defined as follows, where co(S) denotes the convex closure of the set S, B(x, δ) is the open ball with center x ∈ R^n and radius δ > 0, and μ(N) represents the Lebesgue measure of the set N.
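The set-valued map K[f] referred to above is the standard Filippov regularization (given here for completeness, in the notation co(S), B(x, δ), μ(N) of the definition):

```latex
K\bigl[f(t,x)\bigr] \;=\; \bigcap_{\delta > 0} \; \bigcap_{\mu(N) = 0} \overline{\mathrm{co}}\,\Bigl[\, f\bigl(t,\, B(x,\delta) \setminus N \bigr) \Bigr].
```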
Let V : R^n → R be a locally Lipschitz continuous function, and let Ω_V ⊂ R^n be the set of points where V is not differentiable. The generalized gradient is a set-valued map ∂V : R^n → B(R^n), where B(R^n) is the set consisting of all subsets of R^n and M is an arbitrary set of measure zero.
The set-valued Lie derivative of V with respect to system (3) is defined as follows.
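In Clarke's nonsmooth analysis, the generalized gradient and the set-valued Lie derivative are conventionally written as follows (a standard formulation consistent with the notation above, not a quotation of the paper):

```latex
\partial V(x) \;=\; \mathrm{co}\Bigl\{ \lim_{k\to\infty} \nabla V(x_k) \;:\; x_k \to x,\ x_k \notin \Omega_V \cup M \Bigr\},
```

```latex
\widetilde{\mathcal{L}}_F V(x) \;=\; \Bigl\{ a \in \mathbb{R} \;:\; \exists\, v \in K\bigl[f(t,x)\bigr] \ \text{such that} \ a = \langle \nu, v\rangle \ \text{for all } \nu \in \partial V(x) \Bigr\}.
```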

Choose the transformation function
It follows from (H3) that 1/d_i(x) exists and that 1/d_i(x) is positive and continuous for all x ∈ R. Substituting the above variable transformations into the original systems (1) and (2), we obtain the transformed systems for a.e. t ∈ [0, b), i = 1, 2, . . . , n.
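A transformation commonly chosen in this setting (an assumption about the omitted formula, consistent with the requirement that 1/d_i be positive and continuous) is

```latex
h_i(x) = \int_0^x \frac{\mathrm{d}s}{d_i(s)}, \qquad \text{so that} \qquad \frac{\mathrm{d}}{\mathrm{d}t}\, h_i\bigl(x_i(t)\bigr) = \frac{\dot{x}_i(t)}{d_i\bigl(x_i(t)\bigr)},
```

which removes the amplification factor d_i from the right-hand side of the transformed equations.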

Main results
In this section, some sufficient criteria are derived to design the control parameters for achieving fixed-time synchronization of the drive-response systems (1) and (2).
Theorem 1. Suppose that assumptions (H1)-(H4) hold and that the design parameters are appropriately selected as follows: Then the robust fixed-time synchronization of the discontinuous Cohen-Grossberg drive-response neural networks (1) and (2) under the control law (9) is achieved. Moreover, lim_{t→T_max} e(t) = 0 and e(t) = 0 for all t ≥ T_max, where the settling time T_max is given as
Proof. Define a candidate Lyapunov function as follows: It can easily be verified that the composed function V(e(t)) is C-regular. Since |e_i(t)| is a locally Lipschitz continuous function in e_i on R, recalling the definition of Clarke's generalized gradient of the function |e_i(t)| at e_i(t), we see that, for any ν_i(t) ∈ ∂(|e_i(t)|), ν_i(t) = sign(e_i(t)) if e_i(t) ≠ 0, and ν_i(t) can be chosen arbitrarily in [−1, 1] if e_i(t) = 0. In particular, for any i = 1, 2, . . . , n, we choose ν_i(t) = sign(e_i(t)). Thus, according to the definition of K[·], the set-valued Lie derivative of V(e(t)) along the error dynamics (8) can be calculated as in (10). Note that, by (8), we have (11). Substituting (11) into (10), we have (12). Since the behaved function a_i and the transformation function h_i(x_i(t)) are both strictly monotonically increasing and differentiable, with a_i(0) = 0 and h_i(0) = 0, a_i(h_i^{−1}(w_i(t))) is also strictly monotonically increasing and differentiable with respect to t. Thus, in view of (H3), we obtain (13). By applying (H2), we obtain (14). In a similar way, we get (15) and (16). Furthermore, noting that h_i^{−1}(z_i(t)) is strictly monotonically increasing and h_i^{−1}(0) = 0, we see that sign(ε_i(t)) = sign(e_i(t)). Then by (9) we obtain (17). Substituting (13)-(17) into (12), we obtain an estimate which, together with Lemma 2, yields the desired differential inequality. According to Lemma 1, we conclude that the drive-response systems (1) and (2) achieve robust fixed-time synchronization.
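The fixed-time mechanism behind Lemma 1 can be illustrated numerically. The sketch below (Python, not from the paper; the comparison system V̇ = −αV^p − βV^q with 0 < p < 1 < q and the bound T_max = 1/(α(1−p)) + 1/(β(q−1)) follow the standard Polyakov-type fixed-time lemma, which may differ in form from the paper's Lemma 1) checks that the settling time stays below the uniform bound for initial values spanning four orders of magnitude:

```python
def settle_time(v0, alpha=2.0, beta=2.0, p=0.5, q=2.0, dt=1e-4):
    """Forward-Euler integration of dV/dt = -alpha*V^p - beta*V^q;
    returns the first time at which V reaches (numerical) zero."""
    v, t = float(v0), 0.0
    while v > 1e-12:
        v -= dt * (alpha * v**p + beta * v**q)
        v = max(v, 0.0)  # clamp: V cannot overshoot below zero
        t += dt
    return t

alpha, beta, p, q = 2.0, 2.0, 0.5, 2.0
# Uniform settling-time bound, independent of the initial error V(0):
t_max = 1.0 / (alpha * (1.0 - p)) + 1.0 / (beta * (q - 1.0))  # = 1.5
for v0 in (0.01, 1.0, 100.0):  # widely different initial errors
    assert settle_time(v0, alpha, beta, p, q) < t_max
```

Note that even a 10^4-fold increase of the initial error does not push the settling time past t_max, which is precisely the property distinguishing fixed-time from finite-time convergence.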
Moreover, the settling time can be obtained by applying Lemma 1.
Corollary 1. Suppose that assumptions (H1)-(H4) hold, d_i(t) ≡ 1 for i = 1, 2, . . . , n, and the design parameters are appropriately selected as follows: Then the robust fixed-time synchronization of the discontinuous Cohen-Grossberg drive-response neural networks (1) and (2) under the control law (9) is achieved. Moreover, lim_{t→T_max} e(t) = 0 and e(t) = 0 for all t ≥ T_max, where the settling time T_max is given as
Proof. The proof is similar to that of Theorem 1 and is omitted here.
Remark 1. From Examples 1 and 2, one can see that the activations are discontinuous, unbounded and nonmonotonic; that is, the activations are neither continuous, Lipschitz continuous nor smooth, which distinguishes them from those in the related references in the literature, such as [4,20,24,26,27,30,32,40,53,55,56]. The results established in the present paper extend the previous work on CGNNs to the discontinuous case.
Remark 2. To the best of our knowledge, there has been no research on the robust fixed-time synchronization of discontinuous Cohen-Grossberg neural networks with mixed time delays. We also mention that none of the results in the references cited in the present paper can be directly applied to obtain the robust fixed-time synchronization results of Examples 1 and 2. This implies that the results of this paper are essentially new.

Conclusion
In this paper, we have dealt with the robust fixed-time synchronization of discontinuous Cohen-Grossberg neural networks with mixed time-varying delays. First, we presented a novel discontinuous feedback control procedure for the response neural networks. Then, under the concept of Filippov solutions, by using functional differential inclusions theory, the inequality technique and nonsmooth analysis theory with a Lyapunov-like approach, some new criteria were obtained to design the control parameters for achieving fixed-time synchronization of the proposed drive-response systems. Finally, two simulation examples were shown to verify the correctness of our main results. To the best of our knowledge, the results presented here have not appeared in the related literature. Consequently, our results enrich and extend the corresponding ones known in the literature.