Optimal control problem for the BCF model describing crystal surface growth



Introduction
Over the past decades, the optimal control of distributed parameter systems has received considerable attention. A wide spectrum of applied problems, for instance in chemical engineering and vehicle dynamics, can be solved by methods of optimal control. Modern optimal control theories and applied models are represented not only by ODEs but also by PDEs, especially nonlinear parabolic equations. Many papers have already studied control problems for nonlinear parabolic equations; see, for example, [1, 3, 8, 11, 13, 14, 16].
In this paper, we consider the optimal control problem for the BCF model

u_t + a u_xxxx + µ (u_x / (1 + |u_x|^2))_x = 0,  (x, t) ∈ Ω × (0, T),  (1)

where a and µ are positive constants and Ω = (0, 1). On the basis of physical considerations, equation (1) is supplemented with boundary value conditions (2) and the initial value condition

u(x, 0) = u_0(x) for all x ∈ Ω.  (3)
Equation (1) was presented by Johnson et al. [7] for describing the growth of a crystal surface on the basis of the BCF theory due to Burton et al. [2]. Here u(x, t) denotes the displacement of the surface height from the standard level (u = 0) at a position x ∈ Ω. The term a u_xxxx in equation (1) describes the surface diffusion of adatoms, which is caused by the difference of the chemical potential. Meanwhile, the term µ (u_x / (1 + |u_x|^2))_x describes the effect of surface roughening.
During the past years, many authors have paid much attention to equation (1). It was Johnson et al. [7] who presented this equation for describing the growth of a crystal surface on the basis of the BCF theory. Rost and Krug [10] studied unstable epitaxy on singular surfaces using equation (1) with a prescribed slope-dependent surface current. They derived scaling relations for the late stage of growth, where power-law coarsening of the mound morphology is observed. In [9], in the limit of weak desorption, Pierre-Louis et al. derived equation (1) for a vicinal surface growing in the step-flow mode. This limit turned out to be singular, and nonlinearities of arbitrary order needed to be taken into account.
Recently, Fujimura and Yagi [4, 5] also considered equation (1). In their papers, the uniqueness of local solutions and the existence of global solutions were obtained, a dynamical system determined by the initial-boundary value problem of the model equation was constructed, and the asymptotic behavior of trajectories of the dynamical system was studied. In [6], Grasselli, Mola and Yagi showed that equation (1), endowed with no-flux boundary conditions, generates a dissipative dynamical system on a phase space of L^2-type under very general assumptions on ∂Ω. They proved that the system possesses a global as well as an exponential attractor. In addition, if ∂Ω is smooth enough, they showed that every trajectory converges to a single equilibrium by means of a suitable Lojasiewicz-Simon inequality. In [15], based on the iteration technique for regularity estimates and the classical existence theorem of global attractors, Zhao and Liu proved the existence of a global attractor for equation (1) on some affine space of H^k (0 ≤ k < +∞).
In this article, we are concerned with the distributed optimal control problem

min J(u, w) = 1/2 ‖Cu − z_d‖_S^2 + δ/2 ‖w‖_{L^2(Q_0)}^2

subject to

u_t + a u_xxxx + µ (u_x / (1 + |u_x|^2))_x = Bw  (4)

together with the conditions (2) and (3), where the operator C is called the observer and S is a real Hilbert space of observations. The control target is to match the given desired state z_d in the L^2-sense by adjusting the body force w in a control volume Q_0.

Now we introduce some notation that will be used throughout the paper. For fixed T > 0, with H = L^2(0, 1), let V^*, U^* and H^* be the dual spaces of V, U and H. Then we obtain the chain

V ⊂ U ⊂ H = H^* ⊂ U^* ⊂ V^*,

each embedding being dense.
The extension operator B ∈ L(L^2(Q_0), L^2(0, T; H)), which is called the controller, is introduced as the extension by zero outside the control volume: (Bw)(x, t) = w(x, t) for (x, t) ∈ Q_0 and (Bw)(x, t) = 0 otherwise. We supply H with the inner product (·, ·) and the norm ‖·‖, and we define the space

W(0, T; V) = {u ∈ L^2(0, T; V): u_t ∈ L^2(0, T; V^*)},

which is a Hilbert space endowed with the common inner product.

This paper is organized as follows. In Section 2, we prove the existence and uniqueness of the weak solution to problem (1)-(3) in a special space and discuss the relation among the norms of the weak solution, the initial value and the control term. In Section 3, we consider the optimal control problem for the BCF model and prove the existence of an optimal solution. In Section 4, the optimality conditions for the BCF model are established and the optimality system is derived.
In the following, the letters c, c_i (i = 1, 2, . . .) will always denote positive constants, which may differ in various occurrences.

Existence and uniqueness of weak solutions
In this section, we prove the existence and uniqueness of the weak solution to problem (4), where Bw ∈ L^2(0, T; H) and w ∈ L^2(Q_0) is a control.
In the following Definition 1, we give the definition of the weak solution to problem (4) in the space W(0, T; V).
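The weak formulation behind Definition 1 can be sketched as follows, under the assumption that the boundary conditions (2) are such that all boundary terms vanish. Testing (4) with η ∈ V and integrating by parts, we find that, for a.e. t ∈ (0, T),

⟨u_t, η⟩_{V^*, V} + a (u_xx, η_xx) − µ (u_x / (1 + |u_x|^2), η_x) = (Bw, η),

together with u(0) = u_0. The fourth-order term is integrated by parts twice and the roughening term once, which produces the sign change in the µ-term.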
Here we supply V and V^* with suitable inner products. We shall state Theorem 1 on the existence and uniqueness of the weak solution to problem (4) and prove it.
Proof. The proof is based on the Galerkin method.
Denote by A = (−∂_x^2)^2 the differential operator and let {ψ_i}_{i=1}^∞ denote the eigenfunctions of A. For n ∈ N, define the discrete ansatz space as the span of {ψ_1, . . . , ψ_n}. Performing the Galerkin procedure for equation (4), we find approximate solutions u_n of the form u_n(t) = Σ_{i=1}^n c_i^n(t) ψ_i. It is easy to see that equation (5) is a system of ordinary differential equations and, according to ODE theory, there exists a unique solution of (5) on some interval [0, t_n). What remains is to show that the solutions are uniformly bounded as t_n → T and that the times t_n do not decay to 0 as n → ∞.
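To make the construction concrete, the following is a minimal numerical sketch of the Galerkin system (5). Everything below is an assumption made for illustration only: no-flux boundary conditions are imposed, so that ψ_k(x) = cos(kπx) are eigenfunctions of A = (−∂_x^2)^2, and the parameters a, µ, the control term Bw and the initial value are chosen ad hoc.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters and no-flux setting; psi_k(x) = cos(k*pi*x).
a, mu, n = 1.0, 1.0, 16
x = np.linspace(0.0, 1.0, 801)
dx = x[1] - x[0]
qw = np.full_like(x, dx)
qw[0] = qw[-1] = dx / 2                      # trapezoid quadrature weights
k = np.arange(n + 1)
psi = np.cos(np.pi * np.outer(k, x))         # psi_k on the grid
psix = -np.pi * k[:, None] * np.sin(np.pi * np.outer(k, x))
norm2 = np.where(k == 0, 1.0, 0.5)           # ||psi_k||^2 in L^2(0, 1)

def bw(t, xg):
    return 0.1 * np.sin(np.pi * xg)          # hypothetical control term Bw

def rhs(t, c):
    # u_n = sum_k c_k psi_k; testing (4) with psi_k and integrating by parts:
    # c_k' = -a (k pi)^4 c_k
    #        + [ (mu u_{n,x}/(1+u_{n,x}^2), psi_k') + (Bw, psi_k) ] / ||psi_k||^2
    ux = c @ psix
    flux = mu * ux / (1.0 + ux**2)
    nl = (flux * psix) @ qw                  # = -((flux)_x, psi_k)
    f = (bw(t, x) * psi) @ qw                # = (Bw, psi_k)
    return -a * (np.pi * k) ** 4 * c + (nl + f) / norm2

c0 = np.zeros(n + 1)
c0[1] = 0.05                                 # u_0(x) = 0.05 cos(pi x)
sol = solve_ivp(rhs, (0.0, 0.1), c0, method="BDF", rtol=1e-6, atol=1e-9)
uT = sol.y[:, -1] @ psi                      # approximate u_n(x, T)
```

Since the linear part carries eigenvalues of order a(nπ)^4, the system (5) is stiff, so an implicit solver such as BDF is the natural choice.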
Therefore, we shall prove the existence of solution in the following steps.
Step 1. Multiplying equation (5) by u_n and integrating with respect to x over (0, 1), we deduce a first energy estimate. Since Bw ∈ L^2(0, T; H) is the control term, we may assume that ‖Bw‖ ≤ M, where M is a positive constant. Using Gronwall's inequality, for all t ∈ [0, T], we obtain (6).

Step 2. Multiplying equation (5) by u_{n,xx} and integrating with respect to x over (0, 1), we deduce a second energy estimate, and it then follows from (7) that, since ‖Bw‖ ≤ M, Gronwall's inequality yields (8).

Step 3. Multiplying equation (5) by u_{n,xxxx} and integrating with respect to x over (0, 1), we deduce a third energy estimate. Since ‖Bw‖ ≤ M, using Gronwall's inequality, we obtain (9). Adding (6), (8) and (9) together gives ‖u_n‖_{H^2} ≤ c, and hence the uniform L^2(0, T; V) bound on the sequence {u_n} is proved.
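For concreteness, the Gronwall step in Step 1 can be sketched as follows, using the pointwise bound s^2 / (1 + s^2) ≤ 1 (the precise constants here are illustrative). Testing (5) with u_n gives

1/2 d/dt ‖u_n‖^2 + a ‖u_{n,xx}‖^2 = µ ∫_0^1 u_{n,x}^2 / (1 + u_{n,x}^2) dx + (Bw, u_n) ≤ µ + M^2/2 + ‖u_n‖^2/2,

and Gronwall's inequality then yields

‖u_n(t)‖^2 ≤ e^t (‖u_0‖^2 + (2µ + M^2) t) for all t ∈ [0, T],

which is a bound of the form (6).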
Step 4. We prove a uniform L^2(0, T; V^*) bound on the sequence {u_{n,t}}. Testing with an arbitrary η ∈ V and using (8), we get ‖u_{n,t}‖_{L^2(0,T;V^*)} ≤ c. Collecting the previous estimates, we get: (i) the sequence {u_n}_{n∈N} is bounded in L^2(0, T; V), independently of the dimension n of the ansatz space; (ii) the sequence {u_{n,t}}_{n∈N} is bounded in L^2(0, T; V^*), independently of the dimension n of the ansatz space.
By the above discussion, we obtain u ∈ W(0, T; V). It is easy to check that W(0, T; V) is continuously embedded into C([0, T]; H), the space of H-valued continuous functions on [0, T]. We conclude convergence of a subsequence, again denoted by {u_n}: weakly in W(0, T; V), weakly-star in L^∞(0, T; H) and strongly in L^2(0, T; H) to a function u ∈ W(0, T; V).
Since the proof of uniqueness is straightforward, we omit it. Theorem 1 is thus proved. Now we shall discuss the relation among the norms of the weak solution, the initial value and the control term.
Theorem 2. Assume that Bw ∈ L^2(0, T; H) and u_0 ∈ V. Then there exist positive constants C and C' such that (10) holds.

Proof. Clearly, (10) means (11). Multiplying equation (4) by u, integrating the result over (0, 1) and using the same argument as in the proof of Theorem 1, we obtain (12). Using Gronwall's inequality, we have (13). Integrating (12) with respect to t on [0, T], and combining the result with (13), we deduce (14). We also have (15). On the other hand, testing with an arbitrary η ∈ V, we get (16). By (13)-(16) and the definition of the extension operator B, we obtain (11). Hence, Theorem 2 is proved.

Optimal control problem
In this section, we consider the optimal control problem associated with problem (4) and prove the existence of an optimal solution.
In the following, we suppose that L^2(Q_0) is the Hilbert space of control variables, that B ∈ L(L^2(Q_0), L^2(0, T; H)) is the controller and that w ∈ L^2(Q_0) is a control. Consider the following control system (17). Here, in (17), it is assumed that u_0 ∈ V. By virtue of Theorem 1, we can define the solution map w → u(w) from L^2(Q_0) into W(0, T; V). The solution u is called the state of the control system (17). The observation of the state is assumed to be given by Cu, where C ∈ L(W(0, T; V), S) is an operator called the observer and S is a real Hilbert space of observations. The cost function associated with the control system (17) is given by

J(u, w) = 1/2 ‖Cu − z_d‖_S^2 + δ/2 ‖w‖_{L^2(Q_0)}^2,  (18)

where z_d ∈ S is a desired state and δ > 0 is fixed.
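As a down-to-earth illustration of the cost (18), the following sketch evaluates a discrete analogue on a space-time grid. The identity observer C, the grid, and the shape of the control array on Q_0 are assumptions made here for illustration only.

```python
import numpy as np

# Discrete analogue of J(u, w) = 1/2 ||Cu - z_d||^2 + delta/2 ||w||^2,
# with C taken as the identity (an assumption); u, z_d are samples on a
# space-time grid and w is sampled on the control subdomain Q_0.
def cost(u, z_d, w, delta, dt, dx):
    misfit = 0.5 * np.sum((u - z_d) ** 2) * dt * dx   # tracking term
    penalty = 0.5 * delta * np.sum(w ** 2) * dt * dx  # control effort term
    return misfit + penalty
```

For u = z_d and w = 0 the cost vanishes; increasing δ shifts the balance from tracking accuracy toward small control effort.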
Nonlinear Anal. Model. Control, 21(2):223-240

An optimal control problem for problem (17) is

min J(u, w),  (19)

where (u, w) satisfies (17). We define an operator e = e(e_1, e_2): X → Y. Here ∆^2 is an operator from V to V^*. Then we write (19) in the following form: min J(u, w) subject to e(u, w) = 0.
Proof. Suppose that (u, w) satisfies the equation e(u, w) = 0. In view of (18), we deduce that J(u, w) controls the norm of w; by Theorem 2, we then obtain (20). As the norm is weakly lower semicontinuous, J is weakly lower semicontinuous. Since J(u, w) ≥ 0 for all (u, w) ∈ X, there exists λ ≥ 0 defined by

λ = inf{J(u, w): (u, w) ∈ X, e(u, w) = 0},

which implies the existence of a minimizing sequence. From (20), there exists an element (u^*, w^*) ∈ X such that, as n → ∞, w^n → w^* weakly in L^2(Q_0).

Optimality conditions
It is well known that the optimality condition for w is given by the variational inequality

J′(u, w)(v − w) ≥ 0 for all v ∈ L^2(Q_0),  (22)

where J′(u, w) denotes the Gateaux derivative of J(u, v) at v = w.
The following lemma is essential in deriving necessary optimality conditions.
Lemma 1. For w, v ∈ L^2(Q_0), let u and u_h be the solutions of (17) corresponding to w and w + h(v − w). Then (u_h − u)/h converges, as h → 0, to the unique weak solution z ∈ W(0, T; V) of the linearized problem (23).

Proof. Let 0 < h < 1, and let u_h and u be the solutions of (17) corresponding to w + h(v − w) and w, respectively. We prove the lemma in the following two steps.
Step 2. We prove that (u_h − u)/h → z strongly in W(0, T; V). Rewriting (24) in an equivalent form, we can easily verify that the rewritten problem possesses a unique weak solution in W(0, T; V). On the other hand, it is easy to check that the linear problem (23) possesses a unique weak solution z ∈ W(0, T; V). Let r = (u_h − u)/h − z; then r satisfies (25). Setting η = r in (25) and writing κ = u_x + θ(u_{h,x} − u_x) with θ ∈ (0, 1), by the mean value theorem we obtain two terms I_1 and I_2. Note that κ satisfies ‖κ‖_{H^1} ≤ c and ‖κ‖_∞ ≤ c. Hence both I_1 and I_2 can be estimated in terms of these bounds, and, summing up, the resulting right-hand side tends to 0 as h → 0.
Using Gronwall's inequality, it is easy to check that (u_h − u)/h converges strongly to z in C([0, T]; H). Integrating the above inequality over (0, T), we obtain that (u_h − u)/h converges strongly to z in L^2(0, T; V).
On the other hand, testing with an arbitrary η ∈ V and using (25), we also obtain the convergence of the time derivatives. Then (u_h − u)/h → z strongly in W(0, T; V), and Lemma 1 is proved.
As in [8], we denote by Λ the canonical isomorphism of S onto S^*, where S^* is the dual space of S. By calculating the Gateaux derivative of (20) via Lemma 1, we see that the cost J is weakly Gateaux differentiable at w in the direction v − w. Then, for all v ∈ L^2(Q_0), (22) can be rewritten as (26), where z is the solution of (23) and (·, ·)_{W(0,T;V)^*, W(0,T;V)} denotes the duality pairing between W(0, T; V)^* and W(0, T; V).

Now we study the necessary conditions of optimality. To avoid the complexity of observation states, we consider two types of observations: distributed and terminal-value.

Case 1: C ∈ L(L^2(0, T; V); S). In this case, C^* ∈ L(S^*; L^2(0, T; V^*)) and (26) may be written as (27) for all v ∈ L^2(Q_0). Here we give the definition of a solution to the adjoint equation.

Definition 2. Given an optimal control Bw ∈ L^2(0, T; H) and u_0 ∈ V, there exists a solution p(v) ∈ W(0, T; V) to the adjoint problem (28). According to Theorem 1, this problem admits a unique solution (after changing t into T − t).
Setting η = z in (28) (with v = w) and using Lemma 1, we arrive at (29). Therefore, we have proved the following theorem.
Theorem 4. We assume that all conditions of Theorem 3 hold and, in addition, that C ∈ L(L^2(0, T; V); S). Then the optimal control w is characterized by the system of two PDEs and one inequality, namely (17), (28) and (29).
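The way the adjoint state p of (28) enters the characterization can be illustrated on a scalar toy analogue. This is an assumed illustration, not the BCF system itself: the state equation, the cost and all parameter values below are ad hoc, but the structure (forward state solve, backward adjoint solve, reduced gradient δw − p) mirrors (17), (28) and (29).

```python
import numpy as np

# Toy problem: minimize J(w) = 1/2 int_0^T (c - z)^2 dt + delta/2 int_0^T w^2 dt
# subject to c' = -lam*c + w, c(0) = c0.  The adjoint p solves the backward
# problem p' = lam*p + (c - z), p(T) = 0, and the reduced gradient is
# delta*w - p.  A finite-difference quotient checks the adjoint gradient.
lam, delta, c0, T, nt = 2.0, 1e-2, 1.0, 1.0, 2000
dt = T / nt
t = np.linspace(0.0, T, nt + 1)
z = np.sin(np.pi * t)                        # hypothetical desired state z_d

def solve_state(w):
    c = np.empty(nt + 1)
    c[0] = c0
    for i in range(nt):                      # implicit Euler, forward in time
        c[i + 1] = (c[i] + dt * w[i + 1]) / (1.0 + dt * lam)
    return c

def solve_adjoint(c):
    p = np.empty(nt + 1)
    p[-1] = 0.0
    for i in range(nt, 0, -1):               # implicit Euler, backward in time
        p[i - 1] = (p[i] - dt * (c[i - 1] - z[i - 1])) / (1.0 + dt * lam)
    return p

def J(w):
    c = solve_state(w)
    return 0.5 * dt * np.sum((c - z) ** 2) + 0.5 * delta * dt * np.sum(w ** 2)

def grad(w):                                 # reduced gradient delta*w - p
    return delta * w - solve_adjoint(solve_state(w))

w = np.zeros(nt + 1)
v = np.ones(nt + 1)                          # test direction
eps = 1e-6
fd = (J(w + eps * v) - J(w - eps * v)) / (2.0 * eps)
ad = dt * np.sum(grad(w) * v)                # adjoint-based value of J'(w)v
```

The finite-difference quotient fd and the adjoint value ad agree up to discretization error, which is the standard sanity check for an adjoint computation.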
Then we have the following theorem.
Theorem 5. We assume that all conditions of Theorem 3 hold and, in addition, that D ∈ L(H; H). Then the optimal control w is characterized by the system of two PDEs and one inequality, namely (17), (32) and (33).