Some remarks on self-normalization for a simple spatial autoregressive model

In this paper we continue the investigations, started in [5], of the self-normalization of the simple autoregressive field X_{t,s} = a X_{t-1,s} + b X_{t,s-1} + \varepsilon_{t,s}, and extend the previous results to the case where the variance of the innovations of this process is not finite.


Introduction and formulation of results
This paper continues the investigations of self-normalization for autoregressive fields on the plane which were started in [5]. In that paper the results on self-normalization for the AR(1) process obtained by Juodis and Račkauskas [4] were generalized to an autoregressive field on the plane. However, the results presented in [5] were proved under the assumption that the innovations of the autoregressive field have a finite second moment. Here we give a slight extension of that result and prove that a sufficient condition for our results to hold is that the innovations belong to the domain of attraction of the normal law.
We consider one of the simplest autoregressive fields,

X_{t,s} = a X_{t-1,s} + b X_{t,s-1} + \varepsilon_{t,s}.   (1)

We suppose that \varepsilon_{t,s}, (t,s) \in Z^2, are i.i.d. random variables with \varepsilon_{t,s} \in DAN, where DAN stands for the domain of attraction of the normal law. We also assume that |a| + |b| < 1. If the second moment of the innovations is finite, this condition guarantees the existence of a stationary solution of (1). However, this is not the case here, so we have to investigate the process (1) with the initial conditions X_{0,s} = 0, X_{t,0} = 0, (t,s) \in Z^2, t \ge 0, s \ge 0. In this case the process takes the form

X_{t,s} = \sum_{j=0}^{t-1} \sum_{k=0}^{s-1} \binom{j+k}{j} a^j b^k \varepsilon_{t-j,s-k}.   (2)

The remaining statistics entering our theorems are defined as in [5]. Finally, let us denote M_n = (M_{n,1}, M_{n,2}) and D_n = \{(t,s) \in Z^2 : 1 \le t \le M_{n,1},\ 1 \le s \le M_{n,2}\}.
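As a quick numerical sanity check, the following sketch (ours, not the paper's; the values of a and b and the Gaussian innovations are illustrative stand-ins, since the paper only requires \varepsilon \in DAN and |a| + |b| < 1) simulates the field by the recursion (1) with zero boundary conditions and compares it with the explicit solution of (1), a double sum with binomial path-counting coefficients:

```python
import math
import random

def simulate_field(a, b, eps):
    """Run the recursion X_{t,s} = a*X_{t-1,s} + b*X_{t,s-1} + eps_{t,s}
    with zero boundary conditions X_{0,s} = X_{t,0} = 0."""
    T, S = len(eps), len(eps[0])
    X = [[0.0] * (S + 1) for _ in range(T + 1)]
    for t in range(1, T + 1):
        for s in range(1, S + 1):
            X[t][s] = a * X[t - 1][s] + b * X[t][s - 1] + eps[t - 1][s - 1]
    return X

def explicit_solution(a, b, eps, t, s):
    """Explicit form: X_{t,s} = sum_{j<t} sum_{k<s} C(j+k, j) a^j b^k eps_{t-j, s-k}."""
    return sum(math.comb(j + k, j) * a**j * b**k * eps[t - j - 1][s - k - 1]
               for j in range(t) for k in range(s))

random.seed(0)
a, b = 0.3, 0.4                                   # |a| + |b| < 1
eps = [[random.gauss(0, 1) for _ in range(6)] for _ in range(6)]
X = simulate_field(a, b, eps)
assert abs(X[5][4] - explicit_solution(a, b, eps, 5, 4)) < 1e-10
```

Iterating the recursion picks up each \varepsilon_{t-j,s-k} once per monotone lattice path from (t-j, s-k) to (t, s), which is where the binomial coefficient comes from.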
(In the sequel, where no confusion can arise, we suppress the index n in some of the notations.) Now our first result can be formulated as follows.
The analogous result in the finite-variance case was established in [2] (see also [1]). It is necessary to note that even in the case of infinite second moments of the innovations, the limit distribution is still the same.
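The one-dimensional mechanism behind this remark can be illustrated numerically. The sketch below is ours (the Pareto-tail distribution and the sample size are arbitrary choices): symmetric Pareto(2) variables have infinite variance but belong to DAN, and their self-normalized sum S_n / V_n, with V_n^2 = \sum_i \varepsilon_i^2, is still approximately standard normal.

```python
import random

def heavy_tailed(rng):
    """Symmetric variable with tail P(|e| > x) = x**(-2) for x >= 1:
    the variance is infinite, yet the distribution lies in the domain
    of attraction of the normal law."""
    mag = (1.0 - rng.random()) ** -0.5   # Pareto(2) magnitude
    return mag if rng.random() < 0.5 else -mag

rng = random.Random(42)
n = 50_000
sample = [heavy_tailed(rng) for _ in range(n)]
s_n = sum(sample)
v_n = sum(e * e for e in sample) ** 0.5  # self-normalizer V_n
z = s_n / v_n                            # approximately N(0, 1) for large n
```

Dividing by the random norming V_n, instead of the deterministic norming that does not exist here, is exactly what lets the limit theorem survive infinite variance.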
We also reformulate Theorem 2 of [5] under the new conditions on the innovations. Assume that 1 \le m_1 \le M_1 and 1 \le m_2 \le M_2 are integers (depending on n, too) such that I := M_1 m_1^{-1} and J := M_2 m_2^{-1} are integers; we set m_{1,i} := m_1 (i - 1), m_{2,j} := m_2 (j - 1).

THEOREM 2. If the conditions of Theorem 1 are satisfied and additionally

Proofs
Due to the short format of the paper it is impossible to place the whole proof here, so we only quote what needs to be changed in [5] in order to prove the theorems formulated above, presenting the most interesting part in more detail and putting the simpler parts aside. We keep the notation used in [5]. In order to verify Theorem 1 we need to prove (4), which is equivalent to proving the corresponding statement for the remainder term R. To prove Theorem 2 we additionally need to prove

\sum_{i,j} \kappa_{i,j}^2 \chi_n^{-2} \xrightarrow{P} 0, \qquad \sum_{i,j} \eta_{i,j}^2 \chi_n^{-2} \xrightarrow{P} 0.

For the definitions we refer the reader to [5]. Part (7) is the most interesting, and we provide some details of its proof here. We need to show that (8) holds for the process defined by (2).

X_{t-1,s} X_{t,s-1} = Z^{(1)}_{t,s} + Z^{(2)}_{t,s}.
Using (2), we define Z^{(1)}_{t,s} and Z^{(2)}_{t,s} as follows: the sum Z^{(1)}_{t,s} contains all the terms of the product X_{t-1,s} X_{t,s-1} in which the indices of the two innovations coincide, i.e. t - 1 - j = t - j_1 and s - k = s - 1 - k_1; the sum Z^{(2)}_{t,s} contains the rest. This means that the expectations E|\varepsilon_{t',s'} \varepsilon_{t'',s''} \chi_n^{-2}| in the sums Z^{(1)}_{t,s} \chi_n^{-2} and Z^{(2)}_{t,s} \chi_n^{-2} can be estimated using Lemmas 2 and 3 from [5] (which were originally proved in [3]), for the summands in Z^{(1)}_{t,s} and in Z^{(2)}_{t,s} respectively. Here and elsewhere in the text, C stands for a constant.
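The decomposition can be checked numerically on a small field. In the sketch below (ours; the indexing follows our reading of the explicit solution of the recursion, so treat it as an assumption), both factors of the product are expanded over their innovations, and the cross products are partitioned by whether the two innovation indices coincide; the two parts recover the product exactly.

```python
import math
import random

def terms(a, b, t, s):
    """Summands of the explicit form of X_{t,s}: pairs
    (index of the innovation, its coefficient)."""
    return [((t - j, s - k), math.comb(j + k, j) * a**j * b**k)
            for j in range(t) for k in range(s)]

random.seed(1)
a, b, t, s = 0.3, 0.4, 4, 5
eps = {(u, v): random.gauss(0, 1) for u in range(1, t + 1) for v in range(1, s + 1)}

x_left = sum(c * eps[i] for i, c in terms(a, b, t - 1, s))    # X_{t-1,s}
x_right = sum(c * eps[i] for i, c in terms(a, b, t, s - 1))   # X_{t,s-1}

# All cross products of the two expansions, split by coinciding indices.
pairs = [(i1, c1, i2, c2)
         for i1, c1 in terms(a, b, t - 1, s)
         for i2, c2 in terms(a, b, t, s - 1)]
z1 = sum(c1 * c2 * eps[i1] * eps[i2] for i1, c1, i2, c2 in pairs if i1 == i2)
z2 = sum(c1 * c2 * eps[i1] * eps[i2] for i1, c1, i2, c2 in pairs if i1 != i2)

assert abs(x_left * x_right - (z1 + z2)) < 1e-9
```

Z^{(1)}_{t,s} thus collects the squared-innovation terms, while Z^{(2)}_{t,s} collects the cross terms with distinct innovations, which is why the two parts admit different estimates.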
The proof that \sum_{(t,s) \in D_n} Z^{(1)}_{t,s} \chi_n^{-2} \xrightarrow{P} \gamma_{(-1,1)} is only marginally more complex. Denote M_{1,n} - t = h_{1,t} and M_{2,n} - s = h_{2,s}. If we regroup the coefficients in the sum \sum_{(t,s) \in D_n} Z^{(1)}_{t,s}, we obtain \sum_{(t,s) \in D_n} Z^{(1)}_{t,s} = \sum_{(t,s) \in D_n} \tau_{t,s}, where each \tau_{t,s} is a trimmed version of \gamma_{(-1,1)}. We split the remainder term R^{(n)}_{t,s} into three parts: R^{(n,1)}_{t,s}, R^{(n,2)}_{t,s}, and R^{(n,3)}_{t,s}.
The definitions are as follows. The terms R^{(n,2)}_{t,s} and R^{(n,3)}_{t,s} are similar, so we only show how to estimate the impact of one of them; the other is dealt with in the same way.
The estimation is as follows. Thus the required result (8) is achieved. The proof of (6) follows from a combination of Chebyshev's theorem with the aforementioned Lemmas 2 and 3 from [5]. However, in this case an estimate of the second moment of the quantity in question is necessary, though the framework is essentially the same. Proving (4) and (5) is even simpler: it can be done by applying the same lemmas together with the Newton binomial formula.