Numerical Approximation of Some Infinite Gaussian Series and Integrals∗

Received: 31.01.2008   Revised: 01.08.2008   Published online: 28.08.2008

Abstract. The paper deals with the numerical computation of the asymptotic variance of the so-called increment ratio (IR) statistic and its modifications. The IR statistic is useful for estimation and hypothesis testing on the fractional parameter H ∈ (0, 1) of a random process (time series); see Surgailis et al. [1], Bardet and Surgailis [2]. The asymptotic variance of the IR statistic is given by an infinite integral (or infinite series) of 4-dimensional Gaussian integrals which depend on the parameter H. Our method can be useful for the numerical computation of other similar slowly convergent Gaussian integrals/series. Graphs and tables of approximate values of the variances σ²_p(H) and σ̂²_p(H), p = 1, 2, are included.


Introduction
Let X_1, ..., X_n be n observations of the discrete time process (X(t), t ∈ Z), and let S_m(τ) be defined as in [1]. The statistic IR_{2,n} was introduced in Surgailis et al. [1], while IR_{1,n} is its modification. In Bardet and Surgailis [2], the increment ratio statistics were applied to continuous time processes (X(t), t ∈ R) instead of discrete time sequences, with X_n(τ) := X(τ/n).

Results in [1] state that if X is a stationary (or stationary-increment) Gaussian sequence satisfying some semi-parametric assumptions, then

    √(n/m) (IR_{p,n} − E IR_{p,n}) ⇒ N(0, σ²_p(H)),   p = 1, 0 < H < 3/4;   p = 2, 0 < H < 7/4,

where ⇒ denotes weak convergence of random variables and N(0, σ²_p(H)) is a Gaussian random variable with zero mean and variance σ²_p(H) given by (1). Here η_τ, defined in (2), is a stationary process, and Δ is the difference operator:

    Δ_x g(x, y) = g(x + 1, y) − g(x, y),   Δ²_x g(x, y) = Δ_x(Δ_x g(x, y)),
    Δ_y g(x, y) = g(x, y + 1) − g(x, y),   Δ²_y g(x, y) = Δ_y(Δ_y g(x, y)).

Bardet and Surgailis [2] derived results on the asymptotic behavior of the statistics R_{p,n}, p = 1, 2. Corollary 4.3 in [2] states that if X is a continuous time Gaussian process with stationary increments whose variogram V(t) = E X²(t) satisfies some additional assumptions, then an analogous limit theorem holds for R_{p,n}.

The R_{p,n} (IR_{p,n}) statistics are useful for testing nonparametric hypotheses about the "fractional parameter" H (the long-memory parameter d = H − 1/2) of the observations (see [1, 2]). Precise values of the asymptotic variances in (1) are needed for the construction of confidence intervals for the above-mentioned tests. The aim of this paper is to discuss the numerical approximation of the infinite Gaussian integrals (1) and series (5). We provide graphs and tables of numerical values of the functions σ²_p(H) and σ̂²_p(H), p = 1, 2.
For p = 1, the tabulated interval is (0, 3/4), since for H ≥ 3/4 these quantities become infinite. For p = 2, the tabulated interval is (0, 7/4), since for H ≥ 7/4 (and p = 2) the series and the integral in (1) again diverge (see [1]). For the "discrete" quantities σ̂²_p(H), p = 1, 2, we provide a theoretical justification of the approximation with absolute error ε > 0.
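The truncation argument behind an absolute-error-ε approximation of a series can be illustrated with a generic Python sketch. This is not the paper's actual bound: the dominating constant C and the decay rate α of the terms are assumptions the user must supply; here they are checked on the toy series Σ k^{−2} = π²/6.

```python
import math

def truncated_sum(term, C, alpha, eps):
    """Approximate sum_{k>=1} term(k), assuming the user-supplied bound
    |term(k)| <= C * k**(-alpha) with alpha > 1.  By the integral test
    the tail beyond N is at most C * N**(1-alpha) / (alpha - 1), so N
    is chosen to push that bound below eps."""
    N = math.ceil((C / (eps * (alpha - 1.0))) ** (1.0 / (alpha - 1.0)))
    return sum(term(k) for k in range(1, N + 1)), N

# Illustration on a series with a known limit: zeta(2) = pi^2 / 6.
approx, N = truncated_sum(lambda k: k ** -2, C=1.0, alpha=2.0, eps=1e-4)
```

For the covariance series in (5), C and α would come from the decay estimates of the covariances proved later in the paper.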
For the "continuous" quantities σ²_p(H), p = 1, 2, the error of our approximation is assessed only heuristically, because of the lack of available methods for the numerical integration of Gaussian integrals such as (1) restricted to a finite interval [−T, T] ⊂ (−∞, ∞). Because of the similarities between the cases p = 1 and p = 2, the subsequent discussion focuses on the approximation of σ²_1(H) and σ̂²_1(H) only. The stationarity of the random process η_τ, τ ∈ R, mentioned above allows the integration/summation to be reduced to a single variable τ.
Let us briefly describe the semi-heuristic method used for the numerical approximation of I^−(τ_0); see [5] for details. It is based on the classical rectangle (midpoint) and trapezium quadrature formulas, which together yield the resulting approximation. Except for the cases τ = 0 and τ = 1, the covariance Cov(η_0, η_τ) has a representation as a 4-tuple integral. Approximate values of a 2-tuple integral of an (integrable) function over a finite integration region are easy to obtain using the NIntegrate function in Mathematica 5.2 (software for technical and scientific computing). NIntegrate uses the adaptive Genz–Malik [6] method to estimate the multiple integral (the options AccuracyGoal, PrecisionGoal and WorkingPrecision allow the absolute error to be controlled). The rest of this section discusses the simplification of Cov(η_0, η_τ).

The corresponding Gaussian vector has a probability density function in which q_11 = q_22 = (1 − A²)^{−1} and q_12 = q_21 = −A/(1 − A²) are the elements of the inverse of the covariance matrix of the Gaussian vector built from Z_H(1).

Proof. By the definition of covariance, Cov(η_0, η_τ) can be expanded term by term. In view of (15), it is enough to find E(η_0^{(1)})². Using the stated identity, valid for x and y not both zero, and substituting x = r cos φ, y = r sin φ, we integrate with respect to r first, giving (1 − A)/(1 − 2A cos φ sin φ). To obtain the value of the remaining integral with respect to φ, use the substitution tan φ = v.
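A plain composite midpoint rule is already enough to reproduce this kind of 2-tuple Gaussian integral over a finite region; the sketch below is a non-adaptive stand-in for Mathematica's NIntegrate (which uses the adaptive Genz–Malik method), not the paper's code. The density uses the inverse-matrix entries q_11 = q_22 = (1 − A²)^{−1}, q_12 = −A/(1 − A²) from the text; the value A = 0.4 and the integration box are illustrative choices, and the sanity check is that the density integrates to 1.

```python
import math

def bivariate_gauss_pdf(x, y, A):
    # Density of a zero-mean Gaussian pair with unit variances and
    # correlation A; q11 = q22 = 1/(1-A^2), q12 = -A/(1-A^2) are the
    # entries of the inverse covariance matrix, as in the text.
    q = 1.0 / (1.0 - A * A)
    expo = -0.5 * q * (x * x - 2.0 * A * x * y + y * y)
    return math.exp(expo) / (2.0 * math.pi * math.sqrt(1.0 - A * A))

def midpoint_2d(f, a, b, n):
    # Composite midpoint (rectangle) rule on the square [a, b]^2.
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        for j in range(n):
            y = a + (j + 0.5) * h
            total += f(x, y)
    return total * h * h

# Sanity check: the density integrates to 1 (truncation to [-6, 6]^2
# loses only a negligible Gaussian tail).
val = midpoint_2d(lambda x, y: bivariate_gauss_pdf(x, y, A=0.4), -6.0, 6.0, 200)
```

For rapidly decaying smooth integrands the midpoint rule is very accurate here, since the Euler–Maclaurin error terms involve endpoint derivatives that are essentially zero at ±6.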
We now turn to the case τ = 1. Introduce the auxiliary functions of the angles φ and θ whose expansions contain, among others, the cross terms 2a_12 sin φ cos θ sin φ sin θ and 2a_13 sin φ cos θ cos φ, where a_ij, i, j = 1, 2, 3, are the elements of the inverse of the covariance matrix of the corresponding three-dimensional Gaussian vector, and D_3(H) denotes the determinant of the matrix in (16).
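The quantities a_ij and D_3(H) amount to inverting a 3×3 covariance matrix and taking its determinant. A minimal self-contained sketch of that linear-algebra step; the matrix `sigma` below is a hypothetical stand-in, since the actual matrix in (16) depends on H.

```python
def det3(m):
    # Determinant by cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def inv3(m):
    # Inverse of a 3x3 matrix via the adjugate; its entries a_ij are
    # the coefficients appearing in the quadratic form of the density.
    d = det3(m)
    cof = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            sub = [[m[r][c] for c in range(3) if c != j]
                   for r in range(3) if r != i]
            minor = sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
            cof[i][j] = ((-1) ** (i + j)) * minor
    # The adjugate is the transpose of the cofactor matrix.
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

# Illustrative covariance matrix (hypothetical values, unit variances):
sigma = [[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]]
a = inv3(sigma)
```

For the actual computation one would substitute the covariances of the three-dimensional Gaussian vector; D_3(H) is then `det3` of that matrix.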
3 Asymptotic behavior of the covariance Cov(η_0, η_τ)

In this section, we prove Proposition 1 and discuss the numerical approximation strategy for the functions σ²_1(H) and σ̂²_1(H), for a given 0 < H < 3/4 and a specified tolerance ε > 0.
The Hermite rank of Ψ with respect to the Gaussian vector (X, Y) is defined by rank(Ψ) = inf{r : ∃ i, j with i + j = r and a_{i,j} ≠ 0}, where the infimum of the empty set is infinity; see [4]. Suppose |A| < 1. Let us split the function into Ψ(x, y) = Ψ_1(x, y) + Ψ_2(x, y), as in (23).

Lemma 4. (i) The function Ψ_1(x, y) has the form given in (22), where a_{0,0} is given in (15) and a_{2,0} = −a_{0,2} = q_A A. (ii) The Hermite rank of the function Ψ_2(x, y) in (23) with respect to a standard Gaussian vector (X, Y) is not less than 4.
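A claimed Hermite rank can be probed numerically: the coefficients a_{i,j} = E[Ψ(X, Y) He_i(X) He_j(Y)] / (i! j!) are themselves Gaussian integrals. The sketch below assumes independent standard Gaussians X, Y and uses a toy function ψ(x, y) = xy, whose Hermite rank is 2 (a_{0,0} = a_{1,0} = a_{0,1} = 0 and a_{1,1} = 1); the grid size and cutoff are arbitrary numerical choices, not values from the paper.

```python
import math

def he(n, x):
    # Probabilists' Hermite polynomials He_n via the recurrence
    # He_{n+1}(x) = x * He_n(x) - n * He_{n-1}(x).
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def hermite_coeff(psi, i, j, n=200, L=8.0):
    # a_{i,j} = E[psi(X,Y) He_i(X) He_j(Y)] / (i! j!) for independent
    # standard Gaussians X, Y, by a composite midpoint rule on [-L, L]^2.
    h = 2.0 * L / n
    phi = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for a in range(n):
        x = -L + (a + 0.5) * h
        wx = phi(x) * he(i, x)
        for b in range(n):
            y = -L + (b + 0.5) * h
            total += psi(x, y) * wx * phi(y) * he(j, y)
    return total * h * h / (math.factorial(i) * math.factorial(j))

# Example: psi(x, y) = x*y; all coefficients of total order < 2 vanish.
psi = lambda x, y: x * y
coeffs = {(i, j): hermite_coeff(psi, i, j) for i in range(2) for j in range(2)}
```

The same routine applied to Ψ_2 (once its explicit form is available) would check claim (ii) by verifying that all coefficients with i + j < 4 vanish numerically.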
Here Ψ is a homogeneous function taking values in the interval [0, 1], and Z_H^{(p)}(τ), τ ∈ R, is a stationary Gaussian process with zero mean and autocovariance function ρ_H(τ) := Cov(Z_H^{(1)}(0), Z_H^{(1)}(τ)).
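The autocovariance ρ_H is not written out above. Assuming Z_H^{(1)} is the standardized increment process of a fractional Brownian motion with Hurst index H (the usual convention in this literature), ρ_H takes the familiar fractional Gaussian noise form, sketched here:

```python
def rho1(H, tau):
    # Autocovariance of standardized fractional Gaussian noise,
    #   rho_H(tau) = (|tau+1|^{2H} - 2|tau|^{2H} + |tau-1|^{2H}) / 2,
    # under the assumption that Z_H^(1) is the unit-variance increment
    # process of fractional Brownian motion with Hurst index H.
    return 0.5 * (abs(tau + 1) ** (2 * H) - 2 * abs(tau) ** (2 * H)
                  + abs(tau - 1) ** (2 * H))
```

The standard sanity check is the sign change at H = 1/2: increments are negatively correlated for H < 1/2, uncorrelated for Brownian motion (H = 1/2), and positively correlated for H > 1/2.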