Further Comparisons of Finite Difference Schemes for Computational Modelling of Biosensors ∗

Abstract. Simulations are presented for a reaction-diffusion system within a thin layer containing an enzyme, fed with a substrate from the surrounding electrolyte. The chemical term is of the nonlinear Michaelis-Menten type and requires a technique such as Newton iteration for solution. It is shown that approximating the nonlinear chemical term in these systems by a linearised form reduces both the accuracy and, in the case of second-order methods such as Crank-Nicolson, reduces the global error order from O(δT²) to O(δT). The first-order methods plain backwards implicit with and without linearisation, and Crank-Nicolson with linearisation, are all of O(δT) and very similar in performance, requiring, for a given accuracy target, an order of magnitude more CPU time than the efficient methods backward implicit with extrapolation and Crank-Nicolson, both with Newton iteration to handle the nonlinearity. Steady state computations agree with expectations, tending to the known solutions for limiting cases. The Crank-Nicolson method shows some concentration oscillations close to the outer layer boundary, but these do not propagate to the inner boundary at the electrode. The backward implicit methods do not result in such oscillations and, if concentration profiles are of interest, may be preferred.


Introduction
This paper describes algorithms for the simulation of chronoamperometry at a thin enzyme layer on an electrode. The thin layer contains an enzyme that converts a substrate S into a product P, the substrate diffusing into the layer from an electrolyte outside the layer. The product P is electroactive at the electrode, and the electrode potential is set such that the resulting current is limited by diffusive transport of P to the electrode, setting the boundary condition that the concentration of P is zero at the electrode. The electrolyte outside the enzyme layer is well stirred and contains S at some bulk concentration, this being the boundary concentration at the outer layer surface. The electrolyte does not initially contain P and is assumed to have a much greater volume than the layer, ensuring that the bulk concentration of P remains virtually zero; this is the other boundary condition for P at the outer layer surface. The system tends towards a steady state. There are analytical solutions for the current for some parameter limits, but no general solution exists, which is why a simulation is needed.
Enzyme kinetics was first described by Michaelis and Menten in 1913 [1], and their equation was confirmed by Briggs and Haldane in 1925 [2]. The enzyme electrode was first suggested by Updike and Hicks in 1967 [3], followed by other early works [4][5][6][7]. These papers attempted to solve the mathematics of the relevant kinetics. The electrodes can be run either in the amperometric mode, in which the product is electrolysed at the electrode, or in the potentiometric mode, where no current is drawn at the electrode and the electrode potential is used to measure the product concentration. A useful review was written in 1990 by Schulmeister [8], who described mathematical and numerical approaches to solving the kinetics of these electrodes. Kulys et al. presented some steady state solutions for the amperometric mode of these electrodes [9], which are used for comparison in the present paper. Some numerical solutions were presented for the amperometric and potentiometric cases [10][11][12], and most recently in 2007 [13,14], where the nonlinear chemical homogeneous terms were approximated by linearised forms. In this paper, we present further algorithms of greater efficiency, not avoiding the nonlinearity.

Theory
The chemical reactions in the layer are

E + S ⇌ ES → E + P, (1)

where E refers to the enzyme, ES is a transitory complex assumed to be at a steady concentration, and P is the product [15]. This is a catalytic reaction, the enzyme itself not being used up. Let the layer be of thickness d along the coordinate x. The partial differential equations describing the kinetics of the two substances S and P within the layer are [8,14]

∂s/∂t = D_s ∂²s/∂x² − V_m s/(K_M + s), (2a)
∂p/∂t = D_p ∂²p/∂x² + V_m s/(K_M + s), (2b)

in which s and p are the concentrations of S and P, respectively, t is the time variable, x the coordinate across the layer, D_s and D_p are the two diffusion coefficients, V_m is the maximal rate of the enzymatic reaction, and K_M is the Michaelis constant. The chemical reaction term is the Michaelis-Menten term, and gives rise to problems in the simulation, as it is nonlinear. The bulk concentration of S is s_0. Initially, there is neither S nor P within the enzyme layer. S diffuses into that layer and is converted to P there. Also, because the bulk volume is assumed large and well stirred, there is never appreciable P in the bulk outside the enzyme layer. Initial and boundary conditions are

t = 0, 0 ≤ x < d : s = 0, p = 0, (3)
t > 0, x = 0 : ∂s/∂x = 0, (4a)
t > 0, x = 0 : p = 0, (4b)
t > 0, x = d : s = s_0, p = 0. (5)

P is thus held at zero concentration at both ends of the layer, but will attain finite concentrations within it. (In potentiometric mode, the last condition at the electrode becomes the zero-flux condition, x = 0 : ∂p/∂x = 0, and, as here, x = d : p = 0.) It is convenient to normalise the variables,

X = x/d, T = D_s t/d², S = s/s_0, P = p/s_0.

This leads to a dimensionless form of the kinetic equations,

∂S/∂T = ∂²S/∂X² − µ² S/(1 + κS), (6a)
∂P/∂T = r ∂²P/∂X² + µ² S/(1 + κS), (6b)

where we have two new symbols,

κ = s_0/K_M, (7)
r = D_p/D_s. (8)

Following Kulys et al. [9], we also define the diffusion module

µ² = V_m d²/(K_M D_s). (9)

The new initial and boundary conditions are

T = 0, 0 ≤ X < 1 : S = 0, P = 0,
T > 0, X = 0 : ∂S/∂X = 0, P = 0, (10a)
T > 0, X = 1 : S = 1, P = 0. (10b)
The current density g occurring at the electrode due to reduction or oxidation of P is given by

g = n F D_p (∂p/∂x)_{x=0}, (11)

where n is the number of electrons transferred and F is the Faraday constant. This becomes the dimensionless current density G, simply given by

G = (∂P/∂X)_{X=0}. (12)

The system tends towards a steady state at long times (T ≫ 1). There are analytical solutions for the steady state current density [9] only for the extreme cases of κ ≪ 1,

G_0 = 1 − 1/cosh µ, (13)

and κ ≫ 1,

G_∞ = µ²/(2κ). (14)

These can be used for comparison with simulated values.
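As a plausibility check on the zero-order limit, the steady state concentration profile of P can be derived in a few lines. This is a sketch under the assumptions used here: the dimensionless chemical term written as µ²S/(1+κS), equal diffusion coefficients (r = 1), and S ≈ 1 throughout the layer when κ ≫ 1, so that the rate saturates at µ²/κ.

```latex
% kappa >> 1: the Michaelis-Menten rate saturates at mu^2/kappa,
% so at steady state P obeys a Poisson equation with constant source:
\frac{\mathrm{d}^2 P}{\mathrm{d}X^2} = -\frac{\mu^2}{\kappa},
\qquad P(0) = P(1) = 0
\;\Longrightarrow\;
P(X) = \frac{\mu^2}{2\kappa}\,X(1 - X),
\qquad
G_\infty = \left.\frac{\mathrm{d}P}{\mathrm{d}X}\right|_{X=0}
         = \frac{\mu^2}{2\kappa}.
```

The parabolic profile vanishes at both layer boundaries, as required, and its slope at the electrode gives the limiting current.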
In what follows, both steady state and chronoamperometric solutions will be demonstrated by digital simulation [16].

Discretisation and simulation methods
The domain 0 ≤ X ≤ 1 is divided into N equal intervals of length H and the time dependent simulations proceed by steps of length δT. The notation used is that we have concentration samples S_i, P_i, i = 0, …, N at the points X_i = iH, i = 0, …, N (X_0 = 0, X_N = 1), and that the plain symbols S_i and P_i correspond to present, known values at time T, while the symbols S′_i and P′_i are the next values, at T + δT, to be computed. It is the aim of this work to achieve global concentration and current errors of O(δT², H²). Second order with respect to H is easily achieved by a central difference approximation to the spatial second derivative, and will not be included in order statements hereafter, as the focus is on the order with respect to δT. Achievement of second order error is critically dependent on how the nonlinear term in (6) is discretised.
The system (6) has one convenient property. Although it is a coupled pair of equations, only the second equation, for P, depends on the other. Thus we can compute all S values first, handling the nonlinearity in some manner, and then compute all P values, where the nonlinearity is now simply a function of known old and new S values.

Steady state
In order to compute the steady state, one can use the time-marching method [14], driving the simulation to times so long that no further change is noted. This is indeed a good check on the correctness of the simulations, but an easier method to achieve steady state is to set the time derivatives on the left-hand side of (6a) and (6b) to zero, discretise the right-hand side and solve. For S, the discrete expression at the point X_i becomes

(S_{i−1} − 2S_i + S_{i+1})/H² − µ² S_i/(1 + κS_i) = 0, i = 1, …, N−1.

At the electrode surface, we invoke the zero flux condition (4a). In order to achieve a global error of O(H²) here, a three-point forward approximation is used,

−3S_0 + 4S_1 − S_2 = 0,

and at i = N, condition (10b),

S_N = 1.

These equations now form a system of nonlinear equations to be solved, which we express as

F(S) = 0, (19)

with S = (S_0, S_1, …, S_N)^T. Newton's method [17] suggests itself here, with an initial guess at the vector S = 0, and computing at each Newton iteration a correction vector δS given as the solution of

J δS = −F, (20)

in which J is the Jacobian matrix of F. This is an iterative process, and generally only 3-4 iterations are required for convergence. The convergence criterion was set to the condition that the Euclidean norm ‖δS‖ < 10^−8.
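The steady-state Newton solve described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the dimensionless chemical term µ²S/(1+κS), uses a dense Jacobian solve in place of a banded one, and the parameter values are arbitrary.

```python
import numpy as np

def steady_state_S(mu=10.0, kappa=10.0, N=100, tol=1e-8, max_iter=50):
    """Newton solve of the steady-state substrate equation
    S'' = mu^2 S/(1 + kappa S), with S'(0) = 0 (zero flux, three-point
    forward form) and S(1) = 1."""
    H = 1.0 / N
    S = np.zeros(N + 1)                 # initial guess: S = 0 everywhere
    for _ in range(max_iter):
        F = np.zeros(N + 1)
        J = np.zeros((N + 1, N + 1))
        # zero-flux condition at X = 0: (-3 S0 + 4 S1 - S2)/(2H) = 0
        F[0] = -3*S[0] + 4*S[1] - S[2]
        J[0, 0:3] = [-3.0, 4.0, -1.0]
        for i in range(1, N):
            reac = mu**2 * S[i] / (1 + kappa*S[i])
            F[i] = (S[i-1] - 2*S[i] + S[i+1]) / H**2 - reac
            dreac = mu**2 / (1 + kappa*S[i])**2   # d(reac)/dS_i
            J[i, i-1] = 1 / H**2
            J[i, i]   = -2 / H**2 - dreac
            J[i, i+1] = 1 / H**2
        F[N] = S[N] - 1.0                # boundary value S(1) = 1
        J[N, N] = 1.0
        dS = np.linalg.solve(J, -F)      # correction: J dS = -F
        S += dS
        if np.linalg.norm(dS) < tol:     # Euclidean-norm criterion
            break
    return S
```

In production code the Jacobian is tridiagonal apart from the first boundary row, so a banded solver would be the natural choice instead of the dense solve used here.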

Chronoamperometry
For time-marching problems, the left-hand side of (6a) and (6b) must be included in the simulation. This is always approximated here as

(Z′_i − Z_i)/δT, (21)

where Z stands for either S or P. This approximation takes different orders, depending on which position in the interval δT it refers to. Second order errors with respect to δT are achieved if (21) refers to the midpoint of the time interval; this is the case if Crank-Nicolson is used [18], where the right-hand sides of the partial differential equations are also discretised referred to the midpoint (see below). For the explicit simulation method [19], the approximation is a forward difference, and for the fully implicit method (backward implicit, BI, or Laasonen method [20]) it becomes a backward difference, both producing global errors of O(δT) (see also below).
We now outline several possible methods of solution. In the previous works [13,14], the nonlinearity was avoided by an approximation. It will be seen that this leads to an order reduction to O(δT), even for methods that otherwise achieve global errors of O(δT²). Nonlinearities often occur in connection with homogeneous chemical terms in electrochemical simulation problems, and in many cases [16, pp. 135-136] second-order linear approximations can be found. This is not possible here, so we must cope with the nonlinearity in some manner. We explore the use of the backward difference or Laasonen method [20] (here called BI) and of the Crank-Nicolson method [18], here called CN. The explicit method [19] is not used here, as it has stability limitations; nor is hopscotch [21,22] used, although it is a stable method, because of the "propagational inadequacy" problem pointed out by Feldberg [23].
In all cases below, use is made of the definition λ = δT/H² for convenience, and we assume r = 1, that is, the two species S and P share the same diffusion coefficient.

Method BI1, backward implicit with linearisation
This is the use of the BI or Laasonen method, but avoiding the nonlinearity, as was done in [13]. The discretisations for S form a system whose first row expresses the zero-flux boundary condition and whose ith row is

(S′_i − S_i)/δT = (S′_{i−1} − 2S′_i + S′_{i+1})/H² − µ² S′_i/(1 + κS_i). (23)

Note that the nonlinearity has been removed, the Michaelis-Menten denominator being evaluated at the known value S_i. BI is a first-order method, so this might not matter. It will be seen in Section 5 that it works as well as BI2 as described below. The system can be rearranged in an obvious manner and solved using the Thomas algorithm [16,24,25]. The boundary value S_0 is computed using the "u-v" mechanism [16, pp. 87-89], which is simply a convenient way to solve the small linear system comprised of the boundary condition equation (the first in (23)) together with the next two equations, after they have been reduced to two unknowns each by the first (backward) Thomas sweep (for details, see [16, pp. 86-89]). Having computed all S values, a similar system is written for P, where both the old S_i and the new S′_i values are used. Again, the Thomas algorithm is used. Lastly, G is computed using a three-point current approximation, known to have errors of O(H²) [16,26].
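A sketch of one BI1 time step, including a plain Thomas solver, may make the structure concrete. This is illustrative only: it replaces the paper's three-point boundary treatment and "u-v" device with a simple two-point zero-flux row (first order in H), and the linearisation shown, freezing the denominator at the old value S_i, is an assumption consistent with the description above; parameters are arbitrary.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main and
    super-diagonals (a[0] and c[-1] are unused), d the right-hand side."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def bi1_step(S, dT, H, mu, kappa):
    """One backward-implicit step for S with the Michaelis-Menten term
    linearised by evaluating the denominator at the old value S_i."""
    N = len(S) - 1
    lam = dT / H**2
    a = np.zeros(N + 1); b = np.zeros(N + 1)
    c = np.zeros(N + 1); d = np.zeros(N + 1)
    # zero-flux boundary at X = 0, two-point sketch (the paper uses a
    # three-point form handled by the u-v mechanism):
    b[0] = -1.0; c[0] = 1.0; d[0] = 0.0
    for i in range(1, N):
        a[i] = lam
        b[i] = -(1 + 2*lam + dT * mu**2 / (1 + kappa*S[i]))
        c[i] = lam
        d[i] = -S[i]
    b[N] = 1.0; d[N] = 1.0              # boundary value S(1) = 1
    return thomas(a, b, c, d)
```

Each step costs one linear tridiagonal sweep, which is what makes the linearised methods cheap per step.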

Method BI2, backward implicit without linearisation
With this method, the nonlinearity is not removed. Then the ith equation in (23) is replaced by its nonlinear counterpart,

(S′_i − S_i)/δT = (S′_{i−1} − 2S′_i + S′_{i+1})/H² − µ² S′_i/(1 + κS′_i), (25)

and similarly for P, where the nonlinear term is positive. Here, the Thomas algorithm cannot be used directly. There are two main possibilities.
The nonlinear system can be solved using the Rosenbrock method [27], described for electrochemical digital simulations by Bieniasz [28] and described in detail in [16, pp. 167-172].This evaluates concentrations at the next time level without iteration, and will handle nonlinearities without problems.There are several variants, of different error orders and unconditional stability.
Another way to handle the nonlinearity is, at each step, to use the Newton method as was done above for the steady state computation, computing a correction vector in each iteration. This is necessary only for S, after which the known new values S′ can be used for the computation of P, again using the Thomas algorithm.
For S, the system of equations like (23), but using (25), is cast into a form similar to (19), solving for the correction vector. The system corresponding to (20) is again J δS = −F, where the Jacobian now also contains the derivative of the nonlinear term, −δT µ²/(1 + κS′_i)², on the diagonal. The system is solved iteratively until δS converges, each time correcting the vector of new S values by this amount. Since changes from one time step to the next are rather small, in most cases only 2-3 iterations are required in order to achieve convergence. Then one solves for P using the equations as in (24), except that the chemical term now contains only terms in the (now known) S′_i.
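One Newton-based BI2 step might look as follows; again a sketch under the same assumptions as before (hypothetical parameters, a two-point zero-flux row, and dense linear algebra instead of the tridiagonal solves used in practice).

```python
import numpy as np

def bi2_step(S, dT, H, mu, kappa, tol=1e-8):
    """One backward-implicit step for S keeping the full nonlinear
    Michaelis-Menten term, solved by Newton iteration on the new values."""
    N = len(S) - 1
    lam = dT / H**2
    Sn = S.copy()                        # initial guess: old values
    for _ in range(30):
        F = np.zeros(N + 1)
        J = np.zeros((N + 1, N + 1))
        F[0] = Sn[1] - Sn[0]             # two-point zero-flux sketch at X = 0
        J[0, 0] = -1.0; J[0, 1] = 1.0
        for i in range(1, N):
            reac = mu**2 * Sn[i] / (1 + kappa*Sn[i])
            F[i] = lam*(Sn[i-1] - 2*Sn[i] + Sn[i+1]) - Sn[i] + S[i] - dT*reac
            J[i, i-1] = lam
            # diagonal includes the derivative of the nonlinear term:
            J[i, i]   = -2*lam - 1 - dT * mu**2 / (1 + kappa*Sn[i])**2
            J[i, i+1] = lam
        F[N] = Sn[N] - 1.0               # boundary value S(1) = 1
        J[N, N] = 1.0
        dS = np.linalg.solve(J, -F)      # Newton correction
        Sn += dS
        if np.linalg.norm(dS) < tol:
            break
    return Sn
```

Because the new values change little from one step to the next, starting the Newton iteration from the old profile typically converges in very few iterations.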

Method BI3, backward implicit without linearisation and using extrapolation
The BI method is a first-order method with respect to time intervals. It has the advantage of a smooth error response in cases of a sharp initial step, such as we have in the present system, where concentration s_0 (or, in dimensionless terms, unity S) is applied at a boundary at T = 0. We shall show below that Crank-Nicolson reacts with oscillations to such sharp transients; this can be damped in several ways [29] but it is often more convenient to use BI with extrapolation [30,31], which preserves the smooth error response and increases the order. Extrapolation was introduced to electrochemical simulations by Strutwolf and Schoeller [32]. In the second-order variant, each step is first taken with the full time interval δT, and the new concentrations stored. This is denoted as the operation L1. Then the calculation is repeated using two steps of length δT/2, denoted as L2², and the resulting concentrations combined with those from the first step according to

Z′ = 2 L2² Z − L1 Z

(Z again denotes either S or P). This eliminates the first-order error term, leaving the term in O(δT²). Method BI2 was modified in this sense.
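The extrapolation step can be demonstrated on a scalar model problem, dy/dT = −y, for which backward Euler plays the role of BI. The combination 2L2² − L1 raises the observed order from one to two; the model equation and step sizes are our choice for illustration, not the paper's.

```python
import math

def be_step(y, dT):
    """One backward Euler (BI) step for the model problem dy/dT = -y."""
    return y / (1 + dT)

def bi_extrap_step(y, dT):
    """Extrapolated BI: one full step (L1), two half steps (L2^2),
    combined as 2*L2^2 - L1 to cancel the O(dT) error term."""
    L1 = be_step(y, dT)
    L22 = be_step(be_step(y, dT / 2), dT / 2)
    return 2 * L22 - L1

def integrate(step, n, T=1.0):
    """Integrate dy/dT = -y from y(0) = 1 to time T with n equal steps."""
    h = T / n
    y = 1.0
    for _ in range(n):
        y = step(y, h)
    return y
```

Halving the step size should then reduce the extrapolated method's error by roughly a factor of four, against a factor of two for plain BI.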

Method CN1, Crank-Nicolson with linearisation
The essence of CN is to match the time derivative approximation, left-hand side of (6a) and (6b), with right-hand side terms referred to the midpoint across the time step. This makes the time derivative approximation a central second-order difference. CN produces global errors of O(δT²). As mentioned above, CN has the problem of reacting to sharp changes in an oscillatory manner, which can be a disadvantage. The oscillations can be damped [29] with some programming effort. In the present case, as will be shown below, there are oscillations in concentrations, but they do not appear to matter, as they do not propagate to the electrode, where the current is computed. It is possible that some parameter choices could produce oscillations even on this side of the enzyme layer and if this happens, then damping techniques can be used.
In [14], CN1 was used, discretising at the ith point for S in the CN manner, but with the Michaelis-Menten denominator evaluated at the known value S_i, so that the system remains linear in the new values. The discrete expressions for the boundary conditions at X = 0 are also written in the CN manner, and the condition at X = 1 is the same as that for BI1, Section 3.3. The system is easily solved by the Thomas algorithm. Then we have for P, at the ith point, the analogous equation with the chemical term positive, now using the new values for S. Again, this is readily solved.

Method CN2, Crank-Nicolson without linearisation
Using the CN idea, a more consistent discretisation is the following. The first and last discrete equations, expressing boundary conditions, are those for CN1, but the discretisation at point i becomes

(S′_i − S_i)/δT = (1/(2H²)) [(S′_{i−1} − 2S′_i + S′_{i+1}) + (S_{i−1} − 2S_i + S_{i+1})] − (µ²/2) [S′_i/(1 + κS′_i) + S_i/(1 + κS_i)].

This is a nonlinear system, and can be solved by the Newton procedure as described above. For P, we then have, at point i, the analogous equation with the chemical term positive, where now both S and S′ are known.
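Since S is computed first, the P step in CN2 is a linear solve whatever the chemical term. A sketch of one such step follows, using a dense solve for clarity and assuming the chemical term µ²S/(1+κS) averaged over the old and new S values, with P = 0 at both boundaries; the parameters in the test are arbitrary.

```python
import numpy as np

def cn_step_P(P, S_old, S_new, dT, H, mu, kappa):
    """One Crank-Nicolson step for P; the chemical source term is the
    average of its values at the old and (already computed) new S."""
    N = len(P) - 1
    lam = dT / H**2
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    A[0, 0] = 1.0; rhs[0] = 0.0          # P(0) = 0 at the electrode
    A[N, N] = 1.0; rhs[N] = 0.0          # P(1) = 0 at the bulk boundary
    for i in range(1, N):
        # implicit half of the diffusion operator:
        A[i, i-1] = -lam / 2
        A[i, i]   = 1 + lam
        A[i, i+1] = -lam / 2
        # explicit half of the diffusion operator:
        diff_old = lam / 2 * (P[i-1] - 2*P[i] + P[i+1])
        # chemical source, averaged over old and new S (positive for P):
        chem = dT / 2 * (mu**2 * S_new[i] / (1 + kappa*S_new[i])
                         + mu**2 * S_old[i] / (1 + kappa*S_old[i]))
        rhs[i] = P[i] + diff_old + chem
    return np.linalg.solve(A, rhs)
```

With S held at unity and µ = κ = 1, the source is the constant 1/2, and repeated stepping converges to the parabolic steady profile P = X(1−X)/4.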

Estimating error orders
Except at extreme values of κ, there are no analytical solutions, so that if we wish to compute error orders, we cannot use known solutions for the estimation. However, there is a way, due to Østerby [33], which does not require exact values to compare with. In a given simulation, one first takes N steps of length h, with the result R1. Then the simulation is repeated using N/2 steps of length 2h, with result R2, followed by a third using N/4 steps of length 4h, with result R4. These are combined to produce the factor

q = (R4 − R2)/(R2 − R1),

and the order is then ln q / ln 2. This was carried out in the present work.
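Østerby's estimate is easy to exercise on a problem with a known order. Here a backward Euler integration of dy/dT = −y (our choice of model problem, not the paper's) should return an order close to one.

```python
import math

def be_integrate(n_steps, T=1.0):
    """Backward Euler for dy/dT = -y, y(0) = 1, integrated to time T."""
    h = T / n_steps
    y = 1.0
    for _ in range(n_steps):
        y = y / (1 + h)
    return y

def osterby_order(solver, N):
    """Estimate the error order from three runs with steps h, 2h and 4h,
    without knowing the exact solution (Osterby's method):
    q = (R4 - R2)/(R2 - R1), order = ln q / ln 2."""
    R1 = solver(N)
    R2 = solver(N // 2)
    R4 = solver(N // 4)
    q = (R4 - R2) / (R2 - R1)
    return math.log(q) / math.log(2)
```

The same three-run recipe applies unchanged to the full biosensor simulations, with the current G as the result R.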

Results and discussion
Except for the steady state computations, all simulations were run with equal space intervals of H = 0.001, that is N = 1000, which was found to be adequate, and with various numbers of time intervals, always to T = 1.
The computations were carried out under Linux Suse 10.2 using the Intel Fortran 90/95 compiler and IEEE 754 standard double precision, giving roughly 16 decimal digit precision.
The orders were computed by the procedure mentioned above, and are tabulated in Table 1. We note that there are only two methods, BI3 and CN2, that produce global errors of second order, the others all having errors of O(δT), even CN1. Thus it is seen that eliminating the nonlinearity reduces the error order for CN.
Table 1. Error orders with respect to δT for the methods, µ = κ = 10, N = 1000.

Steady state currents G were computed as described for a range of κ and three values of µ, and are shown in Fig. 1. They are all referred to both G_0 given in (13) and G_∞ given in (14). Note the convergence to unity at the respective ends of the κ scale.
Errors in the current densities were computed relative to converged current values, obtained by increasing the number of time steps in a simulation using CN2 until there were no further changes in the value to 8 decimals. The errors are expressed as relative errors e, given by

e = (G − G_c)/G_c,

where G_c is the converged current. This had to be computed for any given parameter set (µ, κ). It was found that N_T = 4096 was sufficient for convergence to 8 decimal places. The Newton iterations can easily achieve much better convergence than this, and it is indeed N_T which limits convergence here. The errors are shown in Fig. 2. The figure reflects the orders from Table 1 by the slopes, and it is seen that all three first-order methods BI1, BI2 and CN1 have very similar errors, CN1 being only a slight improvement on BI1 and BI2. BI2, which accounts for the nonlinearity, might have been expected to show smaller errors than BI1, despite the same order, but did not. Clearly, CN2 and BI3 have the smallest errors and a second-order slope. It is of some interest to compare the efficiency of the methods, in terms of the CPU time needed to achieve a target accuracy. We choose the target of 0.01%, or a relative error of 10^−4, here, for easy comparison, based on Fig. 2.
Table 2 shows the results. The two best methods, BI3 and CN2, are an order of magnitude faster than the other three. In practice, such a small error is not needed for comparison with experimental data, nor are the CPU times significant. However, if the simulation method were to be applied to longer simulations such as linear sweep voltammetry, and/or if experimental parameters such as µ and κ or diffusion coefficients are to be fitted by doing many runs, then efficiency becomes important. It was mentioned above that Crank-Nicolson has an oscillatory response to sharp initial transients. This is the case here, where the initial condition S(X = 1) = 1 is applied, all other S values initially being zero. Normally, the oscillations that ensue result also in oscillatory currents, but here the sharp transient takes place at the opposite end of the enzyme layer from that at which the current is generated. Fig. 3 shows the time development of the S profile for 50 steps in time with δT = 0.02, and the oscillations in S are clearly seen at the outer plane of the layer, slowly damped with time. It is also clear, however, that these oscillations do not propagate to the electrode, where the profiles are smooth. This is reflected in the current, which also has a smooth time response. However, if one is interested in the concentrations themselves, then the oscillations might not be desirable. In that case, method BI3 offers similar efficiency and a smooth response, as seen in Fig. 4.

Conclusions
It is seen that attempting to approximate the nonlinear chemical term by a linear form reduces both the accuracy and, in the case of second-order error methods such as Crank-Nicolson, the order from O(δT²) to O(δT). The first-order methods, plain backwards implicit with and without linearisation, and Crank-Nicolson with linearisation, all have global errors of O(δT) and are very similar in performance, requiring, for a given accuracy target, an order of magnitude more CPU time than the efficient methods, backward implicit with extrapolation and Crank-Nicolson, both with Newton iteration to handle the nonlinearity. Steady state computations agree with expectations, tending to the limiting cases for small and large κ.
Crank-Nicolson shows some concentration oscillations close to the outer layer boundary, X = 1, but these do not propagate to the inner boundary at the electrode, so they may not matter. The backward implicit methods do not show such oscillations and, if concentration profiles are of interest, may be preferred.

Fig. 1. Steady state responses. The numbers show values of µ. The curves converging to unity at the left-hand edge are G/G_0, while those converging to unity at the right-hand edge are G/G_∞.

Fig. 2.

Table 2. N_T to T = 1 needed and CPU use to achieve a relative error e of 10^−4.