Method of symmetric polynomials in the computations of scattering matrix
Yu. N. Belyayev
Syktyvkar State University, Oktyabrskii pr. 55, Syktyvkar-167 001, Russia
PACS 02.10.Yn, 02.30.Hq, 11.55.-m
The method for calculating any analytic matrix function by means of symmetric polynomials is presented. The method of symmetric polynomials (MSP) is applied to the calculation of the fundamental matrix of a system of differential equations. A scaling method is developed for the computation of the scattering matrix. An analytical estimate of the scaling parameter, allowing the calculation of the matrix exponential with the required reliability and accuracy, is obtained. This parameter depends on the matrix order n, the magnitude of the matrix elements and the layer thickness.
Keywords: layered media, matrix, exponential, symmetric polynomials, roundoff error, truncation error, scaling.
The dynamical theory of electron diffraction in crystals is based on the Howie-Whelan equations. An analogous approach to X-ray diffraction is based on the Takagi-Taupin equations. Calculation of multiple-wave diffraction in a crystalline layer of thickness z on the basis of these equations leads to the Cauchy problem
dΨ(z)/dz = A Ψ(z), Ψ(0) = Ψ_0, (1)
where Ψ(z) is the column matrix whose components are the wave functions ψ_i(z), i = 1, 2, …, n; A = ||a_{ij}|| is a matrix of order n; and the column matrix Ψ_0 consists of the n known values ψ_{i0}.
The fundamental matrix S of the system of differential equations (1) converts the wave field Ψ(0) into the field Ψ(z) at depth z: Ψ(z) = S Ψ(0). In the theory of diffraction, S is known as the scattering matrix.
If the crystal is perfect, the scattering matrix is defined as follows:
S = exp(Az) = I + Az + (Az)^2/2! + …, (2)
where I is the unit matrix. If the coefficients a_{ij} of the differential equations (1) are functions of z, then one way to solve problem (1) is to partition the interval [0, z] into N subintervals: [0 = z_0, z_1], [z_1, z_2], …, [z_{N−1}, z_N = z]. The number N should be large enough that A ≈ A_l = const_l for any z ∈ [z_{l−1}, z_l], l = 1, 2, …, N, and the scattering matrix can then be approximated by the product S ≈ Π_{l=N}^{1} exp(A_l (z_l − z_{l−1})).
Calculation of exp(Az) with the aid of the Lagrange-Sylvester, Becker or Newton formulas, or by matrix decomposition methods, requires the prior determination of the eigenvalues λ_j, j = 1, 2, …, n, of the matrix A. Another approach to the calculation of the matrix exponential is based on the use of symmetric polynomials. This method is applied in the present work to calculate the scattering matrix.
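The sublayer factorization above can be illustrated with a short pure-Python sketch (the function names, the midpoint choice A_l = A((l − 1/2)Δz), and the per-slice truncated Taylor series are our own illustrative choices, not the paper's):

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)] for i in range(n)]

def expm_series(M, terms=10):
    """exp(M) by a plain truncated Taylor series; adequate for small ||M||."""
    n = len(M)
    E = [[float(i == k) for k in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for j in range(1, terms):
        T = matmul(T, [[m / j for m in row] for row in M])   # T = M^j / j!
        E = [[E[i][k] + T[i][k] for k in range(n)] for i in range(n)]
    return E

def scattering_layered(Afun, z, N=200):
    """S ~ exp(A_N dz) ... exp(A_1 dz) for a z-dependent matrix A(z);
    A is taken as constant (its midpoint value) on each of the N sublayers."""
    n = len(Afun(0.0))
    dz = z / N
    S = [[float(i == k) for k in range(n)] for i in range(n)]
    for l in range(1, N + 1):
        Al = Afun((l - 0.5) * dz)
        # later sublayers multiply on the left: S -> exp(A_l dz) S
        S = matmul(expm_series([[a * dz for a in row] for row in Al]), S)
    return S
```

For A(z) = ((0, z), (−z, 0)) all the A_l commute, so the exact S is a rotation through ∫_0^1 z dz = 1/2, which the sketch reproduces to high accuracy.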
2. Method of symmetric polynomials (MSP)
According to the Cayley-Hamilton theorem, any matrix A = ||a_{ij}|| of order n satisfies its characteristic equation λ^n − σ_1 λ^{n−1} + σ_2 λ^{n−2} − … + (−1)^n σ_n = 0, i.e.
A^n − p_1 A^{n−1} − p_2 A^{n−2} − … − p_n I = 0. (3)
Here we use the notations
p_j = (−1)^{j−1} σ_j, j = 1, …, n, (4)
I = A^0, and σ_i, i = 1, 2, …, n, are the sums of the principal minors of i-th order of det A:
σ_1 = a_{11} + a_{22} + … + a_{nn}, …, σ_n = det A. (5)
Conversely, as is known, the σ_j are the elementary symmetric polynomials in the eigenvalues λ_j of the matrix A: σ_1 = λ_1 + λ_2 + … + λ_n, σ_2 = Σ_{i<j} λ_i λ_j, …, σ_n = λ_1 λ_2 … λ_n. Therefore, any function of the elementary symmetric polynomials σ_i is also symmetric in the eigenvalues λ_j of the matrix A.
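For a concrete check of (3)-(5), the following snippet (illustrative only; the sample matrix is an arbitrary choice of ours) verifies the Cayley-Hamilton identity A^2 − p_1 A − p_2 I = 0 for a 2×2 matrix, with p_1 = σ_1 = tr A and p_2 = −σ_2 = −det A:

```python
# sample 2x2 matrix (arbitrary choice)
A = [[1.0, 2.0], [3.0, 4.0]]
p1 = A[0][0] + A[1][1]                          # p_1 = sigma_1 = tr A
p2 = -(A[0][0] * A[1][1] - A[0][1] * A[1][0])   # p_2 = -sigma_2 = -det A
A2 = [[sum(A[i][j] * A[j][k] for j in range(2)) for k in range(2)] for i in range(2)]
# residual of the characteristic identity (3): A^2 - p1*A - p2*I
R = [[A2[i][k] - p1 * A[i][k] - p2 * (i == k) for k in range(2)] for i in range(2)]
assert all(abs(R[i][k]) < 1e-12 for i in range(2) for k in range(2))
```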
Definition 1. Functions B_g(n) = B_g(p_1, …, p_n) that satisfy the recurrence relations
B_g(n) = 0 for g = 0, 1, …, n−2,
B_g(n) = 1 for g = n−1, (6)
B_g(n) = p_1 B_{g−1}(n) + p_2 B_{g−2}(n) + … + p_n B_{g−n}(n) for g ≥ n,
are called symmetric polynomials of n-th order.
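Definition (6) translates directly into code; a minimal sketch (the function name is ours):

```python
def sym_poly(p, J):
    """Symmetric polynomials B_g(n) of definition (6) for g = 0, ..., J-1,
    given p = [p_1, ..., p_n] from eq. (4)."""
    n = len(p)
    B = [0.0] * max(J, n)
    B[n - 1] = 1.0   # B_{n-1}(n) = 1; B_g(n) = 0 for g = 0, ..., n-2
    for g in range(n, J):
        # recurrence (6): B_g = p_1 B_{g-1} + p_2 B_{g-2} + ... + p_n B_{g-n}
        B[g] = sum(p[k] * B[g - 1 - k] for k in range(n))
    return B[:J]
```

For a matrix with σ_1 = 5, σ_2 = 6 (e.g. diag(2, 3)), so that p = [5, −6], this gives B_2(2) = 5 and B_3(2) = 25 − 6 = 19.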
2.1. Representation of matrix functions by means of symmetric polynomials
As a result of equality (3), any integer power A^j, j ≥ 0, of the matrix A can be expressed as a linear combination of the first n powers A^0, A, …, A^{n−1}:
A^j = Σ_{l=0}^{n−1} C_{jl} A^l, (7)
where the C_{jl} are functions of the p_i.
Theorem 1. If the matrix A is nonsingular, then the coordinates C_{jl} of the matrix A^j in the basis A^0, A, …, A^{n−1} are represented by the formulas
C_{j0} = B_{j+n−1}(n) − p_1 B_{j+n−2}(n) − … − p_{n−1} B_j(n),
C_{j1} = B_{j+n−2}(n) − p_1 B_{j+n−3}(n) − … − p_{n−2} B_j(n),
…
C_{j(n−2)} = B_{j+1}(n) − p_1 B_j(n),
C_{j(n−1)} = B_j(n), (8)
where j is any integer and the B_j(n) are symmetric polynomials of n-th order of the matrix A. If det A = 0, then (8) determines the coefficients C_{jl} for j ≥ 0.
Relations (8) are easily verified for j = 0, 1, …, n−1; for other values of j, formulas (8) can be proved by induction.
Corollary 1.1.
C_{jl} = Σ_{g=0}^{l} p_{n−l+g} B_{j−1−g}(n), l = 0, …, n−1, (9)
where j is any integer if det A ≠ 0, and j ≥ n if det A = 0.
Formulas (8) and (9) are equivalent in the sense that their right-hand sides are equal to each other by virtue of the recurrence relations (6).
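As a small numerical check of (8)-(9) for n = 2, where C_{j0} = p_2 B_{j−1}(2) and C_{j1} = p_1 B_{j−1}(2) + p_2 B_{j−2}(2) = B_j(2), the sketch below (names ours; j ≥ 2 assumed) reproduces the matrix power A^j:

```python
def msp_power(A, j):
    """A^j for a 2x2 matrix A via eq. (9): A^j = C_j0 I + C_j1 A (j >= 2)."""
    p = [A[0][0] + A[1][1], -(A[0][0] * A[1][1] - A[0][1] * A[1][0])]  # p_1, p_2
    B = [0.0, 1.0]                          # B_0(2) = 0, B_1(2) = 1
    for g in range(2, j + 1):
        B.append(p[0] * B[g - 1] + p[1] * B[g - 2])    # recurrence (6)
    C0 = p[1] * B[j - 1]                    # (9) with l = 0
    C1 = p[0] * B[j - 1] + p[1] * B[j - 2]  # (9) with l = 1; equals B_j(2)
    return [[C0 * (i == k) + C1 * A[i][k] for k in range(2)] for i in range(2)]
```

For the shear matrix A = ((1, 1), (0, 1)) this yields A^5 = −4 I + 5 A = ((1, 5), (0, 1)).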
Corollary 1.2. Let f(ζ) be an entire function of the complex variable ζ. Then the function f(A) of an n-th order matrix A has the following representation:
f(A) = Σ_{l=0}^{n−1} A^l [a_l + Σ_{g=0}^{l} p_{n−l+g} Σ_{j=n}^{∞} a_j B_{j−1−g}(n)], (10)
where the symmetric polynomials B_{j−1−g}(n) of n-th order of the matrix A are defined by (6) and the a_l are the coefficients of the power series f(ζ) = Σ_{j=0}^{∞} a_j ζ^j.
Proof. If f(ζ) is an analytic function on the whole complex plane, then the expansion f(ζ) = Σ_{j=0}^{∞} a_j ζ^j remains valid when the complex variable ζ is replaced by a square matrix A: f(A) = Σ_{j=0}^{∞} a_j A^j. We substitute into the last equality the expression of the matrix A^j according to formulas (7) and (9). Equality (10) is the result of this substitution.
Corollary 1.3. For any n-th order matrix A and scalar z,
exp(Az) = Σ_{l=0}^{n−1} ((Az)^l / l!) [1 + l! Σ_{g=0}^{l} p_{n−l+g} Σ_{j=n}^{∞} (1/j!) B_{j−1−g}(n)], (11)
where p_g, g = 1, …, n, and B_j(n) are symmetric polynomials of the matrix Az.
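Representation (11) can be tried out numerically for n = 2 by truncating the inner series at some level J (the truncation level and names below are our own choices, not the paper's):

```python
import math

def expm_msp_2x2(A, J=30):
    """exp(A) for a 2x2 matrix by formula (11) with the series truncated at J."""
    n = 2
    p = [A[0][0] + A[1][1], -(A[0][0] * A[1][1] - A[0][1] * A[1][0])]  # p_1, p_2
    B = [0.0] * J
    B[n - 1] = 1.0
    for g in range(n, J):                    # recurrence (6)
        B[g] = p[0] * B[g - 1] + p[1] * B[g - 2]
    # inner sums  sum_{j=n}^{J-1} B_{j-1-g}(n)/j!  for g = 0, 1
    S = [sum(B[j - 1 - g] / math.factorial(j) for j in range(n, J)) for g in range(n)]
    c0 = 1.0 + p[1] * S[0]                   # coefficient of A^0 in (11)
    c1 = 1.0 + p[0] * S[0] + p[1] * S[1]     # coefficient of A^1 in (11)
    return [[c0 * (i == k) + c1 * A[i][k] for k in range(2)] for i in range(2)]
```

For the rotation generator A = ((0, 1), (−1, 0)) the coefficients come out as c0 = cos 1 and c1 = sin 1, i.e. exp(A) is the rotation matrix through one radian.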
2.2. Estimation of symmetric polynomials modulus
The elementary symmetric polynomial σ_j is the sum of the principal minors of j-th order of the determinant of the matrix A. The number of such minors is C_n^j = n!/(j!(n−j)!). One minor of j-th order contains j! terms, each of which is a product of j matrix elements a_{gl}. We denote by max|a_{gl}| the greatest modulus of the matrix elements a_{gl}. Then, using definition (4), we obtain the following relations:
|p_j| ≤ p_{jM} = (n!/(n−j)!) (max|a_{gl}|)^j, j = 1, 2, …, n. (12)
Theorem 2. The symmetric polynomials of n-th order of the matrix A satisfy
|B_j(n)| ≤ p_{1M} x^{j−n}, (13)
where
x = (2n − 1) max|a_{gl}|. (14)
Proof. Using (12) and definition (6), we obtain
|B_j(n)| ≤ p_{1M} |B_{j−1}(n)| + p_{2M} |B_{j−2}(n)| + … + p_{nM} |B_{j−n}(n)|
≤ p_{1M} (p_{1M} |B_{j−2}(n)| + p_{2M} |B_{j−3}(n)| + … + p_{nM} |B_{j−n−1}(n)|) + p_{2M} |B_{j−2}(n)| + … + p_{(n−1)M} |B_{j−n+1}(n)| + p_{nM} |B_{j−n}(n)| = Σ_{l=1}^{n} c_l |B_{j−1−l}(n)|, (15)
where c_1 = p_{1M}^2 + p_{2M}, c_l = p_{1M} p_{lM} + p_{(l+1)M} for l = 2, …, n−1, and c_n = p_{1M} p_{nM}.
Applying (12) we find (p_{1M}^2 + p_{2M})/p_{1M} = ((2n−1)/n) p_{1M} = x and p_{1M} p_{(l+1)M}/p_{lM} ≤ p_{2M}. Consequently, c_l ≤ x p_{lM}, l = 1, …, n. These relations allow us to rewrite formula (15) as follows:
|B_j(n)| ≤ Σ_{l=1}^{n} p_{lM} |B_{j−l}(n)| ≤ x Σ_{l=1}^{n} p_{lM} |B_{j−l−1}(n)|, (16)
where x is defined by (14). The last inequality in (16) is recursive. Continuation of this inequality gives |B_j(n)| ≤ x^{j−n} Σ_{l=1}^{n} p_{lM} B_{n−l}(n) = p_{1M} x^{j−n}, which is equivalent to (13).
Remark. For the matrix Az, where z is a scalar, inequality (13) remains valid, but the parameter x is defined in this case by the expression
x = (2n − 1) (max|a_{gl}|) z. (17)
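The bound (13)-(14) is easy to probe numerically; in this sketch the sample matrix is an arbitrary choice of ours:

```python
A = [[0.3, -0.5], [0.7, 0.2]]
n = 2
M = max(abs(a) for row in A for a in row)    # max|a_gl| = 0.7
x = (2 * n - 1) * M                          # eq. (14)
p1M = n * M                                  # p_1M = (n!/(n-1)!) max|a_gl|, eq. (12)
p = [A[0][0] + A[1][1], -(A[0][0] * A[1][1] - A[0][1] * A[1][0])]
B = [0.0, 1.0]
for g in range(2, 20):
    B.append(p[0] * B[g - 1] + p[1] * B[g - 2])   # recurrence (6)
# check |B_j(n)| <= p_1M x^(j-n) for j = n, ..., 19
assert all(abs(B[j]) <= p1M * x ** (j - n) + 1e-12 for j in range(n, 20))
```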
Estimates (12) and (13) yield, for x < N + 1, the following bound on the tails of the series which appear below:
Σ_{g=0}^{l} p_{(n−l+g)M} Σ_{j=N}^{∞} (1/j!) |B_{j−1−g}(n)| ≤ (n (N+1) x^{N−l} / ((2n−1)^{n−l+1} N! (N+1−x))) Σ_{g=0}^{l} n! / ((l−g)! (2n−1)^g). (18)
3. Computation of the matrix exponential
According to relations (11) and (9),
exp(Az) ≈ E(J) = ||e_{ik}(J)|| = Σ_{l=0}^{n−1} (Az)^l [1/l! + C_l(J)], (19)
where J ≥ n and
C_l(J) = Σ_{j=n}^{J} (1/j!) C_{jl} = Σ_{g=0}^{l} p_{n−l+g} Σ_{j=n}^{J} (1/j!) B_{j−1−g}(n). (20)
The approximation in (19) becomes an exact equality as J goes to infinity.
The calculation of the matrix exponential by formulas (19)-(20) can be performed using the following algorithm.
1. First, the consecutive computations of 1/2!, …, 1/J! are done.
2. The powers of the matrix Az are computed from the second to the n-th inclusive, together with the traces of these matrices: s_g = tr (Az)^g, g = 1, …, n.
3. Sequential calculation of the coefficients p_g defined by (4) can be performed by Newton's formula:
g p_g = s_g − p_1 s_{g−1} − … − p_{g−1} s_1, g = 1, …, n.
In particular, p_1 = s_1, p_2 = (s_2 − p_1 s_1)/2, p_3 = (s_3 − p_1 s_2 − p_2 s_1)/3, ….
4. After that, the symmetric polynomials B_l(n) for l = n, n+1, …, J−1 are calculated by the recurrence formulas (6).
5. The sums S_g = Σ_{j=n}^{J} B_{j−1−g}(n)/j! are calculated for g = 0, 1, …, n−1.
6. Substitution of the values found in the previous steps into formula (19) gives
exp(Az) ≈ I [1 + p_n S_0] + (Az) [1 + (p_{n−1} S_0 + p_n S_1)] + … + ((Az)^{n−1}/(n−1)!) [1 + (n−1)! (p_1 S_0 + p_2 S_1 + … + p_n S_{n−1})].

3.1. Estimation of roundoff errors
Multiplication of numbers generates roundoff errors. Therefore, the accuracy of the scattering matrix computation can be estimated by the number of multiplications used in the calculation. The six steps listed above require, respectively, N_{1,1} = J − 1, N_{1,2} = n^3 (n−1), N_{1,3} = (n−1)(n+2)/2, N_{1,4} = n (J − n), N_{1,5} = nJ − n(3n−1)/2 and N_{1,6} = n^3 + n(n+1)/2 multiplications. On the whole, N_1 = (2n+1) J + n^4 − n^2 − 2 − 3n(n−1)/2.
For comparison, calculation according to formula (2),
exp(Az) ≈ I + (Az) + (Az)(Az)/2! + … ,
in which each successive term of the series is obtained by multiplying the previous one by (Az)/j, requires N_2 = (n^3 + n^2)(J − 1) multiplications.
It is easy to verify that N_1 < N_2 if J > n. For these values of J, the computation of exp(Az) by the MSP generates a smaller roundoff error than calculation by formula (2). The asymptotics of the ratio of N_2 to N_1 is characterized by
lim_{J→∞} N_2/N_1 = (n^3 + n^2)/(2n + 1) > n^2/2.
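For reference, the six steps whose multiplication counts were just tallied can be collected into a single routine; the following is a pure-Python sketch for general n with a fixed truncation J and without scaling, not the author's implementation:

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)] for i in range(n)]

def expm_msp(A, J=40):
    """exp(A) for an n x n matrix by the six-step algorithm, eqs. (19)-(20)."""
    n = len(A)
    # step 2: powers A^0..A^n and traces s_g = tr A^g
    P = [[[float(i == k) for k in range(n)] for i in range(n)]]
    for _ in range(n):
        P.append(matmul(P[-1], A))
    s = [sum(P[g][i][i] for i in range(n)) for g in range(1, n + 1)]
    # step 3: Newton's formula  g p_g = s_g - p_1 s_{g-1} - ... - p_{g-1} s_1
    p = []
    for g in range(1, n + 1):
        p.append((s[g - 1] - sum(p[i] * s[g - 2 - i] for i in range(g - 1))) / g)
    # step 4: symmetric polynomials B_l(n) by recurrence (6)
    B = [0.0] * J
    B[n - 1] = 1.0
    for g in range(n, J):
        B[g] = sum(p[k] * B[g - 1 - k] for k in range(n))
    # step 5: sums S_g = sum_{j=n}^{J-1} B_{j-1-g}(n)/j!
    S = [sum(B[j - 1 - g] / math.factorial(j) for j in range(n, J)) for g in range(n)]
    # step 6: exp(A) = sum_l A^l [1/l! + sum_{g<=l} p_{n-l+g} S_g], cf. (19)
    E = [[0.0] * n for _ in range(n)]
    for l in range(n):
        c = 1.0 / math.factorial(l) + sum(p[n - l + g - 1] * S[g] for g in range(l + 1))
        for i in range(n):
            for k in range(n):
                E[i][k] += c * P[l][i][k]
    return E
```

For A = diag(1, 2, 3) the diagonal of the result approaches (e, e^2, e^3).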
3.2. Truncation error
Let us consider the error caused by the truncation of the series,
Σ_{j=n}^{∞} (1/j!) B_{j−1−g}(n) → Σ_{j=n}^{n+N} (1/j!) B_{j−1−g}(n),
i.e. by the substitution of the approximation (19) for the exact formula (11).
Definition 2. The relative truncation error of the matrix exponential is
ε(J) = max_{i,k} |e_{ik}(∞) − e_{ik}(J)| / |e_{ik}(∞)|. (21)
From (11) and (19) it follows that
|e_{ik}(∞) − e_{ik}(J)| ≤ Σ_{l=0}^{n−1} |(a_{ik} z)^l / l!| · l! |C_l(∞) − C_l(J)|, (22)
and
|e_{ik}(∞)| ≥ max_{l} |(a_{ik} z)^l / l!| (1 − τ), (23)
where
τ = max_{0≤l≤n−1} l! C_{lM}, (24)
and C_{lM} denotes the majorant (18) of |C_l(∞)|. For x < 1, from (18) and (24) it follows that
τ ≤ ((n+1) x / ((2n−1)^2 (n+1−x))) Σ_{g=0}^{n−1} n! / ((n−1−g)! (2n−1)^g). (25)
From this relation and (23) we obtain a positive lower bound for |e_{ik}(∞)|, and from (22) and (18),
|e_{ik}(∞) − e_{ik}(N+n)| ≤ (n! (N+n+2) x^{N+2} / ((2n−1)^2 (N+n+1)! (N+n+2−x))) Σ_{g=0}^{n−1} 1 / ((n−1−g)! (2n−1)^g). (26)
Finally, substitution of expressions (25) and (26) into the definition (21) gives the proof of the following theorem.
Theorem 3. If max|a_{gl}| z < 1/(2n−1), the relative truncation error ε(N+n) of the matrix exponential exp(Az) satisfies the inequality
ε(N+n) < n! (N+n+2) x^{N+1} / ((N+n+1)! (N+n+1)),
where x is defined by (17).
3.3. Scaling exponent
Using the fundamental property of the exponential function, we represent the scattering matrix as S = X^m = [exp(Az/m)]^m, where
X = exp(Az/m) = Σ_{l=0}^{n−1} ((Az/m)^l / l!) [1 + l! Σ_{g=0}^{l} p_{n−l+g} Σ_{j=n}^{∞} (1/j!) B_{j−1−g}(n)], (27)
m is an integer; p_i = (−1)^{i−1} σ_i; and σ_i, i = 1, 2, …, n, and B_l(n) are the elementary symmetric polynomials and the symmetric polynomials of n-th order of the matrix Az/m, respectively.
Corollary 3.1. The relative truncation error of the matrix X calculated according to formula (27) satisfies
ε < n! (N+n+2) ξ^{N+1} / ((N+n+1)! (N+n+1)),
provided that ξ = (2n−1) (max|a_{jl}| z) / m < 1.
Example. Let the scaling factor m for a matrix Az of order n = 4 be the smallest integer satisfying the condition m ≥ 10 (2n−1) max|a_{jl}| z. In this case ξ ≤ 0.1, and the calculation by formula (27) with the value N = 2 gives the matrix X with a relative truncation error less than 10^{−5}.
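The scaling recipe of this example can be sketched as follows (a pure-Python illustration under our own simplifications: X is summed as a plain truncated series with j ≤ N + n, and m follows the rule m ≥ 10(2n − 1) max|a_{il}| z):

```python
import math

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)] for i in range(n)]

def expm_scaled(A, z, N=2):
    """S = exp(A z) via scaling, eq. (27): S = X^m with X = exp(A z/m)."""
    n = len(A)
    amax = max(abs(a) for row in A for a in row)
    m = max(1, math.ceil(10 * (2 * n - 1) * amax * z))  # scaling factor
    h = z / m
    # X = sum_{j=0}^{N+n} (A h)^j / j!
    X = [[float(i == k) for k in range(n)] for i in range(n)]
    T = [row[:] for row in X]
    for j in range(1, N + n + 1):
        T = matmul(T, [[a * h / j for a in row] for row in A])  # T = (A h)^j / j!
        X = [[X[i][k] + T[i][k] for k in range(n)] for i in range(n)]
    # S = X^m by exponentiation by squaring
    S = [[float(i == k) for k in range(n)] for i in range(n)]
    while m:
        if m & 1:
            S = matmul(S, X)
        X = matmul(X, X)
        m >>= 1
    return S
```

For A = ((0, 1), (−1, 0)) and z = 10 this picks m = 300 and recovers the rotation through 10 radians to high accuracy.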
When m > n + 1, the most efficient way of calculating the matrix X^m, as shown in , is to use formulas (8)-(9):
X^m = Σ_{l=0}^{n−1} X^l Σ_{g=0}^{l} (−1)^{n−l+g−1} σ_{n−l+g} B_{m−1−g}(n),
where the σ_j and B_l(n) are symmetric polynomials of the matrix X.

4. Conclusion
Among the dozens of techniques that have been used to calculate the matrix exponential, the method of symmetric polynomials (MSP) has significant advantages. One of the most important is that the calculations are accompanied by analytical estimates. Another is that these estimates do not depend on the eigenvalues of the matrix. The accuracy of the calculation of the scattering matrix S = exp(Az) by the scaling method is determined by the truncation error in the calculation of the matrix X and by the roundoff errors in the computation of the matrix X^m. The MSP allows one to control the first error and to minimize the latter.
 Cowley J.M. Diffraction Physics. North-Holland Pub. Co., Amsterdam, 410 pp. (1975).
 Pinsker Z. G. Dynamical scattering of X-rays in perfect crystals. Springer-Verlag, Heidelberg, 511 pp. (1978).
 Gantmacher F.R. The Theory of Matrices. Nauka, Moscow, 552 pp. (1988).
 Angot A. Complements de mathematiques a l'usage des ingenieurs de l'electrotechnique et des telecommunications. Masson, Paris, 868 pp. (1982).
 MacDuffee C. C. The Theory of Matrices. Chelsea, New York, 128 pp. (1956).
 Faddeev D.K., Faddeeva V. N. Computational methods of linear algebra. Nauka, Moscow, 656 pp. (1963).
 Belyayev Yu. N. Calculations of transfer matrix by means of symmetric polynomials. Proceedings of the International Conference "Days on Diffraction 2012", St. Petersburg, Russia, May 28 - June 1, 2012. P. 36-41.