DEVELOPMENT OF THE METHOD FOR DYNAMIC REGULARIZATION OF SELECTED ESTIMATES IN THE CORRELATION MATRICES OF OBSERVATIONS

Inversion of the correlation matrix of observations belongs to the class of problems associated with the reversion of cause-effect relationships. This procedure is the basis for solving inverse statistical problems in applications of spectral analysis, space-time processing of multidimensional signals, control theory, identification, prediction and decision making [1–8].


Introduction
Inversion of the correlation matrix of observations belongs to the class of problems associated with the reversion of cause-effect relationships. This procedure is the basis for solving inverse statistical problems in applications of spectral analysis, space-time processing of multidimensional signals, control theory, identification, prediction and decision making [1–8].
The basic idea of solving the identified problem is the use of regularization methods [3, 6–11], which allow obtaining computationally stable estimates of the correlation matrix synchronously with the development of the observed process. The basis of the solution is the synthesis of regularizing operators, in which the law of choice of the regularization parameter μ acquires priority.
As is known [1], today there is no universal approach to determining the regularization parameter of sample estimates of correlation matrices under a priori uncertainty relative to the input data. The regularization methods developed in this direction belong to the class of static ones and assume a right shift of the spectrum of the correlation matrix by some constant value of the regularizing parameter. Such methods ignore the natural self-regulation property of sample estimates of correlation matrices and, therefore, are not optimum by the criterion of "computational stability – consistency" of the estimates obtained. Hence, the actual problem is the development of an alternative approach to the regularization of sample estimates of correlation matrices under a priori uncertainty relative to the input data.

Literature review and problem statement
The classical problem of the search for the optimum value of the static regularizing parameter μ is solved by the methods of residual, selection, or iterative regularization [13–15]. In particular, the search for the regularization parameter μ when solving a perturbed system of linear algebraic equations is carried out so that the magnitude of the residual of the approximate solution is comparable with the input data of the inverse problem [16, 17]. In practice, the search for the regularization parameter μ is performed on some set of values, and the selection of the optimum parameter μ is based on a priori information [15]. The drawbacks of these methods when solving the inverse problem in real time are computing power constraints [7] and the need for additional a priori information on the structure of the solution of the optimization problem and the level of errors in the input data [16].
In [7, 13, 17], the methods for solving inverse problems with regularization of the maximum likelihood estimate of the correlation matrix of observations are considered. These methods are classified as static regularization methods [18] when the problem of zero eigenvalues is solved by a right shift of the spectrum of the correlation matrix estimate by some constant number μ. The regularized matrix is then similar, but not identical, to the original one in terms of consistency. As a result, the methods of static regularization of sample estimates of correlation matrices are characterized both by computing power constraints and by specific information constraints [14]. In particular, determination of the optimum value of the regularization parameter requires information on the true or expected interference/noise ratio [13], the spectral composition of the correlation matrix of observations, and the possible number of noise sources [19]. In a nondeterministic situation, it is very problematic to obtain such information [20].
In the methodological sense, the specified constraints give rise to a dialectical contradiction by the criterion of "computational stability – consistency" of the estimate [13]. Indeed, the value of the regularization parameter μ should, on the one hand, guarantee the computational stability of the inverse problem and, on the other hand, have minimum effect on the consistency of the matrix estimate, which depends on the sample size.
None of the methods for solving inverse problems with regularization of the maximum likelihood estimate of the correlation matrix of observations eliminates the existing problem. Therefore, the problem of investigating the dynamic regularization of the sample estimate of the correlation matrix with respect to the solution of inverse problems under a priori uncertainty becomes topical. In this situation, the regularizing parameter μ of the inverse problem should be updated in real time as the input data arrive [20].

The aim and objectives of the study
The aim of the paper is to develop a method of dynamic regularization for obtaining consistent and computationally stable sample estimates of the correlation matrix of observations in real time.
To achieve the aim of the research, the following objectives were set:
- to analyze the computational stability and convergence of sample estimates of correlation matrices of observations under a priori uncertainty;
- to investigate the consistency of sample estimates of correlation matrices of observations under static and dynamic regularization;
- to synthesize the optimum function of dynamic regularization of sample estimates of correlation matrices;
- to evaluate the effectiveness of the dynamic regularization method experimentally.

Method of research of the computational stability and consistency of estimates of correlation matrices
In the general theoretical context, the identified problem is solved on some multidimensional Gaussian distribution. The known properties of such a distribution made it possible to analyze the processes of convergence of sample estimates of correlation matrices in terms of their computational stability and consistency under static and dynamic regularization [12, 21]. The adopted observation model assumes δ-correlated internal noise on an unlimited observation interval [24].

1. Basic concepts and algorithms used in the research
For δ-correlated internal noise and an unlimited observation interval, the asymptotic forms of the direct matrix A and the inverse matrix A^(-1) of the correlation matrix of the observation vector are

A = P_n·I, A^(-1) = P_n^(-1)·I,

where P_n is the noise power, * is the symbol of Hermitian conjugation, and 0 and I are the zero and unit N-dimensional matrices, respectively.

The full rank of the correlation matrix A, rank(A) = N, always guarantees the existence of the inverse matrix A^(-1). In practical applications, with a finite observation interval [0, T], the asymptotic representations of the matrices A and A^(-1) are replaced with their estimates (discrete analogs) Â and Â^(-1). Such estimates are computed as the input data arrive and, under certain conditions, do not attain full rank.

We project the problem of forming the estimates of the matrices Â and Â^(-1) onto a grid of time samples, assuming a finite sample of size L from the set of observation vectors u_1, …, u_L. Then the maximum likelihood estimate Â(L) on the set Ω_H of N-dimensional Hermitian matrices has the form [4, 5, 7–12]:

Â(L) = (1/L)·Σ_{l=1..L} u_l·u_l^*. (1)

The algorithm (1) reflects the process of direct addition of rank-one matrices in real time. With an increase in the sample size L to the matrix dimension N, that is, to L = N, the estimate Â(L) reaches full rank. A further increase in the sample size, L > N, in the presence of internal noise is accompanied by a natural regularization (self-regularization) of the matrix Â(L). In this case, the correlation matrix estimate and its inverse tend to their asymptotic forms [21]. The expression (1) and its recurrent computational modification at the k-th step,

Â(k) = ((k−1)/k)·Â(k−1) + (1/k)·u_k·u_k^*, (2)

allow obtaining direct and recurrent forms of inversion with an arbitrary sample size L:

Â^(-1)(L) = [(1/L)·Σ_{l=1..L} u_l·u_l^*]^(-1), (3)

Â^(-1)(k) = (k/(k−1))·[Â^(-1)(k−1) − (Â^(-1)(k−1)·u_k·u_k^*·Â^(-1)(k−1)) / ((k−1) + u_k^*·Â^(-1)(k−1)·u_k)]. (4)
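The direct estimate (1), its recurrent accumulation, and a rank-one inverse update can be sketched in NumPy. This is a minimal illustration under assumptions: real-valued observations stand in for the complex case, and the Sherman–Morrison identity is taken as the recurrent inversion rule (the paper's exact forms (2)–(5) are not reproduced verbatim):

```python
import numpy as np

def ml_estimate(U):
    """Direct maximum likelihood estimate (1): A_hat(L) = (1/L) * sum_l u_l u_l^*.
    U has shape (L, N), one observation vector per row."""
    return (U.conj().T @ U) / U.shape[0]

def recurrent_update(A_prev, u, k):
    """Recurrent form of (1) at step k: a convex combination of the previous
    estimate and the new rank-one term u u^*."""
    return ((k - 1) / k) * A_prev + np.outer(u, u.conj()) / k

def sherman_morrison_inverse(Ainv_prev, u, k):
    """Rank-one inverse update (Sherman-Morrison identity, assumed here as the
    recurrent inversion rule): tracks A_hat^{-1}(k) without re-inverting."""
    v = Ainv_prev @ u
    denom = (k - 1) + np.vdot(u, v).real
    return (k / (k - 1)) * (Ainv_prev - np.outer(v, v.conj()) / denom)

rng = np.random.default_rng(0)
N, L = 5, 200
U = rng.normal(size=(L, N))            # real-valued stand-in for the observations

A_dir = ml_estimate(U)                 # direct summation
A_rec = np.outer(U[0], U[0].conj())    # recurrent accumulation
for k in range(2, L + 1):
    A_rec = recurrent_update(A_rec, U[k - 1], k)

k0 = 50                                # start the inverse recursion past L = N
Ainv = np.linalg.inv(ml_estimate(U[:k0]))
for k in range(k0 + 1, L + 1):
    Ainv = sherman_morrison_inverse(Ainv, U[k - 1], k)

print(np.allclose(A_dir, A_rec))                 # True
print(np.allclose(Ainv, np.linalg.inv(A_dir)))   # True
```

The recurrent paths reproduce the direct results exactly (up to rounding), which is why the recurrent algorithms can be used interchangeably with direct summation as the data arrive.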
Decomposition of the recurrent computing scheme (4) in accordance with the known rule [22] yields the modified recurrent form (5), in which tr{·} denotes the trace of the matrix. In computational practice, the criterion of stability and consistency of the estimates (1)–(5) is the convergence of the corresponding matrix norms [1, 3, 23]:

ε(L) = ||A − Â(L)||, β(L) = ||A^(-1) − Â^(-1)(L)||.

The estimate is considered Hadamard-stable if for any sample size L the approximation norm β(L) has a finite value β(L) ≤ ζ, where ζ is some positive number. The estimates are considered consistent if the property of their rapid convergence to the corresponding asymptotic forms of the matrices A and A^(-1) holds:

lim_{L→∞} P{ε(L) > δ} = 0, lim_{L→∞} P{β(L) > δ} = 0 for any δ > 0,

where P{·} is the probability of an event. The complexity, and sometimes impossibility, of obtaining the analytic dependences ε(L) and β(L) is caused by [24]:
- first, the uncertainty of results due to degeneration of the estimate of the N-dimensional rank-deficient Hermitian matrix Â(L) in the computational stability loss region G;
- second, the complexity of describing the statistical distribution of eigenvalues and unitary vectors of the random matrix Â(L) for an arbitrary sample size.

The natural way to overcome these constraints is to carry out simulation studies that are reliable in terms of convergence of the estimates Â(L) and Â^(-1)(L) to the corresponding asymptotic forms. To simulate the computation of sample estimates of correlation matrices, the N-dimensional normal distribution of the set of stationary observations with zero mathematical expectation and given variance was used. The convergence of the computational algorithms (1)–(5) is demonstrated by the simulation results shown in Fig. 1:
- the algorithms (2), (4) and (5), in view of their recurrent form, have the property of smoothing the estimates, which indicates their effectiveness, that is, the minimum variance of the estimates with respect to the direct summation algorithms (1) and (3);
- the algorithms (3)–(5) are characterized by the objective existence of a region G of loss of computational stability of the estimate Â^(-1)(L) (in this case, L < N = 10). However, under the initial condition Â_1 = I, the estimate (5) is computationally stable but does not satisfy the approximation criterion β(L) < 1 over the whole range of L values (Fig. 2, graph 2).

As a result, the estimates (3)–(5) of the matrix Â^(-1)(L), despite their consistency in the sample size L, have a computational stability loss region G where, under the restriction L < N, the approximation norm β(L) → ∞. As is known, computationally stable matrix estimates can be obtained using the method of static regularization [1, 7, 8, 11, 25]. In this case, it is appropriate to investigate the consistency of such estimates.
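The behavior of the norms ε(L) and β(L) described above can be probed with a short simulation. This sketch assumes the Frobenius norm, the δ-correlated model A = P_n·I, and P_n = 1; for L < N the sample matrix is rank-deficient, so β(L) is treated as infinite there:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P_n, Lmax = 10, 1.0, 400
A = P_n * np.eye(N)                    # asymptotic form for delta-correlated noise
A_inv = np.linalg.inv(A)
U = rng.normal(scale=np.sqrt(P_n), size=(Lmax, N))

def norms(L):
    """Frobenius-norm analogs of eps(L) and beta(L) for one sample path."""
    A_hat = (U[:L].T @ U[:L]) / L
    eps = np.linalg.norm(A - A_hat)
    # A_hat is rank-deficient for L < N, so the inverse-estimate norm diverges
    beta = np.linalg.norm(A_inv - np.linalg.inv(A_hat)) if L >= N else np.inf
    return eps, beta

eps_edge, beta_edge = norms(N)     # boundary of the stability-loss region G
eps_far, beta_far = norms(Lmax)    # deep in the self-regularization region L >> N
print(eps_far < eps_edge and beta_far < beta_edge)   # True: both norms shrink
```

The run illustrates the two regimes: divergence of β(L) inside G (L < N) and joint decay of both norms once L exceeds N, i.e. the self-regularization of the maximum likelihood estimate.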

2. Research of consistency of estimates of correlation matrices under static regularization
Static regularization assumes a "forced" right shift of the spectrum of the initial matrix estimate Â(L) by a fixed value of the regularizing parameter μ = f(ξ) > 0, which is consistent with the error level ξ [1, 10, 11]:

Â_μ(L) = Â(L) + μ·I. (6)

This guarantees the computational stability of the estimate

Â_μ^(-1)(L) = [Â(L) + μ·I]^(-1) (7)

regardless of the sample size L. By analogy with (6) and (7), the consistency of the estimates of the direct matrix Â_μ(L) and the inverse matrix Â_μ^(-1)(L) is investigated by the criterion of convergence of the regularized matrix norms:

ε_μ(L) = ||A − Â_μ(L)||, (8)
β_μ(L) = ||A^(-1) − Â_μ^(-1)(L)||. (9)

The limit value of the matrix norm (9), despite the consistency of the initial estimate Â(L), does not reach zero and, other things being equal, is limited by the value of the fixed regularization parameter μ (10). It follows from (10) that, with a fixed regularization parameter μ, the estimate Â_μ^(-1)(L) does not meet the consistency criterion.

To compute the limit value of the matrix norm β_μ(L), we use the spectral decompositions of the asymptotic matrix A and of its estimate Â(L) [23, 24]:

A = Σ_{i=1..N} λ_i·Π_i, (11)
Â(L) = Σ_{i=1..N} λ_i(L)·Π_i(L), (12)

where λ_i and λ_i(L) are the eigenvalues, and Π_i and Π_i(L) the corresponding unit-trace eigenprojectors, of the matrix A and of its estimate Â(L), respectively. The expressions (11) and (12) allow representing the asymptotic form of the direct matrix A as the limit of the spectral decomposition of the consistent estimate Â(L) (13). Passage to the limit in (13), owing to the known lemmas of the theory of limits on the sum of infinitesimals and the product of a bounded variable by an infinitesimal, guarantees the existence of the limits λ_i(L) → λ_i and Π_i(L) → Π_i as L → ∞ (14), (15) [24]. In this context, the spectral decompositions of the inverse matrix A^(-1) and of its regularized estimate have the form:

A^(-1) = Σ_{i=1..N} λ_i^(-1)·Π_i, (16)
Â_μ^(-1)(L) = Σ_{i=1..N} (λ_i(L) + μ)^(-1)·Π_i(L). (17)

Proceeding from (16) and (17), based on the existence of the limits (14), (15) and the condition tr Π_i = 1, we obtain the limit value of the matrix norm β_μ(L):

lim_{L→∞} β_μ(L) = ||Σ_{i=1..N} [λ_i^(-1) − (λ_i + μ)^(-1)]·Π_i||. (18)

The expression (18) demonstrates the inconsistency of the regularized estimate with the fixed parameter μ > 0. The specific properties of each of the regularized estimates (3)–(5) are reflected by the experimental average nonstationary dependences β_μ(L) presented in Fig. 3, 4, respectively, for the fixed values of the regularization parameter μ = 0.1, μ = 0.3 and μ = 0.7 with the matrix dimension N = 10. Here, the solid lines show the current values of the matrix norms β_μ(L) and the dashed lines the corresponding theoretical limit values (18). To simulate the processes of computing the regularized estimates, the N-dimensional normal distribution of the set of stationary observations with zero mathematical expectation and given variance was used.
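The nonzero floor of β_μ(L) under static regularization can be checked numerically. In this sketch the Frobenius norm and the model A = P_n·I are assumed, under which the limit (18) reduces to √N·(1/P_n − 1/(P_n + μ)):

```python
import numpy as np

rng = np.random.default_rng(2)
N, P_n, mu = 10, 1.0, 0.3
A_inv = np.eye(N) / P_n
I = np.eye(N)

def beta_mu(L):
    """beta_mu(L) = ||A^{-1} - (A_hat(L) + mu*I)^{-1}||_F for a fresh sample."""
    U = rng.normal(scale=np.sqrt(P_n), size=(L, N))
    A_hat = (U.T @ U) / L
    return np.linalg.norm(A_inv - np.linalg.inv(A_hat + mu * I))

# theoretical floor (18) for A = P_n*I: sqrt(N)*(1/P_n - 1/(P_n + mu))
floor = np.sqrt(N) * (1 / P_n - 1 / (P_n + mu))

print(np.isfinite(beta_mu(3)))               # stable even for L < N
print(abs(beta_mu(200_000) - floor) < 0.05)  # but pinned to the nonzero floor
```

The first print shows the benefit of the shift (no stability loss for small L); the second shows its cost: however large the sample, the norm cannot fall below the μ-dependent floor, which is exactly the inconsistency the text describes.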
The approach of the convergence trajectories β_μ(L) of each of the regularized estimates Â_μ^(-1)(L) obtained by (3)–(5) to the theoretical limit value (18) for a finite number of iterations L shows that all of them are computationally stable but inconsistent. The choice of the value of the static regularization parameter μ for a given sample size L is determined by the required approximation accuracy β_μ(L); the law of choice involves a compromise between the approximation accuracy β_μ(L) and the sample size L. Achievement of such a compromise under a priori uncertainty about the structure of the spectrum of the correlation matrix estimate is very problematic. This uncertainty can be eliminated by variation of the parameter μ. At the same time, unjustified variations, for example, an increase in the regularizing parameter, worsen the compliance of the estimates (3)–(5) with the input data, thereby violating their consistency and, consequently, their self-regulation ability (Fig. 3, 4). As is known [1], a universal approach to the search for the optimum value of the regularization parameter by the "computational stability – consistency" criterion is absent today. An approach can be considered successful if it uses the natural properties of the maximum likelihood estimates Â(L), in particular, their consistency and self-regulation ability. These properties indicate the need for a transition from static regularization (μ = const) to a monotonic decrease of the regularizing parameter as the sample size increases. This kind of regularization of sample estimates of correlation matrices is classified as dynamic regularization.

3. Dynamic regularization of sample estimates of correlation matrices
Dynamic regularization of sample estimates of correlation matrices is based on the uniqueness theorem for inverse problems with perturbed input data [23, 25, 26]. It follows from this theorem that if the value of the parameter μ(L), as the value of a monotone function with μ(1) > 0, satisfies the condition

lim_{L→∞} μ(L) = 0,

then the regularized estimates converge to the corresponding asymptotic forms. Therefore, unlike the nonzero limit (18), the matrix norm (9) will have the limit

lim_{L→∞} β_μ(L) = 0.

The latter indicates the computational stability and consistency of the estimate with the dynamic regularization parameter μ(L). In the framework of the dynamic regularization method, the algorithms of direct (3) and recurrent (4), (5) computation of the inverse matrix estimate Â_μ^(-1)(L) are transformed into the forms (19)–(21) by replacing the fixed parameter μ with the vanishing function μ(L) in the regularizing shift μ(L)·I. The nature of the convergence β_μ(L) of the estimates (3)–(5) to the asymptotic form (Fig. 1–3) allows us, without violating the generality of the arguments on the estimates (19)–(21), to confine ourselves to an investigation of the consistency and computational stability of the algorithm (21) only.
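The effect of a vanishing regularizer can be demonstrated with an illustrative law μ(L) = μ_1/L (an assumption; it satisfies μ(1) > 0, monotonicity, and μ(L) → 0, but is not the paper's optimum law (23)):

```python
import numpy as np

rng = np.random.default_rng(3)
N, P_n = 10, 1.0
A_inv = np.eye(N) / P_n
I = np.eye(N)
U = rng.normal(scale=np.sqrt(P_n), size=(100_000, N))

def beta_dyn(L, mu1=1.0):
    """Dynamically regularized inverse-estimate norm with the illustrative
    vanishing law mu(L) = mu1/L (monotone, mu(1) > 0, mu(L) -> 0)."""
    A_hat = (U[:L].T @ U[:L]) / L
    return np.linalg.norm(A_inv - np.linalg.inv(A_hat + (mu1 / L) * I))

print(np.isfinite(beta_dyn(3)))            # no stability loss inside L < N
print(beta_dyn(100_000) < beta_dyn(100))   # and the norm keeps decreasing
```

Unlike the static case, there is no residual floor: the shift term μ(L)·I stabilizes the inversion for small L and then fades away, so the estimate recovers its consistency as the sample grows.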

Results of research of consistency of estimates of correlation matrices under dynamic regularization
Theoretical research of the consistency of the maximum likelihood estimate Â(L) and the numerical experiment (Fig. 1, a, b, Fig. 2) indicate the expediency of using, in practical applications, a law of decrease of the dynamic regularization function μ(L) that is monotone in L and controlled by a parameter g > 0 (22). The convergence trajectories β_μ(L) of the estimate (21) for an arbitrary value of the parameter g > 0 of the dynamic regularizer (22) form some surface β_μ(L, g). Fig. 5 shows the surface β_μ(L, g) (Fig. 5, a) and its isolines (Fig. 5, b) obtained with the matrix dimension N = 20. The analysis of the dependences presented in Fig. 5 shows that, since the function β_μ(L) is quadratic, the surface β_μ(L, g) has a so-called "ravine" whose coordinates satisfy the solution of the optimization problem min_{g ∈ Ω_g} β_μ(L, g), where Ω_g is the set of possible values of g > 0.
In the analog representation, the coordinates of the trajectory of this "ravine" satisfy the solution of the Cauchy problem for a first-order linear inhomogeneous differential equation in g(L), where N is the dimension of the correlation matrix estimate. Its solution on the grid of L time samples yields the theoretical curve g(L), presented in Fig. 6 by the dashed line. In the same figure, the solid line shows the experimental trajectory of the "ravine" corresponding to the simulation results (Fig. 5). Comparison of the dependences presented in Fig. 5, 6 indicates the consistency of theoretical and experimental data. The expression obtained reflects the process of convergence of the parameter g(L) to its optimum value. The defining advantage of variation of the regularizing parameter μ_opt(L) according to (23) is the satisfaction of the criterion of "computational stability – consistency" of the estimates (19)–(21). This advantage is illustrated by the family of convergence trajectories β_μ(L) of the estimate (21), represented by solid lines in Fig. 7 for the chosen values N = 10, N = 30 and N = 50. Here, for comparison, the dashed lines show the convergence trajectories β(L) of the non-regularized estimate (5). These families of dependences illustrate the loss of computational stability of the consistent estimate (5) with L < N, which is not typical of the estimate (21) with the optimum dynamic regularization parameter μ_opt(L).
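The "ravine" of the surface β_μ(L, g) can be traced numerically. This sketch assumes a hypothetical one-parameter family μ = g/L (the actual law (22) is not reproduced) and scans g at a fixed sample size L < N, where the trade-off is sharpest:

```python
import numpy as np

rng = np.random.default_rng(4)
N, P_n, L = 20, 1.0, 15          # L < N: inside the stability-loss region G
A_inv = np.eye(N) / P_n
I = np.eye(N)
U = rng.normal(scale=np.sqrt(P_n), size=(L, N))
A_hat = (U.T @ U) / L

def beta(g):
    """One slice of the surface beta_mu(L, g) for the assumed family mu = g/L."""
    return np.linalg.norm(A_inv - np.linalg.inv(A_hat + (g / L) * I))

grid = np.linspace(0.01, 100.0, 1000)
values = [beta(g) for g in grid]
i_opt = int(np.argmin(values))
# the minimum lies strictly inside the grid: too small a g leaves the
# inverse unstable, too large a g biases the estimate away from A^{-1}
print(0 < i_opt < len(grid) - 1)   # True
```

The interior minimum over g is the discrete counterpart of the "ravine" coordinates: at each L there is a best amount of regularization, and following it as L grows is precisely what the optimum function μ_opt(L) formalizes.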

6. 2. Practical application of the dynamic regularization method for solving the problem of useful signal detection
Let us consider the application of the dynamic regularization method to the problem of useful signal detection at the output of an N-dimensional adaptive antenna array. Maximization of the signal-to-noise ratio q at its output implies determination of the parametric vector by inversion of the estimate of the correlation matrix of observations with the optimum dynamic regularization function. The results of the effect of dynamic regularization on the output signal-to-noise ratio q with the antenna array dimension N = 70 are shown in Fig. 8.

Fig. 8. Signal-to-noise ratios q with the antenna array dimension N = 70

The presented dependences illustrate the variation of the signal-to-noise ratio at the output of the adaptive antenna array in the presence, q_μ(L), and absence, q(L), of dynamic regularization of the correlation matrix of observations. The potential value of the signal-to-noise ratio is denoted by the dashed line and for the given situation is q_0 = 28.5 dB. The comparative analysis of the behavior of the curves q_μ(L) and q(L) shows that in the dynamic regularization mode, the loss of computational stability in the region G (L < N) is absent. At the same time, the signal-to-noise ratio q_μ(L) reaches the potential value q_0 for a finite number of iterations L, and the duration of parametric adaptation of the antenna array is substantially reduced.
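The beamforming computation described above can be sketched as follows. The steering vector, the noise model, and the regularization law μ(L) = μ_1/L are all illustrative assumptions (the optimum law (23) and the paper's scenario with q_0 = 28.5 dB are not reproduced); the sketch only shows that a vanishing regularizer keeps the weight computation stable for L < N while the output SNR approaches its potential value:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 70
s = np.ones(N) / np.sqrt(N)        # hypothetical steering vector of the array
P_s, P_n = 1.0, 1.0
M = P_n * np.eye(N)                # true noise covariance (delta-correlated)
q0 = P_s * float(s @ np.linalg.inv(M) @ s)   # potential output SNR

def snr(L, mu1=1.0):
    """Output SNR of the weights w = (M_hat + mu(L)*I)^{-1} s, where M_hat is
    the sample covariance of L noise snapshots and mu(L) = mu1/L is an
    illustrative vanishing regularizer (the optimum law (23) is not used)."""
    X = rng.normal(scale=np.sqrt(P_n), size=(L, N))
    M_hat = (X.T @ X) / L
    w = np.linalg.solve(M_hat + (mu1 / L) * np.eye(N), s)
    # SINR of the beamformer w evaluated against the true covariance M
    return P_s * float(w @ s) ** 2 / float(w @ M @ w)

print(np.isfinite(snr(20)))             # no stability loss for L < N = 70
print(abs(snr(5000) - q0) / q0 < 0.1)   # approaches the potential SNR q0
```

Without the shift term the solve would fail (or blow up) for L < N; with it, adaptation proceeds from the very first snapshots, mirroring the behavior of q_μ(L) versus q(L) in Fig. 8.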
The advantage of the dynamic regularization method is explained by the monotonic decrease of the regularizing parameter according to the optimum law (23), which ensures the computational stability of estimates of correlation matrices without violating their consistency under a priori uncertainty. The results obtained apply to the class of stationary random processes and the corresponding estimates of correlation matrices. It is of interest to further develop the dynamic regularization method for the problem of obtaining computationally stable and consistent estimates of correlation matrices of nonstationary random processes.

Conclusions
1. The method for dynamic regularization of sample estimates of correlation matrices was developed. Using the regularizing parameter search according to the optimum law, it is an alternative to static regularization and resolves the "computational stability – consistency" contradiction when forming estimates of this class, without violating their natural self-regulation property as the size of the observed sample increases.
2. The optimum dynamic regularization function was synthesized, the evaluation of which does not require prediction data and additional computing resources to search for the optimum value of the regularization parameter.
3. It is shown that the application of the method of dynamic regularization of sample estimates of correlation matrices extends the capabilities of a wide class of information systems that are designed to solve ill-posed inverse problems under a priori uncertainty.

Fig. 1. Dependence of the matrix norms ε(L) and β(L) on the sample size L for the matrix of size N = 10: a – algorithm (3), b – algorithm (4)

Fig. 2. Convergence of the estimate Â^(-1)(L) obtained by the recurrent algorithm (5), shown as lg[β(L)], for the initial conditions Â_1 = u_1·u_1^* (graph 1) and Â_1 = I (graph 2)

Fig. 5. Values of the matrix norm β_μ(L, g) depending on the sample size L and the parameter g: a – surface, b – isolines

Fig. 6. Theoretical and experimental trajectories of the curve g(L)

Hence, the optimum function μ_opt(L) for dynamic regularization of the estimate Â(L) takes the form given in (23).

6. Discussion of the results of the research of consistency of estimates of correlation matrices under dynamic regularization

6. 1. Advantages of the dynamic regularization method

The proposed procedure for dynamic regularization (23) has the following advantages:
- it unambiguously connects the dynamic regularization function μ_opt(L) with the matrix dimension N and the size of the observed sample L;
- it is characterized by simplicity of computational operations in real time in the absence of a priori information;
- it eliminates the problem of choosing the regularization parameter under a priori uncertainty about the input data of the computational problem.