# Gradient Flows in Filtering and Fisher-Rao Geometry

###### Abstract

Uncertainty propagation and filtering can be interpreted as gradient flows with respect to suitable metrics on the infinite-dimensional manifold of probability density functions. Such a viewpoint has been put forth in recent literature, and a systematic way to formulate and solve such problems for linear Gaussian systems appeared in our previous work, where the gradient flows were realized via proximal operators with respect to the Wasserstein metric arising in optimal mass transport. In this paper, we derive the evolution equations as proximal operators with respect to the Fisher-Rao metric arising in information geometry. We develop the linear Gaussian case in detail and show that a template two-step optimization procedure proposed earlier by the authors still applies. Our objective is to provide new geometric interpretations of known equations in filtering, and to clarify the implications of different choices of metric.

## I Introduction

This paper concerns an emerging viewpoint that gradient flows on the infinite-dimensional manifold of probability density functions can be constructed to approximate both the propagation and measurement update steps in filtering. In the systems-control literature, continuous-time filtering theory has traditionally been approached from a “transport viewpoint” where, given the noisy process and measurement models, the flow of the posterior probability density function (PDF) of the state vector is described by the Kushner-Stratonovich stochastic partial differential equation (PDE) [1, 2] (or equivalently by the Duncan-Mortensen-Zakai PDE [3, 4, 5] for the unnormalized posterior). In the absence of measurement updates, the problem reduces to that of uncertainty propagation subject to the drift and diffusion in the process dynamics, and the evolution of the state PDF is then governed by the Fokker-Planck or Kolmogorov forward PDE [6]. Owing to the numerical difficulties in solving these PDEs, in practice one resorts to approximating the evolution of the state PDFs using sequential Monte Carlo algorithms, which remain computationally challenging in general. This motivates us to look for alternative formulations that are theoretically equivalent to solving the associated transport PDEs.

One such alternative is the “variational viewpoint” that has been put forth in recent literature [7, 8]. Specifically, let us consider the system of Itô stochastic differential equations (SDEs) for the state vector $x(t) \in \mathbb{R}^{n}$ and the measurement vector $z(t) \in \mathbb{R}^{m}$:

(1a) $\mathrm{d}x(t) = -\nabla\psi\left(x(t)\right)\mathrm{d}t + \sqrt{2\beta^{-1}}\,\mathrm{d}w(t),$

(1b) $\mathrm{d}z(t) = h\left(x(t)\right)\mathrm{d}t + \mathrm{d}v(t),$

where $\beta > 0$, $\psi : \mathbb{R}^{n} \to \mathbb{R}$ is a potential, the process and measurement noise processes $w(t) \in \mathbb{R}^{n}$ and $v(t) \in \mathbb{R}^{m}$ are Wiener and satisfy $\mathbb{E}\left[\mathrm{d}w\,\mathrm{d}w^{\top}\right] = I_{n}\,\mathrm{d}t$ and $\mathbb{E}\left[\mathrm{d}v\,\mathrm{d}v^{\top}\right] = R\,\mathrm{d}t$, with $R \succ 0$. Also, $v(t)$ is assumed to be independent of $w(t)$ and independent of the initial state $x(0)$. In the absence of (1b), the Fokker-Planck PDE for the state PDF $\rho(x, t)$ corresponding to (1a) is given by

(2) $\dfrac{\partial \rho}{\partial t} = \nabla\cdot\left(\rho\,\nabla\psi\right) + \beta^{-1}\Delta\rho.$

In the absence of (1a), the Kushner-Stratonovich PDE [1, 2] for the state PDF $\rho^{+}(x, t)$, conditioned on the history of measurements up to time $t$, corresponding to (1b) is given by

(3) $\mathrm{d}\rho^{+} = \left(h - \widehat{h}\right)^{\top} R^{-1}\left(\mathrm{d}z - \widehat{h}\,\mathrm{d}t\right)\rho^{+}, \qquad \widehat{h} := \mathbb{E}_{\rho^{+}}\left[h\right].$

In the “variational viewpoint” for uncertainty propagation, one sets up a discrete time-stepping scheme of the form

(4) $\varrho_{k} = \underset{\varrho}{\arg\inf}\left\{\dfrac{1}{2}\,d^{2}\left(\varrho, \varrho_{k-1}\right) + h\,\Phi\left(\varrho\right)\right\}, \qquad k \in \mathbb{N},$

with step-size $h > 0$, $d(\cdot, \cdot)$ a distance functional between two PDFs, and $\Phi(\cdot)$ a functional that depends on the drift and diffusion coefficients in the process model. The functionals $d$ and $\Phi$ are to be chosen such that $\varrho_{k}(x) \to \rho(x, t = kh)$ as $h \downarrow 0$, thus establishing consistency between the solutions of (2) and (4). A similar recursion for the measurement update takes the form

(5) $\varrho_{k}^{+} = \underset{\varrho}{\arg\inf}\left\{\dfrac{1}{2}\,d^{2}\left(\varrho, \varrho_{k}^{-}\right) + h\,\Phi\left(\varrho\right)\right\}, \qquad k \in \mathbb{N},$

where $\varrho_{k}^{-}$ and $\varrho_{k}^{+}$ approximate the prior and posterior PDFs, respectively, and the functional $\Phi$ depends on the measurement model and the noisy observations. Again, the choices of $d$ and $\Phi$ must guarantee $\varrho_{k}^{+}(x) \to \rho^{+}(x, t = kh)$ as $h \downarrow 0$, to allow consistency between the solutions of (3) and (5).

Both (4) and (5) can be viewed as evaluating suitable (infinite-dimensional) proximal operators [9, 10] of the functional $\Phi$ with respect to the distance functional $d$. In other words, recursion (4) can be succinctly written as $\varrho_{k} = \mathrm{prox}_{h\Phi}^{d}\left(\varrho_{k-1}\right)$; similarly for (5). In particular, (4) and (5) can be seen as the gradient descent of the functional $\Phi$ with respect to the distance $d$. For a parallel with finite-dimensional gradient descent, see [11, Section I].
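The finite-dimensional parallel can be made concrete. For a quadratic potential, each proximal step has a closed form (an implicit Euler step), and iterating it tracks the corresponding gradient flow. The following sketch (Python with NumPy/SciPy; the matrix $Q$ and the step parameters are illustrative choices, not from the paper) checks this numerically.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional analogue of the proximal recursion (4): each step solves
#   x_k = argmin_x { 0.5*||x - x_{k-1}||^2 + h*phi(x) },
# which, for the hypothetical quadratic potential phi(x) = 0.5*x'Qx used here,
# has the closed form x_k = (I + h*Q)^{-1} x_{k-1} (an implicit Euler step).
# Iterating approximates the gradient flow dx/dt = -grad phi(x) = -Q x.
rng = np.random.default_rng(0)
n, h, steps = 3, 1e-3, 1000
T = steps * h
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                    # positive definite Hessian of phi
x = x0 = rng.standard_normal(n)
for _ in range(steps):
    x = np.linalg.solve(np.eye(n) + h * Q, x)  # proximal (implicit Euler) step

x_exact = expm(-T * Q) @ x0                    # exact gradient-flow solution
err = np.linalg.norm(x - x_exact)
```

As $h \downarrow 0$ the proximal iterates converge to the gradient flow, mirroring the consistency requirement placed on (4) and (5).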

For the proximal recursion (4) associated with uncertainty propagation, it was shown in [7] that if $d$ is chosen to be the Wasserstein-2 distance, whose square equals the cost of optimal transport between the two probability measures under consideration, then $\Phi$ is the *free energy functional*, defined to be the sum of an energy functional and the scaled negative of the differential entropy functional (see [11, Section II] for details). In this case, the functional $\Phi$ depends on both $\psi$ and $\beta$ appearing in the process model (1a).

Laugesen, Mehta, Meyn and Raginsky [8] were the first to consider the proximal recursion (5) associated with the measurement update, where it was shown that if $d$ is chosen to be the Kullback-Leibler divergence $D_{\mathrm{KL}}\left(\varrho \parallel \varrho_{k}^{-}\right)$, then $\Phi$ is the “*expected quadratic surprise*” functional, given by

(6) $\Phi\left(\varrho\right) = \dfrac{1}{2}\displaystyle\int_{\mathbb{R}^{n}}\left(y_{k} - h(x)\right)^{\top} R^{-1}\left(y_{k} - h(x)\right)\varrho\left(x\right)\mathrm{d}x,$

where $y_{k} := \left(z_{k} - z_{k-1}\right)/h$, $z_{k} := z\left(t = kh\right)$, and $\left\{z_{k}\right\}_{k \in \mathbb{N}}$ is the sequence of samples of $z(t)$ at $t = kh$ for $k = 1, 2, \ldots$. We refer to this particular variational formulation as the LMMR scheme.
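Since the expected quadratic surprise is the expectation of a quadratic form, for a Gaussian $\varrho$ with mean $\mu$ and covariance $P$ and a linear observation map $h(x) = Cx$ it admits a closed form; the sketch below (synthetic matrices, not from the paper) verifies this against a Monte Carlo estimate.

```python
import numpy as np

# For rho = N(mu, P) and h(x) = C x, the expected quadratic surprise becomes
#   E[(y - Cx)' R^{-1} (y - Cx)]
#     = (y - C mu)' R^{-1} (y - C mu) + tr(R^{-1} C P C').
# All matrices below are synthetic test data.
rng = np.random.default_rng(1)
n, m = 3, 2
C = rng.standard_normal((m, n))
G = rng.standard_normal((n, n))
P = G @ G.T + np.eye(n)                       # covariance of rho
Rinv = np.linalg.inv(np.diag([0.5, 2.0]))     # R^{-1}
mu = rng.standard_normal(n)
y = rng.standard_normal(m)

closed = (y - C @ mu) @ Rinv @ (y - C @ mu) + np.trace(Rinv @ C @ P @ C.T)

X = rng.multivariate_normal(mu, P, size=200_000)   # samples x ~ N(mu, P)
res = y - X @ C.T
mc = np.mean(np.einsum('ij,jk,ik->i', res, Rinv, res))
```

This closed form is what makes the linear Gaussian computations in Section III explicit.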

In [11], a general framework was introduced to bring any stable controllable linear system to the canonical form (1), for which the proximal recursions (4) or (5) apply. Also, a two-step optimization strategy was proposed to compute these recursions, and using these tools, the LMMR scheme in the linear Gaussian setting was shown to recover the Kalman-Bucy filter [12]. It was also found [11, Section IV.B] that the recursion (5), with $d$ chosen to be the Wasserstein-2 distance and $\Phi$ as in (6), results in the linear Gaussian case in a Luenberger-type estimator with a static gain matrix, unlike the Kalman-Bucy filter.

It becomes apparent that the choice of the distance functional $d$ cannot be arbitrary and, to appeal to the gradient descent interpretation, should preferably define a metric on the manifold of PDFs. While the Wasserstein-2 distance is a metric [13, p. 208], the Kullback-Leibler divergence is not. This naturally leads to the question whether one can find a metric $d$ and a functional $\Phi$ in (5) such that the filtering equations can be seen as the gradient descent of $\Phi$ with respect to the metric $d$. In this paper, by taking $d$ to be the geodesic distance induced by the Fisher-Rao metric [15] and $\Phi$ as in (6), we answer this in the affirmative for the linear Gaussian setting. Specifically, a variant of the two-step optimization template proposed in [11] allows us to perform explicit computation of (5) with the Fisher-Rao metric, which is shown to recover the Kalman-Bucy filter in the limit $h \downarrow 0$.

It is perhaps not surprising that the functional $\Phi$ in this paper and in the LMMR scheme [8] is the same while the choices of $d$ differ, and yet both schemes are shown to recover the same filtering equations. This is due to the well-known fact [14, Ch. 2, p. 26–28] that the Kullback-Leibler divergence between a pair of PDFs, one of which is an infinitesimal perturbation of the other, equals half the squared geodesic distance between them in the Fisher-Rao metric.

The rest of this paper is structured as follows. In Section II, we review some basic aspects of Fisher-Rao geometry on the manifold of probability density functions. In Section III, focusing on the linear Gaussian case, we approximate the continuous-time stochastic estimator as a gradient descent with respect to the geodesic distance induced by the Fisher-Rao metric. Section IV concludes the paper.

### Notations and Preliminaries

As in [11], we denote the space of PDFs on $\mathbb{R}^{n}$ by $\mathcal{D}$, and the space of PDFs with finite second moments by $\mathcal{D}_{2}$. The notation $\mathcal{P}_{\mu, P}$ is used for the space of PDFs which share the same mean vector $\mu$ and the same covariance matrix $P$. The following inclusion is immediate: $\mathcal{P}_{\mu, P} \subset \mathcal{D}_{2} \subset \mathcal{D}$. We use the symbol $\mathcal{N}\left(\mu, P\right)$ to denote a multivariate Gaussian PDF with mean $\mu$ and covariance $P$. The notation $x \sim \rho$ means that the random vector $x$ has PDF $\rho$; $\mathbb{E}\left[\cdot\right]$ denotes the expectation operator while, when the probability density is to be specified, we write $\mathbb{E}_{\rho}\left[\cdot\right]$.

We use $\mathrm{d}$ for the differential, $\mathrm{D}$ for the Jacobian, and $\mathrm{vec}\left(\cdot\right)$ for the vectorization operator. For taking derivatives of matrix-valued functions, we utilize the Jacobian identification rules [16, p. 199, Table 2]. The notation $I_{n}$ stands for the $n \times n$ identity matrix, and $\parallel\cdot\parallel_{\mathrm{F}}$ for the Frobenius norm. The set of $m \times n$ real matrices is denoted as $\mathbb{R}^{m \times n}$, the set of $n \times n$ symmetric matrices as $\mathbb{S}^{n}$, and the set of $n \times n$ symmetric positive definite matrices as $\mathbb{S}_{++}^{n}$. The cone $\mathbb{S}_{++}^{n}$ is a smooth differentiable manifold with tangent space $T_{P}\,\mathbb{S}_{++}^{n} \cong \mathbb{S}^{n}$, where $P \in \mathbb{S}_{++}^{n}$. The symbols $\otimes$ and $\oplus$ denote Kronecker product and Kronecker sum^{1}^{1}1For an $n \times n$ matrix $M$ and an $m \times m$ matrix $N$, the Kronecker sum is the $nm \times nm$ matrix $M \oplus N := M \otimes I_{m} + I_{n} \otimes N$., respectively. We will have multiple occasions to use the following three well-known facts: (1) $\mathrm{vec}\left(\cdot\right)$ is a linear operator, (2) the inverse of a Kronecker product equals the Kronecker product of the inverses, i.e., $\left(M \otimes N\right)^{-1} = M^{-1} \otimes N^{-1}$ for invertible $M, N$, and (3) an identity relating $\mathrm{vec}$ and the Kronecker product:

(7) $\mathrm{vec}\left(MXN\right) = \left(N^{\top} \otimes M\right)\mathrm{vec}\left(X\right).$

All matrix logarithms in this paper are to be understood as principal logarithms. Furthermore, the following similarity property (see [17], [18, Theorem 1.13(c)]) will be useful:

(8) $\log\left(M P M^{-1}\right) = M\left(\log P\right)M^{-1}, \qquad M \text{ invertible},$

which also holds when $\log$ is replaced by any analytic matrix function.
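A quick numerical sanity check of the similarity property (8) with the principal matrix logarithm (the matrices below are random, well-conditioned test data; Python with NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import logm

# Check log(M P M^{-1}) = M log(P) M^{-1} for invertible M and P in S++.
rng = np.random.default_rng(2)
n = 4
G = rng.standard_normal((n, n))
P = G @ G.T + n * np.eye(n)                      # symmetric positive definite
M = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, invertible
Minv = np.linalg.inv(M)

lhs = logm(M @ P @ Minv)                         # principal logarithm
rhs = M @ logm(P) @ Minv
err8 = np.linalg.norm(lhs - rhs)
```

Note that $M P M^{-1}$ has the (positive) eigenvalues of $P$, so its principal logarithm is well defined even though the product is not symmetric.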

## II Fisher-Rao Geometry

Given a PDF $\rho \in \mathcal{D}$ and tangent vectors $u, v \in T_{\rho}\mathcal{D}$, the *Fisher-Rao metric* is defined as a Riemannian metric on $\mathcal{D}$, given by

(9) $\left\langle u, v \right\rangle_{\rho} := \displaystyle\int_{\mathbb{R}^{n}} \dfrac{u\left(x\right)v\left(x\right)}{\rho\left(x\right)}\,\mathrm{d}x.$

Let $\varphi := \sqrt{\rho}$, and consider the space of “square-root PDFs”, given by $\sqrt{\mathcal{D}} := \left\{\sqrt{\rho} \,:\, \rho \in \mathcal{D}\right\}$. Geometrically, $\sqrt{\mathcal{D}}$ represents the non-negative orthant of the (infinite-dimensional) unit sphere imbedded in the Hilbert space of square-integrable functions on $\mathbb{R}^{n}$ equipped with the inner product $\left\langle \varphi_{0}, \varphi_{1}\right\rangle := \int_{\mathbb{R}^{n}} \varphi_{0}\varphi_{1}\,\mathrm{d}x$. For $\rho_{0}, \rho_{1} \in \mathcal{D}$, the (minimal) *geodesic distance induced by the Fisher-Rao metric*, denoted hereafter as $d_{\mathrm{FR}}\left(\rho_{0}, \rho_{1}\right)$, is then simply twice the minor arc-length along the great circle connecting $\sqrt{\rho_{0}}$ and $\sqrt{\rho_{1}}$ (the factor of two being inherited from (9)), i.e.,

(10) $d_{\mathrm{FR}}\left(\rho_{0}, \rho_{1}\right) = 2\arccos\left\langle \sqrt{\rho_{0}}, \sqrt{\rho_{1}}\right\rangle = 2\arccos\displaystyle\int_{\mathbb{R}^{n}}\sqrt{\rho_{0}\,\rho_{1}}\;\mathrm{d}x.$

Since any point on the chord joining $\sqrt{\rho_{0}}$ and $\sqrt{\rho_{1}}$ can be represented as $\left(1 - s\right)\sqrt{\rho_{0}} + s\sqrt{\rho_{1}}$, $s \in \left[0, 1\right]$, the *minimal geodesic* connecting $\rho_{0}$ and $\rho_{1}$ is obtained by parameterizing the arc connecting $\sqrt{\rho_{0}}$ and $\sqrt{\rho_{1}}$, i.e.,

(11) $\sqrt{\rho\left(s\right)} = \dfrac{\sin\left(\left(1 - s\right)\phi\right)\sqrt{\rho_{0}} + \sin\left(s\phi\right)\sqrt{\rho_{1}}}{\sin\phi}, \qquad s \in \left[0, 1\right], \qquad \phi := \arccos\left\langle \sqrt{\rho_{0}}, \sqrt{\rho_{1}}\right\rangle.$
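The formulas (10)-(11) can be checked on a grid: under the conventions above, the curve (11) stays normalized, hits both endpoints, and has constant Fisher-Rao speed equal to $d_{\mathrm{FR}}$, so its length is $d_{\mathrm{FR}}$. A numerical sketch with two illustrative univariate Gaussians:

```python
import numpy as np

# Grid check of (10)-(11) for two univariate Gaussians.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
gauss = lambda mean, var: np.exp(-(x - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
rho0, rho1 = gauss(-1.0, 1.0), gauss(1.5, 2.0)

phi = np.arccos(np.sum(np.sqrt(rho0 * rho1)) * dx)   # Bhattacharyya angle
d_FR = 2 * phi                                       # geodesic distance (10)

def rho(s):
    # spherical interpolation (11) between sqrt(rho0) and sqrt(rho1)
    r = (np.sin((1 - s) * phi) * np.sqrt(rho0)
         + np.sin(s * phi) * np.sqrt(rho1)) / np.sin(phi)
    return r**2

# the geodesic stays a PDF for all s in [0, 1] ...
norm_err = max(abs(np.sum(rho(s)) * dx - 1.0) for s in np.linspace(0, 1, 11))

# ... and its Fisher-Rao speed, computed via (9), equals d_FR
eps, s_mid = 1e-5, 0.37
dr = (rho(s_mid + eps) - rho(s_mid - eps)) / (2 * eps)
speed = np.sqrt(np.sum(dr**2 / rho(s_mid)) * dx)
```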

The formulas (9), (10) and (11) are non-parametric in the sense that they do not require the PDFs to have any finite-dimensional parametric coordinate representation. If prior knowledge allows one to consider a known parametric family of PDFs $\rho\left(x; \theta\right)$ with $p$-dimensional parameter vector $\theta = \left(\theta_{1}, \ldots, \theta_{p}\right)^{\top}$, then the Fisher-Rao metric (9) admits a local coordinate representation, given by the *Fisher information matrix* [19]

(12) $F_{ij}\left(\theta\right) := \displaystyle\int_{\mathbb{R}^{n}} \dfrac{\partial \log\rho\left(x; \theta\right)}{\partial\theta_{i}}\,\dfrac{\partial \log\rho\left(x; \theta\right)}{\partial\theta_{j}}\,\rho\left(x; \theta\right)\mathrm{d}x$

(13) $\phantom{F_{ij}\left(\theta\right)} = \mathbb{E}_{\rho}\left[\dfrac{\partial \log\rho}{\partial\theta_{i}}\,\dfrac{\partial \log\rho}{\partial\theta_{j}}\right]$

(14) $\phantom{F_{ij}\left(\theta\right)} = -\,\mathbb{E}_{\rho}\left[\dfrac{\partial^{2} \log\rho}{\partial\theta_{i}\,\partial\theta_{j}}\right].$

The parametric version of the geodesic (11) becomes $\rho\left(x; \theta\left(s\right)\right)$, $s \in \left[0, 1\right]$, where $\theta\left(s\right)$ solves the Euler-Lagrange boundary value problem

(15) $\ddot{\theta}_{k} + \displaystyle\sum_{i, j = 1}^{p}\Gamma_{ij}^{k}\,\dot{\theta}_{i}\,\dot{\theta}_{j} = 0, \qquad k = 1, \ldots, p,$

with $\theta\left(0\right)$ and $\theta\left(1\right)$ given, and $\Gamma_{ij}^{k}$ are Christoffel symbols of the second kind defined via $\Gamma_{ij}^{k} := \dfrac{1}{2}\displaystyle\sum_{l = 1}^{p}\left(F^{-1}\right)_{kl}\left(\dfrac{\partial F_{li}}{\partial\theta_{j}} + \dfrac{\partial F_{lj}}{\partial\theta_{i}} - \dfrac{\partial F_{ij}}{\partial\theta_{l}}\right)$. The squared arc-length is

(16) $\mathrm{d}s^{2} = \displaystyle\sum_{i, j = 1}^{p} F_{ij}\left(\theta\right)\mathrm{d}\theta_{i}\,\mathrm{d}\theta_{j} = \mathrm{d}\theta^{\top}F\left(\theta\right)\mathrm{d}\theta,$

and a parametric version of the geodesic distance (10) is

(17) $d\left(\rho_{0}, \rho_{1}\right) = \displaystyle\int_{0}^{1}\sqrt{\dot{\theta}\left(s\right)^{\top}F\left(\theta\left(s\right)\right)\dot{\theta}\left(s\right)}\;\mathrm{d}s,$

evaluated along the solution of (15).
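For the scalar Gaussian family $\rho\left(x; \mu, P\right)$, the Fisher information matrix (12) is $\mathrm{diag}\left(1/P,\, 1/\left(2P^{2}\right)\right)$, consistent with the Gaussian line element derived below specialized to one dimension. A numerical check (parameter values are illustrative):

```python
import numpy as np

# Fisher information (12) of the scalar Gaussian family rho(x; mu, P),
# computed by numerical integration and finite differences in theta = (mu, P);
# the closed form is F = diag(1/P, 1/(2 P^2)).
x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]

def logrho(mu, P):
    # log-density of N(mu, P); working in log space avoids log(0) in the tails
    return -(x - mu)**2 / (2 * P) - 0.5 * np.log(2 * np.pi * P)

mu0, v0, eps = 0.3, 1.7, 1e-5
r = np.exp(logrho(mu0, v0))
dl_dmu = (logrho(mu0 + eps, v0) - logrho(mu0 - eps, v0)) / (2 * eps)
dl_dP = (logrho(mu0, v0 + eps) - logrho(mu0, v0 - eps)) / (2 * eps)
F = np.array([[np.sum(r * a * b) * dx for b in (dl_dmu, dl_dP)]
              for a in (dl_dmu, dl_dP)])
F_closed = np.diag([1 / v0, 1 / (2 * v0**2)])
```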

Of particular interest to us is the family of multivariate Gaussian PDFs. In this case, for $\theta = \left(\mu, P\right)$ with $\mu \in \mathbb{R}^{n}$ and $P \in \mathbb{S}_{++}^{n}$, (15) can be written as [21, Theorem 6.1]

(18a) $\ddot{\mu} = \dot{P}\,P^{-1}\dot{\mu},$

(18b) $\ddot{P} = \dot{P}\,P^{-1}\dot{P} - \dot{\mu}\,\dot{\mu}^{\top},$

with boundary conditions $\mu\left(0\right) = \mu_{0}$, $\mu\left(1\right) = \mu_{1}$, $P\left(0\right) = P_{0}$, and $P\left(1\right) = P_{1}$. Furthermore, (16) becomes

(19) $\mathrm{d}s^{2} = \mathrm{d}\mu^{\top}P^{-1}\,\mathrm{d}\mu + \dfrac{1}{2}\,\mathrm{tr}\left(\left(P^{-1}\,\mathrm{d}P\right)^{2}\right).$

The following special cases are well-known [20, 21, 22]:

(20) $d\left(\mathcal{N}\left(\mu_{0}, P\right), \mathcal{N}\left(\mu_{1}, P\right)\right) = \sqrt{\left(\mu_{1} - \mu_{0}\right)^{\top}P^{-1}\left(\mu_{1} - \mu_{0}\right)},$

which is also the Mahalanobis distance [23], and that

(21) $d\left(\mathcal{N}\left(\mu, P_{0}\right), \mathcal{N}\left(\mu, P_{1}\right)\right) = \dfrac{1}{\sqrt{2}}\left\|\log\left(P_{0}^{-1/2}\,P_{1}\,P_{0}^{-1/2}\right)\right\|_{\mathrm{F}}$

(22) $\phantom{d\left(\mathcal{N}\left(\mu, P_{0}\right), \mathcal{N}\left(\mu, P_{1}\right)\right)} = \sqrt{\dfrac{1}{2}\displaystyle\sum_{i = 1}^{n}\left(\log\lambda_{i}\right)^{2}},$

where $\lambda_{i}$, $i = 1, \ldots, n$, are the eigenvalues of $P_{0}^{-1}P_{1}$. To the best of our knowledge, no closed-form expression is known for $d\left(\mathcal{N}\left(\mu_{0}, P_{0}\right), \mathcal{N}\left(\mu_{1}, P_{1}\right)\right)$.
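The equality of (21) and (22) follows since $P_{0}^{-1}P_{1}$ and $P_{0}^{-1/2}P_{1}P_{0}^{-1/2}$ are similar and hence share eigenvalues; a numerical sketch with random symmetric positive definite test matrices:

```python
import numpy as np
from scipy.linalg import logm, sqrtm

# Check (21) == (22) on random symmetric positive definite P0, P1.
rng = np.random.default_rng(3)
n = 4

def spd():
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

P0, P1 = spd(), spd()
S = np.linalg.inv(np.real(sqrtm(P0)))                  # P0^{-1/2}
d21 = np.sqrt(0.5) * np.linalg.norm(logm(S @ P1 @ S), 'fro')

lam = np.linalg.eigvals(np.linalg.inv(P0) @ P1).real   # eigenvalues of P0^{-1} P1
d22 = np.sqrt(0.5 * np.sum(np.log(lam)**2))
```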

## III Proximal Recursion with respect to Fisher-Rao Metric

In this Section, we consider approximating the continuous-time stochastic estimator via the recursive variational scheme

(23) $\varrho_{k}^{+} = \underset{\varrho \,\in\, \mathcal{D}_{2}}{\arg\inf}\left\{\dfrac{1}{2}\,d_{\mathrm{FR}}^{2}\left(\varrho, \varrho_{k}^{-}\right) + h\,\Phi\left(\varrho\right)\right\},$

where $k \in \mathbb{N}$, $h > 0$ is the step-size, and $\varrho_{k}^{-}$ and $\varrho_{k}^{+}$ are the approximations of the prior and posterior PDFs, respectively. We choose the functional $\Phi$ as in (6), and focus on the linear Gaussian case with process and measurement models

(24a) $\mathrm{d}x\left(t\right) = A\,x\left(t\right)\mathrm{d}t + B\,\mathrm{d}w\left(t\right),$

(24b) $\mathrm{d}z\left(t\right) = C\,x\left(t\right)\mathrm{d}t + \mathrm{d}v\left(t\right),$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times q}$, and $C \in \mathbb{R}^{m \times n}$. It is well-known that $\rho^{+} = \mathcal{N}\left(\mu^{+}\left(t\right), P^{+}\left(t\right)\right)$ and the optimal estimator is given by the *Kalman-Bucy filter* [12], consisting of the following SDE-ODE system:

(25a) $\mathrm{d}\mu^{+} = A\,\mu^{+}\,\mathrm{d}t + K\left(t\right)\left(\mathrm{d}z - C\,\mu^{+}\,\mathrm{d}t\right),$

(25b) $\dot{P}^{+} = A P^{+} + P^{+} A^{\top} + B B^{\top} - P^{+} C^{\top} R^{-1} C P^{+},$

where $K\left(t\right) := P^{+}\left(t\right) C^{\top} R^{-1}$ is the Kalman gain.
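Euler stepping of the covariance equation (25b) drives $P^{+}$ to the stabilizing solution of the corresponding algebraic Riccati equation; a sketch on an arbitrary stable test system (the matrices are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Euler-discretize Pdot = A P + P A' + B B' - P C' R^{-1} C P   (25b)
# and compare its steady state with the filter algebraic Riccati equation,
# solved here via the dual formulation solve_continuous_are(A', C', BB', R).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
Rinv = np.linalg.inv(R)

h, P = 1e-3, np.eye(2)
for _ in range(20_000):                      # integrate up to t = 20
    P = P + h * (A @ P + P @ A.T + B @ B.T - P @ C.T @ Rinv @ C @ P)

P_are = solve_continuous_are(A.T, C.T, B @ B.T, R)
errP = np.linalg.norm(P - P_are)
```

Any fixed point of the Euler iteration solves the algebraic Riccati equation exactly, so the discrepancy is only the (geometrically decaying) convergence residual.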

We next demonstrate that evaluating the proximal recursion (23) for the linear Gaussian case indeed recovers the Kalman-Bucy filter (25) in the limit $h \downarrow 0$. To carry out the optimization over $\varrho$ in (23), we adopt the two-step strategy proposed in [11]. In the first step, we optimize the objective in (23) over the parameterized subspace $\mathcal{P}_{\mu, P} \subset \mathcal{D}_{2}$. In the second step, we optimize over the subspace parameters $\mu$ and $P$. The main insight behind this strategy comes from the fact that the objective in (23) is a sum of two functionals. If we can find a parameterized subspace of $\mathcal{D}_{2}$ over which the individual $\arg\inf$s of these two functionals match, then that common minimizer must also be the $\arg\inf$ of their sum. We detail these steps below. The ensuing calculations require some technical results collected in the Appendix.

### III-A Two Step Optimization

1) Optimizing over $\mathcal{P}_{\mu, P}$: Let $\varrho_{k}^{-} = \mathcal{N}\left(\mu_{k}^{-}, P_{k}^{-}\right)$ be the prior state PDF at time $t = kh$. From Lemma 1 in the Appendix, we know

(26) $\underset{\varrho \,\in\, \mathcal{P}_{\mu, P}}{\arg\inf}\; d_{\mathrm{FR}}\left(\varrho, \varrho_{k}^{-}\right) = \mathcal{N}\left(\mu, P\right).$

On the other hand, for any $\varrho \in \mathcal{P}_{\mu, P}$, we have

(27) $\Phi\left(\varrho\right) = \dfrac{1}{2}\left(y_{k} - C\mu\right)^{\top}R^{-1}\left(y_{k} - C\mu\right) + \dfrac{1}{2}\,\mathrm{tr}\left(C^{\top}R^{-1}C\,P\right),$

i.e., $\Phi$ is constant over $\mathcal{P}_{\mu, P}$.

Consequently,

(28) $\underset{\varrho \,\in\, \mathcal{P}_{\mu, P}}{\arg\inf}\left\{\dfrac{1}{2}\,d_{\mathrm{FR}}^{2}\left(\varrho, \varrho_{k}^{-}\right) + h\,\Phi\left(\varrho\right)\right\} = \mathcal{N}\left(\mu, P\right),$

and the corresponding infimum value is

$\dfrac{1}{2}\,d_{\mathrm{FR}}^{2}\left(\mathcal{N}\left(\mu, P\right), \mathcal{N}\left(\mu_{k}^{-}, P_{k}^{-}\right)\right) + h\,\Phi\left(\mathcal{N}\left(\mu, P\right)\right),$

which is not convenient for our next step due to the lack of availability of a closed-form expression for $d\left(\mathcal{N}\left(\mu, P\right), \mathcal{N}\left(\mu_{k}^{-}, P_{k}^{-}\right)\right)$. This can be circumvented as follows. Instead of choosing $\mathcal{P}_{\mu, P}$ with two free parameters, we choose two different parameterized subspaces, viz. $\mathcal{P}_{\mu, P_{k}^{-}}$ and $\mathcal{P}_{\mu_{k}^{-}, P}$, each with one free parameter. We next carry out the two-step optimization strategy for each of the two one-parameter subspaces.

For the subspace $\mathcal{P}_{\mu, P_{k}^{-}}$, the $\arg\inf$ in (28) is $\mathcal{N}\left(\mu, P_{k}^{-}\right)$, and using (20), the corresponding infimum is

(29) $\dfrac{1}{2}\left(\mu - \mu_{k}^{-}\right)^{\top}\left(P_{k}^{-}\right)^{-1}\left(\mu - \mu_{k}^{-}\right) + h\,\Phi\left(\mathcal{N}\left(\mu, P_{k}^{-}\right)\right).$

Likewise, the $\arg\inf$ for $\mathcal{P}_{\mu_{k}^{-}, P}$ is $\mathcal{N}\left(\mu_{k}^{-}, P\right)$, and using the notation (21), the corresponding infimum is

(30) $\dfrac{1}{4}\left\|\log\left(\left(P_{k}^{-}\right)^{-1/2}P\left(P_{k}^{-}\right)^{-1/2}\right)\right\|_{\mathrm{F}}^{2} + h\,\Phi\left(\mathcal{N}\left(\mu_{k}^{-}, P\right)\right).$

2) Optimizing over $\left(\mu, P\right)$: Equating the partial derivative of (29) w.r.t. $\mu$ to zero, and setting $\mu = \mu_{k}^{+}$ in the resulting equation, we obtain

(31) $\mu_{k}^{+} = \mu_{k}^{-} + h\,P_{k}^{-}\,C^{\top}R^{-1}\left(y_{k} - C\,\mu_{k}^{+}\right).$

On the other hand, equating the partial derivative of (30) w.r.t. $P$ to zero, using Theorem 1 in the Appendix, and then setting $P = P_{k}^{+}$ in the resulting algebraic equation, we get

(32) $\left(P_{k}^{+}\right)^{-1}\log\left(P_{k}^{+}\left(P_{k}^{-}\right)^{-1}\right) = -\,h\,C^{\top}R^{-1}C.$

Pre-multiplying both sides of (32) by $P_{k}^{+}$ results in

(33) $\log\left(P_{k}^{+}\left(P_{k}^{-}\right)^{-1}\right) = -\,h\,P_{k}^{+}\,C^{\top}R^{-1}C.$

Interestingly, equations (31) and (33) are the same as those obtained (see equations (35) and (36) in [11]) by optimizing (5) over the same subspaces with the same $\Phi$ as in (6), but with $d$ chosen to be the Kullback-Leibler divergence.
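Taking the updates (31) and (33) as written above, the covariance step can be computed by a fixed-point iteration and checked for first-order consistency with the measurement (Riccati) term of (25b); a numerical sketch with synthetic matrices (the fixed-point solver is an illustrative device, not the paper's algorithm):

```python
import numpy as np
from scipy.linalg import expm

# Solve log(P+ (P-)^{-1}) = -h P+ C' R^{-1} C   (cf. (33)) by the fixed point
#   P+ <- expm(-h P+ S) P-,   with S := C' R^{-1} C,
# and check (P+ - P-)/h ~= -P- S P-, the measurement term of the Riccati flow.
rng = np.random.default_rng(4)
n, m, h = 3, 2, 1e-4
G = rng.standard_normal((n, n))
Pm = G @ G.T + np.eye(n)                  # prior covariance P_k^-
C = rng.standard_normal((m, n))
S = C.T @ np.eye(m) @ C                   # C' R^{-1} C with R = I

P = Pm.copy()
for _ in range(50):
    P = expm(-h * P @ S) @ Pm

rate = (P - Pm) / h
target = -Pm @ S @ Pm
rel = np.linalg.norm(rate - target) / np.linalg.norm(target)
```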

### III-B Recovering the Kalman-Bucy Filter

We recall two basic relations (equations (29) and (30) in [11]) for the propagation step, obtained by discrete time-stepping of (4) with the Wasserstein-2 distance as $d$ and the free-energy functional as $\Phi$, for the linear Gaussian process model (24a):

(34) $\mu_{k}^{-} = \mu_{k-1}^{+} + h\,A\,\mu_{k-1}^{+},$

(35) $P_{k}^{-} = P_{k-1}^{+} + h\left(A P_{k-1}^{+} + P_{k-1}^{+} A^{\top} + B B^{\top}\right).$

We refer the reader to [11, Section III.B] for the details of their derivations. Intuitively, (34) and (35) can be viewed as Euler discretizations of the well-known mean and covariance (Lyapunov) ODEs for uncertainty propagation.
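Since any fixed point of the Euler step (35) solves the Lyapunov equation exactly, iterating (34)-(35) on a stable test system converges to the stationary mean and covariance; a sketch (the matrices are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Iterate the Euler steps (34)-(35) for a stable pair (A, B); the mean decays
# to zero and the covariance converges to the solution of A P + P A' + BB' = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
h = 1e-3
mu = np.array([1.0, -1.0])
P = np.zeros((2, 2))
for _ in range(20_000):
    mu = mu + h * (A @ mu)                       # (34)
    P = P + h * (A @ P + P @ A.T + B @ B.T)      # (35)

P_inf = solve_continuous_lyapunov(A, -B @ B.T)   # A X + X A' = -BB'
err_mu = np.linalg.norm(mu)
err_P = np.linalg.norm(P - P_inf)
```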

Combining (31) and (33) with (34) and (35), and letting $h \downarrow 0$, recovers (25a) and (25b), respectively. Thus we have demonstrated that the measurement update step in the Kalman-Bucy filter (25) can be interpreted as the gradient descent of the functional (6) with respect to the metric $d_{\mathrm{FR}}$ for $h \downarrow 0$. This complements our earlier result [11, Section III] showing that the propagation step can be interpreted as the gradient descent of a certain free energy functional with respect to the Wasserstein-2 metric.

## Iv Conclusions

This paper contributes to an emerging research program that views the filtering equations as gradient flows, or steepest descent, on the manifold of probability density functions. For gradient descent to be meaningful in infinite dimensions, one requires a metric with respect to which distances between probability density functions are to be measured. By using the geodesic distance induced by the Fisher-Rao metric, we have shown equivalence between a discrete time-stepping procedure for gradient descent on the manifold of conditional probability density functions and the filtering equations in the linear Gaussian setting. Here, our intent has been to understand the underlying geometric implications, and to work out the theoretical details to recover known facts. Our hope is that the results in this paper will help motivate developing these ideas in the nonlinear filtering setting and solving them via proximal algorithms [10].

## Appendix

In this Appendix, we collect several technical results that are used in Section III.

###### Lemma 1

Given $\mu \in \mathbb{R}^{n}$ and $P \in \mathbb{S}_{++}^{n}$,

$\underset{\varrho \,\in\, \mathcal{P}_{\mu, P}}{\arg\inf}\; d_{\mathrm{FR}}\left(\varrho, \mathcal{N}\left(\bar{\mu}, \bar{P}\right)\right) = \mathcal{N}\left(\mu, P\right), \qquad \text{for any } \bar{\mu} \in \mathbb{R}^{n},\; \bar{P} \in \mathbb{S}_{++}^{n}.$

This is a consequence of the Cramér-Rao inequality; see Theorem 20 in [24, p. 1512], and also Section 1 in [25].

###### Lemma 2

The Jacobians of the matrix-valued functions $X^{-1}$ for $X \in \mathbb{S}_{++}^{n}$, $\log X$ for $X \in \mathbb{S}_{++}^{n}$, and $MXN$ for real matrices $M, X, N$ of sizes such that the product is defined, are respectively given by

(i) $\mathrm{D}\left(X^{-1}\right) = -\,X^{-1} \otimes X^{-1}$,

(ii) $\mathrm{D}\left(\log X\right) = \displaystyle\int_{0}^{1}\left(\left(sX + \left(1 - s\right)I_{n}\right) \otimes \left(sX + \left(1 - s\right)I_{n}\right)\right)^{-1}\mathrm{d}s$,

(iii) $\mathrm{D}\left(MXN\right) = N^{\top} \otimes M$.

(i) Applying the differential operator to both sides of $X X^{-1} = I_{n}$ results in $\mathrm{d}\left(X^{-1}\right) = -\,X^{-1}\left(\mathrm{d}X\right)X^{-1}$. Applying $\mathrm{vec}\left(\cdot\right)$ to both sides of this resulting expression, using (7), and noticing that $X$ is symmetric, we get

(36) $\mathrm{vec}\left(\mathrm{d}\left(X^{-1}\right)\right) = -\left(X^{-1} \otimes X^{-1}\right)\mathrm{vec}\left(\mathrm{d}X\right),$

and the Jacobian identification rule gives the statement.

(ii) Recall that

(37) $\mathrm{d}\left(\log X\right) = \displaystyle\int_{0}^{1}\left(sX + \left(1 - s\right)I_{n}\right)^{-1}\left(\mathrm{d}X\right)\left(sX + \left(1 - s\right)I_{n}\right)^{-1}\mathrm{d}s.$

Applying $\mathrm{vec}\left(\cdot\right)$ to both sides of (37) yields

(38) $\mathrm{vec}\left(\mathrm{d}\left(\log X\right)\right) = \displaystyle\int_{0}^{1}\left(\left(sX + \left(1 - s\right)I_{n}\right) \otimes \left(sX + \left(1 - s\right)I_{n}\right)\right)^{-1}\mathrm{d}s\;\,\mathrm{vec}\left(\mathrm{d}X\right),$

where the last step is due to the fact that the inverse of a Kronecker product equals the Kronecker product of the inverses (in the same order), and that $X$ is symmetric. The statement follows.

(iii) See [26, Appendix A, Lemma 3].
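Lemma 2(i) can be verified against a central finite difference, using the column-major vec convention; a sketch with a random symmetric positive definite $X$ and a symmetric perturbation direction:

```python
import numpy as np

# Check vec(d(X^{-1})) = -(X^{-1} kron X^{-1}) vec(dX) for symmetric X,
# i.e., Lemma 2(i), against a central finite difference.
rng = np.random.default_rng(5)
n = 3
G = rng.standard_normal((n, n))
X = G @ G.T + n * np.eye(n)
dX = rng.standard_normal((n, n))
dX = (dX + dX.T) / 2                          # symmetric direction

eps = 1e-6
num = (np.linalg.inv(X + eps * dX) - np.linalg.inv(X - eps * dX)) / (2 * eps)

Xinv = np.linalg.inv(X)
vec = lambda A: A.reshape(-1, order='F')      # column-major vectorization
pred = (-np.kron(Xinv, Xinv) @ vec(dX)).reshape(n, n, order='F')
err_i = np.linalg.norm(num - pred)
```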

###### Lemma 3

For $P \in \mathbb{S}_{++}^{n}$ with spectral decomposition $P = V\Lambda V^{\top}$,

$\displaystyle\int_{0}^{1}\left(\left(sP + \left(1 - s\right)I_{n}\right) \otimes \left(sP + \left(1 - s\right)I_{n}\right)\right)^{-1}\mathrm{d}s = \left(V \otimes V\right)\Xi\left(V \otimes V\right)^{\top},$

where $\Xi$ is the $n^{2} \times n^{2}$ diagonal matrix whose entry indexed by the pair $\left(i, j\right)$ equals $\dfrac{\log\lambda_{i} - \log\lambda_{j}}{\lambda_{i} - \lambda_{j}}$ for $\lambda_{i} \neq \lambda_{j}$, and $\lambda_{i}^{-1}$ for $\lambda_{i} = \lambda_{j}$, $\lambda_{i}$ being the diagonal entries of $\Lambda$.

Consider the spectral decomposition $P = V\Lambda V^{\top}$, where $\Lambda$ is a positive diagonal matrix. Then $sP + \left(1 - s\right)I_{n} = V\left(s\Lambda + \left(1 - s\right)I_{n}\right)V^{\top}$. Writing the integrand as $\left(V \otimes V\right)\left(\left(s\Lambda + \left(1 - s\right)I_{n}\right) \otimes \left(s\Lambda + \left(1 - s\right)I_{n}\right)\right)^{-1}\left(V \otimes V\right)^{\top}$, the integral of interest equals a diagonal matrix conjugated by $V \otimes V$, whose entry indexed by $\left(i, j\right)$ is $\displaystyle\int_{0}^{1}\dfrac{\mathrm{d}s}{\left(s\lambda_{i} + 1 - s\right)\left(s\lambda_{j} + 1 - s\right)} = \dfrac{\log\lambda_{i} - \log\lambda_{j}}{\lambda_{i} - \lambda_{j}}$, to be read as $\lambda_{i}^{-1}$ when $\lambda_{i} = \lambda_{j}$. Hence the result.

###### Theorem 1

For $P_{0}, P \in \mathbb{S}_{++}^{n}$,

$\dfrac{\partial}{\partial P}\left\|\log\left(P_{0}^{-1/2}\,P\,P_{0}^{-1/2}\right)\right\|_{\mathrm{F}}^{2} = 2\,P^{-1}\log\left(P\,P_{0}^{-1}\right).$

Let $M := P_{0}^{-1/2}\,P\,P_{0}^{-1/2}$ and $L := \log M$. From (22), it is clear that our objective is to compute $\partial\,\mathrm{tr}\left(L^{2}\right)/\partial P$.
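Taking the gradient formula in Theorem 1 as stated above, it can be checked by a finite-difference directional derivative (random symmetric positive definite test matrices; Python with NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import logm, sqrtm

# Check d/dP ||log(P0^{-1/2} P P0^{-1/2})||_F^2 = 2 P^{-1} log(P P0^{-1})
# by comparing tr(Grad H) with a central difference along a symmetric H.
rng = np.random.default_rng(6)
n = 3

def spd():
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

P0, P = spd(), spd()
S = np.real(sqrtm(np.linalg.inv(P0)))              # P0^{-1/2}

def f(Q):
    return np.linalg.norm(logm(S @ Q @ S), 'fro')**2

Grad = 2 * np.linalg.inv(P) @ np.real(logm(P @ np.linalg.inv(P0)))
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                                  # symmetric direction
eps = 1e-6
num_dir = (f(P + eps * H) - f(P - eps * H)) / (2 * eps)
pred_dir = np.trace(Grad @ H)
err_T = abs(num_dir - pred_dir)
```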