Problem

Tracking maneuvering targets (aircraft, missiles, vehicles) in radar/sonar systems requires estimating a continuous kinematic state (position, velocity, acceleration) whose dynamics switch among a finite set of motion regimes (constant velocity, coordinated turn, accelerating). The joint problem of estimating the continuous state together with the discrete mode is the hybrid state estimation problem, and the associated dynamical system is a jump Markov linear system (JMLS). Exact optimal filtering for a JMLS suffers from exponential growth of mode-history hypotheses (r^t Gaussian components after t steps for r modes), so the field has accumulated a family of approximate algorithms — first-order Generalized Pseudo-Bayesian (GPB1), second-order Generalized Pseudo-Bayesian (GPB2), and the Interacting Multiple Model (IMM) — that differ in how they collapse the hypothesis tree back to a fixed bank of r Gaussians per step.

Key idea

The Interacting Multiple Model (IMM) algorithm recovers nearly the same tracking accuracy as GPB2 (which carries r^2 Kalman filters per step) at essentially the cost of GPB1 (r filters per step). The trick is the interaction step at the beginning of every cycle: instead of running each mode-conditional filter from its own previous posterior, IMM forms r mixed initial conditions, one for each mode, where the mixed mean and covariance for mode j is a Markov-weighted average over all r previous mode-conditional posteriors. Each filter then runs a single Kalman update from its mixed prior. The overall posterior is a Markov-weighted combination of the r filter outputs. This produces an algorithm whose per-step cost is linear in r (like GPB1) but whose effective hypothesis depth is two steps (like GPB2), because each filter’s mixed prior already encodes information from every mode at the previous step.
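The Markov-weighted averaging above is moment-matched collapse of a Gaussian mixture — the primitive that GPB1, GPB2, and IMM all use to prune hypotheses. A minimal sketch (the helper name `collapse` is mine, not from the survey):

```python
import numpy as np

def collapse(weights, means, covs):
    """Moment-match a Gaussian mixture to a single Gaussian.

    weights: (r,), means: (r, n), covs: (r, n, n).
    Returns the mixture mean and the mixture covariance, which adds a
    spread-of-means term to the weighted average of the component covariances.
    """
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    covs = np.asarray(covs, float)
    mean = weights @ means                   # sum_i w_i m_i
    cov = np.zeros_like(covs[0])
    for w, m, P in zip(weights, means, covs):
        d = (m - mean)[:, None]
        cov += w * (P + d @ d.T)             # spread of means inflates cov
    return mean, cov

# Equal-weight mixture of N(0, 1) and N(2, 1): mean 1, covariance 1 + 1.
m, P = collapse([0.5, 0.5], [[0.0], [2.0]], [[[1.0]], [[1.0]]])
```

Note the spread-of-means term: a mixture of two well-separated modes collapses to a single Gaussian with inflated covariance, which is exactly the approximation error the Limitations section flags for multi-modal posteriors.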

Method

  • System model: discrete-time jump Markov linear-Gaussian system. Continuous state x_k evolves under mode-conditional dynamics x_k = F(m_k) x_{k-1} + G(m_k) w_k with mode-dependent process noise; observations z_k = H(m_k) x_k + v_k. Mode index m_k ∈ {1,…,r} follows a homogeneous Markov chain with transition matrix Π.
  • GPB1: maintain a single Gaussian summary at each step; at each cycle run r Kalman filters in parallel from the same prior, then collapse the r posteriors to one Gaussian by mode-probability-weighted moment matching. Cost: r KFs/step.
  • GPB2: maintain r mode-conditional Gaussian summaries at each step; at each cycle run r^2 Kalman filters (one per (previous mode, current mode) pair), then collapse each column of the r×r grid back to r mode-conditional Gaussians. Cost: r^2 KFs/step. Hypothesis depth: 2 steps.
  • IMM: maintain r mode-conditional Gaussian summaries at each step. At cycle start, interact: form r mixed priors, where the mixed prior for mode j is the Markov-weighted moment-matched combination of the r previous mode-conditional posteriors with weights μ_{i|j} = Π_{ij} μ_i / Σ_k Π_{kj} μ_k. Run r Kalman filters (one per current mode) from the mixed priors. Output is the mode-probability-weighted combination of the r filter outputs. Cost: r KFs/step.
  • Mode probability update: each filter produces an innovation likelihood; mode probabilities are updated by Bayes’ rule using these likelihoods together with the predicted mode probabilities Σ_i Π_{ij} μ_i.
  • Survey scope: the paper reviews IMM variants developed since Blom & Bar-Shalom (1988), application studies in air traffic control / missile tracking / sensor fusion, and a family of generalizations (variable-structure IMM, IMM with augmented state for target type, IMM with maneuver-onset detection).
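The Method bullets for IMM can be sketched as one cycle of NumPy code. This is an illustrative implementation, not the survey's; `imm_cycle` and the toy 2-mode model below are mine, and the measurement model is assumed shared across modes for brevity:

```python
import numpy as np

def imm_cycle(x, P, mu, Pi, Fs, Qs, H, R, z):
    """One IMM cycle. x: (r, n) prev mode-conditional means; P: (r, n, n)
    covariances; mu: (r,) mode probabilities; Pi[i, j]: transition prob
    from mode i to mode j; Fs, Qs: per-mode dynamics and process noise;
    H, R, z: (shared) measurement model and current measurement."""
    r, n = x.shape
    # 1. Interaction: predicted mode probs c_j and mixing weights mu_{i|j}.
    c = Pi.T @ mu                               # c_j = sum_i Pi[i,j] mu_i
    w = (Pi * mu[:, None]) / c[None, :]         # w[i, j] = mu_{i|j}
    x0 = np.zeros_like(x)
    P0 = np.zeros_like(P)
    for j in range(r):                          # moment-matched mixed priors
        x0[j] = w[:, j] @ x
        for i in range(r):
            d = (x[i] - x0[j])[:, None]
            P0[j] += w[i, j] * (P[i] + d @ d.T)
    # 2. One Kalman predict + update per current mode, from its mixed prior.
    xn = np.zeros_like(x)
    Pn = np.zeros_like(P)
    lik = np.zeros(r)
    for j in range(r):
        xp = Fs[j] @ x0[j]
        Pp = Fs[j] @ P0[j] @ Fs[j].T + Qs[j]
        y = z - H @ xp                          # innovation
        S = H @ Pp @ H.T + R                    # innovation covariance
        K = Pp @ H.T @ np.linalg.inv(S)
        xn[j] = xp + K @ y
        Pn[j] = (np.eye(n) - K @ H) @ Pp
        lik[j] = np.exp(-0.5 * y @ np.linalg.solve(S, y)) / np.sqrt(
            np.linalg.det(2 * np.pi * S))       # Gaussian innovation likelihood
    # 3. Mode probability update by Bayes' rule.
    mu_new = c * lik
    mu_new /= mu_new.sum()
    # 4. Output: moment-matched combination of the r filter outputs.
    x_out = mu_new @ xn
    P_out = np.zeros((n, n))
    for j in range(r):
        d = (xn[j] - x_out)[:, None]
        P_out += mu_new[j] * (Pn[j] + d @ d.T)
    return xn, Pn, mu_new, x_out, P_out

# Toy 2-mode, 2-state (position, velocity) example: quiet vs maneuvering
# mode, distinguished only by process-noise level.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Fs = np.stack([F, F])
Qs = np.stack([0.01 * np.eye(2), 1.0 * np.eye(2)])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
Pi = np.array([[0.95, 0.05], [0.05, 0.95]])
x = np.zeros((2, 2))
P = np.stack([np.eye(2), np.eye(2)])
mu = np.array([0.5, 0.5])
xn, Pn, mu_new, x_out, P_out = imm_cycle(x, P, mu, Pi, Fs, Qs, H, R,
                                         np.array([0.3]))
```

Note the linear-in-r cost: step 2 runs exactly r Kalman filters, while the two-step hypothesis depth lives entirely in the O(r^2) mixing weights of step 1, which are cheap moment computations rather than full filter passes.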

Results

  • GPB2 ≈ IMM in accuracy: across the surveyed Monte Carlo studies, IMM tracking RMSE matches GPB2 within statistical noise for the standard 2-mode and 3-mode maneuvering-target benchmarks.
  • Cost ratio: IMM is cheaper than GPB2 per step (r filters vs r^2). For typical 3-mode tracking the wall-clock advantage is roughly 3×.
  • GPB1 underperforms IMM: GPB1 at the same r-filter cost loses noticeably to IMM during mode transitions, because it lacks the interaction step’s two-step effective memory of the mode history.
  • Production deployment: IMM is established as the de facto standard for maneuvering-target tracking in operational radar systems by 1998; the survey consolidates this as the practical winner of the GPB1 / GPB2 / IMM trade-off.

Limitations

  • The survey is restricted to linear Gaussian mode-conditional dynamics; the extension to nonlinear / non-Gaussian JMLS (where each filter would itself be an EKF, UKF, or particle filter) is mentioned but not the focus.
  • IMM still uses moment matching to collapse the mixed prior, which is exact only for Gaussian mixtures with matched means/covariances; for highly skewed or multi-modal mode-conditional posteriors the approximation can degrade.
  • The Markov chain transition matrix Π is assumed known and time-invariant; online identification of Π is treated as an open problem.
  • IMM scales linearly in r, but the mode set itself is fixed in advance — variable-structure IMM (VS-IMM) is surveyed but adds substantial implementation complexity.

Open questions

  • How to learn or adapt the Markov transition matrix Π online from data?
  • How to extend the IMM interaction trick to nonlinear / non-Gaussian mode-conditional filters without losing the cost advantage?
  • How to choose the mode set automatically when the target’s maneuver repertoire is not known a priori?
  • How to combine IMM with multi-target data association (JPDAF, MHT) without the combinatorial blow-up that nominally arises from the Cartesian product of hypothesis trees?

My take

This survey is the canonical reference for the IMM algorithm and the practical benchmark against which any new switching-state estimation method must be compared. For the present project, the IMM/GPB2 contrast is directly relevant to the Rao-Blackwellized particle filter (RBPF) used for the asset-pricing model: both IMM and RBPF exploit the fact that, conditional on a regime path, the continuous state is linear-Gaussian and a Kalman filter is exact. The crucial difference is the hypothesis bookkeeping. IMM collapses to a fixed bank of r Gaussians per step via moment matching at every cycle; RBPF maintains N full regime-path histories on the particles and does Kalman recursion conditional on each path, only resampling regime histories (never the continuous state). RBPF is therefore strictly more expressive than IMM — it can represent arbitrary mode-history posteriors at the cost of stochastic approximation in regime-history space, where IMM uses deterministic moment-matching that is biased by construction during sustained multi-modality. For low-r problems with weak mode persistence (the IMM sweet spot), IMM is much cheaper. For the present 4-compound-regime asset-pricing model with strong regime persistence and likelihoods that depend on the whole regime history through the Riccati recursion, RBPF is the right choice but the IMM literature still provides the cleanest taxonomy of the cost/accuracy tradeoff that any particle-based scheme must beat.