Implement Kalman Filter Using NumPy in Python

In this tutorial, you’ll learn how to implement the Kalman Filter and its variants using Python and NumPy.

We’ll cover the implementation of the standard Kalman Filter for linear systems, and its extensions – the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) – for nonlinear systems.



Initializing State and Covariance Matrices

First, import NumPy and set up your initial state matrix (x) and initial covariance matrix (P).

The state matrix represents the initial estimate of the system’s state, while the covariance matrix represents the uncertainty of this estimate.

Here’s how to do it:

import numpy as np
x = np.array([[0],    # Position
              [0]])   # Velocity
P = np.array([[1000, 0],
              [0, 1000]])

After running this code, your state matrix x and covariance matrix P are initialized.

The output matrices will look like this:

State Matrix (x):

[[0]
 [0]]

This indicates an initial position and velocity of 0.

Covariance Matrix (P):

[[1000    0]
 [0    1000]]

This shows high uncertainty (1000) in both position and velocity estimates.

These matrices form the foundation for your Kalman filter, setting the stage for subsequent predictions and updates.


The Prediction Step

In the prediction step of the Kalman filter, you’ll predict the next state of the system and estimate the error covariance.

This involves using the state transition matrix (often denoted as F) and the process noise covariance matrix (denoted as Q).

These matrices help in forecasting the next state and updating the uncertainty associated with the prediction.

# State Transition Matrix (assuming constant velocity model)
F = np.array([[1, 1],
              [0, 1]])

# Process Noise Covariance Matrix
Q = np.array([[1, 0],
              [0, 1]])

# Predicted State Estimate
x = np.dot(F, x)

# Predicted Estimate Covariance
P = np.dot(F, np.dot(P, F.T)) + Q

After executing this code, the predicted state (x) and the error covariance (P) are updated as follows:

Predicted State (x):

[[0]
 [0]]

This indicates that the predicted position and velocity remain at 0, based on the initial state and constant velocity model.

Predicted Error Covariance (P):

[[2001    1000]
 [1000    1001]]

The error covariance matrix is now updated, reflecting the increased uncertainty after prediction.


The Update Step

The update step in a Kalman filter is where you incorporate new measurements to refine your predictions.

It involves calculating the Kalman Gain (K), updating the state estimate, and revising the error covariance.

This step is important in merging the prediction with real-world measurements to achieve more accurate estimates.

Here’s how you can execute these tasks using NumPy:

# Measurement Matrix
H = np.array([[1, 0]])

# Measurement Covariance
R = np.array([[1]])

# Measurement (example: position)
z = np.array([[5]])

# Kalman Gain Calculation
S = np.dot(H, np.dot(P, H.T)) + R
K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))

# State Update
y = z - np.dot(H, x)
x = x + np.dot(K, y)

# Error Covariance Update
I = np.eye(2)
P = np.dot(I - np.dot(K, H), P)

After running this code, you will observe the following updates:

Kalman Gain (K):

[[0.9995]
 [0.4995]]

This matrix optimally balances the prediction and the measurement; the value near 1 in the first row means the filter trusts the position measurement heavily, given the large initial uncertainty.

Updated State (x):

[[4.9975]
 [2.4975]]

The state estimate now reflects the new measurement of 5, adjusting the initial predictions.

Updated Error Covariance (P):

[[1.0      0.5]
 [0.5    501.5]]

The position uncertainty has collapsed now that a position measurement has been incorporated, while the velocity uncertainty remains large because velocity has not yet been observed directly.


Implement Kalman Filter

In this section, you’ll define functions for the initialization, prediction, and update steps, and then integrate these into a complete Kalman Filter using NumPy.

Initialization Function:

def initialize_kalman():
    x = np.array([[0], [0]])  # Initial state
    P = np.array([[1000, 0], [0, 1000]])  # Initial covariance matrix
    return x, P

Prediction Function:

def predict(x, P, F, Q):
    x = np.dot(F, x)
    P = np.dot(F, np.dot(P, F.T)) + Q
    return x, P

Update Function:

def update(x, P, H, R, z):
    S = np.dot(H, np.dot(P, H.T)) + R
    K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))
    y = z - np.dot(H, x)
    x = x + np.dot(K, y)
    I = np.eye(H.shape[1])
    P = np.dot(I - np.dot(K, H), P)
    return x, P

Integrating Functions into a Complete Kalman Filter

Now, integrate these functions for a complete cycle of the Kalman Filter:

x, P = initialize_kalman()

# Prediction Matrices
F = np.array([[1, 1], [0, 1]])  # Assuming constant velocity model
Q = np.array([[1, 0], [0, 1]])  # Process noise covariance

# Update Matrices
H = np.array([[1, 0]])  # Measurement matrix
R = np.array([[1]])     # Measurement covariance
z = np.array([[5]])     # New measurement

# Kalman Filter Cycle
x, P = predict(x, P, F, Q)  # Prediction step
x, P = update(x, P, H, R, z)  # Update step

After running this cycle, x and P will contain the updated state and covariance matrices.
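To see the filter actually converge, you can bundle the pieces above into one self-contained script and run the predict/update cycle over a short sequence of noisy position readings. The measurement values below are illustrative, chosen to mimic an object moving at roughly 2 units per step:

```python
import numpy as np

def initialize_kalman():
    x = np.array([[0.0], [0.0]])                    # Initial state
    P = np.array([[1000.0, 0.0], [0.0, 1000.0]])    # Initial covariance
    return x, P

def predict(x, P, F, Q):
    x = np.dot(F, x)
    P = np.dot(F, np.dot(P, F.T)) + Q
    return x, P

def update(x, P, H, R, z):
    S = np.dot(H, np.dot(P, H.T)) + R
    K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))
    y = z - np.dot(H, x)
    x = x + np.dot(K, y)
    I = np.eye(P.shape[0])
    P = np.dot(I - np.dot(K, H), P)
    return x, P

F = np.array([[1, 1], [0, 1]])   # Constant velocity model
Q = np.eye(2)                    # Process noise covariance
H = np.array([[1, 0]])           # Only position is measured
R = np.array([[1]])              # Measurement covariance

x, P = initialize_kalman()
# Illustrative noisy position readings of an object moving ~2 units/step
measurements = [5.1, 7.0, 8.9, 11.2, 13.0]
for z_val in measurements:
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, H, R, np.array([[z_val]]))
    print(f"position={x[0, 0]:.2f}  velocity={x[1, 0]:.2f}")
```

After a few cycles the velocity estimate settles near 2 even though velocity is never measured directly, because the filter infers it from successive position readings.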


Handling Nonlinear Systems

When dealing with nonlinear systems, standard Kalman Filters are not sufficient.

In these cases, you turn to variants like the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), which are designed to handle nonlinearity.

Extended Kalman Filter (EKF)

The EKF addresses nonlinearity by linearizing the system around the current estimate. This is done using the Jacobian matrix, which provides a linear approximation of the system’s behavior.

EKF Steps:

  1. State and Covariance Prediction: Similar to the standard Kalman Filter, but with nonlinear state transition and observation models.
  2. Jacobian Matrix Calculation: Compute the Jacobian matrices for the nonlinear state transition and observation models.
  3. Update Step: Use the Jacobian matrices to update the state and covariance estimates.

Here’s a basic structure for an EKF:

def ekf_predict(x, P, F, Q):
    # Predicted state and covariance (a nonlinear prediction model can be substituted)
    x = np.dot(F, x)
    P = np.dot(F, np.dot(P, F.T)) + Q
    return x, P

def ekf_update(x, P, H, R, z):
    # Update step using Jacobian matrices
    S = np.dot(H, np.dot(P, H.T)) + R
    K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))
    y = z - np.dot(H, x)  # a nonlinear observation model can be substituted here
    x = x + np.dot(K, y)
    I = np.eye(H.shape[1])
    P = np.dot(I - np.dot(K, H), P)
    return x, P
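As a concrete, hypothetical example of the linearization step: suppose the sensor measures range to the object from a known perpendicular offset d, so the observation model is h(x) = sqrt(p² + d²). The functions `h` and `jacobian_H` below are illustrative names, not part of the tutorial code above; the key point is that the Jacobian is re-evaluated at the current estimate before every update:

```python
import numpy as np

d = 10.0  # known sensor offset (hypothetical scenario)

def h(x):
    # Nonlinear observation: range from a sensor offset by d
    return np.array([[np.sqrt(x[0, 0]**2 + d**2)]])

def jacobian_H(x):
    # Partial derivatives of h with respect to [position, velocity]
    p = x[0, 0]
    return np.array([[p / np.sqrt(p**2 + d**2), 0.0]])

def ekf_update_range(x, P, R, z):
    H = jacobian_H(x)                       # linearize around the current estimate
    S = np.dot(H, np.dot(P, H.T)) + R
    K = np.dot(P, np.dot(H.T, np.linalg.inv(S)))
    y = z - h(x)                            # innovation uses the nonlinear model
    x = x + np.dot(K, y)
    I = np.eye(P.shape[0])
    P = np.dot(I - np.dot(K, H), P)
    return x, P

x = np.array([[5.0], [2.0]])
P = np.array([[4.0, 0.0], [0.0, 4.0]])
R = np.array([[0.5]])
z = np.array([[11.5]])   # measured range (illustrative value)
x, P = ekf_update_range(x, P, R, z)
```

Note that the innovation `y` is computed with the full nonlinear model `h`, while the Jacobian `H` is only used for the gain and covariance algebra.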

Unscented Kalman Filter (UKF)

The UKF tackles nonlinearity by using a deterministic sampling technique.

It propagates multiple sample points (sigma points) through the nonlinear functions, providing a better approximation compared to EKF.

UKF Steps:

  1. Sigma Point Generation: Generate a set of points around the current state estimate.
  2. State and Covariance Prediction: Propagate the sigma points through the nonlinear state transition function.
  3. Update Step: Update the state and covariance estimates based on the sigma points and new measurement.

A simplified structure for a UKF looks like this:

def ukf_predict(x, P, Q):
    # Sigma point generation and prediction steps
    # (details omitted for brevity)
    return x, P

def ukf_update(x, P, R, z):
    # Update step based on sigma points and new measurement
    # (details omitted for brevity)
    return x, P
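To make step 1 less abstract, here is a minimal sketch of sigma point generation using one common construction, Van der Merwe's scaled sigma points. This is only the sampling step: a complete UKF also needs the associated mean/covariance weights and the unscented transform of the propagated points, which are omitted here. The parameter names `alpha` and `kappa` are standard tuning knobs for this construction, not part of the code above:

```python
import numpy as np

def generate_sigma_points(x, P, alpha=1e-3, kappa=0.0):
    # Van der Merwe scaled sigma points: 2n + 1 points spread around the mean
    n = x.shape[0]
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of (n + lam) * P via Cholesky decomposition
    sqrt_P = np.linalg.cholesky((n + lam) * P)
    sigma = np.zeros((2 * n + 1, n))
    sigma[0] = x[:, 0]                       # the mean itself
    for i in range(n):
        sigma[i + 1] = x[:, 0] + sqrt_P[:, i]
        sigma[n + i + 1] = x[:, 0] - sqrt_P[:, i]
    return sigma

x = np.array([[0.0], [0.0]])
P = np.array([[1.0, 0.0], [0.0, 1.0]])
points = generate_sigma_points(x, P)
```

Because the points are placed symmetrically around the mean, their average recovers the original state estimate; each point would then be pushed through the nonlinear transition function inside `ukf_predict`.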