08 Receding Horizon Drone Racing: Model Predictive Contouring Control¶
In this tutorial we step away from single-shot trajectory optimization and into the world of receding-horizon control. We will implement model predictive contouring control (MPCC), a technique for tracking a reference path as fast as possible. Starting from a simple 2D example with an analytical reference path, we will build up to a full drone racing MPCC that tracks a pre-solved time-optimal trajectory through gates and around obstacles.
Along the way, this tutorial introduces:
- Using OpenSCvx in a receding-horizon setting
- Multi-objective Mayer costs with `ox.Minimize` and `ox.Maximize`, and per-state cost weighting with `lam_cost`
- Cubic spline lookup in the symbolic graph with `ox.Cinterp`
- Two-phase planning: offline trajectory optimization + online MPCC tracking
Model Predictive Contouring Control¶
The Problem¶
Before jumping into the MPCC implementation, let us first introduce the key concepts of the formulation. Readers in a hurry can jump straight to the Dubins Car and drone racing sections and get to coding.
In the previous tutorials we solved trajectory optimization problems offline and obtained high-performance solutions. In practice, however, we may need a closed-loop controller to execute these trajectories in the presence of unmodeled dynamics, environmental uncertainty, unseen obstacles, etc. A natural approach is to track the pre-solved trajectory using a receding-horizon controller to allow for replanning to account for changes in the environment in a closed-loop fashion. In this tutorial we will implement model predictive contouring control (MPCC), a formulation originally introduced by Lam et al. 2010 and applied to drone racing by Romero et al. 2022 (see also Krinner et al. 2024). We will follow along with the implementation of Romero.
Standard MPC formulations track a trajectory sampled at fixed times; the controller iteratively tries to be at a specific position at a specific moment. MPCC instead works with a spatial reference path and lets the optimizer decide how fast to traverse it. This decoupling of where from when gives the controller freedom to choose the best speed profile rather than being locked to a pre-determined time schedule or heuristic.
The Ideal Cost¶
Consider a desired reference path \(\mathbf{p}^d(\theta)\) parametrized by arc length \(\theta\), and denote the system's position at time step \(k\) as \(\mathbf{p}_k\). The contour error, the shortest distance from the current position to the reference path, is defined as:

$$
e_k^c = \min_{\theta} \left\| \mathbf{p}_k - \mathbf{p}^d(\theta) \right\|
$$
The ideal MPCC cost minimizes this contour error while maximizing progress \(\theta_N\) along the path:

$$
J = \sum_{k=0}^{N} q_c \, (e_k^c)^2 - q_\theta \, \theta_N
$$
where \(q_c > 0\) is the contour weight and \(q_\theta > 0\) weights progress maximization (we will minimize our cost term so a negative sign incentivizes progress). However, this formulation is not computationally tractable. Computing \(e_k^c\) requires solving a nested optimization at every time step, making it unsuitable for use inside an online controller.
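To make the intractability concrete, here is a small numerical sketch of what computing the exact contour error entails: a nested 1-D optimization for every query point, at every node, at every solver iteration. The circular path and helper names here are illustrative, not part of the tutorial code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative circular reference path of radius R, parametrized by arc length
R = 3.0

def p_ref(theta):
    return R * np.array([np.cos(theta / R), np.sin(theta / R)])

def true_contour_error(p):
    """Exact contour error: a nested 1-D optimization per query point."""
    res = minimize_scalar(
        lambda th: np.linalg.norm(p - p_ref(th)),
        bounds=(0.0, 2 * np.pi * R),
        method="bounded",
    )
    return res.fun

# For a circle, the closest distance is simply | ||p|| - R |
e_c = true_contour_error(np.array([4.0, 0.0]))   # ~1.0
```

Embedding a `minimize_scalar` call like this inside an online optimizer is exactly what the approximation below avoids.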
Approximating the Contour Error¶
To avoid the nested minimization, Romero et al. introduce an approximate progress variable \(\hat{\theta}\) with its own dynamics:

$$
\hat{\theta}_{k+1} = \hat{\theta}_k + v_{\hat{\theta},k} \, \Delta t
$$
where \(v_{\hat{\theta}}\) is a new virtual control input determined by the optimizer. Since \(\hat{\theta}_N = \hat{\theta}_0 + \sum v_{\hat{\theta},k} \Delta t\), maximizing final progress is equivalent to maximizing the sum of progress rates, giving us the progress cost:

$$
J_\theta = -q_\theta \sum_{k=0}^{N-1} v_{\hat{\theta},k} \, \Delta t
$$
Instead of searching for the closest point on the path, we simply evaluate the reference at \(\hat{\theta}_k\) and work with the resulting position error:

$$
\mathbf{e}_k = \mathbf{p}_k - \mathbf{p}^d(\hat{\theta}_k)
$$
This error vector can be decomposed into two orthogonal components using the unit tangent \(\mathbf{t}(\hat{\theta}_k)\) of the reference path (which has unit norm because the path is arc-length parametrized):
- Lag error \(\hat{\mathbf{e}}^l\): the projection of \(\mathbf{e}\) onto the tangent direction \(\mathbf{t}\), measuring how far the approximate progress \(\hat{\theta}\) is from the true closest point
- Contour error \(\hat{\mathbf{e}}^c\): the component of \(\mathbf{e}\) in the normal plane, approximating the true contour error \(e_k^c\)
The scalar lag error is the dot product:

$$
\hat{e}_k^l = \mathbf{t}(\hat{\theta}_k)^\top \mathbf{e}_k
$$
and the contour error magnitude follows from the Pythagorean relationship \(\mathbf{e} = \hat{\mathbf{e}}^l + \hat{\mathbf{e}}^c\) with the two components orthogonal:

$$
\| \hat{\mathbf{e}}_k^c \|^2 = \| \mathbf{e}_k \|^2 - (\hat{e}_k^l)^2
$$
Then the contour cost can be written as:

$$
J_c = q_c \sum_{k=0}^{N} \| \hat{\mathbf{e}}_k^c \|^2
$$
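The lag/contour decomposition can be sanity-checked with plain NumPy. The sample numbers below are arbitrary:

```python
import numpy as np

# Made-up sample: position, reference point, and unit tangent at theta_hat
p = np.array([3.2, 0.4])
p_ref = np.array([3.0, 0.0])
t_hat = np.array([0.0, 1.0])        # unit tangent (arc-length parametrization)

e = p - p_ref                       # position error
lag = float(t_hat @ e)              # scalar lag error: projection onto the tangent
contour_sq = float(e @ e) - lag**2  # squared contour error via Pythagoras

# Cross-check against the explicit orthogonal decomposition e = e_lag + e_contour
e_contour = e - lag * t_hat
assert np.isclose(contour_sq, float(e_contour @ e_contour))
```

Only dot products and a subtraction are needed per node, with no inner optimization.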
Why the Lag Error Must Be Small¶
These are approximations — \(\hat{\mathbf{e}}^c\) is not the same as the true contour error \(e_k^c\), and \(\hat{\theta}_k\) is not the same as the true optimal progress \(\theta_k^* = \arg\min_\theta \| \mathbf{p}_k - \mathbf{p}^d(\theta) \|\). How good are they?
Intuitively, the approximation quality is controlled entirely by the lag error. When \(\hat{\mathbf{e}}^l = \mathbf{0}\), the position \(\mathbf{p}_k\) lies in the normal plane at \(\hat{\theta}_k\), which means \(\hat{\theta}_k\) is exactly the closest point on the path (\(\hat{\theta}_k = \theta_k^*\)) and \(\| \hat{\mathbf{e}}^c \| = e_k^c\) (see the proof in Romero et al. 2022, Proposition 1).
This gives us a practical recipe: keeping the lag error small keeps the contour error approximation accurate, and it keeps the optimizer "honest", preventing it from simply choosing a high \(v_{\hat{\theta}}\) and letting \(\hat{\theta}\) run ahead of the true closest point. We enforce this by adding the lag error to the cost function with a high weight \(q_l\):

$$
J_l = q_l \sum_{k=0}^{N} (\hat{e}_k^l)^2
$$
The lag term is not just another tracking objective — it is what makes the entire approximation scheme valid.
The MPCC Cost¶
Combining the contour error, lag error, and progress maximization gives us the full MPCC cost:

$$
J = \sum_{k=0}^{N} \left( q_c \, \| \hat{\mathbf{e}}_k^c \|^2 + q_l \, (\hat{e}_k^l)^2 \right) - q_\theta \, \hat{\theta}_N
$$
Encoding as a Multi-Objective Mayer Cost in OpenSCvx¶
As we saw in Tutorial 01, OpenSCvx uses the Mayer form, expressing the cost purely as a function of the final state rather than as running Lagrange costs. The transformation from one form to the other is straightforward: we include the cost terms as integrator states which are summed continuously over the time horizon rather than just at the discrete nodes. We define these new states as the integrated lag error \(s_l\) and integrated contour error \(s_c\):

$$
\dot{s}_l = (\hat{e}^l)^2, \qquad \dot{s}_c = \| \hat{\mathbf{e}}^c \|^2
$$

with \(s_l(0) = s_c(0) = 0\) and add these to the state vector. Similarly, we track the progress \(\hat{\theta}\) as a continuous state defined as the integral of the progress rate \(v_{\hat\theta}\), which is appended to the control vector:

$$
\dot{\hat{\theta}} = v_{\hat{\theta}}
$$
with \(\hat\theta(0)\) initialized to match the current arc-length position along the reference path at \(t=0\).
So far our problems have had a single cost term, typically minimizing \(t_f\). MPCC requires balancing three competing objectives: minimize contour error, minimize lag error, and maximize progress. Minimizing \(s_l(t_f)\) and \(s_c(t_f)\) in the Mayer cost is then equivalent to minimizing the running integrals. Similarly, we maximize the final progress \(\hat{\theta}(t_f)\) to encourage forward motion. We can write the full MPCC cost as follows:

$$
J = q_l \, s_l(t_f) + q_c \, s_c(t_f) - q_\theta \, \hat{\theta}(t_f)
$$
Tip
The cost could mathematically equivalently be constructed as a single integrated state appended to the state vector. We could define a single state \(s\) with dynamics:

$$
\dot{s} = q_l \, (\hat{e}^l)^2 + q_c \, \| \hat{\mathbf{e}}^c \|^2 - q_\theta \, v_{\hat{\theta}}
$$
and append that to the state vector instead. However, for convenience and for ease of tuning we will separate out the individual terms.
A Simple Example: Dubins Car on a Circle¶
Let's build up the MPCC step by step, starting from a simple case: a 2D Dubins car tracking a circular reference path. In this case, the reference is analytical (no discrete points needed), so we can focus entirely on getting the MPCC structure correct before expanding to the more general case.
States¶
The physical states are position and heading, as you'd expect for a Dubins car. The MPCC-specific states are progress, and the two cost integrators:
import numpy as np
import openscvx as ox
n_mpc = 8
horizon_duration = 1.5
R_circle = 3.0
total_arc_length = 2 * np.pi * R_circle
# Physical states
position = ox.State("position", shape=(2,))
position.min = [-10.0, -10.0]
position.max = [10.0, 10.0]
position.initial = [R_circle, 0.0]
position.final = [ox.Free(0.0), ox.Free(0.0)]
heading = ox.State("heading", shape=(1,))
heading.min = [-4 * np.pi]
heading.max = [4 * np.pi]
heading.initial = [0.0]
heading.final = [ox.Free(0.0)]
# MPCC states
progress = ox.State("progress", shape=(1,))
progress.min = [-0.5 * total_arc_length]
progress.max = [1.5 * total_arc_length]
progress.initial = [0.0]
progress.final = [ox.Maximize(0.0)]
lag_sum = ox.State("lag_sum", shape=(1,))
lag_sum.min = [0.0]
lag_sum.max = [1e1]
lag_sum.initial = [0.0]
lag_sum.final = [ox.Minimize(0.0)]
contour_sum = ox.State("contour_sum", shape=(1,))
contour_sum.min = [0.0]
contour_sum.max = [1e1]
contour_sum.initial = [0.0]
contour_sum.final = [ox.Minimize(0.0)]
A few things to note here.
- All the physical states have `ox.Free` finals — in a receding horizon setting, there is no fixed terminal condition.
- `progress.final` is set to `ox.Maximize(0.0)`. This tells OpenSCvx to maximize the final value of this state, which is how we encode the \(-q_\theta \cdot v_{\hat{\theta}}\) term: maximizing the integral of the progress rate is the same as maximizing the final progress.
- `lag_sum.final` and `contour_sum.final` are set to `ox.Minimize(0.0)`. Since these states integrate the squared errors over the horizon, minimizing their final values is equivalent to the running cost we defined above.
- The bounds on `progress` and `heading` are padded beyond a single lap: progress ranges from \(-0.5L\) to \(1.5L\) and heading allows multiple full rotations. This extra room is needed for the warm-starting and guess-shifting strategy in the MPC loop, which we will explain in detail in a later section.
Controls¶
speed = ox.Control("speed", shape=(1,))
speed.min = [0.0]
speed.max = [10.0]
speed.guess = np.full((n_mpc, 1), 5.0)
angular_rate = ox.Control("angular_rate", shape=(1,))
angular_rate.min = [-5.0]
angular_rate.max = [5.0]
progress_rate = ox.Control("progress_rate", shape=(1,))
progress_rate.min = [0.0]
progress_rate.max = [10.0]
progress_rate.guess = np.full((n_mpc, 1), 5.0)
The progress_rate control is the virtual input \(v_{\hat{\theta}}\) from the formulation.
It is constrained to be non-negative so the car can only move forward along the path.
Reference Path and Error Decomposition¶
For a circle of radius \(R\) centered at the origin, the reference path parametrized by arc length \(s\) and its unit tangent are:

$$
\mathbf{p}^d(s) = \begin{bmatrix} R \cos(s/R) \\ R \sin(s/R) \end{bmatrix}, \qquad
\mathbf{t}(s) = \begin{bmatrix} -\sin(s/R) \\ \cos(s/R) \end{bmatrix}
$$
In code, the symbolic expressions for the reference position and tangent are constructed directly from the progress state:
angle = progress[0] / R_circle
p_ref = ox.Concat(
R_circle * ox.Cos(angle),
R_circle * ox.Sin(angle),
)
tangent = ox.Concat(
-ox.Sin(angle),
ox.Cos(angle),
)
The error decomposition follows the formulation exactly:
e = position - p_ref
# Lag: projection onto tangent
lag_scalar = ox.Sum(e * tangent)
lag_cost = lag_scalar**2
# Contour: Pythagorean decomposition
contour_cost = ox.Max(ox.Sum(e * e) - lag_scalar**2, 0.0)
Numerical Considerations
- We use `ox.Sum(e * e)` rather than `ox.linalg.Norm(e)**2` to avoid the derivative singularity \(\partial \| \mathbf{e} \| / \partial \mathbf{e} = \mathbf{e} / \| \mathbf{e} \|\) at \(\mathbf{e} = 0\).
- The `ox.Max(..., 0.0)` clamp handles floating-point cases where the subtraction goes slightly negative.
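A quick NumPy illustration of the singularity, with the two gradients written out by hand:

```python
import numpy as np

e = np.zeros(2)

# Gradient of sum(e*e) is 2e: well-defined everywhere, including e = 0
g_sum_sq = 2 * e

# Gradient of ||e|| is e/||e||: 0/0 at the origin yields NaN
with np.errstate(invalid="ignore"):
    g_norm = e / np.linalg.norm(e)
```

Since the optimizer is expected to drive \(\mathbf{e}\) toward zero, the squared-sum form keeps the linearization well-defined exactly where it matters most.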
Dynamics¶
dynamics = {
"position": ox.Concat(
speed[0] * ox.Sin(heading[0]),
speed[0] * ox.Cos(heading[0]),
),
"heading": angular_rate[0],
"progress": progress_rate,
"lag_sum": lag_cost,
"contour_sum": contour_cost,
}
The first three lines are the physical dynamics and the progress kinematics.
The last two lines are the cost integrators: lag_sum accumulates the lag cost, and contour_sum accumulates the contour cost.
Because these states have ox.Minimize finals, the optimizer will drive these integrals down while simultaneously maximizing progress.
The relative weighting between these objectives is handled separately via lam_cost in the problem setup, keeping the dynamics clean.
Problem Setup¶
states = [position, heading, progress, lag_sum, contour_sum]
controls = [speed, angular_rate, progress_rate]
constraints = []
for state in [position, heading]:
constraints.extend([
ox.ctcs(state <= state.max),
ox.ctcs(state.min <= state),
])
t = ox.Time(
initial=0.0,
final=horizon_duration,
min=0.0,
max=horizon_duration,
uniform_time_grid=True,
)
Q_LAG = 1e0
Q_CONTOUR = 1e-1
Q_PROGRESS = 1e-1
problem_mpc = ox.Problem(
dynamics=dynamics,
states=states,
controls=controls,
time=t,
constraints=constraints,
N=n_mpc,
algorithm={
"autotuner": ox.ConstantProximalWeight(),
"lam_cost": {
"lag_sum": Q_LAG,
"contour_sum": Q_CONTOUR,
"progress": Q_PROGRESS,
},
},
)
Two differences from the single-shot problems in earlier tutorials.
- The time is fixed (`final=horizon_duration` with no `"minimize"`); in this case the horizon length is a design parameter, not an optimization variable.
- The `lam_cost` dictionary assigns a weight to each cost state's Mayer contribution. Following Romero et al., the lag weight `Q_LAG` is highest to keep the progress approximation accurate, while `Q_CONTOUR` and `Q_PROGRESS` are tuned lower. This separates the cost tuning from the dynamics formulation — you can adjust these weights without touching the dynamics dictionary.
Note
We use ox.ConstantProximalWeight as the autotuner here to keep the proximal weights constant. This simplifies tuning and improves reliability during development. You may see better performance with other autotuning methods depending on your problem.
The MPC Loop¶
Now we get to the receding-horizon loop itself. The pattern is: solve the problem, extract the solution, advance the initial conditions by one step, and shift the guess forward for warm-starting.
Initialization¶
Before the first solve we need a feasible initial guess. For the analytical circle we assume a constant speed along the reference and compute position, heading, and controls directly:
def set_initial_guess(theta_start: float = 0.0):
ref_speed = 5.0
arc_guess = np.linspace(
theta_start, theta_start + ref_speed * horizon_duration, n_mpc
)
angle_guess = arc_guess / R_circle
position.guess = np.column_stack([
R_circle * np.cos(angle_guess),
R_circle * np.sin(angle_guess),
])
heading.guess = (-angle_guess + np.pi / 2).reshape(-1, 1)
progress.guess = arc_guess.reshape(-1, 1)
lag_sum.guess = np.zeros((n_mpc, 1))
contour_sum.guess = np.zeros((n_mpc, 1))
speed.guess = np.full((n_mpc, 1), ref_speed)
angular_rate.guess = np.full((n_mpc, 1), -ref_speed / R_circle)
progress_rate.guess = np.full((n_mpc, 1), ref_speed)
Note
While this demonstrates the fundamentals of creating the initial guess, this will change once we swap to a discrete reference trajectory.
The Core Loop¶
Solving the MPC problem follows the familiar pattern of initialize -> solve -> post process, except now the solving is done in a loop.
For this initial example we will simply repeat this max_steps times:
max_steps = 50  # Number of closed-loop MPC steps to run (choose for your scenario)
problem_mpc.initialize()
for step in range(max_steps):
problem_mpc.reset() # Clear SCP history
results = problem_mpc.solve()
results = problem_mpc.post_process()
nodes = results.nodes
# Update initial conditions and warm-start for next iteration
update_initial_conditions(nodes)
shift_guess(nodes)
The update_initial_conditions(...) and shift_guess(...) functions handle the all-important tasks of preparing the previous solution as the next solve's warm-start and are explained below.
Advancing Initial Conditions and Wrapping¶
To update the initial condition, we take node 1 of the previous solution as the new initial state.
Because we are building up towards a racing example, which may run many laps, we will need to "wrap" the periodic values (progress, heading).
Otherwise, these would accumulate lap over lap and eventually reach their maximum values.
While we could simply set the bounds high enough that we do not reach them within our expected lap count, this is not good engineering and could lead to numerical issues.
To handle this, we subtract complete laps via floor division to keep progress in range.
wrap_offset = (nodes["progress"][1, 0] // total_arc_length) * total_arc_length
progress.initial = np.array([nodes["progress"][1, 0] - wrap_offset])
The state bounds cover \([-0.5L, 1.5L]\), allowing the tail of the MPC horizon to exceed a single lap before the system crosses the lap threshold and is reset.
Heading is wrapped similarly (to the nearest \(2\pi\)) to avoid numerical issues in trig functions.
hdg_wrap = np.round(nodes["heading"][1, 0] / (2 * np.pi)) * (2 * np.pi)
heading.initial = nodes["heading"][1] - hdg_wrap
Note
We use np.round for heading (which can be negative) and // (floor division) for progress (monotonically increasing). The same wrapping is applied in shift_guess below to keep the entire horizon consistent.
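The two wrapping rules can be sketched in isolation (the lap length here is a made-up example value):

```python
import numpy as np

L = 18.85  # total arc length of one lap (example value)

def wrap_progress(s):
    # Floor division: progress only increases, so subtract completed laps
    return s - (s // L) * L

def wrap_heading(psi):
    # Nearest multiple of 2*pi: heading can drift in either direction
    return psi - np.round(psi / (2 * np.pi)) * (2 * np.pi)

wrap_progress(L + 2.5)   # one full lap plus 2.5 -> 2.5
wrap_heading(-7.0)       # -> -7 + 2*pi, landing in [-pi, pi]
```

Floor division would be wrong for heading (a negative heading near \(-2\pi\) should wrap toward zero, not up by a full turn), and rounding would be wrong for progress (it could wrap half a lap early).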
The cost integrators are reset to zero each horizon as they measure error within the current horizon, not cumulatively.
Combining all of this results in:
def update_initial_conditions(nodes: dict):
position.initial = nodes["position"][1]
hdg_wrap = np.round(nodes["heading"][1, 0] / (2 * np.pi)) * (2 * np.pi)
heading.initial = nodes["heading"][1] - hdg_wrap
wrap_offset = (nodes["progress"][1, 0] // total_arc_length) * total_arc_length
progress.initial = np.array([nodes["progress"][1, 0] - wrap_offset])
lag_sum.initial = np.array([0.0])
contour_sum.initial = np.array([0.0])
Warm-Starting: Shifting the Guess¶
Since we advanced by one time step, the solution at nodes \([1, \ldots, N-1]\) is a good guess for nodes \([0, \ldots, N-2]\) of the next problem. For the final node we need a heuristic to append a new guess. While this doesn't need to be perfect (the SCP algorithm will correct small errors), it can have a large effect on convergence performance, so it is worth thinking about a good strategy for your specific problem. For this analytical example, we fill in the new final node with an Euler extrapolation from the last node's state and control.
def shift_guess(nodes: dict):
dt = horizon_duration / (n_mpc - 1)
# Extrapolate a new final node
pos_last = nodes["position"][-1]
hdg_last = nodes["heading"][-1, 0]
spd_last = nodes["speed"][-1, 0]
ar_last = nodes["angular_rate"][-1, 0]
pr_last = nodes["progress_rate"][-1, 0]
ext_pos = pos_last + dt * spd_last * np.array([
np.sin(hdg_last), np.cos(hdg_last)
])
ext_hdg = hdg_last + dt * ar_last
ext_prog = nodes["progress"][-1, 0] + dt * pr_last
# Shift states forward and apply wrapping
shifted_progress = np.vstack([nodes["progress"][1:], [[ext_prog]]])
wrap_offset = (nodes["progress"][1, 0] // total_arc_length) * total_arc_length
shifted_progress -= wrap_offset
shifted_heading = np.vstack([nodes["heading"][1:], [[ext_hdg]]])
hdg_wrap = np.round(nodes["heading"][1, 0] / (2 * np.pi)) * (2 * np.pi)
shifted_heading -= hdg_wrap
position.guess = np.vstack([nodes["position"][1:], [ext_pos]])
heading.guess = shifted_heading
progress.guess = shifted_progress
# Cost integrators: shift and re-zero to preserve the accumulated profile
lag_offset = nodes["lag_sum"][1]
lag_sum.guess = np.maximum(
np.vstack([nodes["lag_sum"][1:] - lag_offset,
nodes["lag_sum"][-1:] - lag_offset]),
0.0,
)
contour_offset = nodes["contour_sum"][1]
contour_sum.guess = np.maximum(
np.vstack([nodes["contour_sum"][1:] - contour_offset,
nodes["contour_sum"][-1:] - contour_offset]),
0.0,
)
# Controls: shift forward, repeat last value for the new final node
speed.guess = np.vstack([nodes["speed"][1:], nodes["speed"][-1:]])
angular_rate.guess = np.vstack([
nodes["angular_rate"][1:], nodes["angular_rate"][-1:]
])
progress_rate.guess = np.vstack([
nodes["progress_rate"][1:], nodes["progress_rate"][-1:]
])
The wrap offsets are computed from node 1 of the previous solution (which becomes node 0 of the new problem) so the whole horizon shifts consistently.
Controls are shifted forward with the last value repeated.
The cost integrators deserve special attention: rather than resetting them to zero, we subtract the value at node 1 (which becomes the new node 0, where the integrator restarts at zero) and shift the rest down.
This preserves the shape of the accumulated cost profile from the previous solution, giving the solver a much better warm-start than a flat zero guess.
The np.maximum(..., 0.0) clamp guards against numerical noise producing slightly negative values.
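The offset-and-clamp shift can be sketched on its own with made-up numbers:

```python
import numpy as np

# Previous horizon's accumulated lag cost at each node (monotone, starting at 0)
lag_nodes = np.array([[0.0], [0.3], [0.5], [0.9]])

offset = lag_nodes[1]  # node 1 becomes the new node 0, where the integral restarts
lag_guess = np.maximum(
    np.vstack([lag_nodes[1:] - offset, lag_nodes[-1:] - offset]),
    0.0,
)
# lag_guess is [[0.0], [0.2], [0.6], [0.6]]: same accumulated shape, re-zeroed at node 0
```

The shifted guess starts at zero (matching the reset initial condition) while keeping the node-to-node increments from the previous solve.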
Discrete Reference Paths¶
We've now set up the core elements of the MPC(C) problem: initializing, solving, and shifting the initial condition and guess. However, in the general case we will not have an analytical reference path. Instead, we may have a set of discrete points from a pre-solved trajectory, a motion capture recording, or hand-placed waypoints. Let us examine an extension of the analytical Dubins car MPCC problem, sampling \(M\) discrete points from the circle as our discrete path. We need a way to evaluate the reference position and tangent at arbitrary progress values within the symbolic expression graph.
Linear interpolation between sparse points creates kinks in the tangent field, which cause oscillations in the contour error. We need something smoother: cubic spline interpolation.
Periodicity via Tiling¶
Before building the splines, we need to do our chores and handle periodicity.
Just as in the analytical case, the horizon can look ahead of (or start slightly behind) the current position, so the reference must cover progress values in \([-0.5L, 1.5L]\); this requires "tiling" the single-lap data to cover that range.
We tile the single-lap data by replicating the position samples at shifted arc-length offsets to create an s_data that covers the full range of the progress state:
M = 30 # Samples per lap
# s_lap, px_lap, py_lap: single-lap arc-length and position arrays
s_min, s_max = -0.5 * total_arc_length, 1.5 * total_arc_length
n_before = int(np.ceil(-s_min / total_arc_length))
n_after = int(np.ceil(s_max / total_arc_length))
tile_laps = range(-n_before, n_after + 1)
s_data = np.concatenate([s_lap + k * total_arc_length for k in tile_laps])
px_data = np.tile(px_lap, len(tile_laps))
py_data = np.tile(py_lap, len(tile_laps))
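Here is a self-contained version of the tiling, using circle samples as stand-in single-lap data so the coverage and periodicity can be checked directly:

```python
import numpy as np

# Stand-in single-lap data: M circle samples (no duplicated endpoint)
R, M = 3.0, 30
total_arc_length = 2 * np.pi * R
s_lap = np.linspace(0.0, total_arc_length, M, endpoint=False)
px_lap = R * np.cos(s_lap / R)

s_min, s_max = -0.5 * total_arc_length, 1.5 * total_arc_length
n_before = int(np.ceil(-s_min / total_arc_length))
n_after = int(np.ceil(s_max / total_arc_length))
tile_laps = range(-n_before, n_after + 1)
s_data = np.concatenate([s_lap + k * total_arc_length for k in tile_laps])
px_data = np.tile(px_lap, len(tile_laps))

# The tiled breakpoints cover the padded progress range, and the data is
# periodic: a lookup at s and at s + L agree
assert s_data[0] <= s_min and s_data[-1] >= s_max
```

Because the copies are exact shifts by whole laps, any interpolation through `s_data` is automatically periodic over the padded range.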
Cubic Splines¶
OpenSCvx supports cubic splines via the ox.Cinterp(x, xp, fp) operator.
Given breakpoints xp and values fp (both NumPy arrays, known at problem construction time), it evaluates the natural cubic spline at the symbolic query point x.
This is the symbolic analog of scipy.interpolate.CubicSpline — in fact, it uses the same coefficients under the hood.
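As a plain-SciPy sanity check of what the spline lookup computes (a numeric sketch only; `ox.Cinterp` itself is symbolic, and the circle samples here are stand-in data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

R = 3.0
L = 2 * np.pi * R
s = np.linspace(0.0, L, 30)
px = R * np.cos(s / R)

spline = CubicSpline(s, px, bc_type="natural")  # natural spline, per the description above
query = 0.5 * L                                 # an off-breakpoint query point
err = abs(float(spline(query)) - R * np.cos(query / R))
# Interpolation error is tiny away from the boundaries (well below 1e-3 here)
```

With only 30 samples per lap, the cubic fit is already far smoother and more accurate than linear interpolation would be.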
With the tiled data in hand, building the reference path is straightforward:
p_ref = ox.Concat(
ox.Cinterp(progress[0], s_data, px_data),
ox.Cinterp(progress[0], s_data, py_data),
)
The tangent field requires a bit more care for this example.
We compute the derivative of the cubic spline at the breakpoints using SciPy, normalize to get unit tangents, and then interpolate those with a second ox.Cinterp:
from scipy.interpolate import CubicSpline as _CS
_dpx = _CS(s_data, px_data)(s_data, 1) # Derivative at breakpoints
_dpy = _CS(s_data, py_data)(s_data, 1)
_tnorm = np.sqrt(_dpx**2 + _dpy**2)
tx_data = _dpx / _tnorm
ty_data = _dpy / _tnorm
tangent = ox.Concat(
ox.Cinterp(progress[0], s_data, tx_data),
ox.Cinterp(progress[0], s_data, ty_data),
)
Tip
If our reference trajectory also contains velocity information, we could use it to define the tangent direction directly without resorting to numerical differentiation.
The rest of the MPCC formulation (error decomposition, cost integrators, dynamics, MPC loop) is identical to the analytical case, with only minor adjustments to the heuristic components.
This is the beauty of the approach: swapping from an analytical reference to a discrete one is purely a matter of changing how p_ref and tangent are constructed.
Initialization with Discrete References¶
With an analytical reference we could compute position and heading guesses from closed-form expressions. With a discrete reference we instead interpolate the reference data arrays directly:
def set_initial_guess(theta_start: float = 0.0, ref_speed: float = 5.0):
arc_guess = np.linspace(
theta_start, theta_start + ref_speed * horizon_duration, n_mpc
)
# Position: interpolate from reference sample arrays
position.guess = np.column_stack([
np.interp(arc_guess, s_data, px_data),
np.interp(arc_guess, s_data, py_data),
])
# Heading: infer from reference segment directions
seg_idx = np.searchsorted(s_data, arc_guess, side="right") - 1
seg_idx = np.clip(seg_idx, 0, len(s_data) - 2)
seg_dp = np.column_stack([
px_data[seg_idx + 1] - px_data[seg_idx],
py_data[seg_idx + 1] - py_data[seg_idx],
])
hdg_guess = np.arctan2(seg_dp[:, 0], seg_dp[:, 1])
heading.guess = hdg_guess.reshape(-1, 1)
heading.initial = np.array([hdg_guess[0]])
progress.guess = arc_guess.reshape(-1, 1)
progress_rate.guess = np.full((n_mpc, 1), ref_speed)
lag_sum.guess = np.zeros((n_mpc, 1))
contour_sum.guess = np.zeros((n_mpc, 1))
Position is looked up via np.interp (linear is fine for the guess).
For heading, we need the direction of travel at each guess point — but with a discrete reference we don't have an analytical tangent.
Instead, np.searchsorted finds which reference segment each arc_guess value falls on, and we compute the heading from the direction of that segment (seg_dp).
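The segment lookup itself is just `np.searchsorted` bookkeeping; a tiny standalone example with made-up breakpoints:

```python
import numpy as np

s_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
arc_guess = np.array([0.5, 2.0, 3.9])

# Index of the segment [s_data[i], s_data[i+1]) containing each query
seg_idx = np.searchsorted(s_data, arc_guess, side="right") - 1
seg_idx = np.clip(seg_idx, 0, len(s_data) - 2)
# seg_idx is [0, 2, 3]
```

The `clip` guards the edge cases: a query exactly at (or beyond) the last breakpoint still maps to a valid segment.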
The shift_guess and update_initial_conditions functions are structurally identical to the analytical case — wrapping and extrapolation depend on the dynamics, not the reference representation.
Drone Racing MPCC¶
Now we can put everything together for the real application: a drone racing through gates and around obstacles, tracked by MPCC. This is a two-phase problem:
- Offline: Solve a time-optimal trajectory through the gates (single-shot, as in Tutorial 02)
- Online: Use MPCC to track the solved trajectory in a receding-horizon loop
Time-Optimal Reference Trajectory¶
The offline problem is a standard time-optimal drone racing problem identical to what we built in Tutorial 02: a 3D double integrator flying through a drone race course with gate constraints. We have slightly modified the problem to enforce loop closure: initial and terminal positions and velocities are constrained to be identical. We won't repeat the full setup here; the important output is the solved trajectory:
problem_traj.initialize()
results_traj = problem_traj.solve()
results_traj = problem_traj.post_process()
ref_pos = results_traj.trajectory["position"] # (N_dense, 3)
ref_vel = results_traj.trajectory["velocity"] # (N_dense, 3)
ref_time = results_traj.trajectory["time"].flatten()
The post-processed trajectory gives us a dense set of position, velocity, and time samples. We need to convert this to an arc-length parametrization for the MPCC reference.
Arc length is computed by integrating the speed:
ref_speeds = np.linalg.norm(ref_vel, axis=1)
ds = ref_speeds[:-1] * np.diff(ref_time)
s_lap = np.concatenate([[0.0], np.cumsum(ds)])
total_arc_length = s_lap[-1]
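As a sanity check of this integration, here is a made-up nonuniform speed profile around a circle; the total arc length must come out to \(2\pi R\) regardless of how the speed varies:

```python
import numpy as np

# Made-up reference: a circle of radius R traversed with a nonuniform speed profile
R = 3.0
t = np.linspace(0.0, 1.0, 2001)
angle = 2 * np.pi * (t + 0.1 * np.sin(2 * np.pi * t))          # monotone reparametrization
dangle = 2 * np.pi * (1 + 0.2 * np.pi * np.cos(2 * np.pi * t))  # its time derivative
vel = np.column_stack([-R * np.sin(angle), R * np.cos(angle)]) * dangle[:, None]

speeds = np.linalg.norm(vel, axis=1)
ds = speeds[:-1] * np.diff(t)
s_lap = np.concatenate([[0.0], np.cumsum(ds)])
# s_lap[-1] recovers the geometric lap length 2*pi*R
```

This is why the arc-length parametrization decouples geometry from timing: the time-optimal speed profile drops out, leaving a purely spatial reference.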
Since the trajectory has loop closure (initial position equals final position), we drop the last point before tiling to avoid duplicate arc-length values at the boundary.
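A minimal sketch of this step, with made-up single-lap arrays (the real code would slice the solved `ref_pos`/`s_lap` arrays):

```python
import numpy as np

# Made-up closed lap: the last sample duplicates the first (loop closure)
s_lap = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # 4.0 = full lap arc length
px_lap = np.array([3.0, 0.0, -3.0, 0.0, 3.0])  # px_lap[-1] == px_lap[0]
total_arc_length = s_lap[-1]

# Drop the duplicated endpoint so the tiled breakpoints stay strictly increasing
s_lap = s_lap[:-1]
px_lap = px_lap[:-1]
```

Without this, tiling would place two breakpoints at the same arc-length value, which spline construction rejects.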
The tiling and ox.Cinterp setup for position follow the same pattern as before, now in 3D:
p_ref = ox.Concat(
ox.Cinterp(progress[0], s_data, px_data),
ox.Cinterp(progress[0], s_data, py_data),
ox.Cinterp(progress[0], s_data, pz_data),
)
The tangent field is constructed using the same spline-derivative approach introduced above, now in 3D.
Warm-Starting from the Reference¶
A nice property of the two-phase approach is that we have the full reference trajectory available for initialization. Rather than guessing a constant speed or linear interpolation, we can look up the reference position, velocity, and force at the arc-length values corresponding to our horizon:
def set_initial_guess(theta_start: float = 0.0):
t_start = np.interp(theta_start, s_data, t_data)
t_guess = np.linspace(t_start, t_start + horizon_duration, n_mpc)
arc_guess = np.interp(t_guess, t_data, s_data)
position.guess = np.column_stack([
np.interp(arc_guess, s_data, px_data),
np.interp(arc_guess, s_data, py_data),
np.interp(arc_guess, s_data, pz_data),
])
velocity.guess = np.column_stack([
np.interp(arc_guess, s_data, vx_data),
np.interp(arc_guess, s_data, vy_data),
np.interp(arc_guess, s_data, vz_data),
])
# ... similarly for force, progress_rate
This gives the MPCC solver an excellent starting point: the initial guess is already close to the optimal trajectory, so convergence is fast from the very first MPC step.
Guess Shifting with the Reference Trajectory¶
In the Dubins car examples we extrapolated the appended final node with a crude Euler step from the last node's dynamics. With the full reference trajectory available, we can do much better: look up the reference state at the extrapolated progress value.
def shift_guess(nodes: dict):
# Map the last node's progress to reference time, step forward, map back
t_last = np.interp(nodes["progress"][-1, 0], s_data, t_data)
ext_prog = np.interp(t_last + dt_mpc, t_data, s_data)
# Look up reference state at the extrapolated progress
ext_pos = np.array([
np.interp(ext_prog, s_data, px_data),
np.interp(ext_prog, s_data, py_data),
np.interp(ext_prog, s_data, pz_data),
])
ext_vel = np.array([
np.interp(ext_prog, s_data, vx_data),
np.interp(ext_prog, s_data, vy_data),
np.interp(ext_prog, s_data, vz_data),
])
# ... similarly for force, progress_rate
# Shift and wrap as before
shifted_progress = np.vstack([nodes["progress"][1:], [[ext_prog]]])
wrap_offset = (nodes["progress"][1, 0] // total_arc_length) * total_arc_length
shifted_progress -= wrap_offset
position.guess = np.vstack([nodes["position"][1:], [ext_pos]])
velocity.guess = np.vstack([nodes["velocity"][1:], [ext_vel]])
progress.guess = shifted_progress
# ...
The appended node now lies on the reference trajectory rather than being a rough dynamical extrapolation.
This is the same idea as set_initial_guess — query the reference at the right arc-length — applied at every MPC step.
The cost integrators are shifted using the same offset-and-clamp pattern introduced above, preserving the accumulated cost profile shape.
Additional Constraints and Extensions¶
Obstacle Avoidance¶
Because we are now running a closed-loop controller we can include additional constraints that were not present when calculating the time-optimal reference trajectory, for example, obstacles placed along the path:
for obs_center in obstacle_centers:
constraints.append(
ox.ctcs(obstacle_radius <= ox.linalg.Norm(position - obs_center))
)
Progress-Dependent Gate Constraints¶
For the time-optimal reference trajectory we enforced the gates as convex nodal constraints, placing every k-th optimization node at a gate. This is no longer possible in the receding-horizon setting; we cannot guarantee that any node lies at a particular gate. While the contour error minimization encourages the closed-loop path to follow the reference trajectory through the gates, it does not guarantee it.
To generically constrain the MPCC optimizer to travel through the gates, we borrow the cone formulation from tutorial 04 to create approach cone constraints for each gate. Each cone opens toward its gate so that the only way to satisfy the constraint is to fly through the gate opening. The apex of each cone is placed behind the gate center along the gate normal \(\hat{\mathbf{n}}\) so that the cone's cross-section at the gate plane exactly matches the gate opening:
$$
\mathbf{a} = \mathbf{c}_{\textrm{gate}} - \frac{w}{\tan \alpha} \, \hat{\mathbf{n}}
$$

where \(\mathbf{a}\) is the cone apex, \(\mathbf{c}_{\textrm{gate}}\) is the gate center, \(w\) is the gate half-width, and \(\alpha\) is the cone half-angle.
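To make the apex placement concrete, here is a small NumPy sketch. The gate numbers are made up, and this simplified helper takes the gate normal directly, unlike the tutorial's `g_gate_cone`, which takes a gate rotation:

```python
import numpy as np

# Made-up gate: center, unit normal, half-width w, cone half-angle alpha
c_gate = np.array([5.0, 0.0, 2.0])
n_hat = np.array([1.0, 0.0, 0.0])
w, alpha = 0.5, np.deg2rad(30.0)

# Apex behind the gate so the cone cross-section at the gate plane has radius w
apex = c_gate - (w / np.tan(alpha)) * n_hat

def cone_value(apex, n_hat, alpha, p):
    """Simplified cone constraint g(p) <= 0: inside a cone opening along n_hat."""
    d = p - apex
    axial = d @ n_hat                           # distance along the cone axis
    radial = np.linalg.norm(d - axial * n_hat)  # distance from the axis
    return radial - np.tan(alpha) * axial

rim = c_gate + np.array([0.0, w, 0.0])  # a point on the gate rim
cone_value(apex, n_hat, alpha, rim)     # ~0: the rim lies exactly on the cone boundary
```

A point on the gate rim sits exactly on the cone boundary, and the gate center is strictly inside, so squeezing through the cone at the gate plane is equivalent to passing through the opening.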
Enforcing every cone over the whole horizon would quickly make the problem infeasible.
Therefore, we use a progress-dependent condition to trigger each gate's cone constraint only when the drone is between the previous gate and the current gate.
We use an `ox.Cond`/`ox.All` statement as introduced in tutorial 06 to encode this logic.
Additionally, we require \(\mathbf{v} \cdot \hat{\mathbf{n}}_{\textrm{gate}} \le 0\) (the velocity projected onto the gate normal is negative) so that the constraint is only active when the drone is flying towards the gate.
This allows the drone to "fly around the corner" before the cone constraint becomes active.
Finally, to speed up initialization of the MPCC problem we use ox.Vmap from tutorial 03 to leverage data-parallelism for all of the gates.
Putting all this together results in our cone constraints:
cone_constraints = ox.Vmap(
lambda apex, R_gate, n_hat, s_gate, s_prev: ox.Cond(
ox.All([
progress[0] >= s_prev,
progress[0] <= s_gate,
ox.Sum(velocity * n_hat) <= 0.0,
]),
g_gate_cone(apex, R_gate, position),
-1.0, # Inactive: return feasible value
),
batch=[all_apexes, all_rotations, all_n_hats, all_s_gates, all_s_prevs],
)
constraints.append(ox.ctcs(cone_constraints <= 0.0))
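The boolean gating inside the `ox.Cond`/`ox.All` statement can be paraphrased in plain Python. The sketch below is only an illustration of the activation logic, not the symbolic constraint itself (in the real problem this logic lives inside the graph and is differentiated through).

```python
import numpy as np

def cone_active(s, v, n_hat, s_prev, s_gate):
    """Plain-Python paraphrase of the ox.All condition above: the cone applies
    only between the previous and current gate, and only while the drone is
    moving toward the gate (v . n_hat <= 0)."""
    return (s_prev <= s <= s_gate) and (v @ n_hat <= 0.0)

n_hat = np.array([0.0, -1.0, 0.0])  # illustrative gate normal

# Between the gates and flying toward the gate -> constraint active
assert cone_active(2.0, np.array([0.0, 1.0, 0.0]), n_hat, 1.0, 3.0)
# Past the gate -> inactive
assert not cone_active(3.5, np.array([0.0, 1.0, 0.0]), n_hat, 1.0, 3.0)
# Between gates but flying away ("around the corner") -> inactive
assert not cone_active(2.0, np.array([0.0, -1.0, 0.0]), n_hat, 1.0, 3.0)
```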
Free Time Dilation¶
We can leverage the built-in time dilation of OpenSCvx to let the optimizer dilate time "as necessary" to improve performance while keeping a fixed time horizon, giving it one more knob to turn. To ensure a consistent loop rate, we constrain the time at node 1 to lie exactly \(t_{\textrm{MPCC}}\) seconds in the future.
t = ox.Time(
initial=0.0,
final=horizon_duration, # Still fixed!
min=0.0,
max=horizon_duration,
# Note: uniform_time_grid is NOT set, allowing non-uniform spacing
)
# Pin node 1 at a fixed dt so the MPC loop rate is constant
constraints.append(
(t == horizon_duration / (n_mpc - 1)).convex().at(1)
)
We also shift the time and time dilation guesses identically to the other states and controls:
# Inside shift_guess(nodes):
# Time: shift forward and renormalize so the horizon starts at t=0
dtau = 1.0 / (n_mpc - 1)
ext_time = nodes["time"][-1, 0] + nodes["_time_dilation"][-1, 0] * dtau
shifted_time = np.vstack([nodes["time"][1:], [[ext_time]]])
shifted_time -= shifted_time[0] # Re-zero the horizon
t.guess = shifted_time
# Time dilation: shift forward, repeat last value
t._time_dilation_control.guess = np.vstack(
[nodes["_time_dilation"][1:], nodes["_time_dilation"][-1:]]
)
Note
We have not done extensive comparisons on the benefits/costs of allowing for non-constant time-dilation in the MPCC problem. Depending on your problem, it may or may not be beneficial. At the very least it's a cool feature that can be enabled at minimal cost :)
Realtime Interactive MPCC¶
The realtime_double_integrator_drone_racing.py example extends the drone racing MPCC into a live, interactive demo.
Rather than running a fixed number of MPC steps, it runs indefinitely in a while True loop with realtime viser visualization that updates after each solve.
The key change that enables interactivity is the use of ox.Parameter for the obstacle centers:
obstacle_centers = [
ox.Parameter(f"obstacle_center_{i}", shape=(3,), value=pos)
for i, pos in enumerate(obstacle_center_positions)
]
Unlike a plain NumPy array baked into the symbolic graph at construction time, an ox.Parameter can be updated between solves without re-initializing the problem.
The obstacle avoidance constraints reference these parameters in the usual way:
for obs_center in obstacle_centers:
constraints.append(
ox.ctcs(obstacle_radius <= ox.linalg.Norm(position - obs_center))
)
When the user clicks and drags an obstacle in the viser 3D view, a callback updates both the parameter value and the problem's parameter store:
obstacle_centers[obs_idx].value = new_center
problem_mpc.parameters[obstacle_centers[obs_idx].name] = new_center
The next problem_mpc.solve() call picks up the new obstacle position automatically — no recompilation, no re-initialization.
The drone replans around the moved obstacle on the very next MPC step, demonstrating the closed-loop reactivity that makes MPCC useful in practice.
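The update-then-resolve pattern can be sketched with a stand-in for the problem object. `MockProblem` below is hypothetical (it only mimics the parameter store the drag callback writes into); it is not the OpenSCvx API.

```python
import numpy as np

class MockProblem:
    """Hypothetical stand-in for the OpenSCvx problem object, mimicking only
    the parameter store that the drag callback writes into."""
    def __init__(self, parameters):
        self.parameters = dict(parameters)

    def solve(self):
        # A real solve would read self.parameters when building the subproblem;
        # here we just return the obstacle position the solver would see.
        return self.parameters["obstacle_center_0"]

problem = MockProblem({"obstacle_center_0": np.array([1.0, 0.0, 0.0])})

# Simulated drag callback: write the new center into the parameter store
new_center = np.array([2.0, 1.0, 0.0])
problem.parameters["obstacle_center_0"] = new_center

# The next solve picks up the new value, with no re-initialization
assert np.array_equal(problem.solve(), new_center)
```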
Warning
Because the user has no constraints on where they place the obstacles, it is possible to position them in such a way as to make the problem infeasible. When this occurs, the SCP algorithm will exhaust its allowed iterations without converging, resulting in slow solve times and unexpected behavior.
Key Takeaways¶
- Multi-objective Mayer costs: ox.Minimize and ox.Maximize on state finals let you encode multiple competing running costs and rewards as integrator states. This is a general pattern that works for any Lagrange-to-Mayer conversion. The lam_cost dictionary provides per-state cost weighting, separating the tuning of relative objective importance from the dynamics formulation.
- ox.Cinterp: Cubic spline interpolation within the symbolic graph enables smooth lookup of discrete reference data. Pre-computing the tangent field from the spline derivative and re-interpolating it avoids singularities and gives a clean tangent field.
- The MPC pattern: problem.reset() -> problem.solve() -> advance initial conditions -> shift guess. Warm-starting from the shifted previous solution is essential for fast convergence.
- Progress-dependent gate constraints: Combining ox.Cond, ox.All, and ox.Vmap lets you encode constraints that activate only when the system is in a particular region of the path. This replaces the fixed node-to-gate assignment from single-shot problems with a formulation that works regardless of which nodes happen to be near a gate.
- Two-phase planning: Solving the hard global problem offline (time-optimal trajectory through gates) and tracking it online with MPCC is a powerful decomposition. The MPCC handles local disturbances and model mismatch while the offline solution provides the global plan.
Further Reading¶
- MPCC Examples
- Romero et al. (2022). "Model Predictive Contouring Control for Time-Optimal Quadrotor Flight." IEEE Transactions on Robotics.
- Lam et al. (2010). "Model Predictive Contouring Control." IEEE Conference on Decision and Control.
- Krinner et al. (2024). "MPCC++: Model Predictive Contouring Control for Time-Optimal Flight with Safety Constraints." Robotics: Science and Systems.
- Drone Racing: Constraints and 3-DOF Dynamics — the single-shot trajectory used as the MPCC reference
- Obstacle Avoidance: Vmap — vectorized constraints used in the gate cone formulation