algorithms

Successive convexification algorithms for trajectory optimization.

This module provides implementations of SCvx (Successive Convexification) algorithms for solving non-convex trajectory optimization problems through iterative convex approximation.

All algorithms inherit from Algorithm, enabling pluggable algorithm implementations and custom SCvx variants:

class Algorithm(ABC):
    @abstractmethod
    def initialize(self, solver, discretization_solver, jax_constraints,
                   emitter, params, settings) -> None:
        '''Store compiled infrastructure and warm-start solvers.'''
        ...

    @abstractmethod
    def step(self, state, params, settings) -> bool:
        '''Execute one iteration using stored infrastructure.'''
        ...

Immutable components (solver, discretization_solver, jax_constraints, etc.) are stored during initialize(). Mutable configuration (params, settings) is passed per-step to support runtime parameter updates and tolerance tuning.
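To make this split concrete, here is a minimal, self-contained sketch (HalvingAlgorithm, SimpleSettings, and SimpleState are invented for illustration and are not part of openscvx): infrastructure is stored once in initialize(), while step() re-reads the tolerance from the settings it is handed, so the caller can retune it between iterations:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


class Algorithm(ABC):
    """Minimal stand-in for the Algorithm interface described above."""

    @abstractmethod
    def initialize(self, solver, params, settings) -> None: ...

    @abstractmethod
    def step(self, state, params, settings) -> bool: ...


@dataclass
class SimpleSettings:
    tol: float  # convergence tolerance, re-read on every step


@dataclass
class SimpleState:
    k: int = 0      # iteration counter
    x: float = 0.0  # scalar "trajectory" for the sketch


class HalvingAlgorithm(Algorithm):
    """Toy algorithm: halves the distance to a target each step."""

    def initialize(self, solver, params, settings) -> None:
        # Immutable "infrastructure" is stored once on self.
        self._target = solver  # stand-in for a compiled solver object

    def step(self, state, params, settings) -> bool:
        # Mutable iteration state lives on `state`; the tolerance is
        # read from the per-step settings, so it can change between calls.
        state.x += 0.5 * (self._target - state.x)
        state.k += 1
        return abs(self._target - state.x) < settings.tol


algo = HalvingAlgorithm()
algo.initialize(solver=1.0, params={}, settings=SimpleSettings(tol=1e-3))

state = SimpleState()
settings = SimpleSettings(tol=1e-3)
while not algo.step(state, params={}, settings=settings):
    if state.k == 5:
        settings.tol = 1e-1  # runtime tolerance tuning mid-solve
print(state.k)  # → 6 (relaxed tolerance allowed early exit)
```

Because settings are an argument rather than frozen at initialize(), the tolerance change above takes effect on the very next step.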

AlgorithmState holds mutable state during SCP iterations. Algorithms that require additional state can subclass it:

@dataclass
class MyAlgorithmState(AlgorithmState):
    my_custom_field: float = 0.0
Note

AlgorithmState currently combines iteration metrics (costs, weights), trajectory history, and discretization data. A future refactor may separate these concerns into distinct classes for clearer data flow:

@dataclass
class AlgorithmState:
    # Mutable iteration state
    k: int
    J_tr: float
    J_vb: float
    J_vc: float
    lam_prox: float
    lam_cost: float
    lam_vc: ...
    lam_vb: float

@dataclass
class TrajectoryHistory:
    # Accumulated trajectory solutions
    X: List[np.ndarray]
    U: List[np.ndarray]

    @property
    def x(self): return self.X[-1]

    @property
    def u(self): return self.U[-1]

@dataclass
class DebugHistory:
    # Optional diagnostic data (discretization matrices, etc.)
    V_history: List[np.ndarray]
    VC_history: List[np.ndarray]
    TR_history: List[np.ndarray]

Current Implementations:

  • PenalizedTrustRegion: Penalized Trust Region (PTR) algorithm

Algorithm

Bases: ABC

Abstract base class for successive convexification algorithms.

This class defines the interface for SCP algorithms used in trajectory optimization. Implementations should remain minimal and functional, delegating state management to the AlgorithmState dataclass.

The two core methods mirror the SCP workflow:

  • initialize: Store compiled infrastructure and warm-start solvers
  • step: Execute one convex subproblem iteration

Immutable components (solver, discretization_solver, jax_constraints, etc.) are stored during initialize(). Mutable configuration (params, settings) is passed per-step to support runtime parameter updates and tolerance tuning.

Statefulness

Avoid storing mutable iteration state (costs, weights, trajectories) on self. All iteration state should live in AlgorithmState or a subclass thereof, passed explicitly to step(). This keeps algorithm classes stateless w.r.t. iteration, making data flow explicit and staying close to functional programming principles where possible.

Example

Implementing a custom algorithm:

class MyAlgorithm(Algorithm):
    def initialize(self, solver, discretization_solver,
                   jax_constraints, emitter,
                   params, settings):
        # Store compiled infrastructure
        self._solver = solver
        self._discretization_solver = discretization_solver
        self._jax_constraints = jax_constraints
        self._emitter = emitter
        # Warm-start with initial params/settings...

    def step(self, state, params, settings):
        # Run one iteration using self._* and per-step params/settings
        return converged
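Because step() touches only the state object it is handed, one algorithm instance can drive several independent solves without cross-talk through self. A toy, self-contained demonstration of that statelessness (ToyAlgorithm and ToyState are invented stand-ins, not openscvx classes):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToyState:
    """Minimal stand-in for AlgorithmState: all iteration state lives here."""
    k: int = 0
    X: List[float] = field(default_factory=list)


class ToyAlgorithm:
    """Stateless w.r.t. iteration: step() only mutates the passed-in state."""

    def step(self, state: ToyState, params: dict, settings: dict) -> bool:
        state.X.append(params["start"] + state.k)
        state.k += 1
        return state.k >= settings["max_iters"]


algo = ToyAlgorithm()
a, b = ToyState(), ToyState()

# The same instance drives two solves; no iteration state leaks via self.
while not algo.step(a, {"start": 0.0}, {"max_iters": 3}):
    pass
while not algo.step(b, {"start": 10.0}, {"max_iters": 2}):
    pass

print(a.X)  # [0.0, 1.0, 2.0]
print(b.X)  # [10.0, 11.0]
```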
Source code in openscvx/algorithms/base.py
class Algorithm(ABC):
    """Abstract base class for successive convexification algorithms.

    This class defines the interface for SCP algorithms used in trajectory
    optimization. Implementations should remain minimal and functional,
    delegating state management to the AlgorithmState dataclass.

    The two core methods mirror the SCP workflow:

    - initialize: Store compiled infrastructure and warm-start solvers
    - step: Execute one convex subproblem iteration

    Immutable components (solver, discretization_solver, jax_constraints, etc.) are
    stored during initialize(). Mutable configuration (params, settings) is passed
    per-step to support runtime parameter updates and tolerance tuning.

    !!! tip "Statefulness"
        Avoid storing mutable iteration state (costs, weights, trajectories) on
        ``self``. All iteration state should live in :class:`AlgorithmState` or
        a subclass thereof, passed explicitly to ``step()``. This keeps algorithm
        classes stateless w.r.t. iteration, making data flow explicit and staying
        close to functional programming principles where possible.

    Example:
        Implementing a custom algorithm::

            class MyAlgorithm(Algorithm):
                def initialize(self, solver, discretization_solver,
                               jax_constraints, emitter,
                               params, settings):
                    # Store compiled infrastructure
                    self._solver = solver
                    self._discretization_solver = discretization_solver
                    self._jax_constraints = jax_constraints
                    self._emitter = emitter
                    # Warm-start with initial params/settings...

                def step(self, state, params, settings):
                    # Run one iteration using self._* and per-step params/settings
                    return converged
    """

    @abstractmethod
    def initialize(
        self,
        solver: "ConvexSolver",
        discretization_solver: callable,
        jax_constraints: "LoweredJaxConstraints",
        emitter: callable,
        params: dict,
        settings: "Config",
    ) -> None:
        """Initialize the algorithm and store compiled infrastructure.

        This method stores immutable components and performs any setup required
        before the SCP loop begins (e.g., warm-starting solvers). The params and
        settings are passed for warm-start but may change between steps.

        Args:
            solver: Convex subproblem solver (e.g., CVXPySolver)
            discretization_solver: Compiled discretization solver function
            jax_constraints: JIT-compiled JAX constraint functions
            emitter: Callback for emitting iteration progress data
            params: Problem parameters dictionary (for warm-start only)
            settings: Configuration object (for warm-start only)
        """
        ...

    @abstractmethod
    def step(
        self,
        state: AlgorithmState,
        params: dict,
        settings: "Config",
    ) -> bool:
        """Execute one iteration of the SCP algorithm.

        This method solves a single convex subproblem, updates the algorithm
        state in place, and returns whether convergence criteria are met.

        Uses stored infrastructure (solver, discretization_solver, etc.) with
        per-step params and settings to support runtime modifications.

        Args:
            state: Mutable algorithm state (modified in place)
            params: Problem parameters dictionary (may change between steps)
            settings: Configuration object (may change between steps)

        Returns:
            True if convergence criteria are satisfied, False otherwise.
        """
        ...

    @abstractmethod
    def citation(self) -> List[str]:
        """Return BibTeX citations for this algorithm.

        Implementations should return a list of BibTeX entry strings for the
        papers that should be cited when using this algorithm.

        Returns:
            List of BibTeX citation strings.

        Example:
            Getting citations for an algorithm::

                algorithm = PenalizedTrustRegion()
                for bibtex in algorithm.citation():
                    print(bibtex)
        """
        ...
citation() -> List[str] abstractmethod

Return BibTeX citations for this algorithm.

Implementations should return a list of BibTeX entry strings for the papers that should be cited when using this algorithm.

Returns:

Type Description
List[str]

List of BibTeX citation strings.

Example

Getting citations for an algorithm:

algorithm = PenalizedTrustRegion()
for bibtex in algorithm.citation():
    print(bibtex)
Source code in openscvx/algorithms/base.py
@abstractmethod
def citation(self) -> List[str]:
    """Return BibTeX citations for this algorithm.

    Implementations should return a list of BibTeX entry strings for the
    papers that should be cited when using this algorithm.

    Returns:
        List of BibTeX citation strings.

    Example:
        Getting citations for an algorithm::

            algorithm = PenalizedTrustRegion()
            for bibtex in algorithm.citation():
                print(bibtex)
    """
    ...
initialize(solver: ConvexSolver, discretization_solver: callable, jax_constraints: LoweredJaxConstraints, emitter: callable, params: dict, settings: Config) -> None abstractmethod

Initialize the algorithm and store compiled infrastructure.

This method stores immutable components and performs any setup required before the SCP loop begins (e.g., warm-starting solvers). The params and settings are passed for warm-start but may change between steps.

Parameters:

Name Type Description Default
solver ConvexSolver

Convex subproblem solver (e.g., CVXPySolver)

required
discretization_solver callable

Compiled discretization solver function

required
jax_constraints LoweredJaxConstraints

JIT-compiled JAX constraint functions

required
emitter callable

Callback for emitting iteration progress data

required
params dict

Problem parameters dictionary (for warm-start only)

required
settings Config

Configuration object (for warm-start only)

required
Source code in openscvx/algorithms/base.py
@abstractmethod
def initialize(
    self,
    solver: "ConvexSolver",
    discretization_solver: callable,
    jax_constraints: "LoweredJaxConstraints",
    emitter: callable,
    params: dict,
    settings: "Config",
) -> None:
    """Initialize the algorithm and store compiled infrastructure.

    This method stores immutable components and performs any setup required
    before the SCP loop begins (e.g., warm-starting solvers). The params and
    settings are passed for warm-start but may change between steps.

    Args:
        solver: Convex subproblem solver (e.g., CVXPySolver)
        discretization_solver: Compiled discretization solver function
        jax_constraints: JIT-compiled JAX constraint functions
        emitter: Callback for emitting iteration progress data
        params: Problem parameters dictionary (for warm-start only)
        settings: Configuration object (for warm-start only)
    """
    ...
step(state: AlgorithmState, params: dict, settings: Config) -> bool abstractmethod

Execute one iteration of the SCP algorithm.

This method solves a single convex subproblem, updates the algorithm state in place, and returns whether convergence criteria are met.

Uses stored infrastructure (solver, discretization_solver, etc.) with per-step params and settings to support runtime modifications.

Parameters:

Name Type Description Default
state AlgorithmState

Mutable algorithm state (modified in place)

required
params dict

Problem parameters dictionary (may change between steps)

required
settings Config

Configuration object (may change between steps)

required

Returns:

Type Description
bool

True if convergence criteria are satisfied, False otherwise.

Source code in openscvx/algorithms/base.py
@abstractmethod
def step(
    self,
    state: AlgorithmState,
    params: dict,
    settings: "Config",
) -> bool:
    """Execute one iteration of the SCP algorithm.

    This method solves a single convex subproblem, updates the algorithm
    state in place, and returns whether convergence criteria are met.

    Uses stored infrastructure (solver, discretization_solver, etc.) with
    per-step params and settings to support runtime modifications.

    Args:
        state: Mutable algorithm state (modified in place)
        params: Problem parameters dictionary (may change between steps)
        settings: Configuration object (may change between steps)

    Returns:
        True if convergence criteria are satisfied, False otherwise.
    """
    ...

AlgorithmState dataclass

Mutable state for SCP iterations.

This dataclass holds all state that changes during the solve process. It stores only the evolving trajectory arrays, not the full State/Control objects which contain immutable configuration metadata.

Trajectory arrays are stored in history lists, with the current guess accessed via properties that return the latest entry.

A fresh instance is created for each solve, enabling easy reset functionality.
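The history-list storage with latest-entry properties can be sketched in isolation (HistoryState below is a simplified mimic written for this note, not the actual class):

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class HistoryState:
    """Simplified mimic of AlgorithmState's history-list storage."""
    X: List[np.ndarray] = field(default_factory=list)

    @property
    def x(self) -> np.ndarray:
        # The current guess is always the latest accepted iterate.
        return self.X[-1]


state = HistoryState(X=[np.zeros(3)])  # initial guess seeds the history
state.X.append(np.ones(3))             # accepting an iterate appends to it
print(state.x)                         # [1. 1. 1.] -- latest entry
print(len(state.X))                    # full history kept for diagnostics
```

Since every iterate is appended rather than overwritten, earlier guesses stay available for convergence diagnostics at the cost of extra memory.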

Attributes:

Name Type Description
k int

Current iteration number (starts at 1)

J_tr float

Current trust region cost

J_vb float

Current virtual buffer cost

J_vc float

Current virtual control cost

lam_prox float

Current trust region weight (may adapt during solve)

lam_cost float

Current cost weight (may relax during solve)

lam_vc Union[float, ndarray]

Current virtual control penalty weight

lam_vb float

Current virtual buffer penalty weight

n_x int

Number of states (for unpacking V vectors)

n_u int

Number of controls (for unpacking V vectors)

N int

Number of trajectory nodes (for unpacking V vectors)

X List[ndarray]

List of state trajectory iterates

U List[ndarray]

List of control trajectory iterates

discretizations List[DiscretizationResult]

List of unpacked discretization results

VC_history List[ndarray]

List of virtual control history

TR_history List[ndarray]

List of trust region history

lam_vc_history List[Union[float, ndarray]]

History of virtual control penalty weights

lam_cost_history List[float]

History of cost weights

lam_vb_history List[float]

History of virtual buffer penalty weights

lam_prox_history List[float]

History of trust region weights

J_nonlin_history List[float]

History of nonlinear cost values per accepted iterate

J_lin_history List[float]

History of linearized cost values per accepted iterate

pred_reduction_history List[float]

History of predicted cost reductions

actual_reduction_history List[float]

History of actual cost reductions

acceptance_ratio_history List[float]

History of step acceptance ratios

x_full List[ndarray]

List of fully propagated state trajectories

x_prop_full List[ndarray]

List of fully propagated states from the discretization

Source code in openscvx/algorithms/base.py
@dataclass
class AlgorithmState:
    """Mutable state for SCP iterations.

    This dataclass holds all state that changes during the solve process.
    It stores only the evolving trajectory arrays, not the full State/Control
    objects which contain immutable configuration metadata.

    Trajectory arrays are stored in history lists, with the current guess
    accessed via properties that return the latest entry.

    A fresh instance is created for each solve, enabling easy reset functionality.

    Attributes:
        k: Current iteration number (starts at 1)
        J_tr: Current trust region cost
        J_vb: Current virtual buffer cost
        J_vc: Current virtual control cost
        lam_prox: Current trust region weight (may adapt during solve)
        lam_cost: Current cost weight (may relax during solve)
        lam_vc: Current virtual control penalty weight
        lam_vb: Current virtual buffer penalty weight
        n_x: Number of states (for unpacking V vectors)
        n_u: Number of controls (for unpacking V vectors)
        N: Number of trajectory nodes (for unpacking V vectors)
        X: List of state trajectory iterates
        U: List of control trajectory iterates
        discretizations: List of unpacked discretization results
        VC_history: List of virtual control history
        TR_history: List of trust region history
        lam_vc_history: History of virtual control penalty weights
        lam_cost_history: History of cost weights
        lam_vb_history: History of virtual buffer penalty weights
        lam_prox_history: History of trust region weights
        J_nonlin_history: History of nonlinear cost values per accepted iterate
        J_lin_history: History of linearized cost values per accepted iterate
        pred_reduction_history: History of predicted cost reductions
        actual_reduction_history: History of actual cost reductions
        acceptance_ratio_history: History of step acceptance ratios
        x_full: List of fully propagated state trajectories
        x_prop_full: List of fully propagated states from the discretization
    """

    k: int
    J_tr: float
    J_vb: float
    J_vc: float
    n_x: int
    n_u: int
    N: int
    J_nonlin_history: List[float]
    J_lin_history: List[float]
    pred_reduction_history: List[float]
    actual_reduction_history: List[float]
    acceptance_ratio_history: List[float]
    X: List[np.ndarray] = field(default_factory=list)
    U: List[np.ndarray] = field(default_factory=list)
    discretizations: List[DiscretizationResult] = field(default_factory=list)
    VC_history: List[np.ndarray] = field(default_factory=list)
    TR_history: List[np.ndarray] = field(default_factory=list)
    lam_vc_history: List[Union[float, np.ndarray]] = field(default_factory=list)
    lam_cost_history: List[float] = field(default_factory=list)
    lam_vb_history: List[float] = field(default_factory=list)
    lam_prox_history: List[float] = field(default_factory=list)
    x_full: List[np.ndarray] = field(default_factory=list)
    x_prop_full: List[np.ndarray] = field(default_factory=list)

    def accept_solution(self, cand: CandidateIterate) -> None:
        """Accept the given candidate iterate by updating the state in place."""

        if cand.x is None or cand.u is None:
            raise ValueError(
                "No candidate iterate to accept. Expected algorithm to set "
                "`cand.x` and `cand.u` before calling accept_solution()."
            )

        self.X.append(cand.x)
        self.U.append(cand.u)

        if cand.V is not None:
            self.discretizations.append(
                DiscretizationResult.from_V(cand.V, n_x=self.n_x, n_u=self.n_u, N=self.N)
            )
        if cand.VC is not None:
            self.VC_history.append(cand.VC)
        if cand.TR is not None:
            self.TR_history.append(cand.TR)

        if cand.lam_vc is not None:
            self.lam_vc_history.append(cand.lam_vc)
        if cand.lam_cost is not None:
            self.lam_cost_history.append(cand.lam_cost)
        if cand.lam_vb is not None:
            self.lam_vb_history.append(cand.lam_vb)

        if cand.J_nonlin is not None:
            self.J_nonlin_history.append(cand.J_nonlin)
        if cand.J_lin is not None:
            self.J_lin_history.append(cand.J_lin)

    @property
    def x(self) -> np.ndarray:
        """Get current state trajectory array.

        Returns:
            Current state trajectory guess (latest entry in history), shape (N, n_states)
        """
        return self.X[-1]

    @property
    def u(self) -> np.ndarray:
        """Get current control trajectory array.

        Returns:
            Current control trajectory guess (latest entry in history), shape (N, n_controls)
        """
        return self.U[-1]

    def add_discretization(self, V: np.ndarray) -> None:
        """Append a raw discretization matrix as an unpacked result."""
        self.discretizations.append(
            DiscretizationResult.from_V(V, n_x=self.n_x, n_u=self.n_u, N=self.N)
        )

    @property
    def V_history(self) -> List[np.ndarray]:
        """Backward-compatible view of raw discretization matrices.

        Note:
            This is a read-only view. Internal code should prefer
            ``state.discretizations``.
        """
        return [d.V for d in self.discretizations]

    def x_prop(self, index: int = -1) -> np.ndarray:
        """Extract propagated state trajectory from the discretization history.

        Args:
            index: Index into V_history (default: -1 for latest entry)

        Returns:
            Propagated state trajectory x_prop with shape (N-1, n_x), or None if no V_history

        Example:
            After running an iteration, access the propagated states::

                problem.step()
                x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
                x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].x_prop

    def A_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized state transition matrix from discretizations.

        Args:
            index: Index into V_history (default: -1 for latest entry)

        Returns:
            Discretized state Jacobian A_d with shape (N-1, n_x, n_x), or None if no V_history

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
                A_d_prev = problem.state.A_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].A_d

    def B_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized control influence matrix (current node).

        Args:
            index: Index into discretization history (default: -1 for latest entry)

        Returns:
            Discretized control Jacobian B_d with shape (N-1, n_x, n_u), or None if empty.

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
                B_d_prev = problem.state.B_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].B_d

    def C_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized control influence matrix (next node).

        Args:
            index: Index into discretization history (default: -1 for latest entry)

        Returns:
            Discretized control Jacobian C_d with shape (N-1, n_x, n_u), or None if empty.

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
                C_d_prev = problem.state.C_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].C_d

    @property
    def lam_prox(self) -> float:
        """Get current trust region weight.

        Returns:
            Current trust region weight (latest entry in lam_prox_history)
        """
        if not self.lam_prox_history:
            raise ValueError("lam_prox_history is empty. Initialize state using from_settings().")
        return self.lam_prox_history[-1]

    @property
    def lam_cost(self) -> float:
        """Get current cost weight.

        Returns:
            Current cost weight (latest entry in lam_cost_history)
        """
        if not self.lam_cost_history:
            raise ValueError("lam_cost_history is empty. Initialize state using from_settings().")
        return self.lam_cost_history[-1]

    @property
    def lam_vc(self) -> Union[float, np.ndarray]:
        """Get current virtual control penalty weight.

        Returns:
            Current virtual control penalty weight (latest entry in lam_vc_history)
        """
        if not self.lam_vc_history:
            raise ValueError("lam_vc_history is empty. Initialize state using from_settings().")
        return self.lam_vc_history[-1]

    @property
    def lam_vb(self) -> float:
        """Get current virtual buffer penalty weight.

        Returns:
            Current virtual buffer penalty weight (latest entry in lam_vb_history)
        """
        if not self.lam_vb_history:
            raise ValueError("lam_vb_history is empty. Initialize state using from_settings().")
        return self.lam_vb_history[-1]

    @classmethod
    def from_settings(cls, settings: "Config") -> "AlgorithmState":
        """Create initial algorithm state from configuration.

        Copies only the trajectory arrays from settings, leaving all metadata
        (bounds, boundary conditions, etc.) in the original settings object.

        Args:
            settings: Configuration object containing initial guesses and SCP parameters

        Returns:
            Fresh AlgorithmState initialized from settings with copied arrays
        """
        return cls(
            k=1,
            J_tr=1e2,
            J_vb=1e2,
            J_vc=1e2,
            n_x=settings.sim.n_states,
            n_u=settings.sim.n_controls,
            N=settings.scp.n,
            J_nonlin_history=[],
            J_lin_history=[],
            pred_reduction_history=[],
            actual_reduction_history=[],
            acceptance_ratio_history=[],
            X=[settings.sim.x.guess.copy()],
            U=[settings.sim.u.guess.copy()],
            discretizations=[],
            VC_history=[],
            TR_history=[],
            lam_vc_history=[settings.scp.lam_vc],
            lam_cost_history=[settings.scp.lam_cost],
            lam_vb_history=[settings.scp.lam_vb],
            lam_prox_history=[settings.scp.lam_prox],
        )
V_history: List[np.ndarray] property

Backward-compatible view of raw discretization matrices.

Note

This is a read-only view. Internal code should prefer state.discretizations.

lam_cost: float property

Get current cost weight.

Returns:

Type Description
float

Current cost weight (latest entry in lam_cost_history)

lam_prox: float property

Get current trust region weight.

Returns:

Type Description
float

Current trust region weight (latest entry in lam_prox_history)

lam_vb: float property

Get current virtual buffer penalty weight.

Returns:

Type Description
float

Current virtual buffer penalty weight (latest entry in lam_vb_history)

lam_vc: Union[float, np.ndarray] property

Get current virtual control penalty weight.

Returns:

Type Description
Union[float, ndarray]

Current virtual control penalty weight (latest entry in lam_vc_history)

u: np.ndarray property

Get current control trajectory array.

Returns:

Type Description
ndarray

Current control trajectory guess (latest entry in history), shape (N, n_controls)

x: np.ndarray property

Get current state trajectory array.

Returns:

Type Description
ndarray

Current state trajectory guess (latest entry in history), shape (N, n_states)

A_d(index: int = -1) -> np.ndarray

Extract discretized state transition matrix from discretizations.

Parameters:

Name Type Description Default
index int

Index into V_history (default: -1 for latest entry)

-1

Returns:

Type Description
ndarray

Discretized state Jacobian A_d with shape (N-1, n_x, n_x), or None if no V_history

Example

After running an iteration, access linearization matrices:

problem.step()
A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
A_d_prev = problem.state.A_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def A_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized state transition matrix from discretizations.

    Args:
        index: Index into V_history (default: -1 for latest entry)

    Returns:
        Discretized state Jacobian A_d with shape (N-1, n_x, n_x), or None if no V_history

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
            A_d_prev = problem.state.A_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].A_d
B_d(index: int = -1) -> np.ndarray

Extract discretized control influence matrix (current node).

Parameters:

Name Type Description Default
index int

Index into discretization history (default: -1 for latest entry)

-1

Returns:

Type Description
ndarray

Discretized control Jacobian B_d with shape (N-1, n_x, n_u), or None if empty.

Example

After running an iteration, access linearization matrices:

problem.step()
B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
B_d_prev = problem.state.B_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def B_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized control influence matrix (current node).

    Args:
        index: Index into discretization history (default: -1 for latest entry)

    Returns:
        Discretized control Jacobian B_d with shape (N-1, n_x, n_u), or None if empty.

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
            B_d_prev = problem.state.B_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].B_d
C_d(index: int = -1) -> np.ndarray

Extract discretized control influence matrix (next node).

Parameters:

Name Type Description Default
index int

Index into discretization history (default: -1 for latest entry)

-1

Returns:

Type Description
ndarray

Discretized control Jacobian C_d with shape (N-1, n_x, n_u), or None if empty.

Example

After running an iteration, access linearization matrices:

problem.step()
C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
C_d_prev = problem.state.C_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def C_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized control influence matrix (next node).

    Args:
        index: Index into discretization history (default: -1 for latest entry)

    Returns:
        Discretized control Jacobian C_d with shape (N-1, n_x, n_u), or None if empty.

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
            C_d_prev = problem.state.C_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].C_d
accept_solution(cand: CandidateIterate) -> None

Accept the given candidate iterate by updating the state in place.

Source code in openscvx/algorithms/base.py
def accept_solution(self, cand: CandidateIterate) -> None:
    """Accept the given candidate iterate by updating the state in place."""

    if cand.x is None or cand.u is None:
        raise ValueError(
            "No candidate iterate to accept. Expected algorithm to set "
            "`cand.x` and `cand.u` before calling accept_solution()."
        )

    self.X.append(cand.x)
    self.U.append(cand.u)

    if cand.V is not None:
        self.discretizations.append(
            DiscretizationResult.from_V(cand.V, n_x=self.n_x, n_u=self.n_u, N=self.N)
        )
    if cand.VC is not None:
        self.VC_history.append(cand.VC)
    if cand.TR is not None:
        self.TR_history.append(cand.TR)

    if cand.lam_vc is not None:
        self.lam_vc_history.append(cand.lam_vc)
    if cand.lam_cost is not None:
        self.lam_cost_history.append(cand.lam_cost)
    if cand.lam_vb is not None:
        self.lam_vb_history.append(cand.lam_vb)

    if cand.J_nonlin is not None:
        self.J_nonlin_history.append(cand.J_nonlin)
    if cand.J_lin is not None:
        self.J_lin_history.append(cand.J_lin)
add_discretization(V: np.ndarray) -> None

Append a raw discretization matrix as an unpacked result.

Source code in openscvx/algorithms/base.py
def add_discretization(self, V: np.ndarray) -> None:
    """Append a raw discretization matrix as an unpacked result."""
    self.discretizations.append(
        DiscretizationResult.from_V(V, n_x=self.n_x, n_u=self.n_u, N=self.N)
    )
from_settings(settings: Config) -> AlgorithmState classmethod

Create initial algorithm state from configuration.

Copies only the trajectory arrays from settings, leaving all metadata (bounds, boundary conditions, etc.) in the original settings object.

Parameters:

Name Type Description Default
settings Config

Configuration object containing initial guesses and SCP parameters

required

Returns:

Type Description
AlgorithmState

Fresh AlgorithmState initialized from settings with copied arrays

Source code in openscvx/algorithms/base.py
@classmethod
def from_settings(cls, settings: "Config") -> "AlgorithmState":
    """Create initial algorithm state from configuration.

    Copies only the trajectory arrays from settings, leaving all metadata
    (bounds, boundary conditions, etc.) in the original settings object.

    Args:
        settings: Configuration object containing initial guesses and SCP parameters

    Returns:
        Fresh AlgorithmState initialized from settings with copied arrays
    """
    return cls(
        k=1,
        J_tr=1e2,
        J_vb=1e2,
        J_vc=1e2,
        n_x=settings.sim.n_states,
        n_u=settings.sim.n_controls,
        N=settings.scp.n,
        J_nonlin_history=[],
        J_lin_history=[],
        pred_reduction_history=[],
        actual_reduction_history=[],
        acceptance_ratio_history=[],
        X=[settings.sim.x.guess.copy()],
        U=[settings.sim.u.guess.copy()],
        discretizations=[],
        VC_history=[],
        TR_history=[],
        lam_vc_history=[settings.scp.lam_vc],
        lam_cost_history=[settings.scp.lam_cost],
        lam_vb_history=[settings.scp.lam_vb],
        lam_prox_history=[settings.scp.lam_prox],
    )
x_prop(index: int = -1) -> np.ndarray

Extract propagated state trajectory from the discretization history.

Parameters:

Name Type Description Default
index int

Index into the discretization history (default: -1 for latest entry)

-1

Returns:

Type Description
ndarray

Propagated state trajectory x_prop with shape (N-1, n_x), or None if no discretization results have been recorded

Example

After running an iteration, access the propagated states::

problem.step()
x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def x_prop(self, index: int = -1) -> np.ndarray:
    """Extract propagated state trajectory from the discretization history.

    Args:
        index: Index into the discretization history (default: -1 for latest entry)

    Returns:
        Propagated state trajectory x_prop with shape (N-1, n_x), or None if no
        discretization results have been recorded

    Example:
        After running an iteration, access the propagated states::

            problem.step()
            x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
            x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].x_prop

AugmentedLagrangian

Bases: AutotuningBase

Augmented Lagrangian method for autotuning SCP weights.

This method uses Lagrange multipliers and penalty parameters to handle constraints. The method:

- Updates Lagrange multipliers based on constraint violations
- Increases penalty parameters when constraints are violated
- Decreases penalty parameters when constraints are satisfied
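The multiplier and penalty updates described above can be exercised in isolation. The following is an illustrative numpy sketch of a generic augmented-Lagrangian step for inequality constraints g(x) <= 0, not the library's exact update rule; the helper name `al_update` and its parameters are hypothetical:

```python
import numpy as np

def al_update(lam, rho, g, tol=1e-6, gamma=2.0, rho_max=1e6):
    """One generic multiplier/penalty update given constraint values g(x) <= 0."""
    # Multipliers grow where the constraint is violated, clipped at zero
    lam_new = np.maximum(0.0, lam + rho * g)
    if np.max(g) > tol:
        # Constraints violated: increase the penalty parameter
        rho_new = min(rho_max, gamma * rho)
    else:
        # Constraints satisfied: leave the penalty parameter alone
        rho_new = rho
    return lam_new, rho_new

lam, rho = np.zeros(2), 1.0
lam, rho = al_update(lam, rho, np.array([0.3, -0.1]))  # first constraint violated
```

Here only the first constraint is violated, so its multiplier grows while the second stays at zero, and the penalty parameter doubles.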

Source code in openscvx/algorithms/autotuning.py
class AugmentedLagrangian(AutotuningBase):
    """Augmented Lagrangian method for autotuning SCP weights.

    This method uses Lagrange multipliers and penalty parameters to handle
    constraints. The method:
    - Updates Lagrange multipliers based on constraint violations
    - Increases penalty parameters when constraints are violated
    - Decreases penalty parameters when constraints are satisfied
    """

    COLUMNS: List[Column] = [
        Column("J_nonlin", "J_nonlin", 8, "{: .1e}", color_J_nonlin, Verbosity.STANDARD),
        Column("J_lin", "J_lin", 8, "{: .1e}", color_J_nonlin, Verbosity.STANDARD),
        Column("pred_reduction", "pred_red", 9, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column("actual_reduction", "act_red", 9, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column(
            "acceptance_ratio",
            "acc_ratio",
            9,
            "{: .2e}",
            color_acceptance_ratio,
            Verbosity.STANDARD,
        ),
        Column("lam_prox", "lam_prox", 8, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column("adaptive_state", "Adaptive", 16, "{}", color_adaptive_state, Verbosity.FULL),
    ]

    def __init__(
        self,
        rho_init: float = 1.0,
        rho_max: float = 1e6,
        gamma_1: float = 2.0,
        gamma_2: float = 0.5,
        eta_0: float = 1e-2,
        eta_1: float = 1e-1,
        eta_2: float = 0.8,
        ep: float = 0.5,
        eta_lambda: float = 1e0,
        lam_vc_max: float = 1e5,
        lam_prox_min: float = 1e-3,
        lam_prox_max: float = 2e5,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        """Initialize Augmented Lagrangian autotuning parameters.

        All parameters have defaults and can be modified after instantiation
        via attribute access (e.g., ``autotuner.rho_max = 1e7``).

        Args:
            rho_init: Initial penalty parameter for constraints. Defaults to 1.0.
            rho_max: Maximum penalty parameter. Defaults to 1e6.
            gamma_1: Factor to increase trust region weight when ratio is low.
                Defaults to 2.0.
            gamma_2: Factor to decrease trust region weight when ratio is high.
                Defaults to 0.5.
            eta_0: Acceptance ratio threshold below which solution is rejected.
                Defaults to 1e-2.
            eta_1: Threshold above which solution is accepted with constant weight.
                Defaults to 1e-1.
            eta_2: Threshold above which solution is accepted with lower weight.
                Defaults to 0.8.
            ep: Threshold for virtual control weight update (nu > ep vs nu <= ep).
                Defaults to 0.5.
            eta_lambda: Step size for virtual control weight update. Defaults to 1e0.
            lam_vc_max: Maximum virtual control penalty weight. Defaults to 1e5.
            lam_prox_min: Minimum trust region (proximal) weight. Defaults to 1e-3.
            lam_prox_max: Maximum trust region (proximal) weight. Defaults to 2e5.
            lam_cost_drop: Iteration after which cost relaxation applies (-1 = never).
                Defaults to -1.
            lam_cost_relax: Factor applied to lam_cost after lam_cost_drop.
                Defaults to 1.0.
        """
        self.rho_init = rho_init
        self.rho_max = rho_max
        self.gamma_1 = gamma_1
        self.gamma_2 = gamma_2
        self.eta_0 = eta_0
        self.eta_1 = eta_1
        self.eta_2 = eta_2
        self.ep = ep
        self.eta_lambda = eta_lambda
        self.lam_vc_max = lam_vc_max
        self.lam_prox_min = lam_prox_min
        self.lam_prox_max = lam_prox_max
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
    ) -> str:
        """Update SCP weights and cost parameters based on iteration number.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate whose costs and weights are updated
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters

        Returns:
            str: Adaptive state string describing the update action (e.g., "Accept Lower")
        """
        # Calculate nonlinear penalty for current candidate
        nonlinear_cost, nonlinear_penalty, nodal_penalty = self.calculate_nonlinear_penalty(
            candidate.x_prop,
            candidate.x,
            candidate.u,
            state.lam_vc,
            state.lam_vb,
            state.lam_cost,
            nodal_constraints,
            params,
            settings,
        )

        candidate.J_nonlin = nonlinear_cost + nonlinear_penalty + nodal_penalty

        # Update cost relaxation parameter after cost_drop iterations
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = settings.scp.lam_cost

        lam_prox_k = deepcopy(state.lam_prox)

        if state.k > 1:
            prev_nonlinear_cost, prev_nonlinear_penalty, prev_nodal_penalty = (
                self.calculate_nonlinear_penalty(
                    state.x_prop(),
                    state.x,
                    state.u,
                    state.lam_vc,
                    state.lam_vb,
                    state.lam_cost,
                    nodal_constraints,
                    params,
                    settings,
                )
            )

            J_nonlin_prev = prev_nonlinear_cost + prev_nonlinear_penalty + prev_nodal_penalty

            actual_reduction = J_nonlin_prev - candidate.J_nonlin
            predicted_reduction = J_nonlin_prev - candidate.J_lin

            if predicted_reduction == 0:
                raise ValueError("Predicted reduction is 0.")

            rho = actual_reduction / predicted_reduction

            state.pred_reduction_history.append(predicted_reduction)
            state.actual_reduction_history.append(actual_reduction)
            state.acceptance_ratio_history.append(rho)

            if rho < self.eta_0:
                # Reject solution and increase weight
                lam_prox_k1 = min(self.lam_prox_max, self.gamma_1 * lam_prox_k)
                state.lam_prox_history.append(lam_prox_k1)
                adaptive_state = "Reject Higher"
            elif self.eta_0 <= rho < self.eta_1:
                # Accept solution with higher weight
                lam_prox_k1 = min(self.lam_prox_max, self.gamma_1 * lam_prox_k)
                state.lam_prox_history.append(lam_prox_k1)
                state.accept_solution(candidate)
                adaptive_state = "Accept Higher"
            elif self.eta_1 <= rho < self.eta_2:
                # Accept solution with constant weight
                lam_prox_k1 = lam_prox_k
                state.lam_prox_history.append(lam_prox_k1)
                state.accept_solution(candidate)
                adaptive_state = "Accept Constant"
            else:
                # Accept solution with lower weight
                lam_prox_k1 = max(self.lam_prox_min, self.gamma_2 * lam_prox_k)
                state.lam_prox_history.append(lam_prox_k1)
                state.accept_solution(candidate)
                adaptive_state = "Accept Lower"

            # Update virtual control weight matrix
            nu = (settings.sim.inv_S_x @ abs(candidate.x[1:] - candidate.x_prop).T).T

            # Vectorized update: use mask to select between two update rules
            mask = nu > self.ep
            # when abs(nu) > ep
            scale = self.eta_lambda * (1 / (2 * state.lam_prox))
            case1 = state.lam_vc + nu * scale
            # when abs(nu) <= ep
            case2 = state.lam_vc + (nu**2) / self.ep * scale
            vc_new = np.where(mask, case1, case2)
            vc_new = np.minimum(self.lam_vc_max, vc_new)
            candidate.lam_vc = vc_new
            candidate.lam_vb = settings.scp.lam_vb

        else:
            state.lam_prox_history.append(lam_prox_k)
            candidate.lam_vc = settings.scp.lam_vc
            candidate.lam_vb = settings.scp.lam_vb
            state.accept_solution(candidate)
            adaptive_state = "Initial"

        return adaptive_state
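The branch on `rho` above is a standard trust-region acceptance schedule. A self-contained sketch of the same decision table (the helper `classify_step` is hypothetical; the thresholds mirror the `eta_0`/`eta_1`/`eta_2` defaults):

```python
def classify_step(rho, eta_0=1e-2, eta_1=1e-1, eta_2=0.8):
    """Map an acceptance ratio to (accepted, proximal-weight action),
    mirroring the eta_0/eta_1/eta_2 branches above."""
    if rho < eta_0:
        return False, "increase"  # actual reduction far below prediction: reject, tighten
    if rho < eta_1:
        return True, "increase"   # marginal agreement: accept but tighten
    if rho < eta_2:
        return True, "constant"   # adequate agreement: keep the weight
    return True, "decrease"       # excellent agreement: relax the trust region
```

Note that "increase" here means a larger proximal weight `lam_prox`, i.e. a smaller effective trust region.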
__init__(rho_init: float = 1.0, rho_max: float = 1000000.0, gamma_1: float = 2.0, gamma_2: float = 0.5, eta_0: float = 0.01, eta_1: float = 0.1, eta_2: float = 0.8, ep: float = 0.5, eta_lambda: float = 1.0, lam_vc_max: float = 100000.0, lam_prox_min: float = 0.001, lam_prox_max: float = 200000.0, lam_cost_drop: int = -1, lam_cost_relax: float = 1.0)

Initialize Augmented Lagrangian autotuning parameters.

All parameters have defaults and can be modified after instantiation via attribute access (e.g., autotuner.rho_max = 1e7).

Parameters:

Name Type Description Default
rho_init float

Initial penalty parameter for constraints. Defaults to 1.0.

1.0
rho_max float

Maximum penalty parameter. Defaults to 1e6.

1000000.0
gamma_1 float

Factor to increase trust region weight when ratio is low. Defaults to 2.0.

2.0
gamma_2 float

Factor to decrease trust region weight when ratio is high. Defaults to 0.5.

0.5
eta_0 float

Acceptance ratio threshold below which solution is rejected. Defaults to 1e-2.

0.01
eta_1 float

Threshold above which solution is accepted with constant weight. Defaults to 1e-1.

0.1
eta_2 float

Threshold above which solution is accepted with lower weight. Defaults to 0.8.

0.8
ep float

Threshold for virtual control weight update (nu > ep vs nu <= ep). Defaults to 0.5.

0.5
eta_lambda float

Step size for virtual control weight update. Defaults to 1e0.

1.0
lam_vc_max float

Maximum virtual control penalty weight. Defaults to 1e5.

100000.0
lam_prox_min float

Minimum trust region (proximal) weight. Defaults to 1e-3.

0.001
lam_prox_max float

Maximum trust region (proximal) weight. Defaults to 2e5.

200000.0
lam_cost_drop int

Iteration after which cost relaxation applies (-1 = never). Defaults to -1.

-1
lam_cost_relax float

Factor applied to lam_cost after lam_cost_drop. Defaults to 1.0.

1.0
Source code in openscvx/algorithms/autotuning.py
def __init__(
    self,
    rho_init: float = 1.0,
    rho_max: float = 1e6,
    gamma_1: float = 2.0,
    gamma_2: float = 0.5,
    eta_0: float = 1e-2,
    eta_1: float = 1e-1,
    eta_2: float = 0.8,
    ep: float = 0.5,
    eta_lambda: float = 1e0,
    lam_vc_max: float = 1e5,
    lam_prox_min: float = 1e-3,
    lam_prox_max: float = 2e5,
    lam_cost_drop: int = -1,
    lam_cost_relax: float = 1.0,
):
    """Initialize Augmented Lagrangian autotuning parameters.

    All parameters have defaults and can be modified after instantiation
    via attribute access (e.g., ``autotuner.rho_max = 1e7``).

    Args:
        rho_init: Initial penalty parameter for constraints. Defaults to 1.0.
        rho_max: Maximum penalty parameter. Defaults to 1e6.
        gamma_1: Factor to increase trust region weight when ratio is low.
            Defaults to 2.0.
        gamma_2: Factor to decrease trust region weight when ratio is high.
            Defaults to 0.5.
        eta_0: Acceptance ratio threshold below which solution is rejected.
            Defaults to 1e-2.
        eta_1: Threshold above which solution is accepted with constant weight.
            Defaults to 1e-1.
        eta_2: Threshold above which solution is accepted with lower weight.
            Defaults to 0.8.
        ep: Threshold for virtual control weight update (nu > ep vs nu <= ep).
            Defaults to 0.5.
        eta_lambda: Step size for virtual control weight update. Defaults to 1e0.
        lam_vc_max: Maximum virtual control penalty weight. Defaults to 1e5.
        lam_prox_min: Minimum trust region (proximal) weight. Defaults to 1e-3.
        lam_prox_max: Maximum trust region (proximal) weight. Defaults to 2e5.
        lam_cost_drop: Iteration after which cost relaxation applies (-1 = never).
            Defaults to -1.
        lam_cost_relax: Factor applied to lam_cost after lam_cost_drop.
            Defaults to 1.0.
    """
    self.rho_init = rho_init
    self.rho_max = rho_max
    self.gamma_1 = gamma_1
    self.gamma_2 = gamma_2
    self.eta_0 = eta_0
    self.eta_1 = eta_1
    self.eta_2 = eta_2
    self.ep = ep
    self.eta_lambda = eta_lambda
    self.lam_vc_max = lam_vc_max
    self.lam_prox_min = lam_prox_min
    self.lam_prox_max = lam_prox_max
    self.lam_cost_drop = lam_cost_drop
    self.lam_cost_relax = lam_cost_relax
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict) -> str

Update SCP weights and cost parameters based on iteration state.

Parameters:

Name Type Description Default
state AlgorithmState

Solver state containing current weight values (mutated in place)

required
candidate CandidateIterate

Candidate iterate whose costs and weights are updated

required
nodal_constraints LoweredJaxConstraints

Lowered JAX constraints

required
settings Config

Configuration object containing adaptation parameters

required
params dict

Dictionary of problem parameters

required

Returns:

Name Type Description
str str

Adaptive state string describing the update action (e.g., "Accept Lower")
Source code in openscvx/algorithms/autotuning.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
) -> str:
    """Update SCP weights and cost parameters based on iteration number.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate whose costs and weights are updated
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters

    Returns:
        str: Adaptive state string describing the update action (e.g., "Accept Lower")
    """
    # Calculate nonlinear penalty for current candidate
    nonlinear_cost, nonlinear_penalty, nodal_penalty = self.calculate_nonlinear_penalty(
        candidate.x_prop,
        candidate.x,
        candidate.u,
        state.lam_vc,
        state.lam_vb,
        state.lam_cost,
        nodal_constraints,
        params,
        settings,
    )

    candidate.J_nonlin = nonlinear_cost + nonlinear_penalty + nodal_penalty

    # Update cost relaxation parameter after cost_drop iterations
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = settings.scp.lam_cost

    lam_prox_k = deepcopy(state.lam_prox)

    if state.k > 1:
        prev_nonlinear_cost, prev_nonlinear_penalty, prev_nodal_penalty = (
            self.calculate_nonlinear_penalty(
                state.x_prop(),
                state.x,
                state.u,
                state.lam_vc,
                state.lam_vb,
                state.lam_cost,
                nodal_constraints,
                params,
                settings,
            )
        )

        J_nonlin_prev = prev_nonlinear_cost + prev_nonlinear_penalty + prev_nodal_penalty

        actual_reduction = J_nonlin_prev - candidate.J_nonlin
        predicted_reduction = J_nonlin_prev - candidate.J_lin

        if predicted_reduction == 0:
            raise ValueError("Predicted reduction is 0.")

        rho = actual_reduction / predicted_reduction

        state.pred_reduction_history.append(predicted_reduction)
        state.actual_reduction_history.append(actual_reduction)
        state.acceptance_ratio_history.append(rho)

        if rho < self.eta_0:
            # Reject solution and increase weight
            lam_prox_k1 = min(self.lam_prox_max, self.gamma_1 * lam_prox_k)
            state.lam_prox_history.append(lam_prox_k1)
            adaptive_state = "Reject Higher"
        elif self.eta_0 <= rho < self.eta_1:
            # Accept solution with higher weight
            lam_prox_k1 = min(self.lam_prox_max, self.gamma_1 * lam_prox_k)
            state.lam_prox_history.append(lam_prox_k1)
            state.accept_solution(candidate)
            adaptive_state = "Accept Higher"
        elif self.eta_1 <= rho < self.eta_2:
            # Accept solution with constant weight
            lam_prox_k1 = lam_prox_k
            state.lam_prox_history.append(lam_prox_k1)
            state.accept_solution(candidate)
            adaptive_state = "Accept Constant"
        else:
            # Accept solution with lower weight
            lam_prox_k1 = max(self.lam_prox_min, self.gamma_2 * lam_prox_k)
            state.lam_prox_history.append(lam_prox_k1)
            state.accept_solution(candidate)
            adaptive_state = "Accept Lower"

        # Update virtual control weight matrix
        nu = (settings.sim.inv_S_x @ abs(candidate.x[1:] - candidate.x_prop).T).T

        # Vectorized update: use mask to select between two update rules
        mask = nu > self.ep
        # when abs(nu) > ep
        scale = self.eta_lambda * (1 / (2 * state.lam_prox))
        case1 = state.lam_vc + nu * scale
        # when abs(nu) <= ep
        case2 = state.lam_vc + (nu**2) / self.ep * scale
        vc_new = np.where(mask, case1, case2)
        vc_new = np.minimum(self.lam_vc_max, vc_new)
        candidate.lam_vc = vc_new
        candidate.lam_vb = settings.scp.lam_vb

    else:
        state.lam_prox_history.append(lam_prox_k)
        candidate.lam_vc = settings.scp.lam_vc
        candidate.lam_vb = settings.scp.lam_vb
        state.accept_solution(candidate)
        adaptive_state = "Initial"

    return adaptive_state
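The masked `np.where` update of the virtual control weights above can be run on its own. A minimal sketch with made-up numbers (the names `lam_vc`, `nu`, `ep`, `eta_lambda`, and `lam_prox` follow the source; the values are purely illustrative):

```python
import numpy as np

lam_vc = np.full(3, 10.0)       # current virtual-control weights
nu = np.array([0.8, 0.4, 0.1])  # scaled dynamics defects (nonnegative by construction)
ep, eta_lambda, lam_prox, lam_vc_max = 0.5, 1.0, 2.0, 1e5

scale = eta_lambda / (2 * lam_prox)
case1 = lam_vc + nu * scale            # linear growth where nu > ep (large defects)
case2 = lam_vc + (nu**2) / ep * scale  # quadratic growth where nu <= ep (small defects)
vc_new = np.minimum(lam_vc_max, np.where(nu > ep, case1, case2))
```

The quadratic branch grows weights much more slowly for small defects, so nearly-feasible states are not over-penalized.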

AutotuningBase

Bases: ABC

Base class for autotuning methods in SCP algorithms.

This class provides common functionality for calculating costs and penalties that are shared across different autotuning strategies (e.g., Penalized Trust Region, Augmented Lagrangian).

Subclasses should implement the update_weights method to define their specific weight update strategy.

Class Attributes

COLUMNS: List of Column specs for autotuner-specific metrics to display. Subclasses override this to add their own columns.

Source code in openscvx/algorithms/autotuning.py
class AutotuningBase(ABC):
    """Base class for autotuning methods in SCP algorithms.

    This class provides common functionality for calculating costs and penalties
    that are shared across different autotuning strategies (e.g., Penalized Trust
    Region, Augmented Lagrangian).

    Subclasses should implement the `update_weights` method to define their specific
    weight update strategy.

    Class Attributes:
        COLUMNS: List of Column specs for autotuner-specific metrics to display.
            Subclasses override this to add their own columns.
    """

    COLUMNS: List[Column] = []

    @staticmethod
    def calculate_cost_from_state(x: np.ndarray, settings: Config) -> float:
        """Calculate cost from state vector based on final_type and initial_type.

        Args:
            x: State trajectory array (n_nodes, n_states)
            settings: Configuration object containing state types

        Returns:
            float: Computed cost
        """
        scaled_x = (settings.sim.inv_S_x @ (x.T - settings.sim.c_x[:, None])).T
        cost = 0.0
        for i in range(settings.sim.n_states):
            if settings.sim.x.final_type[i] == "Minimize":
                cost += scaled_x[-1, i]
            if settings.sim.x.final_type[i] == "Maximize":
                cost -= scaled_x[-1, i]
            if settings.sim.x.initial_type[i] == "Minimize":
                cost += scaled_x[0, i]
            if settings.sim.x.initial_type[i] == "Maximize":
                cost -= scaled_x[0, i]
        return cost

    @staticmethod
    def calculate_nonlinear_penalty(
        x_prop: np.ndarray,
        x_bar: np.ndarray,
        u_bar: np.ndarray,
        lam_vc: np.ndarray,
        lam_vb: float,
        lam_cost: float,
        nodal_constraints: "LoweredJaxConstraints",
        params: dict,
        settings: Config,
    ) -> Tuple[float, float, float]:
        """Calculate nonlinear penalty components.

        This method computes three penalty components:
        1. Cost penalty: weighted original cost
        2. Virtual control penalty: penalty for dynamics violations
        3. Nodal penalty: penalty for constraint violations

        Args:
            x_prop: Propagated state (n_nodes-1, n_states)
            x_bar: Previous iteration state (n_nodes, n_states)
            u_bar: Solution control (n_nodes, n_controls)
            lam_vc: Virtual control weight (scalar or matrix)
            lam_vb: Virtual buffer penalty weight (scalar)
            lam_cost: Cost relaxation parameter (scalar)
            nodal_constraints: Lowered JAX constraints
            params: Dictionary of problem parameters
            settings: Configuration object

        Returns:
            Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty):
                - nonlinear_cost: Weighted cost component
                - nonlinear_penalty: Virtual control penalty
                - nodal_penalty: Constraint violation penalty
        """
        nodal_penalty = 0.0

        # Evaluate nodal constraints
        for constraint in nodal_constraints.nodal:
            # Nodal constraint function is vmapped: func(x, u, node, params)
            # When called with arrays, it evaluates at all nodes
            g = constraint.func(x_bar, u_bar, 0, params)
            # Only sum violations at nodes where constraint is enforced
            if constraint.nodes is not None:
                # Filter to only specified nodes
                # Convert to numpy array for JAX compatibility
                nodes_array = np.array(constraint.nodes)
                g_filtered = g[nodes_array]
            else:
                # If no nodes specified, check all nodes
                g_filtered = g
            nodal_penalty += lam_vb * np.sum(np.maximum(0, g_filtered))

        # Evaluate cross-node constraints
        for constraint in nodal_constraints.cross_node:
            # Cross-node constraint function signature: func(X, U, params)
            # No node argument - operates on full trajectory
            g = constraint.func(x_bar, u_bar, params)
            # Cross-node constraints return scalar or array, sum all violations
            nodal_penalty += lam_vb * np.sum(np.maximum(0, g))

        cost = AutotuningBase.calculate_cost_from_state(x_bar, settings)
        x_diff = settings.sim.inv_S_x @ (x_bar[1:, :] - x_prop).T

        return lam_cost * cost, np.sum(lam_vc * np.abs(x_diff.T)), nodal_penalty

    @abstractmethod
    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
    ) -> str:
        """Update SCP weights and cost parameters based on iteration state.

        This method is called each iteration to adapt weights based on the
        current solution quality and constraint satisfaction.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate whose costs and weights are updated
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters

        Returns:
            str: Adaptive state string describing the update action (e.g., "Accept Lower")
        """

        pass
calculate_cost_from_state(x: np.ndarray, settings: Config) -> float staticmethod

Calculate cost from state vector based on final_type and initial_type.

Parameters:

Name Type Description Default
x ndarray

State trajectory array (n_nodes, n_states)

required
settings Config

Configuration object containing state types

required

Returns:

Name Type Description
float float

Computed cost

Source code in openscvx/algorithms/autotuning.py
@staticmethod
def calculate_cost_from_state(x: np.ndarray, settings: Config) -> float:
    """Calculate cost from state vector based on final_type and initial_type.

    Args:
        x: State trajectory array (n_nodes, n_states)
        settings: Configuration object containing state types

    Returns:
        float: Computed cost
    """
    scaled_x = (settings.sim.inv_S_x @ (x.T - settings.sim.c_x[:, None])).T
    cost = 0.0
    for i in range(settings.sim.n_states):
        if settings.sim.x.final_type[i] == "Minimize":
            cost += scaled_x[-1, i]
        if settings.sim.x.final_type[i] == "Maximize":
            cost -= scaled_x[-1, i]
        if settings.sim.x.initial_type[i] == "Minimize":
            cost += scaled_x[0, i]
        if settings.sim.x.initial_type[i] == "Maximize":
            cost -= scaled_x[0, i]
    return cost
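Setting aside the affine scaling step, the type-driven accumulation above can be sketched as a standalone helper (the name `cost_from_types` is hypothetical; `final_type`/`initial_type` mirror the source):

```python
import numpy as np

def cost_from_types(scaled_x, final_type, initial_type):
    """Sum selected terminal/initial state components with signs chosen by
    their Minimize/Maximize tags, mirroring calculate_cost_from_state
    after scaling has been applied."""
    cost = 0.0
    for i, t in enumerate(final_type):
        if t == "Minimize":
            cost += scaled_x[-1, i]  # penalize large terminal value
        elif t == "Maximize":
            cost -= scaled_x[-1, i]  # reward large terminal value
    for i, t in enumerate(initial_type):
        if t == "Minimize":
            cost += scaled_x[0, i]
        elif t == "Maximize":
            cost -= scaled_x[0, i]
    return cost

x = np.array([[1.0, 2.0], [3.0, 4.0]])  # two nodes, two states
c = cost_from_types(x, final_type=["Minimize", "Maximize"], initial_type=[None, None])
```

With these tags, the final value of state 0 is minimized and the final value of state 1 is maximized, so the cost is `3.0 - 4.0`.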
calculate_nonlinear_penalty(x_prop: np.ndarray, x_bar: np.ndarray, u_bar: np.ndarray, lam_vc: np.ndarray, lam_vb: float, lam_cost: float, nodal_constraints: LoweredJaxConstraints, params: dict, settings: Config) -> Tuple[float, float, float] staticmethod

Calculate nonlinear penalty components.

This method computes three penalty components:

1. Cost penalty: weighted original cost
2. Virtual control penalty: penalty for dynamics violations
3. Nodal penalty: penalty for constraint violations

Parameters:

Name Type Description Default
x_prop ndarray

Propagated state (n_nodes-1, n_states)

required
x_bar ndarray

Previous iteration state (n_nodes, n_states)

required
u_bar ndarray

Solution control (n_nodes, n_controls)

required
lam_vc ndarray

Virtual control weight (scalar or matrix)

required
lam_vb float

Virtual buffer penalty weight (scalar)

required
lam_cost float

Cost relaxation parameter (scalar)

required
nodal_constraints LoweredJaxConstraints

Lowered JAX constraints

required
params dict

Dictionary of problem parameters

required
settings Config

Configuration object

required

Returns:

Type Description
Tuple[float, float, float]

Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty):

- nonlinear_cost: Weighted cost component
- nonlinear_penalty: Virtual control penalty
- nodal_penalty: Constraint violation penalty

Source code in openscvx/algorithms/autotuning.py
@staticmethod
def calculate_nonlinear_penalty(
    x_prop: np.ndarray,
    x_bar: np.ndarray,
    u_bar: np.ndarray,
    lam_vc: np.ndarray,
    lam_vb: float,
    lam_cost: float,
    nodal_constraints: "LoweredJaxConstraints",
    params: dict,
    settings: Config,
) -> Tuple[float, float, float]:
    """Calculate nonlinear penalty components.

    This method computes three penalty components:
    1. Cost penalty: weighted original cost
    2. Virtual control penalty: penalty for dynamics violations
    3. Nodal penalty: penalty for constraint violations

    Args:
        x_prop: Propagated state (n_nodes-1, n_states)
        x_bar: Previous iteration state (n_nodes, n_states)
        u_bar: Solution control (n_nodes, n_controls)
        lam_vc: Virtual control weight (scalar or matrix)
        lam_vb: Virtual buffer penalty weight (scalar)
        lam_cost: Cost relaxation parameter (scalar)
        nodal_constraints: Lowered JAX constraints
        params: Dictionary of problem parameters
        settings: Configuration object

    Returns:
        Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty):
            - nonlinear_cost: Weighted cost component
            - nonlinear_penalty: Virtual control penalty
            - nodal_penalty: Constraint violation penalty
    """
    nodal_penalty = 0.0

    # Evaluate nodal constraints
    for constraint in nodal_constraints.nodal:
        # Nodal constraint function is vmapped: func(x, u, node, params)
        # When called with arrays, it evaluates at all nodes
        g = constraint.func(x_bar, u_bar, 0, params)
        # Only sum violations at nodes where constraint is enforced
        if constraint.nodes is not None:
            # Filter to only specified nodes
            # Convert to numpy array for JAX compatibility
            nodes_array = np.array(constraint.nodes)
            g_filtered = g[nodes_array]
        else:
            # If no nodes specified, check all nodes
            g_filtered = g
        nodal_penalty += lam_vb * np.sum(np.maximum(0, g_filtered))

    # Evaluate cross-node constraints
    for constraint in nodal_constraints.cross_node:
        # Cross-node constraint function signature: func(X, U, params)
        # No node argument - operates on full trajectory
        g = constraint.func(x_bar, u_bar, params)
        # Cross-node constraints return scalar or array, sum all violations
        nodal_penalty += lam_vb * np.sum(np.maximum(0, g))

    cost = AutotuningBase.calculate_cost_from_state(x_bar, settings)
    x_diff = settings.sim.inv_S_x @ (x_bar[1:, :] - x_prop).T

    return lam_cost * cost, np.sum(lam_vc * np.abs(x_diff.T)), nodal_penalty
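The dynamics-defect term on the last line is a weighted L1 norm of the scaled defects between the solver iterate and the propagated trajectory. A minimal numpy sketch, assuming an identity scaling (`inv_S_x = I`) and illustrative values:

```python
import numpy as np

x_bar = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # (N, n_x) solver iterate
x_prop = np.array([[0.9, 1.2], [2.1, 1.8]])             # (N-1, n_x) propagated states
inv_S_x = np.eye(2)                                     # identity scaling for this sketch
lam_vc = 10.0                                           # scalar virtual-control weight

# Scaled defects between nodes 1..N-1 and the propagated trajectory, shape (n_x, N-1)
x_diff = inv_S_x @ (x_bar[1:, :] - x_prop).T
vc_penalty = np.sum(lam_vc * np.abs(x_diff.T))
```

With a matrix-valued `lam_vc` (as produced by the Augmented Lagrangian update), the same expression applies elementwise weights per node and per state.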
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict) -> str abstractmethod

Update SCP weights and cost parameters based on iteration state.

This method is called each iteration to adapt weights based on the current solution quality and constraint satisfaction.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| state | AlgorithmState | Solver state containing current weight values (mutated in place) | required |
| candidate | CandidateIterate | Candidate iterate from the current subproblem solve | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| settings | Config | Configuration object containing adaptation parameters | required |
| params | dict | Dictionary of problem parameters | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| str | str | Adaptive state string describing the update action (e.g., "Accept Lower") |

Source code in openscvx/algorithms/autotuning.py
@abstractmethod
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
) -> str:
    """Update SCP weights and cost parameters based on iteration state.

    This method is called each iteration to adapt weights based on the
    current solution quality and constraint satisfaction.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate from the current subproblem solve
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters

    Returns:
        str: Adaptive state string describing the update action (e.g., "Accept Lower")
    """

    pass

ConstantProximalWeight

Bases: AutotuningBase

Constant Proximal Weight method.

This method keeps the trust region weight constant throughout the optimization, while still updating virtual control weights and handling cost relaxation. Useful when you want a fixed trust region size without adaptation.

Source code in openscvx/algorithms/autotuning.py
class ConstantProximalWeight(AutotuningBase):
    """Constant Proximal Weight method.

    This method keeps the trust region weight constant throughout the optimization,
    while still updating virtual control weights and handling cost relaxation.
    Useful when you want a fixed trust region size without adaptation.
    """

    def __init__(
        self,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
    ) -> str:
        """Update SCP weights keeping trust region constant.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate from the current subproblem solve
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters

        Returns:
            str: Adaptive state string (e.g., "Accept", "Reject")
        """
        # Update cost relaxation parameter after cost_drop iterations
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = settings.scp.lam_cost

        state.lam_prox_history.append(state.lam_prox)
        state.accept_solution(candidate)
        return "Accept Constant"
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict) -> str

Update SCP weights keeping trust region constant.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| state | AlgorithmState | Solver state containing current weight values (mutated in place) | required |
| candidate | CandidateIterate | Candidate iterate from the current subproblem solve | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| settings | Config | Configuration object containing adaptation parameters | required |
| params | dict | Dictionary of problem parameters | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| str | str | Adaptive state string (e.g., "Accept", "Reject") |

Source code in openscvx/algorithms/autotuning.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
) -> str:
    """Update SCP weights keeping trust region constant.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate from the current subproblem solve
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters

    Returns:
        str: Adaptive state string (e.g., "Accept", "Reject")
    """
    # Update cost relaxation parameter after cost_drop iterations
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = settings.scp.lam_cost

    state.lam_prox_history.append(state.lam_prox)
    state.accept_solution(candidate)
    return "Accept Constant"

DiscretizationResult dataclass

Unpacked discretization data from a multi-shot discretization matrix.

The discretization solver returns a matrix V that stores multiple blocks (propagated state and linearization matrices) across nodes/time. Historically, we stored the raw V matrices and re-unpacked them repeatedly via slicing. This dataclass unpacks once and makes access trivial.

Source code in openscvx/algorithms/base.py
@dataclass(frozen=True, slots=True)
class DiscretizationResult:
    """Unpacked discretization data from a multi-shot discretization matrix.

    The discretization solver returns a matrix ``V`` that stores multiple blocks
    (propagated state and linearization matrices) across nodes/time. Historically,
    we stored the raw ``V`` matrices and re-unpacked them repeatedly via slicing.
    This dataclass unpacks once and makes access trivial.
    """

    V: np.ndarray  # raw V matrix, shape: (flattened_size, n_timesteps)
    x_prop: np.ndarray  # (N-1, n_x)
    A_d: np.ndarray  # (N-1, n_x, n_x)
    B_d: np.ndarray  # (N-1, n_x, n_u)
    C_d: np.ndarray  # (N-1, n_x, n_u)

    @classmethod
    def from_V(cls, V: np.ndarray, n_x: int, n_u: int, N: int) -> "DiscretizationResult":
        """Unpack the final timestep of a raw discretization matrix ``V``."""
        i1, i2 = n_x, n_x + n_x * n_x
        i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u
        V_final = V[:, -1].reshape(-1, i4)
        return cls(
            V=np.asarray(V),
            x_prop=V_final[:, :i1],
            A_d=V_final[:, i1:i2].reshape(N - 1, n_x, n_x),
            B_d=V_final[:, i2:i3].reshape(N - 1, n_x, n_u),
            C_d=V_final[:, i3:i4].reshape(N - 1, n_x, n_u),
        )
from_V(V: np.ndarray, n_x: int, n_u: int, N: int) -> DiscretizationResult (classmethod)

Unpack the final timestep of a raw discretization matrix V.

Source code in openscvx/algorithms/base.py
@classmethod
def from_V(cls, V: np.ndarray, n_x: int, n_u: int, N: int) -> "DiscretizationResult":
    """Unpack the final timestep of a raw discretization matrix ``V``."""
    i1, i2 = n_x, n_x + n_x * n_x
    i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u
    V_final = V[:, -1].reshape(-1, i4)
    return cls(
        V=np.asarray(V),
        x_prop=V_final[:, :i1],
        A_d=V_final[:, i1:i2].reshape(N - 1, n_x, n_x),
        B_d=V_final[:, i2:i3].reshape(N - 1, n_x, n_u),
        C_d=V_final[:, i3:i4].reshape(N - 1, n_x, n_u),
    )
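To make the index arithmetic concrete, here is a self-contained sketch on synthetic data that applies the same slicing as `from_V` and recovers the four blocks with their expected shapes:

```python
import numpy as np

# Reproduce from_V's unpacking: each of the N-1 node intervals packs
# [x_prop | A_d | B_d | C_d] as a flat row of width i4, and only the final
# integration timestep (last column of V) is unpacked.
n_x, n_u, N = 3, 2, 5
i1, i2 = n_x, n_x + n_x * n_x
i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u  # flat row width per interval
V = np.random.default_rng(0).standard_normal(((N - 1) * i4, 7))  # 7 timesteps

V_final = V[:, -1].reshape(-1, i4)                 # one flat row per interval
x_prop = V_final[:, :i1]                           # (N-1, n_x) propagated states
A_d = V_final[:, i1:i2].reshape(N - 1, n_x, n_x)   # state Jacobian blocks
B_d = V_final[:, i2:i3].reshape(N - 1, n_x, n_u)   # first control Jacobian blocks
C_d = V_final[:, i3:i4].reshape(N - 1, n_x, n_u)   # second control Jacobian blocks
print(x_prop.shape, A_d.shape, B_d.shape, C_d.shape)
```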

OptimizationResults dataclass

Structured container for optimization results from the Successive Convexification (SCP) solver.

This class provides a type-safe and organized way to store and access optimization results, replacing the previous dictionary-based approach. It includes core optimization data, iteration history for convergence analysis, post-processing results, and flexible storage for plotting and application-specific data.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| converged | bool | Whether the optimization successfully converged. |
| t_final | float | Final time of the optimized trajectory. |
| x_guess | ndarray | Optimized state trajectory at discretization nodes, shape (N, n_states). |
| u_guess | ndarray | Optimized control trajectory at discretization nodes, shape (N, n_controls). |
| nodes | dict[str, ndarray] | Dictionary mapping state/control names to arrays at optimization nodes. Includes both user-defined and augmented variables. |
| trajectory | dict[str, ndarray] | Dictionary mapping state/control names to arrays along the propagated trajectory. Added by post_process(). |
| x_history | list[ndarray] | State trajectories from each SCP iteration. |
| u_history | list[ndarray] | Control trajectories from each SCP iteration. |
| discretization_history | list[ndarray] | Time discretization from each iteration. |
| J_tr_history | list[ndarray] | Trust region cost history. |
| J_vb_history | list[ndarray] | Virtual buffer cost history. |
| J_vc_history | list[ndarray] | Virtual control cost history. |
| t_full | Optional[ndarray] | Full time grid for interpolated trajectory. Added by propagate_trajectory_results. |
| x_full | Optional[ndarray] | Interpolated state trajectory on full time grid. Added by propagate_trajectory_results. |
| u_full | Optional[ndarray] | Interpolated control trajectory on full time grid. Added by propagate_trajectory_results. |
| cost | Optional[float] | Total cost of the optimized trajectory. Added by propagate_trajectory_results. |
| ctcs_violation | Optional[ndarray] | Continuous-time constraint violations. Added by propagate_trajectory_results. |
| plotting_data | dict[str, Any] | Flexible storage for plotting and application data. |

Source code in openscvx/algorithms/optimization_results.py
@dataclass
class OptimizationResults:
    """
    Structured container for optimization results from the Successive Convexification (SCP) solver.

    This class provides a type-safe and organized way to store and access optimization results,
    replacing the previous dictionary-based approach. It includes core optimization data,
    iteration history for convergence analysis, post-processing results, and flexible
    storage for plotting and application-specific data.

    Attributes:
        converged (bool): Whether the optimization successfully converged.
        t_final (float): Final time of the optimized trajectory.
        x_guess (np.ndarray): Optimized state trajectory at discretization nodes,
            shape (N, n_states).
        u_guess (np.ndarray): Optimized control trajectory at discretization nodes,
            shape (N, n_controls).
        nodes (dict[str, np.ndarray]): Dictionary mapping state/control names to arrays
            at optimization nodes. Includes both user-defined and augmented variables.
        trajectory (dict[str, np.ndarray]): Dictionary mapping state/control names to arrays
            along the propagated trajectory. Added by post_process().
        x_history (list[np.ndarray]): State trajectories from each SCP iteration.
        u_history (list[np.ndarray]): Control trajectories from each SCP iteration.
        discretization_history (list[np.ndarray]): Time discretization from each iteration.
        J_tr_history (list[np.ndarray]): Trust region cost history.
        J_vb_history (list[np.ndarray]): Virtual buffer cost history.
        J_vc_history (list[np.ndarray]): Virtual control cost history.
        t_full (Optional[np.ndarray]): Full time grid for interpolated trajectory.
            Added by propagate_trajectory_results.
        x_full (Optional[np.ndarray]): Interpolated state trajectory on full time grid.
            Added by propagate_trajectory_results.
        u_full (Optional[np.ndarray]): Interpolated control trajectory on full time grid.
            Added by propagate_trajectory_results.
        cost (Optional[float]): Total cost of the optimized trajectory.
            Added by propagate_trajectory_results.
        ctcs_violation (Optional[np.ndarray]): Continuous-time constraint violations.
            Added by propagate_trajectory_results.
        plotting_data (dict[str, Any]): Flexible storage for plotting and application data.
    """

    # Core optimization results
    converged: bool
    t_final: float

    # Dictionary-based access to states and controls
    nodes: dict[str, np.ndarray] = field(default_factory=dict)
    trajectory: dict[str, np.ndarray] = field(default_factory=dict)

    # Internal metadata for dictionary construction
    _states: list = field(default_factory=list, repr=False)
    _controls: list = field(default_factory=list, repr=False)

    # History of SCP iterations (single source of truth)
    X: list[np.ndarray] = field(default_factory=list)
    U: list[np.ndarray] = field(default_factory=list)
    discretization_history: list[np.ndarray] = field(default_factory=list)
    J_tr_history: list[np.ndarray] = field(default_factory=list)
    J_vb_history: list[np.ndarray] = field(default_factory=list)
    J_vc_history: list[np.ndarray] = field(default_factory=list)
    TR_history: list[np.ndarray] = field(default_factory=list)
    VC_history: list[np.ndarray] = field(default_factory=list)

    # Convergence histories
    lam_prox_history: list[float] = field(default_factory=list)
    actual_reduction_history: list[float] = field(default_factory=list)
    pred_reduction_history: list[float] = field(default_factory=list)
    acceptance_ratio_history: list[float] = field(default_factory=list)

    @property
    def x(self) -> np.ndarray:
        """Optimal state trajectory at discretization nodes.

        Returns the final converged solution from the SCP iteration history.

        Returns:
            State trajectory array, shape (N, n_states)
        """
        return self.X[-1]

    @property
    def u(self) -> np.ndarray:
        """Optimal control trajectory at discretization nodes.

        Returns the final converged solution from the SCP iteration history.

        Returns:
            Control trajectory array, shape (N, n_controls)
        """
        return self.U[-1]

    # Post-processing results (added by propagate_trajectory_results)
    t_full: Optional[np.ndarray] = None
    x_full: Optional[np.ndarray] = None
    u_full: Optional[np.ndarray] = None
    cost: Optional[float] = None
    ctcs_violation: Optional[np.ndarray] = None

    # Additional plotting/application data (added by user)
    plotting_data: dict[str, Any] = field(default_factory=dict)

    def __post_init__(self):
        """Initialize the results object."""
        pass

    def update_plotting_data(self, **kwargs: Any) -> None:
        """
        Update the plotting data with additional information.

        Args:
            **kwargs: Key-value pairs to add to plotting_data
        """
        self.plotting_data.update(kwargs)

    def get(self, key: str, default: Any = None) -> Any:
        """
        Get a value from the results, similar to dict.get().

        Args:
            key: The key to look up
            default: Default value if key is not found

        Returns:
            The value associated with the key, or default if not found
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            return getattr(self, key)

        # Check if it's in plotting_data
        if key in self.plotting_data:
            return self.plotting_data[key]

        return default

    def __getitem__(self, key: str) -> Any:
        """
        Allow dictionary-style access to results.

        Args:
            key: The key to look up

        Returns:
            The value associated with the key

        Raises:
            KeyError: If key is not found
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            return getattr(self, key)

        # Check if it's in plotting_data
        if key in self.plotting_data:
            return self.plotting_data[key]

        raise KeyError(f"Key '{key}' not found in results")

    def __setitem__(self, key: str, value: Any):
        """
        Allow dictionary-style assignment to results.

        Args:
            key: The key to set
            value: The value to assign
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            setattr(self, key, value)
        else:
            # Store in plotting_data
            self.plotting_data[key] = value

    def __contains__(self, key: str) -> bool:
        """
        Check if a key exists in the results.

        Args:
            key: The key to check

        Returns:
            True if key exists, False otherwise
        """
        return hasattr(self, key) or key in self.plotting_data

    def update(self, other: dict[str, Any]):
        """
        Update the results with additional data from a dictionary.

        Args:
            other: Dictionary containing additional data
        """
        for key, value in other.items():
            self[key] = value

    def to_dict(self) -> dict[str, Any]:
        """
        Convert the results to a dictionary for backward compatibility.

        Returns:
            Dictionary representation of the results
        """
        result_dict = {}

        # Add all direct attributes
        for attr_name in self.__dataclass_fields__:
            if attr_name != "plotting_data":
                result_dict[attr_name] = getattr(self, attr_name)

        # Add plotting data
        result_dict.update(self.plotting_data)

        return result_dict
u: np.ndarray property

Optimal control trajectory at discretization nodes.

Returns the final converged solution from the SCP iteration history.

Returns:

| Type | Description |
| --- | --- |
| ndarray | Control trajectory array, shape (N, n_controls) |

x: np.ndarray property

Optimal state trajectory at discretization nodes.

Returns the final converged solution from the SCP iteration history.

Returns:

| Type | Description |
| --- | --- |
| ndarray | State trajectory array, shape (N, n_states) |

get(key: str, default: Any = None) -> Any

Get a value from the results, similar to dict.get().

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| key | str | The key to look up | required |
| default | Any | Default value if key is not found | None |

Returns:

| Type | Description |
| --- | --- |
| Any | The value associated with the key, or default if not found |

Source code in openscvx/algorithms/optimization_results.py
def get(self, key: str, default: Any = None) -> Any:
    """
    Get a value from the results, similar to dict.get().

    Args:
        key: The key to look up
        default: Default value if key is not found

    Returns:
        The value associated with the key, or default if not found
    """
    # Check if it's a direct attribute
    if hasattr(self, key):
        return getattr(self, key)

    # Check if it's in plotting_data
    if key in self.plotting_data:
        return self.plotting_data[key]

    return default
to_dict() -> dict[str, Any]

Convert the results to a dictionary for backward compatibility.

Returns:

| Type | Description |
| --- | --- |
| dict[str, Any] | Dictionary representation of the results |

Source code in openscvx/algorithms/optimization_results.py
def to_dict(self) -> dict[str, Any]:
    """
    Convert the results to a dictionary for backward compatibility.

    Returns:
        Dictionary representation of the results
    """
    result_dict = {}

    # Add all direct attributes
    for attr_name in self.__dataclass_fields__:
        if attr_name != "plotting_data":
            result_dict[attr_name] = getattr(self, attr_name)

    # Add plotting data
    result_dict.update(self.plotting_data)

    return result_dict
update(other: dict[str, Any])

Update the results with additional data from a dictionary.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| other | dict[str, Any] | Dictionary containing additional data | required |
Source code in openscvx/algorithms/optimization_results.py
def update(self, other: dict[str, Any]):
    """
    Update the results with additional data from a dictionary.

    Args:
        other: Dictionary containing additional data
    """
    for key, value in other.items():
        self[key] = value
update_plotting_data(**kwargs: Any) -> None

Update the plotting data with additional information.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| **kwargs | Any | Key-value pairs to add to plotting_data | {} |
Source code in openscvx/algorithms/optimization_results.py
def update_plotting_data(self, **kwargs: Any) -> None:
    """
    Update the plotting data with additional information.

    Args:
        **kwargs: Key-value pairs to add to plotting_data
    """
    self.plotting_data.update(kwargs)
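The dict-style methods above share one lookup order: real attributes resolve first, and unknown keys fall through to `plotting_data`. A minimal standalone sketch of that fallback (not the real OptimizationResults class, and with only a token field):

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical reduced version of OptimizationResults illustrating the
# attribute-first, plotting_data-second access pattern.
@dataclass
class ResultsSketch:
    converged: bool = False
    plotting_data: dict = field(default_factory=dict)

    def __getitem__(self, key: str) -> Any:
        if hasattr(self, key):          # known attribute wins
            return getattr(self, key)
        if key in self.plotting_data:   # then the flexible store
            return self.plotting_data[key]
        raise KeyError(f"Key '{key}' not found in results")

    def __setitem__(self, key: str, value: Any) -> None:
        if hasattr(self, key):
            setattr(self, key, value)   # overwrite a real attribute
        else:
            self.plotting_data[key] = value  # stash application data

r = ResultsSketch(converged=True)
r["obstacle_radii"] = [1.0, 2.5]   # unknown key -> plotting_data
print(r["converged"], r["obstacle_radii"])
```

This is why user keys never shadow core results: an assignment to an existing attribute name mutates the attribute instead of the `plotting_data` dict.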

PenalizedTrustRegion

Bases: Algorithm

Penalized Trust Region (PTR) successive convexification algorithm.

PTR solves non-convex trajectory optimization problems through iterative convex approximation. Each subproblem balances competing cost terms:

  • Trust region penalty: Discourages large deviations from the previous iterate, keeping the solution within the region where linearization is valid.
  • Virtual control: Relaxes dynamics constraints, penalized to drive defects toward zero as the algorithm converges.
  • Virtual buffer: Relaxes non-convex constraints, similarly penalized to enforce feasibility at convergence.
  • Problem objective and other terms: The user-defined cost (e.g., minimum fuel, minimum time) and any additional penalty terms.

The interplay between these terms guides the optimization: the trust region anchors the solution near the linearization point while virtual terms allow temporary constraint violations that shrink over iterations.
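Schematically (a sketch only; the weight names follow the lam_* fields on AlgorithmState, and the solver's exact grouping of terms may differ), each subproblem minimizes a weighted sum of these four terms:

```python
# Sketch of the PTR subproblem objective: user cost plus trust-region,
# virtual-control, and virtual-buffer penalties, each with its own weight.
def ptr_objective(J_cost, J_tr, J_vc, J_vb, lam_cost, lam_prox, lam_vc, lam_vb):
    return lam_cost * J_cost + lam_prox * J_tr + lam_vc * J_vc + lam_vb * J_vb

# Early iteration: the virtual terms are heavily weighted so the optimizer
# drives dynamics defects and constraint violations toward zero.
print(ptr_objective(J_cost=1.2, J_tr=0.3, J_vc=0.05, J_vb=0.01,
                    lam_cost=1.0, lam_prox=10.0, lam_vc=100.0, lam_vb=100.0))
```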

Example

Using PTR with a Problem::

from openscvx.algorithms import PenalizedTrustRegion

problem = Problem(dynamics, constraints, states, controls, N, time)
problem.initialize()
result = problem.solve()
Source code in openscvx/algorithms/penalized_trust_region.py
class PenalizedTrustRegion(Algorithm):
    """Penalized Trust Region (PTR) successive convexification algorithm.

    PTR solves non-convex trajectory optimization problems through iterative
    convex approximation. Each subproblem balances competing cost terms:

    - **Trust region penalty**: Discourages large deviations from the previous
      iterate, keeping the solution within the region where linearization is valid.
    - **Virtual control**: Relaxes dynamics constraints, penalized to drive
      defects toward zero as the algorithm converges.
    - **Virtual buffer**: Relaxes non-convex constraints, similarly penalized
      to enforce feasibility at convergence.
    - **Problem objective and other terms**: The user-defined cost (e.g., minimum
      fuel, minimum time) and any additional penalty terms.

    The interplay between these terms guides the optimization: the trust region
    anchors the solution near the linearization point while virtual terms allow
    temporary constraint violations that shrink over iterations.

    Example:
        Using PTR with a Problem::

            from openscvx.algorithms import PenalizedTrustRegion

            problem = Problem(dynamics, constraints, states, controls, N, time)
            problem.initialize()
            result = problem.solve()
    """

    # Base columns emitted by PTR algorithm (before autotuner columns)
    BASE_COLUMNS: List[Column] = [
        Column("iter", "Iter", 4, "{:4d}"),
        Column("dis_time", "Dis (ms)", 8, "{:6.2f}", min_verbosity=Verbosity.STANDARD),
        Column("subprop_time", "Solve (ms)", 10, "{:6.2f}", min_verbosity=Verbosity.STANDARD),
        Column("cost", "Cost", 8, "{: .1e}"),
        Column("J_tr", "J_tr", 8, "{: .1e}", color_J_tr, Verbosity.STANDARD),
        Column("J_vb", "J_vb", 8, "{: .1e}", color_J_vb, Verbosity.STANDARD),
        Column("J_vc", "J_vc", 8, "{: .1e}", color_J_vc, Verbosity.STANDARD),
    ]

    # Columns that always appear last (after autotuner columns)
    TAIL_COLUMNS: List[Column] = [
        Column("prob_stat", "Cvx Status", 11, "{}", color_prob_stat),
    ]

    def __init__(self):
        """Initialize PTR with unset infrastructure.

        Call initialize() before step() to set up compiled components.
        """
        self._solver: "ConvexSolver" = None
        self._discretization_solver: callable = None
        self._jax_constraints: "LoweredJaxConstraints" = None
        self._emitter: callable = None
        self._autotuner: "AutotuningBase" = None

    @property
    def autotuner(self) -> "AutotuningBase":
        """Access the autotuner instance for configuring parameters.

        For AugmentedLagrangian method, parameters can be modified via:
            algorithm.autotuner.rho_max = 1e7
            algorithm.autotuner.mu_max = 1e7
            etc.

        Returns:
            AutotuningBase: The autotuner instance

        Raises:
            AttributeError: If algorithm has not been initialized yet
        """
        if self._autotuner is None:
            raise AttributeError("Autotuner not yet initialized. Call initialize() first.")
        return self._autotuner

    def get_columns(self, verbosity: int = Verbosity.STANDARD) -> List[Column]:
        """Get the columns to display for iteration output.

        Combines base PTR columns with autotuner-specific columns,
        filtered by the requested verbosity level.

        Args:
            verbosity: Minimum verbosity level for columns to include.
                MINIMAL (1): Core metrics only (iter, cost, status)
                STANDARD (2): + timing, penalty terms
                FULL (3): + autotuning diagnostics

        Returns:
            List of Column specs filtered by verbosity level.

        Raises:
            AttributeError: If algorithm has not been initialized yet.
        """
        if self._autotuner is None:
            raise AttributeError("Autotuner not yet initialized. Call initialize() first.")

        all_columns = self.BASE_COLUMNS + self._autotuner.COLUMNS + self.TAIL_COLUMNS
        return [col for col in all_columns if col.min_verbosity <= verbosity]

    def initialize(
        self,
        solver: "ConvexSolver",
        discretization_solver: callable,
        jax_constraints: "LoweredJaxConstraints",
        emitter: callable,
        params: dict,
        settings: Config,
    ) -> None:
        """Initialize PTR algorithm.

        Stores compiled infrastructure and performs a warm-start solve to
        initialize DPP and JAX jacobians.

        Args:
            solver: Convex subproblem solver (e.g., CVXPySolver)
            discretization_solver: Compiled discretization solver
            jax_constraints: JIT-compiled constraint functions
            emitter: Callback for emitting iteration progress
            params: Problem parameters dictionary (for warm-start)
            settings: Configuration object (for warm-start)
        """
        # Store immutable infrastructure
        self._solver = solver
        self._discretization_solver = discretization_solver
        self._jax_constraints = jax_constraints
        self._emitter = emitter

        # Initialize autotuner based on settings
        # The autotuner is configured on ``settings.scp.autotuner`` with a default
        # of :class:`AugmentedLagrangian` when no custom instance is provided.
        self._autotuner = settings.scp.autotuner

        # Set boundary conditions
        self._solver.update_boundary_conditions(
            x_init=settings.sim.x.initial,
            x_term=settings.sim.x.final,
        )

        # Create temporary state for initialization solve
        init_state = AlgorithmState.from_settings(settings)

        # Solve a dumb problem to initialize DPP and JAX jacobians
        _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
            init_state.x, init_state.u.astype(float), params
        )

        init_state.add_discretization(V_multi_shoot.__array__())
        _ = self._subproblem(params, init_state, settings)

    def step(
        self,
        state: AlgorithmState,
        params: dict,
        settings: Config,
    ) -> bool:
        """Execute one PTR iteration.

        Solves the convex subproblem, updates state in place, and checks
        convergence based on trust region, virtual buffer, and virtual
        control costs.

        Args:
            state: Mutable solver state (modified in place)
            params: Problem parameters dictionary (may change between steps)
            settings: Configuration object (may change between steps)

        Returns:
            True if J_tr, J_vb, and J_vc are all below their thresholds.

        Raises:
            RuntimeError: If initialize() has not been called.
        """
        if self._solver is None:
            raise RuntimeError(
                "PenalizedTrustRegion.step() called before initialize(). "
                "Call initialize() first to set up compiled infrastructure."
            )

        # Compute discretization before subproblem only for the first iteration
        if state.k == 1:
            t0 = time.time()
            _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
                state.x, state.u.astype(float), params
            )
            dis_time = time.time() - t0

            state.add_discretization(V_multi_shoot.__array__())

        # Run the subproblem
        (
            x_sol,
            u_sol,
            cost,
            J_total,
            J_vb_vec,
            J_vc_vec,
            J_tr_vec,
            prob_stat,
            subprop_time,
            vc_mat,
            tr_mat,
        ) = self._subproblem(params, state, settings)

        candidate = CandidateIterate()
        candidate.x = x_sol
        candidate.u = u_sol
        candidate.J_lin = J_total

        t0 = time.time()
        _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
            candidate.x, candidate.u.astype(float), params
        )
        dis_time = time.time() - t0

        candidate.V = V_multi_shoot.__array__()
        candidate.x_prop = x_prop.__array__()

        # Update state in place by appending to history
        # The x_guess/u_guess properties will automatically return the latest entry
        candidate.VC = vc_mat
        candidate.TR = tr_mat

        state.J_tr = np.sum(np.array(J_tr_vec))
        state.J_vb = np.sum(np.array(J_vb_vec))
        state.J_vc = np.sum(np.array(J_vc_vec))

        # Update weights in state using configured autotuning method
        adaptive_state = self._autotuner.update_weights(
            state, candidate, self._jax_constraints, settings, params
        )

        # Build emission data - only include nonlinear/reduction metrics when
        # the autotuner actually uses them (constant/ramp methods don't)
        use_full_metrics = not isinstance(
            self._autotuner, (ConstantProximalWeight, RampProximalWeight)
        )

        emission_data = {
            "iter": state.k,
            "dis_time": dis_time * 1000.0,
            "subprop_time": subprop_time * 1000.0,
            "J_tr": state.J_tr,
            "J_vb": state.J_vb,
            "J_vc": state.J_vc,
            "cost": cost[-1],
            "lam_prox": state.lam_prox,
            "prob_stat": prob_stat,
            "adaptive_state": adaptive_state,
        }

        # Only include nonlinear/reduction metrics when autotuner uses them
        # (constant/ramp methods don't compute these, so we don't emit them)
        if use_full_metrics:
            if len(state.pred_reduction_history) == 0:
                pred_reduction = 0.0
            else:
                pred_reduction = state.pred_reduction_history[-1]
            if len(state.actual_reduction_history) == 0:
                actual_reduction = 0.0
            else:
                actual_reduction = state.actual_reduction_history[-1]
            if len(state.acceptance_ratio_history) == 0:
                acceptance_ratio = 0.0
            else:
                acceptance_ratio = state.acceptance_ratio_history[-1]

            emission_data.update(
                {
                    "J_nonlin": candidate.J_nonlin,
                    "J_lin": candidate.J_lin,
                    "pred_reduction": pred_reduction,
                    "actual_reduction": actual_reduction,
                    "acceptance_ratio": acceptance_ratio,
                }
            )

        # Emit data
        self._emitter(emission_data)

        # Increment iteration counter
        state.k += 1

        # Return convergence status
        return (
            (state.J_tr < settings.scp.ep_tr)
            and (state.J_vb < settings.scp.ep_vb)
            and (state.J_vc < settings.scp.ep_vc)
        )

    def _subproblem(
        self,
        params: dict,
        state: AlgorithmState,
        settings: Config,
    ):
        """Solve a single convex subproblem.

        Uses stored infrastructure (solver, discretization_solver, jax_constraints)
        with per-step params and settings.

        Args:
            params: Problem parameters dictionary
            state: Current solver state
            settings: Configuration object

        Returns:
            Tuple containing solution data, costs, and timing information.
        """
        param_dict = params

        # Update solver with dynamics linearization
        self._solver.update_dynamics_linearization(
            x_bar=state.x,
            u_bar=state.u,
            A_d=state.A_d(),
            B_d=state.B_d(),
            C_d=state.C_d(),
            x_prop=state.x_prop(),
        )

        # Build constraint linearization data
        # TODO: (norrisg) investigate why we are passing `0` for the node here
        nodal_linearizations = []
        if self._jax_constraints.nodal:
            for constraint in self._jax_constraints.nodal:
                # Evaluate constraint at all nodes (vmapped function returns shape (N,))
                g_full = np.asarray(constraint.func(state.x, state.u, 0, param_dict))
                grad_g_x_full = np.asarray(constraint.grad_g_x(state.x, state.u, 0, param_dict))
                grad_g_u_full = np.asarray(constraint.grad_g_u(state.x, state.u, 0, param_dict))

                # Ensure g is 1D with shape (N,) - squeeze any extra dimensions
                # This handles cases where constraint might return shape (N, 1) or similar
                g_full = np.squeeze(g_full)
                if g_full.ndim == 0:
                    # Scalar result - expand to (N,)
                    g_full = np.broadcast_to(g_full, (state.x.shape[0],))
                elif g_full.ndim > 1:
                    # Multi-dimensional result - flatten to (N,)
                    # This should not happen for properly decomposed constraints,
                    # but handle it gracefully
                    g_full = g_full.reshape(g_full.shape[0], -1).sum(axis=1)

                # Ensure grad_g_x and grad_g_u have correct shapes
                # grad_g_x should be (N, n_x), grad_g_u should be (N, n_u)
                if grad_g_x_full.ndim == 1:
                    # If 1D, it should be (n_x,) - broadcast to (N, n_x)
                    grad_g_x_full = np.broadcast_to(
                        grad_g_x_full, (state.x.shape[0], grad_g_x_full.shape[0])
                    )
                elif grad_g_x_full.ndim > 2:
                    # Flatten extra dimensions
                    grad_g_x_full = grad_g_x_full.reshape(grad_g_x_full.shape[0], -1)
                    # Take only first n_x columns
                    n_x = state.x.shape[1]
                    if grad_g_x_full.shape[1] > n_x:
                        grad_g_x_full = grad_g_x_full[:, :n_x]

                if grad_g_u_full.ndim == 1:
                    # If 1D, it should be (n_u,) - broadcast to (N, n_u)
                    grad_g_u_full = np.broadcast_to(
                        grad_g_u_full, (state.u.shape[0], grad_g_u_full.shape[0])
                    )
                elif grad_g_u_full.ndim > 2:
                    # Flatten extra dimensions
                    grad_g_u_full = grad_g_u_full.reshape(grad_g_u_full.shape[0], -1)
                    # Take only first n_u columns
                    n_u = state.u.shape[1]
                    if grad_g_u_full.shape[1] > n_u:
                        grad_g_u_full = grad_g_u_full[:, :n_u]

                nodal_linearizations.append(
                    {
                        "g": g_full,
                        "grad_g_x": grad_g_x_full,
                        "grad_g_u": grad_g_u_full,
                    }
                )

        cross_node_linearizations = []
        if self._jax_constraints.cross_node:
            for constraint in self._jax_constraints.cross_node:
                cross_node_linearizations.append(
                    {
                        "g": np.asarray(constraint.func(state.x, state.u, param_dict)),
                        "grad_g_X": np.asarray(constraint.grad_g_X(state.x, state.u, param_dict)),
                        "grad_g_U": np.asarray(constraint.grad_g_U(state.x, state.u, param_dict)),
                    }
                )

        # Update solver with constraint linearizations
        self._solver.update_constraint_linearizations(
            nodal=nodal_linearizations if nodal_linearizations else None,
            cross_node=cross_node_linearizations if cross_node_linearizations else None,
        )

        # Update solver with penalty weights
        self._solver.update_penalties(
            lam_prox=state.lam_prox,
            lam_cost=state.lam_cost,
            lam_vc=state.lam_vc,
            lam_vb=state.lam_vb,
        )

        # Solve the convex subproblem
        t0 = time.time()
        result = self._solver.solve()
        subprop_time = time.time() - t0

        # Extract unscaled trajectories from result
        x_new_guess = result.x
        u_new_guess = result.u

        # Accumulate the objective contribution from terminal boundary conditions.
        # Only final_type is considered here for backward compatibility.
        costs = np.zeros(x_new_guess.shape[0])
        for i, bc_type in enumerate(settings.sim.x.final_type):
            if bc_type == "Minimize":
                costs = costs + x_new_guess[:, i]
            elif bc_type == "Maximize":
                costs = costs - x_new_guess[:, i]

        # Build the block-diagonal scaling matrix with numpy.block
        inv_block_diag = np.block(
            [
                [
                    settings.sim.inv_S_x,
                    np.zeros((settings.sim.inv_S_x.shape[0], settings.sim.inv_S_u.shape[1])),
                ],
                [
                    np.zeros((settings.sim.inv_S_u.shape[0], settings.sim.inv_S_x.shape[1])),
                    settings.sim.inv_S_u,
                ],
            ]
        )

        # Calculate J_tr_vec using the JAX-compatible block diagonal matrix
        tr_mat = inv_block_diag @ np.hstack((x_new_guess - state.x, u_new_guess - state.u)).T
        J_tr_vec = la.norm(tr_mat, axis=0) ** 2
        vc_mat = np.abs(settings.sim.inv_S_x @ result.nu.T).T
        J_vc_vec = np.sum(vc_mat, axis=1)

        # Sum nodal constraint violations
        J_vb_vec = 0
        for nu_vb_arr in result.nu_vb:
            J_vb_vec += np.maximum(0, nu_vb_arr)

        # Add cross-node constraint violations
        for nu_vb_cross_val in result.nu_vb_cross:
            J_vb_vec += np.maximum(0, nu_vb_cross_val)

        # Convex constraints are already handled in the OCP, no processing needed here
        return (
            x_new_guess,
            u_new_guess,
            costs,
            result.cost,
            J_vb_vec,
            J_vc_vec,
            J_tr_vec,
            result.status,
            subprop_time,
            vc_mat,
            tr_mat,
        )

    def citation(self) -> List[str]:
        """Return BibTeX citations for the PTR algorithm.

        Returns:
            List of BibTeX entries for the papers underlying PTR.
        """
        return [
            r"""@article{drusvyatskiy2018error,
  title={Error bounds, quadratic growth, and linear convergence of proximal methods},
  author={Drusvyatskiy, Dmitriy and Lewis, Adrian S},
  journal={Mathematics of operations research},
  volume={43},
  number={3},
  pages={919--948},
  year={2018},
  publisher={INFORMS}
}""",
            r"""@article{szmuk2020successive,
  title={Successive convexification for real-time six-degree-of-freedom powered descent guidance
    with state-triggered constraints},
  author={Szmuk, Michael and Reynolds, Taylor P and A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={8},
  pages={1399--1413},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
            r"""@article{reynolds2020dual,
  title={Dual quaternion-based powered descent guidance with state-triggered constraints},
  author={Reynolds, Taylor P and Szmuk, Michael and Malyuta, Danylo and Mesbahi, Mehran and
    A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et and Carson III, John M},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={9},
  pages={1584--1599},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
        ]
autotuner: AutotuningBase property

Access the autotuner instance for configuring parameters.

For the AugmentedLagrangian method, parameters can be modified at runtime via:

    algorithm.autotuner.rho_max = 1e7
    algorithm.autotuner.mu_max = 1e7

Returns:
    AutotuningBase: The autotuner instance.

Raises:
    AttributeError: If the algorithm has not been initialized yet.
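The guard-and-tune pattern above can be sketched with hypothetical stand-ins (`AugmentedLagrangianStub` and `AlgorithmStub` are illustrative, not the real openscvx classes):

```python
class AugmentedLagrangianStub:
    """Hypothetical stand-in autotuner with the tunable fields named above."""
    def __init__(self):
        self.rho_max = 1e6
        self.mu_max = 1e6

class AlgorithmStub:
    """Hypothetical stand-in for the algorithm's autotuner property guard."""
    def __init__(self):
        self._autotuner = None  # unset until initialize()

    def initialize(self):
        self._autotuner = AugmentedLagrangianStub()

    @property
    def autotuner(self):
        if self._autotuner is None:
            raise AttributeError("Autotuner not yet initialized. Call initialize() first.")
        return self._autotuner

algorithm = AlgorithmStub()
try:
    algorithm.autotuner  # accessing before initialize() raises
except AttributeError as exc:
    print(exc)

algorithm.initialize()
algorithm.autotuner.rho_max = 1e7  # runtime tuning through the property
```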

__init__()

Initialize PTR with unset infrastructure.

Call initialize() before step() to set up compiled components.

Source code in openscvx/algorithms/penalized_trust_region.py
def __init__(self):
    """Initialize PTR with unset infrastructure.

    Call initialize() before step() to set up compiled components.
    """
    self._solver: "ConvexSolver" = None
    self._discretization_solver: callable = None
    self._jax_constraints: "LoweredJaxConstraints" = None
    self._emitter: callable = None
    self._autotuner: "AutotuningBase" = None
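The construct → initialize() → step() lifecycle can be sketched with a hypothetical stand-in class (not the real PTR implementation): slots start unset, initialize() fills them, and step() refuses to run beforehand.

```python
class PTRSketch:
    """Hypothetical stand-in mirroring the initialize-before-step contract."""
    def __init__(self):
        # As in the real __init__: infrastructure slots start unset.
        self._solver = None
        self._emitter = None

    def initialize(self, solver, emitter):
        self._solver = solver
        self._emitter = emitter

    def step(self):
        if self._solver is None:
            raise RuntimeError("step() called before initialize().")
        result = self._solver()
        self._emitter({"status": result})
        return result

algo = PTRSketch()
try:
    algo.step()  # guard fires: infrastructure not yet stored
except RuntimeError as exc:
    print(exc)

algo.initialize(solver=lambda: "optimal", emitter=lambda data: None)
print(algo.step())  # → optimal
```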
citation() -> List[str]

Return BibTeX citations for the PTR algorithm.

Returns:
    List[str]: BibTeX entries for the papers underlying PTR.

Source code in openscvx/algorithms/penalized_trust_region.py
    def citation(self) -> List[str]:
        """Return BibTeX citations for the PTR algorithm.

        Returns:
            List of BibTeX entries for the papers underlying PTR.
        """
        return [
            r"""@article{drusvyatskiy2018error,
  title={Error bounds, quadratic growth, and linear convergence of proximal methods},
  author={Drusvyatskiy, Dmitriy and Lewis, Adrian S},
  journal={Mathematics of operations research},
  volume={43},
  number={3},
  pages={919--948},
  year={2018},
  publisher={INFORMS}
}""",
            r"""@article{szmuk2020successive,
  title={Successive convexification for real-time six-degree-of-freedom powered descent guidance
    with state-triggered constraints},
  author={Szmuk, Michael and Reynolds, Taylor P and A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={8},
  pages={1399--1413},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
            r"""@article{reynolds2020dual,
  title={Dual quaternion-based powered descent guidance with state-triggered constraints},
  author={Reynolds, Taylor P and Szmuk, Michael and Malyuta, Danylo and Mesbahi, Mehran and
    A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et and Carson III, John M},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={9},
  pages={1584--1599},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
        ]
get_columns(verbosity: int = Verbosity.STANDARD) -> List[Column]

Get the columns to display for iteration output.

Combines base PTR columns with autotuner-specific columns, filtered by the requested verbosity level.

Parameters:
    verbosity (int, default STANDARD): Minimum verbosity level for columns to include.
        MINIMAL (1): core metrics only (iter, cost, status)
        STANDARD (2): + timing, penalty terms
        FULL (3): + autotuning diagnostics

Returns:
    List[Column]: Column specs filtered by verbosity level.

Raises:
    AttributeError: If the algorithm has not been initialized yet.

Source code in openscvx/algorithms/penalized_trust_region.py
def get_columns(self, verbosity: int = Verbosity.STANDARD) -> List[Column]:
    """Get the columns to display for iteration output.

    Combines base PTR columns with autotuner-specific columns,
    filtered by the requested verbosity level.

    Args:
        verbosity: Minimum verbosity level for columns to include.
            MINIMAL (1): Core metrics only (iter, cost, status)
            STANDARD (2): + timing, penalty terms
            FULL (3): + autotuning diagnostics

    Returns:
        List of Column specs filtered by verbosity level.

    Raises:
        AttributeError: If algorithm has not been initialized yet.
    """
    if self._autotuner is None:
        raise AttributeError("Autotuner not yet initialized. Call initialize() first.")

    all_columns = self.BASE_COLUMNS + self._autotuner.COLUMNS + self.TAIL_COLUMNS
    return [col for col in all_columns if col.min_verbosity <= verbosity]
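The filtering rule can be shown standalone; the `Column` fields and the column names below are illustrative, and only the `min_verbosity <= verbosity` comparison comes from the source above:

```python
from dataclasses import dataclass

MINIMAL, STANDARD, FULL = 1, 2, 3  # Verbosity levels from the docstring

@dataclass
class Column:
    name: str
    min_verbosity: int

# Illustrative column set (the real PTR columns live in BASE_COLUMNS etc.)
ALL_COLUMNS = [
    Column("iter", MINIMAL),
    Column("cost", MINIMAL),
    Column("subprop_time", STANDARD),
    Column("lam_prox", STANDARD),
    Column("acceptance_ratio", FULL),
]

def get_columns(verbosity=STANDARD):
    # Keep a column only if the requested verbosity reaches its threshold
    return [col for col in ALL_COLUMNS if col.min_verbosity <= verbosity]

print([c.name for c in get_columns(MINIMAL)])  # ['iter', 'cost']
print([c.name for c in get_columns(FULL)])     # all five columns
```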
initialize(solver: ConvexSolver, discretization_solver: callable, jax_constraints: LoweredJaxConstraints, emitter: callable, params: dict, settings: Config) -> None

Initialize PTR algorithm.

Stores compiled infrastructure and performs a warm-start solve to initialize DPP and JAX jacobians.

Parameters:
    solver (ConvexSolver, required): Convex subproblem solver (e.g., CVXPySolver)
    discretization_solver (callable, required): Compiled discretization solver
    jax_constraints (LoweredJaxConstraints, required): JIT-compiled constraint functions
    emitter (callable, required): Callback for emitting iteration progress
    params (dict, required): Problem parameters dictionary (for warm-start)
    settings (Config, required): Configuration object (for warm-start)
Source code in openscvx/algorithms/penalized_trust_region.py
def initialize(
    self,
    solver: "ConvexSolver",
    discretization_solver: callable,
    jax_constraints: "LoweredJaxConstraints",
    emitter: callable,
    params: dict,
    settings: Config,
) -> None:
    """Initialize PTR algorithm.

    Stores compiled infrastructure and performs a warm-start solve to
    initialize DPP and JAX jacobians.

    Args:
        solver: Convex subproblem solver (e.g., CVXPySolver)
        discretization_solver: Compiled discretization solver
        jax_constraints: JIT-compiled constraint functions
        emitter: Callback for emitting iteration progress
        params: Problem parameters dictionary (for warm-start)
        settings: Configuration object (for warm-start)
    """
    # Store immutable infrastructure
    self._solver = solver
    self._discretization_solver = discretization_solver
    self._jax_constraints = jax_constraints
    self._emitter = emitter

    # Initialize autotuner based on settings
    # The autotuner is configured on ``settings.scp.autotuner`` with a default
    # of :class:`AugmentedLagrangian` when no custom instance is provided.
    self._autotuner = settings.scp.autotuner

    # Set boundary conditions
    self._solver.update_boundary_conditions(
        x_init=settings.sim.x.initial,
        x_term=settings.sim.x.final,
    )

    # Create temporary state for initialization solve
    init_state = AlgorithmState.from_settings(settings)

    # Solve a throwaway warm-start problem to initialize DPP and JAX jacobians
    _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
        init_state.x, init_state.u.astype(float), params
    )

    init_state.add_discretization(V_multi_shoot.__array__())
    _ = self._subproblem(params, init_state, settings)
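Why a throwaway solve during initialize(): the first pass through a compiled pipeline pays a one-time setup cost (DPP canonicalization, JAX tracing), so paying it up front keeps per-step timings clean. A sketch with a hypothetical compile-once solver:

```python
class CompilingSolver:
    """Hypothetical solver that compiles lazily on first use."""
    def __init__(self):
        self.compile_count = 0
        self._kernel = None

    def solve(self, x):
        if self._kernel is None:
            # One-time setup work (stands in for DPP/JIT compilation)
            self.compile_count += 1
            self._kernel = lambda v: 2.0 * v
        return self._kernel(x)

solver = CompilingSolver()
solver.solve(0.0)            # warm-start solve during initialize()
solver.solve(1.0)            # later step() calls reuse the compiled kernel
solver.solve(2.0)
print(solver.compile_count)  # → 1
```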
step(state: AlgorithmState, params: dict, settings: Config) -> bool

Execute one PTR iteration.

Solves the convex subproblem, updates state in place, and checks convergence based on trust region, virtual buffer, and virtual control costs.

Parameters:
    state (AlgorithmState, required): Mutable solver state (modified in place)
    params (dict, required): Problem parameters dictionary (may change between steps)
    settings (Config, required): Configuration object (may change between steps)

Returns:
    bool: True if J_tr, J_vb, and J_vc are all below their thresholds.

Raises:
    RuntimeError: If initialize() has not been called.

Source code in openscvx/algorithms/penalized_trust_region.py
def step(
    self,
    state: AlgorithmState,
    params: dict,
    settings: Config,
) -> bool:
    """Execute one PTR iteration.

    Solves the convex subproblem, updates state in place, and checks
    convergence based on trust region, virtual buffer, and virtual
    control costs.

    Args:
        state: Mutable solver state (modified in place)
        params: Problem parameters dictionary (may change between steps)
        settings: Configuration object (may change between steps)

    Returns:
        True if J_tr, J_vb, and J_vc are all below their thresholds.

    Raises:
        RuntimeError: If initialize() has not been called.
    """
    if self._solver is None:
        raise RuntimeError(
            "PenalizedTrustRegion.step() called before initialize(). "
            "Call initialize() first to set up compiled infrastructure."
        )

    # Compute discretization before subproblem only for the first iteration
    if state.k == 1:
        t0 = time.time()
        _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
            state.x, state.u.astype(float), params
        )
        dis_time = time.time() - t0

        state.add_discretization(V_multi_shoot.__array__())

    # Run the subproblem
    (
        x_sol,
        u_sol,
        cost,
        J_total,
        J_vb_vec,
        J_vc_vec,
        J_tr_vec,
        prob_stat,
        subprop_time,
        vc_mat,
        tr_mat,
    ) = self._subproblem(params, state, settings)

    candidate = CandidateIterate()
    candidate.x = x_sol
    candidate.u = u_sol
    candidate.J_lin = J_total

    t0 = time.time()
    _, _, _, x_prop, V_multi_shoot = self._discretization_solver.call(
        candidate.x, candidate.u.astype(float), params
    )
    dis_time = time.time() - t0

    candidate.V = V_multi_shoot.__array__()
    candidate.x_prop = x_prop.__array__()

    # Attach subproblem slack and trust-region matrices to the candidate
    candidate.VC = vc_mat
    candidate.TR = tr_mat

    state.J_tr = np.sum(np.array(J_tr_vec))
    state.J_vb = np.sum(np.array(J_vb_vec))
    state.J_vc = np.sum(np.array(J_vc_vec))

    # Update weights in state using configured autotuning method
    adaptive_state = self._autotuner.update_weights(
        state, candidate, self._jax_constraints, settings, params
    )

    # Build emission data - only include nonlinear/reduction metrics when
    # the autotuner actually uses them (constant/ramp methods don't)
    use_full_metrics = not isinstance(
        self._autotuner, (ConstantProximalWeight, RampProximalWeight)
    )

    emission_data = {
        "iter": state.k,
        "dis_time": dis_time * 1000.0,
        "subprop_time": subprop_time * 1000.0,
        "J_tr": state.J_tr,
        "J_vb": state.J_vb,
        "J_vc": state.J_vc,
        "cost": cost[-1],
        "lam_prox": state.lam_prox,
        "prob_stat": prob_stat,
        "adaptive_state": adaptive_state,
    }

    # Only include nonlinear/reduction metrics when autotuner uses them
    # (constant/ramp methods don't compute these, so we don't emit them)
    if use_full_metrics:
        if len(state.pred_reduction_history) == 0:
            pred_reduction = 0.0
        else:
            pred_reduction = state.pred_reduction_history[-1]
        if len(state.actual_reduction_history) == 0:
            actual_reduction = 0.0
        else:
            actual_reduction = state.actual_reduction_history[-1]
        if len(state.acceptance_ratio_history) == 0:
            acceptance_ratio = 0.0
        else:
            acceptance_ratio = state.acceptance_ratio_history[-1]

        emission_data.update(
            {
                "J_nonlin": candidate.J_nonlin,
                "J_lin": candidate.J_lin,
                "pred_reduction": pred_reduction,
                "actual_reduction": actual_reduction,
                "acceptance_ratio": acceptance_ratio,
            }
        )

    # Emit data
    self._emitter(emission_data)

    # Increment iteration counter
    state.k += 1

    # Return convergence status
    return (
        (state.J_tr < settings.scp.ep_tr)
        and (state.J_vb < settings.scp.ep_vb)
        and (state.J_vc < settings.scp.ep_vc)
    )
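The convergence test at the end of step() is a conjunction: all three penalty costs must be below their tolerances at once. A standalone sketch (the tolerance defaults here are illustrative; the real values come from settings.scp):

```python
def converged(J_tr, J_vb, J_vc, ep_tr=1e-4, ep_vb=1e-4, ep_vc=1e-8):
    """True only when trust-region, virtual-buffer, and virtual-control
    costs are simultaneously below their thresholds."""
    return (J_tr < ep_tr) and (J_vb < ep_vb) and (J_vc < ep_vc)

print(converged(1e-5, 1e-5, 1e-9))  # → True
print(converged(1e-3, 1e-5, 1e-9))  # → False (trust-region cost too large)
```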

RampProximalWeight

Bases: AutotuningBase

Ramp Proximal Weight method.

This method scales the proximal weight up by a constant factor each iteration (geometric growth) until it reaches lam_prox_max, then keeps it constant.

Source code in openscvx/algorithms/autotuning.py
class RampProximalWeight(AutotuningBase):
    """Ramp Proximal Weight method.

    This method scales the proximal weight up by a constant factor each iteration
    (geometric growth) until it reaches lam_prox_max, then keeps it constant.
    """

    def __init__(
        self,
        ramp_factor: float = 1.0,
        lam_prox_max: float = 1e3,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        self.ramp_factor = ramp_factor
        self.lam_prox_max = lam_prox_max
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
    ) -> str:
        """Update SCP weights by ramping the proximal weight toward its cap.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate produced by the subproblem
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters

        Returns:
            str: Adaptive state string ("Accept Higher" or "Accept Constant")
        """
        # Update cost relaxation parameter after cost_drop iterations
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = settings.scp.lam_cost

        # Check if we're already at max before updating
        was_at_max = state.lam_prox >= self.lam_prox_max

        # Calculate and append new value
        new_lam_prox = min(state.lam_prox * self.ramp_factor, self.lam_prox_max)
        state.lam_prox_history.append(new_lam_prox)

        # If we were already at max, or if we just reached it and it's staying constant
        if was_at_max:
            state.accept_solution(candidate)
            return "Accept Constant"
        else:
            state.accept_solution(candidate)
            return "Accept Higher"
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict) -> str

Update SCP weights by ramping the proximal weight toward its cap.

Parameters:
    state (AlgorithmState, required): Solver state containing current weight values (mutated in place)
    candidate (CandidateIterate, required): Candidate iterate produced by the subproblem
    nodal_constraints (LoweredJaxConstraints, required): Lowered JAX constraints
    settings (Config, required): Configuration object containing adaptation parameters
    params (dict, required): Dictionary of problem parameters

Returns:
    str: Adaptive state string ("Accept Higher" or "Accept Constant")

Source code in openscvx/algorithms/autotuning.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
) -> str:
    """Update SCP weights by ramping the proximal weight toward its cap.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate produced by the subproblem
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters

    Returns:
        str: Adaptive state string ("Accept Higher" or "Accept Constant")
    """
    # Update cost relaxation parameter after cost_drop iterations
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = settings.scp.lam_cost

    # Check if we're already at max before updating
    was_at_max = state.lam_prox >= self.lam_prox_max

    # Calculate and append new value
    new_lam_prox = min(state.lam_prox * self.ramp_factor, self.lam_prox_max)
    state.lam_prox_history.append(new_lam_prox)

    # If we were already at max, or if we just reached it and it's staying constant
    if was_at_max:
        state.accept_solution(candidate)
        return "Accept Constant"
    else:
        state.accept_solution(candidate)
        return "Accept Higher"
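Numerically, update_weights multiplies lam_prox by ramp_factor each iteration and clips it at lam_prox_max, so the weight grows geometrically until it saturates. A standalone sketch of the schedule (the values below are illustrative):

```python
def ramp_schedule(lam_prox0, ramp_factor, lam_prox_max, iters):
    """Replay the lam_prox updates: multiply by ramp_factor, clip at the cap."""
    lams, lam = [], lam_prox0
    for _ in range(iters):
        lam = min(lam * ramp_factor, lam_prox_max)
        lams.append(lam)
    return lams

print(ramp_schedule(1.0, 2.0, 10.0, 6))  # → [2.0, 4.0, 8.0, 10.0, 10.0, 10.0]
```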