
algorithms

Successive convexification algorithms for trajectory optimization.

This module provides implementations of SCvx (Successive Convexification) algorithms for solving non-convex trajectory optimization problems through iterative convex approximation.

All algorithms inherit from Algorithm, enabling pluggable algorithm implementations and custom SCvx variants:

class Algorithm(ABC):
    @abstractmethod
    def initialize(self, solver, discretization_solver, jax_constraints,
                   emitter, params, settings) -> None:
        '''Store compiled infrastructure and warm-start solvers.'''
        ...

    @abstractmethod
    def step(self, state, params, settings) -> bool:
        '''Execute one iteration using stored infrastructure.'''
        ...

Immutable components (solver, discretization_solver, jax_constraints, etc.) are stored during initialize(). Mutable configuration (params, settings) is passed per-step to support runtime parameter updates and tolerance tuning.
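This split can be sketched with a minimal, self-contained toy (ToyAlgorithm and ToySettings are illustrative stand-ins, not part of openscvx): infrastructure is stored once at initialize(), while settings are re-read on every step, so a tolerance change between calls takes effect immediately.

```python
from dataclasses import dataclass


@dataclass
class ToySettings:
    tol: float  # convergence tolerance; may be tuned between steps


class ToyAlgorithm:
    def initialize(self, solver, settings):
        # Immutable infrastructure is stored once on the instance ...
        self._solver = solver

    def step(self, state, settings):
        # ... while settings are re-read on every call, so callers can
        # tighten or relax tolerances mid-solve.
        residual = self._solver(state)
        return residual < settings.tol


algo = ToyAlgorithm()
algo.initialize(solver=abs, settings=ToySettings(tol=1e-3))
converged_strict = algo.step(state=0.5, settings=ToySettings(tol=1e-3))  # False
converged_loose = algo.step(state=0.5, settings=ToySettings(tol=1.0))    # True
```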

AlgorithmState holds mutable state during SCP iterations. Algorithms that require additional state can subclass it:

@dataclass
class MyAlgorithmState(AlgorithmState):
    my_custom_field: float = 0.0
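The same subclassing pattern can be exercised standalone with a toy base class in place of AlgorithmState (ToyState and MyToyState below are hypothetical stand-ins); inherited fields and the new field coexist as ordinary dataclass fields:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToyState:  # stand-in for AlgorithmState
    k: int = 1
    X: List[float] = field(default_factory=list)


@dataclass
class MyToyState(ToyState):
    my_custom_field: float = 0.0  # extra per-algorithm state


# Inherited fields keep working; the new field gets its default.
s = MyToyState(k=3)
```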
Note

AlgorithmState currently combines iteration metrics (costs, weights), trajectory history, and discretization data. A future refactor may separate these concerns into distinct classes for clearer data flow:

@dataclass
class AlgorithmState:
    # Mutable iteration state
    k: int
    J_tr: float
    J_vb: float
    J_vc: float
    lam_prox: float
    lam_cost: float
    lam_vc: ...
    lam_vb_nodal: np.ndarray  # (N, n_nodal)
    lam_vb_cross: np.ndarray  # (n_cross,)

@dataclass
class TrajectoryHistory:
    # Accumulated trajectory solutions
    X: List[np.ndarray]
    U: List[np.ndarray]

    @property
    def x(self): return self.X[-1]

    @property
    def u(self): return self.U[-1]

@dataclass
class DebugHistory:
    # Optional diagnostic data (discretization matrices, etc.)
    V_history: List[np.ndarray]
    VC_history: List[np.ndarray]
    TR_history: List[np.ndarray]

Current Implementations:

  • PenalizedTrustRegion: Penalized Trust Region (PTR) algorithm

Algorithm

Bases: ABC

Abstract base class for successive convexification algorithms.

This class defines the interface for SCP algorithms used in trajectory optimization. Implementations should remain minimal and functional, delegating state management to the AlgorithmState dataclass.

The two core methods mirror the SCP workflow:

  • initialize: Store compiled infrastructure and warm-start solvers
  • step: Execute one convex subproblem iteration

Immutable components (ocp, discretization_solver, jax_constraints, etc.) are stored during initialize(). Mutable configuration (params, settings) is passed per-step to support runtime parameter updates and tolerance tuning.

Statefulness

Avoid storing mutable iteration state (costs, weights, trajectories) on self. All iteration state should live in AlgorithmState or a subclass thereof, passed explicitly to step(). This keeps algorithm classes stateless w.r.t. iteration, making data flow explicit and staying close to functional programming principles where possible.
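A minimal sketch of this principle, using toy names (IterState, StatelessAlgo) rather than the library's classes: the algorithm object holds nothing that changes between iterations, and every step mutates only the explicitly passed state, so resetting a solve is just constructing a fresh state object.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class IterState:
    k: int = 0
    J_history: List[float] = field(default_factory=list)


class StatelessAlgo:
    """Holds no iteration state on self; everything lives in IterState."""

    def step(self, state: IterState) -> bool:
        state.k += 1
        J = 1.0 / state.k          # stand-in cost that shrinks each iteration
        state.J_history.append(J)
        return J < 0.3             # toy convergence check


algo = StatelessAlgo()
state = IterState()
while not algo.step(state):
    pass
# Converges once 1/k < 0.3, i.e. at k = 4; a re-solve would simply
# start from a fresh IterState().
```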

Example

Implementing a custom algorithm::

class MyAlgorithm(Algorithm):
    def initialize(self, solver, discretization_solver,
                   jax_constraints, emitter,
                   params, settings):
        # Store compiled infrastructure
        self._solver = solver
        self._discretization_solver = discretization_solver
        self._jax_constraints = jax_constraints
        self._emitter = emitter
        # Warm-start with initial params/settings...

    def step(self, state, params, settings):
        # Run one iteration using self._* and per-step params/settings
        return converged

Attributes:

  • weights (Weights): SCP weights used by the algorithm and autotuner. Subclasses must set this in __init__.
  • k_max (int): Maximum number of SCP iterations. Subclasses must set this in __init__.

Source code in openscvx/algorithms/base.py
class Algorithm(ABC):
    """Abstract base class for successive convexification algorithms.

    This class defines the interface for SCP algorithms used in trajectory
    optimization. Implementations should remain minimal and functional,
    delegating state management to the AlgorithmState dataclass.

    The two core methods mirror the SCP workflow:

    - initialize: Store compiled infrastructure and warm-start solvers
    - step: Execute one convex subproblem iteration

    Immutable components (ocp, discretization_solver, jax_constraints, etc.) are
    stored during initialize(). Mutable configuration (params, settings) is passed
    per-step to support runtime parameter updates and tolerance tuning.

    !!! tip "Statefulness"
        Avoid storing mutable iteration state (costs, weights, trajectories) on
        ``self``. All iteration state should live in :class:`AlgorithmState` or
        a subclass thereof, passed explicitly to ``step()``. This keeps algorithm
        classes stateless w.r.t. iteration, making data flow explicit and staying
        close to functional programming principles where possible.

    Example:
        Implementing a custom algorithm::

            class MyAlgorithm(Algorithm):
                def initialize(self, solver, discretization_solver,
                               jax_constraints, emitter,
                               params, settings):
                    # Store compiled infrastructure
                    self._solver = solver
                    self._discretization_solver = discretization_solver
                    self._jax_constraints = jax_constraints
                    self._emitter = emitter
                    # Warm-start with initial params/settings...

                def step(self, state, params, settings):
                    # Run one iteration using self._* and per-step params/settings
                    return converged

    Attributes:
        weights: SCP weights used by the algorithm and autotuner.
            Subclasses must set this in ``__init__``.
        k_max: Maximum number of SCP iterations.
            Subclasses must set this in ``__init__``.
    """

    #: SCP weights. Subclasses must set this in ``__init__``.
    weights: "Weights"

    #: Maximum number of SCP iterations. Subclasses must set this in ``__init__``.
    k_max: int

    #: Optional wall-clock time limit in seconds. ``None`` means no limit.
    #: Subclasses must set this in ``__init__``.
    t_max: Optional[float]

    @abstractmethod
    def initialize(
        self,
        solver: "ConvexSolver",
        discretization_solver: callable,
        jax_constraints: "LoweredJaxConstraints",
        emitter: callable,
        params: dict,
        settings: "Config",
        discretization_solver_impulsive: Optional[Callable] = None,
    ) -> None:
        """Initialize the algorithm and store compiled infrastructure.

        This method stores immutable components and performs any setup required
        before the SCP loop begins (e.g., warm-starting solvers). The params and
        settings are passed for warm-start but may change between steps.

        Args:
            solver: Convex subproblem solver (e.g., CVXPySolver)
            discretization_solver: Compiled discretization solver function
            jax_constraints: JIT-compiled JAX constraint functions
            emitter: Callback for emitting iteration progress data
            params: Problem parameters dictionary (for warm-start only)
            settings: Configuration object (for warm-start only)
            discretization_solver_impulsive: Optional solver for discrete/impulsive
                dynamics evaluated on ``(x_prop, u_discrete)``
        """
        raise NotImplementedError

    @abstractmethod
    def step(
        self,
        state: AlgorithmState,
        params: dict,
        settings: "Config",
    ) -> bool:
        """Execute one iteration of the SCP algorithm.

        This method solves a single convex subproblem, updates the algorithm
        state in place, and returns whether convergence criteria are met.

        Uses stored infrastructure (ocp, discretization_solver, etc.) with
        per-step params and settings to support runtime modifications.

        Args:
            state: Mutable algorithm state (modified in place)
            params: Problem parameters dictionary (may change between steps)
            settings: Configuration object (may change between steps)

        Returns:
            True if convergence criteria are satisfied, False otherwise.
        """
        raise NotImplementedError

    @abstractmethod
    def citation(self) -> List[str]:
        """Return BibTeX citations for this algorithm.

        Implementations should return a list of BibTeX entry strings for the
        papers that should be cited when using this algorithm.

        Returns:
            List of BibTeX citation strings.

        Example:
            Getting citations for an algorithm::

                algorithm = PenalizedTrustRegion()
                for bibtex in algorithm.citation():
                    print(bibtex)
        """
        raise NotImplementedError
citation() -> List[str] abstractmethod

Return BibTeX citations for this algorithm.

Implementations should return a list of BibTeX entry strings for the papers that should be cited when using this algorithm.

Returns:

  • List[str]: List of BibTeX citation strings.

Example

Getting citations for an algorithm::

algorithm = PenalizedTrustRegion()
for bibtex in algorithm.citation():
    print(bibtex)
Source code in openscvx/algorithms/base.py
@abstractmethod
def citation(self) -> List[str]:
    """Return BibTeX citations for this algorithm.

    Implementations should return a list of BibTeX entry strings for the
    papers that should be cited when using this algorithm.

    Returns:
        List of BibTeX citation strings.

    Example:
        Getting citations for an algorithm::

            algorithm = PenalizedTrustRegion()
            for bibtex in algorithm.citation():
                print(bibtex)
    """
    raise NotImplementedError
initialize(solver: ConvexSolver, discretization_solver: callable, jax_constraints: LoweredJaxConstraints, emitter: callable, params: dict, settings: Config, discretization_solver_impulsive: Optional[Callable] = None) -> None abstractmethod

Initialize the algorithm and store compiled infrastructure.

This method stores immutable components and performs any setup required before the SCP loop begins (e.g., warm-starting solvers). The params and settings are passed for warm-start but may change between steps.

Parameters:

  • solver (ConvexSolver, required): Convex subproblem solver (e.g., CVXPySolver)
  • discretization_solver (callable, required): Compiled discretization solver function
  • jax_constraints (LoweredJaxConstraints, required): JIT-compiled JAX constraint functions
  • emitter (callable, required): Callback for emitting iteration progress data
  • params (dict, required): Problem parameters dictionary (for warm-start only)
  • settings (Config, required): Configuration object (for warm-start only)
  • discretization_solver_impulsive (Optional[Callable], default None): Optional solver for discrete/impulsive dynamics evaluated on (x_prop, u_discrete)
Source code in openscvx/algorithms/base.py
@abstractmethod
def initialize(
    self,
    solver: "ConvexSolver",
    discretization_solver: callable,
    jax_constraints: "LoweredJaxConstraints",
    emitter: callable,
    params: dict,
    settings: "Config",
    discretization_solver_impulsive: Optional[Callable] = None,
) -> None:
    """Initialize the algorithm and store compiled infrastructure.

    This method stores immutable components and performs any setup required
    before the SCP loop begins (e.g., warm-starting solvers). The params and
    settings are passed for warm-start but may change between steps.

    Args:
        solver: Convex subproblem solver (e.g., CVXPySolver)
        discretization_solver: Compiled discretization solver function
        jax_constraints: JIT-compiled JAX constraint functions
        emitter: Callback for emitting iteration progress data
        params: Problem parameters dictionary (for warm-start only)
        settings: Configuration object (for warm-start only)
        discretization_solver_impulsive: Optional solver for discrete/impulsive
            dynamics evaluated on ``(x_prop, u_discrete)``
    """
    raise NotImplementedError
step(state: AlgorithmState, params: dict, settings: Config) -> bool abstractmethod

Execute one iteration of the SCP algorithm.

This method solves a single convex subproblem, updates the algorithm state in place, and returns whether convergence criteria are met.

Uses stored infrastructure (ocp, discretization_solver, etc.) with per-step params and settings to support runtime modifications.

Parameters:

  • state (AlgorithmState, required): Mutable algorithm state (modified in place)
  • params (dict, required): Problem parameters dictionary (may change between steps)
  • settings (Config, required): Configuration object (may change between steps)

Returns:

  • bool: True if convergence criteria are satisfied, False otherwise.

Source code in openscvx/algorithms/base.py
@abstractmethod
def step(
    self,
    state: AlgorithmState,
    params: dict,
    settings: "Config",
) -> bool:
    """Execute one iteration of the SCP algorithm.

    This method solves a single convex subproblem, updates the algorithm
    state in place, and returns whether convergence criteria are met.

    Uses stored infrastructure (ocp, discretization_solver, etc.) with
    per-step params and settings to support runtime modifications.

    Args:
        state: Mutable algorithm state (modified in place)
        params: Problem parameters dictionary (may change between steps)
        settings: Configuration object (may change between steps)

    Returns:
        True if convergence criteria are satisfied, False otherwise.
    """
    raise NotImplementedError

AlgorithmState dataclass

Mutable state for SCP iterations.

This dataclass holds all state that changes during the solve process. It stores only the evolving trajectory arrays, not the full State/Control objects which contain immutable configuration metadata.

Trajectory arrays are stored in history lists, with the current guess accessed via properties that return the latest entry.

A fresh instance is created for each solve, enabling easy reset functionality.
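The history-list-plus-property pattern described above can be illustrated with a self-contained toy (ToyTrajectoryState is a hypothetical stand-in, not the library class): iterates accumulate in a list, and a property exposes the latest entry as the current guess.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class ToyTrajectoryState:  # toy stand-in illustrating the history-list pattern
    X: List[np.ndarray] = field(default_factory=list)

    @property
    def x(self) -> np.ndarray:
        # The "current guess" is simply the latest accepted iterate.
        return self.X[-1]


state = ToyTrajectoryState(X=[np.zeros((5, 3))])  # initial guess
state.X.append(np.ones((5, 3)))                   # accept a new iterate
# state.x now returns the appended array; the full history stays available.
```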

Attributes:

  • k (int): Current iteration number (starts at 1)
  • J_tr (float): Current trust region cost
  • J_vb (float): Current virtual buffer cost
  • J_vc (float): Current virtual control cost
  • lam_prox (ndarray): Current trust region weight (may adapt during solve)
  • lam_cost (Union[float, ndarray]): Current cost weight (may relax during solve)
  • lam_vc (Union[float, ndarray]): Current virtual control penalty weight
  • lam_vb_nodal (ndarray): Current per-node nodal virtual buffer penalty weights
  • lam_vb_cross (ndarray): Current cross-node virtual buffer penalty weights
  • n_x (int): Number of states (for unpacking V vectors)
  • n_u (int): Number of controls (for unpacking V vectors)
  • N (int): Number of trajectory nodes (for unpacking V vectors)
  • X (List[ndarray]): List of state trajectory iterates
  • U (List[ndarray]): List of control trajectory iterates
  • discretizations (List[DiscretizationResult]): List of unpacked discretization results
  • VC_history (List[ndarray]): List of virtual control history
  • TR_history (List[ndarray]): List of trust region history
  • A_bar_history (List[ndarray]): List of state transition matrices
  • B_bar_history (List[ndarray]): List of control influence matrices
  • C_bar_history (List[ndarray]): List of control influence matrices for next node
  • x_prop_history (List[ndarray]): List of propagated states

Source code in openscvx/algorithms/base.py
@dataclass
class AlgorithmState:
    """Mutable state for SCP iterations.

    This dataclass holds all state that changes during the solve process.
    It stores only the evolving trajectory arrays, not the full State/Control
    objects which contain immutable configuration metadata.

    Trajectory arrays are stored in history lists, with the current guess
    accessed via properties that return the latest entry.

    A fresh instance is created for each solve, enabling easy reset functionality.

    Attributes:
        k: Current iteration number (starts at 1)
        J_tr: Current trust region cost
        J_vb: Current virtual buffer cost
        J_vc: Current virtual control cost
        lam_prox: Current trust region weight (may adapt during solve)
        lam_cost: Current cost weight (may relax during solve)
        lam_vc: Current virtual control penalty weight
        lam_vb_nodal: Current per-node nodal virtual buffer penalty weights
        lam_vb_cross: Current cross-node virtual buffer penalty weights
        n_x: Number of states (for unpacking V vectors)
        n_u: Number of controls (for unpacking V vectors)
        N: Number of trajectory nodes (for unpacking V vectors)
        X: List of state trajectory iterates
        U: List of control trajectory iterates
        discretizations: List of unpacked discretization results
        VC_history: List of virtual control history
        TR_history: List of trust region history
        A_bar_history: List of state transition matrices
        B_bar_history: List of control influence matrices
        C_bar_history: List of control influence matrices for next node
        x_prop_history: List of propagated states
    """

    k: int
    J_tr: float
    J_vb: float
    J_vc: float
    n_x: int
    n_u: int
    N: int
    J_nonlin_history: List[float]
    J_lin_history: List[float]
    pred_reduction_history: List[float]
    actual_reduction_history: List[float]
    acceptance_ratio_history: List[float]
    X: List[np.ndarray] = field(default_factory=list)
    U: List[np.ndarray] = field(default_factory=list)
    discretizations: List[DiscretizationResult] = field(default_factory=list)
    VC_history: List[np.ndarray] = field(default_factory=list)
    TR_history: List[np.ndarray] = field(default_factory=list)
    lam_vc_history: List[Union[float, np.ndarray]] = field(default_factory=list)
    lam_cost_history: List[Union[float, np.ndarray]] = field(default_factory=list)
    lam_vb_nodal_history: List[np.ndarray] = field(default_factory=list)
    lam_vb_cross_history: List[np.ndarray] = field(default_factory=list)
    lam_prox_history: List[np.ndarray] = field(default_factory=list)
    x_full: List[np.ndarray] = field(default_factory=list)
    x_prop_full: List[np.ndarray] = field(default_factory=list)

    def accept_solution(self, cand: CandidateIterate) -> None:
        """Accept the given candidate iterate by updating the state in place."""
        if cand.x is None or cand.u is None:
            raise ValueError(
                "No candidate iterate to accept. Expected algorithm to set "
                "`cand.x` and `cand.u` before calling accept_solution()."
            )

        self.X.append(cand.x)
        self.U.append(cand.u)

        if cand.V is not None:
            if cand.W is not None:
                self.discretizations.append(
                    DiscretizationResult.from_VW(
                        cand.V,
                        cand.W,
                        n_x=self.n_x,
                        n_u=self.n_u,
                        N=self.N,
                    )
                )
            else:
                self.discretizations.append(
                    DiscretizationResult.from_V(
                        cand.V,
                        n_x=self.n_x,
                        n_u=self.n_u,
                        N=self.N,
                    )
                )
        if cand.VC is not None:
            self.VC_history.append(cand.VC)
        if cand.TR is not None:
            self.TR_history.append(cand.TR)

        if cand.lam_prox is not None:
            self.lam_prox_history.append(cand.lam_prox)
        if cand.lam_vc is not None:
            self.lam_vc_history.append(cand.lam_vc)
        if cand.lam_cost is not None:
            self.lam_cost_history.append(cand.lam_cost)
        if cand.lam_vb_nodal is not None:
            self.lam_vb_nodal_history.append(cand.lam_vb_nodal)
        if cand.lam_vb_cross is not None:
            self.lam_vb_cross_history.append(cand.lam_vb_cross)

        if cand.J_nonlin is not None:
            self.J_nonlin_history.append(cand.J_nonlin)
        if cand.J_lin is not None:
            self.J_lin_history.append(cand.J_lin)

    def reject_solution(self, cand: CandidateIterate) -> None:
        """Reject the current candidate and update only the trust-region weight.

        This is intended for autotuners that decide to *reject* a candidate
        iterate but still want to adapt the proximal (trust-region) weight
        for the next solve. The new trust region weight is taken from
        ``cand.lam_prox`` (shape ``(N, n_states + n_controls)``) and appended
        to the history. It does **not** modify trajectories or any other
        histories.

        Args:
            cand: The rejected candidate iterate; its ``lam_prox`` is recorded.
        """
        self.lam_prox_history.append(cand.lam_prox)

    @property
    def x(self) -> np.ndarray:
        """Get current state trajectory array.

        Returns:
            Current state trajectory guess (latest entry in history), shape (N, n_states)
        """
        return self.X[-1]

    @property
    def u(self) -> np.ndarray:
        """Get current control trajectory array.

        Returns:
            Current control trajectory guess (latest entry in history), shape (N, n_controls)
        """
        return self.U[-1]

    def add_discretization(self, V: np.ndarray) -> None:
        """Append a raw discretization matrix as an unpacked result."""
        self.discretizations.append(
            DiscretizationResult.from_V(V, n_x=self.n_x, n_u=self.n_u, N=self.N)
        )

    def add_impulsive_discretization(
        self,
        W: np.ndarray,
    ) -> None:
        """Attach impulsive discretization data to the latest discretization entry."""
        if not self.discretizations:
            raise ValueError(
                "Cannot attach impulsive discretization before adding the base discretization."
            )
        last = self.discretizations[-1]
        self.discretizations[-1] = DiscretizationResult.from_VW(
            V=last.V,
            W=W,
            n_x=self.n_x,
            n_u=self.n_u,
            N=self.N,
        )

    @property
    def V_history(self) -> List[np.ndarray]:
        """Backward-compatible view of raw discretization matrices.

        Note:
            This is a read-only view. Internal code should prefer
            ``state.discretizations``.
        """
        return [d.V for d in self.discretizations]

    def x_prop(self, index: int = -1) -> np.ndarray:
        """Extract propagated state trajectory from the discretization history.

        Args:
            index: Index into V_history (default: -1 for latest entry)

        Returns:
            Propagated state trajectory x_prop with shape (N-1, n_x), or None if no V_history

        Example:
            After running an iteration, access the propagated states::

                problem.step()
                x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
                x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].x_prop

    def A_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized state transition matrix from discretizations.

        Args:
            index: Index into V_history (default: -1 for latest entry)

        Returns:
            Discretized state Jacobian A_d with shape (N-1, n_x, n_x), or None if no V_history

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
                A_d_prev = problem.state.A_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].A_d

    def B_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized control influence matrix (current node).

        Args:
            index: Index into discretization history (default: -1 for latest entry)

        Returns:
            Discretized control Jacobian B_d with shape (N-1, n_x, n_u), or None if empty.

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
                B_d_prev = problem.state.B_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].B_d

    def C_d(self, index: int = -1) -> np.ndarray:
        """Extract discretized control influence matrix (next node).

        Args:
            index: Index into discretization history (default: -1 for latest entry)

        Returns:
            Discretized control Jacobian C_d with shape (N-1, n_x, n_u), or None if empty.

        Example:
            After running an iteration, access linearization matrices::

                problem.step()
                C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
                C_d_prev = problem.state.C_d(-2)  # Previous iteration
        """
        if not self.discretizations:
            return None
        return self.discretizations[index].C_d

    def x_prop_plus(self, index: int = -1) -> np.ndarray:
        """Extract discrete dynamics evaluated at x_prop."""
        if not self.discretizations:
            return None
        return self.discretizations[index].x_prop_plus

    def D_d(self, index: int = -1) -> np.ndarray:
        """Extract Jacobian of x_prop_plus w.r.t. x_prop."""
        if not self.discretizations:
            return None
        return self.discretizations[index].D_d

    def E_d(self, index: int = -1) -> np.ndarray:
        """Extract Jacobian of x_prop_plus w.r.t. discrete controls."""
        if not self.discretizations:
            return None
        return self.discretizations[index].E_d

    @property
    def lam_prox(self) -> np.ndarray:
        """Get current trust region weight.

        Returns:
            Array of shape ``(N, n_states + n_controls)``.
        """
        if not self.lam_prox_history:
            raise ValueError("lam_prox_history is empty. Initialize state using from_settings().")
        return self.lam_prox_history[-1]

    @property
    def lam_cost(self) -> Union[float, np.ndarray]:
        """Get current cost weight.

        Returns:
            Current cost weight (latest entry in lam_cost_history).
            Scalar or array of shape ``(n_states,)`` for per-state weighting.
        """
        if not self.lam_cost_history:
            raise ValueError("lam_cost_history is empty. Initialize state using from_settings().")
        return self.lam_cost_history[-1]

    @property
    def lam_vc(self) -> Union[float, np.ndarray]:
        """Get current virtual control penalty weight.

        Returns:
            Current virtual control penalty weight (latest entry in lam_vc_history)
        """
        if not self.lam_vc_history:
            raise ValueError("lam_vc_history is empty. Initialize state using from_settings().")
        return self.lam_vc_history[-1]

    @property
    def lam_vb_nodal(self) -> np.ndarray:
        """Get current virtual buffer penalty weights for nodal constraints.

        Returns:
            Array of shape ``(N, n_nodal_constraints)``.
        """
        if not self.lam_vb_nodal_history:
            raise ValueError(
                "lam_vb_nodal_history is empty. Initialize state using from_settings()."
            )
        return self.lam_vb_nodal_history[-1]

    @property
    def lam_vb_cross(self) -> np.ndarray:
        """Get current virtual buffer penalty weights for cross-node constraints.

        Returns:
            Array of shape ``(n_cross_node_constraints,)``.
        """
        if not self.lam_vb_cross_history:
            raise ValueError(
                "lam_vb_cross_history is empty. Initialize state using from_settings()."
            )
        return self.lam_vb_cross_history[-1]

    @classmethod
    def from_settings(
        cls,
        settings: "Config",
        weights: "Weights",
    ) -> "AlgorithmState":
        """Create initial algorithm state from configuration.

        Copies only the trajectory arrays from settings, leaving all metadata
        (bounds, boundary conditions, etc.) in the original settings object.

        Args:
            settings: Configuration object containing initial guesses and SCP parameters
            weights: Initial weights from the algorithm.
                ``lam_vc`` is expanded to an ``(N-1, n_states)`` array here
                (scalar or per-state values are broadcast).

        Returns:
            Fresh AlgorithmState initialized from settings with copied arrays
        """
        n = settings.sim.n
        n_states = settings.sim.n_states
        n_controls = settings.sim.n_controls
        n_total = n_states + n_controls
        lam_vc_array = np.ones((n - 1, n_states)) * weights.lam_vc

        # Expand lam_prox to (N, n_states + n_controls) array
        lam_prox_array = np.ones((n, n_total)) * weights.lam_prox

        # Expand scalar lam_cost to per-state array
        if isinstance(weights.lam_cost, np.ndarray):
            lam_cost_init = weights.lam_cost.copy()
        else:
            lam_cost_init = np.full(n_states, weights.lam_cost)

        return cls(
            k=1,
            J_tr=1e2,
            J_vb=1e2,
            J_vc=1e2,
            n_x=n_states,
            n_u=settings.sim.n_controls,
            N=n,
            J_nonlin_history=[],
            J_lin_history=[],
            pred_reduction_history=[],
            actual_reduction_history=[],
            acceptance_ratio_history=[],
            X=[settings.sim.x.guess.copy()],
            U=[settings.sim.u.guess.copy()],
            discretizations=[],
            VC_history=[],
            TR_history=[],
            lam_vc_history=[lam_vc_array],
            lam_cost_history=[lam_cost_init],
            lam_vb_nodal_history=[weights.lam_vb_nodal.copy()],
            lam_vb_cross_history=[weights.lam_vb_cross.copy()],
            lam_prox_history=[lam_prox_array],
        )
V_history: List[np.ndarray] property

Backward-compatible view of raw discretization matrices.

Note

This is a read-only view. Internal code should prefer state.discretizations.

lam_cost: Union[float, np.ndarray] property

Get current cost weight.

Returns:
    Union[float, ndarray]: Current cost weight (latest entry in
        lam_cost_history); a scalar, or an array of shape (n_states,)
        for per-state weighting.

lam_prox: np.ndarray property

Get current trust region weight.

Returns:
    ndarray: Array of shape (N, n_states + n_controls).

lam_vb_cross: np.ndarray property

Get current virtual buffer penalty weights for cross-node constraints.

Returns:
    ndarray: Array of shape (n_cross_node_constraints,).

lam_vb_nodal: np.ndarray property

Get current virtual buffer penalty weights for nodal constraints.

Returns:
    ndarray: Array of shape (N, n_nodal_constraints).

lam_vc: Union[float, np.ndarray] property

Get current virtual control penalty weight.

Returns:
    Union[float, ndarray]: Current virtual control penalty weight
        (latest entry in lam_vc_history).

u: np.ndarray property

Get current control trajectory array.

Returns:
    ndarray: Current control trajectory guess (latest entry in history),
        shape (N, n_controls).

x: np.ndarray property

Get current state trajectory array.

Returns:
    ndarray: Current state trajectory guess (latest entry in history),
        shape (N, n_states).
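The x/u properties are read-only views of the latest entry in the trajectory history. A minimal sketch of this history-backed property pattern, using a hypothetical stand-in class (HistoryState is not part of the library):

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class HistoryState:
    """Hypothetical sketch of AlgorithmState's history-backed accessors."""

    X: List[np.ndarray] = field(default_factory=list)
    U: List[np.ndarray] = field(default_factory=list)

    @property
    def x(self) -> np.ndarray:
        # Latest accepted state trajectory, shape (N, n_states)
        return self.X[-1]

    @property
    def u(self) -> np.ndarray:
        # Latest accepted control trajectory, shape (N, n_controls)
        return self.U[-1]


state = HistoryState(X=[np.zeros((4, 3))], U=[np.zeros((4, 2))])
state.X.append(np.ones((4, 3)))  # accepting an iterate appends to history
assert state.x.shape == (4, 3)
assert np.all(state.x == 1.0)  # the property always returns the latest entry
```

Because only accepted iterates are appended, earlier entries (state.X[-2], etc.) remain available for diagnostics.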

A_d(index: int = -1) -> np.ndarray

Extract discretized state transition matrix from discretizations.

Parameters:
    index (int): Index into V_history (default: -1 for latest entry).

Returns:
    ndarray: Discretized state Jacobian A_d with shape (N-1, n_x, n_x),
        or None if no V_history.

Example:
    After running an iteration, access linearization matrices::

        problem.step()
        A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
        A_d_prev = problem.state.A_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def A_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized state transition matrix from discretizations.

    Args:
        index: Index into V_history (default: -1 for latest entry)

    Returns:
        Discretized state Jacobian A_d with shape (N-1, n_x, n_x), or None if no V_history

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            A_d = problem.state.A_d()  # Shape (N-1, n_x, n_x), latest
            A_d_prev = problem.state.A_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].A_d
B_d(index: int = -1) -> np.ndarray

Extract discretized control influence matrix (current node).

Parameters:
    index (int): Index into discretization history (default: -1 for latest entry).

Returns:
    ndarray: Discretized control Jacobian B_d with shape (N-1, n_x, n_u),
        or None if empty.

Example:
    After running an iteration, access linearization matrices::

        problem.step()
        B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
        B_d_prev = problem.state.B_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def B_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized control influence matrix (current node).

    Args:
        index: Index into discretization history (default: -1 for latest entry)

    Returns:
        Discretized control Jacobian B_d with shape (N-1, n_x, n_u), or None if empty.

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            B_d = problem.state.B_d()  # Shape (N-1, n_x, n_u), latest
            B_d_prev = problem.state.B_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].B_d
C_d(index: int = -1) -> np.ndarray

Extract discretized control influence matrix (next node).

Parameters:
    index (int): Index into discretization history (default: -1 for latest entry).

Returns:
    ndarray: Discretized control Jacobian C_d with shape (N-1, n_x, n_u),
        or None if empty.

Example:
    After running an iteration, access linearization matrices::

        problem.step()
        C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
        C_d_prev = problem.state.C_d(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def C_d(self, index: int = -1) -> np.ndarray:
    """Extract discretized control influence matrix (next node).

    Args:
        index: Index into discretization history (default: -1 for latest entry)

    Returns:
        Discretized control Jacobian C_d with shape (N-1, n_x, n_u), or None if empty.

    Example:
        After running an iteration, access linearization matrices::

            problem.step()
            C_d = problem.state.C_d()  # Shape (N-1, n_x, n_u), latest
            C_d_prev = problem.state.C_d(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].C_d
D_d(index: int = -1) -> np.ndarray

Extract Jacobian of x_prop_plus w.r.t. x_prop.

Source code in openscvx/algorithms/base.py
def D_d(self, index: int = -1) -> np.ndarray:
    """Extract Jacobian of x_prop_plus w.r.t. x_prop."""
    if not self.discretizations:
        return None
    return self.discretizations[index].D_d
E_d(index: int = -1) -> np.ndarray

Extract Jacobian of x_prop_plus w.r.t. discrete controls.

Source code in openscvx/algorithms/base.py
def E_d(self, index: int = -1) -> np.ndarray:
    """Extract Jacobian of x_prop_plus w.r.t. discrete controls."""
    if not self.discretizations:
        return None
    return self.discretizations[index].E_d
accept_solution(cand: CandidateIterate) -> None

Accept the given candidate iterate by updating the state in place.

Source code in openscvx/algorithms/base.py
def accept_solution(self, cand: CandidateIterate) -> None:
    """Accept the given candidate iterate by updating the state in place."""
    if cand.x is None or cand.u is None:
        raise ValueError(
            "No candidate iterate to accept. Expected algorithm to set "
            "`cand.x` and `cand.u` before calling accept_solution()."
        )

    self.X.append(cand.x)
    self.U.append(cand.u)

    if cand.V is not None:
        if cand.W is not None:
            self.discretizations.append(
                DiscretizationResult.from_VW(
                    cand.V,
                    cand.W,
                    n_x=self.n_x,
                    n_u=self.n_u,
                    N=self.N,
                )
            )
        else:
            self.discretizations.append(
                DiscretizationResult.from_V(
                    cand.V,
                    n_x=self.n_x,
                    n_u=self.n_u,
                    N=self.N,
                )
            )
    if cand.VC is not None:
        self.VC_history.append(cand.VC)
    if cand.TR is not None:
        self.TR_history.append(cand.TR)

    if cand.lam_prox is not None:
        self.lam_prox_history.append(cand.lam_prox)
    if cand.lam_vc is not None:
        self.lam_vc_history.append(cand.lam_vc)
    if cand.lam_cost is not None:
        self.lam_cost_history.append(cand.lam_cost)
    if cand.lam_vb_nodal is not None:
        self.lam_vb_nodal_history.append(cand.lam_vb_nodal)
    if cand.lam_vb_cross is not None:
        self.lam_vb_cross_history.append(cand.lam_vb_cross)

    if cand.J_nonlin is not None:
        self.J_nonlin_history.append(cand.J_nonlin)
    if cand.J_lin is not None:
        self.J_lin_history.append(cand.J_lin)
add_discretization(V: np.ndarray) -> None

Append a raw discretization matrix as an unpacked result.

Source code in openscvx/algorithms/base.py
def add_discretization(self, V: np.ndarray) -> None:
    """Append a raw discretization matrix as an unpacked result."""
    self.discretizations.append(
        DiscretizationResult.from_V(V, n_x=self.n_x, n_u=self.n_u, N=self.N)
    )
add_impulsive_discretization(W: np.ndarray) -> None

Attach impulsive discretization data to the latest discretization entry.

Source code in openscvx/algorithms/base.py
def add_impulsive_discretization(
    self,
    W: np.ndarray,
) -> None:
    """Attach impulsive discretization data to the latest discretization entry."""
    if not self.discretizations:
        raise ValueError(
            "Cannot attach impulsive discretization before adding the base discretization."
        )
    last = self.discretizations[-1]
    self.discretizations[-1] = DiscretizationResult.from_VW(
        V=last.V,
        W=W,
        n_x=self.n_x,
        n_u=self.n_u,
        N=self.N,
    )
from_settings(settings: Config, weights: Weights) -> AlgorithmState classmethod

Create initial algorithm state from configuration.

Copies only the trajectory arrays from settings, leaving all metadata (bounds, boundary conditions, etc.) in the original settings object.

Parameters:
    settings (Config): Configuration object containing initial guesses
        and SCP parameters.
    weights (Weights): Initial weights from the algorithm. lam_vc is
        expanded to an (N-1, n_states) array here (scalar or per-state
        values are broadcast).

Returns:
    AlgorithmState: Fresh AlgorithmState initialized from settings with
        copied arrays.

Source code in openscvx/algorithms/base.py
@classmethod
def from_settings(
    cls,
    settings: "Config",
    weights: "Weights",
) -> "AlgorithmState":
    """Create initial algorithm state from configuration.

    Copies only the trajectory arrays from settings, leaving all metadata
    (bounds, boundary conditions, etc.) in the original settings object.

    Args:
        settings: Configuration object containing initial guesses and SCP parameters
        weights: Initial weights from the algorithm.
            ``lam_vc`` is expanded to an ``(N-1, n_states)`` array here
            (scalar or per-state values are broadcast).

    Returns:
        Fresh AlgorithmState initialized from settings with copied arrays
    """
    n = settings.sim.n
    n_states = settings.sim.n_states
    n_controls = settings.sim.n_controls
    n_total = n_states + n_controls
    lam_vc_array = np.ones((n - 1, n_states)) * weights.lam_vc

    # Expand lam_prox to (N, n_states + n_controls) array
    lam_prox_array = np.ones((n, n_total)) * weights.lam_prox

    # Expand scalar lam_cost to per-state array
    if isinstance(weights.lam_cost, np.ndarray):
        lam_cost_init = weights.lam_cost.copy()
    else:
        lam_cost_init = np.full(n_states, weights.lam_cost)

    return cls(
        k=1,
        J_tr=1e2,
        J_vb=1e2,
        J_vc=1e2,
        n_x=n_states,
        n_u=settings.sim.n_controls,
        N=n,
        J_nonlin_history=[],
        J_lin_history=[],
        pred_reduction_history=[],
        actual_reduction_history=[],
        acceptance_ratio_history=[],
        X=[settings.sim.x.guess.copy()],
        U=[settings.sim.u.guess.copy()],
        discretizations=[],
        VC_history=[],
        TR_history=[],
        lam_vc_history=[lam_vc_array],
        lam_cost_history=[lam_cost_init],
        lam_vb_nodal_history=[weights.lam_vb_nodal.copy()],
        lam_vb_cross_history=[weights.lam_vb_cross.copy()],
        lam_prox_history=[lam_prox_array],
    )
reject_solution(cand: CandidateIterate) -> None

Reject the current candidate and update only the trust-region weight.

This is intended for autotuners that decide to reject a candidate iterate but still want to adapt the proximal (trust-region) weight for the next solve. The new trust-region weight is taken from cand.lam_prox (shape (N, n_states + n_controls)) and appended to the history. It does not modify trajectories or any other histories.

Parameters:
    cand (CandidateIterate): The rejected candidate iterate; its lam_prox
        is recorded.
Source code in openscvx/algorithms/base.py
def reject_solution(self, cand: CandidateIterate) -> None:
    """Reject the current candidate and update only the trust-region weight.

    This is intended for autotuners that decide to *reject* a candidate
    iterate but still want to adapt the proximal (trust-region) weight
    for the next solve. The new trust region weight is taken from
    ``cand.lam_prox`` (shape ``(N, n_states + n_controls)``) and appended
    to the history. It does **not** modify trajectories or any other
    histories.

    Args:
        cand: The rejected candidate iterate; its ``lam_prox`` is recorded.
    """
    self.lam_prox_history.append(cand.lam_prox)
x_prop(index: int = -1) -> np.ndarray

Extract propagated state trajectory from the discretization history.

Parameters:
    index (int): Index into V_history (default: -1 for latest entry).

Returns:
    ndarray: Propagated state trajectory x_prop with shape (N-1, n_x),
        or None if no V_history.

Example:
    After running an iteration, access the propagated states::

        problem.step()
        x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
        x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
Source code in openscvx/algorithms/base.py
def x_prop(self, index: int = -1) -> np.ndarray:
    """Extract propagated state trajectory from the discretization history.

    Args:
        index: Index into V_history (default: -1 for latest entry)

    Returns:
        Propagated state trajectory x_prop with shape (N-1, n_x), or None if no V_history

    Example:
        After running an iteration, access the propagated states::

            problem.step()
            x_prop = problem.state.x_prop()  # Shape (N-1, n_x), latest
            x_prop_prev = problem.state.x_prop(-2)  # Previous iteration
    """
    if not self.discretizations:
        return None
    return self.discretizations[index].x_prop
x_prop_plus(index: int = -1) -> np.ndarray

Extract discrete dynamics evaluated at x_prop.

Source code in openscvx/algorithms/base.py
def x_prop_plus(self, index: int = -1) -> np.ndarray:
    """Extract discrete dynamics evaluated at x_prop."""
    if not self.discretizations:
        return None
    return self.discretizations[index].x_prop_plus

AugmentedLagrangian

Bases: AutotuningBase

Augmented Lagrangian method for autotuning SCP weights.

This method uses Lagrange multipliers and penalty parameters to handle constraints. The method:

- Updates Lagrange multipliers based on constraint violations
- Increases penalty parameters when constraints are violated
- Decreases penalty parameters when constraints are satisfied
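The core of update_weights() is a standard trust-region acceptance-ratio test. A simplified, self-contained sketch of just the decision rule, using the class defaults (the real method also updates the virtual control and virtual buffer weights on acceptance):

```python
import numpy as np


def ratio_test(J_prev, J_nonlin, J_lin, lam_prox,
               eta_0=1e-2, eta_1=1e-1, eta_2=0.8,
               gamma_1=2.0, gamma_2=0.5,
               lam_prox_min=1e-3, lam_prox_max=1e4):
    """Simplified sketch of the acceptance-ratio rule in update_weights()."""
    actual = J_prev - J_nonlin    # reduction achieved by the nonlinear cost
    predicted = J_prev - J_lin    # reduction promised by the convex model
    rho = actual / predicted
    if rho < eta_0:
        # Poor agreement: reject and raise the proximal weight (smaller steps)
        return "Reject Higher", np.minimum(lam_prox_max, gamma_1 * lam_prox)
    if rho < eta_1:
        # Weak agreement: accept, but still raise the proximal weight
        return "Accept Higher", np.minimum(lam_prox_max, gamma_1 * lam_prox)
    if rho < eta_2:
        # Good agreement: accept with the weight unchanged
        return "Accept Constant", lam_prox
    # Excellent agreement: accept and lower the weight (larger steps)
    return "Accept Lower", np.maximum(lam_prox_min, gamma_2 * lam_prox)


label, lam = ratio_test(J_prev=10.0, J_nonlin=2.0, J_lin=1.0, lam_prox=1.0)
# rho = 8/9 exceeds eta_2, so the step is accepted and the weight halves
```

Note the inversion relative to classic trust-region radii: lam_prox is a proximal penalty, so a *larger* weight means a *smaller* effective trust region.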

Source code in openscvx/algorithms/augmented_lagrangian.py
class AugmentedLagrangian(AutotuningBase):
    """Augmented Lagrangian method for autotuning SCP weights.

    This method uses Lagrange multipliers and penalty parameters to handle
    constraints. The method:
    - Updates Lagrange multipliers based on constraint violations
    - Increases penalty parameters when constraints are violated
    - Decreases penalty parameters when constraints are satisfied
    """

    COLUMNS: List[Column] = [
        Column("J_nonlin", "J_nonlin", 8, "{: .1e}", None, Verbosity.STANDARD),
        Column("J_lin", "J_lin", 8, "{: .1e}", None, Verbosity.STANDARD),
        Column("pred_reduction", "pred_red", 9, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column("actual_reduction", "act_red", 9, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column(
            "acceptance_ratio",
            "acc_ratio",
            9,
            "{: .2e}",
            color_acceptance_ratio,
            Verbosity.STANDARD,
        ),
        Column("lam_prox", "lam_prox", 8, "{: .1e}", min_verbosity=Verbosity.FULL),
        Column("adaptive_state", "Adaptive", 16, "{}", color_adaptive_state, Verbosity.FULL),
    ]

    def __init__(
        self,
        rho_init: float = 1.0,
        rho_max: float = 1e2,
        gamma_1: float = 2.0,
        gamma_2: float = 0.5,
        eta_0: float = 1e-2,
        eta_1: float = 1e-1,
        eta_2: float = 0.8,
        ep: float = 0.99,
        eta_lambda: float = 1e1,
        lam_vc_max: float = 1e5,
        lam_prox_min: float = 1e-3,
        lam_prox_max: float = 1e4,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        """Initialize Augmented Lagrangian autotuning parameters.

        All parameters have defaults and can be modified after instantiation
        via attribute access (e.g., ``autotuner.rho_max = 1e7``).

        Args:
            rho_init: Initial penalty parameter for constraints. Defaults to 1.0.
            rho_max: Maximum penalty parameter. Defaults to 1e2.
            gamma_1: Factor to increase trust region weight when ratio is low.
                Defaults to 2.0.
            gamma_2: Factor to decrease trust region weight when ratio is high.
                Defaults to 0.5.
            eta_0: Acceptance ratio threshold below which solution is rejected.
                Defaults to 1e-2.
            eta_1: Threshold above which solution is accepted with constant weight.
                Defaults to 1e-1.
            eta_2: Threshold above which solution is accepted with lower weight.
                Defaults to 0.8.
            ep: Threshold for virtual control weight update (nu > ep vs nu <= ep).
                Defaults to 0.99.
            eta_lambda: Step size for virtual control weight update. Defaults to 1e1.
            lam_vc_max: Maximum virtual control penalty weight. Defaults to 1e5.
            lam_prox_min: Minimum trust region (proximal) weight. Defaults to 1e-3.
            lam_prox_max: Maximum trust region (proximal) weight. Defaults to 1e4.
            lam_cost_drop: Iteration after which cost relaxation applies (-1 = never).
                Defaults to -1.
            lam_cost_relax: Factor applied to lam_cost after lam_cost_drop.
                Defaults to 1.0.
        """
        self.rho_init = rho_init
        self.rho_max = rho_max
        self.gamma_1 = gamma_1
        self.gamma_2 = gamma_2
        self.eta_0 = eta_0
        self.eta_1 = eta_1
        self.eta_2 = eta_2
        self.ep = ep
        self.eta_lambda = eta_lambda
        self.lam_vc_max = lam_vc_max
        self.lam_prox_min = lam_prox_min
        self.lam_prox_max = lam_prox_max
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def _update_virtual_control_weights(
        self,
        candidate: "CandidateIterate",
        candidate_x_prop: np.ndarray,
        settings: Config,
        lam_vc: np.ndarray,
        lam_prox: np.ndarray,
    ) -> np.ndarray:
        """Update virtual control penalty weights from state violation.

        Computes scaled violation nu = inv_S_x @ |x[1:] - x_prop|, then applies
        two update rules: when nu > ep (linear in nu), else quadratic in nu.
        Result is clipped to lam_vc_max.
        """
        nu = (settings.sim.inv_S_x @ abs(candidate.x[1:] - candidate_x_prop).T).T
        mask = nu > self.ep
        # TODO: (haynec) use per-variable lam_prox to scale VC updates proportionally
        lam_prox_scalar = float(np.max(lam_prox))
        scale = self.eta_lambda * (1 / (2 * lam_prox_scalar))
        case1 = lam_vc + nu * scale
        case2 = lam_vc + (nu**2) / self.ep * scale
        vc_new = np.where(mask, case1, case2)
        return np.minimum(self.lam_vc_max, vc_new)

    def _update_virtual_buffer_nodal_weights(
        self,
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        params: dict,
        lam_vb_nodal: np.ndarray,
        lam_prox: np.ndarray,
    ) -> np.ndarray:
        """
        Update virtual buffer penalty weights for nodal constraints.

        Evaluates each nodal constraint to obtain violation
        nu = max(0, g(x, u)), then applies the same two-case update rule
        used for virtual control weights: linear when nu > ep, quadratic
        otherwise. Result is clipped to lam_vc_max.
        """
        lam_vb_new = lam_vb_nodal.copy()
        lam_prox_scalar = float(np.max(lam_prox))
        scale = self.eta_lambda * (1 / (2 * lam_prox_scalar))

        for idx, constraint in enumerate(nodal_constraints.nodal):
            g = constraint.func(candidate.x, candidate.u, 0, params)
            nu = np.maximum(0, g)

            if constraint.nodes is not None:
                nodes_array = np.array(constraint.nodes)
                nu_slice = nu[nodes_array]
                current = lam_vb_nodal[nodes_array, idx]
            else:
                nu_slice = nu
                current = lam_vb_nodal[:, idx]

            mask = nu_slice > self.ep
            case1 = current + nu_slice * scale
            case2 = current + (nu_slice**2) / self.ep * scale
            updated = np.where(mask, case1, case2)

            if constraint.nodes is not None:
                lam_vb_new[nodes_array, idx] = updated
            else:
                lam_vb_new[:, idx] = updated

        return np.minimum(self.lam_vc_max, lam_vb_new)

    def _update_virtual_buffer_cross_weights(
        self,
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        params: dict,
        lam_vb_cross: np.ndarray,
        lam_prox: np.ndarray,
    ) -> np.ndarray:
        """
        Update virtual buffer penalty weights for cross-node constraints.

        Evaluates each cross-node constraint to obtain total violation
        nu = sum(max(0, g(X, U))), then applies the same two-case update
        rule used for virtual control weights. Result is clipped to lam_vc_max.
        """
        lam_vb_new = lam_vb_cross.copy()
        lam_prox_scalar = float(np.max(lam_prox))
        scale = self.eta_lambda * (1 / (2 * lam_prox_scalar))

        for idx, constraint in enumerate(nodal_constraints.cross_node):
            g = constraint.func(candidate.x, candidate.u, params)
            nu = np.sum(np.maximum(0, g))

            current = lam_vb_cross[idx]
            if nu > self.ep:
                lam_vb_new[idx] = current + nu * scale
            else:
                lam_vb_new[idx] = current + ((nu**2) / self.ep) * scale

        return np.minimum(self.lam_vc_max, lam_vb_new)

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
        weights: "Weights",
    ) -> str:
        """Update SCP weights and cost parameters based on iteration number.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate from the current subproblem solve
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters
            weights: Initial weights from the algorithm
        """
        # Calculate nonlinear penalty for current candidate
        candidate_x_prop = (
            candidate.x_prop_plus[1:] if candidate.x_prop_plus is not None else candidate.x_prop
        )
        (
            nonlinear_cost,
            nonlinear_penalty,
            nodal_penalty,
        ) = self.calculate_nonlinear_penalty(
            candidate_x_prop,
            candidate.x,
            candidate.u,
            state.lam_vc,
            state.lam_vb_nodal,
            state.lam_vb_cross,
            state.lam_cost,
            nodal_constraints,
            params,
            settings,
        )

        candidate.J_nonlin = nonlinear_cost + nonlinear_penalty + nodal_penalty

        # Update cost relaxation parameter after cost_drop iterations.
        # When lam_cost is a per-state array, scalar lam_cost_relax scales
        # uniformly, preserving the user-specified per-state weight ratios.
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = weights.lam_cost

        lam_prox_k = deepcopy(state.lam_prox)

        if state.k > 1:
            state_x_prop_plus = state.x_prop_plus()
            state_x_prop = (
                state_x_prop_plus[1:] if state_x_prop_plus is not None else state.x_prop()
            )
            (
                prev_nonlinear_cost,
                prev_nonlinear_penalty,
                prev_nodal_penalty,
            ) = self.calculate_nonlinear_penalty(
                state_x_prop,
                state.x,
                state.u,
                state.lam_vc,
                state.lam_vb_nodal,
                state.lam_vb_cross,
                state.lam_cost,
                nodal_constraints,
                params,
                settings,
            )

            J_nonlin_prev = prev_nonlinear_cost + prev_nonlinear_penalty + prev_nodal_penalty

            actual_reduction = J_nonlin_prev - candidate.J_nonlin
            predicted_reduction = J_nonlin_prev - candidate.J_lin

            if predicted_reduction == 0:
                raise ValueError("Predicted reduction is 0.")

            rho = actual_reduction / predicted_reduction

            state.pred_reduction_history.append(predicted_reduction)
            state.actual_reduction_history.append(actual_reduction)
            state.acceptance_ratio_history.append(rho)

            if rho < self.eta_0:
                # Reject Solution with higher weight
                lam_prox_k1 = np.minimum(self.lam_prox_max, self.gamma_1 * lam_prox_k)
                candidate.lam_prox = lam_prox_k1
                state.reject_solution(candidate)
                adaptive_state = "Reject Higher"
            elif rho >= self.eta_0 and rho < self.eta_1:
                # Accept Solution with higher weight
                lam_prox_k1 = np.minimum(self.lam_prox_max, self.gamma_1 * lam_prox_k)
                candidate.lam_prox = lam_prox_k1
                # Update virtual control weight matrix
                candidate.lam_vc = self._update_virtual_control_weights(
                    candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
                )
                candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                    candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
                )
                candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                    candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
                )

                state.accept_solution(candidate)
                adaptive_state = "Accept Higher"
            elif rho >= self.eta_1 and rho < self.eta_2:
                # Accept Solution with constant weight
                lam_prox_k1 = lam_prox_k
                candidate.lam_prox = lam_prox_k1
                # Update virtual control weight matrix
                candidate.lam_vc = self._update_virtual_control_weights(
                    candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
                )
                candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                    candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
                )
                candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                    candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
                )

                state.accept_solution(candidate)
                adaptive_state = "Accept Constant"
            else:
                # Accept Solution with lower weight
                lam_prox_k1 = np.maximum(self.lam_prox_min, self.gamma_2 * lam_prox_k)
                candidate.lam_prox = lam_prox_k1
                # Update virtual control weight matrix
                candidate.lam_vc = self._update_virtual_control_weights(
                    candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
                )
                candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                    candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
                )
                candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                    candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
                )

                state.accept_solution(candidate)
                adaptive_state = "Accept Lower"

        else:
            candidate.lam_prox = lam_prox_k
            candidate.lam_vc = state.lam_vc
            candidate.lam_vb_nodal = state.lam_vb_nodal
            candidate.lam_vb_cross = state.lam_vb_cross
            state.accept_solution(candidate)
            adaptive_state = "Initial"

        return adaptive_state
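The two-case penalty rule shared by the _update_virtual_* helpers can be exercised in isolation. A sketch with toy violation values, using the constructor defaults (ep=0.99, eta_lambda=1e1, lam_vc_max=1e5):

```python
import numpy as np


def vc_update(lam_vc, nu, lam_prox, ep=0.99, eta_lambda=1e1, lam_vc_max=1e5):
    """Two-case penalty update: linear in nu above ep, quadratic below."""
    # Step size shrinks as the proximal (trust-region) weight grows
    scale = eta_lambda / (2.0 * lam_prox)
    case_linear = lam_vc + nu * scale            # large violations: linear growth
    case_quadratic = lam_vc + (nu**2) / ep * scale  # small violations: gentler growth
    updated = np.where(nu > ep, case_linear, case_quadratic)
    return np.minimum(lam_vc_max, updated)       # clip to the configured maximum


nu = np.array([2.0, 0.5])  # one large and one small constraint violation
out = vc_update(lam_vc=np.ones(2), nu=nu, lam_prox=5.0)
# scale = 10 / (2 * 5) = 1, so out = [1 + 2, 1 + 0.25 / 0.99]
```

The quadratic branch keeps near-feasible constraints from accumulating penalty weight as fast as genuinely violated ones, which is why the same rule is reused for virtual control, nodal, and cross-node buffer weights.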
__init__(rho_init: float = 1.0, rho_max: float = 100.0, gamma_1: float = 2.0, gamma_2: float = 0.5, eta_0: float = 0.01, eta_1: float = 0.1, eta_2: float = 0.8, ep: float = 0.99, eta_lambda: float = 10.0, lam_vc_max: float = 100000.0, lam_prox_min: float = 0.001, lam_prox_max: float = 10000.0, lam_cost_drop: int = -1, lam_cost_relax: float = 1.0)

Initialize Augmented Lagrangian autotuning parameters.

All parameters have defaults and can be modified after instantiation via attribute access (e.g., autotuner.rho_max = 1e7).

Parameters:
    rho_init (float): Initial penalty parameter for constraints. Defaults to 1.0.
    rho_max (float): Maximum penalty parameter. Defaults to 1e2.
    gamma_1 (float): Factor to increase trust region weight when ratio is low. Defaults to 2.0.
    gamma_2 (float): Factor to decrease trust region weight when ratio is high. Defaults to 0.5.
    eta_0 (float): Acceptance ratio threshold below which solution is rejected. Defaults to 1e-2.
    eta_1 (float): Threshold above which solution is accepted with constant weight. Defaults to 1e-1.
    eta_2 (float): Threshold above which solution is accepted with lower weight. Defaults to 0.8.
    ep (float): Threshold for virtual control weight update (nu > ep vs nu <= ep). Defaults to 0.99.
    eta_lambda (float): Step size for virtual control weight update. Defaults to 1e1.
    lam_vc_max (float): Maximum virtual control penalty weight. Defaults to 1e5.
    lam_prox_min (float): Minimum trust region (proximal) weight. Defaults to 1e-3.
    lam_prox_max (float): Maximum trust region (proximal) weight. Defaults to 1e4.
    lam_cost_drop (int): Iteration after which cost relaxation applies (-1 = never). Defaults to -1.
    lam_cost_relax (float): Factor applied to lam_cost after lam_cost_drop. Defaults to 1.0.
Source code in openscvx/algorithms/augmented_lagrangian.py
def __init__(
    self,
    rho_init: float = 1.0,
    rho_max: float = 1e2,
    gamma_1: float = 2.0,
    gamma_2: float = 0.5,
    eta_0: float = 1e-2,
    eta_1: float = 1e-1,
    eta_2: float = 0.8,
    ep: float = 0.99,
    eta_lambda: float = 1e1,
    lam_vc_max: float = 1e5,
    lam_prox_min: float = 1e-3,
    lam_prox_max: float = 1e4,
    lam_cost_drop: int = -1,
    lam_cost_relax: float = 1.0,
):
    """Initialize Augmented Lagrangian autotuning parameters.

    All parameters have defaults and can be modified after instantiation
    via attribute access (e.g., ``autotuner.rho_max = 1e7``).

    Args:
        rho_init: Initial penalty parameter for constraints. Defaults to 1.0.
        rho_max: Maximum penalty parameter. Defaults to 1e2.
        gamma_1: Factor to increase trust region weight when ratio is low.
            Defaults to 2.0.
        gamma_2: Factor to decrease trust region weight when ratio is high.
            Defaults to 0.5.
        eta_0: Acceptance ratio threshold below which solution is rejected.
            Defaults to 1e-2.
        eta_1: Threshold above which solution is accepted with constant weight.
            Defaults to 1e-1.
        eta_2: Threshold above which solution is accepted with lower weight.
            Defaults to 0.8.
        ep: Threshold for virtual control weight update (nu > ep vs nu <= ep).
            Defaults to 0.99.
        eta_lambda: Step size for virtual control weight update. Defaults to 1e1.
        lam_vc_max: Maximum virtual control penalty weight. Defaults to 1e5.
        lam_prox_min: Minimum trust region (proximal) weight. Defaults to 1e-3.
        lam_prox_max: Maximum trust region (proximal) weight. Defaults to 1e4.
        lam_cost_drop: Iteration after which cost relaxation applies (-1 = never).
            Defaults to -1.
        lam_cost_relax: Factor applied to lam_cost after lam_cost_drop.
            Defaults to 1.0.
    """
    self.rho_init = rho_init
    self.rho_max = rho_max
    self.gamma_1 = gamma_1
    self.gamma_2 = gamma_2
    self.eta_0 = eta_0
    self.eta_1 = eta_1
    self.eta_2 = eta_2
    self.ep = ep
    self.eta_lambda = eta_lambda
    self.lam_vc_max = lam_vc_max
    self.lam_prox_min = lam_prox_min
    self.lam_prox_max = lam_prox_max
    self.lam_cost_drop = lam_cost_drop
    self.lam_cost_relax = lam_cost_relax
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict, weights: Weights) -> str

Update SCP weights and cost parameters based on iteration number.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| state | AlgorithmState | Solver state containing current weight values (mutated in place) | required |
| candidate | CandidateIterate | Candidate iterate from the current subproblem solve | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| settings | Config | Configuration object containing adaptation parameters | required |
| params | dict | Dictionary of problem parameters | required |
| weights | Weights | Initial weights from the algorithm | required |
Source code in openscvx/algorithms/augmented_lagrangian.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
    weights: "Weights",
) -> str:
    """Update SCP weights and cost parameters based on iteration number.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate from the current subproblem solve
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters
        weights: Initial weights from the algorithm
    """
    # Calculate nonlinear penalty for current candidate
    candidate_x_prop = (
        candidate.x_prop_plus[1:] if candidate.x_prop_plus is not None else candidate.x_prop
    )
    (
        nonlinear_cost,
        nonlinear_penalty,
        nodal_penalty,
    ) = self.calculate_nonlinear_penalty(
        candidate_x_prop,
        candidate.x,
        candidate.u,
        state.lam_vc,
        state.lam_vb_nodal,
        state.lam_vb_cross,
        state.lam_cost,
        nodal_constraints,
        params,
        settings,
    )

    candidate.J_nonlin = nonlinear_cost + nonlinear_penalty + nodal_penalty

    # Update cost relaxation parameter after cost_drop iterations.
    # When lam_cost is a per-state array, scalar lam_cost_relax scales
    # uniformly, preserving the user-specified per-state weight ratios.
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = weights.lam_cost

    lam_prox_k = deepcopy(state.lam_prox)

    if state.k > 1:
        state_x_prop_plus = state.x_prop_plus()
        state_x_prop = (
            state_x_prop_plus[1:] if state_x_prop_plus is not None else state.x_prop()
        )
        (
            prev_nonlinear_cost,
            prev_nonlinear_penalty,
            prev_nodal_penalty,
        ) = self.calculate_nonlinear_penalty(
            state_x_prop,
            state.x,
            state.u,
            state.lam_vc,
            state.lam_vb_nodal,
            state.lam_vb_cross,
            state.lam_cost,
            nodal_constraints,
            params,
            settings,
        )

        J_nonlin_prev = prev_nonlinear_cost + prev_nonlinear_penalty + prev_nodal_penalty

        actual_reduction = J_nonlin_prev - candidate.J_nonlin
        predicted_reduction = J_nonlin_prev - candidate.J_lin

        if predicted_reduction == 0:
            raise ValueError("Predicted reduction is 0.")

        rho = actual_reduction / predicted_reduction

        state.pred_reduction_history.append(predicted_reduction)
        state.actual_reduction_history.append(actual_reduction)
        state.acceptance_ratio_history.append(rho)

        if rho < self.eta_0:
            # Reject solution and increase trust region weight
            lam_prox_k1 = np.minimum(self.lam_prox_max, self.gamma_1 * lam_prox_k)
            candidate.lam_prox = lam_prox_k1
            state.reject_solution(candidate)
            adaptive_state = "Reject Higher"
        elif rho >= self.eta_0 and rho < self.eta_1:
            # Accept solution with higher weight
            lam_prox_k1 = np.minimum(self.lam_prox_max, self.gamma_1 * lam_prox_k)
            candidate.lam_prox = lam_prox_k1
            # Update virtual control weight matrix
            candidate.lam_vc = self._update_virtual_control_weights(
                candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
            )
            candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
            )
            candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
            )

            state.accept_solution(candidate)
            adaptive_state = "Accept Higher"
        elif rho >= self.eta_1 and rho < self.eta_2:
            # Accept Solution with constant weight
            lam_prox_k1 = lam_prox_k
            candidate.lam_prox = lam_prox_k1
            # Update virtual control weight matrix
            candidate.lam_vc = self._update_virtual_control_weights(
                candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
            )
            candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
            )
            candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
            )

            state.accept_solution(candidate)
            adaptive_state = "Accept Constant"
        else:
            # Accept Solution with lower weight
            lam_prox_k1 = np.maximum(self.lam_prox_min, self.gamma_2 * lam_prox_k)
            candidate.lam_prox = lam_prox_k1
            # Update virtual control weight matrix
            candidate.lam_vc = self._update_virtual_control_weights(
                candidate, candidate_x_prop, settings, state.lam_vc, candidate.lam_prox
            )
            candidate.lam_vb_nodal = self._update_virtual_buffer_nodal_weights(
                candidate, nodal_constraints, params, state.lam_vb_nodal, candidate.lam_prox
            )
            candidate.lam_vb_cross = self._update_virtual_buffer_cross_weights(
                candidate, nodal_constraints, params, state.lam_vb_cross, candidate.lam_prox
            )

            state.accept_solution(candidate)
            adaptive_state = "Accept Lower"

    else:
        candidate.lam_prox = lam_prox_k
        candidate.lam_vc = state.lam_vc
        candidate.lam_vb_nodal = state.lam_vb_nodal
        candidate.lam_vb_cross = state.lam_vb_cross
        state.accept_solution(candidate)
        adaptive_state = "Initial"

    return adaptive_state
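The branch structure above is a standard trust-region acceptance test on the ratio of actual to predicted cost reduction. A minimal standalone sketch of that rule, using the default thresholds documented above (the function name and return convention here are illustrative, not part of the library API):

```python
def acceptance_update(rho, lam_prox, eta_0=1e-2, eta_1=1e-1, eta_2=0.8,
                      gamma_1=2.0, gamma_2=0.5,
                      lam_prox_min=1e-3, lam_prox_max=1e4):
    """Classify a step by its acceptance ratio and adapt the proximal weight."""
    if rho < eta_0:
        # Linear model disagrees badly with the nonlinear cost: reject, tighten.
        return "Reject Higher", min(lam_prox_max, gamma_1 * lam_prox)
    if rho < eta_1:
        # Marginal agreement: accept the step but tighten the trust region.
        return "Accept Higher", min(lam_prox_max, gamma_1 * lam_prox)
    if rho < eta_2:
        # Good agreement: accept with the current weight.
        return "Accept Constant", lam_prox
    # Excellent agreement: accept and loosen the trust region.
    return "Accept Lower", max(lam_prox_min, gamma_2 * lam_prox)
```

The weight update is clamped on both sides, so repeated rejections saturate at lam_prox_max and repeated good steps saturate at lam_prox_min.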

AutotuningBase

Bases: ABC

Base class for autotuning methods in SCP algorithms.

This class provides common functionality for calculating costs and penalties that are shared across different autotuning strategies (e.g., Penalized Trust Region, Augmented Lagrangian).

Subclasses must implement the update_weights method.

Class Attributes

COLUMNS: List of Column specs for autotuner-specific metrics to display. Subclasses override this to add their own columns.

Source code in openscvx/algorithms/base.py
class AutotuningBase(ABC):
    """Base class for autotuning methods in SCP algorithms.

    This class provides common functionality for calculating costs and penalties
    that are shared across different autotuning strategies (e.g., Penalized Trust
    Region, Augmented Lagrangian).

    Subclasses must implement the ``update_weights`` method.

    Class Attributes:
        COLUMNS: List of Column specs for autotuner-specific metrics to display.
            Subclasses override this to add their own columns.
    """

    COLUMNS: List[Column] = []

    @staticmethod
    def calculate_cost_from_state(
        x: np.ndarray,
        settings: "Config",
        lam_cost: Union[float, np.ndarray] = 1.0,
    ) -> float:
        """Calculate cost from state vector based on final_type and initial_type.

        Args:
            x: State trajectory array (n_nodes, n_states)
            settings: Configuration object containing state types
            lam_cost: Per-state cost weight. Scalar (applied uniformly) or
                array of shape ``(n_states,)`` for per-state weighting.

        Returns:
            float: Computed cost (weighted by lam_cost)
        """
        scaled_x = (settings.sim.inv_S_x @ (x.T - settings.sim.c_x[:, None])).T
        lam = np.asarray(lam_cost)
        cost = 0.0
        for i in range(settings.sim.n_states):
            w = float(lam[i]) if lam.ndim > 0 else float(lam)
            if settings.sim.x.final_type[i] == "Minimize":
                cost += w * scaled_x[-1, i]
            if settings.sim.x.final_type[i] == "Maximize":
                cost -= w * scaled_x[-1, i]
            if settings.sim.x.initial_type[i] == "Minimize":
                cost += w * scaled_x[0, i]
            if settings.sim.x.initial_type[i] == "Maximize":
                cost -= w * scaled_x[0, i]
        return cost

    @staticmethod
    def calculate_nonlinear_penalty(
        x_prop: np.ndarray,
        x_bar: np.ndarray,
        u_bar: np.ndarray,
        lam_vc: np.ndarray,
        lam_vb_nodal: np.ndarray,
        lam_vb_cross: np.ndarray,
        lam_cost: Union[float, np.ndarray],
        nodal_constraints: "LoweredJaxConstraints",
        params: dict,
        settings: "Config",
    ) -> Tuple[float, float, float]:
        """Calculate nonlinear penalty components.

        This method computes three penalty components:
        1. Cost penalty: weighted original cost
        2. Virtual control penalty: penalty for dynamics violations
        3. Nodal penalty: penalty for constraint violations

        Args:
            x_prop: Propagated state (n_nodes-1, n_states)
            x_bar: Previous iteration state (n_nodes, n_states)
            u_bar: Solution control (n_nodes, n_controls)
            lam_vc: Virtual control weight (scalar or matrix)
            lam_vb_nodal: Virtual buffer penalty weights for nodal
                constraints, shape ``(N, n_nodal)``.
            lam_vb_cross: Virtual buffer penalty weights for cross-node
                constraints, shape ``(n_cross,)``.
            lam_cost: Cost weight. Scalar (applied uniformly) or
                array of shape ``(n_states,)`` for per-state weighting.
            nodal_constraints: Lowered JAX constraints
            params: Dictionary of problem parameters
            settings: Configuration object

        Returns:
            Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty):
                - nonlinear_cost: Weighted cost component
                - nonlinear_penalty: Virtual control penalty
                - nodal_penalty: Constraint violation penalty
        """
        nodal_penalty = 0.0

        # Evaluate nodal constraints
        for idx, constraint in enumerate(nodal_constraints.nodal):
            # Nodal constraint function is vmapped: func(x, u, node, params)
            # When called with arrays, it evaluates at all nodes
            g = constraint.func(x_bar, u_bar, 0, params)
            # Only sum violations at nodes where constraint is enforced
            if constraint.nodes is not None:
                nodes_array = np.array(constraint.nodes)
                g_filtered = g[nodes_array]
                w = lam_vb_nodal[nodes_array, idx]
            else:
                g_filtered = g
                w = lam_vb_nodal[:, idx]
            nodal_penalty += np.sum(w * np.maximum(0, g_filtered))

        # Evaluate cross-node constraints
        for idx, constraint in enumerate(nodal_constraints.cross_node):
            w = lam_vb_cross[idx]
            # Cross-node constraint function signature: func(X, U, params)
            # No node argument - operates on full trajectory
            g = constraint.func(x_bar, u_bar, params)
            # Cross-node constraints return scalar or array, sum all violations
            nodal_penalty += w * np.sum(np.maximum(0, g))

        # lam_cost weighting is applied inside calculate_cost_from_state,
        # so the returned cost is already weighted (no outer multiplication).
        cost = AutotuningBase.calculate_cost_from_state(x_bar, settings, lam_cost)
        x_diff = settings.sim.inv_S_x @ (x_bar[1:, :] - x_prop).T

        return cost, np.sum(lam_vc * np.abs(x_diff.T)), nodal_penalty

    @abstractmethod
    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: "Config",
        params: dict,
        weights: "Weights",
    ) -> str:
        """Update SCP weights and cost parameters based on iteration state.

        This method is called each iteration to adapt weights based on the
        current solution quality and constraint satisfaction.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate from the current subproblem solve
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters
            weights: Initial weights from the algorithm

        Returns:
            str: Adaptive state string describing the update action (e.g., "Accept Lower")
        """

        pass
calculate_cost_from_state(x: np.ndarray, settings: Config, lam_cost: Union[float, np.ndarray] = 1.0) -> float staticmethod

Calculate cost from state vector based on final_type and initial_type.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| x | ndarray | State trajectory array (n_nodes, n_states) | required |
| settings | Config | Configuration object containing state types | required |
| lam_cost | Union[float, ndarray] | Per-state cost weight. Scalar (applied uniformly) or array of shape (n_states,) for per-state weighting. | 1.0 |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| float | float | Computed cost (weighted by lam_cost) |

Source code in openscvx/algorithms/base.py
@staticmethod
def calculate_cost_from_state(
    x: np.ndarray,
    settings: "Config",
    lam_cost: Union[float, np.ndarray] = 1.0,
) -> float:
    """Calculate cost from state vector based on final_type and initial_type.

    Args:
        x: State trajectory array (n_nodes, n_states)
        settings: Configuration object containing state types
        lam_cost: Per-state cost weight. Scalar (applied uniformly) or
            array of shape ``(n_states,)`` for per-state weighting.

    Returns:
        float: Computed cost (weighted by lam_cost)
    """
    scaled_x = (settings.sim.inv_S_x @ (x.T - settings.sim.c_x[:, None])).T
    lam = np.asarray(lam_cost)
    cost = 0.0
    for i in range(settings.sim.n_states):
        w = float(lam[i]) if lam.ndim > 0 else float(lam)
        if settings.sim.x.final_type[i] == "Minimize":
            cost += w * scaled_x[-1, i]
        if settings.sim.x.final_type[i] == "Maximize":
            cost -= w * scaled_x[-1, i]
        if settings.sim.x.initial_type[i] == "Minimize":
            cost += w * scaled_x[0, i]
        if settings.sim.x.initial_type[i] == "Maximize":
            cost -= w * scaled_x[0, i]
    return cost
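The scale-then-accumulate logic above can be replicated in a few lines of NumPy. The sketch below uses made-up scaling matrices and state types to show the sign convention (Minimize adds the scaled boundary value, Maximize subtracts it); it is not the library function itself:

```python
import numpy as np

# Illustrative problem: 3 states, minimize the final value of state 2
# (e.g. elapsed time); the other states contribute no cost.
inv_S_x = np.diag([1.0, 0.5, 0.1])   # inverse of the state scaling matrix
c_x = np.array([0.0, 0.0, 5.0])      # state centering offset
final_type = ["Free", "Free", "Minimize"]
initial_type = ["Free", "Free", "Free"]

x = np.array([[0.0, 0.0, 5.0],
              [1.0, 2.0, 10.0],
              [2.0, 4.0, 15.0]])     # (n_nodes, n_states) trajectory

# Scale and center the trajectory, then accumulate boundary costs.
scaled_x = (inv_S_x @ (x.T - c_x[:, None])).T
cost = 0.0
for i in range(x.shape[1]):
    if final_type[i] == "Minimize":
        cost += scaled_x[-1, i]
    elif final_type[i] == "Maximize":
        cost -= scaled_x[-1, i]
    if initial_type[i] == "Minimize":
        cost += scaled_x[0, i]
    elif initial_type[i] == "Maximize":
        cost -= scaled_x[0, i]
# cost = 0.1 * (15.0 - 5.0) = 1.0
```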
calculate_nonlinear_penalty(x_prop: np.ndarray, x_bar: np.ndarray, u_bar: np.ndarray, lam_vc: np.ndarray, lam_vb_nodal: np.ndarray, lam_vb_cross: np.ndarray, lam_cost: Union[float, np.ndarray], nodal_constraints: LoweredJaxConstraints, params: dict, settings: Config) -> Tuple[float, float, float] staticmethod

Calculate nonlinear penalty components.

This method computes three penalty components:

1. Cost penalty: weighted original cost
2. Virtual control penalty: penalty for dynamics violations
3. Nodal penalty: penalty for constraint violations

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| x_prop | ndarray | Propagated state (n_nodes-1, n_states) | required |
| x_bar | ndarray | Previous iteration state (n_nodes, n_states) | required |
| u_bar | ndarray | Solution control (n_nodes, n_controls) | required |
| lam_vc | ndarray | Virtual control weight (scalar or matrix) | required |
| lam_vb_nodal | ndarray | Virtual buffer penalty weights for nodal constraints, shape (N, n_nodal). | required |
| lam_vb_cross | ndarray | Virtual buffer penalty weights for cross-node constraints, shape (n_cross,). | required |
| lam_cost | Union[float, ndarray] | Cost weight. Scalar (applied uniformly) or array of shape (n_states,) for per-state weighting. | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| params | dict | Dictionary of problem parameters | required |
| settings | Config | Configuration object | required |

Returns:

| Type | Description |
| --- | --- |
| Tuple[float, float, float] | Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty): nonlinear_cost is the weighted cost component, nonlinear_penalty the virtual control penalty, and nodal_penalty the constraint violation penalty. |

Source code in openscvx/algorithms/base.py
@staticmethod
def calculate_nonlinear_penalty(
    x_prop: np.ndarray,
    x_bar: np.ndarray,
    u_bar: np.ndarray,
    lam_vc: np.ndarray,
    lam_vb_nodal: np.ndarray,
    lam_vb_cross: np.ndarray,
    lam_cost: Union[float, np.ndarray],
    nodal_constraints: "LoweredJaxConstraints",
    params: dict,
    settings: "Config",
) -> Tuple[float, float, float]:
    """Calculate nonlinear penalty components.

    This method computes three penalty components:
    1. Cost penalty: weighted original cost
    2. Virtual control penalty: penalty for dynamics violations
    3. Nodal penalty: penalty for constraint violations

    Args:
        x_prop: Propagated state (n_nodes-1, n_states)
        x_bar: Previous iteration state (n_nodes, n_states)
        u_bar: Solution control (n_nodes, n_controls)
        lam_vc: Virtual control weight (scalar or matrix)
        lam_vb_nodal: Virtual buffer penalty weights for nodal
            constraints, shape ``(N, n_nodal)``.
        lam_vb_cross: Virtual buffer penalty weights for cross-node
            constraints, shape ``(n_cross,)``.
        lam_cost: Cost weight. Scalar (applied uniformly) or
            array of shape ``(n_states,)`` for per-state weighting.
        nodal_constraints: Lowered JAX constraints
        params: Dictionary of problem parameters
        settings: Configuration object

    Returns:
        Tuple of (nonlinear_cost, nonlinear_penalty, nodal_penalty):
            - nonlinear_cost: Weighted cost component
            - nonlinear_penalty: Virtual control penalty
            - nodal_penalty: Constraint violation penalty
    """
    nodal_penalty = 0.0

    # Evaluate nodal constraints
    for idx, constraint in enumerate(nodal_constraints.nodal):
        # Nodal constraint function is vmapped: func(x, u, node, params)
        # When called with arrays, it evaluates at all nodes
        g = constraint.func(x_bar, u_bar, 0, params)
        # Only sum violations at nodes where constraint is enforced
        if constraint.nodes is not None:
            nodes_array = np.array(constraint.nodes)
            g_filtered = g[nodes_array]
            w = lam_vb_nodal[nodes_array, idx]
        else:
            g_filtered = g
            w = lam_vb_nodal[:, idx]
        nodal_penalty += np.sum(w * np.maximum(0, g_filtered))

    # Evaluate cross-node constraints
    for idx, constraint in enumerate(nodal_constraints.cross_node):
        w = lam_vb_cross[idx]
        # Cross-node constraint function signature: func(X, U, params)
        # No node argument - operates on full trajectory
        g = constraint.func(x_bar, u_bar, params)
        # Cross-node constraints return scalar or array, sum all violations
        nodal_penalty += w * np.sum(np.maximum(0, g))

    # lam_cost weighting is applied inside calculate_cost_from_state,
    # so the returned cost is already weighted (no outer multiplication).
    cost = AutotuningBase.calculate_cost_from_state(x_bar, settings, lam_cost)
    x_diff = settings.sim.inv_S_x @ (x_bar[1:, :] - x_prop).T

    return cost, np.sum(lam_vc * np.abs(x_diff.T)), nodal_penalty
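The virtual control term in the return value is a weighted L1 norm of the scaled defect between the solved node states and the propagated states. A standalone numeric sketch of just that term (shapes and values are illustrative):

```python
import numpy as np

inv_S_x = np.eye(2)              # identity scaling for clarity
x_bar = np.array([[0.0, 0.0],
                  [1.0, 1.0],
                  [2.0, 2.0]])   # (N, n_x) solved node states
x_prop = np.array([[0.9, 1.2],
                   [2.1, 1.8]])  # (N-1, n_x) propagated states
lam_vc = 10.0                    # scalar weight (may also be a matrix)

# Scaled dynamics defect between successive node states and propagation.
x_diff = inv_S_x @ (x_bar[1:, :] - x_prop).T
# Weighted L1 penalty over all defect entries.
penalty = np.sum(lam_vc * np.abs(x_diff.T))
# defects: |0.1| + |0.2| + |0.1| + |0.2| = 0.6  ->  penalty = 6.0
```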
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict, weights: Weights) -> str abstractmethod

Update SCP weights and cost parameters based on iteration state.

This method is called each iteration to adapt weights based on the current solution quality and constraint satisfaction.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| state | AlgorithmState | Solver state containing current weight values (mutated in place) | required |
| candidate | CandidateIterate | Candidate iterate from the current subproblem solve | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| settings | Config | Configuration object containing adaptation parameters | required |
| params | dict | Dictionary of problem parameters | required |
| weights | Weights | Initial weights from the algorithm | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| str | str | Adaptive state string describing the update action (e.g., "Accept Lower") |

Source code in openscvx/algorithms/base.py
@abstractmethod
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: "Config",
    params: dict,
    weights: "Weights",
) -> str:
    """Update SCP weights and cost parameters based on iteration state.

    This method is called each iteration to adapt weights based on the
    current solution quality and constraint satisfaction.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate from the current subproblem solve
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters
        weights: Initial weights from the algorithm

    Returns:
        str: Adaptive state string describing the update action (e.g., "Accept Lower")
    """

    pass

ConstantProximalWeight

Bases: AutotuningBase

Constant Proximal Weight method.

This method keeps the trust region weight constant throughout the optimization, while still updating virtual control weights and handling cost relaxation. Useful when you want a fixed trust region size without adaptation.

Source code in openscvx/algorithms/constant_proximal_weight.py
class ConstantProximalWeight(AutotuningBase):
    """Constant Proximal Weight method.

    This method keeps the trust region weight constant throughout the optimization,
    while still updating virtual control weights and handling cost relaxation.
    Useful when you want a fixed trust region size without adaptation.
    """

    def __init__(
        self,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
        weights: "Weights",
    ) -> str:
        """Update SCP weights keeping trust region constant.

        Args:
            state: Solver state containing current weight values (mutated in place)
            candidate: Candidate iterate from the current subproblem solve
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters
            weights: Initial weights from the algorithm

        Returns:
            str: Adaptive state string (always "Accept Constant")
        """
        # Update cost relaxation parameter after cost_drop iterations.
        # When lam_cost is a per-state array, scalar lam_cost_relax scales
        # uniformly, preserving the user-specified per-state weight ratios.
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = weights.lam_cost

        candidate.lam_prox = state.lam_prox
        state.accept_solution(candidate)
        return "Accept Constant"
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict, weights: Weights) -> str

Update SCP weights keeping trust region constant.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| state | AlgorithmState | Solver state containing current weight values (mutated in place) | required |
| candidate | CandidateIterate | Candidate iterate from the current subproblem solve | required |
| nodal_constraints | LoweredJaxConstraints | Lowered JAX constraints | required |
| settings | Config | Configuration object containing adaptation parameters | required |
| params | dict | Dictionary of problem parameters | required |
| weights | Weights | Initial weights from the algorithm | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| str | str | Adaptive state string (always "Accept Constant") |

Source code in openscvx/algorithms/constant_proximal_weight.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
    weights: "Weights",
) -> str:
    """Update SCP weights keeping trust region constant.

    Args:
        state: Solver state containing current weight values (mutated in place)
        candidate: Candidate iterate from the current subproblem solve
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters
        weights: Initial weights from the algorithm

    Returns:
        str: Adaptive state string (always "Accept Constant")
    """
    # Update cost relaxation parameter after cost_drop iterations.
    # When lam_cost is a per-state array, scalar lam_cost_relax scales
    # uniformly, preserving the user-specified per-state weight ratios.
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = weights.lam_cost

    candidate.lam_prox = state.lam_prox
    state.accept_solution(candidate)
    return "Accept Constant"
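Because the relaxation multiplies the current lam_cost on every iteration once k exceeds lam_cost_drop, a lam_cost_relax below 1.0 produces geometric decay of the cost weight. A quick standalone illustration of that compounding (all numbers made up):

```python
lam_cost = 1.0
lam_cost_drop = 3       # relaxation kicks in after iteration 3
lam_cost_relax = 0.5    # halve the cost weight each subsequent iteration

history = []
for k in range(1, 8):
    if k > lam_cost_drop:
        lam_cost = lam_cost * lam_cost_relax   # compounding relaxation
    history.append(lam_cost)
# history: [1.0, 1.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
```

With the default lam_cost_relax = 1.0 the weight is constant, so relaxation is effectively disabled.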

DiscretizationResult dataclass

Unpacked discretization data from a multi-shot discretization matrix.

The discretization solver returns a matrix V that stores multiple blocks (propagated state and linearization matrices) across nodes/time. Historically, we stored the raw V matrices and re-unpacked them repeatedly via slicing. This dataclass unpacks once and makes access trivial.

Source code in openscvx/algorithms/base.py
@dataclass(frozen=True, slots=True)
class DiscretizationResult:
    """Unpacked discretization data from a multi-shot discretization matrix.

    The discretization solver returns a matrix ``V`` that stores multiple blocks
    (propagated state and linearization matrices) across nodes/time. Historically,
    we stored the raw ``V`` matrices and re-unpacked them repeatedly via slicing.
    This dataclass unpacks once and makes access trivial.
    """

    V: np.ndarray  # raw V matrix, shape: (flattened_size, n_timesteps)
    x_prop: np.ndarray  # (N-1, n_x)
    A_d: np.ndarray  # (N-1, n_x, n_x)
    B_d: np.ndarray  # (N-1, n_x, n_u)
    C_d: np.ndarray  # (N-1, n_x, n_u)
    x_prop_plus: Optional[np.ndarray] = None  # (N, n_x), discrete dynamics on node states
    D_d: Optional[np.ndarray] = None  # (N, n_x, n_x), d(x_prop_plus)/d(x_node)
    E_d: Optional[np.ndarray] = None  # (N, n_x, n_u), d(x_prop_plus)/d(u_node)

    @classmethod
    def from_V(
        cls,
        V: np.ndarray,
        n_x: int,
        n_u: int,
        N: int,
    ) -> "DiscretizationResult":
        """Unpack the final timestep of a raw discretization matrix ``V``."""
        i1, i2 = n_x, n_x + n_x * n_x
        i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u
        V_final = V[:, -1].reshape(-1, i4)
        return cls(
            V=np.asarray(V),
            x_prop=V_final[:, :i1],
            A_d=V_final[:, i1:i2].reshape(N - 1, n_x, n_x),
            B_d=V_final[:, i2:i3].reshape(N - 1, n_x, n_u),
            C_d=V_final[:, i3:i4].reshape(N - 1, n_x, n_u),
        )

    @classmethod
    def from_VW(
        cls,
        V: np.ndarray,
        W: np.ndarray,
        n_x: int,
        n_u: int,
        N: int,
    ) -> "DiscretizationResult":
        """Unpack continuous and impulsive discretization blocks from ``V`` and ``W``."""
        base = cls.from_V(V=V, n_x=n_x, n_u=n_u, N=N)

        W_arr = np.asarray(W)
        i_w = n_x + n_x * n_x + n_x * n_u
        i1 = n_x
        i2 = i1 + n_x * n_x
        i3 = i2 + n_x * n_u

        W_final = W_arr[:, -1].reshape(-1, i_w)

        return cls(
            V=base.V,
            x_prop=base.x_prop,
            A_d=base.A_d,
            B_d=base.B_d,
            C_d=base.C_d,
            x_prop_plus=W_final[:, :i1],
            D_d=W_final[:, i1:i2].reshape(W_final.shape[0], n_x, n_x),
            E_d=W_final[:, i2:i3].reshape(W_final.shape[0], n_x, n_u),
        )
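The from_V slicing can be checked against a synthetic V whose final column packs the per-segment blocks in the documented order (x_prop, A_d, B_d, C_d). A standalone sketch with small made-up dimensions, mirroring the index arithmetic above:

```python
import numpy as np

n_x, n_u, N = 3, 2, 5                        # small illustrative dimensions
i1, i2 = n_x, n_x + n_x * n_x
i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u  # block boundaries per segment

# Synthetic raw matrix: (N-1) segments of length i4, over 4 timesteps.
rng = np.random.default_rng(0)
V = rng.standard_normal(((N - 1) * i4, 4))

V_final = V[:, -1].reshape(-1, i4)           # unpack only the final timestep
x_prop = V_final[:, :i1]                     # propagated states
A_d = V_final[:, i1:i2].reshape(N - 1, n_x, n_x)  # state transition blocks
B_d = V_final[:, i2:i3].reshape(N - 1, n_x, n_u)  # control sensitivity blocks
C_d = V_final[:, i3:i4].reshape(N - 1, n_x, n_u)  # second control blocks (FOH)
```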
from_V(V: np.ndarray, n_x: int, n_u: int, N: int) -> DiscretizationResult classmethod

Unpack the final timestep of a raw discretization matrix V.

Source code in openscvx/algorithms/base.py
@classmethod
def from_V(
    cls,
    V: np.ndarray,
    n_x: int,
    n_u: int,
    N: int,
) -> "DiscretizationResult":
    """Unpack the final timestep of a raw discretization matrix ``V``."""
    i1, i2 = n_x, n_x + n_x * n_x
    i3, i4 = i2 + n_x * n_u, i2 + 2 * n_x * n_u
    V_final = V[:, -1].reshape(-1, i4)
    return cls(
        V=np.asarray(V),
        x_prop=V_final[:, :i1],
        A_d=V_final[:, i1:i2].reshape(N - 1, n_x, n_x),
        B_d=V_final[:, i2:i3].reshape(N - 1, n_x, n_u),
        C_d=V_final[:, i3:i4].reshape(N - 1, n_x, n_u),
    )
from_VW(V: np.ndarray, W: np.ndarray, n_x: int, n_u: int, N: int) -> DiscretizationResult classmethod

Unpack continuous and impulsive discretization blocks from V and W.

Source code in openscvx/algorithms/base.py
@classmethod
def from_VW(
    cls,
    V: np.ndarray,
    W: np.ndarray,
    n_x: int,
    n_u: int,
    N: int,
) -> "DiscretizationResult":
    """Unpack continuous and impulsive discretization blocks from ``V`` and ``W``."""
    base = cls.from_V(V=V, n_x=n_x, n_u=n_u, N=N)

    W_arr = np.asarray(W)
    i_w = n_x + n_x * n_x + n_x * n_u
    i1 = n_x
    i2 = i1 + n_x * n_x
    i3 = i2 + n_x * n_u

    W_final = W_arr[:, -1].reshape(-1, i_w)

    return cls(
        V=base.V,
        x_prop=base.x_prop,
        A_d=base.A_d,
        B_d=base.B_d,
        C_d=base.C_d,
        x_prop_plus=W_final[:, :i1],
        D_d=W_final[:, i1:i2].reshape(W_final.shape[0], n_x, n_x),
        E_d=W_final[:, i2:i3].reshape(W_final.shape[0], n_x, n_u),
    )

OptimizationResults dataclass

Structured container for optimization results from the Successive Convexification (SCP) solver.

This class provides a type-safe and organized way to store and access optimization results, replacing the previous dictionary-based approach. It includes core optimization data, iteration history for convergence analysis, post-processing results, and flexible storage for plotting and application-specific data.

Attributes:

Name Type Description
converged bool

Whether the optimization successfully converged.

t_final float

Final time of the optimized trajectory.

x ndarray

Optimized state trajectory at discretization nodes, shape (N, n_states).

u ndarray

Optimized control trajectory at discretization nodes, shape (N, n_controls).

nodes dict[str, ndarray]

Dictionary mapping state/control names to arrays at optimization nodes. Includes both user-defined and augmented variables.

trajectory dict[str, ndarray]

Dictionary mapping state/control names to arrays along the propagated trajectory. Added by post_process().

X list[ndarray]

State trajectories from each SCP iteration.

U list[ndarray]

Control trajectories from each SCP iteration.

discretization_history list[ndarray]

Time discretization from each iteration.

J_tr_history list[ndarray]

Trust region cost history.

J_vb_history list[ndarray]

Virtual buffer cost history.

J_vc_history list[ndarray]

Virtual control cost history.

t_full Optional[ndarray]

Full time grid for interpolated trajectory. Added by propagate_trajectory_results.

x_full Optional[ndarray]

Interpolated state trajectory on full time grid. Added by propagate_trajectory_results.

u_full Optional[ndarray]

Interpolated control trajectory on full time grid. Added by propagate_trajectory_results.

cost Optional[float]

Total cost of the optimized trajectory. Added by propagate_trajectory_results.

ctcs_violation Optional[ndarray]

Continuous-time constraint violations. Added by propagate_trajectory_results.

plotting_data dict[str, Any]

Flexible storage for plotting and application data.

For Developers

The metadata={"npz": ...} parameter on each field below is a built-in feature of dataclasses.field. It attaches a read-only mapping to the field definition, not to instances. Instances still only have the normal attributes (result.X, result.converged, etc.).

The save() / load() methods iterate over field metadata to determine how to serialize each field, so adding a new field only requires tagging it here; there are no separate field-name lists to maintain.

Fields without an "npz" metadata entry are skipped during serialization (e.g. _states, _controls).
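A minimal sketch of this metadata-driven pattern, using a hypothetical ``TinyResults`` dataclass in place of the real class (the tag names ``"scalar"`` and ``"array_list"`` match the ones used below):

```python
from dataclasses import dataclass, field, fields
import numpy as np

@dataclass
class TinyResults:
    converged: bool = field(metadata={"npz": "scalar"})
    X: list = field(default_factory=list, metadata={"npz": "array_list"})
    _scratch: list = field(default_factory=list)  # no "npz" tag -> skipped

def npz_payload(obj):
    # Build the dict that would be passed to np.savez_compressed,
    # driven entirely by each field's metadata tag
    data = {}
    for f in fields(obj):
        tag = f.metadata.get("npz")
        if tag is None:
            continue  # untagged fields are not serialized
        val = getattr(obj, f.name)
        if tag == "scalar":
            data[f.name] = np.array(val)
        elif tag == "array_list":
            data[f.name] = np.stack([np.asarray(a) for a in val]) if val else np.array([])
    return data

r = TinyResults(converged=True, X=[np.zeros(3), np.ones(3)], _scratch=[object()])
payload = npz_payload(r)
print(sorted(payload))  # ['X', 'converged']
```

Note that the metadata lives on the field definitions: adding a serializable field is a one-line change, and ``npz_payload`` needs no update.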

Source code in openscvx/algorithms/optimization_results.py
@dataclass
class OptimizationResults:
    """
    Structured container for optimization results from the Successive Convexification (SCP) solver.

    This class provides a type-safe and organized way to store and access optimization results,
    replacing the previous dictionary-based approach. It includes core optimization data,
    iteration history for convergence analysis, post-processing results, and flexible
    storage for plotting and application-specific data.

    Attributes:
        converged (bool): Whether the optimization successfully converged.
        t_final (float): Final time of the optimized trajectory.
        x (np.ndarray): Optimized state trajectory at discretization nodes,
            shape (N, n_states).
        u (np.ndarray): Optimized control trajectory at discretization nodes,
            shape (N, n_controls).
        nodes (dict[str, np.ndarray]): Dictionary mapping state/control names to arrays
            at optimization nodes. Includes both user-defined and augmented variables.
        trajectory (dict[str, np.ndarray]): Dictionary mapping state/control names to arrays
            along the propagated trajectory. Added by post_process().
        X (list[np.ndarray]): State trajectories from each SCP iteration.
        U (list[np.ndarray]): Control trajectories from each SCP iteration.
        discretization_history (list[np.ndarray]): Time discretization from each iteration.
        J_tr_history (list[np.ndarray]): Trust region cost history.
        J_vb_history (list[np.ndarray]): Virtual buffer cost history.
        J_vc_history (list[np.ndarray]): Virtual control cost history.
        t_full (Optional[np.ndarray]): Full time grid for interpolated trajectory.
            Added by propagate_trajectory_results.
        x_full (Optional[np.ndarray]): Interpolated state trajectory on full time grid.
            Added by propagate_trajectory_results.
        u_full (Optional[np.ndarray]): Interpolated control trajectory on full time grid.
            Added by propagate_trajectory_results.
        cost (Optional[float]): Total cost of the optimized trajectory.
            Added by propagate_trajectory_results.
        ctcs_violation (Optional[np.ndarray]): Continuous-time constraint violations.
            Added by propagate_trajectory_results.
        plotting_data (dict[str, Any]): Flexible storage for plotting and application data.


    !!! note "For Developers"
        The ``metadata={"npz": ...}`` parameter on each field below is a built-in feature of
        `dataclasses.field`.  It attaches a read-only mapping to the **field definition**,
        *not* to instances. Instances still only have the normal attributes (``result.X``,
        ``result.converged``, etc.).

        The ``save()`` / ``load()`` methods iterate over field metadata to
        determine how to serialize each field, so adding a new field only
        requires tagging it here, no separate field-name lists to maintain.

        Fields *without* an ``"npz"`` metadata entry are skipped during
        serialization (e.g. ``_states``, ``_controls``).
    """

    _SCALAR = "scalar"
    _ARRAY_LIST = "array_list"
    _FLOAT_LIST = "float_list"
    _DICT = "dict"
    _OPT_ARRAY = "optional_array"
    _OPT_SCALAR = "optional_scalar"

    # Core optimization results
    converged: bool = field(metadata={"npz": "scalar"})
    t_final: float = field(metadata={"npz": "scalar"})

    # Dictionary-based access to states and controls
    nodes: dict[str, np.ndarray] = field(default_factory=dict, metadata={"npz": "dict"})
    trajectory: dict[str, np.ndarray] = field(default_factory=dict, metadata={"npz": "dict"})

    # Internal metadata for dictionary construction (not serialized)
    _states: list = field(default_factory=list, repr=False)
    _controls: list = field(default_factory=list, repr=False)

    # History of SCP iterations (single source of truth)
    X: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    U: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    discretization_history: list[np.ndarray] = field(
        default_factory=list, metadata={"npz": "array_list"}
    )
    J_tr_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    J_vb_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    J_vc_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    TR_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    VC_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})

    # Convergence histories
    lam_prox_history: list[np.ndarray] = field(default_factory=list, metadata={"npz": "array_list"})
    actual_reduction_history: list[float] = field(
        default_factory=list, metadata={"npz": "float_list"}
    )
    pred_reduction_history: list[float] = field(
        default_factory=list, metadata={"npz": "float_list"}
    )
    acceptance_ratio_history: list[float] = field(
        default_factory=list, metadata={"npz": "float_list"}
    )

    @property
    def x(self) -> np.ndarray:
        """Optimal state trajectory at discretization nodes.

        Returns the final converged solution from the SCP iteration history.

        Returns:
            State trajectory array, shape (N, n_states)
        """
        return self.X[-1]

    @property
    def u(self) -> np.ndarray:
        """Optimal control trajectory at discretization nodes.

        Returns the final converged solution from the SCP iteration history.

        Returns:
            Control trajectory array, shape (N, n_controls)
        """
        return self.U[-1]

    # Post-processing results (added by propagate_trajectory_results)
    t_full: Optional[np.ndarray] = field(default=None, metadata={"npz": "optional_array"})
    x_full: Optional[np.ndarray] = field(default=None, metadata={"npz": "optional_array"})
    u_full: Optional[np.ndarray] = field(default=None, metadata={"npz": "optional_array"})
    cost: Optional[float] = field(default=None, metadata={"npz": "optional_scalar"})
    ctcs_violation: Optional[np.ndarray] = field(default=None, metadata={"npz": "optional_array"})

    # Additional plotting/application data (added by user)
    plotting_data: dict[str, Any] = field(default_factory=dict, metadata={"npz": "dict"})

    def __post_init__(self):
        """Initialize the results object."""
        pass

    def update_plotting_data(self, **kwargs: Any) -> None:
        """
        Update the plotting data with additional information.

        Args:
            **kwargs: Key-value pairs to add to plotting_data
        """
        self.plotting_data.update(kwargs)

    def get(self, key: str, default: Any = None) -> Any:
        """
        Get a value from the results, similar to dict.get().

        Args:
            key: The key to look up
            default: Default value if key is not found

        Returns:
            The value associated with the key, or default if not found
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            return getattr(self, key)

        # Check if it's in plotting_data
        if key in self.plotting_data:
            return self.plotting_data[key]

        return default

    def __getitem__(self, key: str) -> Any:
        """
        Allow dictionary-style access to results.

        Args:
            key: The key to look up

        Returns:
            The value associated with the key

        Raises:
            KeyError: If key is not found
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            return getattr(self, key)

        # Check if it's in plotting_data
        if key in self.plotting_data:
            return self.plotting_data[key]

        raise KeyError(f"Key '{key}' not found in results")

    def __setitem__(self, key: str, value: Any):
        """
        Allow dictionary-style assignment to results.

        Args:
            key: The key to set
            value: The value to assign
        """
        # Check if it's a direct attribute
        if hasattr(self, key):
            setattr(self, key, value)
        else:
            # Store in plotting_data
            self.plotting_data[key] = value

    def __contains__(self, key: str) -> bool:
        """
        Check if a key exists in the results.

        Args:
            key: The key to check

        Returns:
            True if key exists, False otherwise
        """
        return hasattr(self, key) or key in self.plotting_data

    def update(self, other: dict[str, Any]):
        """
        Update the results with additional data from a dictionary.

        Args:
            other: Dictionary containing additional data
        """
        for key, value in other.items():
            self[key] = value

    def to_dict(self) -> dict[str, Any]:
        """
        Convert the results to a dictionary for backward compatibility.

        Returns:
            Dictionary representation of the results
        """
        result_dict = {}

        # Add all direct attributes
        for attr_name in self.__dataclass_fields__:
            if attr_name != "plotting_data":
                result_dict[attr_name] = getattr(self, attr_name)

        # Add plotting data
        result_dict.update(self.plotting_data)

        return result_dict

    # ------------------------------------------------------------------
    # Serialization
    # ------------------------------------------------------------------

    def save(self, path: Union[str, Path]) -> None:
        """Save results to a compressed ``.npz`` file.

        Serialization behaviour is driven by the ``"npz"`` key in each
        dataclass field's ``metadata``.  Fields without that key are
        skipped.

        Args:
            path: Output file path.  ``.npz`` is appended automatically
                by numpy if not already present.
        """
        data: dict[str, np.ndarray] = {}

        for name, f in self.__dataclass_fields__.items():
            tag = f.metadata.get("npz")
            if tag is None:
                continue
            val = getattr(self, name)

            if tag == self._SCALAR:
                data[name] = np.array(val)

            elif tag == self._ARRAY_LIST:
                data[name] = np.stack([np.asarray(a) for a in val]) if val else np.array([])

            elif tag == self._FLOAT_LIST:
                data[name] = np.array(val, dtype=float)

            elif tag == self._DICT:
                for k, v in val.items():
                    try:
                        arr = np.asarray(v)
                        if arr.dtype.kind != "O":
                            data[f"{name}/{k}"] = arr
                    except (TypeError, ValueError):
                        pass

            elif tag == self._OPT_ARRAY:
                if val is not None:
                    data[name] = np.asarray(val)

            elif tag == self._OPT_SCALAR:
                if val is not None:
                    data[name] = np.array(val)

        np.savez_compressed(str(path), **data)

    @classmethod
    def load(cls, path: Union[str, Path]) -> "OptimizationResults":
        """Load results from a ``.npz`` file previously created by :meth:`save`.

        Args:
            path: Path to the ``.npz`` file.  If the path has no suffix,
                ``.npz`` is appended automatically.

        Returns:
            Reconstructed :class:`OptimizationResults`.
        """
        path = Path(path)
        if not path.suffix:
            path = path.with_suffix(".npz")

        archive = np.load(str(path), allow_pickle=False)
        keys = set(archive.files)

        def _unstack(name: str) -> list:
            if name not in keys:
                return []
            arr = archive[name]
            return [arr[i] for i in range(arr.shape[0])] if arr.size else []

        def _prefixed_dict(prefix: str) -> dict:
            p = prefix + "/"
            return {k[len(p) :]: archive[k] for k in keys if k.startswith(p)}

        kwargs: dict[str, Any] = {}
        deferred: dict[str, tuple] = {}  # fields to set after construction

        for name, f in cls.__dataclass_fields__.items():
            tag = f.metadata.get("npz")
            if tag is None:
                continue

            if tag == cls._SCALAR:
                kwargs[name] = archive[name].item()

            elif tag == cls._ARRAY_LIST:
                kwargs[name] = _unstack(name)

            elif tag == cls._FLOAT_LIST:
                kwargs[name] = archive[name].tolist() if name in keys else []

            elif tag == cls._DICT:
                kwargs[name] = _prefixed_dict(name)

            elif tag == cls._OPT_ARRAY:
                deferred[name] = archive[name] if name in keys else None

            elif tag == cls._OPT_SCALAR:
                deferred[name] = float(archive[name]) if name in keys else None

        result = cls(**kwargs)
        for name, val in deferred.items():
            setattr(result, name, val)
        return result
u: np.ndarray property

Optimal control trajectory at discretization nodes.

Returns the final converged solution from the SCP iteration history.

Returns:

Type Description
ndarray

Control trajectory array, shape (N, n_controls)

x: np.ndarray property

Optimal state trajectory at discretization nodes.

Returns the final converged solution from the SCP iteration history.

Returns:

Type Description
ndarray

State trajectory array, shape (N, n_states)

get(key: str, default: Any = None) -> Any

Get a value from the results, similar to dict.get().

Parameters:

Name Type Description Default
key str

The key to look up

required
default Any

Default value if key is not found

None

Returns:

Type Description
Any

The value associated with the key, or default if not found

Source code in openscvx/algorithms/optimization_results.py
def get(self, key: str, default: Any = None) -> Any:
    """
    Get a value from the results, similar to dict.get().

    Args:
        key: The key to look up
        default: Default value if key is not found

    Returns:
        The value associated with the key, or default if not found
    """
    # Check if it's a direct attribute
    if hasattr(self, key):
        return getattr(self, key)

    # Check if it's in plotting_data
    if key in self.plotting_data:
        return self.plotting_data[key]

    return default
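The lookup order (direct attribute first, then ``plotting_data``, then the default) can be illustrated with a hypothetical stand-in class mirroring ``get()``:

```python
class ResultsLookup:
    # Minimal stand-in mirroring OptimizationResults' get() lookup order
    def __init__(self):
        self.converged = True
        self.plotting_data = {"obstacle_radius": 2.5}

    def get(self, key, default=None):
        if hasattr(self, key):
            return getattr(self, key)       # direct attribute wins
        if key in self.plotting_data:
            return self.plotting_data[key]  # fall back to plotting_data
        return default

r = ResultsLookup()
print(r.get("converged"), r.get("obstacle_radius"), r.get("missing", "n/a"))
# True 2.5 n/a
```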
load(path: Union[str, Path]) -> OptimizationResults classmethod

Load results from a .npz file previously created by :meth:save.

Parameters:

Name Type Description Default
path Union[str, Path]

Path to the .npz file. If the path has no suffix, .npz is appended automatically.

required

Returns:

Type Description
OptimizationResults

Reconstructed :class:OptimizationResults.

Source code in openscvx/algorithms/optimization_results.py
@classmethod
def load(cls, path: Union[str, Path]) -> "OptimizationResults":
    """Load results from a ``.npz`` file previously created by :meth:`save`.

    Args:
        path: Path to the ``.npz`` file.  If the path has no suffix,
            ``.npz`` is appended automatically.

    Returns:
        Reconstructed :class:`OptimizationResults`.
    """
    path = Path(path)
    if not path.suffix:
        path = path.with_suffix(".npz")

    archive = np.load(str(path), allow_pickle=False)
    keys = set(archive.files)

    def _unstack(name: str) -> list:
        if name not in keys:
            return []
        arr = archive[name]
        return [arr[i] for i in range(arr.shape[0])] if arr.size else []

    def _prefixed_dict(prefix: str) -> dict:
        p = prefix + "/"
        return {k[len(p) :]: archive[k] for k in keys if k.startswith(p)}

    kwargs: dict[str, Any] = {}
    deferred: dict[str, tuple] = {}  # fields to set after construction

    for name, f in cls.__dataclass_fields__.items():
        tag = f.metadata.get("npz")
        if tag is None:
            continue

        if tag == cls._SCALAR:
            kwargs[name] = archive[name].item()

        elif tag == cls._ARRAY_LIST:
            kwargs[name] = _unstack(name)

        elif tag == cls._FLOAT_LIST:
            kwargs[name] = archive[name].tolist() if name in keys else []

        elif tag == cls._DICT:
            kwargs[name] = _prefixed_dict(name)

        elif tag == cls._OPT_ARRAY:
            deferred[name] = archive[name] if name in keys else None

        elif tag == cls._OPT_SCALAR:
            deferred[name] = float(archive[name]) if name in keys else None

    result = cls(**kwargs)
    for name, val in deferred.items():
        setattr(result, name, val)
    return result
save(path: Union[str, Path]) -> None

Save results to a compressed .npz file.

Serialization behaviour is driven by the "npz" key in each dataclass field's metadata. Fields without that key are skipped.

Parameters:

Name Type Description Default
path Union[str, Path]

Output file path. .npz is appended automatically by numpy if not already present.

required
Source code in openscvx/algorithms/optimization_results.py
def save(self, path: Union[str, Path]) -> None:
    """Save results to a compressed ``.npz`` file.

    Serialization behaviour is driven by the ``"npz"`` key in each
    dataclass field's ``metadata``.  Fields without that key are
    skipped.

    Args:
        path: Output file path.  ``.npz`` is appended automatically
            by numpy if not already present.
    """
    data: dict[str, np.ndarray] = {}

    for name, f in self.__dataclass_fields__.items():
        tag = f.metadata.get("npz")
        if tag is None:
            continue
        val = getattr(self, name)

        if tag == self._SCALAR:
            data[name] = np.array(val)

        elif tag == self._ARRAY_LIST:
            data[name] = np.stack([np.asarray(a) for a in val]) if val else np.array([])

        elif tag == self._FLOAT_LIST:
            data[name] = np.array(val, dtype=float)

        elif tag == self._DICT:
            for k, v in val.items():
                try:
                    arr = np.asarray(v)
                    if arr.dtype.kind != "O":
                        data[f"{name}/{k}"] = arr
                except (TypeError, ValueError):
                    pass

        elif tag == self._OPT_ARRAY:
            if val is not None:
                data[name] = np.asarray(val)

        elif tag == self._OPT_SCALAR:
            if val is not None:
                data[name] = np.array(val)

    np.savez_compressed(str(path), **data)
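At the numpy level, the roundtrip reduces to ``savez_compressed`` on the tagged payload and ``np.load`` with ``allow_pickle=False``; a self-contained sketch with illustrative keys (note dict entries are flattened to ``"name/key"`` archive entries, as ``save()`` does above):

```python
import os
import tempfile
import numpy as np

# Illustrative payload in the same shape save() would produce
data = {
    "converged": np.array(True),
    "t_final": np.array(12.5),
    "nodes/position": np.zeros((5, 3)),  # dict field flattened with a "/" prefix
}
path = os.path.join(tempfile.mkdtemp(), "results.npz")
np.savez_compressed(path, **data)

archive = np.load(path, allow_pickle=False)
print(archive["converged"].item(), archive["t_final"].item())  # True 12.5
print([k for k in archive.files if k.startswith("nodes/")])    # ['nodes/position']
```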
to_dict() -> dict[str, Any]

Convert the results to a dictionary for backward compatibility.

Returns:

Type Description
dict[str, Any]

Dictionary representation of the results

Source code in openscvx/algorithms/optimization_results.py
def to_dict(self) -> dict[str, Any]:
    """
    Convert the results to a dictionary for backward compatibility.

    Returns:
        Dictionary representation of the results
    """
    result_dict = {}

    # Add all direct attributes
    for attr_name in self.__dataclass_fields__:
        if attr_name != "plotting_data":
            result_dict[attr_name] = getattr(self, attr_name)

    # Add plotting data
    result_dict.update(self.plotting_data)

    return result_dict
update(other: dict[str, Any])

Update the results with additional data from a dictionary.

Parameters:

Name Type Description Default
other dict[str, Any]

Dictionary containing additional data

required
Source code in openscvx/algorithms/optimization_results.py
def update(self, other: dict[str, Any]):
    """
    Update the results with additional data from a dictionary.

    Args:
        other: Dictionary containing additional data
    """
    for key, value in other.items():
        self[key] = value
update_plotting_data(**kwargs: Any) -> None

Update the plotting data with additional information.

Parameters:

Name Type Description Default
**kwargs Any

Key-value pairs to add to plotting_data

{}
Source code in openscvx/algorithms/optimization_results.py
def update_plotting_data(self, **kwargs: Any) -> None:
    """
    Update the plotting data with additional information.

    Args:
        **kwargs: Key-value pairs to add to plotting_data
    """
    self.plotting_data.update(kwargs)

PenalizedTrustRegion

Bases: Algorithm

Penalized Trust Region (PTR) successive convexification algorithm.

PTR solves non-convex trajectory optimization problems through iterative convex approximation. Each subproblem balances competing cost terms:

  • Trust region penalty: Discourages large deviations from the previous iterate, keeping the solution within the region where linearization is valid.
  • Virtual control: Relaxes dynamics constraints, penalized to drive defects toward zero as the algorithm converges.
  • Virtual buffer: Relaxes non-convex constraints, similarly penalized to enforce feasibility at convergence.
  • Problem objective and other terms: The user-defined cost (e.g., minimum fuel, minimum time) and any additional penalty terms.

The interplay between these terms guides the optimization: the trust region anchors the solution near the linearization point while virtual terms allow temporary constraint violations that shrink over iterations.
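As a rough sketch (not the actual convex subproblem, which is solved by a conic solver), the penalized cost combines these terms using the constructor's default weights (``lam_prox=1e-1``, ``lam_vc=1e0``, ``lam_cost=1e-2``, ``lam_vb=0.0``); the function and variable names here are illustrative:

```python
import numpy as np

def ptr_penalized_cost(z, z_bar, nu, vb, J_obj,
                       lam_cost=1e-2, lam_prox=1e-1, lam_vc=1e0, lam_vb=0.0):
    # Weighted sum of the competing terms described above, evaluated pointwise
    J_tr = lam_prox * np.sum((z - z_bar) ** 2)   # trust region: stay near previous iterate
    J_vc = lam_vc * np.sum(np.abs(nu))           # virtual control: dynamics relaxation
    J_vb = lam_vb * np.sum(np.maximum(vb, 0.0))  # virtual buffer: constraint relaxation
    return lam_cost * J_obj + J_tr + J_vc + J_vb

J = ptr_penalized_cost(np.zeros(2), np.ones(2), np.array([0.5]),
                       np.array([0.0]), J_obj=10.0)
print(round(float(J), 3))  # 0.8
```

As the iterates converge, the trust region and virtual terms shrink toward zero, leaving the (weighted) problem objective.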

Example

Using PTR with a Problem::

from openscvx.algorithms import PenalizedTrustRegion

problem = Problem(dynamics, constraints, states, controls, N, time)
problem.initialize()
result = problem.solve()
Source code in openscvx/algorithms/penalized_trust_region.py
class PenalizedTrustRegion(Algorithm):
    """Penalized Trust Region (PTR) successive convexification algorithm.

    PTR solves non-convex trajectory optimization problems through iterative
    convex approximation. Each subproblem balances competing cost terms:

    - **Trust region penalty**: Discourages large deviations from the previous
      iterate, keeping the solution within the region where linearization is valid.
    - **Virtual control**: Relaxes dynamics constraints, penalized to drive
      defects toward zero as the algorithm converges.
    - **Virtual buffer**: Relaxes non-convex constraints, similarly penalized
      to enforce feasibility at convergence.
    - **Problem objective and other terms**: The user-defined cost (e.g., minimum
      fuel, minimum time) and any additional penalty terms.

    The interplay between these terms guides the optimization: the trust region
    anchors the solution near the linearization point while virtual terms allow
    temporary constraint violations that shrink over iterations.

    Example:
        Using PTR with a Problem::

            from openscvx.algorithms import PenalizedTrustRegion

            problem = Problem(dynamics, constraints, states, controls, N, time)
            problem.initialize()
            result = problem.solve()
    """

    # Base columns emitted by PTR algorithm (before autotuner columns)
    BASE_COLUMNS: List[Column] = [
        Column("iter", "Iter", 4, "{:4d}"),
        Column("dis_time", "Dis (ms)", 8, "{:6.2f}", min_verbosity=Verbosity.STANDARD),
        Column("subprop_time", "Solve (ms)", 10, "{:6.2f}", min_verbosity=Verbosity.STANDARD),
        Column("cost", "Cost", 8, "{: .1e}"),
        Column("J_tr", "J_tr", 8, "{: .1e}", color_J_tr, Verbosity.STANDARD),
        Column("J_vb", "J_vb", 8, "{: .1e}", color_J_vb, Verbosity.STANDARD),
        Column("J_vc", "J_vc", 8, "{: .1e}", color_J_vc, Verbosity.STANDARD),
    ]

    # Columns that always appear last (after autotuner columns)
    TAIL_COLUMNS: List[Column] = [
        Column("prob_stat", "Cvx Status", 11, "{}", color_prob_stat),
    ]

    def __init__(
        self,
        autotuner: "AutotuningBase" = None,
        k_max: int = 200,
        t_max: float | None = None,
        lam_prox: Union[float, Dict[str, Union[float, list]]] = 1e-1,
        lam_vc: Union[float, Dict[str, Union[float, list]]] = 1e0,
        lam_cost: Union[float, Dict[str, float]] = 1e-2,
        lam_vb: float = 0.0,
        ep_tr: float = 1e-4,
        ep_vb: float = 1e-4,
        ep_vc: float = 1e-8,
        states: List["State"] = None,
        controls: List["Control"] = None,
    ):
        """Initialize PTR with algorithm parameters and optional autotuner.

        Args:
            autotuner: Weight adaptation strategy. Defaults to
                :class:`AugmentedLagrangian` when ``None``.
            k_max: Maximum SCP iterations. Defaults to 200.
            t_max: Wall-clock time limit in seconds. ``None`` (default)
                disables the time limit.
            lam_prox: Trust region (proximal) weight. Either a float
                (applied uniformly to all states and controls) or a dict
                mapping state/control names to weights, e.g.
                ``{"velocity": 1e0, "thrust": 5e-1}``.  Dict values may
                be scalars, 1-D arrays for per-component weighting, or
                2-D arrays of shape ``(n_nodes, n_components)`` for
                per-node-per-component weighting.  Variables not in the
                dict default to ``1.0``. Defaults to 0.1.
            lam_vc: Virtual control penalty weight. Either a float
                (applied uniformly to all states) or a dict mapping state
                names to per-state weights, e.g.
                ``{"velocity": 1e1, "position": 5e0}``.  Dict values may
                be scalars, 1-D arrays for per-component weighting, or
                2-D arrays of shape ``(n_nodes-1, n_components)`` for
                per-node-per-component weighting.  States not in the dict
                default to ``1.0``. Defaults to 1.0.
            lam_cost: Cost weight. Either a float (applied to all
                minimize/maximize states) or a dict mapping state names
                to per-state weights, e.g.
                ``{"velocity": 1e-1, "time": 1e0}``.  Dict values may
                be arrays for per-component weighting, e.g.
                ``{"position": [0, 0, 1e-6]}``. Defaults to 0.01.
            lam_vb: Virtual buffer penalty weight. Defaults to 0.0.
            ep_tr: Trust region convergence tolerance. Defaults to 1e-4.
            ep_vb: Virtual buffer convergence tolerance. Defaults to 1e-4.
            ep_vc: Virtual control convergence tolerance. Defaults to 1e-8.
            states: Symbolic State objects (required when *lam_cost* or
                *lam_prox* is a dict). Normally provided automatically by
                :class:`Problem`.
            controls: Symbolic Control objects (required when *lam_prox*
                is a dict). Normally provided automatically by
                :class:`Problem`.
        """
        # Compiled infrastructure (set by initialize())
        self._solver: "ConvexSolver" = None
        self._discretization_solver: callable = None
        self._discretization_solver_impulsive: callable = None
        self._jax_constraints: "LoweredJaxConstraints" = None
        self._emitter: callable = None

        # Autotuner
        self.autotuner: "AutotuningBase" = (
            autotuner if autotuner is not None else AugmentedLagrangian()
        )

        # Store states/controls for later re-resolution (e.g. user sets
        # lam_cost or lam_prox to a new dict via the property setter after
        # construction).
        self._states: List["State"] = states
        self._controls: List["Control"] = controls

        # SCP weights (grouped dataclass, dict inputs expanded to arrays)
        self.weights = Weights.build(
            lam_prox=lam_prox,
            lam_vc=lam_vc,
            lam_cost=lam_cost,
            lam_vb=lam_vb,
            states=states,
            controls=controls,
        )

        # SCP convergence parameters
        self.k_max = k_max
        self.t_max = t_max
        self.ep_tr = ep_tr
        self.ep_vb = ep_vb
        self.ep_vc = ep_vc
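
The dict-valued weight inputs described in the docstring are expanded to arrays by `Weights.build`. The expansion can be sketched roughly as follows (`resolve_weight_dict`, the variable names, and the dimension bookkeeping are hypothetical stand-ins, not the library's actual helper):

```python
import numpy as np

def resolve_weight_dict(weights, names, dims, n_nodes, default=1.0):
    """Expand {name: scalar | 1-D | 2-D} into an (n_nodes, total_dim) array.

    Hypothetical sketch of the dict resolution: each variable's entry is
    broadcast to (n_nodes, dim); variables absent from the dict get `default`.
    """
    cols = []
    for name, dim in zip(names, dims):
        w = np.asarray(weights.get(name, default), dtype=float)
        cols.append(np.broadcast_to(w, (n_nodes, dim)))
    return np.hstack(cols)

# Scalar for a 1-D state, per-component list for a 2-D control
W = resolve_weight_dict(
    {"velocity": 1e0, "thrust": [5e-1, 5e-1]},
    names=["velocity", "thrust"], dims=[1, 2], n_nodes=4,
)
```

Scalars, 1-D per-component arrays, and full `(n_nodes, n_components)` arrays all pass through the same `broadcast_to` path, which is why the docstring accepts all three shapes.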

    # -- Weight properties ---------------------------------------------------
    # These properties resolve dict-valued inputs (e.g. {"velocity": 1e0})
    # to arrays via Weights.resolve_lam_* helpers, then store the result
    # on self.weights.  During SCP iteration the autotuner may mutate the
    # values on self.weights directly (e.g. ramping lam_prox).  Those
    # in-flight changes are tracked in AlgorithmState weight histories.

    @staticmethod
    def _invoke_solver(solver: callable, *args):
        """Call either a compiled solver wrapper (.call) or a plain callable."""
        if hasattr(solver, "call"):
            return solver.call(*args)
        return solver(*args)
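
The dispatch in `_invoke_solver` is plain duck typing: a compiled solver wrapper exposes `.call`, while a bare function is invoked directly. A toy illustration (the `CompiledSolver`/`plain_solver` names are invented for the example):

```python
class CompiledSolver:
    def call(self, x):
        # Compiled wrappers expose their entry point as .call
        return x * 2

def plain_solver(x):
    return x * 2

def invoke(solver, *args):
    # Mirror _invoke_solver: prefer .call when present, else call directly
    return solver.call(*args) if hasattr(solver, "call") else solver(*args)
```

Both flavors are then interchangeable at every call site in `initialize()` and `step()`.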

    def _recover_prior_node_from_initial(
        self,
        settings: Config,
        x0_fallback: np.ndarray,
    ) -> np.ndarray:
        """Build node-0 prior state from initial conditions (fixed entries exact)."""
        x0_prior = np.asarray(x0_fallback, dtype=float).reshape(-1).copy()
        x0_init = np.asarray(settings.sim.x.initial, dtype=float).reshape(-1)
        is_fixed = np.asarray(settings.sim.x.initial_type) == "Fix"
        x0_prior[is_fixed] = x0_init[is_fixed]
        return x0_prior.reshape(1, -1)
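
The masking in `_recover_prior_node_from_initial` keeps fixed boundary entries exact while letting free entries fall back to the supplied value. A self-contained numpy illustration with made-up numbers:

```python
import numpy as np

# Entries whose initial_type is "Fix" are taken exactly from the initial
# condition; free entries keep the fallback (e.g. previously propagated) value.
x_fallback = np.array([0.9, 2.1, 3.3])
x_initial = np.array([1.0, 2.0, 3.0])
initial_type = np.array(["Fix", "Free", "Fix"])

x_prior = x_fallback.copy()
is_fixed = initial_type == "Fix"
x_prior[is_fixed] = x_initial[is_fixed]
x_prior = x_prior.reshape(1, -1)  # node-0 row, shape (1, n_x)
```

The reshaped row is later stacked on top of the propagated nodes to form the prior trajectory passed to the impulsive discretization solver.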

    @property
    def lam_prox(self) -> Union[float, np.ndarray]:
        """Trust region (proximal) weight.

        May be a scalar or array for per-variable / per-node weighting.
        """
        return self.weights.lam_prox

    @lam_prox.setter
    def lam_prox(self, value: Union[float, Dict[str, Union[float, list]]]) -> None:
        self.weights.lam_prox = Weights.resolve_lam_prox(value, self._states, self._controls)

    @property
    def lam_vc(self) -> Union[float, np.ndarray]:
        """Virtual control penalty weight.

        Returns a float when a uniform scalar was provided, or an ndarray
        when per-state weights were given (via dict or array).
        """
        return self.weights.lam_vc

    @lam_vc.setter
    def lam_vc(self, value: Union[float, Dict[str, Union[float, list]]]) -> None:
        self.weights.lam_vc = Weights.resolve_lam_vc(value, self._states)

    @property
    def lam_cost(self) -> Union[float, np.ndarray]:
        """Cost weight.

        Returns a float when a uniform scalar was provided, or an ndarray
        of shape ``(n_states,)`` when per-state weights were given (via dict
        or array).
        """
        return self.weights.lam_cost

    @lam_cost.setter
    def lam_cost(self, value: Union[float, Dict[str, float]]) -> None:
        self.weights.lam_cost = Weights.resolve_lam_cost(value, self._states)

    @property
    def lam_vb(self) -> float:
        """Global virtual buffer penalty weight.

        Per-constraint overrides are set via ``.weight()`` on individual
        constraints.
        """
        return self.weights.lam_vb

    @lam_vb.setter
    def lam_vb(self, value: float) -> None:
        self.weights.lam_vb = float(value)

    def get_columns(self, verbosity: int = Verbosity.STANDARD) -> List[Column]:
        """Get the columns to display for iteration output.

        Combines base PTR columns with autotuner-specific columns,
        filtered by the requested verbosity level.

        Args:
            verbosity: Requested verbosity level; columns whose
                ``min_verbosity`` does not exceed it are included.
                MINIMAL (1): Core metrics only (iter, cost, status)
                STANDARD (2): + timing, penalty terms
                FULL (3): + autotuning diagnostics

        Returns:
            List of Column specs filtered by verbosity level.

        Raises:
            AttributeError: If algorithm has not been initialized yet.
        """
        all_columns = self.BASE_COLUMNS + self.autotuner.COLUMNS + self.TAIL_COLUMNS
        return [col for col in all_columns if col.min_verbosity <= verbosity]
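
The verbosity filter is a simple threshold comparison over the concatenated column list. A minimal sketch (the `Col` dataclass is a simplified stand-in for the real `Column` spec):

```python
from dataclasses import dataclass

# Verbosity levels as in the source: MINIMAL < STANDARD < FULL
MINIMAL, STANDARD, FULL = 1, 2, 3

@dataclass
class Col:
    name: str
    min_verbosity: int = MINIMAL

cols = [Col("iter"), Col("J_tr", STANDARD), Col("acceptance_ratio", FULL)]

# Keep only columns whose minimum verbosity is at or below the requested level
visible = [c.name for c in cols if c.min_verbosity <= STANDARD]
```

At `STANDARD`, the FULL-only autotuning diagnostics drop out while core metrics and penalty terms remain.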

    def initialize(
        self,
        solver: "ConvexSolver",
        discretization_solver: callable,
        jax_constraints: "LoweredJaxConstraints",
        emitter: callable,
        params: dict,
        settings: Config,
        discretization_solver_impulsive: Callable[..., object] | None = None,
    ) -> None:
        """Initialize PTR algorithm.

        Stores compiled infrastructure and performs a warm-start solve to
        initialize DPP and JAX jacobians.

        Args:
            solver: Convex subproblem solver (e.g., CVXPySolver)
            discretization_solver: Compiled discretization solver
            jax_constraints: JIT-compiled constraint functions
            emitter: Callback for emitting iteration progress
            params: Problem parameters dictionary (for warm-start)
            settings: Configuration object (for warm-start)
            discretization_solver_impulsive: Optional impulsive/discrete
                discretization solver used to populate W/x_prop_plus/D_d/E_d.
        """
        # Store immutable infrastructure
        self._solver = solver
        self._discretization_solver = discretization_solver
        self._discretization_solver_impulsive = discretization_solver_impulsive
        self._jax_constraints = jax_constraints
        self._emitter = emitter

        # Set boundary conditions
        self._solver.update_boundary_conditions(
            x_init=settings.sim.x.initial,
            x_term=settings.sim.x.final,
        )

        # Create temporary state for initialization solve
        init_state = AlgorithmState.from_settings(settings, self.weights)

        # Solve a throwaway problem to initialize DPP and compile JAX jacobians
        _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
            self._discretization_solver, init_state.x, init_state.u.astype(float), params
        )

        init_state.add_discretization(V_multi_shoot.__array__())
        slice_imp = settings.sim.u.slice_impulsive
        has_impulsive = bool(slice_imp.stop > slice_imp.start)
        if has_impulsive and self._discretization_solver_impulsive is not None:
            u_init = init_state.u.astype(float)
            x0_prior = self._recover_prior_node_from_initial(settings, init_state.x[0])
            x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
            _, _, _, W_multi_shoot = self._invoke_solver(
                self._discretization_solver_impulsive,
                x_nodes_prior,
                u_init,
                params,
            )
            init_state.add_impulsive_discretization(W_multi_shoot.__array__())
        _ = self._subproblem(params, init_state, settings)

    def step(
        self,
        state: AlgorithmState,
        params: dict,
        settings: Config,
    ) -> bool:
        """Execute one PTR iteration.

        Solves the convex subproblem, updates state in place, and checks
        convergence based on trust region, virtual buffer, and virtual
        control costs.

        Args:
            state: Mutable solver state (modified in place)
            params: Problem parameters dictionary (may change between steps)
            settings: Configuration object (may change between steps)

        Returns:
            True if J_tr, J_vb, and J_vc are all below their thresholds.

        Raises:
            RuntimeError: If initialize() has not been called.
        """
        if self._solver is None:
            raise RuntimeError(
                "PenalizedTrustRegion.step() called before initialize(). "
                "Call initialize() first to set up compiled infrastructure."
            )

        # On the first iteration only, compute the discretization before the
        # subproblem; later iterations reuse the candidate's discretization.
        if state.k == 1:
            t0 = time.time()
            _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
                self._discretization_solver, state.x, state.u.astype(float), params
            )

            u_state = state.u.astype(float)
            x0_prior = self._recover_prior_node_from_initial(settings, state.x[0])
            x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
            _, _, _, W_multi_shoot = self._invoke_solver(
                self._discretization_solver_impulsive, x_nodes_prior, u_state, params
            )
            dis_time = time.time() - t0

            state.add_discretization(V_multi_shoot.__array__())
            state.add_impulsive_discretization(W_multi_shoot.__array__())

        # Run the subproblem
        (
            x_sol,
            u_sol,
            cost,
            J_total,
            J_vb_vec,
            J_vc_vec,
            J_tr_vec,
            prob_stat,
            subprop_time,
            vc_mat,
            tr_mat,
        ) = self._subproblem(params, state, settings)

        candidate = CandidateIterate()
        candidate.x = x_sol
        candidate.u = u_sol
        candidate.J_lin = J_total

        t0 = time.time()
        _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
            self._discretization_solver, candidate.x, candidate.u.astype(float), params
        )

        u_candidate = candidate.u.astype(float)
        x0_prior = self._recover_prior_node_from_initial(settings, candidate.x[0])
        x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
        x_prop_plus, D_d, E_d, W_multi_shoot = self._invoke_solver(
            self._discretization_solver_impulsive, x_nodes_prior, u_candidate, params
        )

        dis_time = time.time() - t0

        candidate.V = V_multi_shoot.__array__()
        candidate.W = W_multi_shoot.__array__()
        candidate.x_prop = x_prop.__array__()
        candidate.x_prop_plus = x_prop_plus.__array__()
        candidate.D_d = D_d.__array__()
        candidate.E_d = E_d.__array__()

        # Attach the virtual-control and trust-region penalty matrices to the
        # candidate iterate for use by the autotuner.
        candidate.VC = vc_mat
        candidate.TR = tr_mat

        state.J_tr = np.sum(np.array(J_tr_vec))
        state.J_vb = np.sum(np.array(J_vb_vec))
        state.J_vc = np.sum(np.array(J_vc_vec))

        # Update weights in state using configured autotuning method
        adaptive_state = self.autotuner.update_weights(
            state, candidate, self._jax_constraints, settings, params, self.weights
        )

        # Build emission data - only include nonlinear/reduction metrics when
        # the autotuner actually uses them (constant/ramp methods don't)
        use_full_metrics = not isinstance(
            self.autotuner, (ConstantProximalWeight, RampProximalWeight)
        )

        emission_data = {
            "iter": state.k,
            "dis_time": dis_time * 1000.0,
            "subprop_time": subprop_time * 1000.0,
            "J_tr": state.J_tr,
            "J_vb": state.J_vb,
            "J_vc": state.J_vc,
            "cost": cost[-1],
            # TODO: (haynec) log per-variable lam_prox detail (e.g. min/max range)
            "lam_prox": float(np.max(state.lam_prox)),
            "prob_stat": prob_stat,
            "adaptive_state": adaptive_state,
            "ep_tr": self.ep_tr,
            "ep_vb": self.ep_vb,
            "ep_vc": self.ep_vc,
        }

        if use_full_metrics:
            pred_reduction = (
                state.pred_reduction_history[-1] if state.pred_reduction_history else 0.0
            )
            actual_reduction = (
                state.actual_reduction_history[-1] if state.actual_reduction_history else 0.0
            )
            acceptance_ratio = (
                state.acceptance_ratio_history[-1] if state.acceptance_ratio_history else 0.0
            )

            emission_data.update(
                {
                    "J_nonlin": candidate.J_nonlin,
                    "J_lin": candidate.J_lin,
                    "pred_reduction": pred_reduction,
                    "actual_reduction": actual_reduction,
                    "acceptance_ratio": acceptance_ratio,
                }
            )

        # Emit data
        self._emitter(emission_data)

        # Increment iteration counter
        state.k += 1

        # Return convergence status
        return (state.J_tr < self.ep_tr) and (state.J_vb < self.ep_vb) and (state.J_vc < self.ep_vc)
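
The convergence test returned by `step()` requires all three penalty costs to fall below their tolerances simultaneously. As a tiny self-contained sketch (the `converged` helper is illustrative, not part of the API; defaults mirror the constructor's `ep_tr`/`ep_vb`/`ep_vc`):

```python
def converged(J_tr, J_vb, J_vc, ep_tr=1e-4, ep_vb=1e-4, ep_vc=1e-8):
    # Trust-region, virtual-buffer, and virtual-control costs must all be
    # below tolerance at the same iteration for SCP to declare convergence.
    return (J_tr < ep_tr) and (J_vb < ep_vb) and (J_vc < ep_vc)
```

Note that the virtual-control tolerance is far tighter than the other two by default: lingering dynamics infeasibility (nonzero `J_vc`) is treated as disqualifying even when the iterate has stopped moving.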

    def _subproblem(
        self,
        params: dict,
        state: AlgorithmState,
        settings: Config,
    ):
        """Solve a single convex subproblem.

        Uses stored infrastructure (solver, discretization_solver, jax_constraints)
        with per-step params and settings.

        Args:
            params: Problem parameters dictionary
            state: Current solver state
            settings: Configuration object

        Returns:
            Tuple containing solution data, costs, and timing information.
        """
        param_dict = params

        # Update solver with dynamics linearization
        self._solver.update_dynamics_linearization(
            x_bar=state.x,
            u_bar=state.u,
            A_d=state.A_d(),
            B_d=state.B_d(),
            C_d=state.C_d(),
            x_prop=state.x_prop(),
            x_prop_plus=state.x_prop_plus(),
            D_d=state.D_d(),
            E_d=state.E_d(),
        )

        # Build constraint linearization data
        # TODO: (norrisg) investigate why we are passing `0` for the node here
        nodal_linearizations = []
        if self._jax_constraints.nodal:
            for constraint in self._jax_constraints.nodal:
                # Evaluate constraint at all nodes (vmapped function returns shape (N,))
                g_full = np.asarray(constraint.func(state.x, state.u, 0, param_dict))
                grad_g_x_full = np.asarray(constraint.grad_g_x(state.x, state.u, 0, param_dict))
                grad_g_u_full = np.asarray(constraint.grad_g_u(state.x, state.u, 0, param_dict))

                # Ensure g is 1D with shape (N,) - squeeze any extra dimensions
                # This handles cases where constraint might return shape (N, 1) or similar
                g_full = np.squeeze(g_full)
                if g_full.ndim == 0:
                    # Scalar result - expand to (N,)
                    g_full = np.broadcast_to(g_full, (state.x.shape[0],))
                elif g_full.ndim > 1:
                    # Multi-dimensional result - flatten to (N,)
                    # This should not happen for properly decomposed constraints,
                    # but handle it gracefully
                    g_full = g_full.reshape(g_full.shape[0], -1).sum(axis=1)

                # Ensure grad_g_x and grad_g_u have correct shapes
                # grad_g_x should be (N, n_x), grad_g_u should be (N, n_u)
                if grad_g_x_full.ndim == 1:
                    # If 1D, it should be (n_x,) - broadcast to (N, n_x)
                    grad_g_x_full = np.broadcast_to(
                        grad_g_x_full, (state.x.shape[0], grad_g_x_full.shape[0])
                    )
                elif grad_g_x_full.ndim > 2:
                    # Flatten extra dimensions
                    grad_g_x_full = grad_g_x_full.reshape(grad_g_x_full.shape[0], -1)
                    # Take only first n_x columns
                    n_x = state.x.shape[1]
                    if grad_g_x_full.shape[1] > n_x:
                        grad_g_x_full = grad_g_x_full[:, :n_x]

                if grad_g_u_full.ndim == 1:
                    # If 1D, it should be (n_u,) - broadcast to (N, n_u)
                    grad_g_u_full = np.broadcast_to(
                        grad_g_u_full, (state.u.shape[0], grad_g_u_full.shape[0])
                    )
                elif grad_g_u_full.ndim > 2:
                    # Flatten extra dimensions
                    grad_g_u_full = grad_g_u_full.reshape(grad_g_u_full.shape[0], -1)
                    # Take only first n_u columns
                    n_u = state.u.shape[1]
                    if grad_g_u_full.shape[1] > n_u:
                        grad_g_u_full = grad_g_u_full[:, :n_u]

                nodal_linearizations.append(
                    {
                        "g": g_full,
                        "grad_g_x": grad_g_x_full,
                        "grad_g_u": grad_g_u_full,
                    }
                )

        cross_node_linearizations = []
        if self._jax_constraints.cross_node:
            for constraint in self._jax_constraints.cross_node:
                cross_node_linearizations.append(
                    {
                        "g": np.asarray(constraint.func(state.x, state.u, param_dict)),
                        "grad_g_X": np.asarray(constraint.grad_g_X(state.x, state.u, param_dict)),
                        "grad_g_U": np.asarray(constraint.grad_g_U(state.x, state.u, param_dict)),
                    }
                )

        # Update solver with constraint linearizations
        self._solver.update_constraint_linearizations(
            nodal=nodal_linearizations if nodal_linearizations else None,
            cross_node=cross_node_linearizations if cross_node_linearizations else None,
        )

        # Update solver with penalty weights
        self._solver.update_penalties(
            lam_prox=state.lam_prox,
            lam_cost=state.lam_cost,
            lam_vc=state.lam_vc,
            lam_vb_nodal=state.lam_vb_nodal,
            lam_vb_cross=state.lam_vb_cross,
        )

        # Solve the convex subproblem
        t0 = time.time()
        result = self._solver.solve()
        subprop_time = time.time() - t0

        # Extract unscaled trajectories from result
        x_new_guess = result.x
        u_new_guess = result.u

        # Accumulate terminal-cost contributions from boundary conditions.
        # Note: only final_type is considered here, for backward compatibility
        # with the original implementation; a utility exists that handles both
        # initial and final boundary types.
        costs = np.zeros(x_new_guess.shape[0])
        for i, bc_type in enumerate(settings.sim.x.final_type):
            if bc_type == "Minimize":
                costs += x_new_guess[:, i]
            elif bc_type == "Maximize":
                costs -= x_new_guess[:, i]

        # Create the block-diagonal scaling matrix with numpy.block
        inv_block_diag = np.block(
            [
                [
                    settings.sim.inv_S_x,
                    np.zeros((settings.sim.inv_S_x.shape[0], settings.sim.inv_S_u.shape[1])),
                ],
                [
                    np.zeros((settings.sim.inv_S_u.shape[0], settings.sim.inv_S_x.shape[1])),
                    settings.sim.inv_S_u,
                ],
            ]
        )

        # Calculate J_tr_vec using the JAX-compatible block diagonal matrix
        tr_mat = inv_block_diag @ np.hstack((x_new_guess - state.x, u_new_guess - state.u)).T
        J_tr_vec = la.norm(tr_mat, axis=0) ** 2
        vc_mat = np.abs(settings.sim.inv_S_x @ result.nu.T).T
        J_vc_vec = np.sum(vc_mat, axis=1)

        # Sum nodal constraint violations
        J_vb_vec = 0
        for nu_vb_arr in result.nu_vb:
            J_vb_vec += np.maximum(0, nu_vb_arr)

        # Add cross-node constraint violations
        for nu_vb_cross_val in result.nu_vb_cross:
            J_vb_vec += np.maximum(0, nu_vb_cross_val)

        # Convex constraints are already handled in the OCP, no processing needed here
        return (
            x_new_guess,
            u_new_guess,
            costs,
            result.cost,
            J_vb_vec,
            J_vc_vec,
            J_tr_vec,
            result.status,
            subprop_time,
            vc_mat,
            tr_mat,
        )
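
The trust-region cost computed above is the per-node squared norm of the scaled deviation from the previous iterate. A self-contained numpy sketch with toy dimensions and an invented diagonal scaling (standing in for the `inv_S_x`/`inv_S_u` block diagonal):

```python
import numpy as np

# Toy problem: 3 nodes, 2 states, 1 control; inv_S undoes variable scaling
N, n_x, n_u = 3, 2, 1
inv_S = np.diag([1.0, 0.5, 2.0])           # block-diag of inv_S_x and inv_S_u
dx = np.full((N, n_x), 0.1)                # x_new - x_bar
du = np.full((N, n_u), 0.2)                # u_new - u_bar

# Columns of tr_mat are the scaled per-node deviations
tr_mat = inv_S @ np.hstack((dx, du)).T     # shape (n_x + n_u, N)
J_tr_vec = np.linalg.norm(tr_mat, axis=0) ** 2
J_tr = J_tr_vec.sum()
```

Scaling before taking the norm keeps the convergence test meaningful when states and controls have very different magnitudes; `step()` then compares the summed `J_tr` against `ep_tr`.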

    def citation(self) -> List[str]:
        """Return BibTeX citations for the PTR algorithm.

        Returns:
            List of BibTeX entries for the papers underpinning the PTR
            algorithm.
        """
        return [
            r"""@article{drusvyatskiy2018error,
  title={Error bounds, quadratic growth, and linear convergence of proximal methods},
  author={Drusvyatskiy, Dmitriy and Lewis, Adrian S},
  journal={Mathematics of operations research},
  volume={43},
  number={3},
  pages={919--948},
  year={2018},
  publisher={INFORMS}
}""",
            r"""@article{szmuk2020successive,
  title={Successive convexification for real-time six-degree-of-freedom powered descent guidance
    with state-triggered constraints},
  author={Szmuk, Michael and Reynolds, Taylor P and A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={8},
  pages={1399--1413},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
            r"""@article{reynolds2020dual,
  title={Dual quaternion-based powered descent guidance with state-triggered constraints},
  author={Reynolds, Taylor P and Szmuk, Michael and Malyuta, Danylo and Mesbahi, Mehran and
    A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et and Carson III, John M},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={9},
  pages={1584--1599},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
        ]
lam_cost: Union[float, np.ndarray] property writable

Cost weight.

Returns a float when a uniform scalar was provided, or an ndarray of shape (n_states,) when per-state weights were given (via dict or array).

lam_prox: Union[float, np.ndarray] property writable

Trust region (proximal) weight.

May be a scalar or array for per-variable / per-node weighting.

lam_vb: float property writable

Global virtual buffer penalty weight.

Per-constraint overrides are set via .weight() on individual constraints.

lam_vc: Union[float, np.ndarray] property writable

Virtual control penalty weight.

Returns a float when a uniform scalar was provided, or an ndarray when per-state weights were given (via dict or array).

__init__(autotuner: AutotuningBase = None, k_max: int = 200, t_max: float | None = None, lam_prox: Union[float, Dict[str, Union[float, list]]] = 0.1, lam_vc: Union[float, Dict[str, Union[float, list]]] = 1.0, lam_cost: Union[float, Dict[str, float]] = 0.01, lam_vb: float = 0.0, ep_tr: float = 0.0001, ep_vb: float = 0.0001, ep_vc: float = 1e-08, states: List[State] = None, controls: List[Control] = None)

Initialize PTR with algorithm parameters and optional autotuner.

Parameters:

Name Type Description Default
autotuner AutotuningBase

Weight adaptation strategy. Defaults to :class:AugmentedLagrangian when None.

None
k_max int

Maximum SCP iterations. Defaults to 200.

200
t_max float | None

Wall-clock time limit in seconds. None (default) disables the time limit.

None
lam_prox Union[float, Dict[str, Union[float, list]]]

Trust region (proximal) weight. Either a float (applied uniformly to all states and controls) or a dict mapping state/control names to weights, e.g. {"velocity": 1e0, "thrust": 5e-1}. Dict values may be scalars, 1-D arrays for per-component weighting, or 2-D arrays of shape (n_nodes, n_components) for per-node-per-component weighting. Variables not in the dict default to 1.0. Defaults to 0.1.

0.1
lam_vc Union[float, Dict[str, Union[float, list]]]

Virtual control penalty weight. Either a float (applied uniformly to all states) or a dict mapping state names to per-state weights, e.g. {"velocity": 1e1, "position": 5e0}. Dict values may be scalars, 1-D arrays for per-component weighting, or 2-D arrays of shape (n_nodes-1, n_components) for per-node-per-component weighting. States not in the dict default to 1.0. Defaults to 1.0.

1.0
lam_cost Union[float, Dict[str, float]]

Cost weight. Either a float (applied to all minimize/maximize states) or a dict mapping state names to per-state weights, e.g. {"velocity": 1e-1, "time": 1e0}. Dict values may be arrays for per-component weighting, e.g. {"position": [0, 0, 1e-6]}. Defaults to 0.01.

0.01
lam_vb float

Virtual buffer penalty weight. Defaults to 0.0.

0.0
ep_tr float

Trust region convergence tolerance. Defaults to 1e-4.

0.0001
ep_vb float

Virtual buffer convergence tolerance. Defaults to 1e-4.

0.0001
ep_vc float

Virtual control convergence tolerance. Defaults to 1e-8.

1e-08
states List[State]

Symbolic State objects (required when lam_cost or lam_prox is a dict). Normally provided automatically by :class:Problem.

None
controls List[Control]

Symbolic Control objects (required when lam_prox is a dict). Normally provided automatically by :class:Problem.

None
Source code in openscvx/algorithms/penalized_trust_region.py
def __init__(
    self,
    autotuner: "AutotuningBase" = None,
    k_max: int = 200,
    t_max: float | None = None,
    lam_prox: Union[float, Dict[str, Union[float, list]]] = 1e-1,
    lam_vc: Union[float, Dict[str, Union[float, list]]] = 1e0,
    lam_cost: Union[float, Dict[str, float]] = 1e-2,
    lam_vb: float = 0.0,
    ep_tr: float = 1e-4,
    ep_vb: float = 1e-4,
    ep_vc: float = 1e-8,
    states: List["State"] = None,
    controls: List["Control"] = None,
):
    """Initialize PTR with algorithm parameters and optional autotuner.

    Args:
        autotuner: Weight adaptation strategy. Defaults to
            :class:`AugmentedLagrangian` when ``None``.
        k_max: Maximum SCP iterations. Defaults to 200.
        t_max: Wall-clock time limit in seconds. ``None`` (default)
            disables the time limit.
        lam_prox: Trust region (proximal) weight. Either a float
            (applied uniformly to all states and controls) or a dict
            mapping state/control names to weights, e.g.
            ``{"velocity": 1e0, "thrust": 5e-1}``.  Dict values may
            be scalars, 1-D arrays for per-component weighting, or
            2-D arrays of shape ``(n_nodes, n_components)`` for
            per-node-per-component weighting.  Variables not in the
            dict default to ``1.0``. Defaults to 0.1.
        lam_vc: Virtual control penalty weight. Either a float
            (applied uniformly to all states) or a dict mapping state
            names to per-state weights, e.g.
            ``{"velocity": 1e1, "position": 5e0}``.  Dict values may
            be scalars, 1-D arrays for per-component weighting, or
            2-D arrays of shape ``(n_nodes-1, n_components)`` for
            per-node-per-component weighting.  States not in the dict
            default to ``1.0``. Defaults to 1.0.
        lam_cost: Cost weight. Either a float (applied to all
            minimize/maximize states) or a dict mapping state names
            to per-state weights, e.g.
            ``{"velocity": 1e-1, "time": 1e0}``.  Dict values may
            be arrays for per-component weighting, e.g.
            ``{"position": [0, 0, 1e-6]}``. Defaults to 0.01.
        lam_vb: Virtual buffer penalty weight. Defaults to 0.0.
        ep_tr: Trust region convergence tolerance. Defaults to 1e-4.
        ep_vb: Virtual buffer convergence tolerance. Defaults to 1e-4.
        ep_vc: Virtual control convergence tolerance. Defaults to 1e-8.
        states: Symbolic State objects (required when *lam_cost* or
            *lam_prox* is a dict). Normally provided automatically by
            :class:`Problem`.
        controls: Symbolic Control objects (required when *lam_prox*
            is a dict). Normally provided automatically by
            :class:`Problem`.
    """
    # Compiled infrastructure (set by initialize())
    self._solver: "ConvexSolver" = None
    self._discretization_solver: callable = None
    self._discretization_solver_impulsive: callable = None
    self._jax_constraints: "LoweredJaxConstraints" = None
    self._emitter: callable = None

    # Autotuner
    self.autotuner: "AutotuningBase" = (
        autotuner if autotuner is not None else AugmentedLagrangian()
    )

    # Store states/controls for later re-resolution (e.g. user sets
    # lam_cost or lam_prox to a new dict via the property setter after
    # construction).
    self._states: List["State"] = states
    self._controls: List["Control"] = controls

    # SCP weights (grouped dataclass, dict inputs expanded to arrays)
    self.weights = Weights.build(
        lam_prox=lam_prox,
        lam_vc=lam_vc,
        lam_cost=lam_cost,
        lam_vb=lam_vb,
        states=states,
        controls=controls,
    )

    # SCP convergence parameters
    self.k_max = k_max
    self.t_max = t_max
    self.ep_tr = ep_tr
    self.ep_vb = ep_vb
    self.ep_vc = ep_vc
citation() -> List[str]

Return BibTeX citations for the PTR algorithm.

Returns:

Type Description
List[str]

List containing the BibTeX entry for the PTR paper.

Source code in openscvx/algorithms/penalized_trust_region.py
    def citation(self) -> List[str]:
        """Return BibTeX citations for the PTR algorithm.

        Returns:
            List containing the BibTeX entry for the PTR paper.
        """
        return [
            r"""@article{drusvyatskiy2018error,
  title={Error bounds, quadratic growth, and linear convergence of proximal methods},
  author={Drusvyatskiy, Dmitriy and Lewis, Adrian S},
  journal={Mathematics of operations research},
  volume={43},
  number={3},
  pages={919--948},
  year={2018},
  publisher={INFORMS}
}""",
            r"""@article{szmuk2020successive,
  title={Successive convexification for real-time six-degree-of-freedom powered descent guidance
    with state-triggered constraints},
  author={Szmuk, Michael and Reynolds, Taylor P and A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={8},
  pages={1399--1413},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
            r"""@article{reynolds2020dual,
  title={Dual quaternion-based powered descent guidance with state-triggered constraints},
  author={Reynolds, Taylor P and Szmuk, Michael and Malyuta, Danylo and Mesbahi, Mehran and
    A{\c{c}}{\i}kme{\c{s}}e, Beh{\c{c}}et and Carson III, John M},
  journal={Journal of Guidance, Control, and Dynamics},
  volume={43},
  number={9},
  pages={1584--1599},
  year={2020},
  publisher={American Institute of Aeronautics and Astronautics}
}""",
        ]
get_columns(verbosity: int = Verbosity.STANDARD) -> List[Column]

Get the columns to display for iteration output.

Combines base PTR columns with autotuner-specific columns, filtered by the requested verbosity level.

Parameters:

  • verbosity (int, default STANDARD): Minimum verbosity level for columns to include. MINIMAL (1): core metrics only (iter, cost, status). STANDARD (2): + timing, penalty terms. FULL (3): + autotuning diagnostics.

Returns:

  • List[Column]: List of Column specs filtered by verbosity level.

Raises:

  • AttributeError: If the algorithm has not been initialized yet.

Source code in openscvx/algorithms/penalized_trust_region.py
def get_columns(self, verbosity: int = Verbosity.STANDARD) -> List[Column]:
    """Get the columns to display for iteration output.

    Combines base PTR columns with autotuner-specific columns,
    filtered by the requested verbosity level.

    Args:
        verbosity: Minimum verbosity level for columns to include.
            MINIMAL (1): Core metrics only (iter, cost, status)
            STANDARD (2): + timing, penalty terms
            FULL (3): + autotuning diagnostics

    Returns:
        List of Column specs filtered by verbosity level.

    Raises:
        AttributeError: If algorithm has not been initialized yet.
    """
    all_columns = self.BASE_COLUMNS + self.autotuner.COLUMNS + self.TAIL_COLUMNS
    return [col for col in all_columns if col.min_verbosity <= verbosity]
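The verbosity filter is a single threshold comparison over column specs. A minimal self-contained sketch (the `Column` and `Verbosity` names here are simplified stand-ins for the library's own types, not its actual definitions):

```python
from dataclasses import dataclass
from enum import IntEnum


class Verbosity(IntEnum):
    # Simplified stand-in for the library's verbosity levels
    MINIMAL = 1
    STANDARD = 2
    FULL = 3


@dataclass
class Column:
    # Simplified stand-in: a name plus the minimum verbosity that shows it
    name: str
    min_verbosity: int


def get_columns(columns, verbosity=Verbosity.STANDARD):
    # Keep only columns whose threshold is at or below the requested level
    return [col for col in columns if col.min_verbosity <= verbosity]


cols = [
    Column("iter", Verbosity.MINIMAL),
    Column("J_tr", Verbosity.STANDARD),
    Column("acceptance_ratio", Verbosity.FULL),
]
print([c.name for c in get_columns(cols, Verbosity.STANDARD)])  # ['iter', 'J_tr']
```

Raising the verbosity to `FULL` would additionally admit the autotuning diagnostic column.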
initialize(solver: ConvexSolver, discretization_solver: callable, jax_constraints: LoweredJaxConstraints, emitter: callable, params: dict, settings: Config, discretization_solver_impulsive: Callable[..., object] | None = None) -> None

Initialize PTR algorithm.

Stores compiled infrastructure and performs a warm-start solve to initialize DPP and JAX jacobians.

Parameters:

  • solver (ConvexSolver, required): Convex subproblem solver (e.g., CVXPySolver).
  • discretization_solver (callable, required): Compiled discretization solver.
  • jax_constraints (LoweredJaxConstraints, required): JIT-compiled constraint functions.
  • emitter (callable, required): Callback for emitting iteration progress.
  • params (dict, required): Problem parameters dictionary (for warm-start).
  • settings (Config, required): Configuration object (for warm-start).
  • discretization_solver_impulsive (Callable[..., object] | None, default None): Optional impulsive/discrete discretization solver used to populate W/x_prop_plus/D_d/E_d.
Source code in openscvx/algorithms/penalized_trust_region.py
def initialize(
    self,
    solver: "ConvexSolver",
    discretization_solver: callable,
    jax_constraints: "LoweredJaxConstraints",
    emitter: callable,
    params: dict,
    settings: Config,
    discretization_solver_impulsive: Callable[..., object] | None = None,
) -> None:
    """Initialize PTR algorithm.

    Stores compiled infrastructure and performs a warm-start solve to
    initialize DPP and JAX jacobians.

    Args:
        solver: Convex subproblem solver (e.g., CVXPySolver)
        discretization_solver: Compiled discretization solver
        jax_constraints: JIT-compiled constraint functions
        emitter: Callback for emitting iteration progress
        params: Problem parameters dictionary (for warm-start)
        settings: Configuration object (for warm-start)
        discretization_solver_impulsive: Optional impulsive/discrete
            discretization solver used to populate W/x_prop_plus/D_d/E_d.
    """
    # Store immutable infrastructure
    self._solver = solver
    self._discretization_solver = discretization_solver
    self._discretization_solver_impulsive = discretization_solver_impulsive
    self._jax_constraints = jax_constraints
    self._emitter = emitter

    # Set boundary conditions
    self._solver.update_boundary_conditions(
        x_init=settings.sim.x.initial,
        x_term=settings.sim.x.final,
    )

    # Create temporary state for initialization solve
    init_state = AlgorithmState.from_settings(settings, self.weights)

    # Solve a dumb problem to initialize DPP and JAX jacobians
    _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
        self._discretization_solver, init_state.x, init_state.u.astype(float), params
    )

    init_state.add_discretization(V_multi_shoot.__array__())
    slice_imp = settings.sim.u.slice_impulsive
    has_impulsive = bool(slice_imp.stop > slice_imp.start)
    if has_impulsive and self._discretization_solver_impulsive is not None:
        u_init = init_state.u.astype(float)
        x0_prior = self._recover_prior_node_from_initial(settings, init_state.x[0])
        x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
        _, _, _, W_multi_shoot = self._invoke_solver(
            self._discretization_solver_impulsive,
            x_nodes_prior,
            u_init,
            params,
        )
        init_state.add_impulsive_discretization(W_multi_shoot.__array__())
    _ = self._subproblem(params, init_state, settings)
step(state: AlgorithmState, params: dict, settings: Config) -> bool

Execute one PTR iteration.

Solves the convex subproblem, updates state in place, and checks convergence based on trust region, virtual buffer, and virtual control costs.

Parameters:

  • state (AlgorithmState, required): Mutable solver state (modified in place).
  • params (dict, required): Problem parameters dictionary (may change between steps).
  • settings (Config, required): Configuration object (may change between steps).

Returns:

  • bool: True if J_tr, J_vb, and J_vc are all below their thresholds.

Raises:

  • RuntimeError: If initialize() has not been called.

Source code in openscvx/algorithms/penalized_trust_region.py
def step(
    self,
    state: AlgorithmState,
    params: dict,
    settings: Config,
) -> bool:
    """Execute one PTR iteration.

    Solves the convex subproblem, updates state in place, and checks
    convergence based on trust region, virtual buffer, and virtual
    control costs.

    Args:
        state: Mutable solver state (modified in place)
        params: Problem parameters dictionary (may change between steps)
        settings: Configuration object (may change between steps)

    Returns:
        True if J_tr, J_vb, and J_vc are all below their thresholds.

    Raises:
        RuntimeError: If initialize() has not been called.
    """
    if self._solver is None:
        raise RuntimeError(
            "PenalizedTrustRegion.step() called before initialize(). "
            "Call initialize() first to set up compiled infrastructure."
        )

    # Compute discretization before subproblem only for the first iteration
    if state.k == 1:
        t0 = time.time()
        _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
            self._discretization_solver, state.x, state.u.astype(float), params
        )

        u_state = state.u.astype(float)
        x0_prior = self._recover_prior_node_from_initial(settings, state.x[0])
        x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
        _, _, _, W_multi_shoot = self._invoke_solver(
            self._discretization_solver_impulsive, x_nodes_prior, u_state, params
        )
        dis_time = time.time() - t0

        state.add_discretization(V_multi_shoot.__array__())
        state.add_impulsive_discretization(W_multi_shoot.__array__())

    # Run the subproblem
    (
        x_sol,
        u_sol,
        cost,
        J_total,
        J_vb_vec,
        J_vc_vec,
        J_tr_vec,
        prob_stat,
        subprop_time,
        vc_mat,
        tr_mat,
    ) = self._subproblem(params, state, settings)

    candidate = CandidateIterate()
    candidate.x = x_sol
    candidate.u = u_sol
    candidate.J_lin = J_total

    t0 = time.time()
    _, _, _, x_prop, V_multi_shoot = self._invoke_solver(
        self._discretization_solver, candidate.x, candidate.u.astype(float), params
    )

    u_candidate = candidate.u.astype(float)
    x0_prior = self._recover_prior_node_from_initial(settings, candidate.x[0])
    x_nodes_prior = np.vstack((x0_prior, np.asarray(x_prop)))
    x_prop_plus, D_d, E_d, W_multi_shoot = self._invoke_solver(
        self._discretization_solver_impulsive, x_nodes_prior, u_candidate, params
    )

    dis_time = time.time() - t0

    candidate.V = V_multi_shoot.__array__()
    candidate.W = W_multi_shoot.__array__()
    candidate.x_prop = x_prop.__array__()
    candidate.x_prop_plus = x_prop_plus.__array__()
    candidate.D_d = D_d.__array__()
    candidate.E_d = E_d.__array__()

    # Update state in place by appending to history
    # The x_guess/u_guess properties will automatically return the latest entry
    candidate.VC = vc_mat
    candidate.TR = tr_mat

    state.J_tr = np.sum(np.array(J_tr_vec))
    state.J_vb = np.sum(np.array(J_vb_vec))
    state.J_vc = np.sum(np.array(J_vc_vec))

    # Update weights in state using configured autotuning method
    adaptive_state = self.autotuner.update_weights(
        state, candidate, self._jax_constraints, settings, params, self.weights
    )

    # Build emission data - only include nonlinear/reduction metrics when
    # the autotuner actually uses them (constant/ramp methods don't)
    use_full_metrics = not isinstance(
        self.autotuner, (ConstantProximalWeight, RampProximalWeight)
    )

    emission_data = {
        "iter": state.k,
        "dis_time": dis_time * 1000.0,
        "subprop_time": subprop_time * 1000.0,
        "J_tr": state.J_tr,
        "J_vb": state.J_vb,
        "J_vc": state.J_vc,
        "cost": cost[-1],
        # TODO: (haynec) log per-variable lam_prox detail (e.g. min/max range)
        "lam_prox": float(np.max(state.lam_prox)),
        "prob_stat": prob_stat,
        "adaptive_state": adaptive_state,
        "ep_tr": self.ep_tr,
        "ep_vb": self.ep_vb,
        "ep_vc": self.ep_vc,
    }

    # Only include nonlinear/reduction metrics when autotuner uses them
    # (constant/ramp methods don't compute these, so we don't emit them)
    if use_full_metrics:
        pred_reduction = (
            state.pred_reduction_history[-1] if state.pred_reduction_history else 0.0
        )
        actual_reduction = (
            state.actual_reduction_history[-1] if state.actual_reduction_history else 0.0
        )
        acceptance_ratio = (
            state.acceptance_ratio_history[-1] if state.acceptance_ratio_history else 0.0
        )

        emission_data.update(
            {
                "J_nonlin": candidate.J_nonlin,
                "J_lin": candidate.J_lin,
                "pred_reduction": pred_reduction,
                "actual_reduction": actual_reduction,
                "acceptance_ratio": acceptance_ratio,
            }
        )

    # Emit data
    self._emitter(emission_data)

    # Increment iteration counter
    state.k += 1

    # Return convergence status
    return (state.J_tr < self.ep_tr) and (state.J_vb < self.ep_vb) and (state.J_vc < self.ep_vc)
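The convergence test at the end of step() is just three threshold comparisons on the accumulated penalty costs. As a standalone sketch (the default tolerances shown match the class defaults ep_tr=1e-4, ep_vb=1e-4, ep_vc=1e-8):

```python
def converged(J_tr, J_vb, J_vc, ep_tr=1e-4, ep_vb=1e-4, ep_vc=1e-8):
    # Trust region, virtual buffer, and virtual control penalties must
    # all fall below their tolerances simultaneously.
    return (J_tr < ep_tr) and (J_vb < ep_vb) and (J_vc < ep_vc)


print(converged(1e-5, 1e-5, 1e-9))  # all three penalties small enough
print(converged(1e-3, 1e-5, 1e-9))  # trust region penalty still too large
```

Because the criterion is a conjunction, a solution with negligible dynamics violation (J_vc) but a still-moving iterate (large J_tr) is not declared converged.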

PenalizedTrustRegionConfig

Bases: BaseModel

Validates PenalizedTrustRegion configuration from dict input.

The autotuner field accepts:

  • None — defaults to :class:AugmentedLagrangian.
  • A string — class name only, default parameters.
  • A dict — class name via "type" key plus overrides.
  • An instance — already-constructed autotuner (pass-through).
Source code in openscvx/algorithms/__init__.py
class PenalizedTrustRegionConfig(BaseModel):
    """Validates PenalizedTrustRegion configuration from dict input.

    The ``autotuner`` field accepts:

    * ``None`` — defaults to :class:`AugmentedLagrangian`.
    * A **string** — class name only, default parameters.
    * A **dict** — class name via ``"type"`` key plus overrides.
    * An **instance** — already-constructed autotuner (pass-through).
    """

    autotuner: Optional[Union[AutotunerConfig, AutotuningBase]] = None

    @field_validator("autotuner", mode="before")
    @classmethod
    def _wrap_bare_string(cls, v: Any) -> Any:
        if isinstance(v, str):
            return {"type": v}
        return v

    k_max: int = 200
    t_max: Optional[float] = None
    lam_prox: Union[float, Dict[str, Any]] = 1e-1
    lam_vc: Union[float, Dict[str, Any]] = 1e0
    lam_cost: Union[float, Dict[str, Any]] = 1e-2
    lam_vb: float = 0.0
    ep_tr: float = 1e-4
    ep_vb: float = 1e-4
    ep_vc: float = 1e-8

    model_config = ConfigDict(extra="forbid", arbitrary_types_allowed=True)

    def to_algorithm(
        self,
        states: Optional[List[Any]] = None,
        controls: Optional[List[Any]] = None,
    ) -> PenalizedTrustRegion:
        at = self.autotuner
        if at is None:
            autotuner = None
        elif isinstance(at, AutotuningBase):
            autotuner = at
        else:
            autotuner = at.build()
        kwargs = self.model_dump(exclude={"autotuner"}, exclude_unset=True)
        return PenalizedTrustRegion(
            autotuner=autotuner,
            states=states,
            controls=controls,
            **kwargs,
        )
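The four accepted autotuner forms converge to one shape before validation: a bare string is wrapped into a `{"type": ...}` dict so downstream validation sees either a dict, an instance, or None. A minimal standalone sketch of that normalization (not the pydantic validator itself):

```python
def normalize_autotuner(v):
    # Mirrors the _wrap_bare_string validator: a bare class name becomes
    # a {"type": name} dict; dicts, instances, and None pass through.
    if isinstance(v, str):
        return {"type": v}
    return v


print(normalize_autotuner("AugmentedLagrangian"))
# {'type': 'AugmentedLagrangian'}
print(normalize_autotuner({"type": "RampProximalWeight", "ramp_factor": 1.2}))
# {'type': 'RampProximalWeight', 'ramp_factor': 1.2}
```

This keeps the config surface forgiving for users while the builder only ever handles one canonical dict form.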

RampProximalWeight

Bases: AutotuningBase

Ramp Proximal Weight method.

This method ramps the proximal weight up by a constant factor (ramp_factor) each iteration until it reaches lam_prox_max, then holds it constant.

Source code in openscvx/algorithms/ramp_proximal_weight.py
class RampProximalWeight(AutotuningBase):
    """Ramp Proximal Weight method.

    This method ramps the proximal weight up by a constant factor
    (``ramp_factor``) each iteration until it reaches ``lam_prox_max``,
    then holds it constant.
    """

    def __init__(
        self,
        ramp_factor: float = 1.0,
        lam_prox_max: float = 1e3,
        lam_cost_drop: int = -1,
        lam_cost_relax: float = 1.0,
    ):
        self.ramp_factor = ramp_factor
        self.lam_prox_max = lam_prox_max
        self.lam_cost_drop = lam_cost_drop
        self.lam_cost_relax = lam_cost_relax

    def update_weights(
        self,
        state: "AlgorithmState",
        candidate: "CandidateIterate",
        nodal_constraints: "LoweredJaxConstraints",
        settings: Config,
        params: dict,
        weights: "Weights",
    ) -> str:
        """Update SCP weights keeping trust region constant.

        Args:
            state: Solver state containing current weight values (mutated in place)
            nodal_constraints: Lowered JAX constraints
            settings: Configuration object containing adaptation parameters
            params: Dictionary of problem parameters
            weights: Initial weights from the algorithm

        Returns:
            str: Adaptive state string (e.g., "Accept", "Reject")
        """
        # Update cost relaxation parameter after cost_drop iterations.
        # When lam_cost is a per-state array, scalar lam_cost_relax scales
        # uniformly, preserving the user-specified per-state weight ratios.
        if state.k > self.lam_cost_drop:
            candidate.lam_cost = state.lam_cost * self.lam_cost_relax
        else:
            candidate.lam_cost = weights.lam_cost

        # Check if we're already at max before updating
        was_at_max = np.all(state.lam_prox >= self.lam_prox_max)

        # Calculate and append new value
        new_lam_prox = np.minimum(state.lam_prox * self.ramp_factor, self.lam_prox_max)
        candidate.lam_prox = new_lam_prox

        # If we were already at max, or if we just reached it and it's staying constant
        if was_at_max:
            state.accept_solution(candidate)
            return "Accept Constant"
        else:
            state.accept_solution(candidate)
            return "Accept Higher"
update_weights(state: AlgorithmState, candidate: CandidateIterate, nodal_constraints: LoweredJaxConstraints, settings: Config, params: dict, weights: Weights) -> str

Update SCP weights, ramping the proximal weight toward its cap.

Parameters:

  • state (AlgorithmState, required): Solver state containing current weight values (mutated in place).
  • candidate (CandidateIterate, required): Candidate iterate whose weights are set before acceptance.
  • nodal_constraints (LoweredJaxConstraints, required): Lowered JAX constraints.
  • settings (Config, required): Configuration object containing adaptation parameters.
  • params (dict, required): Dictionary of problem parameters.
  • weights (Weights, required): Initial weights from the algorithm.

Returns:

  • str: Adaptive state string ("Accept Higher" or "Accept Constant").

Source code in openscvx/algorithms/ramp_proximal_weight.py
def update_weights(
    self,
    state: "AlgorithmState",
    candidate: "CandidateIterate",
    nodal_constraints: "LoweredJaxConstraints",
    settings: Config,
    params: dict,
    weights: "Weights",
) -> str:
    """Update SCP weights keeping trust region constant.

    Args:
        state: Solver state containing current weight values (mutated in place)
        nodal_constraints: Lowered JAX constraints
        settings: Configuration object containing adaptation parameters
        params: Dictionary of problem parameters
        weights: Initial weights from the algorithm

    Returns:
        str: Adaptive state string (e.g., "Accept", "Reject")
    """
    # Update cost relaxation parameter after cost_drop iterations.
    # When lam_cost is a per-state array, scalar lam_cost_relax scales
    # uniformly, preserving the user-specified per-state weight ratios.
    if state.k > self.lam_cost_drop:
        candidate.lam_cost = state.lam_cost * self.lam_cost_relax
    else:
        candidate.lam_cost = weights.lam_cost

    # Check if we're already at max before updating
    was_at_max = np.all(state.lam_prox >= self.lam_prox_max)

    # Calculate and append new value
    new_lam_prox = np.minimum(state.lam_prox * self.ramp_factor, self.lam_prox_max)
    candidate.lam_prox = new_lam_prox

    # If we were already at max, or if we just reached it and it's staying constant
    if was_at_max:
        state.accept_solution(candidate)
        return "Accept Constant"
    else:
        state.accept_solution(candidate)
        return "Accept Higher"

Weights dataclass

Penalty weights for the SCP objective function.

Each SCP subproblem minimizes a weighted sum of the original cost, trust region penalty, virtual control penalty, and virtual buffer penalty. This dataclass holds the weights for each term.

Weights can be accessed directly (weights.lam_prox) or through the algorithm's convenience properties (algorithm.lam_prox). During SCP iteration, the autotuner may mutate these fields; those changes are tracked in :class:AlgorithmState weight histories.

Use :meth:build to construct from user-friendly inputs (floats or {name: weight} dicts). Use :meth:build_vb_arrays to populate lam_vb_nodal and lam_vb_cross once symbolic constraints are available.

Attributes:

  • lam_prox (Union[float, ndarray]): Trust region (proximal) weight. Scalar (uniform) or array of shape (n_x + n_u,) for per-variable weighting, or (N, n_x + n_u) for per-node-per-variable weighting.
  • lam_vc (Union[float, ndarray]): Virtual control penalty weight. Scalar (uniform) or array of shape (n_x,) for per-state weighting, or (N-1, n_x) for per-node-per-state weighting.
  • lam_cost (Union[float, ndarray]): Cost weight. Scalar (uniform across all minimize/maximize states) or array of shape (n_x,) for per-state weighting.
  • lam_vb (float): Global virtual buffer penalty weight. Scalar default applied to every constraint unless overridden via .weight() on individual constraints.
  • lam_vb_nodal (Optional[ndarray]): Per-node weights for nodal constraints, shape (N, n_nodal). Built by :meth:build_vb_arrays.
  • lam_vb_cross (Optional[ndarray]): Weights for cross-node constraints, shape (n_cross,). Built by :meth:build_vb_arrays.

Source code in openscvx/algorithms/weights.py
@dataclass
class Weights:
    """Penalty weights for the SCP objective function.

    Each SCP subproblem minimizes a weighted sum of the original cost,
    trust region penalty, virtual control penalty, and virtual buffer
    penalty. This dataclass holds the weights for each term.

    Weights can be accessed directly (``weights.lam_prox``) or through
    the algorithm's convenience properties (``algorithm.lam_prox``).
    During SCP iteration, the autotuner may mutate these fields; those
    changes are tracked in :class:`AlgorithmState` weight histories.

    Use :meth:`build` to construct from user-friendly inputs (floats or
    ``{name: weight}`` dicts). Use :meth:`build_vb_arrays` to populate
    ``lam_vb_nodal`` and ``lam_vb_cross`` once symbolic constraints are
    available.

    Attributes:
        lam_prox: Trust region (proximal) weight. Scalar (uniform) or
            array of shape ``(n_x + n_u,)`` for per-variable weighting,
            or ``(N, n_x + n_u)`` for per-node-per-variable weighting.
        lam_vc: Virtual control penalty weight. Scalar (uniform) or
            array of shape ``(n_x,)`` for per-state weighting, or
            ``(N-1, n_x)`` for per-node-per-state weighting.
        lam_cost: Cost weight. Scalar (uniform across all
            minimize/maximize states) or array of shape ``(n_x,)`` for
            per-state weighting.
        lam_vb: Global virtual buffer penalty weight. Scalar default
            applied to every constraint unless overridden via
            ``.weight()`` on individual constraints.
        lam_vb_nodal: Per-node weights for nodal constraints, shape
            ``(N, n_nodal)``. Built by :meth:`build_vb_arrays`.
        lam_vb_cross: Weights for cross-node constraints, shape
            ``(n_cross,)``. Built by :meth:`build_vb_arrays`.
    """

    lam_prox: Union[float, np.ndarray] = 1e-1
    lam_vc: Union[float, np.ndarray] = 1e0
    lam_cost: Union[float, np.ndarray] = 1e-2
    lam_vb: float = 0.0
    lam_vb_nodal: Optional[np.ndarray] = None
    lam_vb_cross: Optional[np.ndarray] = None

    def __post_init__(self):
        # Coerce lists/lists-of-lists to numpy arrays.
        if isinstance(self.lam_prox, (list, tuple)):
            self.lam_prox = np.asarray(self.lam_prox, dtype=float)
        if isinstance(self.lam_vc, (list, tuple)):
            self.lam_vc = np.asarray(self.lam_vc, dtype=float)
        if isinstance(self.lam_cost, (list, tuple)):
            self.lam_cost = np.asarray(self.lam_cost, dtype=float)

    @classmethod
    def build(
        cls,
        lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 1e-1,
        lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 1e0,
        lam_cost: Union[float, Dict[str, float]] = 1e-2,
        lam_vb: float = 0.0,
        states: Optional[List["State"]] = None,
        controls: Optional[List["Control"]] = None,
    ) -> "Weights":
        """Construct Weights from user-friendly inputs.

        Accepts floats (applied uniformly) or dicts mapping state/control
        names to per-variable weights. Dict inputs are expanded to dense
        arrays via the ``resolve_lam_*`` methods.

        Args:
            lam_prox: Trust region weight. Float or ``{name: weight}`` dict.
                Dict requires *states* and *controls*.
            lam_vc: Virtual control weight. Float or ``{state_name: weight}``
                dict. Dict requires *states*.
            lam_cost: Cost weight. Float or ``{state_name: weight}`` dict.
                Dict requires *states*.
            lam_vb: Virtual buffer default weight (scalar).
            states: Symbolic State objects (required when any weight is a dict).
            controls: Symbolic Control objects (required when *lam_prox* is a dict).

        Returns:
            A new Weights instance with resolved numeric values.

        Raises:
            ValueError: If a dict weight is given without the required
                states/controls.
        """
        return cls(
            lam_prox=cls.resolve_lam_prox(lam_prox, states, controls),
            lam_vc=cls.resolve_lam_vc(lam_vc, states),
            lam_cost=cls.resolve_lam_cost(lam_cost, states),
            lam_vb=float(lam_vb),
        )

    @staticmethod
    def resolve_lam_prox(
        lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]],
        states: Optional[List["State"]] = None,
        controls: Optional[List["Control"]] = None,
    ) -> Union[float, np.ndarray]:
        """Resolve a ``lam_prox`` spec to a numeric value.

        If *lam_prox* is a float it is returned as-is. If it is a dict
        mapping state/control names to weights, it is expanded to a dense
        array using each variable's ``_slice``. Variables not in the dict
        default to ``1.0``.

        Dict values may be scalars, 1-D arrays (per-component), or 2-D
        arrays of shape ``(K, n_components)`` for per-node-per-component
        weighting. All 2-D entries must agree on *K*.

        Args:
            lam_prox: Scalar weight or ``{name: weight}`` dict.
            states: Symbolic State objects (required when *lam_prox* is a dict).
            controls: Symbolic Control objects (required when *lam_prox* is a dict).

        Returns:
            float or np.ndarray of shape ``(n_states + n_controls,)`` or
            ``(K, n_states + n_controls)``.

        Raises:
            ValueError: If *lam_prox* is a dict and *states*/*controls* is ``None``,
                or if dict contains unknown names, or if 2-D entries disagree on K.
        """
        if not isinstance(lam_prox, dict):
            return lam_prox

        if states is None or controls is None:
            raise ValueError(
                "lam_prox was specified as a dict but states and/or "
                "controls were not provided. Pass both so the dict can "
                "be expanded to a per-variable weight array."
            )

        n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)
        n_controls = sum(c.shape[0] if len(c.shape) > 0 else 1 for c in controls)
        n_total = n_states + n_controls

        valid_state_names = {s.name for s in states}
        valid_control_names = {c.name for c in controls}
        valid_names = valid_state_names | valid_control_names
        unknown = set(lam_prox.keys()) - valid_names
        if unknown:
            raise ValueError(
                f"lam_prox dict contains unknown name(s): {unknown}. "
                f"Valid names: {sorted(valid_names)}"
            )

        # Build a unified list of (name, n_components, slice_in_output).
        # States occupy columns [0, n_states), controls occupy [n_states, n_total).
        variables: list = []
        for s in states:
            nc = s.shape[0] if len(s.shape) > 0 else 1
            variables.append((s.name, nc, s._slice))
        for c in controls:
            nc = c.shape[0] if len(c.shape) > 0 else 1
            out_slice = slice(n_states + c._slice.start, n_states + c._slice.stop)
            variables.append((c.name, nc, out_slice))

        # First pass: determine if any entry is 2-D and infer K.
        n_nodes: Optional[int] = None
        for name, n_comp, _ in variables:
            if name not in lam_prox:
                continue
            val = np.asarray(lam_prox[name], dtype=float)
            if val.ndim == 2:
                if n_nodes is None:
                    n_nodes = val.shape[0]
                elif val.shape[0] != n_nodes:
                    raise ValueError(
                        f"lam_prox['{name}'] has {val.shape[0]} rows, but a "
                        f"previous entry had {n_nodes} rows. All 2-D entries "
                        f"must have the same number of rows (n_nodes)."
                    )

        # Build the output array.
        if n_nodes is not None:
            lam_arr = np.ones((n_nodes, n_total))
        else:
            lam_arr = np.ones(n_total)

        for name, n_comp, out_slice in variables:
            if name not in lam_prox:
                continue
            val = np.asarray(lam_prox[name], dtype=float)

            if val.ndim == 0:
                lam_arr[..., out_slice] = float(val)
            elif val.ndim == 1:
                if val.shape[0] != n_comp:
                    raise ValueError(
                        f"lam_prox['{name}'] has length {val.shape[0]}, "
                        f"expected scalar or length {n_comp}"
                    )
                lam_arr[..., out_slice] = val
            elif val.ndim == 2:
                if val.shape[1] != n_comp:
                    raise ValueError(
                        f"lam_prox['{name}'] has {val.shape[1]} columns, expected {n_comp}"
                    )
                lam_arr[:, out_slice] = val
            else:
                raise ValueError(
                    f"lam_prox['{name}'] has {val.ndim} dimensions, expected scalar, 1-D, or 2-D"
                )

        return lam_arr

    @staticmethod
    def resolve_lam_vc(
        lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]],
        states: Optional[List["State"]] = None,
    ) -> Union[float, np.ndarray]:
        """Resolve a ``lam_vc`` spec to a numeric value.

        If *lam_vc* is a float it is returned as-is. If it is a dict
        mapping state names to weights, it is expanded to a dense array
        using each state's ``_slice``. States not in the dict default
        to ``1.0``.

        Dict values may be scalars, 1-D arrays (per-component), or 2-D
        arrays of shape ``(K, n_components)`` for per-node-per-component
        weighting. All 2-D entries must agree on *K*.

        Args:
            lam_vc: Scalar weight or ``{state_name: weight}`` dict.
            states: Symbolic State objects (required when *lam_vc* is a dict).

        Returns:
            float or np.ndarray of shape ``(n_states,)`` or ``(K, n_states)``.

        Raises:
            ValueError: If *lam_vc* is a dict and *states* is ``None``,
                or if dict contains unknown names, or if 2-D entries disagree on K.
        """
        if not isinstance(lam_vc, dict):
            return lam_vc

        if states is None:
            raise ValueError(
                "lam_vc was specified as a dict but no states were "
                "provided. Pass states so the dict can be expanded to "
                "a per-state weight array."
            )

        n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)

        valid_names = {s.name for s in states}
        unknown = set(lam_vc.keys()) - valid_names
        if unknown:
            raise ValueError(
                f"lam_vc dict contains unknown state name(s): {unknown}. "
                f"Valid state names: {sorted(valid_names)}"
            )

        # First pass: determine if any entry is 2-D and infer K.
        n_nodes: Optional[int] = None
        for state in states:
            if state.name not in lam_vc:
                continue
            val = np.asarray(lam_vc[state.name], dtype=float)
            if val.ndim == 2:
                if n_nodes is None:
                    n_nodes = val.shape[0]
                elif val.shape[0] != n_nodes:
                    raise ValueError(
                        f"lam_vc['{state.name}'] has {val.shape[0]} rows, but a "
                        f"previous entry had {n_nodes} rows. All 2-D entries "
                        f"must have the same number of rows (K)."
                    )

        # Build the output array.
        if n_nodes is not None:
            lam_arr = np.ones((n_nodes, n_states))
        else:
            lam_arr = np.ones(n_states)

        for state in states:
            if state.name not in lam_vc:
                continue
            val = np.asarray(lam_vc[state.name], dtype=float)
            n_components = state.shape[0] if len(state.shape) > 0 else 1

            if val.ndim == 0:
                lam_arr[..., state._slice] = float(val)
            elif val.ndim == 1:
                if val.shape[0] != n_components:
                    raise ValueError(
                        f"lam_vc['{state.name}'] has length {val.shape[0]}, "
                        f"expected scalar or length {n_components}"
                    )
                lam_arr[..., state._slice] = val
            elif val.ndim == 2:
                if val.shape[1] != n_components:
                    raise ValueError(
                        f"lam_vc['{state.name}'] has {val.shape[1]} columns, "
                        f"expected {n_components}"
                    )
                lam_arr[:, state._slice] = val
            else:
                raise ValueError(
                    f"lam_vc['{state.name}'] has {val.ndim} dimensions, "
                    f"expected scalar, 1-D, or 2-D"
                )

        return lam_arr

    @staticmethod
    def resolve_lam_cost(
        lam_cost: Union[float, Dict[str, Union[float, list, np.ndarray]]],
        states: Optional[List["State"]] = None,
    ) -> Union[float, np.ndarray]:
        """Resolve a ``lam_cost`` spec to a numeric value.

        If *lam_cost* is a float it is returned as-is. If it is a dict
        mapping state names to weights, it is expanded to a dense array
        of shape ``(n_states,)`` using each state's ``_slice``. States
        without a minimize/maximize objective receive weight 0. States
        **with** an objective **must** appear in the dict.

        Dict values may be scalars (broadcast to every component) or
        arrays matching the state's shape for per-component weighting,
        e.g. ``{"position": [0, 0, 1e-6]}``.

        Args:
            lam_cost: Scalar weight or ``{state_name: weight}`` dict.
            states: Symbolic State objects (required when *lam_cost* is a dict).

        Returns:
            float or np.ndarray of shape ``(n_states,)``.

        Raises:
            ValueError: If *lam_cost* is a dict and *states* is ``None``,
                or if dict contains unknown names, or if dict is missing
                entries for states with minimize/maximize objectives.
        """
        if not isinstance(lam_cost, dict):
            return lam_cost

        if states is None:
            raise ValueError(
                "lam_cost was specified as a dict but no states were "
                "provided. Pass states so the dict can be expanded to "
                "a per-state weight array."
            )

        n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)
        lam_arr = np.zeros(n_states)

        valid_names = {s.name for s in states}
        unknown = set(lam_cost.keys()) - valid_names
        if unknown:
            raise ValueError(
                f"lam_cost dict contains unknown state name(s): {unknown}. "
                f"Valid state names: {sorted(valid_names)}"
            )

        # Identify states that have minimize/maximize objectives.
        cost_states: Set[str] = set()
        for state in states:
            if state.initial_type is not None:
                for t in state.initial_type:
                    if t in ("Minimize", "Maximize"):
                        cost_states.add(state.name)
                        break
            if state.final_type is not None:
                for t in state.final_type:
                    if t in ("Minimize", "Maximize"):
                        cost_states.add(state.name)
                        break

        # Check that all cost states are in the dict
        missing = cost_states - set(lam_cost.keys())
        if missing:
            raise ValueError(
                f"lam_cost dict is missing weight(s) for state(s) with "
                f"minimize/maximize objectives: {missing}. All states with "
                f"cost terms must have a weight in the dict."
            )

        # Fill the array.
        for state in states:
            if state.name in lam_cost:
                val = np.asarray(lam_cost[state.name], dtype=float)
                n_components = state.shape[0] if len(state.shape) > 0 else 1
                if val.ndim > 0 and val.shape[0] != n_components:
                    raise ValueError(
                        f"lam_cost['{state.name}'] has length {val.shape[0]}, "
                        f"expected scalar or length {n_components}"
                    )
                lam_arr[state._slice] = val

        return lam_arr

    def build_vb_arrays(
        self,
        N: int,
        nodal_constraints: list,
        cross_node_constraints: list,
        n_byof_nodal: int = 0,
        n_byof_cross: int = 0,
    ) -> None:
        """Build per-constraint virtual buffer weight arrays.

        Inspects each symbolic constraint's shape (to account for vector
        decomposition) and ``.weight()`` overrides, then populates
        ``self.lam_vb_nodal`` and ``self.lam_vb_cross``.

        Args:
            N: Number of trajectory nodes.
            nodal_constraints: Symbolic ``NodalConstraint`` objects (post-
                preprocessing, pre-lowering).
            cross_node_constraints: Symbolic ``CrossNodeConstraint`` objects.
            n_byof_nodal: Number of byof nodal constraints (each adds one
                column with the default weight).
            n_byof_cross: Number of byof cross-node constraints (each adds
                one entry with the default weight).
        """
        default_vb = float(self.lam_vb)

        # Count decomposed nodal constraints (vector → multiple scalars).
        # Vector constraints are decomposed element-wise during lowering
        # (see decompose_vector_nodal_constraints), so each element gets its
        # own column.  We mirror that here via check_shape() to ensure the
        # array dimensions match the post-decomposition constraint count.
        n_nodal = 0
        for nc in nodal_constraints:
            shape = nc.constraint.lhs.check_shape()
            n_nodal += int(np.prod(shape)) if len(shape) > 0 else 1

        # Byof constraints are scalar (one column each) and are appended
        # after the symbolic constraints.
        n_nodal += n_byof_nodal
        n_cross = len(cross_node_constraints) + n_byof_cross

        # max(..., 1) avoids size-0 CVXPy parameters.
        n_nodal_param = max(n_nodal, 1)
        n_cross_param = max(n_cross, 1)

        lam_vb_nodal = np.full((N, n_nodal_param), default_vb)
        lam_vb_cross = np.full(n_cross_param, default_vb)

        # Apply per-constraint .weight() overrides for nodal constraints.
        col = 0
        for nc in nodal_constraints:
            shape = nc.constraint.lhs.check_shape()
            n_elem = int(np.prod(shape)) if len(shape) > 0 else 1

            w = nc._lam_vb
            if w is not None:
                nodes = nc.nodes if nc.nodes is not None else list(range(N))
                if isinstance(w, (int, float)):
                    lam_vb_nodal[nodes, col : col + n_elem] = float(w)
                elif isinstance(w, np.ndarray):
                    if w.ndim == 1:
                        # (n_elem,) — broadcast across nodes
                        for i in range(n_elem):
                            val = float(w[0]) if len(w) == 1 else float(w[i])
                            lam_vb_nodal[nodes, col + i] = val
                    elif w.ndim == 2:
                        # (n_nodes, n_elem) — per-node-per-element
                        for i in range(n_elem):
                            c_i = 0 if w.shape[1] == 1 else i
                            lam_vb_nodal[nodes, col + i] = w[:, c_i]

            col += n_elem

        # Apply per-constraint .weight() overrides for cross-node constraints.
        for idx, cc in enumerate(cross_node_constraints):
            if cc._lam_vb is not None:
                lam_vb_cross[idx] = float(cc._lam_vb)

        self.lam_vb_nodal = lam_vb_nodal
        self.lam_vb_cross = lam_vb_cross
build(lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 0.1, lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 1.0, lam_cost: Union[float, Dict[str, float]] = 0.01, lam_vb: float = 0.0, states: Optional[List[State]] = None, controls: Optional[List[Control]] = None) -> Weights classmethod

Construct `Weights` from user-friendly inputs.

Accepts floats (applied uniformly) or dicts mapping state/control names to per-variable weights. Dict inputs are expanded to dense arrays via the `resolve_lam_*` methods.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lam_prox` | `Union[float, Dict[str, Union[float, list, ndarray]]]` | Trust region weight. Float or `{name: weight}` dict. Dict requires `states` and `controls`. | `0.1` |
| `lam_vc` | `Union[float, Dict[str, Union[float, list, ndarray]]]` | Virtual control weight. Float or `{state_name: weight}` dict. Dict requires `states`. | `1.0` |
| `lam_cost` | `Union[float, Dict[str, float]]` | Cost weight. Float or `{state_name: weight}` dict. Dict requires `states`. | `0.01` |
| `lam_vb` | `float` | Virtual buffer default weight (scalar). | `0.0` |
| `states` | `Optional[List[State]]` | Symbolic State objects (required when any weight is a dict). | `None` |
| `controls` | `Optional[List[Control]]` | Symbolic Control objects (required when `lam_prox` is a dict). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Weights` | A new `Weights` instance with resolved numeric values. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If a dict weight is given without the required `states`/`controls`. |

Source code in openscvx/algorithms/weights.py
@classmethod
def build(
    cls,
    lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 1e-1,
    lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]] = 1e0,
    lam_cost: Union[float, Dict[str, float]] = 1e-2,
    lam_vb: float = 0.0,
    states: Optional[List["State"]] = None,
    controls: Optional[List["Control"]] = None,
) -> "Weights":
    """Construct Weights from user-friendly inputs.

    Accepts floats (applied uniformly) or dicts mapping state/control
    names to per-variable weights. Dict inputs are expanded to dense
    arrays via the ``resolve_lam_*`` methods.

    Args:
        lam_prox: Trust region weight. Float or ``{name: weight}`` dict.
            Dict requires *states* and *controls*.
        lam_vc: Virtual control weight. Float or ``{state_name: weight}``
            dict. Dict requires *states*.
        lam_cost: Cost weight. Float or ``{state_name: weight}`` dict.
            Dict requires *states*.
        lam_vb: Virtual buffer default weight (scalar).
        states: Symbolic State objects (required when any weight is a dict).
        controls: Symbolic Control objects (required when *lam_prox* is a dict).

    Returns:
        A new Weights instance with resolved numeric values.

    Raises:
        ValueError: If a dict weight is given without the required
            states/controls.
    """
    return cls(
        lam_prox=cls.resolve_lam_prox(lam_prox, states, controls),
        lam_vc=cls.resolve_lam_vc(lam_vc, states),
        lam_cost=cls.resolve_lam_cost(lam_cost, states),
        lam_vb=float(lam_vb),
    )
build_vb_arrays(N: int, nodal_constraints: list, cross_node_constraints: list, n_byof_nodal: int = 0, n_byof_cross: int = 0) -> None

Build per-constraint virtual buffer weight arrays.

Inspects each symbolic constraint's shape (to account for vector decomposition) and `.weight()` overrides, then populates `self.lam_vb_nodal` and `self.lam_vb_cross`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `N` | `int` | Number of trajectory nodes. | *required* |
| `nodal_constraints` | `list` | Symbolic `NodalConstraint` objects (post-preprocessing, pre-lowering). | *required* |
| `cross_node_constraints` | `list` | Symbolic `CrossNodeConstraint` objects. | *required* |
| `n_byof_nodal` | `int` | Number of byof nodal constraints (each adds one column with the default weight). | `0` |
| `n_byof_cross` | `int` | Number of byof cross-node constraints (each adds one entry with the default weight). | `0` |
Source code in openscvx/algorithms/weights.py
def build_vb_arrays(
    self,
    N: int,
    nodal_constraints: list,
    cross_node_constraints: list,
    n_byof_nodal: int = 0,
    n_byof_cross: int = 0,
) -> None:
    """Build per-constraint virtual buffer weight arrays.

    Inspects each symbolic constraint's shape (to account for vector
    decomposition) and ``.weight()`` overrides, then populates
    ``self.lam_vb_nodal`` and ``self.lam_vb_cross``.

    Args:
        N: Number of trajectory nodes.
        nodal_constraints: Symbolic ``NodalConstraint`` objects (post-
            preprocessing, pre-lowering).
        cross_node_constraints: Symbolic ``CrossNodeConstraint`` objects.
        n_byof_nodal: Number of byof nodal constraints (each adds one
            column with the default weight).
        n_byof_cross: Number of byof cross-node constraints (each adds
            one entry with the default weight).
    """
    default_vb = float(self.lam_vb)

    # Count decomposed nodal constraints (vector → multiple scalars).
    # Vector constraints are decomposed element-wise during lowering
    # (see decompose_vector_nodal_constraints), so each element gets its
    # own column.  We mirror that here via check_shape() to ensure the
    # array dimensions match the post-decomposition constraint count.
    n_nodal = 0
    for nc in nodal_constraints:
        shape = nc.constraint.lhs.check_shape()
        n_nodal += int(np.prod(shape)) if len(shape) > 0 else 1

    # Byof constraints are scalar (one column each) and are appended
    # after the symbolic constraints.
    n_nodal += n_byof_nodal
    n_cross = len(cross_node_constraints) + n_byof_cross

    # max(..., 1) avoids size-0 CVXPy parameters.
    n_nodal_param = max(n_nodal, 1)
    n_cross_param = max(n_cross, 1)

    lam_vb_nodal = np.full((N, n_nodal_param), default_vb)
    lam_vb_cross = np.full(n_cross_param, default_vb)

    # Apply per-constraint .weight() overrides for nodal constraints.
    col = 0
    for nc in nodal_constraints:
        shape = nc.constraint.lhs.check_shape()
        n_elem = int(np.prod(shape)) if len(shape) > 0 else 1

        w = nc._lam_vb
        if w is not None:
            nodes = nc.nodes if nc.nodes is not None else list(range(N))
            if isinstance(w, (int, float)):
                lam_vb_nodal[nodes, col : col + n_elem] = float(w)
            elif isinstance(w, np.ndarray):
                if w.ndim == 1:
                    # (n_elem,) — broadcast across nodes
                    for i in range(n_elem):
                        val = float(w[0]) if len(w) == 1 else float(w[i])
                        lam_vb_nodal[nodes, col + i] = val
                elif w.ndim == 2:
                    # (n_nodes, n_elem) — per-node-per-element
                    for i in range(n_elem):
                        c_i = 0 if w.shape[1] == 1 else i
                        lam_vb_nodal[nodes, col + i] = w[:, c_i]

        col += n_elem

    # Apply per-constraint .weight() overrides for cross-node constraints.
    for idx, cc in enumerate(cross_node_constraints):
        if cc._lam_vb is not None:
            lam_vb_cross[idx] = float(cc._lam_vb)

    self.lam_vb_nodal = lam_vb_nodal
    self.lam_vb_cross = lam_vb_cross
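The default-plus-override behavior above can be sketched with plain NumPy (the sizes and node set are hypothetical; the real method derives them from the symbolic constraints):

```python
import numpy as np

N = 5           # trajectory nodes
n_nodal = 3     # columns after vector decomposition
default_vb = 0.0

# Every constraint column starts at the default weight (self.lam_vb).
lam_vb_nodal = np.full((N, n_nodal), default_vb)

# A .weight(100.0) override on a 3-element vector constraint restricted to
# nodes [1, 2, 3] fills its three columns at those nodes only.
lam_vb_nodal[[1, 2, 3], 0:3] = 100.0

print(lam_vb_nodal[:, 0].tolist())  # → [0.0, 100.0, 100.0, 100.0, 0.0]
```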
resolve_lam_cost(lam_cost: Union[float, Dict[str, Union[float, list, np.ndarray]]], states: Optional[List[State]] = None) -> Union[float, np.ndarray] staticmethod

Resolve a `lam_cost` spec to a numeric value.

If `lam_cost` is a float it is returned as-is. If it is a dict mapping state names to weights, it is expanded to a dense array of shape `(n_states,)` using each state's `_slice`. States without a minimize/maximize objective receive weight 0. States **with** an objective **must** appear in the dict.

Dict values may be scalars (broadcast to every component) or arrays matching the state's shape for per-component weighting, e.g. `{"position": [0, 0, 1e-6]}`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lam_cost` | `Union[float, Dict[str, Union[float, list, ndarray]]]` | Scalar weight or `{state_name: weight}` dict. | *required* |
| `states` | `Optional[List[State]]` | Symbolic State objects (required when `lam_cost` is a dict). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Union[float, ndarray]` | float or `np.ndarray` of shape `(n_states,)`. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lam_cost` is a dict and `states` is `None`, or if the dict contains unknown names, or if the dict is missing entries for states with minimize/maximize objectives. |

Source code in openscvx/algorithms/weights.py
@staticmethod
def resolve_lam_cost(
    lam_cost: Union[float, Dict[str, Union[float, list, np.ndarray]]],
    states: Optional[List["State"]] = None,
) -> Union[float, np.ndarray]:
    """Resolve a ``lam_cost`` spec to a numeric value.

    If *lam_cost* is a float it is returned as-is. If it is a dict
    mapping state names to weights, it is expanded to a dense array
    of shape ``(n_states,)`` using each state's ``_slice``. States
    without a minimize/maximize objective receive weight 0. States
    **with** an objective **must** appear in the dict.

    Dict values may be scalars (broadcast to every component) or
    arrays matching the state's shape for per-component weighting,
    e.g. ``{"position": [0, 0, 1e-6]}``.

    Args:
        lam_cost: Scalar weight or ``{state_name: weight}`` dict.
        states: Symbolic State objects (required when *lam_cost* is a dict).

    Returns:
        float or np.ndarray of shape ``(n_states,)``.

    Raises:
        ValueError: If *lam_cost* is a dict and *states* is ``None``,
            or if dict contains unknown names, or if dict is missing
            entries for states with minimize/maximize objectives.
    """
    if not isinstance(lam_cost, dict):
        return lam_cost

    if states is None:
        raise ValueError(
            "lam_cost was specified as a dict but no states were "
            "provided. Pass states so the dict can be expanded to "
            "a per-state weight array."
        )

    n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)
    lam_arr = np.zeros(n_states)

    valid_names = {s.name for s in states}
    unknown = set(lam_cost.keys()) - valid_names
    if unknown:
        raise ValueError(
            f"lam_cost dict contains unknown state name(s): {unknown}. "
            f"Valid state names: {sorted(valid_names)}"
        )

    # Identify states that have minimize/maximize objectives.
    cost_states: Set[str] = set()
    for state in states:
        if state.initial_type is not None:
            for t in state.initial_type:
                if t in ("Minimize", "Maximize"):
                    cost_states.add(state.name)
                    break
        if state.final_type is not None:
            for t in state.final_type:
                if t in ("Minimize", "Maximize"):
                    cost_states.add(state.name)
                    break

    # Check that all cost states are in the dict
    missing = cost_states - set(lam_cost.keys())
    if missing:
        raise ValueError(
            f"lam_cost dict is missing weight(s) for state(s) with "
            f"minimize/maximize objectives: {missing}. All states with "
            f"cost terms must have a weight in the dict."
        )

    # Fill the array.
    for state in states:
        if state.name in lam_cost:
            val = np.asarray(lam_cost[state.name], dtype=float)
            n_components = state.shape[0] if len(state.shape) > 0 else 1
            if val.ndim > 0 and val.shape[0] != n_components:
                raise ValueError(
                    f"lam_cost['{state.name}'] has length {val.shape[0]}, "
                    f"expected scalar or length {n_components}"
                )
            lam_arr[state._slice] = val

    return lam_arr
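For example, the zero-default, per-component expansion behaves like this sketch (the slice layout is hypothetical: "position" at columns 0-2, "time" at column 3):

```python
import numpy as np

n_states = 4
lam_cost = {"position": [0.0, 0.0, 1e-6], "time": 1e-2}

lam_arr = np.zeros(n_states)          # states without objectives stay at 0
lam_arr[0:3] = lam_cost["position"]   # per-component array, matches shape (3,)
lam_arr[3] = lam_cost["time"]         # scalar broadcast to the 1-D "time" state

print(lam_arr.tolist())  # → [0.0, 0.0, 1e-06, 0.01]
```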
resolve_lam_prox(lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]], states: Optional[List[State]] = None, controls: Optional[List[Control]] = None) -> Union[float, np.ndarray] staticmethod

Resolve a `lam_prox` spec to a numeric value.

If `lam_prox` is a float it is returned as-is. If it is a dict mapping state/control names to weights, it is expanded to a dense array using each variable's `_slice`. Variables not in the dict default to `1.0`.

Dict values may be scalars, 1-D arrays (per-component), or 2-D arrays of shape `(K, n_components)` for per-node-per-component weighting. All 2-D entries must agree on `K`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lam_prox` | `Union[float, Dict[str, Union[float, list, ndarray]]]` | Scalar weight or `{name: weight}` dict. | *required* |
| `states` | `Optional[List[State]]` | Symbolic State objects (required when `lam_prox` is a dict). | `None` |
| `controls` | `Optional[List[Control]]` | Symbolic Control objects (required when `lam_prox` is a dict). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Union[float, ndarray]` | float or `np.ndarray` of shape `(n_states + n_controls,)` or `(K, n_states + n_controls)`. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lam_prox` is a dict and `states`/`controls` is `None`, or if the dict contains unknown names, or if 2-D entries disagree on `K`. |

Source code in openscvx/algorithms/weights.py
@staticmethod
def resolve_lam_prox(
    lam_prox: Union[float, Dict[str, Union[float, list, np.ndarray]]],
    states: Optional[List["State"]] = None,
    controls: Optional[List["Control"]] = None,
) -> Union[float, np.ndarray]:
    """Resolve a ``lam_prox`` spec to a numeric value.

    If *lam_prox* is a float it is returned as-is. If it is a dict
    mapping state/control names to weights, it is expanded to a dense
    array using each variable's ``_slice``. Variables not in the dict
    default to ``1.0``.

    Dict values may be scalars, 1-D arrays (per-component), or 2-D
    arrays of shape ``(K, n_components)`` for per-node-per-component
    weighting. All 2-D entries must agree on *K*.

    Args:
        lam_prox: Scalar weight or ``{name: weight}`` dict.
        states: Symbolic State objects (required when *lam_prox* is a dict).
        controls: Symbolic Control objects (required when *lam_prox* is a dict).

    Returns:
        float or np.ndarray of shape ``(n_states + n_controls,)`` or
        ``(K, n_states + n_controls)``.

    Raises:
        ValueError: If *lam_prox* is a dict and *states*/*controls* is ``None``,
            or if dict contains unknown names, or if 2-D entries disagree on K.
    """
    if not isinstance(lam_prox, dict):
        return lam_prox

    if states is None or controls is None:
        raise ValueError(
            "lam_prox was specified as a dict but states and/or "
            "controls were not provided. Pass both so the dict can "
            "be expanded to a per-variable weight array."
        )

    n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)
    n_controls = sum(c.shape[0] if len(c.shape) > 0 else 1 for c in controls)
    n_total = n_states + n_controls

    valid_state_names = {s.name for s in states}
    valid_control_names = {c.name for c in controls}
    valid_names = valid_state_names | valid_control_names
    unknown = set(lam_prox.keys()) - valid_names
    if unknown:
        raise ValueError(
            f"lam_prox dict contains unknown name(s): {unknown}. "
            f"Valid names: {sorted(valid_names)}"
        )

    # Build a unified list of (name, n_components, slice_in_output).
    # States occupy columns [0, n_states), controls occupy [n_states, n_total).
    variables: list = []
    for s in states:
        nc = s.shape[0] if len(s.shape) > 0 else 1
        variables.append((s.name, nc, s._slice))
    for c in controls:
        nc = c.shape[0] if len(c.shape) > 0 else 1
        out_slice = slice(n_states + c._slice.start, n_states + c._slice.stop)
        variables.append((c.name, nc, out_slice))

    # First pass: determine if any entry is 2-D and infer K.
    n_nodes: Optional[int] = None
    for name, n_comp, _ in variables:
        if name not in lam_prox:
            continue
        val = np.asarray(lam_prox[name], dtype=float)
        if val.ndim == 2:
            if n_nodes is None:
                n_nodes = val.shape[0]
            elif val.shape[0] != n_nodes:
                raise ValueError(
                    f"lam_prox['{name}'] has {val.shape[0]} rows, but a "
                    f"previous entry had {n_nodes} rows. All 2-D entries "
                    f"must have the same number of rows (n_nodes)."
                )

    # Build the output array.
    if n_nodes is not None:
        lam_arr = np.ones((n_nodes, n_total))
    else:
        lam_arr = np.ones(n_total)

    for name, n_comp, out_slice in variables:
        if name not in lam_prox:
            continue
        val = np.asarray(lam_prox[name], dtype=float)

        if val.ndim == 0:
            lam_arr[..., out_slice] = float(val)
        elif val.ndim == 1:
            if val.shape[0] != n_comp:
                raise ValueError(
                    f"lam_prox['{name}'] has length {val.shape[0]}, "
                    f"expected scalar or length {n_comp}"
                )
            lam_arr[..., out_slice] = val
        elif val.ndim == 2:
            if val.shape[1] != n_comp:
                raise ValueError(
                    f"lam_prox['{name}'] has {val.shape[1]} columns, expected {n_comp}"
                )
            lam_arr[:, out_slice] = val
        else:
            raise ValueError(
                f"lam_prox['{name}'] has {val.ndim} dimensions, expected scalar, 1-D, or 2-D"
            )

    return lam_arr
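The mixed scalar/1-D/2-D semantics can be sketched with plain NumPy (hypothetical stacked layout: 4 state columns followed by 3 control columns, with `K` inferred from the first 2-D entry):

```python
import numpy as np

n_states, n_controls = 4, 3
K = 3  # inferred from the first 2-D entry during the first pass

# Once any entry is 2-D, the output is (K, n_total); unlisted variables -> 1.0.
lam_arr = np.ones((K, n_states + n_controls))

# 1-D entry: per-component, broadcast across all K rows ("position", cols 0-2).
lam_arr[..., 0:3] = [1.0, 1.0, 5.0]

# 2-D entry: per-node-per-component ("thrust", control cols 4-6, i.e. the
# control slice offset by n_states in the stacked layout).
lam_arr[:, 4:7] = np.full((K, 3), 0.5)

print(lam_arr[0].tolist())  # → [1.0, 1.0, 5.0, 1.0, 0.5, 0.5, 0.5]
```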
resolve_lam_vc(lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]], states: Optional[List[State]] = None) -> Union[float, np.ndarray] staticmethod

Resolve a `lam_vc` spec to a numeric value.

If `lam_vc` is a float it is returned as-is. If it is a dict mapping state names to weights, it is expanded to a dense array using each state's `_slice`. States not in the dict default to `1.0`.

Dict values may be scalars, 1-D arrays (per-component), or 2-D arrays of shape `(K, n_components)` for per-node-per-component weighting. All 2-D entries must agree on `K`.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `lam_vc` | `Union[float, Dict[str, Union[float, list, ndarray]]]` | Scalar weight or `{state_name: weight}` dict. | *required* |
| `states` | `Optional[List[State]]` | Symbolic State objects (required when `lam_vc` is a dict). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Union[float, ndarray]` | float or `np.ndarray` of shape `(n_states,)` or `(K, n_states)`. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `lam_vc` is a dict and `states` is `None`, or if the dict contains unknown names, or if 2-D entries disagree on `K`. |

Source code in openscvx/algorithms/weights.py
@staticmethod
def resolve_lam_vc(
    lam_vc: Union[float, Dict[str, Union[float, list, np.ndarray]]],
    states: Optional[List["State"]] = None,
) -> Union[float, np.ndarray]:
    """Resolve a ``lam_vc`` spec to a numeric value.

    If *lam_vc* is a float it is returned as-is. If it is a dict
    mapping state names to weights, it is expanded to a dense array
    using each state's ``_slice``. States not in the dict default
    to ``1.0``.

    Dict values may be scalars, 1-D arrays (per-component), or 2-D
    arrays of shape ``(K, n_components)`` for per-node-per-component
    weighting. All 2-D entries must agree on *K*.

    Args:
        lam_vc: Scalar weight or ``{state_name: weight}`` dict.
        states: Symbolic State objects (required when *lam_vc* is a dict).

    Returns:
        float or np.ndarray of shape ``(n_states,)`` or ``(K, n_states)``.

    Raises:
        ValueError: If *lam_vc* is a dict and *states* is ``None``,
            or if dict contains unknown names, or if 2-D entries disagree on K.
    """
    if not isinstance(lam_vc, dict):
        return lam_vc

    if states is None:
        raise ValueError(
            "lam_vc was specified as a dict but no states were "
            "provided. Pass states so the dict can be expanded to "
            "a per-state weight array."
        )

    n_states = sum(s.shape[0] if len(s.shape) > 0 else 1 for s in states)

    valid_names = {s.name for s in states}
    unknown = set(lam_vc.keys()) - valid_names
    if unknown:
        raise ValueError(
            f"lam_vc dict contains unknown state name(s): {unknown}. "
            f"Valid state names: {sorted(valid_names)}"
        )

    # First pass: determine if any entry is 2-D and infer K.
    n_nodes: Optional[int] = None
    for state in states:
        if state.name not in lam_vc:
            continue
        val = np.asarray(lam_vc[state.name], dtype=float)
        if val.ndim == 2:
            if n_nodes is None:
                n_nodes = val.shape[0]
            elif val.shape[0] != n_nodes:
                raise ValueError(
                    f"lam_vc['{state.name}'] has {val.shape[0]} rows, but a "
                    f"previous entry had {n_nodes} rows. All 2-D entries "
                    f"must have the same number of rows (K)."
                )

    # Build the output array.
    if n_nodes is not None:
        lam_arr = np.ones((n_nodes, n_states))
    else:
        lam_arr = np.ones(n_states)

    for state in states:
        if state.name not in lam_vc:
            continue
        val = np.asarray(lam_vc[state.name], dtype=float)
        n_components = state.shape[0] if len(state.shape) > 0 else 1

        if val.ndim == 0:
            lam_arr[..., state._slice] = float(val)
        elif val.ndim == 1:
            if val.shape[0] != n_components:
                raise ValueError(
                    f"lam_vc['{state.name}'] has length {val.shape[0]}, "
                    f"expected scalar or length {n_components}"
                )
            lam_arr[..., state._slice] = val
        elif val.ndim == 2:
            if val.shape[1] != n_components:
                raise ValueError(
                    f"lam_vc['{state.name}'] has {val.shape[1]} columns, "
                    f"expected {n_components}"
                )
            lam_arr[:, state._slice] = val
        else:
            raise ValueError(
                f"lam_vc['{state.name}'] has {val.ndim} dimensions, "
                f"expected scalar, 1-D, or 2-D"
            )

    return lam_arr
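The first-pass `K` inference described above can be mirrored in isolation (the `infer_k` helper is a hypothetical stand-in for the loop inside `resolve_lam_vc`): the first 2-D entry fixes `K`, later 2-D entries must agree, and an all-scalar/1-D dict leaves the output 1-D.

```python
import numpy as np

def infer_k(entries):
    # Hypothetical mirror of the first-pass loop: scan dict values, let the
    # first 2-D entry fix K, and reject later 2-D entries with other row counts.
    k = None
    for name, val in entries.items():
        val = np.asarray(val, dtype=float)
        if val.ndim == 2:
            if k is None:
                k = val.shape[0]
            elif val.shape[0] != k:
                raise ValueError(f"{name}: {val.shape[0]} rows, expected {k}")
    return k

print(infer_k({"position": 2.0}))              # → None (output stays 1-D)
print(infer_k({"position": np.ones((4, 3))}))  # → 4 (output becomes (4, n_states))
```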