---
title: "Integration Framework Architecture"
subtitle: "Multi-Backend Integration for ODEs and SDEs"
author: "Gil Benezer"
date: today
format:
html:
toc: true
toc-depth: 5
code-fold: show
code-tools: true
theme: cosmo
execute:
eval: true
cache: true
warning: false
---
## Overview {#sec-overview}
The ControlDESymulation numerical integration framework provides **multi-backend**, **multi-method** support for integrating both **deterministic ordinary differential equations (ODEs)** and **stochastic differential equations (SDEs)**. The framework consists of 14 core files organized into a clean two-track architecture with a shared method registry.
**Key capabilities:**
- **Backend agnostic**: Seamless switching between NumPy, PyTorch, and JAX
- **40+ integration methods**: From simple Euler to high-order adaptive schemes
- **Stochastic support**: Full SDE integration with noise structure exploitation
- **Factory pattern**: Automatic method selection based on system properties
- **Production ready**: Professional-grade error control and performance tracking
::: {.callout-important}
## User Interface
**Most users should NOT directly instantiate integrators or factories.** Instead, use the high-level system interface:
```python
# Recommended: Use system.integrate() - delegates to appropriate integrator
system = Pendulum(m_val=1.0, l_val=0.5)
result = system.integrate(
x0=np.array([1.0, 0.0]),
method='RK45', # Optional: framework selects automatically
t_span=(0, 10)
)
# Advanced: Direct integrator access (only when needed for fine control)
integrator = IntegratorFactory.auto(system)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
```
The system's `integrate()` method automatically handles backend selection, method routing, and integrator lifecycle. Direct integrator instantiation is only needed for advanced use cases requiring explicit integrator reuse or fine-grained control.
:::
## Architecture Overview {#sec-architecture}
The framework follows a dual-track design separating deterministic and stochastic integration, with a shared **method registry** providing centralized method management:
```
Shared Infrastructure
├── MethodRegistry (centralized method normalization and validation)
│ ├── normalize_method_name() - cross-backend name mapping
│ ├── validate_method() - backend/system compatibility
│ ├── is_sde_method() / is_fixed_step() - classification
│ └── get_available_methods() - method discovery
│
Track 1: Deterministic ODE Integration
├── IntegratorBase (abstract interface)
├── IntegratorFactory (method selection via registry)
├── Backend-Specific Implementations:
│ ├── ScipyIntegrator (NumPy/SciPy)
│ ├── TorchDiffEqIntegrator (PyTorch with GPU)
│ ├── DiffraxIntegrator (JAX with XLA)
│ └── DiffEqPyIntegrator (Julia via Python)
└── FixedStepIntegrators (Euler, Heun, Midpoint, RK4)
Track 2: Stochastic SDE Integration
├── SDEIntegratorBase (extends IntegratorBase)
├── SDEIntegratorFactory (SDE-specific creation via registry)
├── Backend-Specific Implementations:
│ ├── TorchSDEIntegrator (PyTorch SDEs)
│ ├── DiffraxSDEIntegrator (JAX SDEs)
│ └── DiffEqPySDEIntegrator (Julia SDEs)
└── CustomBrownianPath (deterministic noise for testing)
```
```{python}
#| label: setup-imports
#| echo: false
#| output: false
# Global setup for all code examples
import numpy as np
import torch
import jax
import jax.numpy as jnp
from cdesym import (
Pendulum,
IntegratorFactory,
OrnsteinUhlenbeck,
SDEIntegratorFactory,
)
SEED = 42
# Suppress JAX GPU warnings
import warnings
warnings.filterwarnings('ignore', category=UserWarning, module='jax')
```
## Method Registry: Centralized Method Management {#sec-method-registry}
**File:** `method_registry.py`
The `method_registry` module serves as the **single source of truth** for all integration methods across backends. It provides centralized method normalization, validation, and classification for both ODE and SDE integrators.
### Design Philosophy
**Backend Naming Conventions:**
| Backend | Convention | Examples |
|---------|------------|----------|
| NumPy/Julia (DiffEqPy) | Capitalized | `EM`, `Tsit5`, `SRIW1` |
| PyTorch (TorchSDE/TorchDiffEq) | lowercase | `euler`, `dopri5`, `milstein` |
| JAX (Diffrax) | PascalCase | `Euler`, `ItoMilstein`, `Tsit5` |
**Canonical Names:**
User-friendly aliases that work across all backends (e.g., `euler_maruyama`, `milstein`, `rk45`). These are automatically normalized to backend-specific names.
### Core Functions
```python
from cdesym.systems.base.numerical_integration.method_registry import (
normalize_method_name,
validate_method,
is_sde_method,
is_fixed_step,
get_available_methods,
get_method_info,
)
# Normalize canonical names to backend-specific
normalize_method_name('euler_maruyama', 'numpy') # → 'EM'
normalize_method_name('euler_maruyama', 'torch') # → 'euler'
normalize_method_name('euler_maruyama', 'jax') # → 'Euler'
# Validate method/backend/system compatibility
is_valid, error = validate_method('euler_maruyama', 'torch', is_stochastic=True)
# Classify methods
is_sde_method('euler_maruyama') # → True
is_sde_method('rk4') # → False
is_fixed_step('rk4') # → True
is_fixed_step('RK45') # → False (adaptive)
# Discover available methods
methods = get_available_methods('torch', method_type='stochastic')
```
### Method Classification
Methods are classified along two dimensions:
1. **System type**: Deterministic (ODE) vs Stochastic (SDE)
2. **Time stepping**: Fixed-step vs Adaptive
| Category | Examples | Use Case |
|----------|----------|----------|
| `DETERMINISTIC_FIXED_STEP` | `euler`, `heun`, `midpoint`, `rk4` | Manual implementations |
| `DETERMINISTIC_ADAPTIVE` | `RK45`, `LSODA`, `dopri5`, `tsit5` | Production ODE solvers |
| `SDE_FIXED_STEP` | `EM`, `euler`, `milstein`, `SRIW1` | Most SDE methods |
| `SDE_ADAPTIVE` | `LambaEM`, `AutoEM`, `adaptive_heun` | Rare adaptive SDE |
### Normalization Map
The registry maintains a comprehensive map for cross-backend method translation:
```python
NORMALIZATION_MAP = {
# SDE Methods
'euler_maruyama': {'numpy': 'EM', 'torch': 'euler', 'jax': 'Euler'},
'milstein': {'numpy': 'RKMil', 'torch': 'milstein', 'jax': 'ItoMilstein'},
'sra1': {'numpy': 'SRA1', 'torch': 'srk', 'jax': 'SRA1'},
# ODE Methods
'rk45': {'numpy': 'RK45', 'torch': 'dopri5', 'jax': 'tsit5'},
'dopri5': {'numpy': 'RK45', 'torch': 'dopri5', 'jax': 'dopri5'},
'tsit5': {'numpy': 'Tsit5', 'torch': 'dopri5', 'jax': 'tsit5'},
# ... more mappings
}
```
### Usage in Factories
Both `IntegratorFactory` and `SDEIntegratorFactory` use the registry for method handling:
```python
# IntegratorFactory.create() internally:
method = normalize_method_name(method, backend)
is_valid, error = validate_method(method, backend, is_stochastic=False)
if is_fixed_step(method) and dt is None:
raise ValueError("Fixed-step method requires dt")
```
## Track 1: Deterministic ODE Integration {#sec-track-1}
### IntegratorBase: Abstract Interface {#sec-integratorbase}
**File:** `integrator_base.py`
The `IntegratorBase` class defines the unified interface that all numerical integrators must implement. This abstraction enables backend-agnostic integration while maintaining consistent behavior across implementations.
**Core responsibilities:**
- Define integration contract through abstract methods
- Track performance statistics (function evaluations, steps, timing)
- Manage integration parameters (time step, tolerances, backend)
- Provide consistent result format via `IntegrationResult` TypedDict
**Key attributes:**
```python
system: ContinuousSystemBase # Dynamical system to integrate
dt: float # Time step (or initial guess for adaptive)
step_mode: StepMode # FIXED or ADAPTIVE
backend: Backend # 'numpy', 'torch', or 'jax'
rtol: float # Relative error tolerance (adaptive methods)
atol: float # Absolute error tolerance (adaptive methods)
_stats: dict # Performance tracking
```
**Abstract methods required by all implementations:**
```python
def step(self, x: Array, u: Array, dt: float) -> Array:
"""Single integration step: x(t) -> x(t + dt)"""
def integrate(
self,
x0: Array,
u_func: Callable,
t_span: tuple[float, float]
) -> IntegrationResult:
"""Multi-step integration over time interval"""
@property
def name(self) -> str:
"""Unique identifier for this integrator"""
```
**StepMode enumeration:**
- `FIXED`: Constant time step (euler, rk4, midpoint)
- `ADAPTIVE`: Variable time step with error control (RK45, dopri5, tsit5)
### IntegratorFactory: Smart Creation {#sec-integratorfactory}
**File:** `integrator_factory.py`
The `IntegratorFactory` provides intelligent integrator creation with automatic backend and method selection. It encapsulates the complexity of choosing appropriate integration methods based on system properties and use case requirements.
**Integration with Method Registry:**
The factory delegates method normalization and validation to the centralized `method_registry`:
```python
# Internally, IntegratorFactory.create() does:
method = normalize_method_name(method, backend) # Registry function
is_valid, error = validate_method(method, backend, is_stochastic=False)
if is_fixed_step(method) and dt is None:
raise ValueError("Fixed-step method requires dt")
```
**Julia Preference for NumPy Backend:**
When using the NumPy backend, `IntegratorFactory` automatically prefers Julia implementations for basic methods when DiffEqPy is available:
- `euler` → Julia's `Euler` (better performance)
- `heun` → Julia's `Heun`
- `midpoint` → Julia's `Midpoint`
To explicitly use manual Python implementations:
```python
integrator = IntegratorFactory.create(
system, backend='numpy', method='euler', prefer_manual=True
)
```
**Factory methods:**
```python
# Automatic selection based on system and backend
IntegratorFactory.auto(system, backend=None)
# Production use with auto-stiffness detection
IntegratorFactory.for_production(system, **options)
# High-performance Julia integration
IntegratorFactory.for_julia(system, algorithm='Tsit5', **options)
# Neural ODE training with adjoint gradients
IntegratorFactory.for_neural_ode(system, **options)
# JAX optimization workflows
IntegratorFactory.for_optimization(system, **options)
# Direct method specification
IntegratorFactory.create(system, backend, method, **options)
```
**Method registry:**
The factory uses the centralized method registry for mapping method names to backends and capabilities:
| Method | Backend | Type | Order | Best For |
|--------|---------|------|-------|----------|
| LSODA | NumPy (scipy) | Adaptive | Variable | Automatic stiffness detection |
| RK45 | NumPy (scipy) | Adaptive | 5(4) | General non-stiff ODEs |
| DOP853 | NumPy (scipy) | Adaptive | 8(5,3) | High-accuracy requirements |
| Radau | NumPy (scipy) | Adaptive | 5 | Stiff systems (implicit) |
| BDF | NumPy (scipy) | Adaptive | Variable | Very stiff systems |
| Tsit5 | NumPy (Julia) | Adaptive | 5(4) | High performance |
| Vern9 | NumPy (Julia) | Adaptive | 9(8) | Maximum accuracy |
| Rodas5 | NumPy (Julia) | Adaptive | 5(4) | Stiff (Rosenbrock method) |
| dopri5 | PyTorch/JAX | Adaptive | 5(4) | Neural ODEs, GPU acceleration |
| dopri8 | PyTorch/JAX | Adaptive | 8 | High-accuracy neural ODEs |
| tsit5 | JAX (diffrax) | Adaptive | 5(4) | Optimization, XLA compilation |
| euler | Any | Fixed | 1 | Simple systems, education |
| heun | Any | Fixed | 2 | Improved Euler, predictor-corrector |
| midpoint | Any | Fixed | 2 | Second-order, midpoint evaluation |
| rk4 | Any | Fixed | 4 | Moderate accuracy, fixed step |
### ScipyIntegrator: Adaptive NumPy Integration {#sec-scipyintegrator}
**File:** `scipy_integrator.py`
Wraps `scipy.integrate.solve_ivp` to provide professional-grade adaptive integration with comprehensive error control. This is the recommended starting point for most NumPy-based applications.
**Supported methods:**
- **RK45** (default): Dormand-Prince 5(4) — general purpose, good balance
- **RK23**: Bogacki-Shampine 3(2) — fast, lower accuracy
- **DOP853**: Dormand-Prince 8(5,3) — very high accuracy
- **Radau**: Implicit Runge-Kutta — stiff systems
- **BDF**: Backward differentiation formulas — very stiff systems
- **LSODA**: Automatic stiffness detection — adapts to problem
**Key features:**
- Professional adaptive time stepping with embedded error estimation
- Configurable error tolerances (`rtol`, `atol`)
- Dense output via continuous extension (interpolation between steps)
- Event detection for state-dependent conditions
- Support for both controlled and autonomous systems
**Usage example:**
```{python}
#| label: ex-scipy
from cdesym.systems.base.numerical_integration.scipy_integrator import ScipyIntegrator
# Create system
system = Pendulum(m_val=1.0, l_val=0.5)
# Create adaptive integrator with tight tolerances
integrator = ScipyIntegrator(
system,
method='RK45',
rtol=1e-6,
atol=1e-8
)
# Integrate over time span
x0 = np.array([1.0, 0.0]) # [angle, angular_velocity]
u_func = lambda t, x: np.zeros(1) # No control input
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Success: {result['success']}")
print(f"Function evaluations: {result['nfev']}")
print(f"Steps taken: {result['nsteps']}")
```
**When to use:**
- General-purpose NumPy applications
- When you need reliable adaptive stepping
- Systems with unknown stiffness (use LSODA)
- When dense output is required
### TorchDiffEqIntegrator: GPU-Accelerated PyTorch {#sec-torchdiffeqintegrator}
**File:** `torchdiffeq_integrator.py**
Provides PyTorch integration with GPU acceleration and automatic differentiation support. Essential for neural ODE applications and gradient-based optimization.
**Supported methods:**
- **dopri5**: Dormand-Prince 5(4) — recommended default
- **dopri8**: Dormand-Prince 8 — high accuracy
- **adaptive_heun**: Heun's method — good for moderately stiff
- **bosh3**: Bogacki-Shampine 3 — fast, lower accuracy
- **fehlberg2**: Fehlberg 2(1) — very fast, low accuracy
- **explicit_adams**, **implicit_adams**: Multi-step methods
**Key features:**
- **GPU acceleration**: Automatic CUDA support for large-scale problems
- **Automatic differentiation**: Seamless gradient computation through dynamics
- **Adjoint method**: Memory-efficient backpropagation for neural ODEs
- **Batch processing**: Vectorized integration of multiple trajectories
**Usage example:**
```{python}
#| label: ex-torch
from cdesym.systems.base.numerical_integration.torchdiffeq_integrator import TorchDiffEqIntegrator
system = Pendulum(m_val=1.0, l_val=0.5)
# Create GPU-accelerated integrator
integrator = TorchDiffEqIntegrator(
system,
method='dopri5',
backend='torch',
device='cpu', # Use 'cuda:0' for GPU
rtol=1e-6,
atol=1e-8
)
x0 = torch.tensor([1.0, 0.0], requires_grad=True)
u_func = lambda t, x: torch.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Solver: {result['solver']}")
print(f"Integration time: {result['integration_time']:.4f}s")
```
**When to use:**
- Neural ODE training
- GPU-accelerated simulations
- Gradient-based optimization
- Large-scale batch processing
### DiffraxIntegrator: JAX with XLA Compilation {#sec-diffraxintegrator}
**File:** `diffrax_integrator.py`
Leverages JAX's diffrax library for high-performance integration with XLA compilation. Ideal for optimization workflows requiring extensive JIT compilation and functional transformations.
**Supported methods:**
- **tsit5**: Tsitouras 5(4) — recommended, efficient
- **dopri5**: Dormand-Prince 5(4) — standard reference
- **dopri8**: Dormand-Prince 8 — high accuracy
- **heun**: Heun's method — simple, robust
- **ralston**: Ralston's method — improved stability
- **reversible_heun**: Time-reversible integration
**Key features:**
- **XLA compilation**: Near-C++ performance via just-in-time compilation
- **JAX transformations**: `jit`, `vmap`, `grad`, `pmap` all work seamlessly
- **Functional style**: Pure functions enable advanced optimizations
- **Efficient for optimization**: Fast repeated evaluations with parameter sweeps
**Usage example:**
```{python}
#| label: ex-jax
from cdesym.systems.base.numerical_integration.diffrax_integrator import DiffraxIntegrator
system = Pendulum(m_val=1.0, l_val=0.5)
# Create JAX integrator
integrator = DiffraxIntegrator(
system,
method='tsit5',
backend='jax',
rtol=1e-6,
atol=1e-8
)
x0 = jnp.array([1.0, 0.0])
u_func = lambda t, x: jnp.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Method: {result['solver']}")
print(f"Success: {result['success']}")
```
**When to use:**
- Parameter optimization requiring many integrations
- When you need `vmap` for batch parameter sweeps
- Gradient-based control optimization
- Maximum computational efficiency
### DiffEqPyIntegrator: Julia's Solver Ecosystem {#sec-diffeqpyintegrator}
**File:** `diffeqpy_integrator.py`
Accesses Julia's extensive DifferentialEquations.jl ecosystem through Python bindings. Provides the most comprehensive method library with production-grade performance.
**Supported method families:**
**Explicit Runge-Kutta:**
- Tsit5, Vern6, Vern7, Vern8, Vern9 — variable order adaptive
- DP5, DP8 — Dormand-Prince variants
**Rosenbrock (implicit for stiff):**
- Rosenbrock23, Rosenbrock32, Rodas4, Rodas5
**BDF methods:**
- TRBDF2, KenCarp3, KenCarp4, KenCarp5
**Specialized:**
- RadauIIA5 — implicit Runge-Kutta
- ROCK2, ROCK4 — stabilized methods
- VelocityVerlet, SymplecticEuler — symplectic integrators
**Auto-switching (composite):**
- AutoTsit5(Rosenbrock23()) — switches based on stiffness
- AutoVern7(Rodas5()) — high accuracy with stiffness handling
**Key features:**
- **Highest performance**: Often 2-10× faster than SciPy
- **Automatic stiffness detection**: Seamlessly switches solvers
- **Extensive method library**: 100+ algorithms available
- **Production reliability**: Battle-tested in scientific computing
**Usage example:**
```{python}
#| label: ex-julia
from cdesym.systems.base.numerical_integration.diffeqpy_integrator import DiffEqPyIntegrator
system = Pendulum(m_val=1.0, l_val=0.5)
# Ultra-high accuracy Julia integration
integrator = DiffEqPyIntegrator(
system,
algorithm='Vern9', # 9th order method
backend='numpy',
reltol=1e-12,
abstol=1e-14
)
x0 = np.array([1.0, 0.0])
u_func = lambda t, x: np.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 100))
print(f"Function evaluations: {result['nfev']}")
print(f"Achieved accuracy: reltol={1e-12}, abstol={1e-14}")
```
**When to use:**
- Maximum performance requirements
- Very stiff systems needing automatic detection
- High-accuracy applications (>8 digits)
- Production environments with reliability requirements
### FixedStepIntegrators: Educational and Simple Systems {#sec-fixedstepintegrators}
**File:** `fixed_step_integrators.py`
Manual implementations of classic fixed-step methods. These provide transparent, backend-agnostic integration suitable for education and simple systems where adaptive stepping is unnecessary.
**Available methods:**
- **ExplicitEulerIntegrator**: Forward Euler (order 1) — simplest method
- **HeunIntegrator**: Heun's method / Improved Euler (order 2) — predictor-corrector approach
- **MidpointIntegrator**: Midpoint method (order 2) — midpoint evaluation
- **RK4Integrator**: Classic Runge-Kutta 4 (order 4) — excellent balance
**HeunIntegrator:**
Heun's method uses a predictor-corrector approach with trapezoidal averaging:
```
k1 = f(x_k, u_k) # Predictor (Euler step)
x_pred = x_k + dt * k1
k2 = f(x_pred, u_k) # Corrector
x_{k+1} = x_k + (dt/2) * (k1 + k2) # Trapezoidal rule
```
Also known as Improved Euler or Explicit Trapezoid method. Uses 2 function evaluations per step for second-order accuracy.
**Key features:**
- **Backend agnostic**: Work with NumPy, PyTorch, and JAX arrays
- **Transparent implementation**: Clear, readable code for learning
- **Constant time step**: Predictable computational cost
- **No external dependencies**: Pure Python implementations
- **TypedDict results**: All integrators return `IntegrationResult` TypedDict
**Usage example:**
```{python}
#| label: ex-fixed
from cdesym.systems.base.numerical_integration.fixed_step_integrators import (
RK4Integrator,
HeunIntegrator,
)
system = Pendulum(m_val=1.0, l_val=0.5)
x0 = np.array([1.0, 0.0])
u_func = lambda t, x: np.zeros(1)
# RK4 with fixed time step (4 function evals/step)
rk4_integrator = RK4Integrator(system, dt=0.01, backend='numpy')
result_rk4 = rk4_integrator.integrate(x0, u_func, t_span=(0, 10))
# Heun's method (2 function evals/step, predictor-corrector)
heun_integrator = HeunIntegrator(system, dt=0.01, backend='numpy')
result_heun = heun_integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"RK4 - Steps: {result_rk4['nsteps']}, Solver: {result_rk4['solver']}")
print(f"Heun - Steps: {result_heun['nsteps']}, Solver: {result_heun['solver']}")
```
**When to use:**
- Educational purposes and learning
- Simple systems with smooth dynamics
- Real-time applications requiring predictable timing
- Debugging with deterministic stepping
## Track 2: Stochastic SDE Integration {#sec-track-2}
### SDEIntegratorBase: Stochastic Abstract Interface {#sec-sdeintegratorbase}
**File:** `sde_integrator_base.py`
Extends `IntegratorBase` to handle stochastic differential equations of the form:
$$dx = f(x, u, t)dt + g(x, u, t)dW$$
where:
- $f(x, u, t)$: Drift term (deterministic dynamics)
- $g(x, u, t)$: Diffusion term (stochastic intensity)
- $dW$: Brownian motion increments (Wiener process)
**Key differences from deterministic integration:**
1. **Random noise generation**: Requires proper Brownian motion sampling
2. **Convergence types**:
- **Strong convergence**: Pathwise accuracy (Monte Carlo, control)
- **Weak convergence**: Distribution accuracy (statistics)
3. **Noise structure exploitation**: Additive, diagonal, scalar, or general
4. **Multiple realizations**: Monte Carlo simulation support
5. **Interpretation**: Itô vs Stratonovich calculus
**Additional abstract methods:**
```python
def step(
self,
x: Array,
u: Array,
dt: float,
dW: Array
) -> Array:
"""Single SDE step with provided noise"""
def integrate_monte_carlo(
self,
x0: Array,
u_func: Callable,
t_span: tuple[float, float],
n_paths: int
) -> SDEIntegrationResult:
"""Multiple trajectory simulation for statistics"""
```
### SDEIntegratorFactory: SDE-Specific Creation {#sec-sdeintegratorfactory}
**File:** `sde_integrator_factory.py`
Provides intelligent SDE integrator creation with automatic noise structure detection and method selection.
**Integration with Method Registry:**
Like `IntegratorFactory`, the SDE factory delegates to the centralized `method_registry` for method normalization and validation:
```python
# Internally, SDEIntegratorFactory.create() does:
method = normalize_method_name(method, backend) # Registry function
is_valid, error = validate_method(method, backend, is_stochastic=True)
if is_fixed_step(method) and dt is None:
raise ValueError("Fixed-step SDE method requires dt")
```
**Factory methods:**
```python
# Automatic selection based on noise structure
SDEIntegratorFactory.auto(sde_system, backend=None)
# Direct method specification
SDEIntegratorFactory.create(
sde_system, backend, method, dt, **options
)
# Optimized for Monte Carlo simulations
SDEIntegratorFactory.for_monte_carlo(
sde_system, noise_type='general', **options
)
# Neural SDE training with adjoint
SDEIntegratorFactory.for_neural_sde(sde_system, adjoint=True, **options)
# Julia DiffEqPy SDE solvers
SDEIntegratorFactory.for_julia(sde_system, algorithm='SRIW1', **options)
# Gradient-based optimization
SDEIntegratorFactory.for_optimization(sde_system, backend=None, **options)
```
**Available SDE methods:**
| Method | Backend | Convergence | Noise Type | Order |
|--------|---------|-------------|------------|-------|
| euler-maruyama | All | Strong 0.5 | General | 0.5 |
| milstein | PyTorch/NumPy | Strong 1.0 | Diagonal | 1.0 |
| reversible_heun | PyTorch/JAX | Strong 1.0 | Additive | 1.0 |
| adaptive_heun | PyTorch | Strong 1.0 | Additive | 1.0 (adaptive) |
| srk | PyTorch | Strong | General | Variable |
| midpoint | PyTorch | Strong | General | Variable |
**Noise structure types:**
- **Additive**: $g(x, u, t) = g(t)$ — diffusion independent of state
- **Diagonal**: $g$ is diagonal matrix — independent noise channels
- **Scalar**: $g$ is scalar — single noise source
- **General**: Full matrix $g$ — correlated noise
### TorchSDEIntegrator: PyTorch SDE Integration {#sec-torchsdeintegrator}
**File:** `torchsde_integrator.py`
GPU-accelerated SDE integration using the `torchsde` library. Supports automatic differentiation through stochastic dynamics.
**Supported methods:**
- **euler**: Euler-Maruyama (strong order 0.5) — Itô and Stratonovich
- **milstein**: Milstein method (strong order 1.0 for diagonal noise) — Itô only
- **srk**: Stochastic Runge-Kutta (general noise) — Itô and Stratonovich
- **midpoint**: Midpoint method — Stratonovich only
- **reversible_heun**: Reversible Heun (strong order 1.0 for additive noise) — Stratonovich only
- **adaptive_heun**: Adaptive Heun with error control — Itô and Stratonovich
**Key features:**
- GPU acceleration for large-scale stochastic simulations
- Adaptive stepping with stochastic error control
- Noise structure exploitation for efficiency
- Adjoint method for memory-efficient gradients through SDEs
- **SDE type compatibility**: Most methods support either Itô or Stratonovich (see method list)
**Usage example:**
```{python}
#| label: ex-torchsde
from cdesym.systems.base.numerical_integration.stochastic.torchsde_integrator import TorchSDEIntegrator
# Create Ornstein-Uhlenbeck process: dx = -α*x*dt + σ*dW
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
integrator = TorchSDEIntegrator(
sde_system,
method='euler', # Euler-Maruyama for Itô SDEs
dt=0.01,
backend='torch'
)
x0 = torch.tensor([0.5])
u_func = lambda t, x: torch.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Convergence type: {result['convergence_type']}")
print(f"SDE type: {result['sde_type']}")
```
### DiffraxSDEIntegrator: JAX SDE Integration {#sec-diffraxsdeintegrator}
**File:** `diffrax_sde_integrator.py`
JAX-based SDE integration with XLA compilation and functional transformations. Excellent for optimization involving stochastic dynamics.
**Supported methods:**
- **euler**: Euler-Maruyama
- **heun**: Heun's method (if supported by diffrax)
- **reversible_heun**: Time-reversible stochastic integration
**Key features:**
- XLA compilation for near-C++ performance
- JAX transformations (`jit`, `vmap`, `grad`) work seamlessly
- Custom noise support for deterministic testing
- Efficient for stochastic optimization
**Usage example:**
```{python}
#| label: ex-diffraxsde
from cdesym.systems.base.numerical_integration.stochastic.diffrax_sde_integrator import DiffraxSDEIntegrator
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
integrator = DiffraxSDEIntegrator(
sde_system,
method='euler',
dt=0.01,
backend='jax',
seed=SEED
)
x0 = jnp.array([0.5])
u_func = lambda t, x: jnp.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Method: {result['solver']}")
print(f"Steps: {result['nsteps']}")
```
### DiffEqPySDEIntegrator: Julia SDE Integration {#sec-diffeqpysdeintegrator}
**File:** `diffeqpy_sde_integrator.py`
Access to Julia's comprehensive SDE solver ecosystem through Python bindings.
**Supported methods:**
- Euler-Maruyama variants
- Milstein method
- Stochastic Rosenbrock methods
- Advanced Julia SDE algorithms
**Key features:**
- Production-grade SDE solvers
- Automatic noise structure detection
- High-performance algorithms
- Extensive method library
**Usage example:**
```{python}
#| label: ex-diffeqpysde
from cdesym.systems.base.numerical_integration.stochastic.diffeqpy_sde_integrator import DiffEqPySDEIntegrator
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
integrator = DiffEqPySDEIntegrator(
sde_system,
method='EM', # Euler-Maruyama
dt=0.01,
backend='numpy',
seed=SEED
)
x0 = np.array([0.5])
u_func = lambda t, x: np.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
print(f"Diffusion evaluations: {result['diffusion_evals']}")
```
### CustomBrownianPath: Deterministic Noise for Testing {#sec-custombrownianpath}
**File:** `custom_brownian.py`
Provides custom Brownian motion paths for deterministic testing and reproducibility. Implements diffrax's `AbstractPath` interface.
**Key features:**
- User-provided noise increments
- Deterministic testing support
- Custom noise patterns (e.g., zero noise, specific realizations)
- Compatible with diffrax integrators
**Usage example:**
```{python}
#| label: ex-custombrown
from jax import random
from cdesym.systems.base.numerical_integration.stochastic.custom_brownian import (
CustomBrownianPath, create_custom_or_random_brownian
)
from cdesym.systems.base.numerical_integration.stochastic.diffrax_sde_integrator import DiffraxSDEIntegrator
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
nw = sde_system.nw
# Zero noise for deterministic testing
dW = jnp.zeros((nw,))
brownian = CustomBrownianPath(t0=0.0, t1=10.0, dW=dW)
# Or generate random noise
key = random.key(SEED)
brownian_random = create_custom_or_random_brownian(
key, t0=0.0, t1=10.0, shape=(nw,), dW=None
)
# Use in integration
integrator = DiffraxSDEIntegrator(sde_system, method='euler', dt=0.01, backend='jax')
x0 = jnp.array([0.5])
u_func = lambda t, x: jnp.zeros(1)
result = integrator.integrate(
x0, u_func, t_span=(0, 10), brownian_path=brownian
)
print("Deterministic SDE integration (zero noise)")
```
**Important notes:**
1. **Time span matching**: `CustomBrownianPath(t0, t1, dW)` must match `t_span`
2. **Single dW for entire integration**: Unlike `step()`, provide one `dW` for the full interval
3. **Automatic interpolation**: Diffrax handles internal time queries
## Integration Result Types {#sec-result-types}
### IntegrationResult (Deterministic ODEs) {#sec-integrationresult}
All ODE integrators return a TypedDict with consistent fields:
```python
{
't': array, # Time points (T,)
'x': array, # States (T, nx) - time-major format
'success': bool, # Integration succeeded
'message': str, # Status message
'nfev': int, # Function evaluations
'nsteps': int, # Steps taken
'integration_time': float, # Wall clock time (seconds)
'solver': str, # Integrator name
# Optional (adaptive methods only):
'njev': int, # Jacobian evaluations
'nlu': int, # LU decompositions
'status': int, # Solver-specific status code
'sol': object, # Dense output object (if requested)
'dense_output': bool, # Dense output available
}
```
### SDEIntegrationResult (Stochastic SDEs) {#sec-sdeintegrationresult}
SDE integrators extend `IntegrationResult` with stochastic-specific fields:
```python
{
# All IntegrationResult fields, plus:
'diffusion_evals': int, # Diffusion function calls
'noise_samples': array, # Brownian increments used
'n_paths': int, # Number of trajectories
'convergence_type': str, # 'strong' or 'weak'
'sde_type': str, # 'ito' or 'stratonovich'
'noise_type': str, # 'additive', 'diagonal', 'scalar', 'general'
# For Monte Carlo (n_paths > 1):
'x': array, # (n_paths, T, nx)
'statistics': dict, # {'mean', 'std', 'q25', 'q50', 'q75'}
}
```
## Practical Usage Examples {#sec-usage-examples}
::: {.callout-note}
## Recommended Usage Pattern
The examples below show **direct integrator instantiation for documentation purposes**. In practice, most users should use the system's `integrate()` method which delegates to the appropriate integrator automatically:
```python
# Recommended approach
result = system.integrate(x0=x0, method='RK45', t_span=(0, 10))
# Advanced approach (shown in examples below)
integrator = IntegratorFactory.auto(system)
result = integrator.integrate(x0, u_func, t_span=(0, 10))
```
:::
### Example 1: Recommended High-Level Interface
```{python}
#| label: ex-recommended
system = Pendulum(m_val=1.0, l_val=0.5)
# Most users should use this approach - system handles delegation
x0 = np.array([0.1, 0.0])
result = system.integrate(
x0=x0,
method='RK45', # Optional: auto-selected if not specified
t_span=(0.0, 10.0)
)
print(f"Method: {result['solver']}")
print(f"Steps: {result['nsteps']}")
print(f"Success: {result['success']}")
```
### Example 2: Advanced - Direct Integrator Access
```{python}
#| label: ex-quickstart
# Advanced: Direct integrator instantiation for reuse or fine control
system = Pendulum(m_val=1.0, l_val=0.5)
integrator = IntegratorFactory.auto(system)
# Can reuse integrator for multiple integrations
x0 = np.array([0.1, 0.0])
u_func = lambda t, x: np.zeros(1)
result = integrator.integrate(x0, u_func, t_span=(0.0, 10.0))
print(f"Method: {result['solver']}")
print(f"Steps: {result['nsteps']}")
```
### Example 3: High-Accuracy Julia Integration
```{python}
#| label: ex-julia-accuracy
# Recommended: System-level interface
system = Pendulum(m_val=1.0, l_val=0.5)
x0 = np.array([1.0, 0.0])
result = system.integrate(
x0=x0,
method='Vern9', # Julia's 9th order method
rtol=1e-12,
atol=1e-14,
t_span=(0, 100)
)
print(f"Function evaluations: {result['nfev']}")
print(f"Integration time: {result['integration_time']:.4f}s")
```
### Example 4: GPU-Accelerated Neural ODE
```{python}
#| label: ex-neural-ode
system = Pendulum(m_val=1.0, l_val=0.5)
# Switch to PyTorch backend for gradients
with system.use_backend('torch'):
x0 = torch.tensor([1.0, 0.0], requires_grad=True)
result = system.integrate(
x0=x0,
method='dopri5',
device='cpu', # Use 'cuda:0' for GPU
t_span=(0, 10)
)
# Gradient computation works seamlessly
final_state = result['x'][-1]
loss = final_state.sum()
loss.backward()
print(f"Gradient w.r.t. x0: {x0.grad}")
```
### Example 5: Stochastic Simulation with Monte Carlo
```{python}
#| label: ex-monte-carlo
from cdesym.systems.base.numerical_integration.stochastic import get_trajectory_statistics
# Create stochastic system
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
# Use integrator for Monte Carlo (don't specify backend, it's handled internally)
integrator = SDEIntegratorFactory.for_monte_carlo(
sde_system, n_paths=1000
)
x0 = np.array([1.0])
u_func = lambda t, x: np.zeros(1)
result = integrator.integrate_monte_carlo(
x0, u_func, t_span=(0, 10), n_paths=1000
)
# Extract statistics
stats = get_trajectory_statistics(result)
# Extract scalar values from arrays
mean_final = float(stats['mean'][-1].flat[0])
std_final = float(stats['std'][-1].flat[0])
print(f"Mean at t=10: {mean_final:.4f}")
print(f"Std at t=10: {std_final:.4f}")
```
### Example 6: Deterministic Testing with Custom Noise
```{python}
#| label: ex-custom-noise
from cdesym.systems.base.numerical_integration.stochastic.custom_brownian import CustomBrownianPath
sde_system = OrnsteinUhlenbeck(alpha=2.0, sigma=0.3)
nw = sde_system.nw
# Zero noise for deterministic testing
dW = jnp.zeros((nw,))
brownian = CustomBrownianPath(t0=0.0, t1=10.0, dW=dW)
# Create JAX integrator
integrator = SDEIntegratorFactory.create(
sde_system,
backend='jax',
method='Euler',
dt=0.01,
seed=SEED
)
# Integrate with custom noise
x0 = jnp.array([0.5])
u_func = lambda t, x: jnp.zeros(1)
result = integrator.integrate(
x0, u_func, t_span=(0, 10), brownian_path=brownian
)
print(f"Deterministic integration complete")
print(f"Final state: {result['x'][-1]}")
```
## Integrator Selection Guide {#sec-selection-guide}
### By Use Case
| Use Case | Recommended Approach | Rationale |
|----------|---------------------|-----------|
| General ODE | `IntegratorFactory.for_production(system)` | LSODA auto-stiffness detection |
| Neural ODE | `IntegratorFactory.for_neural_ode(system)` | Adjoint method for memory efficiency |
| Optimization | `IntegratorFactory.for_optimization(system)` | JAX with XLA compilation |
| High Accuracy | `IntegratorFactory.for_julia(system, 'Vern9')` | Julia 9th order methods |
| Stiff ODE | `ScipyIntegrator(system, method='BDF')` | Backward differentiation |
| Simple ODE | `RK4Integrator(system, dt=0.01)` | Classic fixed-step |
| General SDE | `SDEIntegratorFactory.auto(sde_system)` | Automatic noise detection |
| Monte Carlo | `SDEIntegratorFactory.for_monte_carlo(...)` | Parallelized trajectories |
### By Backend Capabilities
| Backend | ODE Integrator | SDE Integrator | Best For |
|---------|---------------|----------------|----------|
| NumPy | ScipyIntegrator, DiffEqPyIntegrator | DiffEqPySDEIntegrator | General purpose, highest compatibility |
| PyTorch | TorchDiffEqIntegrator | TorchSDEIntegrator | GPU acceleration, neural networks |
| JAX | DiffraxIntegrator | DiffraxSDEIntegrator | Optimization, functional programming |
### By System Properties
| System Type | Best Method | Convergence Order | Notes |
|-------------|-------------|-------------------|-------|
| Non-stiff | RK45, Tsit5, dopri5 | 5(4) | General purpose |
| Stiff | BDF, Radau, Rodas5 | Variable | Implicit methods required |
| Very stiff | BDF, LSODA | Variable | Automatic stiffness detection |
| High accuracy | Vern9, DOP853 | 9(8) or 8(5,3) | For precision-critical applications |
| Real-time | RK4, euler | 4 or 1 | Fixed step, predictable timing |
| Additive noise SDE | reversible_heun, adaptive_heun | 1.0 (strong) | Exploits simplified noise structure |
| General SDE | Euler-Maruyama | 0.5 (strong) | Robust for any noise type |
| Diagonal noise SDE | Milstein | 1.0 (strong) | Higher order for independent channels |
## Design Principles {#sec-design-principles}
### Backend Abstraction
All integrators provide consistent interfaces across computational backends. Switch between NumPy, PyTorch, and JAX without changing integration code:
```python
# Same interface, different backends
integrator_numpy = IntegratorFactory.auto(system, backend='numpy')
integrator_torch = IntegratorFactory.auto(system, backend='torch')
integrator_jax = IntegratorFactory.auto(system, backend='jax')
```
### Factory Pattern for Complexity Management
Factories encapsulate the complexity of choosing appropriate integrators:
```python
# User specifies intent, factory handles details
IntegratorFactory.for_production(system) # → LSODA
IntegratorFactory.for_neural_ode(system) # → dopri5 with adjoint
IntegratorFactory.for_optimization(system) # → tsit5 with JAX
```
### Unified Result Types
TypedDict results provide consistent structure with IDE support:
```python
result: IntegrationResult = integrator.integrate(...)
# IDE knows 'success', 'nfev', 'x', 't' are available
```
### Composition Over Inheritance
Integrators compose with systems rather than inheriting deeply:
```python
class Integrator:
def __init__(self, system: ContinuousSystemBase):
self.system = system # Composition
```
### Built-in Performance Tracking
All integrators track statistics automatically:
```python
print(f"Function evaluations: {integrator._stats['total_fev']}")
print(f"Integration steps: {integrator._stats['total_steps']}")
print(f"Computation time: {integrator._stats['total_time']:.4f}s")
```
## Key Strengths {#sec-strengths}
1. **Multi-backend support**: Seamless NumPy/PyTorch/JAX switching
2. **Extensive method library**: 40+ integration methods across ODE and SDE
3. **Centralized method registry**: Unified method normalization, validation, and discovery
4. **Canonical name support**: Use portable names (`euler_maruyama`, `rk45`) across backends
5. **Intelligent factories**: Automatic method selection based on system properties
6. **Type safety**: TypedDict results with full IDE support
7. **GPU acceleration**: First-class support via PyTorch and JAX
8. **XLA compilation**: Near-native performance with JAX
9. **Julia integration**: Access to world-class DifferentialEquations.jl ecosystem
10. **SDE support**: Comprehensive stochastic integration framework
11. **Noise exploitation**: Automatic detection and optimization for noise structure
12. **Monte Carlo**: Built-in multi-trajectory simulation
13. **Custom noise**: Deterministic testing with user-provided Brownian paths
14. **Production ready**: Professional error control and performance tracking
## Summary
The ControlDESymulation integration framework provides a comprehensive, production-ready solution for numerical integration across deterministic and stochastic systems. With support for multiple backends, extensive method libraries, and intelligent automation through factory patterns, it enables efficient development of control theory and machine learning applications requiring state-of-the-art numerical integration.