ControlDESymulation: Design Philosophy and Architecture
A Type-Driven, Composition-Based Framework for Dynamical Systems
Executive Summary
ControlDESymulation is a Python library for symbolic specification and multi-backend simulation of nonlinear dynamical systems. It embodies a type-driven, composition-based architecture that achieves three seemingly contradictory goals simultaneously:
- Mathematical Rigor - Proper control theory, stochastic processes, and linearization
- Software Engineering Excellence - Clean architecture, zero duplication, extensive testing
- Multi-Backend Performance - Seamless NumPy/PyTorch/JAX with GPU/XLA support
The library consists of 39 core files organized into 4 architectural layers, serving researchers in control theory, robotics, and machine learning.
Most users should interact with the framework at two levels:
Built-in Systems (cdesym.systems.builtin.*):

from cdesym import Pendulum, CartPole, LorenzSystem
system = Pendulum()
result = system.simulate(x0, u=None, t_span=(0, 10))

UI Framework (Define your own systems):

from cdesym import ContinuousSymbolicSystem
import sympy as sp

class MySystem(ContinuousSymbolicSystem):
    def define_system(self, m=1.0, k=10.0):
        # Define symbolic system
        pass
Lower layers (Type System, Delegation Layer, Integration Framework) are internal implementation details. Users should NOT directly instantiate:
- BackendManager, CodeGenerator, DynamicsEvaluator (Delegation Layer)
- IntegratorFactory, ScipyIntegrator, etc. (Integration Framework)
- TypedDict classes from Type System
The UI framework automatically composes these internal components and exposes their functionality through clean, user-facing methods. This document explains the internal architecture for framework developers and advanced users who need to understand how the framework works.
User Interaction Model
The Right Way to Use the Framework
The framework is designed with clear user-facing and internal boundaries:
✓ User-Facing APIs (Use These):
# Level 1: Use built-in systems
from cdesym import Pendulum, VanDerPol, LangevinDynamics
system = Pendulum()
result = system.simulate(x0, controller, t_span=(0, 10))
A, B = system.linearize(x_eq, u_eq)
# Level 2: Define custom systems
from cdesym import ContinuousSymbolicSystem
class MySystem(ContinuousSymbolicSystem):
def define_system(self, **params):
# Your symbolic definition
self.state_vars = [...]
self._f_sym = sp.Matrix([...])
system = MySystem()  # Framework handles all internal composition

✗ Internal APIs (Don’t Use These Directly):
# ❌ WRONG: Direct delegation layer access
from cdesym.systems.base.utils.backend_manager import BackendManager
from cdesym.systems.base.utils.code_generator import CodeGenerator
from cdesym.systems.base.utils.dynamics_evaluator import DynamicsEvaluator
backend = BackendManager() # Don't do this!
code_gen = CodeGenerator(system) # Framework creates these automatically
dynamics = DynamicsEvaluator(system, code_gen, backend) # Internal!
# ❌ WRONG: Direct integrator instantiation
from cdesym.systems.base.numerical_integration.scipy_integrator import ScipyIntegrator
integrator = ScipyIntegrator(system, method='RK45') # Don't do this!
# ✓ CORRECT: Use system interface
result = system.integrate(x0, u=None, t_span=(0, 10), method='RK45')
# Framework creates appropriate integrator internally

Why This Separation Matters
User-Facing Benefits:
- Simple, consistent interface across all systems
- No need to understand internal architecture
- Framework handles complexity automatically
- Changes to internals don’t break user code
Framework Developer Benefits:
- Can refactor internal components freely
- Clear separation of concerns
- Easier to maintain and test
- Well-defined extension points
When You Might Need Internal APIs
The only time to directly use internal components:
- Framework Extension - Adding new system types, integrators, or utilities
- Advanced Debugging - Diagnosing framework issues
- Performance Optimization - Custom integration workflows
- Research Purposes - Experimenting with new numerical methods
For these advanced use cases, refer to the architecture documentation for each layer.
Core Design Philosophy
1. Type-Driven Design
Principle: Types are not just annotations—they are the architecture.
The entire framework is built on a foundational type system (200+ types) that provides:
- Semantic Clarity: StateVector, GainMatrix instead of np.ndarray
- Type Safety: Static checking via mypy/pyright catches errors before runtime
- IDE Support: Autocomplete knows result['t'] exists and is TimePoints
- Self-Documentation: Type signatures encode mathematical constraints
Example comparison:
# ❌ Bad: Unclear types and semantics
def bad(x, u): # What are these? What dimensions? What backend?
return x + u
# ✓ Good: Clear semantics and constraints
def good(x: StateVector, u: ControlVector) -> StateVector:
"""Clear: state in, control in, state out. Works with any backend."""
return x + u

Impact: Every function signature is a mini-specification. New developers understand code by reading types.
2. Composition Over Inheritance
Principle: Systems compose specialized utilities rather than inheriting monolithic bases.
Traditional OOP would create deep inheritance hierarchies. We rejected this in favor of composition via delegation:
# ❌ NOT this (deep inheritance):
class System(BackendManager, CodeGenerator, DynamicsEvaluator, ...):
pass # 50 methods, unclear responsibilities
# ✓ YES this (composition):
class System:
def __init__(self):
self.backend = BackendManager() # Multi-backend support
self._code_gen = CodeGenerator() # Symbolic → numerical
self._dynamics = DynamicsEvaluator() # Forward evaluation
self._linearization = LinearizationEngine() # Jacobians
self.equilibria = EquilibriumHandler() # Named equilibria

Benefits:
| Benefit | Description |
|---|---|
| Single Responsibility | Each utility does one thing well |
| Testability | Test utilities in isolation |
| Reusability | Use BackendManager anywhere |
| Clarity | Explicit dependencies |
| Flexibility | Easy to swap implementations |
We DO use cooperative multiple inheritance in the UI framework—but only at the top level where it provides genuine value (avoiding duplication while maintaining clean interfaces).
3. Backend Agnosticism
Principle: Write once, run on NumPy/PyTorch/JAX without code changes.
Supporting multiple backends is not a feature—it’s a design constraint that forces better architecture:
# Same code works with all backends
def dynamics(x: StateVector, u: ControlVector) -> StateVector:
# x can be np.ndarray, torch.Tensor, or jax.Array
return -K @ x # Works with all!
# Backend switching is trivial
system.set_default_backend('torch')
system.to_device('cuda:0')

Architectural implications:
- ArrayLike Union Type: All array types accept Union[np.ndarray, torch.Tensor, jnp.ndarray]
- BackendManager Utility: Centralized backend detection and conversion
- Per-Backend Caching: Code generated once per backend, then cached
- Device Management: Automatic GPU placement when available
Result: Users can start with NumPy for prototyping, switch to PyTorch for neural ODEs, or JAX for optimization—with zero code changes.
4. Zero Code Duplication
Principle: Every line of code should exist in exactly one place.
We eliminated ~1,800 lines of duplication between continuous and discrete systems through strategic abstraction:
Before refactoring: Continuous and discrete systems each had:
- Parameter handling
- Backend management
- Code generation
- Symbolic validation
- Configuration persistence
After refactoring: SymbolicSystemBase provides shared functionality:
- All parameter logic: ONE implementation
- All backend logic: ONE BackendManager
- All code generation: ONE CodeGenerator
- All validation: ONE SymbolicValidator
Implementation via cooperative multiple inheritance:
Layer 0: SymbolicSystemBase (shared foundation)
|
+----+----+
| |
Layer 1: ContinuousSystemBase, DiscreteSystemBase (time-domain specific)
| |
+----------+-----------+
|
Layer 2: ContinuousSymbolicSystem, DiscreteSymbolicSystem
This isn’t inheritance for convenience—it’s strategic abstraction to eliminate duplication while maintaining clarity.
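The cooperative pattern can be sketched in plain Python; the class names here are illustrative stand-ins, not the framework's real bases:

```python
class SymbolicBase:
    """Layer 0 stand-in: consumes parameters, then continues the MRO chain."""
    def __init__(self, **params):
        self.params = params
        super().__init__()  # cooperative: hand off to the next class in the MRO

class ContinuousBase:
    """Layer 1 stand-in: time-domain interface."""
    def __init__(self):
        super().__init__()
        self.time_domain = "continuous"

class ContinuousSymbolic(SymbolicBase, ContinuousBase):
    """Layer 2: gets both capabilities via Python's C3 linearization."""
    pass

system = ContinuousSymbolic(m=1.0)
print([c.__name__ for c in ContinuousSymbolic.__mro__])
# ['ContinuousSymbolic', 'SymbolicBase', 'ContinuousBase', 'object']
print(system.params, system.time_domain)
```

A single constructor call initializes both bases exactly once, which is what makes the shared-foundation layering safe.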
5. Structured Results via TypedDict
Principle: Never return plain dictionaries—use TypedDict for structure and safety.
# ❌ BAD: Plain dict (no IDE support, no type checking)
def integrate() -> dict:
return {'t': t, 'x': x, 'success': True}
# ✓ GOOD: TypedDict (type-safe, self-documenting)
def integrate() -> IntegrationResult:
return {
't': t, # TimePoints - IDE knows this
'x': x, # StateTrajectory - IDE knows this
'success': True, # bool - IDE knows this
'nfev': 100, # int - Required field
'integration_time': 0.5,
'solver': 'RK45'
}

Benefits:
- ✓ Type checker ensures all required fields present
- ✓ IDE autocompletes field names
- ✓ Documentation embedded in type definition
- ✓ Optional fields clearly marked (total=False)
- ✓ Refactoring safe (rename propagates)
Used throughout:
| Result Type | Purpose |
|---|---|
| IntegrationResult | ODE integration output |
| SDEIntegrationResult | SDE integration output |
| ExecutionStats | Performance metrics |
| ValidationResult | System validation status |
| BackendConfig | Configuration data |
6. Protocol-Based Interfaces
Principle: Define interfaces via Protocol (structural typing) not inheritance.
Protocols enable duck typing with type safety:
from typing import Protocol
class DynamicalSystemProtocol(Protocol):
"""Any class satisfying this structure is a dynamical system."""
@property
def nx(self) -> int: ...
@property
def nu(self) -> int: ...
def __call__(self, x: StateVector, u: ControlVector) -> StateVector: ...
# No inheritance needed!
class MySystem: # Doesn't inherit from anything
@property
def nx(self) -> int:
return 2
@property
def nu(self) -> int:
return 1
def __call__(self, x: StateVector, u: ControlVector) -> StateVector:
return x + u
# Satisfies protocol structurally
system: DynamicalSystemProtocol = MySystem()  # ✓ Type checker approves!

Advantages:
- No inheritance coupling
- Structural subtyping (like Go interfaces)
- Easy to implement interfaces
- Compose protocols naturally
- Third-party types work automatically
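Protocols can also be checked at runtime when decorated with @runtime_checkable; a self-contained sketch (ToySystem is hypothetical, not a framework class):

```python
from typing import Protocol, runtime_checkable
import numpy as np

@runtime_checkable
class DynamicalSystemProtocol(Protocol):
    @property
    def nx(self) -> int: ...
    @property
    def nu(self) -> int: ...
    def __call__(self, x: np.ndarray, u: np.ndarray) -> np.ndarray: ...

class ToySystem:
    """No inheritance: satisfies the protocol purely by structure."""
    @property
    def nx(self) -> int:
        return 2
    @property
    def nu(self) -> int:
        return 1
    def __call__(self, x: np.ndarray, u: np.ndarray) -> np.ndarray:
        return x + u  # placeholder dynamics

system = ToySystem()
print(isinstance(system, DynamicalSystemProtocol))  # True
```

Note that the runtime isinstance check only verifies member presence, not signatures; full signature checking remains the static checker's job.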
7. Factory Pattern for Complex Creation
Principle: Hide complexity behind simple factory methods.
Creating integrators involves choosing backends, methods, and configurations. Factories simplify this:
# ❌ Instead of this complexity:
if backend == 'numpy':
if method == 'RK45':
return ScipyIntegrator(system, method='RK45', rtol=1e-6)
elif method == 'Tsit5':
return DiffEqPyIntegrator(system, algorithm='Tsit5')
elif backend == 'torch':
return TorchDiffEqIntegrator(system, method='dopri5')
# ... 50 more cases
# ✓ We provide this simplicity:
integrator = IntegratorFactory.auto(system)
# or
integrator = IntegratorFactory.for_production(system)
# or
integrator = IntegratorFactory.for_neural_ode(system)

Available factory methods:
| Method | Purpose |
|---|---|
| auto() | Best integrator for system/backend |
| for_production() | LSODA/AutoTsit5 for reliability |
| for_optimization() | JAX tsit5 for speed |
| for_neural_ode() | PyTorch dopri5 with adjoint |
| for_julia() | Highest performance |
| create() | Full control when needed |
Result: Simple interface for common cases, full control when needed.
8. Semantic Naming
Principle: Names should convey mathematical meaning, not implementation details.
Good semantic names:
- ✓ StateVector, not ArrayLike: conveys it’s a state
- ✓ GainMatrix, not Matrix: conveys it’s for feedback control
- ✓ DynamicsEvaluator, not FunctionCaller: conveys purpose
- ✓ LinearizationEngine, not JacobianComputer: conveys operation
Bad implementation names:
- ✗ data: what data?
- ✗ arr1, arr2: meaningless
- ✗ compute(): compute what?
- ✗ process_stuff(): what stuff?
Impact: Code reads like mathematical papers. Control theorists immediately understand.
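Semantic names cost nothing at runtime: they can be plain aliases of the backend array type. A minimal sketch (the library's actual alias definitions are assumed, not shown):

```python
import numpy as np

# Semantic aliases: same runtime type, richer meaning in signatures.
StateVector = np.ndarray      # x in R^nx
ControlVector = np.ndarray    # u in R^nu
GainMatrix = np.ndarray       # K in R^(nu x nx), for u = -K x

def state_feedback(K: GainMatrix, x: StateVector) -> ControlVector:
    """Reads like the control law u = -K x that it implements."""
    return -K @ x

K = np.array([[1.0, 2.0]])
x = np.array([3.0, 4.0])
print(state_feedback(K, x))  # [-11.]
```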
9. Progressive Disclosure of Complexity
Principle: Simple things should be simple, complex things should be possible.
The framework provides three levels of interaction, each building on the previous:
Level 1 - Beginner (Use Built-in Systems):
from cdesym import Pendulum, CartPole, LorenzSystem
# Use pre-defined systems from cdesym.systems.builtin
system = Pendulum()
result = system.simulate(x0, u=np.zeros(1), t_span=(0, 10))
# All internal complexity handled automatically

Level 2 - Intermediate (Define Custom Systems):
from cdesym import ContinuousSymbolicSystem
import sympy as sp
# Define your own system using UI framework
class MySystem(ContinuousSymbolicSystem):
def define_system(self, m=1.0, k=10.0):
x, v = sp.symbols('x v', real=True)
u = sp.symbols('u', real=True)
self.state_vars = [x, v]
self.control_vars = [u]
self._f_sym = sp.Matrix([v, -k*x/m + u/m])
system = MySystem(m=2.0) # Framework composes internals
result = system.integrate(x0, u=None, t_span=(0, 10))

Level 3 - Expert (Framework Extension):
# ⚠️ Advanced: Only for framework developers
# Directly use internal APIs for custom integrators, utilities, etc.
# Custom integrator implementation
from cdesym.systems.base.numerical_integration import IntegratorBase
class MyCustomIntegrator(IntegratorBase):
def step(self, x, u, dt):
# Custom integration logic
pass
# Custom utility for system composition
from cdesym.systems.base.utils import BackendManager
class MyUtility:
def __init__(self, system, backend_mgr: BackendManager):
# Custom utility using internal components
pass

- Beginners: Use built-in systems from cdesym.systems.builtin.*
- Intermediate: Subclass ContinuousSymbolicSystem or DiscreteSymbolicSystem
- Advanced: Only access Layers 0-2 when extending the framework itself
Principle applied:
- Default arguments for common cases
- Progressive power through optional parameters
- Expert features available but hidden from typical users
- Internal complexity encapsulated at each level
Architectural Layers
The following sections describe the internal architecture of the framework. Typical users do not need to understand these layers and should not directly instantiate components from Layers 0-2.
Users should work with:

- Built-in systems from cdesym.systems.builtin.*
- The UI framework, by subclassing ContinuousSymbolicSystem or DiscreteSymbolicSystem
The internal layers are documented here for:

- Framework contributors and maintainers
- Advanced users implementing custom integrators or utilities
- Researchers studying the framework architecture
The library consists of 4 distinct architectural layers, each with clear responsibilities:
Layer 0: Type System (Foundation)
Purpose: Foundational types and structured results
Files: 7 modules, 200+ types
Key components:
| Module | Purpose |
|---|---|
| core.py | Vectors, matrices, functions |
| backends.py | Backend enums, configs |
| trajectories.py | Time series results |
| linearization.py | Jacobian types |
| symbolic.py | SymPy integration |
| protocols.py | Abstract interfaces |
| utilities.py | Type guards, helpers |
Design principles:
- Semantic over structural naming
- Backend-agnostic unions
- TypedDict for all results
- Protocol-based interfaces
Impact: Every layer above uses these types. Changes here propagate everywhere—so we keep them stable and well-designed.
Layer 1: Delegation Layer (Services)
Purpose: Specialized utilities via composition
Files: 11 modules
Core utilities:
- BackendManager: Multi-backend support
- CodeGenerator: Symbolic → numerical
- EquilibriumHandler: Named equilibria
- SymbolicValidator: System validation
Deterministic services:
- DynamicsEvaluator: Forward dynamics
- LinearizationEngine: Jacobians
- ObservationEngine: Output evaluation
Stochastic services:
- DiffusionHandler: SDE diffusion
- NoiseCharacterizer: Noise analysis
- SDEValidator: SDE validation
Low-level:
- codegen_utils: SymPy code generation
Design principles:
- Single responsibility per utility
- Composition not inheritance
- Dependency injection
- Lazy initialization with caching
Impact: UI framework composes these utilities. Each utility is independently testable and reusable.
Layer 2: Integration Framework (Numerical Methods)
Purpose: Multi-backend numerical integration
Files: 13 modules
Deterministic (ODE) integrators:
| Integrator | Backend | Purpose |
|---|---|---|
| ScipyIntegrator | NumPy | scipy.integrate.solve_ivp |
| TorchDiffEqIntegrator | PyTorch | GPU acceleration + autograd |
| DiffraxIntegrator | JAX | XLA compilation |
| DiffEqPyIntegrator | Julia | Highest performance |
| FixedStepIntegrators | Any | Euler, RK4, Midpoint |
Stochastic (SDE) integrators:
| Integrator | Backend | Purpose |
|---|---|---|
| TorchSDEIntegrator | PyTorch | GPU SDE integration |
| DiffraxSDEIntegrator | JAX | XLA-compiled SDEs |
| DiffEqPySDEIntegrator | Julia | Production-grade SDEs |
| CustomBrownianPath | JAX | Custom noise for testing |
Design principles:
- Factory pattern for creation
- Unified result types (TypedDict)
- Backend abstraction
- Performance tracking
Supported methods: 40+ integration methods across 4 backends
Layer 3: UI Framework (User-Facing Systems)
Purpose: Symbolic system definition and high-level interface
Files: 8 modules
Architecture hierarchy:
Layer 0: SymbolicSystemBase
└─ Time-agnostic foundation
Layer 1: Time-domain bases
├─ ContinuousSystemBase
└─ DiscreteSystemBase
Layer 2: Concrete implementations
├─ ContinuousSymbolicSystem
└─ DiscreteSymbolicSystem
Layer 3: Stochastic extensions
├─ ContinuousStochasticSystem
└─ DiscreteStochasticSystem
Key responsibilities:
| Component | Responsibility |
|---|---|
| SymbolicSystemBase | Symbolic variables, parameters, code generation, backend management, equilibria, config |
| ContinuousSystemBase | Continuous-time interface (dx/dt = f) |
| DiscreteSystemBase | Discrete-time interface (x[k+1] = f) |
| ContinuousSymbolicSystem | Combine symbolic + continuous |
| DiscreteSymbolicSystem | Combine symbolic + discrete |
| Stochastic Systems | Add diffusion handling |
| DiscretizedSystem | Continuous → discrete conversion |
Design principles:
- Cooperative multiple inheritance (strategic use only)
- Zero code duplication
- Template method pattern
- Composition for utilities
Design Patterns Used
1. Template Method Pattern
Where: All system base classes
How: Base class defines workflow, subclasses fill in details
class SymbolicSystemBase(ABC):
def __init__(self, *args, **kwargs):
# 1. Call user's define_system()
self.define_system(*args, **kwargs)
# 2. Validate
self._validator.validate(self)
# 3. Initialize utilities
self._setup_utilities()
# 4. Compile functions
self._compile()
@abstractmethod
def define_system(self, **params):
"""User implements this."""
pass

Benefit: Consistent initialization workflow. Users only implement define_system().
2. Factory Method Pattern
Where: IntegratorFactory, SDEIntegratorFactory
How: Factory methods create appropriate concrete classes
class IntegratorFactory:
@classmethod
def create(cls, system, backend, method, **opts):
"""Create appropriate integrator based on inputs."""
if backend == 'numpy':
if method in SCIPY_METHODS:
return ScipyIntegrator(system, method, **opts)
elif method in JULIA_METHODS:
return DiffEqPyIntegrator(system, method, **opts)
elif backend == 'torch':
return TorchDiffEqIntegrator(system, method, **opts)
# ...
@classmethod
def auto(cls, system):
"""Best integrator for system."""
backend = system.backend.default_backend
method = cls._BACKEND_DEFAULTS[backend]
return cls.create(system, backend, method)

Benefit: Users get the right integrator without needing to know the details.
3. Strategy Pattern
Where: Integration methods
How: Different algorithms (strategies) with same interface
# All integrators implement same interface
class IntegratorBase(ABC):
@abstractmethod
def integrate(self, x0, u_func, t_span) -> IntegrationResult:
pass
# Different strategies
integrator = ScipyIntegrator(system, method='RK45') # Strategy 1
integrator = DiffraxIntegrator(system, method='tsit5') # Strategy 2
# Same interface
result = integrator.integrate(x0, u_func, t_span)

Benefit: Swap integration methods without code changes.
4. Dependency Injection
Where: All delegation layer utilities
How: Dependencies injected via constructor
class DynamicsEvaluator:
def __init__(
self,
system: SymbolicSystemBase,
code_gen: CodeGenerator,
backend_mgr: BackendManager
):
# Dependencies injected, not created internally
self.system = system
self.code_gen = code_gen
self.backend_mgr = backend_mgr

Benefit: Easy to test (dependencies can be mocked), and dependencies are explicit.
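Because nothing is created internally, a utility can be exercised with stand-in dependencies; a sketch using unittest.mock against a trimmed-down evaluator (not the real class):

```python
from unittest.mock import Mock

class DynamicsEvaluator:
    """Trimmed-down sketch: dependencies arrive via the constructor."""
    def __init__(self, system, code_gen, backend_mgr):
        self.system = system
        self.code_gen = code_gen
        self.backend_mgr = backend_mgr

    def evaluate(self, x, u):
        # Ask the code generator for a compiled function, then call it.
        f = self.code_gen.generate_dynamics(self.backend_mgr.default_backend)
        return f(x, u)

# Inject mocks: no symbolic machinery or real backend needed for the test.
code_gen = Mock()
code_gen.generate_dynamics.return_value = lambda x, u: x + u
backend_mgr = Mock(default_backend='numpy')

evaluator = DynamicsEvaluator(system=Mock(), code_gen=code_gen,
                              backend_mgr=backend_mgr)
print(evaluator.evaluate(1.0, 2.0))  # 3.0
code_gen.generate_dynamics.assert_called_once_with('numpy')
```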
5. Lazy Initialization
Where: Code generation, function compilation
How: Generate/compile on first use, cache result
class CodeGenerator:
def generate_dynamics(self, backend):
# Check cache first
if self._f_funcs[backend] is not None:
return self._f_funcs[backend] # Instant
# Generate only if needed
func = self._compile_dynamics(backend)
# Cache for next time
self._f_funcs[backend] = func
return func

Benefit: Fast startup, compile only what’s needed.
6. Observer Pattern
Where: Performance statistics, validation
How: Utilities track events and report statistics
class DynamicsEvaluator:
def evaluate(self, x, u):
start = time.time()
result = self._f_func(x, u)
elapsed = time.time() - start
# Update statistics
self._stats['calls'] += 1
self._stats['total_time'] += elapsed
return result
def get_stats(self) -> ExecutionStats:
return self._stats

Benefit: Built-in performance monitoring.
Mathematical Rigor
Control Theory Foundations
The library implements proper control theory:
1. State-Space Representation
\[ \begin{aligned} \text{Continuous:} \quad & \dot{x} = f(x, u, t) \\ & y = h(x, t) \\ \\ \text{Discrete:} \quad & x[k+1] = f(x[k], u[k]) \\ & y[k] = h(x[k]) \end{aligned} \]
2. Linearization
\[ \begin{aligned} \delta\dot{x} &= A\cdot\delta x + B\cdot\delta u && \text{(continuous)} \\ \delta x[k+1] &= A_d\cdot\delta x[k] + B_d\cdot\delta u[k] && \text{(discrete)} \end{aligned} \]
where:

- \(A = \frac{\partial f}{\partial x}\) (state Jacobian)
- \(B = \frac{\partial f}{\partial u}\) (control Jacobian)
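These Jacobian definitions can be checked numerically with central differences; a standalone sketch for a pendulum, whose analytic state Jacobian at the downward equilibrium is [[0, 1], [-g/l, 0]]:

```python
import numpy as np

g, l = 9.81, 1.0

def f(x, u):
    """Pendulum dynamics: x = [theta, omega]."""
    return np.array([x[1], -(g / l) * np.sin(x[0]) + u[0]])

def state_jacobian_fd(f, x, u, eps=1e-6):
    """Central finite differences for A = df/dx, built column by column."""
    n = x.size
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        A[:, j] = (f(x + e, u) - f(x - e, u)) / (2.0 * eps)
    return A

x_eq, u_eq = np.zeros(2), np.zeros(1)
A = state_jacobian_fd(f, x_eq, u_eq)
print(np.round(A, 4))  # matches the analytic [[0, 1], [-g/l, 0]]
```

The same finite-difference check is what the property-based tests later in this document rely on implicitly: symbolic and numerical Jacobians must agree.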
3. Higher-Order Systems
For order \(n\) system \(q^{(n)} = f(q, \dot{q}, \ldots, q^{(n-1)}, u)\):
\[ \begin{aligned} \text{State:} \quad & x = [q, \dot{q}, \ldots, q^{(n-1)}]^T \\ \text{Dynamics:} \quad & \dot{x} = [\dot{q}, \ddot{q}, \ldots, q^{(n)}]^T \end{aligned} \]
4. Stochastic Processes
\[ \begin{aligned} \text{Itô:} \quad & dx = f(x,u)dt + g(x,u)dW \\ \text{Stratonovich:} \quad & dx = f(x,u)dt + g(x,u)\circ dW \end{aligned} \]
Noise types:
- Additive: \(g(x,u) = G\) (constant)
- Multiplicative: \(g\) depends on \(x\) or \(u\)
- Diagonal: Independent noise channels
- Scalar: Single Wiener process
Numerical Methods
ODE Solvers (40+ methods):
| Category | Methods |
|---|---|
| Explicit RK | RK45, Tsit5, Vern9, dopri5 |
| Implicit | Radau, BDF, Rodas5 |
| Auto-stiffness | LSODA, AutoTsit5 |
| Fixed-step | Euler, RK4, Midpoint |
SDE Solvers:
| Method | Convergence | Noise Type |
|---|---|---|
| Euler-Maruyama | Strong 0.5 | General |
| Milstein | Strong 1.0 | Diagonal |
| Heun | Strong 1.0 | Additive |
| Stochastic RK | Variable | General |
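For reference, the lowest-order scheme in the table, Euler-Maruyama, fits in a few lines; a minimal NumPy sketch for a scalar Ornstein-Uhlenbeck process (this is illustrative, not the framework's SDE integrators):

```python
import numpy as np

def euler_maruyama(f, g, x0, t_span, dt, rng):
    """Euler-Maruyama for dx = f(x) dt + g(x) dW (strong order 0.5)."""
    t0, t1 = t_span
    n = round((t1 - t0) / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x

# Ornstein-Uhlenbeck process: dx = -theta*x dt + sigma dW (additive noise)
theta, sigma = 1.0, 0.1
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma,
                      x0=1.0, t_span=(0.0, 5.0), dt=1e-3, rng=rng)
print(path[-1])  # mean decays like exp(-theta*t), plus small fluctuations
```

Production-grade schemes (Milstein, stochastic RK) add correction terms on top of exactly this update structure.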
Performance Considerations
1. Caching Strategy
Three-level cache:
- Symbolic Cache: Jacobians computed once symbolically
- Per-Backend Cache: Compiled functions per backend
- Equilibrium Cache: Linearizations at equilibria
# First call: symbolic computation + compilation
A, B = system.linearize(x_eq, u_eq)
# Second call: cached (instant)
A, B = system.linearize(x_eq, u_eq)

2. Backend Optimization
NumPy:
- Common subexpression elimination (CSE)
- Fast numerical modules
- Vectorized operations
PyTorch:
- Symbolic simplification before codegen
- GPU tensor operations
- Automatic differentiation
- Adjoint method for memory efficiency
JAX:
- JIT compilation via jax.jit
- XLA optimization
- Pure functional style
- Automatic vectorization (vmap)
3. Batching Support
All evaluators support batched operations:
# Single evaluation
dx = system(x, u) # x: (nx,), u: (nu,) → dx: (nx,)
# Batched evaluation (possible 100x speedup over loop)
dx_batch = system(x_batch, u_batch)
# x: (100, nx), u: (100, nu) → dx: (100, nx)

4. GPU Acceleration
PyTorch:
system.set_default_backend('torch')
system.to_device('cuda:0')
x = torch.tensor([[1.0, 0.0]], device='cuda:0')
dx = system(x, u)  # Computed on GPU

JAX:
system.set_default_backend('jax')
system.to_device('cuda:0')
x = jnp.array([1.0, 0.0])
dx = jax.jit(system)(x, u)  # XLA compiled, GPU enabled

Testing Philosophy
1. Type-Driven Testing
Types guide what to test:
def test_dynamics_signature():
"""Type annotations specify contract."""
x: StateVector = np.array([1.0, 0.0])
u: ControlVector = np.array([0.5])
dx: StateVector = system(x, u)
assert isinstance(dx, np.ndarray)
assert dx.shape == (system.nx,)

2. Property-Based Testing
Test mathematical properties:
def test_linearization_is_linear():
"""Linearization should be linear in δx and δu."""
A, B = system.linearize(x_eq, u_eq)
δx1, δx2 = np.random.randn(2, nx)
# Linearity: f(αx₁ + βx₂) = αf(x₁) + βf(x₂)
α, β = 0.3, 0.7
lhs = A @ (α*δx1 + β*δx2)
rhs = α*(A @ δx1) + β*(A @ δx2)
np.testing.assert_allclose(lhs, rhs)

3. Multi-Backend Consistency
Same results across backends:
def test_backend_consistency():
"""NumPy, PyTorch, JAX should agree."""
x_np = np.array([1.0, 0.0])
dx_np = system(x_np, backend='numpy')
dx_torch = system(torch.tensor(x_np), backend='torch')
dx_jax = system(jnp.array(x_np), backend='jax')
np.testing.assert_allclose(dx_np, dx_torch.numpy())
np.testing.assert_allclose(dx_np, np.array(dx_jax))

4. Regression Testing
Critical numerical values frozen:
def test_pendulum_energy_conservation():
"""Known system should have expected behavior."""
system = Pendulum(m=1.0, l=1.0, g=9.81)
# Energy should be conserved (no damping)
E0 = compute_energy(x0)
x_final = system.simulate(x0, u=None, t_span=(0, 10))['x'][-1]
E_final = compute_energy(x_final)
np.testing.assert_allclose(E0, E_final, rtol=1e-6)

Documentation Philosophy
1. Self-Documenting Code
Code should be readable without comments:
# ❌ Bad
def f(x, u, m): # What is this?
return x[1], -m*x[0] + u
# ✓ Good
def compute_dynamics(
state: StateVector,
control: ControlVector,
stiffness: float
) -> StateVector:
"""
Compute dynamics for mass-spring system.
Parameters
----------
state : StateVector
[position, velocity]
control : ControlVector
Applied force
stiffness : float
Spring constant k
Returns
-------
StateVector
[velocity, acceleration]
"""
position, velocity = state
force = control
acceleration = -stiffness * position + force
return np.array([velocity, acceleration])

2. Examples in Docstrings
Every public function has usage examples:
def linearize(
self,
x_eq: EquilibriumState,
u_eq: EquilibriumControl
) -> DeterministicLinearization:
"""
Compute linearization at equilibrium.
Returns state and control Jacobians (A, B).
Examples
--------
>>> # Linearize at origin
>>> A, B = system.linearize(
... x_eq=np.zeros(2),
... u_eq=np.zeros(1)
... )
>>> print(A.shape) # (2, 2)
>>> print(B.shape) # (2, 1)
>>>
>>> # Check stability
>>> eigenvalues = np.linalg.eigvals(A)
>>> stable = np.all(np.real(eigenvalues) < 0)
"""3. Mathematical Documentation
Explain theory behind code:
"""
Linearization Engine for Dynamical Systems
Mathematical Background
-----------------------
For a nonlinear system:
dx/dt = f(x, u)
The linearization at (x_eq, u_eq) is:
δẋ = A·δx + B·δu
where:
A = ∂f/∂x|(x_eq, u_eq) ∈ ℝ^(nx×nx) (State Jacobian)
B = ∂f/∂u|(x_eq, u_eq) ∈ ℝ^(nx×nu) (Control Jacobian)
This enables:
- Stability analysis via eigenvalues of A
- LQR controller design
- Observer design (Kalman filter)
- Small-signal analysis
"""

4. Architecture Documents
High-level guides (like this one!) explain design philosophy and patterns.
Error Handling Philosophy
1. Fail Fast, Fail Clearly
Detect errors as early as possible with clear messages:
# ❌ Bad
def compute(x):
return x[5] # IndexError: vague
# ✓ Good
def compute(state: StateVector) -> float:
if len(state) < 6:
raise ValueError(
f"State must have at least 6 elements for this computation. "
f"Got {len(state)} elements: {state}"
)
return state[5]

2. Validation at Construction
Catch errors during __init__, not during use:
class System(SymbolicSystemBase):
def define_system(self):
# Bad parameter type
self.parameters = {'m': 1.0} # String key!
# Validation catches this immediately:
# ValueError: Parameter keys must be Symbol, not str.
# Found string key: 'm'
# Use: m_sym = sp.symbols('m'); parameters = {m_sym: 1.0}

3. Type Checking Before Runtime
Use type annotations + mypy to catch errors before running:
$ mypy src/
error: Argument 1 to "compute" has incompatible type "List[float]";
expected "ndarray[Any, dtype[Any]]"

4. Helpful Error Messages
Include context and solutions:
if x.shape[0] != self.nx:
raise ValueError(
f"State dimension mismatch.\n"
f"Expected: {self.nx} (from system definition)\n"
f"Got: {x.shape[0]} (from input)\n"
f"State: {x}\n"
f"Hint: Check that state vector has correct dimension."
)

Extension Points
The architecture provides clear extension points:
1. Add New System Type
class MyCustomSystem(SymbolicSystemBase):
"""Just implement define_system()."""
def define_system(self, **params):
# Define symbolic system
self.state_vars = [...]
self._f_sym = sp.Matrix([...])
self.parameters = {...}
def setup_equilibria(self):
"""Optional: Auto-set equilibria upon instantiation."""
# Define custom equilibrium setup
pass

2. Add New Integrator
class MyIntegrator(IntegratorBase):
"""Implement abstract methods."""
def step(self, x, u, dt):
# Single step logic
pass
def integrate(self, x0, u_func, t_span):
# Multi-step logic
return IntegrationResult(...)

3. Add New Utility
class MyUtility:
"""Independent utility via composition."""
def __init__(self, system):
self.system = system
def my_operation(self):
# Custom operation
pass
# Use via composition
system._my_utility = MyUtility(system)

4. Add New Backend
# 1. Add to Backend type
Backend = Literal["numpy", "torch", "jax", "my_backend"]
# 2. Extend BackendManager
class BackendManager:
def _convert_to_backend(self, arr, backend):
if backend == "my_backend":
return my_backend.array(arr)
# ...
# 3. Add to codegen_utils
def generate_function(expr, vars, backend):
if backend == "my_backend":
return my_backend.lambdify(...)
# ...

Trade-offs and Decisions
1. Cooperative Multiple Inheritance
Decision: Use cooperative multiple inheritance ONLY in UI framework Layer 2
Pros:
- Eliminates ~1,800 lines of duplication
- Clean interfaces (ContinuousSymbolicSystem has both capabilities)
- Python’s MRO handles it correctly with super()
Cons:
- Can be confusing if overused
- Requires careful design
Why limited use: We restrict it to where it provides genuine value—the top-level system classes that need both symbolic machinery and time-domain interfaces.
2. TypedDict vs Dataclass
Decision: Use TypedDict for results, not dataclass
Pros:
- Compatible with plain dictionaries (gradual typing)
- No runtime overhead
- Works with JSON serialization
Cons:
- Not as pythonic as dataclass
- No default values (use total=False instead)
Why TypedDict: Integration results come from external libraries (scipy, etc.) as dictionaries. TypedDict lets us type them without conversion.
3. Backend Support
Decision: Support NumPy, PyTorch, JAX (not TensorFlow)
| Backend | Rationale |
|---|---|
| NumPy | Universal, stable, CPU |
| PyTorch | Neural networks, GPU, mature ecosystem |
| JAX | Functional, JIT, XLA, research-friendly |
| TensorFlow | ✗ Skipped due to complexity, declining use in research |
Why these three: Cover 95% of use cases with minimal complexity.
4. Symbolic Engine
Decision: Use SymPy (not custom symbolic engine)
Rationale:
- ✓ Mature, well-tested symbolic math
- ✓ Excellent documentation
- ✓ Large community
- ✗ Can be slow for very large systems
- ✗ Limited control over simplification
Why SymPy: Reinventing symbolic math is not our value proposition. SymPy is battle-tested.
5. Testing Framework
Decision: pytest (not unittest)
Rationale:
- ✓ Less boilerplate
- ✓ Better fixtures
- ✓ Parametrized tests
- ✓ Better assertions
Why pytest: Industry standard, developer-friendly.
Future Directions
Active Development
Features being implemented prior to release:
Classical Control Theory
- Stability, controllability, and observability metrics
- Kalman Filter, Luenberger Observer design
- Linear Quadratic (Gaussian) Regulator control design
- Callable controllers
Visualization
- Plotly-based interactive plotting
- Trajectory visualization across all variables
- Phase portrait visualization in 2D and 3D
Planned Features
- RL Environment Synthesis
  - Gymnasium library interface compatibility
  - Export of Gymnasium environments from symbolic dynamics
- Synthetic Data Generation
  - Classes and methods for generating synthetic physical data
  - Export in standard formats
- Parameter and Uncertainty Estimation
  - System identification
  - Bayesian inference
  - Adaptive control
  - Conformal methods
  - Sobol indices
  - Morris screening
- Neural Controller Design
  - Protocol interface for backend-agnostic functionality
  - Neural controller training
  - Neural certificate function construction and verification
    - Lyapunov, barrier, and contraction-metric certificates
  - Forward and backward reachability analysis
- Model Predictive Control (MPC)
  - Receding horizon optimization
  - Constraint handling
  - Real-time capability
  - Integration with do-mpc, CasADi, acados
- Advanced Stochastic Methods
  - Particle filters
  - Stochastic MPC
  - Noisy measurement models
  - Other robust and/or stochastic control techniques
- System Composition
  - Connector protocol interfaces to couple multiple subsystems
Potential Future Extensions
Long-term possibilities:
- Hybrid Systems
  - Switched dynamics
  - Hybrid automata
  - Jump/flow dynamics
- Distributed Systems
  - Multi-agent dynamics
  - Network topology
  - Consensus protocols
- Delay Systems
  - Time-delayed feedback
  - DDE integration
  - Delayed stability analysis
- PDE Systems
  - Spatiotemporal dynamics
  - Finite/discrete element methods
  - Spectral methods
Conclusion
ControlDESymulation demonstrates that mathematical rigor, software engineering excellence, and multi-backend performance are not competing goals—they are mutually reinforcing when built on a foundation of:
- Type-Driven Design - Types are architecture
- Composition Over Inheritance - Build with utilities
- Backend Agnosticism - Write once, run anywhere
- Zero Duplication - Strategic abstraction
- Structured Results - TypedDict everywhere
- Protocol Interfaces - Duck typing with safety
- Factory Patterns - Hide complexity
- Semantic Naming - Code reads like math
- Progressive Disclosure - Simple to complex
The result is a library where:
- Control theorists find familiar mathematics
- Software engineers find clean architecture
- ML researchers find GPU acceleration
- Students find gentle learning curves
- Experts find power and flexibility
The code is organized into 4 architectural layers, implementing 200+ types and 55+ integration methods—all serving a single vision: symbolic dynamical systems done right.
Appendix: Statistics Summary
Type Distribution
| Category | Count | Examples |
|---|---|---|
| Vector Types | 15+ | StateVector, ControlVector |
| Matrix Types | 30+ | StateMatrix, GainMatrix |
| Function Types | 10+ | DynamicsFunction, ControlPolicy |
| Backend Types | 20+ | Backend, Device, NoiseType |
| Trajectory Types | 15+ | StateTrajectory, IntegrationResult |
| Linearization Types | 15+ | DeterministicLinearization |
| Symbolic Types | 10+ | SymbolicExpression |
| Protocol Types | 20+ | DynamicalSystemProtocol |
| Utility Types | 20+ | ExecutionStats, TypeGuards |
| TypedDict Results | 15+ | IntegrationResult |
| TOTAL | 200+ | Complete type system |
Integration Methods
| Category | Count | Examples |
|---|---|---|
| NumPy (scipy) | 6 | RK45, LSODA, BDF, Radau |
| NumPy (Julia) | 20+ | Tsit5, Vern9, Rodas5, AutoTsit5 |
| PyTorch | 8 | dopri5, dopri8, adaptive_heun |
| JAX | 8 | tsit5, dopri5, heun, ralston |
| Fixed-step | 3 | euler, midpoint, rk4 |
| SDE Methods | 10+ | euler-maruyama, milstein, heun |
| TOTAL | 55+ | Comprehensive coverage |
Design Patterns
| Pattern | Count | Where Used |
|---|---|---|
| Template Method | 8 | All system base classes |
| Factory Method | 2 | Integrator/SDE factories |
| Strategy | 55+ | All integration methods |
| Dependency Injection | 11 | All delegation utilities |
| Lazy Initialization | 7 | Code generation, caching |
| Observer | 5 | Performance statistics |
| Protocol | 20+ | All structural interfaces |
The numbers tell the story: a comprehensive, well-architected library built on solid design principles.