types.protocols.DiscreteSystemProtocol

types.protocols.DiscreteSystemProtocol()

Minimal interface for discrete-time dynamical systems.
This protocol defines the basic contract that all discrete-time systems must satisfy, regardless of their implementation (symbolic, numerical, learned, etc.).
Any object implementing this protocol can be:

- Simulated forward in time
- Used in trajectory-based algorithms
- Rolled out with policies
- Used in reinforcement learning
Implementations
Concrete classes that satisfy this protocol:

- DiscreteSymbolicSystem: Symbolic discrete-time system
- DiscreteStochasticSystem: Stochastic discrete-time system
- DiscretizedSystem: Numerical discretization of a continuous system
- NeuralDiscreteSystem: Neural network dynamics (future)
- DataDrivenDiscreteSystem: Learned from data (future)
Required Attributes
dt : float
    Sampling period in seconds
nx : int
    Number of state variables
nu : int
    Number of control inputs
Required Methods
step(x, u, k) -> x_next
    Single time step update: x[k+1] = f(x[k], u[k])
simulate(x0, u_sequence, n_steps) -> result
    Multi-step simulation
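The required attributes and methods can be sketched as a minimal conforming class. This is a toy illustration, not library code: `LinearDiscreteToy` and `SimpleResult` are hypothetical names, and `SimpleResult` stands in for the library's `DiscreteSimulationResult`.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class SimpleResult:
    """Hypothetical stand-in for DiscreteSimulationResult."""
    states: np.ndarray    # (n_steps + 1, nx)
    controls: np.ndarray  # (n_steps, nu)


class LinearDiscreteToy:
    """Toy linear system x[k+1] = A @ x[k] + B @ u[k] satisfying the protocol shape."""

    def __init__(self, A: np.ndarray, B: np.ndarray, dt: float = 0.1):
        self.A, self.B = A, B
        self.dt = dt          # sampling period in seconds
        self.nx = A.shape[0]  # number of state variables
        self.nu = B.shape[1]  # number of control inputs

    def step(self, x: np.ndarray, u: Optional[np.ndarray] = None, k: int = 0) -> np.ndarray:
        if u is None:  # autonomous / zero-control convention
            u = np.zeros(self.nu)
        return self.A @ x + self.B @ u

    def simulate(self, x0: np.ndarray, u_sequence: np.ndarray, n_steps: int) -> SimpleResult:
        states = np.empty((n_steps + 1, self.nx))
        states[0] = x0
        for k in range(n_steps):
            states[k + 1] = self.step(states[k], u_sequence[k], k)
        return SimpleResult(states=states, controls=u_sequence[:n_steps])
```

Because the protocol is structural, this class needs no inheritance or registration: defining `dt`, `nx`, `nu`, `step`, and `simulate` with compatible shapes is enough for a static type checker to accept it wherever `DiscreteSystemProtocol` is expected.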
Use Cases
- Monte Carlo simulation
- Trajectory collection for learning
- Rollout with exploration policies
- Basic dynamics analysis
Examples
Function accepting any discrete system:
>>> import numpy as np
>>>
>>> def collect_trajectories(
... system: DiscreteSystemProtocol,
... n_trials: int = 100
... ) -> List[DiscreteSimulationResult]:
... '''Collect random trajectories from any discrete system.'''
... trajectories = []
... for _ in range(n_trials):
... x0 = np.random.randn(system.nx)
... u_seq = np.random.randn(100, system.nu)
... result = system.simulate(x0, u_seq, n_steps=100)
... trajectories.append(result)
... return trajectories
>>>
>>> # Works with any discrete system:
>>> trajs1 = collect_trajectories(DiscreteSymbolicSystem(...)) # ✓
>>> trajs2 = collect_trajectories(DiscretizedSystem(...)) # ✓

Reinforcement learning rollout:
>>> import numpy as np
>>> from typing import Callable
>>>
>>> def evaluate_policy(
... system: DiscreteSystemProtocol,
... policy: Callable,
... n_episodes: int
... ) -> float:
... '''Evaluate policy on system.'''
... total_reward = 0.0
... for _ in range(n_episodes):
... x = np.random.randn(system.nx)
... for k in range(100):
... u = policy(x, k)
... x = system.step(x, u, k)
... total_reward += reward_function(x, u) # user-supplied reward
... return total_reward / n_episodes

Type checking example:
>>> def bad_function(system: DiscreteSystemProtocol):
... system.linearize(...) # ✗ mypy error: not in protocol!
>>>
>>> def good_function(system: LinearizableDiscreteProtocol):
... system.linearize(...) # ✓ OK: protocol includes this

Notes
The @runtime_checkable decorator allows isinstance() checks, but it should be used sparingly. Prefer static type checking at development time.
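A self-contained sketch of what @runtime_checkable buys you and its limits. The standalone `DiscreteLike` protocol below only mirrors the attribute/method names described above; it is not the library's actual `DiscreteSystemProtocol` definition.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class DiscreteLike(Protocol):
    """Hypothetical mini-protocol mirroring the documented member names."""
    dt: float
    nx: int
    nu: int

    def step(self, x, u=None, k=0): ...


class Identity:
    """Trivial system: x[k+1] = x[k]. No inheritance from DiscreteLike needed."""
    dt, nx, nu = 0.1, 2, 1

    def step(self, x, u=None, k=0):
        return x


# isinstance() only verifies that the required names exist on the object;
# it does NOT check method signatures or attribute types. That is why
# static checking (mypy/pyright) is the preferred way to enforce the contract.
print(isinstance(Identity(), DiscreteLike))  # True
```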
Attributes
| Name | Description |
|---|---|
| dt | Sampling period in seconds. |
| nu | Number of control inputs. |
| nx | Number of state variables. |
Methods
| Name | Description |
|---|---|
| simulate | Simulate system for multiple steps. |
| step | Compute next state: x[k+1] = f(x[k], u[k]). |
simulate
types.protocols.DiscreteSystemProtocol.simulate(x0, u_sequence, n_steps)

Simulate system for multiple steps.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| x0 | StateVector | Initial state (nx,) | required |
| u_sequence | DiscreteControlInput | Control sequence (various formats supported) | required |
| n_steps | int | Number of steps to simulate | required |
Returns
| Name | Type | Description |
|---|---|---|
| DiscreteSimulationResult | Trajectory data including states, controls, time indices |
step
types.protocols.DiscreteSystemProtocol.step(x, u=None, k=0)

Compute next state: x[k+1] = f(x[k], u[k]).
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| x | StateVector | Current state (nx,) | required |
| u | Optional[ControlVector] | Control input (nu,), None for autonomous/zero control | None |
| k | int | Time step index (for time-varying systems) | 0 |
Returns
| Name | Type | Description |
|---|---|---|
| StateVector | Next state x[k+1] |
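To make the step() contract concrete, here is a hypothetical toy system (a forward-Euler discretization of dx/dt = -x + u) showing both the controlled call and the u=None autonomous/zero-control convention. `EulerFirstOrder` is an illustrative name, not library API.

```python
import numpy as np


class EulerFirstOrder:
    """Toy system: x[k+1] = x[k] + dt * (-x[k] + u[k])."""
    dt, nx, nu = 0.1, 1, 1

    def step(self, x, u=None, k=0):
        if u is None:  # None means autonomous / zero control
            u = np.zeros(self.nu)
        return x + self.dt * (-x + u)


sys = EulerFirstOrder()
x = np.array([1.0])
x = sys.step(x)                   # autonomous: u defaults to None
print(x)                          # [0.9]
x = sys.step(x, np.array([1.0]))  # controlled step at the next index
```

Chaining step() like this is exactly what simulate() does internally for n_steps iterations, while also recording the visited states and applied controls.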