systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator(*args, **kwargs)
Discrete-time double integrator (position-controlled point mass).
Physical System:
The double integrator models the simplest mechanical system: a point mass that can be directly controlled by force or acceleration. It represents the discretized continuous-time system:
d²p/dt² = u (continuous time)
where p is position and u is acceleration (force/mass). The discrete-time approximation depends on the discretization method:
Zero-Order Hold (ZOH) - Exact Discretization: Assumes the control input is held constant between samples, leading to:
p[k+1] = p[k] + dt·v[k] + 0.5·dt²·u[k]
v[k+1] = v[k] + dt·u[k]
This is the EXACT solution to the continuous system with piecewise-constant control, making it the preferred discretization for control design.
Forward Euler - First-Order Approximation: Simple but less accurate:
p[k+1] = p[k] + dt·v[k]
v[k+1] = v[k] + dt·u[k]
Backward Euler - Implicit Method: More stable but requires solving an implicit equation:
p[k+1] = p[k] + dt·v[k+1]
v[k+1] = v[k] + dt·u[k]
Tustin/Trapezoidal - Second-Order Accurate: Bilinear transform, preserves frequency response:
p[k+1] = p[k] + dt·(v[k] + v[k+1])/2
v[k+1] = v[k] + dt·u[k]
This class uses ZOH by default as it’s exact and most common in control.
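The closed-form ZOH matrices above can be checked against a standard discretization routine. This is a plain NumPy/SciPy sketch, independent of the class itself:

```python
import numpy as np
from scipy.signal import cont2discrete

dt = 0.1

# Continuous-time double integrator: dx/dt = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Closed-form ZOH matrices from the text
Ad_zoh = np.array([[1.0, dt],
                   [0.0, 1.0]])
Bd_zoh = np.array([[0.5 * dt**2],
                   [dt]])

# cont2discrete with method='zoh' reproduces them exactly
Ad_ref, Bd_ref, *_ = cont2discrete(
    (A, B, np.eye(2), np.zeros((2, 1))), dt, method='zoh')
assert np.allclose(Ad_zoh, Ad_ref)
assert np.allclose(Bd_zoh, Bd_ref)

# Forward Euler differs only in the input matrix (no 0.5*dt**2 term)
Bd_euler = np.array([[0.0], [dt]])
```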
State Space:
State: x[k] = [p[k], v[k]]
Position state:
- p: Position [m] (or [rad] for rotational systems)
  * Can be unbounded: -∞ < p < ∞
  * Typically constrained in practice: p_min ≤ p ≤ p_max
  * Examples: cart position, joint angle, satellite position
Velocity state:
- v: Velocity [m/s] (or [rad/s] for rotational)
* Can be unbounded: -∞ < v < ∞
* Typically constrained: v_min ≤ v ≤ v_max (actuator limits)
* Examples: cart speed, joint angular velocity, satellite velocity
Control: u[k] = [a[k]]
- a: Acceleration [m/s²] (or angular acceleration [rad/s²])
  * Proportional to applied force: a = F/m
  * Bounded by actuator capacity: a_min ≤ a ≤ a_max
  * Typical range: -10 to +10 m/s² for mechanical systems
  * For satellite: ±0.01 rad/s² (small thrusters)
  * For robot joint: ±100 rad/s² (powerful servos)
Output: y[k] = [p[k]] or [p[k], v[k]]
- Position-only measurement (most common): y[k] = p[k]
  Examples: encoder, GPS, camera
- Full state measurement: y[k] = [p[k], v[k]]
  Examples: encoder + tachometer, IMU
Dynamics (Zero-Order Hold):
The ZOH discretization gives EXACT discrete-time dynamics:
p[k+1] = p[k] + dt·v[k] + 0.5·dt²·u[k]
v[k+1] = v[k] + dt·u[k]
Matrix Form: x[k+1] = Ad·x[k] + Bd·u[k]
where: Ad = [1 dt ] (State transition matrix) [0 1 ]
Bd = [0.5·dt²] (Control input matrix)
[ dt ]
Physical Interpretation:
Position update:
- Current position: p[k]
- Displacement due to velocity: dt·v[k] (drift term)
- Displacement due to acceleration: 0.5·dt²·u[k] (control term)
- New position: p[k+1]
The 0.5·dt² factor comes from double integration: ∫₀^dt ∫₀^τ u dσ dτ = 0.5·dt²·u
Velocity update:
- Current velocity: v[k]
- Change due to acceleration: dt·u[k]
- New velocity: v[k+1]
Linearity: The double integrator is LINEAR - superposition applies: f(αx₁ + βx₂, αu₁ + βu₂) = αf(x₁, u₁) + βf(x₂, u₂)
This makes it ideal for: - Linear control design (LQR, pole placement) - Analytical stability analysis - Frequency domain analysis - Teaching/learning control theory
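Superposition can be verified numerically with the ZOH matrices. A minimal sketch using plain NumPy (not the class API):

```python
import numpy as np

dt = 0.1
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([0.5 * dt**2, dt])

def f(x, u):
    # One step of the ZOH double integrator
    return Ad @ x + Bd * u

x1, x2 = np.array([1.0, -0.5]), np.array([0.3, 2.0])
u1, u2 = 0.7, -1.2
a, b = 2.0, -3.0

# f(a*x1 + b*x2, a*u1 + b*u2) == a*f(x1, u1) + b*f(x2, u2)
lhs = f(a * x1 + b * x2, a * u1 + b * u2)
rhs = a * f(x1, u1) + b * f(x2, u2)
assert np.allclose(lhs, rhs)  # superposition holds
```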
Parameters:
dt : float, default=0.1
Sampling/discretization time step [s]
Critical parameter affecting:
- System dynamics approximation accuracy
- Control bandwidth and performance
- Computational requirements
- Stability margins
Guidelines:
- Nyquist: dt < π/ω_max (sample faster than fastest frequency)
- Shannon: dt < 1/(2·f_bandwidth)
- Rule of thumb: 10-20 samples per desired closed-loop rise time
- Typical values: 0.001-0.1 s for mechanical systems
method : str, default='zoh'
Discretization method:
- 'zoh': Zero-order hold (exact, recommended)
- 'euler': Forward Euler (simple, less accurate)
- 'backward_euler': Implicit (more stable)
- 'tustin': Trapezoidal/bilinear (frequency-preserving)
mass : float, default=1.0 Mass of the system [kg] Only used if control is force (not acceleration) Affects control authority: a = F/m Typical: 0.1-1000 kg depending on application
use_force_input : bool, default=False If True, control input is force F [N] instead of acceleration Dynamics become: x[k+1] = Ad·x[k] + Bd·(u[k]/m) Useful for systems where force is the natural actuator variable
Equilibria:
Origin (zero position, zero velocity): x_eq = [0, 0] u_eq = 0
This is a MARGINALLY STABLE equilibrium: - Not asymptotically stable (doesn’t return to origin naturally) - Any initial displacement persists forever (constant velocity) - Requires feedback control for stabilization
Eigenvalue Analysis: The system matrix Ad has eigenvalues: λ₁ = 1, λ₂ = 1
Both eigenvalues lie on the unit circle (|λ| = 1). Because they form a single Jordan block, the unforced response can grow linearly in k (a constant velocity makes position drift), so stability is only marginal. - |λ| < 1 would be stable (decays to zero) - |λ| > 1 would be unstable (grows unbounded) - |λ| = 1 means neutral stability (persists)
Any position with zero velocity is an equilibrium: x_eq = [p, 0] for any p u_eq = 0
These form a continuum of equilibria parameterized by position. Note that a nonzero constant velocity is preserved by the dynamics when no control is applied (Newton’s first law), but position then drifts, so such states are not fixed points of the map.
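Both claims — eigenvalues on the unit circle and velocity persisting under zero control — can be checked directly with the ZOH matrices (plain NumPy sketch):

```python
import numpy as np

dt = 0.1
Ad = np.array([[1.0, dt], [0.0, 1.0]])

# Both eigenvalues sit on the unit circle at 1
eigs = np.linalg.eigvals(Ad)
assert np.allclose(eigs, [1.0, 1.0])

# With u = 0, velocity persists and position drifts linearly
x = np.array([0.0, 2.0])  # zero position, velocity 2 m/s
for _ in range(50):
    x = Ad @ x
# After 50 steps (5 s) at 2 m/s: p = 10 m, v unchanged
assert np.isclose(x[0], 10.0) and np.isclose(x[1], 2.0)
```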
Controllability:
The discrete double integrator is COMPLETELY CONTROLLABLE from any state to any other state in finite time.
Controllability Matrix: C = [Bd, Ad·Bd] = [0.5·dt² 1.5·dt²] [ dt dt ]
Controllability Test: rank(C) = 2 = nx ✓ Fully controllable
Physical Meaning: - Can reach ANY position and velocity from ANY initial condition - Requires at most 2 time steps with unbounded control - With bounded control |u| ≤ u_max, requires more time - Minimum time to reach (p_target, 0) from (0, 0): t_min = 2·√(|p_target|/u_max)
Bang-Bang Control: Time-optimal control is bang-bang (maximum effort, switching once):
1. Apply u = +u_max until midpoint
2. Switch to u = -u_max to brake
3. Arrive at target with v = 0
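A minimal bang-bang sketch using the ZOH matrices directly. The switch time is chosen to land on a sample boundary, so the exact discretization hits the target exactly:

```python
import numpy as np

dt, u_max, p_target = 0.1, 2.0, 2.0
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([0.5 * dt**2, dt])

# t_switch = sqrt(p_target / u_max) = 1.0 s -> 10 steps per phase
n_half = 10
u_seq = np.concatenate([np.full(n_half, u_max),
                        np.full(n_half, -u_max)])

x = np.zeros(2)  # start at rest at the origin
for u in u_seq:
    x = Ad @ x + Bd * u

# ZOH is exact, so the target is reached with zero velocity
assert np.allclose(x, [p_target, 0.0])
```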
Observability:
Position-only measurement: y[k] = p[k] C = [1 0]
Observability matrix:
O = [C ] = [1 0 ]
[C·Ad] [1 dt]
rank(O) = 2 = nx ✓ Fully observable
Can reconstruct full state (position AND velocity) from position measurements over time. Velocity is estimated from position differences.
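A sketch of that reconstruction: with u = 0, two position samples determine the state through the observability matrix (plain NumPy, not the class API):

```python
import numpy as np

dt = 0.1
Ad = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])  # position-only output

# Observability matrix O = [C; C @ Ad] has full rank
O = np.vstack([C, C @ Ad])
assert np.linalg.matrix_rank(O) == 2

# Reconstruct the initial state from two position samples (u = 0)
x0 = np.array([1.5, -0.8])
y0 = (C @ x0)[0]        # position at step 0
y1 = (C @ Ad @ x0)[0]   # position at step 1
x0_hat = np.linalg.solve(O, np.array([y0, y1]))
assert np.allclose(x0_hat, x0)  # velocity recovered from position history
```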
Full state measurement: y[k] = [p[k], v[k]] C = [1 0] [0 1]
Trivially observable - direct measurement of all states.
Control Objectives:
Common control goals for double integrator:
Regulation: Drive to origin Goal: x → [0, 0] LQR optimal control: u[k] = -K·x[k] K = [k_p, k_d] (PD-like gains)
Closed-loop dynamics: x[k+1] = (Ad - Bd·K)·x[k]
Design K to place eigenvalues inside unit circle.
Tracking: Follow reference trajectory Goal: x[k] → x_ref[k] Control law: u[k] = -K·(x[k] - x_ref[k]) + u_ff[k]
Feedforward term (least-squares, since Bd is a column vector): u_ff[k] = Bd⁺·(x_ref[k+1] - Ad·x_ref[k]), where Bd⁺ is the Moore-Penrose pseudoinverse; this is exact only for dynamically feasible references.
Point-to-point motion: (0,0) → (p_target, 0) Goal: Move to target position with zero final velocity Minimum-time control: bang-bang Minimum-energy control: sinusoidal profile
Setpoint stabilization: p → p_target Goal: Regulate position while allowing nonzero velocity Requires integral action or feedforward: u[k] = -K·[p[k]-p_target, v[k]] + u_ss
Velocity regulation: v → v_target Goal: Maintain constant velocity (cruise control) Simple proportional control: u[k] = k_v·(v_target - v[k])
State Constraints:
Physical constraints that must be enforced:
- Position limits: p_min ≤ p[k] ≤ p_max
- Physical workspace boundaries
- Track length, joint angle limits
- Must be respected by controller
- Model Predictive Control (MPC) handles naturally
- Velocity limits: v_min ≤ v[k] ≤ v_max
- Actuator/motor speed limits
- Safety considerations
- Energy/power constraints
- Typical: |v| ≤ 10 m/s for cart, |v| ≤ 100 rad/s for motor
- Acceleration limits: a_min ≤ u[k] ≤ a_max
- Actuator force/torque saturation
- Most critical practical constraint
- Causes nonlinearity (saturation)
- Anti-windup required for integral control
- Typical: |u| ≤ 10 m/s² for cart, |u| ≤ 100 rad/s² for servo
- Jerk limits: |u[k+1] - u[k]|/dt ≤ jerk_max (advanced)
- Smoothness requirement
- Reduces mechanical wear
- Passenger comfort (elevators, vehicles)
- Requires higher-order planning
Numerical Considerations:
Stability: The ZOH discretization is EXACT - no numerical stability issues regardless of dt (assuming arithmetic precision).
For other methods: - Euler: No added instability (for this system the discrete A matrix equals the ZOH one; only Bd differs) - Backward Euler: Always stable (A-stable method) - Tustin: Stable for any dt
Accuracy: - ZOH: Exact for piecewise-constant control - Euler: O(dt) error (first-order) - Tustin: O(dt²) error (second-order)
For control design, ZOH is preferred because: - Matches real digital control (sample-and-hold) - Preserves controllability exactly - No approximation error in discrete control law
Conditioning: For very small dt, the controllability matrix becomes ill-conditioned: cond(C) ≈ O(1/dt)
This is rarely a problem in practice since dt is typically 0.001-0.1 s.
Computation: State update is trivial - just 3 multiplications and 3 additions per step (with dt and 0.5·dt² precomputed). This makes it ideal for real-time control on embedded systems.
Control Design Examples:
1. Pole Placement (Deadbeat Control - Fastest Response): Place both eigenvalues at origin → reaches target in 2 steps
Desired characteristic equation: z² = 0
Control law: u[k] = -K·x[k]
Gain: K = [1/dt², 1.5/dt] (places both poles of Ad - Bd·K at z = 0 for the ZOH model)
For dt = 0.1: K = [100, 15]
Warning: Requires very large control effort!
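The deadbeat claim can be checked numerically: with both poles at z = 0 the closed-loop matrix is nilpotent, so any state reaches the origin in two steps. A plain-NumPy sketch (note: for the ZOH input matrix Bd = [0.5·dt², dt]ᵀ, the nilpotent gain works out to K = [1/dt², 1.5/dt]):

```python
import numpy as np

dt = 0.1
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([[0.5 * dt**2], [dt]])

# Deadbeat gain for the ZOH model (both closed-loop poles at z = 0)
K = np.array([[1.0 / dt**2, 1.5 / dt]])

A_cl = Ad - Bd @ K
# Nilpotent closed loop: A_cl @ A_cl == 0
assert np.allclose(A_cl @ A_cl, 0.0)

x = np.array([1.0, 0.0])
u_hist = []
for _ in range(2):
    u = float(-K @ x)
    u_hist.append(u)
    x = A_cl @ x
assert np.allclose(x, 0.0)
# Note the large first control: u[0] = -100 m/s² for a 1 m error
```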
2. LQR (Optimal Quadratic Cost): Minimize: J = Σ (x'·Q·x + u'·R·u)
Choose Q, R to balance:
- Q large: Fast response, aggressive control
- R large: Smooth control, slower response
Example: Q = diag([10, 1]), R = 1
Result: K ≈ [3.2, 2.6] for dt = 0.1
This gives good compromise between speed and control effort.
3. PD-Like Control (Intuitive): u[k] = -k_p·p[k] - k_d·v[k]
- k_p: Position gain (stiffness)
- k_d: Velocity gain (damping)
Design rules:
- Larger k_p → faster position correction
- Larger k_d → more damping (less overshoot)
- Critical damping: k_d = 2·√k_p
- Under-damped: k_d < 2·√k_p (oscillatory)
- Over-damped: k_d > 2·√k_p (sluggish)
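These damping rules can be illustrated with a short simulation. A plain-NumPy sketch (dt is kept small so the continuous-time rules apply closely; thresholds are illustrative):

```python
import numpy as np

dt = 0.01  # fine sampling so continuous damping rules hold closely
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([0.5 * dt**2, dt])

def simulate_pd(k_p, k_d, n_steps=1000):
    # Closed-loop rollout from a 1 m initial error at rest
    x = np.array([1.0, 0.0])
    positions = []
    for _ in range(n_steps):
        u = -k_p * x[0] - k_d * x[1]
        x = Ad @ x + Bd * u
        positions.append(x[0])
    return np.array(positions)

k_p = 10.0
p_critical = simulate_pd(k_p, 2.0 * np.sqrt(k_p))      # zeta = 1.0
p_under = simulate_pd(k_p, 0.5 * 2.0 * np.sqrt(k_p))   # zeta = 0.5

assert p_critical.min() > -1e-3   # critical damping: no overshoot
assert p_under.min() < -0.05      # under-damped: visible overshoot (~16%)
```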
4. Model Predictive Control (MPC): Solve optimization at each step: min_{u[k:k+N]} Σ (‖x[i]-x_ref[i]‖² + R‖u[i]‖²) subject to: x[i+1] = Ad·x[i] + Bd·u[i] x_min ≤ x[i] ≤ x_max u_min ≤ u[i] ≤ u_max
Advantages:
- Handles constraints explicitly
- Optimal preview control
- Can incorporate feedforward
Disadvantage: Requires online optimization (computational cost)
Example Usage:
# Create double integrator with 10 Hz sampling (dt=0.1s)
system = DiscreteDoubleIntegrator(dt=0.1)

# Initial condition: 1m displaced, at rest
x0 = np.array([1.0, 0.0])  # [position, velocity]

# Open-loop simulation (no control)
result = system.simulate(
    x0=x0,
    u_sequence=np.zeros(100),  # No control
    n_steps=100
)
# Result: constant position at p=1.0 (neutral stability)

# Design LQR controller for regulation
Ad, Bd = system.linearize(np.zeros(2), np.zeros(1))
Q = np.diag([10.0, 1.0])  # Care more about position
R = np.array([[1.0]])
lqr_result = system.control.design_lqr(
    Ad, Bd, Q, R, system_type='discrete'
)
K = lqr_result['gain']
print(f"LQR gain: K = {K}")

# Closed-loop eigenvalues (should be inside unit circle)
eigenvalues = lqr_result['closed_loop_eigenvalues']
print(f"Closed-loop eigenvalues: {eigenvalues}")
print(f"Stable: {np.all(np.abs(eigenvalues) < 1)}")

# Simulate with LQR control
def lqr_controller(x, k):
    return -K @ x

result_lqr = system.rollout(x0, lqr_controller, n_steps=100)

# Plot results
import matplotlib.pyplot as plt
fig, axes = plt.subplots(3, 1, figsize=(10, 8))

# Position
axes[0].plot(result_lqr['time_steps'] * system.dt,
             result_lqr['states'][:, 0])
axes[0].set_ylabel('Position [m]')
axes[0].axhline(0, color='r', linestyle='--', label='Target')
axes[0].legend()
axes[0].grid()

# Velocity
axes[1].plot(result_lqr['time_steps'] * system.dt,
             result_lqr['states'][:, 1])
axes[1].set_ylabel('Velocity [m/s]')
axes[1].axhline(0, color='r', linestyle='--')
axes[1].grid()

# Control effort
axes[2].plot(result_lqr['time_steps'][:-1] * system.dt,
             result_lqr['controls'])
axes[2].set_ylabel('Acceleration [m/s²]')
axes[2].set_xlabel('Time [s]')
axes[2].grid()

plt.tight_layout()
plt.show()

# Point-to-point motion with MPC
from scipy.optimize import minimize

def mpc_controller(x, k):
    '''Simple MPC with horizon N=10'''
    N = 10          # Prediction horizon
    p_target = 2.0  # Target position
    u_max = 5.0     # Acceleration limit

    def cost(u_seq):
        x_pred = x.copy()
        total_cost = 0.0
        for i in range(N):
            u_i = u_seq[i]
            # State update
            x_pred = Ad @ x_pred + Bd * u_i
            # Stage cost
            error = x_pred[0] - p_target
            total_cost += 10*error**2 + x_pred[1]**2 + 0.1*u_i**2
        return total_cost

    # Optimize
    u_init = np.zeros(N)
    bounds = [(-u_max, u_max)] * N
    result = minimize(cost, u_init, bounds=bounds, method='SLSQP')

    return result.x[0]  # Return first control action

result_mpc = system.rollout(x0, mpc_controller, n_steps=50)

# Check trajectory constraints
print(f"Max velocity: {np.max(np.abs(result_mpc['states'][:, 1])):.2f} m/s")
print(f"Max acceleration: {np.max(np.abs(result_mpc['controls'])):.2f} m/s²")
print(f"Final position error: {result_mpc['states'][-1, 0] - 2.0:.4f} m")
Physical Insights:
Newton’s First Law: The double integrator embodies inertia - objects in motion stay in motion unless acted upon by a force (control). This is why it’s marginally stable: any velocity persists forever without control.
Minimum-Time Control: To move from (0, 0) to (p_target, 0) in minimum time with |u| ≤ u_max:
- Accelerate at u_max for time t₁
- Decelerate at -u_max for time t₂
- Arrive at target with v = 0
Solution: t₁ = t₂ = √(|p_target|/u_max) Total time: t_total = 2·√(|p_target|/u_max)
This creates a parabolic position profile (constant acceleration).
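The closed-form kinematics can be verified directly:

```python
import numpy as np

# Minimum-time bang-bang from rest: t_total = 2*sqrt(|p_target|/u_max)
p_target, u_max = 10.0, 2.0
t1 = np.sqrt(p_target / u_max)   # acceleration phase duration
t_total = 2.0 * t1

# Kinematics of the two phases
p_half = 0.5 * u_max * t1**2                       # position at switch
v_peak = u_max * t1                                # velocity at switch
p_final = p_half + v_peak * t1 - 0.5 * u_max * t1**2

assert np.isclose(p_final, p_target)               # target reached
assert np.isclose(t_total, 2.0 * np.sqrt(10.0 / 2.0))
```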
Energy Considerations: Kinetic energy: E_k = 0.5·m·v² Control energy: E_c = ∫ u² dt
LQR with R > 0 trades off reaching target quickly (minimize E_k) versus using less control effort (minimize E_c).
Frequency Response: For continuous system (ω = frequency): H(s) = 1/s² → |H(jω)| = 1/ω² (-40 dB/decade)
This is a double integrator in frequency domain: - Integrates input twice to get output - Phase lag: -180° (always lags by half cycle) - Very sensitive to low frequencies (drift) - Requires feedback for stability
Discrete Frequency Response: For the discrete ZOH system (position output): H(z) = 0.5·dt²·(z + 1) / (z - 1)²
Double pole at z = 1 (ω = 0) - infinite DC gain without feedback.
Relationship to Other Systems: - Cart on frictionless track - Satellite position (1D) - Inverted pendulum (linearized about upright) - Mass-spring system (with k = 0, b = 0) - Robot joint (simplified, no gravity or friction)
Common Pitfalls:
Forgetting the 0.5 factor: p[k+1] = p[k] + dt·v[k] + 0.5·dt²·u[k] ✓ NOT: p[k+1] = p[k] + dt·v[k] + dt²·u[k] ✗
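A one-step numeric check of this pitfall (the wrong update doubles the control contribution to position):

```python
import numpy as np

dt, u = 0.1, 1.0
# One step from rest under constant acceleration u
p_correct = 0.0 + dt * 0.0 + 0.5 * dt**2 * u   # matches exact kinematics
p_wrong = 0.0 + dt * 0.0 + dt**2 * u           # double the true displacement

# Exact continuous answer for constant u from rest: p(dt) = 0.5*u*dt**2
assert np.isclose(p_correct, 0.5 * u * dt**2)
assert np.isclose(p_wrong, 2 * p_correct)
```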
Ignoring actuator saturation: Real actuators saturate! Linear control assumes unlimited u. Use anti-windup or MPC for constrained control.
Too small dt for available computation: Faster sampling requires faster computation. Ensure control law executes within dt.
Forgetting neutral stability: Without control, ANY velocity persists forever. Always close the loop for practical systems.
Not handling measurement noise: Real position sensors have noise. Use Kalman filter to estimate velocity from noisy position.
Extensions:
This basic double integrator can be extended to:
Damped double integrator: Add friction/drag term: v[k+1] = v[k] + dt·(u[k] - b·v[k]) Velocity now decays to zero without control (the position eigenvalue stays at 1, so position settles to a constant rather than returning to the origin)
Mass-spring-damper: Add spring force: p[k+1] includes -k·p[k] term Creates oscillatory dynamics
Multi-dimensional: 3D motion: x[k] = [p_x, v_x, p_y, v_y, p_z, v_z] Each axis is independent double integrator
Coupled system: Multiple masses connected by springs Creates higher-order system
With disturbances: Add process noise: x[k+1] = Ad·x[k] + Bd·u[k] + w[k] Requires stochastic control (LQG)
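A sketch of the damped variant in the forward-Euler form given above, with an assumed damping coefficient b = 0.5 (hypothetical value for illustration):

```python
import numpy as np

dt, b = 0.1, 0.5  # time step and assumed damping coefficient

def step_damped(x, u):
    # Forward-Euler damped double integrator from the text
    p, v = x
    return np.array([p + dt * v, v + dt * (u - b * v)])

# Unforced response: velocity now decays instead of persisting
x = np.array([0.0, 2.0])
for _ in range(500):
    x = step_damped(x, 0.0)

assert abs(x[1]) < 1e-5                 # velocity has decayed to ~0
# Position settles to dt*v0/(b*dt) = v0/b = 4 m (geometric series)
assert np.isclose(x[0], 4.0, atol=1e-6)
```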
See Also:
ContinuousDoubleIntegrator : Continuous-time version DiscreteOscillator : With spring force added DiscreteDampedIntegrator : With friction InvertedPendulum : Nonlinear extension (with sin(θ))
Methods
| Name | Description |
|---|---|
| compute_controllability_matrix | Compute controllability matrix C = [Bd, Ad·Bd]. |
| compute_minimum_time | Compute minimum time to reach target position from origin with bounded control. |
| compute_observability_matrix | Compute observability matrix O = [C; C·Ad]. |
| compute_system_matrices | Compute discrete-time state-space matrices Ad, Bd. |
| define_system | Define discrete-time double integrator dynamics. |
| design_deadbeat_controller | Design deadbeat controller (fastest possible response). |
| design_pd_controller | Design PD (proportional-derivative) controller. |
| generate_minimum_energy_control | Generate minimum-energy control sequence for given time. |
| generate_minimum_time_control | Generate minimum-time control sequence (bang-bang). |
| setup_equilibria | Set up equilibrium points. |
compute_controllability_matrix
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.compute_controllability_matrix(
)
Compute controllability matrix C = [Bd, Ad·Bd].
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | Controllability matrix (2×2) |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> C = system.compute_controllability_matrix()
>>> rank = np.linalg.matrix_rank(C)
>>> print(f"Controllability matrix rank: {rank}")
>>> print(f"System is controllable: {rank == 2}")
Notes
The double integrator is always completely controllable. rank(C) = 2 for any dt > 0.
compute_minimum_time
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.compute_minimum_time(
p_target,
u_max,
)
Compute minimum time to reach target position from origin with bounded control.
For bang-bang control: |u| ≤ u_max
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| p_target | float | Target position [m] | required |
| u_max | float | Maximum acceleration [m/s²] | required |
Returns
| Name | Type | Description |
|---|---|---|
| float | Minimum time [s] |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> t_min = system.compute_minimum_time(p_target=10.0, u_max=2.0)
>>> print(f"Minimum time to reach 10m with 2 m/s² limit: {t_min:.2f} s")
Notes
Minimum-time trajectory is bang-bang: 1. Accelerate at u_max until midpoint 2. Decelerate at -u_max until target 3. Arrive with v = 0
Time to reach target: t_min = 2·√(|p_target|/u_max)
This assumes starting from rest: (p, v) = (0, 0) → (p_target, 0)
compute_observability_matrix
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.compute_observability_matrix(
C_output=None,
)
Compute observability matrix O = [C; C·Ad].
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| C_output | Optional[np.ndarray] | Output matrix (1×2 or 2×2) If None, uses position-only measurement: C = [1, 0] | None |
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | Observability matrix |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> # Position-only measurement
>>> O = system.compute_observability_matrix()
>>> rank = np.linalg.matrix_rank(O)
>>> print(f"Observable: {rank == 2}")
>>>
>>> # Full state measurement
>>> C_full = np.eye(2)
>>> O_full = system.compute_observability_matrix(C_full)
Notes
For position-only measurement C = [1, 0]: O = [1 0 ] [1 dt ]
rank(O) = 2 → fully observable (can estimate velocity from position)
compute_system_matrices
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.compute_system_matrices(
)
Compute discrete-time state-space matrices Ad, Bd.
Returns
| Name | Type | Description |
|---|---|---|
| tuple | (Ad, Bd) where: - Ad: State transition matrix (2×2) - Bd: Control input matrix (2×1) |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> Ad, Bd = system.compute_system_matrices()
>>> print(f"Ad =\n{Ad}")
>>> print(f"Bd =\n{Bd}")
Notes
For ZOH discretization: Ad = [1 dt] [0 1 ]
Bd = [0.5*dt²]
[ dt ]
define_system
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.define_system(
dt=0.1,
method='zoh',
mass=1.0,
use_force_input=False,
)
Define discrete-time double integrator dynamics.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| dt | float | Sampling time step [s] | 0.1 |
| method | str | Discretization method: - 'zoh': Zero-order hold (exact, recommended) - 'euler': Forward Euler (simple) - 'backward_euler': Backward Euler (implicit) - 'tustin': Trapezoidal/bilinear transform | 'zoh' |
| mass | float | System mass [kg] (only used if use_force_input=True) | 1.0 |
| use_force_input | bool | If True, control input is force [N] instead of acceleration [m/s²] | False |
design_deadbeat_controller
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.design_deadbeat_controller(
)
Design deadbeat controller (fastest possible response).
Places both closed-loop eigenvalues at origin, giving finite-time settling in 2 steps.
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | Deadbeat control gain K (1×2) |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> K_db = system.design_deadbeat_controller()
>>> print(f"Deadbeat gain: K = {K_db}")
>>>
>>> # Simulate
>>> def deadbeat_control(x, k):
... return -K_db @ x
>>>
>>> x0 = np.array([1.0, 0.0])
>>> result = system.rollout(x0, deadbeat_control, n_steps=10)
>>>
>>> # Check settling time
>>> settling_idx = np.where(np.abs(result['states'][:, 0]) < 0.01)[0]
>>> print(f"Settled in {settling_idx[0]} steps")
Notes
Deadbeat control gives the fastest response but requires VERY large control effort. The position gain scales as 1/dt², so halving the time step quadruples the required gain.
For dt = 0.1: K = [100, 15] For dt = 0.01: K = [10000, 150]
Use with caution in systems with actuator limits!
design_pd_controller
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.design_pd_controller(
k_p,
k_d=None,
damping_ratio=1.0,
)
Design PD (proportional-derivative) controller.
Control law: u[k] = -k_p·p[k] - k_d·v[k]
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| k_p | float | Position gain (stiffness) Larger k_p → faster position response | required |
| k_d | Optional[float] | Velocity gain (damping) If None, computed from damping_ratio | None |
| damping_ratio | float | Desired damping ratio (only used if k_d is None) - ζ = 1.0: Critical damping (no overshoot) - ζ < 1.0: Under-damped (oscillatory) - ζ > 1.0: Over-damped (sluggish) | 1.0 |
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | PD gain matrix K = [k_p, k_d] |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>>
>>> # Critical damping (no overshoot)
>>> K_critical = system.design_pd_controller(k_p=10.0, damping_ratio=1.0)
>>> print(f"Critical damping: K = {K_critical}")
>>>
>>> # Under-damped (faster but oscillatory)
>>> K_underdamped = system.design_pd_controller(k_p=10.0, damping_ratio=0.5)
>>>
>>> # Over-damped (slow but smooth)
>>> K_overdamped = system.design_pd_controller(k_p=10.0, damping_ratio=2.0)
>>>
>>> # Manual k_d specification
>>> K_manual = system.design_pd_controller(k_p=10.0, k_d=5.0)
Notes
The relationship between continuous and discrete PD gains: Continuous: k_d = 2·ζ·√k_p (for ζ = 1: k_d = 2·√k_p) Discrete: similar for small dt, with corrections of order dt
For the discrete ZOH double integrator, the closed-loop poles are the roots of: z² - (2 - k_d·dt - 0.5·k_p·dt²)·z + (1 - k_d·dt + 0.5·k_p·dt²) = 0
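The closed-loop pole locations of Ad - Bd·K can be checked numerically. A plain-NumPy sketch: the trace and determinant of the closed-loop matrix give the characteristic-polynomial coefficients:

```python
import numpy as np

dt = 0.1
Ad = np.array([[1.0, dt], [0.0, 1.0]])
Bd = np.array([[0.5 * dt**2], [dt]])

k_p, k_d = 10.0, 2.0 * np.sqrt(10.0)  # critically damped PD gains
K = np.array([[k_p, k_d]])

A_cl = Ad - Bd @ K
poles = np.linalg.eigvals(A_cl)

# Characteristic polynomial z^2 - trace*z + det, with
# trace = 2 - k_d*dt - 0.5*k_p*dt^2 and det = 1 - k_d*dt + 0.5*k_p*dt^2
tr = 2.0 - k_d * dt - 0.5 * k_p * dt**2
det = 1.0 - k_d * dt + 0.5 * k_p * dt**2
assert np.isclose(poles.sum().real, tr)
assert np.isclose(np.prod(poles).real, det)
assert np.all(np.abs(poles) < 1.0)  # stable for these gains
```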
generate_minimum_energy_control
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.generate_minimum_energy_control(
p_target,
t_final,
n_steps,
)
Generate minimum-energy control sequence for given time.
Minimizes ∫ u² dt subject to reaching target at t_final.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| p_target | float | Target position [m] | required |
| t_final | float | Final time [s] | required |
| n_steps | int | Number of discrete steps | required |
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | Control sequence (n_steps,) |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> u_seq = system.generate_minimum_energy_control(
... p_target=10.0,
... t_final=10.0,
... n_steps=100
... )Notes
For the double integrator, this method uses a smooth sinusoidal profile: u(t) = A·sin(πt/t_final)
where A is chosen to reach p_target at t_final. (Strictly, this is a smooth heuristic rather than the exact L2-optimal control: the true minimizer of ∫ u² dt is affine in time, e.g. u(t) = (6·p_target/t_final²)·(1 - 2·t/t_final) for rest-to-rest motion.)
generate_minimum_time_control
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.generate_minimum_time_control(
p_target,
u_max,
n_steps,
)
Generate minimum-time control sequence (bang-bang).
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| p_target | float | Target position [m] | required |
| u_max | float | Maximum acceleration [m/s²] | required |
| n_steps | int | Number of discrete time steps | required |
Returns
| Name | Type | Description |
|---|---|---|
| np.ndarray | Control sequence (n_steps,) |
Examples
>>> system = DiscreteDoubleIntegrator(dt=0.1)
>>> u_seq = system.generate_minimum_time_control(
... p_target=10.0,
... u_max=2.0,
... n_steps=100
... )
>>>
>>> # Simulate with bang-bang control
>>> result = system.simulate(
... x0=np.zeros(2),
... u_sequence=u_seq,
... n_steps=100
... )
>>>
>>> print(f"Final position: {result['states'][-1, 0]:.2f} m")
>>> print(f"Final velocity: {result['states'][-1, 1]:.2f} m/s")Notes
The switching time occurs at the midpoint: t_switch = √(|p_target|/u_max)
setup_equilibria
systems.builtin.deterministic.discrete.DiscreteDoubleIntegrator.setup_equilibria(
)
Set up equilibrium points.
Adds the origin (zero position, zero velocity) as the default equilibrium. Note: any state with zero velocity (arbitrary position) is also an equilibrium with u = 0; a nonzero constant velocity is preserved by the dynamics but makes position drift, so it is not a true fixed point. Only the origin is added for simplicity.