systems.base.numerical_integration.TorchDiffEqIntegrator
systems.base.numerical_integration.TorchDiffEqIntegrator(
system,
dt=None,
step_mode=StepMode.ADAPTIVE,
backend='torch',
method='dopri5',
adjoint=False,
**options,
)
PyTorch-based ODE integrator using the torchdiffeq library.
Supports adaptive and fixed-step integration with various solvers and automatic differentiation through PyTorch’s autograd.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| system | SymbolicDynamicalSystem | Continuous-time system to integrate (controlled or autonomous) | required |
| dt | Optional[ScalarLike] | Time step size | None |
| step_mode | StepMode | FIXED or ADAPTIVE stepping mode | StepMode.ADAPTIVE |
| backend | str | Must be 'torch' for this integrator | 'torch' |
| method | str | Solver method; see Available Methods below for the full list | 'dopri5' |
| adjoint | bool | Use the adjoint method for memory-efficient backpropagation. The adjoint method requires the ODE function to be an nn.Module, so set adjoint=True only for Neural ODE applications where the system is a neural network; for regular dynamical systems, use adjoint=False. | False |
| **options | | Additional options including rtol, atol, max_steps | {} |
Available Methods
Adaptive (recommended):
- dopri5: Dormand-Prince 5(4), general purpose [DEFAULT]
- dopri8: Dormand-Prince 8, high accuracy
- bosh3: Bogacki-Shampine 3(2), lower accuracy
- adaptive_heun: adaptive Heun method
- fehlberg2: Fehlberg method

Fixed-step:
- euler: forward Euler (1st order)
- midpoint: midpoint method (2nd order)
- rk4: classic Runge-Kutta 4 (4th order)

Multistep:
- explicit_adams: explicit Adams method
- implicit_adams: implicit Adams method
- fixed_adams: fixed-step Adams method
Examples
>>> import torch
>>> from systems.base.numerical_integration import TorchDiffEqIntegrator
>>>
>>> # Regular controlled dynamical system (adjoint=False)
>>> integrator = TorchDiffEqIntegrator(
... system,
... dt=0.01,
... backend='torch',
... method='dopri5',
... adjoint=False # Default for regular systems
... )
>>>
>>> x0 = torch.tensor([1.0, 0.0])
>>> result = integrator.integrate(
... x0,
... lambda t, x: torch.zeros(1),
... (0.0, 10.0)
... )
>>> print(f"Success: {result['success']}")
>>> print(f"Steps: {result['nsteps']}")
>>>
>>> # Autonomous system
>>> integrator = TorchDiffEqIntegrator(autonomous_system, backend='torch')
>>> result = integrator.integrate(
... x0=torch.tensor([1.0, 0.0]),
... u_func=lambda t, x: None,
... t_span=(0.0, 10.0)
... )
>>>
>>> # Neural ODE (adjoint=True for memory efficiency)
>>> class NeuralODE(torch.nn.Module):
... def __init__(self):
... super().__init__()
... self.net = torch.nn.Sequential(
... torch.nn.Linear(2, 50),
... torch.nn.Tanh(),
... torch.nn.Linear(50, 2)
... )
... def forward(self, t, x):
... return self.net(x)
>>>
>>> neural_ode = NeuralODE()
>>> integrator_neural = TorchDiffEqIntegrator(
... neural_ode,
... backend='torch',
... method='dopri5',
... adjoint=True # Memory-efficient for neural networks
... )
Attributes
| Name | Description |
|---|---|
| name | Return the name of the integrator. |
Methods
| Name | Description |
|---|---|
| disable_adjoint | Disable adjoint method (use standard backpropagation). |
| enable_adjoint | Enable adjoint method for memory-efficient backpropagation. |
| integrate | Integrate over time interval with control policy. |
| integrate_with_gradient | Integrate and compute gradients w.r.t. initial conditions. |
| step | Take one integration step: x(t) → x(t + dt). |
| to_device | Move system parameters to specified device (if applicable). |
disable_adjoint
systems.base.numerical_integration.TorchDiffEqIntegrator.disable_adjoint()
Disable adjoint method (use standard backpropagation).
Examples
>>> integrator.disable_adjoint()
>>> assert integrator.use_adjoint == False
enable_adjoint
systems.base.numerical_integration.TorchDiffEqIntegrator.enable_adjoint()
Enable adjoint method for memory-efficient backpropagation.
Notes
The adjoint method trades computation for memory, which is useful for Neural ODEs with many steps. It requires the system to be an nn.Module.
Examples
>>> integrator.enable_adjoint()
>>> assert integrator.use_adjoint == True
integrate
systems.base.numerical_integration.TorchDiffEqIntegrator.integrate(
x0,
u_func,
t_span,
t_eval=None,
dense_output=False,
)
Integrate over time interval with control policy.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| x0 | StateVector | Initial state (nx,) | required |
| u_func | Callable[[ScalarLike, StateVector], Optional[ControlVector]] | Control policy: (t, x) → u (or None for autonomous systems) | required |
| t_span | TimeSpan | Integration interval (t_start, t_end) | required |
| t_eval | Optional[TimePoints] | Specific times at which to store solution | None |
| dense_output | bool | If True, return dense interpolated solution (not supported by torchdiffeq) | False |
Returns
| Name | Type | Description |
|---|---|---|
| IntegrationResult | TypedDict containing: - t: Time points (T,) - x: State trajectory (T, nx) - success: Whether integration succeeded - message: Status message - nfev: Number of function evaluations - nsteps: Number of steps taken - integration_time: Computation time (seconds) - solver: Integrator name |
Examples
>>> # Controlled system
>>> result = integrator.integrate(
... x0=torch.tensor([1.0, 0.0]),
... u_func=lambda t, x: torch.tensor([0.5]),
... t_span=(0.0, 10.0)
... )
>>> print(f"Success: {result['success']}")
>>> print(f"Final state: {result['x'][-1]}")
>>>
>>> # Autonomous system
>>> result = integrator.integrate(
... x0=torch.tensor([1.0, 0.0]),
... u_func=lambda t, x: None,
... t_span=(0.0, 10.0)
... )
>>>
>>> # Evaluate at specific times
>>> t_eval = torch.linspace(0, 10, 1001)
>>> result = integrator.integrate(x0, u_func, (0, 10), t_eval=t_eval)
>>> assert result["t"].shape == (1001,)
integrate_with_gradient
systems.base.numerical_integration.TorchDiffEqIntegrator.integrate_with_gradient(
x0,
u_func,
t_span,
loss_fn,
t_eval=None,
)
Integrate and compute gradients w.r.t. initial conditions.
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| x0 | StateVector | Initial state (requires_grad=True for gradients) | required |
| u_func | Callable[[ScalarLike, StateVector], Optional[ControlVector]] | Control policy (or None for autonomous) | required |
| t_span | TimeSpan | Time span (t_start, t_end) | required |
| loss_fn | Callable[[IntegrationResult], torch.Tensor] | Loss function taking IntegrationResult | required |
| t_eval | Optional[TimePoints] | Evaluation times | None |
Returns
| Name | Type | Description |
|---|---|---|
| tuple | (loss_value: float, gradient_wrt_x0: StateVector) |
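The gradient this method returns can be sanity-checked against a finite-difference approximation. A minimal sketch in plain Python, assuming a toy linear ODE dx/dt = -x integrated with forward Euler (the names simulate and loss are illustrative, not part of the library API):

```python
def simulate(x0, t_end=1.0, dt=0.001):
    """Forward-Euler rollout of dx/dt = -x from initial state x0."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x = x + dt * (-x)
    return x

def loss(x0, target=0.5):
    """Squared final-state error, mirroring the loss_fn idea above."""
    return (simulate(x0) - target) ** 2

# Central finite-difference gradient w.r.t. the initial condition,
# playing the role of gradient_wrt_x0 in the returned tuple
x0, eps = 1.0, 1e-6
grad_fd = (loss(x0 + eps) - loss(x0 - eps)) / (2 * eps)
```

Since simulate is linear in x0 here (final state c * x0 with c = (1 - dt)^n), the loss is quadratic and the central difference recovers the analytic gradient 2 * c * (c * x0 - target) almost exactly; autograd-based gradients from integrate_with_gradient should agree with such a check up to integration error.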
Examples
>>> # Define loss (e.g., final state error)
>>> def loss_fn(result):
... x_final = result["x"][-1]
... x_target = torch.tensor([1.0, 0.0])
... return torch.sum((x_final - x_target)**2)
>>>
>>> # Compute loss and gradient
>>> x0 = torch.tensor([0.0, 0.0], requires_grad=True)
>>> loss, grad = integrator.integrate_with_gradient(
... x0=x0,
... u_func=lambda t, x: torch.zeros(1),
... t_span=(0.0, 10.0),
... loss_fn=loss_fn
... )
>>> print(f"Loss: {loss:.4f}")
>>> print(f"Gradient: {grad}")
step
systems.base.numerical_integration.TorchDiffEqIntegrator.step(
x,
u=None,
dt=None,
)
Take one integration step: x(t) → x(t + dt).
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| x | StateVector | Current state (nx,) or (batch, nx) | required |
| u | Optional[ControlVector] | Control input (nu,) or (batch, nu), or None for autonomous systems | None |
| dt | Optional[ScalarLike] | Step size (uses self.dt if None) | None |
Returns
| Name | Type | Description |
|---|---|---|
| StateVector | Next state x(t + dt) |
Examples
>>> # Controlled system
>>> x_next = integrator.step(
... x=torch.tensor([1.0, 0.0]),
... u=torch.tensor([0.5])
... )
>>>
>>> # Autonomous system
>>> x_next = integrator.step(
... x=torch.tensor([1.0, 0.0]),
... u=None
... )
to_device
systems.base.numerical_integration.TorchDiffEqIntegrator.to_device(device)
Move system parameters to specified device (if applicable).
Parameters
| Name | Type | Description | Default |
|---|---|---|---|
| device | str | Device identifier ('cpu', 'cuda', 'cuda:0', etc.) | required |
Notes
This only works if the system is a PyTorch nn.Module. For regular dynamical systems, this is a no-op.
Examples
>>> # For Neural ODE systems
>>> integrator.to_device('cuda:0')
>>>
>>> # For regular systems (no effect)
>>> integrator.to_device('cpu')