# Quickstart Guide
Welcome to DirectTrajOpt.jl! This guide will get you up and running in minutes.
## What is DirectTrajOpt?
DirectTrajOpt.jl solves trajectory optimization problems: finding optimal control sequences that drive a dynamical system from an initial state to a goal state while minimizing a cost function.
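In discretized form, such a problem can be sketched as follows (illustrative notation only, not the package's exact API):

```math
\begin{aligned}
\min_{x_{1:N},\, u_{1:N}} \quad & J(x_{1:N}, u_{1:N}) \\
\text{s.t.} \quad & f(x_{k+1}, x_k, u_k, \Delta t_k) = 0, \qquad k = 1, \dots, N-1, \\
& x_1 = x_{\text{init}}, \quad x_N = x_{\text{goal}},
\end{aligned}
```

where ``f`` is an integrator constraint encoding the dynamics between time steps and ``J`` is the cost, e.g. a control-effort penalty ``J = \tfrac{1}{2}\sum_k \|u_k\|^2``.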
## Installation
First, install the package:

```julia
using Pkg
Pkg.add("DirectTrajOpt")
```

You'll also need NamedTrajectories.jl for defining trajectories:

```julia
using DirectTrajOpt
using NamedTrajectories
using LinearAlgebra
using CairoMakie
```

## A Minimal Example
Let's solve a simple problem: drive a 2D system from `[0, 0]` to `[1, 0]` with minimal control effort.
### Step 1: Define the Trajectory
A trajectory contains your states, controls, and time information:
```julia
N = 50  # number of time steps

traj = NamedTrajectory(
    (
        x = randn(2, N),    # 2D state
        u = randn(1, N),    # 1D control
        Δt = fill(0.1, N),  # time step
    );
    timestep = :Δt,
    controls = :u,
    initial = (x = [0.0, 0.0],),
    final = (x = [1.0, 0.0],),
    bounds = (Δt = (0.05, 0.2), u = 1.0),
)
```

### Step 2: Define the Dynamics
Specify how your system evolves. For bilinear dynamics ẋ = (G₀ + u₁G₁)x:

```julia
G_drift = [-0.1 1.0; -1.0 -0.1]  # drift term G₀
G_drives = [[0.0 1.0; 1.0 0.0]]  # control term G₁
G = u -> G_drift + sum(u .* G_drives)

integrator = BilinearIntegrator(G, :x, :u, traj)
```

### Step 3: Define the Objective
What do we want to minimize? Let's penalize control effort:

```julia
obj = QuadraticRegularizer(:u, traj, 1.0)
```

### Step 4: Create and Solve
Combine everything into a problem and solve:
```julia
prob = DirectTrajOptProblem(traj, obj, integrator)
solve!(prob; max_iter = 100, verbose = false)
```

Ipopt prints problem statistics and a per-iteration log while it solves. The run above ends with the summary:

```
Number of Iterations....: 18
Total seconds in IPOPT = 5.532
EXIT: Invalid number in NLP function or derivative detected.
```

### Step 5: Access the Solution
Let's look at the results:

```julia
plot(prob.trajectory)
```

The optimized trajectory is stored in `prob.trajectory`:

```julia
println("Final state: ", prob.trajectory.x[:, end])
println("Control norm: ", norm(prob.trajectory.u))
```

```
Final state: [1.0, 0.0]
Control norm: 6.83981471214229
```

## What You Can Do
- **Multiple objectives**: Combine regularization, minimum time, terminal costs
- **Flexible dynamics**: Linear, bilinear, time-dependent systems
- **Add constraints**: Bounds, path constraints, custom nonlinear constraints
- **Smooth controls**: Penalize derivatives for smooth, implementable controls
- **Free time**: Optimize trajectory duration
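As an aside, the bilinear model from Step 2 is easy to sanity-check outside the optimizer. The sketch below uses plain Julia (no DirectTrajOpt calls) to roll the dynamics forward with explicit Euler steps under a made-up control sequence; the `rollout` helper and the zero-control sequence are illustrative, not part of the package API:

```julia
using LinearAlgebra

# Same generator as in Step 2: ẋ = (G₀ + u₁G₁)x
G_drift = [-0.1 1.0; -1.0 -0.1]  # drift term G₀
G_drives = [[0.0 1.0; 1.0 0.0]]  # control term G₁
G(u) = G_drift + sum(u .* G_drives)

# Forward-Euler rollout: x_{k+1} = x_k + Δt * G(u_k) * x_k
function rollout(x0, us, Δt)
    x = copy(x0)
    for u in eachcol(us)
        x += Δt * (G(u) * x)
    end
    return x
end

x0 = [1.0, 0.0]
us = zeros(1, 50)        # zero control: pure drift
xN = rollout(x0, us, 0.1)
norm(xN) < norm(x0)      # the drift term is dissipative, so the state norm shrinks
```

Checks like this are a quick way to confirm your generator `G` encodes the dynamics you intend before handing it to `BilinearIntegrator`.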
This page was generated using Literate.jl.