Tutorial: Linear System Control
In this tutorial, we'll solve a simple 2D linear control problem from start to finish.
Problem Description
We want to control a 2D oscillator from rest at the origin to a target position, minimizing control effort.
System dynamics:
\[\dot{x} = (G_0 + u_1 G_1) x\]
Goal: Drive from x(0) = [0, 0] to x(N) = [1, 0]
Objective: Minimize control effort ∫ ||u||² dt
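Discretized over N knot points, the transcription solved below is, schematically (matching the exponential-integrator dynamics used in Step 3):
\[\min_{x_{1:N},\,u_{1:N}} \; \sum_{k} \|u_k\|^2 \, \Delta t_k \quad \text{s.t.} \quad x_{k+1} = \exp\!\big(\Delta t_k\, G(u_k)\big)\, x_k, \quad x_1 = [0, 0]^\top, \quad x_N = [1, 0]^\top\]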
using DirectTrajOpt
using NamedTrajectories
using LinearAlgebra
using Statistics
using Printf

Step 1: Define the System Dynamics
The drift matrix (natural dynamics):
G_drift = [
    -0.1  1.0;
    -1.0 -0.1
]

2×2 Matrix{Float64}:
 -0.1   1.0
 -1.0  -0.1

The drive matrix (control influence):
G_drives = [[
    0.0 1.0;
    1.0 0.0
]]

1-element Vector{Matrix{Float64}}:
 [0.0 1.0; 1.0 0.0]

Generator function:
G = u -> G_drift + sum(u .* G_drives)

#2 (generic function with 1 method)

Let's understand what this system does:
- G_drift creates damped oscillations
- G_drives couples the two states symmetrically
- Control u affects how the states influence each other
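You can confirm the damped-oscillation claim by inspecting the eigenvalues of G_drift — a quick standalone check using only LinearAlgebra:

```julia
using LinearAlgebra

G_drift = [
    -0.1  1.0;
    -1.0 -0.1
]

λ = eigvals(G_drift)   # -0.1 ± 1.0im
# Negative real parts ⇒ the drift is stable (damped);
# imaginary parts ±1 ⇒ natural oscillation at 1 rad/s.
```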
Step 2: Create the Trajectory
Time parameters
N = 50              # number of knot points
Δt = 0.1            # time step size
total_time = N * Δt
println("Total time: $total_time seconds")

Total time: 5.0 seconds

Note that with N knot points there are only N - 1 integration intervals, so the solved trajectory spans (N - 1) * Δt = 4.9 s — the "Duration: 4.9" reported in the problem summary below.

Initial and goal states
x_init = [0.0, 0.0]
x_goal = [1.0, 0.0]

2-element Vector{Float64}:
 1.0
 0.0

Create initial guess with linear interpolation
x_guess = hcat([x_init + (x_goal - x_init) * (t/(N-1)) for t = 0:(N-1)]...)
u_guess = zeros(1, N)

1×50 Matrix{Float64}:
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

Create the trajectory
traj = NamedTrajectory(
(x = x_guess, u = u_guess, Δt = fill(Δt, N));
timestep = :Δt,
controls = :u,
initial = (x = x_init,),
final = (x = x_goal,),
)
println("Trajectory dimensions:")
println(" States: ", traj.dims.x)
println(" Controls: ", traj.dims.u)
println(" Time steps: ", traj.N)

Trajectory dimensions:
 States: 2
 Controls: 1
 Time steps: 50

Step 3: Define the Dynamics Constraint
We use BilinearIntegrator, since our generator G(u) is linear in the controls.
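At each step this integrator enforces the exact flow of the system under a piecewise-constant control, x[k+1] = exp(Δt_k G(u_k)) x[k] (as the problem summary below also shows). A standalone sketch of one such step, using plain LinearAlgebra with no solver involved:

```julia
using LinearAlgebra

G_drift  = [-0.1  1.0; -1.0 -0.1]
G_drives = [[0.0 1.0; 1.0 0.0]]
G = u -> G_drift + sum(u .* G_drives)

# One exact step under the piecewise-constant control u_k
x_k, u_k, Δt_k = [1.0, 0.0], [0.5], 0.1
x_next = exp(Δt_k * G(u_k)) * x_k
```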
integrator = BilinearIntegrator(G, :x, :u, traj)
println("Integrator created for bilinear dynamics")

Integrator created for bilinear dynamics

Step 4: Define the Objective
Minimize control effort:
obj = QuadraticRegularizer(:u, traj, 1.0)
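For intuition, the penalized quantity is a Riemann sum of ||u||² over the knot points — a sketch of that sum with a hypothetical helper (the exact weighting DirectTrajOpt applies may differ, e.g. by a factor of 1/2):

```julia
# Hypothetical helper: Riemann-sum analogue of ∫ ||u||² dt
control_effort(u, Δt) = sum(sum(abs2, u[:, k]) * Δt[k] for k in axes(u, 2))

u_demo  = [0.0 0.5 1.0]    # one control channel over three knots
Δt_demo = fill(0.1, 3)
control_effort(u_demo, Δt_demo)    # (0 + 0.25 + 1.0) * 0.1 = 0.125
```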
println("Objective: minimize ∫ ||u||² dt")

Objective: minimize ∫ ||u||² dt

Step 5: Create and Solve the Problem
Assemble the optimization problem:
prob = DirectTrajOptProblem(traj, obj, integrator)
prob

DirectTrajOptProblem
  Trajectory
    Timesteps: 50
    Duration: 4.9
    Knot dim: 4
    Variables: x (2), u (1), Δt (1)
    Controls: u, Δt
  Objective: QuadraticRegularizer on :u (R = [1.0], all)
  Dynamics (1 integrators)
    BilinearIntegrator: :x = exp(Δt G(:u)) :x (dim = 2)
  Constraints (3 total: 2 equality, 1 bounds)
    EqualityConstraint: "initial value of x"
    EqualityConstraint: "final value of x"
    BoundsConstraint: "bounds on Δt"

println("Solving optimization problem...")
println("="^50)

Solving optimization problem...
==================================================

Solve with Ipopt:
solve!(prob; max_iter = 100, verbose = false)
println("="^50)
println("Optimization complete!\n")

This is Ipopt version 3.14.19, running with linear solver MUMPS 5.8.2.
Number of nonzeros in equality constraint Jacobian...: 776
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 1254
Total number of variables............................: 196
variables with only lower bounds: 50
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 98
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 9.68e-02 1.00e+00 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 4.3528327e-05 3.39e-02 1.68e+00 -4.0 3.37e-01 0.0 9.90e-01 1.00e+00h 1
Number of Iterations....: 2
Number of objective function evaluations = 3
Number of objective gradient evaluations = 3
Number of equality constraint evaluations = 3
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 2
Total seconds in IPOPT = 7.349
EXIT: Invalid number in NLP function or derivative detected.
==================================================
Optimization complete!

Step 6: Analyze the Solution
Extract the solution:
x_sol = prob.trajectory.x
u_sol = prob.trajectory.u
times = cumsum([0.0; prob.trajectory.Δt[:]])
println("Solution analysis:")
println(" Initial state: ", x_sol[:, 1])
println(" Final state: ", x_sol[:, end])
println(" Goal state: ", x_goal)
println(" Final error: ", norm(x_sol[:, end] - x_goal))

Solution analysis:
 Initial state: [0.0, 0.0]
 Final state: [1.0, 0.0]
 Goal state: [1.0, 0.0]
 Final error: 0.0

Control statistics:
u_norm = norm(u_sol)
u_max = maximum(abs.(u_sol))
u_mean = mean(abs.(u_sol))
println("\nControl statistics:")
println(" Total norm: ", u_norm)
println(" Max magnitude: ", u_max)
println(" Mean magnitude: ", u_mean)
Control statistics:
 Total norm: 0.9751520506543323
 Max magnitude: 0.22487367295261695
 Mean magnitude: 0.11288232664156773

Step 7: Verify Dynamics
Check that the solution satisfies the dynamics at a few points:
function verify_dynamics(x, u, Δt, G, k)
# Compute x[k+1] using the dynamics
x_k = x[:, k]
u_k = u[:, k]
Δt_k = Δt[k]
# Matrix exponential integration
x_k1_predicted = exp(Δt_k * G(u_k)) * x_k
x_k1_actual = x[:, k+1]
error = norm(x_k1_predicted - x_k1_actual)
return error
end
println("\nDynamics verification (error at selected time steps):")
for k in [1, 10, 25, 40, N-1]
error = verify_dynamics(x_sol, u_sol, prob.trajectory.Δt, G, k)
println(" k=$k: error = ", error)
end
Dynamics verification (error at selected time steps):
 k=1: error = 9.487394314388142e-29
 k=10: error = 0.009632631185615042
 k=25: error = 0.02906171570831664
 k=40: error = 0.03289057530673798
 k=49: error = 0.002193042455111153

Visualization (Conceptual)
Below we print a text summary of the solution; in a Jupyter notebook, or with a plotting package such as Plots.jl, you could instead plot the state and control trajectories.
println("\n" * "="^50)
println("SOLUTION SUMMARY")
println("="^50)
println("\nState trajectory (first 10 and last 10 time steps):")
println("Time | x₁ | x₂")
println("-"^25)
for k in [1:10; (N-9):N]
t = times[k]
println(@sprintf("%.2f | %7.4f | %7.4f", t, x_sol[1, k], x_sol[2, k]))
end
println("\nControl trajectory (first 10 and last 10 time steps):")
println("Time | u")
println("-"^15)
for k in [1:10; (N-9):N]
t = times[k]
println(@sprintf("%.2f | %7.4f", t, u_sol[1, k]))
end
==================================================
SOLUTION SUMMARY
==================================================
State trajectory (first 10 and last 10 time steps):
Time | x₁ | x₂
-------------------------
0.00 | 0.0000 | 0.0000
0.30 | -0.0000 | -0.0000
0.46 | 0.0000 | 0.0002
0.58 | 0.0002 | 0.0009
0.69 | 0.0006 | 0.0023
0.78 | 0.0013 | 0.0047
0.87 | 0.0026 | 0.0081
0.94 | 0.0045 | 0.0126
1.01 | 0.0071 | 0.0183
1.07 | 0.0107 | 0.0253
1.97 | 0.8227 | 0.2355
1.99 | 0.8583 | 0.2227
2.01 | 0.8925 | 0.2066
2.03 | 0.9247 | 0.1863
2.06 | 0.9539 | 0.1605
2.09 | 0.9785 | 0.1280
2.13 | 0.9959 | 0.0889
2.16 | 1.0036 | 0.0456
2.19 | 1.0023 | 0.0008
2.20 | 1.0000 | 0.0000
Control trajectory (first 10 and last 10 time steps):
Time | u
---------------
0.00 | 0.0000
0.30 | 0.0075
0.46 | 0.0123
0.58 | 0.0142
0.69 | 0.0140
0.78 | 0.0126
0.87 | 0.0111
0.94 | 0.0102
1.01 | 0.0106
1.07 | 0.0129
1.97 | 0.1581
1.99 | 0.1535
2.01 | 0.1456
2.03 | 0.1285
2.06 | 0.0897
2.09 | 0.0098
2.13 | -0.1091
2.16 | -0.1936
2.19 | -0.0630
2.20 | -0.0000

Key Takeaways
- Linear interpolation provides a good initial guess for smooth problems
- BilinearIntegrator handles control-linear dynamics exactly
- Boundary conditions (initial/final) are enforced as hard constraints
- Control effort minimization produces smooth, efficient controls
- The solver finds a solution that satisfies dynamics and reaches the goal
Exercises
Try modifying the problem:
Exercise 1: Change the goal
Try reaching x_goal = [0.5, 0.5] instead
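A sketch of the first change for this exercise: rebuild the linear-interpolation guess toward the new goal (only the guess is shown here — pass it, along with final = (x = x_goal_new,), into the NamedTrajectory constructor from Step 2 and re-solve):

```julia
x_init     = [0.0, 0.0]
x_goal_new = [0.5, 0.5]   # new target for Exercise 1
N = 50

# Linear-interpolation initial guess toward the new goal
x_guess_new = hcat([x_init + (x_goal_new - x_init) * (t / (N - 1)) for t = 0:(N - 1)]...)
```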
Exercise 2: Add control bounds
Limit the control: -1.0 ≤ u ≤ 1.0
traj_bounded = NamedTrajectory(
(x = x_guess, u = u_guess, Δt = fill(Δt, N));
timestep=:Δt,
controls=:u,
initial=(x = x_init,),
final=(x = x_goal,),
bounds=(u = 1.0,)
)

Exercise 3: Vary the control weight
Try QuadraticRegularizer(:u, traj, 0.1) (less penalty) or QuadraticRegularizer(:u, traj, 10.0) (more penalty)
Exercise 4: Add a terminal cost
Use soft goal constraint instead:
traj_soft = NamedTrajectory(
(x = x_guess, u = u_guess, Δt = fill(Δt, N));
timestep=:Δt,
controls=:u,
initial=(x = x_init,) # No final constraint
)
obj_soft = QuadraticRegularizer(:u, traj_soft, 1.0) +
    TerminalObjective(x -> 100.0 * norm(x - x_goal)^2, :x, traj_soft)

Next Steps
- Bilinear Control Tutorial: Multiple drives and bounds
- Minimum Time Tutorial: Optimize trajectory duration
- Smooth Controls Tutorial: Add derivative penalties
This page was generated using Literate.jl.