Objectives
Objectives define what the optimization minimizes. Piccolo.jl provides objectives for fidelity, regularization, and leakage suppression.
Overview
The total objective is a weighted sum:
\[J = Q \cdot J_{\text{fidelity}} + R_u \cdot J_u + R_{du} \cdot J_{du} + R_{ddu} \cdot J_{ddu} + \cdots\]
Where:
- J_fidelity: infidelity (1 - F)
- J_u, J_du, J_ddu: control regularization terms
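To make the roles of Q and the R weights concrete, here is a minimal plain-Julia sketch of this weighted sum. The per-term values and the `total_cost` helper are made up for illustration and are not part of the Piccolo.jl API.

```julia
# Illustrative only: combining hypothetical per-term costs with the weights Q and R.
function total_cost(J_fid, J_u, J_du, J_ddu; Q=100.0, R_u=1e-3, R_du=1e-2, R_ddu=1e-2)
    return Q * J_fid + R_u * J_u + R_du * J_du + R_ddu * J_ddu
end

total_cost(1e-3, 0.5, 2.0, 10.0)  # ≈ 0.22; raising Q shifts the balance toward fidelity
```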
Fidelity Objectives
UnitaryInfidelityObjective
For unitary gate synthesis:
```julia
obj = UnitaryInfidelityObjective(:Ũ⃗, U_goal; Q=100.0)
```
Fidelity metric:
\[F = \frac{1}{d^2} \left| \text{Tr}(U_{\text{goal}}^\dagger U) \right|^2\]
Where d is the Hilbert space dimension.
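As a quick sanity check of this metric, here is a plain-Julia sketch (not the packaged objective); the example matrices are chosen only to show that the metric ignores global phase.

```julia
using LinearAlgebra

# F = |Tr(U_goal' * U)|^2 / d^2, evaluated directly (illustrative only)
unitary_fidelity(U, U_goal) = abs2(tr(U_goal' * U)) / size(U, 1)^2

X = [0 1; 1 0]
U = exp(-im * (π / 2) * X)   # equals -im * X: differs from X only by a global phase
unitary_fidelity(U, X)       # ≈ 1.0
```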
KetInfidelityObjective
For single state transfer:
```julia
obj = KetInfidelityObjective(:ψ̃, ψ_goal; Q=100.0)
```
Fidelity metric:
\[F = \left| \langle \psi_{\text{goal}} | \psi \rangle \right|^2\]
CoherentKetInfidelityObjective
For multiple state transfers with phase coherence (used by MultiKetTrajectory):
```julia
obj = CoherentKetInfidelityObjective(
    [:ψ̃1, :ψ̃2],
    [ψ_goal1, ψ_goal2];
    Q=100.0
)
```
This ensures both:
- Each state reaches its target
- Relative phases between states are preserved (see the sketch below)
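The difference is easiest to see numerically. The comparison below is only an illustration of the idea — summing complex overlap amplitudes before taking the magnitude versus averaging the individual overlap probabilities — and is not necessarily Piccolo.jl's exact definition of the coherent fidelity.

```julia
using LinearAlgebra

# Two states that each reach their targets, but ψ2 picks up a spurious relative phase.
ψ1, goal1 = [1.0 + 0im, 0.0], [1.0 + 0im, 0.0]
ψ2, goal2 = [0.0, 1.0im], [0.0, 1.0 + 0im]

# Incoherent: average the individual overlap probabilities — blind to the relative phase.
incoherent = (abs2(dot(goal1, ψ1)) + abs2(dot(goal2, ψ2))) / 2   # = 1.0

# Coherent (illustrative): sum amplitudes before squaring — the phase error is penalized.
coherent = abs2(dot(goal1, ψ1) + dot(goal2, ψ2)) / 4             # = 0.5
```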
UnitaryFreePhaseInfidelityObjective
When global phase doesn't matter:
```julia
obj = UnitaryFreePhaseInfidelityObjective(:Ũ⃗, U_goal; Q=100.0)
```
Optimizes over the global phase to find the best match.
Regularization Objectives
Regularization penalizes large or rapidly-varying controls.
QuadraticRegularizer
The standard regularization form:
```julia
# Penalize control magnitude
reg_u = QuadraticRegularizer(:u, traj, R)

# Penalize control derivatives
reg_du = QuadraticRegularizer(:du, traj, R)
reg_ddu = QuadraticRegularizer(:ddu, traj, R)
```
Mathematical form:
\[J_x = \sum_k \| x_k \|^2\]
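A plain-Julia sketch of what these terms measure; the finite-difference derivatives below are only stand-ins for the trajectory's :du and :ddu variables.

```julia
# Controls as an (n_drives × N) matrix; each column is one timestep.
u  = 0.1 * randn(2, 100)
Δt = 0.1

quadratic_cost(x) = sum(abs2, x)     # Σ_k ‖x_k‖²

du  = diff(u, dims = 2) ./ Δt        # stand-in for :du
ddu = diff(du, dims = 2) ./ Δt       # stand-in for :ddu

J_u, J_du, J_ddu = quadratic_cost(u), quadratic_cost(du), quadratic_cost(ddu)
# Noisy pulses inflate J_du and J_ddu, so penalizing them pushes the optimizer toward smooth controls.
```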
Why Regularize?
- Smoothness: Derivative regularization encourages smooth pulses
- Robustness: Discourages solutions that exploit numerical precision or rely on delicately fine-tuned control features
- Hardware-friendliness: Bounded, smooth controls are easier to implement
- Convergence: Regularization improves optimization landscape
Leakage Objectives
For multilevel systems, leakage to non-computational states can be penalized.
LeakageObjective
```julia
# For EmbeddedOperator with defined subspace
op = EmbeddedOperator(:X, sys)  # X gate in computational subspace
obj = LeakageObjective(:Ũ⃗, op; Q=10.0)
```
Mathematical form: Penalizes population outside the computational subspace at the final time.
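To make "population outside the computational subspace" concrete, here is a plain-Julia sketch for a single qutrit state; the state and subspace indices are made up, and the packaged LeakageObjective operates on the trajectory variables rather than a raw vector like this.

```julia
using LinearAlgebra

# Qutrit state with a small amount of population in the |2⟩ leakage level.
ψ = normalize([0.70 + 0im, 0.70, 0.14])

subspace = 1:2                           # computational levels |0⟩ and |1⟩
leakage  = 1 - sum(abs2, ψ[subspace])    # ≈ 0.02: population outside the subspace
```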
Via PiccoloOptions
The easier way is to enable leakage handling through PiccoloOptions:
```julia
opts = PiccoloOptions(
    leakage_constraint=true,
    leakage_constraint_value=1e-3,
    leakage_cost=10.0
)
qcp = SmoothPulseProblem(qtraj, N; piccolo_options=opts)
```
Using Objectives in Problem Templates
Problem templates automatically set up objectives. You typically don't create them manually.
Automatic Setup
Let's see how the key parameters affect optimization:
```julia
using Piccolo

# Set up a system
H_drift = PAULIS[:Z]
H_drives = [PAULIS[:X], PAULIS[:Y]]
sys = QuantumSystem(H_drift, H_drives, [1.0, 1.0])

T, N = 10.0, 100
times = collect(range(0, T, length = N))
pulse = ZeroOrderPulse(0.1 * randn(2, N), times)
U_goal = GATES[:X]
qtraj = UnitaryTrajectory(sys, pulse, U_goal)

# SmoothPulseProblem automatically creates:
# - UnitaryInfidelityObjective (for UnitaryTrajectory)
# - QuadraticRegularizer for :u, :du, :ddu
qcp = SmoothPulseProblem(
    qtraj,
    N;
    Q = 100.0,     # Fidelity weight
    R_u = 1e-3,    # Control regularization
    R_du = 1e-2,   # First derivative regularization
    R_ddu = 1e-2,  # Second derivative regularization
)

cached_solve!(qcp, "objectives_example"; max_iter = 50)
fidelity(qcp)
```
0.9995499563577507

Trajectory-Dependent Objectives
The objective type is chosen based on the trajectory:
| Trajectory Type | Default Objective |
|---|---|
| UnitaryTrajectory | UnitaryInfidelityObjective |
| KetTrajectory | KetInfidelityObjective |
| MultiKetTrajectory | CoherentKetInfidelityObjective |
Objective Weights
The Q Parameter
Q weights the fidelity objective:
- Higher Q (e.g., 1000): Prioritize fidelity over smoothness
- Lower Q (e.g., 10): Allow more flexibility in controls
The R Parameters
R, R_u, R_du, R_ddu weight regularization:
- Higher R: Smoother, smaller controls
- Lower R: More aggressive controls allowed
Balancing Trade-offs
Let's compare different weight settings:
```julia
# High fidelity weight
qcp_high_Q =
    SmoothPulseProblem(UnitaryTrajectory(sys, pulse, U_goal), N; Q = 1000.0, R = 1e-2)
cached_solve!(qcp_high_Q, "objectives_high_Q"; max_iter = 100)
fidelity(qcp_high_Q)
```
0.999885473640225

```julia
# High regularization
qcp_high_R =
    SmoothPulseProblem(UnitaryTrajectory(sys, pulse, U_goal), N; Q = 100.0, R = 0.1)
cached_solve!(qcp_high_R, "objectives_high_R"; max_iter = 100)
fidelity(qcp_high_R)
```
0.999991002581111

Typical Starting Values
| Parameter | Typical Range | Starting Point |
|---|---|---|
| Q | 10 - 10000 | 100 |
| R | 1e-6 - 1.0 | 1e-2 |
| R_u | same as R | R |
| R_du | same as R | R |
| R_ddu | same as R | R |
Best Practices
1. Start with Defaults
Problem templates have sensible defaults. Start there:
```julia
qcp = SmoothPulseProblem(qtraj, N)  # Uses Q=100, R=1e-2
```
2. Tune Q First
If fidelity is too low, increase Q.
3. Tune R if Controls are Problematic
If controls are too noisy or large, increase R.
4. Use Per-Derivative Tuning for Fine Control
```julia
qcp = SmoothPulseProblem(
    qtraj, N;
    R_u=1e-4,   # Allow larger control values
    R_du=1e-2,  # Penalize jumps moderately
    R_ddu=0.1   # Strongly penalize acceleration
)
```
See Also
- Constraints - Hard constraints on solutions
- Problem Templates - How objectives are used
- SmoothPulseProblem - Parameter reference
This page was generated using Literate.jl.