GRAPE Optimizer Deep Dive¶
This tutorial provides an in-depth understanding of the GRAPE (GRadient Ascent Pulse Engineering) algorithm and how to use it effectively in QubitOS.
Prerequisites¶
- Completed Pulse Generation Tutorial
- Basic understanding of quantum mechanics
- Familiarity with optimization concepts
What is GRAPE?¶
GRAPE is a gradient-based optimal control algorithm that finds pulse shapes to implement desired quantum gates with high fidelity.
The Core Idea¶
Given:

- A target unitary \(U_\text{target}\) (the gate we want)
- A system Hamiltonian \(H_0\) (the qubit's natural evolution)
- Control Hamiltonians \(H_k\) (how pulses affect the qubit)
Find: Control amplitudes \(u_k(t)\) that maximize gate fidelity.
Mathematical Formulation¶
The system evolves according to the controlled Schrödinger equation for the propagator (with \(\hbar = 1\)):

\[
\frac{dU(t)}{dt} = -i \left( H_0 + \sum_k u_k(t)\, H_k \right) U(t), \qquad U(0) = I
\]
GRAPE maximizes the fidelity function:

\[
F = \frac{1}{d^2} \left| \mathrm{Tr}\!\left( U_\text{target}^\dagger\, U(T) \right) \right|^2
\]

where \(d\) is the Hilbert space dimension and \(U(T)\) is the propagator at the final time \(T\).
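This fidelity can be sketched directly from the formula. The following is a minimal NumPy version, independent of QubitOS; note that the absolute value makes it insensitive to global phase:

```python
import numpy as np

def gate_fidelity(u_target: np.ndarray, u_achieved: np.ndarray) -> float:
    """F = |Tr(U_target^dagger U)|^2 / d^2 for d-dimensional unitaries."""
    d = u_target.shape[0]
    overlap = np.trace(u_target.conj().T @ u_achieved)
    return abs(overlap) ** 2 / d ** 2

X = np.array([[0, 1], [1, 0]], dtype=complex)
print(gate_fidelity(X, X))       # → 1.0 (identical gates)
print(gate_fidelity(X, 1j * X))  # → 1.0 (global phase does not matter)
```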
GRAPE Algorithm Steps¶
1. Initialize random control pulses u_k(t)
2. Repeat until converged:
    a. Forward propagate: compute U(0→T) with current pulses
    b. Backward propagate: compute the gradient of the fidelity
    c. Update pulses: u_k ← u_k + η · ∂F/∂u_k
    d. Check convergence: if F ≥ target, stop
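The loop above can be sketched end to end with plain NumPy. This is a toy, QubitOS-independent version that optimizes a single-qubit X gate; for clarity it uses finite-difference gradients, whereas real GRAPE implementations compute the gradient analytically from the backward propagation:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def expm_herm(H, dt):
    # exp(-i H dt) for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def fidelity(u, target, dt):
    U = np.eye(2, dtype=complex)
    for ux, uy in u:                      # forward propagation, step by step
        U = expm_herm(ux * sx + uy * sy, dt) @ U
    d = target.shape[0]
    return abs(np.trace(target.conj().T @ U)) ** 2 / d ** 2

rng = np.random.default_rng(0)
n_steps, dt, lr, eps = 20, 0.05, 2.0, 1e-6
u = 0.1 * rng.standard_normal((n_steps, 2))   # random initial pulses
target = sx                                   # X gate

for it in range(300):
    F = fidelity(u, target, dt)
    if F >= 0.999:                            # convergence check
        break
    grad = np.zeros_like(u)
    for i in range(n_steps):                  # finite-difference gradient
        for j in range(2):
            up = u.copy()
            up[i, j] += eps
            grad[i, j] = (fidelity(up, target, dt) - F) / eps
    u += lr * grad                            # gradient ascent update

print(f"final fidelity: {fidelity(u, target, dt):.4f}")
```

Because a single qubit with two independent controls is fully controllable, this landscape has no bad local optima and the toy loop converges reliably; multi-qubit problems are harder.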
QubitOS Implementation¶
```python
from qubitos.pulsegen import GrapeOptimizer, GrapeConfig
from qubitos.pulsegen.hamiltonians import get_target_unitary

config = GrapeConfig(
    num_time_steps=100,     # Time discretization
    duration_ns=50,         # Total pulse duration
    max_iterations=200,     # Maximum optimization steps
    target_fidelity=0.999,  # Convergence threshold
    learning_rate=0.1,      # Gradient step size
)

optimizer = GrapeOptimizer(config)
target = get_target_unitary("X", num_qubits=1)
result = optimizer.optimize(target, num_qubits=1)
```
Configuration Parameters¶
Essential Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `num_time_steps` | int | 100 | Number of pulse samples |
| `duration_ns` | float | 20.0 | Pulse duration in nanoseconds |
| `max_iterations` | int | 1000 | Maximum optimization iterations |
| `target_fidelity` | float | 0.999 | Stop when fidelity exceeds this |
| `learning_rate` | float | 1.0 | Gradient ascent step size |
Advanced Parameters¶
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_amplitude` | float | 100.0 | Maximum pulse amplitude (MHz) |
| `convergence_threshold` | float | 1e-8 | Stop when improvement < threshold |
| `use_second_order` | bool | False | Use GRAPE-II optimization |
| `regularization` | float | 0.0 | L2 penalty on pulse amplitude |
| `random_seed` | int \| None | None | Random seed for reproducibility |
Tuning for Better Results¶
When Optimization Doesn't Converge¶
Problem: Fidelity plateaus below target
Solutions:
```python
# 1. Increase iterations
config = GrapeConfig(max_iterations=500)

# 2. Reduce learning rate (more stable but slower)
config = GrapeConfig(learning_rate=0.05)

# 3. Increase time steps (finer control)
config = GrapeConfig(num_time_steps=200)

# 4. Longer pulse duration (easier optimization)
config = GrapeConfig(duration_ns=100)
```
When Optimization is Too Slow¶
Problem: Each iteration takes too long
Solutions:
```python
# 1. Fewer time steps
config = GrapeConfig(num_time_steps=50)

# 2. Higher learning rate (faster but less stable)
config = GrapeConfig(learning_rate=0.2)

# 3. Lower target fidelity
config = GrapeConfig(target_fidelity=0.99)
```
Finding the Right Balance¶
```python
# Good starting point for single-qubit gates
config = GrapeConfig(
    num_time_steps=100,
    duration_ns=50,
    max_iterations=200,
    target_fidelity=0.999,
    learning_rate=0.1,
)

# For difficult gates (H, T)
config = GrapeConfig(
    num_time_steps=150,
    duration_ns=80,
    max_iterations=400,
    target_fidelity=0.999,
    learning_rate=0.05,
)
```
Convergence Analysis¶
Monitoring Convergence¶
```python
import matplotlib.pyplot as plt

optimizer = GrapeOptimizer(config)
target = get_target_unitary("X", num_qubits=1)
result = optimizer.optimize(target, num_qubits=1)

# Access fidelity history
fidelities = result.fidelity_history
iterations = range(len(fidelities))

# Plot infidelity on a log scale
plt.figure(figsize=(10, 5))
plt.semilogy(iterations, [1 - f for f in fidelities])
plt.xlabel('Iteration')
plt.ylabel('Infidelity (1 - F)')
plt.title('GRAPE Convergence')
plt.grid(True, alpha=0.3)
plt.axhline(y=1e-3, color='r', linestyle='--', label='Target')
plt.legend()
plt.show()
```
Typical Convergence Behavior¶
```text
Iteration   1: F = 0.52   (random start)
Iteration  10: F = 0.85   (rapid improvement)
Iteration  50: F = 0.98   (approaching target)
Iteration 100: F = 0.999  (converged)
```
Signs of Problems¶
| Symptom | Cause | Solution |
|---|---|---|
| F oscillates | Learning rate too high | Reduce learning_rate |
| F plateaus at 0.5 | Wrong Hamiltonian | Check system parameters |
| F increases slowly | Learning rate too low | Increase learning_rate |
| F jumps around | Numerical instability | Reduce learning_rate or enable use_second_order |
Understanding Fidelity¶
Gate Fidelity vs. State Fidelity¶
- Gate fidelity: How close \(U_\text{achieved}\) is to \(U_\text{target}\)
- State fidelity: How close \(|\psi_\text{final}\rangle\) is to target state
QubitOS uses the average gate fidelity (Nielsen 2002):

\[
\bar{F} = \frac{\left| \mathrm{Tr}\!\left( U_\text{target}^\dagger U \right) \right|^2 + d}{d(d+1)}
\]
Interpreting Fidelity Values¶
| Fidelity | Quality | Application |
|---|---|---|
| > 0.999 | Excellent | Research-grade |
| 0.99 - 0.999 | Good | Most applications |
| 0.95 - 0.99 | Acceptable | Proof of concept |
| < 0.95 | Poor | Needs improvement |
From Fidelity to Error Rate¶
The error probability per gate is approximately:

\[
p_\text{error} \approx 1 - F
\]
For 99.9% fidelity: ~0.1% error per gate.
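As a quick sanity check on how per-gate error compounds: the probability that every gate in a circuit of \(N\) gates succeeds is roughly \(F^N\):

```python
F = 0.999                 # per-gate fidelity
p_error = 1 - F           # ~0.1% error per gate
print(f"error per gate:    {p_error:.4f}")

# Rough survival probability of a 100-gate circuit
print(f"100-gate survival: {F ** 100:.3f}")   # → about 0.905
```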
Multi-Qubit Gates¶
Two-Qubit Gates¶
```python
from qubitos.pulsegen import GrapeOptimizer, GrapeConfig
from qubitos.pulsegen.hamiltonians import (
    get_target_unitary,
    build_hamiltonian,
)

# Build a two-qubit drift Hamiltonian from Pauli strings
drift, controls = build_hamiltonian(
    drift="5.0 * Z0 + 5.2 * Z1 + 0.01 * Z0 Z1",
    num_qubits=2,
)

config = GrapeConfig(
    num_time_steps=200,   # More steps for 2Q gates
    duration_ns=100,      # Longer duration
    max_iterations=500,   # More iterations
)

optimizer = GrapeOptimizer(config)
target = get_target_unitary("CNOT", num_qubits=2, qubit_indices=[0, 1])
result = optimizer.optimize(
    target, num_qubits=2,
    drift_hamiltonian=drift,
    control_hamiltonians=controls,
)
```
Challenges with Multi-Qubit Gates¶
- Larger state space: the Hilbert space dimension is \(2^n\) for \(n\) qubits, so propagators are \(2^n \times 2^n\) matrices with \(4^n\) entries
- More control parameters: Multiple drive lines
- Crosstalk: Pulses affect neighboring qubits
- Longer optimization: More iterations needed
Regularization¶
Pulse Smoothness¶
Prevent high-frequency oscillations:
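QubitOS exposes this through the `regularization` parameter from the advanced configuration table, an L2 penalty on pulse amplitude that indirectly discourages large, rapidly oscillating pulses. The weight below is illustrative and worth sweeping:

```python
# L2 amplitude penalty; larger values give smoother, weaker pulses
# at the cost of slower convergence (0.01 is an illustrative value)
config = GrapeConfig(regularization=0.01)
```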
Amplitude Constraints¶
Limit pulse amplitudes to hardware-safe values:
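The `max_amplitude` parameter from the advanced configuration table caps the optimized pulse; the value below is illustrative and should match your hardware's drive limits:

```python
# Keep optimized pulses within hardware limits (MHz, illustrative value)
config = GrapeConfig(max_amplitude=30.0)
```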
Advanced Techniques¶
Randomized Starting Points¶
```python
import numpy as np

def optimize_with_restarts(gate_name, n_restarts=5):
    best_result = None
    target = get_target_unitary(gate_name, num_qubits=1)
    for i in range(n_restarts):
        config = GrapeConfig(random_seed=i)  # Different initialization
        optimizer = GrapeOptimizer(config)
        result = optimizer.optimize(target, num_qubits=1)
        if best_result is None or result.fidelity > best_result.fidelity:
            best_result = result
    return best_result

result = optimize_with_restarts("H", n_restarts=10)
print(f"Best fidelity: {result.fidelity:.6f}")
```
Warm Starting¶
Use a previously optimized pulse as starting point:
```python
# Optimize X gate
target_x = get_target_unitary("X", num_qubits=1)
x_result = optimizer.optimize(target_x, num_qubits=1)

# Use as starting point for a similar gate
target_y = get_target_unitary("Y", num_qubits=1)
result = optimizer.optimize(
    target_y, num_qubits=1,
    initial_pulses=(x_result.i_envelope, x_result.q_envelope),
)
```
Composite Pulses¶
Build complex gates from simpler ones:
```python
# Hadamard = RY(π/2) @ RZ(π), up to a global phase
target_ry = get_target_unitary("RY", num_qubits=1, angle=np.pi / 2)
ry_result = optimizer.optimize(target_ry, num_qubits=1)

target_rz = get_target_unitary("RZ", num_qubits=1, angle=np.pi)
rz_result = optimizer.optimize(target_rz, num_qubits=1)

# Concatenate pulses (the RZ pulse plays first, then the RY pulse)
h_i = np.concatenate([rz_result.i_envelope, ry_result.i_envelope])
h_q = np.concatenate([rz_result.q_envelope, ry_result.q_envelope])
```
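The decomposition itself can be checked numerically. The product matches the Hadamard up to a global phase, which gate fidelity ignores (minimal NumPy sketch, independent of QubitOS):

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = ry(np.pi / 2) @ rz(np.pi)    # RZ applied first, then RY

phase = U[0, 0] / H[0, 0]        # global phase factor (here -1j)
print(np.allclose(U, phase * H))  # → True
```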
Performance Optimization¶
Parallel Execution¶
```python
from concurrent.futures import ProcessPoolExecutor

def optimize_gate(gate_name):
    target = get_target_unitary(gate_name, num_qubits=1)
    return optimizer.optimize(target, num_qubits=1)

gates = ["X", "Y", "Z", "H", "S", "T"]
with ProcessPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(optimize_gate, gates))
```
Caching Results¶
For caching optimization results, use JSON-based serialization to avoid security risks associated with pickle (which can execute arbitrary code when loading untrusted data):
```python
import hashlib
import json
from pathlib import Path

import numpy as np

def get_or_optimize(gate, config, cache_dir="pulse_cache"):
    cache_dir = Path(cache_dir)
    cache_dir.mkdir(exist_ok=True)

    # Create cache key using SHA-256 (avoid MD5 - cryptographically broken)
    key = hashlib.sha256(f"{gate}-{config}".encode()).hexdigest()
    cache_file = cache_dir / f"{key}.json"

    if cache_file.exists():
        with open(cache_file, 'r') as f:
            data = json.load(f)
        # Reconstruct result from cached data
        return {
            "fidelity": data["fidelity"],
            "converged": data["converged"],
            "iterations": data["iterations"],
            "i_envelope": np.array(data["i_envelope"]),
            "q_envelope": np.array(data["q_envelope"]),
        }

    target = get_target_unitary(gate, num_qubits=1)
    result = optimizer.optimize(target, num_qubits=1)

    # Cache as JSON (safe serialization)
    cache_data = {
        "fidelity": result.fidelity,
        "converged": result.converged,
        "iterations": result.iterations,
        "i_envelope": result.i_envelope.tolist(),
        "q_envelope": result.q_envelope.tolist(),
    }
    with open(cache_file, 'w') as f:
        json.dump(cache_data, f)
    return result
```
**Never use pickle for caching.** Pickle can execute arbitrary code when loading untrusted data. Always use JSON or another safe serialization format for caching optimization results.
Debugging Tips¶
Check the Hamiltonian¶
```python
from qubitos.pulsegen.hamiltonians import build_hamiltonian

# Build and inspect Hamiltonian matrices
drift, controls = build_hamiltonian(num_qubits=1)
print("Drift Hamiltonian:")
print(drift)
print("\nControl Hamiltonians:")
for i, H in enumerate(controls):
    print(f"  H_control[{i}]:")
    print(f"  {H}")
```
Verify Target Unitary¶
```python
import numpy as np

from qubitos.pulsegen.hamiltonians import get_target_unitary

# Check target vs achieved gate
target = get_target_unitary("X", num_qubits=1)
print("Target unitary:")
print(target)
print("\nAchieved unitary:")
print(result.final_unitary)
print("\nDifference:")
print(np.abs(target - result.final_unitary))
```
Visualize Optimization Progress¶
```python
# Plot fidelity over iterations
plt.figure(figsize=(10, 5))
plt.plot(result.fidelity_history)
plt.xlabel('Iteration')
plt.ylabel('Fidelity')
plt.title('Optimization Progress')
plt.axhline(y=config.target_fidelity, color='r', linestyle='--')
plt.grid(True, alpha=0.3)
plt.show()
```
References¶
- Khaneja et al. (2005), "Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms", J. Magn. Reson.
- de Fouquieres et al. (2011), "Second order gradient ascent pulse engineering", J. Magn. Reson.
- Nielsen (2002), "A simple formula for the average gate fidelity of a quantum dynamical operation", Phys. Lett. A
Next Steps¶
- Custom Hamiltonians - Build your own system models
- API Reference - Complete API documentation
- Notebooks - Interactive examples