GRAPE Optimization Deep Dive¶
This notebook provides a comprehensive exploration of the GRAPE (GRadient Ascent Pulse Engineering) algorithm as implemented in QubitOS. You'll learn how the algorithm works, how to configure it for optimal performance, and how to analyze convergence behavior.
Prerequisites¶
- Basic understanding of quantum mechanics and quantum gates
- Familiarity with the QubitOS quickstart tutorial
- Python with NumPy and Matplotlib
Topics Covered¶
- GRAPE Algorithm Fundamentals - The mathematics behind pulse optimization
- Configuration Deep Dive - Understanding every parameter
- Convergence Analysis - Visualizing and understanding optimization progress
- Performance Tuning - Achieving high fidelity efficiently
- Advanced Topics - Second-order methods, regularization, and callbacks
Part 1: GRAPE Algorithm Fundamentals¶
What is GRAPE?¶
GRAPE (GRadient Ascent Pulse Engineering) is a quantum optimal control algorithm that iteratively shapes time-dependent control pulses to implement a desired quantum gate with high fidelity.
The algorithm was first introduced by:
Khaneja et al., "Optimal control of coupled spin dynamics", J. Magn. Reson. 172, 296-305 (2005)
The Optimization Problem¶
Given:
- A target unitary $U_{\text{target}}$ (the gate we want to implement)
- A system Hamiltonian describing the quantum hardware
- Control fields that we can modulate in time
Find:
- Time-dependent control amplitudes $\{u_k(t)\}$ that maximize the gate fidelity:
$$F = \frac{|\text{Tr}(U_{\text{target}}^\dagger U_{\text{achieved}})|^2 + d}{d^2 + d}$$
where $d = 2^n$ is the Hilbert space dimension for $n$ qubits.
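This average gate fidelity is straightforward to compute directly with NumPy. The following standalone sketch (independent of QubitOS's internal implementation) evaluates the formula above:

```python
import numpy as np

def average_gate_fidelity(u_target: np.ndarray, u_achieved: np.ndarray) -> float:
    """F = (|Tr(U_target^dag U_achieved)|^2 + d) / (d^2 + d)."""
    d = u_target.shape[0]
    chi = np.trace(u_target.conj().T @ u_achieved)
    return (np.abs(chi) ** 2 + d) / (d ** 2 + d)

# A perfect X gate scores F = 1; the identity (chi = 0) scores (0 + 2)/6 = 1/3.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
print(average_gate_fidelity(X, X))  # 1.0
print(average_gate_fidelity(X, I))  # ~0.333
```

Note that the `+d` terms make the worst case $F = 1/(d+1)$ rather than 0, reflecting the average over input states rather than the worst-case overlap.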
Time Discretization¶
GRAPE discretizes the total pulse duration $T$ into $N$ time steps:
$$\Delta t = \frac{T}{N}$$
At each time step $k$, the system evolves under the Hamiltonian:
$$H_k = H_0 + \sum_j u_j^{(k)} H_j$$
where:
- $H_0$ is the drift Hamiltonian (always-on terms)
- $H_j$ are control Hamiltonians (e.g., $\sigma_x$, $\sigma_y$)
- $u_j^{(k)}$ are the control amplitudes we optimize
The Propagator¶
Each time step generates a propagator:
$$U_k = \exp(-i H_k \Delta t)$$
The total evolution is:
$$U_{\text{achieved}} = U_N \cdot U_{N-1} \cdot \ldots \cdot U_2 \cdot U_1$$
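The discretization and propagator product above can be sketched in a few lines using SciPy's matrix exponential. The function and variable names here are illustrative, not QubitOS API:

```python
import numpy as np
from scipy.linalg import expm

PAULI_X = np.array([[0, 1], [1, 0]], dtype=complex)

def total_propagator(u, H0, controls, dt):
    """Accumulate U_achieved = U_N ... U_1 with U_k = exp(-i H_k dt).

    u: (N, n_controls) array of control amplitudes u_j^(k).
    H0: drift Hamiltonian; controls: list of control Hamiltonians H_j.
    """
    d = H0.shape[0]
    U = np.eye(d, dtype=complex)
    for k in range(u.shape[0]):
        Hk = H0 + sum(u[k, j] * controls[j] for j in range(len(controls)))
        U = expm(-1j * Hk * dt) @ U  # newest step multiplies on the left
    return U

# Example: a constant X drive of strength pi/(2T) over total time T gives
# exp(-i (pi/2) X) = -i X, i.e. an X gate up to global phase.
N, T = 100, 1.0
dt = T / N
u = np.full((N, 1), np.pi / (2 * T))
U = total_propagator(u, np.zeros((2, 2), dtype=complex), [PAULI_X], dt)
```

Left-multiplication order matters: the step executed last ($U_N$) sits leftmost in the product, matching the equation above.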
# Let's start by importing the necessary modules
import numpy as np
import matplotlib.pyplot as plt
from qubitos.pulsegen.grape import GrapeOptimizer, GrapeConfig, GrapeResult, generate_pulse
from qubitos.pulsegen.hamiltonians import (
get_target_unitary,
parse_pauli_string,
build_hamiltonian,
PAULI_X,
PAULI_Y,
PAULI_Z,
)
# Set up plotting style
plt.style.use('seaborn-v0_8-whitegrid')
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['font.size'] = 12
Gradient Computation¶
The key insight of GRAPE is that we can analytically compute the gradient of the fidelity with respect to each control amplitude.
For the fidelity $F = \frac{|\chi|^2 + d}{d^2 + d}$ where $\chi = \text{Tr}(U_{\text{target}}^\dagger U)$:
$$\frac{\partial F}{\partial u_j^{(k)}} = \frac{2}{d^2 + d} \text{Re}\left(\chi^* \cdot \text{Tr}(U_{\text{target}}^\dagger Q_k \frac{\partial U_k}{\partial u_j} P_{k-1})\right)$$
where:
- $P_{k-1} = U_{k-1} \cdots U_1$ (forward propagator)
- $Q_k = U_N \cdots U_{k+1}$ (backward propagator)
- $\frac{\partial U_k}{\partial u_j} \approx -i \Delta t \cdot H_j \cdot U_k$ (exact to first order in $\Delta t$; exact whenever $H_j$ commutes with $H_k$)
# Let's visualize the forward and backward propagator concept
fig, ax = plt.subplots(1, 1, figsize=(12, 4))
# Draw timeline
n_steps = 8
for i in range(n_steps):
ax.add_patch(plt.Rectangle((i, 0.3), 0.9, 0.4, facecolor='lightblue', edgecolor='navy'))
ax.text(i + 0.45, 0.5, f'$U_{i+1}$', ha='center', va='center', fontsize=14)
# Forward propagator
ax.annotate('', xy=(3.5, 0.1), xytext=(0, 0.1),
arrowprops=dict(arrowstyle='->', color='green', lw=2))
ax.text(1.75, -0.05, r'$P_{k-1} = U_{k-1} \cdots U_1$', ha='center', fontsize=12, color='green')
# Current propagator
ax.add_patch(plt.Rectangle((4, 0.3), 0.9, 0.4, facecolor='orange', edgecolor='red', lw=2))
ax.text(4.45, 0.5, f'$U_k$', ha='center', va='center', fontsize=14, fontweight='bold')
ax.text(4.45, 0.85, 'Gradient\ncomputed here', ha='center', fontsize=10, color='red')
# Backward propagator
ax.annotate('', xy=(n_steps - 0.1, 0.1), xytext=(5, 0.1),
arrowprops=dict(arrowstyle='<-', color='purple', lw=2))
ax.text(6.5, -0.05, r'$Q_k = U_N \cdots U_{k+1}$', ha='center', fontsize=12, color='purple')
ax.set_xlim(-0.5, n_steps + 0.5)
ax.set_ylim(-0.3, 1.1)
ax.set_aspect('equal')
ax.axis('off')
ax.set_title('GRAPE Forward and Backward Propagators', fontsize=14, fontweight='bold')
plt.tight_layout()
plt.show()
Part 2: Configuration Deep Dive¶
The GrapeConfig class provides all the knobs for tuning the optimization. Let's explore each parameter in detail.
# Default configuration
default_config = GrapeConfig()
print("Default GrapeConfig parameters:")
print(f" num_time_steps: {default_config.num_time_steps}")
print(f" duration_ns: {default_config.duration_ns}")
print(f" target_fidelity: {default_config.target_fidelity}")
print(f" max_iterations: {default_config.max_iterations}")
print(f" learning_rate: {default_config.learning_rate}")
print(f" convergence_threshold: {default_config.convergence_threshold}")
print(f" max_amplitude: {default_config.max_amplitude} MHz")
print(f" use_second_order: {default_config.use_second_order}")
print(f" regularization: {default_config.regularization}")
print(f" random_seed: {default_config.random_seed}")
num_time_steps¶
The number of piecewise-constant segments in the pulse. This is a crucial parameter:
- Too few steps -> Limited control authority, may not reach target fidelity
- Too many steps -> More parameters to optimize, slower convergence, hardware bandwidth limits
A good rule of thumb: 50-200 steps for typical single-qubit gates.
# Compare different numbers of time steps
time_steps_list = [20, 50, 100, 200]
results_by_steps = {}
for n_steps in time_steps_list:
config = GrapeConfig(
num_time_steps=n_steps,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
result = generate_pulse("X", config=config)
results_by_steps[n_steps] = result
print(f"n_steps={n_steps:3d}: fidelity={result.fidelity:.6f}, iterations={result.iterations}")
# Visualize convergence for different time steps
fig, ax = plt.subplots(figsize=(10, 6))
for n_steps, result in results_by_steps.items():
ax.semilogy(1 - np.array(result.fidelity_history), label=f'{n_steps} steps')
ax.set_xlabel('Iteration')
ax.set_ylabel('Infidelity (1 - F)')
ax.set_title('Effect of Time Discretization on Convergence')
ax.axhline(y=1e-4, color='red', linestyle='--', alpha=0.5, label='Target: 99.99%')
ax.legend()
ax.set_ylim(1e-5, 1)
plt.tight_layout()
plt.show()
duration_ns¶
The total pulse duration in nanoseconds. This sets a fundamental trade-off:
- Shorter pulses -> faster gates, but higher drive amplitudes are required
- Longer pulses -> more room for optimization, but more decoherence accumulates
Typical values for superconducting qubits: 10-50 ns for single-qubit gates.
# Compare different pulse durations
durations = [5.0, 10.0, 20.0, 50.0]
results_by_duration = {}
for duration in durations:
config = GrapeConfig(
num_time_steps=100,
duration_ns=duration,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
result = generate_pulse("X", config=config)
results_by_duration[duration] = result
print(f"duration={duration:5.1f} ns: fidelity={result.fidelity:.6f}, max_amp={np.max(np.abs(result.i_envelope)):.1f} MHz")
# Visualize pulses for different durations
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
for ax, (duration, result) in zip(axes.flat, results_by_duration.items()):
t = np.linspace(0, duration, len(result.i_envelope))
ax.plot(t, result.i_envelope, 'b-', label='I', linewidth=1.5)
ax.plot(t, result.q_envelope, 'r-', label='Q', linewidth=1.5)
ax.set_xlabel('Time (ns)')
ax.set_ylabel('Amplitude (MHz)')
ax.set_title(f'Duration = {duration} ns (F = {result.fidelity:.4f})')
ax.legend(loc='upper right')
ax.axhline(y=0, color='gray', linestyle='-', alpha=0.3)
plt.suptitle('X Gate Pulses for Different Durations', fontsize=14, fontweight='bold')
plt.tight_layout()
plt.show()
target_fidelity¶
The optimization stops when this fidelity is achieved. Higher targets require more iterations.
Common targets:
- 0.99 (99%) - Quick prototyping
- 0.999 (99.9%) - Standard quality
- 0.9999 (99.99%) - High fidelity
- 0.99999 (99.999%) - Approaching fault-tolerance thresholds
# Compare iterations needed for different fidelity targets
fidelity_targets = [0.99, 0.999, 0.9999, 0.99999]
for target in fidelity_targets:
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=target,
max_iterations=2000,
random_seed=42,
)
result = generate_pulse("X", config=config)
status = "converged" if result.converged else "not converged"
print(f"target={target:.5f}: achieved={result.fidelity:.6f}, iterations={result.iterations:4d} {status}")
max_amplitude¶
The maximum allowed pulse amplitude in MHz. This constraint is important for:
- Hardware limits - AWGs and amplifiers have maximum output
- Linearity - Higher amplitudes may cause nonlinear effects
- Heating - Excessive power can heat the qubit
# Compare different amplitude limits
max_amps = [25.0, 50.0, 100.0, 200.0]
results_by_amp = {}
for max_amp in max_amps:
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
max_amplitude=max_amp,
random_seed=42,
)
result = generate_pulse("X", config=config)
results_by_amp[max_amp] = result
actual_max = max(np.max(np.abs(result.i_envelope)), np.max(np.abs(result.q_envelope)))
print(f"max_amp={max_amp:5.1f} MHz: fidelity={result.fidelity:.6f}, actual_max={actual_max:.1f} MHz")
learning_rate¶
The step size for gradient ascent. QubitOS uses an adaptive learning rate that adjusts based on progress:
- Base rate is scaled by ~100x to account for typical gradient magnitudes
- Rate decays over iterations (0.999^iteration)
- Rate increases if making consistent progress
- Rate decreases if oscillating
# Compare different learning rates
learning_rates = [0.1, 0.5, 1.0, 2.0, 5.0]
results_by_lr = {}
for lr in learning_rates:
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.999,
max_iterations=300,
learning_rate=lr,
random_seed=42,
)
result = generate_pulse("X", config=config)
results_by_lr[lr] = result
print(f"lr={lr:.1f}: fidelity={result.fidelity:.6f}, iterations={result.iterations}")
# Visualize convergence for different learning rates
fig, ax = plt.subplots(figsize=(10, 6))
for lr, result in results_by_lr.items():
ax.semilogy(1 - np.array(result.fidelity_history), label=f'lr={lr}')
ax.set_xlabel('Iteration')
ax.set_ylabel('Infidelity (1 - F)')
ax.set_title('Effect of Learning Rate on Convergence')
ax.legend()
ax.set_ylim(1e-4, 1)
plt.tight_layout()
plt.show()
Part 3: Convergence Analysis¶
Understanding how the optimization converges is crucial for debugging and tuning.
Fidelity History¶
The GrapeResult.fidelity_history attribute tracks fidelity at each iteration:
# Run optimization and capture detailed history
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
result = generate_pulse("X", config=config)
print(f"Optimization completed:")
print(f" Final fidelity: {result.fidelity:.6f}")
print(f" Iterations: {result.iterations}")
print(f" Converged: {result.converged}")
print(f" History length: {len(result.fidelity_history)}")
# Plot convergence in detail
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
# Linear scale
axes[0].plot(result.fidelity_history, 'b-', linewidth=1.5)
axes[0].axhline(y=config.target_fidelity, color='red', linestyle='--', label='Target')
axes[0].set_xlabel('Iteration')
axes[0].set_ylabel('Fidelity')
axes[0].set_title('Fidelity vs Iteration (Linear Scale)')
axes[0].legend()
axes[0].set_ylim(0.5, 1.01)
# Log scale (infidelity)
infidelity = 1 - np.array(result.fidelity_history)
axes[1].semilogy(infidelity, 'b-', linewidth=1.5)
axes[1].axhline(y=1 - config.target_fidelity, color='red', linestyle='--', label='Target')
axes[1].set_xlabel('Iteration')
axes[1].set_ylabel('Infidelity (1 - F)')
axes[1].set_title('Infidelity vs Iteration (Log Scale)')
axes[1].legend()
plt.tight_layout()
plt.show()
Convergence Rate Analysis¶
The rate of fidelity improvement tells us about the optimization landscape:
# Analyze convergence rate
fidelity_history = np.array(result.fidelity_history)
improvement = np.diff(fidelity_history)
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
# Improvement per iteration
axes[0].plot(improvement, 'g-', linewidth=0.8, alpha=0.7)
axes[0].axhline(y=0, color='gray', linestyle='-')
axes[0].set_xlabel('Iteration')
axes[0].set_ylabel('Fidelity Improvement')
axes[0].set_title('Fidelity Improvement per Iteration')
# Running average improvement
window = 20
running_avg = np.convolve(improvement, np.ones(window)/window, mode='valid')
axes[1].semilogy(running_avg[running_avg > 0], 'b-', linewidth=1.5)
axes[1].set_xlabel('Iteration')
axes[1].set_ylabel('Average Improvement (window=20)')
axes[1].set_title('Smoothed Convergence Rate')
plt.tight_layout()
plt.show()
Comparing Gates¶
Different gates have different optimization difficulty:
# Compare convergence for different gates
gates = ["X", "Y", "Z", "H", "SX"]
gate_results = {}
for gate in gates:
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
result = generate_pulse(gate, config=config)
gate_results[gate] = result
print(f"{gate:3s}: fidelity={result.fidelity:.6f}, iterations={result.iterations:3d}")
# Plot convergence comparison
fig, ax = plt.subplots(figsize=(10, 6))
for gate, result in gate_results.items():
ax.semilogy(1 - np.array(result.fidelity_history), label=gate)
ax.set_xlabel('Iteration')
ax.set_ylabel('Infidelity (1 - F)')
ax.set_title('Convergence Comparison: Single-Qubit Gates')
ax.legend()
ax.set_ylim(1e-5, 1)
plt.tight_layout()
plt.show()
Part 4: Performance Tuning¶
Strategy 1: Initial Pulse Choice¶
GRAPE can start from random pulses or user-provided initial guesses. Good initial guesses can dramatically speed up convergence.
# Compare random initialization vs. educated guess
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
# Random initialization (default)
optimizer = GrapeOptimizer(config)
target = get_target_unitary("X")
result_random = optimizer.optimize(target, num_qubits=1)
print(f"Random init: fidelity={result_random.fidelity:.6f}, iterations={result_random.iterations}")
# Educated guess: approximate X gate pulse (Gaussian envelope)
n_steps = config.num_time_steps
t = np.linspace(-3, 3, n_steps) # Normalized time
gaussian = np.exp(-t**2 / 2)
initial_i = 50 * gaussian # Scale to reasonable amplitude
initial_q = np.zeros(n_steps) # X gate primarily uses I quadrature
result_guess = optimizer.optimize(
target,
num_qubits=1,
initial_pulses=(initial_i, initial_q)
)
print(f"Gaussian init: fidelity={result_guess.fidelity:.6f}, iterations={result_guess.iterations}")
# Compare convergence
fig, ax = plt.subplots(figsize=(10, 6))
ax.semilogy(1 - np.array(result_random.fidelity_history), 'b-', label='Random init', linewidth=1.5)
ax.semilogy(1 - np.array(result_guess.fidelity_history), 'g-', label='Gaussian init', linewidth=1.5)
ax.set_xlabel('Iteration')
ax.set_ylabel('Infidelity (1 - F)')
ax.set_title('Effect of Initial Pulse on Convergence')
ax.legend()
ax.set_ylim(1e-5, 1)
plt.tight_layout()
plt.show()
Strategy 2: Two-Stage Optimization¶
First optimize with coarse time steps, then refine with finer steps:
from scipy.interpolate import interp1d
# Stage 1: Coarse optimization
config_coarse = GrapeConfig(
num_time_steps=25, # Fewer steps
duration_ns=20.0,
target_fidelity=0.99, # Lower target
max_iterations=200,
random_seed=42,
)
result_coarse = generate_pulse("X", config=config_coarse)
print(f"Stage 1 (coarse): fidelity={result_coarse.fidelity:.6f}, iterations={result_coarse.iterations}")
# Interpolate to finer grid
t_coarse = np.linspace(0, 1, len(result_coarse.i_envelope))
t_fine = np.linspace(0, 1, 100)
interp_i = interp1d(t_coarse, result_coarse.i_envelope, kind='cubic')
interp_q = interp1d(t_coarse, result_coarse.q_envelope, kind='cubic')
initial_i_fine = interp_i(t_fine)
initial_q_fine = interp_q(t_fine)
# Stage 2: Fine optimization
config_fine = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=300,
random_seed=42,
)
optimizer = GrapeOptimizer(config_fine)
target = get_target_unitary("X")
result_fine = optimizer.optimize(
target,
num_qubits=1,
initial_pulses=(initial_i_fine, initial_q_fine)
)
print(f"Stage 2 (fine): fidelity={result_fine.fidelity:.6f}, iterations={result_fine.iterations}")
print(f"Total iterations: {result_coarse.iterations + result_fine.iterations}")
Strategy 3: Using Callbacks¶
The callback parameter lets you monitor and control optimization in real-time:
# Track additional metrics with a callback
detailed_history = []
def tracking_callback(iteration: int, fidelity: float) -> bool:
"""Track optimization progress.
Args:
iteration: Current iteration number
fidelity: Current fidelity
Returns:
True to stop optimization, False to continue
"""
detailed_history.append({
'iteration': iteration,
'fidelity': fidelity,
})
# Print progress every 100 iterations
if iteration % 100 == 0:
print(f" Iteration {iteration:4d}: fidelity = {fidelity:.6f}")
# Early stopping if we plateau
if len(detailed_history) > 50:
recent = [h['fidelity'] for h in detailed_history[-50:]]
if max(recent) - min(recent) < 1e-7:
print(f" Early stopping at iteration {iteration} (plateau detected)")
return True # Stop
return False # Continue
# Run with callback
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.99999, # Very high target
max_iterations=1000,
random_seed=42,
)
optimizer = GrapeOptimizer(config)
target = get_target_unitary("X")
print("Running optimization with callback:")
result_callback = optimizer.optimize(target, num_qubits=1, callback=tracking_callback)
print(f"\nFinal: fidelity={result_callback.fidelity:.6f}, iterations={result_callback.iterations}")
Part 5: Advanced Topics¶
Regularization¶
Regularization adds a penalty term to encourage smoother pulses:
$$\mathcal{L} = F - \lambda \sum_k |u_k|^2$$
This helps when:
- Hardware has limited bandwidth
- You want to reduce power consumption
- Noisy gradients cause erratic pulses
# Compare regularization strengths
regularizations = [0.0, 0.001, 0.01, 0.1]
results_by_reg = {}
for reg in regularizations:
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.999,
max_iterations=500,
regularization=reg,
random_seed=42,
)
result = generate_pulse("X", config=config)
results_by_reg[reg] = result
# Compute pulse smoothness (total variation)
tv = np.sum(np.abs(np.diff(result.i_envelope))) + np.sum(np.abs(np.diff(result.q_envelope)))
print(f"reg={reg:.3f}: fidelity={result.fidelity:.6f}, total_variation={tv:.1f}")
# Visualize effect of regularization on pulse smoothness
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
for ax, (reg, result) in zip(axes.flat, results_by_reg.items()):
t = np.linspace(0, 20, len(result.i_envelope))
ax.plot(t, result.i_envelope, 'b-', label='I', linewidth=1.5)
ax.plot(t, result.q_envelope, 'r-', label='Q', linewidth=1.5)
ax.set_xlabel('Time (ns)')
ax.set_ylabel('Amplitude (MHz)')
ax.set_title(f'Regularization = {reg} (F = {result.fidelity:.4f})')
ax.legend(loc='upper right')
ax.axhline(y=0, color='gray', linestyle='-', alpha=0.3)
plt.suptitle('Effect of Regularization on Pulse Shape', fontsize=14, fontweight='bold')
plt.tight_layout()
plt.show()
Custom Drift Hamiltonian¶
Real qubits have always-on terms like detuning or coupling. You can include these:
# Optimization with a detuning term
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.999,
max_iterations=500,
random_seed=42,
)
optimizer = GrapeOptimizer(config)
target = get_target_unitary("X")
# No drift (ideal case)
result_no_drift = optimizer.optimize(target, num_qubits=1)
# With 10 MHz Z detuning
drift = 10.0 * PAULI_Z # 10 MHz detuning
result_with_drift = optimizer.optimize(target, num_qubits=1, drift_hamiltonian=drift)
print(f"Without drift: fidelity={result_no_drift.fidelity:.6f}")
print(f"With 10 MHz Z detuning: fidelity={result_with_drift.fidelity:.6f}")
# Compare pulses with and without drift
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
t = np.linspace(0, 20, 100)
axes[0].plot(t, result_no_drift.i_envelope, 'b-', label='I', linewidth=1.5)
axes[0].plot(t, result_no_drift.q_envelope, 'r-', label='Q', linewidth=1.5)
axes[0].set_xlabel('Time (ns)')
axes[0].set_ylabel('Amplitude (MHz)')
axes[0].set_title('No Drift (Ideal Qubit)')
axes[0].legend()
axes[1].plot(t, result_with_drift.i_envelope, 'b-', label='I', linewidth=1.5)
axes[1].plot(t, result_with_drift.q_envelope, 'r-', label='Q', linewidth=1.5)
axes[1].set_xlabel('Time (ns)')
axes[1].set_ylabel('Amplitude (MHz)')
axes[1].set_title('With 10 MHz Z Detuning')
axes[1].legend()
plt.suptitle('Effect of Drift Hamiltonian on Optimized Pulses', fontsize=14, fontweight='bold')
plt.tight_layout()
plt.show()
Two-Qubit Gates¶
GRAPE can also optimize two-qubit gates, though they require more iterations:
# Optimize a CZ gate
config = GrapeConfig(
num_time_steps=100,
duration_ns=50.0, # Longer duration for two-qubit gate
target_fidelity=0.99, # Lower target initially
max_iterations=1000,
random_seed=42,
)
print("Optimizing CZ gate (this may take a moment)...")
result_cz = generate_pulse("CZ", num_qubits=2, config=config)
print(f"CZ gate: fidelity={result_cz.fidelity:.6f}, iterations={result_cz.iterations}")
# Visualize CZ gate pulse and convergence
fig, axes = plt.subplots(1, 2, figsize=(14, 5))
t = np.linspace(0, 50, len(result_cz.i_envelope))
axes[0].plot(t, result_cz.i_envelope, 'b-', label='I', linewidth=1.5)
axes[0].plot(t, result_cz.q_envelope, 'r-', label='Q', linewidth=1.5)
axes[0].set_xlabel('Time (ns)')
axes[0].set_ylabel('Amplitude (MHz)')
axes[0].set_title('CZ Gate Pulse')
axes[0].legend()
axes[1].semilogy(1 - np.array(result_cz.fidelity_history), 'b-', linewidth=1.5)
axes[1].axhline(y=0.01, color='red', linestyle='--', label='Target (99%)')
axes[1].set_xlabel('Iteration')
axes[1].set_ylabel('Infidelity (1 - F)')
axes[1].set_title('CZ Gate Convergence')
axes[1].legend()
plt.tight_layout()
plt.show()
Verifying the Final Unitary¶
The GrapeResult.final_unitary attribute contains the achieved unitary matrix:
# Get X gate result and verify
config = GrapeConfig(
num_time_steps=100,
duration_ns=20.0,
target_fidelity=0.9999,
max_iterations=500,
random_seed=42,
)
result_x = generate_pulse("X", config=config)
# Compare achieved vs target
target_x = get_target_unitary("X")
achieved = result_x.final_unitary
print("Target X gate:")
print(np.round(target_x, 3))
print("\nAchieved unitary:")
print(np.round(achieved, 3))
print("\nDifference (absolute values):")
print(np.round(np.abs(achieved - target_x), 6))
# Visualize unitary matrices
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# Target
im0 = axes[0].imshow(np.abs(target_x), cmap='Blues', vmin=0, vmax=1)
axes[0].set_title('Target |U|')
plt.colorbar(im0, ax=axes[0])
# Achieved
im1 = axes[1].imshow(np.abs(achieved), cmap='Blues', vmin=0, vmax=1)
axes[1].set_title('Achieved |U|')
plt.colorbar(im1, ax=axes[1])
# Difference
diff = np.abs(achieved - target_x)
im2 = axes[2].imshow(diff, cmap='Reds', vmin=0, vmax=np.max(diff))
axes[2].set_title('|Difference|')
plt.colorbar(im2, ax=axes[2])
for ax in axes:
ax.set_xticks([0, 1])
ax.set_yticks([0, 1])
ax.set_xticklabels(['|0>', '|1>'])
ax.set_yticklabels(['<0|', '<1|'])
plt.suptitle('Unitary Matrix Comparison', fontsize=14, fontweight='bold')
plt.tight_layout()
plt.show()
Summary¶
In this notebook, we covered:
GRAPE Fundamentals
- Time discretization into piecewise-constant pulses
- Propagator formulation
- Analytical gradient computation
Configuration Parameters
- num_time_steps: Trade-off between control authority and optimization complexity
- duration_ns: Shorter = faster gates but higher amplitude
- target_fidelity: Higher targets need more iterations
- max_amplitude: Hardware constraint
- learning_rate: Adaptive scaling built-in
Convergence Analysis
- Fidelity history tracking
- Log-scale infidelity plots
- Gate comparison
Performance Tuning
- Good initial guesses
- Two-stage coarse-to-fine optimization
- Callbacks for monitoring and early stopping
Advanced Topics
- Regularization for smooth pulses
- Custom drift Hamiltonians
- Two-qubit gate optimization
- Unitary verification
Next Steps¶
- Custom Hamiltonians notebook: Build complex multi-qubit systems
- Hardware execution: Send pulses to real quantum hardware via HAL
- Calibration integration: Use measured qubit parameters
References¶
- Khaneja et al., "Optimal control of coupled spin dynamics", J. Magn. Reson. 172, 296-305 (2005)
- de Fouquieres et al., "Second order gradient ascent pulse engineering", J. Magn. Reson. 212, 412-417 (2011)
- Nielsen, M.A., "A simple formula for the average gate fidelity", Phys. Lett. A 303, 249-252 (2002)