Optimization
High-level optimization interface and gradient computation.
Main Interface
DistributedEmitterOpt.optimize! — Function
optimize!(prob; kwargs...) -> (g_opt, p_opt)

Run topology optimization with beta-continuation.
Keyword arguments:
max_iter – iterations per beta value (default 40)
β_schedule – projection steepness values to sweep
α_schedule – optional loss schedule (same length as β_schedule)
use_constraints – enable linewidth constraints
tol – relative tolerance for convergence
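A typical call might look like the sketch below. The construction of `prob` is assumed to happen elsewhere, and the schedule values and tolerances are illustrative examples, not documented defaults:

```julia
# Illustrative usage; `prob` is assumed to be an already-constructed
# optimization problem, and the values below are examples only.
β_schedule = [8.0, 16.0, 32.0, 64.0]    # projection steepness sweep

g_opt, p_opt = optimize!(prob;
    max_iter        = 40,      # iterations per beta value
    β_schedule      = β_schedule,
    use_constraints = true,    # enable linewidth constraints
    tol             = 1e-5,    # relative convergence tolerance
)
```

Each entry of `β_schedule` sharpens the density projection; the optimizer runs up to `max_iter` iterations at each beta before moving to the next.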
optimize!(prob::EigenOptimizationProblem; kwargs...) -> (g_opt, p_opt)

Run eigenvalue-based optimization with beta-continuation. Note: eigen sensitivities are TODO and will error during gradient evaluation.
DistributedEmitterOpt.objective_and_gradient! — Function
objective_and_gradient!(grad, p, prob) -> Float64

Compute the objective via a forward solve followed by an adjoint pass, writing the gradient into grad in place and returning the objective value. Main entry point for optimization.
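Because the gradient is written in place, this function slots directly into a custom optimizer loop. A minimal sketch, assuming `prob` and an initial parameter vector `p` exist (the step size, iteration count, and clamping are illustrative, not part of the documented API):

```julia
# Minimal sketch of driving objective_and_gradient! from a hand-rolled
# gradient-descent loop. `prob` and `p` are assumed to exist; the step
# size and iteration count are illustrative.
grad = similar(p)
η = 1e-2                                         # step size (illustrative)
for k in 1:100
    f = objective_and_gradient!(grad, p, prob)   # forward + adjoint pass
    @. p = clamp(p - η * grad, 0, 1)             # descend, keep densities in [0, 1]
end
```

In practice `optimize!` wraps this call in its beta-continuation loop; calling it directly is mainly useful for plugging into an external optimizer.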
Utilities
DistributedEmitterOpt.evaluate — Function
Run a single forward evaluation of the objective, without entering the optimization loop.
DistributedEmitterOpt.test_gradient — Function
Check the adjoint gradient against a finite-difference approximation.
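The idea behind such a check can be sketched self-contained: perturb each parameter, form a central difference, and compare against the analytic gradient. The toy objective, tolerance, and structure below are illustrative and do not reflect the actual implementation of `test_gradient`:

```julia
# Self-contained sketch of a central finite-difference gradient check,
# the kind of comparison test_gradient performs. The toy objective and
# tolerance are illustrative.
f(p)  = sum(abs2, p)        # toy objective with known gradient 2p
∇f(p) = 2 .* p

p = [0.3, 0.7, 0.5]
ϵ = 1e-6
fd = map(eachindex(p)) do i
    e = zeros(length(p)); e[i] = ϵ
    (f(p .+ e) - f(p .- e)) / (2ϵ)   # central difference in direction i
end

relerr = maximum(abs.(fd .- ∇f(p)) ./ abs.(∇f(p)))
@assert relerr < 1e-6       # analytic gradient agrees with finite differences
```

For an adjoint-based gradient like the one computed by `objective_and_gradient!`, this kind of test is the standard way to catch sign errors and missing terms before running a full optimization.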