Optimization
High-level optimization interface and gradient computation.
Main Interface
DistributedEmitterOpt.optimize! — Function
optimize!(prob; kwargs...) -> (g_opt, p_opt)

Run topology optimization with beta-continuation.
Keyword arguments:
- max_iter – iterations per beta value (default 40)
- β_schedule – projection steepness values to sweep
- α_schedule – optional loss schedule (same length as β_schedule)
- use_constraints – enable linewidth constraints on the final beta epoch only
- tol – relative tolerance for convergence
- backup – enable autosaving (p, g_history) checkpoints
- backup_every – autosave interval (iterations)
- backup_path – optional checkpoint path (default joinpath(prob.root, "results_backup.jld2"))
- resume_from – optional checkpoint path to resume from
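A minimal usage sketch of the keyword interface. The problem value `prob` and the schedule values shown are illustrative assumptions; construct the problem as appropriate for your setup.

```julia
using DistributedEmitterOpt

# `prob` is assumed to be an already-constructed optimization problem
# (construction omitted; it depends on your geometry and sources).
g_opt, p_opt = optimize!(prob;
    max_iter        = 40,                 # iterations per beta value
    β_schedule      = [8.0, 16.0, 32.0],  # increasing projection steepness
    use_constraints = true,               # linewidth constraints, final beta epoch only
    backup          = true,               # autosave (p, g_history) checkpoints
    backup_every    = 10,                 # autosave interval in iterations
)
```

Passing `resume_from = backup_path` restarts the sweep from a saved checkpoint rather than from scratch.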
optimize!(prob::EigenOptimizationProblem; kwargs...) -> (g_opt, p_opt)

Run eigenvalue-based optimization with beta-continuation. Note: eigenvalue sensitivities are not yet implemented (TODO) and will error during gradient evaluation.
DistributedEmitterOpt.objective_and_gradient! — Function
objective_and_gradient!(grad, p, prob) -> Float64

Run a forward + adjoint pass, writing the gradient into grad and returning the loss. Main entry point for optimization.
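A sketch of a single gradient evaluation outside the optimization loop. `prob` and the parameter vector `p` are assumed to exist already; both are hypothetical here.

```julia
# Preallocate the gradient buffer, then run one forward + adjoint pass.
grad = similar(p)                              # same shape/eltype as p
loss = objective_and_gradient!(grad, p, prob)  # fills grad in place, returns loss
println("loss = ", loss)
```

Because the gradient is written in place, the same buffer can be reused across iterations when driving this function from a custom optimizer.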
Utilities
DistributedEmitterOpt.evaluate — Function
Single evaluation (no optimization loop).
DistributedEmitterOpt.test_gradient — Function
Finite-difference gradient check.
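To illustrate what a finite-difference check verifies, here is a generic, self-contained central-difference sketch. The helper `fd_check` and the stand-in objective `f`/gradient `g!` below are illustrative assumptions, not the package's own `test_gradient` implementation.

```julia
# Compare an analytic (adjoint) gradient against central differences.
# f(p) returns the loss; g!(grad, p) writes the analytic gradient into grad.
function fd_check(f, g!, p; δ=1e-6)
    grad = similar(p)
    g!(grad, p)                          # analytic gradient
    maxerr = 0.0
    for i in eachindex(p)
        p⁺ = copy(p); p⁺[i] += δ
        p⁻ = copy(p); p⁻[i] -= δ
        fd = (f(p⁺) - f(p⁻)) / (2δ)      # central difference in direction i
        maxerr = max(maxerr, abs(fd - grad[i]))
    end
    return maxerr
end

# Quadratic test case with known gradient 2p:
f(p) = sum(abs2, p)
g!(g, p) = (g .= 2 .* p)
fd_check(f, g!, [1.0, -2.0, 3.0])  # near zero when gradients agree
```

The same pattern applies to the adjoint gradient: wrap `objective_and_gradient!` so that `f` runs the forward pass only and `g!` extracts the adjoint gradient, then confirm the error is small relative to the gradient magnitude.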