sktopt.core.optimizers.loglagrangian
- class sktopt.core.optimizers.loglagrangian.LogLagrangian_Config(dst_path: str = './result/pytests', interpolation: Literal['SIMP', 'RAMP'] = 'SIMP', record_times: int = 20, max_iters: int = 200, p_init: float = 1.0, p: float = 3.0, p_step: int = -1, vol_frac_init: float = 0.8, vol_frac: float = 0.4, vol_frac_step: int = -3, beta_init: float = 1.0, beta: float = 2, beta_step: int = -1, beta_curvature: float = 2.0, beta_eta: float = 0.5, eta: float = 0.6, percentile_init: float = 60, percentile: float = -90, percentile_step: int = -1, filter_radius_init: float = 2.0, filter_radius: float = 1.2, filter_radius_step: int = -3, E0: float = 210000000000.0, E_min: float = 210000000.0, rho_min: float = 0.01, rho_max: float = 1.0, move_limit_init: float = 0.3, move_limit: float = 0.1, move_limit_step: int = -3, restart: bool = False, restart_from: int = -1, export_img: bool = False, export_img_opaque: bool = False, design_dirichlet: bool = False, lambda_lower: float = -10000000.0, lambda_upper: float = 10000000.0, sensitivity_filter: bool = False, solver_option: Literal['spsolve', 'cg_pyamg'] = 'spsolve', scaling: bool = False, mu_p: float = 5.0, lambda_v: float = 0.1, lambda_decay: float = 0.9)
Bases:
DensityMethodConfig
Configuration for Log-space Lagrangian Gradient Update method.
This class defines the configuration parameters for performing topology optimization via gradient-based updates in the logarithmic domain. Unlike traditional Optimality Criteria (OC) methods, this approach explicitly follows the gradient of the Lagrangian (compliance + volume penalty) and applies the update in log-space to ensure positive densities and multiplicative-like behavior.
- mu_p
Weighting factor applied to the volume constraint in the Lagrangian. This term scales the influence of the volume penalty in the descent direction.
- Type:
float
- lambda_v
Initial value for the Lagrange multiplier associated with the volume constraint. This is added directly to the compliance gradient to form the full Lagrangian gradient.
- Type:
float
- lambda_decay
Decay factor applied to lambda_v over iterations, allowing gradual tuning of constraint strength.
- Type:
float
- lambda_lower
Minimum allowed value for the Lagrange multiplier. Can be negative in this formulation, since lambda_v is added to the gradient rather than used as a ratio.
- Type:
float
- lambda_upper
Maximum allowed value for the Lagrange multiplier. Clamping helps avoid instability in the update steps due to large penalties.
- Type:
float
Notes
This method is sometimes referred to as ‘EUMOC’, but it is mathematically distinct from classical OC-based updates. It performs gradient descent on the Lagrangian in log(ρ)-space, leading to multiplicative behavior while maintaining gradient fidelity.
- lambda_decay: float = 0.9
- lambda_lower: float = -10000000.0
- lambda_upper: float = 10000000.0
- lambda_v: float = 0.1
- mu_p: float = 5.0
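As a worked illustration of the decay schedule described above, here is one plausible reading of how `lambda_v` evolves under the default values (the exact update rule inside the optimizer may differ; the variable names here are illustrative, not the library's API):

```python
# Sketch of the lambda_v decay schedule: each iteration the multiplier
# is scaled by lambda_decay and clamped to [lambda_lower, lambda_upper].
# This is an assumed reading of the config docs, not library code.
lambda_v = 0.1        # default lambda_v
lambda_decay = 0.9    # default lambda_decay
lambda_lower, lambda_upper = -1e7, 1e7

history = []
for _ in range(3):
    history.append(lambda_v)
    lambda_v = min(max(lambda_v * lambda_decay, lambda_lower), lambda_upper)
```

With the defaults, the multiplier shrinks geometrically (0.1, 0.09, 0.081, ...), gradually weakening the volume-constraint term in the Lagrangian gradient.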
- class sktopt.core.optimizers.loglagrangian.LogLagrangian_Optimizer(cfg: LogLagrangian_Config, tsk: TaskConfig)
Bases:
DensityMethod
Topology optimization solver using log-space Lagrangian gradient descent.
This optimizer performs sensitivity-based topology optimization by applying gradient descent on the Lagrangian (compliance + volume penalty) in log(ρ)-space. Unlike traditional Optimality Criteria (OC) methods, this method adds the volume constraint term (λ) directly to the compliance gradient before updating.
By performing updates in log-space, the method ensures strictly positive densities and exhibits multiplicative behavior in the original density space, which can enhance numerical stability—particularly for problems with low volume fractions or steep gradients.
In each iteration, the update follows:
log(ρ_new) = log(ρ) - η · (∂C/∂ρ + λ)
which is equivalent to:
ρ_new = ρ · exp( -η · (∂C/∂ρ + λ) )
Here:
- ∂C/∂ρ is the compliance sensitivity,
- λ is the Lagrange multiplier (volume penalty weight),
- η is a step size parameter.
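The equivalence of the two forms above can be checked numerically. This sketch uses our own variable names (not the library's API) to apply one step in log-space and verify that it matches the multiplicative form in density space:

```python
import numpy as np

# One log-space Lagrangian step (illustrative names, not library code):
#   log(rho_new) = log(rho) - eta * (dC + lam)
rho = np.array([0.3, 0.5, 0.8])    # current densities
dC = np.array([-2.0, -1.0, -0.5])  # compliance sensitivity (typically negative)
lam = 0.1                          # Lagrange multiplier
eta = 0.3                          # step size

log_step = np.log(rho) - eta * (dC + lam)
rho_new = np.exp(log_step)

# Equivalent multiplicative form: rho_new = rho * exp(-eta * (dC + lam))
rho_new_mult = rho * np.exp(-eta * (dC + lam))
assert np.allclose(rho_new, rho_new_mult)

# exp(.) keeps densities strictly positive; clamp to [rho_min, rho_max] after
rho_new = np.clip(rho_new, 0.01, 1.0)
```

Because the update multiplies by a positive factor, a negative sensitivity (adding material reduces compliance) increases the density, and ρ can never cross zero.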
- config
Configuration object specifying optimization parameters such as mu_p, lambda_v, decay schedules, and filtering strategies.
- Type:
LogLagrangian_Config
- mesh, basis, etc.
Core finite element components required for stiffness evaluation, boundary conditions, and optimization loop execution.
- Type:
inherited from common.DensityMethod
Notes
Although previously referred to as ‘EUMOC’, this method is not derived from the traditional Optimality Criteria framework. Instead, it implements log-space gradient descent on the Lagrangian, directly adding the volume penalty to the sensitivity.
- rho_update(iter_loop: int, rho_candidate: ndarray, rho_projected: ndarray, dC_drho_ave: ndarray, u_dofs: ndarray, strain_energy_ave: ndarray, scaling_rate: ndarray, move_limit: float, eta: float, beta: float, tmp_lower: ndarray, tmp_upper: ndarray, percentile: float, elements_volume_design: ndarray, elements_volume_design_sum: float, vol_frac: float)
- sktopt.core.optimizers.loglagrangian.kkt_log_update(rho, dC, lambda_v, scaling_rate, eta, move_limit, tmp_lower, tmp_upper, rho_min: float, rho_max: float, percentile: float, interpolation: str)
In-place version of the modified OC update (log-space), computing dL = dC + lambda_v inside the function.
Parameters:
- rho: np.ndarray, design variables (updated in-place)
- dC: np.ndarray, compliance sensitivity (usually negative)
- lambda_v: float, Lagrange multiplier for the volume constraint
- scaling_rate, tmp_lower, tmp_upper: np.ndarray, work arrays (same shape as rho)
- eta: float, learning rate
- move_limit: float, maximum allowed change per iteration
- rho_min: float, minimum density
- rho_max: float, maximum density
- percentile: float, percentile used when scaling the sensitivity
- interpolation: str, interpolation scheme ('SIMP' or 'RAMP')
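A simplified, self-contained sketch of this update follows. It omits the percentile scaling and SIMP/RAMP interpolation handled by the real function, returns a new array rather than updating in place, and uses our own function name, so it is an illustration of the technique rather than the library implementation:

```python
import numpy as np

def kkt_log_update_sketch(rho, dC, lambda_v, eta, move_limit,
                          rho_min=0.01, rho_max=1.0):
    """Log-space Lagrangian step with a per-iteration move limit (sketch)."""
    dL = dC + lambda_v                 # Lagrangian gradient: dC + lambda_v
    rho_new = rho * np.exp(-eta * dL)  # multiplicative update in density space
    # Clamp each element to within move_limit of its old value,
    # and to the global density bounds [rho_min, rho_max].
    lower = np.maximum(rho - move_limit, rho_min)
    upper = np.minimum(rho + move_limit, rho_max)
    return np.clip(rho_new, lower, upper)

rho = np.array([0.2, 0.5, 0.9])
dC = np.array([-3.0, -0.2, 0.4])
rho_new = kkt_log_update_sketch(rho, dC, lambda_v=0.1, eta=0.5, move_limit=0.1)
```

The move limit caps how far any element can travel per iteration, which tames the exponential factor when the sensitivity is large.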