mmagic.engine.schedulers

Package Contents

Classes

LinearLrInterval | Linear learning rate scheduler for image generation.
ReduceLR | Decays the learning rate of each parameter group by linearly changing a small multiplicative factor.
- class mmagic.engine.schedulers.LinearLrInterval(*args, interval=1, **kwargs)[source]

Bases: mmengine.optim.LinearLR

Linear learning rate scheduler for image generation.

Initially, the learning rate equals start_factor as defined in mmengine. Given a target learning rate end_factor and a start point begin, the scheduler keeps the learning rate fixed at start_factor before begin, then linearly updates it toward end_factor. If self.by_epoch is True, begin is counted in epochs; otherwise it is counted in iterations.

- Parameters
interval (int) – The interval to update the learning rate. Default: 1.
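As a sketch of how this scheduler might be wired into an mmengine-style config (the key names follow mmengine's param_scheduler convention; the numeric values below are illustrative assumptions, not taken from this page):

```python
# Illustrative mmengine-style param_scheduler config (values are assumptions).
# LinearLrInterval behaves like mmengine's LinearLR, but only applies an lr
# update once every `interval` steps.
param_scheduler = dict(
    type='LinearLrInterval',
    interval=400,        # update the lr once every 400 iterations
    by_epoch=False,      # count `begin`/`end` in iterations, not epochs
    start_factor=1.0,    # keep the initial lr until `begin`
    end_factor=0.0,      # linearly decay toward 0 afterwards
    begin=100000,        # lr is fixed at start_factor before this step
    end=200000,          # stop updating at this step
)
```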
- class mmagic.engine.schedulers.ReduceLR(optimizer, mode: str = 'min', factor: float = 0.1, patience: int = 10, threshold: float = 0.0001, threshold_mode: str = 'rel', cooldown: int = 0, min_lr: float = 0.0, eps: float = 1e-08, **kwargs)[source]

Bases: mmengine.optim._ParamScheduler

Decays the learning rate of each parameter group by linearly changing a small multiplicative factor until the number of epochs reaches a pre-defined milestone: end. In practice, as the parameters below indicate, the reduction is triggered when a monitored quantity stops improving, in the reduce-on-plateau style. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.
Note

- The learning rate of each parameter group will be updated at regular intervals.
- Parameters
optimizer (Optimizer or OptimWrapper) – Wrapped optimizer.
mode (str, optional) – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: ‘min’.
factor (float, optional) – Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.
patience (int, optional) – Number of epochs with no improvement after which the learning rate will be reduced. For example, if patience = 2, the first 2 epochs with no improvement are ignored, and the LR is only decreased after the 3rd epoch if the loss still has not improved. Default: 10.
threshold (float, optional) – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
threshold_mode (str, optional) – One of rel, abs. In rel mode, dynamic_threshold = best * (1 + threshold) in max mode or best * (1 - threshold) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: 'rel'.
cooldown (int, optional) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
min_lr (float, optional) – Minimum LR value to keep. If LR after decay is lower than min_lr, it will be clipped to this value. Default: 0.
eps (float, optional) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
begin (int) – Step at which to start updating the learning rate. Defaults to 0.
end (int) – Step at which to stop updating the learning rate.
last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.
by_epoch (bool) – Whether the scheduled learning rate is updated by epochs. Defaults to True.
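As a quick worked example of the threshold arithmetic described above (the numbers are chosen arbitrarily for illustration): in min mode with best = 0.50 and threshold = 1e-2, rel mode counts an improvement only when the metric drops below 0.50 * (1 - 0.01) = 0.495, while abs mode requires it to drop below 0.50 - 0.01 = 0.49.

```python
# Dynamic improvement thresholds in 'min' mode, per the docs above.
best, threshold = 0.50, 1e-2

rel_bound = best * (1 - threshold)  # 'rel' mode: metric must fall below 0.495
abs_bound = best - threshold        # 'abs' mode: metric must fall below 0.49
```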
- property in_cooldown

Whether the scheduler is currently in a cooldown period, i.e. cooldown epochs still remain after the last learning-rate reduction.
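To make the interaction of factor, patience, cooldown, and threshold concrete, here is a minimal, self-contained sketch of the reduce-on-plateau logic these parameters describe. This is a simplified illustration, not mmagic's actual implementation: a plain float stands in for the optimizer's learning rate, and only min mode is handled.

```python
class PlateauSketch:
    """Simplified reduce-on-plateau logic mirroring ReduceLR's parameters."""

    def __init__(self, lr, factor=0.1, patience=10, threshold=1e-4,
                 threshold_mode='rel', cooldown=0, min_lr=0.0, eps=1e-8):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.threshold = threshold
        self.threshold_mode = threshold_mode
        self.cooldown = cooldown
        self.min_lr = min_lr
        self.eps = eps
        self.best = float('inf')   # 'min' mode: a lower metric is better
        self.num_bad_epochs = 0
        self.cooldown_counter = 0

    @property
    def in_cooldown(self):
        # True while we still wait after a reduction before counting again.
        return self.cooldown_counter > 0

    def is_better(self, metric):
        if self.threshold_mode == 'rel':
            # 'rel' mode: must beat the best value by a relative margin.
            return metric < self.best * (1 - self.threshold)
        # 'abs' mode: must beat the best value by an absolute margin.
        return metric < self.best - self.threshold

    def step(self, metric):
        if self.is_better(metric):
            self.best = metric
            self.num_bad_epochs = 0
        elif self.in_cooldown:
            self.cooldown_counter -= 1
            self.num_bad_epochs = 0  # epochs in cooldown are ignored
        else:
            self.num_bad_epochs += 1

        if self.num_bad_epochs > self.patience:
            new_lr = max(self.lr * self.factor, self.min_lr)
            if self.lr - new_lr > self.eps:  # skip negligible updates
                self.lr = new_lr
            self.cooldown_counter = self.cooldown
            self.num_bad_epochs = 0
        return self.lr
```

For example, with patience=2 the learning rate is multiplied by factor only after a third consecutive epoch without improvement, and with a nonzero cooldown the bad-epoch counter stays frozen for that many epochs after each reduction.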