* Add alpha_mask parameter and apply masked loss (see the sketch after this list)
* Fix type hint in trim_and_resize_if_required function
* Refactor code to use keyword arguments in train_util.py
* Fix alpha mask flipping logic
* Fix alpha mask initialization
* Fix alpha_mask transformation
* Cache alpha_mask
* Update alpha_masks to be on CPU
* Set flipped_alpha_masks to None if option disabled
* Check if alpha_mask is None
* Set alpha_mask to None if option disabled
* Add description of alpha_mask option to docs
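A minimal sketch of the masked-loss idea from the alpha_mask commits above, assuming the mask arrives as a `[B, 1, H, W]` float tensor in `[0, 1]`; `masked_mse_loss` and its argument names are illustrative, not the repo's exact API:

```python
import torch
import torch.nn.functional as F

def masked_mse_loss(model_pred, target, alpha_mask):
    # per-element loss so the mask can be applied before reduction
    loss = F.mse_loss(model_pred, target, reduction="none")  # [B, C, H, W]
    # resize the mask to the prediction's spatial resolution if needed
    mask = F.interpolate(
        alpha_mask, size=loss.shape[-2:], mode="bilinear", align_corners=False
    )
    loss = loss * mask  # zero out contributions from masked-away pixels
    # average over unmasked elements only (mask broadcasts across channels)
    denom = mask.sum() * loss.shape[1]
    return loss.sum() / denom.clamp(min=1e-6)
```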
This can be used to train away from a group of images you don't want.
As this moves the model away from a point instead of towards it, the change in the model is unbounded, so don't set it too low; -4e-7 seemed to work well (see the sketch below).
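A toy illustration of why the change is unbounded: with gradient descent the update is `theta -= lr * grad`, so a negative learning rate turns descent into ascent and the loss can grow without limit. The values and manual update here are only for demonstration (PyTorch optimizers reject `lr < 0`):

```python
import torch

lr = -4e-7  # negative learning rate: move *away* from the target
theta = torch.zeros(1, requires_grad=True)
target = torch.tensor([1.0])

for _ in range(3):
    loss = torch.nn.functional.mse_loss(theta, target)
    loss.backward()
    with torch.no_grad():
        theta -= lr * theta.grad  # with lr < 0 this *increases* the loss
    theta.grad.zero_()
```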
* add huber loss and huber_c compute to train_util
* add reduction modes
* add huber_c retrieval from timestep getter
* move get timesteps and huber to own function
* add conditional loss to all training scripts
* add cond loss to train network
* add (scheduled) huber_loss to args
* fix timesteps being fetched twice
* PHL-schedule should depend on noise scheduler's num timesteps
* add *2 multiplier to huber loss because of the 1/2 a^2 convention
The Taylor expansion of the pseudo-Huber loss near zero gives (1/2)a^2, which differs from the a^2 of the standard MSE loss; the multiplier scales them against one another (see the sketch after this list)
* add option for smooth l1 (huber / delta)
* unify huber scheduling
* add snr huber scheduler
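A minimal sketch of the scheduled pseudo-Huber loss these commits describe; the *2 factor restores MSE's scale near zero, and the exponential decay of huber_c over the noise scheduler's timesteps is shown as one plausible schedule. All names and defaults here are illustrative, not the exact implementation:

```python
import math
import torch

def pseudo_huber_loss(pred, target, huber_c):
    # 2*c^2*(sqrt(1 + (a/c)^2) - 1): the Taylor expansion near a=0 gives a^2,
    # matching MSE's scale; without the *2 it would be (1/2)a^2
    a = pred - target
    return 2 * huber_c**2 * (torch.sqrt(1 + (a / huber_c) ** 2) - 1)

def scheduled_huber_c(timestep, num_train_timesteps, base_huber_c=0.1):
    # decay huber_c from 1.0 at t=0 down to base_huber_c at t=T: smaller c
    # makes the loss more L1-like, larger c more MSE-like
    alpha = -math.log(base_huber_c) / num_train_timesteps
    return math.exp(-alpha * timestep)

# usage: loss = pseudo_huber_loss(pred, target, scheduled_huber_c(t, 1000))
```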
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
- we have to prepare the optimizer and ds_model at the same time (see the sketch below)
- pull/1139#issuecomment-1986790007
Signed-off-by: BootsofLagrangian <hard2251@yonsei.ac.kr>
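A sketch of the constraint noted above, assuming the Accelerate + DeepSpeed path: when DeepSpeed is enabled, the model and its optimizer must go through a single `accelerator.prepare()` call so the engine can bind them together. The tiny model here is only for illustration:

```python
import torch
from accelerate import Accelerator

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

accelerator = Accelerator()  # DeepSpeed enabled via `accelerate config`
# prepared together in one call, not piecewise, so DeepSpeed can wrap both
ds_model, optimizer = accelerator.prepare(model, optimizer)
```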
* Add get_my_logger()
* Use logger instead of print
* Fix log level
* Remove line breaks for readability
* Use setup_logging()
* Add rich to requirements.txt
* Simplify
* Use logger instead of print
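A minimal sketch of the logging setup these commits describe, assuming a `setup_logging()` helper that installs a `rich` handler once (names follow the commit messages; details are illustrative):

```python
import logging
from rich.logging import RichHandler

def setup_logging(level=logging.INFO):
    if logging.getLogger().handlers:
        return  # already configured; keep setup idempotent
    logging.basicConfig(
        level=level, format="%(message)s", handlers=[RichHandler()]
    )

setup_logging()
logger = logging.getLogger(__name__)
logger.info("use logger instead of print")
```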
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>