kabachuha
90b18795fc
Add option to use Scheduled Huber Loss in all training pipelines to improve resilience to data corruption (#1228)
...
* add huber loss and huber_c compute to train_util
* add reduction modes
* add huber_c retrieval from timestep getter
* move get timesteps and huber to own function
* add conditional loss to all training scripts
* add cond loss to train network
* add (scheduled) huber_loss to args
* fix up duplicated timestep getting
* PHL-schedule should depend on noise scheduler's num timesteps
* add *2 multiplier to Huber loss because of the 1/2 a^2 convention.
The Taylor expansion of sqrt near zero gives 1/2 a^2, which differs from the a^2 of the standard MSE loss. This change scales the two losses more consistently against each other.
* add option for smooth l1 (huber / delta)
* unify huber scheduling
* add snr huber scheduler
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2024-04-07 13:54:21 +09:00
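The scaling argument in the commit body above can be sketched as follows. This is a minimal scalar illustration, not the PR's actual implementation; the function name is hypothetical, and the tensor-based code in the repository may differ in detail:

```python
import math

def pseudo_huber_loss(diff: float, huber_c: float) -> float:
    """Pseudo-Huber loss with the *2 multiplier described above.

    The unscaled pseudo-Huber, sqrt(diff**2 + c**2) - c, Taylor-expands to
    diff**2 / (2*c) near zero, i.e. half the quadratic term of MSE.
    Multiplying by 2*c restores the diff**2 behavior of MSE for small
    errors while remaining approximately linear for large ones.
    """
    return 2.0 * huber_c * (math.sqrt(diff * diff + huber_c * huber_c) - huber_c)
```

For small `diff` this tracks `diff**2` (matching MSE), and for large `diff` it grows like `2 * huber_c * |diff|`, which is what gives Huber-style losses their robustness to corrupted samples.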
ykume
cd587ce62c
verify command line args if wandb is enabled
2024-04-05 08:23:03 +09:00
Kohya S
a2b8531627
make each script consistent, fix to work w/o DeepSpeed
2024-03-25 22:28:46 +09:00
Kohya S
fbb98f144e
Merge branch 'dev' into deep-speed
2024-03-20 18:15:26 +09:00
gesen2egee
095b8035e6
save state on train end
2024-03-10 23:33:38 +08:00
Kohya S
e3ccf8fbf7
make deepspeed_utils
2024-02-27 21:30:46 +09:00
Kohya S
eefb3cc1e7
Merge branch 'deep-speed' into deepspeed
2024-02-27 18:57:42 +09:00
Kohya S
f4132018c5
fix to work with cpu_count() == 1; closes #1134
2024-02-24 19:25:31 +09:00
BootsofLagrangian
4d5186d1cf
refactored code; some functions moved into train_utils.py
2024-02-22 16:20:53 +09:00
Kohya S
358ca205a3
Merge branch 'dev' into dev_device_support
2024-02-12 13:01:54 +09:00
Kohya S
e24d9606a2
add clean_memory_on_device and use it from training
2024-02-12 11:10:52 +09:00
BootsofLagrangian
03f0816f86
found the reason grad accum steps were not working: it was because of my accelerate settings
2024-02-09 17:47:49 +09:00
Kohya S
055f02e1e1
add logging args for training scripts
2024-02-08 21:16:42 +09:00
BootsofLagrangian
62556619bd
fix full_fp16 compatibility and train_step
2024-02-07 16:42:05 +09:00
BootsofLagrangian
7d2a9268b9
make the offloading method runnable for all trainers
2024-02-05 22:42:06 +09:00
BootsofLagrangian
4295f91dcd
fix VAE handling in all trainers
2024-02-05 20:19:56 +09:00
Kohya S
efd3b58973
Add logging arguments and update logging setup
2024-02-04 20:44:10 +09:00
Yuta Hayashibe
5f6bf29e52
Replace print with logger where the output is a log (#905)
...
* Add get_my_logger()
* Use logger instead of print
* Fix log level
* Removed line-breaks for readability
* Use setup_logging()
* Add rich to requirements.txt
* Make simple
* Use logger instead of print
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2024-02-04 18:14:34 +09:00
BootsofLagrangian
dfe08f395f
support deepspeed
2024-02-04 03:12:42 +09:00
Disty0
a6a2b5a867
Fix IPEX support and add XPU device to device_utils
2024-01-31 17:32:37 +03:00
Aarni Koskela
afc38707d5
Refactor memory cleaning into a single function
2024-01-23 14:28:50 +02:00
Aarni Koskela
6f3f701d3d
Deduplicate ipex initialization code
2024-01-19 18:07:36 +02:00
Kohya S
32b759a328
Add wandb_run_name parameter to init_kwargs #1032
2024-01-14 22:02:03 +09:00
Kohya S
912dca8f65
fix duplicated sample generation every epoch (ref #907)
2023-12-07 22:13:38 +09:00
Isotr0py
db84530074
Fix gradients synchronization for multi-GPUs training (#989)
...
* delete DDP wrapper
* fix train_db vae and train_network
* fix train_db vae and train_network unwrap
* network grad sync
---------
Co-authored-by: Kohya S <52813779+kohya-ss@users.noreply.github.com>
2023-12-07 22:01:42 +09:00
Kohya S
383b4a2c3e
Merge pull request #907 from shirayu/add_option_sample_at_first
...
Add option --sample_at_first
2023-12-03 21:00:32 +09:00
feffy380
6b3148fd3f
Fix min-snr-gamma for v-prediction and ZSNR.
...
This fixes min-SNR for v-pred + ZSNR by dividing directly by SNR+1.
The old implementation did it in two steps, (min-snr/snr) * (snr/(snr+1)), which causes division by zero when combined with --zero_terminal_snr.
2023-11-07 23:02:25 +01:00
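The fix above can be sketched in scalar form. The function names are hypothetical and the real code operates on timestep tensors, but the arithmetic of the old and fixed weightings is as the commit describes:

```python
def min_snr_weight_vpred(snr: float, gamma: float) -> float:
    # Fixed form: divide min(snr, gamma) by (snr + 1) directly.
    # Finite even when snr == 0, which occurs at the terminal timestep
    # under --zero_terminal_snr.
    return min(snr, gamma) / (snr + 1.0)

def min_snr_weight_vpred_old(snr: float, gamma: float) -> float:
    # Old two-step form: (min(snr, gamma) / snr) * (snr / (snr + 1)).
    # Mathematically equal for snr > 0, but divides by zero at snr == 0.
    return (min(snr, gamma) / snr) * (snr / (snr + 1.0))
```

Both forms agree for any positive SNR; only at SNR = 0 does the two-step version fail, which is why the fix only matters once zero-terminal-SNR rescaling is enabled.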
Kohya S
6231aa91e2
common lr logging, set default None to ddp_timeout
2023-11-05 19:09:17 +09:00
Yuta Hayashibe
2c731418ad
Added sample_images() for --sample_at_first
2023-10-29 22:08:42 +09:00
Kohya S
96d877be90
support separate LR for Text Encoder for SD1/2
2023-10-29 21:30:32 +09:00
Kohya S
9d6a5a0c79
Merge pull request #899 from shirayu/use_moving_average
...
Show moving average loss in the progress bar
2023-10-29 14:37:58 +09:00
Yuta Hayashibe
63992b81c8
Fix initialize place of loss_recorder
2023-10-27 21:13:29 +09:00
Yuta Hayashibe
0d21925bdf
Use @property
2023-10-27 18:14:27 +09:00
Yuta Hayashibe
efef5c8ead
Show "avr_loss" instead of "loss" because it is moving average
2023-10-27 17:59:58 +09:00
Yuta Hayashibe
3d2bb1a8f1
Add LossRecorder and use moving average in all places
2023-10-27 17:49:49 +09:00
青龍聖者@bdsqlsz
202f2c3292
Debias Estimation loss (#889)
...
* update for bnb 0.41.1
* fixed generate_controlnet_subsets_config for training
* Revert "update for bnb 0.41.1"
This reverts commit 70bd3612d8.
* add debiased_estimation_loss
* add train_network
* Revert "add train_network"
This reverts commit 6539363c5c.
* Update train_network.py
2023-10-23 22:59:14 +09:00
Yuta Hayashibe
27f9b6ffeb
updated the typos tool to v1.16.15 and fixed typos
2023-10-01 21:51:24 +09:00
Disty0
b64389c8a9
Intel ARC support with IPEX
2023-09-19 18:05:05 +03:00
Kohya S
73a08c0be0
Merge pull request #630 from ddPn08/sdxl
...
make tracker init_kwargs configurable
2023-07-20 22:05:55 +09:00
Kohya S
6d2d8dfd2f
add zero_terminal_snr option
2023-07-18 23:17:23 +09:00
ddPn08
b841dd78fe
make tracker init_kwargs configurable
2023-07-11 10:21:45 +09:00
Kohya S
ea182461d3
add min/max_timestep
2023-07-03 20:44:42 +09:00
Kohya S
bfd909ab79
Merge branch 'main' into original-u-net
2023-06-24 08:49:07 +09:00
Kohya S
0cfcb5a49c
fix lr/d*lr not being logged with Prodigy in finetune
2023-06-24 08:36:09 +09:00
Kohya S
5114e8daf1
fix training scripts other than ControlNet not working
2023-06-22 08:46:53 +09:00
Kohya S
92e50133f8
Merge branch 'original-u-net' into dev
2023-06-17 21:57:08 +09:00
Kohya S
19dfa24abb
Merge branch 'main' into original-u-net
2023-06-16 20:59:34 +09:00
青龍聖者@bdsqlsz
e97d67a681
Support for Prodigy (DAdapt variant for DyLoRA) (#585)
...
* Update train_util.py for DAdaptLion
* Update train_README-zh.md for dadaptlion
* Update train_README-ja.md for DAdaptLion
* add DAdapt V3
* Alignment
* Update train_util.py for experimental
* Update train_util.py V3
* Update train_README-zh.md
* Update train_README-ja.md
* Update train_util.py fix
* Update train_util.py
* support Prodigy
* add lower
2023-06-15 21:12:53 +09:00
Kohya S
9806b00f74
add arbitrary dataset feature to each script
2023-06-15 20:39:39 +09:00
ykume
9e1683cf2b
support sdpa
2023-06-11 21:26:15 +09:00