60a76ebb72 | 2025-02-16 01:06:34 -05:00 | rockerBOO | Add caching gemma2, add gradient checkpointing, refactor lumina model code
7323ee1b9d | 2025-02-15 17:10:34 +08:00 | sdbds | update lora_lumina
e54462a4a9 | 2024-11-07 09:54:12 +00:00 | Dango233 | Fix SD3 trained lora loading and merging
b502f58488 | 2024-10-29 23:29:50 +09:00 | kohya-ss | Fix emb_dim to work.
ce5b532582 | 2024-10-29 22:29:24 +09:00 | kohya-ss | Fix additional LoRA to work
0af4edd8a6 | 2024-10-29 21:51:56 +09:00 | kohya-ss | Fix split_qkv
b649bbf2b6 | 2024-10-27 10:20:35 +09:00 | Kohya S | Merge branch 'sd3' into sd3_5_support
731664b8c3 | 2024-10-27 10:20:14 +09:00 | Kohya S | Merge branch 'dev' into sd3
e070bd9973 | 2024-10-27 10:19:55 +09:00 | Kohya S | Merge branch 'main' into dev
ca44e3e447 | 2024-10-27 10:19:05 +09:00 | Kohya S | reduce VRAM usage, instead of increasing main RAM usage
150579db32 | 2024-10-26 22:03:41 +09:00 | Kohya S | Merge branch 'sd3' into sd3_5_support
8549669f89 | 2024-10-26 22:02:57 +09:00 | Kohya S | Merge branch 'dev' into sd3
900d551a6a | 2024-10-26 22:02:36 +09:00 | Kohya S | Merge branch 'main' into dev
56b4ea963e | 2024-10-26 22:01:10 +09:00 | Kohya S | Fix LoRA metadata hash calculation bug in svd_merge_lora.py, sdxl_merge_lora.py, and resize_lora.py closes #1722
d2c549d7b2 | 2024-10-25 21:58:31 +09:00 | kohya-ss | support SD3 LoRA
822fe57859 | 2024-09-28 20:57:27 +09:00 | Kohya S | add workaround for 'Some tensors share memory' error #1614
706a48d50e | 2024-09-19 21:15:39 +09:00 | Kohya S | Merge branch 'dev' into sd3
d7e14721e2 | 2024-09-19 21:15:19 +09:00 | Kohya S | Merge branch 'main' into dev
9c757c2fba | 2024-09-19 21:14:57 +09:00 | Kohya S | fix SDXL block index to match LBW
0cbe95bcc7 | 2024-09-17 21:21:28 +09:00 | Kohya S | fix text_encoder_lr to work with int closes #1608
d8d15f1a7e | 2024-09-16 23:14:09 +09:00 | Kohya S | add support for specifying blocks in FLUX.1 LoRA training
c9ff4de905 | 2024-09-14 22:17:52 +09:00 | Kohya S | Add support for specifying rank for each layer in FLUX.1
2d8ee3c280 | 2024-09-14 15:48:16 +09:00 | Kohya S | OFT for FLUX.1
0485f236a0 | 2024-09-13 22:39:24 +09:00 | Kohya S | Merge branch 'dev' into sd3
93d9fbf607 | 2024-09-13 22:37:11 +09:00 | Kohya S | improve OFT implementation closes #944
c15a3a1a65 | 2024-09-13 21:30:49 +09:00 | Kohya S | Merge branch 'dev' into sd3
43ad73860d | 2024-09-13 21:29:51 +09:00 | Kohya S | Merge branch 'main' into dev
b755ebd0a4 | 2024-09-13 21:29:31 +09:00 | Kohya S | add LBW support for SDXL merge LoRA
f4a0bea6dc | 2024-09-13 21:26:06 +09:00 | Kohya S | format by black
734d2e5b2b | 2024-09-13 20:45:35 +09:00 | terracottahaniwa | Support Lora Block Weight (LBW) to svd_merge_lora.py (#1575)
    * support lora block weight
    * solve license incompatibility
    * Fix issue: lbw index calculation
f3ce80ef8f | 2024-09-13 19:49:16 +09:00 | Kohya S | Merge branch 'dev' into sd3
9d2860760d | 2024-09-13 19:48:53 +09:00 | Kohya S | Merge branch 'main' into dev
3387dc7306 | 2024-09-13 19:45:42 +09:00 | Kohya S | formatting, update README
57ae44eb61 | 2024-09-13 19:45:00 +09:00 | Kohya S | refactor to make safer
1d7118a622 | 2024-09-13 19:01:36 +09:00 | Maru-mee | Support : OFT merge to base model (#1580)
    * Support : OFT merge to base model
    * Fix typo
    * Fix typo_2
    * Delete unused parameter 'eye'
cefe52629e | 2024-09-12 12:36:07 +09:00 | Kohya S | fix to work old notation for TE LR in .toml
d10ff62a78 | 2024-09-10 20:32:09 +09:00 | Kohya S | support individual LR for CLIP-L/T5XXL
90ed2dfb52 | 2024-09-05 08:39:29 +09:00 | Kohya S | feat: Add support for merging CLIP-L and T5XXL LoRA models
56cb2fc885 | 2024-09-04 23:15:27 +09:00 | Kohya S | support T5XXL LoRA, reduce peak memory usage #1560
b65ae9b439 | 2024-09-04 21:33:17 +09:00 | Kohya S | T5XXL LoRA training, fp8 T5XXL support
3be712e3e0 | 2024-08-27 21:40:02 +09:00 | Kohya S | feat: Update direct loading fp8 ckpt for LoRA training
0087a46e14 | 2024-08-27 19:59:40 +09:00 | Kohya S | FLUX.1 LoRA supports CLIP-L
5639c2adc0 | 2024-08-24 16:37:49 +09:00 | Kohya S | fix typo
cf689e7aa6 | 2024-08-24 16:35:43 +09:00 | Kohya S | feat: Add option to split projection layers and apply LoRA
99744af53a | 2024-08-22 21:34:24 +09:00 | Kohya S | Merge branch 'dev' into sd3
afb971f9c3 | 2024-08-22 21:33:15 +09:00 | Kohya S | fix SD1.5 LoRA extraction #1490
bf9f798985 | 2024-08-22 19:59:38 +09:00 | Kohya S | chore: fix typos, remove debug print
b0a980844a | 2024-08-22 19:57:29 +09:00 | Kohya S | added a script to extract LoRA
2b07a92c8d | 2024-08-21 12:30:23 +09:00 | Kohya S | Fix error in applying mask in Attention and add LoRA converter script
e17c42cb0d | 2024-08-21 12:28:45 +09:00 | Kohya S | Add BFL/Diffusers LoRA converter #1467 #1458 #1483