| Author | Commit | Message | Date |
|---|---|---|---|
| Kohya S | cbe2a9da45 | feat: add conversion script for LoRA models to ComfyUI format with reverse option | 2025-09-16 21:48:47 +09:00 |
| Kohya S | 1a73b5e8a5 | feat: add script to convert LoRA format to ComfyUI format | 2025-09-14 20:49:20 +09:00 |
| Kohya S | 4e2a80a6ca | refactor: update imports to use safetensors_utils for memory-efficient operations | 2025-09-13 21:07:11 +09:00 |
| Kohya S | bae7fa74eb | Merge branch 'sd3' into feat-hunyuan-image-2.1-inference | 2025-09-13 20:13:58 +09:00 |
| Kohya S | 8783f8aed3 | feat: faster safetensors load and split safetensor utils | 2025-09-13 19:51:38 +09:00 |
| Kohya S | 209c02dbb6 | feat: HunyuanImage LoRA training | 2025-09-12 21:40:42 +09:00 |
| Kohya S | 7f983c558d | feat: block swap for inference and initial impl for HunyuanImage LoRA (not working) | 2025-09-11 22:15:22 +09:00 |
| Kohya S | 5149be5a87 | feat: initial commit for HunyuanImage-2.1 inference | 2025-09-11 12:54:12 +09:00 |
| Kohya S | 18e62515c4 | Merge branch 'dev' into sd3 | 2025-08-15 19:05:07 +09:00 |
| woctordho | 3ad71e1acf | Refactor to avoid mutable global variable | 2025-08-15 11:24:51 +08:00 |
| woctordho | c6fab554f4 | Support resizing ControlLoRA | 2025-08-15 03:00:40 +08:00 |
| Symbiomatrix | 3ce0c6e71f | Fix. | 2025-08-15 02:59:03 +08:00 |
| Symbiomatrix | 63ec59fc0b | Support for multiple format loras. | 2025-08-15 02:59:03 +08:00 |
| Kohya S | 450630c6bd | fix: create network from weights not working | 2025-07-29 20:32:24 +09:00 |
| Kohya S | c28e7a47c3 | feat: add regex-based rank and learning rate configuration for FLUX.1 LoRA | 2025-07-26 19:35:42 +09:00 |
| Kohya S | 9eda938876 | Merge branch 'sd3' into feature-chroma-support | 2025-07-21 13:32:22 +09:00 |
| Kohya S | 77a160d886 | fix: skip LoRA creation for None text encoders (CLIP-L for Chroma) | 2025-07-20 21:25:43 +09:00 |
| Kohya S | 88dc3213a9 | fix: support LoRA w/o TE for create_network_from_weights | 2025-07-13 20:46:24 +09:00 |
| rockerBOO | a87e999786 | Change to 3 | 2025-07-07 17:12:07 -04:00 |
| rockerBOO | 0145efc2f2 | Merge branch 'sd3' into lumina | 2025-06-09 18:13:06 -04:00 |
| rockerBOO | f62c68df3c | Make grad_norm and combined_grad_norm None is not recording | 2025-05-01 01:37:57 -04:00 |
| 青龍聖者@bdsqlsz | 9f1892cc8e | Merge branch 'sd3' into lumina | 2025-04-06 16:13:43 +08:00 |
| Kohya S. | 59d98e45a9 | Merge pull request #1974 from rockerBOO/lora-ggpo: Add LoRA-GGPO for Flux | 2025-03-30 21:07:31 +09:00 |
| rockerBOO | 182544dcce | Remove pertubation seed | 2025-03-26 14:23:04 -04:00 |
| Kohya S | 6364379f17 | Merge branch 'dev' into sd3 | 2025-03-21 22:07:50 +09:00 |
| rockerBOO | 3647d065b5 | Cache weight norms estimate on initialization. Move to update norms every step | 2025-03-18 14:25:09 -04:00 |
| rockerBOO | ea53290f62 | Add LoRA-GGPO for Flux | 2025-03-06 00:00:38 -05:00 |
| rockerBOO | e8c15c7167 | Remove log | 2025-03-04 02:30:08 -05:00 |
| rockerBOO | 9fe8a47080 | Undo dropout after up | 2025-03-04 02:28:56 -05:00 |
| rockerBOO | 1f22a94cfe | Update embedder_dims, add more flexible caption extension | 2025-03-04 02:25:50 -05:00 |
| sdbds | 5e45df722d | update gemma2 train attention layer | 2025-03-04 08:07:33 +08:00 |
| Ivan Chikish | acdca2abb7 | Fix [occasionally] missing text encoder attn modules. Should fix #1952. Added an alternative name for CLIPAttention; now both names are accepted. | 2025-03-01 20:35:45 +03:00 |
| rockerBOO | 60a76ebb72 | Add caching gemma2, add gradient checkpointing, refactor lumina model code | 2025-02-16 01:06:34 -05:00 |
| sdbds | 7323ee1b9d | update lora_lumina | 2025-02-15 17:10:34 +08:00 |
| Dango233 | e54462a4a9 | Fix SD3 trained lora loading and merging | 2024-11-07 09:54:12 +00:00 |
| kohya-ss | b502f58488 | Fix emb_dim to work. | 2024-10-29 23:29:50 +09:00 |
| kohya-ss | ce5b532582 | Fix additional LoRA to work | 2024-10-29 22:29:24 +09:00 |
| kohya-ss | 0af4edd8a6 | Fix split_qkv | 2024-10-29 21:51:56 +09:00 |
| Kohya S | b649bbf2b6 | Merge branch 'sd3' into sd3_5_support | 2024-10-27 10:20:35 +09:00 |
| Kohya S | 731664b8c3 | Merge branch 'dev' into sd3 | 2024-10-27 10:20:14 +09:00 |
| Kohya S | e070bd9973 | Merge branch 'main' into dev | 2024-10-27 10:19:55 +09:00 |
| Kohya S | ca44e3e447 | reduce VRAM usage, instead of increasing main RAM usage | 2024-10-27 10:19:05 +09:00 |
| Kohya S | 150579db32 | Merge branch 'sd3' into sd3_5_support | 2024-10-26 22:03:41 +09:00 |
| Kohya S | 8549669f89 | Merge branch 'dev' into sd3 | 2024-10-26 22:02:57 +09:00 |
| Kohya S | 900d551a6a | Merge branch 'main' into dev | 2024-10-26 22:02:36 +09:00 |
| Kohya S | 56b4ea963e | Fix LoRA metadata hash calculation bug in svd_merge_lora.py, sdxl_merge_lora.py, and resize_lora.py, closes #1722 | 2024-10-26 22:01:10 +09:00 |
| kohya-ss | d2c549d7b2 | support SD3 LoRA | 2024-10-25 21:58:31 +09:00 |
| Kohya S | 822fe57859 | add workaround for 'Some tensors share memory' error #1614 | 2024-09-28 20:57:27 +09:00 |
| Kohya S | 706a48d50e | Merge branch 'dev' into sd3 | 2024-09-19 21:15:39 +09:00 |
| Kohya S | d7e14721e2 | Merge branch 'main' into dev | 2024-09-19 21:15:19 +09:00 |