mirror of https://github.com/kohya-ss/sd-scripts.git (synced 2026-04-06 13:47:06 +00:00)

feat: Add support for merging CLIP-L and T5XXL LoRA models

Changed files: README.md, networks/flux_merge_lora.py
README.md
@@ -11,6 +11,9 @@ The command to install PyTorch is as follows:
 
 ### Recent Updates
 
+Sep 5, 2024:
+The LoRA merge script now supports CLIP-L and T5XXL LoRA. Please specify `--clip_l` and `--t5xxl`. `--clip_l_save_to` and `--t5xxl_save_to` specify the save destination for CLIP-L and T5XXL. See [Merge LoRA to FLUX.1 checkpoint](#merge-lora-to-flux1-checkpoint) for details.
+
 Sep 4, 2024:
 - T5XXL LoRA is supported in LoRA training. Remove `--network_train_unet_only` and add `train_t5xxl=True` to `--network_args`. CLIP-L is also trained at the same time (T5XXL only cannot be trained). The trained model can be used with ComfyUI. See [Key Features for FLUX.1 LoRA training](#key-features-for-flux1-lora-training) for details.
 - In LoRA training, when `--fp8_base` is specified, you can specify `t5xxl_fp8_e4m3fn.safetensors` as the T5XXL weights. However, it is recommended to use fp16 weights for caching.
@@ -276,7 +279,7 @@ CLIP-L LoRA is not supported.
 
 ### Merge LoRA to FLUX.1 checkpoint
 
-`networks/flux_merge_lora.py` merges LoRA to FLUX.1 checkpoint. __The script is experimental.__
+`networks/flux_merge_lora.py` merges LoRA to FLUX.1 checkpoint, CLIP-L or T5XXL models. __The script is experimental.__
 
 ```
 python networks/flux_merge_lora.py --flux_model flux1-dev.safetensors --save_to output.safetensors --models lora1.safetensors --ratios 2.0 --save_precision fp16 --loading_device cuda --working_device cpu
@@ -284,13 +287,24 @@ python networks/flux_merge_lora.py --flux_model flux1-dev.safetensors --save_to
 
 You can also merge multiple LoRA models into a FLUX.1 model. Specify multiple LoRA models in `--models`. Specify the same number of ratios in `--ratios`.
 
-`--loading_device` is the device on which to load the LoRA models. `--working_device` is the device on which to merge (calculate) the models. The default is `cpu` for both. Loading / working device examples are below (in the case of `--save_precision fp16` or `--save_precision bf16`):
+CLIP-L and T5XXL LoRA are supported. `--clip_l` and `--clip_l_save_to` are for CLIP-L; `--t5xxl` and `--t5xxl_save_to` are for T5XXL. A sample command is below.
+
+```
+--clip_l clip_l.safetensors --clip_l_save_to merged_clip_l.safetensors --t5xxl t5xxl_fp16.safetensors --t5xxl_save_to merged_t5xxl.safetensors
+```
+
+FLUX.1, CLIP-L, and T5XXL can be merged together or separately for memory efficiency.
+
+An experimental option `--mem_eff_load_save` is available for memory-efficient loading and saving. It may also speed up loading and saving.
+
+`--loading_device` is the device on which to load the LoRA models. `--working_device` is the device on which to merge (calculate) the models. The default is `cpu` for both. Loading / working device examples are below (in the case of `--save_precision fp16` or `--save_precision bf16`; `float32` will consume more memory):
 
 - 'cpu' / 'cpu': Uses >50GB of RAM, but works on any machine.
 - 'cuda' / 'cpu': Uses 24GB of VRAM, but requires 30GB of RAM.
-- 'cuda' / 'cuda': Uses 30GB of VRAM, but requires 30GB of RAM; faster than 'cuda' / 'cpu'.
+- 'cpu' / 'cuda': Uses 4GB of VRAM, but requires 50GB of RAM; faster than 'cpu' / 'cpu' or 'cuda' / 'cpu'.
+- 'cuda' / 'cuda': Uses 30GB of VRAM, but requires 30GB of RAM; faster than 'cpu' / 'cpu' or 'cuda' / 'cpu'.
 
-If the LoRA models are trained with `bf16`, we are not sure whether `fp16` or `bf16` is better for `--save_precision`.
+`--save_precision` is the precision used to save the merged model. If the LoRA models were trained with `bf16`, we are not sure whether `fp16` or `bf16` is better for `--save_precision`.
 
 The script can merge multiple LoRA models. If you want to merge multiple LoRA models, specify the `--concat` option so that the merged LoRA model works properly.
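Putting the README options together, a full invocation that merges one LoRA into all three models at once might look like the following (file names are illustrative; the flags are the ones documented above):

```
python networks/flux_merge_lora.py \
  --flux_model flux1-dev.safetensors --save_to merged_flux.safetensors \
  --clip_l clip_l.safetensors --clip_l_save_to merged_clip_l.safetensors \
  --t5xxl t5xxl_fp16.safetensors --t5xxl_save_to merged_t5xxl.safetensors \
  --models lora1.safetensors --ratios 1.0 \
  --save_precision fp16 --loading_device cuda --working_device cpu --mem_eff_load_save
```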
networks/flux_merge_lora.py
@@ -2,6 +2,7 @@ import argparse
 import math
 import os
 import time
+from typing import Any, Dict, Union
 
 import torch
 from safetensors import safe_open
@@ -34,11 +35,11 @@ def load_state_dict(file_name, dtype):
     return sd, metadata
 
 
-def save_to_file(file_name, state_dict, dtype, metadata, mem_eff_save=False):
+def save_to_file(file_name, state_dict: Dict[str, Union[Any, torch.Tensor]], dtype, metadata, mem_eff_save=False):
     if dtype is not None:
         logger.info(f"converting to {dtype}...")
         for key in tqdm(list(state_dict.keys())):
-            if type(state_dict[key]) == torch.Tensor:
+            if type(state_dict[key]) == torch.Tensor and state_dict[key].dtype.is_floating_point:
                 state_dict[key] = state_dict[key].to(dtype)
 
     logger.info(f"saving to: {file_name}")
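The added `dtype.is_floating_point` guard means only floating-point tensors are cast to the save dtype; integer tensors keep their dtype. A torch-free sketch of the same idea, with plain Python floats/ints standing in for tensors and a doubling function standing in for the dtype cast (`convert_floating_point` is illustrative, not from the script):

```python
# Convert only the floating-point entries of a state dict, leaving others untouched.

def convert_floating_point(state_dict, cast):
    for key in list(state_dict.keys()):
        if isinstance(state_dict[key], float):  # stands in for tensor.dtype.is_floating_point
            state_dict[key] = cast(state_dict[key])
    return state_dict

result = convert_floating_point({"weight": 1.5, "step": 7}, lambda v: v * 2)
```

The guard matters because casting integer tensors (e.g. index buffers) to a float dtype would silently corrupt them.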
@@ -49,26 +50,76 @@ def save_to_file(file_name, state_dict, dtype, metadata, mem_eff_save=False):
 
 
 def merge_to_flux_model(
-    loading_device, working_device, flux_model, models, ratios, merge_dtype, save_dtype, mem_eff_load_save=False
+    loading_device,
+    working_device,
+    flux_path: str,
+    clip_l_path: str,
+    t5xxl_path: str,
+    models,
+    ratios,
+    merge_dtype,
+    save_dtype,
+    mem_eff_load_save=False,
 ):
     # create module map without loading state_dict
-    logger.info(f"loading keys from FLUX.1 model: {flux_model}")
     lora_name_to_module_key = {}
-    with safe_open(flux_model, framework="pt", device=loading_device) as flux_file:
-        keys = list(flux_file.keys())
-        for key in keys:
-            if key.endswith(".weight"):
-                module_name = ".".join(key.split(".")[:-1])
-                lora_name = lora_flux.LoRANetwork.LORA_PREFIX_FLUX + "_" + module_name.replace(".", "_")
-                lora_name_to_module_key[lora_name] = key
+    if flux_path is not None:
+        logger.info(f"loading keys from FLUX.1 model: {flux_path}")
+        with safe_open(flux_path, framework="pt", device=loading_device) as flux_file:
+            keys = list(flux_file.keys())
+            for key in keys:
+                if key.endswith(".weight"):
+                    module_name = ".".join(key.split(".")[:-1])
+                    lora_name = lora_flux.LoRANetwork.LORA_PREFIX_FLUX + "_" + module_name.replace(".", "_")
+                    lora_name_to_module_key[lora_name] = key
+
+    lora_name_to_clip_l_key = {}
+    if clip_l_path is not None:
+        logger.info(f"loading keys from clip_l model: {clip_l_path}")
+        with safe_open(clip_l_path, framework="pt", device=loading_device) as clip_l_file:
+            keys = list(clip_l_file.keys())
+            for key in keys:
+                if key.endswith(".weight"):
+                    module_name = ".".join(key.split(".")[:-1])
+                    lora_name = lora_flux.LoRANetwork.LORA_PREFIX_TEXT_ENCODER_CLIP + "_" + module_name.replace(".", "_")
+                    lora_name_to_clip_l_key[lora_name] = key
+
+    lora_name_to_t5xxl_key = {}
+    if t5xxl_path is not None:
+        logger.info(f"loading keys from t5xxl model: {t5xxl_path}")
+        with safe_open(t5xxl_path, framework="pt", device=loading_device) as t5xxl_file:
+            keys = list(t5xxl_file.keys())
+            for key in keys:
+                if key.endswith(".weight"):
+                    module_name = ".".join(key.split(".")[:-1])
+                    lora_name = lora_flux.LoRANetwork.LORA_PREFIX_TEXT_ENCODER_T5 + "_" + module_name.replace(".", "_")
+                    lora_name_to_t5xxl_key[lora_name] = key
 
+    flux_state_dict = {}
+    clip_l_state_dict = {}
+    t5xxl_state_dict = {}
     if mem_eff_load_save:
-        flux_state_dict = {}
-        with MemoryEfficientSafeOpen(flux_model) as flux_file:
-            for key in tqdm(flux_file.keys()):
-                flux_state_dict[key] = flux_file.get_tensor(key).to(loading_device)  # dtype is not changed
+        if flux_path is not None:
+            with MemoryEfficientSafeOpen(flux_path) as flux_file:
+                for key in tqdm(flux_file.keys()):
+                    flux_state_dict[key] = flux_file.get_tensor(key).to(loading_device)  # dtype is not changed
+
+        if clip_l_path is not None:
+            with MemoryEfficientSafeOpen(clip_l_path) as clip_l_file:
+                for key in tqdm(clip_l_file.keys()):
+                    clip_l_state_dict[key] = clip_l_file.get_tensor(key).to(loading_device)
+
+        if t5xxl_path is not None:
+            with MemoryEfficientSafeOpen(t5xxl_path) as t5xxl_file:
+                for key in tqdm(t5xxl_file.keys()):
+                    t5xxl_state_dict[key] = t5xxl_file.get_tensor(key).to(loading_device)
     else:
-        flux_state_dict = load_file(flux_model, device=loading_device)
+        if flux_path is not None:
+            flux_state_dict = load_file(flux_path, device=loading_device)
+        if clip_l_path is not None:
+            clip_l_state_dict = load_file(clip_l_path, device=loading_device)
+        if t5xxl_path is not None:
+            t5xxl_state_dict = load_file(t5xxl_path, device=loading_device)
 
     for model, ratio in zip(models, ratios):
        logger.info(f"loading: {model}")
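The three key maps built above all follow the same renaming rule: drop the trailing `.weight`, then join the module path with underscores under a network prefix. A small torch-free sketch of that mapping (the `lora_unet` prefix and helper name here are illustrative; the script takes its prefixes from `lora_flux.LoRANetwork`):

```python
# Sketch of the checkpoint-key -> LoRA-name mapping used to build the module maps.
from typing import Optional

def weight_key_to_lora_name(key: str, prefix: str) -> Optional[str]:
    if not key.endswith(".weight"):
        return None  # only .weight tensors get LoRA counterparts
    module_name = ".".join(key.split(".")[:-1])  # strip the ".weight" suffix
    return prefix + "_" + module_name.replace(".", "_")

name = weight_key_to_lora_name("double_blocks.0.img_attn.qkv.weight", "lora_unet")
```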
@@ -81,8 +132,20 @@ def merge_to_flux_model(
                 up_key = key.replace("lora_down", "lora_up")
                 alpha_key = key[: key.index("lora_down")] + "alpha"
 
-                if lora_name not in lora_name_to_module_key:
-                    logger.warning(f"no module found for LoRA weight: {key}. LoRA for Text Encoder is not supported yet.")
+                if lora_name in lora_name_to_module_key:
+                    module_weight_key = lora_name_to_module_key[lora_name]
+                    state_dict = flux_state_dict
+                elif lora_name in lora_name_to_clip_l_key:
+                    module_weight_key = lora_name_to_clip_l_key[lora_name]
+                    state_dict = clip_l_state_dict
+                elif lora_name in lora_name_to_t5xxl_key:
+                    module_weight_key = lora_name_to_t5xxl_key[lora_name]
+                    state_dict = t5xxl_state_dict
+                else:
+                    logger.warning(
+                        f"no module found for LoRA weight: {key}. Skipping..."
+                        f"LoRAの重みに対応するモジュールが見つかりませんでした。スキップします。"
+                    )
                     continue
 
                 down_weight = lora_sd.pop(key)
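The if/elif chain above routes each LoRA name to the matching target state dict, trying FLUX first, then CLIP-L, then T5XXL, and skipping unmatched names. A torch-free sketch of the same lookup order (the function name and return convention are illustrative):

```python
# Route a LoRA name to (module weight key, target model) via the three key maps.

def route(lora_name, flux_map, clip_l_map, t5xxl_map):
    # try the FLUX map first, then CLIP-L, then T5XXL, mirroring the if/elif chain
    for name_map, target in ((flux_map, "flux"), (clip_l_map, "clip_l"), (t5xxl_map, "t5xxl")):
        if lora_name in name_map:
            return name_map[lora_name], target
    return None, None  # no module found: the script logs a warning and continues

hit = route("lora_unet_x", {"lora_unet_x": "x.weight"}, {}, {})
```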
@@ -93,11 +156,7 @@ def merge_to_flux_model(
                 scale = alpha / dim
 
                 # W <- W + U * D
-                module_weight_key = lora_name_to_module_key[lora_name]
-                if module_weight_key not in flux_state_dict:
-                    weight = flux_file.get_tensor(module_weight_key)
-                else:
-                    weight = flux_state_dict[module_weight_key]
+                weight = state_dict[module_weight_key]
 
                 weight = weight.to(working_device, merge_dtype)
                 up_weight = up_weight.to(working_device, merge_dtype)
@@ -121,7 +180,7 @@ def merge_to_flux_model(
                 # logger.info(conved.size(), weight.size(), module.stride, module.padding)
                 weight = weight + ratio * conved * scale
 
-                flux_state_dict[module_weight_key] = weight.to(loading_device, save_dtype)
+                state_dict[module_weight_key] = weight.to(loading_device, save_dtype)
                 del up_weight
                 del down_weight
                 del weight
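The update applied in these hunks is the standard LoRA merge, W ← W + ratio · (U @ D) · (alpha / dim), written back into whichever state dict the key was routed to. It can be sketched torch-free with plain nested lists standing in for tensors (`merge_lora` and the toy shapes are illustrative, not from the script):

```python
# Toy illustration of the LoRA merge arithmetic: W <- W + ratio * (U @ D) * (alpha / dim).

def matmul(a, b):
    # naive (n x k) @ (k x m) matrix product on nested lists
    return [[sum(a[i][p] * b[p][j] for p in range(len(b))) for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(weight, up, down, alpha, ratio):
    dim = len(down)           # LoRA rank = number of rows in the down projection
    scale = alpha / dim
    delta = matmul(up, down)  # U @ D has the same shape as the base weight
    return [
        [w + ratio * d * scale for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(weight, delta)
    ]

# rank-1 example: identity base weight plus a single off-diagonal update
merged = merge_lora([[1.0, 0.0], [0.0, 1.0]], up=[[1.0], [0.0]], down=[[0.0, 2.0]], alpha=1.0, ratio=1.0)
```

After the merge, applying the merged weight to an input is equivalent to running the base weight plus the scaled LoRA path, which is why the adapter files can be discarded.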
@@ -129,7 +188,7 @@ def merge_to_flux_model(
         if len(lora_sd) > 0:
             logger.warning(f"Unused keys in LoRA model: {list(lora_sd.keys())}")
 
-    return flux_state_dict
+    return flux_state_dict, clip_l_state_dict, t5xxl_state_dict
 
 
 def merge_to_flux_model_diffusers(
@@ -508,17 +567,28 @@ def merge(args):
     if save_dtype is None:
         save_dtype = merge_dtype
 
-    dest_dir = os.path.dirname(args.save_to)
+    assert (
+        args.save_to or args.clip_l_save_to or args.t5xxl_save_to
+    ), "save_to or clip_l_save_to or t5xxl_save_to must be specified / save_toまたはclip_l_save_toまたはt5xxl_save_toを指定してください"
+    dest_dir = os.path.dirname(args.save_to or args.clip_l_save_to or args.t5xxl_save_to)
     if not os.path.exists(dest_dir):
         logger.info(f"creating directory: {dest_dir}")
         os.makedirs(dest_dir)
 
-    if args.flux_model is not None:
+    if args.flux_model is not None or args.clip_l is not None or args.t5xxl is not None:
         if not args.diffusers:
-            state_dict = merge_to_flux_model(
+            assert (args.clip_l is None and args.clip_l_save_to is None) or (
+                args.clip_l is not None and args.clip_l_save_to is not None
+            ), "clip_l_save_to must be specified if clip_l is specified / clip_lが指定されている場合はclip_l_save_toも指定してください"
+            assert (args.t5xxl is None and args.t5xxl_save_to is None) or (
+                args.t5xxl is not None and args.t5xxl_save_to is not None
+            ), "t5xxl_save_to must be specified if t5xxl is specified / t5xxlが指定されている場合はt5xxl_save_toも指定してください"
+            flux_state_dict, clip_l_state_dict, t5xxl_state_dict = merge_to_flux_model(
                 args.loading_device,
                 args.working_device,
                 args.flux_model,
+                args.clip_l,
+                args.t5xxl,
                 args.models,
                 args.ratios,
                 merge_dtype,
@@ -526,7 +596,10 @@ def merge(args):
                 args.mem_eff_load_save,
             )
         else:
-            state_dict = merge_to_flux_model_diffusers(
+            assert (
+                args.clip_l is None and args.t5xxl is None
+            ), "clip_l and t5xxl are not supported with --diffusers / clip_l、t5xxlはDiffusersではサポートされていません"
+            flux_state_dict = merge_to_flux_model_diffusers(
                 args.loading_device,
                 args.working_device,
                 args.flux_model,
@@ -536,8 +609,10 @@ def merge(args):
                 save_dtype,
                 args.mem_eff_load_save,
             )
+            clip_l_state_dict = None
+            t5xxl_state_dict = None
 
-        if args.no_metadata:
+        if args.no_metadata or (flux_state_dict is None or len(flux_state_dict) == 0):
             sai_metadata = None
         else:
             merged_from = sai_model_spec.build_merged_from([args.flux_model] + args.models)
@@ -546,15 +621,24 @@ def merge(args):
                 None, False, False, False, False, False, time.time(), title=title, merged_from=merged_from, flux="dev"
             )
 
-        logger.info(f"saving FLUX model to: {args.save_to}")
-        save_to_file(args.save_to, state_dict, save_dtype, sai_metadata, args.mem_eff_load_save)
+        if flux_state_dict is not None and len(flux_state_dict) > 0:
+            logger.info(f"saving FLUX model to: {args.save_to}")
+            save_to_file(args.save_to, flux_state_dict, save_dtype, sai_metadata, args.mem_eff_load_save)
+
+        if clip_l_state_dict is not None and len(clip_l_state_dict) > 0:
+            logger.info(f"saving clip_l model to: {args.clip_l_save_to}")
+            save_to_file(args.clip_l_save_to, clip_l_state_dict, save_dtype, None, args.mem_eff_load_save)
+
+        if t5xxl_state_dict is not None and len(t5xxl_state_dict) > 0:
+            logger.info(f"saving t5xxl model to: {args.t5xxl_save_to}")
+            save_to_file(args.t5xxl_save_to, t5xxl_state_dict, save_dtype, None, args.mem_eff_load_save)
     else:
-        state_dict, metadata = merge_lora_models(args.models, args.ratios, merge_dtype, args.concat, args.shuffle)
+        flux_state_dict, metadata = merge_lora_models(args.models, args.ratios, merge_dtype, args.concat, args.shuffle)
 
         logger.info("calculating hashes and creating metadata...")
 
-        model_hash, legacy_hash = train_util.precalculate_safetensors_hashes(state_dict, metadata)
+        model_hash, legacy_hash = train_util.precalculate_safetensors_hashes(flux_state_dict, metadata)
         metadata["sshs_model_hash"] = model_hash
         metadata["sshs_legacy_hash"] = legacy_hash
@@ -562,12 +646,12 @@ def merge(args):
         merged_from = sai_model_spec.build_merged_from(args.models)
         title = os.path.splitext(os.path.basename(args.save_to))[0]
         sai_metadata = sai_model_spec.build_metadata(
-            state_dict, False, False, False, True, False, time.time(), title=title, merged_from=merged_from, flux="dev"
+            flux_state_dict, False, False, False, True, False, time.time(), title=title, merged_from=merged_from, flux="dev"
         )
         metadata.update(sai_metadata)
 
         logger.info(f"saving model to: {args.save_to}")
-        save_to_file(args.save_to, state_dict, save_dtype, metadata)
+        save_to_file(args.save_to, flux_state_dict, save_dtype, metadata)
 
 
 def setup_parser() -> argparse.ArgumentParser:
@@ -592,6 +676,18 @@ def setup_parser() -> argparse.ArgumentParser:
         default=None,
         help="FLUX.1 model to load, merge LoRA models if omitted / 読み込むモデル、指定しない場合はLoRAモデルをマージする",
     )
+    parser.add_argument(
+        "--clip_l",
+        type=str,
+        default=None,
+        help="path to clip_l (*.sft or *.safetensors), should be float16 / clip_lのパス(*.sftまたは*.safetensors)",
+    )
+    parser.add_argument(
+        "--t5xxl",
+        type=str,
+        default=None,
+        help="path to t5xxl (*.sft or *.safetensors), should be float16 / t5xxlのパス(*.sftまたは*.safetensors)",
+    )
     parser.add_argument(
         "--mem_eff_load_save",
         action="store_true",
@@ -617,6 +713,18 @@ def setup_parser() -> argparse.ArgumentParser:
         default=None,
         help="destination file name: safetensors file / 保存先のファイル名、safetensorsファイル",
     )
+    parser.add_argument(
+        "--clip_l_save_to",
+        type=str,
+        default=None,
+        help="destination file name for clip_l: safetensors file / clip_lの保存先のファイル名、safetensorsファイル",
+    )
+    parser.add_argument(
+        "--t5xxl_save_to",
+        type=str,
+        default=None,
+        help="destination file name for t5xxl: safetensors file / t5xxlの保存先のファイル名、safetensorsファイル",
+    )
     parser.add_argument(
         "--models",
         type=str,
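The new `--clip_l`/`--clip_l_save_to` and `--t5xxl`/`--t5xxl_save_to` options are validated in `merge()` as pairs: a model path and its save destination must be given together. A minimal standalone sketch of that check (the parser here and the `pair_ok` helper are illustrative, not part of the script):

```python
import argparse

def pair_ok(args, src, dst):
    # both set or both omitted, mirroring the asserts in merge()
    return (getattr(args, src) is None) == (getattr(args, dst) is None)

parser = argparse.ArgumentParser()
parser.add_argument("--clip_l", type=str, default=None)
parser.add_argument("--clip_l_save_to", type=str, default=None)

ok = pair_ok(parser.parse_args(["--clip_l", "clip_l.safetensors", "--clip_l_save_to", "out.safetensors"]), "clip_l", "clip_l_save_to")
bad = pair_ok(parser.parse_args(["--clip_l", "clip_l.safetensors"]), "clip_l", "clip_l_save_to")
```

Doing this check up front fails fast, before any multi-gigabyte checkpoint is loaded.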