FLUX.1 LoRA Collection - NSFW Models
Changelog
v1.3 (2025-10-14)
- Updated version consistency (v1.3 throughout document)
- Verified YAML frontmatter compliance with Hugging Face standards
- Confirmed directory structure analysis (empty repository awaiting models)
- Maintained comprehensive documentation for future LoRA additions
v1.2 (2025-10-14)
- Fixed YAML frontmatter positioning to meet Hugging Face standards
- YAML frontmatter now starts at line 1 as required
- Moved version comment to proper position after YAML section
- Ensured full compliance with Hugging Face model card metadata requirements
v1.1 (2025-10-13)
- Enhanced repository organization documentation
- Added comprehensive LoRA training specifications
- Expanded performance optimization guidelines
- Improved multi-LoRA blending examples
- Added detailed prompt engineering best practices
- Updated hardware requirements with more granular specifications
- Added troubleshooting section for common issues
- Clarified precision compatibility across FP16/FP8 base models
v1.0 (2025-10-13)
- Initial repository structure and README
- Basic LoRA usage documentation
- Integration examples with FLUX.1-dev base models
Model Description
This repository contains a collection of LoRA (Low-Rank Adaptation) adapters for FLUX.1-dev models, focused on specialized content generation. LoRA adapters provide efficient fine-tuning by adding small trainable parameters to the base model, enabling style variations, character customization, and domain-specific generation without modifying the original model weights.
Key Capabilities:
- Efficient fine-tuning with minimal storage footprint (typically 10-500MB per LoRA)
- Compatible with FLUX.1-dev base models (FP16, FP8, and quantized variants)
- Stackable adapters for combining multiple styles/concepts
- Fast loading and switching between different LoRAs
- Preserves base model quality while adding specialized capabilities
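For intuition, the low-rank update described above can be written directly in a few lines of PyTorch. This is a conceptual sketch only (real adapters store the factors per targeted attention/MLP layer rather than merging them into the base weight):

```python
import torch

def apply_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, rank: int, strength: float = 1.0) -> torch.Tensor:
    """Effective weight W' = W + strength * (alpha / rank) * B @ A."""
    # W: (d_out, d_in) frozen base weight
    # A: (rank, d_in), B: (d_out, rank) are the small trained factors
    return W + strength * (alpha / rank) * (B @ A)

# Toy example: a rank-16 update to a 3072x3072 projection
W = torch.randn(3072, 3072)
A = torch.randn(16, 3072) * 0.01
B = torch.zeros(3072, 16)  # B is zero-initialized, so training starts as a no-op
W_adapted = apply_lora(W, A, B, alpha=16, rank=16)
```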
Repository Contents
Directory Structure
```
flux-dev-loras-nsfw/
├── loras/
│   └── flux/
│       └── [LoRA model files will be stored here]
└── README.md (this file)
```
Model Files
Current Status: Repository structure initialized, awaiting model files.
Expected File Types:
- `.safetensors` - LoRA adapter weights (recommended format)
- `.json` - LoRA configuration metadata
- `.txt` - Trigger words and usage instructions
Typical LoRA Sizes:
- Small LoRAs: 10-50 MB (style adapters)
- Medium LoRAs: 50-200 MB (character/concept adapters)
- Large LoRAs: 200-500 MB (complex multi-concept adapters)
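These sizes follow from the rank and the number of adapted layers: each layer stores two small factor matrices. A back-of-envelope estimator (the layer count and width below are illustrative, not measured FLUX internals):

```python
def lora_size_mb(rank: int, d_in: int, d_out: int,
                 n_layers: int, bytes_per_param: int = 2) -> float:
    # Each adapted layer stores A (rank x d_in) and B (d_out x rank)
    params_per_layer = rank * d_in + d_out * rank
    return n_layers * params_per_layer * bytes_per_param / 1024**2

# Example: rank-32 adapter over 100 projections of width 3072, stored in FP16
print(f"{lora_size_mb(32, 3072, 3072, 100):.0f} MB")  # ~38 MB
```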
Total Repository Size
- Current: 23 KB (empty structure)
- Expected: 100 MB - 5 GB total, varying with the size of the LoRA collection
Hardware Requirements
For LoRA Usage with FLUX.1-dev
Minimum Requirements:
- GPU: NVIDIA RTX 3060 (12GB VRAM) or equivalent
- RAM: 16 GB system memory
- Storage: 500 MB - 10 GB (depending on collection size)
- VRAM Usage: Base model (11-13GB) + LoRA overhead (100-500MB)
Recommended Setup:
- GPU: NVIDIA RTX 4090 (24GB VRAM) or A100
- RAM: 32 GB system memory
- Storage: 10-50 GB for comprehensive collection
- VRAM: 16-20GB for comfortable multi-LoRA usage
LoRA-Specific Benefits:
- Much lower VRAM overhead than full model fine-tunes
- Can load/unload LoRAs dynamically without restarting
- Multiple LoRAs can be combined with weighted blending
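To confirm the overhead on your own hardware, compare allocated VRAM before and after attaching an adapter. A quick diagnostic sketch, assuming a FluxPipeline `pipe` is already loaded on the GPU as in the examples below (the path and file name are placeholders):

```python
import torch

def vram_gb() -> float:
    return torch.cuda.memory_allocated() / 1024**3

before = vram_gb()
pipe.load_lora_weights(
    "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
    weight_name="your_lora.safetensors",
)
print(f"LoRA VRAM overhead: {vram_gb() - before:.2f} GB")
```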
Usage Examples
Basic LoRA Loading (Diffusers)
```python
from diffusers import FluxPipeline
import torch

# Load the FLUX.1-dev base model
pipe = FluxPipeline.from_pretrained(
    "E:\\huggingface\\flux-dev-fp16",  # Your local FLUX.1-dev path
    torch_dtype=torch.float16
)
pipe.to("cuda")

# Load a LoRA adapter
pipe.load_lora_weights(
    "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
    weight_name="your_lora.safetensors",
    adapter_name="custom_style"
)

# Generate with the LoRA
prompt = "your prompt with trigger words"
image = pipe(
    prompt=prompt,
    num_inference_steps=30,
    guidance_scale=3.5,  # FLUX.1-dev uses distilled guidance; ~3.5 is the usual default
    height=1024,
    width=1024
).images[0]
image.save("output.png")
```
Multi-LoRA Blending
```python
# Load multiple LoRAs
pipe.load_lora_weights(
    "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
    weight_name="style_lora.safetensors",
    adapter_name="style"
)
pipe.load_lora_weights(
    "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
    weight_name="character_lora.safetensors",
    adapter_name="character"
)

# Set adapter weights for blending
pipe.set_adapters(["style", "character"], adapter_weights=[0.8, 0.6])

# Generate with combined LoRAs
image = pipe(prompt="your combined prompt").images[0]
```
Dynamic LoRA Switching
```python
# Unload the current LoRA
pipe.unload_lora_weights()

# Load a different LoRA
pipe.load_lora_weights(
    "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
    weight_name="different_lora.safetensors"
)

# Generate with the new style
image = pipe(prompt="new prompt").images[0]
```
Using with ComfyUI
1. Place LoRA files in: ComfyUI/models/loras/
2. In ComfyUI workflow:
- Add "Load LoRA" node
- Connect to your FLUX.1-dev model
- Set strength: 0.6-1.0 (adjust for desired effect)
- Add trigger words to your prompt
Model Specifications
LoRA Architecture
Format: SafeTensors (recommended for security and efficiency)
Rank: Varies by LoRA (typical range: 4-128)
- Low rank (4-32): Lightweight style adapters
- Medium rank (32-64): Balanced quality/size
- High rank (64-128): Maximum quality, larger files
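If you are unsure which bracket a downloaded file falls into, its rank can be read from the tensor shapes. A sketch assuming common key naming (PEFT-style `lora_A`/`lora_B` or kohya-style `lora_down`/`lora_up`; other trainers may differ):

```python
from safetensors import safe_open

def report_lora_ranks(path: str) -> None:
    with safe_open(path, framework="pt", device="cpu") as f:
        for key in f.keys():
            if "lora_A" in key or "lora_down" in key:
                rank = f.get_slice(key).get_shape()[0]  # down/A matrix is (rank, d_in)
                print(f"{key}: rank {rank}")

report_lora_ranks("loras/flux/your_lora.safetensors")  # placeholder path
```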
Precision Options:
- FP16: Standard precision for most use cases
- FP32: Higher precision for professional workflows
- Quantized: Experimental lower-precision variants
Base Model Compatibility:
- FLUX.1-dev (primary)
- FLUX.1-schnell (compatible with adjustments)
- Works with FP16, FP8, and quantized base models
Training Specifications
LoRAs in this collection are typically trained with:
- Training steps: 500-5000 (varies by complexity)
- Learning rate: 1e-4 to 1e-5
- Batch size: 1-4
- Base model: FLUX.1-dev
- Dataset: Specialized domain-specific images
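As a reference point, those hyperparameters might be collected in a trainer-agnostic config like the sketch below. Field names and values are illustrative only; actual keys vary between trainers (kohya-ss, ai-toolkit, etc.):

```python
training_config = {
    "base_model": "black-forest-labs/FLUX.1-dev",
    "rank": 32,               # see rank guidance above
    "alpha": 32,              # scaling factor, commonly set equal to rank
    "learning_rate": 1e-4,    # 1e-4 to 1e-5 typical
    "max_train_steps": 2000,  # 500-5000 depending on concept complexity
    "train_batch_size": 1,    # 1-4 depending on VRAM
    "resolution": 1024,
    "output_format": "safetensors",
}
```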
Performance Tips
LoRA Loading Optimization
Fast Switching:
```python
# Keep the base model loaded; only swap LoRAs
pipe.unload_lora_weights()   # Fast
pipe.load_lora_weights(...)  # Fast
# Much faster than reloading the entire base model
```
Memory Management:
- Load only needed LoRAs to minimize VRAM usage
- Use `pipe.unload_lora_weights()` when switching styles
- Consider caching the weights of frequently used adapters
Weight Adjustment:
- Start with LoRA strength 0.7-1.0
- Lower weights (0.3-0.6) for subtle effects
- Higher weights (1.0-1.5) for strong style enforcement
- Test different weights to find optimal balance
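One way to find that balance systematically is a strength sweep over a fixed prompt and seed, so the only variable is the adapter weight. A sketch assuming the adapter was loaded with `adapter_name="custom_style"` as in the earlier example:

```python
import torch

for strength in (0.4, 0.7, 1.0, 1.3):
    pipe.set_adapters(["custom_style"], adapter_weights=[strength])
    image = pipe(
        prompt="your prompt with trigger words",
        num_inference_steps=28,
        generator=torch.Generator("cuda").manual_seed(42),  # fixed seed isolates the weight change
    ).images[0]
    image.save(f"strength_{strength}.png")
```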
Multi-LoRA Best Practices
Combining LoRAs:
- Limit to 2-3 LoRAs simultaneously for stability
- Adjust individual weights to balance effects
- Test combinations individually before stacking
- Monitor VRAM usage when loading multiple adapters
Prompt Engineering with LoRAs:
- Include trigger words specific to each LoRA
- Place important trigger words early in prompt
- Use emphasis syntax where your front end supports it: `(trigger word:1.2)` for a stronger effect (ComfyUI-style prompt parsers; the plain Diffusers pipeline does not interpret this syntax)
- Avoid conflicting concepts between multiple LoRAs
Advanced Prompt Engineering
Trigger Word Strategies:
```python
# Basic trigger word usage
prompt = "anime style, portrait of a character"  # Generic

# Optimized with LoRA trigger words
prompt = "anime_style_v2, detailed portrait, character design"  # LoRA-specific
```
Weight and Emphasis Techniques:
Standard: "beautiful landscape"
Emphasized: "(beautiful:1.3) landscape"
De-emphasized: "(background:0.7) focus on subject"
LoRA trigger: "(anime_style:1.2) detailed character"
Multi-LoRA Prompt Structure:
```
[Style LoRA trigger words] [Subject description] [Character LoRA trigger words] [Scene details]
```
Example:
```
"cinematic_lighting, fantasy_style, portrait of warrior, heroic_pose, detailed armor, misty forest background"
```
Negative Prompt Optimization:
negative_prompt = """
low quality, blurry, distorted, deformed, artifacts,
watermark, signature, text, bad anatomy, overexposed
"""
# Use negative prompts to avoid common LoRA artifacts
Context Length Management:
- FLUX.1-dev's T5 text encoder supports prompts up to 512 tokens (the CLIP encoder truncates at 77)
- Prioritize important concepts at the beginning
- Use concise, descriptive language
- Avoid redundant or conflicting terms
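To check whether a long multi-LoRA prompt fits within those limits, you can run it through the pipeline's own tokenizers. A sketch assuming current Diffusers attribute names (`tokenizer` is the CLIP tokenizer and `tokenizer_2` the T5 tokenizer on FluxPipeline):

```python
def count_prompt_tokens(pipe, prompt: str) -> None:
    t5_tokens = len(pipe.tokenizer_2(prompt).input_ids)
    clip_tokens = len(pipe.tokenizer(prompt).input_ids)
    print(f"T5: {t5_tokens}/512 tokens, CLIP: {clip_tokens}/77 tokens")

count_prompt_tokens(pipe, "cinematic_lighting, fantasy_style, portrait of warrior")
```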
Generation Quality
Optimal Settings:
- Steps: 20-35 (FLUX.1-dev), 4-8 (FLUX.1-schnell)
- Guidance Scale: 3.5-7.5 (lower for creative freedom)
- Resolution: 1024x1024 native, up to 2048x2048 with VRAM
- LoRA Strength: 0.7-1.0 for most use cases
Quality Troubleshooting:
- Over-fitting: Reduce LoRA strength to 0.5-0.7
- Weak effect: Increase strength or add trigger words
- Artifacts: Lower inference steps or reduce guidance scale
- VRAM errors: Reduce resolution or unload unused LoRAs
Common Issues and Solutions
Issue: LoRA not affecting output
- Solution: Verify trigger words are included in prompt
- Check LoRA strength is set to 0.7+
- Ensure LoRA file is compatible with base model version
- Try increasing adapter weight in multi-LoRA scenarios
Issue: Out of Memory (OOM) errors
- Solution: Unload unused LoRAs with `pipe.unload_lora_weights()`
- Reduce base model to FP8 precision
- Lower generation resolution (e.g., 1024x1024 → 768x768)
- Limit simultaneous LoRAs to 1-2 adapters
- Enable CPU offloading: `pipe.enable_model_cpu_offload()`
Issue: Generation artifacts or distortion
- Solution: Lower LoRA strength to 0.5-0.7
- Reduce guidance scale to 3.5-5.0
- Increase inference steps to 30-40
- Check for conflicting trigger words between multiple LoRAs
Issue: Slow generation with LoRA
- Solution: LoRA adds minimal overhead; check base model optimization
- Ensure CUDA is properly installed and utilized
- Use FP8 base model for faster inference
- Consider torch.compile() for PyTorch 2.0+
- Cache LoRA weights for frequently used adapters
Issue: LoRA file won't load
- Solution: Verify the file format is `.safetensors` (preferred)
- Check that the file path uses the correct absolute path format
- Ensure the LoRA was trained for the FLUX.1-dev architecture
- Try loading without the `adapter_name` parameter
- Check the diffusers library version (0.30.0+ recommended)
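When working through the issues above, it also helps to confirm which adapters are actually attached and active. A sketch using Diffusers' PEFT integration helpers (available in recent versions):

```python
# Which adapters exist on each model component, and which are active?
print(pipe.get_list_adapters())    # e.g. {"transformer": ["style", "character"]}
print(pipe.get_active_adapters())  # e.g. ["style"]
```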
Integration with FLUX.1 Base Models
Compatible Base Model Paths
This LoRA collection works with local FLUX.1-dev installations:
```python
# FP16 precision (highest quality)
base_model = "E:\\huggingface\\flux-dev-fp16"

# FP8 precision (memory efficient)
base_model = "E:\\huggingface\\flux-dev-fp8"

# Load base model + LoRA
pipe = FluxPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.load_lora_weights("E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux", ...)
```
Precision Considerations
FP16 Base Model:
- Best quality and LoRA effect fidelity
- Recommended for production use
- Requires ~11-13GB VRAM
FP8 Base Model:
- Memory efficient (~6-8GB VRAM)
- Slight quality reduction
- LoRA effects may be slightly less pronounced
- Good for experimentation and iteration
Workflow Optimization
Development Workflow:
```python
# 1. Initial setup - load the base model once
pipe = FluxPipeline.from_pretrained(
    "E:\\huggingface\\flux-dev-fp16",
    torch_dtype=torch.float16
).to("cuda")

# 2. Iterative LoRA testing
for lora_file in ["style1.safetensors", "style2.safetensors", "style3.safetensors"]:
    pipe.unload_lora_weights()  # Fast cleanup
    pipe.load_lora_weights(
        "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
        weight_name=lora_file
    )
    image = pipe(prompt).images[0]
    image.save(f"output_{lora_file}.png")
```
Batch Generation Pipeline:
```python
import itertools

# Test multiple LoRAs with multiple prompts and strengths
loras = ["anime_v1.safetensors", "realistic_v2.safetensors"]
prompts = ["portrait of character", "landscape scene", "action pose"]
strengths = [0.6, 0.8, 1.0]

for lora, prompt, strength in itertools.product(loras, prompts, strengths):
    adapter = lora.split(".")[0]
    pipe.unload_lora_weights()
    pipe.load_lora_weights(
        "E:\\huggingface\\flux-dev-loras-nsfw\\loras\\flux",
        weight_name=lora,
        adapter_name=adapter  # Name the adapter so set_adapters can reference it
    )
    pipe.set_adapters([adapter], adapter_weights=[strength])
    image = pipe(prompt, num_inference_steps=28).images[0]
    image.save(f"batch_{adapter}_{strength}_{prompt[:10]}.png")
```
Production Inference Optimization:
```python
# Enable torch.compile for PyTorch 2.0+ (FLUX uses a transformer, not a UNet)
pipe.transformer = torch.compile(pipe.transformer, mode="reduce-overhead", fullgraph=True)

# Enable attention slicing for lower VRAM
pipe.enable_attention_slicing()

# Use VAE tiling for high-resolution generation
pipe.vae.enable_tiling()

# Generate with optimizations
image = pipe(
    prompt=prompt,
    num_inference_steps=28,
    guidance_scale=3.5,
    generator=torch.Generator("cuda").manual_seed(42)  # Reproducible results
).images[0]
```
License
LoRA Collection License: Varies by individual LoRA (check metadata)
Base Model License: FLUX.1-dev Non-Commercial License
- See: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
- Commercial use requires license from Black Forest Labs
Usage Restrictions:
- Respect individual LoRA creator licenses
- NSFW content: Ensure compliance with local laws and platform policies
- Attribution required for derivative works
- No redistribution without permission from LoRA creators
Citation
FLUX.1 Base Model
```bibtex
@article{flux2024,
  title={FLUX.1: A New Generation of Diffusion Models},
  author={Black Forest Labs},
  journal={Technical Report},
  year={2024},
  url={https://blackforestlabs.ai/flux-1}
}
```
LoRA Methodology
```bibtex
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Wang, Lu and Chen, Weizhu},
  journal={arXiv preprint arXiv:2106.09685},
  year={2021}
}
```
Resources and Links
Official FLUX.1 Resources
- Model Card: https://huggingface.co/black-forest-labs/FLUX.1-dev
- Official Website: https://blackforestlabs.ai
- GitHub: https://github.com/black-forest-labs/flux
LoRA Training and Usage
- Diffusers LoRA Guide: https://huggingface.co/docs/diffusers/training/lora
- LoRA Training Tutorial: https://huggingface.co/blog/lora
- ComfyUI LoRA Docs: https://github.com/comfyanonymous/ComfyUI
Community and Support
- Hugging Face Discussions: https://huggingface.co/black-forest-labs/FLUX.1-dev/discussions
- FLUX.1 Discord: https://discord.gg/flux-community
- Reddit: r/StableDiffusion, r/LocalLLaMA
LoRA Compatibility Matrix
Supported Base Models
| Base Model | Precision | VRAM Required | LoRA Compatibility | Performance |
|---|---|---|---|---|
| FLUX.1-dev | FP16 | 11-13 GB | ✅ Full | Best quality |
| FLUX.1-dev | FP8 | 6-8 GB | ✅ Full | Good quality |
| FLUX.1-schnell | FP16 | 11-13 GB | ⚠️ Partial | Fast inference |
| FLUX.1-schnell | FP8 | 6-8 GB | ⚠️ Partial | Very fast |
Notes:
- LoRAs trained on FLUX.1-dev work best with dev models
- Schnell compatibility requires testing (different distillation)
- Quantized models (GGUF) have experimental LoRA support
Library Version Requirements
Minimum Requirements:
diffusers >= 0.30.0
transformers >= 4.44.0
torch >= 2.0.0
safetensors >= 0.4.0
accelerate >= 0.23.0
Recommended for Best Performance:
diffusers >= 0.31.0
transformers >= 4.45.0
torch >= 2.4.0 (with CUDA 12.1+)
xformers >= 0.0.22 (optional, for memory efficiency)
Installation Command:
```bash
pip install "diffusers>=0.30.0" "transformers>=4.44.0" "torch>=2.0.0" safetensors accelerate
```
LoRA Format Compatibility
Supported Formats:
- ✅ `.safetensors` - Primary format (recommended)
- ✅ `.bin` - PyTorch format (legacy, less secure)
- ⚠️ `.pt` - Legacy format (test before use)
- ❌ `.ckpt` - Not supported (Stable Diffusion format)
Format Conversion:
```python
import torch
from safetensors.torch import save_file

# Convert .bin to .safetensors
state_dict = torch.load("lora.bin", map_location="cpu")
save_file(state_dict, "lora.safetensors")
```
Repository Maintenance
Adding New LoRAs
When adding new LoRA files to this repository:
- File Placement: Place `.safetensors` files in `loras/flux/`
- Naming Convention: Use descriptive names (e.g., `anime_style_v2.safetensors`)
- Documentation: Create an accompanying `.txt` file with trigger words and settings
- Metadata: Include a `.json` config if available
- Update README: Document file size, rank, and recommended usage settings
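A small helper can generate the companion `.txt` and `.json` files in one step. This is a sketch; the metadata fields follow this repository's own convention rather than any standard:

```python
import json
from pathlib import Path

def write_lora_docs(lora_path: str, trigger_words: list[str],
                    strength: float = 0.8) -> None:
    base = Path(lora_path).with_suffix("")  # strip .safetensors
    base.with_suffix(".txt").write_text(
        f"Trigger words: {', '.join(trigger_words)}\n"
        f"Recommended strength: {strength}\n"
    )
    base.with_suffix(".json").write_text(json.dumps({
        "triggers": trigger_words,
        "recommended_strength": strength,
        "base_model": "FLUX.1-dev",
    }, indent=2))

write_lora_docs("loras/flux/anime_style_v2.safetensors", ["anime_style_v2"])
```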
Organization Best Practices
Subdirectory Structure (for large collections):
```
loras/flux/
├── characters/   # Character-specific LoRAs
├── styles/       # Artistic style adapters
├── concepts/     # Concept/theme LoRAs
└── quality/      # Quality enhancement LoRAs
```
File Naming:
- Include a version: `style_name_v1.safetensors`
- Indicate rank where relevant: `character_r64.safetensors`
- Use lowercase with underscores
Repository Statistics
Current Status: Repository initialized, awaiting model files
Directory Structure:
- Total directories: 2 (loras/, loras/flux/)
- Model files: 0 (awaiting LoRA collection)
- Documentation files: 1 (README.md)
Disk Usage:
- Current: 23 KB
- Expected with collection: 100 MB - 5 GB (varies by LoRA count)
README Version: v1.3
Last Updated: 2025-10-14
Maintained by: Local AI Model Collection
Repository Path: E:\huggingface\flux-dev-loras-nsfw