# 🧠 SynAdult Multimodal Dataset
- **Paper:** Farooq, M. A., Kielty, P., Yao, W., & Corcoran, P. (2025). SynAdult: Multimodal Synthetic Adult Dataset Generation via Diffusion Models and Neuromorphic Event Simulation for Critical Biometric Applications. *IEEE Access*, 13, 137327–137347. DOI: 10.1109/ACCESS.2025.3594875
- **Homepage:** https://mali-farooq.github.io/SynAdult/
- **License:** Open research use, non-commercial
- **Maintainer:** Muhammad Ali Farooq (University of Galway)
## 🧩 Dataset Summary
SynAdult is the first multimodal synthetic adult facial dataset designed to support privacy-preserving, bias-aware, and neuromorphic vision research. It integrates diffusion-based 2D face generation, video retargeting, event-based neuromorphic simulation, and 3D morph reconstruction — forming a unified benchmark for facial expression analysis, affective computing, and ethical biometric systems.
## 💡 Key Features
- **Multimodal Coverage:** RGB images, facial expression videos, neuromorphic event data, and 3D facial meshes.
- **Demographic Diversity:** Balanced representation across three ethnicities (Asian, African, White) and two genders, with varying facial expressions and poses.
- **Privacy-Aware Vision:** Event-based representations derived via V2E (video-to-event) simulation enable research on low-latency, privacy-preserving sensing.
- **Synthetic Ethics:** All data are AI-generated and free from personally identifiable content, ensuring ethical compliance with FAIR and GDPR principles.
## 📊 Dataset Composition

| Modality | Description | Purpose |
|---|---|---|
| 2D RGB Faces | Photorealistic adult facial images generated using Stable Diffusion XL (SDXL) fine-tuned with LoRA + DreamBooth | Appearance modelling |
| Video Sequences | Facial expression animations using LivePortrait retargeting (smile, frown, surprise, head-pose changes) | Expression and emotion dynamics |
| Event Data | Asynchronous event streams with high temporal resolution, simulated using V2E | Privacy-preserving motion and gaze research |
| 3D Meshes | Identity-consistent 3D facial geometry derived using UV-IDM | 3D reconstruction, AR/VR, affective computing |
Each subject includes (see the loading sketch after this list):
- 10–12 facial expressions
- 5–10 head pose variations
- Corresponding event and 3D data pairs
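The exact on-disk layout of the release is not documented here, so the sketch below assumes a hypothetical per-subject folder structure (`rgb/`, `video/`, `events/`, `mesh/`) purely to illustrate how the paired modalities could be enumerated; adjust the names to match the actual download.

```python
from pathlib import Path

# Hypothetical layout: synadult/<subject_id>/{rgb,video,events,mesh}/...
# Folder and extension names are assumptions, not the documented release format.
ROOT = Path("synadult")

for subject in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    rgb_images = sorted((subject / "rgb").glob("*.png"))
    videos = sorted((subject / "video").glob("*.mp4"))
    event_files = sorted((subject / "events").glob("*.h5"))
    meshes = sorted((subject / "mesh").glob("*.obj"))
    print(f"{subject.name}: {len(rgb_images)} images, {len(videos)} videos, "
          f"{len(event_files)} event files, {len(meshes)} meshes")
```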
## 🧠 Generation Pipeline
1. **Text-to-Image Synthesis:** SDXL fine-tuned using the FFHQ, UTK, and AgeDB datasets; LoRA modules enabled efficient domain adaptation for age, gender, and ethnicity control (see the generation sketch after this list).
2. **Portrait Animation:** LivePortrait applied for realistic facial motion synthesis aligned with emotion categories.
3. **Event Conversion:** The V2E simulator converts video streams into neuromorphic data to emulate event cameras (see the event-model sketch below).
4. **3D Morphing:** UV-IDM reconstructs detailed 3D geometry from 2D faces for AR/VR and 3D reasoning (see the mesh-inspection sketch below).
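For step 1, a minimal text-to-image sketch with Hugging Face `diffusers` is shown below. The LoRA weight path and the prompt are placeholders (the paper's fine-tuned weights are not assumed to be public); only the base SDXL checkpoint is a known artifact.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base SDXL checkpoint; fp16 keeps memory manageable on a single GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA adapter trained with DreamBooth for demographic control;
# swap in the actual SynAdult LoRA weights if and when they are released.
pipe.load_lora_weights("path/to/synadult_lora")

prompt = ("photorealistic frontal portrait of an adult woman, "
          "neutral expression, studio lighting")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("synthetic_face.png")
```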
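For step 3, V2E itself models sensor noise, bandwidth limits, and pixel non-idealities; the sketch below implements only the core DVS principle it builds on: a pixel emits an event whenever its log intensity drifts past a contrast threshold relative to the last event at that pixel. The threshold value and the toy frame source are illustrative assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, contrast_threshold=0.2):
    """Minimal DVS model: emit (t, x, y, polarity) whenever the log intensity
    at a pixel moves past the threshold relative to its last-event reference."""
    eps = 1e-3                       # avoid log(0)
    ref = np.log(frames[0] + eps)    # per-pixel log intensity at last event
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        # Positive (brightening) and negative (darkening) events.
        for polarity, mask in ((1, diff >= contrast_threshold),
                               (-1, diff <= -contrast_threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
            ref[mask] = log_i[mask]  # reset reference where events fired
    return events

# Toy usage: 10 frames of 64x64 grayscale "video" in [0, 1].
rng = np.random.default_rng(0)
frames = rng.random((10, 64, 64)).astype(np.float32)
ts = np.linspace(0.0, 0.09, 10)
print(len(frames_to_events(frames, ts)), "events")
```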
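For step 4, UV-IDM's own code is not assumed here; the released meshes can be inspected with a generic library such as `trimesh`. The file name is a placeholder, and the snippet assumes the OBJ loads as a single mesh.

```python
import trimesh

# Placeholder path to one reconstructed SynAdult face mesh.
mesh = trimesh.load("subject_0001_face.obj")

# Basic sanity checks on the reconstructed geometry.
print("vertices:", len(mesh.vertices))
print("faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight)
print("bounding box extents:", mesh.extents)
```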
## 📈 Validation & Benchmarks
- **CLIP Score:** 33.3 (avg., female) / 35.9 (avg., male)
- **KID:** 0.064 (male) / 0.080 (female)
- **BRISQUE:** 9.23 (male) / 15.85 (female)
- **Landmark Error (NME):** 1.5–7.5% across modalities
- **Expression Classification:** >92% accuracy using a LibreFace backbone
These results confirm strong realism, alignment, and cross-modality consistency.
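The paper defines the exact evaluation protocol; as an illustration of how a prompt-image CLIP alignment score is typically computed, the sketch below uses the `transformers` CLIP model. The checkpoint choice, prompt, and 100x scaling convention are assumptions, not the authors' setup.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("synthetic_face.png")  # placeholder generated image
text = "photorealistic frontal portrait of an adult woman"

inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)

# Common CLIP-score convention: 100 * max(cosine similarity, 0).
score = 100 * torch.clamp((img * txt).sum(), min=0).item()
print(f"CLIP score: {score:.1f}")
```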
## ⚙️ Intended Use
- Training multimodal models for facial recognition, affective computing, and expression analysis
- Benchmarking privacy-preserving neuromorphic vision
- Testing the robustness of edge-deployed AI systems across diverse adult demographics
## 🔐 Ethics & Licensing
All data are synthetic, generated through a non-identifiable, privacy-compliant pipeline. The dataset is released for research and educational purposes only; redistribution and commercial use are not permitted without written consent.
## 🧾 Citation
Please cite as:

```bibtex
@article{farooq2025synadult,
  title   = {SynAdult: Multimodal Synthetic Adult Dataset Generation via Diffusion Models and Neuromorphic Event Simulation for Critical Biometric Applications},
  author  = {Farooq, Muhammad Ali and Kielty, Paul and Yao, Wang and Corcoran, Peter},
  journal = {IEEE Access},
  volume  = {13},
  pages   = {137327--137347},
  year    = {2025},
  doi     = {10.1109/ACCESS.2025.3594875}
}
```