🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.
We see a growing desire from companies to move from large LLM APIs to local models for better control and privacy, a shift reflected in the library's growth: in just the last 30 days, Sentence Transformer models have been downloaded over 270 million times, second only to transformers.
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, both for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That decision proved very valuable for the embedding & Information Retrieval community, and I believe granting Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
We introduced 🥬 TinyLettuce: lightweight hallucination detection with 17–68M encoders
Instead of relying on huge LLM judges that are slow and costly, we built tiny Ettin-based detectors that you can train in hours on a single GPU and run efficiently on CPU.
Here’s what’s inside:
1️⃣ Synthetic Data Generator
A toolkit to create hallucinations with controllable error types: no manual annotation bottlenecks.
2️⃣ TinyLettuce Models (17–68M)
Compact classifiers built on Ettin encoders, designed for efficiency (8K context, modern transformer backbone); see the inference sketch after this list.
3️⃣ Data & Training Utilities
Scripts and APIs to generate domain-specific labeled pairs at scale, plus ~3.6k examples we used for training.
4️⃣ Open and MIT-licensed
Code, data, and models are freely available for research and production.
5️⃣ Performance Highlights
- TinyLettuce-17M reaches 90.87% F1 (synthetic), outperforming GPT-OSS-120B (83.38%) and Qwen3-235B (79.84%)
- Runs in real-time on CPU: low latency, minimal memory, and pennies per million checks
- Shows competitive results on RAGTruth benchmarks
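To make this concrete, here is a minimal inference sketch in the style of the LettuceDetect API; the TinyLettuce model id and the exact argument names are assumptions for illustration, so check the release materials for the real ones:

```python
# Sketch of span-level hallucination detection with a TinyLettuce checkpoint.
# The model id below is hypothetical; substitute the released TinyLettuce model.
from lettucedetect.models.inference import HallucinationDetector

detector = HallucinationDetector(
    method="transformer",
    model_path="KRLabsOrg/tinylettuce-ettin-17m-en",  # hypothetical model id
)

predictions = detector.predict(
    context=["The Eiffel Tower is 330 metres tall and stands in Paris."],
    question="How tall is the Eiffel Tower?",
    answer="The Eiffel Tower is 500 metres tall.",
    output_format="spans",  # return the unsupported (hallucinated) spans
)
print(predictions)
```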
😎 I just published Sentence Transformers v5.1.0, and it's a big one. 2x-3x speedups for SparseEncoder models via the ONNX and/or OpenVINO backends, easier distillation data preparation with hard negatives mining, and more:
1️⃣ Faster ONNX and OpenVINO backends for SparseEncoder models
To get started, usage is as simple as passing backend="onnx" or backend="openvino" when initializing a SparseEncoder, but I also included utility functions for optimization, dynamic quantization, and static quantization, plus benchmarks.
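Getting started can look like this; the SPLADE checkpoint below is just an illustrative model id:

```python
from sentence_transformers import SparseEncoder

# Initialize a sparse embedding model with the faster ONNX backend
# (use backend="openvino" for OpenVINO); the model id is illustrative.
model = SparseEncoder("naver/splade-cocondenser-ensembledistil", backend="onnx")

embeddings = model.encode([
    "ONNX Runtime speeds up SparseEncoder inference.",
    "OpenVINO is another supported backend.",
])
print(embeddings.shape)  # sparse vectors over the model's vocabulary
```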
2️⃣ New n-tuple-scores output format from mine_hard_negatives
This new output format is immediately compatible with losses like MarginMSELoss and SparseMarginMSELoss for training SentenceTransformer, CrossEncoder, and SparseEncoder models.
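A mining sketch with the new format, assuming a small illustrative model and a public pair dataset:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import mine_hard_negatives

# Any embedding model can score candidates; this small one is illustrative
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# A (query, answer) pair dataset; swap in your own pairs
dataset = load_dataset("sentence-transformers/natural-questions", split="train")

mined = mine_hard_negatives(
    dataset,
    model,
    num_negatives=5,
    output_format="n-tuple-scores",  # tuples plus similarity scores, ready for distillation
)
```

The mined dataset can then be fed into MarginMSELoss-style distillation training.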
3️⃣ Gathering across devices
When doing multi-GPU training with a loss that has in-batch negatives (e.g. MultipleNegativesRankingLoss), you can now use gather_across_devices=True to also use the in-batch negatives from the other devices! Essentially a free lunch, with pretty big potential impact in my evals.
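A minimal sketch of turning this on; the parameter name comes from the release notes, and the model id is illustrative:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# During multi-GPU training, gather embeddings from all devices so every
# batch also uses the other devices' samples as extra in-batch negatives
loss = MultipleNegativesRankingLoss(model, gather_across_devices=True)
```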
4️⃣ Trackio support
If you also upgrade transformers and install trackio with pip install trackio, then your experiments will automatically be tracked locally with trackio as well. Just open localhost and have a look at your losses/evals: no logins, no metric uploading.
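As a sketch, assuming trackio plugs into the usual transformers reporting integrations, you can also opt in explicitly:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# With trackio installed it is picked up automatically; report_to="trackio"
# (assumed to follow the standard report_to mechanism) makes it explicit.
args = SentenceTransformerTrainingArguments(
    output_dir="models/my-run",
    report_to="trackio",
)
```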
5️⃣ MTEB Documentation
We've added some documentation on properly evaluating SentenceTransformer models with MTEB. It's rudimentary, as the documentation on the MTEB side is already great, but it should get you started.
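For reference, a minimal run with the mteb package looks roughly like this; the task choice is just an example:

```python
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Select any MTEB task(s); this classification task is only an example
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```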
Plus many more smaller features & fixes (crash fixes, compatibility with datasets v4, FIPS compatibility, etc.).
Big thanks to all of the contributors for helping with the release; many of the features in it were proposed by others. I have a big list of future potential features that I'd love to add, but I'm