Hey @elsatch. Eventually, yes. But it'll take a lot more work to get right!
Right now, we've got a fairly diverse and representative benchmark of English-language legal tasks. Adding one Chinese task here and one German task there would immediately skew results towards whichever model is the most 'multilingual', rather than the one best at generalized legal retrieval.
To control for that, we'd essentially need to create multiple copies of MLEB, each covering a different language, which would ensure that, for example, a model can't be regarded as the best at Arabic legal retrieval simply because it does well on two specific contract-focused datasets.
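To make the skew concrete, here's a toy example with entirely made-up scores: a specialist model can win every English task yet lose a pooled cross-language average to a weaker-but-multilingual generalist.

```python
# Toy, made-up per-task scores purely to illustrate the aggregation skew.
english_tasks = {"specialist": [0.82, 0.80, 0.84], "generalist": [0.75, 0.74, 0.76]}
chinese_task = {"specialist": [0.20], "generalist": [0.70]}
german_task = {"specialist": [0.25], "generalist": [0.68]}

def mean(xs):
    return sum(xs) / len(xs)

for model in ("specialist", "generalist"):
    pooled = english_tasks[model] + chinese_task[model] + german_task[model]
    print(model, "| English mean:", round(mean(english_tasks[model]), 3),
          "| pooled mean:", round(mean(pooled), 3))
# specialist | English mean: 0.82 | pooled mean: 0.582
# generalist | English mean: 0.75 | pooled mean: 0.726
```

The specialist wins every single English task yet loses the pooled average, which is exactly why per-language leaderboards are needed.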
Building MLEB took a lot of work, much of it spent on curation rather than creation. Many datasets didn't make the cut, either because they were low quality or because they were too trivial.
I have a law degree and previously led AI at the Australian Attorney-General's Department, so I've been able to draw on that knowledge and experience to understand what's good and what isn't.
We'd need help from people with similar backgrounds in other jurisdictions to ensure that their language's data is also high quality and of trustworthy provenance.
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the Sentence Transformers project to Hugging Face. It will remain a community-driven, open-source project under the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged, and the project will continue to prioritize transparency, collaboration, and broad accessibility.
Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf
We see an increasing desire among companies to move from large LLM APIs to local models for better control and privacy, and that shift is reflected in the library's growth: in the last 30 days alone, Sentence Transformers models have been downloaded more than 270 million times, second only to transformers.
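For anyone unfamiliar with what that local workflow looks like, here's a minimal sketch; the model name is just an example, and everything runs on your own machine after the first download:

```python
from sentence_transformers import SentenceTransformer

# Load a small embedding model locally ("all-MiniLM-L6-v2" is just an example).
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)  # shape: (3, 384)

# Pairwise similarity matrix between all sentences.
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```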
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, both for their dedication to the project and for their trust in me, now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That decision proved very valuable for the embedding and information-retrieval community, and I believe granting Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
Thanks @tomaarsen! We're big fans of your work, some of which helped build Kanon 2 Embedder!
The vast majority of our training data was out-of-distribution relative to MLEB; however, there was definitely some overlap, which is hard to avoid when training only on in-domain data. We took extra care, though, to ensure there was no overlap with any test sets, including test sets not in MLEB (e.g., AILA, which was of relatively low quality anyway, and LegalBench).
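As a rough illustration of the kind of check involved (not our exact pipeline), exact-match filtering of training data against test sets can be as simple as hashing lightly normalized text; all names below are illustrative:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Hash lightly normalized text for exact-match overlap checks."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Illustrative stand-ins for the real corpora.
train_examples = ["The lessee shall maintain the premises ...", "..."]
test_examples = ["The lessee shall maintain the premises ...", "..."]

test_fps = {fingerprint(t) for t in test_examples}
filtered_train = [t for t in train_examples if fingerprint(t) not in test_fps]
print(f"Dropped {len(train_examples) - len(filtered_train)} overlapping examples")
```

Exact matching only catches verbatim duplicates; near-duplicate detection (e.g., MinHash or embedding-based similarity) is typically layered on top.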
Ultimately, the most important test set is your own real-world data, which is why we encourage people to try Kanon 2 Embedder for themselves and see whether it improves performance 😊
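A lightweight way to do that (a sketch, not an official harness; the random vectors below stand in for embeddings from whichever model or API you're testing) is to measure recall@k on a handful of your own query → relevant-document pairs:

```python
import numpy as np

def recall_at_k(query_vecs: np.ndarray, doc_vecs: np.ndarray,
                relevant_idx: list[int], k: int = 5) -> float:
    """Fraction of queries whose relevant document lands in the top-k results."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = q @ d.T  # cosine similarity, queries x docs
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean([rel in row for rel, row in zip(relevant_idx, topk)]))

# Dummy vectors stand in for real embeddings of your queries and documents.
rng = np.random.default_rng(0)
queries, docs = rng.normal(size=(4, 16)), rng.normal(size=(20, 16))
print(recall_at_k(queries, docs, relevant_idx=[0, 3, 7, 9]))
```

Run the same pairs through each candidate embedder and compare the scores; even a few dozen real queries are usually more telling than any public leaderboard.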