Add `text-embeddings-inference` tag and snippets
README.md CHANGED
@@ -11,9 +11,10 @@ tags:
 - transformers
 - embeddings
 - mteb
+- text-embeddings-inference
 ---
 
-
+# granite-embedding-reranker-english-r2
 <!-- Provide a quick summary of what the model is/does. -->
 
 **Model Summary:** _granite-embedding-reranker-english-r2_ is a 149M parameter dense cross-encoder model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. This model produces embedding vectors of size 768 with a context length of up to 8192 tokens. Compared to most other open-source models, this model was trained only on open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and -generated datasets.
@@ -191,6 +192,40 @@ for doc, score in reranker_ranked:
     print(f"{score:.4f} | {doc}")
 ```
 
+**Usage with Hugging Face Text Embeddings Inference (TEI):**
+
+This is a simple example of how to deploy the reranking model with [Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference), a blazing fast inference solution for text embedding models, via Docker.
+
+- On CPU:
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id ibm-granite/granite-embedding-reranker-english-r2
+```
+
+- On NVIDIA GPU:
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cuda-latest --model-id ibm-granite/granite-embedding-reranker-english-r2
+```
+
+Then you can send requests to the deployed API via the `/rerank` route (see the [Text Embeddings Inference OpenAPI Specification](https://huggingface.github.io/text-embeddings-inference/) for more details):
+
+```bash
+curl http://0.0.0.0:8080/rerank \
+    -H "Content-Type: application/json" \
+    -d '{
+        "query": "what is the chemical formula of water?",
+        "texts": [
+            "Water is an inorganic compound with the chemical formula H2O.",
+            "In liquid form, H2O is also called '\''water'\'' at standard temperature and pressure.",
+            "The weather is nice today",
+            "Quick sort is a divide and conquer algorithm that sorts by partitioning."
+        ],
+        "raw_scores": false,
+        "return_text": false,
+        "truncate": true,
+        "truncation_direction": "Right"
+    }'
+```
+
 ## Evaluation Results
 
 The performance of the Granite Embedding English reranking model on BEIR, MLDR, and MIRACL benchmarks is reported below. All models are evaluated on the top-20 documents retrieved by the granite-embedding-english-small-r2 and granite-embedding-english-r2 retrievers, respectively.
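For completeness, the `/rerank` request added by this commit can also be issued from Python with only the standard library. This is an illustrative sketch, not part of the model card: the helper names (`rerank`, `pair_results`) are made up here, and it assumes a TEI server started with one of the Docker commands above is listening on `http://0.0.0.0:8080`. TEI's `/rerank` route returns a JSON list of `{"index": ..., "score": ...}` objects, where each `index` points back into the `texts` list sent in the request.

```python
import json
from urllib.request import Request, urlopen


def pair_results(texts, results):
    """Pair TEI /rerank hits with their input texts, best score first.

    `results` is a list of {"index": int, "score": float} objects; "index"
    refers to a position in the `texts` list from the request. Sorting here
    makes the output order explicit rather than relying on server order.
    """
    ordered = sorted(results, key=lambda r: r["score"], reverse=True)
    return [(texts[r["index"]], r["score"]) for r in ordered]


def rerank(query, texts, base_url="http://0.0.0.0:8080"):
    """POST a query/texts pair to a running TEI server's /rerank route."""
    payload = json.dumps({"query": query, "texts": texts, "truncate": True})
    req = Request(
        f"{base_url}/rerank",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return pair_results(texts, json.load(resp))
```

With the server running, `rerank("what is the chemical formula of water?", texts)` returns `(text, score)` pairs that can be printed exactly like the `reranker_ranked` loop earlier in the card.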