# **Face-Confidence-SigLIP2 (Experimental)**
> **Face-Confidence-SigLIP2** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **binary image classification**. It is trained to distinguish between images of **confident faces** and **unconfident faces** using the **SiglipForImageClassification** architecture.
---
> Image Scale (Optimal): 256 × 256
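Since the card recommends 256 × 256 inputs, images can be rescaled with Pillow before inference. A minimal sketch (the in-memory image below stands in for a real file you would open with `Image.open`):

```python
from PIL import Image

# Hypothetical input; in practice load your own file, e.g. Image.open("face.jpg")
img = Image.new("RGB", (640, 480))

# Resize to the card's suggested optimal scale before passing to the processor
img = img.convert("RGB").resize((256, 256), Image.BILINEAR)
print(img.size)  # (256, 256)
```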
## **Inference Code**
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor (placeholder repo id; replace with the actual Hub path)
model_name = "Face-Confidence-SigLIP2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Binary label mapping (order assumed; verify against the model's config.id2label)
labels = {0: "Confident Face", 1: "Unconfident Face"}

def classify_face_confidence(image):
    """Predict confidence scores for a face image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        probs = torch.nn.functional.softmax(outputs.logits, dim=1).squeeze().tolist()

    return {labels[i]: round(probs[i], 3) for i in range(len(probs))}

# Gradio interface for interactive classification
iface = gr.Interface(
    fn=classify_face_confidence,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Face-Confidence-SigLIP2",
    description="Upload a face image to classify it as confident or unconfident."
)

if __name__ == "__main__":
    iface.launch()
```
---
## **Demo Inference (Image)**







## **Intended Use**
**Face-Confidence-SigLIP2** can be used for: