---
language: en
tags:
- question-answering
- squad
- transformers
datasets:
- squad
metrics:
- exact_match
- f1
model-index:
- name: HariomSahu/albert-squad-qa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: SQuAD
type: squad
metrics:
- type: exact_match
value: N/A
- type: f1
value: 89.93540108105752
---
# albert-base-v2 fine-tuned on SQuAD
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the SQuAD dataset.
## Training Details
### Training Hyperparameters
- **Model**: albert-base-v2
- **Dataset**: SQuAD
- **Optimizer**: adamw
- **Learning Rate Scheduler**: cosine_with_restarts
- **Learning Rate**: 6e-05
- **Batch Size**: 28 per device
- **Total Batch Size**: 224
- **Epochs**: 6 (with early stopping)
- **Weight Decay**: 0.005
- **Warmup Ratio**: 0.08
- **Max Gradient Norm**: 0.5
### Early Stopping
- **Patience**: 4
- **Metric**: f1
- **Best Epoch**: 2
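
The hyperparameters and early-stopping settings above roughly correspond to the following `transformers` `TrainingArguments`/`Trainer` setup. This is an illustrative sketch rather than the exact training script; `model`, `train_dataset`, `eval_dataset`, and `compute_metrics` are placeholders assumed to be defined elsewhere.
```python
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

# Illustrative sketch of the configuration described above, not the exact
# script used to train this checkpoint.
training_args = TrainingArguments(
    output_dir="albert-squad-qa",
    learning_rate=6e-5,
    lr_scheduler_type="cosine_with_restarts",
    per_device_train_batch_size=28,   # 224 total across all devices
    num_train_epochs=6,
    weight_decay=0.005,
    warmup_ratio=0.08,
    max_grad_norm=0.5,
    eval_strategy="epoch",            # called `evaluation_strategy` in older transformers versions
    save_strategy="epoch",
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="f1",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=4)],
)
trainer.train()
```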
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("HariomSahu/albert-squad-qa")
model = AutoModelForQuestionAnswering.from_pretrained("HariomSahu/albert-squad-qa")

# Example usage
question = "What is the capital of France?"
context = "France is a country in Europe. Its capital city is Paris."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end positions and decode the tokens between them
start_index = outputs.start_logits.argmax()
end_index = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start_index : end_index + 1])
print(f"Answer: {answer}")
```
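Alternatively, the high-level `pipeline` API handles tokenization and answer-span decoding in a single call:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="HariomSahu/albert-squad-qa")
result = qa(
    question="What is the capital of France?",
    context="France is a country in Europe. Its capital city is Paris.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'Paris'}
```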
## Evaluation Results
The model achieved the following results on the evaluation set:
- **Exact Match**: 80.4541
- **F1 Score**: 88.7676
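
These numbers follow the standard SQuAD scoring. To score your own predictions the same way, the `evaluate` library's `squad` metric can be used; the snippet below is a minimal sketch with a made-up question ID and answer, not data from the actual evaluation set.
```python
import evaluate

# Minimal sketch of SQuAD-style Exact Match / F1 scoring.
squad_metric = evaluate.load("squad")
predictions = [{"id": "q1", "prediction_text": "Paris"}]
references = [{"id": "q1", "answers": {"text": ["Paris"], "answer_start": [51]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```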
## Training Configuration Hash
Config Hash: a8d23824
This hash can be used to reproduce the exact training configuration.