renhehuang committed on
Commit d380886 · verified · 1 Parent(s): e35dfc4

Upload bert-traditional-chinese-classifier with accuracy 87.71%

README.md ADDED
@@ -0,0 +1,107 @@
+ ---
+ language:
+ - zh
+ tags:
+ - text-classification
+ - chinese
+ - traditional-chinese
+ - bert
+ - pytorch
+ license: apache-2.0
+ datasets:
+ - custom
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: bert-traditional-chinese-classifier
+   results:
+   - task:
+       type: text-classification
+       name: Traditional Chinese Classification
+     metrics:
+     - type: accuracy
+       value: 0.8771
+       name: Accuracy
+     - type: f1
+       value: 0.8771
+       name: F1 Score
+ ---
+
+ # BERT Traditional Chinese Classifier v7
+
+ A BERT classification model for distinguishing Mainland-style Traditional Chinese from Taiwan-style Traditional Chinese.
+
+ ## Model Description
+
+ - **Base model**: ckiplab/bert-base-chinese
+ - **Task**: Traditional Chinese text classification (Mainland-style vs. Taiwan-style)
+ - **Accuracy**: 87.71%
+ - **Training data**: 156,824 samples
+
+ ## Features
+
+ - ✅ Long-text support (maximum length 384 tokens)
+ - ✅ Focal Loss to handle class imbalance
+ - ✅ Multi-Sample Dropout for better generalization (see the sketch after this list)
+ - ✅ Layer-wise learning rates
+ - ✅ Gradual-unfreezing training schedule
+
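+ The classification head itself is not published in this repository. The snippet below is a minimal sketch of a Multi-Sample Dropout head of the kind listed above, assuming a 768-dimensional pooled BERT output and two labels; all class and parameter names are illustrative, not the author's code.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MultiSampleDropoutHead(nn.Module):
+     """Average the classifier output over several dropout masks (illustrative sketch)."""
+
+     def __init__(self, hidden_size=768, num_labels=2, num_samples=5, p=0.3):
+         super().__init__()
+         self.dropouts = nn.ModuleList([nn.Dropout(p) for _ in range(num_samples)])
+         self.classifier = nn.Linear(hidden_size, num_labels)
+
+     def forward(self, pooled_output):
+         # Run the same linear layer under different dropout masks and average the logits.
+         logits = [self.classifier(dropout(pooled_output)) for dropout in self.dropouts]
+         return torch.stack(logits, dim=0).mean(dim=0)
+ ```
+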
+ ## Usage
+
+ ```python
+ from transformers import AutoTokenizer
+ import torch
+
+ # Load the tokenizer and the model checkpoint
+ tokenizer = AutoTokenizer.from_pretrained("renhehuang/bert-traditional-chinese-classifier")
+ # The checkpoint is not a standard AutoModel: unpickling it requires the custom
+ # model class used during training to be importable.
+ model = torch.load("pytorch_model.bin", map_location="cpu")
+ model.eval()
+
+ # Predict
+ text = "您的繁體中文文本"
+ inputs = tokenizer(text, return_tensors="pt", max_length=384, truncation=True)
+ with torch.no_grad():
+     outputs = model(**inputs)
+ prediction = outputs.logits.argmax(-1).item()
+
+ # 0: Mainland-style Traditional Chinese, 1: Taiwan-style Traditional Chinese
+ label = "大陸繁體" if prediction == 0 else "台灣繁體"
+ print(f"Prediction: {label}")
+ ```
+
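+ If unpickling the full model object fails because the custom class is not importable, one hedged fallback is to check whether the checkpoint is a plain state_dict and inspect its keys to see which head would need to be rebuilt. This is a sketch under that assumption, not a statement about the file's actual contents:
+
+ ```python
+ import torch
+
+ # Load the raw checkpoint on CPU. On recent PyTorch versions, unpickling a full
+ # model object may additionally require weights_only=False.
+ state = torch.load("pytorch_model.bin", map_location="cpu")
+
+ if isinstance(state, dict):
+     # A plain state_dict: the key names reveal the encoder/head layout to rebuild.
+     print(list(state.keys())[:20])
+ else:
+     # A pickled nn.Module: ready to use directly.
+     model = state.eval()
+ ```
+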
+ ## Training Configuration
+
+ - **Batch Size**: 16
+ - **Learning Rate**: 2e-05 (base), 4e-05 (head)
+ - **Epochs**: 4
+ - **Max Length**: 384
+ - **Loss Function**: Focal Loss (gamma=2.0)
+
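+ For reference, here is a minimal sketch of a focal loss with gamma=2.0 (training_config.json in this commit also lists focal_alpha=0.4, used below). This is an illustrative implementation, not the author's exact training code:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def focal_loss(logits, targets, gamma=2.0, alpha=0.4):
+     """Focal loss for imbalanced binary classification (illustrative sketch).
+
+     alpha weights class 1 and (1 - alpha) weights class 0; gamma down-weights easy examples.
+     """
+     ce = F.cross_entropy(logits, targets, reduction="none")  # per-example cross-entropy
+     pt = torch.exp(-ce)                                      # probability of the true class
+     alpha_t = alpha * targets.float() + (1 - alpha) * (1 - targets.float())
+     return (alpha_t * (1 - pt) ** gamma * ce).mean()
+ ```
+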
+ ## Performance
+
+ ### Overall
+
+ - Accuracy: 87.71%
+
+ ### Stratified performance (by text length)
+
+ See the evaluation report for details.
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @misc{bert-traditional-chinese-classifier,
+   author       = {renhehuang},
+   title        = {BERT Traditional Chinese Classifier},
+   year         = {2025},
+   publisher    = {Hugging Face},
+   howpublished = {\url{https://huggingface.co/renhehuang/bert-traditional-chinese-classifier}}
+ }
+ ```
+
+ ## License
+
+ Apache 2.0
+
+ ## Contact
+
+ For questions, please open an issue on the Hugging Face model page or on GitHub.
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizerFast",
+   "transformers_version": "4.57.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 21128
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d572859e01ace15886178219e2e460a45169cb651e6fc72c11cc6b0a483e6ca7
+ size 409201943
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
training_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "model_name": "ckiplab/bert-base-chinese",
+   "max_length": 384,
+   "batch_size": 16,
+   "learning_rate_base": 2e-05,
+   "learning_rate_head": 4e-05,
+   "epochs": 4,
+   "use_focal_loss": true,
+   "focal_gamma": 2.0,
+   "focal_alpha": 0.4,
+   "label_smoothing": 0.05,
+   "num_train_samples": 156824,
+   "num_eval_samples": 39207,
+   "accuracy": 0.877139286351927
+ }
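
The learning_rate_base / learning_rate_head split above implies discriminative learning rates for the pretrained encoder and the classification head. Below is a minimal sketch of how such parameter groups might be wired up, assuming the model exposes separate encoder and head modules (the function and argument names are illustrative, not taken from the published training code):

```python
from torch.optim import AdamW

def build_optimizer(encoder, head, lr_base=2e-5, lr_head=4e-5, weight_decay=0.01):
    """Two parameter groups: a lower LR for the pretrained encoder, a higher one for the new head."""
    return AdamW(
        [
            {"params": encoder.parameters(), "lr": lr_base},
            {"params": head.parameters(), "lr": lr_head},
        ],
        weight_decay=weight_decay,
    )
```
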
vocab.txt ADDED
The diff for this file is too large to render. See raw diff