SFLY5 committed (verified)
Commit 2204b18 · Parent(s): f7fa4d0

Add files using upload-large-folder tool

Files changed (1): README.md (+5 -39)
README.md CHANGED
@@ -24,7 +24,7 @@ tags:
 </div>
 
 <div align="center" style="line-height: 1;">
-<a href="LICENSE" style="margin: 2px;">
+<a href="#license" style="margin: 2px;">
 <img alt="License" src="https://img.shields.io/badge/License-Apache2.0-A5de54" style="display: inline-block; vertical-align: middle;"/>
 </a>
 </div>
@@ -35,11 +35,11 @@ tags:
 
 The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:
 
-1. **Multimodal Heterogeneous MoE Pre-Training**: Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a *heterogeneous MoE structure*, incorporated *modality-isolated routing*, and employed *router orthogonal loss* and *multimodal token-balanced loss*. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.
+1. **Multimodal Heterogeneous MoE Pre-Training:** Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text understanding and generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a *heterogeneous MoE structure*, incorporated *modality-isolated routing*, and employed *router orthogonal loss* and *multimodal token-balanced loss*. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.
 
-2. **Scaling-Efficient Infrastructure**: We propose a novel heterogeneous hybrid parallelism and hierarchical load balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose Multi-Expert Parallel Collaboration method and Convolutional Code Quantization algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.
+2. **Scaling-Efficient Infrastructure:** We propose a novel heterogeneous hybrid parallelism and hierarchical load balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose a *multi-expert parallel collaboration* method and a *convolutional code quantization* algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.
 
-3. **Modality-Specific Post-training**: To meet the diverse requirements of real-world applications, we fine-tuned variants of the pretrained model for specific modalities. Our *LLMs* are optimized for general-purpose language understanding and generation. The *VLMs* focuses on visual-language understanding and supports both thinking and no-thinking mode. Each model employed a combination of *Supervised Fine-tuning (SFT)* *Direct Preference Optimization (DPO)* or a modified reinforcement learning method named *Unified Preference Optimization (UPO)* for post-training.
+3. **Modality-Specific Post-Training:** To meet the diverse requirements of real-world applications, we fine-tuned variants of the pre-trained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focus on visual-language understanding and support both thinking and non-thinking modes. Each model employed a combination of *Supervised Fine-tuning (SFT)*, *Direct Preference Optimization (DPO)*, or a modified reinforcement learning method named *Unified Preference Optimization (UPO)* for post-training.
 
 To ensure the stability of multimodal joint training, we adopt a staged training strategy. In the first and second stages, we train only the text-related parameters, enabling the model to develop strong fundamental language understanding as well as long-text processing capabilities. The final multimodal stage extends capabilities to images and videos by introducing additional parameters, including a ViT for image feature extraction, an adapter for feature transformation, and visual experts for multimodal understanding. At this stage, text and visual modalities mutually enhance each other. After pretraining on trillions of tokens, we obtained ERNIE-4.5-VL-424B-A47B-Base.
 
@@ -58,45 +58,11 @@ ERNIE-4.5-VL-424B-A47B-Base is a multimodal MoE Base model, with 424B total para
 | Vision Experts (Total / Activated) | 64 / 8 |
 | Context Length | 131072 |
 
-## Benchmark
-
-| Capability | Benchmark | ERNIE-4.5-VL-424B-A47B-Base | GPT-4.1 |
-| ----------------- | ------------------- | --------------------------- | ------- |
-| Average | | | |
-| Visual Perception | CVBench | | 82.49 |
-| | CountBench | | |
-| | RealWorldQA | | 77.25 |
-| | VLMAreBlind | | |
-| Knowledge | CCBench | | 78.65 |
-| Chart&Doc&OCR | OCRBench | | 83.00 |
-| | TableVQA | | 72.13 |
-| | ChartQA | | 82.56 |
-| | DocVQA(val) | | 87.84 |
-| | ChartXiv-Reasoning | | 58.30 |
-| Vision-Reasoning | VisualPuzzle | | 45.63 |
-| | Logicvista | | |
-| STEM | OlympiadBench | | 39.95 |
-| | MathVista(testmini) | | 70.90 |
-| | MathVerse | | 62.46 |
-| | MMMU(val) | | 73.07 |
-| | AI2D | | 95.34 |
-| | MathVision | | 50.46 |
-| Video | MVBench | | 64.15 |
-| | VideoMME w/o subs | | 74.49 |
-| | VideoMME w/ subs | | 78.90 |
-| | MLVU | | 73.33 |
-| | LongVideoBench | | 63.47 |
-
 ## Quickstart
 
 ### vLLM inference
 
-vLLM is currently being adapted, priority can be given to using our forked repository [vllm](https://github.com/CSWYF3634076/vllm/tree/ernie). We are working with the community to fully support ERNIE4.5 models, stay tuned.
-
-```bash
-# 80G * 16 GPU
-vllm serve baidu/ERNIE-4.5-VL-424B-A47B-Base-PT --trust-remote-code
-```
+We are working with the community to fully support ERNIE 4.5 models; stay tuned.
 
 ## License
 
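As background on the *modality-isolated routing* described in the README's first innovation bullet: the idea is that each token's top-k expert selection is restricted to the expert pool of its own modality, so text and vision tokens never compete for the same experts. The sketch below is a minimal, hypothetical illustration of that constraint only — the function name, expert layout, and scores are assumptions, not ERNIE 4.5's actual implementation (which further combines this with router orthogonal loss and multimodal token-balanced loss).

```python
NEG_INF = float("-inf")
TEXT, VISION = 0, 1

def modality_isolated_topk(router_logits, modality, n_text_experts, top_k=2):
    """For each token, pick the top-k experts from its own modality's pool.

    Experts are indexed text-first: [0, n_text_experts) are text experts,
    the rest are vision experts. Illustrative sketch, not production code.
    """
    chosen = []
    for scores, mod in zip(router_logits, modality):
        # Mask out experts belonging to the other modality.
        masked = [
            s if ((i < n_text_experts) == (mod == TEXT)) else NEG_INF
            for i, s in enumerate(scores)
        ]
        # Expert indices sorted by descending (masked) router score.
        ranked = sorted(range(len(masked)), key=lambda i: -masked[i])
        chosen.append(ranked[:top_k])
    return chosen

# Tiny demo: 2 text experts + 2 vision experts, 3 tokens.
logits = [[0.9, 0.1, 0.5, 0.4],   # text token
          [0.2, 0.8, 0.3, 0.7],   # vision token
          [0.1, 0.6, 0.9, 0.2]]   # text token
modality = [TEXT, VISION, TEXT]
picks = modality_isolated_topk(logits, modality, n_text_experts=2, top_k=1)
# picks -> [[0], [3], [1]]: every token lands inside its own modality's experts
```

Note that even when a vision expert scores highest overall (0.9 for the third token), a text token still routes within the text pool — that isolation is what keeps one modality's gradient traffic from crowding out the other's experts during joint training.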