File size: 258 Bytes (commit bfb2dff)
default_stage:
  default_modifiers:
    QuantizationModifier:
      targets: [Linear]
      ignore: ['re:.*lm_head', 're:.*self_attn', 're:.*router', 're:.*vision_model.*', 're:.*multi_modal_projector.*',
        Llama4TextAttention]
      scheme: FP8_BLOCK
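
For context, a recipe of this form is the kind consumed by llm-compressor's oneshot entry point. The sketch below is not part of this repository; it shows one plausible way such a recipe could be applied. The model id, output directory, and the in-code QuantizationModifier mirror of the YAML are illustrative assumptions.

    # Hedged sketch: applying an FP8_BLOCK recipe like the one above with
    # llm-compressor. Model id and save directory are placeholders.
    from transformers import AutoProcessor, Llama4ForConditionalGeneration

    from llmcompressor import oneshot
    from llmcompressor.modifiers.quantization import QuantizationModifier

    MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder checkpoint
    SAVE_DIR = "llama4-fp8-block"                           # placeholder output path

    model = Llama4ForConditionalGeneration.from_pretrained(MODEL_ID, torch_dtype="auto")
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    # In-code equivalent of the YAML recipe: FP8 block-wise quantization of
    # Linear layers, leaving the lm_head, attention, router, vision tower,
    # and multimodal projector in their original precision.
    recipe = QuantizationModifier(
        targets="Linear",
        scheme="FP8_BLOCK",
        ignore=[
            "re:.*lm_head",
            "re:.*self_attn",
            "re:.*router",
            "re:.*vision_model.*",
            "re:.*multi_modal_projector.*",
            "Llama4TextAttention",
        ],
    )

    # FP8_BLOCK uses block-wise weight scales with dynamic activation scales,
    # so no calibration dataset is passed to oneshot here.
    oneshot(model=model, recipe=recipe)

    model.save_pretrained(SAVE_DIR, save_compressed=True)
    processor.save_pretrained(SAVE_DIR)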