Otabor35 committed
Commit 9dc6277 · verified · 1 Parent(s): 955bb3b

Delete README.md

Files changed (1)
  1. README.md +0 -61
README.md DELETED
@@ -1,61 +0,0 @@
---
license: llama2
---

# Edit

P.S.: https://github.com/Cornell-RelaxML/quip-sharp/issues/13

As mentioned in the above issue thread:

- For accurate Hessian generation, use a larger devset (e.g., 4096 samples), and consider accumulating Hessians in fp32 if GPUs with fast fp64 are not available; a sketch of this is given after the list.
- Change the Hessian dataset from a natural-language dataset to a mathematical one, since the target is a math model.
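
For concreteness, here is a minimal sketch of the higher-precision accumulation in the first bullet, assuming QUIP's proxy Hessian is proportional to the sum of x xᵀ over a layer's inputs; `accumulate_hessian` and its arguments are illustrative names, not the QUIP repo's API:

```python
import torch

def accumulate_hessian(layer_inputs, accum_dtype=torch.float32):
    """Accumulate the proxy Hessian H = (1/n) * sum(x x^T) over a devset.

    Upcasting each activation batch before the matmul and keeping the
    running sum in fp32 (or fp64 on GPUs where it is fast) avoids the
    precision loss of summing thousands of fp16 outer products.
    """
    H, n = None, 0
    for x in layer_inputs:                # x: (tokens, hidden_dim) fp16 activations
        x = x.to(accum_dtype)             # upcast BEFORE the matmul, not after
        if H is None:
            H = torch.zeros(x.shape[-1], x.shape[-1],
                            dtype=accum_dtype, device=x.device)
        H += x.T @ x                      # rank-`tokens` update of the running sum
        n += x.shape[0]
    return H / n
```

A larger devset simply means more batches feeding this loop, which tightens the estimate of the expected outer product.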

---

An experimental QUIP 2-bit E8P12 quantization that works in textgen-webui with the QuIP model loader.

Generated using scripts from https://gitee.com/yhyu13/llama_-tools

Original weights: https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0

GPTQ 4-bit: https://huggingface.co/Yhyu13/Xwin-Math-7B-V1.0-GPTQ-4bit

---

This repo used `hessian_offline_llama.py` from the QUIP repo to generate Hessians for the original model before applying QUIP quantization.

Generating Hessians for all 31 layers took quite a long time: about 6 hours for a 7B model on a single RTX 3090. I am not sure whether I made any errors. A minimal sketch of what the offline Hessian pass does is shown below.
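
The following is only a sketch of the idea behind an offline Hessian pass, not the QUIP repo's actual script: hook every `nn.Linear`, run the devset through the model, and accumulate each layer's input outer products (the real script is far more careful about memory, batching, and saving per-layer results to disk):

```python
import torch
import torch.nn as nn

def collect_hessians(model, devset_batches, device="cuda"):
    """Hook every nn.Linear and accumulate H = sum(x x^T) from its inputs."""
    hessians = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Flatten (batch, seq, hidden) -> (tokens, hidden) and upcast.
            x = inputs[0].detach().reshape(-1, inputs[0].shape[-1]).float()
            if name not in hessians:
                hessians[name] = torch.zeros(
                    x.shape[-1], x.shape[-1], device=x.device)
            hessians[name].addmm_(x.T, x)  # H += x^T @ x
        return hook

    handles = [module.register_forward_hook(make_hook(name))
               for name, module in model.named_modules()
               if isinstance(module, nn.Linear)]
    with torch.no_grad():
        for batch in devset_batches:       # batch: LongTensor of token ids
            model(batch.to(device))
    for handle in handles:
        handle.remove()
    return hessians
```

Running every devset sample through the full model once per layer group is what makes this pass slow, which is consistent with the multi-hour runtime above.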

QUIP byproducts are also uploaded.

Perplexity was calculated using `eval_ppl.py` from the QUIP repo; a sketch of the usual windowed recipe is included after the numbers.

QUIP 2-bit PPL:

- wikitext2 perplexity: 11.247852325439453
- c4 perplexity: 16.275997161865234

Original model PPL:

- wikitext2 perplexity: 6.042122840881348
- c4 perplexity: 8.430611610412598

Looks like something is wrong; the quantized model is a disaster.
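
For reference, this is the standard non-overlapping-window perplexity computation that wikitext2/c4 numbers like those above usually come from; a minimal sketch, assuming a Hugging Face-style causal LM, and not necessarily identical to `eval_ppl.py` in stride or detail:

```python
import torch

@torch.no_grad()
def perplexity(model, token_ids, ctx_size=2048, device="cuda"):
    """exp(mean NLL) over non-overlapping ctx_size windows of a corpus."""
    total_nll, n_tokens = 0.0, 0
    for i in range(0, token_ids.numel() - 1, ctx_size):
        chunk = token_ids[i : i + ctx_size + 1].to(device)
        if chunk.numel() < 2:              # nothing left to predict
            break
        logits = model(chunk[:-1].unsqueeze(0)).logits[0]
        total_nll += torch.nn.functional.cross_entropy(
            logits, chunk[1:], reduction="sum").item()
        n_tokens += chunk.numel() - 1
    return float(torch.exp(torch.tensor(total_nll / n_tokens)))
```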

---

Here is some testing done in textgen-webui, using Q&A pairs from this dataset: https://huggingface.co/datasets/TIGER-Lab/MathInstruct

It seems the 2-bit model could hardly answer any question correctly compared to the GPTQ 4-bit version. But the https://huggingface.co/relaxml/Llama-2-13b-E8P-2Bit model made by the QUIP authors seems to work fine, just as good as GPTQ.

So in conclusion, this is a very experimental model that I made just to test QUIP, and I may have made some errors. But I think it is a good start.

QUIP 2-bit version:
![Alt text](img/XMath_Quip2.png)

GPTQ 4-bit version:
![Alt text](img/textgen-xinmath.png)