patrickvonplaten and pandora-s committed on
Commit 352ae5a · verified · 1 Parent(s): 7ac711f

[AUTO] CVST Tokenizer Badger (#43)


- [AUTO] CVST Tokenizer Badger (cbe91cf1bc5582ba4f5f3f081a99b9f2d590b27b)


Co-authored-by: pandora <[email protected]>

Files changed (1):
  1. README.md +56 -0
README.md CHANGED
@@ -11,6 +11,62 @@ license_link: https://mistral.ai/licences/MNPL-0.1.md

# Model Card for Codestral-22B-v0.1

> [!WARNING]
> 🚫
> The `transformers` tokenizer is not properly configured. Make sure that your encoding and decoding are correct by using `mistral-common` as shown below:

## Encode and Decode with `mistral_common`

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"

# Load the v3 tokenizer used by Codestral-22B-v0.1
tokenizer = MistralTokenizer.v3()

# Wrap the prompt in a chat request and encode it to token ids
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
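
The heading above also mentions decoding; as a quick round-trip sanity check, here is a minimal sketch reusing `tokens` from the snippet above:

```py
# Map the token ids back to a string; the decoded text should
# contain the original prompt
text = tokenizer.decode(tokens)
print(text)
```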

## Inference with `mistral_inference`

```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate

# Load the weights from the folder referenced by mistral_models_path
model = Transformer.from_folder(mistral_models_path)

# Greedy decoding (temperature=0.0) until EOS or 64 generated tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)
```
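
The two snippets above assume the raw, `mistral_inference`-style weights already exist under `mistral_models_path`. A minimal sketch of one way to fetch them with `huggingface_hub` follows; the `allow_patterns` file names are an assumption about how the raw weights are published in this repo:

```py
from pathlib import Path

from huggingface_hub import snapshot_download

mistral_models_path = Path.home().joinpath("mistral_models", "Codestral-22B-v0.1")
mistral_models_path.mkdir(parents=True, exist_ok=True)

# Download only the raw-weight files (assumed names) rather than the
# full transformers checkpoint
snapshot_download(
    repo_id="mistralai/Codestral-22B-v0.1",
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=mistral_models_path,
)
```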

## Inference with Hugging Face `transformers`

```py
import torch

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Codestral-22B-v0.1")
model.to("cuda")

# `tokens` is the plain Python list produced by mistral_common above;
# generate() expects a (batch, seq_len) tensor on the model's device
input_ids = torch.tensor([tokens]).to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)

# Decode with the mistral_common tokenizer, not the transformers one
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```

> [!TIP]
> PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome! A sketch of the comparison such a PR should pass is shown below.
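
As a rough illustration of what "1-to-1" means here, the following sketch compares the ids produced by both tokenizers on the same prompt. It assumes the repo ships a chat template for `transformers`; until the tokenizer is fixed, the assertion is expected to fail.

```py
from transformers import AutoTokenizer

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

prompt = "Explain Machine Learning to me in a nutshell."

# Reference ids from the mistral_common tokenizer
ref_ids = MistralTokenizer.v3().encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content=prompt)])
).tokens

# Candidate ids from the transformers tokenizer's chat template
hf_tokenizer = AutoTokenizer.from_pretrained("mistralai/Codestral-22B-v0.1")
hf_ids = hf_tokenizer.apply_chat_template([{"role": "user", "content": prompt}])

# A corrected tokenizer should make this assertion pass
assert hf_ids == ref_ids, "transformers ids diverge from the mistral_common reference"
```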

---

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code); a FIM encoding sketch follows this list
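
A minimal FIM sketch with `mistral_common` (the prefix/suffix strings are arbitrary examples):

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.fim.request import FIMRequest

tokenizer = MistralTokenizer.v3()

# Ask the model to fill in the function body between a signature (prefix)
# and a trailing line (suffix)
prefix = "def add(a: int, b: int) -> int:\n"
suffix = "    return result\n"

request = FIMRequest(prompt=prefix, suffix=suffix)

# Token ids ready to be passed to generate(), as in the
# mistral_inference section above
tokens = tokenizer.encode_fim(request).tokens
```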