kgreenewald committed
Commit 14222a1 · verified · 1 Parent(s): 6f1bedf

Update requirement_check/README.md

Files changed (1):
  1. requirement_check/README.md +46 -3
requirement_check/README.md CHANGED
@@ -34,9 +34,52 @@ This **Requirement Checker** family of adapters are designed to check if specifi
 
 
### Quickstart Example
-
-
-
+First, see the instructions elsewhere in this repo on how to start up a vLLM server hosting the LoRAs and/or aLoRAs. Once that server is running, it can be queried via the OpenAI API. An example for this intrinsic follows.
+
+```python
+import os
+import openai
+import json
+import granite_common
+
+PROMPT = "What is IBM?"
+REQUIREMENTS = "Use a formal tone.\n Do not use long words."
+RESPONSE = ...  # this should be generated by the base model corresponding to the chosen adapter
+REQUIREMENT_TO_CHECK = "Use a formal tone."
+
+request = {
+    "messages": [
+        {
+            "content": PROMPT + "\nRequirements: " + REQUIREMENTS,
+            "role": "user"
+        },
+        {
+            "role": "assistant",
+            "content": RESPONSE
+        },
+    ],
+    "extra_body": {
+        "requirement": REQUIREMENT_TO_CHECK
+    },
+    "model": "requirement_check",
+    "temperature": 0.0
+}
+openai_base_url = ...  # base URL of the running vLLM server
+openai_api_key = ...   # API key expected by that server
+io_yaml_file = "./rag_intrinsics_lib/requirement_check/.../io.yaml"
+
+rewriter = granite_common.IntrinsicsRewriter(config_file=io_yaml_file)
+result_processor = granite_common.IntrinsicsResultProcessor(config_file=io_yaml_file)
+
+rewritten_request = rewriter.transform(request)  # rewrite the request into the form the adapter expects
+
+client = openai.OpenAI(base_url=openai_base_url, api_key=openai_api_key)
+chat_completion = client.chat.completions.create(**rewritten_request.model_dump())
+
+transformed_completion = result_processor.transform(chat_completion)  # parse the raw completion into structured output
+
+print(transformed_completion.model_dump_json(indent=2))
+```
 
## Evaluation
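
The quickstart above assumes an OpenAI-compatible vLLM server that is already serving this adapter under the model name `requirement_check`. A minimal sketch of what launching such a server could look like follows; the base model, adapter path, and port are placeholder assumptions, and the serving instructions elsewhere in this repo are authoritative (aLoRA variants in particular may need additional setup).

```python
# Hypothetical launch of an OpenAI-compatible vLLM server with the LoRA adapter
# registered under the served name "requirement_check". The base model and
# adapter path are placeholders; use the repo's documented serving procedure.
import subprocess

subprocess.run([
    "vllm", "serve", "ibm-granite/granite-3.3-8b-instruct",  # placeholder base model
    "--enable-lora",                                          # enable LoRA adapter serving
    "--lora-modules", "requirement_check=./rag_intrinsics_lib/requirement_check",  # placeholder adapter path
    "--port", "8000",
])  # runs until the server process is stopped
```

With such a server running, `openai_base_url` in the example would point at its `/v1` endpoint (with vLLM's defaults, `http://localhost:8000/v1`), and the `model` field in the request matches the served adapter name.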