Small changes in readme for vLLM serve
PR #166, opened by Navanit-AI

README.md (changed)
````diff
@@ -94,10 +94,7 @@ transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
 vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
 
 ```bash
-uv pip install \
-  --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
-  --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
-  --index-strategy unsafe-best-match
+uv pip install vllm
 
 vllm serve openai/gpt-oss-20b
 ```
````
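For context, the webserver that `vllm serve` starts in the diff above speaks the OpenAI Chat Completions API. A minimal sketch of building a request against it, using only the standard library — the base URL assumes vLLM's default port 8000 (consistent with the `localhost:8000` in the hunk header) and the standard `/v1/chat/completions` route, neither of which is spelled out in this diff:

```python
import json
from urllib import request

# Assumption: vLLM's default port (8000) and the standard OpenAI
# Chat Completions route; the model name comes from the diff above.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str) -> request.Request:
    """Build a POST request for the OpenAI-compatible chat endpoint."""
    payload = {
        "model": "openai/gpt-oss-20b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello!")
# Actually sending it requires the server from the diff to be running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```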