Now working on vLLM.
I ran `vllm serve unsloth/gpt-oss-20b-BF16 --async-scheduling` on NVIDIA H20 hardware and got this:
Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s]
(VllmWorker pid=52093) Warning: model.layers.5.mlp.experts.down_proj not found in params_dict
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] WorkerProc failed to start.
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] Traceback (most recent call last):
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 533, in worker_main
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] worker = WorkerProc(*args, **kwargs)
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 402, in __init__
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] self.worker.load_model()
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 211, in load_model
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] self.model_runner.load_model(eep_scale_up=eep_scale_up)
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1946, in load_model
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] self.model = model_loader.load_model(
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 49, in load_model
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] self.load_weights(model, model_config)
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/default_loader.py", line 259, in load_weights
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] loaded_weights = model.load_weights(
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] File "/root/autodl-tmp/serve_gpt/.venv/lib/python3.12/site-packages/vllm/model_executor/models/gpt_oss.py", line 429, in load_weights
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] param = params_dict[new_name]
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] ~~~~~~~~~~~^^^^^^^^^^
(VllmWorker pid=52093) ERROR 08-25 14:48:19 [multiproc_executor.py:559] KeyError: 'model.layers.5.mlp.experts.w2_bias'
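For anyone debugging this: the warning and the KeyError together show vLLM's gpt_oss.py load_weights remapping the checkpoint's down_proj-style expert names to internal w2-style names (w2_bias) that the constructed model never registered in params_dict. Here is a minimal diagnostic sketch to list which expert parameter names the shards actually contain, assuming the checkpoint is downloaded locally (the glob path below is a placeholder):

```python
# List the expert parameter names present in the checkpoint shards without
# loading any tensors into memory. The glob path is a placeholder -- point it
# at wherever the unsloth/gpt-oss-20b-BF16 shards were downloaded.
import glob
from safetensors import safe_open

shard_paths = sorted(glob.glob("/path/to/gpt-oss-20b-BF16/*.safetensors"))
expert_suffixes = set()
for path in shard_paths:
    with safe_open(path, framework="pt") as f:
        for name in f.keys():
            if ".mlp.experts." in name:
                # Keep only the suffix after "experts." to expose the naming scheme.
                expert_suffixes.add(name.split(".mlp.experts.")[-1])

print(sorted(expert_suffixes))
```

If this prints down_proj-style names (matching the warning above), the shards themselves follow the standard Transformers layout, and the mismatch is on the loader/model side rather than a corrupt download.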
Amazing, yes, I'm pretty sure vLLM supports it now!
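If support landed in a newer release, it's worth confirming which vLLM version is actually installed in the failing environment before retrying; a quick check:

```python
# Print the installed vLLM version to see whether the environment is on a
# release recent enough to include gpt-oss support.
import vllm
print(vllm.__version__)
```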
Sorry? I'm still hitting this issue.
Any update on how to fix this? Thanks.
https://github.com/hiyouga/LLaMA-Factory/issues/8969 => could this be what's causing the issue? How should it be addressed? Thanks.