Unable to get structured output · 9 · #111 opened 3 months ago by Arnab07
Running gpt-oss-20b on an RTX 4070 Ti (12GB) using Transformers · 3 · #110 opened 3 months ago by Biiigstone
Cannot generate a response with the OpenAI Harmony response format · 1 · #108 opened 3 months ago by Noah0627
safe_serialization · #105 opened 3 months ago by cuongdk253
NVIDIA L40S GPUs for MXFP4 quantization · 2 · #100 opened 3 months ago by lordim
CUDA out-of-memory issues when running the gpt-oss model on a Colab T4 · 2 · #99 opened 3 months ago by sumeetm
Model quality issues · 5 · #92 opened 3 months ago by TheBigBlockPC
The Abliterated version is the version we deserved. · 😔 1 · #91 opened 3 months ago by Koitenshin
The censorship here is just... amazing. · 👀 👍 8 · 3 · #90 opened 3 months ago by smokeofc
Does transformers utilize PyTorch SDPA's flash_attention for openai/gpt-oss-20b? · 3 · #89 opened 3 months ago by NooBaymax
Quick Results: Testing Multilingual Capabilities via Code Generation from Ukrainian Competitive Programming Tasks · 🚀 4 · 3 · #87 opened 3 months ago by anon-researcher-ua
How to turn off thinking mode · 👍 🔥 7 · 7 · #86 opened 3 months ago by Gierry
I haven't seen a more useful model than this one. · 1 · #85 opened 3 months ago by Maria99934
How to run it on macOS · ❤️ 1 · 7 · #84 opened 3 months ago by kopyl
There is no spark.. only darkness · 3 · #83 opened 3 months ago by JFE
Tool calling not working as expected? · 👍 2 · 10 · #80 opened 3 months ago by Spider-Jerusalem
SVG problem · #79 opened 3 months ago by Alkohole
Honest Review of the Model · 😎 👍 52 · 3 · #78 opened 3 months ago by MarinaraSpaghetti
Behavior differences between GptOssExperts and Mxfp4GptOssExperts · #77 opened 3 months ago by DaleMeng
'accelerate' is missing from the model card. · 👍 1 · 1 · #76 opened 3 months ago by Ina-Developer
Qwen3 beats gpt-oss with just 0.6B, with quality good enough to be usable · 👀 2 · #75 opened 3 months ago by yousef1727
triton_kernels and multiprocessing · 2 · #74 opened 3 months ago by piotrm0
Thanks for the great model · 👍 👀 4 · #73 opened 3 months ago by deniiiiiij
Running gpt-oss Without FlashAttention 3 – Any Alternatives to Ollama? · 3 · #72 opened 3 months ago by shinho0902
Getting a current-date template issue with OLLAMA_ORIGINS="chrome-extension://*" OLLAMA_DEBUG=1 ollama serve · 1 · #71 opened 3 months ago by lokesher
How to ignore the quantization · ❤️ 👍 4 · 3 · #70 opened 3 months ago by zzbysd
Is there no usable PyTorch for Python 3.12? · 2 · #69 opened 3 months ago by xiaotianyu2025
This is by far the worst model out there... Won't be using this. 🚮🗑️ · 🤯 ➕ 7 · 2 · #67 opened 3 months ago by UniversalLove333
Question: setting reasoning effort · 5 · #66 opened 3 months ago by TheBigBlockPC
🚀 Best Practices for Evaluating GPT-OSS Models: Speed & Benchmark Testing Guide · ❤️ 🔥 10 · 1 · #64 opened 3 months ago by Yunxz
Information on how to get it working on a 3090 · 👍 4 · 11 · #63 opened 3 months ago by TheBigBlockPC
It seems to be censored a bit too much. · 🤯 👍 10 · 7 · #62 opened 3 months ago by petwoe
MXFP4 only runs on H100, B100, or later GPUs · 😔 2 · 12 · #61 opened 3 months ago by kishan51
AIME 25 Accuracy Discrepancy for GPT-OSS-20B (Reasoning Effort=High) · 3 · #58 opened 3 months ago by jiayi37u
Report · 🤝 1 · 1 · #55 opened 3 months ago by Mks1837
Thinking but no solution? · 4 · #54 opened 3 months ago by ZeroWw
Running transformers + openai_gpt_oss on Modal 👌🏻 · 👍 2 · #51 opened 3 months ago by weege007
self_attn.k_proj.bias is all zeros for all layers · 2 · #50 opened 3 months ago by DaleMeng
Not able to deploy using vLLM on an A10G · 3 · #49 opened 3 months ago by abhisskk
So much censorship · 👍 24 · 6 · #48 opened 3 months ago by Utochi
How to set reasoning effort in the shown example? · 👍 3 · 12 · #47 opened 3 months ago by TianheWu
When asked for a story with a gay dog character having a crush on a cat, the model only tells stories with straight characters, and cannot seem to correct itself. · 😔 4 · #46 opened 3 months ago by CyroPaws
It's incredibly easy to get the model to spread false information defending conversion therapy for transgender people. · ❤️ 👀 5 · 4 · #45 opened 3 months ago by CyroPaws
GGUFs of this model, tested and conforming to tensor sizes: IQ4NL, Q5_1, MXFP4_MOE.gguf · 1 · #44 opened 3 months ago by DavidAU
🚀 [Fine-tuning] LoRA fine-tuning openai/gpt-oss-20b 👋 · 👀 👍 7 · 3 · #43 opened 3 months ago by study-hjt
Can't Run The Demo FT Code? "Cannot copy out of meta tensor" · #42 opened 3 months ago by alentzy
RTX 5000 can't use this model... · 👍 2 · 2 · #41 opened 3 months ago by arvis
3090s? · 3 · #39 opened 3 months ago by xiaotianyu2025
Kaggle error · 23 · #38 opened 3 months ago by legolasyiu