# QuantStack/InternVL3_5-38B-gguf
Format: GGUF · conversational
License: apache-2.0
## Files

Repository size: 108 GB · 1 contributor · 12 commits
Latest commit: ce6d8de (verified), wsbagnsv1, 3 months ago: "Upload internvl3_5-38b-q2_k.gguf with huggingface_hub"
| File | Size | Last commit message | Last updated |
|---|---|---|---|
| .gitattributes | 1.98 kB | Upload internvl3_5-38b-q3_k_s.gguf with huggingface_hub | 3 months ago |
| InternVL3_5-38B-iq4_xs.gguf | 17.9 GB | Upload InternVL3_5-38B-iq4_xs.gguf | 3 months ago |
| README.md | 324 Bytes | Update README.md | 3 months ago |
| internvl3_5-38b-q2_k.gguf | 12.3 GB | Upload internvl3_5-38b-q2_k.gguf with huggingface_hub | 3 months ago |
| internvl3_5-38b-q3_k_s.gguf | 14.4 GB | Upload internvl3_5-38b-q3_k_s.gguf with huggingface_hub | 3 months ago |
| internvl3_5-38b-q8_0.gguf | 34.8 GB | Upload internvl3_5-38b-q8_0.gguf with huggingface_hub | 3 months ago |
| mmproj-InternVL3_5-38B-bf16.gguf | 11.3 GB | Upload 2 files | 3 months ago |
| mmproj-InternVL3_5-38B-f16.gguf | 11.3 GB | Upload 2 files | 3 months ago |
| mmproj-InternVL3_5-38B-q8_0.gguf | 6 GB | Upload mmproj-InternVL3_5-38B-q8_0.gguf | 3 months ago |
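
The quantized model files above are typically paired with one of the mmproj projector files when loaded into a GGUF-capable multimodal runtime such as llama.cpp. As a minimal sketch, the snippet below downloads one quantization plus a projector from this repo with huggingface_hub; the chosen files, and the llama-mtmd-cli invocation in the trailing comment, are assumptions about a typical local setup rather than anything this repository documents.

```python
# Sketch: fetch one quantized model file and a matching mmproj file from this
# repo, then point a llama.cpp multimodal CLI at both. File names come from
# the listing above; the llama-mtmd-cli flags are an assumption about your
# local llama.cpp build.
from huggingface_hub import hf_hub_download

repo_id = "QuantStack/InternVL3_5-38B-gguf"

# q3_k_s main model (14.4 GB) and q8_0 multimodal projector (6 GB)
model_path = hf_hub_download(repo_id, "internvl3_5-38b-q3_k_s.gguf")
mmproj_path = hf_hub_download(repo_id, "mmproj-InternVL3_5-38B-q8_0.gguf")

print("model:", model_path)
print("mmproj:", mmproj_path)

# Example llama.cpp invocation (assumed; adjust to your build):
#   llama-mtmd-cli -m <model_path> --mmproj <mmproj_path> \
#       --image photo.jpg -p "Describe this image."
```

Smaller quantizations (q2_k, q3_k_s, iq4_xs) trade accuracy for lower memory use; q8_0 is the largest and closest to the unquantized weights.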