---
base_model:
- google/gemma-3-27b-it-qat
---

# gemma-3-27b-it-qat GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/gemma-3-27b-it-qat-GGUF -c 0 -fa
```

Here, `-c 0` tells llama-server to use the context length stored in the model's metadata, and `-fa` enables flash attention.

Then, open http://localhost:8080 in your browser to use the built-in web UI.
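
llama-server also exposes an OpenAI-compatible HTTP API on the same port. A minimal sketch of a chat request against the default endpoint (assuming the server is running with the command above; the prompt text is only an example):

```sh
# Send a chat-completion request to llama-server's
# OpenAI-compatible endpoint (default port 8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a haiku about llamas."}
    ]
  }'
```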