GGUF when? 8 bit quant when?
Just a little system prompt, bro. Need to see its math capability: ask it how many r's are in strawberry. C'mon bro, pls man, just a little test.
Should drop very soon, it's in community hands now 🚀
When will the Q8 GGUF come?
I just tried to convert the 162.68 GB HF model to GGUF and it seems the architecture isn't supported yet in Llama.cpp. I tried making a few changes, but it's 3am in Europe and I'm off to sleep, sadly without being able to test the model. Without Llama.cpp support, none of the other tools like Ollama, LM Studio etc. are going to work either. I'm sure it won't be long, hopefully by the time I wake up :-)
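For anyone who wants to poke at it too, this is roughly the usual conversion flow, as a minimal sketch assuming a local llama.cpp checkout; the model directory and output file names are placeholders, and it will keep failing until llama.cpp actually supports this architecture:

```python
# Minimal sketch of the usual HF -> GGUF -> Q8_0 flow with llama.cpp tooling.
# Assumes you are in a llama.cpp checkout with the binaries built; the model
# directory and output names below are placeholders.
import subprocess

hf_model_dir = "path/to/hf-model"   # placeholder: local HF snapshot
f16_gguf = "model-f16.gguf"
q8_gguf = "model-q8_0.gguf"

# 1. Convert the HF checkpoint to an unquantized (f16) GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", hf_model_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the f16 GGUF down to Q8_0.
subprocess.run(
    ["./llama-quantize", f16_gguf, q8_gguf, "Q8_0"],
    check=True,
)
```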
Excited!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+1
Oh yes, please release a GGUF model so that Windows users can also load it (right now it's only possible on Mac).
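Once a Q8_0 GGUF exists, loading it on Windows should be straightforward; here's a minimal sketch using the llama-cpp-python bindings, with the file name as a placeholder:

```python
# Minimal sketch of loading a (future) Q8_0 GGUF on Windows via the
# llama-cpp-python bindings; the file name below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="model-q8_0.gguf", n_ctx=4096)

# Quick sanity check once the model loads.
out = llm("How many r's are in the word strawberry?", max_tokens=64)
print(out["choices"][0]["text"])
```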