yano
Unrelated Question
Woah.
The splits can't be loaded in KoboldCPP.
Hmmm... I'd prefer a factual answer over personal bias, at least when factual answers are possible.
But for writing, ERP, and the like, I can see it being interesting to know how we affect the models. Understanding something is the best way to fine-tune it.
Somewhat disappointing
I remember letting my computer run protein/gene folding back in 2010-2016... free spare cycles...
Sorry, probably slightly off topic.
v2 - thoughts
Interesting and curious.
In my limited experience, asking for flaws in something only got me surface-level identification of why it wouldn't work.
This could potentially improve that a lot. Yes, wanting to build a tower out of gold is a silly example, but I see a lot of movies and books where events happen 'so the plot can move forward', and characters making idiotic choices is one thing that has turned me off from a lot of media, including books. Writers using something like this to identify and remove the braindead stupidities would certainly be a huge improvement in the long run.
Decent
For a logo... top row, 12th over (or 3rd from the right): the steampunk-like 3D depth look stands out to me the most.
Otherwise any of them would work, but a lot of them look rather generic.
Decent model
Error loading model? (Q6)
There are a number of models I wish I could get to shut up about the thinking; in fact, 'thinking' modes tend to turn me off from what would otherwise be good models.
Thinking is generally a summarizing of all the details, sometimes giving better insight into what carries more emphasis, but usually the 'thinking' I see is useless: beyond being a summary, half the output talks about staying within guidelines and not offending anyone. 99% of the time, 'thinking' is just a waste of time.
Then again, I'm not using the cloud versions, so maybe those actually are much better at it.
Actually, one thing 'thinking' might do for very, very large models is gather the appropriate data into one block right before answering. Thus it might use two modes: a very large context mode, and then a shorter, faster one working over the more recent context. That in theory might work...
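The two-mode idea above can be sketched as a minimal pipeline: a slow pass over the full history distills the relevant facts into one block, then a fast pass answers using only that block plus the most recent turns. This is purely illustrative; the `distill` and `answer` functions below are stand-in stubs, not a real model API.

```python
# Hypothetical two-mode sketch: slow large-context distillation pass,
# then a fast short-context answer pass. Both functions are stubs.

def distill(full_history):
    """Slow, large-context mode: condense the whole history.
    Stub: keep the first sentence of each turn (a real model
    would summarize with attention over the full context)."""
    return " | ".join(turn.split(".")[0] for turn in full_history)

def answer(question, distilled, recent_turns):
    """Fast, short-context mode: answer from the distilled block
    plus only the latest few turns, instead of everything."""
    context = distilled + " || " + " ".join(recent_turns)
    return f"[answer to {question!r} using context: {context}]"

history = [
    "The hero found a key. It was rusty.",
    "The door was locked. Nobody had a key.",
    "The hero reached the door. Rain fell.",
]
block = distill(history)                      # one pass over everything
reply = answer("Can the door open?", block, history[-1:])
print(reply)
```

The point of the split is that the expensive full-context pass runs once per answer, while the cheap pass only ever sees the distilled block and a short recent window.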