AI Chatbot Services
There's ChatGPT, Gemini, Copilot, Grok, Pi, AI Studio, Claude, Perplexity, DeepSeek, MiniMax, Le Chat and others.
Which do you use? Which is your primary? Why? How do you find your primary compared to the others you've used?
Do you use the free versions or paid? If you use a paid version, why?
What do you use any or all of them for? How helpful are they for what you do?

Comments
We chat with ELIZA and Billy.
That screenshot of Billy. Chef's kiss.
My pronouns are like/subscribe.
You can use all of them!
🤡 "The tags provided to users are worthless permanent labels~ accept them or leave!" 🤡
It's like choosing a Linux distribution. There are the top four or five, then there are derivatives, and then there are the outliers…
You know the names.
LLMs are not very different. Most of them are trained on similar datasets… except some are under proprietary licenses and many are open source.
I use a free chatbot AI. If I exhaust the free quota, I'll switch to another free chatbot AI.
Perplexity as a Google replacement. Z.ai coding plan.
How do you find the difference(s) between the free version and the paid versions of the (same) AI chatbot?
I have used Perplexity because I liked its search capability; it's getting harder to use traditional search engines. However, the quality has slowly been declining.
For things you would like to keep private, you can run Jan or Msty locally, even on lower-end machines.
It's been getting slower in what way?
Keep private from whom?
Not getting slower, but having quality issues, like switching to worse models behind the scenes. This has been an issue for lots of LLMs, though. For example, I'll ask a question and the first answer is "I can't do that" or a general overview. Then I'll say "you can do that" or "search again," and it will give a wrong answer. Then I'll say "really, that doesn't sound right," and maybe on the third try it will finally get the answer right. The reason I still use it is that it cites the sources it finds online, and it's good for things search engines are not good for, such as "What's the restaurant shown in this YouTube video?"
Any data sent to an LLM online is going to be shared with the provider. In addition, a lot of those models have guardrails around sensitive topics that can be triggered even by really innocent things ("I can't speculate about a public figure"), which is annoying. If you're running a model locally, no data leaves your computer, and you can use models that don't have censorship.
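To make the "no data leaves your computer" point concrete, here's a minimal sketch of talking to a locally hosted model. Jan and Msty are GUI apps, so as an assumed stand-in this uses Ollama, another popular local runner, via its default HTTP endpoint on `localhost:11434` with a model (e.g. `llama3`) already pulled; the model name and port are assumptions, not something from the thread.

```python
# Hedged sketch: query a model served locally by Ollama, so the prompt
# never goes to a third-party provider. Assumes an Ollama server is
# running on localhost:11434 and the named model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires a running local server):
# answer = ask_local("llama3", "Summarize why local inference helps privacy.")
```

The same request shape works against any server that mimics this API; the key point is that the traffic stays on your own machine.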