Available online as in: you just log in to a website and use it, not on Hugging Face or GitHub, where you have to download, install, and configure everything yourself.
LLMs are already made so “safe” that they won’t even describe an erotic scene or a crime story - content you can easily find visually depicted in all its detail on Netflix, Amazon, HBO, YouTube, etc. In other words, writing “Game of Thrones” with an AI is no longer possible in most chatbots.
DuckDuckGo currently provides free access to four different LLMs. They say they don’t store user conversations, but I’m not sure I trust that, or that it won’t change at some point even if it’s true right now.
Most of them have the strawberry problem (or some variant, where that particular word seems to have been explicitly patched), fail at basic arithmetic, and apologise repeatedly, often without actually improving when mistakes are pointed out. Standard LLM fare for 2024/25.
Thanks but what is the strawberry problem?
Can’t count the number of R’s in Strawberry
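For context, the failure is trivially checkable outside the model; a plain shell one-liner gets the right answer:

```sh
# count occurrences of the letter r in "strawberry"
echo "strawberry" | grep -o r | wc -l   # prints 3
```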
Wait, I shouldn’t trust them? I’ve fully switched to them, since self-hosting on my ancient device is too slow, and I don’t want to use the Anthropic or OpenAI frontends because I don’t trust them with my data.
Very easy to run yourself these days, as long as you have a decent GPU or a Mac with Apple silicon (unified memory). Open WebUI will let you host your own ChatGPT-style frontend where you can choose different models to run. Dolphin Mixtral is a pretty popular “unlocked” model.
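If it helps, the usual Open WebUI quick start is a single Docker container (a sketch assuming Docker is installed; port and image tag per the Open WebUI docs, adjust to taste):

```sh
# run Open WebUI on http://localhost:3000, persisting data in a named volume
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Point it at your local ollama instance, then `ollama pull dolphin-mixtral` to grab the model mentioned above.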
LLMs are expensive to operate, so it’s hard to find such services for free…
There’s a few, some of them have a free tier, most of them are paid. They are often geared towards role play or storytelling. If you search for uncensored roleplaying AI I’m sure you’ll find some.
perchance.org/ai-story-generator is completely uncensored and free.
You’re right that anything most others will host for free is going to be censored, since otherwise they might bear some kind of legal responsibility. I learned this while trying to diagnose an issue with my car’s door lock.
At the end of the day, anything you ask a hosted LLM is being recorded, so if you actually want something uncensored, something that gives you a sense of freedom, the only real option is to self-host.
Luckily, it’s very simple and can even be done on a low-spec device if you pick the right model. The amount and type of RAM you have will dictate how many parameters you can run at a decent speed.
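As a very rough sketch of that RAM rule of thumb (my own approximation, not an official formula): a Q4-quantized model needs on the order of 0.7 GB per billion parameters, plus some overhead:

```sh
# crude estimate of RAM for a Q4-quantized model (assumption: ~0.7 GB per
# billion parameters plus ~1 GB overhead; real usage varies with context size)
params_b=7   # e.g. a 7B model
echo "approx $(( params_b * 7 / 10 + 1 )) GB of RAM"   # prints: approx 5 GB of RAM
```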
Here’s 3 options with increasing difficulty (not much however) written for Arch Linux: https://infosec.pub/comment/13623228
It says wrong key on the 0bin
Sorry about that bad link. Here it is.
Install ollama
```sh
pacman -S ollama
```
Download any uncensored llm
From ollama’s library
Serve, Pull, and Run
- In terminal A execute

```sh
ollama serve
```

- In terminal B execute

```sh
ollama pull wizard-vicuna-uncensored:7b
ollama run wizard-vicuna-uncensored:7b
```
From huggingface
Download any GGUF model you want with “uncensored” in the name. I like GGUFs from TheBloke.
- Example using SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF
- Click on “Files and versions” and download solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
- Change directory to where the downloaded GGUF is and write a modelfile with just a FROM line

```sh
echo "FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf" >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
```
- Serve, Create, and Run
- In terminal A execute

```sh
ollama serve
```

- In terminal B execute

```sh
ollama create solar-10:7b -f ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
ollama run solar-10:7b
```
Create a GGUF file from a non-GGUF LLM for ollama
setup python env
Install pyenv and then follow its instructions to update .bashrc

```sh
curl https://pyenv.run/ | bash
```
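The .bashrc lines the installer asks for are typically these (taken from pyenv’s README; double-check against the installer’s own output, as they occasionally change):

```sh
# pyenv init lines for ~/.bashrc (per pyenv's README; verify against installer output)
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
```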
Update pyenv and install the Python version you need

```sh
source "${HOME}"/.bashrc
pyenv update
pyenv install 3.9
```
Create a virtual environment
```sh
pyenv virtualenv 3.9 ggufc
```
Use the virtual environment and download the pre-reqs
```sh
pyenv activate ggufc
pip install --upgrade pip
pip install huggingface_hub
mkdir -p ~/Documents/ollama/python
cd ~/Documents/ollama/python
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
```
Download the model from huggingface.
For this example, I’m going to pull llama3.2_1b_2025_uncensored. Note that this LLM is 1B parameters, so it can be run on a low-spec device.
```sh
mkdir -p ~/Documents/ollama/python
mkdir -p ~/Documents/ollama/models
model_repo_slug='carsenk'
model_repo_name='llama3.2_1b_2025_uncensored'
model_id="$model_repo_slug/$model_repo_name"
cat << EOF >| ~/Documents/ollama/python/fetch.py
from huggingface_hub import snapshot_download
model_id="$model_id"
snapshot_download(repo_id=model_id, local_dir="$model_id", local_dir_use_symlinks=False, revision="main")
EOF
cd ~/Documents/ollama/models
python ~/Documents/ollama/python/fetch.py
```
Convert HF to GGUF
```sh
python ~/Documents/ollama/python/llama.cpp/convert.py "$model_id" \
    --outfile "$model_repo_name".gguf \
    --outtype q8_0
```
Serve, Organize, Create, and Run
- In terminal A execute

```sh
ollama serve
```

- Open a new terminal while ollama is being served.

```sh
mkdir -p ~/Documents/ollama/modelfiles
echo "FROM ~/Documents/ollama/models/llama3.2_1b_2025_uncensored.gguf" >| ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
ollama create llama3.2:1b -f ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
ollama run llama3.2:1b
```
I don’t know which one of them is good, but I’ve seen like a dozen or so online services, mostly for roleplay / virtual girl/boyfriend stuff etc. They’re paid, though. Or you can pay openrouter (more general LLM connector, also paid). I’m not sure if you’re looking for something like that or something free. They’re definitely out there and available to the public.
It’s mostly OpenAI, Microsoft etc who have free services, but they’re limited in what they’ll talk about. And there is one free community project I’m aware of: that would be AI Horde. It’s mostly for images but offers text, too. I haven’t used it in a while, not sure how/if it works.
AI Horde does both text and image yes. You can easily use it on the browser from https://artbot.site/ (images) or https://lite.koboldai.net/ (text). You can register an account for free on https://aihorde.net/register to go faster than anons (or it’s going to be quite slow). It’s crowdsourced and volunteer run.
If you’re not able to run one yourself.
Try DuckDuckGo in mixtral mode. https://duckduckgo.com/?q=DuckDuckGo+AI+Chat&ia=chat&duckai=1
Not certain if it’s still active, but KoboldAI seems to be a community-sourced tool that doesn’t have built-in limitations because it’s a non-commercial site.
Wasn’t that the one with the built-in holocaust denial?
No idea. I used it to translate stuff uncensored
Just checked, it’s the one.
That’s what happens when you crowdsource from forums and social media, I guess.
It’s rather intentional with gab.ai.
Noted
I’m browsing from Israel and I’m blocked. Guess it’s Anti-Semitic as well. /J
Was about to post a Hugging Face link until I finished reading. For what it’s worth, once you have Ollama installed, it’s a single command to download, install, and immediately drop into a chat with a model, whether from Ollama’s library, Hugging Face, or anywhere else. On Arch, the entire process to get it working with GPU acceleration was installing two packages and then starting ollama.
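To illustrate the “single command” point (the model names here are just examples; per Ollama’s Hugging Face integration, any GGUF repo should work with the `hf.co/` prefix):

```sh
# from Ollama's library: downloads the model if missing, then opens a chat
ollama run llama3.2:1b

# straight from Hugging Face (example repo path; substitute any GGUF repo)
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
```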