Available online as in you just log in to a website and use it - not on Hugging Face or GitHub, where you need to download, install and configure everything yourself.

LLMs are already made so “safe” that they won’t even describe an erotic or crime story - content you would easily find visually represented in all its detail on Netflix, Amazon, HBO, YouTube, etc. I.e. writing “Game of Thrones” with an AI is not possible in most chatbots anymore.

  • palordrolap@fedia.io · 2 months ago

    DuckDuckGo currently provides free access to four different LLMs. They say they don’t store user conversations, but I’m not sure I trust that, or that it won’t change at some point even if it’s true right now.

    Most of them have the strawberry problem (or some variant where that word has been explicitly patched(?)), fail basic arithmetic, and, when mistakes are pointed out, apologise repeatedly without actually improving. Standard LLM fare for 2024/5.

  • pepperprepper@lemmy.world · 2 months ago

    Very easy to run yourself these days as long as you have a decent GPU or a Mac with unified-memory Apple silicon. Open WebUI will let you host your own ChatGPT-style interface where you can choose different models to run. Dolphin Mixtral is a pretty popular “unlocked” model.
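    For example, a minimal sketch (assuming you already have the ollama CLI and Docker installed; the image tag and port mapping follow Open WebUI’s current docs and may change):

    ```sh
    # Pull an "unlocked" model into the local ollama instance
    ollama pull dolphin-mixtral

    # Start Open WebUI and point it at the ollama running on the host
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main
    # then open http://localhost:3000 and pick the model from the dropdown
    ```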

  • breakingcups@lemmy.world · 2 months ago

    There are a few; some of them have a free tier, but most are paid. They are often geared towards roleplay or storytelling. If you search for uncensored roleplaying AI, I’m sure you’ll find some.

  • CubitOom@infosec.pub · 2 months ago

    You’re right that anything most others will host for free is going to be censored, since otherwise they might have some kind of legal responsibility. I learned this while trying to diagnose an issue with my car’s door lock.

    At the end of the day, anything you ask some hosted LLM is being recorded, so if you actually want something uncensored, or something that gives you a sense of freedom, then the only real option is to self-host.

    Luckily, it’s very simple and can even be done on a low-spec device if you pick the right model. The amount and type of RAM you have will dictate how many parameters you can run at a decent speed.
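    As a rough back-of-envelope (my own rule of thumb, not a hard figure): a 4-bit quantized model needs roughly half a byte per parameter, plus some overhead for context.

    ```sh
    # Rough memory estimate for a Q4-quantized model (assumption: ~0.5 bytes/parameter + ~1 GB overhead)
    params_in_billions=7
    echo "$(( params_in_billions * 1000 / 2 + 1000 )) MB"   # ~4500 MB, so a 7B model fits in 8 GB of RAM
    ```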

    Here are 3 options with increasing difficulty (not by much, however), written for Arch Linux: https://infosec.pub/comment/13623228

      • CubitOom@infosec.pub · 2 months ago

        Sorry about that bad link. Here it is.

        Install ollama

        ```sh
        pacman -S ollama
        ```
        

        Download any uncensored LLM

        From ollama’s library

        Serve, Pull, and Run

        1. In terminal A execute

           ```sh
           ollama serve
           ```

        2. In terminal B execute

           ```sh
           ollama pull wizard-vicuna-uncensored:7B
           ollama run wizard-vicuna-uncensored:7B
           ```

        From Hugging Face

        Download any GGUF model you want with “uncensored” in the name. I like GGUFs from TheBloke.

        • Example using SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF
          • Click on Files and versions and download solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
          • Change directory to where the downloaded GGUF is and write a modelfile with just a FROM line:

            ```sh
            echo "FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf" >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
            ```

          • Serve, Create, and Run
            1. In terminal A execute

               ```sh
               ollama serve
               ```

            2. In terminal B execute

               ```sh
               ollama create solar-10:7b -f ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
               ollama run solar-10:7b
               ```

        Create a GGUF file from a non-GGUF LLM for ollama

        Set up the Python environment

        Install pyenv and then follow the instructions to update .bashrc

        ```sh
        curl https://pyenv.run/ | bash
        ```
        
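        The lines the installer asks you to add are typically something like the following (check the installer’s own output, which is authoritative; these are the stock pyenv lines):

        ```sh
        # Added to ~/.bashrc so pyenv and its virtualenv plugin load in new shells
        export PYENV_ROOT="$HOME/.pyenv"
        [[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"
        eval "$(pyenv init -)"
        eval "$(pyenv virtualenv-init -)"
        ```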

        Update pyenv and install the version of Python you need

        ```sh
        source "${HOME}"/.bashrc
        pyenv update
        pyenv install 3.9
        ```
        

        Create a virtual environment

        ```sh
        pyenv virtualenv 3.9 ggufc
        ```
        

        Use the virtual environment and download the pre-reqs

        ```sh
        pyenv activate ggufc
        pip install --upgrade pip
        pip install huggingface_hub
        mkdir -p ~/Documents/ollama/python
        cd ~/Documents/ollama/python
        git clone https://github.com/ggerganov/llama.cpp.git
        cd llama.cpp
        pip install -r requirements.txt
        ```
        

        Download the model from Hugging Face.

        For this example, I’m going to pull llama3.2_1b_2025_uncensored. Note that this LLM is 1B, so it can be run on a low-spec device.

        ```sh
        mkdir -p ~/Documents/ollama/python
        mkdir -p ~/Documents/ollama/models
        model_repo_slug='carsenk'
        model_repo_name='llama3.2_1b_2025_uncensored'
        model_id="$model_repo_slug/$model_repo_name"
        cat << EOF >| ~/Documents/ollama/python/fetch.py
        from huggingface_hub import snapshot_download

        model_id="$model_id"
        snapshot_download(repo_id=model_id, local_dir="$model_id",
                          local_dir_use_symlinks=False, revision="main")
        EOF

        cd ~/Documents/ollama/models
        python ~/Documents/ollama/python/fetch.py
        ```
        

        Convert the HF model to GGUF

        ```sh
        python ~/Documents/ollama/python/llama.cpp/convert.py "$model_id" \
          --outfile "$model_repo_name".gguf \
          --outtype q8_0
        ```
        

        Serve, Organize, Create, and Run

        1. In terminal A execute

           ```sh
           ollama serve
           ```

        2. Open a new terminal while ollama is being served.

           ```sh
           mkdir -p ~/Documents/ollama/modelfiles
           echo "FROM ~/Documents/ollama/models/llama3.2_1b_2025_uncensored.gguf" >| ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
           ollama create llama3.2:1b -f ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
           ollama run llama3.2:1b
           ```
          
  • hendrik@palaver.p3x.de · 2 months ago

    I don’t know which of them are good, but I’ve seen a dozen or so online services, mostly for roleplay / virtual girlfriend/boyfriend stuff, etc. They’re paid, though. Or you can pay OpenRouter (a more general LLM connector, also paid). I’m not sure if you’re looking for something like that or something free. They’re definitely out there and available to the public.

    It’s mostly OpenAI, Microsoft, etc. who have free services, but they’re limited in what they’ll talk about. And there is one free community project I’m aware of: AI Horde. It’s mostly for images but offers text, too. I haven’t used it in a while, so I’m not sure how/if it works.

  • xmunk@sh.itjust.works · 2 months ago

    Not certain if it’s still active, but KoboldAI seems to be a community-sourced tool that doesn’t have built-in limitations because it’s a non-commercial site.

  • antihumanitarian@lemmy.world · 2 months ago

    Was about to post a Hugging Face link till I finished reading. For what it’s worth, once you have Ollama installed it’s a single command to download, install, and immediately drop into a chat with a model, whether from Ollama’s library, Hugging Face, or anywhere else. On Arch, the entire process to get it working with GPU acceleration was installing 2 packages and then starting ollama.
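    For reference, a minimal sketch of what that looks like (assuming an NVIDIA card; on AMD swap ollama-cuda for ollama-rocm, and the Hugging Face repo below is just an example of a GGUF repo):

    ```sh
    # Install ollama plus the GPU backend, then start the service
    pacman -S ollama ollama-cuda
    systemctl enable --now ollama

    # One command to download a model and drop into a chat
    ollama run llama3.2
    # or pull a GGUF straight from Hugging Face
    ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
    ```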