Available online as in: you just log in to a website and use it, not on Hugging Face or GitHub, where you need to download, install, and configure things yourself.

LLMs are already made so “safe” that they won’t even describe an erotic scene or a crime story, content you can easily find visually depicted in all its detail on Netflix, Amazon, HBO, YouTube, etc. I.e., writing “Game of Thrones” with an AI is not possible in most chatbots anymore.

    • CubitOom@infosec.pub · 1 month ago

      Sorry about that bad link. Here it is.

      Install ollama

      ```sh
      # Arch Linux; on other distros, use your package manager or ollama’s official install script
      sudo pacman -S ollama
      ```
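
      To confirm the install worked, you can check the version and list the local model store; both are standard ollama subcommands:

      ```sh
      ollama --version   # prints the installed ollama version
      ollama list        # lists locally available models; empty on a fresh install
      ```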
      

      Download any uncensored LLM

      From ollama’s library

      Serve, Pull, and Run

      1. In terminal A execute
         ```sh
         ollama serve
         ```
      2. In terminal B execute
         ```sh
         ollama pull wizard-vicuna-uncensored:7B
         ollama run wizard-vicuna-uncensored:7B
         ```
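
      Once the model is being served, you can also talk to it over ollama’s local REST API, which listens on port 11434 by default. A minimal sketch reusing the tag pulled above:

      ```sh
      curl http://localhost:11434/api/generate -d '{
        "model": "wizard-vicuna-uncensored:7B",
        "prompt": "Why is the sky blue?",
        "stream": false
      }'
      ```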
        

      From Hugging Face

      Download any GGUF model you want with “uncensored” in the name. I like GGUFs from TheBloke.

      • Example using SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF
        • Click on “Files and versions” and download solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
        • Change directory to where the downloaded GGUF is and write a modelfile with just a FROM line (a fuller modelfile sketch follows this list)
          ```sh
          echo "FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf" >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
          ```
          
        • Serve, Create, and Run
          1. In terminal A execute
             ```sh
             ollama serve
             ```
          2. In terminal B execute
             ```sh
             ollama create solar-10:7b -f ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
             ollama run solar-10:7b
             ```
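
      The modelfile can carry more than a FROM line. A hedged sketch using ollama’s documented PARAMETER and SYSTEM directives (the values here are illustrative, not tuned):

      ```sh
      cat << 'EOF' >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
      FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
      PARAMETER temperature 0.8
      PARAMETER num_ctx 4096
      SYSTEM "You are an uncensored creative-writing assistant."
      EOF
      ```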
            

      Create a GGUF file from a non-GGUF LLM for ollama

      Set up a Python env

      Install pyenv and then follow its instructions to update .bashrc

      ```sh
      curl https://pyenv.run/ | bash
      ```
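
      The installer prints the exact lines to add to .bashrc; copy them from its output. They are usually the standard pyenv init block, roughly:

      ```sh
      export PYENV_ROOT="$HOME/.pyenv"
      export PATH="$PYENV_ROOT/bin:$PATH"
      eval "$(pyenv init -)"
      eval "$(pyenv virtualenv-init -)"
      ```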
      

      Update pyenv and install the Python version you need

      ```sh
      source "${HOME}"/.bashrc
      pyenv update
      pyenv install 3.9
      ```
      

      Create a virtual environment

      ```sh
      pyenv virtualenv 3.9 ggufc
      ```
      

      Use the virtual environment and download the prerequisites

      ```sh
      pyenv activate ggufc
      pip install --upgrade pip
      pip install huggingface_hub
      mkdir -p ~/Documents/ollama/python
      cd ~/Documents/ollama/python
      git clone https://github.com/ggerganov/llama.cpp.git
      cd llama.cpp
      pip install -r requirements.txt
      ```
      

      Download the model from Hugging Face.

      For this example, I’m going to pull llama3.2_1b_2025_uncensored. Note that this LLM is 1B parameters, so it can run on a low-spec device.

      ```sh
      mkdir -p ~/Documents/ollama/python
      mkdir -p ~/Documents/ollama/models
      model_repo_slug='carsenk'
      model_repo_name='llama3.2_1b_2025_uncensored'
      model_id="$model_repo_slug/$model_repo_name"
      cat << EOF >| ~/Documents/ollama/python/fetch.py
      from huggingface_hub import snapshot_download

      model_id="$model_id"
      snapshot_download(repo_id=model_id, local_dir="$model_id",
                        local_dir_use_symlinks=False, revision="main")
      EOF

      cd ~/Documents/ollama/models
      python ~/Documents/ollama/python/fetch.py
      ```
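
      If you’d rather skip the Python script, installing huggingface_hub also gives you the huggingface-cli tool, which can do the same download; a sketch using the same paths as above:

      ```sh
      cd ~/Documents/ollama/models
      huggingface-cli download carsenk/llama3.2_1b_2025_uncensored \
        --local-dir carsenk/llama3.2_1b_2025_uncensored --revision main
      ```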
      

      Convert HF to GGUF

      ```sh
      # note: newer llama.cpp checkouts replace convert.py with convert_hf_to_gguf.py
      python ~/Documents/ollama/python/llama.cpp/convert.py "$model_id" \
        --outfile "$model_repo_name".gguf \
        --outtype q8_0
      ```
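
      If you want a quantization other than q8_0, one option is to convert to f16 first and then run llama.cpp’s quantize tool (built with make in the llama.cpp checkout; newer builds name the binary llama-quantize). A hedged sketch:

      ```sh
      python ~/Documents/ollama/python/llama.cpp/convert.py "$model_id" \
        --outfile "$model_repo_name".f16.gguf --outtype f16
      ~/Documents/ollama/python/llama.cpp/quantize \
        "$model_repo_name".f16.gguf "$model_repo_name".Q5_K_M.gguf Q5_K_M
      ```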
      

      Serve, Organize, Create, and Run

      1. In terminal A execute
         ```sh
         ollama serve
         ```
      2. Open a new terminal while ollama is being served.
         ```sh
         mkdir -p ~/Documents/ollama/modelfiles
         echo "FROM ~/Documents/ollama/models/llama3.2_1b_2025_uncensored.gguf" >| ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
         ollama create llama3.2:1b -f ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
         ollama run llama3.2:1b
         ```
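
      To sanity-check the result non-interactively, ollama run also accepts a prompt as an argument:

      ```sh
      ollama run llama3.2:1b "Summarize what an uncensored local LLM is in one sentence."
      ```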