It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info it makes them pretty af.

Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

        • @ftothe3@lemm.ee

          How is this able to run without a gpu? Is it that the models are small enough so that only a cpu is needed?

          • @d3Xt3r@beehaw.org

Yes, but it’s a bit more than that. All models are produced using a process known as neural network quantization, which stores the model’s weights at much lower precision so they can run efficiently on a CPU. This, plus appropriate backend code written in C, means GPT4All is quite efficient and needs only 4-8GB of RAM (depending on the model) and a CPU with AVX/AVX2 support.
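
            As a rough illustration of why quantization makes CPU-only inference possible (this is a toy symmetric 4-bit example, not the actual block-wise q4_0 format GPT4All’s model files use, and the function names are just made up for the sketch):

            ```python
            # Toy sketch of symmetric 4-bit weight quantization.
            # Real schemes quantize in small blocks with per-block scales;
            # this only shows the core idea and the memory savings.
            import numpy as np

            def quantize_int4(weights: np.ndarray):
                # Map float32 weights onto signed 4-bit integers in [-8, 7].
                scale = np.abs(weights).max() / 7.0
                q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
                return q, scale

            def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
                # Approximate reconstruction used at inference time.
                return q.astype(np.float32) * scale

            w = np.random.randn(4096).astype(np.float32)  # pretend layer weights
            q, scale = quantize_int4(w)
            w_hat = dequantize(q, scale)

            # 4 bits per weight instead of 32 is roughly an 8x reduction,
            # which is how multi-billion-parameter models squeeze into
            # the 4-8GB of ordinary RAM mentioned above.
            print("max reconstruction error:", np.abs(w - w_hat).max())
            ```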