This isn’t a joke, though it almost seems like one. It uses Llama 3.1, and supposedly the conversation data stays on the device and gets forgotten over time (through what the founder calls a “rolling context window”).

The implementation is interesting, and you can see the founder talking about earlier prototypes and project goals in interviews from several months ago.

iOS only, for now.

Edit: Apparently, you can build your own for around $50; it runs on ChatGPT instead of Llama. I’m sure you could also figure out how to swap in the LLM of your choice.

  • @Warl0k3@lemmy.world · 51 points · 4 months ago

    LMFAO. The audacity of calling the token limit a “rolling context window” like it’s a desirable feature and not just a design aspect of every LLM…
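
    To spell it out, a “rolling context window” is just dropping the oldest turns once you hit the token budget. A minimal sketch of the idea (count_tokens here is a crude stand-in for a real tokenizer):

    ```python
    # Minimal sketch of a "rolling context window": keep only the newest turns
    # that fit under the token budget; anything older simply falls out.
    def count_tokens(text: str) -> int:
        # crude stand-in for a real tokenizer: ~1 token per word
        return len(text.split())

    def rolling_window(messages: list[str], max_tokens: int) -> list[str]:
        kept, total = [], 0
        for msg in reversed(messages):        # walk from newest to oldest
            cost = count_tokens(msg)
            if total + cost > max_tokens:
                break                         # older turns are "forgotten"
            kept.append(msg)
            total += cost
        return list(reversed(kept))           # back to chronological order

    history = ["old small talk about the weather", "yesterday's long rant", "what you just said"]
    print(rolling_window(history, max_tokens=8))  # the oldest turn gets dropped
    ```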

    • @hoshikarakitaridia@lemmy.world · 26 points · 4 months ago

      Yeah that part tripped me up.

      “Rolling context window”? You mean one of the universal properties of LLMs in their current state? The one Google is flexing about in its latest AI endeavors because theirs is so big?

      It’s hilarious to say that’s a privacy feature. It’s like calling amnesia a learning opportunity.

      These claims make me think this is worse than the Rabbit R1 or whatever it’s called. Although it’s very difficult to be worse, considering that CEO turned out to be a full-on crypto scammer.

      • @Telorand@reddthat.com (OP) · 2 points · 4 months ago

        Check the edit for instructions on how to build your own. The DIY project is even called “Friend,” so “friend” is likely a modified version of it (ChatGPT in the DIY build vs. Llama here).

        I would certainly feel better about it if I had full control over the encryption endpoints, at a minimum.

  • @Telorand@reddthat.com (OP) · 14 points · 4 months ago

    I will be waiting for the tech YouTubers and early adopters to render their judgement before I even consider yet another AI wearable, but this aims to be less of a personal assistant and more of a “Tamagotchi.”

    • @Clusterfck · 11 points · 4 months ago

      I think that’s what sets this one apart (and makes it less expensive) from the other devices like this. This thing only needs a mic, an LLM and a Bluetooth radio. It won’t search the whole internet for answers or tell you what you’re looking at, but it will talk shit on that bitch Tonya in accounting with you.

  • @randon31415@lemmy.world · 7 points · 4 months ago

    $99? I just ordered the parts for $50 (including shipping and handling, and a 100-count pack of on/off switches, of which I only need one).

    Also confused why they say it is “always on” if it has an off switch. A TV can be always on until you turn it off. Once I build it, I’ll see what can be switched around - I am hoping to get something like the superbooga extension for oobabooga (RAG vectorization of documents) working with the transcripts.
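
    Roughly what I’m hoping to bolt on, sketched with sentence-transformers rather than superbooga’s actual code (the model name and chunk size are just placeholder choices):

    ```python
    # Rough sketch of RAG over conversation transcripts: embed chunks of the
    # transcript, then pull the most similar ones back to stuff into the prompt.
    # Not superbooga's actual implementation; model and chunk size are arbitrary.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small embedder, runs locally

    def chunk(transcript: str, size: int = 80) -> list[str]:
        words = transcript.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
        chunk_vecs = model.encode(chunks, convert_to_tensor=True)
        query_vec = model.encode(query, convert_to_tensor=True)
        hits = util.semantic_search(query_vec, chunk_vecs, top_k=k)[0]
        return [chunks[hit["corpus_id"]] for hit in hits]

    # The retrieved chunks would then be prepended to the LLM prompt as "memory".
    ```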

    Was a bit worried about Whisper STT, but I think it’s the open-source, on-device version, not the one that runs on OpenAI’s servers.
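
    If it is the open-source one, on-device transcription is about this simple (assuming the openai-whisper package and a local recording):

    ```python
    # On-device speech-to-text with the open-source Whisper package
    # (pip install openai-whisper); the audio never leaves the machine.
    import whisper

    model = whisper.load_model("base")          # small model, runs locally
    result = model.transcribe("recording.wav")  # path to a local audio file
    print(result["text"])
    ```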

  • @Sanctus@lemmy.world · 2 points · 4 months ago

    I actually can’t talk shit about this one. So far it seems to be what you’d want on the data side. All on-device. No subscription. It does come off as weird, and it’s still probably a bad idea to take advice from it. But at 99 bucks it’s gotta be one of the cheapest AI devices from a startup so far. I won’t get one, but I don’t absolutely hate it.

    • Angry_Autist (he/him) · 5 points · 4 months ago

      An all-on-device AI that could run on an iPhone would be terrible. It’s sending tokens somewhere, I guaran-fucking-tee it.

      • @bandwidthcrisis@lemmy.world · 6 points · 4 months ago

        The FAQ says that it requires an Internet connection.

        It also mentions e2ee, which isn’t too reassuring when one of the ends is their servers.

        • Angry_Autist (he/him) · 4 points · 4 months ago

          Exactly. There are ways to make the tokens unreadable even in a server-hosted LLM, but I know for a fact that’s not what’s going to happen here.

          And I fully expect all of our engagement data to be used in the 2028 election to target us.

        • @Telorand@reddthat.com (OP) · 3 points · 4 months ago

          This was my big concern as well. E2EE only matters if you control each end. That’s why I’ll let the YouTubers and security analysts dissect it first.

          Check my edit for instructions on how to build your own. It’s even called “Friend,” so it’s probably the same thing tweaked for a different LLM.

      • Aatube · 0 points · 4 months ago (edited)

        false advertising law: hello?

        Anyone can test whether it’s sending anything by putting a firewall in front of it. If it’s not connected to the internet, it ain’t sending. Don’t forget that iPhone chips have been the ones delivering on Moore’s law.

        • Angry_Autist (he/him) · 5 points · 4 months ago

          Do you have even the slightest idea how processing-intensive even five-year-old LLM models are?

          And iPhones aren’t magically immune to thermodynamics.