• @arymandias@feddit.de
      6 · 1 year ago

      With large language models it will basically be a technocracy of prompt hackers, who are at least human and thus have a stake in humanity.

  • @JohnDClay@sh.itjust.works
    11 · 1 year ago

    The whole point of Asimov’s laws of robotics was that things can go wrong even if a system adheres to them perfectly. And current AI attempts don’t even have that.

  • @Blapoo@lemmy.ml
    9 · 1 year ago

    I honestly wonder whether an LLM, trained once a month on every human on earth’s input about their opinions on the world and what should be done to fix it, would show a “normalized trend” in that regard.

    LLMBOT 9000 2024!

    • @SCB@lemmy.world
      11 · 1 year ago

      There are more dumb people than smart people, so a “normalized trend” would be a pretty bad idea.

      Most people, regardless of personal beliefs, are highly susceptible to populist rhetoric, and generally you want an AI governance bot to make the right choices, not the popular choices.

      • @CitizenKong@lemmy.world
        1 · edited · 1 year ago

        There are more dumb people than smart people.

        Since “dumb” and “smart” are defined relative to each other by the median, it logically follows that there are always about as many “dumb” as “smart” people.

        • @SCB@lemmy.world
          -2 · 1 year ago

          It really doesn’t, because there are very few smart people and shitloads of stupid people. “Average” intelligence levels are quite low, and this is why.

    • @FMT99@lemmy.world
      2 · 1 year ago

      Haven’t read ol’ Bob since the 2000s. Gotta say it didn’t age as poorly as most others from that era.