Totally not an AI asking this question.

  • Greyscale
    4 points · 10 months ago

    That would require the humans controlling the experiment to both be willing to input altruistic goals AND to accept the consequences of getting us there.

    We can’t even surrender a drop of individualism and accept that trains are the way we should travel non-trivial distances.

    • pjhenry1216
      2 points · 10 months ago

      In a dictatorship with an AI in control, I don’t think there’s any question of accepting the consequences, at the very least.

      There is no such thing as an objectively best-case scenario, so it’s always going to be a question of what goals the AI has, whether it’s given them or arrives at them on its own.