cross-posted from: https://lemmy.world/post/19416727

Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations, and references to more regulation, and to include page references and context.

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and for every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

  • @Linktank@lemmy.today (-9 points, 4 months ago)

    Oh did I say that somewhere in my comment? Please point out to me the part where I mentioned singularity.

    Fucking Dons man, every one I have ever met has been some kind of dick.

    • don (4 points, 4 months ago)

      You still didn’t answer my question. I asked how being shit at summarizing means AI’s going to surpass us.

      Also, read your own comment. You never mentioned singularity, I did. It’s a common term used to refer to AI passing us, which you did mention as happening soon, IYO.

      • @Linktank@lemmy.today (-10 points, 4 months ago)

        Did you ever watch the videos of the robots learning to walk? We’re at that stage right now with summarizing. Pretty soon they’ll be dancing and jumping at that too.

        If you fail to understand how progress works then I don’t think there’s an explanation I can offer you.

        • don (8 points, 4 months ago)

          Are you deliberately ignoring the point of the article? If AI is worse at summarizing than a human, then it hasn’t gotten to the point of summarizing better than a human. It’s gotten to the point of being able to fail at it worse than humans do. It will have passed summarizing when it’s at least on par with the average human.

          I have seen many videos of robots trying to walk and often failing, while the average human baby consistently learns to do it faster than those robots. Your position seems to be “Any progress AI programmers make means AI’s gonna overtake us really soon!”

          There’s such a thing as negative progress, and if these are your best examples of progress, then I don’t think you’re capable of giving an effective explanation of the concept to begin with.

          • @Linktank@lemmy.today (-10 points, 4 months ago)

            I don’t need to convince you of anything. I also don’t really care if you think it will happen soon or at all.

            I’m impressed with the progress so far, and I believe that models will become available that can do a far better job than the average human in a relatively short period of time, meaning the next decade or two.

            Your defeatist, whiny, argumentative comments aside.