• robador51@lemmy.ml · 1 day ago

    I work in an environment where persuasion and the synthesis of vast amounts of information give a major edge. I see two types of people. There are those who are genuinely good at what they do without the help of LLMs, who can make their output even better with AI by honing and optimizing their work; and there are those who are absolutely shit without LLMs, who get even worse once they start using them.

    Unfortunately the latter group is the vast majority.

    The first group already has strong ideas, and the LLM can accelerate and elevate their thinking. They use it as a brainstorming helper. They validate the output. They don’t necessarily work faster.

    The second group doesn’t know what to do, so they ask the LLM and trust the output with little to no scrutiny. They use it as a means of production. They deliver fast.

    I think we see this pattern in most fields. Software development, for example. A true senior developer might create better output, or even produce things a bit faster. But a bad programmer will still have bad output, and probably exponentially so the more they lean into the tool.

    The second group is dangerous. They’re as delusional as the output the LLMs tend to generate. They feel empowered, and see the increase in output as a personal victory, as if it unlocked some lingering quality in them that was always there. Qualities that highly capable people had to work for years to attain. Look how productive I am, look at what I did, they’ll think. They create the noise that capable people now have to deal with; it’s all the slop we see, and it’s everywhere.

    That’s what I hate about it.

    Anyway

      • aqwxcvbnji [none/use name]@hexbear.net · 1 day ago

        No, learning things in school and doing scientific research is good. Letting that get destroyed by a couple of Silicon Valley oligarchs is bad.

        Obviously the Byzantine admission system and absurd tuition fees in the US (and UK) are horrific, but that’s not what’s being destroyed here.

    • haxboar [none/use name]@hexbear.net · edited · 2 days ago

      I felt that way when I was 18 and knew more about certain topics than my professors did because I had the internet. I also remember realising, when I was 10, that education was more about tolerating bureaucracy than actually knowing the material.

      Sheesh, the US education system sucks

  • varmint [he/him]@hexbear.net · 2 days ago

    We’re witnessing the death of academia in real time. Knowledge acquisition will cease and we will descend into a pit of regurgitated slurry until this system collapses

    • Blakey [he/him]@hexbear.net · 2 days ago

      It kinda needs to happen in a lot of ways. I like academia on, like, a conceptual level, but “publish or perish” and the reproducibility crisis are imo signs of a deeply entrenched problem and I am not convinced it can be solved by reform. The breakdown of liberal academia is probably as inevitable and necessary as the breakdown of capitalism and liberalism.

  • volcel_olive_oil [he/him]@hexbear.net · 2 days ago

    spent so much time trying to make the computer learn things they forgot how humans learn things

    this is part of “everyone is twelve”. very serious academics going “this is fantastic. I can skip eight weeks of school!”

    • facow [he/him, any]@hexbear.net · 2 days ago

      Cargo cult behavior. Churn out 50 slop papers you maybe skim over and no one else reads or attempts to replicate. Feed the slop back into the slop machine to shit out a thesis. Congrats you’ve got your doctorate without learning anything or generating anything of value!

      • CupcakeOfSpice [she/her, fae/faer]@hexbear.net · 2 days ago

        That’s what really gets me! I see the Grammarly commercials where they say you can just follow the AI to improve or write your papers and get the grade you want. Cool, but have you considered that the grade isn’t the end goal? Like, maybe the assignment was meant to teach you something, and by not learning it you’ve harmed your studies? Maybe getting a lower grade and some feedback would actually help you?

  • I recently tried using an LLM to find out whether a niche issue in my thesis had already been discussed in the literature. I fed it extremely specific prompts — specific enough, in fact, that it coughed up results which looked so similar to my problem that I first thought it had actually found literature on my question. The problem: the literature either did not exist, even though the authors it was attributed to are contributors to my field, or it does exist but does not contain the answer the LLM gave. I know because I had read literally every paper the LLM spat out that actually exists. These machines are okay at simple tasks like giving a general overview of the current literature in a field, but they fail miserably at anything more specific than that.

    • UmbraVivi [he/him, she/her]@hexbear.net · 1 day ago

      The way I think about it is: the more frequently the correct answer to a question has been given on the internet, the more likely an LLM is to give that correct answer. So it’s pretty reliable on surface-level questions in a vast array of fields. But the more specific and niche you get — the less explored the topic you’re asking about — the more likely it is to just make stuff up.

      • haxboar [none/use name]@hexbear.net · 1 day ago

        I always thought about it like Wikipedia. Is it a good overview of the concept?

        Usually, yeah.

        Is it accurate/reliable enough to quote or use for anything important?

        God no

    • Moidialectica [he/him, comrade/them]@hexbear.net · 2 days ago

      Trust me, it’s like this for every field: geology, programming, history, story writing, philosophy.

      I have made use of it, and I do use it regularly, but not acknowledging that it’s fucking shit and should not be put near any serious work without the utmost scrutiny is a joke.

      And I believe the propagators of AI either lack the skills needed to tell how bad it actually is, or want to believe otherwise because it makes things so much easier for them.

  • Damarcusart [he/him, comrade/them]@hexbear.net · 2 days ago

    Ah yes, why bother learning all that pesky “medical knowledge” when training to become a doctor, when you can just get an AI to do all the work for you! I’m sure this sort of attitude will have no real world repercussions!

  • Big [any, any]@hexbear.net · 2 days ago

    At this point, the only way to save higher learning is to go back to exclusively oral teaching.

    Turns out Socrates was right all along.

  • BodyBySisyphus [he/him]@hexbear.net · 2 days ago

    Looking forward to the coming retraction because it turns out your interview coding was nondeterministic and your results are not reproducible.

    …somebody’s out there trying to see if research is reproducible, right? anakin-padme-2

    …papers will get pulled from LLM training sets when they get retracted, right? anakin-padme-2

    …there isn’t a massive number of social sciences papers already published that are basically useless because their results aren’t meaningful outside of a narrow set of subjectively specified predictor variables, right? anakin-padme-2

  • mrfugu [he/him, any]@hexbear.net · 2 days ago

    I don’t give a shit if it’s qualitative. If it’s data you need directly recorded, please don’t use the hallucination chat service.

  • Hohsia [any]@hexbear.net · 2 days ago

    Tech bros (and all those who repeat their talking points) are dangerous people and should be treated as such

      • CupcakeOfSpice [she/her, fae/faer]@hexbear.net · 2 days ago

        I think in Dune’s Butlerian Jihad they considered anything that “thought” on the level of an electronic calculator a thinking machine. An abacus might be alright, but we have Mentats for that.

        • aqwxcvbnji [none/use name]@hexbear.net · 1 day ago

          Dune’s Butlerian Jihad

          I’ve seen “Butlerian jihad” used so many times on this site and never knew it was a Dune reference. I always thought it was some inside joke I didn’t get, referencing the feminist theorist Judith Butler, in the sense of “we need the Holy War for feminism”