• conditional_soup@lemm.ee · 2 days ago

    Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don’t even know where to start attacking it, the LLM can sometimes save me hours of googling: I describe the problem and what I want to do in a chat format, then ask whether there’s a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that’s why I go and verify and read the docs myself instead of just blindly copying and pasting.
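
    Even the “does this library actually exist” check is easy to script. Here’s a minimal sketch (Python, using PyPI’s public JSON API; the package name passed in is just whatever the model suggested):

    ```python
    # Minimal sketch: sanity-check an LLM-suggested Python package against PyPI
    # before adopting it. The package name is a placeholder for the suggestion.
    import json
    import urllib.error
    import urllib.request

    def pypi_lookup(package: str) -> dict | None:
        """Return PyPI metadata for `package`, or None if it doesn't exist."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except urllib.error.HTTPError:
            return None  # a 404 here often means the name was hallucinated

    meta = pypi_lookup("requests")
    if meta is None:
        print("No such package; the suggestion was likely hallucinated.")
    else:
        info = meta["info"]
        print(info["name"], info["version"], "-", info["project_url"])
    ```

    Of course a real package can still be the wrong tool, so this only replaces the first verification step, not reading the docs.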

    • lefaucet@slrpnk.net · 2 days ago (edited)

      That last step of verifying is often skipped, and it’s getting HARDER to do.

      Hallucinations spread like wildfire on the internet. It doesn’t matter what’s true, only what gets clicks, and those clicks breed more apparent “citations”. An even worse fertilizer of false citations is power-hungry bastards pushing false narratives.

      AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school.

      • medgremlin@midwest.social · 2 days ago

        This is why I just use Google to look for the NIH article I want, or I go straight to DynaMed or UpToDate. (The NIH does have a search function, but it’s terrible, so it’s easier to use Google to find the link to the article I actually want.)
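
        For the same reason, NCBI’s E-utilities API is handier than the site search if you script it. A minimal sketch in Python (the search term is just an example):

        ```python
        # Minimal sketch: query PubMed via the NCBI E-utilities instead of the
        # site's own search box, then print stable article links.
        import json
        import urllib.parse
        import urllib.request

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_search(term: str, retmax: int = 5) -> list[str]:
            """Return PubMed IDs (PMIDs) matching `term`."""
            params = urllib.parse.urlencode(
                {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
            )
            with urllib.request.urlopen(f"{ESEARCH}?{params}", timeout=10) as resp:
                data = json.load(resp)
            return data["esearchresult"]["idlist"]

        for pmid in pubmed_search("statin myopathy risk"):
            # Each PMID maps directly to a stable PubMed URL.
            print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
        ```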

        • Detun3d@lemm.ee · 1 day ago

          I’ll just add that I’ve had absolutely no benefit, just time wasted, from the most popular services such as ChatGPT, Gemini and Copilot. Yes, they sometimes get a few things right, but mostly things that are REALLY easy and quick to find even with a more limited search engine such as Mojeek. Most of the time these services spit out blatant lies or outdated info. That’s one side of the issue, and I won’t even get into misinformation injected by the companies themselves.

          The other main issue for research is that you can’t get a broader, let alone precise, picture of anything without searching for the information yourself, filtering the sources yourself, and building better criteria yourself through trial and error. Oftentimes it’s the good info you weren’t initially searching for that makes your time well spent, and it’s always better to have 10 people contrast information they’ve gathered from websites and libraries based on their own preferences and concerns than 10 people doing the same with information served to them by an AI with minimal input and even less oversight.

          Better to train a light LLM (or set up any other kind of automation that performs even better) with custom parameters at your home or office to do very specific, truly useful, reliable and time-saving tasks than to trust and feed sloppy machines from sloppy companies.
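
          To sketch what I mean by a light local model (Python with the Hugging Face transformers library; the checkpoint is just an example, swap in whatever small model fits your task):

          ```python
          # Minimal sketch: run a small model locally for one narrow task
          # (here, summarization) instead of relying on a hosted chatbot.
          # The model choice is only an example of a small checkpoint.
          from transformers import pipeline

          # Downloads once, then runs entirely on local hardware.
          summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

          text = (
              "Large hosted chat services answer confidently whether or not they "
              "are right, which makes independent verification of sources essential."
          )
          print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
          ```

          The point isn’t this particular task; it’s that a narrow, local tool you configured yourself is inspectable in a way the big hosted services aren’t.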