• billwashere
      7 points · 29 days ago

      I use AI for what Google used to be able to do: finding answers to simple questions, usually about tech but sometimes movies or music. Like how do I add a physical volume to LVM? What are the specs of this little fan model? Who was that actress in the movie about kids buried in a collapsed building? Things like that…
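
      (For what it’s worth, the LVM one boils down to two commands. A minimal sketch, assuming the new disk is /dev/sdb1 and the volume group is named vg0; both names are placeholders for your system:)

      ```python
      # Add a disk to an existing LVM volume group (run as root).
      # /dev/sdb1 and vg0 are placeholder names for this sketch.
      import subprocess

      # Initialize the disk as an LVM physical volume.
      subprocess.run(["pvcreate", "/dev/sdb1"], check=True)

      # Extend the volume group with the new physical volume.
      subprocess.run(["vgextend", "vg0", "/dev/sdb1"], check=True)
      ```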

        • billwashere
          1 point · 28 days ago

          It links to the original article it found so you can check its work, which is nice. It’s perplexity.ai, if you’re curious. I find it quite useful. And given how much AI makes shit up, I wouldn’t trust it otherwise.

          • Cool. Yeah, I think the best use case of AI is just gonna be better search over unorganized data. Having said that, though, it will never be as good as a good search engine with organized data.

    • Jesus
      5 points · 29 days ago

      Summarizing, drafting things, understanding complex things that are filled with jargon, etc.

    • @cybersandwich@lemmy.world
      4 points · 29 days ago

      People are treating AI like crypto, and on some level I don’t blame them, because a lot of hype-bros moved from crypto to AI. You can blame the Silicon Valley hype machine, plus Wall Street rewarding and punishing companies for going all in or not doing enough, respectively, for the Lemmy anti-new-tech tenor.

      That, and Lemmy seems full of angsty asshats and curmudgeons that love to dogpile things. They feel like they have to counterbalance the hype. Sure, that’s fair.

      But with AI there is something there.

      I use all sorts of AI on a daily basis. I’d venture to say most everyone reading this uses it without even knowing.

      I set up my server to transcribe and diarize my favorite podcasts, which I’ve been listening to for 20 years. Whisper transcribes, pyannote diarizes, gpt-4o uses context clues to find and replace “speaker01” with “Leo”, and then it saves the transcripts so that I can easily search them. It’s a fun hobby thing, but this type of thing is hugely useful and applicable to large companies and individuals alike.
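
      (A minimal sketch of that pipeline, assuming openai-whisper, pyannote.audio (which needs a Hugging Face token), and the openai client are installed; the model choices, file name, and renaming prompt are illustrative, not a fixed recipe:)

      ```python
      # Sketch: transcribe with Whisper, diarize with pyannote, then have
      # gpt-4o rename the anonymous speaker labels from context clues.
      import whisper
      from pyannote.audio import Pipeline
      from openai import OpenAI

      AUDIO = "episode.mp3"  # placeholder file name

      # 1. Transcription: timestamped text segments.
      asr = whisper.load_model("small").transcribe(AUDIO)

      # 2. Diarization: who spoke when (needs a Hugging Face token configured).
      diarization = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")(AUDIO)

      def speaker_at(t: float) -> str:
          """Return the diarization label whose turn covers time t."""
          for turn, _, speaker in diarization.itertracks(yield_label=True):
              if turn.start <= t <= turn.end:
                  return speaker
          return "SPEAKER_UNKNOWN"

      transcript = "\n".join(
          f"{speaker_at(seg['start'])}: {seg['text'].strip()}"
          for seg in asr["segments"]
      )

      # 3. Swap the "SPEAKER_XX"-style labels for real names via gpt-4o.
      client = OpenAI()
      reply = client.chat.completions.create(
          model="gpt-4o",
          messages=[{
              "role": "user",
              "content": "Using context clues, rewrite this podcast transcript "
                         "with the speakers' real names in place of the "
                         "SPEAKER_XX labels:\n\n" + transcript,
          }],
      )
      print(reply.choices[0].message.content)
      ```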

      I use Kagi’s assistant (which basically lets you access all the big models) on a daily basis for searching stuff, drafting boilerplate for emails, recipes, etc.

      I have a local LLM with RAG that I use for more personal stuff. For example, I had it do the BS work for my performance plan using notes I’d taken over the year, and I’ve had it help me reword my resume.
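
      (A minimal sketch of that kind of local setup, assuming sentence-transformers for retrieval and a model served by Ollama; the notes, model names, and prompt are made up for illustration:)

      ```python
      # Toy local RAG: embed notes, retrieve the most relevant ones for a
      # question, and hand them to a local model via Ollama.
      import ollama
      from sentence_transformers import SentenceTransformer, util

      notes = [
          "Q1: migrated the team's CI to containers, cut build times 40%.",
          "Q2: mentored two new hires on the deployment pipeline.",
          "Q3: wrote the incident-response runbook after the June outage.",
      ]

      embedder = SentenceTransformer("all-MiniLM-L6-v2")
      note_vecs = embedder.encode(notes, convert_to_tensor=True)

      question = "Draft a performance-plan bullet about my infrastructure work."
      q_vec = embedder.encode(question, convert_to_tensor=True)

      # Retrieve the two most similar notes by cosine similarity.
      hits = util.semantic_search(q_vec, note_vecs, top_k=2)[0]
      context = "\n".join(notes[h["corpus_id"]] for h in hits)

      reply = ollama.chat(model="llama3", messages=[
          {"role": "user", "content": f"Using these notes:\n{context}\n\n{question}"},
      ])
      print(reply["message"]["content"])
      ```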

      I have it parse huge policy memos into things I actually might give a shit about.

      I’ve used it to run through a bunch of semi-structured data in documents and pull out the relevant bits. It’s not necessarily precise, but it’s accurate enough for my use case.
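
      (A minimal sketch of that kind of extraction, assuming the openai client; the memo text and field names are invented for illustration:)

      ```python
      # Sketch: pull structured fields out of semi-structured document text.
      import json
      from openai import OpenAI

      client = OpenAI()

      document = """\
      Memo 2024-17. Effective date: 1 July 2024.
      All contractors must renew badges by 15 June. Contact: facilities@example.com
      """

      reply = client.chat.completions.create(
          model="gpt-4o",
          response_format={"type": "json_object"},  # request well-formed JSON back
          messages=[{
              "role": "user",
              "content": "Extract memo_id, effective_date, deadline, and contact "
                         "from this memo as JSON:\n\n" + document,
          }],
      )
      print(json.loads(reply.choices[0].message.content))
      ```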

      There is a tool we use that uses CV to do sentiment analysis of users (as they use websites/apps) so we can improve our UX/CX. There’s also some ML tooling that can tell if someone’s getting frustrated by the way they’re moving their mouse, whether they’re thrashing it or whatnot.
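
      (Those detectors are proprietary, but the mouse-thrashing idea is easy to sketch as a toy heuristic: count direction reversals per second in the pointer stream. This is purely illustrative, not any vendor’s actual model:)

      ```python
      # Toy "thrashing" heuristic: flag a pointer trace as frustrated when
      # horizontal direction reversals exceed a threshold rate.
      # Input: (timestamp_seconds, x_position) samples.
      def is_thrashing(samples, reversals_per_sec=4.0):
          if len(samples) < 3:
              return False
          reversals, prev_dx = 0, 0
          for (_, x0), (_, x1) in zip(samples, samples[1:]):
              dx = x1 - x0
              if dx * prev_dx < 0:  # sign flip = direction reversal
                  reversals += 1
              if dx != 0:
                  prev_dx = dx
          duration = samples[-1][0] - samples[0][0]
          return duration > 0 and reversals / duration >= reversals_per_sec

      # Rapid left-right shaking over one second gets flagged.
      shaky = [(i * 0.05, (i % 2) * 120) for i in range(21)]
      print(is_thrashing(shaky))  # True
      ```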

      There are also a couple of use cases we’re looking at at work to help eliminate bias, like parsing through a bunch of resumes. There’s always a human bias when you’re doing that, and there’s evidence that LLMs can do it with less bias than a human, which may lead to better results or selections.

      So I guess all that is to say: I find myself using AI/ML/LLMs on a pretty frequent basis, and I see a lot of value in what they can provide. I don’t think it’s going to take people’s jobs. I don’t think it’s going to solve world hunger. I don’t think it’s going to do much of what the hype-bros say. I don’t think we’re anywhere near AGI. But I do think there is something there, and I think it’s going to change the way we interact with our technology moving forward, and I think that’s a great thing.

      • @WoodScientist@lemmy.world
        7 points · 29 days ago

        So here’s the path that you’re envisioning:

        1. Someone wants to send you a communication of some sort. They draft a series of bullet points or a short version.

        2. They have an LLM elaborate it into a long-form email or report.

        3. They send the long-form to you.

        4. You receive it and have an LLM summarize the long-form into a short-form.

        5. You read the short form.

        Do you realize how stupid this whole process is? The LLM in step (2) cannot create new useful information from nothing. It is simply elaborating on the bullet points or short version of whatever was fed to it. It’s extrapolating and elaborating, and it is doing so in a lossy manner. Then in step (4), you go through ANOTHER lossy process. The LLM in step (4) is summarizing things, and it might be removing some of the original real information the human created in step (1), rather than the useless fluff the LLM in step (2) added.

        WHY NOT JUST HAVE THE PERSON DIRECTLY SEND YOU THE BULLET POINTS FROM STEP (1)???!!

        This is idiocy. Pure and simple idiocy. We start with a series of bullet points, and we end with a series of bullet points, and it’s translated through two separate lossy translation matrices. And we pointlessly burn huge amounts of electricity in the process.

        This is fucking stupid. If no one is actually going to read the long-form communications, the long-form communications SHOULDN’T EXIST.

        • @spector@lemmy.ca
          2 points · 28 days ago

          Also, neither side necessarily knows the other’s filter chain. Generational loss could grow exponentially. And not only loss, but addition by fabrication: each side trading back and forth indeterminate deletions and additions. It’s worse than traditional generational loss. It’s generational noise, which can resemble signal too.

          So if I receive a long form, how do I know whether the substantial text is worth reading for the nuance from an actual human being? I can’t tell that apart from generated filler. If a human wrote the long form, then maybe they’ve elaborated some nuance that deserved long form.

          On the flip side of the same coin: if I receive a short form, whether generated by me or by them, to what degree can I trust the indeterminate, noisy summary? I just have to trust that the LLM picked out precisely the key points the author wanted to convey, and that nuance was not lost, skewed, or fabricated.

          It would be inevitable that the two sides end up in a shooting war, proverbial or otherwise, because their communiqués were playing a fancy game of telephone. Information that was lost or fabricated results in an incident, but neither side knows who shot first, because nobody realized the miscommunication started happening several generations ago.

        • Yep, pretty much every single “good” use case of AI I’ve seen is basically a band-aid solution to enshittification.

          You know what’s a good solution to that? Removing the profit motive.

        • @cybersandwich@lemmy.world
          0 points · 28 days ago

          That’s not what I am envisioning at all. That would be absurd.

          Ironically, gpt-4o understood my post better than you :P

          " Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities."

                • @cybersandwich@lemmy.world
                  0 points · 28 days ago

                  Haha, yeah, I’m familiar with it (I always heard it called the Barnum effect, though it sounds like they’re the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs response.

                  In this case it actually summarized my post (I guess you could make the case that my post is an opinion shared by many people, so Forer-y in that sense), and, to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.

                  Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype-bros: Altman, Musk, etc. The tech is transformative, and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.

                    • I read this comment chain, and no? They are giving you actual criticism of the fundamental behaviour of the technology.

                    The person basically explained the broken telephone game and how “summarizing” will always have data loss by definition, and you just responded with:

                      In this case it actually summarized my post (I guess you could make the case that my post is an opinion shared by many people, so Forer-y in that sense)

                    Just because you couldn’t notice the data loss doesn’t mean the principle isn’t true.

                      You’re basically saying that translating something from English to Spanish and then back to English again is flawless because it worked for some words for you.

      • @schizo@forum.uncomfortable.business
        2 points · 29 days ago

        The problem is basically this: if you’re a knowledge worker, then yes, your ass is at risk.

        If your job is to summarize policy documents and write corpo-speak documents and then sit in meetings for hours to talk about what you’ve been doing, and you’re using the AI to do it, then your employer doesn’t really need you. They could just use the AI to do that and save the money they’re paying you.

        Right now they probably won’t be replacing anyone other than the bottom-of-the-ladder support types, but in 5 years? 10? 15?

        If your job is typing on a keyboard and then talking to someone else about all the typing you’ve done, you’re directly at risk, eventually.

    • @MrSqueezles@lemmynsfw.com
      3 points · 28 days ago
      • Write stream of consciousness and have AI turn it into a decent email
      • Tell me the name of this thing so I can research it
      • Coding, but don’t expect it to be a good coding tutor
      • Bedtime stories where kids decide what happens next and I don’t always have to tax my brain after a long day of work
      • I’m taking a road trip to San Francisco. Plan it for me with stops for sightseeing, eating, and sleeping.
    • @M0oP0o@mander.xyz
      2 points · 29 days ago

      Mostly stupid stuff involving Sailor Moon for me. Using the lie machine for anything but funny pictures seems like maybe a bad idea at the moment.

    • @do_not_pm_me@thelemmy.club
      1 point · 29 days ago

      I use it to summarize things for me, or rewrite something I’ve written a bit better. I usually need to spot-check it, but it’s still nice to have.

      • @TheFriar@lemm.ee
        1 point · 29 days ago

        rewrite something I’ve written a bit better

        Woah, that’s the biggest bummer of a reason I’ve seen for it. If you read good stuff and wrote your own stuff, you’d get better at it.

        • @do_not_pm_me@thelemmy.club
          1 point · 28 days ago

          It’s just like any tool.

          I use Photoshop, for instance, to edit photos rather than editing them in Paint.

          Sure I might be able to do the same thing without it, but it makes the process much faster.