• Johnny_Arson [they/them]@hexbear.net · 30 points · 23 days ago

      There are more guardrails, but the company I work for relies heavily on Salesforce, and I wonder if this is applicable. I don’t care; I missed my bus, said fuck it, and called in sick.

      • TraschcanOfIdeology [they/them, comrade/them]@hexbear.net · 4 points · edited · 21 days ago

        From what I know about Salesforce, it depends on how heavily the company has gone in on the AI stuff. By itself, Salesforce is just a client database with some extra things on top, but if you’re using AI to write reports or analyze data, you might as well ask a Magic 8-Ball.

        • Johnny_Arson [they/them]@hexbear.net · 2 points · 21 days ago

          They want us to engage with “all the tools it offers.” I have not been directed specifically to deal with the analytics part of it, but I am sure the actual field reps are. I mostly do customer-service-side stuff: processing orders and returns and assisting the remote sales team. I absolutely loathe its “genius” AI-powered search functions, which I have to use constantly. It can’t do simple, intuitive things. If I’m searching for the name of the client I just spoke to so I can log my activity, it can’t figure out that I’m looking for the Bob Smith from the contact card I’m already on; instead I have to open the full list of Bob Smiths and find the one for that specific company. I would assume that’s one of the few things an LLM should be able to do well.
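Salesforce’s actual search API isn’t shown anywhere in this thread, so here is only a toy sketch of the behavior the commenter is asking for: scope name matches by the account already open on screen. The `Contact` fields and `scoped_search` helper are hypothetical, not anything Salesforce ships.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    account: str  # company the contact belongs to

def scoped_search(contacts, query, current_account):
    """Return name matches, ranked so contacts from the account
    already open on screen come first (False sorts before True)."""
    matches = [c for c in contacts if query.lower() in c.name.lower()]
    return sorted(matches, key=lambda c: c.account != current_account)

contacts = [
    Contact("Bob Smith", "Acme Corp"),
    Contact("Bob Smith", "Globex"),
    Contact("Bob Smithers", "Initech"),
]

# Searching from within the Acme Corp card surfaces Acme's Bob Smith first.
results = scoped_search(contacts, "bob smith", "Acme Corp")
```

The point is that this is a one-line ranking rule, no LLM required: the context (which card is open) is already known to the application.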

          • TraschcanOfIdeology [they/them, comrade/them]@hexbear.net · 3 points · edited · 21 days ago

            They want us to engage with “all the tools it offers”

            That just sounds like management overpaid for a piece of software they don’t really understand and now want people to spend their day throwing shit at the wall to see what sticks, no matter how difficult it makes simple tasks. If you want to implement a piece of tech in a process, you have to define very specifically which parts of the software will be used and how; otherwise it’s a headache for everyone involved. It’s like handing a full set of knives to someone who mostly chops vegetables and asking them to “engage with all the knives it offers,” even though they have no use for a jamón-slicing knife.

            Idk much about Salesforce tbh, but what you describe does sound like one of those legacy processes that has worked the same way for 25 years even though it makes no sense, and it would be a disaster if anyone changed it to make sense. Now they’ve just put a chatbot in charge of it and blame the user for not being able to prompt it right.

    • SuperZutsuki [they/them]@hexbear.net · 9 points · edited · 22 days ago

      It’s very telling that they implemented the AI without giving its answers any sanity checks at the beginning. They could have caught it on day one, but no, it’s magic and checking would be a waste of time. Brainworms.

      • BodyBySisyphus [he/him]@hexbear.net · 6 points · 22 days ago

        It drives me absolutely bonkers that there are smart people out there groveling and scraping for jobs while gormless jokers like this have secure six-figure salaries.

      • TraschcanOfIdeology [they/them, comrade/them]@hexbear.net · 5 points · edited · 21 days ago

        I mean, it could’ve worked well at the beginning, then gone off the rails for one reason or another.

        That’s the dumb and scary thing about AI stuff: it might work today, it might work for years (if you’re lucky), but every time you execute a prompt, you’re rolling the dice on whether the mystery box will decide to just make shit up from here on out. And if you need a person to check the AI’s output to make sure it’s not hallucinating, you might as well cut the AI out of the loop altogether and use the checker’s output from the get-go.
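The “checker in the loop” pattern above can be sketched in a few lines. Everything here is hypothetical (the `flaky_model`, the validator, the fallback); the only point is that once you have a deterministic check and a deterministic fallback, the model’s output is optional.

```python
def checked_answer(model_fn, fallback_fn, payload, validate):
    """Run the model, but only trust output that passes hard
    validation; otherwise use the deterministic fallback."""
    answer = model_fn(payload)
    if validate(answer):
        return answer, "model"
    return fallback_fn(payload), "fallback"

# Toy example: the "model" hallucinates a negative order total.
def flaky_model(order):
    return {"total": -5}  # made-up value

def recompute(order):
    return {"total": sum(order["items"])}  # ground truth

def valid(ans):
    return isinstance(ans.get("total"), (int, float)) and ans["total"] >= 0

result, source = checked_answer(flaky_model, recompute, {"items": [2, 3]}, valid)
```

If `recompute` exists and is cheap, the wrapper makes the model redundant for this task, which is exactly the commenter’s point.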

        • invalidusernamelol [he/him]@hexbear.net · 2 points · 21 days ago

          I’m the solo developer at my company of 50 people. Literally everything we use was written by me because I got fed up with the “numbers” guys fucking up spreadsheets.

          It’s all SQL now, baby, so they literally can’t get rid of me, because they don’t even know how it works, only that it does lol

  • WafflesTasteGood [he/him]@hexbear.net · 46 points · 23 days ago

    A mere 12 percent of CEOs reported that it’d accomplished both goals.

    That 12 percent is either full of shit, was running things like garbage to begin with, or just hasn’t had the shitstorm hit yet.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 48 points · 23 days ago

      I actually think around 10% success rate sounds about right here. There are niches where this tech works well, but it’s being applied everywhere indiscriminately. So it makes sense that most deployments fail, but a small percentage actually finds the right niche.

      • TraschcanOfIdeology [they/them, comrade/them]@hexbear.net · 3 points · edited · 21 days ago

        This reminds me a lot of the dot-com bubble, when everyone was trying to make online businesses even in industries where it made no sense. Online retail and some other things were actually useful, but 99% of those businesses were grift with no actual use, just an excuse to grab VC funding and run.

        People are putting chatbots and LLMs everywhere, even when they’re unnecessary or even dangerous to implement.

        Edit: just saw your comment further down. You put it way better than I could.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 3 points · 21 days ago

          Yup, this is exactly like the dot-com bubble, except on an even bigger scale and with a lot more shady business practices, if that’s even possible.

  • DasRav [any, any]@hexbear.net · 40 points · 23 days ago

    CEO: “Wow, this could replace me, because all I really do is send an email once a day and try to say nice things about business business while trying to profit off of insider trading. And this thing won’t do the last bit, so it’s better than me! Everyone use the theft machine!”

    Workers: “Healthcare?”

    CEO: “No. Only use theft machine!”

  • Goblinmancer [any]@hexbear.net · 36 points · 23 days ago

    Don’t worry, it’s all “speculated ROI” now, and when shit goes up in flames they’ll give the CEOs 10 billion dollars while firing everyone else.

  • Infamousblt [any]@hexbear.net · 28 points · edited · 23 days ago

    This sounds damning but also doesn’t mean a lot. Many companies spend many years building things before seeing a “return.” They make money, but less than they’re burning in VC funds, and as they approach the break-even point, they use that to go raise more VC funds to burn on further expansion. This is largely how the tech industry works. Very few companies are cash-flow positive during their growth phases.

    It does mean there is risk in this investment, because from a business perspective AI hasn’t been proven a valuable investment yet. But it still might be for at least some of these companies, and unless we really do run into the physics limits of AI with regard to data center capacity and build rate, it could take a decade or more for this “we aren’t seeing a return yet” thing to matter.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 18 points · 23 days ago

      I agree, it’s basically a completely new tool looking for a market fit. It’s also worth noting that these companies are basically looking for one stellar application. If they hit on something that works really well, that’s gonna be the business model. So, they’re perfectly fine with most of the pilots failing if they can find one that works well.

      That said, I do think there’s a bubble: a lot of companies are implementing these tools without a good fit for them, and a ton of money is being wasted in the process. It’s kind of the same thing we saw with the dot-com bubble. When it popped, there was an extinction event where most companies went belly up, but we got a ton of useful tech out of it that underpins the internet today.

      I expect we’ll see a similar thing happen with AI. Except, this time around there’s another factor, which is direct competition from China. My prediction is that Chinese models will win in the end, because Chinese companies aren’t looking for direct monetization; they’re treating models as infrastructure, sort of like what we see with Linux. Most companies don’t try to monetize Linux directly; they build stuff like AWS on top of it, and that becomes the product.

      I expect American companies are just going to run out of runway in the near future, and they’re also getting squeezed by cheap Chinese models that are also open source. Big companies prefer running stuff on-prem, because they can keep their data private that way and tune the models any way they want. Meanwhile, stuff like DeepSeek is orders of magnitude cheaper than Claude for individual use. So I just don’t see a long-term business model for models-as-a-service, especially not at Anthropic’s or even Google’s pricing. The vast majority of people aren’t going to pay 20 bucks a month for this stuff, let alone 100.

  • plinky [he/him]@hexbear.net · 15 points · edited · 23 days ago

    there were some pictures floating around on twitter (in tooze circles, can’t find them quickly) showing some increases in total factor productivity for tech workers and some text-adjacent fields, around 1-3% over a quarter from ai stuff (using self-reporting for one axis, but it shows some correlation, so) (the most weird is construction, i don’t know what to make of it)

    • Speaker [e/em/eir]@hexbear.net · 5 points · 22 days ago

      How are they measuring productivity? If one of your KPIs is “Accelerate Business Objectives By Leveraging Theft Machine Synergy”, the results may be a bit skewed. 😄

    • the most weird is construction, i don’t know what to make of it

      Construction usually involves a lot of repetitive, very detail-oriented back-office tasks: invoicing, writing buy orders, signing off on deliveries, doing payroll, scheduling shifts, project management. I imagine a very limited-scope LLM could make these tasks much easier.

  • BarneyPiccolo@lemmy.today · 11 points · 22 days ago

    These ghouls are practically giddy at the prospect of firing as many workers as possible. Sorry to disappoint them.