image description (contains clarifications on background elements)

Lots of different, seemingly random images in the background, including: some fries; Mr. Krabs; a girl in overalls hugging a stuffed tiger; a Mark Zuckerberg “big brother is watching” poster; two images of Fluttershy (a pony from My Little Pony), one of them reading “u only kno my swag, not my lore”; a picture of Parkzer from the streamer “dougdoug”; and a slider gameplay element from the rhythm game “osu”. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- the training processes behind current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3

  • smiletolerantly@awful.systems · 12 points · 21 hours ago

    LMs give the appearance of understanding, but as soon as you try to use them for anything you are actually knowledgeable in, the facade crumbles.

    Even for repetitive tasks, you have to do a lot of manual checking to ensure they did not start hallucinating halfway through.

    • WillStealYourUsername@lemmy.blahaj.zone (mod) · 7 points · 21 hours ago

      I haven’t really used AIs myself; however, one of my brothers loves AI for boilerplate code, which he of course looks over afterwards. If it saves time and you only have to do some minor editing, then that seems like a win to me. It probably shouldn’t be used like this in any non-hobby project by people who aren’t adept at coding, however.

      • smiletolerantly@awful.systems · 8 points · 20 hours ago

        I’m a programmer as well. When ChatGPT & Co initially came out, I was pretty excited tbh and attempted to integrate them into my workflow, which kinda worked-ish? But a lot of that was me being amazed by the novelty and forgiving of the shortcomings.

        Did not take me long to phase them out again, though. (And no, it’s not the models I used; I have tried again now and then with the new, supposedly perfect-for-programming models, with the same results.) The only edge case where they are genuinely useful (to me at least) is simple tasks that I have some general knowledge of (to double-check the LM’s work) but no interest in learning beyond what I already know. Which does occur here and there, but rarely.

        For everything else programming-related, it’s flat out shit. I do not believe they are a time saver for even moderately difficult programs. By the time you’ve run around in enough circles, explaining “now, this does not do what you say it does”, “that’s the same wrong answer you gave me two responses ago”, “you have hallucinated that function”, and found out that the framework in use dropped that general structure in version 5, you may as well do it yourself, and actually learn how to do it at the same time.

        For work, I eventually found that it took me longer to describe the business logic (and do the above dance) than to just… do the work. I also have more confidence in the code, and understand it completely.

        In terms of programming aids, a linter, formatter and LSP are, IMHO, a million times more useful than any LM.

        • arisunz@lemmy.blahaj.zone · 6 points · 19 hours ago

          this matches my experience too. good IDEs or editors with LSP support allll the way.

          also wanna add that it’s weird to me that we turn to LLMs to generate mountains of boilerplate instead of… y’know, fixing our damn tools in the first place (or using them correctly, or to their fullest) so that said boilerplate is unnecessary. abstractions have always been a thing. it seems so inefficient.
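          to make this concrete, here's a rough sketch (python, all the names are invented by me) of what "fixing the tools" can look like - one tiny factory instead of the pile of near-identical getters people ask an LLM to spit out:

          ```python
          import sqlite3
          from typing import Callable, Optional

          def make_getter(conn: sqlite3.Connection, table: str) -> Callable[[int], Optional[tuple]]:
              """One factory replaces a stack of hand-written (or LLM-written) lookups."""
              def getter(row_id: int) -> Optional[tuple]:
                  # the table name comes from trusted code here, not from user input
                  cur = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (row_id,))
                  return cur.fetchone()
              return getter

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
          conn.execute("INSERT INTO users VALUES (1, 'smorty')")

          get_user = make_getter(conn, "users")  # no boilerplate def get_user(...) needed
          print(get_user(1))  # (1, 'smorty')
          ```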

            • Badabinski@kbin.earth · 2 points · 18 hours ago

              I also 100% agree with you. My work has a developer productivity team that tries to make sure we have access to good tools, and those folks have been all over AI like flies on shit lately. I’ve started to feel a bit like a crazy Luddite because I do not feel like Copilot increases my productivity. I’m spending like 90% of my time reading docs, debugging and exploring fucked up edge cases, or staring off into space while contemplating if I’m about to introduce some godawful race condition between two disparate systems running in kubernetes or something. Senior developers usually do shit that would take hours to properly summarize for a language model.

              And yeah, if I have to write a shitload of boilerplate then I’m writing bad code and probably need to add or fix an abstraction. Worst case, there’s always vim macros or a quick shell one-liner to generate that shit. The barrier to progress is useful because it warns me that I’m being a dummy. I don’t want to get rid of that when the only benefit is that I get to context switch between code review mode and system synthesis mode.
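              And by "generate that shit" I mean dumb mechanical stuff like this (a Python sketch, the field names are invented):

              ```python
              # Stamp out repetitive property boilerplate mechanically --
              # the same job you'd hand to a vim macro or a shell one-liner.
              fields = ["name", "email", "created_at"]  # invented example fields

              for f in fields:
                  print("    @property")
                  print(f"    def {f}(self):")
                  print(f"        return self._{f}")
                  print()
              ```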

              • smiletolerantly@awful.systems · 2 points · 18 hours ago

                Yeah, with seniors it’s even more clear how little LMs can help.

                I feel you on the AI tools being pushed thing. My company is too small to have a dedicated team for something like that, buuuut… As of last week, we’re wasting resources on an internal server hosting Deepseek on absurd hardware. Like, far more capable than our prod server.

                Oh, and we pride ourselves on being soooo environmentally friendly 😊🎉

        • WillStealYourUsername@lemmy.blahaj.zone (mod) · 3 points · 20 hours ago (edited)

          > for even moderately difficult programs.

          My brother uses it to generate templates and basic structs and functions, not to generate novel code. That’s probably the difference here. I believe it’s integrated into his text editor as well? It’s the one GitHub offers.

          Edit: It probably wouldn’t be useful if it weren’t integrated into the editor, with the generation just a click away or some sort of autofill. Actually writing out a prompt does sound tedious.

    • Jumuta@sh.itjust.works · 1 point · 16 hours ago

      I’ve heard this argument so many fucking times, and I hate genAI, but there’s no practical difference between understanding and having the appearance of it; that’s just a human construct we use to try to feel artificially superior, ffs.

      • smiletolerantly@awful.systems · 1 point · 16 hours ago

        No. I am not saying that to put man and machine in two boxes. I am saying that because it is a huge difference, and yes, a practical one.

        An LLM can talk about a topic for however long you wish, but it does not know what it is talking about; it has no understanding or concept of the topic. And that shines through the instant you hit a spot where the training data was lacking and it starts hallucinating. LLMs have “read” an unimaginable amount of texts on computer science, and yet as soon as I ask something niche, it spouts bullshit. Not its fault; it’s not lying, it’s just doing what it always does: putting statistically likely token after statistically likely token. Only in this case, the training data was insufficient.

        But it does not understand or know that either; it just keeps talking. I go “that is absolutely not right, remember that <…> is <…>”, and whether or not what I said was true, it will go “Yes, you are right! I see now, <continues to hallucinate>”.

        There’s no ghost in the machine. Just fancy text prediction.
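        If you want to see how un-magical that is: the entire generation loop is “score every possible next token, append the likeliest one, repeat”. A minimal sketch using HuggingFace’s transformers, with gpt2 as a small stand-in model:

        ```python
        # "Fancy text prediction", spelled out: nothing in this loop looks up facts;
        # it only ever appends the statistically likeliest next token.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("The capital of France is", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(10):
                logits = model(ids).logits             # scores for every token in the vocabulary
                next_id = torch.argmax(logits[0, -1])  # greedily pick the likeliest one
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tok.decode(ids[0]))
        ```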

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 2 up, 1 down · 21 hours ago

      you’re right, it doesn’t do classification perfectly every time. but it massively cuts down the amount of human labour required to classify a large set of data.
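      to show the kind of task i mean, here’s a rough sketch using huggingface’s zero-shot pipeline (the model choice is just an example, not a recommendation):

      ```python
      # a smallish model sorts text into buckets; a human only spot-checks the output
      from transformers import pipeline

      clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

      labels = ["bug report", "feature request", "question"]
      result = clf("the app crashes when i rotate my phone", candidate_labels=labels)
      print(result["labels"][0])  # the likeliest bucket, e.g. "bug report"
      ```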

      about the knowledge: it really comes down to which model you are talking to. “generalist” models like GPT4o or claude 3.5 sonnet have been trained to know many things somewhat, but no single thing perfectly.

      currently companies seem to train largely on IT-related things. these models are great at helping me program, but they are terrible at specifically writing GDScript (a niche game-programming language) since they forget all the methods and components the language has.

      • smiletolerantly@awful.systems · 6 points · 20 hours ago

        Even with LMs supposedly specialising in the areas that I am knowledgeable (but by no means an expert) in, it’s the same. Drill down even slightly beyond surface level, and it’s either plain wrong, or hallucinated when not immediately disprovable.

        And why wouldn’t it be? These things do not possess knowledge; they possess the ability to generate texts about things we’d like them to be knowledgeable in, and that is a crucial difference.