• 54 Posts
  • 11K Comments
Joined 3 years ago
Cake day: July 9th, 2023

  • There are enough people who drink the Kool-Aid.

    I helped this one guy use an LLM to migrate his test suite to Java 21. It did help him adopt some new language features, but I don’t see how that made up for my time sitting with him

    … yet to management he saved 20% time. They trust that number despite no actual measurement, and hold it up as the kind of efficiency we all need to find.

    But certainly if a 20% efficiency gain were real, that would be well worth $100


  • Unfortunately for me it’s a KPI, so I need to figure out how to do something useful with it.

    LLMs are good for:

    • temporary scripts, like one-off data exports
    • boilerplate for new code
    • simple or repetitious code, like unit tests
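To make the first bullet concrete: this is the kind of throwaway export script an LLM handles well. It’s a hypothetical sketch — the record fields and class name are invented for illustration, not from any real codebase.

```java
import java.util.List;
import java.util.stream.Collectors;

// A disposable "dump these records to CSV" script — the sort of temporary
// code you'd normally not bother writing by hand.
public class ExportUsers {
    record User(int id, String name, String email) {}

    // Build a CSV string: header row, then one comma-joined row per user.
    static String toCsv(List<User> users) {
        String header = "id,name,email";
        String rows = users.stream()
                .map(u -> u.id() + "," + u.name() + "," + u.email())
                .collect(Collectors.joining("\n"));
        return header + "\n" + rows;
    }

    public static void main(String[] args) {
        List<User> users = List.of(
                new User(1, "alice", "alice@example.com"),
                new User(2, "bob", "bob@example.com"));
        System.out.println(toCsv(users));
    }
}
```

Exactly because code like this is short-lived and low-stakes, there’s little downside if the LLM’s first draft is mediocre.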

    But just in time for my performance review, I spent a week ignoring my work to set up and tweak rule sets. Now it can be noticeably more useful:

    • set context so it understands your code better. No more stupid results like switching languages, making up a new test framework, or randomly using a different mocking tool
    • create actions. I’m very happy with a code refactoring ruleset I created. It successfully finds refactoring opportunities (they match cyclomatic complexity hotspots), recommends approaches, and is really good at presenting recommendations so I can understand and accept or reject them. I tweaked it until it no longer suggests stupid crap, although I really haven’t been able to use much of the code it tries to write.
    • establish workflow. Still in progress, but a ruleset to capture how we use our ticketing system, our conventions for commit messages, etc. If I can get to the point of trusting it, it should automate some of the source control and work tracking actions.
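The three kinds of rules above can be sketched as a single rules file. This is a hypothetical fragment — the exact format varies by assistant (Cursor rules, Copilot instructions, etc.), and every specific here (JUnit 5, Mockito, the ticket convention) is an assumption for illustration:

```
# Context rules: stop the assistant from switching languages or inventing frameworks
- All code in this repo is Java 21; never answer in another language.
- Tests use JUnit 5 and Mockito; do not introduce any other test or mocking framework.

# Action rule: refactoring suggestions
- When asked to refactor, list candidate methods ranked by cyclomatic complexity,
  and present each suggestion as a short diff that can be accepted or rejected.

# Workflow rules (in progress)
- Commit messages follow "TICKET-123: short summary" and reference the ticket.
- Before proposing a commit, summarize which ticket the change belongs to.
```

The payoff is that the constraints live in one reviewable place instead of being re-typed into every prompt.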

  • While I also don’t see how it’s productive overall, it can be useful for certain things, certain steps. But it really seems like you need to have the relevant knowledge yourself to help it do a good job.

    People underestimate how much handholding it needs. You can tell it to do something and it will, but you may not like the results. With a bit of interaction or added context, though, it often gets there. The pretentious call this “prompt engineering,” but it’s really just a combination of asking the AI questions and adjusting your terminology until it does what you want.

    People also don’t seem to understand that AI puts a premium on evaluation. You don’t see the code being written, but you own it, so you really need to look through the result in detail to understand whether it’s what you wanted. I see this in code a lot: the LLM produces something, but a junior developer doesn’t have the skill to evaluate it before committing it to source control.



  • Horrible article and an even worse headline, trivializing a serious issue.

    1. There is no “insolvent.” The concept doesn’t really even exist here.
    2. There is a serious increase in debt that we ought to do something about (like rolling back the recent tax breaks for the wealthy).
    3. Social Security and Medicare are underfunded, but it’s stupid to add a “75-year liability” to the current year and complain about how big it is. It’s something like 80% funded by tax revenue over those 75 years, and it’s certainly not all due this year.